Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post · Replies · Boosts · Views · Activity

VTFrameRateConversionConfiguration doesn't support 640x480
Hello, I'm using VideoToolbox's VTFrameRateConversionConfiguration to perform frame interpolation: https://developer.apple.com/documentation/videotoolbox/vtframerateconversionconfiguration?language=objc. When using 640x480 video input, I get this error:

Error ! Invalid configuration
[VEEspressoModel] build failure : flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480
[EpsressoModel] Cannot load Net file flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480
Error: failed to create FRCFlowAdaptationFeatureExtractor for usage 8
Failed to switch (0x12c40e140) [usage:8, 1/4 flow:0, adaptation layer:1, twoStage:0, revision:2, flow size (320x240)]. Could not init FlowAdaptation
initFlowAdaptationWithError fail

I tried 2048x1080 and it works fine.
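For anyone hitting the same wall, one possible workaround (untested against this exact model error, and the upscale target size is purely an assumption) is to rescale the 640x480 frames with Core Image to a size the converter is known to accept before running the interpolation:

import CoreImage
import CoreVideo

// Hypothetical workaround: upscale frames whose size the converter rejects
// to one it accepted in testing (e.g. 1280x720) before frame interpolation.
func upscale(_ source: CVPixelBuffer, to width: Int, _ height: Int,
             context: CIContext) -> CVPixelBuffer? {
    var scaled: CVPixelBuffer?
    let attrs = [kCVPixelBufferIOSurfacePropertiesKey as String: [:] as [String: Any]] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        CVPixelBufferGetPixelFormatType(source), attrs, &scaled)
    guard let dest = scaled else { return nil }

    // Scale the source image to fill the destination buffer exactly.
    let image = CIImage(cvPixelBuffer: source)
    let sx = CGFloat(width) / image.extent.width
    let sy = CGFloat(height) / image.extent.height
    context.render(image.transformed(by: CGAffineTransform(scaleX: sx, y: sy)), to: dest)
    return dest
}

This trades extra GPU work and some softness for a configuration the model can load; whether that is acceptable depends on what the interpolated output is for.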
Replies: 3 · Boosts: 0 · Views: 298 · Activity: 5d
Unable to get > 30 fps on Pro and Pro Max models only
We’re developing an AVFoundation-based video recording app (4K @ 60 fps required for biomechanical analysis). On most devices this works perfectly (iPhone 12/14/15/16 non-Pro models), but on several iPhone Pro models (12 Pro, 13 Pro, 14 Pro, 15 Pro/Pro Max), we consistently get 4K 30 fps recordings, even when the device should support 4K 60 fps on the wide-angle camera.

What we observe

We configure the session for .hd4K3840x2160. We iterate through AVCaptureDevice.formats and select formats that:

have 3840×2160 resolution
support ≥60 fps (videoSupportedFrameRateRanges)

On some Pro devices, this format search returns no results, even though:

The Camera app records 4K60 fine.
External references list the wide camera as 4K60 capable.

The fallback becomes the device's default 4K30 format, so final files are 3840×2160 @ 30 fps. This happens immediately on app launch (not after heating), so it is not thermal-related.

What we’ve tried

Force selecting .builtInWideAngleCamera instead of dual/triple cameras.
Disabling HDR (videoHDREnabled = false).
Disabling low-light boost.
Allowing 59.94 fps formats (in case exact 60.0 isn't exposed).
Logging all videoSupportedFrameRateRanges per format.

What we’re seeing in logs

On affected Pro devices, the capture device reports only 4K formats with maxFrameRate ≈ 30 fps, despite the hardware being able to do 4K60.

Main question

Has anyone encountered cases where 4K60 formats are available in the Camera app but not exposed through AVFoundation, especially on Pro models or multi-camera devices? Could HEVC/HDR capability or multi-camera constraints be preventing certain formats from appearing? Are there known conditions where 4K60 formats are hidden unless a specific device configuration is applied? Any guidance on reliably locking 4K60 on iPhone Pro models via AVFoundation would be hugely appreciated.
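For comparison, here is a minimal sketch of the enumerate-and-lock pattern described above. The function name is illustrative, and it assumes the device input has already been added to the session; note that setting the session's sessionPreset after this can override the active format.

import AVFoundation

// Sketch: log every 3840x2160 format on the device, then lock the first
// one that reports a frame-rate range reaching (roughly) 60 fps.
func lockFourKSixty(on device: AVCaptureDevice) throws {
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        guard dims.width == 3840, dims.height == 2160 else { continue }
        for range in format.videoSupportedFrameRateRanges {
            print("4K format \(format) maxFrameRate=\(range.maxFrameRate)")
            if range.maxFrameRate >= 59 {   // accept 59.94 as well as 60.0
                try device.lockForConfiguration()
                device.activeFormat = format
                device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
                device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 60)
                device.unlockForConfiguration()
                return
            }
        }
    }
    print("No 4K >= 60 fps format exposed on this device")
}

If this loop prints no range above 30 on an affected device, the format genuinely isn't being exposed to the app, which would point at an OS/entitlement-level restriction rather than a selection bug.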
Replies: 0 · Boosts: 0 · Views: 251 · Activity: 1w
VideoPlayer crashes on Archive build
I have found that the following code runs without issue from Xcode, either in Debug or Release mode, yet crashes when running from the binary produced by archiving, i.e. what will be sent to the App Store.

import SwiftUI
import AVKit

@main
struct tcApp: App {
    var body: some Scene {
        WindowGroup {
            VideoPlayer(player: nil)
        }
    }
}

This is the most stripped-down code that shows the issue. One can try pointing the VideoPlayer at a file and the same issue will occur. I've attached the crash log: Crash log. Please note that this was seen with Xcode 26.2 and macOS 26.2.
Replies: 0 · Boosts: 0 · Views: 61 · Activity: 23h
Video freezing with FairPlay streaming on iOS 18
Since iOS/iPadOS/tvOS 18 we have run into a new problem with streaming of FairPlay-encrypted video. On the affected streams the audio plays perfectly, but the video freezes for periods of a few seconds: it will freeze for 5 s or so, then be OK for a few seconds, then freeze again. It is entirely reproducible when all the following are true:

the video streams were produced by a particular encoder (or particular settings, not sure on that)
the video is encrypted
the device is running some variety of iOS 18 (or iPadOS or tvOS)
the device is an affected device

Known affected devices:

Apple TV 4K 2nd gen
iPad Pro 11" 1st and 2nd gen

Devices known not to show the problem:

all other Apple TV models
iPhone 13 Pro and 16 Pro

If we stream the same content unencrypted, it plays perfectly, as does the encrypted stream on, say, tvOS 17. When the freezing occurs we can see repeating blocks of lines like the following in the console logs:

default 18:08:46.578582+0000 videocodecd AppleAVD: AppleAVDDecodeFrameResponse(): Frame# 5771 DecodeFrame failed with error 0x0000013c
default 18:08:46.578756+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): failed - error: 316
default 18:08:46.579018+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): avdDec - Frame# 5771, DecodeFrame failed with error: 0x13c
default 18:08:46.579169+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 5771 with err -12909 - internalStatus: 315

and also more relevant-looking lines:

default 18:17:39.122019+0000 kernel AppleAVD: avdOutbox0ISR(): FRM DONE (cid: 2.0, fno: 10970, codecT: 1) FAILED!!
default 18:17:39.122155+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 10970 with err -12909 - internalStatus: 315
default 18:17:39.122221+0000 kernel AppleAVD: ## client[ 2.0] @ frm 10970, errStatus: 0x10
default 18:17:39.122338+0000 kernel AppleAVD: decodeFailIdentify(): VP error bit 4 has EP3B0 error
default 18:17:39.122401+0000 kernel AppleAVD: processHWResponse(): clientID 2.0 frameNumber 10970 error 315, offsetIndex 10, isHwErr 1

So it would seem that one of the following must be happening:

When these particular HLS files are encrypted, the data is being corrupted in some way that played back on iOS 17 and earlier but won't on 18+, or
There's a regression in iOS 18 that means this particular format of video data is corrupted on decryption.

If anyone has seen similar behaviour, or has any ideas how to identify which of the two scenarios it is, please say. Unfortunately we don't have control of the servers, so we can't make changes there unless we can identify that they are definitely the cause of the problem. Thanks, Simon.
Replies: 2 · Boosts: 0 · Views: 719 · Activity: Sep ’25
Is Apple Log open to developers for 3rd party apps?
Hello! I am building a video camera app and trying to implement Apple Log for iPhone 15 Pro and 16 Pro. I am not seeing a lot of documentation on it, and I notice the number of apps that use it on the App Store is rather limited (fewer than 5, to be exact). Is Apple Log recording a feature that is accessible to developers? Here is a link to the documentation: https://developer.apple.com/documentation/avfoundation/avcapturecolorspace/applelog
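For reference, Apple Log is surfaced through AVCaptureColorSpace.appleLog (iOS 17+), and a capture format must list it in its supportedColorSpaces before it can be selected. A minimal sketch, assuming the session's automaticallyConfiguresCaptureDeviceForWideColor has been set to false so the session doesn't reset the color space behind your back:

import AVFoundation

// Sketch: find a format that advertises Apple Log and switch to it.
func enableAppleLog(on device: AVCaptureDevice) throws {
    guard let format = device.formats.first(where: {
        $0.supportedColorSpaces.contains(.appleLog)
    }) else {
        print("No Apple Log capable format on this device")
        return
    }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeColorSpace = .appleLog
    device.unlockForConfiguration()
}

If the first(where:) lookup finds nothing, that by itself answers whether the current device and OS expose the feature to third-party apps.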
Replies: 1 · Boosts: 0 · Views: 496 · Activity: Jan ’25
AVAssetWriter: appending audio/video streams concurrently in a real-time recording setup
I see in most of the old sample code from Apple that when using AVAssetWriter to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronised using an NSLock, or all the samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation stating that calls to assetWriterInput.append(sampleBuffer) for different media samples, such as audio and video, should not be made concurrently. Is it not valid for these methods to be executed in parallel? For instance:

videoSamplesAssetWriterInput.append(videoSampleBuffer) from DispatchQueue 1
audioSamplesAssetWriterInput.append(audioSampleBuffer) from DispatchQueue 2
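For what it's worth, the conservative pattern from Apple's samples can be wrapped up as a single serial queue that owns all appends, so audio and video never reach the writer concurrently. This is a sketch, not a statement that concurrent appends are invalid; the class name is illustrative:

import AVFoundation

// Sketch: serialize every append onto one queue. Assumes the writer has
// already been started and the inputs have expectsMediaDataInRealTime set.
final class SampleWriter {
    private let queue = DispatchQueue(label: "asset-writer.append")
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput

    init(videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.videoInput = videoInput
        self.audioInput = audioInput
    }

    func appendVideo(_ buffer: CMSampleBuffer) {
        queue.async { [videoInput] in
            // Dropping frames when not ready is the usual real-time behaviour.
            if videoInput.isReadyForMoreMediaData { videoInput.append(buffer) }
        }
    }

    func appendAudio(_ buffer: CMSampleBuffer) {
        queue.async { [audioInput] in
            if audioInput.isReadyForMoreMediaData { audioInput.append(buffer) }
        }
    }
}

The cost of serializing is small relative to the encode itself, so even if parallel appends to different inputs turn out to be legal, this pattern loses little.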
Replies: 1 · Boosts: 0 · Views: 637 · Activity: Jan ’25
Exporting Higher Resolution Spatial Videos in Final Cut
How do we export a spatial video at a higher resolution than 2200 x 2200? For example, I want to export a video that I edited in a spatial timeline as 4096 x 4096 resolution with spatial metadata to output as an MV-HEVC file. I looked under both Final Cut and Compressor export settings, but couldn't find a way to do this. Thanks for your help!
Replies: 1 · Boosts: 0 · Views: 483 · Activity: Jan ’25
Help diagnosing crash with only system code in its stack trace
Hello, I work on a video streaming app, and I have been working on this crash that we are seeing quite frequently (it is our #2 crasher at the moment). The stack trace indicates that an AVPictureInPictureController is being deallocated on a background thread. This leads to dangling AutoLayout constraints getting cleaned up, further resulting in an exception being thrown from the framework about the layout engine being accessed from a background thread. We have internal analytics which indicate that the crash occurs after the user comes back to the app after putting it in the background, and switches playback from Picture in Picture mode back to the app's regular playback UI. What has me puzzled here is that, as I'm sure you know, we have no control over the PIP UI. It is entirely system-provided, and there is none of our app's code in the stack trace. The whole process is even initiated by a KVO on an AVPlayerController property, and our app doesn't use that class directly anywhere. So how did we manage to cause this process to happen on a background thread? Add to this the fact that the bug only appeared once we switched to compiling with Xcode 16, and is overwhelmingly present only on devices running iOS 18. These factors lead me to believe that this is probably an OS issue. But before I go to file a feedback, I thought I would post here in case anyone has any ideas. I have attached an instance of the crash log to this post. 2025-01-08_17-32-45.7003_-0800-ea8d5c3323e0f1fc059cf83f6ec86377bdae1788.crash
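One hedged mitigation, if the last strong reference dying off-main is indeed the trigger: make sure that reference is released on the main thread. A sketch (class and method names are illustrative):

import AVKit
import Foundation

// Sketch: hop the final strong reference to the main queue before dropping
// it, so the controller's dealloc (and its AutoLayout teardown) runs there
// even if teardown() was called from a background queue.
final class PlayerCoordinator {
    private var pipController: AVPictureInPictureController?

    func teardown() {
        let controller = pipController
        pipController = nil
        DispatchQueue.main.async {
            _ = controller // last strong reference dies here, on main
        }
    }
}

This obviously doesn't help if the framework itself holds the final reference on a background thread, in which case a feedback report is the right call.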
Replies: 1 · Boosts: 0 · Views: 437 · Activity: Jan ’25
How to start AVPictureInPicture when the video is paused
I have an AVPlayer with an AVPictureInPictureController. Playing video in the app works, and Picture in Picture works except in one situation. The issue is: I pause the video in the application, and during the switch to the background PiP does not activate. What am I doing wrong?

import UIKit
import AVKit
import AVFoundation

class ViewControllerSec: UIViewController, AVPictureInPictureControllerDelegate {
    var pipPlayer: AVPlayer!
    var avCanvas: UIView!
    var pipCanvas: AVPlayerLayer?
    var pipController: AVPictureInPictureController!
    var mainViewControler: UIViewController!
    var playerItem: AVPlayerItem!
    var videoAvasset: AVAsset!

    public func link(to parentViewController: UIViewController) {
        mainViewControler = parentViewController
        setup()
    }

    @objc func appWillResignActiveNotification(application: UIApplication) {
        guard let pipController = pipController else {
            print("PiP not supported")
            return
        }
        print("PIP isSuspend: \(pipController.isPictureInPictureSuspended)")
        print("PIP isPossible: \(pipController.isPictureInPicturePossible)")
        if playerItem.status == .readyToPlay {
            if pipPlayer.rate == 0 {
                pipPlayer.play()
            }
            pipController.startPictureInPicture() // ---> Error in log: Failed to start picture in picture.
        } else {
            print("Player not ready for PiP.")
        }
    }

    private func setupAudio() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .moviePlayback)
            try session.setActive(true)
        } catch {
            print("Audio session setup failed: \(error.localizedDescription)")
        }
    }

    @objc func playerItemDidFailToPlayToEnd(_ notification: Notification) {
        if let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error {
            print("Failed to play to end: \(error.localizedDescription)")
        }
    }

    func setup() {
        setupAudio()
        guard let videoURL = URL(string: "https://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.mp4/.m3u8") else { return }
        videoAvasset = AVAsset(url: videoURL)
        playerItem = AVPlayerItem(asset: videoAvasset)
        addPlayerObservers()
        pipPlayer = AVPlayer(playerItem: playerItem)
        avCanvas = UIView(frame: view.bounds)
        pipCanvas = AVPlayerLayer(player: pipPlayer)
        guard let pipCanvas else { return }
        pipCanvas.frame = avCanvas.bounds
        //pipCanvas.videoGravity = .resizeAspectFill
        mainViewControler.view.addSubview(avCanvas)
        avCanvas.layer.addSublayer(pipCanvas)

        if AVPictureInPictureController.isPictureInPictureSupported() {
            pipController = AVPictureInPictureController(playerLayer: pipCanvas)
            pipController?.delegate = self
            pipController?.canStartPictureInPictureAutomaticallyFromInline = true
        }

        let playButton = UIButton(frame: CGRect(x: 20, y: 50, width: 100, height: 50))
        playButton.setTitle("Play", for: .normal)
        playButton.backgroundColor = .blue
        playButton.addTarget(self, action: #selector(playTapped), for: .touchUpInside)
        mainViewControler.view.addSubview(playButton)

        let pauseButton = UIButton(frame: CGRect(x: 140, y: 50, width: 100, height: 50))
        pauseButton.setTitle("Pause", for: .normal)
        pauseButton.backgroundColor = .red
        pauseButton.addTarget(self, action: #selector(pauseTapped), for: .touchUpInside)
        mainViewControler.view.addSubview(pauseButton)

        let pipButton = UIButton(frame: CGRect(x: 260, y: 50, width: 150, height: 50))
        pipButton.setTitle("Start PiP", for: .normal)
        pipButton.backgroundColor = .green
        pipButton.addTarget(self, action: #selector(startPictureInPicture), for: .touchUpInside)
        mainViewControler.view.addSubview(pipButton)

        print("Error: \(String(describing: pipPlayer.error?.localizedDescription))")

        NotificationCenter.default.addObserver(forName: UIApplication.didEnterBackgroundNotification, object: nil, queue: nil) { [weak self] _ in
            guard let self = self else { return }
            if self.pipPlayer.rate == 0 {
                self.pipPlayer.play()
                self.pipController?.startPictureInPicture()
            }
        }
    }

    func addPlayerObservers() {
        playerItem?.addObserver(self, forKeyPath: "status", options: [.old, .new], context: nil)
        NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying(_:)), name: .AVPlayerItemDidPlayToEndTime, object: playerItem)
    }

    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
        if keyPath == "status" {
            if let statusNumber = change?[.newKey] as? NSNumber {
                let status = AVPlayer.Status(rawValue: statusNumber.intValue)!
                switch status {
                case .readyToPlay:
                    print("Player is ready to play")
                case .failed:
                    print("Player failed: \(String(describing: playerItem?.error))")
                case .unknown:
                    print("Player status is unknown")
                @unknown default:
                    fatalError()
                }
            }
        }
    }

    @objc func playerDidFinishPlaying(_ notification: Notification) {
        print("Video finished playing.")
    }

    deinit {
        playerItem?.removeObserver(self, forKeyPath: "status")
        NotificationCenter.default.removeObserver(self)
    }

    @objc func playTapped() { pipPlayer.play() }
    @objc func pauseTapped() { pipPlayer.pause() }

    @objc func startPictureInPicture() {
        if let pipController = pipController, !pipController.isPictureInPictureActive {
            pipController.startPictureInPicture()
        }
    }

    @objc func stopPictureInPicture() {
        if let pipController = pipController, pipController.isPictureInPictureActive {
            pipController.stopPictureInPicture()
        }
    }

    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: Error) {
        print("Failed to start PiP: \(error.localizedDescription)")
        if let underlyingError = (error as NSError).userInfo[NSUnderlyingErrorKey] {
            print("Underlying error: \(underlyingError)")
        }
    }
}
Replies: 1 · Boosts: 0 · Views: 546 · Activity: Jan ’25
AVPlayerItem step(byCount:) callback or notification
Hello there, I need to move through video loaded in an AVPlayer one frame at a time, back or forth. For that I tried to use AVPlayerItem's method step(byCount:), and it works just fine. However, I need to know when the stepping has happened, and as far as I have observed it is not immediate when using the method. If I check currentTime() just after calling the method it's the same, and if I do it slightly later (depending on the video itself) it shows the correct "jumped" time. To achieve my goal I tried subclassing AVPlayerItem and implementing my own async method utilizing NotificationCenter and the timeJumpedNotification, assuming it would be delivered as the time actually jumps, but that's not the case. Here is my "stripped" and simplified version of the custom player item:

import AVFoundation

final class PlayerItem: AVPlayerItem {
    private var jumpCompletion: ((CMTime) -> ())?

    override init(asset: AVAsset, automaticallyLoadedAssetKeys: [String]?) {
        super.init(asset: asset, automaticallyLoadedAssetKeys: automaticallyLoadedAssetKeys)
        NotificationCenter.default.addObserver(self, selector: #selector(timeDidChange(_:)), name: AVPlayerItem.timeJumpedNotification, object: self)
    }

    deinit {
        NotificationCenter.default.removeObserver(self, name: AVPlayerItem.timeJumpedNotification, object: self)
        jumpCompletion = nil
    }

    @discardableResult
    func step(by count: Int) async -> CMTime {
        await withCheckedContinuation { continuation in
            step(by: count) { time in
                continuation.resume(returning: time)
            }
        }
    }

    func step(by count: Int, completion: @escaping ((CMTime) -> ())) {
        guard jumpCompletion == nil else {
            completion(currentTime())
            return
        }
        jumpCompletion = completion
        step(byCount: count)
    }

    @objc private func timeDidChange(_ notification: Notification) {
        switch notification.name {
        case AVPlayerItem.timeJumpedNotification where notification.object as? AVPlayerItem == self:
            jumpCompletion?(currentTime())
            jumpCompletion = nil
        default:
            return
        }
    }
}

In short, the notification never gets posted, so the above is not working. I guess the key is what the docs say about timeJumpedNotification: "A notification the system posts when a player item's time changes discontinuously." So step(byCount:) is evidently not considered a discontinuous operation and doesn't trigger it. I'd be really grateful if somebody could help, as I don't want to use seek(to:toleranceBefore:toleranceAfter:), mainly because it's not accurate in terms of the exact next/previous frame: the video might have VFR, and that sometimes causes repeated frames or even skips one or another. Thanks a lot.
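One alternative worth sketching, since timeJumpedNotification doesn't fire for stepping: poll currentTime() from a CADisplayLink after calling step(byCount:), and complete once the reported time actually changes. This assumes stepping always changes the reported time eventually; names are illustrative:

import AVFoundation
import QuartzCore

// Sketch: completion-based stepping driven by a display link rather than a
// notification. Fires on the first frame where currentTime() has moved.
final class StepObserver {
    private var displayLink: CADisplayLink?
    private var startTime = CMTime.invalid
    private weak var item: AVPlayerItem?
    private var completion: ((CMTime) -> Void)?

    func step(item: AVPlayerItem, by count: Int, completion: @escaping (CMTime) -> Void) {
        self.item = item
        self.completion = completion
        startTime = item.currentTime()
        item.step(byCount: count)
        displayLink = CADisplayLink(target: self, selector: #selector(tick))
        displayLink?.add(to: .main, forMode: .common)
    }

    @objc private func tick() {
        guard let item, item.currentTime() != startTime else { return }
        displayLink?.invalidate()
        displayLink = nil
        completion?(item.currentTime())
        completion = nil
    }
}

The display link adds at most one screen-refresh of latency, which is usually acceptable for a frame-stepping UI.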
Replies: 2 · Boosts: 0 · Views: 636 · Activity: Jan ’25
Encoder Performance Unable to Achieve 4K 120fps on iPhone 16 Pro
Hello, I am trying to get the new iPhone 16 Pro to achieve 4K 120 fps encoding when we are getting the video feed from the default wide-angle camera on the back. We are using the Apple API to capture the individual frames from the camera as they are processed, and we receive them in this callback:

// this is the main callback function to handle video frames captured
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

We are then taking these frames as they come in and encoding them using VideoToolbox. After they are encoded, they are added to a ring buffer so we can access them later. The problem is that when we are encoding these frames on an iPhone 16 Pro, we are only reaching 80-90 fps instead of 120 fps. We have removed as much processing as we can: we get some small attributes about the frame when it comes in, encode the frame, and then add it to our ring buffer. I have attached a sample project that is broken down as much as possible to the basic task of encoding 4K 120 fps footage. Inside the sample app there is an FPS and PPS display: FPS represents how many frames per second are coming in from the camera, and PPS represents how many frames per second we are processing (encoding). Link to sample project: https://github.com/jake-fishtech/EncoderPerformance Thank you for any help or suggestions.
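For context, these are the VTCompressionSession properties commonly set for real-time capture encoding. Whether they unlock 4K120 on a given device is ultimately a hardware question, but real-time mode and disabled frame reordering remove the usual software bottlenecks. A sketch:

import VideoToolbox

// Sketch: low-latency, real-time oriented configuration for a compression
// session. All of these keys are standard VideoToolbox properties.
func configureForRealTime(_ session: VTCompressionSession) {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)
    // B-frames add reordering delay; disable them for capture pipelines.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering,
                         value: kCFBooleanFalse)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate,
                         value: 120 as CFNumber)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxKeyFrameInterval,
                         value: 120 as CFNumber)
}

It may also be worth logging the camera-side FPS and the encoder-side PPS separately over time: if FPS holds at 120 while PPS sits at 80-90, the encoder is the bottleneck; if FPS itself drops, the capture configuration or buffer retention is.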
Replies: 2 · Boosts: 0 · Views: 583 · Activity: Jan ’25
H.265 Decoding with VideoToolbox
I am creating an app that decodes H.265 elementary streams on iOS. I use VideoToolbox to decode from H.265 to NV12. The decoded data is enqueued in an AVSampleBufferDisplayLayer as CMSampleBuffers. However, nothing is displayed in the VideoPlayerView; it remains black. The decoding in VideoToolbox is successful: I confirmed this by saving the NV12 data in the CMSampleBuffer to a file and displaying it using a tool. Why is nothing displayed in the VideoPlayerView? I can provide other source code as well.

//
// ContentView.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//

import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            Text("H.265 Player (temp.h265)")
                .font(.headline)
            VideoPlayerView()
                .frame(width: 360, height: 640) // Adjust or make it responsive for iOS
        }
        .padding()
    }
}

#Preview {
    ContentView()
}

//
// VideoPlayerView.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//

import SwiftUI
import AVFoundation

struct VideoPlayerView: UIViewRepresentable {
    // Return an H265Player as the coordinator, and start playback there.
    func makeCoordinator() -> H265Player {
        H265Player()
    }

    func makeUIView(context: Context) -> UIView {
        let uiView = UIView(frame: .zero) // Base layer for attaching sublayers
        uiView.backgroundColor = .black   // Screen background color (for iOS)

        // Create the display layer and add it to uiView.layer
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.bounds
        displayLayer.backgroundColor = UIColor.clear.cgColor
        uiView.layer.addSublayer(displayLayer)

        // Start playback
        context.coordinator.startPlayback()
        return uiView
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        // Reset the frame of the AVSampleBufferDisplayLayer when the view's size changes.
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.layer.bounds

        // Optionally update the layer's background color, etc.
        uiView.backgroundColor = .black
        displayLayer.backgroundColor = UIColor.clear.cgColor

        // Flush transactions if necessary
        CATransaction.flush()
    }
}

//
// H265Player.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//

import Foundation
import AVFoundation
import CoreMedia

class H265Player: NSObject, VideoDecoderDelegate {
    let displayLayer = AVSampleBufferDisplayLayer()
    private var decoder: H265Decoder?

    override init() {
        super.init()
        // Initial configuration for the display layer
        displayLayer.videoGravity = .resizeAspect
        // Initialize the decoder (delegate = self)
        decoder = H265Decoder(delegate: self)
        // For simple playback, set isBaseline to true
        decoder?.isBaseline = true
    }

    func startPlayback() {
        // Load the bundled elementary stream
        guard let url = Bundle.main.url(forResource: "temp2", withExtension: "h265") else {
            print("File not found")
            return
        }
        do {
            let data = try Data(contentsOf: url)
            // Set FPS and video size as needed
            let packet = VideoPacket(data: data, type: .h265, fps: 30, videoSize: CGSize(width: 1080, height: 1920))
            // Decode as a single packet
            decoder?.decodeOnePacket(packet)
        } catch {
            print("Failed to load file: \(error)")
        }
    }

    // MARK: - VideoDecoderDelegate
    func decodeOutput(video: CMSampleBuffer) {
        // When decoding is complete, send the output to the AVSampleBufferDisplayLayer
        displayLayer.enqueue(video)
    }

    func decodeOutput(error: DecodeError) {
        print("Decoding error: \(error)")
    }
}
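One common culprit with a black AVSampleBufferDisplayLayer is timing: if the decoded buffers lack usable presentation timestamps, the layer may never display them. As a hedged debugging step, marking each buffer to display immediately sidesteps timing while you verify the rest of the path:

import CoreMedia

// Sketch: set the display-immediately attachment on a sample buffer before
// enqueueing it, so the layer presents it regardless of its timestamps.
func markDisplayImmediately(_ sampleBuffer: CMSampleBuffer) {
    guard let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true),
          CFArrayGetCount(attachments) > 0 else { return }
    let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
    CFDictionarySetValue(dict,
                         Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                         Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
}

If frames appear once this is set, the original issue is the timebase/PTS handling on the decoded buffers rather than the decode or the layer setup.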
Replies: 2 · Boosts: 0 · Views: 607 · Activity: May ’25
SDAVAssetExportSession or AVAssetWriter fails to process iPhone 16 video with spatial audio tracks
I have been using SDAVAssetExportSession to compress videos in an app I am building. Everything went very smoothly until I got my new iPhone 16: on that device, the spatial audio camera setting is turned on by default, and SDAVAssetExportSession starts to fail. I know it has something to do with the audio settings. The current setting is something like this:

exportSession.audioSettings = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100,
    AVEncoderBitRateKey: 128000
]

This is also passed to the underlying AVAssetReader and AVAssetWriter objects. I am not experienced in this area, and I have really had a hard time trying to figure this out. Does anyone know how to set up AVAssetReader or AVAssetWriter to process video with spatial audio tracks? Thanks in advance.
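One avenue worth trying (an assumption, not a confirmed fix for spatial audio tracks): have the reader decode the audio down to plain stereo PCM, so the multi-channel source layout never reaches the AAC-configured writer input. A sketch:

import AVFoundation

// Sketch: build a reader output that downmixes the source audio track to
// 2-channel linear PCM, which a stereo AAC writer input can consume.
func makeStereoAudioOutput(for track: AVAssetTrack) -> AVAssetReaderTrackOutput {
    var layout = AudioChannelLayout()
    layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
    let layoutData = Data(bytes: &layout, count: MemoryLayout<AudioChannelLayout>.size)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 2,
        AVChannelLayoutKey: layoutData
    ]
    return AVAssetReaderTrackOutput(track: track, outputSettings: settings)
}

The idea is that the format mismatch happens at the reader/writer boundary; pinning the reader side to an explicit stereo layout makes that boundary predictable.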
Replies: 1 · Boosts: 0 · Views: 290 · Activity: Mar ’25
iPhone 15 Pro Has USB-C, but AVCaptureDevice Doesn't Support External Devices?
I'm using an iPhone 15 Pro, which has switched from Lightning to USB-C. My iOS version is 18.3. According to Apple's documentation, AVCaptureDevice.DeviceType should support external device types.

🔗 Apple's official documentation: https://developer.apple.com/documentation/avfoundation/avcapturedevice/devicetype-swift.struct/external

The documentation clearly states that iPadOS 17.0+ and iOS 17.0+ support external devices. However, in my actual tests:

On iPhone, discoverySession does not detect any external devices.
On iPad, discoverySession detects external devices without any issues.

My questions: Does iPhone USB-C actually support external devices (e.g., UVC cameras)? If not, why does Apple's documentation claim that iOS 17 supports external devices instead of specifying iPadOS 17 only?
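For anyone reproducing this, the probe itself is short; on iPhone it may legitimately return an empty list even on USB-C hardware, which is exactly the behaviour in question:

import AVFoundation

// Minimal probe: list whatever external video devices this OS build exposes.
let session = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified)

for device in session.devices {
    print("External device: \(device.localizedName)")
}
if session.devices.isEmpty {
    print("No external video devices exposed on this device/OS")
}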
Replies: 1 · Boosts: 0 · Views: 426 · Activity: Mar ’25
Configuring CaptureVideoDelegate to avoid gamma/transfer function
I'm working on an application that uses the iPhone camera for scientific purposes, and as a result I would like to receive video in as unprocessed a format as possible. In particular, I'm interested in getting pixel buffers that contain pretty much the Bayer data as the sensor sees it, with the minimum processing of color possible. Currently we configure the AVCaptureDevice to fix the focus and exposure, use a low ISO with no gain, and set the white balance gains to 1. AVCaptureVideoDataOutput is using 32BGRA. What I'd like to do is remove any additional color and brightness processing such that the data is effectively processed with a linear transfer function (i.e. a gamma of 1). I thought that this might come down to the AVCaptureDevice activeColorSpace; we currently use P3_D65 for this. But there only seem to be a few choices (e.g. sRGB, HLG_BT2020), all of which I think affect the gamma. So:

Is it possible to control or specify the gamma/transfer function when using CaptureVideoDelegate?
If not, does one of the color space settings have a defined gamma function so that I can effectively reverse it from the pixel data without losing too much information?
Or is there a better way to capture video-ish speed images (15-30 fps) from the camera sensor that skips processing like this?

Many thanks for any suggestions.
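On the second question: sRGB and Display P3 share the same transfer curve, so if no extra tone mapping has been layered on top (which is the open question here), the inverse sRGB EOTF recovers an approximately linear value from each normalized channel. A sketch of that inversion:

import Foundation

// Sketch: undo the sRGB/Display P3 transfer curve on a channel value in
// [0, 1]. Assumes the pipeline applied only the standard curve, with no
// additional tone mapping or local adjustments.
func linearize(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

Note that 8-bit BGRA quantizes the shadows heavily after this inversion, so a 10-bit pixel format (where available) would preserve noticeably more of the low-end information.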
Replies: 3 · Boosts: 0 · Views: 99 · Activity: Mar ’25