Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post

Replies

Boosts

Views

Activity

Converting a 2vuy CVPixelBuffer to a Metal Texture
I am working on a macOS project where I take an AVCaptureSession's CVPixelBuffer and need to convert it into an MTLTexture for rendering. On macOS the pixel format is 2vuy, and there does not seem to be an obvious format to use when converting to a Metal texture. I have been able to create a texture, but the color space seems to be off: it renders distorted colors with a double image. I believe 2vuy is a single-plane packed format and I have tried to account for that, but I cannot tell what is wrong. I have attached the CVPixelBuffer, the distorted MTLTexture, and a laundry list of errors. On iOS my conversions are fine; it is only the macOS 2vuy pixel format that has issues. My conversion code is also attached. Any suggestions or guidance on how to properly convert a 2vuy CVPixelBuffer to an MTLTexture would be greatly appreciated. (A hedged conversion sketch follows this post.) Many thanks. Conversion_Logs.txt ConversionCode.swift
3
0
166
Apr ’25
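A minimal sketch of one way to wrap a 2vuy buffer in a Metal texture without copying, assuming the packed 4:2:2 format .gbgr422 is the MTLPixelFormat that corresponds to kCVPixelFormatType_422YpCbCr8 ('2vuy') and that a CVMetalTextureCache is acceptable here. Note the sampled texels are still Y'CbCr, so a BT.601/709 conversion is needed in the fragment shader; treating the packed data as BGRA is a common cause of doubled, discolored images. YpCbCrTextureWrapper is a hypothetical helper type:

import Metal
import CoreVideo

final class YpCbCrTextureWrapper {
    private let device: MTLDevice
    private var textureCache: CVMetalTextureCache?

    init?(device: MTLDevice) {
        self.device = device
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache) == kCVReturnSuccess,
              textureCache != nil else { return nil }
    }

    // Wraps a '2vuy' (kCVPixelFormatType_422YpCbCr8) buffer; the texture holds raw Y'CbCr values.
    func texture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        guard let cache = textureCache else { return nil }
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        var cvTexture: CVMetalTexture?
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            .gbgr422,          // assumption: the packed format matching '2vuy' (Cb Y'0 Cr Y'1)
            width, height, 0, &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture else { return nil }
        return CVMetalTextureGetTexture(cvTexture)
    }
}

If the texture cache rejects that format, converting to BGRA first with vImage or a small compute pass is another option.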
AVCaptureSession video and audio out of sync
I'm using an AVCaptureSession to send video and audio samples to an AVAssetWriter. When I play back the resulting video, sometimes there is a significant lag between the audio and the video, so they're just not in sync. But sometimes they are, with the same code. If I look at the very first presentation time stamps of the buffers being sent to the delegate, via func captureOutput(_: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection), I see something like this: Adding audio samples for pts time 227711.0855328798, Adding video samples for pts time 227710.778785374. That is, the audio and video clocks don't line up: the first audio sample I receive is at 11.08 something, while the first video sample is earlier in time, at 10.778 something. The times are the presentation time stamps of the buffers, and outputPresentationTimeStamp is the exact same number. It feels like the "video" clock and the "audio" clock are just mismatched. This doesn't always happen: sometimes they're synced, sometimes they're not. Any ideas? The device I'm recording from is a webcam connected via the USB-C port, on iPadOS. (A hedged writer-session sketch follows this post.)
3
0
121
Apr ’25
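A common way to keep both inputs on one timeline with AVAssetWriter is to start the writer session at the PTS of the first buffer you actually append and drop anything earlier, rather than starting at zero or at whichever buffer happens to arrive first. A minimal sketch under that assumption; WriterCoordinator and the choice to anchor on the first video frame are illustrative, not a confirmed fix:

import AVFoundation

final class WriterCoordinator {
    let writer: AVAssetWriter
    let videoInput: AVAssetWriterInput
    let audioInput: AVAssetWriterInput
    private var sessionStarted = false
    private var sessionStart = CMTime.invalid

    init(writer: AVAssetWriter, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.writer = writer
        self.videoInput = videoInput
        self.audioInput = audioInput
    }

    // Call from captureOutput(_:didOutput:from:) for both audio and video buffers.
    func append(_ sampleBuffer: CMSampleBuffer, isVideo: Bool) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        if !sessionStarted {
            // Anchor the timeline at the first video frame; audio that arrives
            // earlier than this point is dropped instead of shifting the timeline.
            guard isVideo else { return }
            writer.startSession(atSourceTime: pts)
            sessionStart = pts
            sessionStarted = true
        }

        guard pts >= sessionStart else { return } // discard pre-session samples

        let input = isVideo ? videoInput : audioInput
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }
}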
CVPixelBufferCreate EXC_BAD_ACCESS
I am doing something similar to this post. Within an AVCaptureDataOutputSynchronizerDelegate method, I create a pixelBuffer using CVPixelBufferCreate with the following attributes: kCVPixelBufferIOSurfacePropertiesKey as String: true, kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true. When I copy the data from the vImage buffer "rotatedImageBuffer", I get the following error: Thread 10: EXC_BAD_ACCESS (code=1, address=0x14caa8000). I get the same error with memcpy and data.copyBytes (not running them at the same time, obviously). If I use CVPixelBufferCreateWithBytes, I do not get this error; however, CVPixelBufferCreateWithBytes does not let you include attributes (see the linked post above). I am using vImage because I need the original CVPixelBuffer from the camera output and a rotated version with a different color scheme. (A hedged stride-aware copy sketch follows this post.)

// Copy to pixel buffer
let attributes: NSDictionary = [
    true : kCVPixelBufferIOSurfacePropertiesKey,
    true : kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey,
]
var colorBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, attributes, &colorBuffer)
// let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes, nil, nil, attributes as CFDictionary, &colorBuffer) // does NOT produce the error, but also does not apply the attributes
guard status == kCVReturnSuccess, let colorBuffer = colorBuffer else {
    print("Failed to create buffer")
    return
}
let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(colorBuffer, lockFlags) else {
    print("Failed to lock base address")
    return
}
let colorBufferMemory = CVPixelBufferGetBaseAddress(colorBuffer)!
let data = Data(bytes: rotatedImageBuffer.data, count: rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height))
data.copyBytes(to: colorBufferMemory.assumingMemoryBound(to: UInt8.self), count: data.count) // Fails here
// memcpy(colorBufferMemory, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height)) // Also produces the same error
CVPixelBufferUnlockBaseAddress(colorBuffer, lockFlags)
1
0
124
Apr ’25
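Two observations, offered as assumptions rather than a confirmed diagnosis: the attributes dictionary in the code above has its keys and values swapped (the kCVPixelBuffer... constants should be the keys, and kCVPixelBufferIOSurfacePropertiesKey expects a dictionary value), and the copy assumes both buffers share the same rowBytes, while CVPixelBufferCreate and vImage each choose their own row padding, so a single bulk copy of rowBytes * height bytes can run past the end of whichever allocation is smaller. A minimal stride-aware sketch; makePixelBuffer is a hypothetical helper name:

import Foundation
import CoreVideo
import Accelerate

// Assumes rotatedImageBuffer is a vImage_Buffer holding 32BGRA pixels.
func makePixelBuffer(from rotatedImageBuffer: vImage_Buffer) -> CVPixelBuffer? {
    // Keys on the left, values on the right; IOSurface properties take an (empty) dictionary.
    let attributes: [CFString: Any] = [
        kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
    ]

    var colorBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(rotatedImageBuffer.width),
                                     Int(rotatedImageBuffer.height),
                                     kCVPixelFormatType_32BGRA,
                                     attributes as CFDictionary,
                                     &colorBuffer)
    guard status == kCVReturnSuccess, let colorBuffer else { return nil }

    CVPixelBufferLockBaseAddress(colorBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(colorBuffer, []) }

    guard let dstBase = CVPixelBufferGetBaseAddress(colorBuffer) else { return nil }
    let dstRowBytes = CVPixelBufferGetBytesPerRow(colorBuffer)
    let srcRowBytes = rotatedImageBuffer.rowBytes
    let copyPerRow = min(srcRowBytes, dstRowBytes)   // never copy past either buffer's row

    for row in 0..<Int(rotatedImageBuffer.height) {
        let src = rotatedImageBuffer.data.advanced(by: row * srcRowBytes)
        let dst = dstBase.advanced(by: row * dstRowBytes)
        memcpy(dst, src, copyPerRow)
    }
    return colorBuffer
}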
Save MPEG-TS (h264 or HEVC) video stream using AVAssetWriter.
I'm capturing a video stream from a GoPro camera (I demux the UDP MPEG-TS packets) and create CMSampleBuffers from them; this works fine when I display them using CMSampleBufferLayer. However, when I dump them to disk using AVAssetWriter and then play the file back with AVPlayer, AVPlayer has problems with scrubbing: it cannot render previous frames and needs to go back to key frames. Thumbnails generated with AVAssetImageGenerator are also mostly distorted and green, even though I set requestedTimeToleranceAfter longer than the key-frame interval. When I re-encode the saved video with AVAssetExportSession and play it back, I can scrub the video just fine. Is it because re-transcoding adds additional metadata that enables generating frames when rewinding and scrubbing? If so, is there a way to achieve this with AVAssetWriter without much of a time penalty? I need the dump/save operation to be very fast. I also considered the following: instead of demuxing the video and creating CMSampleBuffers, maybe I could dump the stream directly to disk and somehow add moov atoms with timing information. Would this approach work? If so, where can I find information on how to do it? Thank you! (A hedged sync-sample sketch follows this post.)
3
0
136
Apr ’25
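One thing worth checking when appending pre-encoded H.264/HEVC sample buffers to AVAssetWriter, offered as an assumption rather than a confirmed cause of the scrubbing problems: each buffer's sample attachments should mark delta frames as not-sync, otherwise every frame can end up recorded as a key frame in the output's sample tables, which confuses seeking and thumbnail generation. A minimal sketch; markSyncStatus is a hypothetical helper and isKeyFrame is assumed to come from your demuxer (e.g. IDR/IRAP NAL unit detection):

import CoreMedia

// Marks a compressed CMSampleBuffer as a sync or non-sync sample before it is
// appended to an AVAssetWriterInput.
func markSyncStatus(of sampleBuffer: CMSampleBuffer, isKeyFrame: Bool) {
    guard let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true),
          CFArrayGetCount(attachments) > 0 else { return }

    let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
    // kCMSampleAttachmentKey_NotSync == true means "delta frame, cannot be decoded on its own".
    let value: CFBoolean = isKeyFrame ? kCFBooleanFalse : kCFBooleanTrue
    CFDictionarySetValue(dict,
                         Unmanaged.passUnretained(kCMSampleAttachmentKey_NotSync).toOpaque(),
                         Unmanaged.passUnretained(value).toOpaque())
}

It is also worth confirming that the CMVideoFormatDescription attached to the buffers carries the parameter sets (avcC/hvcC), since AVAssetWriter copies them into the sample description that players use when seeking.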
videoCaptureQueue crashes the app when using iOS 18.4.1
Hi all, I'm having a problem on iOS 18.4.1. I have an iPhone 16 Pro and an iPad Air, both updated to iOS 18.4.1. I tried to follow the sample code below; however, after the app runs for around 30 seconds to 1 minute, it crashes. When I use another iPad on iOS 17, the same problem does not occur. https://developer.apple.com/documentation/createml/creating-an-action-classifier-model https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed#overview
6
0
130
May ’25
FxPlug SDK 4.3.2 causes dyld errors when loaded on versions of macOS prior to 14.6
FxPlug is one of Apple’s official SDKs, recently updated to version 4.3.2. In theory the SDK should guarantee third-parties can build plug-ins that are backward compatible with older versions of Final Cut Pro, Motion and Compressor. FxPlug SDK includes two frameworks that third-party developers like me end up bundling inside our third-party plugins: FxPlug.framework and PlugInManager.framework. Behind the scenes, the SDK relies on PlugInKit, but the FxPlug.framework provides abstractions so that third-parties don't have to handle the intricacies of XPC directly.

The most recent version of FxPlug.framework included with the SDK was possibly built with an error: the Info.plist shows a LSMinimumSystemVersion entry of 14.6, suggesting the binary may have been compiled and linked with MACOSX_DEPLOYMENT_TARGET set to 14.6 by accident. The problem: when older versions of Final Cut Pro or Motion load a third-party plugin (itself built with the appropriate deployment target, macOS 11 or 12, for example) on pre-macOS 14.6, the dynamic linker immediately loads Apple’s own FxPlug.framework, but this causes the process to crash immediately:

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0  libobjc.A.dylib    0x7ff81e065955 map_images_nolock + 5399
1  libobjc.A.dylib    0x7ff81e0643d6 map_images + 67
2  dyld               0x10bd551fb invocation function for block in dyld4::RuntimeState::setObjCNotifiers(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 275
3  dyld               0x10bd506c9 dyld4::RuntimeState::withLoadersReadLock(void () block_pointer) + 41
4  dyld               0x10bd550e2 dyld4::RuntimeState::setObjCNotifiers(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 82
5  dyld               0x10bd68d45 dyld4::APIs::_dyld_objc_notify_register(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 79
6  libobjc.A.dylib    0x7ff81e064244 _objc_init + 1279
7  libdispatch.dylib  0x7ff81e01d993 _os_object_init + 13
8  libdispatch.dylib  0x7ff81e02b1b8 libdispatch_init + 311
9  libSystem.B.dylib  0x7ff828fd585f libSystem_initializer + 238
10 dyld               0x10bd5ae4f invocation function for block in dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 182
11 dyld               0x10bd81aad invocation function for block in dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 242
12 dyld               0x10bd78e26 invocation function for block in dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 557
13 dyld               0x10bd47db3 dyld3::MachOFile::forEachLoadCommand(Diagnostics&, void (load_command const*, bool&) block_pointer) const + 129
14 dyld               0x10bd78bb7 dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 179
15 dyld               0x10bd81604 dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 466
16 dyld               0x10bd5ad82 dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 144
17 dyld               0x10bd6165a dyld4::PrebuiltLoader::runInitializers(dyld4::RuntimeState&) const + 30
18 dyld               0x10bd6e76e dyld4::APIs::runAllInitializersForMain() + 38
19 dyld               0x10bd4c38d dyld4::prepare(dyld4::APIs&, dyld3::MachOAnalyzer const*) + 3443
20 dyld               0x10bd4b4e4 start + 388

Can someone at Apple with the right domain expertise confirm that this is the type of crash you would see because the framework was built assuming it would run on macOS 14.6 and later, and when facing an older environment (e.g. ObjC runtime) it lacks extra code that would ensure backward compatibility with the earlier ObjC runtime found on macOS 12.x?
2
0
119
Jun ’25
Importing pictures with non-QT metadata
Movies taken with Android phones store their location metadata (and probably other metadata) in ways that are ignored by Apple's ecosystem (QuickTime Player, Photos.app). I am considering creating a Spotlight importer so that this metadata becomes available to the system, but I have a couple of questions: Can a Spotlight importer add new data (like location) to the data that the standard importer already captured, or would the new importer need to take over the whole data gathering? If so, would macOS allow that? Would that Spotlight importer somehow be used by e.g. Photos.app and QuickTime Player to capture the location, or would this end up with Spotlight "knowing" the location but Photos.app ignoring it? If so, is there something more broadly useful than a Spotlight importer?
2
0
86
May ’25
AVPlayer freezes after ReplaceCurrentItemWithPlayerItem + immediate seek on iOS 18.4 (streaming only)
Starting in iOS 18.4 (and still in the iOS 18.5 beta), AVPlayer seems to freeze when we replace the current AVPlayerItem with replaceCurrentItem(with:) and then call seek very shortly afterwards (seek(to:) / seekToTime:toleranceBefore:toleranceAfter:). Subsequent calls to play then have no effect. However, scrubbing after the seek seems to work, and changing the playback rate (i.e. fast forward) tends to clear up the frozen state. Our primary workflow involves video playback, replacing the video to show new clips, and in some cases seeking to specific frames. This appears to occur only while streaming video; all reports are that playback of locally downloaded video remains fine. This same code path has worked without issue on 17.x and 18.3.2, and for years before that. What is particularly strange is that time observers log that video is still playing or feeding frames: the reported status is readyToPlay, isPlaybackLikelyToKeepUp is true, and there are no indications of stalling or buffering. (A hedged workaround sketch follows this post.) A similar issue affects our web application in Safari. On Sonoma with Safari 17.x there is no issue; after updating to macOS Sequoia 15.4.1 and Safari 18.4, we observe similar freezing. The same does not occur in Chrome or other tested browsers. The release notes for Safari 18.4 include several "fix" notes that sound similar to what we are now experiencing: https://developer.apple.com/documentation/safari-release-notes/safari-18_4-release-notes
"Fixed an issue where playback doesn’t always resume after a seek. (140097993)"
"Fixed playing video generating non-monotonic ‘timeupdate’ events. (142275184) (FB16222910)"
"Fixed websites calling play() during a seek() is allowed by the specification so that the play event is fired even if the seek hasn’t completed. (142517488)"
"Fixed seek not completing for WebM under some circumstances. (143372794)"
"Fixed MediaRecorderPrivateEncoder writing frames out of order. (143956063)"
2
0
166
May ’25
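As a possible mitigation while the OS behavior is investigated, one pattern that avoids racing a seek against item replacement is to wait for the new item to report readyToPlay before seeking. A minimal sketch under that assumption; PlayerSwapper and the zero tolerances are illustrative choices, not a confirmed fix for this regression:

import AVFoundation

final class PlayerSwapper {
    private var statusObservation: NSKeyValueObservation?

    // Replaces the current item and defers the seek until the new item is ready.
    func replace(on player: AVPlayer, with item: AVPlayerItem, thenSeekTo time: CMTime) {
        statusObservation = item.observe(\.status, options: [.initial, .new]) { [weak self] item, _ in
            guard item.status == .readyToPlay else { return }
            self?.statusObservation = nil
            // Exact tolerances only if frame accuracy is required; looser tolerances complete faster.
            player.seek(to: time, toleranceBefore: .zero, toleranceAfter: .zero) { _ in
                player.play()
            }
        }
        player.replaceCurrentItem(with: item)
    }
}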
I’m using ScreenCaptureKit on macOS to grab frames and measure end-to-end latency (capture → my delegate callback). For each CMSampleBuffer I read:
I’m using ScreenCaptureKit on macOS to grab frames and measure end-to-end latency (capture → my delegate callback). For each CMSampleBuffer I read:

let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds

to get the "capture" timestamp, and I also extract the mach-absolute display time:

let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]]
let displayMach = attachments?.first?[.displayTime] as? UInt64
// convert mach ticks to seconds...

Then I compare both against the current time:

let now = CACurrentMediaTime()
let latencyFromPTS = now - pts
let latencyFromDisplay = now - displayTimeSeconds

But I consistently see negative values for both calculations, i.e. the PTS or displayTime often end up numerically larger than now. This suggests that the "presentation timestamp" and the mach-absolute display time come from a different epoch or clock domain than CACurrentMediaTime(). Questions: Which clocks/epochs does ScreenCaptureKit use for the PTS and for .displayTime? How can I align these timestamps with CACurrentMediaTime() so that now - pts and now - displayTime reliably yield non-negative real-world latencies? Any pointers on the correct clock conversions or APIs to use would be greatly appreciated. (A hedged clock-conversion sketch follows this post.)
1
0
153
May ’25
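A sketch of the clock conversion I would try first, under the assumption that both the PTS and the .displayTime attachment are in the host (mach_absolute_time) clock domain, which is the same domain CACurrentMediaTime() reports in seconds. If that assumption holds, small negative display-time latencies would simply mean the attachment is the frame's intended display deadline, slightly in the future, rather than the capture instant; secondsFromMachTicks is a hypothetical helper name:

import CoreMedia
import QuartzCore
import Darwin

// Converts mach_absolute_time ticks (e.g. the SCStreamFrameInfo.displayTime
// attachment) into seconds on the same scale as CACurrentMediaTime().
func secondsFromMachTicks(_ ticks: UInt64) -> Double {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let nanos = Double(ticks) * Double(timebase.numer) / Double(timebase.denom)
    return nanos / 1_000_000_000.0
}

func logLatencies(for sampleBuffer: CMSampleBuffer, displayMach: UInt64?) {
    let now = CACurrentMediaTime()   // host-clock seconds

    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
    print("latency from PTS: \(now - pts) s")

    if let displayMach {
        let displaySeconds = secondsFromMachTicks(displayMach)
        // May be slightly negative if displayTime is a presentation deadline in the near future.
        print("latency from displayTime: \(now - displaySeconds) s")
    }
}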
WideFOV - APMP - Stereo
Does anyone have a template of an Apple Projected Media Profile format description, or a file of a stereo wideFOV video? Use case: I have 2 compatible cameras that I stereo-sync, and I want to move the projection information from the compatible video to the Spatial video that combines them. Every version I can come up with crashes the AVP, and when viewing it as Spatial in Tahoe I just get a black screen.
4
0
176
Jun ’25
Obtain the screen rotation direction in the background
I use ReplayKit for system-level screen recording. I want to determine whether the screen is in landscape mode from the CMSampleBuffer delivered to the callback, but the CMSampleBuffer does not come with this information. The other APIs related to obtaining the screen orientation are also restricted in the background. I want to know whether the screen's rotation direction can be obtained in real time while in the background. (A hedged attachment-reading sketch follows this post.)
1
0
72
Jun ’25
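If this runs in a broadcast upload extension, it may be worth checking whether ReplayKit's orientation attachment is present on the buffers; a minimal sketch under that assumption (the attachment is not guaranteed on every configuration, and orientation(of:) is a hypothetical helper):

import ReplayKit
import CoreMedia
import ImageIO

// Reads the orientation ReplayKit attaches to broadcast sample buffers, if any.
func orientation(of sampleBuffer: CMSampleBuffer) -> CGImagePropertyOrientation? {
    guard let raw = CMGetAttachment(sampleBuffer,
                                    key: RPVideoSampleOrientationKey as CFString,
                                    attachmentModeOut: nil) as? NSNumber else {
        return nil
    }
    return CGImagePropertyOrientation(rawValue: raw.uint32Value)
}

// Example use inside processSampleBuffer(_:with:), treating left/right variants as landscape
// (an assumption worth verifying against your own captures):
// if let o = orientation(of: sampleBuffer) {
//     let isLandscape = (o == .left || o == .right || o == .leftMirrored || o == .rightMirrored)
// }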
WideCamera consumes more CPU than telePhotoCamera
I have been taking images from the iOS video camera feed and have encountered an issue. When you take images from the wideCamera, it consumes about half the phone's CPU; the same is not the case when you take images from the telephotoCamera video stream. Is there a way of disabling the extra processing that is being done?
1
0
57
Jun ’25
AVFoundation — MJPEG Custom-Resolution UVC Stream Not Working on macOS
Hello, I'm Soonwon. We’re currently developing a UVC camera device and trying to stream MJPEG video via AVFoundation on macOS. However, we’re running into a problem with custom resolutions. When we try to use AVFoundation on macOS to capture MJPEG video at 1000x6000, the stream is not accepted or simply doesn’t work; lower resolutions work fine. (Interestingly, using the same device on iPadOS, we can capture the 1000x6000 MJPEG stream successfully by using AVCaptureSessionPresetInputPriority.) Is there any way to receive custom-resolution MJPEG streams (like 1000x6000) from a UVC device using AVFoundation on macOS? Are there specific session presets, entitlements, or known limitations that affect MJPEG handling at custom resolutions on macOS? Does macOS handle MJPEG differently from iPadOS in AVFoundation? Any insight or guidance would be greatly appreciated. Thank you!

NSError *error = nil;
if ([selectedDevice lockForConfiguration:&error]) {
    [session beginConfiguration];
    session.sessionPreset = AVCaptureSessionPresetHigh;
    bool foundFormat = false;
    for (AVCaptureDeviceFormat *format in selectedDevice.formats) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        FourCharCode pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription);
        if (dims.width == 1000 && dims.height == 6000) {
            selectedDevice.activeFormat = format;
            foundFormat = true;
            break;
        }
    }
    if (foundFormat == false) {
        NSLog(@"Failed to find the 1000x6000 format");
        [session commitConfiguration];
        return false;
    }
    NSError *inputError = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:selectedDevice error:&inputError];
    if (inputError || ![session canAddInput:input]) {
        NSLog(@"Failed to add video input: %@", inputError.localizedDescription);
        [session commitConfiguration];
        return false;
    }
    [session addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) };
    [output setSampleBufferDelegate:delegate queue:queue];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }
    [session commitConfiguration];
    [selectedDevice unlockForConfiguration];
} else {
    NSLog(@"Failed to lock device for configuration: %@", error.localizedDescription);
}
// start~
1
0
372
Jul ’25
Play videos in WebM format on iOS devices
I would like to play WebM videos on my iPhone. I understand that playing WebM video on an iPhone is basically unsupported, but is there any way to display videos in this format? I would like to know if there is an official Swift SDK or development kit released by Apple; if not, please let me know if there are any third-party products.
1
0
291
Jul ’25
AVAssetReaderOutput.Provider Missing symbols
Recurring crash on install of any app that uses the new sourceVideoTrackProvider.next():

dyld[41966]: Symbol not found: _$sSo19AVAssetReaderOutputC12AVFoundationE8ProviderC4nextxSgyYaKFTjTu
Referenced from: <79AA2BE0-A6B4-32F5-A804-E84BBE5D1AEA> /Users/<username>/Library/Developer/Xcode/DerivedData/TrackProviderCrash-bbbhjptcxnmfdcackxtpucnunxyc/Build/Products/Debug-maccatalyst/TrackProviderCrash.app/Contents/MacOS/TrackProviderCrash.debug.dylib
Expected in: <1B847AF9-7973-3B28-95C2-09E73F6DD50B> /usr/lib/swift/libswiftAVFoundation.dylib

It can be reproduced with the current Xcode beta 4 by running this sample on Mac Catalyst and macOS: https://developer.apple.com/documentation/AVFoundation/converting-projected-video-to-apple-projected-media-profile. The crash goes away if you comment out lines 154-158 and 164-170, which are: while let sampleBuffer = try await sourceVideoTrackProvider.next() { /* other code */ }

It can also be reproduced if you add the code below to a Mac Catalyst project:

import AVKit

let asset: AVURLAsset = .init(url: Bundle.main.url(forResource: "SomeVideo.mp4", withExtension: nil)!)
let videoReader = try! AVAssetReader(asset: asset)
let videoTracks = try! await asset.loadTracks(withMediaCharacteristic: .visual)
// Get the side-by-side video track.
let videoTrack = videoTracks.first!
let videoInputTrack = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)
let sourceVideoTrackProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>> = videoReader.outputProvider(for: videoInputTrack)

// Comment out this loop to avoid the crash:
while let sb = try! await sourceVideoTrackProvider.next() {
}
1
0
634
Jul ’25
AVPlayerViewController `customInfoViewControllers` crash/workaround on tvOS 26
One thing I've noticed on tvOS 26 is that if you try to set the AVPlayerViewController customInfoViewControllers property while the Content Tabs are on screen, your app will crash:

*** Terminating app due to uncaught exception 'UIViewControllerHierarchyInconsistency', reason: 'trying to add child view controller that is already presented: <AVInfoPanelViewController: 0x1030cdc00>'
*** First throw call stack: (0x18a7167bc 0x189a77510 0x18a7166a8 0x1ab425658 0x1b2ee9d54 0x1b2efcd60 0x1b2eaf3f0 0x1080f744c 0x107e021a8 0x107e01b3c 0x18de41c14 0x18de41ba8 0x18de48d28 0x18ad9e358 0x101fac5f0 0x101fc6228 0x101fe7278 0x101fbc6fc 0x101fbc63c 0x18a67a2e0 0x18a679418 0x18a673b34 0x1937e4d5c 0x1abb36588 0x1abb3ae80 0x1aae9dec4 0x108610174 0x1086100e4 0x108615140 0x189abd4d0)

I've logged a feedback (FB19554461), but it's getting awfully late in the dev cycle, so I've been trying to think of a workaround. The problem is that customInfoViewControllers is pretty declarative in nature: there are no properties or delegate methods I am aware of that let me know when the tabs are displaying or not. One trick I came up with was checking whether my custom info view controller's view was "visible" or not. I put that in quotes because it turns out it can be visible even when I think it's not: when the transport bar is scrolled to the top, my custom VC still has its top pixels showing, so it gets a viewDidAppear call. So I tried to check whether my view controller's view is completely visible, i.e. based on the result of the CGRect contains method. And that works! But the problem is it only accounts for my own custom info view controllers, not the standard one that Apple provides, and I can't think of a way to know whether that one is showing. Any ideas?
1
0
149
Aug ’25
Looking for solutions to build a video chat app (Omegle/Chatroulette style) on Vision Pro
Hi everyone, We are working on a prototype app for Apple Vision Pro that is similar in functionality to Omegle or Chatroulette, but exclusively for Vision Pro owners. The core idea is: – a matching system where one user connects to another through a virtual persona; – real-time video and audio transmission; – time limits for sessions with the ability to extend them; – users can skip a match and move on to the next one. We have explored WebRTC and Twilio, but unfortunately, they don’t fit our use case. Question: What alternative services or SDKs are available for implementing real-time video/audio communication on Vision Pro that would work with this scenario? Has anyone encountered a similar challenge and can recommend which technologies or tools to use? Thanks in advance!
2
0
259
Aug ’25
Threading guarantees with AVCaptureVideoDataOutput
I'm writing some camera functionality that uses AVCaptureVideoDataOutput. I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput. My question is: if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread? For example, does [AVCaptureSession stopRunning] imply a blocking call until all pending frame callbacks are done? A more practical example is below, showing how I access something owned by the foreground thread from the background thread; I wonder when/how it's safe to clean up that resource. (A hedged teardown sketch follows this post.) I have a setup similar to the following:

// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("qt_avf_camera_queue", nullptr);
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...

// Store some arbitrary data to be attached to the frame, owned by the foreground thread
FrameMetaData frameMetaData = ...;

MySampleBufferDelegate *sampleBufferDelegate = [MySampleBufferDelegate alloc];
// Capture frameMetaData by reference in the lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];

AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
[captureSession addOutput:captureVideoDataOutput];

[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need a manual barrier?

And then in MySampleBufferDelegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
2
0
421
Sep ’25
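stopRunning blocks while the session stops, but I would not assume it also drains callbacks already enqueued on the delegate queue; a conservative teardown is to clear the delegate and then fence on that queue before releasing shared state. A minimal Swift sketch of the same idea as the Objective-C snippet above; CameraController and FrameMetaData are hypothetical stand-ins:

import AVFoundation

struct FrameMetaData { }   // hypothetical placeholder type

final class CameraController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    // Serial queue; AVCaptureVideoDataOutput requires a serial delegate queue.
    private let delegateQueue = DispatchQueue(label: "camera.delegate.queue")
    private var frameMetaData: FrameMetaData?   // shared state read by the delegate

    func start() {
        output.setSampleBufferDelegate(self, queue: delegateQueue)
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func stopAndTearDown() {
        output.setSampleBufferDelegate(nil, queue: nil)   // stop new callbacks being scheduled
        session.stopRunning()
        // Fence: any captureOutput(_:didOutput:from:) call already enqueued runs
        // before this empty block, so after sync returns no in-flight callback
        // can still touch frameMetaData.
        delegateQueue.sync { }
        frameMetaData = nil   // now safe to release the shared resource
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        _ = frameMetaData   // read shared state on the delegate queue
    }
}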
What changes were made to the VideoToolbox HEVC encoder in iOS 26?
Because I want to control the grid size and tile count of my HEIC images myself, I decided to perform HEVC encoding manually and then generate the HEIC image. Previously I used VTCompressionSession to accomplish this, and the results were satisfactory. It worked perfectly on iOS 16 through iOS 18; in other words, it was able to generate correct HEVC encoding, and its CMFormatDescription should also have been correct, since I relied on it to generate the decoderConfig; otherwise the final image would have had decoding issues. However, it can no longer generate a valid HEIC image on a physical device running iOS 26. Interestingly, it still works fine on the iOS 26 simulator; it only fails on real hardware. The abnormal result is that the image becomes completely black, although the image dimensions are still correct. After troubleshooting, I suspect that the encoding behavior of VTCompressionSession has been modified on iOS 26, which causes the final hvc1 encoding I pass in to be incorrect. I created a VTCompressionSession using the following configuration:

var newSession: VTCompressionSession!
var status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: Int32(frameSize.width),
    height: Int32(frameSize.height),
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &newSession
)
try check(status, VideoToolboxErrorDomain)

let properties: [CFString: Any] = [
    kVTCompressionPropertyKey_AllowFrameReordering: false,
    kVTCompressionPropertyKey_AllowTemporalCompression: false,
    kVTCompressionPropertyKey_RealTime: false,
    kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
    kVTCompressionPropertyKey_ProfileLevel: profileLevel,
    kVTCompressionPropertyKey_Quality: quality.rawValue,
]
status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
try check(status, VideoToolboxErrorDomain) { VTCompressionSessionInvalidate(newSession) }

Then I use the following code to encode each grid tile of the image:

let status = VTCompressionSessionEncodeFrame(
    session,
    imageBuffer: buffer,
    presentationTimeStamp: presentationTimeStamp,
    duration: frameDuration,
    frameProperties: nil,
    infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
        try check(status, VideoToolboxErrorDomain)
        if let sampleBuffer {
            let encodedImage = try self.encodedImage(from: sampleBuffer)
            // handle encodedImage
        }
    }
try check(status, VideoToolboxErrorDomain)

If I try to display the abnormal image in the app, my console outputs the following errors, so it can be inferred that the issue probably occurs during decoding:

createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray

It needs to be emphasized again that this code used to work fine, and the issue only occurs on a physical device running iOS 26. I noticed that iOS 26 introduces many new properties, but I'm not sure whether some of them must now be set on the new system, and there's no information about this in the official documentation. (A hedged diagnostic sketch follows this post.)
3
0
417
Sep ’25
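When an encoder-behavior change is suspected, one way to narrow it down is to dump the HEVC parameter sets the session actually produces on each OS and compare them, since the hvc1 decoder configuration written into the HEIC is derived from the sample buffer's CMFormatDescription. A small diagnostic sketch, assuming you call it with the first encoded CMSampleBuffer; dumpHEVCParameterSets is a hypothetical helper:

import CoreMedia

// Prints the sizes and NAL unit types of the HEVC parameter sets (VPS/SPS/PPS)
// carried by an encoded sample buffer's format description, plus the NAL length
// field size. Comparing this output between an iOS 18 and an iOS 26 device can
// show whether the encoder's configuration record actually changed.
func dumpHEVCParameterSets(of sampleBuffer: CMSampleBuffer) {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        print("no format description")
        return
    }

    var parameterSetCount = 0
    var nalUnitHeaderLength: Int32 = 0
    var status = CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(
        formatDescription,
        parameterSetIndex: 0,
        parameterSetPointerOut: nil,
        parameterSetSizeOut: nil,
        parameterSetCountOut: &parameterSetCount,
        nalUnitHeaderLengthOut: &nalUnitHeaderLength)
    guard status == noErr else {
        print("query failed: \(status)")
        return
    }
    print("parameter sets: \(parameterSetCount), NAL length field: \(nalUnitHeaderLength) bytes")

    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        status = CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(
            formatDescription,
            parameterSetIndex: index,
            parameterSetPointerOut: &pointer,
            parameterSetSizeOut: &size,
            parameterSetCountOut: nil,
            nalUnitHeaderLengthOut: nil)
        if status == noErr, let pointer {
            // HEVC NAL unit type is bits 1..6 of the first byte (VPS=32, SPS=33, PPS=34).
            print("set \(index): \(size) bytes, nal_unit_type \((pointer[0] >> 1) & 0x3F)")
        }
    }
}

If the parameter sets and NAL length field match across OS versions, the problem is more likely in how the tile payloads are copied out of the sample buffers than in the encoder configuration itself.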