Dive into the world of video on Apple platforms, exploring ways to integrate video functionality into your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post · Replies · Boosts · Views · Activity

Using the AVPlayer to play encrypted streams causes the system to reboot
When we use AVPlayer to play DRM-encrypted streams on iOS 17.6.1, playback fails and there is a high probability of a system restart. This is the relevant core error log:

error 14:47:53.323369+0800 audiomxd [AirPlayError] carManager_copyProperty_block_invoke:499: got error -12784/0xFFFFCE10 kCMBaseObjectError_PropertyNotFound
error 14:47:53.323414+0800 audiomxd [SPEndpointManagerFactory] SidePlay Endpoint Manager creation failed with -72390/0xFFFEE53A
error 14:47:53.364949+0800 audiomxd [APBrowserCarSessionHelper] [0xF6AA] [Bonjour/WiFi] Unrecognized ConnectivityHelper event 101
error 14:47:53.375313+0800 audiomxd AddInstanceForFactory: No factory registered for id <CFUUID 0xa5c5118c0> F8BB1C28-BAE8-11D6-9C31-00039315CD46
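For context, the FairPlay key-delivery wiring this report implies usually looks roughly like the sketch below. This is not the poster's code; the delegate body, queue label, and stream URL are placeholders.

import AVFoundation

// Minimal sketch of attaching a FairPlay content key session to an asset.
final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        // Request an SPC here, exchange it with the key server,
        // then respond with an AVContentKeyResponse built from the CKC.
    }
}

let keySession = AVContentKeySession(keySystem: .fairPlayStreaming)
let keyDelegate = KeyDelegate()
keySession.setDelegate(keyDelegate, queue: DispatchQueue(label: "fps.keys"))

let asset = AVURLAsset(url: URL(string: "https://example.com/stream.m3u8")!)
keySession.addContentKeyRecipient(asset)

let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
player.play()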
1 reply · 0 boosts · 247 views · Mar ’25
SDAVAssetExportSession or AVAssetWriter fail to process iPhone 16 video with spatial audio tracks
I have been using SDAVAssetExportSession to compress videos in an app I am building. Everything went very smoothly until I got my new iPhone 16; on that device the Spatial Audio camera setting is turned on by default, and SDAVAssetExportSession starts to fail. I know it has something to do with the audio settings. The current setting is something like this:

exportSession.audioSettings = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100,
    AVEncoderBitRateKey: 128000
]

This is also what gets passed to the underlying AVAssetReader/AVAssetWriter objects. I am not experienced in this area and have had a hard time trying to figure it out. Does anyone know how to set up AVAssetReader or AVAssetWriter to process video with spatial audio tracks? Thanks in advance.
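One thing sometimes suggested for multi-channel/Spatial Audio sources (a sketch under that assumption, not a confirmed fix) is to give the writer an explicit stereo channel layout so the AAC output is a plain two-channel downmix:

import AVFoundation

// Sketch: explicit stereo output settings for an AVAssetWriter audio input.
// The added channel layout is the main difference from the settings quoted above;
// whether this resolves the Spatial Audio case on iPhone 16 is an assumption.
func stereoAACSettings() -> [String: Any] {
    var layout = AudioChannelLayout()
    layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
    let layoutData = Data(bytes: &layout, count: MemoryLayout<AudioChannelLayout>.size)

    return [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: 44_100,
        AVEncoderBitRateKey: 128_000,
        AVChannelLayoutKey: layoutData
    ]
}

// Usage with AVAssetWriter (placeholder wiring):
// let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: stereoAACSettings())

Depending on how SDAVAssetExportSession configures its reader, the reader's Linear PCM output settings may also need the same two-channel constraint.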
1 reply · 0 boosts · 289 views · Mar ’25
iPhone 15 Pro Has USB-C, but AVCaptureDevice Doesn't Support External Devices?
I'm using an iPhone 15 Pro, which has switched from Lightning to USB-C. My iOS version is 18.3. According to Apple's documentation, AVCaptureDevice.DeviceType should support external device types.
🔗 Apple's official documentation: https://developer.apple.com/documentation/avfoundation/avcapturedevice/devicetype-swift.struct/external
The documentation clearly states that iPadOS 17.0+ and iOS 17.0+ support external devices. However, in my actual tests:
On iPhone, discoverySession does not detect any external devices.
On iPad, discoverySession detects external devices without any issues.
My question: Does iPhone USB-C actually support external devices (e.g., UVC cameras)? If not, why does Apple's documentation claim that iOS 17 supports external devices instead of specifying iPadOS 17 only?
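For comparison, a minimal discovery-session query for external cameras on iOS/iPadOS 17+ looks like the sketch below; the point of the post is that the same call returns results on iPad but nothing on iPhone.

import AVFoundation

// Sketch: enumerate external (e.g., UVC) cameras on iOS/iPadOS 17 and later.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)

for device in discovery.devices {
    print("Found external device:", device.localizedName)
}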
1 reply · 0 boosts · 423 views · Mar ’25
AVPlayer: Significant Delays and Asset Loss When Playing Partially Downloaded HLS Content Offline
We're experiencing significant issues with AVPlayer when attempting to play partially downloaded HLS content in offline mode. Our app downloads HLS video content for offline viewing, but users encounter the following problems:

Excessive loading delay: when offline, AVPlayer attempts to load resources for up to 60 seconds before playing the locally available segments.
Asset loss: sometimes AVPlayer completely loses the asset reference and fails to play the video on subsequent attempts.
Inconsistent behavior: the same partially downloaded asset might play immediately in one session but take 30+ seconds in another.
Network activity despite offline settings: despite configuring options to prevent network usage, AVPlayer still appears to be attempting network connections.

These issues severely impact our offline user experience, especially for users with intermittent connectivity.

Technical Details

Implementation Context
Our app downloads HLS videos for offline viewing using AVAssetDownloadTask. We store the downloaded content locally and maintain a dictionary mapping file identifiers to local paths. When attempting to play these videos offline, we experience the described issues.

Current Implementation
Here's our current implementation for playing the videos:

- (void)presentNativeAvplayerForVideo:(Video *)video navContext:(NavContext *)context {
    NSString *localPath = video.localHlsPath;
    if (localPath) {
        NSURL *videoURL = [NSURL URLWithString:localPath];
        NSDictionary *options = @{
            AVURLAssetPreferPreciseDurationAndTimingKey: @YES,
            AVURLAssetAllowsCellularAccessKey: @NO,
            AVURLAssetAllowsExpensiveNetworkAccessKey: @NO,
            AVURLAssetAllowsConstrainedNetworkAccessKey: @NO
        };
        AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:options];
        AVPlayerViewController *playerViewController = [[AVPlayerViewController alloc] init];
        NSArray *keys = @[@"duration", @"tracks"];
        [asset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
            dispatch_async(dispatch_get_main_queue(), ^{
                AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
                AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
                playerViewController.player = player;
                [player play];
            });
        }];
        playerViewController.modalPresentationStyle = UIModalPresentationFullScreen;
        [context presentViewController:playerViewController animated:YES completion:nil];
    }
}

Attempted Solutions
We've tried several approaches to mitigate these issues:

Modified asset options:
NSDictionary *options = @{
    AVURLAssetPreferPreciseDurationAndTimingKey: @NO, // Changed to NO
    AVURLAssetAllowsCellularAccessKey: @NO,
    AVURLAssetAllowsExpensiveNetworkAccessKey: @NO,
    AVURLAssetAllowsConstrainedNetworkAccessKey: @NO,
    AVAssetReferenceRestrictionsKey: @(AVAssetReferenceRestrictionForbidRemoteReferenceToLocal)
};

Skipped asynchronous key loading:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset automaticallyLoadedAssetKeys:nil];

Modified player settings:
player.automaticallyWaitsToMinimizeStalling = NO;
[playerItem setPreferredForwardBufferDuration:2.0];

Added network resource restrictions:
playerItem.canUseNetworkResourcesForLiveStreamingWhilePaused = NO;

Used file URLs instead of HTTP URLs where possible.

Despite these attempts, the issues persist.

Expected vs. Actual Behavior

Expected behavior:
AVPlayer should immediately begin playback of locally available HLS segments.
When offline, it should not attempt to load from the network for more than a few seconds.
Once an asset is successfully played, it should be reliably available for future playback.

Actual behavior:
AVPlayer waits 10-60 seconds before playing locally available segments.
Network activity is observed despite all network-restricting options.
Sometimes the player fails completely to play a previously available asset.
Behavior is inconsistent between playback attempts with the same asset.

Questions:
What is the recommended approach for playing partially downloaded HLS content offline with minimal delay?
Is there a way to force AVPlayer to immediately use available local segments without attempting to load from the network?
Are there any known issues with AVPlayer losing references to locally stored HLS assets?
What diagnostic steps would you recommend to track down the specific cause of these delays?
Does AVFoundation have specific timeouts for offline HLS playback that could be configured?

Any guidance would be greatly appreciated as this issue is significantly impacting our user experience.

Device Information
iOS versions tested: 14.5 - 18.1
Device models: iPhone 12, iPhone 13, iPhone 14, iPhone 15
Xcode versions: 15.3 - 16.2.1
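One thing worth double-checking in a setup like this (an observation, not a confirmed cause of the delays): AVAssetDownloadTask hands back a local file URL to a .movpkg package, and the asset should be created from that file URL, ideally re-resolved from bookmark data since the absolute path can change between launches, rather than from a string-based URL. A sketch, with placeholder bookmark handling:

import AVFoundation

// Sketch: resolve a persisted download location and check offline playability.
// The bookmark-data storage and function name are placeholders, not the poster's code.
func offlineAsset(forBookmark bookmarkData: Data) -> AVURLAsset? {
    var stale = false
    guard let location = try? URL(resolvingBookmarkData: bookmarkData,
                                  bookmarkDataIsStale: &stale) else { return nil }
    let asset = AVURLAsset(url: location)   // file URL to the downloaded .movpkg package
    // If this is false, only the downloaded portions are usable and the player
    // may still try to reach the network for missing segments.
    print("Playable offline:", asset.assetCache?.isPlayableOffline ?? false)
    return asset
}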
1 reply · 0 boosts · 482 views · Mar ’25
Visual isTranslatable: NO; reason: observation failure: noObservations, when trying to play custom compositor video with AVPlayer
I am trying to achieve an animated gradient effect that changes values over time based on the current seconds. I am using AVPlayer and AVMutableVideoComposition along with a custom instruction and compositor class to generate the effect. I didn't want to load any video file, but rather generate a custom video with my own set of instructions. I used Metal compute shaders to generate the effect and made the video 20 seconds long. However, when I run the code, I get a frozen player with the gradient applied, and when I try to play the video I get this warning in the console:

Visual isTranslatable: NO; reason: observation failure: noObservations

(A screenshot is attached in the original post.) My entire code:

import AVFoundation
import Metal

class GradientVideoCompositorTest: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String: Any]? = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]

    private var renderContext: AVVideoCompositionRenderContext?
    private var metalDevice: MTLDevice!
    private var metalCommandQueue: MTLCommandQueue!
    private var metalLibrary: MTLLibrary!
    private var metalPipeline: MTLComputePipelineState!

    override init() {
        super.init()
        setupMetal()
    }

    func setupMetal() {
        guard let device = MTLCreateSystemDefaultDevice(),
              let queue = device.makeCommandQueue(),
              let library = try? device.makeDefaultLibrary(),
              let function = library.makeFunction(name: "gradientShader") else {
            fatalError("Metal setup failed")
        }
        self.metalDevice = device
        self.metalCommandQueue = queue
        self.metalLibrary = library
        self.metalPipeline = try? device.makeComputePipelineState(function: function)
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        renderContext = newRenderContext
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let outputPixelBuffer = renderContext?.newPixelBuffer(),
              let metalTexture = createMetalTexture(from: outputPixelBuffer) else {
            request.finish(with: NSError(domain: "com.example.gradient", code: -1, userInfo: nil))
            return
        }
        var time = Float(request.compositionTime.seconds)
        renderGradient(to: metalTexture, time: time)
        request.finish(withComposedVideoFrame: outputPixelBuffer)
    }

    private func createMetalTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        var texture: MTLTexture?
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false
        )
        textureDescriptor.usage = [.shaderWrite, .shaderRead]

        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        if let textureCache = createTextureCache(),
           let cvTexture = createCVMetalTexture(from: pixelBuffer, cache: textureCache) {
            texture = CVMetalTextureGetTexture(cvTexture)
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
        return texture
    }

    private func renderGradient(to texture: MTLTexture, time: Float) {
        guard let commandBuffer = metalCommandQueue.makeCommandBuffer(),
              let commandEncoder = commandBuffer.makeComputeCommandEncoder() else { return }
        commandEncoder.setComputePipelineState(metalPipeline)
        commandEncoder.setTexture(texture, index: 0)
        var mutableTime = time
        commandEncoder.setBytes(&mutableTime, length: MemoryLayout<Float>.size, index: 0)

        let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
        let threadGroups = MTLSize(
            width: (texture.width + 15) / 16,
            height: (texture.height + 15) / 16,
            depth: 1
        )
        commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadsPerGroup)
        commandEncoder.endEncoding()
        commandBuffer.commit()
    }

    private func createTextureCache() -> CVMetalTextureCache? {
        var cache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &cache)
        return cache
    }

    private func createCVMetalTexture(from pixelBuffer: CVPixelBuffer, cache: CVMetalTextureCache) -> CVMetalTexture? {
        var cvTexture: CVMetalTexture?
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            .bgra8Unorm, width, height, 0, &cvTexture
        )
        return cvTexture
    }
}

class GradientCompositionInstructionTest: NSObject, AVVideoCompositionInstructionProtocol {
    var timeRange: CMTimeRange
    var enablePostProcessing: Bool = true
    var containsTweening: Bool = true
    var requiredSourceTrackIDs: [NSValue]? = nil
    var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid

    init(timeRange: CMTimeRange) {
        self.timeRange = timeRange
    }
}

func createGradientVideoComposition(duration: CMTime, size: CGSize) -> AVMutableVideoComposition {
    let composition = AVMutableComposition()
    let instruction = GradientCompositionInstructionTest(timeRange: CMTimeRange(start: .zero, duration: duration))
    let videoComposition = AVMutableVideoComposition()
    videoComposition.customVideoCompositorClass = GradientVideoCompositorTest.self
    videoComposition.renderSize = size
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 FPS
    videoComposition.instructions = [instruction]
    return videoComposition
}

#include <metal_stdlib>
using namespace metal;

kernel void gradientShader(texture2d<float, access::write> output [[texture(0)]],
                           constant float &time [[buffer(0)]],
                           uint2 id [[thread_position_in_grid]]) {
    float2 uv = float2(id) / float2(output.get_width(), output.get_height());

    // Animated colors based on time
    float3 color1 = float3(sin(time) * 0.8 + 0.1, 0.6, 1.0);
    float3 color2 = float3(0.12, 0.99, cos(time) * 0.9 + 0.3);

    // Linear interpolation for gradient
    float3 gradientColor = mix(color1, color2, uv.y);

    output.write(float4(gradientColor, 1.0), id);
}
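Note that the AVMutableComposition created inside createGradientVideoComposition above is never given any tracks or attached to a player. A player item generally needs an asset whose video track spans the composition's time range before the custom compositor is asked for frames; a sketch of that wiring (the blank placeholder asset is an assumption, not part of the post):

import AVFoundation

// Sketch: attach the custom video composition to a player item. The bundled
// "blank.mp4" asset is a placeholder standing in for any 20-second video track.
let placeholderAsset = AVURLAsset(url: Bundle.main.url(forResource: "blank", withExtension: "mp4")!)
let item = AVPlayerItem(asset: placeholderAsset)
item.videoComposition = createGradientVideoComposition(
    duration: CMTime(seconds: 20, preferredTimescale: 600),
    size: CGSize(width: 1080, height: 1920)
)
let player = AVPlayer(playerItem: item)
player.play()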
1 reply · 0 boosts · 337 views · Apr ’25
VideoToolbox Encoder's Unregistered User Data SEI NAL UUID
I was advised to post here by a Code-Level Support representative. Below is a copy of my initial issue report, and my minimal reproduction project can be found at the following GitHub repository URL: https://github.com/PierceLBrooks/vtUudSeiNalCmake

DESCRIPTION OF PROBLEM
When encoding H264 video codec data using the VTCompressionSession API available through the VideoToolbox framework on macOS, the resulting bitstream invariably includes Unregistered User Data SEI NAL units that carry the UUID "47564adc-5c4c-433f-94ef-c5113cd143a8". The proprietary decoders we are working with currently struggle to filter out these NAL units. Can you explain what purpose this serves, what the byte-wise unit payloads mean, and which configuration settings the VideoToolbox encoder instance depends upon for triggering their insertion?

STEPS TO REPRODUCE
1. Instantiate a new VideoToolbox H264 encoder object by calling VTCompressionSessionCreate with appropriate configuration flags.
2. Push frames through the encoder, receiving their encoded byte buffer counterparts through an asynchronous callback.
3. Write that encoded data to a buffer which will contain the totality of the encoder's output.
4. Inspect the NAL units of the initial portion of this output bitstream buffer.
5. Observe the presence of at least one Unregistered User Data SEI NAL unit carrying the "47564adc-5c4c-433f-94ef-c5113cd143a8" UUID near the beginning of the output segment.
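Why the encoder emits this unit is a question for Apple, but if the immediate need is to keep these units away from the downstream decoders, dropping SEI NAL units from the bitstream is straightforward. A sketch under the assumption of an Annex B stream with 4-byte start codes:

import Foundation

// Sketch: drop SEI NAL units (H.264 nal_unit_type == 6) from an Annex B buffer.
// Assumes 4-byte 0x00 00 00 01 start codes; 3-byte start codes would need extra handling.
func strippingSEI(from annexB: [UInt8]) -> [UInt8] {
    // Locate every start-code offset.
    var starts: [Int] = []
    var i = 0
    while i + 4 <= annexB.count {
        if annexB[i] == 0, annexB[i + 1] == 0, annexB[i + 2] == 0, annexB[i + 3] == 1 {
            starts.append(i)
            i += 4
        } else {
            i += 1
        }
    }

    var output: [UInt8] = []
    for (index, start) in starts.enumerated() {
        let end = index + 1 < starts.count ? starts[index + 1] : annexB.count
        guard start + 4 < end else { continue }
        let nalType = annexB[start + 4] & 0x1F
        if nalType != 6 {            // 6 == SEI; copy every other NAL unit through
            output.append(contentsOf: annexB[start..<end])
        }
    }
    return output
}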
1 reply · 2 boosts · 197 views · Apr ’25
Case-ID: 12759603: Memory Leak in UIViewControllerRepresentable and VideoPlayer
Dear Developers and DTS team,

I am writing to seek your guidance on a persistent memory leak I've discovered while implementing video playback in a SwiftUI application.

Environment details: iOS 17+, Swift (SwiftUI, AVKit), Xcode 16.2
Target devices: iPhone 15 Pro (iOS 18.3.2), iPhone 16 Plus (iOS 18.3.2)

Detailed issue description: I am experiencing consistent memory leaks when using UIViewControllerRepresentable with AVPlayerViewController for FullscreenVideoPlayer, and with the native VideoPlayer, during video playback termination.

Code context: I have implemented the following approaches:
Added static func dismantleUIViewController(_:coordinator:)
Included deinit in Coordinator
Utilized both UIViewControllerRepresentable and native VideoPlayer

/// A custom AVPlayer integrated with AVPlayerViewController for fullscreen video playback.
///
/// - Parameters:
///   - videoURL: The URL of the video to be played.
struct FullscreenVideoPlayer: UIViewControllerRepresentable {
    // @Binding something for controlling fullscreen
    let videoURL: URL?

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.delegate = context.coordinator
        print("AVPlayerViewController created: \(String(describing: controller))")
        return controller
    }

    /// Updates the `AVPlayerViewController` with the provided video URL and playback state.
    ///
    /// - Parameters:
    ///   - uiViewController: The `AVPlayerViewController` instance to update.
    ///   - context: The SwiftUI context for updates.
    func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
        guard let videoURL else {
            print("Invalid videoURL")
            return
        }
        // Initialize AVPlayer if it's not already set
        if uiViewController.player == nil || uiViewController.player?.currentItem == nil {
            uiViewController.player = AVPlayer(url: videoURL)
            print("AVPlayer updated: \(String(describing: uiViewController.player))")
        }
        // Handle playback state
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(parent: self)
    }

    static func dismantleUIViewController(_ uiViewController: AVPlayerViewController, coordinator: Coordinator) {
        uiViewController.player?.pause()
        uiViewController.player?.replaceCurrentItem(with: nil)
        uiViewController.player = nil
        print("dismantleUIViewController called for \(String(describing: uiViewController))")
    }
}

extension FullscreenVideoPlayer {
    class Coordinator: NSObject, AVPlayerViewControllerDelegate {
        var parent: FullscreenVideoPlayer

        init(parent: FullscreenVideoPlayer) {
            self.parent = parent
        }

        deinit {
            print("Coordinator deinitialized")
        }
    }
}

struct ContentView: View {
    private let videoURL: URL? = URL(string: "https://interactive-examples.mdn.mozilla.net/media/cc0-videos/flower.mp4")

    var body: some View {
        NavigationStack {
            Text("My Userful View")
            List {
                Section("VideoPlayer") {
                    NavigationLink("FullscreenVideoPlayer") {
                        FullscreenVideoPlayer(videoURL: videoURL)
                            .frame(height: 500)
                    }
                    NavigationLink("Native VideoPlayer") {
                        VideoPlayer(player: .init(url: videoURL!))
                            .frame(height: 500)
                    }
                }
            }
        }
    }
}

Reproducibility steps:
Run the application on the target devices.
Scenario A - FullscreenVideoPlayer: tap FullscreenVideoPlayer, play the video to completion, repeat the process 5 times.
Scenario B - VideoPlayer: navigate back to the main screen, tap Native VideoPlayer, play the video to completion, repeat the process 5 times.

Observed memory leak characteristics:
Per iteration (Debug Memory Graph): 4 instances of NSMutableDictionary (Storage) leaked, 4 instances of __NSDictionaryM leaked, 4 × 112-byte malloc blocks leaked.
Cumulative effects: the debug console prints "dismantleUIViewController called for <AVPlayerViewController: 0x{String}>" and "Coordinator deinitialized" when navigating back to the main screen, and after multiple iterations the leaked instances double.

Specific questions:
What underlying mechanisms are causing these memory leaks in UIViewControllerRepresentable and VideoPlayer?
What are the recommended strategies to comprehensively prevent and resolve these memory management issues?
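One mitigation sometimes used while a report like this is investigated (a sketch, not a confirmed fix, and the names are illustrative) is to own a single AVPlayer in the SwiftUI layer and release its item explicitly when the view disappears, so neither the representable nor VideoPlayer creates players of its own:

import SwiftUI
import AVKit

// Sketch: one long-lived AVPlayer whose item is swapped in onAppear and
// dropped in onDisappear, instead of constructing a new player per navigation.
struct PlayerScreen: View {
    @State private var player = AVPlayer()
    let videoURL: URL

    var body: some View {
        VideoPlayer(player: player)
            .frame(height: 500)
            .onAppear {
                player.replaceCurrentItem(with: AVPlayerItem(url: videoURL))
                player.play()
            }
            .onDisappear {
                player.pause()
                player.replaceCurrentItem(with: nil)   // release the item and its buffers
            }
    }
}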
1 reply · 0 boosts · 160 views · Mar ’25
Image brightness adapts despite exposure lock
Short summary
When setting exposureMode to .locked or .custom, the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic range optimizations and totally break any analysis of the image that requires assessing absolute luminance. While exposure lock does seem to lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.

Details

Background
I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera, and one very important aspect of this is luminance measurement. This is particularly relevant since the light sensor of the phone has no publicly accessible API, and the camera could to some extent make experiments available to Apple users that are otherwise only possible on Android devices.

Implementation
The app uses AVFoundation and explicitly picks individual cameras, since camera groups do not support custom exposure settings. This means that it handles camera switching during zoom by itself and even implements its own auto exposure routines to optimize for use in experiments. Therefore it always stays in custom exposure mode. The app uses the YUV420 color space and the individual frames are analyzed in Metal using compute shaders. However, the effects discussed here still occur if I remove all code to control the camera and replace it with a simple sequence of setting the exposure mode to custom, setting custom exposure values, setting a fixed white balance, and then setting the exposure mode to locked, as suggested on Stack Overflow. This helps neither on an iPhone 14 Pro nor on an iPhone 8, despite a report on the developer forums that it would resolve the issue for older devices. The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on GitHub. The videos below use the implementation with the suggestion from Stack Overflow, but they can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like the Blackmagic cam, to quote a reputable company) as well as the stock camera app after pressing and holding on the preview to enable AE/AF lock.

Demonstration
These examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable if there are changes at the edge of the frame when I move a black piece of cardboard in from above: https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ
The graph above the camera preview is the average luminance (gamma corrected and weighted based on sRGB) across the highlighted central area, and as mentioned before it should not change because of something happening at the side of the frame (worst case, it should get a bit darker because of the cardboard's shadow). In my opinion, the iPhone changes its mind on the ideal contrast as soon as it has a different exposure histogram because of the dark image part from the cardboard, but that's just me guessing.
For completeness, here is the same effect in the stock camera app with AE/AF lock enabled: https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg
Here you can also see that the iPhone "ramps" the changes. The brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate post-processing. So... any suggestion on how to prevent this behavior would be highly appreciated.
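For reference, the "lock everything" sequence mentioned above, plus the global tone mapping switch that is sometimes suggested for suppressing scene-dependent post-processing (whether it helps in this particular case is an assumption), looks roughly like this:

import AVFoundation

// Sketch: fix shutter speed, ISO, and white balance, and opt into global tone
// mapping where the format supports it. The exposure values are placeholders.
func lockCamera(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 120),
                                 iso: 100,
                                 completionHandler: nil)
    device.whiteBalanceMode = .locked

    if device.activeFormat.isGlobalToneMappingSupported {
        // Global tone mapping applies one curve to the whole frame instead of
        // local, scene-dependent adjustments.
        device.isGlobalToneMappingEnabled = true
    }
    device.exposureMode = .locked
}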
1 reply · 0 boosts · 118 views · Apr ’25
CVPixelBufferCreate EXC_BAD_ACCESS
I am doing something similar to this post. Within an AVCaptureDataOutputSynchronizerDelegate method, I create a pixel buffer using CVPixelBufferCreate with the following attributes:

kCVPixelBufferIOSurfacePropertiesKey as String: true,
kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true

When I copy the data from the vImagePixelBuffer "rotatedImageBuffer", I get the following error:

Thread 10: EXC_BAD_ACCESS (code=1, address=0x14caa8000)

I get the same error with memcpy and data.copyBytes (not running them at the same time, obviously). If I use CVPixelBufferCreateWithBytes, I do not get this error. However, CVPixelBufferCreateWithBytes does not let you include attributes (see the linked post above). I am using vImage because I need the original CVPixelBuffer from the camera output and a rotated version with a different color scheme.

// Copy to pixel buffer
let attributes: NSDictionary = [
    true : kCVPixelBufferIOSurfacePropertiesKey,
    true : kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey,
]
var colorBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, attributes, &colorBuffer)
//let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes, nil, nil, attributes as CFDictionary, &colorBuffer) // does NOT produce error, but also does not have attributes
guard status == kCVReturnSuccess, let colorBuffer = colorBuffer else {
    print("Failed to create buffer")
    return
}
let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(colorBuffer, lockFlags) else {
    print("Failed to lock base address")
    return
}
let colorBufferMemory = CVPixelBufferGetBaseAddress(colorBuffer)!
let data = Data(bytes: rotatedImageBuffer.data, count: rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height))
data.copyBytes(to: colorBufferMemory.assumingMemoryBound(to: UInt8.self), count: data.count) // Fails here
//memcpy(colorBufferMemory, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height)) // Also produces the same error
CVPixelBufferUnlockBaseAddress(colorBuffer, lockFlags)
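Two things stand out in the code above (observations, not a confirmed diagnosis): the attributes dictionary appears to have its keys and values swapped, and CVPixelBufferCreate is free to choose a bytes-per-row that differs from the vImage buffer's rowBytes, so a single bulk copy of rowBytes × height can run past the destination. A per-row copy sketch that respects both strides:

import Foundation
import CoreVideo
import Accelerate

// Sketch: copy a vImage_Buffer into a CVPixelBuffer row by row, using the
// destination's own bytes-per-row, which may include alignment padding.
func copy(_ source: vImage_Buffer, into pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let destBase = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    let destStride = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let srcStride = source.rowBytes
    let bytesPerRow = min(destStride, srcStride)   // payload bytes to move per row

    for row in 0..<Int(source.height) {
        let src = source.data.advanced(by: row * srcStride)
        let dst = destBase.advanced(by: row * destStride)
        memcpy(dst, src, bytesPerRow)
    }
}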
1 reply · 0 boosts · 124 views · Apr ’25
I’m using ScreenCaptureKit on macOS to grab frames and measure end-to-end latency (capture → my delegate callback). For each CMSampleBuffer I read:
I’m using ScreenCaptureKit on macOS to grab frames and measure end-to-end latency (capture → my delegate callback). For each CMSampleBuffer I read:

let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds

to get the "capture" timestamp, and I also extract the mach-absolute display time:

let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: false) as? [[SCStreamFrameInfo: Any]]
let displayMach = attachments?.first?[.displayTime] as? UInt64
// convert mach ticks to seconds...

Then I compare both against the current time:

let now = CACurrentMediaTime()
let latencyFromPTS = now - pts
let latencyFromDisplay = now - displayTimeSeconds

But I consistently see negative values for both calculations, i.e. the PTS or displayTime often end up numerically larger than now. This suggests that the "presentation timestamp" and the mach-absolute display time are coming from a different epoch or clock domain than CACurrentMediaTime().

Questions:
Which clocks/epochs does ScreenCaptureKit use for PTS and for .displayTime?
How can I align these timestamps with CACurrentMediaTime() so that now - pts and now - displayTime reliably yield non-negative real-world latencies?

Any pointers on the correct clock conversions or APIs to use would be greatly appreciated.
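For what it's worth, the .displayTime attachment is a raw mach-absolute tick count, while CACurrentMediaTime() is mach absolute time already converted to seconds, so a timebase conversion like the sketch below should at least put those two on the same scale (the PTS clock question is separate):

import QuartzCore
import Darwin

// Sketch: convert mach-absolute ticks (e.g., the .displayTime attachment)
// into seconds on the same scale as CACurrentMediaTime().
func seconds(fromMachTicks ticks: UInt64) -> Double {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let nanos = Double(ticks) * Double(timebase.numer) / Double(timebase.denom)
    return nanos / 1_000_000_000.0
}

// Usage: a non-negative latency is expected once both values share a clock.
// let latency = CACurrentMediaTime() - seconds(fromMachTicks: displayMach)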
1 reply · 0 boosts · 154 views · May ’25
HDR video & screen brightness
When I play an HDR video in the iPhone Photos app, I can see the HDR effect clearly. But if the HDR video plays continuously for more than 30-40 minutes, the HDR effect disappears and the brightness is compressed to the SDR range. This issue appears on any iPhone; depending on the phone it may take 20-30 minutes, 30-40 minutes, or even just a few minutes, for example on an iPhone 12 mini. Similarly, if I use AVPlayer to play and preview an HDR video for more than 30-40 minutes, the HDR effect disappears and the screen brightness dims. Also, currentEDRHeadroom gradually decreases to 1. Note: test with an HDR video longer than 1 hour; if the video is short, loop it. My question is how to avoid losing the HDR effect after 30-40 minutes when I use CAMetalLayer to render an HDR video.
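A minimal way to watch the headroom collapse described above (a monitoring sketch, not a fix) is to poll the screen's EDR properties while rendering:

import UIKit

// Sketch: log the screen's EDR headroom once per second to observe the decay
// described above. Timer-based polling is illustrative; a CADisplayLink works too.
final class EDRHeadroomMonitor {
    private var timer: Timer?

    func start(on screen: UIScreen) {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
            print("current EDR headroom:", screen.currentEDRHeadroom,
                  "potential:", screen.potentialEDRHeadroom)
        }
    }

    func stop() { timer?.invalidate() }
}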
1 reply · 0 boosts · 142 views · Jul ’25
Obtain the screen rotation direction in the background
I use ReplayKit for system-level screen recording. I want to determine whether the screen is in landscape mode from the CMSampleBuffer delivered in the callback, but the CMSampleBuffer does not appear to carry this information. The other APIs related to obtaining the screen orientation are restricted in the background. I want to know whether the screen rotation direction can be obtained in real time while running in the background.
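One thing worth checking (an assumption on my part that it applies to this broadcast setup): ReplayKit attaches an orientation value to each video sample buffer under RPVideoSampleOrientationKey, which can be read without touching any UIKit orientation APIs:

import ReplayKit
import CoreMedia
import ImageIO

// Sketch: read the orientation ReplayKit attaches to broadcast sample buffers.
func orientation(of sampleBuffer: CMSampleBuffer) -> CGImagePropertyOrientation? {
    guard let raw = CMGetAttachment(sampleBuffer,
                                    key: RPVideoSampleOrientationKey as CFString,
                                    attachmentModeOut: nil) as? NSNumber else {
        return nil
    }
    return CGImagePropertyOrientation(rawValue: raw.uint32Value)
}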
1 reply · 0 boosts · 72 views · Jun ’25
How to request for Video Subscriber SSO entitlement from Apple
Hi all. I'm working on a Single Sign-On feature in my application to let customers sign in to their TV provider. I need to add the Video Subscriber SSO entitlement (com.apple.developer.video-subscriber-single-sign-on) to the app, but I found out that it's a special entitlement: you need to contact Apple to enable it for your Apple account. On https://developer.apple.com/account I navigated to Support -> Contact Us -> Development and Technical -> Entitlements and asked in the email about the missing entitlement (ticket ID 102478794279). The support team couldn't help me; they redirected me to the operations team. I've been waiting for a few months now, but they just tell me to keep waiting. Is there a better way to contact Apple and get the Video Subscriber SSO entitlement more efficiently?
1 reply · 0 boosts · 89 views · Jun ’25
WideCamera consumes more CPU than telePhotoCamera
I have been taking images from the iOS video camera feed and have encountered an issue. When you take images from the wideCamera, this consumes about half the phone's CPU. The same is not the case when you take images from the telephotoCamera video stream. Is there a way to disable the extra processing that is being done?
1 reply · 0 boosts · 57 views · Jun ’25
AVFoundation — MJPEG Custom-Resolution UVC Stream Not Working on macOS
Hello, I'm Soonwon. We're currently developing a UVC camera device and trying to stream MJPEG video via AVFoundation on macOS. However, we're running into a problem with custom resolutions. When we try to use AVFoundation on macOS to capture MJPEG video at 1000x6000, the stream is not accepted or simply doesn't work. Lower resolutions work fine. (Interestingly, using the same device on iPadOS, we can capture the 1000x6000 MJPEG stream successfully by using AVCaptureSessionPresetInputPriority.)

Is there any way to receive custom-resolution MJPEG streams (like 1000x6000) from a UVC device using AVFoundation on macOS?
Are there specific session presets, entitlements, or known limitations that affect MJPEG handling at custom resolutions on macOS?
Does macOS handle MJPEG differently from iPadOS in AVFoundation?

Any insight or guidance would be greatly appreciated. Thank you!

NSError *error = nil;
if ([selectedDevice lockForConfiguration:&error]) {
    [session beginConfiguration];
    session.sessionPreset = AVCaptureSessionPresetHigh;

    bool foundFormat = false;
    for (AVCaptureDeviceFormat *format in selectedDevice.formats) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        FourCharCode pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription);
        foundFormat = true;
        if (dims.width == 1000 && dims.height == 6000) {
            selectedDevice.activeFormat = format;
            foundFormat = true;
            break;
        }
    }
    if (foundFormat == false) {
        NSLog(@"Failed to foundFormat : ");
        [session commitConfiguration];
        return false;
    }

    NSError* error = nil;
    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:selectedDevice error:&error];
    if (error || ![session canAddInput:input]) {
        NSLog(@"Failed to add video input: %@", error.localizedDescription);
        [session commitConfiguration];
        return false;
    }
    [session addInput:input];

    AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;
    output.videoSettings = @{ (NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) };
    [output setSampleBufferDelegate:delegate queue:queue];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }

    [session commitConfiguration];
    [selectedDevice unlockForConfiguration];
} else {
    NSLog(@"Failed to lock device for configuration: %@", error.localizedDescription);
}
// start~
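One variation sometimes suggested (a sketch, and whether macOS accepts the 1000x6000 MJPEG format this way is exactly the open question in the post): add the input first, set activeFormat afterwards inside the same configuration block, and avoid setting an explicit preset that could fight the chosen format.

import AVFoundation

// Sketch (Swift counterpart of the snippet above): attach the input, then pick
// the 1000x6000 format within the same begin/commitConfiguration block.
func configure(session: AVCaptureSession, device: AVCaptureDevice) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    let input = try AVCaptureDeviceInput(device: device)
    guard session.canAddInput(input) else { return }
    session.addInput(input)

    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if let format = device.formats.first(where: {
        let dims = CMVideoFormatDescriptionGetDimensions($0.formatDescription)
        return dims.width == 1000 && dims.height == 6000
    }) {
        device.activeFormat = format   // set after the input is attached
    }

    let output = AVCaptureVideoDataOutput()
    output.alwaysDiscardsLateVideoFrames = true
    if session.canAddOutput(output) { session.addOutput(output) }
}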
1 reply · 0 boosts · 373 views · Jul ’25
Play videos in WebM format on iOS devices
I would like to play videos in WebM format on my iPhone. I understand that it is basically not possible to play WebM videos natively on an iPhone, but is there any way to display them? I would like to know if there is an official Swift SDK or development kit released by Apple, or, if there are any third-party products, please let me know.
1 reply · 0 boosts · 292 views · Jul ’25
AVAssetReaderOutput.Provider Missing symbols
Recurring crash on install of any app that uses the new sourceVideoTrackProvider.next():

dyld[41966]: Symbol not found: _$sSo19AVAssetReaderOutputC12AVFoundationE8ProviderC4nextxSgyYaKFTjTu
Referenced from: <79AA2BE0-A6B4-32F5-A804-E84BBE5D1AEA> /Users/<username>/Library/Developer/Xcode/DerivedData/TrackProviderCrash-bbbhjptcxnmfdcackxtpucnunxyc/Build/Products/Debug-maccatalyst/TrackProviderCrash.app/Contents/MacOS/TrackProviderCrash.debug.dylib
Expected in: <1B847AF9-7973-3B28-95C2-09E73F6DD50B> /usr/lib/swift/libswiftAVFoundation.dylib

This can be reproduced with the current Xcode beta 4 by building the following sample for Mac Catalyst and macOS: https://developer.apple.com/documentation/AVFoundation/converting-projected-video-to-apple-projected-media-profile
The crash goes away if you comment out lines 154-158 and 164-170, which are:

while let sampleBuffer = try await sourceVideoTrackProvider.next() { /*other code*/ }

It can also be reproduced if you add the code below to a Mac Catalyst project:

import AVKit

let asset: AVURLAsset = .init(url: Bundle.main.url(forResource: "SomeVideo.mp4", withExtension: nil)!)
let videoReader = try! AVAssetReader(asset: asset)
let videoTracks = try! await asset.loadTracks(withMediaCharacteristic: .visual)

// Get the side-by-side video track.
let videoTrack = videoTracks.first!
let videoInputTrack = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)

let sourceVideoTrackProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>> = videoReader.outputProvider(for: videoInputTrack)

// Comment out this
while let sb = try! await sourceVideoTrackProvider.next() {
}
1 reply · 0 boosts · 634 views · Jul ’25