Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Posts under Video subtopic

Using AVPlayer to play encrypted streams causes the system to reboot
When we use AVPlayer to play DRM-encrypted streams on iOS 17.6.1, playback does not work normally, and there is a high probability of a system restart. These are the relevant core error logs:

error 14:47:53.323369+0800 audiomxd [AirPlayError] carManager_copyProperty_block_invoke:499: got error -12784/0xFFFFCE10 kCMBaseObjectError_PropertyNotFound
error 14:47:53.323414+0800 audiomxd [SPEndpointManagerFactory] SidePlay Endpoint Manager creation failed with -72390/0xFFFEE53A
error 14:47:53.364949+0800 audiomxd [APBrowserCarSessionHelper] [0xF6AA] [Bonjour/WiFi] Unrecognized ConnectivityHelper event 101
error 14:47:53.375313+0800 audiomxd AddInstanceForFactory: No factory registered for id <CFUUID 0xa5c5118c0> F8BB1C28-BAE8-11D6-9C31-00039315CD46
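[Editor's note] For reference, a minimal sketch of the usual FairPlay Streaming wiring with AVContentKeySession, which is the API a DRM playback path like the one above goes through. FairPlayKeyDelegate, its key-server exchange, and the queue label are illustrative assumptions, not details from the post:

import AVFoundation

// Sketch of FairPlay key-session wiring. The delegate's key-server exchange
// is elided; the details vary by DRM vendor.
final class FairPlayKeyDelegate: NSObject, AVContentKeySessionDelegate {
    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        // Placeholder: create an SPC with your application certificate,
        // send it to your key server, and respond with the returned CKC.
    }
}

final class DRMPlayback {
    private let keySession = AVContentKeySession(keySystem: .fairPlayStreaming)
    private let keyDelegate = FairPlayKeyDelegate()
    private let keyQueue = DispatchQueue(label: "drm.key.queue") // illustrative label
    private var player: AVPlayer?

    func play(url: URL) {
        keySession.setDelegate(keyDelegate, queue: keyQueue)
        let asset = AVURLAsset(url: url)
        // The key session drives key loading for any recipient added to it.
        keySession.addContentKeyRecipient(asset)
        player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
        player?.play()
    }
}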
Replies: 1 · Boosts: 0 · Views: 247 · Mar ’25
AVPlayer: Significant Delays and Asset Loss When Playing Partially Downloaded HLS Content Offline
We're experiencing significant issues with AVPlayer when attempting to play partially downloaded HLS content in offline mode. Our app downloads HLS video content for offline viewing, but users encounter the following problems:

- Excessive loading delay: when offline, AVPlayer attempts to load resources for up to 60 seconds before playing the locally available segments.
- Asset loss: sometimes AVPlayer completely loses the asset reference and fails to play the video on subsequent attempts.
- Inconsistent behavior: the same partially downloaded asset might play immediately in one session but take 30+ seconds in another.
- Network activity despite offline settings: despite configuring options to prevent network usage, AVPlayer still appears to be attempting network connections.

These issues severely impact our offline user experience, especially for users with intermittent connectivity.

Technical Details

Implementation context: our app downloads HLS videos for offline viewing using AVAssetDownloadTask. We store the downloaded content locally and maintain a dictionary mapping file identifiers to local paths. When attempting to play these videos offline, we experience the issues described above.

Current implementation for playing the videos:

- (void)presentNativeAvplayerForVideo:(Video *)video navContext:(NavContext *)context {
    NSString *localPath = video.localHlsPath;
    if (localPath) {
        NSURL *videoURL = [NSURL URLWithString:localPath];
        NSDictionary *options = @{
            AVURLAssetPreferPreciseDurationAndTimingKey: @YES,
            AVURLAssetAllowsCellularAccessKey: @NO,
            AVURLAssetAllowsExpensiveNetworkAccessKey: @NO,
            AVURLAssetAllowsConstrainedNetworkAccessKey: @NO
        };
        AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:options];
        AVPlayerViewController *playerViewController = [[AVPlayerViewController alloc] init];
        NSArray *keys = @[@"duration", @"tracks"];
        [asset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
            dispatch_async(dispatch_get_main_queue(), ^{
                AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
                AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
                playerViewController.player = player;
                [player play];
            });
        }];
        playerViewController.modalPresentationStyle = UIModalPresentationFullScreen;
        [context presentViewController:playerViewController animated:YES completion:nil];
    }
}

Attempted Solutions

We've tried several approaches to mitigate these issues:

1. Modified asset options:

NSDictionary *options = @{
    AVURLAssetPreferPreciseDurationAndTimingKey: @NO, // Changed to NO
    AVURLAssetAllowsCellularAccessKey: @NO,
    AVURLAssetAllowsExpensiveNetworkAccessKey: @NO,
    AVURLAssetAllowsConstrainedNetworkAccessKey: @NO,
    AVAssetReferenceRestrictionsKey: @(AVAssetReferenceRestrictionForbidRemoteReferenceToLocal)
};

2. Skipped asynchronous key loading:

AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset automaticallyLoadedAssetKeys:nil];

3. Modified player settings:

player.automaticallyWaitsToMinimizeStalling = NO;
[playerItem setPreferredForwardBufferDuration:2.0];

4. Added network resource restrictions:

playerItem.canUseNetworkResourcesForLiveStreamingWhilePaused = NO;

5. Used file URLs instead of HTTP URLs where possible.

Despite these attempts, the issues persist.

Expected vs. Actual Behavior

Expected behavior:
- AVPlayer should immediately begin playback of locally available HLS segments.
- When offline, it should not attempt to load from the network for more than a few seconds.
- Once an asset has played successfully, it should remain reliably available for future playback.

Actual behavior:
- AVPlayer waits 10-60 seconds before playing locally available segments.
- Network activity is observed despite all network-restricting options.
- Sometimes the player fails completely to play a previously available asset.
- Behavior is inconsistent between playback attempts with the same asset.

Questions:
- What is the recommended approach for playing partially downloaded HLS content offline with minimal delay?
- Is there a way to force AVPlayer to immediately use available local segments without attempting to load from the network?
- Are there any known issues with AVPlayer losing references to locally stored HLS assets?
- What diagnostic steps would you recommend to track down the specific cause of these delays?
- Does AVFoundation have specific timeouts for offline HLS playback that could be configured?

Any guidance would be greatly appreciated, as this issue is significantly impacting our user experience.

Device Information: iOS versions tested 14.5-18.1; device models iPhone 12, iPhone 13, iPhone 14, iPhone 15; Xcode 15.3-16.2.1.
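[Editor's note] For comparison, a minimal sketch of the offline flow Apple documents for downloaded HLS: play from the .movpkg location the download delegate hands back, persisted as a bookmark rather than a raw path. The "savedBookmark" key name is illustrative, not from the post:

import AVFoundation

// Sketch: persist the .movpkg location from the download delegate, then play
// from that file URL so AVPlayer resolves segments locally first.
final class OfflineHLSStore: NSObject, AVAssetDownloadDelegate {
    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // `location` points at the on-disk .movpkg bundle; store a bookmark,
        // not the raw path, because the path can change between launches.
        if let bookmark = try? location.bookmarkData() {
            UserDefaults.standard.set(bookmark, forKey: "savedBookmark") // illustrative key
        }
    }

    func makeOfflinePlayer() -> AVPlayer? {
        guard let data = UserDefaults.standard.data(forKey: "savedBookmark") else { return nil }
        var stale = false
        guard let url = try? URL(resolvingBookmarkData: data, bookmarkDataIsStale: &stale)
        else { return nil }
        let asset = AVURLAsset(url: url) // file URL, so no network scheme is involved
        return AVPlayer(playerItem: AVPlayerItem(asset: asset))
    }
}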
Replies: 1 · Boosts: 0 · Views: 482 · Mar ’25
Visual isTranslatable: NO; reason: observation failure: noObservations, when trying to play custom compositor video with AVPlayer
I am trying to achieve an animated gradient effect that changes values over time based on the current seconds. I am using AVPlayer and AVMutableVideoComposition along with a custom instruction and compositor class to generate the effect. I didn't want to load any video file, but rather generate a custom video with my own set of instructions. I used Metal compute shaders to generate the effect and made the video 20 seconds long. However, when I run the code, I get a frozen player with the gradient applied, and when I try to play the video, I get this warning in the console:

Visual isTranslatable: NO; reason: observation failure: noObservations

My entire code:

import AVFoundation
import Metal

class GradientVideoCompositorTest: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String: Any]? = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]

    var requiredPixelBufferAttributesForRenderContext: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]

    private var renderContext: AVVideoCompositionRenderContext?
    private var metalDevice: MTLDevice!
    private var metalCommandQueue: MTLCommandQueue!
    private var metalLibrary: MTLLibrary!
    private var metalPipeline: MTLComputePipelineState!

    override init() {
        super.init()
        setupMetal()
    }

    func setupMetal() {
        guard let device = MTLCreateSystemDefaultDevice(),
              let queue = device.makeCommandQueue(),
              let library = try? device.makeDefaultLibrary(),
              let function = library.makeFunction(name: "gradientShader") else {
            fatalError("Metal setup failed")
        }
        self.metalDevice = device
        self.metalCommandQueue = queue
        self.metalLibrary = library
        self.metalPipeline = try? device.makeComputePipelineState(function: function)
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        renderContext = newRenderContext
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let outputPixelBuffer = renderContext?.newPixelBuffer(),
              let metalTexture = createMetalTexture(from: outputPixelBuffer) else {
            request.finish(with: NSError(domain: "com.example.gradient", code: -1, userInfo: nil))
            return
        }
        var time = Float(request.compositionTime.seconds)
        renderGradient(to: metalTexture, time: time)
        request.finish(withComposedVideoFrame: outputPixelBuffer)
    }

    private func createMetalTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        var texture: MTLTexture?
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: .bgra8Unorm,
            width: width,
            height: height,
            mipmapped: false
        )
        textureDescriptor.usage = [.shaderWrite, .shaderRead]

        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        if let textureCache = createTextureCache(),
           let cvTexture = createCVMetalTexture(from: pixelBuffer, cache: textureCache) {
            texture = CVMetalTextureGetTexture(cvTexture)
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
        return texture
    }

    private func renderGradient(to texture: MTLTexture, time: Float) {
        guard let commandBuffer = metalCommandQueue.makeCommandBuffer(),
              let commandEncoder = commandBuffer.makeComputeCommandEncoder() else { return }
        commandEncoder.setComputePipelineState(metalPipeline)
        commandEncoder.setTexture(texture, index: 0)
        var mutableTime = time
        commandEncoder.setBytes(&mutableTime, length: MemoryLayout<Float>.size, index: 0)

        let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
        let threadGroups = MTLSize(
            width: (texture.width + 15) / 16,
            height: (texture.height + 15) / 16,
            depth: 1
        )
        commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadsPerGroup)
        commandEncoder.endEncoding()
        commandBuffer.commit()
    }

    private func createTextureCache() -> CVMetalTextureCache? {
        var cache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &cache)
        return cache
    }

    private func createCVMetalTexture(from pixelBuffer: CVPixelBuffer, cache: CVMetalTextureCache) -> CVMetalTexture? {
        var cvTexture: CVMetalTexture?
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault,
            cache,
            pixelBuffer,
            nil,
            .bgra8Unorm,
            width,
            height,
            0,
            &cvTexture
        )
        return cvTexture
    }
}

class GradientCompositionInstructionTest: NSObject, AVVideoCompositionInstructionProtocol {
    var timeRange: CMTimeRange
    var enablePostProcessing: Bool = true
    var containsTweening: Bool = true
    var requiredSourceTrackIDs: [NSValue]? = nil
    var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid

    init(timeRange: CMTimeRange) {
        self.timeRange = timeRange
    }
}

func createGradientVideoComposition(duration: CMTime, size: CGSize) -> AVMutableVideoComposition {
    let composition = AVMutableComposition()
    let instruction = GradientCompositionInstructionTest(timeRange: CMTimeRange(start: .zero, duration: duration))
    let videoComposition = AVMutableVideoComposition()
    videoComposition.customVideoCompositorClass = GradientVideoCompositorTest.self
    videoComposition.renderSize = size
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 FPS
    videoComposition.instructions = [instruction]
    return videoComposition
}

And the shader:

#include <metal_stdlib>
using namespace metal;

kernel void gradientShader(texture2d<float, access::write> output [[texture(0)]],
                           constant float &time [[buffer(0)]],
                           uint2 id [[thread_position_in_grid]]) {
    float2 uv = float2(id) / float2(output.get_width(), output.get_height());

    // Animated colors based on time
    float3 color1 = float3(sin(time) * 0.8 + 0.1, 0.6, 1.0);
    float3 color2 = float3(0.12, 0.99, cos(time) * 0.9 + 0.3);

    // Linear interpolation for gradient
    float3 gradientColor = mix(color1, color2, uv.y);

    output.write(float4(gradientColor, 1.0), id);
}
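[Editor's note] The post doesn't show how the composition gets attached to a player item. A sketch of one possible wiring, assuming a 20-second timeline at 1080p; the empty-track approach is an assumption to give the item a real duration for the custom compositor to render over, and may need a placeholder source track on some OS versions:

import AVFoundation

// Sketch: give AVPlayer a timeline for the custom compositor to render over.
func makeGradientPlayerItem() -> AVPlayerItem {
    let duration = CMTime(seconds: 20, preferredTimescale: 600)
    let size = CGSize(width: 1920, height: 1080)

    // A composition with an empty video track; the custom compositor draws
    // its own frames, but the item still needs a duration to play through.
    let composition = AVMutableComposition()
    composition.addMutableTrack(withMediaType: .video,
                                preferredTrackID: kCMPersistentTrackID_Invalid)?
        .insertEmptyTimeRange(CMTimeRange(start: .zero, duration: duration))

    let item = AVPlayerItem(asset: composition)
    // createGradientVideoComposition is the function from the post above.
    item.videoComposition = createGradientVideoComposition(duration: duration, size: size)
    return item
}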
Replies: 1 · Boosts: 0 · Views: 336 · Apr ’25
AVPlayer: handling 403 errors
Hello. Is there a way to handle a 403 error returned by the server, e.g. an expired token? I cannot find any information about this, and everything I tried hasn't worked (addObserver, NotificationCenter with .AVPlayerItemNewErrorLogEntry, .AVPlayerItemPlaybackStalled, ...). Thank you very much.
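[Editor's note] A sketch of the two usual observation points, assuming the 403 arrives over HLS: a failure on the playlist flips the item's status to .failed, while segment-level HTTP errors surface only in the item's error log:

import AVFoundation

// Sketch: watch both the item's failure state and its HTTP error log.
final class PlayerErrorWatcher {
    private var statusObservation: NSKeyValueObservation?

    func watch(_ item: AVPlayerItem) {
        // Fatal errors (e.g. a 403 on the playlist) flip status to .failed.
        statusObservation = item.observe(\.status) { item, _ in
            if item.status == .failed {
                print("Item failed:", item.error ?? "unknown error")
            }
        }
        // Non-fatal HTTP errors on media segments land in the error log.
        NotificationCenter.default.addObserver(
            forName: .AVPlayerItemNewErrorLogEntry,
            object: item, queue: .main
        ) { note in
            guard let item = note.object as? AVPlayerItem,
                  let event = item.errorLog()?.events.last else { return }
            print("Error log:", event.errorStatusCode, event.errorComment ?? "")
        }
    }
}

On a 403 from the error log, the app can then tear down the item, refresh the token, and rebuild the player with a fresh URL; AVPlayer does not retry expired URLs by itself.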
Replies: 0 · Boosts: 0 · Views: 138 · Mar ’25
Videos viewed in WKWebView have Spatial Audio issues
I have an app with a WKWebView for watching YouTube videos. When the videos are windowed, the audio seems fine, positionally as well. All perfectly. When I fullscreen the video and it goes into the native visionOS video player, the audio messes up. It will suddenly sound like it is in your ears, or maybe even just one ear channel, or the position will be wrong. It might be fine for a moment, but the second I touch the controls or move the window, the sound jumps across the room, away from the window, or switches to stereo.

Sometimes, after closing the window entirely, you can still hear the video playing. If you then reopen the window, go to another screen, and open another video, you now hear two videos playing at the same time with no way to stop the first one in the background, which requires force-restarting the app. It is all sorts of glitchy.

I haven't the slightest clue what is happening here. I strongly suspect this is a visionOS bug. I tried using AVAudioSession to change some of the sound settings, and that makes zero difference in behavior. Multiple testers have also reported this behavior, and it has been seen on both the visionOS 2.3 and 2.4 betas.

Thanks for the help! This is driving me mad! It is extremely consistent behavior!
Replies: 0 · Boosts: 0 · Views: 152 · Mar ’25
Converting a 2vuy CVPixelBuffer to a Metal Texture
I am working on a macOS project where I take an AVCaptureSession's CVPixelBuffer and need to convert it into a MTLTexture for rendering. On macOS the pixel format is 2vuy, and there does not seem to be a clear mapping for this format when converting to a Metal texture. I have been able to convert it to a texture, but the color space seems to be off, as it renders distorted colors with a double image. I believe 2vuy is a single-plane packed format and I have tried to account for that, but I am unsure what is off. I have attached the CVPixelBuffer, the distorted MTLTexture, and a laundry list of errors. On iOS my conversions are fine; it is only the macOS 2vuy pixel format that seems to have issues. My code for the conversion is also attached. If there are any suggestions or guidance on how to properly convert a 2vuy CVPixelBuffer to a MTLTexture, I would greatly appreciate it. Many thanks.

Conversion_Logs.txt
ConversionCode.swift
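[Editor's note] A sketch of the mapping that usually applies to '2vuy' (kCVPixelFormatType_422YpCbCr8): create the texture through a CVMetalTextureCache with the .gbgr422 pixel format rather than treating the buffer as BGRA. This is an assumption about the poster's setup, and the sampled values are still Y'CbCr, so a conversion to RGB is needed in the shader:

import CoreVideo
import Metal

// Sketch: wrap a '2vuy' (packed 4:2:2) pixel buffer as a .gbgr422 texture.
// Sampling it yields Y'CbCr; convert to RGB in the shader afterwards.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        cache,
        pixelBuffer,
        nil,
        .gbgr422, // believed to match kCVPixelFormatType_422YpCbCr8 ('2vuy')
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0,
        &cvTexture
    )
    guard status == kCVReturnSuccess, let cvTexture,
          let texture = CVMetalTextureGetTexture(cvTexture) else { return nil }
    return texture
}

Treating a 4:2:2 packed buffer as .bgra8Unorm halves the apparent width and misreads chroma as color channels, which matches the "double image with distorted colors" symptom described above.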
Replies: 0 · Boosts: 0 · Views: 89 · Mar ’25
Image brightness adapts despite exposure lock
Short summary

When setting exposureMode to .locked or .custom, the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic range optimizations and totally break any analysis of the image that requires assessing absolute luminance. While exposure lock does seem to lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.

Details

Background: I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera, and one very important aspect of that is luminance measurement. This is particularly relevant since the light sensor of the phone has no publicly accessible API, and the camera could to some extent make experiments available to Apple users that are otherwise only possible on Android devices.

Implementation: the app uses AVFoundation and explicitly picks individual cameras, since camera groups do not support custom exposure settings. This means it handles camera switching during zoom by itself and even implements its own auto-exposure routines to optimize for use in experiments. Therefore it always stays in custom exposure mode. The app uses the YUV420 color space, and the individual frames are analyzed in Metal using compute shaders. However, the effects discussed here still occur if I remove all code to control the camera and replace it with a simple sequence of setting the exposure mode to custom, setting custom exposure values, setting a fixed white balance, and then setting the exposure mode to locked, as suggested on Stack Overflow. This helps neither on an iPhone 14 Pro nor on an iPhone 8, despite a report on the developer forums that it would resolve the issue for older devices. The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on GitHub.

The videos below use the implementation with the suggestion from Stack Overflow, but they can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like the Blackmagic cam, to quote a reputable company) as well as with the stock camera app after pressing and holding on the preview to enable AE/AF lock.

Demonstration: these examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable when something changes at the edge of the frame, as when I move a black piece of cardboard in from above: https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ

The graph above the camera preview is the average luminance (gamma-corrected and weighted based on sRGB) across the highlighted central area, and as mentioned before it should not change because of something happening at the side of the frame (worst case, it should get a bit darker because of the cardboard's shadow). In my opinion, the iPhone changes its mind about the ideal contrast as soon as it sees a different exposure histogram because of the dark image part from the cardboard, but that's just me guessing.

For completeness, here is the same effect in the stock camera app with AE/AF lock enabled: https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg

Here you can also see that the iPhone "ramps" the changes. The brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate postprocessing.

So... any suggestion on how to prevent this behavior would be highly appreciated.
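[Editor's note] For reference, a sketch of the custom-exposure-then-lock sequence the post describes; the duration and ISO values are placeholders, not the poster's code:

import AVFoundation

// Sketch: fix shutter speed, ISO, and white balance, then lock exposure.
// The concrete values here are placeholders.
func lockExposure(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Duration must lie within the active format's supported range.
    let duration = CMTime(value: 1, timescale: 120) // 1/120 s, placeholder
    let iso = max(device.activeFormat.minISO, min(400, device.activeFormat.maxISO))
    device.setExposureModeCustom(duration: duration, iso: iso, completionHandler: nil)

    let gains = device.deviceWhiteBalanceGains // freeze the current gains
    device.setWhiteBalanceModeLocked(with: gains, completionHandler: nil)

    device.exposureMode = .locked
}

As the post demonstrates, this locks shutter speed and ISO but does not expose a control for the downstream tone-mapping stage that causes the observed brightness shifts.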
Replies: 1 · Boosts: 0 · Views: 115 · Apr ’25
iOS AVPlayer Subtitles / Captions
As of iOS 18, as far as I can tell, there are still no AVPlayer options that allow users to toggle the caption/subtitle track on and off. Does anyone know of a way to do this with AVPlayer or with SwiftUI's VideoPlayer? The following code reproduces this issue. It can be pasted into an app playground. This is a random video and a random VTT file I found on the internet.

import SwiftUI
import AVKit
import UIKit

struct ContentView: View {
    private let video = URL(string: "https://server15700.contentdm.oclc.org/dmwebservices/index.php?q=dmGetStreamingFile/p15700coll2/15.mp4/byte/json")!
    private let captions = URL(string: "https://gist.githubusercontent.com/samdutton/ca37f3adaf4e23679957b8083e061177/raw/e19399fbccbc069a2af4266e5120ae6bad62699a/sample.vtt")!

    @State private var player: AVPlayer?

    var body: some View {
        VStack {
            VideoPlayerView(player: player)
                .frame(maxWidth: .infinity, maxHeight: 200)
        }
        .task {
            // Captions won't work for some reason
            player = try? await loadPlayer(video: video, captions: captions)
        }
    }
}

private struct VideoPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer?

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = player
        controller.modalPresentationStyle = .overFullScreen
        return controller
    }

    func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
        uiViewController.player = player
    }
}

private func loadPlayer(video: URL, captions: URL?) async throws -> AVPlayer {
    let videoAsset = AVURLAsset(url: video)
    let videoPlusSubtitles = AVMutableComposition()

    try await videoPlusSubtitles.add(videoAsset, withMediaType: .video)
    try await videoPlusSubtitles.add(videoAsset, withMediaType: .audio)

    if let captions {
        let captionAsset = AVURLAsset(url: captions)
        // Must add as .text. .closedCaption and .subtitle don't work?
        try await videoPlusSubtitles.add(captionAsset, withMediaType: .text)
    }

    return await AVPlayer(playerItem: AVPlayerItem(asset: videoPlusSubtitles))
}

private extension AVMutableComposition {
    func add(_ asset: AVAsset, withMediaType mediaType: AVMediaType) async throws {
        let duration = try await asset.load(.duration)
        try await asset.loadTracks(withMediaType: mediaType).first.map { track in
            let newTrack = self.addMutableTrack(withMediaType: mediaType, preferredTrackID: kCMPersistentTrackID_Invalid)
            let range = CMTimeRangeMake(start: .zero, duration: duration)
            try newTrack?.insertTimeRange(range, of: track, at: .zero)
        }
    }
}
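[Editor's note] For streams that carry subtitle renditions (typical for HLS), toggling normally goes through the asset's legible media selection group rather than composition tracks. A sketch, assuming the asset actually exposes such a group:

import AVFoundation

// Sketch: toggle subtitles via the asset's legible media selection group.
// Only works when the asset exposes one (e.g. HLS with subtitle renditions).
func setSubtitles(enabled: Bool, for item: AVPlayerItem) async throws {
    guard let group = try await item.asset
        .loadMediaSelectionGroup(for: .legible) else { return }
    if enabled {
        // Pick the first option; real code would match the user's language.
        if let option = group.options.first {
            item.select(option, in: group)
        }
    } else {
        // Deselecting requires group.allowsEmptySelection to be true.
        item.select(nil, in: group)
    }
}

When such a group exists, AVPlayerViewController also shows its built-in subtitle picker, which may remove the need for a custom toggle entirely.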
Replies: 0 · Boosts: 0 · Views: 100 · Apr ’25
How to correctly set up AVSampleBufferDisplayLayer
How can I correctly set up AVSampleBufferDisplayLayer for video display when my input picture format is kCVPixelFormatType_32BGRA? Currently the video is visible in the simulator, but not on an iPhone. Am I missing something? Render code:

var pixelBuffer: CVPixelBuffer?
let attrs: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey as String: width,
    kCVPixelBufferHeightKey as String: height,
    kCVPixelBufferBytesPerRowAlignmentKey as String: width * 4,
    kCVPixelBufferIOSurfacePropertiesKey as String: [:]
]

let status = CVPixelBufferCreateWithBytes(
    nil,
    width,
    height,
    kCVPixelFormatType_32BGRA,
    img,
    width * 4,
    nil,
    nil,
    attrs as CFDictionary,
    &pixelBuffer
)
guard status == kCVReturnSuccess, let pb = pixelBuffer else { return }

var formatDesc: CMVideoFormatDescription?
CMVideoFormatDescriptionCreateForImageBuffer(
    allocator: nil,
    imageBuffer: pb,
    formatDescriptionOut: &formatDesc
)
guard let format = formatDesc else { return }

var timingInfo = CMSampleTimingInfo(
    duration: .invalid,
    presentationTimeStamp: currentTime,
    decodeTimeStamp: .invalid
)

var sampleBuffer: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(
    allocator: kCFAllocatorDefault,
    imageBuffer: pb,
    dataReady: true,
    makeDataReadyCallback: nil,
    refcon: nil,
    formatDescription: format,
    sampleTiming: &timingInfo,
    sampleBufferOut: &sampleBuffer
)

if let sb = sampleBuffer {
    if CMSampleBufferGetPresentationTimeStamp(sb) == .invalid {
        print("Invalid video timestamp")
    }

    if displayLayer.status == .failed {
        displayLayer.flush()
    }

    DispatchQueue.main.async { [weak self] in
        guard let self = self else {
            print("Lost reference to self drawing")
            return
        }
        displayLayer.enqueue(sb)
    }
    frameIndex += 1
}
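[Editor's note] One detail that often separates simulator from device behavior with manually built sample buffers is presentation timing. A sketch of tagging a buffer for immediate display, offered as an assumption about the cause rather than a confirmed fix:

import AVFoundation
import CoreMedia

// Sketch: mark a sample buffer to be displayed as soon as it is enqueued,
// bypassing presentation-timestamp scheduling on the layer's timebase.
func markDisplayImmediately(_ sampleBuffer: CMSampleBuffer) {
    guard let attachments = CMSampleBufferGetSampleAttachmentsArray(
            sampleBuffer, createIfNecessary: true),
          CFArrayGetCount(attachments) > 0 else { return }
    let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0),
                             to: CFMutableDictionary.self)
    CFDictionarySetValue(
        dict,
        Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
        Unmanaged.passUnretained(kCFBooleanTrue).toOpaque()
    )
}

Calling markDisplayImmediately(sb) before displayLayer.enqueue(sb) sidesteps timestamp scheduling entirely; if the timestamps are meant to be honored instead, the layer needs a running controlTimebase so the currentTime values have a reference.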
Replies: 0 · Boosts: 0 · Views: 113 · Apr ’25
AirDropped Videos from Photos Save to Files Instead of Photos on Receiving Device
My app allows users to capture and save videos to the Photos app using the following Swift code:

PHPhotoLibrary.shared().performChanges {
    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
} completionHandler: { success, error in
    // ...
}

Videos are successfully saved to Photos and play correctly. However, users report that when they AirDrop these videos from the Photos app to another device (e.g., iPad to iPhone), the videos are saved in the Files app on the receiving device instead of the Photos app. This issue is more common with higher-resolution videos, such as 2K, recorded in HEVC format at 30 fps. I wasn't able to reproduce the issue locally. I've found a thread in the public Apple forum: https://discussions.apple.com/thread/255276865?sortBy=rank but I wonder whether there are special flags I should clear or add to my videos (e.g. on the PHAssetChangeRequest)? Thank you!
Replies: 0 · Boosts: 0 · Views: 95 · May ’25
Can VideoToolbox properly decode HEVC bitstreams when a single frame is split into multiple slice NALUs?
I am currently developing an HEVC player using VideoToolbox on an iOS device. I have successfully created an HEVC decoder that receives HEVC streams from our custom image capture and encoding device, and it can decode and display images properly. However, when the capture device configures its encoder to output HEVC streams with fragmented NALUs (i.e., an I-frame or P-frame is split across multiple slice NALUs), the iOS decoder can be initialized successfully but fails to decode and output images.

Can VideoToolbox properly decode HEVC bitstreams when a single frame is split into multiple slice NALUs?

Key observations:
1. Single-NALU frames work fine.
2. Multi-NALU frames (sliced I/P-frames) cause decoding failure.
3. The decoder session is created successfully (VTDecompressionSessionCreate returns no error).
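[Editor's note] In principle yes, provided every slice NALU of one frame lands in a single CMSampleBuffer as length-prefixed NAL units matching the NAL unit header length declared in the hvcC format description. A sketch of that packing, assuming 4-byte length prefixes and slices already stripped of Annex B start codes:

import CoreMedia

// Sketch: pack every slice NALU of one access unit into a single block,
// each prefixed with its 4-byte big-endian length (HVCC style).
func makeSampleBuffer(slices: [Data],
                      format: CMVideoFormatDescription,
                      pts: CMTime) -> CMSampleBuffer? {
    // Concatenate: [4-byte BE length][slice] for every slice of the frame.
    var frameData = Data()
    for slice in slices {
        var length = UInt32(slice.count).bigEndian
        withUnsafeBytes(of: &length) { frameData.append(contentsOf: $0) }
        frameData.append(slice)
    }

    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(
        allocator: kCFAllocatorDefault,
        memoryBlock: nil,                  // let CoreMedia allocate
        blockLength: frameData.count,
        blockAllocator: nil,
        customBlockSource: nil,
        offsetToData: 0,
        dataLength: frameData.count,
        flags: kCMBlockBufferAssureMemoryNowFlag,
        blockBufferOut: &blockBuffer) == kCMBlockBufferNoErr,
        let block = blockBuffer else { return nil }

    frameData.withUnsafeBytes { raw in
        _ = CMBlockBufferReplaceDataBytes(with: raw.baseAddress!,
                                          blockBuffer: block,
                                          offsetIntoDestination: 0,
                                          dataLength: frameData.count)
    }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: pts,
                                    decodeTimeStamp: .invalid)
    var sampleSize = frameData.count
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                         dataBuffer: block,
                         dataReady: true,
                         makeDataReadyCallback: nil,
                         refcon: nil,
                         formatDescription: format,
                         sampleCount: 1,
                         sampleTimingEntryCount: 1,
                         sampleTimingArray: &timing,
                         sampleSizeEntryCount: 1,
                         sampleSizeArray: &sampleSize,
                         sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}

Feeding each slice to VTDecompressionSessionDecodeFrame as its own sample buffer is the usual cause of the symptom described: the decoder sees an incomplete access unit per call and outputs nothing.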
Replies: 0 · Boosts: 0 · Views: 130 · May ’25
How to disable automatic updates to MPNowPlayingInfoCenter from AVPlayer
I’m building a SwiftUI app whose primary job is to play audio. I manage all of the Now Playing metadata and command center manually via the available shared instances:

MPRemoteCommandCenter.shared()
MPNowPlayingInfoCenter.default().nowPlayingInfo

In certain parts of the app I also need to display videos, but as soon as I attach another AVPlayer, it automatically pushes its own metadata into the Control Center and overwrites my audio info. What I need: a way to show video inline without ever having that video player update the system’s Now Playing info (or Control Center).

In my app, I start by configuring the shared audio session:

do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [
        .allowAirPlay, .allowBluetoothA2DP
    ])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    NSLog("%@", "**** Failed to set up AVAudioSession \(error.localizedDescription)")
}

and then set the MPRemoteCommandCenter commands and MPNowPlayingInfoCenter nowPlayingInfo as mentioned above. All this works without any issues as long as I only have one AVPlayer in my app. But when I add other AVPlayers to display some videos (and keep the main AVPlayer for the sound), they push undesired updates to MPNowPlayingInfoCenter:

struct VideoCardView: View {
    @State private var player: AVPlayer
    let videoName: String

    init(player: AVPlayer = AVPlayer(), videoName: String) {
        self.player = player
        self.videoName = videoName
        guard let path = Bundle.main.path(forResource: videoName, ofType: nil) else { return }
        let url = URL(fileURLWithPath: path)
        let item = AVPlayerItem(url: url)
        self.player.replaceCurrentItem(with: item)
    }

    var body: some View {
        VideoPlayer(player: player)
            .aspectRatio(contentMode: .fill)
            .onAppear {
                player.isMuted = true
                player.allowsExternalPlayback = false
                player.actionAtItemEnd = .none
                player.play()

                MPNowPlayingInfoCenter.default().nowPlayingInfo = nil
                MPNowPlayingInfoCenter.default().playbackState = .stopped

                NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                                       object: player.currentItem,
                                                       queue: .main) { notification in
                    guard let finishedItem = notification.object as? AVPlayerItem,
                          finishedItem === player.currentItem else { return }
                    player.seek(to: .zero)
                    player.play()
                }
            }
            .onDisappear {
                player.pause()
            }
    }
}

Which is why I tried adding:

MPNowPlayingInfoCenter.default().nowPlayingInfo = nil
MPNowPlayingInfoCenter.default().playbackState = .stopped // or .interrupted, .unknown

But that didn't work. I also tried making a wrapper around AVPlayerViewController in order to set updatesNowPlayingInfoCenter to false, but that didn’t work either:

struct CustomAVPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let vc = AVPlayerViewController()
        vc.player = player
        vc.updatesNowPlayingInfoCenter = false
        vc.showsPlaybackControls = false
        return vc
    }

    func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {
        controller.player = player
    }
}

Hence any help on how to embed video in SwiftUI without its AVPlayer touching MPNowPlayingInfoCenter would be greatly appreciated. All this was tested on an actual device with iOS 18.4.1, and built with Xcode 16.2 on macOS 15.5.
Replies: 2 · Boosts: 0 · Views: 190 · Jun ’25
How to enumerate built-in media devices?
Hello, I need to enumerate built-in media devices (cameras, microphones, etc.). For this purpose, I am using the CoreAudio and CoreMediaIO frameworks. According to the table 'Daemon-Safe Frameworks' in Apple’s TN2083, CoreAudio is daemon-safe. However, the documentation does not mention CoreMediaIO. Can CoreMediaIO be used in a daemon? If not, are there any documented alternatives to detect built-in cameras in a daemon (e.g., via device classes in IOKit)? Thank you in advance, Pavel
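[Editor's note] For reference, a sketch of device enumeration through CoreMediaIO's C API; whether these calls are safe in a daemon is exactly the open question here, and kCMIOObjectPropertyElementMain assumes macOS 12 or later (earlier systems use kCMIOObjectPropertyElementMaster):

import CoreMediaIO

// Sketch: list all CMIO device IDs from the system object.
func copyCMIODeviceIDs() -> [CMIOObjectID] {
    var address = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyDevices),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMain)
    )

    // First ask for the size of the device list, then fetch it.
    var dataSize: UInt32 = 0
    guard CMIOObjectGetPropertyDataSize(
        CMIOObjectID(kCMIOObjectSystemObject), &address, 0, nil, &dataSize) == noErr
    else { return [] }

    let count = Int(dataSize) / MemoryLayout<CMIOObjectID>.size
    var devices = [CMIOObjectID](repeating: 0, count: count)
    var dataUsed: UInt32 = 0
    guard CMIOObjectGetPropertyData(
        CMIOObjectID(kCMIOObjectSystemObject), &address, 0, nil,
        dataSize, &dataUsed, &devices) == noErr
    else { return [] }
    return devices
}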
Replies: 0 · Boosts: 0 · Views: 92 · May ’25
Option to Set Default Interpolation for Keyframes to “Linear” in Final Cut Pro
In Final Cut Pro, keyframes for transform parameters (such as Position, Scale, and Rotation) are automatically set to “Smooth” interpolation. This often results in undesired easing between keyframes, especially when linear motion is required. Currently, we have to manually adjust each keyframe to "Linear" using the Video Animation Editor, which can be time-consuming when working with many keyframes. Would it be possible to add an option to set the default keyframe interpolation to "Linear"—either globally in Preferences or per parameter in the Inspector? This would greatly streamline the animation workflow for many editors. Thank you for considering this request!
Replies: 0 · Boosts: 0 · Views: 104 · Jun ’25
HDR video & screen brightness
When I play an HDR video in the iPhone Photos app, I can clearly see the HDR effect. But if the HDR video plays continuously for more than 30-40 minutes, the HDR effect disappears and the brightness is compressed to the SDR range. This happens on any iPhone; depending on the model it may take 20-30 minutes, 30-40 minutes, or even just a few minutes, as on the iPhone 12 mini. Similarly, if I use AVPlayer to play and preview an HDR video for more than 30-40 minutes, the HDR effect disappears and the screen brightness dims. Also, currentEDRHeadroom gradually decreases to 1. Note: test with an HDR video longer than 1 hour, and if the video is short, loop it. My question is how to avoid losing the HDR effect after 30-40 minutes when I use CAMetalLayer to render HDR video.
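[Editor's note] A sketch of observing the headroom decay the post describes, so a CAMetalLayer pipeline can at least adapt its tone mapping; this monitors the clamp rather than preventing it, which remains the open question:

import UIKit
import QuartzCore

// Sketch: watch the screen's EDR headroom while rendering HDR into a CAMetalLayer.
final class EDRMonitor {
    private var timer: Timer?

    func start(layer: CAMetalLayer, screen: UIScreen) {
        layer.wantsExtendedDynamicRangeContent = true
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
            // currentEDRHeadroom drops toward 1.0 when the system clamps HDR;
            // tone-map into the remaining headroom instead of a fixed peak.
            let headroom = screen.currentEDRHeadroom
            print("EDR headroom:", headroom, "of potential", screen.potentialEDRHeadroom)
        }
    }

    func stop() { timer?.invalidate() }
}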
Replies: 1 · Boosts: 0 · Views: 142 · Jul ’25
How to request for Video Subscriber SSO entitlement from Apple
Hi all. I'm working on a Single Sign-On feature in my application to let customers sign in with their TV provider. I need to add the Video Subscriber SSO entitlement (com.apple.developer.video-subscriber-single-sign-on) to the app, but I found out that it's a special entitlement: I need to contact Apple to enable it for my developer account. On https://developer.apple.com/account I navigated to Support -> Contact Us -> Development and Technical -> Entitlements and asked about the missing entitlement (ticket ID 102478794279). The support team couldn't help me and redirected me to the operations team. I've been waiting for a few months now, but they keep telling me to wait. Is there a better way to contact Apple and get the Video Subscriber SSO entitlement in an efficient way?
Replies: 1 · Boosts: 0 · Views: 89 · Jun ’25