Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under the Media Technologies topic
Post | Replies | Boosts | Views | Activity

How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already: • Display detailed scrolling waveforms of Apple Music songs • Scratch, loop or time-stretch those tracks in real time That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement. My questions: 1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content? 2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access? 3. Where can I find official documentation or a point of contact for this kind of request? I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
1
2
229
Oct ’25
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already: • Display detailed scrolling waveforms of Apple Music songs • Scratch, loop or time-stretch those tracks in real time That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement. My questions: 1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content? 2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access? 3. Where can I find official documentation or a point of contact for this kind of request? I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
2
2
464
Oct ’25
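For context on the post above: the entitlement question is a policy matter, but the public APIs do expose decoded PCM for local, non-DRM audio. A minimal sketch of that path follows; the file URL and buffer size are illustrative assumptions, and this does not apply to DRM-protected Apple Music streams.

```swift
import AVFoundation

// Sketch: real-time PCM access for a local, non-DRM audio file via AVAudioEngine.
// DRM-protected Apple Music streams cannot be tapped this way with public APIs.
func tapLocalAudio(at url: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)

    let file = try AVAudioFile(forReading: url)

    // The tap delivers decoded PCM buffers suitable for waveform or FFT analysis.
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
        // buffer.floatChannelData?[0] holds raw samples for the first channel.
    }

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
}
```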
MPMusicPlayerController.applicationMusicPlayer.currentPlaybackRate no longer working in iOS 26.0 (Tahoe)?
I'm wondering if anyone has run into issues with currentPlaybackRate in the released version of iOS 26.0? There is no issue on iOS 18.5. When I change currentPlaybackRate on iOS 26.0, it is unexpectedly forced back to 0 or 1.0, for both DRM and non-DRM content. In addition, playback behaves abnormally and becomes unstable with noise whenever currentPlaybackRate is not 1.0, and it frequently toggles between the stopped and playing states.
2
2
685
Oct ’25
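A minimal reproduction sketch for the playback-rate issue described above, assuming a track available to the user; the store ID is a placeholder, not a real track.

```swift
import MediaPlayer

// Sketch: set a non-default playback rate on the application music player.
// Per the post above, the rate reportedly snaps back to 0 or 1.0 on iOS 26.0.
func playAtReducedRate() {
    let player = MPMusicPlayerController.applicationMusicPlayer
    player.setQueue(with: ["0000000000"])   // placeholder Apple Music store ID
    player.play()
    player.currentPlaybackRate = 0.8
    print("currentPlaybackRate:", player.currentPlaybackRate)
}
```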
Facing issue with fairplay Streaming server SDK 26.0.0
I am trying to build the server for testing on Linux (AlmaLinux 9 VM).
NAME="AlmaLinux" VERSION="9.7 (Moss Jungle Cat)" ID="almalinux" ID_LIKE="rhel centos fedora" VERSION_ID="9.7" PLATFORM_ID="platform:el9" PRETTY_NAME="AlmaLinux 9.7 (Moss Jungle Cat)" ANSI_COLOR="0;34"
[azuki@AlmaDevVM ~]$ uname -m
x86_64
I have tried the following steps. Before starting, I ensured that Swift 6 is installed (see https://www.swift.org/install/ for instructions).
Build the library. In Terminal, use the following commands to compile the Swift library:
cd Development/Key_Server_Module/Swift
swift build -Xbuild-tools-swiftc -DTEST_CREDENTIALS
After building the library, I ran the test cases to ensure the library behaves as expected. All unit tests pass with the development credentials. Since I was using an x86_64 machine:
export LD_LIBRARY_PATH=./Sources/prebuilt/x86_64-unknown-linux-gnu/
Run all tests:
swift test -Xbuild-tools-swiftc -DTEST_CREDENTIALS --disable-swift-testing
Build the server (Apache). Before starting, I ensured the following:
a. Installed Apache HTTPD and the dev tools, using the following command for installation:
yum install httpd httpd-devel redhat-rpm-config
b. After this, integrated the Swift library built above into the Apache server environment. Used the following command to build the server using apxs (since I was using an x86_64 machine):
apxs -i -a -c -Wl,-L${PWD}/.build/x86_64-unknown-linux-gnu/debug/ -Wl,-lswift_fpssdk -Wl,-L${PWD}/Sources/prebuilt/x86_64-unknown-linux-gnu -lfpscrypto -Wl,-R${PWD}/.build/x86_64-unknown-linux-gnu/debug server_setup/mod_fps.c
c. Next, copied the dependent libraries to the Apache modules folder using these commands (x86_64 machine):
cp Sources/prebuilt/x86_64-unknown-linux-gnu/libfpscrypto.so /usr/lib64/httpd/modules/libfpscrypto.so
cp .build/x86_64-unknown-linux-gnu/debug/libswift_fpssdk.so /usr/lib64/httpd/modules/libswift_fpssdk.so
d. Configured Apache HTTPD by adding the module and handler to the Apache HTTPD configuration (/etc/httpd/conf/httpd.conf). Note that the apxs command may automatically add the LoadModule line in the previous step.
Listen 8080
LoadFile /usr/lib64/httpd/modules/libfpscrypto.so
LoadFile /usr/lib64/httpd/modules/libswift_fpssdk.so
LoadModule fps_module /usr/lib64/httpd/modules/mod_fps.so
<Location "/fps"> SetHandler fps_handler
Copied the credentials to the Apache modules folder:
cp -r ../credentials /usr/lib64/httpd/modules/
export FPS_CERT_PATH=/usr/lib64/httpd/modules/credentials/test_certificates.json
e. Ran the server. The Apache HTTPD server can be run with the configured module by using:
httpd -D FOREGROUND
No issues seen up to this step. Get the SDK version:
[azuki@AlmaDevVM Key_Server_Module]$ curl localhost:8080/fps/v
26.0.0
But when I try to generate a license:
[azuki@AlmaDevVM Key_Server_Module]$ curl -d ../Test_Inputs/iOS/spc_ios_hd_lease_2048.json localhost:8080/fps
{"fairplay-streaming-response":{"create-ckc":[{"id":1,"status":-42601}]}}
Can you please suggest what I might be missing here?
8
0
1.1k
Feb ’26
Can backgrounded apps record audio?
I'd like to find out: Can backgrounded apps record audio? In the past as I recall, I found that backgrounded apps were pretty restricted and couldn't do much of anything. However I'm not familiar with the current state of affairs. With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone? Thanks.
3
0
575
Jan ’26
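In broad terms, recording can continue in the background when the app has microphone permission, an active record-capable audio session, and the audio background mode declared in Info.plist (UIBackgroundModes includes "audio"). A hedged sketch of that setup follows; the recorder settings are illustrative choices, not a confirmed recipe for every scenario.

```swift
import AVFoundation

// Sketch: audio session configuration that allows recording to continue in the
// background, assuming "audio" is listed under UIBackgroundModes in Info.plist
// and the user has granted microphone access.
func startBackgroundCapableRecording(to url: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,   // illustrative format choice
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()
    return recorder
}
```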
AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected. Works: Paired devices (e.g., MacBook Pro mic → MacBook Pro speakers) Fails: Mismatched devices (e.g., AirPods mic → MacBook Pro speakers) When using paired input and output devices: The setup works as expected. Example: MacBook Pro microphone → MacBook Pro speakers. When using mismatched devices: AVAudioEngine setup fails during aggregate device construction. Example: AirPods microphone → MacBook Pro speakers. Error logs indicate a channel count mismatch. Here are the partial logs. Due to the content limit, I cannot post the entire logs. AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875) AUVPAggregate.cpp:1036 err=-10875 AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875 AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) AggregateDevice.mm:182 error fetching default pair AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702] AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702] AggregateDevice.mm:182 error fetching default pair AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) Is it possible to use voice processing with different input/output devices? If yes, are there any specific configurations required to handle mismatched devices? How can we resolve channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input/output devices? Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices? For instance, can we force an intermediate channel configuration or downmix input/output formats?
3
2
809
Mar ’25
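For reference, a minimal sketch of the voice-processing setup the post above describes: it enables voice processing on the engine's input node before starting. Whether this succeeds with mismatched input and output devices is exactly the open question; the connection and format choices here are illustrative.

```swift
import AVFoundation

// Sketch: enabling voice processing (echo cancellation) on AVAudioEngine's I/O.
// With mismatched input/output devices the post above reports -10875 errors
// during aggregate device construction.
func startVoiceProcessingEngine() throws -> AVAudioEngine {
    let engine = AVAudioEngine()

    // Enabling on the input node configures the voice-processing I/O unit.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)

    engine.prepare()
    try engine.start()
    return engine
}
```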
CoreMediaError with Lightning HDMI output on FairPlay content
Hello, our application is unable to output FairPlay-protected content over HDMI to a TV via the official Lightning HDMI AV Adapter. By checking the console log from mediaplaybackd, we found that a CoreMediaErrorDomain Code=-19156 error is raised, but we are unable to find out what this error code means. default 11:18:15.121584+0800 mediaplaybackd keyboss ckb_customURLReadCallback: 0x7fa62f800 60/0 customURLReqID 4 isComplete 1 err -19156 error <private> (0) dokeyCallbacksExist 0 default 11:18:15.121670+0800 mediaplaybackd keyboss ckb_processErrorForRequest: 0x7fa62f800 60/0 handler 4 err 0 default 11:18:15.121752+0800 mediaplaybackd <<<< FigCustomURLHandling >>>> curll_cancelRequestOnQueue: 0x7fa031360: requestID: 4 default 11:18:15.121932+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 reqFin err Error Domain=CoreMediaErrorDomain Code=-19156 (-19156) dokeyCallbacksExist 0 default 11:18:15.122025+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 retry default 11:18:15.123195+0800 mediaplaybackd <<<< FigCPECryptorPKD >>>> PostKeyRequestErrorOccurred: 0x7fab7be80 029592C2-093D-400D-B57F-7AB06CC292D1 key request error: Error Domain=CoreMediaErrorDomain Code=-19160 (-19160)
1
2
281
Jan ’26
MusicKit: Multichannel Dolby Atmos Limited to Stereo Output - Is This Intended Behavior?
I'm experiencing a significant limitation with MusicKit's Dolby Atmos implementation on macOS and would appreciate clarification on whether this is intended behavior or if there are solutions available. When streaming Dolby Atmos content through MusicKit's ApplicationMusicPlayer, the output is limited to 2-channel stereo, even when: Audio MIDI Setup is configured for 7.1.4 (12-channel) output The same tracks play in full multichannel through the native Apple Music app Dolby Atmos is set to "Automatic" in Apple Music preferences Please let me know if there is anyway to enable this. If not, is this documented anywhere? Thanks!
1
2
303
Aug ’25
CIRAWFilter.outputImage first-time cost is huge (~3s), subsequent calls are ~3ms. Any official way to pre-initialize RAW pipeline (without taking a real photo)?
Hi Apple Developer Forums, I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage. What’s slow (important detail) I’m measuring the time for: let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint) let ciImage = rawFilter.outputImage This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup). Observed behavior First time accessing CIRAWFilter.outputImage: ~3 seconds Second time (same app session, similar RAW): ~3 milliseconds So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.). Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern. Warm-up attempts using bundled DNG I tried to “warm up” early (e.g., on camera screen entry) by loading a bundled DNG and then accessing CIRAWFilter.outputImage by taking a photo: Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms This helps, but it’s still far from the steady-state ~3ms. Warm-up by capturing a real RAW (works, but concerns) The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing. However: In some regions, the camera shutter sound cannot be suppressed, so “hidden warm-up capture” is unacceptable UX. I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded. Questions Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)? Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo? Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter? Attachments Figure 1: First RAW capture with no warm-up (~3s outputImage time) Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms) Thanks for any guidance or experience sharing!
3
2
648
Jan ’26
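The warm-up approach the post above describes, sketched with a hypothetical bundled DNG resource named "warmup"; whether this fully pre-compiles the RAW pipeline's Metal libraries is precisely the open question the poster raises.

```swift
import Foundation
import CoreImage

// Sketch of the bundled-DNG warm-up described above, run off the main thread
// (e.g. when the camera screen appears). "warmup.dng" is a hypothetical resource.
func warmUpRAWPipeline() {
    DispatchQueue.global(qos: .utility).async {
        guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
              let data = try? Data(contentsOf: url),
              let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil) else { return }

        // Accessing outputImage builds the RAW processing graph; per the post this
        // pays part, but not all, of the first-use cost.
        _ = rawFilter.outputImage
    }
}
```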
Unexpected artist names in song table
Hi team, In the Apple Music Feed datasets, we've noticed some unexpected values in the song and album tables. The primaryartists column from either song or album may contain a "non-default" artist name, such as the katakana name shown in the example below:
select id, name, namedefault, primaryartists from amf_song where id = '1698723329'
id | name | namedefault | primaryartists
1698723329 | {default=California} | California | [{id=1264818718, name=チャペル・ローン}]
select * from amf_artist where id = '1264818718'
id | name | namedefault | namepronunciation
1264818718 | {default=Chappell Roan, ja=チャペル・ローン} | Chappell Roan | {ja=チャペルローン}
Shouldn't the primaryartists column show the namedefault value instead of the Japanese-language version? When can we expect this bug to be resolved? Thanks,
0
2
512
Dec ’25
How to use the SpeechDetector Module
I am trying to use the SpeechDetector module in the Speech framework along with SpeechTranscriber, and it is giving me an error: Cannot convert value of type 'SpeechDetector' to expected element type 'Array.ArrayLiteralElement' (aka 'any SpeechModule') Below is how I am using it: let speechDetector = Speech.SpeechDetector() let transcriber = SpeechTranscriber(locale: Locale.current, transcriptionOptions: [], reportingOptions: [.volatileResults], attributeOptions: [.audioTimeRange]) speechAnalyzer = try SpeechAnalyzer(modules: [transcriber,speechDetector])
4
2
475
Aug ’25
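The compile error above looks like Swift failing to infer an element type for a heterogeneous array literal of modules. One hedged workaround, assuming the iOS 26 Speech API shapes shown in the post, is to type the array explicitly as [any SpeechModule]; this is a sketch, not a confirmed fix.

```swift
import Speech

// Sketch, assuming the SpeechAnalyzer/SpeechTranscriber/SpeechDetector APIs used
// in the post above. Typing the modules array explicitly avoids the array-literal
// element-type inference error quoted there.
func makeAnalyzer() throws -> SpeechAnalyzer {
    let detector = SpeechDetector()
    let transcriber = SpeechTranscriber(locale: Locale.current,
                                        transcriptionOptions: [],
                                        reportingOptions: [.volatileResults],
                                        attributeOptions: [.audioTimeRange])
    let modules: [any SpeechModule] = [transcriber, detector]
    return try SpeechAnalyzer(modules: modules)
}
```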
WKWebView Crashes on iOS During YouTube Playlist Playback
I’m encountering a consistent crash in WebKit when using WKWebView to play a YouTube playlist in my iOS app. Playback starts successfully, but the web process terminates during the second video in the playlist. This only occurs on physical devices, not in the simulator. Here’s a simplified Swift example of my setup: import SwiftUI import WebKit struct ContentView: View { private let playlistID = "PLig2mjpwQBZnghraUKGhCqc9eAy0UbpDN" var body: some View { YouTubeWebView(playlistID: playlistID) .edgesIgnoringSafeArea(.all) } } struct YouTubeWebView: UIViewRepresentable { let playlistID: String func makeUIView(context: Context) -> WKWebView { let config = WKWebViewConfiguration() config.allowsInlineMediaPlayback = true let webView = WKWebView(frame: .zero, configuration: config) webView.scrollView.isScrollEnabled = true let html = """ <!doctype html> <html> <head> <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0"> <style>body,html{height:100%;margin:0;background:#000}iframe{width:100%;height:100%;border:0}</style> </head> <body> <iframe src="https://www.youtube-nocookie.com/embed/videoseries?list=\(playlistID)&controls=1&rel=0&playsinline=1&iv_load_policy=3" frameborder="0" allow="encrypted-media; picture-in-picture; fullscreen" webkit-playsinline allowfullscreen ></iframe> </body> </html> """ webView.loadHTMLString(html, baseURL: nil) return webView } func updateUIView(_ uiView: WKWebView, context: Context) {} } #Preview { ContentView() } Observed behavior: First video plays without issue. Web process crashes when the second video in the playlist starts. Console logs show WebProcessProxy::didClose and repeated memory status messages. Using ProcessAssertion or background activity does not prevent the crash. Only occurs on physical devices; simulators do not reproduce the issue. Questions: Is there something I should change or add in my WKWebView setup or HTML/iframe to prevent the crash when playing the second video in a playlist on physical iOS devices? Is there an officially supported way to limit memory or prevent WebKit from terminating the web process during multi-video playback? Are there recommended patterns for playing YouTube playlists in a WKWebView on iOS without risking crashes? Any tips for debugging or configuring WKWebView to make it more stable for continuous playlist playback? Thanks in advance for any guidance!
2
0
615
Oct ’25
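There is no guaranteed way to stop WebKit from reclaiming the content process under memory pressure, but one commonly used mitigation (not a fix for the underlying issue above) is to detect the termination and reload. A hedged sketch using the standard WKNavigationDelegate callback:

```swift
import WebKit

// Sketch: recover from web-content-process termination by reloading the page.
// This does not prevent the crash described above; it only restores the playback UI.
final class ReloadingWebViewDelegate: NSObject, WKNavigationDelegate {
    func webViewWebContentProcessDidTerminate(_ webView: WKWebView) {
        // Called when the WKWebView's content process is killed (e.g. under memory pressure).
        webView.reload()
    }
}
```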
TV A1625 Using 3× More CPU After tvOS 26 Update
Hi everyone, After updating my Apple TV HD (model A1625) to tvOS 26, I’ve noticed a significant spike in CPU usage—up to 3× higher than before the update, going from around 40% to 120%. Model: Apple TV HD (A1625). tvOS version: 26 (stable release) and the 26.1 beta. The app downgrades the stream due to the lack of CPU power. If anyone else is experiencing this, please share your findings or workarounds. Would love to hear from Apple engineers or other developers if this is a known regression or if there’s a recommended fix. Thanks!
4
0
289
Oct ’25
iPadOS 17 external camera exposure
I'm developing an iPad app that will be mostly dedicated to a certain external camera for visually impaired people. The Linux UVC API (e.g. using guvcview) allows enabling automatic exposure for the camera. The iOS API isExposureModeSupported unfortunately returns false for all of the exposure modes. Is this a bug? Or perhaps AVFoundation doesn't support UVC exposure yet?
1
2
685
Jul ’25
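For reference, a hedged sketch of how exposure modes are normally queried and applied to an AVCaptureDevice, including discovery of external cameras on iPadOS 17. Per the post above, isExposureModeSupported reportedly returns false for UVC devices, so the auto-exposure branch below would simply not execute in that scenario.

```swift
import AVFoundation

// Sketch: discover an external (UVC) camera on iPadOS 17 and attempt to enable
// continuous auto exposure, guarded by isExposureModeSupported.
func enableAutoExposureOnExternalCamera() throws {
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                     mediaType: .video,
                                                     position: .unspecified)
    guard let device = discovery.devices.first else { return }

    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.isExposureModeSupported(.continuousAutoExposure) {
        device.exposureMode = .continuousAutoExposure
    } else {
        // The post above reports hitting this path for UVC cameras on iPadOS 17.
        print("continuousAutoExposure not supported on \(device.localizedName)")
    }
}
```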
AVPlayer unpredictable range requests on iOS when streaming *.mov file
Hi all, I'm trying to diagnose and resolve an issue with stuttering video playback using the standard AVPlayer. The video in question is a 4K, 39-second file in *.mov format, being played on an iOS device. It's served via a local HTTP server that proxies requests to a backend to fetch and process the content. The project uses end-to-end encrypted storage, which necessitates the proxy for handling data processing. While playback in offline scenarios is smooth, we are encountering issues with smooth playback during streaming. The same video streams smoothly on other platforms using the same connection, so network limitations are not a factor. On iOS, playback is consistently choppy, with pauses every 1-3 seconds. The video does not appear to buffer adequately for smooth playback. One particularly curious aspect is the seemingly random pattern of Content-Range requests made by the AVPlayer when streaming the video. Below is an example of the range requests:
3
2
558
Apr ’25
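Not a fix for the range-request pattern itself, but a hedged sketch of the buffering knobs that are sometimes tried in proxy-backed setups like the one above; the 30-second target is an illustrative value, and these properties only hint at buffering behavior rather than control how byte-range requests are issued.

```swift
import AVFoundation

// Sketch: ask AVPlayer to buffer further ahead and let it pace loading itself.
// These are hints only; they do not dictate AVPlayer's byte-range request pattern.
func makeBufferedPlayer(for url: URL) -> AVPlayer {
    let item = AVPlayerItem(url: url)
    item.preferredForwardBufferDuration = 30   // illustrative target, in seconds

    let player = AVPlayer(playerItem: item)
    player.automaticallyWaitsToMinimizeStalling = true
    return player
}
```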
Recurring FigXPCUtilities / FigCaptureSourceRemote err=-17281 logs when using AVCaptureVideoDataOutput on iOS 26.x
Hi everyone, I’m seeing recurring internal AVFoundation camera logs on iOS 26.2, and I’m trying to understand whether this is expected behavior or a regression in the capture pipeline. These logs appear shortly after starting an AVCaptureSession, while video frames are being delivered, and also when the camera is stopped or the capture session is torn down. <<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302 <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281) The exact same logic did not produce these logs on iOS 18.x. To rule out issues caused by my own code, I had GPT create a minimal SwiftUI example from scratch; even in this clean, minimal setup, the same logs appear on iOS 26.2. My primary interest is to perform real-time processing on the video frames delivered by the camera (via AVCaptureVideoDataOutput), for tasks such as analysis, computer vision, or custom frame handling, while simultaneously displaying the live preview. Thanks in advance for any insight. Example Code
3
2
814
1w
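The post's example code is not reproduced above; a generic minimal AVCaptureVideoDataOutput setup of the kind it describes might look like the following (queue label and camera choice are illustrative), purely to show the configuration under which the -17281 logs are reported. This is not the poster's actual code.

```swift
import AVFoundation

// Sketch: minimal video-data-output pipeline of the kind described above.
// The -17281 FigCaptureSourceRemote logs are reported with setups like this on iOS 26.x.
final class FrameCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "frame.capture.queue") // illustrative label

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()   // in production, start on a background queue
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Real-time frame processing (analysis / computer vision) goes here.
    }
}
```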
After iPadOS 26 beta and iOS 26 beta, AVCaptureMetadataOutput no longer detects Face on some devices.
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput, with the type set via [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]], to scan for faces. After updating to iPadOS 26 Beta 2 and iOS 26 Beta 2, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue: iPad (9th Gen), iPad Air (4th Gen), iPhone 15. This issue has not occurred on any other devices I have. I tried running the AVFoundation sample code from the Apple Developer site on the devices above; the same problem still occurs. https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Are any additional settings required after the iPadOS 26 beta and iOS 26 beta? Or is there some problem on the OS side?
0
2
147
Jul ’25
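For comparison with the report above, a minimal sketch of the face-metadata configuration in Swift (the post's snippet is Objective-C). This is the setup whose delegate reportedly stops firing on the affected devices; the camera choice is illustrative.

```swift
import AVFoundation

// Sketch: AVCaptureMetadataOutput configured for face detection.
// Per the post above, the delegate callback stops firing on some devices on the OS 26 betas.
final class FaceScanner: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func configure() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureMetadataOutput()
        session.addOutput(output)                       // add to the session before setting types
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.face]
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        // Face rectangles arrive here as AVMetadataFaceObject values.
    }
}
```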
Camera becomes black for a few production users during photo capture
PLATFORM AND VERSION: iOS 18.5 I wanted to bring to your attention a critical issue some of our production users are experiencing with the CoinOut app. Specifically, users are encountering a problem when attempting to capture photos of receipts using the app's customized camera feature. The camera, which utilizes AVCaptureVideoPreviewLayer and AVCaptureDevice, occasionally fails to load the preview, resulting in a black screen instead of the expected camera view. This camera blackout issue is significantly impacting the user experience as it prevents them from snapping photos of their receipts, which is a core functionality of the CoinOut app. Any help/suggestion on this issue would be greatly appreciated. STEPS TO REPRODUCE: Open the app and tap the camera icon. It will display the camera to capture a photo. The camera shows black for a few production users. class ViewController: UIViewController { @IBOutlet private weak var captureButton: UIButton! private var fillLayer: CAShapeLayer! private var previewLayer : AVCaptureVideoPreviewLayer! private var output: AVCapturePhotoOutput! private var device: AVCaptureDevice! private var session : AVCaptureSession! private var highResolutionEnabled: Bool = false private let sessionQueue = DispatchQueue(label: "session queue") override func viewDidLoad() { super.viewDidLoad() setupCamera() customiseUI() } @IBAction func startCamera(sender: UIButton) { didTapTakePhoto() } private func setupCamera() { let session = AVCaptureSession() session.sessionPreset = AVCaptureSession.Preset.high previewLayer = AVCaptureVideoPreviewLayer(session: session) output = AVCapturePhotoOutput() device = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back) if let device = self.device{ do{ let input = try AVCaptureDeviceInput(device: device) if session.canAddInput(input){ session.addInput(input)} else { print("\(#fileID):\(#function):\(#line) : Session Input addition failed") } if session.canAddOutput(output){ output.isHighResolutionCaptureEnabled = self.highResolutionEnabled session.addOutput(output) } else { print("\(#fileID):\(#function):\(#line) : Session Input high resolution failed") } previewLayer.videoGravity = .resizeAspectFill previewLayer.session = session sessionQueue.async { session.startRunning() } self.session = session self.session.accessibilityElementIsFocused() try device.lockForConfiguration() if device.isWhiteBalanceModeSupported(AVCaptureDevice.WhiteBalanceMode.autoWhiteBalance) { device.whiteBalanceMode = .autoWhiteBalance } else { print("\(#fileID):\(#function):\(#line) : isWhiteBalanceModeSupported no supported") } if device.isWhiteBalanceModeSupported(AVCaptureDevice.WhiteBalanceMode.continuousAutoWhiteBalance) { device.whiteBalanceMode = .continuousAutoWhiteBalance } else { print("\(#fileID):\(#function):\(#line) : isWhiteBalanceModeSupported no supported") } if device.isFocusModeSupported(.continuousAutoFocus) { device.focusMode = .continuousAutoFocus} else if device.isFocusModeSupported(.autoFocus) { device.focusMode = .autoFocus } device.unlockForConfiguration() } catch { print("\(#fileID):\(#function):\(#line) : \(error.localizedDescription)") } } else { print("\(#fileID):\(#function):\(#line) : Device found as nil") } } private func customiseUI() { let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height), cornerRadius: 0) let rectangleWidth = view.frame.width - (view.frame.width * 0.16) let x = (view.frame.width - rectangleWidth) / 2 let rectangleHeight = view.frame.height - (view.frame.height * 0.16) let y = (view.frame.height - rectangleHeight) / 2 let roundRect = UIBezierPath(roundedRect: CGRect(x: x, y: y, width: rectangleWidth, height: rectangleHeight), byRoundingCorners:.allCorners, cornerRadii: CGSize(width: 0, height: 0)) roundRect.move(to: CGPoint(x: self.view.center.x , y: self.view.center.y)) path.append(roundRect) path.usesEvenOddFillRule = true fillLayer = CAShapeLayer() fillLayer.path = path.cgPath fillLayer.fillRule = .evenOdd fillLayer.opacity = 0.4 previewLayer.addSublayer(fillLayer) previewLayer.frame = view.bounds view.layer.addSublayer(previewLayer) view.bringSubviewToFront(captureButton) } private func didTapTakePhoto() { let settings = self.getSettings(camera: self.device) if device.isAdjustingFocus { do { try device.lockForConfiguration() device.focusMode = .continuousAutoFocus device.unlockForConfiguration() device.addObserver(self, forKeyPath: "adjustingFocus", options: [.new], context: nil) } catch { print(error) } } else { output.capturePhoto(with: settings, delegate: self) } } func getSettings(camera: AVCaptureDevice) -> AVCapturePhotoSettings { var settings = AVCapturePhotoSettings() if let rawFormat = output.availableRawPhotoPixelFormatTypes.first { settings = AVCapturePhotoSettings(rawPixelFormatType: OSType(rawFormat)) } settings.isHighResolutionPhotoEnabled = self.highResolutionEnabled let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first! let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType] as [String : Any] settings.previewPhotoFormat = previewFormat return settings } } extension ViewController: AVCapturePhotoCaptureDelegate { func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) { AudioServicesDisposeSystemSoundID(1108) } func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) { guard let data = photo.fileDataRepresentation() else { return } let image = UIImage(data: data)! showImage(cropped: image) } func showImage(cropped: UIImage) { let vc = self.storyboard?.instantiateViewController(withIdentifier: "ImagePreviewViewController") as? ImagePreviewViewController vc?.captured = cropped self.present(vc!, animated: true) } }
1
0
243
Jul ’25
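Not a diagnosis of the code above, but two checks commonly added when a preview intermittently comes up black are camera authorization status and session interruption/error notifications. A hedged sketch of those diagnostics; neither is confirmed as the cause of the issue described.

```swift
import AVFoundation

// Sketch: diagnostics often added when a camera preview shows black for some users.
func installCameraDiagnostics(for session: AVCaptureSession) {
    // 1. Permission: a denied/restricted status yields a black preview with no error.
    let status = AVCaptureDevice.authorizationStatus(for: .video)
    print("camera authorization status: \(status.rawValue)")

    // 2. Interruptions: phone calls, multitasking, or the camera being used elsewhere
    //    can interrupt the session; log the reason to correlate with user reports.
    NotificationCenter.default.addObserver(forName: AVCaptureSession.wasInterruptedNotification,
                                           object: session,
                                           queue: .main) { note in
        print("capture session interrupted:", note.userInfo ?? [:])
    }
    NotificationCenter.default.addObserver(forName: AVCaptureSession.runtimeErrorNotification,
                                           object: session,
                                           queue: .main) { note in
        print("capture session runtime error:", note.userInfo ?? [:])
    }
}
```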
CheckError.swift:CheckError(_:):211:kAudioUnitErr_InvalidParameter (CheckError.swift:CheckError(_:):211)
I'm getting this error when I launch my application on the iPhone 14 Pro via Xcode. Everything builds OK. I'm using the AudioKit plugin and SoundpipeAudioKit. The error starts as soon as I start the app and repeats continuously. I have background processing turned on, as I'd like the sounds to keep playing through headphones when the phone is locked. I can't find anything online about this error, and none of my catch blocks are printing anything in the logs either, so I don't know whether this is just something that pops up repeatedly or whether there is something fundamentally wrong. private func setupAudioSession() { do { let session = AVAudioSession.sharedInstance() try session.setCategory(.playback, mode: .default, options: [.mixWithOthers]) try session.setActive(true, options: .notifyOthersOnDeactivation) } catch { errorMessage = "Failed to set up audio session: \(error.localizedDescription)" print(errorMessage ?? "") } } // MARK: - Background Task Handling private func setupBackgroundTaskHandling() { // Handle app entering background notificationObservers.append( NotificationCenter.default.addObserver( forName: UIApplication.didEnterBackgroundNotification, object: nil, queue: .main, using: { [weak self] _ in // Safely unwrap self guard let self = self else { return } self.handleBackgroundTransition() } ) ) I'm not sure if this is the code causing the issue. Any help would be gratefully appreciated. This is my first app I'm working on.
2
2
191
Apr ’25
SpeechTranscriber on Simulator
I am trying to use SpeechTranscriber from the Speech framework. Is it possible to use it on the iOS 26 Simulator (on macOS Tahoe)? The supportedLocales function returns an empty array.
3
2
991
Nov ’25