Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

All subtopics
Posts under Media Technologies topic

Post

Replies

Boosts

Views

Activity

Audio Unit v3 host v2 third party plugins
Hi, I have just implemented an Audio Unit v3 host.

AgsAudioUnitPlugin *audio_unit_plugin;
AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;
AudioComponentDescription description;
guint i, i_stop;

if(!AGS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
  return;
}

audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];

/* effects */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_Effect;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager, (gpointer) av_component_arr[i]);
}

/* instruments */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_MusicDevice;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager, (gpointer) av_component_arr[i]);
}

But this doesn't show me Audio Unit v2 plugins. Why? regards, Joël
3
0
744
Aug ’25
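A minimal Swift sketch of the component enumeration from the post above, assuming a zeroed AudioComponentDescription is used as a wildcard so that every installed component type is listed. This is illustrative only; whether it surfaces the missing v2 plug-ins is exactly what the poster is asking.

import AVFoundation
import AudioToolbox

// All-zero description fields act as wildcards, so this matches every installed component type.
let anyComponent = AudioComponentDescription()
let components = AVAudioUnitComponentManager.shared().components(matching: anyComponent)
for component in components {
    print(component.manufacturerName, component.name, component.versionString)
}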
Question about PT Framework channel tone behaviour
I've been wondering if there is a way to modify or even disable the tones that indicate channel states. The behaviour regarding tones seems like a black box with little documentation. During migration to Apple's PT Framework we've noticed there are a few scenarios where a tone is played that doesn't match certain certifications. For example, moving from one channel to another produces a tone, which would fail a test case. I understand the reasoning fully, as it marks that the channel is ready to transmit or receive, but this doesn't mirror the behaviour of TETRA, which is what's wanted in this case. I'm also wondering whether there is any way to communicate feedback directly regarding the PT Framework.
3
0
414
Oct ’25
Apple watch for child
Hi, my daughter was given an Apple Watch due to her grandfather's passing. It is not GPS/cellular, and we cannot connect it to her Apple ID because of this. It doesn't seem right to leave her logged in to his Apple ID, but we are currently out of options. Is there a workaround to this? When I try to set her up from my phone, it tells me that the watch must have GPS/cellular to set it up. Why? Am I missing something?
3
0
171
Jun ’25
iOS 17 camera capture assertions and issues
Hello, Starting in iOS 17, our application started having some issues publishing to our video session. More specifically, the video capture seems to be broken in some, but not all, sessions. What's troubling is that we're seeing it fail consistently every fourth session. It also fails silently, without reporting any problems to the app. We only notice that no frames are being rendered or sent to the remote devices. Here's what shows up in the console:

<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)

Anyone seeing this? Any idea what could be the cause? Our sessions work perfectly on iOS 16 and below. Thanks
3
1
1.4k
Oct ’25
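For silent capture failures like the ones in the post above, one diagnostic option is to observe the session's runtime-error notification. A brief sketch follows; `session` is assumed to be the app's AVCaptureSession, and this only surfaces errors, it does not address the underlying assert.

import AVFoundation

NotificationCenter.default.addObserver(forName: AVCaptureSession.runtimeErrorNotification,
                                       object: session, queue: .main) { note in
    // The userInfo dictionary carries the underlying NSError, if any.
    if let error = note.userInfo?[AVCaptureSessionErrorKey] as? NSError {
        print("Capture session runtime error:", error)
    }
}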
Why does CADisplayLink of an external UIScreen drift in time?
I am using Apple's original Lightning Digital AV Adapter (Lightning-to-HDMI dongle) to connect my iPhone to an external display via an HDMI cable. I need to synchronize rendering with the external display's refresh rate, so I create a new CADisplayLink tied to the external display's UIScreen: UIScreen.screens[externalDisplayIdx].displayLink(withTarget:, selector:). The callback is being called regularly, but with increasing delay relative to CADisplayLink.timestamp, so each time the callback is called I have less and less time to draw the next frame (see the snippet below). Assuming 60 FPS, the value of secondsTillDeadline starts at an arbitrary value in the range of approximately -0.0001 to 0.0166667, and then it slowly decreases towards zero (and for a brief period it goes into small negative numbers). Once it reaches zero, it flips back to 0.0166667 and continues to decrease again. This cycle repeats indefinitely. Changing the external display's resolution (the UIScreen's mode) or the CADisplayLink's preferredFrameRateRange to a lower FPS does not seem to have any effect on the temporal drifting (even the rate of change seems to be the same). When I create a new CADisplayLink for the iPhone's main screen, the value of secondsTillDeadline is stable, does not drift, and is very close to 0.0166667, as expected. Is this drift caused by the external monitor or by Apple's Lightning-to-HDMI dongle, or is the problem somewhere else? Can the drifting be stopped?

func onDisplayLinkUpdate(displayLink: CADisplayLink) {
    // Gradually decreases from 0.01667 to -0.0001, then flips back to 0.01667 and continues to decrease
    let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
}
3
0
462
Aug ’25
CIRAWFilter.outputImage first-time cost is huge (~3s), subsequent calls are ~3ms. Any official way to pre-initialize RAW pipeline (without taking a real photo)?
Hi Apple Developer Forums, I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.

What’s slow (important detail)
I’m measuring the time for:

let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
let ciImage = rawFilter.outputImage

This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).

Observed behavior
First time accessing CIRAWFilter.outputImage: ~3 seconds. Second time (same app session, similar RAW): ~3 milliseconds. So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.). Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.

Warm-up attempts using bundled DNG
I tried to “warm up” early (e.g., on camera screen entry) by loading a bundled DNG and accessing CIRAWFilter.outputImage before taking a photo:
Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s
Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms
This helps, but it’s still far from the steady-state ~3ms.

Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing. However: in some regions the camera shutter sound cannot be suppressed, so a “hidden warm-up capture” is unacceptable UX. I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.

Questions
Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?

Attachments
Figure 1: First RAW capture with no warm-up (~3s outputImage time)
Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)

Thanks for any guidance or experience sharing!
3
2
648
Jan ’26
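A minimal sketch of the bundled-DNG warm-up the post above describes, assuming a small DNG named warmup.dng ships in the app bundle; per the poster's measurements this reduces, but does not eliminate, the first-use cost.

import CoreImage

func warmUpRAWPipeline() {
    guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
          let data = try? Data(contentsOf: url),
          let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil) else { return }
    // Accessing outputImage forces the RAW graph / Metal library setup that is slow on first use.
    _ = rawFilter.outputImage
}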
AVAudioSession.outputVolume not reporting correctly in iOS 18+ devices
I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume. However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer. What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations. Disabling and re-enabling the audio session is essential to how my application functions. I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
3
1
342
Feb ’26
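A compact sketch of the activation-plus-observation flow from the post above, assuming the returned observation is stored in a property so it stays alive; the stale value right after re-activation is what the poster reports on iOS 18.

import AVFoundation

let session = AVAudioSession.sharedInstance()
try? session.setActive(true)
// KVO fires when the user changes the volume; the post reports the property itself
// can return a stale value immediately after re-activating the session.
let observation = session.observe(\.outputVolume, options: [.initial, .new]) { session, _ in
    print("outputVolume:", session.outputVolume)
}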
Photos are captured with incorrect exposure bias in specific scenarios on iPhone 17 Pro
Hey, There seems to be an inconsistency when capturing a photo using QualityPrioritization.Quality on the iPhone 17 Pro main wide lens. If you zoom above "2x" the output image always has "-2.0ev" bias in the metadata and looks underexposed. This does not happen at zoom levels above 2, or if you set the QualityPrioritization to .Balanced. See below: with .Quality with .Balanced This does not happen on the other lenses. I'm using a simple setup and it is consistent across JPEG and ProRAW capture. I have a demo project if that is useful. Thanks, Alex
3
0
544
Dec ’25
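A short sketch of the capture configuration the post above compares. Assumptions: photoOutput is an AVCapturePhotoOutput already attached to the session and photoDelegate is the app's AVCapturePhotoCaptureDelegate; both names are placeholders.

import AVFoundation

photoOutput.maxPhotoQualityPrioritization = .quality
let settings = AVCapturePhotoSettings()
settings.photoQualityPrioritization = .quality   // switch to .balanced to reproduce the comparison in the post
photoOutput.capturePhoto(with: settings, delegate: photoDelegate)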
Handling AVAudioEngine Configuration Change
Hi all, I have been quite stumped by this behavior for a little while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in. Right now I have an AVAudioEngine that I am using for voice chat and to play buffers I give it. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which then causes all players attached to this engine to be stopped. Once an AVAudioPlayerNode gets stopped due to this (but also any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played. Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset? I am not quite sure what my observer of AVAudioEngineConfigurationChange needs to be doing in my case, as this engine only handles output. All input goes through a separate engine for simplicity. Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and after the completion handler fires, checking whether the player isPlaying or not. If it's playing, I assume the sound actually was played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them. Thanks for any feedback!
3
0
953
Oct ’25
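A sketch of the configuration-change observer discussed above. Assumptions: engine is the output-only AVAudioEngine and rescheduleQueuedSamples() is the app's own recovery logic; whether restarting here is sufficient is the open question in the post.

import AVFoundation

NotificationCenter.default.addObserver(forName: .AVAudioEngineConfigurationChange,
                                       object: engine, queue: .main) { _ in
    // After a route change the engine stops and attached player nodes purge their scheduled buffers.
    if !engine.isRunning {
        try? engine.start()
    }
    rescheduleQueuedSamples()
}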
ShazamKit for Android and 16 KB native library alignment
Hello, I'm working on a Flutter app targeting both Android and iOS, where I implemented ShazamKit. In order to achieve that, I first tried the flutter_shazam_kit package, but since it's not maintained anymore, I forked it and tried to update it to meet the Google Play Store requirements, as you can see here: https://github.com/mregnauld/flutter_shazam_kit/tree/fix-16k Unfortunately, after trying everything, my app still doesn't meet the (not so) new 16 KB native library alignment requirement. I'm also 100% sure it comes from that, because the error message disappears if I remove that package from my app. After investigating, it seems that the problem comes from ShazamKit for Android (which you can find here: https://developer.apple.com/download/all/?q=Android%20ShazamKit), and especially the .so files in the .aar file. Is there anything I can do to fix that, or should I wait for the ShazamKit team to fix it? I'm totally stuck, so any help is highly appreciated. Thanks.
3
0
624
Oct ’25
AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected. Works: Paired devices (e.g., MacBook Pro mic → MacBook Pro speakers) Fails: Mismatched devices (e.g., AirPods mic → MacBook Pro speakers) When using paired input and output devices: The setup works as expected. Example: MacBook Pro microphone → MacBook Pro speakers. When using mismatched devices: AVAudioEngine setup fails during aggregate device construction. Example: AirPods microphone → MacBook Pro speakers. Error logs indicate a channel count mismatch. Here are the partial logs. Due to the content limit, I cannot post the entire logs. AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875) AUVPAggregate.cpp:1036 err=-10875 AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875 AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) AggregateDevice.mm:182 error fetching default pair AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702] AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702] AggregateDevice.mm:182 error fetching default pair AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U) Is it possible to use voice processing with different input/output devices? If yes, are there any specific configurations required to handle mismatched devices? How can we resolve channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input/output devices? Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices? For instance, can we force an intermediate channel configuration or downmix input/output formats?
3
2
809
Mar ’25
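A minimal sketch of the voice-processing setup that produces the errors quoted above. Assumption: the current default input and output devices are the mismatched pair (e.g. AirPods microphone with built-in speakers); the buffer handling is only indicated by a comment.

import AVFoundation

let engine = AVAudioEngine()
do {
    try engine.inputNode.setVoiceProcessingEnabled(true)
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // hand the echo-cancelled capture buffer to the app's voice pipeline
    }
    try engine.start()
} catch {
    print("Voice-processing setup failed:", error)   // e.g. the aggregate-device errors in the logs above
}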
Mac OS Tahoe 26.0 (25A354) Sound Glitches When opening the simulator app
Hey there, I just upgraded to macOS Tahoe on an Apple MacBook Pro 2019 16-inch. I am using IntelliJ IDEA and Flutter to develop a mobile app, which I test on the Simulator app running iOS 18.4. The issue: when I start the Simulator app (both while it is loading and while it is running), the audio from an already open YouTube tab in Safari (this happens in Chrome as well) glitches and becomes noise. A fix I found online is to kill the audio daemon on macOS with the command "sudo killall coreaudiod". This kills the audio process (while the Simulator is running); macOS then restarts the audio daemon and the audio works fine alongside the open Simulator. I just want to ask: is there a permanent fix for this? Is Apple working on a fix for this in an upcoming update?
3
5
1.3k
Oct ’25
Recurring FigXPCUtilities / FigCaptureSourceRemote err=-17281 logs when using AVCaptureVideoDataOutput on iOS 26.x
Hi everyone, I’m seeing recurring internal AVFoundation camera logs on iOS 26.2 and I’m trying to understand whether this is expected behavior or a regression in the capture pipeline. These logs appear shortly after starting an AVCaptureSession, while video frames are being delivered, and also when the camera is stopped or the capture session is torn down.

<<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)

Even in a clean, minimal setup, the same logs appear on iOS 26.2. The exact same logic did not produce these logs on iOS 18.x. To rule out issues caused by my own code, GPT created a minimal SwiftUI example from scratch. My primary interest is to perform real-time processing on the video frames delivered by the camera (via AVCaptureVideoDataOutput), for tasks such as analysis, computer vision, or custom frame handling, while simultaneously displaying the live preview. Thanks in advance for any insight. Example Code
3
2
815
1w
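A self-contained sketch of the minimal video-data-output setup the post above refers to. Assumptions: camera permission has already been granted, and the class and queue names are illustrative.

import AVFoundation

final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "camera.frames")

    func start() throws {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)

        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // per-frame processing (analysis, computer vision, custom handling) goes here
    }
}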
Why Does AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps Occur on iPhone?
Hi everyone, We're encountering an unexpected issue with our iPhone-only camera app: 👉 TimeMark - Photo Proof https://apps.apple.com/us/app/timemark-photo-proof/id6446071834 Problem Description: Our app uses a full-screen camera view via AVCaptureSession. In some cases reported by users, the camera fails immediately upon app launch, and we receive this interruption reason: AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps According to the Apple documentation https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps?language=objc , this interruption typically occurs when the app is running in a multi-app layout such as Slide Over, Split View, or Picture in Picture — all of which are iPad-only features. However, this issue is being reported on iPhones, and our app does not support iPad at all. Also noted in the documentation: "Given your present AVCaptureSession configuration, the session may only be run if your app occupies the full screen." Additional Context: The issue occurs immediately on app launch, before the user can interact with the camera. We don’t enable multitaskingCameraAccessEnabled. We are 100% sure this is happening on iPhone, not iPad. It’s hard to reproduce; users report it happening sporadically. Locally, we tried playing Picture-in-Picture videos (e.g., Safari/YouTube) before launching our app, but we could not reproduce the issue. Questions: Why is this interruption reason occurring on iPhone, which doesn’t officially support Slide Over or Split View? Could this be caused by some system-level multitasking or resource contention (e.g., Picture in Picture from FaceTime or Safari)? Would enabling multitaskingCameraAccessEnabled help prevent this issue on iPhone, even though it's designed for iPad? Enabling multitaskingCameraAccessEnabled seems to require enabling UIBackgroundModes → voip. Would adding this background mode cause any App Store review risk or rejection if our app doesn't actually use VoIP functionality? Any help, insight, or suggestions would be greatly appreciated. Thanks in advance!
3
0
884
Oct ’25
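A small sketch showing how the interruption reason can be logged when it occurs, which may help narrow down the sporadic reports described above. Assumption: session is the app's AVCaptureSession.

import AVFoundation

NotificationCenter.default.addObserver(forName: AVCaptureSession.wasInterruptedNotification,
                                       object: session, queue: .main) { note in
    if let value = (note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? NSNumber)?.intValue,
       let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
        print("Capture session interrupted, reason:", reason)
    }
}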
Launch The Main App from LockedCameraCapture
If the app is launched from LockedCameraCapture and the settings button is tapped, I need to launch the main app.

CameraViewController:

func settingsButtonTapped() {
#if isLockedCameraCaptureExtension
    // App is launched from the Lock Screen
    // Launch main app here...
#else
    // App is launched from the Home Screen
    self.showSettings(animated: true)
#endif
}

In this document: https://developer.apple.com/documentation/lockedcameracapture/creating-a-camera-experience-for-the-lock-screen Apple asks you to use:

func launchApp(with session: LockedCameraCaptureSession, info: String) {
    Task {
        do {
            let activity = NSUserActivity(activityType: NSUserActivityTypeLockedCameraCapture)
            activity.userInfo = [UserInfoKey: info]
            try await session.openApplication(for: activity)
        } catch {
            StatusManager.displayError("Unable to open app - \(error.localizedDescription)")
        }
    }
}

However, the documentation states that this should be placed within the extension code - LockedCameraCapture. If I do that, how can I call it all the way down from the main app's CameraViewController?
3
0
580
Nov ’25
How to transcode HEIC to JPEG while preserving the ISO 21496-1 gain map
When shooting with an iPhone 15 or later, it’s possible to capture HEIC or JPEG images that include gain map information conforming to the ISO 21496-1 standard. However, during image format transcoding, the HEIC codec is able to preserve the ISO 21496-1 gain map. But when converting from HEIC to JPEG, the gain map is transformed into the Apple Gain Map format instead. Is there any solution to this issue?
3
0
1.2k
Jul ’25
AVAudioSessionCategoryPlayback is not allowed while CallKit call is active
We require assistance in resolving a critical audio design conflict within our Push-to-Talk (PTT) application. Our current volume amplification strategy—which relies on applying a GAIN factor to PCM samples in conjunction with setting the AVAudioSession category to Playback—is working successfully when PTT is used independently. However, upon integrating and reporting the same PTT call through the CallKit framework, this amplification effect is lost. The CallKit integration appears to be forcing a different, non-amplifying audio session category or configuration, negatively impacting the user's perceived call volume. We need guidance on how to maintain the AVAudioSessionCategoryPlayback setting, or an equivalent high-volume configuration, while operating under the control of CallKit.
3
0
412
Nov ’25
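A brief sketch of the amplification path described in the post above (the category call plus a simple per-sample gain). The open question in the post — how to keep this configuration once CallKit manages the call's audio session — is not answered here; the gain parameter and function names are placeholders.

import AVFoundation

func configurePlaybackSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)
}

func applyGain(_ gain: Float, to buffer: AVAudioPCMBuffer) {
    guard let channels = buffer.floatChannelData else { return }
    for channel in 0..<Int(buffer.format.channelCount) {
        for frame in 0..<Int(buffer.frameLength) {
            channels[channel][frame] *= gain   // scale each PCM sample by the gain factor
        }
    }
}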
Is it possible to stream video from a UVC (USB Video Class) camera on an iPhone 15?
Is it possible to stream video from a UVC (USB Video Class) camera on an iPhone 15? If so, are there any specific hardware or software requirements to enable this functionality?
3
1
192
Apr ’25
Configuring capture pipeline with ProResRAW codec
I am unable to find any clear-cut documentation on configuring an AVCaptureSession pipeline to capture video with the ProResRAW codec type, which is a 16-bit format. Is it supported only with AVCaptureMovieFileOutput, or can one have AVCaptureVideoDataOutput emit 16-bit sample buffers that can be vended to an AVAssetWriter?
3
0
1.2k
Feb ’26
SpeechTranscriber on Simulator
I am trying to use SpeechTranscriber from the Speech framework. Is it possible to use it on the iOS 26 Simulator (macOS Tahoe)? The supportedLocales function returns an empty array.
3
2
991
Nov ’25