Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

AVAudioSession setActive(true) fails after phone call when app is in background
I’m seeing what appears to be an iOS audio-session issue that occurs only when a phone call happens while the app is in the background.

Setup:
- API: AVAudioSession, AVAudioRecorder
- Background Modes: Audio enabled (UIBackgroundModes = audio)
- Category: .playAndRecord
- Microphone permission: granted

Expected behavior, if the app is recording audio in the background and a phone call interrupts it:
1. AVAudioSession.interruptionNotification (.began) fires
2. Call ends
3. AVAudioSession.interruptionNotification (.ended) fires
4. The app should be able to re-activate its audio session and resume or restart recording

Apple documentation suggests this should be supported for background audio apps.

Actual behavior, when the app is in the background and the phone call ends:
- AVAudioSession.interruptionNotification (.ended) does fire
- Attempting to reactivate the audio session always fails: Error Domain=NSOSStatusErrorDomain Code=560557684 ("!int") "Session activation failed"
- The session appears to remain permanently “interrupted”
- Retrying activation (with delays) does not help
- Recreating AVAudioRecorder does not help
- Reactivation works only after the app is opened again
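For context, a minimal sketch of the interruption-handling pattern described above, assuming a session already configured for .playAndRecord; the resume logic is illustrative and does not by itself avoid the "!int" failure the post reports:

```swift
import AVFoundation

final class RecorderInterruptionHandler {
    private let session = AVAudioSession.sharedInstance()

    init() {
        // Observe begin/end interruption events for the shared audio session.
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(handleInterruption(_:)),
            name: AVAudioSession.interruptionNotification,
            object: session
        )
    }

    @objc private func handleInterruption(_ note: Notification) {
        guard let info = note.userInfo,
              let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

        switch type {
        case .began:
            // Recording has already been stopped by the system; just note the state.
            print("Interruption began")
        case .ended:
            let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
            if options.contains(.shouldResume) {
                do {
                    try session.setActive(true)
                    // Restart or resume the AVAudioRecorder here.
                } catch {
                    // This is the activation failure described in the post.
                    print("Reactivation failed: \(error)")
                }
            }
        @unknown default:
            break
        }
    }
}
```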
Replies: 0 · Boosts: 0 · Views: 129 · Created: 1w

AVAudioSession.outputVolume does not reflect system volume changes made while app is in background
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.

Observed behavior:
1. While the app is in the foreground, I read audioSession.outputVolume (for example, 0.1).
2. The app is then moved to the background.
3. While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5).
4. When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1).

From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground. Volume changes made while the app is in the background are not reflected when the app returns to the foreground.

Apple’s documentation for AVAudioSession.outputVolume describes it as “The systemwide output volume set by the user.” (https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume). However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior differs from this description.

Questions:
1. The documentation states that outputVolume represents the system-wide volume set by the user, yet in our testing the value does not reflect volume changes made while the app is in the background and only updates while the app is in the foreground. Is this the expected behavior of AVAudioSession.outputVolume?
2. Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background?

Any clarification on the intended behavior or recommended handling would be greatly appreciated.
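For reference, a minimal sketch of observing outputVolume via KVO while the session is active; whether changes made in the background are delivered once the app returns to the foreground is exactly the behavior the post asks about:

```swift
import AVFoundation

final class VolumeObserver {
    private let session = AVAudioSession.sharedInstance()
    private var volumeObservation: NSKeyValueObservation?

    func start() throws {
        // outputVolume is documented as key-value observable for an active session.
        try session.setActive(true)
        volumeObservation = session.observe(\.outputVolume, options: [.initial, .new]) { _, change in
            if let volume = change.newValue {
                print("System output volume: \(volume)")
            }
        }
    }
}
```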
Replies: 0 · Boosts: 0 · Views: 109 · Created: 1w

coreaudio-api mailing list search broken
Hello, the search functionality of the coreaudio-api mailing list archive has been broken for a very long time. Several of the lower-level audio APIs have only been discussed on this mailing list, making it critical for those of us maintaining old audio code.

Steps to reproduce:
1. Open https://lists.apple.com/archives/list/coreaudio-api@lists.apple.com/ in your web browser.
2. Enter a search term in the "Search this list" field in the top-right corner of the page.
3. The search will eventually time out with "502 Bad Gateway".

Can somebody please forward this information to the current maintainer? I've tried to contact developer support, but they weren't sure what to do. Thanks!
Replies: 0 · Boosts: 0 · Views: 114 · Created: 1w

Apple Music playlist create/delete works but DELETE returns 401 — and MusicKit write APIs are macOS‑unavailable. How to build a playlist editor on macOS?
I’m trying to build a playlist editor on macOS. I can create playlists via the Apple Music HTTP API, but DELETE always returns 401, even immediately after creation with the same tokens.

Minimal repro:

```bash
#!/usr/bin/env bash
set -euo pipefail

BASE_URL="https://api.music.apple.com/v1"
PLAYLIST_NAME="${PLAYLIST_NAME:-blah}"
: "${APPLE_MUSIC_DEV_TOKEN:?}"
: "${APPLE_MUSIC_USER_TOKEN:?}"

create_body="$(mktemp)"
delete_body="$(mktemp)"
trap 'rm -f "$create_body" "$delete_body"' EXIT

curl -sS --compressed -o "$create_body" -w "Create status: %{http_code}\n" \
  -X POST "${BASE_URL}/me/library/playlists" \
  -H "Authorization: Bearer ${APPLE_MUSIC_DEV_TOKEN}" \
  -H "Music-User-Token: ${APPLE_MUSIC_USER_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"attributes\":{\"name\":\"${PLAYLIST_NAME}\"}}"

playlist_id="$(python3 - "$create_body" <<'PY'
import json, sys
with open(sys.argv[1], "r", encoding="utf-8") as f:
    data = json.load(f)
print(data["data"][0]["id"])
PY
)"

curl -sS --compressed -o "$delete_body" -w "Delete status: %{http_code}\n" \
  -X DELETE "${BASE_URL}/me/library/playlists/${playlist_id}" \
  -H "Authorization: Bearer ${APPLE_MUSIC_DEV_TOKEN}" \
  -H "Music-User-Token: ${APPLE_MUSIC_USER_TOKEN}" \
  -H "Content-Type: application/json"
```

I capture the response bodies like this:

```bash
cat "$create_body"
cat "$delete_body"
```

Result:
- Create: 201
- Delete: 401

I also checked the latest macOS SDK’s MusicKit interfaces: MusicLibrary.createPlaylist/edit/add(to:) are marked @available(macOS, unavailable), so I can’t create/delete via MusicKit on macOS either.

Question: How can I implement a playlist editor on macOS (create/delete/modify) if:
- MusicKit write APIs are unavailable on macOS, and
- the HTTP API can create but DELETE returns 401?

Any guidance or official workaround would be hugely appreciated.
Replies: 0 · Boosts: 0 · Views: 133 · Created: 2w

Are White Balance gains applied before or after ADC?
At which point in the image processing pipeline does iOS apply the white balance gains which can be set via AVCaptureDevice.setWhiteBalanceModeLocked(with:completionHandler:)? Are those gains applied in the analog part of the camera pipeline, before the pixel voltage gets converted via the ADC to digital values? Or does the camera first convert the pixel voltages to digital values and then the gains are applied to the digital values? Is this consistent across devices or can the behavior vary from device to device?
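The question is about where in the pipeline the gains take effect, which code cannot answer; for readers unfamiliar with the API referenced above, here is a minimal sketch of how those gains are set. The gain values are illustrative:

```swift
import AVFoundation

func lockWhiteBalance(on device: AVCaptureDevice) {
    // Illustrative gains; real code would derive these from temperature/tint
    // and clamp each channel to 1.0...device.maxWhiteBalanceGain.
    var gains = AVCaptureDevice.WhiteBalanceGains(redGain: 1.8, greenGain: 1.0, blueGain: 2.1)
    gains.redGain   = min(max(gains.redGain, 1.0), device.maxWhiteBalanceGain)
    gains.greenGain = min(max(gains.greenGain, 1.0), device.maxWhiteBalanceGain)
    gains.blueGain  = min(max(gains.blueGain, 1.0), device.maxWhiteBalanceGain)

    do {
        try device.lockForConfiguration()
        device.setWhiteBalanceModeLocked(with: gains) { syncTime in
            // syncTime is the capture time at which the locked gains take effect.
            print("Gains applied at \(syncTime)")
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```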
Replies: 1 · Boosts: 0 · Views: 255 · Created: 2w

Hybrid Wired-to-Wireless Audio Mode Using AirPods Charging Case
Many Apple users own both Bluetooth earphones (AirPods) and traditional wired earphones. While Bluetooth audio provides freedom of movement, some users still prefer wired earphones for comfort, sound profile, or personal preference. However, plugging wired earphones directly into an iPhone can feel restrictive and inconvenient during daily use. This proposal suggests a hybrid audio approach where wired earphones can be connected to a Bluetooth-enabled AirPods charging case (or a similar Apple-designed module), allowing users to enjoy wired earphones without a physical connection to the iPhone.

# Problem Statement
* Wired earphones offer consistent audio quality and zero latency
* Bluetooth earphones provide freedom from cables
* Users must currently choose one or the other
* Plugging wired earphones into an iPhone limits movement and can feel intrusive in daily scenarios (walking, commuting, working)

There is no native Apple solution that allows wired earphones to function wirelessly while maintaining Apple’s audio experience standards.

# Proposed Solution
Introduce a Wired-to-Wireless Audio Mode through the AirPods charging case or a dedicated Apple Bluetooth audio bridge.

How it works:
1. User plugs wired earphones into the AirPods case (or a future AirPods accessory port)
2. The case acts as a Bluetooth audio transmitter
3. Audio is streamed wirelessly from iPhone to the case
4. The case outputs audio to the wired earphones

# User Experience
* No cable connected to the iPhone
* Familiar wired earphone sound
* Freedom of movement similar to Bluetooth earbuds

# User Experience (UX Flow)
1. Plug wired earphones into the AirPods case
2. iPhone automatically detects: “Wired Earphones via AirPods Case”
3. Seamless pairing using existing AirPods framework
4. Audio controls, volume, and switching handled through iOS
5. No additional apps required

# Key Benefits
* Combines wired sound reliability with wireless convenience
* Reduces physical cable disturbance during use
* Extends usefulness of existing wired earphones
* Minimal learning curve for users
* Fits naturally into Apple’s ecosystem and design philosophy

# Privacy & Performance Considerations
* On-device audio processing only
* No cloud involvement
* Low-latency audio using Apple’s proprietary Bluetooth codecs
* Power-efficient usage leveraging AirPods case battery

# Target Users
* Users who prefer wired earphones but want wireless freedom
* Commuters and walkers
* Developers and professionals who multitask
* Users sensitive to Bluetooth earbud fit or comfort

# Ecosystem Fit
* Builds on existing AirPods pairing and audio stack
* Aligns with Apple’s focus on seamless UX
* Could be implemented via new AirPods hardware, a firmware update plus an accessory, or a dedicated Apple audio bridge
Replies: 1 · Boosts: 0 · Views: 267 · Created: 2w

How to Integrate the iOS 26 Spatial Photos Feature into a Third-Party iPhone App
Dear Apple Technical Support Team,

Greetings! I am an iOS app developer, currently upgrading the photo app I developed. Recently, I noticed the new Spatial Photos feature added in iOS 26, which brings an immersive 3D photo experience to users. We hope to integrate similar capabilities into our own app to provide users with a richer photo viewing experience.

Through technical research, we found that on Apple Vision devices a similar spatial photo display effect can be achieved through the ImagePresentationComponent.Spatial3DImage interface. However, our tests show that this interface only supports visionOS and cannot be called on iOS. Since iOS 26 now natively supports the Spatial Photos feature, we would like to know how third-party photo apps can gain this capability as well.

We sincerely request your team’s technical support, mainly on the following questions:
1. Are there any official APIs, SDKs, or development frameworks for iOS 26 that allow third-party apps to implement core functions such as the generation and display of spatial photos?
2. If no public interfaces are available at present, are there any other compliant technical solutions or alternative paths to achieve similar effects?
3. For third-party apps integrating the spatial photo feature, are there any relevant development documents, technical specifications, or review requirements to follow?

We have completed the basic feature iteration of the app and have the technical capability to adapt quickly to new functionality. We hope to receive guidance and support from your team to help us bring a better product experience to iOS users. Attached are the relevant information about our app and a detailed report on interface compatibility from our testing, for your reference. If you need any further information, please feel free to let us know.

Thank you for reviewing this, and we look forward to your reply!
Replies: 0 · Boosts: 0 · Views: 179 · Created: 2w

Any way to trigger cameraLensSmudgeDetectionStatus to change?
Looking to implement UI in our app that tells the user to clean their lens. I implemented KVO for cameraLensSmudgeDetectionStatus, but I'm having trouble reliably triggering it, both in our app and in the built-in Camera app. I tried to get inventive by putting Tupperware over the lens, but I think the model driving this, or the LiDAR sensor, might be smart enough to detect that there is something right up against the lens. Is there any way to trigger this change, similar to how we can trigger thermal state changes in debug? Thanks.
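For reference, a sketch of the kind of observation described above; the cameraLensSmudgeDetectionStatus key path is taken from the post, and its availability and enablement on a given device/OS are assumptions here rather than verified:

```swift
import AVFoundation

final class SmudgeObserver: NSObject {
    private let device: AVCaptureDevice
    // Key path string taken from the post; not verified against a specific SDK here.
    private let keyPath = "cameraLensSmudgeDetectionStatus"

    init(device: AVCaptureDevice) {
        self.device = device
        super.init()
        device.addObserver(self, forKeyPath: keyPath, options: [.initial, .new], context: nil)
    }

    override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                               change: [NSKeyValueChangeKey: Any]?,
                               context: UnsafeMutableRawPointer?) {
        guard keyPath == self.keyPath else { return }
        // Surface a "clean your lens" hint in the UI when the status changes.
        print("Smudge detection status changed: \(String(describing: change?[.newKey]))")
    }

    deinit {
        device.removeObserver(self, forKeyPath: keyPath)
    }
}
```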
Replies: 2 · Boosts: 0 · Views: 363 · Created: 2w

Camera Permissions Popup
We have a very strange issue that I am trying to solve, or at least find the best practice for. We have a SwiftUI view that uses the camera for a preview. As suggested in Apple's docs, we check the authorization status and, if it's not determined, we request authorization. We also have the privacy entry in the Info.plist.

```swift
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { accessStatusAuthorised in
        if !accessStatusAuthorised {
            self.cameraStatus = .notAuthorised
        } else {
            self.isAuthorized = true
            self.cameraStatus = .authorised
            self.startCameraSession(cameraPosition: cameraPosition)
        }
    }
case .restricted:
    cameraStatus = .notAuthorised
    isAuthorized = false
case .denied:
    cameraStatus = .notAuthorised
    isAuthorized = false
case .authorized:
    cameraStatus = .authorised
    isAuthorized = true
    startCameraSession(cameraPosition: cameraPosition)
@unknown default:
    isAuthorized = true
    cameraStatus = .notAuthorised
}
```

However, when we call this code it freezes the camera feed, even after Allow has been tapped. Here is the confusing part: if we do not call the code above, we still get the camera-access permission popup, and the camera works fine after allowing. My concern is that if we change the code to rely on that behavior and it turns out to be an Apple bug that later gets fixed, none of our apps will allow the camera to function. I cannot see anywhere that the process has changed for iOS 26 / Xcode 26. Can anyone shed any light on this, or has anyone had a similar experience?
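As a point of comparison, not a confirmed fix, a sketch of the same flow using the async requestAccess API, with session startup kept off the main thread (blocking the main thread while starting the session is a common cause of a frozen preview). startCameraSession and the status properties are names from the post; startSessionOffMain is a hypothetical stand-in:

```swift
import AVFoundation

private let sessionQueue = DispatchQueue(label: "camera.session.queue")

@MainActor
func prepareCamera(position: AVCaptureDevice.Position) async {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        startSessionOffMain()
    case .notDetermined:
        // Suspends until the user answers the system permission alert.
        let granted = await AVCaptureDevice.requestAccess(for: .video)
        if granted { startSessionOffMain() }
    case .denied, .restricted:
        break // Show the "not authorised" UI here.
    @unknown default:
        break
    }
}

func startSessionOffMain() {
    // AVCaptureSession.startRunning() blocks, so keep it off the main thread.
    sessionQueue.async {
        // Configure inputs/outputs here, then call startRunning() on the session.
    }
}
```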
Replies: 1 · Boosts: 0 · Views: 82 · Created: 2w

AVPlayerView. Internal constraints conflicts
I’m getting Auto Layout constraint conflict warnings related to AVPlayerView in my project. I’ve reproduced the issue on macOS Tahoe 26.2. The conflict appears to originate inside AVPlayerView itself, between its internal subviews, rather than in my own layout code. This issue can be easily reproduced in an empty project by simply adding an AVPlayerView as a subview using the code below.

```swift
class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let playerView = AVPlayerView()
        view.addSubview(playerView)
    }
}
```

After presenting that view controller, the following Auto Layout constraint conflict warnings appear in the console:

```
Conflicting constraints detected: <decode: bad range for [%@] got [offs:346 len:1057 within:0]>. Will attempt to recover by breaking <decode: bad range for [%@] got [offs:1403 len:81 within:0]>.
Unable to simultaneously satisfy constraints:
(
    "<NSLayoutConstraint:0xb33c29950 H:|-(0)-[AVDesktopPlayerViewContentView:0x10164dce0](LTR) (active, names: '|':AVPlayerView:0xb32ecc000 )>",
    "<NSLayoutConstraint:0xb33c299a0 AVDesktopPlayerViewContentView:0x10164dce0.right == AVPlayerView:0xb32ecc000.right (active)>",
    "<NSAutoresizingMaskLayoutConstraint:0xb33c62850 h=--& v=--& AVPlayerView:0xb32ecc000.width == 0 (active)>",
    "<NSLayoutConstraint:0xb33d46df0 H:|-(0)-[AVEventPassthroughView:0xb33cfb480] (active, names: '|':AVDesktopPlayerViewContentView:0x10164dce0 )>",
    "<NSLayoutConstraint:0xb33d46e40 AVEventPassthroughView:0xb33cfb480.trailing == AVDesktopPlayerViewContentView:0x10164dce0.trailing (active)>",
    "<NSLayoutConstraint:0xb33ef8320 NSGlassView:0xb33ed8c00.trailing == AVEventPassthroughView:0xb33cfb480.trailing - 6 (active)>",
    "<NSLayoutConstraint:0xb33ef8460 NSGlassView:0xb33ed8c00.width == 180 (active)>",
    "<NSLayoutConstraint:0xb33ef84b0 NSGlassView:0xb33ed8c00.leading >= AVEventPassthroughView:0xb33cfb480.leading + 6 (active)>"
)
Will attempt to recover by breaking constraint <NSLayoutConstraint:0xb33ef8460 NSGlassView:0xb33ed8c00.width == 180 (active)>
Set the NSUserDefault NSConstraintBasedLayoutVisualizeMutuallyExclusiveConstraints to YES to have -[NSWindow visualizeConstraints:] automatically called when this happens. And/or, set a symbolic breakpoint on LAYOUT_CONSTRAINTS_NOT_SATISFIABLE to catch this in the debugger.
```

Is it a system bug, or does anyone know how to fix it? Thank you.
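As a possible mitigation rather than a confirmed fix, a sketch that opts the player view out of autoresizing-mask constraints and gives it an explicit size, since the conflicting NSAutoresizingMaskLayoutConstraint in the log reports a width of 0:

```swift
import AVKit
import AppKit

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let playerView = AVPlayerView()
        playerView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(playerView)

        // Pin the player view to the container so its internal subviews
        // never have to lay out against a zero-sized frame.
        NSLayoutConstraint.activate([
            playerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerView.topAnchor.constraint(equalTo: view.topAnchor),
            playerView.bottomAnchor.constraint(equalTo: view.bottomAnchor)
        ])
    }
}
```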
Replies: 2 · Boosts: 0 · Views: 688 · Created: 3w

SCK/replayd behaviors and delays
We're troubleshooting SCK issues. They occur with a relatively small share of sessions, but the lack of context and/or ability to advise the customer on how to make the behavior more predictable and reliable is problematic. Generally, there are two distinct issues, which may or may not have the same root cause:

1. Failure to establish the SCK session. This usually manifests within the app as the SCShareableContent.getWithCompletionHandler call either never invoking the completion handler, or taking prohibitively long (we usually give it 3–10 seconds before giving up). In the system log it may look like this: (log omitted - suspecting it triggers the content filter) Note the 6-second delay to completion of fetchShareableContentWithOption (normally it's a 30-40 ms operation).

2. Sometimes we'd see the stream established, but some minutes (or even hours) into the recording we stop receiving frames.

Both scenarios are likely to occur when disk space is low, with a reliable repro of problem #2 below 8 GB of free space (in that case, we've seen replayd silently dropping the session without ever notifying the client; improving the API could go a long way there). However, among recent occurrences, while most machines had less than 100 GB available, we've seen it on machines with as much as 500 GB free.

Unfortunately, it's almost never reproducible in the dev environment, so we have to rely on diagnostics we're able to collect in the field, which have shown nothing obvious yet. I'd like to better understand the root cause of both scenarios and/or what specific framework behaviors can cause them.
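For reference, a sketch of bounding the shareable-content fetch with a client-side timeout using Swift concurrency; the 10-second budget mirrors the 3–10 s window mentioned above and does not address the underlying replayd delay:

```swift
import ScreenCaptureKit

enum ShareableContentError: Error {
    case timedOut
}

/// Fetches SCShareableContent, but gives up after `seconds` so the app
/// never hangs indefinitely on a stalled replayd round trip.
func shareableContent(timeoutAfter seconds: Double = 10) async throws -> SCShareableContent {
    try await withThrowingTaskGroup(of: SCShareableContent.self) { group in
        group.addTask {
            try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: false)
        }
        group.addTask {
            try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
            throw ShareableContentError.timedOut
        }
        // The first child task to finish wins; cancel the other one.
        let content = try await group.next()!
        group.cancelAll()
        return content
    }
}
```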
Replies: 0 · Boosts: 0 · Views: 462 · Created: 3w

iOS 26: Underexposed image in exposure bracket appears clamped vs single capture
Hello, I have an iOS camera app that captures exposure brackets and performs custom HDR processing. On iOS 26, I’m observing a visual difference between:
- a single photo captured at –2 EV, and
- the –2 EV frame from an exposure bracket (–2 / 0 / +2 EV).

On iOS 26:
- The single –2 EV image looks natural and consistent.
- The –2 EV image from the bracket appears clamped / distorted, most noticeably in high dynamic range scenes (highlight compression and loss of detail).

On iOS 18, both approaches produce visually identical and correct –2 EV images. The issue only appears for bracketed captures on iOS 26.

Attachments (examples):

iOS 26
- Single capture –2 EV (JPEG): /Users/danilobudimir/Downloads/ios26SingleImage/JPEG image-4006-8B77-51-0.jpeg
- Single capture –2 EV — Capture report (dumped settings): /Users/danilobudimir/Downloads/ios26SingleImage/UnderExposureDebug_CaptureReport_2026-01-09T15-59-20Z.md
- Bracket capture –2 EV frame (JPEG): /Users/danilobudimir/Downloads/bracket_iOS26/JPEG image-45CE-9793-A5-0.jpeg
- Bracket capture — Capture report (dumped settings): /Users/danilobudimir/Downloads/bracket_iOS26/UnderExposureDebug_CaptureReport_2026-01-09T15-55-42Z.md

iOS 18
- Single capture –2 EV (JPEG): /Users/danilobudimir/Downloads/ios18SingleImage/JPEG image-47FD-AF73-28-0.jpeg
- Single capture –2 EV — Capture report: /Users/danilobudimir/Downloads/ios18SingleImage/UnderExposureDebug_CaptureReport_2026-01-09T16-25-27Z.md
- Bracket capture — –2 EV frame (JPEG): /Users/danilobudimir/Downloads/bracket_iOS18/JPEG image-4A4C-9E93-46-0.jpeg
- Bracket capture — Capture report: /Users/danilobudimir/Downloads/bracket_iOS18/UnderExposureDebug_CaptureReport_2026-01-09T16-27-23Z.md

Questions:
1. Is there any new behavior in iOS 26 AVFoundation related to AVCapturePhotoBracketSettings, tone mapping / HDR preprocessing, or internal image processing applied specifically to bracketed frames?
2. Is there a new flag, format requirement, or opt-out mechanism required to preserve linear underexposed frames in exposure brackets?
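For reference, a sketch of how a –2/0/+2 EV bracket is typically requested; this only illustrates the capture request and does not itself explain the iOS 26 tone-mapping difference described above. The processed-format choice is an assumption:

```swift
import AVFoundation

func makeBracketSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoBracketSettings {
    // One auto-exposure bracketed setting per frame: -2, 0, +2 EV.
    let biases: [Float] = [-2.0, 0.0, 2.0]
    let bracketedSettings = biases.map {
        AVCaptureAutoExposureBracketedStillImageSettings.autoExposureSettings(exposureTargetBias: $0)
    }

    // Deliver processed (non-RAW) frames; the pixel format here is illustrative.
    let settings = AVCapturePhotoBracketSettings(
        rawPixelFormatType: 0,
        processedFormat: [kCVPixelBufferPixelFormatTypeKey as String:
                            kCVPixelFormatType_420YpCbCr8BiPlanarFullRange],
        bracketedSettings: bracketedSettings
    )
    settings.isLensStabilizationEnabled = output.isLensStabilizationDuringBracketedCaptureSupported
    return settings
}
```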
Replies: 2 · Boosts: 0 · Views: 790 · Created: 3w

AVSpeechSynthesizer system voices (SLA clarification)
Hello, I am building an iOS-only, commercial app that uses AVSpeechSynthesizer with system voices, strictly using the APIs provided by Apple. Before distributing the app, I want to ensure that my current implementation does not conflict with the iOS Software License Agreement (SLA) and is aligned with Apple’s intended usage.

For a better playback experience (more accurate estimation of utterance duration and smoother skip forward/backward during playback), I currently synthesize speech by:
- calling AVSpeechSynthesizer.write(_:toBufferCallback:),
- converting the received AVAudioPCMBuffer buffers into audio data,
- storing the audio inside the app sandbox, and
- playing it back using AVAudioPlayer / AVAudioEngine.

The cached audio is:
- generated fully on-device using system voices,
- stored only inside the app’s private container,
- used only for internal playback controls (timeline, seek, skip ±5 seconds), and
- never shared, exported, uploaded, or exposed outside the app.

The alternative approaches would be:
- keeping the generated audio entirely in memory (RAM) for playback purposes, without writing it to the file system at any point, or
- using AVSpeechSynthesizer.speak(_:) and playing speech strictly in real time, which has a poorer user experience compared to my approach.

I have reviewed the current iOS Software License Agreement: https://www.apple.com/legal/sla/docs/iOS18_iPadOS18.pdf. In particular, section (f) mentions restrictions around System Characters, Live Captions, and Personal Voice, including the following excerpt:

“…use … only for your personal, non-commercial use… No other creation or use of the System Characters, Live Captions, or Personal Voice is permitted by this License, including but not limited to the use, reproduction, display, performance, recording, publishing or redistribution in a … commercial context.”

I do not see a specific reference in the SLA to system text-to-speech voices used via AVSpeechSynthesizer, and I want to be certain that temporarily caching synthesized speech for internal, non-exported playback is acceptable in a commercial app.

My questions:
1. Is caching AVSpeechSynthesizer system-voice output inside the app sandbox for internal playback acceptable, or is Apple’s recommended approach to rely only on real-time playback (speak(_:)) or strictly in-memory buffering without file storage?
2. If this question falls outside DTS technical scope and is instead a policy or licensing matter, I would appreciate guidance on the authoritative Apple documentation or the correct Apple team/contact.

Thank you.
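For reference, a sketch of the buffer-caching approach described above; it only shows the mechanics of collecting write(_:toBufferCallback:) output into a file in the sandbox and says nothing about the licensing question itself:

```swift
import AVFoundation

final class SpeechCacher {
    // Keep a strong reference; synthesis via write(_:toBufferCallback:) is asynchronous.
    private let synthesizer = AVSpeechSynthesizer()
    private var file: AVAudioFile?

    func cache(_ text: String, voice: AVSpeechSynthesisVoice?, to url: URL,
               completion: @escaping (Error?) -> Void) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = voice

        synthesizer.write(utterance) { [weak self] buffer in
            guard let self, let pcm = buffer as? AVAudioPCMBuffer else { return }
            // A zero-length buffer signals the end of synthesis.
            guard pcm.frameLength > 0 else {
                completion(nil)
                return
            }
            do {
                if self.file == nil {
                    // Create the file lazily so it matches the synthesizer's output format.
                    self.file = try AVAudioFile(forWriting: url, settings: pcm.format.settings)
                }
                try self.file?.write(from: pcm)
            } catch {
                completion(error)
            }
        }
    }
}
```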
Replies: 0 · Boosts: 1 · Views: 281 · Created: 3w

Logic Pro - discover channel upstream latency
Hello everyone, I've written an audio unit plugin that needs to be aware of any upstream latency caused by heavy plugins before it on the channel. Is there any way to query this? I know that Logic applies PDC at the channel's output (summing point), but I need to know what the accumulated latency is at the point the audio enters my plugin. Thanks!
Replies: 0 · Boosts: 0 · Views: 329 · Created: 3w

Why doesn’t AVPlayer / AVFoundation support MPEG-DASH (MPD)? Any public rationale?
Hi, I understand that AVPlayer/AVFoundation doesn’t natively play MPEG-DASH manifests (.mpd) today, while HLS is supported and widely documented by Apple. I’m not asking for roadmap commitments, but I’d like to understand whether there is any publicly documented rationale for not supporting DASH/MPD in AVFoundation (e.g., technical constraints, platform integration, DRM ecosystem, power/performance considerations, etc.).

Questions:
1. Is there any Apple statement / documentation explaining why DASH (MPD) isn’t supported in AVFoundation?
2. Is Apple’s recommended approach still “provide HLS for Apple clients” (potentially sharing CMAF segments and generating separate manifests)?
3. If there’s no public rationale, is filing Feedback Assistant the best channel for requesting MPD playback support?

Thanks!
Replies: 1 · Boosts: 1 · Views: 347 · Created: 3w

It crashes when AVAssetReader is released
```
Thread 5 Crashed:
0   libobjc.A.dylib   0x19af7b038  objc_msgSend + 56
1   CoreFoundation    0x19dfdb618  cow_cleanup + 135
2   CoreFoundation    0x19dfdb6fc  -[__NSDictionaryM dealloc] + 147
3   MediaToolbox      0x1b167636c  FigRemotePropertyCacheTeardown + 31
4   MediaToolbox      0x1b1c5b648  remoteXPCAsset_Finalize + 107
5   CoreMedia         0x1b1e9166c  FigBaseObjectFinalize + 275
6   CoreFoundation    0x19dfcc5ec  _CFRelease + 295
7   AVFCore           0x1b1054d64  -[AVFigAssetTrackInspector dealloc] + 151
8   AVFCore           0x1b0f818d8  -[AVAssetTrack dealloc] + 63
9   CoreFoundation    0x19dfdba28  RELEASE_OBJECTS_IN_THE_ARRAY + 115
10  CoreFoundation    0x19dfdb7e0  -[__NSArrayM dealloc] + 147
11  AVFCore           0x1b0f52e04  -[AVURLAsset dealloc] + 167
12  libobjc.A.dylib   0x19af821f8  object_cxxDestructFromClass(objc_object*, objc_class*) + 115
13  libobjc.A.dylib   0x19af7df20  objc_destructInstance_nonnull_realized(objc_object*) + 75
14  libobjc.A.dylib   0x19af7d4a4  _objc_rootDealloc + 71
15  AVFCore           0x1b0fef988  -[AVAssetReaderOutput dealloc] + 415
16  AVFCore           0x1b0ff11ec  -[AVAssetReaderTrackOutput dealloc] + 127
17  CoreFoundation    0x19dfe20a4  -[__NSSingleObjectArrayI dealloc] + 63
18  libobjc.A.dylib   0x19af7d3f8  AutoreleasePoolPage::releaseUntil(objc_object**) + 203
```
Replies: 1 · Boosts: 0 · Views: 274 · Created: 3w

Is the output frame rate of a CMIOExtension rounded or capped?
I made a CMIOExtension (a virtual camera) which generates its own output, for use in our in-house software testing. I wanted to make a video source with 29.97, 30, 59.94, and 60 fps output. To this end, I created a CMIOExtensionDeviceSource which creates a CMIOExtensionDevice with one CMIOExtensionStreamSource with various stream formats contained in [CMIOExtensionStreamFormat], including one with both maxFrameDuration and minFrameDuration = CMTimeMake(value: 1000, timescale: 30000) and another with both maxFrameDuration and minFrameDuration = CMTimeMake(value: 1001, timescale: 30000). I've held off on the creation of the 59.94/60 fps source for now until this problem is resolved.

My virtual camera works and produces a signal, but when I examine its associated AVCaptureDevice in the debugger, I find:

```
(lldb) po self.captureDevice?.formats[0].videoSupportedFrameRateRanges[0].maxFrameDuration
▿ Optional<CMTime>
  ▿ some : CMTime
    - value : 1000000
    - timescale : 30000000
    ▿ flags : CMTimeFlags
      - rawValue : 1
    - epoch : 0
```

I get the same value, 1000000/30000000, or exactly 30 fps, for all the formats of my AVCaptureDevice. Is there something I'm doing wrong, or do CMIOExtensionDevices always round the frame rates? I can't force CoreMediaIO to produce frames at exactly my desired frame interval, but I'd like to ensure that the average frame rate is my desired rate. How can I do that? Frame emission is governed by a repeating DispatchSourceTimer with a repeat time specified in nanoseconds and the TimerFlags set to 'strict'.
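For reference, a sketch of how the two stream formats described above might be declared; the format-description parameters (pixel format, dimensions) are illustrative assumptions, and this does not answer whether the host rounds the advertised rate:

```swift
import CoreMedia
import CoreMediaIO

func makeStreamFormats(width: Int32, height: Int32) -> [CMIOExtensionStreamFormat] {
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                   codecType: kCVPixelFormatType_32BGRA,
                                   width: width, height: height,
                                   extensions: nil,
                                   formatDescriptionOut: &formatDescription)
    guard let desc = formatDescription else { return [] }

    // Exactly 30 fps: 1000/30000. Exactly 29.97 fps (NTSC): 1001/30000.
    let thirtyFPS = CMTimeMake(value: 1000, timescale: 30000)
    let ntscFPS   = CMTimeMake(value: 1001, timescale: 30000)

    return [thirtyFPS, ntscFPS].map { duration in
        CMIOExtensionStreamFormat(formatDescription: desc,
                                  maxFrameDuration: duration,
                                  minFrameDuration: duration,
                                  validFrameDurations: [duration])
    }
}
```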
Replies: 2 · Boosts: 0 · Views: 543 · Created: 3w

iPhone 14 Pro: External USB mic not available in AVAudioSession for call apps, but works in Voice Memos & Instagram Live
I’m facing a strange audio routing issue that seems specific to iPhone 14 Pro / Pro Max. I’m using LiveKit (WebRTC) in a React Native app, which uses AVAudioSession internally for audio capture (VoIP / call-style usage).

🔍 What’s happening:

I’m using an external USB microphone. On these devices:
- iPhone 11 → ✅ USB mic works
- iPhone 13 → ✅ USB mic works
- iPhone 17 Pro → ✅ USB mic works
- iPhone 14 Pro Max → ❌ USB mic does NOT work

On iPhone 14 Pro Max, the same USB mic:
- ✅ works in Voice Memos
- ✅ works in Instagram Live
- ❌ does NOT appear as an input option in my app
- ❌ does NOT work in WhatsApp / Instagram calls

Also: in my app on iPhone 14 Pro Max, iOS does not show the audio input selector UI. On iPhone 17 Pro, the same app and same build does show the selector, and the USB mic works.

⚙️ My audio session config (LiveKit):

```javascript
await AudioSession.setAppleAudioConfiguration({
  audioCategory: 'playAndRecord',
  audioMode: 'default',
  audioCategoryOptions: ['allowBluetooth', 'defaultToSpeaker'],
});
await AudioSession.startAudioSession();
```

❓ My questions:
1. Is this a known limitation or behavior specific to iPhone 14 Pro / Pro Max?
2. Does iPhone 14 Pro have different audio routing rules for call / VoIP mode compared to other devices?
3. Why does the same USB mic work in recording apps (Voice Memos, Instagram Live) but not in call-style apps (LiveKit, WhatsApp, Instagram call)?
4. Is there any documented difference in AVAudioSession behavior on iPhone 14 Pro regarding external USB audio inputs?
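For comparison, a native Swift sketch that lists the session's available inputs and explicitly prefers a USB port if one is exposed; whether the iPhone 14 Pro route offers the USB port at all under .playAndRecord is exactly the open question:

```swift
import AVFoundation

func preferUSBInputIfAvailable() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)

    // Log the inputs the current route offers, which helps compare devices.
    let inputs = session.availableInputs ?? []
    inputs.forEach { print("Available input: \($0.portType.rawValue) – \($0.portName)") }

    // Prefer the USB audio port if the system exposes it to this session.
    if let usb = inputs.first(where: { $0.portType == .usbAudio }) {
        try session.setPreferredInput(usb)
    }
}
```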
Replies: 1 · Boosts: 0 · Views: 91 · Created: 3w

Unique identifier of a MIDI device
Hello, I need to know what the unique identifier of a MIDI device (source/destination) is. Important note: I want to get the same ID when a device is reconnected (unplugged and then plugged in again). The main candidate is the kMIDIPropertyUniqueID property, but I don't know whether it meets the requirement above. An additional question: is it always available for any endpoint? There is also the kMIDIPropertyDeviceID property; what about it? One more option is simply the MIDIEndpointRef returned by MIDIGetSource or MIDIGetDestination. So what is the proper way to get an ID that persists between device reconnections?
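For reference, a sketch of reading kMIDIPropertyUniqueID from an endpoint; persistence across reconnections is the open question here, not the read itself:

```swift
import CoreMIDI

/// Returns the MIDIUniqueID of a source/destination endpoint, or nil on error.
func uniqueID(of endpoint: MIDIEndpointRef) -> MIDIUniqueID? {
    var id: MIDIUniqueID = 0
    let status = MIDIObjectGetIntegerProperty(endpoint, kMIDIPropertyUniqueID, &id)
    return status == noErr ? id : nil
}

/// Example: enumerate all sources and print their unique IDs.
func dumpSourceIDs() {
    for index in 0..<MIDIGetNumberOfSources() {
        let source = MIDIGetSource(index)
        if let id = uniqueID(of: source) {
            print("Source \(index): uniqueID = \(id)")
        }
    }
}
```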
Replies: 0 · Boosts: 0 · Views: 57 · Created: 3w

AppleAVBAudio assertion information
Hi, I'm currently developing an AVB hardware device, and I'm stuck because the Apple AVB stack is throwing errors at me without much information. Is there any way to get more information about these assertions and why they are happening? Furthermore, is there any documentation on the AppleAVBAudio module? It would be very handy.

Here are the logs shown in the Console, filtering the log data using "process == "coreaudiod"". The same assertion is logged 16 times within about 7.5 ms:

```
Timestamp                        Thread    Type     Activity  PID    TTL
2025-12-05 15:44:27.087043+0100  0x15ae74  Default  0x0       12965  0    coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.087545+0100  0x15ae74  Default  0x0       12965  0    coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.088043+0100  0x15ae74  Default  0x0       12965  0    coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
...
2025-12-05 15:44:27.094543+0100  0x15ae74  Default  0x0       12965  0    coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
```
Replies: 3 · Boosts: 0 · Views: 211 · Created: 3w
