Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.


Posts under Audio subtopic


Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown:

Issue description: I've installed a tap on an input device to convert audio to an optimal sample rate, with a converter node added on top of this setup. The problem arises when joining Zoom or FaceTime calls: the converter gets deallocated from memory, causing the program to crash.

Symptoms: The converter node is being deallocated during video calls, and the program crashes entirely when this happens. Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected.

The big challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call.

Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance!
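A minimal sketch of one way to get notified of device sample-rate changes on macOS, assuming you already have the AudioObjectID of the input device. The property selector and listener-block approach are an assumption about what "listeners" means above, and whether this fires during Zoom/FaceTime renegotiation is exactly what would need verifying:

import CoreAudio

// Hypothetical helper: observe nominal sample-rate changes on a device and
// re-read the rate whenever Core Audio reports a change.
func observeSampleRate(of deviceID: AudioObjectID, queue: DispatchQueue = .main) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    return AudioObjectAddPropertyListenerBlock(deviceID, &address, queue) { _, _ in
        var readAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyNominalSampleRate,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var rate = Float64(0)
        var size = UInt32(MemoryLayout<Float64>.size)
        if AudioObjectGetPropertyData(deviceID, &readAddress, 0, nil, &size, &rate) == noErr {
            // Rebuild the converter for the new rate before the next render cycle.
            print("Nominal sample rate changed to \(rate) Hz")
        }
    }
}

If this listener also stays silent during calls, the fallback is usually to compare the tap's stream description against the converter's expected format on each render callback rather than relying on notifications.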
0 replies · 0 boosts · 63 views · Apr ’25
AVPlayerItem.externalMetadata not available
According to the documentation (https://developer.apple.com/documentation/avfoundation/avplayeritem/externalmetadata), AVPlayerItem should have an externalMetadata property. However, it does not appear to be visible to my app. When I try, I get:

Value of type 'AVPlayerItem' has no member 'externalMetadata'

Documentation states iOS 12.2+; I am building with a minimum deployment target of iOS 18. Code snippet:

import Foundation
import AVFoundation

/// ... in function ...
// create metadata as described in https://developer.apple.com/videos/play/wwdc2022/110338
var title = AVMutableMetadataItem()
title.identifier = .commonIdentifierAlbumName
title.value = "My Title" as NSString?
title.extendedLanguageTag = "und"

var playerItem = await AVPlayerItem(asset: composition)
playerItem.externalMetadata = [ title ]
0 replies · 0 boosts · 70 views · Apr ’25
Ducking MusicKit output when playing another sound
I am developing an app that uses MusicKit to play music, and then I need to have spoken words played to the user while ducking the audio coming from MusicKit (the application music player). The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then play this back with AVAudioPlayer, configuring AVAudioSession as in the sample code below. The problem I am having is that .duckOthers is not ducking the Application Music Player output. Is this a bug or am I doing this wrong?

// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)

// Set the ducking level to maximum
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)

// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()

// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
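For reference, a minimal sketch of the usual duck-then-restore pattern with AVAudioSession, assuming the speech clip is played by an AVAudioPlayer whose delegate callback marks the end of playback. Whether ApplicationMusicPlayer honors .duckOthers from a co-located session is the open question here, so treat this as a baseline to compare against rather than a confirmed fix:

import AVFoundation

// Hypothetical helper: duck other audio while speech plays, then restore it.
final class SpeechDucker: NSObject, AVAudioPlayerDelegate {
    private var player: AVAudioPlayer?

    func speak(_ audioData: Data) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .spokenAudio, options: [.duckOthers])
        try session.setActive(true) // ducking starts when the session activates
        player = try AVAudioPlayer(data: audioData)
        player?.delegate = self
        player?.play()
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        // Deactivating with this option tells the system other audio may return to full volume.
        try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    }
}

Deactivating with .notifyOthersOnDeactivation after the spoken prompt finishes is what lets the ducked audio come back up.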
0 replies · 0 boosts · 47 views · Apr ’25
Music Kit initialisation, Uncaught TypeError: Cannot read properties of undefined (reading 'node')
I'm trying to load MusicKit on the server with SolidJS. I can confirm that my implementation has been sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time?). My question is: what is wrong with my implementation? Removing the async keyword ensures it loads every time, but playing and queuing music no longer works. I'm currently assuming this is an SSR issue, but the docs haven't explicitly specified this isn't possible. I have the following boilerplate:

export default createHandler(() => (
  <StartServer
    document={({ assets, children, scripts }) => {
      return (
        <html lang="en">
          <head>
            <meta name="apple-music-developer-token" content={authResult.token} />
            <meta name="apple-music-app-name" content="app name" />
            <meta name="apple-music-app-build" content="1978.4.1" />
            {assets}
            <script src="https://js-cdn.music.apple.com/musickit/v3/musickit.js" async />
          </head>
          <body>
            <div id="app">{children}</div>
            {scripts}
          </body>
        </html>
      )
    }}
  />
))

When I first load my app, I'll encounter:

musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
    at musickit.js:13:10194
    at musickit.js:13:140
    at musickit.js:13:209

The intermittence signals an issue relating to the async keyword. An expansion on this issue can be found here.
0 replies · 0 boosts · 552 views · Dec ’24
Windows Apple Music: how to enumerate the local library or export it? Is Library.musicdb documented / API available?
Environment
- Windows 11 [edition/build]: [e.g., 23H2, 22631.x]
- Apple Music for Windows version: [e.g., 1.x.x from Microsoft Store]
- Library folder: C:\Users\<user>\Music\Apple Music\Apple Music Library.musiclibrary

Summary
I need a supported way to programmatically enumerate the local Apple Music library on Windows (track file paths, playlists, etc.) for reconciliation with the on-disk Media folder. On macOS this used to be straightforward via scripting/export; on Windows I can't find an equivalent.

What I'm seeing in the library bundle
- Library.musicdb → not SQLite. First 4 bytes: 68 66 6D 61 ("hfma").
- Library Preferences.musicdb → also starts with "hfma".
- artwork.sqlite → SQLite, but appears to be an artwork cache only (no track file paths).
- Extras.itdb → has an SQLite format 3 header but (from a quick scan) not seeing track locations.
- Genius.itdb → not a SQLite database on this machine.

What I've tried
- Attempted to open Library.musicdb with SQLite providers → error: "file is not a database."
- Binary/string scans (ASCII, UTF-16LE/BE, null-stripped) of Library.musicdb → did not reveal file paths or obvious plist/XML/JSON blobs.
- The Windows Apple Music UI doesn't appear to expose "Export Library / Export Playlist" like legacy iTunes did, and I can't find a public API for local library enumeration on Windows.

What I'm trying to accomplish
Read local track entries (absolute or relative paths), detect broken links, and reconcile against the Media folder. A read-only solution is fine; I do not need to modify the library.

Questions for Apple
1. Is the Library.musicdb file format documented anywhere, or is there a supported SDK/API to enumerate the local library on Windows?
2. Is there a supported export mechanism (CLI, UI, or API) on Windows Apple Music to dump the local library and/or playlists (XML/CSV/JSON)?
3. Is there a Windows-specific equivalent to the old iTunes COM automation or any MusicKit surface that can return local library items (not streaming catalog) and their file locations?
4. If none of the above exist today, is there a recommended workaround from Apple for library reconciliation on Windows (e.g., documented support for importing M3U/M3U8 to rebuild the local library from disk)?
5. Are there any plans/timeline for adding Windows feature parity with iTunes/Music on macOS for exporting or scripting the local library?

Why this matters
For large personal libraries, users occasionally end up with orphaned files on disk or broken links in the app. Without an export or API, it's difficult to audit and fix at scale on Windows.

Reference details (in case it helps triage)
- Library.musicdb header bytes: 68-66-6D-61-A0-00-00-00-10-26-34-00-15-00-01-00 (ASCII shows hfma…).
- artwork.sqlite is readable but doesn't contain track file paths (appears limited to artwork).
- I can supply a minimal repro tool and logs if that's helpful.

Feature request (if no current API)
- Add an official Export Library/Playlists action on Windows Apple Music, or
- Provide a read-only Windows API (or schema doc) that surfaces track file locations and playlist membership from the local library.

Thanks in advance for any guidance or pointers to docs I might have missed.
0 replies · 0 boosts · 182 views · Sep ’25
Ambisonic B-Format Playback Issues on Vision Pro
I'm trying to implement Ambisonic B-Format audio playback on Vision Pro with head tracking. So far audio plays, head tracking works, and the sound appears to be stereo. The problem is that it is not proper binaural playback when compared to playing back the audio file with a DAW. Has anyone successfully implemented B-Format playback on Vision Pro? Any suggestions on my current implementation:

func playAmbiAudioForum() async {
    do {
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)

        // Audio file loading/preparation
        guard let testFileURL = Bundle.main.url(forResource: "audiofile", withExtension: "wav") else {
            print("Test file not found")
            return
        }
        let audioFile = try AVAudioFile(forReading: testFileURL)
        let audioFileFormat = audioFile.fileFormat

        // Create AVAudioFormat with Ambisonics B-Format
        guard let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Ambisonic_B_Format) else {
            print("layout failed")
            return
        }
        let format = AVAudioFormat(
            commonFormat: audioFile.processingFormat.commonFormat,
            sampleRate: audioFile.fileFormat.sampleRate,
            interleaved: false,
            channelLayout: layout)

        // Write audio file to buffer
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: UInt32(audioFile.length)) else {
            print("buffer failed")
            return
        }
        try audioFile.read(into: buffer)

        playerNode.renderingAlgorithm = .HRTF

        // Connecting nodes
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: audioEngine.outputNode, format: format)
        audioEngine.prepare()

        playerNode.scheduleBuffer(buffer, at: nil) {
            print("File finished playing")
        }

        try audioEngine.start()
        playerNode.play()
    } catch {
        print("Setup error:", error)
    }
}
0 replies · 0 boosts · 478 views · Jan ’25
MATCH_ATTEMPT_FAILED error on Android Studio Java+Kotlin
I'm getting a MatchError "MATCH_ATTEMPT_FAILED" every time I call matchStream in an Android Studio Java+Kotlin project. My project reads samples from the mic input using the AudioRecord class and sends them to ShazamKit via matchStream. I created a Kotlin class to handle ShazamKit. The AudioRecord is configured as mono and 16-bit.

My Kotlin class:

class ShazamKitHelper {
    val shazamScope = CoroutineScope(Dispatchers.IO + SupervisorJob())
    lateinit var streaming_session: StreamingSession
    lateinit var signature: Signature
    lateinit var catalog: ShazamCatalog

    fun createStreamingSessionAsync(developerTokenProvider: DeveloperTokenProvider, readBufferSize: Int, sampleRate: AudioSampleRateInHz): CompletableFuture<Unit> {
        return CompletableFuture.supplyAsync {
            runBlocking {
                runCatching {
                    shazamScope.launch {
                        createStreamingSession(developerTokenProvider, readBufferSize, sampleRate)
                    }.join()
                }.onFailure { throwable ->
                }.getOrThrow()
            }
        }
    }

    private suspend fun createStreamingSession(developerTokenProvider: DeveloperTokenProvider, readBufferSize: Int, sampleRateInHz: AudioSampleRateInHz) {
        catalog = ShazamKit.createShazamCatalog(developerTokenProvider)
        streaming_session = (ShazamKit.createStreamingSession(
            catalog,
            sampleRateInHz,
            readBufferSize
        ) as ShazamKitResult.Success).data
    }

    fun startMatching() {
        val audioData = sharedAudioData ?: return // Return if sharedAudioData is null
        CoroutineScope(Dispatchers.IO).launch {
            runCatching {
                streaming_session.matchStream(audioData.data, audioData.meaningfulLengthInBytes, audioData.timestampInMs)
            }.onFailure { throwable ->
                Log.e("ShazamKitHelper", "Error during matchStream", throwable)
            }
        }
    }

    @JvmField
    var sharedAudioData: AudioData? = null

    data class AudioData(val data: ByteArray, val meaningfulLengthInBytes: Int, val timestampInMs: Long)

    fun startListeningForMatches() {
        CoroutineScope(Dispatchers.IO).launch {
            streaming_session.recognitionResults().collect { matchResult ->
                when (matchResult) {
                    is MatchResult.Match -> {
                        val match = matchResult.matchedMediaItems
                        println("Match found: ${match.get(0).title} by ${match.get(0).artist}")
                    }
                    is MatchResult.NoMatch -> {
                        println("No match found")
                    }
                    is MatchResult.Error -> {
                        val error = matchResult.exception
                        println("Match error: ${error.message}")
                    }
                }
            }
        }
    }
}

My Java code reads the samples from a thread:

shazam_create_session();
while (audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
    if (shazam_session_created) {
        byte[] buffer = new byte[288000]; // max_shazam_seconds * sampleRate * 2
        audioRecord.read(buffer, 0, buffer.length, AudioRecord.READ_BLOCKING);
        helper.sharedAudioData = new ShazamKitHelper.AudioData(buffer, buffer.length, System.currentTimeMillis());
        helper.startMatching();
        if (!listener_called) {
            listener_called = true;
            helper.startListeningForMatches();
        }
    } else {
        SystemClock.sleep(100);
    }
}

private void shazam_create_session() {
    MyDeveloperTokenProvider provider = new MyDeveloperTokenProvider();
    AudioSampleRateInHz sample_rate = AudioSampleRateInHz.SAMPLE_RATE_48000;
    if (sampleRate == 44100) sample_rate = AudioSampleRateInHz.SAMPLE_RATE_44100;
    CompletableFuture<Unit> future = helper.createStreamingSessionAsync(provider, 288000, sample_rate);
    future.thenAccept(result -> {
        shazam_session_created = true;
    });
    future.exceptionally(throwable -> {
        Toast.makeText(mine, "Failure", Toast.LENGTH_SHORT).show();
        return null;
    });
}

I implemented the developer token provider in Java as follows:

public static class MyDeveloperTokenProvider implements DeveloperTokenProvider {
    DeveloperToken the_token = null;

    @NonNull
    @Override
    public DeveloperToken provideDeveloperToken() {
        if (the_token == null) {
            try {
                the_token = generateDeveloperToken();
                return the_token;
            } catch (NoSuchAlgorithmException | InvalidKeySpecException e) {
                throw new RuntimeException(e);
            }
        } else {
            return the_token;
        }
    }

    @NonNull
    private DeveloperToken generateDeveloperToken() throws NoSuchAlgorithmException, InvalidKeySpecException {
        PKCS8EncodedKeySpec priPKCS8 = new PKCS8EncodedKeySpec(Decoders.BASE64.decode(p8));
        PrivateKey appleKey = KeyFactory.getInstance("EC").generatePrivate(priPKCS8);
        Instant now = Instant.now();
        Instant expiration = now.plus(Duration.ofDays(90));
        String jwt = Jwts.builder()
                .header().add("alg", "ES256").add("kid", keyId).and()
                .issuer(teamId)
                .issuedAt(Date.from(now))
                .expiration(Date.from(expiration))
                .signWith(appleKey) // Specify algorithm explicitly
                .compact();
        return new DeveloperToken(jwt);
    }
}
0 replies · 0 boosts · 550 views · Dec ’24
AVSpeechUtterance stutters in CarPlay when connected to a BT headset
We are currently working on a CarPlay navigation app, and so far everything is working well except for speaking turn notifications. Our TTS implementation works fine on the phone, and it works fine on CarPlay if the voice is spoken over the speakers in the car. If users connect a BT headset to the car and listen through that headset, the voice commands are chopped up / stutter.

Why would users use a BT headset? We are working on a motorcycle app, and there are usually no speakers on a motorcycle.

It sounds like the BT channel is opened and closed repeatedly for every character / word spoken. This happens on different CarPlay devices and different Bluetooth headsets; we have reports from multiple users who find this behavior annoying and say that other apps work fine.

Is this a known issue? Are there possible workarounds?
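A minimal sketch of the session setup often suggested for turn-by-turn prompts; the category options and mode below are an assumption, not a confirmed fix for the Bluetooth stuttering described above:

import AVFoundation

// Sketch: configure the session once for short spoken guidance, then activate it
// just before an utterance and deactivate right after, so the Bluetooth route is
// not opened and closed per word.
func configurePromptSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(
        .playback,
        mode: .voicePrompt, // marks this audio as brief spoken guidance
        options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers])
    try session.setActive(true)
}

If the stutter persists even with a session that stays active across an entire utterance, that points more toward the headset's profile switching than toward the app's own audio handling.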
0 replies · 0 boosts · 64 views · Apr ’25
Only Apple based music devices show view
The following is my playground code. Any of the Apple audio units show the plugin view; however, anything else (e.g. Kontakt, Spitfire, etc.) does not. It does not error, the area where the view is expected is just blank.

import AppKit
import PlaygroundSupport
import AudioToolbox
import AVFoundation
import CoreAudioKit

let manager = AVAudioUnitComponentManager.shared()
let description = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                            componentSubType: 0,
                                            componentManufacturer: 0,
                                            componentFlags: 0,
                                            componentFlagsMask: 0)
var deviceComponents = manager.components(matching: description)
var names = deviceComponents.map { $0.name }

let pluginName: String = "AUSampler" // This works
//let pluginName: String = "Kontakt" // This does not

var plugin = deviceComponents.filter { $0.name.contains(pluginName) }.first!
print("Plugin name: \(plugin.name)")

var customViewController: NSViewController?

AVAudioUnit.instantiate(with: plugin.audioComponentDescription, options: []) { avAudioUnit, error in
    var ilip = avAudioUnit!.auAudioUnit.isLoadedInProcess
    print("Loaded in process: \(ilip)")
    guard error == nil else {
        print("Error: \(error!.localizedDescription)")
        return
    }
    print("AudioUnit successfully created.")
    let audioUnit = avAudioUnit!.auAudioUnit
    audioUnit.requestViewController { vc in
        if let viewCtrl = vc {
            customViewController = vc
            var b = vc?.view.bounds
            PlaygroundPage.current.liveView = vc
            print("Successfully added view controller.")
        } else {
            print("Failed to load controller.")
        }
    }
}
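One thing worth checking (an assumption, not a confirmed explanation for the blank view): third-party audio unit extensions are normally loaded out of process, and that can be requested explicitly at instantiation. This sketch reuses the plugin value from the snippet above:

import AVFoundation
import PlaygroundSupport

// Sketch: ask for out-of-process instantiation, then request the view controller as before.
// A nil view controller here means the unit exposes no custom view in this host context.
AVAudioUnit.instantiate(with: plugin.audioComponentDescription,
                        options: [.loadOutOfProcess]) { avAudioUnit, error in
    guard let auAudioUnit = avAudioUnit?.auAudioUnit, error == nil else {
        print("Instantiation failed: \(String(describing: error))")
        return
    }
    auAudioUnit.requestViewController { viewController in
        PlaygroundPage.current.liveView = viewController
    }
}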
0 replies · 0 boosts · 347 views · Dec ’24
TTS Audio Unit Extension: File Write Access in App Group Container Denied Despite Proper Entitlements
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.

Setup:
- Main app (Flutter) and TTS Audio Unit Extension share the same App Group
- App Group is properly configured in the developer portal and entitlements
- Main app successfully creates and uses files in the container
- Container structure shows existing directories (config/, dictionary/) with populated files
- Both targets have the App Group capability enabled and entitlements set

Current behavior:
- Extension can access/read the App Group container
- Extension can see existing directories and files
- All write attempts are blocked with "sandbox deny(1) file-write-create" errors

Code example:

const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];
    NSURL* url = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];
    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}

Error output:

Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace

Key questions:
1. Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
2. Is this a known limitation of TTS Audio Unit Extensions?
3. What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
4. If writing to App Group containers is not supported, what alternatives are available?

Current entitlements:

<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
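In the meantime, a minimal sketch of one alternative for question 3, assuming the extension can adopt the unified logging system instead of file-based traces (the subsystem and category strings below are placeholders); logs can then be collected with Console.app or log collect:

import os

// Sketch: trace from the extension via os.Logger instead of writing files.
let ttsLog = Logger(subsystem: "com.example.tts-extension", category: "synthesis")

func traceSynthesisStep(_ message: String) {
    // Marking the interpolation public keeps it readable when pulling logs from the device.
    ttsLog.debug("synthesis step: \(message, privacy: .public)")
}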
0 replies · 0 boosts · 98 views · Apr ’25
Frequent crashes related to com.apple.coreaudio.AQClient thread
I'm encountering numerous crashes involving the com.apple.coreaudio.AQClient thread in our application. The crash details are as follows:

#10 com.apple.coreaudio.AQClient
SIGSEGV SEGV_ACCERR
0   libobjc.A.dylib               _objc_msgSend + 44
1   AudioToolbox                  ClientMessageHandler::PropertyChanged(unsigned int) + 872
2   AudioToolbox                  ClientAudioQueue::FetchAndDeliverPendingCallbacks(unsigned int) + 924
3   AudioToolbox                  __XCallbackNotificationsAvailable + 212
4   libAudioToolboxUtility.dylib  _mshMIGPerform + 260
5   CoreFoundation                ___CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
6   CoreFoundation                ___CFRunLoopDoSource1 + 596
7   CoreFoundation                ___CFRunLoopRun + 2392
8   CoreFoundation                _CFRunLoopRunSpecific + 572
9   AudioToolbox                  CADeprecated::GenericRunLoopThread::Entry(void*) + 156
10  libAudioToolboxUtility.dylib  CADeprecated::CAPThread::Entry(CADeprecated::CAPThread*) + 88
11  libsystem_pthread.dylib       __pthread_start + 116

All these crashes occur on system versions below iOS/iPadOS 17, primarily when the device's available RAM is low. What steps can I take to resolve this issue? Any insights would be greatly appreciated!
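For what it's worth, an _objc_msgSend crash inside PropertyChanged often suggests a listener or delegate object that was freed while the queue was still delivering callbacks. A minimal sketch of a teardown order that avoids that, assuming an AudioQueueRef plus a C listener proc registered with self as user data (all names here are placeholders):

import AudioToolbox

// Sketch: tear the queue down in an order that stops callbacks referencing `self`
// before `self` can be deallocated. `listenerProc` is the C callback registered earlier.
final class QueueOwner {
    var queue: AudioQueueRef?

    func shutDown(listenerProc: AudioQueuePropertyListenerProc) {
        guard let queue else { return }
        // 1. Remove the property listener so no further PropertyChanged calls reach us.
        AudioQueueRemovePropertyListener(queue,
                                         kAudioQueueProperty_IsRunning,
                                         listenerProc,
                                         Unmanaged.passUnretained(self).toOpaque())
        // 2. Stop and dispose synchronously so nothing is in flight when we return.
        AudioQueueStop(queue, true)
        AudioQueueDispose(queue, true)
        self.queue = nil
    }
}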
0 replies · 0 boosts · 167 views · Nov ’25
Different behaviors of USB-C to Headphone Jack Adapters
I bought two "Apple USB-C to Headphone Jack Adapters". Upon closer inspection, they seem to be of different generations. The one with product ID 0x110a is working fine. The one with product ID 0x110b has two issues:

1. There is a short but loud click noise on the headphones when I connect it to the iPad.
2. When I play audio using AVAudioPlayer, the first half of a second or so is cut off.

Here's how I'm playing the audio:

audioPlayer = try AVAudioPlayer(contentsOf: url)
audioPlayer?.delegate = self
audioPlayer?.prepareToPlay()
audioPlayer?.play()

Is this a known issue? Am I doing something wrong?
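A small sketch of one workaround worth trying for the cut-off start (an assumption, not a confirmed fix): activate the session and prepare the player ahead of time, then schedule playback slightly in the future so the adapter's output path is already awake when sound begins:

import AVFoundation

// Sketch: warm up the route, then start playback a beat after the device clock "now".
func playWithWarmUp(url: URL) throws -> AVAudioPlayer {
    try AVAudioSession.sharedInstance().setCategory(.playback)
    try AVAudioSession.sharedInstance().setActive(true)
    let player = try AVAudioPlayer(contentsOf: url)
    player.prepareToPlay() // pre-rolls buffers and the output route
    player.play(atTime: player.deviceCurrentTime + 0.3) // small lead time before audible start
    return player // caller must keep a strong reference while playing
}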
0 replies · 0 boosts · 306 views · Jul ’25
AVAudioPlayer/SKAudioNode audio no longer plays after interruption
Hi 👋! We have a SpriteKit-based app where we play AVAudio sounds in three different ways:

1. Effects (incl. UI sounds) with AVAudioPlayer.
2. Long looping tracks with AVAudioPlayer.
3. Short animation effects on the timeline of SpriteKit's SKScene files (effectively SKAudioNode nodes).

We've found that when you exit the app or otherwise interrupt audio playback, future audio plays often fail. For example, there's a WebKit-based video trailer inside the app, and if you play it, our looping background music track (2.) will stop playing and won't resume as you close the trailer (return from WebKit). This is probably due to us not manually restarting the track (so may well be easily fixed). Periodically played AVAudioPlayer audio (1.) is not affected.

However, the more concerning thing is that the audio tracks on SKScene file timelines (3.) will no longer play. My hypothesis is that AVAudioEngine gets interrupted and needs to be restarted for those AVAudioNode elements to regain functionality. The thing is, we don't deal with AVAudioEngine at all currently in the app, meaning it is never initiated to begin with.

Obviously things return to normal when you remove the app from short-term memory and restart it. However, it seems many of our users aren't doing this, and they often report audio failing, presumably due to some interruption in the past without the app ever being cleared from memory.

Any idea why timeline-run SKAudioNodes would fail like this? Should the app react to app backgrounding/foregrounding regarding audio? Any help would be very much appreciated ✌️!
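A minimal sketch of the kind of interruption handling that could be added, assuming the fix is to reactivate the session (and nudge paused players) when an interruption ends; whether SpriteKit then restarts its own internal engine for SKAudioNode is the part that would need testing:

import AVFoundation

// Sketch: observe session interruptions and resume audio when the system says it may resume.
func observeAudioInterruptions() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { note in
        guard let info = note.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
        if type == .ended {
            let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
            if options.contains(.shouldResume) {
                try? AVAudioSession.sharedInstance().setActive(true)
                // Restart looping AVAudioPlayer tracks and un-pause the SKScene here.
            }
        }
    }
}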
0 replies · 1 boost · 106 views · May ’25
ShazamKit Background Operation Broken on iOS 18 - SHManagedSession Stops Working After ~20 Seconds
Hi Apple Engineers and fellow developers,

I'm experiencing a critical regression with ShazamKit's background operation on iOS 18. ShazamKit's SHManagedSession stops identifying songs in the background after approximately 20 seconds on iOS 18, while the exact same code works perfectly on iOS 17.

The behavior is consistent: the app works perfectly in the foreground, but when backgrounded or the device is locked, it initially works for about 20 seconds and then stops identifying new songs. The microphone indicator remains active, suggesting audio access is maintained, but ShazamKit doesn't report identified songs in the background until you open the app again. Detection immediately resumes when bringing the app to the foreground.

My technical setup uses SHManagedSession for continuous matching, with background modes properly configured in Info.plist (including the audio mode) and Background App Refresh enabled. I've tested this on physical devices running iOS 18.0 through 18.5 with the same results across all versions. The exact same code running on iOS 17 devices works flawlessly in the background.

To reproduce: initialize SHManagedSession and start matching, begin song identification in the foreground, background the app or lock the device, and play different songs. They are initially detected for about 20 seconds; after that timeout, new songs are no longer identified until you bring the app to the foreground.

This regression has impacted my production app, as users who rely on continuous background music identification are experiencing a broken feature. I submitted this as Feedback ID FB15255903 last September with no solution so far. I've created a minimal demo project that reproduces this issue: https://github.com/tfmart/ShazamKitBackground

Has anyone else experienced this ShazamKit background regression on iOS 18? Are there any known workarounds or alternative approaches? Given the time this issue has persisted, could we please get acknowledgment of this regression, an expected timeline for a fix, or any recommended workarounds?

Testing environment is Xcode 16.0+ on iOS 18.0-18.5 across multiple physical device models. Any guidance would be greatly appreciated.
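For anyone trying to reproduce this, a minimal sketch of the continuous-matching loop described above, assuming SHManagedSession's async result() API; this is a repro aid, not a workaround:

import ShazamKit

// Sketch: prepare a managed session and keep requesting results until the task is cancelled.
func runContinuousMatching() async {
    let session = SHManagedSession()
    await session.prepare()
    while !Task.isCancelled {
        let result = await session.result()
        switch result {
        case .match(let match):
            print("Matched:", match.mediaItems.first?.title ?? "unknown title")
        case .noMatch:
            print("No match for this window")
        case .error(let error, _):
            print("Match error:", error.localizedDescription)
        }
    }
    session.cancel()
}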
0 replies · 0 boosts · 136 views · Jun ’25
Unable to play audio via MusicKit
Hey folks, I'm running into an odd issue suddenly with an app that had a working MusicKit integration before. I'm using ApplicationMusicPlayer to play Apple Music albums and songs. I'm testing on a physical device, signed in to an Apple ID, and with a valid subscription. Apple Music via the first-party app works entirely fine on this device.

Attempting to play back any content at all gives the log:

<ICUserIdentityStoreACAccountBackend: 0x1070bf3e0> Failed to initialize primary apple account, error=Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}
[ICUserIdentityStore] - initializing account histories with activeAccountDSID = nil, activeLockerAccountDSID = nil, timestamp = 14605951908
[ICUserIdentityStore] Failed to fetch local store account with error: Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}

The album artwork, track names, etc., all appear in the Control Center playback controls, but the music doesn't play. Trying to trigger playback with Control Center just results in it skipping to the next track, which doesn't play either.

This exact code used to work. I have the MusicKit service selected in Apple Connect. Since this isn't entitlement-based, I'm not sure how else to check that I'm set up correctly. I've tried deleting/reinstalling the app, restarting the device, cleaning/rebuilding, and deleting DerivedData, to no avail.

Any help? Running Xcode 16.4 (16F6), testing on iOS 18.5 (22F76).
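A short sketch of a pre-flight check that can narrow down where this fails, assuming Song values fetched elsewhere; it verifies the pieces the log touches (authorization and subscription) rather than resolving the account-store entitlement error itself:

import MusicKit

// Sketch: confirm authorization and subscription before handing content to ApplicationMusicPlayer.
func playIfPossible(_ songs: [Song]) async {
    guard await MusicAuthorization.request() == .authorized else {
        print("Media & Apple Music access not granted")
        return
    }
    do {
        let subscription = try await MusicSubscription.current
        guard subscription.canPlayCatalogContent else {
            print("No active Apple Music subscription on this account")
            return
        }
        let player = ApplicationMusicPlayer.shared
        player.queue = ApplicationMusicPlayer.Queue(for: songs)
        try await player.play()
    } catch {
        print("Playback setup failed:", error)
    }
}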
0 replies · 1 boost · 141 views · Jun ’25