Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post · Replies · Boosts · Views · Activity

Apple Music web kit play issues (MusicKit JS)
Hello, I am trying to follow the Getting Started guide. I have produced a developer token via the MusicKit embedding approach and can confirm I'm successfully authorized. When I try to play music, I can't hear anything. I thought it could be an auto-play restriction in the browser, but it doesn't appear to be related, as triggering play from a button makes no difference.

```js
const music = MusicKit.getInstance()
try {
  await music.authorize() // successful
  const result = await music.api.music(`/v1/catalog/gb/search`, {
    term: 'Sound Travels',
    types: 'albums',
  })
  await music.play()
} catch (error) {
  console.error('play error', error) // ! No error triggered
}
```

I have searched the forum and found similar queries, but apparently none using v3 of the API. Other potentially helpful information:

- OS: macOS 15.1 (24B83)
- API version: v3
- On localhost
- Browser: Arc (Chromium-based); also tried Safari. The only difference between the two browsers is that Safari appears to exit the breakpoint, whereas Arc will continue (without throwing any errors)
- authorizationStatus: 3

Side note: any reason this is still in beta so many years later?
Replies: 1 · Boosts: 0 · Views: 684 · Dec ’24
Only Apple music device audio units show a view
The following is my playground code. Any of the Apple audio units show the plugin view, but anything else (i.e. Kontakt, Spitfire, etc.) does not. It does not error; the area where the visual is expected is just blank.

```swift
import AppKit
import PlaygroundSupport
import AudioToolbox
import AVFoundation
import CoreAudioKit

let manager = AVAudioUnitComponentManager.shared()
let description = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                            componentSubType: 0,
                                            componentManufacturer: 0,
                                            componentFlags: 0,
                                            componentFlagsMask: 0)

var deviceComponents = manager.components(matching: description)
var names = deviceComponents.map { $0.name }

let pluginName: String = "AUSampler" // This works
//let pluginName: String = "Kontakt" // This does not

var plugin = deviceComponents.filter { $0.name.contains(pluginName) }.first!
print("Plugin name: \(plugin.name)")

var customViewController: NSViewController?

AVAudioUnit.instantiate(with: plugin.audioComponentDescription, options: []) { avAudioUnit, error in
    let ilip = avAudioUnit!.auAudioUnit.isLoadedInProcess
    print("Loaded in process: \(ilip)")
    guard error == nil else {
        print("Error: \(error!.localizedDescription)")
        return
    }
    print("AudioUnit successfully created.")
    let audioUnit = avAudioUnit!.auAudioUnit
    audioUnit.requestViewController { vc in
        if let viewCtrl = vc {
            customViewController = viewCtrl
            var b = viewCtrl.view.bounds
            PlaygroundPage.current.liveView = viewCtrl
            print("Successfully added view controller.")
        } else {
            print("Failed to load controller.")
        }
    }
}
```
Replies: 0 · Boosts: 0 · Views: 347 · Dec ’24
Trouble getting extended album info from the user's library
Hello! I have a problem getting extended album info from the user's library. Note that the app is authorized to use Apple Music according to the documentation. I fetch albums from the user's library with this code:

```swift
func getLibraryAlbums() async throws -> MusicItemCollection<Album> {
    let request = MusicLibraryRequest<Album>()
    let response = try await request.response()
    return response.items
}
```

This is an example of the albums request response:

```json
{
  "data" : [
    {
      "meta" : {
        "musicKit_identifierSet" : {
          "isLibrary" : true,
          "id" : "1945382328890400383",
          "dataSources" : [ "localLibrary", "legacyModel" ],
          "type" : "Album",
          "deviceLocalID" : { "databaseID" : "37336CB19CF51727", "value" : "1945382328890400383" },
          "catalogID" : { "kind" : "adamID", "value" : "1173535954" }
        }
      },
      "id" : "1945382328890400383",
      "type" : "library-albums",
      "attributes" : {
        "artwork" : {
          "url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=4A2F444C%2D336D%2D49EA%2D90C8%2D13C547A5B95B",
          "width" : 0,
          "height" : 0
        },
        "genreNames" : [ "Pop" ],
        "trackCount" : 1,
        "artistName" : "Сара Окс",
        "isAppleDigitalMaster" : false,
        "audioVariants" : [ "lossless" ],
        "playParams" : {
          "catalogId" : "1173535954",
          "id" : "1945382328890400383",
          "musicKit_persistentID" : "1945382328890400383",
          "kind" : "album",
          "musicKit_databaseID" : "37336CB19CF51727",
          "isLibrary" : true
        },
        "name" : "Нимфомания - Single",
        "isCompilation" : false
      }
    },
    {
      "meta" : {
        "musicKit_identifierSet" : {
          "isLibrary" : true,
          "id" : "-8570883332059662437",
          "dataSources" : [ "localLibrary", "legacyModel" ],
          "type" : "Album",
          "deviceLocalID" : { "value" : "-8570883332059662437", "databaseID" : "37336CB19CF51727" },
          "catalogID" : { "kind" : "adamID", "value" : "1618488499" }
        }
      },
      "id" : "-8570883332059662437",
      "type" : "library-albums",
      "attributes" : {
        "isCompilation" : false,
        "genreNames" : [ "Pop" ],
        "trackCount" : 1,
        "artistName" : "TIMOFEEW & KURYANOVA",
        "isAppleDigitalMaster" : false,
        "audioVariants" : [ "lossless" ],
        "playParams" : {
          "catalogId" : "1618488499",
          "musicKit_persistentID" : "-8570883332059662437",
          "kind" : "album",
          "id" : "-8570883332059662437",
          "musicKit_databaseID" : "37336CB19CF51727",
          "isLibrary" : true
        },
        "artwork" : {
          "url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=BEA6DBD3%2D8E14%2D4A10%2D97BE%2D8908C7C5FC2C",
          "width" : 0,
          "height" : 0
        },
        "name" : "Не звони - Single"
      }
    },
    ...
  ]
}
```

In AlbumView, using the task view modifier, I request extended information about the album with this code:

```swift
func loadExtendedInfo(_ album: Album) async throws -> Album {
    let response = try await album.with([.tracks, .audioVariants, .recordLabels],
                                        preferredSource: .library)
    return response
}
```

but in the response some of the fields are always nil, for example recordLabels, releaseDate, url, editorialNotes, copyright. Please tell me what I'm doing wrong.
Replies: 0 · Boosts: 0 · Views: 457 · Dec ’24
MATCH_ATTEMPT_FAILED error on Android Studio Java+Kotlin
I'm getting a MatchError "MATCH_ATTEMPT_FAILED" every time I call matchStream in an Android Studio Java+Kotlin project. My project reads samples from the mic input using the AudioRecord class and sends them to ShazamKit for stream matching. I created a Kotlin class to handle ShazamKit. The AudioRecord is configured as mono, 16-bit.

My Kotlin class:

```kotlin
class ShazamKitHelper {
    val shazamScope = CoroutineScope(Dispatchers.IO + SupervisorJob())

    lateinit var streaming_session: StreamingSession
    lateinit var signature: Signature
    lateinit var catalog: ShazamCatalog

    fun createStreamingSessionAsync(
        developerTokenProvider: DeveloperTokenProvider,
        readBufferSize: Int,
        sampleRate: AudioSampleRateInHz
    ): CompletableFuture<Unit> {
        return CompletableFuture.supplyAsync {
            runBlocking {
                runCatching {
                    shazamScope.launch {
                        createStreamingSession(developerTokenProvider, readBufferSize, sampleRate)
                    }.join()
                }.onFailure { throwable ->
                }.getOrThrow()
            }
        }
    }

    private suspend fun createStreamingSession(
        developerTokenProvider: DeveloperTokenProvider,
        readBufferSize: Int,
        sampleRateInHz: AudioSampleRateInHz
    ) {
        catalog = ShazamKit.createShazamCatalog(developerTokenProvider)
        streaming_session = (ShazamKit.createStreamingSession(
            catalog,
            sampleRateInHz,
            readBufferSize
        ) as ShazamKitResult.Success).data
    }

    fun startMatching() {
        val audioData = sharedAudioData ?: return // Return if sharedAudioData is null
        CoroutineScope(Dispatchers.IO).launch {
            runCatching {
                streaming_session.matchStream(audioData.data, audioData.meaningfulLengthInBytes, audioData.timestampInMs)
            }.onFailure { throwable ->
                Log.e("ShazamKitHelper", "Error during matchStream", throwable)
            }
        }
    }

    @JvmField
    var sharedAudioData: AudioData? = null

    data class AudioData(val data: ByteArray, val meaningfulLengthInBytes: Int, val timestampInMs: Long)

    fun startListeningForMatches() {
        CoroutineScope(Dispatchers.IO).launch {
            streaming_session.recognitionResults().collect { matchResult ->
                when (matchResult) {
                    is MatchResult.Match -> {
                        val match = matchResult.matchedMediaItems
                        println("Match found: ${match.get(0).title} by ${match.get(0).artist}")
                    }
                    is MatchResult.NoMatch -> {
                        println("No match found")
                    }
                    is MatchResult.Error -> {
                        val error = matchResult.exception
                        println("Match error: ${error.message}")
                    }
                }
            }
        }
    }
}
```

My Java code reads the samples from a thread:

```java
shazam_create_session();
while (audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
    if (shazam_session_created) {
        byte[] buffer = new byte[288000]; // max_shazam_seconds * sampleRate * 2
        audioRecord.read(buffer, 0, buffer.length, AudioRecord.READ_BLOCKING);
        helper.sharedAudioData = new ShazamKitHelper.AudioData(buffer, buffer.length, System.currentTimeMillis());
        helper.startMatching();
        if (!listener_called) {
            listener_called = true;
            helper.startListeningForMatches();
        }
    } else {
        SystemClock.sleep(100);
    }
}
```

```java
private void shazam_create_session() {
    MyDeveloperTokenProvider provider = new MyDeveloperTokenProvider();
    AudioSampleRateInHz sample_rate = AudioSampleRateInHz.SAMPLE_RATE_48000;
    if (sampleRate == 44100)
        sample_rate = AudioSampleRateInHz.SAMPLE_RATE_44100;
    CompletableFuture<Unit> future = helper.createStreamingSessionAsync(provider, 288000, sample_rate);
    future.thenAccept(result -> {
        shazam_session_created = true;
    });
    future.exceptionally(throwable -> {
        Toast.makeText(mine, "Failure", Toast.LENGTH_SHORT).show();
        return null;
    });
}
```

I implemented the developer token provider in Java as follows:

```java
public static class MyDeveloperTokenProvider implements DeveloperTokenProvider {
    DeveloperToken the_token = null;

    @NonNull
    @Override
    public DeveloperToken provideDeveloperToken() {
        if (the_token == null) {
            try {
                the_token = generateDeveloperToken();
                return the_token;
            } catch (NoSuchAlgorithmException | InvalidKeySpecException e) {
                throw new RuntimeException(e);
            }
        } else {
            return the_token;
        }
    }

    @NonNull
    private DeveloperToken generateDeveloperToken() throws NoSuchAlgorithmException, InvalidKeySpecException {
        PKCS8EncodedKeySpec priPKCS8 = new PKCS8EncodedKeySpec(Decoders.BASE64.decode(p8));
        PrivateKey appleKey = KeyFactory.getInstance("EC").generatePrivate(priPKCS8);
        Instant now = Instant.now();
        Instant expiration = now.plus(Duration.ofDays(90));
        String jwt = Jwts.builder()
                .header().add("alg", "ES256").add("kid", keyId).and()
                .issuer(teamId)
                .issuedAt(Date.from(now))
                .expiration(Date.from(expiration))
                .signWith(appleKey) // Specify algorithm explicitly
                .compact();
        return new DeveloperToken(jwt);
    }
}
```
Replies: 0 · Boosts: 0 · Views: 550 · Dec ’24
AudioHardwareError: No Access to Int32 error constants
I am unable to access the Int32 error code from the errors that Core Audio throws as the Swift type AudioHardwareError. This is critical: there is no way to access the error codes, or even to create an AudioHardwareError, in order to test for specific errors.

```swift
do {
    _ = try AudioHardwareDevice(id: 0).streams // will throw
} catch {
    if let error = error as? AudioHardwareError { // cast to AudioHardwareError
        print(error) // prints the error code but not the errorDescription
    }
}
```

How can I reliably get the error's Int32, or create an AudioHardwareError from an error constant? There is no way for me to handle these errors in code or run tests without knowing what the error is. On top of that, by default the error's localizedDescription does not contain the errorDescription unless I extend AudioHardwareError with CustomStringConvertible:

```swift
extension AudioHardwareError: @retroactive CustomStringConvertible {
    public var description: String {
        return self.localizedDescription
    }
}
```
Replies: 2 · Boosts: 1 · Views: 593 · Dec ’24
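Until the underlying Int32 is exposed directly, one stopgap on the logging side is to decode the status number that does get printed: Core Audio statuses are typically four printable ASCII bytes packed into an Int32. The helper below is hypothetical (`fourCharCode` is not part of any Apple API), but it is plain Swift:

```swift
import Foundation

// Hypothetical helper: render a Core Audio OSStatus as its four-character
// code (e.g. kAudioHardwareUnknownPropertyError == 'who?') for logging.
func fourCharCode(from status: Int32) -> String {
    let n = UInt32(bitPattern: status)
    let bytes: [UInt8] = [
        UInt8((n >> 24) & 0xFF),
        UInt8((n >> 16) & 0xFF),
        UInt8((n >> 8) & 0xFF),
        UInt8(n & 0xFF)
    ]
    // Fall back to the decimal value when the bytes aren't printable ASCII.
    guard bytes.allSatisfy({ $0 >= 0x20 && $0 < 0x7F }) else {
        return String(status)
    }
    return String(bytes: bytes, encoding: .ascii)!
}
```

For instance, 2003332927 renders as "who?", the four-char code of kAudioHardwareUnknownPropertyError; values without printable bytes fall back to the decimal string.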
How to keep recording from being interrupted by incoming calls
The problem I have at the moment is that if a phone call comes in during my recording, the recording is interrupted even if I don't answer. The symptom is that the picture freezes; recording resumes automatically after the call ends, but this leaves the recorded video's audio and picture out of sync. Through the AVCaptureSessionWasInterrupted notification I can get the interruption type and respond to the interruption. As far as I can tell, a ringing or vibrating phone can block the audio channel. I have seen other apps handle the same scenario by suppressing the ringtone or vibration, but I don't know how to do that; I have tried a lot of approaches and none of them work. In the Blackmagic Camera or ProMovie apps, when a call comes in during recording there is only a notification banner, with no ringtone or vibration, which avoids the recording interruption. I don't know whether this requires some configuration or entitlement; please let me know if it does.
Replies: 1 · Boosts: 0 · Views: 540 · Dec ’24
Sound randomly gets quiet
Hello all! I've been having this issue for a while on my iPhone 12 Pro. The volume when listening to music, watching YouTube, TikTok, etc. will randomly lower: the volume slider stays at max, but the audio gets very quiet. The same happens on phone calls. I've followed various instructions, such as turning off audio awareness and other settings, but nothing seems to work. Has anyone else had this issue and managed to fix it?
Replies: 1 · Boosts: 0 · Views: 422 · Dec ’24
MusicKit initialisation: Uncaught TypeError: Cannot read properties of undefined (reading 'node')
I'm trying to load MusicKit on the server with SolidJS. I can confirm that my implementation is sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time). My question is: what is wrong with my implementation? Removing the async keyword ensures it loads every time, but then playing and queuing music no longer works. I'm currently assuming this is an SSR issue, but the docs haven't explicitly specified that this isn't possible. I have the following boilerplate:

```jsx
export default createHandler(() => (
  <StartServer
    document={({ assets, children, scripts }) => {
      return (
        <html lang="en">
          <head>
            <meta name="apple-music-developer-token" content={authResult.token} />
            <meta name="apple-music-app-name" content="app name" />
            <meta name="apple-music-app-build" content="1978.4.1" />
            {assets}
            <script
              src="https://js-cdn.music.apple.com/musickit/v3/musickit.js"
              async
            />
          </head>
          <body>
            <div id="app">{children}</div>
            {scripts}
          </body>
        </html>
      )
    }}
  />
))
```

When I first load my app, I'll encounter:

```
musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
    at musickit.js:13:10194
    at musickit.js:13:140
    at musickit.js:13:209
```

The intermittence signals an issue relating to the async keyword. An expansion on this issue can be found here.
Replies: 0 · Boosts: 0 · Views: 551 · Dec ’24
AudioComponentInstanceNew takes up to five seconds to complete
We are using a VoiceProcessingIO audio unit in our VoIP application on the Mac. In certain scenarios, the AudioComponentInstanceNew call blocks for up to five seconds (at least two). We use the following code to initialize the audio unit:

```c
OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;

desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);
```

We see the issue with current macOS versions on a host of different Macs (x86 and x64 alike). It takes two to three seconds until AudioComponentInstanceNew returns. We also see the following error in the log multiple times:

```
AUVPAggregate.cpp:2560 AggInpStreamsChanged wait failed
```

and these right after (which I don't know whether they matter to this issue):

```
KeystrokeSuppressorCore.cpp:44 ERROR: KeystrokeSuppressor initialization was unsuccessful. Invalid or no plist was provided. AU will be bypassed.
vpStrategyManager.mm:486 Error code 2003332927 reported at GetPropertyInfo
```
Replies: 3 · Boosts: 1 · Views: 1.2k · Dec ’24
Apple Music won't play using the latest version of Xcode/macOS
I have tried everything. The songs load into the playlists and on searches, but when prompted to play, they just won't play. I have a wrapper, since my main player (which carries the buttons for play/rewind/forward/etc.) is in Objective-C.

```swift
//
//  ApplePlayerWrapper.swift
//  UniversallyMac
//
//  Created by Dorian Mattar on 11/10/24.
//

import Foundation
import MusicKit
import MediaPlayer

@objc public class MusicKitWrapper: NSObject {
    @objc public static let shared = MusicKitWrapper()
    private let player = ApplicationMusicPlayer.shared

    // Play the current track
    @objc public func play() {
        guard !player.queue.entries.isEmpty else {
            print("Queue is empty. Cannot start playback.")
            return
        }
        logPlayerState(message: "Before play")
        Task {
            do {
                try await player.prepareToPlay()
                try await player.play()
                print("Playback started successfully.")
            } catch {
                if let nsError = error as NSError? {
                    print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                }
            }
            logPlayerState(message: "After play")
        }
    }

    // Log the current player state
    @objc public func logPlayerState(message: String = "") {
        print("Player State - \(message):")
        print("Playback Status: \(player.state.playbackStatus)")
        print("Queue Count: \(player.queue.entries.count)")
        // Only log current track details if the player is playing
        if player.state.playbackStatus == .playing {
            if let currentEntry = player.queue.currentEntry {
                print("Current Track: \(currentEntry.title)")
                print("Current Position: \(player.playbackTime) seconds")
                print("Track Length: \(currentEntry.endTime ?? 0.0) seconds")
            } else {
                print("No current track.")
            }
        } else {
            print("No track is playing.")
        }
        print("----------")
    }

    // Debug the queue
    @objc public func debugQueue() {
        print("Debugging Queue:")
        for (index, entry) in player.queue.entries.enumerated() {
            print("\(index): \(entry.title)")
        }
    }

    // Ensure track availability in the queue
    public func queueTracks(_ tracks: [Track]) {
        Task {
            do {
                for track in tracks {
                    // Validate Play Parameters
                    guard let playParameters = track.playParameters else {
                        print("Track \(track.title) has no Play Parameters.")
                        continue
                    }
                    // Log the Play Parameters
                    print("Track Title: \(track.title)")
                    print("Play Parameters: \(playParameters)")
                    print("Raw Values: \(track.id.rawValue)")
                    // Ensure the ID is valid
                    if track.id.rawValue.isEmpty {
                        print("Track \(track.title) has an invalid or empty ID in Play Parameters.")
                        continue
                    }
                    // Queue the track
                    try await player.queue.insert(track, position: .afterCurrentEntry)
                    print("Queued track: \(track.title)")
                }
                print("Tracks successfully added to the queue.")
            } catch {
                print("Error queuing tracks: \(error)")
            }
            debugQueue()
        }
    }

    // Clear the current queue
    @objc public func resetMusicPlayer() {
        Task {
            player.stop()
            player.queue.entries.removeAll()
            print("Queue cleared.")
            print("Apple Music player reset successfully.")
        }
    }
}
```

I opened an Apple Developer ticket, but I'm trying here as well. Thanks!
Replies: 1 · Boosts: 0 · Views: 460 · Jan ’25
Why is AVAudioEngine input giving all zero samples?
I am trying to get access to raw audio samples from the mic. I've written a simple example application that writes the values to a text file; it is shown below. All the input samples from the buffers delivered to the input tap are zero. What am I doing wrong? I did add the Privacy - Microphone Usage Description key to my application target's properties, and I am allowing microphone access when the application launches. I do find it strange that I have to grant permission every time, even though in Settings > Privacy my application is listed as one of the applications allowed to access the microphone.

```swift
class AudioRecorder {
    private let audioEngine = AVAudioEngine()
    private var fileHandle: FileHandle?

    func startRecording() {
        let inputNode = audioEngine.inputNode
        let audioFormat: AVAudioFormat
        #if os(iOS)
        let hardwareSampleRate = AVAudioSession.sharedInstance().sampleRate
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: 1)!
        #elseif os(macOS)
        audioFormat = inputNode.inputFormat(forBus: 0) // Use input node's current format
        #endif
        setupTextFile()
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { [weak self] buffer, _ in
            self!.processAudioBuffer(buffer: buffer)
        }
        do {
            try audioEngine.start()
            print("Recording started with format: \(audioFormat)")
        } catch {
            print("Failed to start audio engine: \(error.localizedDescription)")
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        print("Recording stopped.")
    }

    private func setupTextFile() {
        let tempDir = FileManager.default.temporaryDirectory
        let textFileURL = tempDir.appendingPathComponent("audioData.txt")
        FileManager.default.createFile(atPath: textFileURL.path, contents: nil, attributes: nil)
        fileHandle = try? FileHandle(forWritingTo: textFileURL)
    }

    private func processAudioBuffer(buffer: AVAudioPCMBuffer) {
        guard let channelData = buffer.floatChannelData else { return }
        let channelSamples = channelData[0]
        let frameLength = Int(buffer.frameLength)
        var textData = ""
        var allZero = true
        for i in 0..<frameLength {
            let sample = channelSamples[i]
            if sample != 0 {
                allZero = false
            }
            textData += "\(sample)\n"
        }
        if allZero {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels. All data is zero.")
        } else {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels.")
        }
        // Write to file
        if let data = textData.data(using: .utf8) {
            fileHandle!.write(data)
        }
    }
}
```
Replies: 4 · Boosts: 0 · Views: 924 · Jan ’25
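When every tap callback reports zeros, it can help to make the silence check itself testable, separate from the engine. A small sketch with hypothetical helper names (isSilent and peakLevelDBFS are not AVFoundation API) that the tap above could call instead of the inline loop:

```swift
import Foundation

// Hypothetical helpers: classify a captured buffer's samples without
// involving the audio engine, so the check can be unit-tested.
func isSilent(_ samples: [Float]) -> Bool {
    samples.allSatisfy { $0 == 0 }
}

// Peak level in dBFS; -infinity for an all-zero (or empty) buffer.
// This distinguishes "truly all zeros" (likely a permission or route
// problem) from "very quiet but live" input.
func peakLevelDBFS(_ samples: [Float]) -> Float {
    let peak = samples.map { abs($0) }.max() ?? 0
    return peak > 0 ? 20 * log10(peak) : -.infinity
}
```

A buffer that reports a finite but low peak (say, below -60 dBFS) points at gain or routing; a literal -infinity peak on every callback points at permission, entitlement, or format mismatch.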
audioEngine.start() fails when the app is in the background
```swift
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!

func startAnalyzing() {
    inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("Sample rate mismatch; retapping with the hardware rate.")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print("Failed to start audio engine: \(error)")
    }
}
```

I send the app to the background and then call startAnalyzing(), which reports an error even though the background recording permissions are configured:

```
[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
Audio engine couldn't start.
```

Is starting the engine from the background not allowed?
Replies: 1 · Boosts: 0 · Views: 511 · Jan ’25
Rear View Camera Installed – Now CarPlay Audio Stops After 15 Seconds via Bluetooth
I recently installed a rear-view camera in my car, and ever since, I've been experiencing a frustrating issue with my CarPlay. After about 15 seconds of playing audio via Bluetooth, the sound stops coming out of the speakers, even though the song continues to run in the background. For context, my stereo system is an aftermarket unit that I installed to enable CarPlay functionality. Everything worked perfectly before adding the rear-view camera. Unfortunately, my unit does not have a port for a wired connection, so I can't test the audio using a cable. Has anyone experienced a similar issue? Could the camera installation be interfering with the Bluetooth or audio system somehow? Any advice or troubleshooting tips would be greatly appreciated!
Replies: 1 · Boosts: 0 · Views: 312 · Jan ’25
How to set volume with MusicKit Web?
I've got a web app built with MusicKit that displays a list of songs. I have player controls for play, pause, skip next, skip previous, toggle shuffle, and set repeat mode. All of these work through the music instance. The play button, when nothing is playing and nothing is in the queue, enqueues all the tracks and starts playing, for example:

```js
await music.setQueue({ songs, startPlaying: true });
```

I've implemented a progress slider based on feedback from the "playbackProgressDidChange" listener. Now, how in the world can I set the volume? This seems like it should be simple, but I am at a complete loss here. The docs say: "The volume of audio playback, which is set directly on the HTMLMediaElement as the HTMLMediaElement.volume property. This value ranges between 0, which would be muting the audio, and 1, which would be the loudest possible." Given that all my controls work off the music instance, I don't understand how I can do that. In this video from WWDC 2022, music web components are touched on briefly. These are also documented very sparsely. The volume docs are here. For the life of me, I can't even get the volume web component to display in the UI. It appears that MusicKit Web is hobbled compared to the native implementation, but surely adjusting volume shouldn't be that hard, right? I'd appreciate any insight on how to do this, including how to get web components to work (in a Next.js app). Thanks.
Replies: 2 · Boosts: 0 · Views: 561 · Jan ’25
Inquiry about Potential Core Audio Improvements
Hi everyone, I wanted to bring up a question about Core Audio and its potential for future updates or improvements, specifically regarding latency optimization. As someone who relies on Core Audio for real-time audio processing, any enhancements in this area would be incredibly beneficial for professionals in the industry. Does anyone know if Apple has shared any plans or updates regarding Core Audio’s performance, particularly for low-latency applications? I’d appreciate any insights or advice from the community! Thanks so much! Best, Michael
Replies: 1 · Boosts: 0 · Views: 495 · Jan ’25
How to find `AudioHardwareControl` direction?
I'm working with the modern Core Audio API introduced in macOS Sequoia. I have an AudioHardwareDevice which has several controls of type AudioHardwareControl. I figured out that to filter for volume controls only, I can use the condition classID == kAudioVolumeControlClassID. Some devices have volume controls for both input and output. How can I determine the direction of a control? Streams, i.e. AudioHardwareStream objects, have a direction, but I haven't found a way to map controls to streams. There are kAudioObjectPropertyScopeInput and kAudioObjectPropertyScopeOutput property scopes, but no matter what I tried, controls always return false for any control.hasProperty(address: whatever). Any other ideas?
Replies: 1 · Boosts: 0 · Views: 502 · Jan ’25
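If a scoped property can be read off the control at all, classifying it reduces to comparing FourCCs; the classic HAL also declares a kAudioControlPropertyScope selector in AudioHardwareBase.h, which may be worth probing (treat that as an assumption to verify). A plain-Swift sketch of the comparison side — the numeric literals mirror the 'inpt'/'outp'/'glob' values that kAudioObjectPropertyScopeInput/Output/Global are defined as:

```swift
// Sketch: classify a Core Audio property scope FourCC as a direction.
// ControlDirection and direction(forScope:) are hypothetical names.
enum ControlDirection { case input, output, global, unknown }

func direction(forScope scope: UInt32) -> ControlDirection {
    switch scope {
    case 0x696E_7074: return .input   // 'inpt' (kAudioObjectPropertyScopeInput)
    case 0x6F75_7470: return .output  // 'outp' (kAudioObjectPropertyScopeOutput)
    case 0x676C_6F62: return .global  // 'glob' (kAudioObjectPropertyScopeGlobal)
    default:          return .unknown
    }
}
```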
[VisionOS Audio] AVAudioPlayerNode occasionally produces loud popping/distortion when playing PCM data
I'm experiencing audio issues while developing for visionOS when playing PCM data through AVAudioPlayerNode.

Issue description:
- Occasionally, the speaker produces loud popping sounds or distorted noise
- This occurs during PCM audio playback using AVAudioPlayerNode
- The issue is intermittent and doesn't happen every time

Technical details:
- Platform: visionOS
- Device: Vision Pro / simulator
- Audio framework: AVFoundation
- Audio node: AVAudioPlayerNode
- Audio format: PCM

I would appreciate any insights on:
- Common causes of audio distortion with AVAudioPlayerNode
- Recommended best practices for handling PCM playback on visionOS
- Potential configuration issues that might cause this behavior

Has anyone encountered similar issues or found solutions? Any guidance would be greatly helpful. Thank you in advance!
Replies: 2 · Boosts: 1 · Views: 639 · Jan ’25