Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under the Audio subtopic

Post · Replies · Boosts · Views · Activity

Playing audio live from Bluetooth headset on iPhone speaker
Hi guys, I am having an issue live-streaming audio from a Bluetooth headset and playing it live on the iPhone speaker. I am able to redirect audio back to the headset, but that is not what I want. The issue happens when I try to override the output: the iPhone switches to the speaker but also switches the microphone. Here is an example of the code:

    import AVFoundation

    class AudioRecorder {
        let player: AVAudioPlayerNode
        let engine: AVAudioEngine
        let audioSession: AVAudioSession
        let audioSessionOutput: AVAudioSession

        init() {
            self.player = AVAudioPlayerNode()
            self.engine = AVAudioEngine()
            self.audioSession = AVAudioSession.sharedInstance()
            self.audioSessionOutput = AVAudioSession()
            do {
                try self.audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: [.defaultToSpeaker])
                try self.audioSessionOutput.setCategory(AVAudioSession.Category.playAndRecord, options: [.allowBluetooth]) // enables Bluetooth HFP profile
                try self.audioSession.setMode(AVAudioSession.Mode.default)
                try self.audioSession.setActive(true)
                // try self.audioSession.overrideOutputAudioPort(.speaker) // doesn't work
            } catch {
                print(error)
            }

            let input = self.engine.inputNode
            self.engine.attach(self.player)
            let bus = 0
            let inputFormat = input.inputFormat(forBus: bus)
            self.engine.connect(self.player, to: engine.mainMixerNode, format: inputFormat)
            input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
                self.player.scheduleBuffer(buffer)
                print(buffer)
            }
        }

        public func start() {
            try! self.engine.start()
            self.player.play()
        }

        public func stop() {
            self.player.stop()
            self.engine.stop()
        }
    }

I am not sure if this is a bug or not. Can somebody point me in the right direction? Is there a way to design custom audio routing? I would also appreciate some good documentation besides the AVFoundation docs.
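One approach sometimes suggested for this (a sketch under assumptions, not a confirmed fix; iOS may still move the microphone when the HFP profile is active): configure a single shared session with both .allowBluetooth and .defaultToSpeaker, explicitly set the Bluetooth HFP port as the preferred input, and only then override the output route. Note that the second AVAudioSession() instance in the code above has no effect, since routing is governed by the shared instance.

```swift
import AVFoundation

// Sketch: keep the Bluetooth mic as input while forcing output to the speaker.
func routeBluetoothMicToSpeaker() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)

    // Prefer the Bluetooth HFP input if one is available.
    if let btInput = session.availableInputs?.first(where: { $0.portType == .bluetoothHFP }) {
        try session.setPreferredInput(btInput)
    }
    // Override only the output; the input preference was set above.
    try session.overrideOutputAudioPort(.speaker)
}
```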
Replies: 0 · Boosts: 0 · Views: 309 · Activity: Mar ’25
AVAudioEngine. Select input device on macOS
Hello! I'm using AVFoundation to preview video and audio from a selected device, and I'm trying to use AVAudioEngine to preview the audio in real time, but I can't figure out how to select the input device; I only ever hear my default microphone. So far I'm using AVCaptureAudioPreviewOutput to hear the audio in real time, but it seems to have noticeable delay. On iOS this is easy with AVAudioEngine, but on macOS I'm stuck.
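One technique that may help (a sketch, assuming macOS and an already-known AudioDeviceID): AVAudioEngine's inputNode wraps a HAL audio unit, and you can point that unit at a specific CoreAudio capture device before starting the engine.

```swift
import AVFoundation
import CoreAudio

// Sketch: select a capture device for the engine's input node (macOS only).
func setInputDevice(_ deviceID: AudioDeviceID, on engine: AVAudioEngine) -> OSStatus {
    guard let unit = engine.inputNode.audioUnit else {
        return kAudioUnitErr_Uninitialized
    }
    var device = deviceID
    return AudioUnitSetProperty(unit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &device,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}
```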
Replies: 1 · Boosts: 0 · Views: 426 · Activity: Mar ’25
AVAudioRecorder records silence
We have an application using the PTT Framework to record audio messages while the app is backgrounded. Right now we are using AVAudioRecorder for that purpose. The problem is that one specific user frequently hits an issue: the recorded audio contains only silence. I've checked almost everything I can imagine but haven't found a possible cause. Conditions:

- AVAudioRecorder uses the following configuration:

    [
        AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 16000.0
    ]

- The app waits for both didBeginTransmitting and didActivate audioSession from PTChannelManager (the audio session has the playback category at that moment)
- The app changes the AVAudioSession category to playAndRecord
- The app gets a routeChangeNotification with reason categoryChange and category = playAndRecord
- There are no interruption notifications from AVAudioSession during recording
- There are no error notifications from AVAudioRecorder

Any idea what exactly I'm doing wrong? Is there anything else I should check? Thanks in advance.

P.S. It looks like recording audio with AudioUnit has the same issue, but let's exclude it from the question for now for simplicity.
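For what it's worth, one diagnostic that might narrow this down for the affected user (a sketch using standard AVAudioSession properties, not a confirmed cause): log the active input route and record permission immediately before record() is called, to confirm the session actually has a live input at that moment.

```swift
import AVFoundation

// Sketch: confirm there is a usable input just before starting the recorder.
func logRecordingPreconditions() {
    let session = AVAudioSession.sharedInstance()
    print("category:", session.category.rawValue)
    print("inputAvailable:", session.isInputAvailable)
    print("route inputs:", session.currentRoute.inputs.map(\.portName))
    print("record permission granted:", session.recordPermission == .granted)
}
```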
Replies: 3 · Boosts: 0 · Views: 385 · Activity: Mar ’25
Lightning to HDMI mirrors
I am developing a VOD playback app. When I stream video to an external monitor connected via a Lightning-to-HDMI adapter on iOS 18 or later, the screen goes dark and I cannot confirm playback. The app does not detect HDMI and present the player on the external display separately; it simply mirrors the screen. We have confirmed that the same phenomenon occurs with other services, but playback did work with some services such as Apple TV. Please let us know if there are any other settings required for video playback, such as content-protection certificates. We would also like to know whether the problem is specific to iOS 18 or later.
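If the content is DRM-protected, a dark mirrored screen often indicates that HDCP requirements were not met over that adapter. A diagnostic sketch using standard AVFoundation properties (an assumption about the cause, not confirmed from the post):

```swift
import AVFoundation

// Sketch: check whether output is being obscured for content-protection reasons.
func logExternalPlaybackState(for player: AVPlayer) {
    print("allowsExternalPlayback:", player.allowsExternalPlayback)
    print("externalPlaybackActive:", player.isExternalPlaybackActive)
    if let item = player.currentItem {
        print("output obscured (insufficient HDCP):",
              item.isOutputObscuredDueToInsufficientExternalProtection)
    }
}
```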
Replies: 0 · Boosts: 0 · Views: 264 · Activity: Mar ’25
Background recording app getting killed by the watchdog. How to avoid it?
We have the necessary background recording entitlements, and many users do not run into any issues. However, there is a subset of users who routinely get recordings cut short. We have narrowed this down and believe it to be the work of the watchdog. First we removed the entire view hierarchy when the app is backgrounded, leaving just Text("Recording"). This got the CPU usage in the profiler down to 0%, and we saw massive improvements in the recording success rate. We walked away assuming that was enough. However, we are still seeing the same sort of crashes, all in the background. We're using Observation to drive audio state changes to a Live Activity. Are those Observations causing the problem? Why doesn't Apple provide a better API for background audio? The internet is full of similar issues:

https://stackoverflow.com/questions/76010213/why-is-my-react-native-app-sometimes-terminated-in-the-background-while-tracking
https://stackoverflow.com/questions/71656047/why-is-my-react-native-app-terminating-in-the-background-while-recording-ios-r
https://github.com/expo/expo/issues/16807

This is a terrible user experience, and we have very little visibility into what is happening and why. Nowhere does Apple's documentation state that for background recording to work, the app can only be Text("Recording"). It does not outline a CPU or memory threshold. It just kills us.
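For reference, a minimal sketch of the scene-phase gating the poster describes (FullRecorderUI is a hypothetical stand-in for the real interface; uses the iOS 17-style two-parameter onChange):

```swift
import SwiftUI

struct RecorderRootView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var isBackgrounded = false

    var body: some View {
        Group {
            if isBackgrounded {
                // Minimal hierarchy while backgrounded to keep CPU near 0%.
                Text("Recording")
            } else {
                FullRecorderUI() // hypothetical: the app's normal interface
            }
        }
        .onChange(of: scenePhase) { _, phase in
            isBackgrounded = (phase == .background)
        }
    }
}
```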
Replies: 2 · Boosts: 0 · Views: 390 · Activity: Mar ’25
Error -50 writing to AVAudioFile
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad-parameter error from the API. Does this API not support the specified format, or is there some other issue? Here is the debugger output.

    (lldb) po audioFile.url
    ▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
      - _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
      - _parseInfo : nil
      - _baseParseInfo : nil
    (lldb) po error
    Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
    (lldb) po buffer.format
    <AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
    (lldb) po audioFile.fileFormat
    <AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
    (lldb) po buffer.frameLength
    882
    (lldb) po buffer.audioBufferList
    ▿ 0x0000000300941e60
      - pointerValue : 12894608992

This code handles the details of converting the LiveSwitch frame into an AVAudioPCMBuffer.

    extension FMLiveSwitchAudioFrame {
        func convertedToPCMBuffer() -> AVAudioPCMBuffer {
            Self.convertToAVAudioPCMBuffer(from: self)!
        }

        static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
            // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
            guard let buffer = frame.buffer(), let format = buffer.format() as? FMLiveSwitchAudioFormat else {
                return nil
            }
            // Extract PCM format details from FMLiveSwitchAudioFormat
            let sampleRate = Double(format.clockRate())
            let channelCount = AVAudioChannelCount(format.channelCount())
            // Determine bytes per sample based on bit depth
            let bitsPerSample = 16
            let bytesPerSample = bitsPerSample / 8
            let bytesPerFrame = bytesPerSample * Int(channelCount)
            let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)
            // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
            guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: channelCount, interleaved: true) else {
                return nil
            }
            // Create an AudioBufferList to wrap the existing buffer
            let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
            audioBufferList.pointee.mNumberBuffers = 1
            audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
            audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
            audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer
            // Transfer ownership of the buffer to AVAudioPCMBuffer
            let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
                // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
                buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
            } */
            pcmBuffer?.frameLength = frameLength
            return pcmBuffer
        }
    }

This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and optionally update a scrolling signal display on the screen.

    private func onRaisedFrame(obj: Any!) -> Void {
        // Bail out early if no one is interested in the data.
        guard isMonitoring else { return }
        // Convert LS frame to AVAudioPCMBuffer (no-copy)
        let frame = obj as! FMLiveSwitchAudioFrame
        let buffer = frame.convertedToPCMBuffer()
        // Hand subscribers a reference to the buffer for rendering to display.
        bufferPublisher?.send(buffer)
        // If we have an output file, store the data there, as well.
        guard let audioFile = self.audioFile else { return }
        do {
            try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
        } catch {
            FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
            self.audioFile = nil
        }
    }

This is how the audio file is being set up.

    static var recordingFormat: AVAudioFormat = {
        AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
    }()

    let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
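One plausible culprit (an assumption based on the error, not confirmed from the post): AVAudioFile(forWriting:settings:) opens the file with a default processing format of deinterleaved Float32, and write(from:) requires buffers in the processing format rather than the file format, so Int16 interleaved buffers get rejected with -50. A minimal sketch of the alternative initializer that pins the processing format:

```swift
import AVFoundation

// Sketch: make the file's *processing* format match the Int16 interleaved
// buffers being written, so ExtAudioFileWrite accepts them.
func makeRecordingFile(at outputURL: URL, settings: [String: Any]) throws -> AVAudioFile {
    try AVAudioFile(forWriting: outputURL,
                    settings: settings,
                    commonFormat: .pcmFormatInt16,
                    interleaved: true)
}
```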
Replies: 0 · Boosts: 0 · Views: 387 · Activity: Mar ’25
Making sense of AVAudioSession interruption notifications
I have an app under development - demo here - https://youtu.be/VbAfUk_eYl0?si=s6EDBx-4G6P_QbZO - which is sort of an audio player for airdropped files - something useful to musicians who dump work in progress to their phone, make notes, revise and update. I've been testing my handling of audio session interruption notifications, but there seems to be a lot of inconsistency in how, when and why iOS delivers them, and I'm wondering if there is some rhyme or reason to it that I'm just not detecting. For example, I am playing a song in my app. I switch to Apple Music and start playing a song there. My app gets an interruption-began notification; this is consistent. I switch back to my app, and about half the time I will get an interruption-ended notification (often coupled with a blast of the tail of whatever audio buffer was partially played when the interruption started, even though the engine was stopped, and followed by a call to my AVAudioPlayerNodeCompletionCallback - is there some way to avoid this?). Half the time I don't get an interruption-ended notification; my app can (as expected) end the interruption by activating the AVAudioSession and playing something. I have not been able to determine any pattern to this behavior, other than that if my app started playing using AVAudioPlayerNode.scheduleSegment rather than scheduleFile, the notification seems to be consistently delivered on app activation rather than when I activate the session programmatically. I would like my app to behave deterministically, and would appreciate any help in deciphering what causes the inconsistent behavior in notifications from iOS.
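When instrumenting this, it can help to log everything iOS attaches to the notification, including the .shouldResume option on the ended case and the wasSuspended flag on the began case (a sketch using the documented userInfo keys):

```swift
import AVFoundation

// Sketch: dump the full payload of every interruption notification.
let observer = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let info = note.userInfo,
          let raw = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
    switch type {
    case .began:
        // true when the "interruption" was really an app-suspension artifact
        let wasSuspended = info[AVAudioSessionInterruptionWasSuspendedKey] as? Bool ?? false
        print("interruption began, wasSuspended:", wasSuspended)
    case .ended:
        let rawOpts = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let opts = AVAudioSession.InterruptionOptions(rawValue: rawOpts)
        print("interruption ended, shouldResume:", opts.contains(.shouldResume))
    @unknown default:
        break
    }
}
```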
Replies: 2 · Boosts: 0 · Views: 339 · Activity: Mar ’25
Failure of AudioUnitSetProperty when using MacCatalyst (works on macOS)
I was trying to set a custom audio output device for generated audio on Mac Catalyst. While using

    let status = AudioUnitSetProperty(outputUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &outputDeviceID,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))

kAudioOutputUnitProperty_CurrentDevice is invalid, and status = -10879, indicating an error.

STEPS TO REPRODUCE
1. Set the Run Destination to macOS and run the program. "AudioUnitSetProperty: 0" should be printed, indicating it works fine.
2. Set the Run Destination to Mac Catalyst and run the program. "Error setting output device: -10879" should be printed, indicating an error.
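For context (a note based on the public CoreAudio headers, not on this specific project): -10879 is kAudioUnitErr_InvalidProperty, i.e. the output unit reached from the Catalyst runtime does not recognize kAudioOutputUnitProperty_CurrentDevice. A tiny sketch for decoding the status while debugging:

```swift
import AudioToolbox

// Sketch: map common AudioUnit OSStatus codes to readable names.
func describe(_ status: OSStatus) -> String {
    switch status {
    case noErr:                          return "ok"
    case kAudioUnitErr_InvalidProperty:  return "kAudioUnitErr_InvalidProperty (-10879)"
    case kAudioUnitErr_InvalidParameter: return "kAudioUnitErr_InvalidParameter (-10878)"
    case kAudioUnitErr_InvalidScope:     return "kAudioUnitErr_InvalidScope (-10866)"
    default:                             return "OSStatus \(status)"
    }
}
```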
Replies: 4 · Boosts: 1 · Views: 660 · Activity: Mar ’25
Implementing Enhanced Dialogue on tvOS 18
How does a third-party developer go about supporting the new Enhanced Dialogue option for video apps in tvOS 18? If an app is using the standard AVPlayerViewController, I had assumed it would be a simple-ish matter of building against the tvOS 18 SDK, but apparently not: the options don't appear, not even greyed out.
Replies: 1 · Boosts: 1 · Views: 591 · Activity: Mar ’25
Accessing the transport in Logic Pro
Hi, I need to read whether the transport is playing or stopped, but my current method, which works for VST, does not work for AU. Is there a Logic Pro (LPX) resource available for developers anywhere?

    if (auto* playHead = processor->getPlayHead())
    {
        juce::AudioPlayHead::CurrentPositionInfo posInfo;
        if (playHead->getCurrentPosition(posInfo))
        {
            bool isCurrentlyPlaying = posInfo.isPlaying;
            if (isCurrentlyPlaying != wasTransportPlaying)
            {
                if (isCurrentlyPlaying)
                {
                    wasTransportPlaying = isCurrentlyPlaying;
                    startAllTimers();
                }
                else
                {
                    wasTransportPlaying = isCurrentlyPlaying;
                    stopAllTimers();
                }
            }
        }
    }

Thanks :)
Replies: 0 · Boosts: 0 · Views: 258 · Activity: Mar ’25
CoreAudio HAL plugin vs dext
The presentation "create audio drivers with DriverKit" from WWDC 2021 demonstrates how to use a dext to implement a virtual audio driver. It also says " If a virtual audio driver or device is all that is needed, the audio server plug-in driver model should continue to be used". Indeed, in AudioDriverKit/AudioDriverKitTypes.h, there is no IOUserAudioTransportType Virtual, although CoreAudio/AudioHardwareBase.h includes kAudioDeviceTransportTypeVirtual. For one of our products, we require virtual devices to implement a software loopback "cable". We've implemented this using the "traditional" HAL plugin, and as a proof-of-concept, also using a dext. In the dext, I tried setting the transport type to 'virt', which seems to only have the effect of changing the icon shown in Audio Midi Setup. HAL plugins require an installer, and the installer has to kill coreaudiod in a post-install script. You have to turn off SIP to debug them. Just like AudioDriverKit drivers, they are out-of-process and run in a process not owned by the hosting app. Our HAL plugin's interface is property based; we had to write a lot of boiler-plate code to implement required properties. Writing an AudioDriverKit driver is in most respects easier - a lot of the scaffolding is implemented in the base driver, which we only alter where required. Debugging and installation is much easier. The dext works just fine, as far as we can ascertain, just as well as a HAL plugin. So, my question is - is the advice to use a HAL plugin for a virtual device still correct in 2025? And if so, what's the objection? We'd really prefer to ship the AudioDriverKit virtual audio device.
Replies: 2 · Boosts: 1 · Views: 489 · Activity: Mar ’25
Memory leak AVAudioPlayer
Let's consider the following code. I've created an actor that loads a list of .mp3 files from a Bundle and then makes them available for audio playback. Unfortunately, I'm experiencing a memory leak at the play method:

    player.play()

From Instruments I get:

    _malloc_type_malloc_outlined  libsystem_malloc.dylib
    start_wqthread                libsystem_pthread.dylib

    private actor AudioActor {
        enum Failure: Error {
            case soundsNotLoaded([AudioPlayerClient.Sound: Error])
        }

        enum Player {
            case music(AVAudioPlayer)
        }

        var players: [Sound: Player] = [:]
        let bundles: [Bundle]

        init(bundles: UncheckedSendable<[Bundle]>) {
            self.bundles = bundles.wrappedValue
        }

        func load(sounds: [Sound]) throws {
            try AVAudioSession.sharedInstance().setActive(true, options: [])
            var errors: [Sound: Error] = [:]
            for sound in sounds {
                // Note: the original referenced an undefined `bundle`; search all bundles instead.
                guard let url = bundles.compactMap({ $0.url(forResource: sound.name, withExtension: "mp3") }).first else { continue }
                do {
                    self.players[sound] = try .music(AVAudioPlayer(contentsOf: url))
                } catch {
                    errors[sound] = error
                }
            }
            guard errors.isEmpty else { throw Failure.soundsNotLoaded(errors) }
        }

        func play(sound: Sound, loops: Int?) throws {
            guard let player = self.players[sound] else { return }
            switch player {
            case let .music(player):
                player.numberOfLoops = loops ?? -1
                player.play()
            }
        }

        func stop(sound: Sound) throws {
            guard let player = self.players[sound] else { throw Failure.soundsNotLoaded([:]) }
            switch player {
            case let .music(player):
                player.stop()
            }
        }
    }
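One way to tell whether Instruments is showing a true leak or a one-time internal cache (a generic diagnostic sketch, not specific to this actor): drive play() repeatedly and watch whether the allocation footprint keeps climbing across iterations or plateaus after the first.

```swift
import AVFoundation

// Sketch: a plateau after the first iteration suggests a one-time cache,
// not a leak; steady growth per iteration suggests a real leak.
func stressPlay(_ player: AVAudioPlayer, times: Int) {
    for _ in 0..<times {
        player.play()
        player.stop()
        player.currentTime = 0
    }
}
```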
Replies: 0 · Boosts: 0 · Views: 101 · Activity: Mar ’25
Feature Request: Long-Lived Access to Personal Apple Music Data
Use Case Summary

I'm developing a personal portfolio website (using Nuxt) and want to display information from my own Apple Music library: showcasing personal playlists, recently played tracks, or a read-only "now playing" widget. This is purely for personal use on my website and doesn't require other users to log in. With Spotify's API, implementing this was straightforward thanks to automatic token refresh. I want a similarly seamless integration with Apple Music.

Challenge with MusicKit and Music User Tokens

Apple Music API requirements: Apple's Music API requires a valid Music User Token (MUT) for requests involving personal library data. Beyond the Apple Developer Token, you must obtain a user-specific token via MusicKit authentication to access your own library playlists, play history, or current playback status.

Token expiration and manual renewal: Music User Tokens expire after approximately 6 months without any mechanism to automatically refresh or renew them, unlike typical OAuth flows that provide refresh tokens. Apple's guidance suggests the device (e.g., iPhone) is responsible for obtaining new user tokens when old ones expire. This works for interactive apps on Apple devices but fails in server-side or long-lived web contexts like a personal website widget.

Impact on personal projects: Displaying Apple Music data on a public-facing site becomes difficult. I would need to periodically re-authenticate through the MusicKit JS flow every few months just to keep a widget alive. Embedding credentials in a public site is insecure, and manual token refreshing is cumbersome and easy to forget.

Comparison to Spotify's Token Model

Spotify's API offers a developer-friendly authentication model. Their OAuth flow provides a Refresh Token that applications can use to obtain new access tokens automatically without requiring user re-authorization. This means a personal app can maintain continuous access to a user's Spotify data for extended periods until access is revoked. When building a similar feature with Spotify, this automatic token renewal was crucial: I could safely store the refresh token on my server and have my app periodically update the access token. Many developers have created public-facing widgets showing currently playing tracks on blogs or GitHub profiles using this model. Unfortunately, Apple Music's API lacks an equivalent capability, putting it at a disadvantage for personal projects.

Proposed Solutions

I request Apple's consideration of one of these enhancements:

1. Provide a mechanism to refresh or extend a Music User Token programmatically for server-side applications. This could be an OAuth-style refresh token issued alongside the MUT, or a dedicated endpoint to exchange an expired MUT for a new one. This would enable renewal without a full user re-auth/login each time.
2. Allow developers to access their own Apple Music library data with just the long-lived Developer Token. Apple could permit GET requests to personal library endpoints using the Developer Token alone, or a special token tied to the developer's Apple ID. This access would be read-only: no ability to modify the library, purely for retrieving data. It could be an opt-in feature in the Apple Developer account settings.

Either solution would significantly improve the developer experience with the Apple Music API in personal projects.

Security and Privacy Considerations

This request is not about accessing others' data or creating privacy loopholes; it's about empowering an Apple Music subscriber to access their own information more conveniently. The proposed options respect privacy principles:

- The data accessed is only what the user already has access to: their own playlists, library items, or playback status.
- An automatic token refresh can be designed securely (revocable tokens bound to a single account with no increase in permissions).
- Read-only developer token access could be restricted to non-sensitive data and require explicit opt-in.

Conclusion

I request an improvement to Apple Music's developer experience through either (1) an automatic Music User Token refresh mechanism, or (2) a provision for read-only personal library access using a Developer Token. This would bring Apple Music integration capabilities closer to parity with services like Spotify for personal projects. I ask Apple's Developer Relations and the Apple Music API team to consider this feature request. If there are existing best practices or workarounds with current APIs, I would appreciate guidance. I also invite feedback from Apple or other developers: are there known patterns for maintaining an Apple Music user token for server-side applications, or any plans to support non-interactive use cases? Any advice is welcome. Thank you for your consideration. I look forward to integrating Apple Music into my personal site as smoothly as with other services, and believe many developers would benefit from this added flexibility.

Sources:
- User Authentication for MusicKit - Requirements for Music User Tokens
- StackOverflow: Do Apple Music User Tokens expire? - Confirmation of 6-month expiration
- MetaBrainz GSoC Blog - Documentation of MusicKit authentication limitations
- Apple Developer Forums - Information on token renewal behavior
- Spotify for Developers - Documentation on refresh token mechanism
Replies: 1 · Boosts: 0 · Views: 225 · Activity: Mar ’25
Play Audio for a Metronome
Hi, I am looking for a good way to play sounds at a high frequency. At the moment I am using AVAudioEngine: I create a couple of AVAudioPlayerNodes, and for each sound I need to play I create an AVAudioPCMBuffer. When the app needs to play a sound, I get the correct AVAudioPCMBuffer for the sound, take the first available AVAudioPlayerNode, and feed it the buffer. The timing for a metronome app has to be very precise, because if it's off by even about 16 ms the user can hear that it is not playing at the right interval. For low tempos this works without any problems, but at high tempos it gets worse. Maybe someone has an idea of how I can improve my method. It's a plugin for Flutter.

    import AVFoundation

    class FastSoundPlayer {
        private var audioPlayers: [SoundPlayer?] = []
        private var sounds: [String: Sound] = [:]
        private var engine = AVAudioEngine()
        let session = AVAudioSession.sharedInstance()

        init() {
            do {
                try session.setCategory(AVAudioSession.Category.playback, mode: AVAudioSession.Mode.default, options: [AVAudioSession.CategoryOptions.mixWithOthers])
                try session.setActive(true)
                createSoundPlayers(count: 20)
                try engine.start()
            } catch {
                print("Error starting audio engine: \(error.localizedDescription)")
            }
        }

        // Selector method to handle applicationDidBecomeActiveNotification
        func applicationDidBecomeActive() {
            // Reinitialize AVAudioEngine and reattach all nodes
            do {
                engine.reset()
                objc_sync_enter(audioPlayers)
                audioPlayers.removeAll()
                createSoundPlayers(count: 20)
                objc_sync_exit(audioPlayers)
                try engine.start()
            } catch {
                print("Error starting audio engine: \(error.localizedDescription)")
            }
        }

        func createSoundPlayers(count: Int) {
            for _ in 0..<count {
                let player = SoundPlayer()
                engine.attach(player.player)
                engine.connect(player.player, to: engine.mainMixerNode, format: nil)
                audioPlayers.append(player)
            }
        }

        func load(sound: Data, name: String) {
            let sound = Sound(soundData: sound)
            sounds[name] = sound
        }

        func play(name: String) {
            if !engine.isRunning {
                applicationDidBecomeActive()
            }
            guard let sound = sounds[name] else {
                print("Sound not found")
                return
            }
            if let player = getAvailablePlayer() {
                player.play(sound: sound)
            }
        }

        func getAvailablePlayer() -> SoundPlayer? {
            for player in audioPlayers {
                if !player!.isPlaying {
                    return player
                }
            }
            return nil
        }
    }

    class SoundPlayer {
        let player = AVAudioPlayerNode()
        var isPlaying = false

        init() {
            player.volume = 1.0
        }

        func play(sound: Sound) {
            player.scheduleBuffer(sound.sound!, at: nil, options: .interrupts, completionCallbackType: .dataPlayedBack) { _ in
                self.complete()
            }
            if (player.engine != nil && player.engine!.isRunning) {
                player.play()
                isPlaying = true
            }
        }

        func complete() {
            isPlaying = false
        }
    }

    class Sound {
        var sound: AVAudioPCMBuffer?

        init(soundData: Data) {
            do {
                let temporaryURL = FileManager.default.temporaryDirectory.appendingPathComponent("tempSound.wav")
                try soundData.write(to: temporaryURL)
                // Create AVAudioFile from the temporary file URL
                let audioFile = try AVAudioFile(forReading: temporaryURL)
                // Define the format for the PCM buffer (44100Hz, stereo)
                let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44100, channels: 2, interleaved: false)
                // Create AVAudioPCMBuffer
                guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: AVAudioFrameCount(audioFile.length)) else {
                    // Failed to create PCM buffer
                    self.sound = nil
                    return
                }
                // Read audio file into PCM buffer
                try audioFile.read(into: pcmBuffer)
                // Assign the created AVAudioPCMBuffer to the sound property
                self.sound = pcmBuffer
            } catch {
                print("Error loading sound file: \(error.localizedDescription)")
                self.sound = nil
            }
        }
    }

Thanks!
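A common way to tighten the timing (a sketch under assumptions, not from the post): rather than calling play() the moment a tick is due, schedule every click ahead of time at a sample-accurate AVAudioTime, so accuracy no longer depends on when the Dart/Swift callback fires.

```swift
import AVFoundation

// Sketch: schedule `count` clicks in advance, sample-accurately.
func scheduleClicks(on player: AVAudioPlayerNode,
                    click: AVAudioPCMBuffer,
                    bpm: Double,
                    count: Int) {
    let sampleRate = click.format.sampleRate
    let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)
    player.play() // start the player's timeline first
    guard let nodeTime = player.lastRenderTime,
          let playerTime = player.playerTime(forNodeTime: nodeTime) else { return }
    // Start slightly in the future so the first click isn't late.
    let start = playerTime.sampleTime + AVAudioFramePosition(sampleRate * 0.1)
    for i in 0..<count {
        let when = AVAudioTime(sampleTime: start + AVAudioFramePosition(i) * framesPerBeat,
                               atRate: sampleRate)
        player.scheduleBuffer(click, at: when, options: [])
    }
}
```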
Replies: 1 · Boosts: 0 · Views: 126 · Activity: Mar ’25
AudioUnit (AUv2) Session Compatibility After Adding MIDI Support
Hi there! We have a suite of AudioUnit v2 plugins that have been shipped for some time as aufx plugins, and we are looking into MIDI-related platform upgrades, so we need a way to update these plugins to request MIDI from Logic (and other AU hosts) but avoid changing our AU type and subtype so we don't break existing sessions. Any ideas on how we can do this?
Replies: 1 · Boosts: 0 · Views: 92 · Activity: Mar ’25
Best way to stream audio from file system
I am trying to stream audio from the local filesystem. For that, I am trying to use an AVAssetResourceLoaderDelegate with an AVURLAsset. However, the Content-Length is not known at the start. To overcome this, I tried several methods:

- Setting the content length to nil in the AVAssetResourceLoadingContentInformationRequest
- Setting the content length to -1 in the ContentInformationRequest

Both of these cause the AVPlayerItem to fail with an error. I also tried setting the Content-Length to INT_MAX and setting renewalDate = Date(timeIntervalSinceNow: 5). However, that seems to be buggy: even after updating the Content-Length to the correct value (e.g. X bytes) and finishing that loading request, the resource loader keeps getting requests with requestedOffset = X and dataRequest.requestsAllDataToEndOfResource = true. These requests keep coming indefinitely, and as a result the next item in the queue does not get played. Also, the .AVPlayerItemDidPlayToEndTime notification does not get called. I wanted to check whether this is expected behavior or a bug in my implementation. Also, what is the recommended way to stream audio of unknown initial length from the local file system? Thanks!
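For comparison, this is the shape of a delegate that works when the total length can be resolved up front (a sketch with a hypothetical fileURL; it sidesteps rather than solves the unknown-length case, since for a local file the size is usually knowable before playback):

```swift
import AVFoundation

final class LocalFileLoader: NSObject, AVAssetResourceLoaderDelegate {
    let fileURL: URL // hypothetical: the real local file behind the custom URL scheme

    init(fileURL: URL) { self.fileURL = fileURL }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        do {
            let data = try Data(contentsOf: fileURL) // fine for small files; map/stream otherwise
            if let info = loadingRequest.contentInformationRequest {
                info.contentType = AVFileType.mp3.rawValue // assumption: MP3 payload
                info.contentLength = Int64(data.count)     // known up front for a local file
                info.isByteRangeAccessSupported = true
            }
            if let dataRequest = loadingRequest.dataRequest {
                let offset = Int(dataRequest.requestedOffset)
                let length = min(dataRequest.requestedLength, data.count - offset)
                dataRequest.respond(with: data.subdata(in: offset..<offset + length))
            }
            loadingRequest.finishLoading()
        } catch {
            loadingRequest.finishLoading(with: error)
        }
        return true
    }
}
```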
Replies: 1 · Boosts: 0 · Views: 154 · Activity: Mar ’25
MusicKit: Best way to check if all tracks of albums are added to library.
I prefer to use the album fetched from the library instead of the catalog, since this is faster. If doing so, how can I check whether all tracks of an album have been added to the library? In that case I'd like to fetch the catalog version, or throw an error (for example, when offline). Using .with(.tracks) on the library album fetches only the tracks added to the library. The trackCount property refers to the tracks that can be fetched from the library. The isComplete property is always nil when fetching from the library. One possible approach is checking the trackNumber and discCount properties; however, this only detects that not all tracks of an album are in the library when a missing track precedes a present one. I'd like to handle that edge case as well. Is there currently a way to do this? I'd prefer not to rely on the Apple Music catalog for this, since it is supposed to work offline as well. Fetching and storing all trackIDs while connected and later comparing against them would work, but this could mean storing tens of thousands of track IDs. Thank you
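For reference, a sketch of the trackNumber heuristic described above, including its blind spot (MusicKit types; single-disc albums assumed for simplicity):

```swift
import MusicKit

// Sketch: flag an album as incomplete if library track numbers have gaps.
// Blind spot: tracks missing from the *end* of the album are not detected,
// which is exactly the edge case the post is asking about.
func looksIncomplete(_ album: Album) async throws -> Bool {
    let detailed = try await album.with(.tracks)
    guard let tracks = detailed.tracks else { return true }
    let numbers = tracks.compactMap(\.trackNumber).sorted()
    for (index, number) in numbers.enumerated() where number != index + 1 {
        return true // an earlier track is missing, shifting the numbering
    }
    return false
}
```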
Replies: 0 · Boosts: 0 · Views: 73 · Activity: Mar ’25