Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

All subtopics
Posts under Media Technologies topic

Post

Replies

Boosts

Views

Created

Ducking MusicKit output when playing another sound
I am developing an app that uses MusicKit to play music, and I then need to have spoken words played to the user while ducking the audio coming from MusicKit (ApplicationMusicPlayer). The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then playing it back with AVAudioPlayer under the AVAudioSession configured below. Sample code is below. The problem I am having is that .duckOthers is not ducking the ApplicationMusicPlayer output. Is this a bug, or am I doing this wrong?

// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
// Set a short preferred I/O buffer duration (this does not change the ducking level)
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)

// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()

// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
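For comparison, the conventional ducking pattern activates the session with .duckOthers just before the spoken audio plays and deactivates it with .notifyOthersOnDeactivation when playback finishes. A minimal sketch, assuming audioData holds the downloaded MP3; note that ducking applies to audio from other sessions, so same-app output such as ApplicationMusicPlayer may behave differently:

import AVFoundation

// Conventional duck-then-restore pattern: ducking is applied while the
// session is active and released on deactivation. Note that .duckOthers
// ducks audio from other sessions; output from the same app's session
// (such as ApplicationMusicPlayer) may not be affected.
final class SpeechPlayer: NSObject, AVAudioPlayerDelegate {
    private var player: AVAudioPlayer?

    func playSpeech(_ audioData: Data) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .spokenAudio, options: [.duckOthers])
        try session.setActive(true) // other audio ducks here
        let player = try AVAudioPlayer(data: audioData)
        player.delegate = self
        player.prepareToPlay()
        player.play()
        self.player = player
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        // Deactivating ends the duck and restores other audio's volume.
        try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    }
}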
0
0
65
Apr ’25
Format of 14-bit RAW bayer data from lower bit camera sensor?
I'm working on an application that uses the iPhone camera for scientific purposes, and as a result I would like to receive the sensor data in as unprocessed a format as possible. I'm using AVCapturePhotoOutput to take Bayer RAW stills and am receiving data in kCVPixelFormatType_14Bayer_RGGB format. However, I'm puzzled as to the content of the bits. I simply demosaic the image by taking each 2x2 square (RG over GB) and using R, (G+G)/2, B to get 16-bit RGB values, and this indeed works. However, I am puzzled by the values we are getting, as they seem to be approximately in the range 2048-16383. The top value is understandable: it is the maximum that you can fit in 14 bits (as implied by the pixel format type). However, we don't seem to be able to get lower than ~2048 no matter how black/dark we make the sensor. I'm aware that the sensor is probably not 14-bit (we're using the iPhone 16e camera) and that maybe this is to do with the way the sensor data is packaged. The Advances in iOS Photography video (https://developer.apple.com/videos/play/wwdc2016/501/) describes it as "10-bit sensor RAW packaged in 14 bits per pixel instead of eight." Is there any documentation describing what is going on here? It's vital for our use that we get as close to the raw camera sensor light readings as possible, so any pointers as to the mapping (e.g. decompanding?) being used would be extremely useful. Many thanks in advance for your help.
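For illustration, a minimal Swift sketch of the 2x2 demosaic described above, assuming the samples have already been copied out of the CVPixelBuffer into a tightly packed [UInt16] array (real buffers pad each row to bytesPerRow, which this sketch ignores); the function and parameter names are illustrative:

import Foundation

// Demosaic a packed RGGB mosaic: each 2x2 cell contributes one RGB triple,
// averaging the two green samples exactly as described above.
func demosaicRGGB(_ pixels: [UInt16], width: Int, height: Int) -> [(r: UInt16, g: UInt16, b: UInt16)] {
    var out: [(r: UInt16, g: UInt16, b: UInt16)] = []
    out.reserveCapacity((width / 2) * (height / 2))
    for y in stride(from: 0, to: height - 1, by: 2) {
        for x in stride(from: 0, to: width - 1, by: 2) {
            let r  = pixels[y * width + x]            // top-left: red
            let g1 = pixels[y * width + x + 1]        // top-right: green
            let g2 = pixels[(y + 1) * width + x]      // bottom-left: green
            let b  = pixels[(y + 1) * width + x + 1]  // bottom-right: blue
            out.append((r, UInt16((UInt32(g1) + UInt32(g2)) / 2), b))
        }
    }
    return out
}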
3
0
194
Apr ’25
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement. My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
2
2
464
Apr ’25
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement. My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
1
2
229
Apr ’25
issue in recording using AVAudio
Hi, in my project I am using AVFoundation to record audio. We are using the AVAudioMixerNode method below (inherited from AVAudioNode) to record the audio packets:

func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)

It works perfectly fine, but in the production environment a small percentage of users are facing an issue where, after recording a few packets, recording stops automatically without the audio engine being stopped. Can anyone help with why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving this notification, but when this issue happens I don't see any occurrence of that log. Also, is there any callback for when the engine stops?
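For context, a minimal sketch of the tap-based recording setup under discussion, assuming the engine's input node is the source. AVAudioEngine does not expose a dedicated "did stop" callback; observing AVAudioEngineConfigurationChange and checking isRunning is one common substitute. The class name and buffer size are illustrative:

import AVFoundation

// Install a tap on the input node and watch for configuration changes,
// which is one event that silently stops a running engine.
final class Recorder {
    private let engine = AVAudioEngine()
    private var configObserver: NSObjectProtocol?

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { _, _ in
            // Handle each captured audio packet here (write to file, forward, etc.).
        }
        configObserver = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: .main
        ) { [weak self] _ in
            // The engine stops when its configuration changes (e.g. a route
            // change); check isRunning and restart if appropriate.
            guard let self, !self.engine.isRunning else { return }
            try? self.engine.start()
        }
        try engine.start()
    }
}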
0
0
127
Apr ’25
AVAssetWriterInput Crash on appendSampleBuffer Converting PCM
Overview: We are producing audio in real time from an editing application and are trying to put that on an HLS stream. We attempt to submit PCM samples through an audio writer but are getting a crash after a certain number of samples have been appended. Depending on the number of audio frames in the PCM buffer, we might get more iterations before the crash, but it always has the same traceback (see below).

Code: The setup is rather simple. We took inspiration from a few sources around the web.

NSMutableDictionary *audio = [[NSMutableDictionary alloc] init];
[audio setObject:@(kAudioFormatMPEG4AAC) forKey:AVFormatIDKey];
[audio setObject:[NSNumber numberWithInt:config.audioSampleRate] // 48000
          forKey:AVSampleRateKey];
[audio setObject:[NSNumber numberWithInt:config.audioChannels] // 2
          forKey:AVNumberOfChannelsKey];
[audio setObject:@160000 forKey:AVEncoderBitRateKey];
m_audioConfig = [[NSDictionary alloc] initWithDictionary:audio];

m_audio = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                         outputSettings:m_audioConfig];

AVAudioFrameCount audioFrames = BUFFER_SAMPLES * bCount;
AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:m_full.pcmFormat
                                                            frameCapacity:audioFrames];
pcmBuffer.frameLength = pcmBuffer.frameCapacity;

AudioChannelLayout layout;
memset(&layout, 0, sizeof(layout));
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

CMFormatDescriptionRef format;
OSStatus stats = CMAudioFormatDescriptionCreate(
    kCFAllocatorDefault,
    pcmBuffer.format.streamDescription,
    sizeof(layout), &layout,
    0, nil, nil,
    &format
);

for (int i = 0; i < bCount; i++) {
    AudioPCM pcm;
    audioCallback->callback(pcm);
    memcpy(*(pcmBuffer.int16ChannelData) + (bufferSize * i), pcm.data, bufferSize);
}

size_t samplesConsumed = BUFFER_SAMPLES * bCount;

CMSampleBufferRef sampleBuffer;
CMSampleTimingInfo timing;
timing.duration = CMTimeMake(1, config.audioSampleRate);
timing.presentationTimeStamp = presentationTime;
timing.decodeTimeStamp = kCMTimeInvalid;

OSStatus ostatus = CMSampleBufferCreate(
    kCFAllocatorDefault,
    nil, false, nil, nil,
    format,
    (CMItemCount)pcmBuffer.frameLength,
    1, &timing, 0, nil,
    &sampleBuffer
);

ostatus = CMSampleBufferSetDataBufferFromAudioBufferList(
    sampleBuffer,
    kCFAllocatorDefault,
    kCFAllocatorDefault,
    kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    pcmBuffer.audioBufferList
);
if (ostatus != noErr) {
    NSLog(@"fill audio sample from buffer list failed: %s", logAudioError(ostatus));
    return;
}

ostatus = CMSampleBufferSetDataReady(sampleBuffer);
if (ostatus != noErr) {
    NSLog(@"set sample buffer ready failed: %s", logAudioError(ostatus));
    return;
}

// Finally we can attach it, then shove the presentation time forward
[m_audio appendSampleBuffer:sampleBuffer];

The Crash: The crash points towards some level of deallocation when the conversion tooling is done or has enough samples to process an output packet? It's hard to say.
0 caulk 0x1a1e9532c caulk::alloc::tiered_allocator<caulk::alloc::size_range_tier<0ul, 1008ul, caulk::alloc::tree_allocator<caulk::alloc::chunk_allocator<caulk::alloc::page_allocator, caulk::alloc::bitmap_allocator, caulk::alloc::embed_block_memory, 16384ul, 16ul, 6ul>>>, caulk::alloc::size_range_tier<1009ul, 256000ul, caulk::alloc::guarded_edges_allocator<caulk::alloc::consolidating_free_map<caulk::alloc::page_allocator, 10485760ul>, 4ul>>, caulk::alloc::tracking_allocator<caulk::alloc::page_allocator>>::deallocate(caulk::alloc::block, unsigned long) + 636
1 AudioToolboxCore 0x1993fbfe4 ExtendedAudioBufferList_Destroy + 112
2 AudioToolboxCore 0x1993d5fe0 std::__1::__optional_destruct_base<ACCodecOutputBuffer, false>::~__optional_destruct_base[abi:ne180100]() + 68
3 AudioToolboxCore 0x1993d5f48 acv2::CodecConverter::~CodecConverter() + 196
4 AudioToolboxCore 0x1993d5e5c acv2::CodecConverter::~CodecConverter() + 16
5 AudioToolboxCore 0x1992574d8 std::__1::vector<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>, std::__1::allocator<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>>>::__clear[abi:ne180100]() + 84
6 AudioToolboxCore 0x199259acc acv2::AudioConverterChain::RebuildConverterChain(acv2::ChainBuildSettings const&) + 116
7 AudioToolboxCore 0x1992596ec acv2::AudioConverterChain::SetProperty(unsigned int, unsigned int, void const*) + 1808
8 AudioToolboxCore 0x199324acc acv2::AudioConverterV2::setProperty(unsigned int, unsigned int, void const*) + 84
9 AudioToolboxCore 0x199327f08 with_resolved(OpaqueAudioConverter*, caulk::function_ref<int (AudioConverterAPI*)>) + 60
10 AudioToolboxCore 0x1993281e4 AudioConverterSetProperty + 72
11 MediaToolbox 0x1a7566c2c FigSampleBufferProcessorCreateWithAudioCompression + 2296
12 MediaToolbox 0x1a754db08 0x1a70b5000 + 4819720
13 MediaToolbox 0x1a754dab4 FigMediaProcessorCreateForAudioCompressionWithFormatWriter + 100
14 MediaToolbox 0x1a77ebb98 0x1a70b5000 + 7564184
15 MediaToolbox 0x1a7804158 0x1a70b5000 + 7663960
16 MediaToolbox 0x1a7801da0 0x1a70b5000 + 7654816
17 AVFCore 0x1ada530c4 -[AVFigAssetWriterTrack addSampleBuffer:error:] + 192
18 AVFCore 0x1ada55164 -[AVFigAssetWriterAudioTrack _flushPendingSampleBuffersReturningError:] + 500
19 AVFCore 0x1ada55354 -[AVFigAssetWriterAudioTrack addSampleBuffer:error:] + 472
20 AVFCore 0x1ada4ebf0 -[AVAssetWriterInputWritingHelper appendSampleBuffer:error:] + 128
21 AVFCore 0x1ada4c354 -[AVAssetWriterInput appendSampleBuffer:] + 168
22 lib_devapple_hls.dylib 0x115d2c7cc detail::AppleHLSImplementation::audioRuntime() + 1052
23 lib_devapple_hls.dylib 0x115d2d094 void* std::__1::__thread_proxy[abi:ne180100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void (detail::AppleHLSImplementation::*)(), detail::AppleHLSImplementation*>>(void*) + 72
24 libsystem_pthread.dylib 0x196e5b2e4 _pthread_start + 136

Any insight would be welcome!
2
0
316
May ’25
About the built-in instrument sound of Apple devices
Does anyone know how to play the sound of a specific instrument when a button is tapped on the screen of an iPhone or iPad? I'm in the middle of creating a music-learning app, and I'm thinking of assigning single notes or chords to the button-like frames on the on-screen keyboard and fingerboard. Can this be achieved with SwiftUI alone? I remember that, once upon a time, General MIDI Level 1 had a way of producing instrument sounds, but I don't know how to implement the same function on the current OS. Please lend me your wisdom.
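One common way to do this is to drive an AVAudioUnitSampler with MIDI note events from a SwiftUI Button action. A minimal sketch; "Instrument.sf2" is a placeholder for a SoundFont or DLS sound bank you would bundle yourself:

import AVFoundation
import AudioToolbox

// Play MIDI notes through a sampler attached to an AVAudioEngine.
final class InstrumentPlayer {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() throws {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        if let bank = Bundle.main.url(forResource: "Instrument", withExtension: "sf2") {
            // General MIDI program 0 = Acoustic Grand Piano.
            try sampler.loadSoundBankInstrument(
                at: bank, program: 0,
                bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                bankLSB: UInt8(kAUSampler_DefaultBankLSB))
        }
        try engine.start()
    }

    // Call from a Button action: plays a C major chord (MIDI notes 60/64/67).
    func playChord() {
        for note: UInt8 in [60, 64, 67] {
            sampler.startNote(note, withVelocity: 100, onChannel: 0)
        }
    }

    func stopChord() {
        for note: UInt8 in [60, 64, 67] {
            sampler.stopNote(note, onChannel: 0)
        }
    }
}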
0
0
62
May ’25
Failed to launch Photo Editing Extension from Mac Catalyst app
I have an iOS app that includes a Photo Editing Extension and is optimized for Mac Catalyst so you can edit photos in the Photos app on your Mac. This has worked really well but now I am encountering an error alert trying to open the photo editing extension:

RBSLaunchRequest error trying to launch plugin com.company.TestEditor.TestPhotoEditor (B7A616A7-25A8-4E02-8B32-5CAB37C8B4B2): ErrorDomain=RBSRequestErrorDomain Code=5 "Launch failed." UserInfo={NSLocalizedFailureReason=Launch failed., NSUnderlyingError=0x7f08fafd0 {ErrorDomain=NSPOSIXErrorDomain Code=153 "Unknown error: 153" UserInfo={NSLocalizedDescription=Launchd job spawn failed}}}

Steps to reproduce:
1. Create a new iOS app project in Xcode
2. Create a new target and choose iOS > Photo Editing Extension
3. For both targets in the project, add Mac Catalyst as a supported destination
4. Run the app on My Mac (Mac Catalyst)
5. Open the Photos app, double click a photo, click Edit, click the more plugins button, and click TestPhotoEditor in the list

macOS 15.4.1 + Xcode 16.3
1
0
354
May ’25
CoreMediaErrorDomain -12888: Bandwidth down-stepping when using 2sec segment duration
The problem: when using an HLS live stream with AVPlayer on iOS/tvOS, the player first chooses the highest bandwidth, then slowly steps down to the lowest (within 1-3 min), eventually steps up again, and then repeats stepping down. The AVPlayer error log sends events:

errorStatusCode: -12888, errorDomain: Optional("CoreMediaErrorDomain"), errorComment: Optional("The operation couldn't be completed. (CoreMediaErrorDomain error -12888 - Playlist File unchanged for longer than 1.5 * target duration)")

We use standard segments in CMAF format, 2 sec duration:

#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:147065903
#EXT-X-MAP:URI="video_1_4660000_init.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2"
#EXT-X-PROGRAM-DATE-TIME:2025-04-30T12:51:07
#EXTINF:2.000,
video_1_4660000_t17460174670001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
#EXTINF:2.000,
video_1_4660000_t17460174690001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2
#EXTINF:2.000,
video_1_4660000_t17460174710001555.mp4?device_profile=cmaf_cbcs_verimatrix_cei%26seg_size=2%26cmaf=2

When using 6 sec segments the player stays stable at the highest bandwidth. Is there a way to avoid this error, in AVPlayer or HLS configuration?
2
0
393
May ’25
[iOS 18.0] PHImageManager request image crash
I am seeing a crash on iOS 18.0 in my app due to PHImageManager.default().requestImage. The same crash is seen even while using requestImageDataAndOrientation. Not sure why this is happening only on iOS 18.0. Any help on this would be appreciated.

Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0000000000000010
Exception Codes: 0x0000000000000001, 0x0000000000000010
VM Region Info: 0x10 is not in any region. Bytes before following region: 4310777840
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START --->
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [695]
Triggered by Thread: 14

Thread 14 name: Dispatch queue: */file=33) .Generic
Thread 14 Crashed:
0 libobjc.A.dylib 0x1944d7020 objc_msgSend + 32
1 PhotoLibraryServices 0x1b080262c +[PLUniformTypeIdentifier utiWithCompactRepresentation:conformanceHint:] + 40
2 Photos 0x1b0081354 _resourceInfoFromResultDict + 1048
3 Photos 0x1b0080a7c fetchResourcesForChoosing + 584
4 Photos 0x1b0080714 ___fetchNonHintResources_block_invoke.227 + 152
5 PhotoLibraryServices 0x1b026b3c4 __53-[PLManagedObjectContext _directPerformBlockAndWait:]_block_invoke + 48
6 CoreData 0x19f171b00 developerSubmittedBlockToNSManagedObjectContextPerform + 228
7 libdispatch.dylib 0x19eeff584 _dispatch_client_callout + 16
8 libdispatch.dylib 0x19eef5728 _dispatch_lane_barrier_sync_invoke_and_complete + 56
9 CoreData 0x19f1c2a0c -[NSManagedObjectContext performBlockAndWait:] + 308
10 PhotoLibraryServices 0x1b026cea4 -[PLManagedObjectContext _directPerformBlockAndWait:] + 144
11 PhotoLibraryServices 0x1b026cdf8 -[PLManagedObjectContext performBlockAndWait:] + 196
12 Photos 0x1b00804a4 _fetchNonHintResources + 292
13 Photos 0x1afeedb38 PHChooserListContinueEnumerating + 144
14 Photos 0x1afeed9f8 -[PHImageResourceChooser presentNextQualifyingResource] + 412
15 Photos 0x1afeed1d0 -[PHImageRequest startRequest] + 2424
16 libdispatch.dylib 0x19eee5aac _dispatch_call_block_and_release + 32
17 libdispatch.dylib 0x19eeff584 _dispatch_client_callout + 16
18 libdispatch.dylib 0x19eeee2d0 _dispatch_lane_serial_drain + 740
19 libdispatch.dylib 0x19eeeede0 _dispatch_lane_invoke + 440
20 libdispatch.dylib 0x19eef91dc _dispatch_root_queue_drain_deferred_wlh + 292
21 libdispatch.dylib 0x19eef8a60 _dispatch_workloop_worker_thread + 540
22 libsystem_pthread.dylib 0x2214d5660 _pthread_wqthread + 292
23 libsystem_pthread.dylib 0x2214d29f8 start_wqthread + 8
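For reference, the call pattern at issue looks roughly like this; the target size and options are illustrative, and the reported crash happens inside Photos' own resource lookup, downstream of a call like this:

import Photos
import UIKit

// Minimal requestImage usage: fetch a thumbnail for a PHAsset. The
// handler may be called more than once when a degraded image is
// delivered first.
func loadThumbnail(for asset: PHAsset, completion: @escaping (UIImage?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    PHImageManager.default().requestImage(
        for: asset,
        targetSize: CGSize(width: 300, height: 300),
        contentMode: .aspectFill,
        options: options
    ) { image, _ in
        completion(image)
    }
}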
6
0
198
May ’25
videoCaptureQueue crashes the app when using iOS 18.4.1
Hi all, I have a problem when using iOS 18.4.1. I have an iPhone 16 Pro and an iPad Air, both updated to iOS 18.4.1. I tried following the sample code below; however, when I run the app for around 30 seconds to 1 minute, the application crashes. When I use another iPad with iOS 17, it does not have the same problem. https://developer.apple.com/documentation/createml/creating-an-action-classifier-model https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed#overview%29,
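For context, the linked samples deliver camera frames on a private serial queue via AVCaptureVideoDataOutput, roughly like the sketch below; the class and queue names are illustrative, not the samples' actual code:

import AVFoundation

// Minimal capture pipeline: camera frames arrive on a dedicated serial
// queue (the "videoCaptureQueue" in the title) via the delegate callback.
final class VideoReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let videoCaptureQueue = DispatchQueue(label: "videoCaptureQueue")

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true // drop frames rather than queue them
        output.setSampleBufferDelegate(self, queue: videoCaptureQueue)
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Feed sampleBuffer to pose detection / the action classifier here.
    }
}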
6
0
241
May ’25
PhotosPicker obtains the result information of the selected video
In an Apple Vision Pro (AVP) project, a PhotosPicker pops up, and we only want to filter for spatial videos. When the user selects one of the spatial videos, the selection result returns empty. How can we obtain the video selected by the user and get the path and URL of the file? The code is as follows:

PhotosPicker(selection: $selectedItem, matching: .videos) {
    Text("Choose a spatial photo or video")
}

func loadTransferable(from imageSelection: PhotosPickerItem) -> Progress {
    return imageSelection.loadTransferable(type: URL.self) { result in
        DispatchQueue.main.async {
            // guard imageSelection == self.imageSelection else { return }
            print("Loaded selection: \(result)")
            switch result {
            case .success(let url?):
                self.selectSpatialVideoURL = url
                print("Got video URL: \(url)")
            case .success(nil):
                break // Handle the success case with an empty value.
            case .failure(let error):
                print("Spatial error: \(error)")
                // Handle the failure case with the provided error.
            }
        }
    }
}
4
0
190
May ’25
FxPlug SDK 4.3.2 causes dyld errors when loaded on versions of macOS prior to 14.6
FxPlug is one of Apple’s official SDKs, recently updated to version 4.3.2. In theory the SDK should guarantee that third parties can build plug-ins that are backward compatible with older versions of Final Cut Pro, Motion and Compressor. The FxPlug SDK includes two frameworks that third-party developers like me end up bundling inside our plugins: FxPlug.framework and PlugInManager.framework. Behind the scenes, the SDK relies on PlugInKit, but FxPlug.framework provides abstractions so that third parties don't have to handle the intricacies of XPC directly.

The most recent version of FxPlug.framework included with the SDK was possibly built with an error: the Info.plist shows an LSMinimumSystemVersion entry of 14.6, suggesting the binary may have been compiled and linked with MACOSX_DEPLOYMENT_TARGET set to 14.6 by accident.

The problem: when older versions of Final Cut Pro or Motion load a third-party plugin (itself built with the appropriate deployment target, macOS 11 or 12, for example) on pre-macOS 14.6, the dynamic linker immediately loads Apple’s own FxPlug.framework, but this causes the process to crash immediately:

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libobjc.A.dylib 0x7ff81e065955 map_images_nolock + 5399
1 libobjc.A.dylib 0x7ff81e0643d6 map_images + 67
2 dyld 0x10bd551fb invocation function for block in dyld4::RuntimeState::setObjCNotifiers(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 275
3 dyld 0x10bd506c9 dyld4::RuntimeState::withLoadersReadLock(void () block_pointer) + 41
4 dyld 0x10bd550e2 dyld4::RuntimeState::setObjCNotifiers(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 82
5 dyld 0x10bd68d45 dyld4::APIs::_dyld_objc_notify_register(void (*)(unsigned int, char const* const*, mach_header const* const*), void (*)(char const*, mach_header const*), void (*)(char const*, mach_header const*)) + 79
6 libobjc.A.dylib 0x7ff81e064244 _objc_init + 1279
7 libdispatch.dylib 0x7ff81e01d993 _os_object_init + 13
8 libdispatch.dylib 0x7ff81e02b1b8 libdispatch_init + 311
9 libSystem.B.dylib 0x7ff828fd585f libSystem_initializer + 238
10 dyld 0x10bd5ae4f invocation function for block in dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 182
11 dyld 0x10bd81aad invocation function for block in dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 242
12 dyld 0x10bd78e26 invocation function for block in dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 557
13 dyld 0x10bd47db3 dyld3::MachOFile::forEachLoadCommand(Diagnostics&, void (load_command const*, bool&) block_pointer) const + 129
14 dyld 0x10bd78bb7 dyld3::MachOFile::forEachSection(void (dyld3::MachOFile::SectionInfo const&, bool, bool&) block_pointer) const + 179
15 dyld 0x10bd81604 dyld3::MachOAnalyzer::forEachInitializer(Diagnostics&, dyld3::MachOAnalyzer::VMAddrConverter const&, void (unsigned int) block_pointer, void const*) const + 466
16 dyld 0x10bd5ad82 dyld4::Loader::findAndRunAllInitializers(dyld4::RuntimeState&) const + 144
17 dyld 0x10bd6165a dyld4::PrebuiltLoader::runInitializers(dyld4::RuntimeState&) const + 30
18 dyld 0x10bd6e76e dyld4::APIs::runAllInitializersForMain() + 38
19 dyld 0x10bd4c38d dyld4::prepare(dyld4::APIs&, dyld3::MachOAnalyzer const*) + 3443
20 dyld 0x10bd4b4e4 start + 388

Can someone at Apple with the right domain expertise confirm that this is the type of crash you would see because the framework was built assuming it would run on macOS 14.6 and later, and when facing an older environment (e.g. ObjC runtime) it lacks extra code that would ensure backward compatibility with the earlier ObjC runtime found on macOS 12.x?
2
0
203
May ’25
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback, using ApplicationMusicPlayer. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames or at least high-resolution analysis data from Apple Music under a special entitlement. My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
1
3
319
May ’25
Unable to match music with shazamkit for Android
Hello, I can successfully match music using ShazamKit on Apple platforms using SwiftUI (a simple app that lets the user load an audio file and extracts the relevant match), but I am unable to match music using ShazamKit on Android. I am trying to make the same simple app, but I cannot match music, as I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong, but the Shazam part of the Kotlin Android code is in this method:

suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")
        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}

I noticed that changing the Locale in catalog creation produces a different result, as I then get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?
0
0
145
May ’25
PiP custom view in Xcode 16, iOS 18, does not display when the camera is turned on
I have added some custom views to my PiP (Picture in Picture) window. After opening the camera, these controls disappear when building with Xcode 16 on iOS 18; I found that the custom views are not removed but seem to be obscured. They were displayed normally in the Xcode 15.4 environment. I would like to ask how to make my custom views display normally.
1
0
116
Apr ’25
PiP custom view in Xcode 16, iOS 18, does not display when the camera is turned on
I have added some custom views to my PiP (Picture in Picture) window. After opening the camera, these controls disappear when building with Xcode 16 on iOS 18; I found that the custom views are not removed but seem to be obscured. They were displayed normally in the Xcode 15.4 environment. I would like to ask how to make my custom views display normally.
0
0
89
Apr ’25
Usage of Apple Music Feed leads to error 500
Hello, I'm trying to receive parquet files using the example provided in the documentation. I've done all the required steps but constantly receive an error 500 with "Upstream Service Error". Looking at the issues list, this error seems to have existed for months. Is it possible to get it working?
2
0
158
May ’25
audio video live broadcast
I need to make an app for iOS and get code for Xcode.
1
0
83
May ’25
Who can help me to make an app for iOS
Who can help me to make an app for iOS with Xcode?
0
0
93
May ’25