Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post

Replies

Boosts

Views

Activity

PHFetchOptions: Full List of Supported Predicate Keys
Hi, could anybody share the full list of supported predicate keys for PHFetchOptions? I'm aware of the list posted in the documentation: https://developer.apple.com/documentation/photos/phfetchoptions However, I have reason to believe this is not an exhaustive list, and there also seem to be mistakes in the doc, e.g. isFavorite does not work but favorite does. Through some experimentation I also found that this works, even though adjustmentFormatIdentifier is not listed as a supported key:

NSPredicate(format: "adjustmentFormatIdentifier == 'com.pixelmatorteam.touch.x.photo.PhotosAdjustmentData.EmbeddedSlimSidecarFileInfo.compressed'")

Are there other undocumented keys that you are aware of? Specifically, I want to filter a fetch result for edited items, something like this (I tried it; it doesn't work):

NSPredicate(format: "hasAdjustments == true")

The native Photos app has such a filter, which leads me to believe there probably is a key for this. If one of the framework developers reads this: could you please update the documentation page with this information? Finally, if there really aren't any more undocumented keys, is there a way to achieve this with adjustmentFormatIdentifier? I have tried a bunch of things already, like adjustmentFormatIdentifier != nil, but for some reason that gives me the exact opposite of what I want: all the photos without edits. 🫠 Any tips on the correct syntax here would be much appreciated.
1
0
140
Jun ’25
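
A minimal Swift sketch of the fetch pattern under discussion, using only the favorite key the poster confirmed works (hasAdjustments remains unverified):

import Photos

// Minimal sketch: fetch favorited image assets with a raw-format predicate.
// "favorite" (not "isFavorite") is the key the poster found to work; treat
// any key outside the documented list as unsupported and test carefully.
let options = PHFetchOptions()
options.predicate = NSPredicate(format: "favorite == true")
options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]

let favorites = PHAsset.fetchAssets(with: .image, options: options)
favorites.enumerateObjects { asset, _, _ in
    print(asset.localIdentifier)
}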
How to capture audio from the stream that's playing on the speakers?
Good day, ladies and gents. I have an application that reads audio from the microphone. I'd like it to also be able to read from the Mac's audio output stream. (A bonus would be if it could detect when the Mac is playing music.) I'd eventually be able to figure it out reading docs, but if someone can give a hint, I'd be very grateful, and would owe you the libation of your choice. Here's the code used to set up the AudioUnit:

-(NSString*) configureAU
{
    AudioComponent component = NULL;
    AudioComponentDescription description;
    OSStatus err = noErr;
    UInt32 param;
    AURenderCallbackStruct callback;

    if( audioUnit ) {
        AudioComponentInstanceDispose( audioUnit );  // was CloseComponent
        audioUnit = NULL;
    }

    // Open the AudioOutputUnit
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_HALOutput;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;
    description.componentFlags = 0;
    description.componentFlagsMask = 0;
    if( component = AudioComponentFindNext( NULL, &description ) ) {
        err = AudioComponentInstanceNew( component, &audioUnit );
        if( err != noErr ) {
            audioUnit = NULL;
            return [NSString stringWithFormat: @"Couldn't open AudioUnit component (ID=%d)", err];
        }
    }

    // Configure the AudioOutputUnit:
    // You must enable the Audio Unit (AUHAL) for input and output on the same device.
    // In AudioUnitSetProperty the 4th parameter is an AudioUnitElement;
    // for an AudioOutputUnit the input element is '1' and the output element is '0'.
    param = 1;  // Enable input on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &param, sizeof(UInt32) );
    chkerr("Couldn't set first EnableIO prop (enable input) (ID=%d)");

    param = 0;  // Disable output on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &param, sizeof(UInt32) );
    chkerr("Couldn't set second EnableIO property on the audio unit (disable output) (ID=%d)");

    // Select the default input device
    param = sizeof(AudioDeviceID);
    AudioObjectPropertyAddress OutputAddr = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &OutputAddr, 0, NULL, &param, &inputDeviceID );
    chkerr("Couldn't get default input device (ID=%d)");

    // Set the current device to the default input unit
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(AudioDeviceID) );
    chkerr("Failed to hook up input device to our AudioUnit (ID=%d)");

    // Set up the render callback, to be called when the AUHAL has input data
    callback.inputProc = AudioInputProc;
    callback.inputProcRefCon = self;
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &callback, sizeof(AURenderCallbackStruct) );
    chkerr("Could not install render callback on our AudioUnit (ID=%d)");

    // Get the hardware device format
    param = sizeof(AudioStreamBasicDescription);
    err = AudioUnitGetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &deviceFormat, &param );
    chkerr("Could not get the input device's stream format (ID=%d)");

    audioChannels = MAX( deviceFormat.mChannelsPerFrame, 2 );

    // Twiddle the format to our liking
    actualOutputFormat.mChannelsPerFrame = audioChannels;
    actualOutputFormat.mSampleRate = deviceFormat.mSampleRate;
    actualOutputFormat.mFormatID = kAudioFormatLinearPCM;
    actualOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    if( actualOutputFormat.mFormatID == kAudioFormatLinearPCM && audioChannels == 1 )
        actualOutputFormat.mFormatFlags &= ~kLinearPCMFormatFlagIsNonInterleaved;
#if __BIG_ENDIAN__
    actualOutputFormat.mFormatFlags |= kAudioFormatFlagIsBigEndian;
#endif
    actualOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
    actualOutputFormat.mBytesPerFrame = actualOutputFormat.mBitsPerChannel / 8;
    actualOutputFormat.mFramesPerPacket = 1;
    actualOutputFormat.mBytesPerPacket = actualOutputFormat.mBytesPerFrame;

    // Set the AudioOutputUnit output data format
    err = AudioUnitSetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &actualOutputFormat, sizeof(AudioStreamBasicDescription) );
    chkerr("Could not change the stream format of the output device (ID=%d)");

    // Get the number of frames in the IO buffer(s)
    param = sizeof(UInt32);
    err = AudioUnitGetProperty( audioUnit, kAudioDevicePropertyBufferFrameSize, kAudioUnitScope_Global, 0, &audioSamples, &param );
    chkerr("Could not determine audio sample size (ID=%d)");

    // Initialize the AU
    err = AudioUnitInitialize( audioUnit );
    chkerr("Could not initialize the AudioUnit (ID=%d)");

    // Allocate our audio buffers
    audioBuffer = [self allocateAudioBufferListWithNumChannels: actualOutputFormat.mChannelsPerFrame
                                                          size: audioSamples * actualOutputFormat.mBytesPerFrame];
    if( audioBuffer == NULL ) {
        [self cleanUp];
        return [NSString stringWithFormat: @"Could not allocate buffers for recording (ID=%d)", err];
    }

    return nil;
}

(...again, it would be nice to know if audio output is active, and thereby choose the clean output stream over the noisy mic, but that would be a different chunk of code, and my main question may just be a quick edit to this chunk.) Thanks for your attention! ==Dave [p.s. if I get more than one useful answer, can I "Accept" more than one, to spread the credit around?]
1
0
189
Jun ’25
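
On the bonus question - detecting whether the Mac is currently playing audio - a hedged Swift sketch using the CoreAudio properties kAudioHardwarePropertyDefaultOutputDevice and kAudioDevicePropertyDeviceIsRunningSomewhere. Capturing the output stream itself is a separate problem; ScreenCaptureKit's audio capture or a loopback device is the usual route, mentioned here only as a pointer:

import CoreAudio

// Hedged sketch: returns true if the default output device is rendering
// audio anywhere in the system (e.g. music is playing).
// kAudioObjectPropertyElementMain requires macOS 12+; use
// kAudioObjectPropertyElementMaster on older systems.
func isDefaultOutputActive() -> Bool {
    var deviceID = AudioDeviceID(0)
    var size = UInt32(MemoryLayout<AudioDeviceID>.size)
    var addr = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &addr, 0, nil, &size, &deviceID) == noErr
    else { return false }

    var running: UInt32 = 0
    size = UInt32(MemoryLayout<UInt32>.size)
    addr.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere
    return AudioObjectGetPropertyData(deviceID, &addr, 0, nil, &size, &running) == noErr
        && running != 0
}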
AVPictureInPictureController (PiP) window shows a black screen when the system camera opens (Xcode 16 and later builds)
I found that when my app is built with Xcode 16 or later, enabling the Picture in Picture floating window and then opening the system camera leaves the floating window blank; it shows a black screen. This does not happen when the same code is built with Xcode 15.4, and I don't know why.
6
0
708
Apr ’25
Option to Set Default Interpolation for Keyframes to “Linear” in Final Cut Pro
In Final Cut Pro, keyframes for transform parameters (such as Position, Scale, and Rotation) are automatically set to “Smooth” interpolation. This often results in undesired easing between keyframes, especially when linear motion is required. Currently, we have to manually adjust each keyframe to "Linear" using the Video Animation Editor, which can be time-consuming when working with many keyframes. Would it be possible to add an option to set the default keyframe interpolation to "Linear"—either globally in Preferences or per parameter in the Inspector? This would greatly streamline the animation workflow for many editors. Thank you for considering this request!
0
0
125
Jun ’25
Track changes in the browser tab's audibility property.
Hi! I am writing a browser extension that allows you to control the playback of media content on a music service website. Unfortunately, Safari does not report changes to the audible property through the tabs.onUpdated event. Is there an alternative to this event? I'm looking for a way to track when playback on a music service website is interrupted automatically. Thank you.
0
0
99
Apr ’25
How to disable automatic updates to MPNowPlayingInfoCenter from AVPlayer
I'm building a SwiftUI app whose primary job is to play audio. I manage all of the Now Playing metadata and command center manually via the available shared instances:

MPRemoteCommandCenter.shared()
MPNowPlayingInfoCenter.default().nowPlayingInfo

In certain parts of the app I also need to display videos, but as soon as I attach another AVPlayer, it automatically pushes its own metadata into the Control Center and overwrites my audio info. What I need: a way to show video inline without ever having that video player update the system's Now Playing info (or Control Center). In my app, I start by configuring the shared audio session:

do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.allowAirPlay, .allowBluetoothA2DP])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    NSLog("%@", "**** Failed to set up AVAudioSession \(error.localizedDescription)")
}

and then set the MPRemoteCommandCenter commands and MPNowPlayingInfoCenter nowPlayingInfo as mentioned above. All this works without any issues as long as I only have one AVPlayer in my app. But when I add other AVPlayers to display some videos (and keep the main AVPlayer for the sound), they push undesired updates to MPNowPlayingInfoCenter:

struct VideoCardView: View {
    @State private var player: AVPlayer
    let videoName: String

    init(player: AVPlayer = AVPlayer(), videoName: String) {
        self.player = player
        self.videoName = videoName
        guard let path = Bundle.main.path(forResource: videoName, ofType: nil) else { return }
        let url = URL(fileURLWithPath: path)
        let item = AVPlayerItem(url: url)
        self.player.replaceCurrentItem(with: item)
    }

    var body: some View {
        VideoPlayer(player: player)
            .aspectRatio(contentMode: .fill)
            .onAppear {
                player.isMuted = true
                player.allowsExternalPlayback = false
                player.actionAtItemEnd = .none
                player.play()

                MPNowPlayingInfoCenter.default().nowPlayingInfo = nil
                MPNowPlayingInfoCenter.default().playbackState = .stopped

                NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: player.currentItem, queue: .main) { notification in
                    guard let finishedItem = notification.object as? AVPlayerItem,
                          finishedItem === player.currentItem else { return }
                    player.seek(to: .zero)
                    player.play()
                }
            }
            .onDisappear {
                player.pause()
            }
    }
}

Which is why I tried adding:

MPNowPlayingInfoCenter.default().nowPlayingInfo = nil
MPNowPlayingInfoCenter.default().playbackState = .stopped // or .interrupted, .unknown

But that didn't work. I also tried making a wrapper around AVPlayerViewController in order to set updatesNowPlayingInfoCenter to false, but that didn't work either:

struct CustomAVPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let vc = AVPlayerViewController()
        vc.player = player
        vc.updatesNowPlayingInfoCenter = false
        vc.showsPlaybackControls = false
        return vc
    }

    func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {
        controller.player = player
    }
}

Hence any help on how to embed video in SwiftUI without its AVPlayer touching MPNowPlayingInfoCenter would be greatly appreciated. All this was tested on an actual device with iOS 18.4.1, and built with Xcode 16.2 on macOS 15.5.
2
0
263
Jun ’25
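
One avenue worth trying for the question above (hedged and untested here, iOS 16+): give the inline video players their own MPNowPlayingSession with automatic publishing turned off, so only the manually managed audio metadata reaches the system:

import AVFoundation
import MediaPlayer

// Hedged sketch, iOS 16+: a video player wrapped in its own
// MPNowPlayingSession that never publishes automatically. Whether this
// fully suppresses the unwanted Control Center updates is untested.
final class InlineVideoPlayer {
    let player = AVPlayer()
    private let session: MPNowPlayingSession

    init() {
        session = MPNowPlayingSession(players: [player])
        session.automaticallyPublishesNowPlayingInfo = false
    }
}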
Playing periodic audio in background using AVFoundation - facing audio session startup failure
Hello everyone, I'm new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes. Following the AVFoundation documentation, I'm configuring my audio session like this:

let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)
self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)
try? session.setActive(true)

When it's time to play cues, I schedule playback on a DispatchQueue:

// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
    do {
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}

This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905. Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected. I have added the required background audio mode to my Info.plist by including the key UIBackgroundModes with the value audio. Is there anything else I should configure? What is the best practice for playing periodic audio while the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background? Any advice or pointers would be greatly appreciated!
0
0
227
Jul ’25
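
A side note that may help with the error above: 561015905 is the FourCC '!pla', i.e. AVAudioSession.ErrorCode.cannotStartPlaying - the session cannot begin playback from the background unless audio was already running, which matches the observed behavior. A small sketch to decode such OSStatus-style codes:

import Foundation

// Sketch: decode an OSStatus-style Core Audio error into its FourCC form.
// 561015905 -> "!pla" (AVAudioSession.ErrorCode.cannotStartPlaying).
func fourCC(_ code: Int32) -> String {
    let n = UInt32(bitPattern: code)
    let bytes = [24, 16, 8, 0].map { UInt8((n >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "\(code)"
}

print(fourCC(561015905)) // "!pla"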
Capturing multiple screens no longer works with macOS Sequoia
Capturing more than one display is no longer working with macOS Sequoia. We have a product that allows users to capture up to 2 displays/screens. Our application is using gstreamer, which in turn is based on AVFoundation. I found a quick way to replicate the issue by just running 2 captures from separate terminals. Assuming display 1 has device index 0, and display 2 has device index 1, here are the steps: install gstreamer with brew install gstreamer, then open 2 terminal windows and launch the following processes:

terminal 1 (device-index 0):
gst-launch-1.0 avfvideosrc -e device-index=0 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink

terminal 2 (device-index 1):
gst-launch-1.0 avfvideosrc -e device-index=1 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink

The first process that is launched will show the screen; the second process launched will not. Testing this on macOS Ventura and Sonoma works as expected, showing both screens. I submitted the same issue on Feedback Assistant: FB15900976
2
0
393
Apr ’25
401 Authorization error for Music Feed API
I've established proper authorization for general music api calls, but when I use that same authorization to retrieve metadata for the latest music feeds (see https://developer.apple.com/documentation/applemusicfeed/requesting-a-feed-export), I get a 401 Unauthorized error. As per the documentation, I'm simply issuing a GET against https://api.media.apple.com/v1/feed/album/latest. Are there different entitlements needed for the Music Feed API?
2
0
107
May ’25
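
For comparison, a hedged sketch of the request described above, with developerToken as a placeholder. The standard Apple Music API bearer-token scheme is shown; a 401 here would be consistent with the feed requiring a token from a key that has been granted additional access:

import Foundation

// Hedged sketch: issue the GET the poster describes. "developerToken" is
// a placeholder for a signed Apple Music developer token.
func fetchLatestAlbumFeed(developerToken: String) async throws -> Int {
    var request = URLRequest(url: URL(string: "https://api.media.apple.com/v1/feed/album/latest")!)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    let (_, response) = try await URLSession.shared.data(for: request)
    return (response as? HTTPURLResponse)?.statusCode ?? -1
}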
How to disable the built-in speakers and microphone on a Mac
I need to implement a solution, through an API or a custom driver, to completely block the built-in speakers and microphone of the Mac, because I need other apps to use specified external devices as audio input and output. Is there a way to achieve this? What I mean is that even in System Preferences it should not be possible to choose the built-in microphone and speakers; only my external device can be used.
0
0
205
Apr ’25
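
To my knowledge there is no supported API that hides the built-in devices system-wide; the closest supported move, sketched here with real CoreAudio calls, is programmatically forcing the default device (System Settings will still list the built-ins):

import CoreAudio

// Hedged sketch: force the system default input or output device.
// Pass kAudioHardwarePropertyDefaultOutputDevice or
// kAudioHardwarePropertyDefaultInputDevice as the selector.
// kAudioObjectPropertyElementMain requires macOS 12+.
func setDefaultDevice(_ deviceID: AudioDeviceID,
                      selector: AudioObjectPropertySelector) -> OSStatus {
    var addr = AudioObjectPropertyAddress(
        mSelector: selector,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var id = deviceID
    return AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                      &addr, 0, nil,
                                      UInt32(MemoryLayout<AudioDeviceID>.size), &id)
}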
Intermittent Memory Leak Indicated in Simulator When Using AVAudioEngine with mainMixerNode Only
Hello, I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak. Here is a simplified version of the code I'm using:

// This function is called when the user taps a button in the view controller:
#import "ViewController.h"

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end

// media.m
static AVAudioEngine *audioEngine = nil;

void soundCreate(void) {
    if (audioEngine != nil)
        return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode* playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}

In the memory leak report, the following call stack is repeated, seemingly in a loop:

ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
0
0
130
Apr ’25
Memory Leak in AVAudioPlayer in Simulator only
I have a memory leak when using AVAudioPlayer. I managed to narrow down the issue into a very simple app, whose code I paste in below. The memory leak starts immediately when I start playing sound, but only in the Simulator; on the real iPhone there is no memory leak. Here is the code:

import SwiftUI
import AVFoundation

struct ContentView_Audio: View {
    var sound: AVAudioPlayer?

    init() {
        guard let path = Bundle.main.path(forResource: "cd201", ofType: "mp3") else { return }
        let url = URL(fileURLWithPath: path)
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.mixWithOthers])
        } catch {
            return
        }
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            return
        }
        do {
            sound = try AVAudioPlayer(contentsOf: url)
        } catch {
            return
        }
    }

    var body: some View {
        HStack {
            Button {
                playSound()
            } label: {
                ZStack {
                    Circle()
                        .fill(.mint.opacity(0.3))
                        .frame(width: 44, height: 44)
                        .shadow(radius: 8)
                    Image(systemName: "play.fill")
                        .resizable()
                        .frame(width: 20, height: 20)
                }
            }
            .padding()

            Button {
                stopSound()
            } label: {
                ZStack {
                    Circle()
                        .fill(.mint.opacity(0.3))
                        .frame(width: 44, height: 44)
                        .shadow(radius: 8)
                    Image(systemName: "stop.fill")
                        .resizable()
                        .frame(width: 20, height: 20)
                }
            }
            .padding()
        }
    }

    private func playSound() {
        guard sound != nil else { return }
        sound?.volume = 1
        // sound?.numberOfLoops = -1
        sound?.play()
    }

    func stopSound() {
        sound?.stop()
    }
}
2
0
133
Apr ’25
How to match music with ShazamKit for Android?
Hi all, I can successfully match music using ShazamKit on Apple using SwiftUI - a simple app that lets the user load an audio file and extracts the relative match - while I am unable to match music using ShazamKit on Android. I am trying to make the same simple app, but I cannot match music, as I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong, but the Shazam part of the Kotlin Android code is in this method:

suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        // Note: the generator, catalog, and session are recreated for every
        // 1 MB chunk, and a signature is generated per chunk; that may or
        // may not be the intended usage.
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")
        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}

I noticed that changing the Locale in catalog creation gives a different result, as I get NoMatch without an exception. Can you please help me with this?
0
0
96
Apr ’25
After playing an HDR video on iPhone for a while, the HDR effect disappears and the screen brightness decreases
When I use AVPlayer to obtain the video frame CVPixelBufferRef of an HDR video, and use AVSampleBufferDisplayLayer to display it on the screen, after a period of time the HDR video content and the screen gradually darken, losing the HDR effect.

Steps to reproduce:
1. Create an AVPlayer to loop an HDR video, specifying the video frame format as kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
2. Create a timer to get the video frame CVPixelBufferRef at 30 frames per second
3. Use AVSampleBufferDisplayLayer to display the CVPixelBufferRef on the screen
4. Don't operate the phone; wait for a period of time (such as 40 minutes). The HDR effect disappears and the screen darkens.

Note: You need to use an iPhone device on iOS 18.5 or below. You need to ensure that the HDR video is played in a loop, i.e. that the screen continues to display HDR content; depending on the device, you need to wait 20-40 minutes. In the iPhone Photos app, the same problem occurs after playing an HDR video in a loop for a long time.

Expected results: When rendering HDR content for a long time, the HDR effect is always preserved, and the HDR content and screen are not darkened.

Current results: After about 20-40 minutes, the HDR effect disappears and the screen darkens.
4
0
774
Jul ’25
How to override the default USB video
According to the doc, I did a simple demo to verify. My environment:

ProductName: macOS
ProductVersion: 15.5
BuildVersion: 24F74
2.4 GHz quad-core Intel Core i5

Info.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>IOKitPersonalities</key>
    <dict>
        <key>UVCamera</key>
        <dict>
            <key>CFBundleIdentifierKernel</key>
            <string>com.apple.kpi.iokit</string>
            <key>IOClass</key>
            <string>IOUserService</string>
            <key>IOMatchCategory</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProviderClass</key>
            <string>IOUserResources</string>
            <key>IOResourceMatch</key>
            <string>IOKit</string>
            <key>IOUserClass</key>
            <string>UVCamera</string>
            <key>IOUserServerName</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProbeScore</key>
            <integer>100000</integer>
            <key>idVendor</key>
            <integer>1452</integer>
            <key>idProduct</key>
            <integer>34068</integer>
        </dict>
    </dict>
    <key>OSBundleUsageDescription</key>
    <string></string>
</dict>
</plist>

UVCamera.cpp:

//
//  UVCamera.cpp
//  UVCamera
//
//  Created by DTEN on 2025/6/12.
//

#include <os/log.h>
#include <DriverKit/IOUserServer.h>
#include <DriverKit/IOLib.h>
#include "UVCamera.h"

kern_return_t IMPL(UVCamera, Start)
{
    kern_return_t ret;
    ret = Start(provider, SUPERDISPATCH);
    os_log(OS_LOG_DEFAULT, "Hello World");
    return ret;
}

UVCamera.iig:

//
//  UVCamera.iig
//  UVCamera
//
//  Created by DTEN on 2025/6/12.
//

#ifndef UVCamera_h
#define UVCamera_h

#include <Availability.h>
#include <DriverKit/IOService.iig>

class UVCamera: public IOService {
public:
    virtual kern_return_t Start(IOService * provider) override;
};

#endif /* UVCamera_h */

Then I build with Xcode and move it to /Library/DriverExtensions:

sudo mv com.lqs.MyVirtualCam.UVCamera.dext /Library/DriverExtensions
sudo kmutil install -R / -r /Library/DriverExtensions
kmutil rebuild done

However, the dext can't be loaded:

kmutil showloaded --list-only | grep UVCamera
No variant specified, falling back to release

What's the problem? Can anyone help me?
0
0
223
Jun ’25
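
A hedged observation on the steps above: DriverKit extensions are normally activated from a host app through the SystemExtensions framework rather than hand-installed with kmutil, which may be why the dext never loads. A minimal activation sketch, using the bundle identifier from the post:

import SystemExtensions

// Hedged sketch: request activation of the dext from its host app.
// The user must approve the extension in System Settings.
final class DextActivator: NSObject, OSSystemExtensionRequestDelegate {
    func activate() {
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: "com.lqs.MyVirtualCam.UVCamera",
            queue: .main)
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties)
        -> OSSystemExtensionRequest.ReplacementAction { .replace }
    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {}
    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {}
    func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {}
}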
Unable to match music with ShazamKit for Android
Hello, I can successfully match music using ShazamKit on Apple using SwiftUI - a simple app that lets the user load an audio file and extracts the relative match - while I am unable to match music using ShazamKit on Android. I am trying to make the same simple app, but I cannot match music, as I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong, but the Shazam part of the Kotlin Android code is in this method:

suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")
        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}

I noticed that changing the Locale in catalog creation gives a different result, as I get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?
0
0
145
May ’25
Why a driverkit extension needs a CMIO extension
I developed a DriverKit extension based on overriding-the-default-usb-video-class-extension, but the link didn't give the details of the implementation. I asked DTS, who gave two tips:
1. Do you also have a CMIO extension to load in place of the default one?
2. Your DriverKit extension's Info.plist is also missing the CameraAssistantBundleID.
I want to know why a DriverKit extension needs a CMIO extension, and what the data and control flow between them is.
2
0
372
Jun ’25
Capture session interrupted randomly
We are facing a strange issue where a small portion of our large userbase cannot start the capture session in our app, as it gets interrupted with the following reason: AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps. Our users are all on iPhones; no one is using an iPad. Just to be sure, we have set session.isMultitaskingCameraAccessEnabled = true, but it does not seem to make any difference. Another weird interruption we are seeing
0
0
123
Jun ’25
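
A small diagnostic sketch for cases like the one above: observe the interruption notifications and log the decoded reason, so affected sessions can be correlated with what was happening on screen ("session" stands in for the app's existing capture session):

import AVFoundation

// Sketch: log each capture interruption with its decoded reason,
// and log when the interruption ends.
func observeInterruptions(of session: AVCaptureSession) {
    NotificationCenter.default.addObserver(
        forName: AVCaptureSession.wasInterruptedNotification,
        object: session, queue: .main) { note in
        if let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: raw) {
            print("Capture interrupted, reason: \(reason) (\(raw))")
        }
    }
    NotificationCenter.default.addObserver(
        forName: AVCaptureSession.interruptionEndedNotification,
        object: session, queue: .main) { _ in
        print("Capture interruption ended")
    }
}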
Does an artist similarity station broaden selection variety compared to a song similarity station?
Does an artist similarity station broaden selection variety compared to a song similarity station? You don't have to answer if it is against nondisclosure terms.
0
0
66
Mar ’25
Infrequent Sound Inconsistency with Apple Music
Since macOS 26, Apple Music has had inconsistent quality drops on some tracks, seemingly at random. I don't know if others have experienced it. It doesn't happen on the speakers or when connected via Bluetooth, but the AUX I/O has it quite often. It is more noticeable on headphones at 48 kHz and higher bandwidth. Here is the feedback: FB18062589
0
0
179
Jul ’25