Hello,
As far as I know, and in all of my testing, there is no way for a user or a developer to change the frame rate of the video output on iPadOS. If you connect an iPad to an external monitor via a USB hub or a USB-to-HDMI adaptor, it will output at 59.94 fps.
I have a video app where users monitor live video at 25 fps and 30 fps. They often output to an external display, and at times the external display stutters due to the mismatch in frame rate, i.e. monitoring at 25 fps while outputting at 59.94 fps.
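(As a quick check, here is a minimal sketch of what the attached display reports via UIScreen; note that maximumFramesPerSecond is an Int, so a 59.94 Hz mode reads as 60:)

import UIKit

// Log the refresh rate every attached screen claims to support.
// UIScreen.screens is deprecated in iOS 16 in favor of scene-based
// lookup, but it is the shortest way to illustrate the point.
for screen in UIScreen.screens {
    print("\(screen): maximumFramesPerSecond = \(screen.maximumFramesPerSecond)")
}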
I thought it was impossible to change the video output frame rate, but then in v3.1 of the Blackmagic Camera app I saw an interesting entry in their release notes:
‘Support for HDMI Monitoring at Sensor Rate and Resolution’
This means there must be some way to modify it. I'm not sure if this is done via a private API that Apple has allowed Blackmagic to use. If so, how can we access it, or is there an undocumented way to enable this?
Thanks!
I've been having issues with authorization after switching from v1 to v3. I have tried some of the suggestions provided in a reply to an earlier post of mine, and I have tried to reach out to Apple Support a few times as well, but I haven't received any support that has helped me move forward.
I have tested my token at https://jwt.io and I'm getting "Signature Verified". I've tried multiple browsers in private/incognito mode; now, when I try to sign in to my Apple Account to test my player, I receive an error that states "There is a Problem Connecting. There May be an Issue with Your Network." (which is not the case).
I have tried everything I can think of; I'm at a loss and would appreciate any help to get my project moving forward. This is what I am seeing in the browser developer console (using Firefox):
Authorization failed: AUTHORIZATION_ERROR: Unauthorized
MKError https://js-cdn.music.apple.com/musickit/v3/musickit.js:13
authorize https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
asyncGeneratorStep$w https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
_next https://js-cdn.music.apple.com/musickit/v3/musickit.js:28
media.mydomain.com:398:19
https://media.mydomain.com/:398
I have a complex Core Image pipeline which I'm keen to optimise. I'm aware that calling back to the CPU can have a significant impact on performance. What is the best way to find out where this is happening?
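A crude first step (a sketch, not an authoritative workflow): time each render call, since a CPU round-trip shows up as a long, blocking render, and run the app with the CI_PRINT_TREE=1 environment variable set so Core Image dumps its filter/render graph:

import CoreImage
import CoreVideo
import Foundation

let context = CIContext()

// Crude instrumentation: log how long each render blocks the calling thread.
func timedRender(_ image: CIImage, to buffer: CVPixelBuffer) {
    let start = CFAbsoluteTimeGetCurrent()
    context.render(image, to: buffer)
    print("render took \((CFAbsoluteTimeGetCurrent() - start) * 1000) ms")
}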
Hi, I'm working on an AUv3 project.
The app itself displays my icon.
However, the AUv3 extension does not display any icon in any host app (AUM, Drambo, etc.).
I thought the extension would inherit the host app's icon, but that does not appear to be the case.
I tried adding the icon as a 1024x1024 file to the extension target and then updating my extension's plist file with a CFBundleIconFile key, but no luck either.
It must surely be really easy. What am I missing?
Thanks in advance for your help!
I'm unable to play music using Musickit.js in the Chrome or Firefox browsers.
Even using the Apple guide here (https://js-cdn.music.apple.com/musickit/v1/index.html), I've added my Music Developer Token and song/album URL, but it only works in Safari and not in Chrome or Firefox.
I'm unsure if this is a global issue, or if there is something I need to do to enable playback in other browsers, but as it stands it's not working for me.
Thanks for any help in advance!
I'm building a UIKit app that reads the user's Apple Music library and displays it. MusicKit has the Artwork structure, which I need to use to display artwork images in the app. Since I'm not using SwiftUI, I cannot use the ArtworkImage view that is the recommended way of displaying those images, but the Artwork structure has a method that returns a URL for the image, which can be used to fetch it.
The way I have it set up is really simple:
extension MusicKit.Song {
    /// URL for the artwork at the given size.
    func imageURL(for cgSize: CGSize) -> URL? {
        return artwork?.url(
            width: Int(cgSize.width),
            height: Int(cgSize.height)
        )
    }

    /// Synchronously loads the artwork image (blocks the calling thread).
    func localImage(for cgSize: CGSize) -> UIImage? {
        guard let url = imageURL(for: cgSize),
              url.scheme == "musicKit",
              let data = try? Data(contentsOf: url) else {
            return nil
        }
        return .init(data: data)
    }
}
Now, every time I access the .artwork property (which is a lot), the main thread gets blocked and the console output gets bombarded with messages like these:
2023-07-26 11:49:47.317195+0200 Plum[998:297199] [Artwork] Failed to create color analysis for artwork: <MPMediaLibraryArtwork: 0x289591590> with error; Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.mediaartworkd.xpc was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.mediaartworkd.xpc was invalidated: failed at lookup with error 159 - Sandbox restriction.}
2023-07-26 11:49:47.317262+0200 Plum[998:297199] [Artwork] Failed to create color analysis for artwork: file:///var/mobile/Media/iTunes_Control/iTunes/Artwork/Originals/4b/48d7b8d349d2de858413ae4561b6ba1b294dc7
2023-07-26 11:49:47.323099+0200 Plum[998:297013] [Plum] IIOImageWriteSession:121: cannot create: '/var/mobile/Media/iTunes_Control/iTunes/Artwork/Caches/320x320/4b/48d7b8d349d2de858413ae4561b6ba1b294dc7.sb-f9c7943d-6ciLNp'error = 1 (Operation not permitted)
My guess is that the most performance-heavy task here is performing the color analysis for each artwork, but IMO backgroundColor should not be a stored property if that's the case. I am not planning to use it anywhere, and if it has to exist, it should be a computed async property so it doesn't block the caller.
I know I can move the call to a background thread, and that fixes the blocked main thread, but the loading times for each artwork are still terribly slow, and that hurts the UX.
SwiftUI's ArtworkImage loads the artwork much quicker and without the errors, so there must be a better way to do it.
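For reference, unblocking the UI can be as small as the sketch below, which reuses the localImage(for:) helper from above; it moves the blocking work to a background task but does not make the underlying load any faster:

// Run the blocking artwork load off the main thread.
func artworkImage(for song: MusicKit.Song, size: CGSize) async -> UIImage? {
    await Task.detached(priority: .userInitiated) {
        song.localImage(for: size)
    }.value
}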
We use BassDSDPlayer / SFBAudioEngine to play just about any file, but playing Apple Music is failing. All subscriptions are up to date. We stop SFBAudioEngine and BassDSDPlayer before playing Apple Music, to no avail.
PRINTS:
Supported files in /Users/dorian/Music/Music/Media.localized/Music/4: 28364
Apple Music is authorized and can play catalog.
Resetting default output device...
Releasing BassDSDPlayer audio device...
BassDSDPlayer: Audio device released.
STOPPED sfbAudioDevice
Default output device is ID: 76
applicationQueuePlayer _establishConnectionIfNeeded timeout [ping did not pong]
applicationQueuePlayer _establishConnectionIfNeeded timeout [ping did not pong]
Player State - After resetting output:
Playback Status: stopped
Queue Count: 0
No track is playing.
Music player reset successfully.
BassDSDPlayer: Audio device released.
Default output device set successfully: 76
Default output device is ID: 76
Default output device set successfully: 76
Default output device ID: 76
Validated PlayParameters for track: squabble up
PlayParameters: PlayParameters(id: 1781270321, kind: "song", isLibrary: nil, catalogID: nil, libraryID: nil, deviceLocalID: nil, rawValues: [:])
Starting playback...
Player State - After playback:
Playback Status: stopped
Queue Count: 1
No track is playing.
Notification BASS DSD NSConcreteNotification 0x600007ce2b00 {name = kUpdateSongInfo; object = {
AlbumTitle = GNX;
ArtistName = "Kendrick Lamar";
SongArtwork = "<NSImage 0x6000041b7ca0 Size={300, 300} RepProvider=<NSImageArrayRepProvider: 0x600003518770, reps:(\n "NSBitmapImageRep 0x600009ed9dc0 Size={300, 300} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=300x300 Alpha=NO Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600007ce15c0"\n)>>";
SongLength = "157.992";
SongTitle = "squabble up";
Source = AppleMusic;
}}
Apple Music track loaded: squabble up by Kendrick Lamar
Player State - Before play:
Playback Status: stopped
Queue Count: 1
No track is playing.
prepareToPlay failed [no target descriptor]
NSError Code: 1, Domain: MPMusicPlayerControllerErrorDomain
Player State - After play:
Playback Status: stopped
Queue Count: 1
No track is playing.
func playAppleMusicTracks(tracks: [Track]) {
    AppleMusicManager.shared.isAuthorizedAndReadyForPlayback { isAuthorized in
        guard isAuthorized else {
            print("Apple Music authorization or capabilities insufficient for playback.")
            return
        }
        print("Resetting default output device...")
        self.stopSFBAudioDevice()
        self.resetMusicPlayer()
        self.resetAudioSystem()
        self.ensureOutputDeviceReady()
        Task {
            for track in tracks {
                guard self.validatePlayParameters(for: track) else { continue }
                do {
                    try await ApplicationMusicPlayer.shared.queue.insert(track, position: .afterCurrentEntry)
                    guard !ApplicationMusicPlayer.shared.queue.entries.isEmpty else {
                        print("Queue is empty after queuing. Playback cannot proceed.")
                        return
                    }
                    self.notifyAppleMusicTrackInfo(track)
                } catch {
                    print("Error starting playback: \(error)")
                    if let nsError = error as NSError? {
                        print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                    }
                }
            }
            MusicKitWrapper.shared.logPlayerState(message: "After playback")
        }
    }
}
@objc public class MusicKitWrapper: NSObject {
    @objc public static let shared = MusicKitWrapper()
    private let player = ApplicationMusicPlayer.shared

    // Play the current track
    @objc public func play() {
        guard !player.queue.entries.isEmpty else {
            print("Queue is empty. Cannot start playback.")
            return
        }
        logPlayerState(message: "Before play")
        Task {
            do {
                try await player.prepareToPlay()
                try await player.play()
                print("Playback started successfully.")
            } catch {
                if let nsError = error as NSError? {
                    print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                }
            }
            logPlayerState(message: "After play")
        }
    }
}
Any help would be appreciated.
Thanks!
Hi,
It seems like it's pretty easy to consume HTTP Live Streaming content in an iOS app. Unfortunately, I need to consume media from an RTSP server. It seems to me that this is a very similar thing, and that all of the underpinnings for doing it ought to be present in iOS, but I'm having a devil of a time figuring out how to make it work without doing a lot of programming.
For starters, I know that there are web-based services that can consume an RTSP stream and rebroadcast it as an HTTP Live Stream that can be easily consumed by the media players in iOS. This won't work for me because my application needs to function in an environment where there is no internet access (it's on a private Wi-Fi network where the only other thing on the network is the device that is serving the RTSP stream).
Having read everything I can get my hands on and explored third-party and open-source solutions, I've compiled the following list of ideas:
1. Using an iOS build of the open-source ffmpeg library, which supports RTSP, I've come up with a test app that can receive the RTSP packets, decode them, create UIImages out of the frames, and display those frames on-screen. This provides a crude player, but performance is poor, most likely because ffmpeg can't take advantage of any hardware acceleration. It also doesn't provide me with any way to integrate the video stream into AVFoundation, so I'm on my own as far as saving the stream to a file, transcoding it, etc.
2. I know that the AVURLAsset class doesn't directly support the RTSP scheme. Since I have access to the undecoded RTSP packets via ffmpeg, I've thought it should be possible to implement RTSP support myself via a custom NSURLProtocol, essentially fooling AVFoundation into reading those packets as if they originated in a file. I'm not sure if this would work, since the raw packets coming from the RTSP server might lack the headers that would otherwise be present in data being read from a file. I'm not even sure if AVFoundation would recognize my custom protocol.
3. If a protocol doesn't work, I've considered that I might be able to implement my own local HTTP Live Streaming server that converts the RTSP packets into an HTTP stream that the media players can read. This sounds like a terribly convoluted solution to the problem, at best, and very difficult at worst.
4. Going back to solution (1), if I could speed up the decoding by using some iOS CoreVideo function instead of ffmpeg, this solution might be okay. However, I can't find any documentation for CoreVideo on iOS (Apple only documents it for OS X).
5. I'm certainly willing to license a third-party solution if it works well and provides good performance. Unfortunately, everything I've found so far is pretty crummy and mostly just leverages ffmpeg and/or VLC.
What is most disappointing to me is that nobody seems to be able or willing to provide a solution that neatly integrates with AVFoundation. I really want to make my RTSP stream available as an AVAsset so I can use it with AVFoundation players and other classes; I don't want to build an app that relies on custom third-party code for everything.
Any ideas, tips, or advice would be greatly appreciated.
Thanks,
Frank
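One direction worth sketching for ideas (1) and (4): AVSampleBufferDisplayLayer accepts compressed CMSampleBuffers and decodes them in hardware via VideoToolbox. The makeSampleBuffer(from:) helper below is hypothetical; it stands in for wrapping one parsed H.264 access unit in a CMSampleBuffer (format description built with CMVideoFormatDescriptionCreateFromH264ParameterSets):

import AVFoundation

let displayLayer = AVSampleBufferDisplayLayer()
displayLayer.videoGravity = .resizeAspect

// makeSampleBuffer(from:) is a hypothetical helper, not shown: it wraps
// one H.264 access unit from the RTSP depacketizer in a CMSampleBuffer.
func display(accessUnit: Data) {
    guard let sampleBuffer = makeSampleBuffer(from: accessUnit) else { return }
    if displayLayer.isReadyForMoreMediaData {
        displayLayer.enqueue(sampleBuffer) // decoded in hardware by VideoToolbox
    }
}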
I am using https://developer.apple.com/documentation/applemusicapi/add-tracks-to-a-library-playlist
to add tracks to playlists. This endpoint works fine for all playlists except for collaborative playlists.
For collaborative playlist I get the following 500 error as a response:
"errors": [
{
"id": "<some id>",
"title": "Upstream Service Error",
"detail": "Unable to update tracks",
"status": "500",
"code": "50001"
}
]
}
Steps to reproduce:
1. Create a playlist in your library.
2. Use the API to add a song.
3. Confirm that it works.
4. Make that same playlist collaborative.
5. Update the playlist ID in your API request (making a playlist collaborative changes its ID).
6. Confirm that you get the 500 error.
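For completeness, a minimal reproduction sketch of the request in Swift (the token and ID values are placeholders):

import Foundation

let developerToken = "<developer JWT>"
let musicUserToken = "<music user token>"
let playlistID = "<library playlist id>" // changes once the playlist is made collaborative

var request = URLRequest(url: URL(string:
    "https://api.music.apple.com/v1/me/library/playlists/\(playlistID)/tracks")!)
request.httpMethod = "POST"
request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
request.setValue(musicUserToken, forHTTPHeaderField: "Music-User-Token")
request.httpBody = try? JSONEncoder().encode(["data": [["id": "<song id>", "type": "songs"]]])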
I am trying to get access to raw audio samples from the mic. I've written a simple example application that writes the values to a text file.
Below is my sample application. All the input samples from the buffers delivered to the input tap are zero. What am I doing wrong?
I did add the Privacy - Microphone Usage Description key to my application target's properties, and I am allowing microphone access when the application launches. I do find it strange that I have to grant permission every time, even though in Settings > Privacy my application is listed as one of the applications allowed to access the microphone.
import AVFoundation

class AudioRecorder {
    private let audioEngine = AVAudioEngine()
    private var fileHandle: FileHandle?

    func startRecording() {
        let inputNode = audioEngine.inputNode
        let audioFormat: AVAudioFormat
        #if os(iOS)
        let hardwareSampleRate = AVAudioSession.sharedInstance().sampleRate
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: 1)!
        #elseif os(macOS)
        audioFormat = inputNode.inputFormat(forBus: 0) // Use input node's current format
        #endif

        setupTextFile()

        inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { [weak self] buffer, _ in
            self?.processAudioBuffer(buffer: buffer)
        }

        do {
            try audioEngine.start()
            print("Recording started with format: \(audioFormat)")
        } catch {
            print("Failed to start audio engine: \(error.localizedDescription)")
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        print("Recording stopped.")
    }

    private func setupTextFile() {
        let tempDir = FileManager.default.temporaryDirectory
        let textFileURL = tempDir.appendingPathComponent("audioData.txt")
        FileManager.default.createFile(atPath: textFileURL.path, contents: nil, attributes: nil)
        fileHandle = try? FileHandle(forWritingTo: textFileURL)
    }

    private func processAudioBuffer(buffer: AVAudioPCMBuffer) {
        guard let channelData = buffer.floatChannelData else { return }
        let channelSamples = channelData[0]
        let frameLength = Int(buffer.frameLength)

        var textData = ""
        var allZero = true
        for i in 0..<frameLength {
            let sample = channelSamples[i]
            if sample != 0 {
                allZero = false
            }
            textData += "\(sample)\n"
        }

        if allZero {
            print("Got \(frameLength) frames of audio data on \(buffer.stride) channels. All data is zero.")
        } else {
            print("Got \(frameLength) frames of audio data on \(buffer.stride) channels.")
        }

        // Write to file
        if let data = textData.data(using: .utf8) {
            fileHandle?.write(data)
        }
    }
}
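One thing the code above never does on iOS is put the shared AVAudioSession into a recording-capable category before starting the engine; the default category does not support input, which is a plausible cause of the zeroed buffers (an assumption, not a confirmed fix):

// iOS only: configure the session for recording before audioEngine.start().
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement)
    try session.setActive(true)
} catch {
    print("Failed to configure audio session: \(error)")
}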
How can I support external cameras in my iPadOS app and use Swift to read multiple camera feeds?
Thanks!
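A minimal sketch of discovering external cameras (the .external device type requires iPadOS 17 or later; reading multiple feeds takes one AVCaptureSession per device, or an AVCaptureMultiCamSession where the hardware supports it):

import AVFoundation

// Find built-in and external (e.g. UVC) cameras.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .external],
    mediaType: .video,
    position: .unspecified)

for device in discovery.devices {
    print("Camera: \(device.localizedName)")
}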
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (although I'm not sure how to do it reliably) to cause the app to no longer be able to instantiate the audiounit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type...")
In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails.
If I "Archive" the app+plugin, there is no audio unit extension in the bundle.
If I switch to the audio unit extension scheme and build it, it's fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there.
Any ideas? If I can coax an unmodified audio unit extension project to exhibit this behavior I'll attach it here. Right now what I have has code I don't want to share.
Can I use the IOKit USB library to disable the built-in camera?
Hello,
Does anyone have a recipe on how to raycast VNFaceLandmarkRegion2D points obtained from a frame's capturedImage?
More specifically, how to construct the "from" parameter of the frame's raycastQuery from a VNFaceLandmarkRegion2D point?
Do the points need to be flipped vertically? Is there any other transformation that needs to be performed on the points prior to passing them to raycastQuery?
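A sketch of one plausible pipeline (an assumption, not a verified recipe; it also ignores image-orientation handling): convert the landmark to image pixels with pointsInImage(imageSize:), which uses Vision's lower-left origin, then normalize and flip vertically, since raycastQuery(from:allowing:alignment:) expects normalized image coordinates with a top-left origin:

import ARKit
import Vision

func raycast(region: VNFaceLandmarkRegion2D, in frame: ARFrame, session: ARSession) -> ARRaycastResult? {
    let width = CVPixelBufferGetWidth(frame.capturedImage)
    let height = CVPixelBufferGetHeight(frame.capturedImage)
    // Vision points come back in image pixels with a lower-left origin.
    guard let point = region.pointsInImage(imageSize: CGSize(width: width, height: height)).first else {
        return nil
    }
    // Normalize and flip vertically for ARKit's top-left-origin image space.
    let normalized = CGPoint(x: point.x / CGFloat(width), y: 1.0 - point.y / CGFloat(height))
    let query = frame.raycastQuery(from: normalized, allowing: .estimatedPlane, alignment: .any)
    return session.raycast(query).first
}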
I was trying to set a custom audio output device for generated audio on Mac Catalyst. While using:

let status = AudioUnitSetProperty(outputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  0,
                                  &outputDeviceID,
                                  UInt32(MemoryLayout<AudioDeviceID>.size))

kAudioOutputUnitProperty_CurrentDevice is invalid, and status is -10879 (kAudioUnitErr_InvalidProperty), indicating an error.
STEPS TO REPRODUCE
Set Run Destination to macOS and run the program. "AudioUnitSetProperty: 0" should be printed, indicating it works fine.
Set Run Destination to Mac Catalyst and run the program. "Error setting output device: -10879" should be printed, indicating the error.
I’m facing a problem while trying to achieve spatial audio effects in my iOS 18 app. I have tried several approaches to get good 3D audio, but the effect never felt good enough or it didn’t work at all.
Also what mostly troubles me is I noticed that AirPods I have doesn’t recognize my app as one having spatial audio (in audio settings it shows "Spatial Audio Not Playing"). So i guess my app doesn't use spatial audio potential.
First approach uses AVAudioEnviromentNode with AVAudioEngine. Chaining position of player as well as changing listener’s doesn’t seem to change anything in how audio plays.
Here's simple how i initialize AVAudioEngine
import Foundation
import AVFoundation

class AudioManager: ObservableObject {
    // important class variables
    var audioEngine: AVAudioEngine!
    var environmentNode: AVAudioEnvironmentNode!
    var playerNode: AVAudioPlayerNode!
    var audioFile: AVAudioFile?
    ...

    // Sound setup
    func setupAudio() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Failed to configure AVAudioSession: \(error.localizedDescription)")
        }

        audioEngine = AVAudioEngine()
        environmentNode = AVAudioEnvironmentNode()
        playerNode = AVAudioPlayerNode()

        audioEngine.attach(environmentNode)
        audioEngine.attach(playerNode)
        audioEngine.connect(playerNode, to: environmentNode, format: nil)
        audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil)

        environmentNode.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
        environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 0, pitch: 0, roll: 0)
        environmentNode.distanceAttenuationParameters.referenceDistance = 1.0
        environmentNode.distanceAttenuationParameters.maximumDistance = 100.0
        environmentNode.distanceAttenuationParameters.rolloffFactor = 2.0

        // example.mp3 is a mono sound
        guard let audioURL = Bundle.main.url(forResource: "example", withExtension: "mp3") else {
            print("Audio file not found")
            return
        }
        do {
            audioFile = try AVAudioFile(forReading: audioURL)
        } catch {
            print("Failed to load audio file: \(error)")
        }
    }
    ...

    // Playing sound
    func playSpatialAudio(pan: Float) {
        guard let audioFile = audioFile else { return }
        // left side
        playerNode.position = AVAudio3DPoint(x: pan, y: 0, z: 0)
        playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
        do {
            try audioEngine.start()
            playerNode.play()
        } catch {
            print("Failed to start audio engine: \(error)")
        }
        ...
    }
}
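One detail worth checking in the snippet above: AVAudioEnvironmentNode only spatializes inputs connected with a mono format, so format: nil may negotiate a stereo connection that silently bypasses 3D rendering. A variant of that connection (the 44.1 kHz rate is an arbitrary choice here):

// Force a mono connection so the environment node actually spatializes the player.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
audioEngine.connect(playerNode, to: environmentNode, format: monoFormat)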
The second, more complex approach, using PHASE, did better. I've made an example app that allows players to move an audio player in 3D space. I added reverb and sliders changing the audio position up to 10 meters in each direction from the listener, but the audio only really seems to change left to right (the x axis); again, I think it might be because the app is not recognized as spatial.
// Crucial class variables:
class PHASEAudioController: ObservableObject {
    private var soundSourcePosition: simd_float4x4 = matrix_identity_float4x4
    private var audioAsset: PHASESoundAsset!
    private let phaseEngine: PHASEEngine
    private let params = PHASEMixerParameters()
    private var soundSource: PHASESource
    private var phaseListener: PHASEListener!
    private var soundEventAsset: PHASESoundEventNodeAsset?

    // Initialization of PHASE
    init() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .default, options: [])
            try session.setActive(true)
        } catch {
            print("Failed to configure AVAudioSession: \(error.localizedDescription)")
        }

        // Init PHASE engine
        phaseEngine = PHASEEngine(updateMode: .automatic)
        phaseEngine.defaultReverbPreset = .mediumHall
        phaseEngine.outputSpatializationMode = .automatic // nothing helps

        // Set listener position to (0,0,0) in world space
        let origin: simd_float4x4 = matrix_identity_float4x4
        phaseListener = PHASEListener(engine: phaseEngine)
        phaseListener.transform = origin
        phaseListener.automaticHeadTrackingFlags = .orientation
        try! self.phaseEngine.rootObject.addChild(self.phaseListener)

        do {
            try self.phaseEngine.start()
        } catch {
            print("Could not start PHASE engine")
        }

        audioAsset = loadAudioAsset()

        // Create sound source
        // Sphere
        soundSourcePosition.translate(z: 3.0)
        let sphere = MDLMesh.newEllipsoid(withRadii: vector_float3(0.1, 0.1, 0.1), radialSegments: 14, verticalSegments: 14, geometryType: MDLGeometryType.triangles, inwardNormals: false, hemisphere: false, allocator: nil)
        let shape = PHASEShape(engine: phaseEngine, mesh: sphere)
        soundSource = PHASESource(engine: phaseEngine, shapes: [shape])
        soundSource.transform = soundSourcePosition
        print(soundSourcePosition)
        do {
            try phaseEngine.rootObject.addChild(soundSource)
        } catch {
            print("Failed to add a child object to the scene.")
        }

        let simpleModel = PHASEGeometricSpreadingDistanceModelParameters()
        simpleModel.rolloffFactor = rolloffFactor
        soundPipeline.distanceModelParameters = simpleModel

        let samplerNode = PHASESamplerNodeDefinition(
            soundAssetIdentifier: audioAsset.identifier,
            mixerDefinition: soundPipeline,
            identifier: audioAsset.identifier + "_SamplerNode")
        samplerNode.playbackMode = .looping
        do {
            soundEventAsset = try phaseEngine.assetRegistry.registerSoundEventAsset(
                rootNode: samplerNode,
                identifier: audioAsset.identifier + "_SoundEventAsset")
        } catch {
            print("Failed to register a sound event asset.")
            soundEventAsset = nil
        }
    }

    // Playing sound
    func playSound() {
        // Fire a new sound event with the currently set properties
        guard let soundEventAsset else { return }
        params.addSpatialMixerParameters(
            identifier: soundPipeline.identifier,
            source: soundSource,
            listener: phaseListener)
        let soundEvent = try! PHASESoundEvent(engine: phaseEngine,
                                              assetIdentifier: soundEventAsset.identifier,
                                              mixerParameters: params)
        soundEvent.start(completion: nil)
    }
    ...
}
Also worth mentioning: I only have a personal team account.
Hi,
I'm working on an audio mixing app, that comes with bundled audio units that provide some of the app's core functionality.
For the next release of that app, we are planning to make two changes:
make the app sandboxed
package the bundled audio units as .appex bundles instead of as .component bundles, so we don't need to take care of the installation at the correct spot in the file system
When trying this new approach, we run into problems where [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:] crashes when trying to load our audio unit, with the exception:
AVAEInternal.h:109 [AUInterface.mm:468:AUInterfaceBaseV3: (AudioComponentInstanceNew(comp, &_auv2)): error -10863
Our audio unit has the sandboxSafe flag enabled and loads fine when the host app is not sandboxed, so I'm guessing I got the bundle ID/code-signing requirements for the .appex correct.
It seems that my .appex isn't even loaded, and the system rejects it because of its metadata. Maybe there's something wrong with the Info.plist generated by JUCE?
"BuildMachineOSBuild" => "23H222"
"CFBundleDisplayName" => "elgato_sample_recorder"
"CFBundleExecutable" => "ElgatoSampleRecorder"
"CFBundleIdentifier" => "com.iwascoding.EffectLoader.samplerecorderAUv3"
"CFBundleName" => "elgato_sample_recorder"
"CFBundlePackageType" => "XPC!"
"CFBundleShortVersionString" => "1.0.0.0"
"CFBundleSignature" => "????"
"CFBundleSupportedPlatforms" => [
0 => "MacOSX"
]
"CFBundleVersion" => "1.0.0.0"
"DTCompiler" => "com.apple.compilers.llvm.clang.1_0"
"DTPlatformBuild" => "24C94"
"DTPlatformName" => "macosx"
"DTPlatformVersion" => "15.2"
"DTSDKBuild" => "24C94"
"DTSDKName" => "macosx15.2"
"DTXcode" => "1620"
"DTXcodeBuild" => "16C5032a"
"LSMinimumSystemVersion" => "10.13"
"NSExtension" => {
"NSExtensionAttributes" => {
"AudioComponents" => [
0 => {
"description" => "Elgato Sample Recorder"
"factoryFunction" => "elgato_sample_recorderAUFactoryAUv3"
"manufacturer" => "Manu"
"name" => "Elgato: Elgato Sample Recorder"
"sandboxSafe" => 1
"subtype" => "Znyk"
"tags" => [
0 => "Effects"
]
"type" => "aufx"
"version" => 65536
}
]
}
"NSExtensionPointIdentifier" => "com.apple.AudioUnit-UI"
"NSExtensionPrincipalClass" => "elgato_sample_recorderAUFactoryAUv3"
}
"NSHighResolutionCapable" => 1
}
Any ideas what I am missing?
I noticed that AVSampleBufferDisplayLayerContentLayer is not released when the AVSampleBufferDisplayLayer is removed and released.
It is possible to reproduce the issue with the simple code:
import AVFoundation
import UIKit

class ViewController: UIViewController {
    var displayBufferLayer: AVSampleBufferDisplayLayer?

    override func viewDidLoad() {
        super.viewDidLoad()

        let displayBufferLayer = AVSampleBufferDisplayLayer()
        displayBufferLayer.videoGravity = .resizeAspectFill
        displayBufferLayer.frame = view.bounds
        view.layer.insertSublayer(displayBufferLayer, at: 0)
        self.displayBufferLayer = displayBufferLayer

        DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
            self.displayBufferLayer?.flush()
            self.displayBufferLayer?.removeFromSuperlayer()
            self.displayBufferLayer = nil
        }
    }
}
In my real project I have multiple AVSampleBufferDisplayLayers created and removed in different view controllers; this is problematic because the number of leaked AVSampleBufferDisplayLayerContentLayer instances keeps increasing.
I wonder that maybe I should use a pool of AVSampleBufferDisplayLayer and reuse them, however I'm slightly afraid that this can also lead to strange bugs.
Edit: it doesn't leak on an iOS 18 device, but it does leak on an iPad Pro running iPadOS 17.5.1.
Context:
I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup.
I am not activating the audio session manually. The audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate.
I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking.
Issue
When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.
Details:
The audio session is already active and configured with .playAndRecord.
The input tap is already installed when the engine is started.
When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio.
Assumptions / Current Setup
Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio.
However, there seems to be a delay before valid input is delivered to the tap, only occurring when switching from a receive state to simultaneously talking.
Questions
Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine?
Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio?
Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer?
Would using separate engines for input and output improve responsiveness?
I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately without the delay seen in my current implementation.
Relevant Code Snippets
Engine Setup
func setup() {
    let input = audioEngine.inputNode
    do {
        try input.setVoiceProcessingEnabled(true)
    } catch {
        print("Could not enable voice processing \(error)")
        return
    }
    input.isVoiceProcessingAGCEnabled = false

    let output = audioEngine.outputNode
    let mainMixer = audioEngine.mainMixerNode
    audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(beepNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(mainMixer, to: output, format: outputFormat)

    // Initialize converters
    converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
    f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)!

    audioEngine.prepare()
}
Input Tap Installation
func installTap() {
    guard AudioHandler.shared.checkMicrophonePermission() else {
        print("Microphone not granted for recording")
        return
    }
    guard !isInputTapped else {
        print("[AudioEngine] Input is already tapped!")
        return
    }

    let input = audioEngine.inputNode
    let microphoneFormat = input.inputFormat(forBus: 0)
    let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)!
    let desiredFormat = outputFormat
    let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate)

    input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: input.inputFormat(forBus: 0)) { [weak self] buffer, when in
        guard let self = self else { return }
        // Output buffer: 1920 frames at 16 kHz
        guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat, frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return }
        outputBuffer.frameLength = outputBuffer.frameCapacity

        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            outStatus.pointee = .haveData
            return buffer
        }
        var error: NSError?
        let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
        if converterResult != .haveData {
            DebugLogger.shared.print("Downsample error \(converterResult)")
        } else {
            self.handleDownsampledBuffer(outputBuffer)
        }
    }
    isInputTapped = true
}
When setting the now playing info for playing media in MPNowPlayingInfoCenter, we can set artwork. But it seems the Apple API for creating the artwork crashes on iOS 18 (FB15145734).
On iOS 17 this gave a warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()
        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
I'm wondering if there is an alternative method to set the now playing artwork?
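One possible workaround, sketched under the assumption that the crash comes from the handler touching main-actor state off the main thread (it does not address FB15145734 itself if MPMediaItemArtwork is at fault): capture an immutable copy up front, so the request handler, which may run on any thread, never touches shared state:

// Variant of the artwork lines in the snippet above.
let artworkImage = image.copy() as! NSImage
nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(
    boundsSize: artworkImage.size,
    requestHandler: { _ in artworkImage })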