Hi,
I am trying to enable the default MIDINetworkSession in a Catalyst app on macOS like this:
MIDINetworkSession.default().isEnabled = true
MIDINetworkSession.default().connectionPolicy = .anyone
In the App Sandbox I have both incoming and outgoing network connections enabled, and I have also added the NSLocalNetworkUsageDescription key to the Info.plist. Bonjour services are also added to the Info.plist:
NSBonjourServices
_apple-midi._udp.
Nevertheless, the session stays disabled. Running the same code works just fine on iOS.
Is there any special setup I need to do on macOS to enable the MIDINetworkSession?
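For reference, here is the whole setup in one place; the keys in the comments are the ones already listed above, nothing beyond this post is assumed:

import CoreMIDI

// Consolidated sketch of the setup described above.
// Info.plist / entitlements (as already listed in this post):
//   NSLocalNetworkUsageDescription               (usage string)
//   NSBonjourServices                            _apple-midi._udp.
//   App Sandbox: incoming and outgoing network connections enabled
let session = MIDINetworkSession.default()
session.isEnabled = true
session.connectionPolicy = .anyone
print("Network MIDI enabled: \(session.isEnabled)") // prints false in the Catalyst build, true on iOS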
Thanks!
I'm trying to load MusicKit on the server with SolidJS. I can confirm that my implementation has been sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time). My question is: what is wrong with my implementation? Removing the async keyword ensures it loads every time, but playing and queuing music no longer works. I'm currently assuming this is an SSR issue, but the docs haven't explicitly stated that this isn't possible.
I have the following boilerplate:
export default createHandler(
  () => (
    <StartServer
      document={({ assets, children, scripts }) => {
        return (
          <html lang="en">
            <head>
              <meta name="apple-music-developer-token" content={authResult.token} />
              <meta name="apple-music-app-name" content="app name" />
              <meta name="apple-music-app-build" content="1978.4.1" />
              {assets}
              <script
                src="https://js-cdn.music.apple.com/musickit/v3/musickit.js"
                async
              />
            </head>
            <body>
              <div id="app">{children}</div>
              {scripts}
            </body>
          </html>
        )
      }}
    />
  ))
When I first load my app, I'll encounter:
musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
at musickit.js:13:10194
at musickit.js:13:140
at musickit.js:13:209
The intermittency suggests an issue related to the async keyword. An expansion on this issue can be found here.
Hi,
I have been working on a project that enables users to listen to their favorite music using a streaming service, which so far has been Spotify. The app has a programmable 3D/2D interface with the ability to connect to devices in your home and have them react to music. As of September 2024, Spotify decommissioned their Audio Analysis API. I have seen other posts mention playing Apple Music through AVFoundation, which would break DRM and so is not supported. However, what the Spotify Audio Analysis API provided does not allow for a full frequency reconstruction: it is entirely temporal data on beats, kicks, loudness, and timbre changes, which are themselves operators on the spectral data from the FFT. It would be very useful for the developer community if we got this ability, and it would probably make Apple Music a lot more popular among developers and those who use their apps.
Would love to hear your thoughts about this and Happy New Year!
We are using a VoiceProcessingIO audio unit in our VoIP application on Mac. In certain scenarios, the AudioComponentInstanceNew call blocks for up to five seconds (at least two). We are using the following code to initialize the audio unit:
OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);
We are having the issue with current macOS versions on a host of different Macs (x86 and x64 alike). It takes two to three seconds until AudioComponentInstanceNew returns.
We also see the following errors in the log multiple times:
AUVPAggregate.cpp:2560 AggInpStreamsChanged wait failed
and those right after (which I don't know if they matter to this issue):
KeystrokeSuppressorCore.cpp:44 ERROR: KeystrokeSuppressor initialization was unsuccessful. Invalid or no plist was provided. AU will be bypassed.
vpStrategyManager.mm:486 Error code 2003332927 reported at GetPropertyInfo
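For illustration, a hedged workaround sketch (written in Swift here, though the snippet above is C): create the sometimes-slow voice-processing unit off the main thread so the UI is not blocked while CoreAudio finishes its setup. This is only a sketch of the idea, not a confirmed fix for the delay itself.

import Foundation
import AudioToolbox

// Hedged sketch: perform the potentially slow AudioComponentInstanceNew call on a
// background queue, then hand the result back to the main thread.
DispatchQueue.global(qos: .userInitiated).async {
    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_VoiceProcessingIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0
    )
    guard let component = AudioComponentFindNext(nil, &desc) else { return }
    var unit: AudioComponentInstance?
    let status = AudioComponentInstanceNew(component, &unit)
    DispatchQueue.main.async {
        // Hand `unit` back to the audio setup path; `status` reports any failure.
        print("AudioComponentInstanceNew finished with status \(status)")
    }
}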
I'm building a camera app that does some post-processing after the photo has been taken. With 12MP images the processing speed is fine, but larger images (24MP) are very slow.
I created a very simple example to demonstrate the issue, which loads an image and then renders it to JPEG data.
let context = CIContext()
// Swap the resource name to benchmark the 12MP / 24MP / 48MP variants.
let imageUrl = Bundle.main.url(forResource: "12mp", withExtension: "jpg")!
let data = try! Data(contentsOf: imageUrl)
let ciImage = CIImage(data: data)!

let start = CFAbsoluteTimeGetCurrent()
let jpegData = context.jpegRepresentation(of: ciImage, colorSpace: context.workingColorSpace!)
print(jpegData?.count ?? 0)
print("Render completed: " + String(CFAbsoluteTimeGetCurrent() - start))
Running this code on an iPhone 16 Pro with different images produces these benchmarks:
12MP => 0.03s
24MP => 1.22s
48MP => 2.98s
I understand that processing time will increase with resolution, but it doesn't seem linear. I have tried setting different CIContext options such as .useSoftwareRenderer: false, but it has made no difference.
From profiling the process, it looks like the JPEG decoding is the bottleneck. This is for a 48MP image:
Is there any way this can be improved?
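One small, hedged addition to the snippet above that can help narrow this down: render to a CGImage first, so the JPEG decode plus Core Image render is timed separately from the JPEG encode.

// Continues the snippet above (same `context` and `ciImage`). createCGImage forces
// the decode and render, so this timing can be compared with jpegRepresentation(of:).
let decodeStart = CFAbsoluteTimeGetCurrent()
let rendered = context.createCGImage(ciImage, from: ciImage.extent)
print("Decode + render: \(CFAbsoluteTimeGetCurrent() - decodeStart)s (\(String(describing: rendered)))")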
After updating to 18.3, my iPad (7th generation) has no sound and will not play videos.
Hey, I have a complex CIFilter chain I'm trying to debug to improve processing time. Is there any documentation on what all the colours and the naming mean, e.g. sRGB Linear_to_workspace?
Thanks
Alex
Hello!
I am building a video camera app and trying to implement Apple log for iPhone 15 Pro and 16 Pro.
I am not seeing a lot of documentation on it, and I notice that the number of apps that use it on the App Store is rather limited. Fewer than 5, to be exact.
Is Apple Log recording a feature that is accessible to developers?
Here is a link to documentation: https://developer.apple.com/documentation/avfoundation/avcapturecolorspace/applelog
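Apple Log is exposed through the AVCaptureColorSpace.appleLog case linked above. Below is a hedged sketch, not a confirmed recipe, of opting a device into it; it assumes the active format lists .appleLog in its supported color spaces, which only certain Pro devices provide.

import AVFoundation

// Hedged sketch: switch the capture device's active color space to Apple Log,
// guarded by a check that the current format actually supports it.
func enableAppleLog(on device: AVCaptureDevice) throws {
    guard device.activeFormat.supportedColorSpaces.contains(.appleLog) else {
        print("Active format does not support Apple Log; select a format that does.")
        return
    }
    try device.lockForConfiguration()
    device.activeColorSpace = .appleLog
    device.unlockForConfiguration()
}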
I see in most of the older sample code from Apple that, when using AVAssetWriter to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronised using an NSLock or all the samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation stating that calls to assetWriterInput.append(sampleBuffer) for different media samples, such as audio and video, should not be made concurrently. Is it not valid for these methods to be executed in parallel? For instance:
`videoSamplesAssetWriterInput.append(videoSampleBuffer)` from DispatchQueue 1
`audioSamplesAssetWriterInput.append(audioSampleBuffer)` from DispatchQueue 2
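For illustration, a minimal sketch of the single-serial-queue pattern those older samples use; the class and queue label here are made up, only the two append calls above come from the question.

import AVFoundation

// Hedged sketch: funnel every append through one serial queue so the audio and
// video inputs are never written to concurrently.
final class SerializedSampleWriter {
    private let queue = DispatchQueue(label: "asset-writer.serial")
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput

    init(videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.videoInput = videoInput
        self.audioInput = audioInput
    }

    func appendVideo(_ sampleBuffer: CMSampleBuffer) {
        queue.async {
            if self.videoInput.isReadyForMoreMediaData {
                self.videoInput.append(sampleBuffer)
            }
        }
    }

    func appendAudio(_ sampleBuffer: CMSampleBuffer) {
        queue.async {
            if self.audioInput.isReadyForMoreMediaData {
                self.audioInput.append(sampleBuffer)
            }
        }
    }
}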
Can someone please tell me how I can update the cookies of a previously set m3u8 video in AVPlayer, without creating a new AVURLAsset and replacing the AVPlayer's current item with it?
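For context, the usual documented way to attach cookies is at asset-creation time, which is exactly the step being avoided here; a minimal sketch of that standard route (the URL and cookie source are illustrative):

import AVFoundation

// Standard route: cookies are passed as AVURLAsset creation options, so changing
// them normally means building a new asset and a new player item.
let cookies = HTTPCookieStorage.shared.cookies ?? []
let asset = AVURLAsset(
    url: URL(string: "https://example.com/stream.m3u8")!,
    options: [AVURLAssetHTTPCookiesKey: cookies]
)
let item = AVPlayerItem(asset: asset)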
I have tried everything. The songs load into the playlists and into search results, but when prompted to play, they just won't play.
I have a wrapper, since my main player (which carries the buttons for play/rewind/forward/etc.) is in Obj-C.
//
// ApplePlayerWrapper.swift
// UniversallyMac
//
// Created by Dorian Mattar on 11/10/24.
//
import Foundation
import MusicKit
import MediaPlayer
@objc public class MusicKitWrapper: NSObject {
    @objc public static let shared = MusicKitWrapper()
    private let player = ApplicationMusicPlayer.shared

    // Play the current track
    @objc public func play() {
        guard !player.queue.entries.isEmpty else {
            print("Queue is empty. Cannot start playback.")
            return
        }
        logPlayerState(message: "Before play")
        Task {
            do {
                try await player.prepareToPlay()
                try await player.play()
                print("Playback started successfully.")
            } catch {
                if let nsError = error as NSError? {
                    print("NSError Code: \(nsError.code), Domain: \(nsError.domain)")
                }
            }
            logPlayerState(message: "After play")
        }
    }

    // Log the current player state
    @objc public func logPlayerState(message: String = "") {
        print("Player State - \(message):")
        print("Playback Status: \(player.state.playbackStatus)")
        print("Queue Count: \(player.queue.entries.count)")
        // Only log current track details if the player is playing
        if player.state.playbackStatus == .playing {
            if let currentEntry = player.queue.currentEntry {
                print("Current Track: \(currentEntry.title)")
                print("Current Position: \(player.playbackTime) seconds")
                print("Track Length: \(currentEntry.endTime ?? 0.0) seconds")
            } else {
                print("No current track.")
            }
        } else {
            print("No track is playing.")
        }
        print("----------")
    }

    // Debug the queue
    @objc public func debugQueue() {
        print("Debugging Queue:")
        for (index, entry) in player.queue.entries.enumerated() {
            print("\(index): \(entry.title)")
        }
    }

    // Ensure track availability in the queue
    public func queueTracks(_ tracks: [Track]) {
        Task {
            do {
                for track in tracks {
                    // Validate Play Parameters
                    guard let playParameters = track.playParameters else {
                        print("Track \(track.title) has no Play Parameters.")
                        continue
                    }
                    // Log the Play Parameters
                    print("Track Title: \(track.title)")
                    print("Play Parameters: \(playParameters)")
                    print("Raw Values: \(track.id.rawValue)")
                    // Ensure the ID is valid
                    if track.id.rawValue.isEmpty {
                        print("Track \(track.title) has an invalid or empty ID in Play Parameters.")
                        continue
                    }
                    // Queue the track
                    try await player.queue.insert(track, position: .afterCurrentEntry)
                    print("Queued track: \(track.title)")
                }
                print("Tracks successfully added to the queue.")
            } catch {
                print("Error queuing tracks: \(error)")
            }
            debugQueue()
        }
    }

    // Clear the current queue
    @objc public func resetMusicPlayer() {
        Task {
            player.stop()
            player.queue.entries.removeAll()
            print("Queue cleared.")
            print("Apple Music player reset successfully.")
        }
    }
}
I opened an Apple Dev. ticket, but I'm trying here as well. Thanks!
I am trying to get access to raw audio samples from mic. I've written a simple example application that writes the values to a text file.
Below is my sample application. All the input samples in the buffers delivered to the input tap are zero. What am I doing wrong?
I did add the Privacy - Microphone Usage Description key to my application target properties and I am allowing microphone access when the application launches. I do find it strange that I have to provide permission every time even though in Settings > Privacy, my application is listed as one of the applications allowed to access the microphone.
class AudioRecorder {
    private let audioEngine = AVAudioEngine()
    private var fileHandle: FileHandle?

    func startRecording() {
        let inputNode = audioEngine.inputNode
        let audioFormat: AVAudioFormat
        #if os(iOS)
        let hardwareSampleRate = AVAudioSession.sharedInstance().sampleRate
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: 1)!
        #elseif os(macOS)
        audioFormat = inputNode.inputFormat(forBus: 0) // Use input node's current format
        #endif
        setupTextFile()
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { [weak self] buffer, _ in
            self!.processAudioBuffer(buffer: buffer)
        }
        do {
            try audioEngine.start()
            print("Recording started with format: \(audioFormat)")
        } catch {
            print("Failed to start audio engine: \(error.localizedDescription)")
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        print("Recording stopped.")
    }

    private func setupTextFile() {
        let tempDir = FileManager.default.temporaryDirectory
        let textFileURL = tempDir.appendingPathComponent("audioData.txt")
        FileManager.default.createFile(atPath: textFileURL.path, contents: nil, attributes: nil)
        fileHandle = try? FileHandle(forWritingTo: textFileURL)
    }

    private func processAudioBuffer(buffer: AVAudioPCMBuffer) {
        guard let channelData = buffer.floatChannelData else { return }
        let channelSamples = channelData[0]
        let frameLength = Int(buffer.frameLength)
        var textData = ""
        var allZero = true
        for i in 0..<frameLength {
            let sample = channelSamples[i]
            if sample != 0 {
                allZero = false
            }
            textData += "\(sample)\n"
        }
        if allZero {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels. All data is zero.")
        } else {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels.")
        }
        // Write to file
        if let data = textData.data(using: .utf8) {
            fileHandle!.write(data)
        }
    }
}
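Not an answer, just a hedged sketch, assuming (without confirmation) that the zeroed buffers are a permissions/TCC issue: check and request microphone access explicitly before starting the engine, rather than relying on the implicit prompt.

import AVFoundation

// Hedged sketch: gate startRecording() on an explicit microphone authorization check.
func startRecordingWithPermissionCheck(_ recorder: AudioRecorder) {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        recorder.startRecording()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .audio) { granted in
            DispatchQueue.main.async {
                if granted {
                    recorder.startRecording()
                } else {
                    print("Microphone access denied.")
                }
            }
        }
    default:
        print("Microphone access denied or restricted; check the Privacy settings.")
    }
}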
Is it possible to support external cameras in an iPadOS app and use Swift to read multiple camera feeds?
Thanks
I set both the AVCapturePhotoOutput and the AVCapturePhotoSettings with maxPhotoDimensions = .init(width: 8064, height: 6048), but I still get a 12MP photo.
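In case it helps narrow things down, a hedged sketch of the other piece 48MP capture needs: the device's active format must itself advertise the 8064x6048 dimensions. Names and flow here are illustrative, not a confirmed recipe.

import AVFoundation

// Hedged sketch: pick a format whose supportedMaxPhotoDimensions includes 8064x6048,
// make it the active format, then set maxPhotoDimensions on the output (and later
// on each AVCapturePhotoSettings).
func configureFor48MP(device: AVCaptureDevice, output: AVCapturePhotoOutput) throws {
    guard let format = device.formats.first(where: { format in
        format.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) else {
        print("No format on this device offers 8064x6048.")
        return
    }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
    output.maxPhotoDimensions = .init(width: 8064, height: 6048)
}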
ApplicationMusicPlayer with a queue created from a playlist crashes at random shortly after skipping back or forth using the controls embedded in the notification, with this error in the console log: applicationController: xpc service connection interrupted.
I've noticed that the issue occurs more frequently the shorter the time between skipping entries. Since ApplicationMusicPlayer runs in a remote process, the main app does not crash, but the music stops playing without any exception and the playback controls end up in an uninitialized state.
Here is how I'm initiating the queue:
let entries = playlist
.with(.entries).entries!
.map { ApplicationMusicPlayer.Queue.Entry($0) }
ApplicationMusicPlayer.shared.queue = .init(
entries, startingAt: entries.last
)
Please give me some tips on how to solve this.
EDIT:
The issue does not occur when navigating quickly through the station.
I've been working with MusicKit without enrolling in a developer account, and I hadn't run into any issues until I noticed nil on the SystemMusicPlayer item for some songs, and I don't understand why. Are some songs blocked from the framework? Or is it somehow a limitation of not having registered MusicKit with the bundle ID? I am planning on using MusicKit properly in prod and this is just a test app for a package I'm working on.
These are the nil songs which I got from the Discovery Station: https://music.apple.com/ca/album/okay/950816298?i=950816304
https://music.apple.com/ca/album/youre-so-cool/1670485433?i=1670485446
Hello,
I'm sporadically getting the unknown CoreHaptics error described in the title, when trying to create a CHHapticPatternPlayer from an .ahap file — this does not occur in a repeatable manner, unfortunately...
This occurs in a SpriteKit game, running on both iOS 17 and 18.
Anyone know what the error means, since it's apparently undocumented?
Note that the haptics engine is started right before creating the pattern, and the start request doesn't appear to fail...
Thanks,
D.
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!

func startAnalyzing() {
    inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("Sample rate mismatch; installing tap with the hardware sample rate.")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print("Audio engine failed to start: \(error)")
    }
}
I send the app to the background and then call startAnalyzing(), which reports an error, even though background recording permissions are configured.
error:
[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
Audio engine couldn't start.
Is starting the audio engine from the background not allowed?
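For what it's worth, 561145187 is the FourCC '!rec' (the session could not start recording), which is commonly reported when audio input is started while the app is in the background. A hedged sketch, assuming `recordingSession` above is AVAudioSession.sharedInstance(), of configuring and activating the session before the engine starts:

import AVFoundation

// Hedged sketch: activate a record-capable session up front (while the app is still
// in the foreground) and enable the "audio" background mode in Signing & Capabilities.
// An engine that is already running can then keep recording in the background;
// starting it from the background may still be refused with '!rec'.
func activateRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement, options: [.mixWithOthers])
    try session.setActive(true)
}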
Is value(forKey: "fileSize") considered accessing non-public API?
Has your app been rejected at the review stage because of this?
let resources = PHAssetResource.assetResources(for: asset)
if let resource = resources.first {
    if let fileSize = resource.value(forKey: "fileSize") as? Int {
        return fileSize
    }
}
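For comparison, a hedged sketch of a documented route to the same number: stream the resource's bytes through PHAssetResourceManager and count them. This is only an illustrative alternative, not a statement about whether the KVC approach passes review.

import Photos

// Hedged alternative sketch: count the resource's bytes via the public
// PHAssetResourceManager API instead of KVC on the undocumented "fileSize" key.
// Slower, since it actually streams the data.
func fileSize(of resource: PHAssetResource, completion: @escaping (Int) -> Void) {
    var total = 0
    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = false
    PHAssetResourceManager.default().requestData(for: resource, options: options) { chunk in
        total += chunk.count
    } completionHandler: { error in
        completion(error == nil ? total : 0)
    }
}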
I recently installed a rear-view camera in my car, and ever since, I've been experiencing a frustrating issue with my CarPlay. After about 15 seconds of playing audio via Bluetooth, the sound stops coming out of the speakers, even though the song continues to run in the background.
For context, my stereo system is an aftermarket unit that I installed to enable CarPlay functionality. Everything worked perfectly before adding the rear-view camera. Unfortunately, my unit does not have a port for a wired connection, so I can't test the audio using a cable.
Has anyone experienced a similar issue? Could the camera installation be interfering with the Bluetooth or audio system somehow? Any advice or troubleshooting tips would be greatly appreciated!