I've been trying to use AVMIDIControlChangeEvent with a bankSelect message type to change the instrument the sequencer uses on an AVMusicTrack, with no luck.
I started with the Apple AVAEMixerSample, converting the initial setup/loading and the portions dealing with the sequencer to Swift. I got that working and playing the "bluesyRiff", and then modified it to play individual notes. So my createAndSetupSequencer looked like this:
func createAndSetupSequencer() {
    sequencer = AVAudioSequencer(audioEngine: engine)
    // guard let midiFileURL = Bundle.main.url(forResource: "bluesyRiff", withExtension: "mid") else {
    //     print("failed guard trying to get URL for bluesyRiff")
    //     return
    // }
    let track = sequencer.createAndAppendTrack()
    var currTime = 1.0
    for i: UInt32 in 0...8 {
        let newNoteEvent = AVMIDINoteEvent(channel: 0, key: 60 + i, velocity: 64, duration: 2.0)
        track.addEvent(newNoteEvent, at: AVMusicTimeStamp(currTime))
        currTime += 2.0
    }
}
The notes played, so I then also replaced the gs_instruments sound bank with GeneralUser GS MuseScore v1.442, first by trying:
guard let soundBankURL = Bundle.main.url(forResource: "GeneralUser GS MuseScore v1.442", withExtension: "sf2") else {
    return
}
do {
    try sampler.loadSoundBankInstrument(at: soundBankURL, program: 0x001C, bankMSB: 0x79, bankLSB: 0x08)
} catch {
    ....
}
This appears to work: the instrument (8, which is "Funk Guitar") plays. If I change to bankLSB: 0x00 I get the "Palm Muted Guitar", so I know that the soundfont has these instruments.
Stuff goes off the rails when I try to change the instruments in createAndSetupSequencer. Putting
let programChange = AVMIDIProgramChangeEvent(channel: 0, programNumber: 0x001C)
let bankChange = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.bankSelect, value: 0x00)
track.addEvent(programChange, at: AVMusicTimeStamp(1.0))
track.addEvent(bankChange, at: AVMusicTimeStamp(1.0))
just before my add-note loop doesn't produce any change. Loading bankLSB 8 (Funk) in sampler.loadSoundBankInstrument and trying to change with bankSelect 0 (Palm Muted) in createAndSetupSequencer results in instrument 8 (Funk) playing, not Palm Muted.
Loading bankLSB 0 (Palm Muted) and trying to change with bankSelect 8 (Funk) doesn't work either; 0 (Palm Muted) plays.
I also tried sampler.loadInstrument(at: soundBankURL), and then I always get the first instrument in the sound font file (piano), no matter what values I put in my programChange/bankChange.
I've also changed the time in track.addEvent to 0, 1.0, 3.0, etc., with no success.
The sampler.loadSoundBankInstrument call specifies two UInt8 parameters, bankMSB and bankLSB, while the AVMIDIControlChangeEvent bankSelect value is a UInt32, suggesting it might be some combination of bankMSB and bankLSB. But the documentation makes no mention of what this should look like. I tried various combinations such as 0x7908, 0x0879, etc., to no avail.
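For what it's worth, in raw MIDI a bank select is two separate 0-127 controller values (CC 0 for the MSB and CC 32 for the LSB) followed by a program change, so my working assumption (not confirmed anywhere in the docs) is that the event's value is a single controller value rather than a packed MSB/LSB pair. The ordering I have been trying to express is roughly:
// A sketch of the intended ordering, assuming bankSelect corresponds to CC 0 (bank MSB)
// and that value is a plain 0-127 controller value rather than a packed MSB/LSB pair.
// The bank LSB (CC 32 in raw MIDI) would still need to be sent somehow to pick
// bank 8 vs. bank 0 in this soundfont.
let bankMSBChange = AVMIDIControlChangeEvent(channel: 0, messageType: .bankSelect, value: 0x79)
let progChange = AVMIDIProgramChangeEvent(channel: 0, programNumber: 0x1C)
track.addEvent(bankMSBChange, at: AVMusicTimeStamp(0.0))
track.addEvent(progChange, at: AVMusicTimeStamp(0.0))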
I will also point out that I am able to successfully execute other control change events.
For example, adding
if i == 1 {
    let portamentoOnEvent = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.portamento, value: 0xFF)
    track.addEvent(portamentoOnEvent, at: AVMusicTimeStamp(currTime))
    let portamentoRateEvent = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.portamentoTime, value: 64)
    track.addEvent(portamentoRateEvent, at: AVMusicTimeStamp(currTime))
}
does produce a change in the sound. (As an aside, a definition of what portamento time is, other than "the rate of portamento", would be welcome. Is it notes per second? Frequency per minute? Beats per hour?)
I was able to get the instrument to change in a different program using MusicPlayer and a series of MusicTrackNewMIDIChannelEvent calls on a track, but those operate on a MusicTrack, not the AVMusicTrack that the sequencer uses.
Has anyone been successful in switching instruments through an AVMIDIControlChangeEvent or have any feedback on how to do this?
I use the Apple Music API to poll my listening history at regular intervals.
Every morning between 5:30AM and 7:30AM, I observe a strange pattern in the API responses. During this window, one or more of the regular polling intervals returns a response that differs significantly from the prior history response, even though I had no listening activity at that time.
I'm using this endpoint: https://api.music.apple.com/v1/me/recent/played/tracks?types=songs,library-songs&include[library-songs]=catalog&include[songs]=albums,artists
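For context, each poll is a plain authorized GET against that endpoint; a minimal sketch of the request (the developerToken and musicUserToken values here are placeholders, not part of my actual setup):
import Foundation

func fetchRecentlyPlayed(developerToken: String, musicUserToken: String) async throws -> Data {
    var components = URLComponents(string: "https://api.music.apple.com/v1/me/recent/played/tracks")!
    components.queryItems = [
        URLQueryItem(name: "types", value: "songs,library-songs"),
        URLQueryItem(name: "include[library-songs]", value: "catalog"),
        URLQueryItem(name: "include[songs]", value: "albums,artists")
    ]
    var request = URLRequest(url: components.url!)
    // Standard Apple Music API authorization headers.
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    request.setValue(musicUserToken, forHTTPHeaderField: "Music-User-Token")
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}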
Here’s a concrete example from this morning:
Time: 5:45AM
Fetch 1 Tracks (subset):
1799261990, 1739657416, 1786317143, 1784288789, 1743250261, 1738681804, 1789325498, 1743036755, ...
Time: 5:50AM
Fetch 2 Tracks (subset):
1799261990, 1739657416, 1786317143, 1623924746, 1635185172, 1574004238, 1198763630, 1621299055, ...
Time: 5:55AM
Fetch 3 Tracks (subset):
1799261990, 1739657416, 1786317143, 1784288789, 1743250261, 1738681804, 1789325498, 1743036755, ...
At 5:50, a materially different history is returned; at the next poll it reverts to the prior history. I've listened to all of the tracks in each set, but the 5:50 history drops some tracks and returns some from further back in history.
I've connected other accounts and the behavior is consistent and repeatable every day across them. It appears the API is temporarily returning a different (possibly outdated or cached?) view of the user's history during that early morning window.
Has anyone seen this behavior before?
Is this a known issue with the Apple Music API or MusicKit backend? I'd love any insights into what might cause this, or recommendations on how to work around it.
Mobile app - Ellie's Gift
https://apps.apple.com/gb/app/ellies-gift/id1617597875
Using AVFoundation to play audio tracks within the app.
It has always worked fine across Apple and Android, but iPhone 14 and newer devices are unable to play audio.
Any ideas or suggestions?
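For reference, a minimal sketch of the kind of AVFoundation playback path involved (the session category, resource name, and player setup here are illustrative assumptions, not the app's actual code):
import AVFoundation

var player: AVAudioPlayer?

func playTrack() throws {
    // Configure and activate the shared audio session for playback before starting the player.
    try AVAudioSession.sharedInstance().setCategory(.playback)
    try AVAudioSession.sharedInstance().setActive(true)

    // "track" is a placeholder resource name.
    guard let url = Bundle.main.url(forResource: "track", withExtension: "mp3") else { return }
    player = try AVAudioPlayer(contentsOf: url)
    player?.play()
}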
I'm building a camera app that does some post processing after the photo has been taken. With 12MP images the processing is pretty good, but with larger images (24MP) it is very slow.
I created a very simple example to demonstrate the issue, which loads an image and then renders it to JPEG data.
import Foundation
import CoreImage

let context = CIContext()
let imageUrl = Bundle.main.url(forResource: "12mp", withExtension: "jpg")!
let imageData = try! Data(contentsOf: imageUrl)
let ciImage = CIImage(data: imageData)!

let start = CFAbsoluteTimeGetCurrent()
let jpegData = context.jpegRepresentation(of: ciImage, colorSpace: context.workingColorSpace!)
print(jpegData?.count ?? 0)
print("Resize Completed: " + String(CFAbsoluteTimeGetCurrent() - start))
Running this code on an iPhone 16 Pro with different images produces these benchmarks:
12MP => 0.03s
24MP => 1.22s
48MP => 2.98s
I understand that processing time will increase with resolution, but it doesn't seem linear. I have tried setting different CIContext options, such as .useSoftwareRenderer: false, but it has made no difference.
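For reference, options of that kind are passed when the context is created; a sketch (only .useSoftwareRenderer is mentioned above, .cacheIntermediates is just another example of a toggle that can be tried):
let fasterContext = CIContext(options: [
    .useSoftwareRenderer: false,   // prefer GPU rendering
    .cacheIntermediates: false     // skip caching of intermediate results
])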
From profiling the process it looks like the JPEG decoding is the bottleneck. This is for a 48MP image:
Is there any way this can be improved?
I’m developing a hybrid app (WebView / Turbo Native) that uses getUserMedia to access the back camera for a PPG/heart rate measurement feature (the user places their finger on the camera).
Problem: Even when I specify constraints like:
{
  video: {
    deviceId: '...',
    facingMode: { exact: 'environment' },
    advanced: [{ zoom: 1.0 }]
  },
  audio: false
}
On iPhone 15 (iOS 18), iOS unexpectedly switches between the wide, ultra-wide, and telephoto lenses during the measurement.
This breaks the heart rate detection, and it forces the user to move their finger in the middle of the measurement.
Question: Is there any way, via getUserMedia/WebRTC, to force iOS to use only the wide-angle lens and prevent automatic lens switching?
I know that with AVFoundation (Swift) you can pick .builtInWideAngleCamera, but I’m hoping to avoid building a custom native layer and would prefer to stick with WebView/JavaScript if possible to save time and complexity.
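For comparison, the native route mentioned above would look roughly like this (a sketch, not code from the app):
import AVFoundation

let session = AVCaptureSession()
// Request the fixed wide-angle back camera specifically, so the system cannot
// substitute the ultra-wide or telephoto modules during capture.
if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: device),
   session.canAddInput(input) {
    session.addInput(input)
}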
Any suggestions, workarounds, or updates from Apple would be greatly appreciated!
Thanks a lot!
According to the documentation (https://developer.apple.com/documentation/avfoundation/avplayeritem/externalmetadata), AVPlayerItem should have an externalMetadata property. However it does not appear to be visible to my app. When I try, I get:
Value of type 'AVPlayerItem' has no member 'externalMetadata'
Documentation states iOS 12.2+; I am building with a minimum deployment target of iOS 18.
Code snippet:
import Foundation
import AVFoundation
/// ... in function ...
// create metadata as described in https://developer.apple.com/videos/play/wwdc2022/110338
var title = AVMutableMetadataItem()
title.identifier = .commonIdentifierAlbumName
title.value = "My Title" as NSString?
title.extendedLanguageTag = "und"
var playerItem = await AVPlayerItem(asset: composition)
playerItem.externalMetadata = [ title ]
I wanted to ask whether I can use a Camera Capture Extension with AVMultiCamPiP. The LockedCameraCaptureUIScene closure only returns a single session, and I can't use that same session for both the front and back cameras at the same time.
Does an artist similarity station broaden selection variety compared to a song similarity station?
You don't have to answer if it is against nondisclosure terms.
Hi, I'm working on a project that requires video frame PTS to be consistent between the original video and a transcoded one. It's working fairly well for regular mp4, however if I set preferredOutputSegmentInterval to generate fMP4 output, even if I specify initialSegmentStartTime as 0, it always adds a one-frame PTS offset to all frames.
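For reference, the writer configuration in question looks roughly like this (a sketch based on the linked Apple sample; the segment interval value and the segmentDelegate object are assumptions):
import AVFoundation
import UniformTypeIdentifiers

let writer = try AVAssetWriter(contentType: UTType(AVFileType.mp4.rawValue)!)
writer.outputFileTypeProfile = .mpeg4AppleHLS
writer.preferredOutputSegmentInterval = CMTime(seconds: 6, preferredTimescale: 1)
writer.initialSegmentStartTime = .zero   // explicitly requesting a 0 start time
writer.delegate = segmentDelegate        // receives the init and media segment data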
For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and then use ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8 to display the PTS of the output, it doesn't start from 0 but has a one-frame PTS offset. I also tried opening it with MP4Box, which likewise shows that the first frame's DTS and CTS do not start from 0.
However, if I use AVAssetReader to read the same output video and get the PTS of the first frame, it returns 0. So I can't use it to calculate the PTS difference between the two videos either.
Can I get some help understanding why there is a difference between the fMP4 PTS reported by AVAssetWriter/AVAssetReader and by tools like ffprobe?
We are developing an Apple Music app for the phone. The web version works fine in Chrome, but when I load it in a WebView on my phone, I can't play the first song.
We suspect that the DRM init, key exchange, and session creation happen inside the music.play() function, and that when we trigger play, the DRM or session is not yet ready to play a real song, so it fails with an error.
So we would like to know:
What is the relative order of the DRM, key, and session steps inside the play() function?
Is there a state-detection function that shows whether the DRM is ready?
I am developing an app to stream and download DRM protected HLS videos based on the official “FairPlay Streaming Server SDK”.
When I play the downloaded video, it asks the server for .ts or .aac, even though I have passed the path of the downloaded video to AVURLAsset.
As a result, playback fails when the device is offline, such as in airplane mode.
This behavior depends on the playback time of the video and occurs when trying to download and play a video with a playback time of 19 hours or more.
It did not occur for videos with a playback time of 18 hours.
The environment we checked is iOS 18.3.
The solution at this time is to limit the video playback time to 18 hours, but if possible, we would like to allow download playback of videos longer than 19 hours.
Does anyone have any information or a workaround for this problem, for example if you have experienced this kind of issue, or if you know that content longer than 19 hours cannot be played offline?
// load
let path = ".../xxx.movpkg" // Path of the downloaded file
videoAsset = AVURLAsset(url: URL(fileURLWithPath: path))
playerItem = AVPlayerItem(asset: videoAsset!)
player.replaceCurrentItem(with: playerItem)

// isPlayableOffline
print("videoAsset.assetCache.isPlayableOffline = \(videoAsset!.assetCache?.isPlayableOffline ?? false)") // true
I prefer to use the album fetched from the library instead of the catalog since this is faster. If doing so, how can I check whether all tracks of an album have been added to the library? In that case I'd like to fetch the catalog version or throw an error (for example when offline).
Using .with(.tracks) on the library album fetches the tracks added to the library.
The trackCount property is referring to the tracks that can be fetched from the library.
The isComplete property is always nil when fetching from the library.
One possible way is checking the trackNumber and discCount properties. However, this only detects that not all tracks of an album are in the library when a missing track comes before one that has been added; it cannot detect tracks missing from the end. I'd like to be able to handle this edge case as well.
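A sketch of that trackNumber heuristic, assuming a MusicKit library album (libraryAlbum here) with its tracks fetched; it shares the limitation just described, since it cannot see tracks missing from the end of a disc:
import MusicKit

// Fetch the library album's tracks and check each disc for gaps in trackNumber.
let detailedAlbum = try await libraryAlbum.with([.tracks])
let songs = detailedAlbum.tracks?.compactMap { track -> Song? in
    if case .song(let song) = track { return song }
    return nil
} ?? []
let byDisc = Dictionary(grouping: songs) { $0.discNumber ?? 1 }
let looksIncomplete = byDisc.values.contains { discSongs in
    let numbers = discSongs.compactMap(\.trackNumber).sorted()
    guard let first = numbers.first else { return true }
    // A gap, or a first track number above 1, means an earlier track is missing.
    return first != 1 || zip(numbers, numbers.dropFirst()).contains { $1 - $0 > 1 }
}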
Is there currently a way to do this? I'd prefer not to rely on the Apple Music catalog for this, since it is supposed to work offline as well. Fetching and storing all track IDs while connected and later comparing against them would work, but that would potentially mean storing tens of thousands of track IDs.
Thank you
I'm experiencing an unexpected behavior with AVURLAsset and cookies. When setting cookies through AVURLAssetHTTPCookiesKey option, they seem to be sent only on the initial request but not on retry attempts.
Here's my current implementation:
let cookieProperties: [HTTPCookiePropertyKey: Any] = [
    .name: "sessionCookie",
    .value: "testValue",
    .domain: url.host ?? "",
    .path: "/",
    .secure: true
]

if let cookie = HTTPCookie(properties: cookieProperties) {
    let asset = AVURLAsset(url: url, options: [
        AVURLAssetHTTPCookiesKey: [cookie],
    ])
}
According to the documentation, AVURLAssetHTTPCookiesKey should apply the cookies to all requests made by this asset. However, when the initial request fails and AVPlayer retries, the cookies are not included in subsequent requests.
Only when I store the cookie with HTTPCookieStorage.shared.setCookie does it persist.
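That workaround, for reference (reusing the cookie built from cookieProperties above):
if let cookie = HTTPCookie(properties: cookieProperties) {
    // Stored globally, the cookie is sent on subsequent requests, including AVPlayer's retries.
    HTTPCookieStorage.shared.setCookie(cookie)
}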
Questions:
Is this the expected behavior?
If not, what could be causing the cookies to not persist for retry attempts?
Is using HTTPCookieStorage.shared the recommended approach instead?
Environment:
iOS 16+
Using AVPlayer with AVURLAsset
Streaming HLS content
Any insights would be greatly appreciated.
Hi there!
We have a suite of Audio Unit v2 plugins that have shipped for some time as aufx plugins, and we are looking into MIDI-related platform upgrades. We need a way to update these plugins so they can request MIDI from Logic (and other AU hosts) while keeping our AU type and subtype unchanged, so we don't break existing sessions. Any ideas on how we can do this?
We have a Push To Talk application which allows users to record video and audio.
When the user is recording a video using AVCaptureSession and receives a Push To Talk call, from the moment the call is received the audio in the video being captured stops while the video capture continues.
After the PTT call is completed, we have tried restarting the audio session; there are no errors printed, but we still don't see the audio resume in the video capture.
We have also tried adding a new input to the AVCaptureSession, but we receive an error that results in the video capture stopping; the error is below:
[OS-PLT] [CameraManager] Movie file finished with error: Error Domain=AVFoundationErrorDomain Code=-11818 "Recording Stopped" UserInfo={AVErrorRecordingSuccessfullyFinishedKey=true, NSLocalizedDescription=Recording Stopped, NSLocalizedRecoverySuggestion=Stop any other actions using the recording device and try again., AVErrorRecordingFailureDomainKey=1, NSUnderlyingError=0x3026bff60 {Error Domain=NSOSStatusErrorDomain Code=-16414 "(null)"}}, success
We have also raised a Feedback Ticket on same: https://feedbackassistant.apple.com/feedback/16050598
After the iOS 18.2 update, videos are not playing in Netflix, Amazon Prime, and YouTube.
I'm using AVSpeechUtterance for iOS speech synthesis with the language set to Simplified Chinese ("zh-CN"). The Chinese character "袆" (huī, first tone) is read incorrectly as "祎" (yī, first tone). I hope this can be improved.
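A minimal sketch that reproduces the report (availability of a zh-CN voice on the device is assumed):
import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "袆")            // expected: huī (first tone)
utterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")
synthesizer.speak(utterance)                                // observed: read as 祎 (yī, first tone)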
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to a AVAudioFile. The buffer and file formats match but I get a bad parameter error from the API. Does this API not support the specified format or is there some other issue?
Here is the debugger output.
(lldb) po audioFile.url
▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
- _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
- _parseInfo : nil
- _baseParseInfo : nil
(lldb) po error
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
(lldb) po buffer.format
<AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po audioFile.fileFormat
<AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po buffer.frameLength
882
(lldb) po buffer.audioBufferList
▿ 0x0000000300941e60
- pointerValue : 12894608992
This code handles the details of converting the Live Switch frame into an AVAudioPCMBuffer.
extension FMLiveSwitchAudioFrame {
    func convertedToPCMBuffer() -> AVAudioPCMBuffer {
        Self.convertToAVAudioPCMBuffer(from: self)!
    }

    static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
        // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
        guard
            let buffer = frame.buffer(),
            let format = buffer.format() as? FMLiveSwitchAudioFormat else { return nil }

        // Extract PCM format details from FMLiveSwitchAudioFormat
        let sampleRate = Double(format.clockRate())
        let channelCount = AVAudioChannelCount(format.channelCount())

        // Determine bytes per sample based on bit depth
        let bitsPerSample = 16
        let bytesPerSample = bitsPerSample / 8
        let bytesPerFrame = bytesPerSample * Int(channelCount)
        let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)

        // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
        guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: channelCount, interleaved: true) else {
            return nil
        }

        // Create an AudioBufferList to wrap the existing buffer
        let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
        audioBufferList.pointee.mNumberBuffers = 1
        audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
        audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
        audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer

        // Transfer ownership of the buffer to AVAudioPCMBuffer
        let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
            // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
            buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
        } */
        pcmBuffer?.frameLength = frameLength
        return pcmBuffer
    }
}
This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and optionally update a scrolling signal display on the screen.
private func onRaisedFrame(obj: Any!) -> Void {
    // Bail out early if no one is interested in the data.
    guard isMonitoring else { return }

    // Convert LS frame to AVAudioPCMBuffer (no-copy)
    let frame = obj as! FMLiveSwitchAudioFrame
    let buffer = frame.convertedToPCMBuffer()

    // Hand subscribers a reference to the buffer for rendering to display.
    bufferPublisher?.send(buffer)

    // If we have an output file, store the data there, as well.
    guard let audioFile = self.audioFile else { return }
    do {
        try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
    } catch {
        FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
        self.audioFile = nil
    }
}
This is how the audio file is being set up.
static var recordingFormat: AVAudioFormat = {
    AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
}()

let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
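One detail that may matter here, offered as a hedged sketch rather than a confirmed fix: AVAudioFile distinguishes the on-disk fileFormat from the processingFormat it expects in write(from:), and there is an initializer that lets the caller choose that processing format explicitly. Assuming the same outputURL and settings as above:
// Request an Int16 interleaved processing format up front so the format of the
// buffers passed to write(from:) matches what the file expects.
let int16File = try AVAudioFile(forWriting: outputURL,
                                settings: Self.recordingFormat.settings,
                                commonFormat: .pcmFormatInt16,
                                interleaved: true)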
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin.
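For context, a minimal sketch of the timer setup described above (the queue, beat interval, and playClick() are assumptions for illustration):
import Foundation

let beatInterval = 0.5 // e.g. 120 BPM
var nextBeatTime = CFAbsoluteTimeGetCurrent() + beatInterval

let timerQueue = DispatchQueue(label: "metronome.timer", qos: .userInteractive)
let timer = DispatchSource.makeTimerSource(queue: timerQueue)

// Fire roughly every 50 ms and compare against CFAbsoluteTimeGetCurrent() to decide
// whether the next beat falls inside the +/- 0.003 s window.
timer.schedule(deadline: .now(), repeating: .milliseconds(50), leeway: .milliseconds(1))
timer.setEventHandler {
    let now = CFAbsoluteTimeGetCurrent()
    if now >= nextBeatTime - 0.003 {
        playClick() // placeholder for the actual beat playback
        nextBeatTime += beatInterval
    }
}
timer.resume()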
The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds.
When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating.
I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance!
P.S. I’ve already enabled background audio permissions.
The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
Hi all, we are trying to migrate our project to Swift 6.
The project uses AVPlayer on the MainActor.
Selecting audio and subtitles does not work.
Task { @MainActor in
    let group = try await item.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible)
}
We get the error: Non-sendable type 'AVMediaSelectionGroup?' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary
And a second example:
if #available(iOS 15.0, *) {
    player?.currentItem?.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible, completionHandler: { group, error in
        if error != nil {
            return
        }
        if let groupWrp = group {
            DispatchQueue.main.async {
                self.setupAudio(groupWrp, audio: audioLang)
            }
        }
    })
}
We get the error: Sending 'groupWrp' risks causing data races