TL;DR: How should a possible race between the EXT-X-SESSION-KEY request and encrypted media segment requests be resolved?
I'm having trouble using a custom AVAssetResourceLoaderDelegate with a video manifest containing a VideoProtectionKey (VPK). My master manifest contains the rendition manifest URL and the VPK URL. When not using the custom resource loader delegate, everything works fine.
My custom resource loader delegate is implemented so that it first appends a prefix to the scheme of the master manifest URL before creating the asset. While handling the master manifest, it puts back the original scheme, makes the request, and then modifies the scheme of the rendition manifest URL in the response content by appending the same prefix again, so that the rendition manifest request also goes through the custom resource loader delegate. The same applies to the VPK request. The AES-128 key is stored in memory within the custom resource loader delegate object. So far so good.
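For illustration, this is roughly the shape of the scheme-swapping logic (a simplified sketch using a hypothetical "custom-" prefix and no real error handling, not our production code):

import AVFoundation

final class ManifestLoaderDelegate: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        guard let url = loadingRequest.request.url,
              url.scheme?.hasPrefix("custom-") == true,
              var components = URLComponents(url: url, resolvingAgainstBaseURL: false) else { return false }

        // Restore the original scheme before going to the network.
        components.scheme = String(url.scheme!.dropFirst("custom-".count))
        guard let realURL = components.url else { return false }

        URLSession.shared.dataTask(with: realURL) { data, _, error in
            guard var data = data else {
                loadingRequest.finishLoading(with: error)
                return
            }
            // For manifests, re-prefix nested URLs so they come back through this delegate.
            if realURL.pathExtension == "m3u8", let text = String(data: data, encoding: .utf8) {
                data = Data(text.replacingOccurrences(of: "https://", with: "custom-https://").utf8)
            }
            loadingRequest.dataRequest?.respond(with: data)
            loadingRequest.finishLoading()
        }.resume()
        return true
    }
}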
The VPK is requested before the segment requests. But the problem comes when the media segment requests happen: the media segment URLs from the rendition manifest go through the custom resource loader as well, and those segments are encrypted. I can see a segment request finish first, and only a few seconds later do the related VPK requests kick in. The previous VPK value is cached in memory, so it is not the network causing the delay but some mechanism I'm not aware of.
So could anyone tell me the proper way to handle this situation? The native library handles it well, so I just want to know how. Thanks in advance!
Streaming
Deep dive into the technical specifications that influence seamless playback for streaming services, including bitrates, codecs, and caching mechanisms.
Hi everyone,
We’re currently developing a music-based app using MusicKit, and we recently noticed that iOS 26 beta introduces a new “Automix” feature in the Apple Music app. This enables seamless DJ-style transitions between songs—beyond the standard crossfade functionality.
We’re trying to understand:
Will this Automix feature be accessible to third-party apps that use MusicKit?
If not available in the initial iOS 26 release, is there a plan to expose it through public APIs in a future update?
Is there any technical documentation, WWDC session, or roadmap info regarding Automix support via MusicKit?
This functionality would be a significant enhancement for our app, especially for intelligent audio transitions and curated playlists.
Thanks.
The documentation for the Apple Music API indicates that the genreNames field for a given artist (see https://developer.apple.com/documentation/applemusicapi/artists/attributes-data.dictionary) is an array of strings. However, it appears that only one single genre is returned per artist, regardless of how many genres might be attached to that artist's albums.
Am I missing something? Is there an artist where multiple genres may be returned, or is this a bug in the documentation?
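For reference, this is roughly how I am checking it against the API (a simplified sketch; the artist ID, storefront, and developer token are placeholders):

import Foundation

// Hypothetical sketch: fetch an artist and read back attributes.genreNames.
func fetchArtistGenres(artistID: String, developerToken: String) async throws -> [String] {
    var request = URLRequest(url: URL(string: "https://api.music.apple.com/v1/catalog/us/artists/\(artistID)")!)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)

    // Drill down to data[0].attributes.genreNames without defining full model types.
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let first = (json?["data"] as? [[String: Any]])?.first
    let attributes = first?["attributes"] as? [String: Any]
    return attributes?["genreNames"] as? [String] ?? []
}

In my tests the returned array always contains a single entry.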
Hi everyone,
I noticed that Apple recently added a few new beta sample codes related to video encoding:
Encoding video for low-latency conferencing
Encoding video for live streaming
While experimenting with H.264 encoding, I came across some questions regarding certain configurations:
When I enable kVTVideoEncoderSpecification_EnableLowLatencyRateControl, is it still possible to use kVTCompressionPropertyKey_VariableBitRate? In my tests, I get an error.
It also seems that kVTVideoEncoderSpecification_EnableLowLatencyRateControl cannot be used together with kVTCompressionPropertyKey_ConstantBitRate when encoding H264. Is that expected?
When using kVTCompressionPropertyKey_ConstantBitRate with kVTCompressionPropertyKey_MaxKeyFrameInterval set to 2, the encoder outputs only keyframes, and the frame size keeps increasing, which doesn’t seem like the intended behavior.
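For context, this is roughly how the session is configured in my tests (a simplified sketch with placeholder dimensions, frame rate, and bitrate, not the full code); substituting the constant or variable bitrate keys for the average bitrate key below is where the errors described above appear:

import VideoToolbox

var session: VTCompressionSession?
let encoderSpec = [
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl: kCFBooleanTrue
] as CFDictionary

let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1280, height: 720,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: encoderSpec,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil, refcon: nil,
    compressionSessionOut: &session)

if status == noErr, let session {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate, value: 30 as CFNumber)
    // Average bitrate works with low-latency rate control in my tests.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate, value: 2_000_000 as CFNumber)
}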
Regarding the following code from the sample:
let byteLimit = (Double(bitrate) / 8) * 1.5 as CFNumber
let secLimit = Double(1.0) as CFNumber
let limitsArray = [ byteLimit, secLimit ] as CFArray // Each 1 second limit byte as bitrate
err = VTSessionSetProperty(session, key: kVTCompressionPropertyKey_DataRateLimits, value: limitsArray)
This DataRateLimits setting doesn’t seem to have any effect in my tests. Whether I set it or not, the values remain unchanged.
Since the documentation on developer.apple.com doesn’t clearly explain these cases, I’d like to ask if anyone has insights or recommendations about the proper usage of these settings.
Thanks in advance!
Hi there,
We're working on offline playback of DRM tracks. The persistent keys (also known as track licenses) for offline playback are stored locally on the device and are served from cache when a user initiates playback of a downloaded track.
Our persistent keys have a limited validity time and need to be refreshed when they expire. To prevent a situation where a persistent key expires while the user is offline, we've decided to eagerly refresh these keys one week before their expiration date. To make that happen we need to be able to obtain the expiration date of the given track license.
We've been attempting to use the makeSecureTokenForExpirationDateOfPersistableContentKey API to facilitate this process. The documentation states that this API returns a secret token representing the persistent key, which we can then exchange with our license server for the expiration date: https://developer.apple.com/documentation/avfoundation/avcontentkeysession/makesecuretokenforexpirationdate(ofpersistablecontentkey:completionhandler:)?language=objc
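For reference, this is roughly our call site (a simplified sketch; persistableKeyData stands in for the persistable content key data we saved when the track was downloaded, and error handling is trimmed):

import AVFoundation

let keySession = AVContentKeySession(keySystem: .fairPlayStreaming)
keySession.makeSecureToken(forExpirationDateOfPersistableContentKey: persistableKeyData) { token, error in
    if let error {
        // This is where we consistently receive the -46250 error.
        print("Failed to make expiration token: \(error)")
        return
    }
    // Otherwise we would send `token` to our license server and get back the expiration date.
}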
However, every time we call makeSecureTokenForExpirationDateOfPersistableContentKey, we receive an error with code -46250. We haven't been able to find any public references or documentation for this specific error code, which is preventing us from troubleshooting the issue. We are conducting our tests on a physical device, as the simulator does not support FairPlay playback. We don't use dual expiry approach.
Is our understanding of how to obtain the expiration timestamp correct? Are we using the makeSecureTokenForExpirationDateOfPersistableContentKey API as it was intended? What does the -46250 error code mean, and what steps should we take to fix our FairPlay implementation to make this work?
Thanks in advance for your assistance.
The operation couldn’t be completed. (CoreMediaErrorDomain error -19156.)
Hello, we have a video streaming app with HLS VOD content. We supply 1080p and 4K content to users. Users were able to watch 1080p content before tvOS 26, but they can no longer watch it after updating to tvOS 26. We have not changed anything on the HLS playlist side or in the application version. This problem only occurs on the Apple TV 4th Gen (A1625) running tvOS 26; there is no problem with newer Apple TV devices. Could you help us resolve this problem? Thanks in advance.
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
FairPlay Streaming
Apple TV
tvOS
HTTP Live Streaming
Quotes are displayed incorrectly in subtitles of AVPlayerViewController when streaming VOD content using HLS.
A single quote ' (escaped as &apos;) is displayed as apos;
Double quotes " (escaped as &quot;) are displayed as quot;
The escaping follows the WebVTT specification.
The same stream works fine in VLC player, showing quotes correctly in subtitles.
subtitle vtt files use
Content-Type: text/vtt
WEBVTT
X-TIMESTAMP-MAP=LOCAL:490014:06:04.000,MPEGTS:158764568760056
example line:
490014:05:46.000 --> 490014:05:50.440 align:start line:83% position:14%
and the playlist has:
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",LANGUAGE="da",NAME="Dansk",AUTOSELECT=YES,CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog,public.accessibility.describes-music-and-sound",URI="subs/dan_5/playlist.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=780000,CODECS="mp4a.40.5,avc1.42c01e",RESOLUTION=256x144,AUDIO="audio-aac",SUBTITLES="subs"
lære dig endnu bedre at kende."
Adding 'wvtt' to the CODECS list in the playlist does not make a difference.
Is this a known bug? Is there a workaround?
I guess AVAssetResourceLoaderDelegate could be used to intercept and parse the subtitle files, but it seems like quite a hack and is not really intended for this.
Hello there,
I'm trying to implement a feature that uses AirPlay with Apple TV. I want to disconnect from the device programmatically when something happens. By "something" I mean a situation where the user wants to stop broadcasting (for example, closing the PiP window on their phone). I use this snippet:
try audioSession.setCategory(.playAndRecord, options: .defaultToSpeaker)
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
It works fine sometimes but not always (it works on iOS 18 but it doesn't on iOS 17 or ). So I thought it was a bug and created a ticket in Feedback Assistant (FB21220013). Support told me to write a post on the forum.
Hello, I am developing a custom player SDK using AVPlayer to support HLS and LL-HLS live streaming. I have some questions about the internal logic of AVPlayer regarding ABR, as this information is not explicitly covered in the documentation.
ABR Switching Logic: Does AVPlayer trigger bitrate switching primarily based on stall occurrences (buffer starvation)? I am curious if the switching logic is reactive to stalls or if it proactively switches to prevent them based on throughput estimation.
Developer Controls for ABR: To influence or control the ABR selection, are preferredPeakBitRate and preferredForwardBufferDuration the only properties available to developers? Are there any other recommended APIs to assist with ABR decisions?
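For context, these are the only knobs I am currently using (a minimal sketch; the stream URL and values are placeholders):

import AVFoundation

let item = AVPlayerItem(url: URL(string: "https://example.com/live/master.m3u8")!)
item.preferredPeakBitRate = 4_000_000            // cap variant selection at roughly 4 Mbps
item.preferredForwardBufferDuration = 10         // request about 10 seconds of forward buffer
let player = AVPlayer(playerItem: item)
player.play()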
Thank you for your help.
I can't play video content with HEVC and DRM. Tested HEVC only: OK. Tested DRM+AVC: Ok.
Tested 2 players (Clappr/Stevie and BitMovin)
Master, variants and EXT-X-MAPs are downloaded Ok, DRM keys Ok and then, for instance with BitMovin Player:
[BMP] [Player] [Error] Event: SourceError, Data: {"code":2001,"data":{"message":"The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.)","code":-12927},"message":"Source Error. The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.)","timestamp":1740320663.4505711,"type":"onSourceError"} code: 2001 [Data code: -12927, message: The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.), underlying error: Error Domain=CoreMediaErrorDomain Code=-12927 "(null)"]
4k-master.m3u8.txt
4k.m3u8.txt
4k-audio.m3u8.txt
Hello All,
I am looking for assistance with our FairPlay Streaming (FPS) certificates. We are in the process of migrating to a new video streaming vendor and need to create a new FPS certificate using SDK 4. However, we have reached the limit of allowed FPS certificates in our account and cannot create a new one.
Issue Details:
• We currently have two FPS certificates active in our developer account.
• One of these was created using SDK 5, but our new vendor (Mux) requires an FPS certificate based on SDK 4.
• Since Apple does not allow deleting FPS certificates from the developer portal, we are unable to create a new SDK 4 certificate.
• We kindly request Apple to revoke one of our existing FPS certificates to allow us to generate a new SDK 4 certificate.
Request:
We would greatly appreciate your assistance in revoking or deleting one of our existing FPS certificates so that we can proceed with creating a new SDK 4 certificate for our vendor integration.
Thank you for your support.
Hello,
I am developing a video streaming service that uses FairPlay. Since around February 20th, we have started receiving reports of CoreMediaErrorDomain -42709 errors.
Unfortunately, there is no documentation from Apple that explains what this error means, so we are not sure how to address or fix the issue.
Most of the users who reported this error are using iOS 18.2.1 and iOS 18.3.1.
Could you please advise on what we should check or how we might resolve this error?
A few months ago, I had the opportunity to receive a 2018 iMac, and I’ve been using it to create content for my social media. I was truly impressed by the power of its processors. Even with this older model, I’ve been able to grow my presence online—something I couldn’t achieve with newer computers from other brands that I previously purchased.
I would love to become a promoter of your brand in the gaming world. All I ask for is technological support with more recent equipment and a minimal payment for collaborating with you. I am genuinely interested in being part of your company and leveraging the potential and reputation of Apple to reach even greater heights.
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
GameplayKit
External Graphics Processors
Developer Tools
I have an iOS app that downloads HLS files, which are protected by FairPlay. These files are stored locally, and their locations are managed using Core Data. When playing these tracks, I use AVURLAsset to access the stored file paths.
Recently, a client upgraded to a new iPhone and used Quick Start to transfer data from his old device. While all other app data was successfully transferred, including Core Data records and UserDefaults, the actual HLS files were missing. As a result, the app retained metadata about the downloaded content, but the files themselves were gone, causing playback failures.
Does Quick Start exclude certain types of locally stored files, especially DRM-protected HLS downloads, or is the issue related to how FairPlay-protected content is handled during the transfer of locally stored files?
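As an illustration of the failure mode, a simple existence check against the path stored in Core Data (storedPath below is a hypothetical name for that value) shows the files are gone even though the records survived the transfer:

import AVFoundation

// Hypothetical sketch: storedPath comes from the Core Data record for the download.
func makeAsset(storedPath: String) -> AVURLAsset? {
    let url = URL(fileURLWithPath: storedPath)
    guard FileManager.default.fileExists(atPath: url.path) else {
        // After the Quick Start transfer this branch is hit: metadata exists, the file does not.
        return nil
    }
    return AVURLAsset(url: url)
}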
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
FairPlay Streaming
Cloud and Local Storage
HTTP Live Streaming
AVFoundation
Hello, I hope everybody is doing well. I have some reel content (9:16 aspect ratio) that I want to play back on iPhones.
My question is: does AVPlayer support out-of-the-box playback of encrypted MP4? Please note, this is not HLS fMP4, but unfragmented MP4 content.
If it is supported, what encryption algorithm should be used? Please let me know.
Topic:
Media Technologies
SubTopic:
Streaming
I’m building a professional camera app where users can customize the video recording format and color grading. In the func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) method, I handle video frames and use Metal for real-time color grading. This works well when device.activeColorSpace is sRGB or P3, and the results are great.
However, when the color space is HLG_BT2020 or appleLog, the MTKTextureLoader.newTexture(cgImage: cgImage, options: options) method throws an error. After researching, I found that the video frame in these color spaces has a bit-per-channel (bpc) greater than 8 after being converted to CGImage, causing the texture creation to fail. I tried converting the CGImage to a lower bpc to successfully create the texture, but the final output image is garbled and not as expected.
Is there a solution to this issue?
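One direction I have been considering (just a sketch, not from any Apple sample, and assuming a single-plane pixel buffer whose format maps directly to a Metal pixel format) is to skip the CGImage conversion entirely and wrap the CVPixelBuffer with a CVMetalTextureCache:

import CoreVideo
import Metal

// Sketch: build an MTLTexture straight from the CVPixelBuffer, avoiding the CGImage
// round trip. Assumes a single-plane buffer; 10-bit or biplanar formats need one
// texture per plane and a matching MTLPixelFormat.
var textureCache: CVMetalTextureCache?

func makeTexture(from pixelBuffer: CVPixelBuffer, device: MTLDevice) -> MTLTexture? {
    if textureCache == nil {
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    }
    guard let cache = textureCache else { return nil }
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil,
        .bgra8Unorm,                                  // placeholder; must match the buffer's real layout
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0,                                            // plane index
        &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}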
Is there a way to install HLS tools on ARM Linux? The only one I can find is for x86 Linux.
I'm working on a project on a Raspberry Pi. I'd like to install the tools to generate my HLS files, but the alternatives are more complicated to use.
Is there a way to run them the way macOS does with Rosetta?
I did watch WWDC 2019 Session 716 and understand that an active audio session is key to unlocking low‑level networking on watchOS. I’m configuring my audio session and engine as follows:
private func configureAudioSession(completion: @escaping (Bool) -> Void) {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

        // Retrieve sample rate and configure the audio format.
        let sampleRate = audioSession.sampleRate
        print("Active hardware sample rate: \(sampleRate)")
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)

        // Configure the audio engine.
        audioInputNode = audioEngine.inputNode
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: audioFormat)
        try audioEngine.start()
        completion(true)
    } catch {
        print("Error configuring audio session: \(error.localizedDescription)")
        completion(false)
    }
}

private func setupUDPConnection() {
    let parameters = NWParameters.udp
    parameters.includePeerToPeer = true
    connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters)
    setupNWConnectionHandlers()
}

private func setupTCPConnection() {
    let parameters = NWParameters.tcp
    connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters)
    setupNWConnectionHandlers()
}

private func setupWebSocketConnection() {
    guard let url = URL(string: "ws://***.***.xxxxx.***:0000") else {
        print("Invalid WebSocket URL")
        return
    }
    let session = URLSession(configuration: .default)
    webSocketTask = session.webSocketTask(with: url)
    webSocketTask?.resume()
    print("WebSocket connection initiated")
    sendAudioToServer()
    receiveDataFromServer()
    sendWebSocketPing(after: 0.6)
}

private func setupNWConnectionHandlers() {
    connection?.stateUpdateHandler = { [weak self] state in
        DispatchQueue.main.async {
            switch state {
            case .ready:
                print("Connected (NWConnection)")
                self?.isConnected = true
                self?.failToConnect = false
                self?.receiveDataFromServer()
                self?.sendAudioToServer()
            case .waiting(let error), .failed(let error):
                print("Connection error: \(error.localizedDescription)")
                DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
                    self?.setupNetwork()
                }
            case .cancelled:
                print("NWConnection cancelled")
                self?.isConnected = false
            default:
                break
            }
        }
    }
    connection?.start(queue: .main)
}
Duplex in this context refers to two-way audio transmission: simultaneously recording and sending audio while also receiving and playing back incoming audio, similar to a VoIP/SIP call.
The setup works fine on the simulator, which suggests that the core logic is correct. However, since the simulator doesn’t fully replicate watchOS hardware behavior, especially for audio sessions and networking, issues might arise when running on a real device.
The problem likely lies in either the Watch’s actual hardware limitations, permission constraints, or specific audio session configurations.
I am reaching out to seek further assistance with the challenges I've been experiencing establishing UDP, TCP, and WebSocket connections on watchOS using NWConnection for duplex audio streaming. Despite implementing the recommendations provided earlier, I am still encountering difficulties.
From what I can see, your implementation is focused on streaming audio playback with the server. In my case, I'm looking for a slightly different approach: I want to capture audio and send buffers of a specific size to the server while playing audio simultaneously, essentially achieving full duplex streaming similar to a VOIP call. Additionally, I’d like to ensure that if no external audio route is connected, the Apple Watch speaker is used by default. Any thoughts or insights on adapting this setup for those requirements would be very welcome.
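Roughly what I mean, as a hypothetical sketch reusing the audioEngine and audioPlayerNode from the code above (sendBufferToServer stands in for whichever network path ends up being used):

import AVFoundation

// Capture side: tap the input node and hand fixed-size buffers to the network layer.
func startDuplexAudio() {
    let format = audioEngine.inputNode.outputFormat(forBus: 0)
    audioEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        sendBufferToServer(buffer)   // placeholder for the UDP/TCP/WebSocket send
    }
    audioPlayerNode.play()
}

// Playback side: schedule buffers received from the server on the player node.
func playReceivedBuffer(_ buffer: AVAudioPCMBuffer) {
    audioPlayerNode.scheduleBuffer(buffer, completionHandler: nil)
}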
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
AVAudioNode
Network
AVAudioSession
AVAudioEngine
We've successfully implemented an AVAssetWriter to produce HLS streams (all code is Objective-C++ for interop with existing codebase) but are struggling to extend the operations to use tagged buffers.
We're starting to wonder if the tagged buffers required for an MV-HEVC signal are fully supported when producing HLS segments in a live-stream setting.
We generate a live stream of data using something like:
UTType *t = [UTType typeWithIdentifier:AVFileTypeMPEG4];
m_writter = [[AVAssetWriter alloc] initWithContentType:t];
// - videoHint describes HEVC and width/height
// - m_videoConfig includes compression settings and, when using MV-HEVC,
// the correct keys are added (i.e. kVTCompressionPropertyKey_MVHEVCVideoLayerIDs)
// The app was throwing an exception without these which was
// useful to know when we got the configuration right.
m_video = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:m_videoConfig sourceFormatHint:videoHint];
For either path we're producing CVPixelBufferRefs that contain the raw pixel information (i.e. 32BGRA) so we use an adapter to make that as simple as possible.
If we use a single view and a AVAssetWriterInputPixelBufferAdaptor things work out very well. We produce segments and the delegate is called.
However, if we use the AVAssetWriterInputTaggedPixelBufferGroupAdaptor as exampled in the SideBySideToMVHEVC demo project, things go poorly.
We create the tagged buffers with something like:
CMTagCollectionRef collections[2];
CMTag leftTags[] = {
CMTagMakeWithSInt64Value(
kCMTagCategory_VideoLayerID, (int64_t)0),
CMTagMakeWithSInt64Value(
kCMTagCategory_StereoView, kCMStereoView_LeftEye)
};
CMTagCollectionCreate(
kCFAllocatorDefault, leftTags, 2, &(collections[0])
);
CMTag rightTags[] = {
CMTagMakeWithSInt64Value(
kCMTagCategory_VideoLayerID, (int64_t)1),
CMTagMakeWithSInt64Value(
kCMTagCategory_StereoView, kCMStereoView_RightEye)
};
CMTagCollectionCreate(
kCFAllocatorDefault, rightTags, 2, &(collections[1])
);
CFArrayRef tagCollections = CFArrayCreate(
kCFAllocatorDefault, (const void **)collections, 2, &kCFTypeArrayCallBacks
);
CVPixelBufferRef buffers[] = {*b, *alt};
// Renamed from `b` so it doesn't collide with the pixel buffer pointer above.
CFArrayRef bufferArray = CFArrayCreate(
    kCFAllocatorDefault, (const void **)buffers, 2, &kCFTypeArrayCallBacks
);
CMTaggedBufferGroupRef bufferGroup;
OSStatus res = CMTaggedBufferGroupCreate(
    kCFAllocatorDefault, tagCollections, bufferArray, &bufferGroup
);
Perhaps there's something about this OBJC code that I've buggered up? Hopefully!
Anyway, when I submit this tagged buffer group to the adapter:
if (![mvVideoAdapter appendTaggedPixelBufferGroup:bufferGroup withPresentationTime:pts]) {
// report error...
}
Appending does not raise any errors - eventually it just hangs on us and we never return from it...
Real Issue:
So either:
The delegate assigned to the AVAssetWriter doesn't fire its assetWriter callback which should produce the segments
The adapter hangs on the appendTaggedPixelBufferGroup before a segment is ready to be completed (but succeeds for a number of buffer groups before this happens).
This is the same delegate class that's assigned to the non multi view code path if MV-HEVC is turned off which works perfectly.