I’m building a music app using Apple Music streaming via ApplicationMusicPlayer.
My goal is to decrease the volume of the current song during the last 10 seconds, and when the next track begins, restore the volume to its normal level.
I know that ApplicationMusicPlayer doesn’t expose a volume API, and I want to avoid triggering the system volume HUD.
✅ Using Apple Music streaming (not local files)
❓ Is it possible to implement per-track fade-out/fade-in logic with ApplicationMusicPlayer?
Appreciate any clarification or official guidance!
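For what it's worth, here is one shape this could take, as a hedged sketch rather than official guidance: ApplicationMusicPlayer exposes no per-app volume, so the only handle is the system volume slider hosted inside a hidden MPVolumeView, which does not show the volume HUD when moved programmatically. Everything below (FadeController, the polling interval, the 0.3 factor) is an illustrative assumption, and digging the UISlider out of MPVolumeView is an unofficial trick that may break in a future release:

import MusicKit
import MediaPlayer
import UIKit

// Hypothetical helper, not an official API: polls playback time and nudges
// the system volume via the slider inside a hidden MPVolumeView.
final class FadeController {
    private let player = ApplicationMusicPlayer.shared
    let volumeView = MPVolumeView(frame: .zero) // add offscreen to your view hierarchy
    private var timer: Timer?
    private var savedVolume: Float?

    // Fragile: relies on MPVolumeView's private view hierarchy.
    private var slider: UISlider? {
        volumeView.subviews.compactMap { $0 as? UISlider }.first
    }

    func start() {
        timer = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: true) { [weak self] _ in
            self?.tick()
        }
    }

    private func tick() {
        guard case .song(let song)? = player.queue.currentEntry?.item,
              let duration = song.duration,
              let slider else { return }
        let remaining = duration - player.playbackTime
        if remaining <= 10, savedVolume == nil {
            savedVolume = slider.value // remember the user's level
            slider.setValue(slider.value * 0.3, animated: false) // single step; loop for a smoother ramp
        } else if remaining > 10, let original = savedVolume {
            slider.setValue(original, animated: false) // next track began; restore
            savedVolume = nil
        }
    }
}

Note this changes the real device volume, so the hardware buttons will reflect the faded level during those last seconds.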
Streaming
Deep dive into the technical specifications that influence seamless playback for streaming services, including bitrates, codecs, and caching mechanisms.
I'm capturing a video stream from a GoPro camera (via UDP MPEG-TS packets) but I'm unable to play it in an iOS app. Can someone provide sample code to decode MPEG-TS data into CMSampleBuffers?
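I don't have a full demuxer to share, but the last step usually looks like the sketch below: it assumes you have already demuxed the TS packets into a single H.264 access unit plus its SPS/PPS, and converted the Annex B start codes to 4-byte AVCC length prefixes. The resulting CMSampleBuffer can then be enqueued on an AVSampleBufferDisplayLayer or fed to a VTDecompressionSession. All names and the error handling are illustrative only:

import CoreMedia

func makeSampleBuffer(avccData: Data, sps: Data, pps: Data, pts: CMTime) -> CMSampleBuffer? {
    // 1. Format description from the parameter sets.
    var formatDesc: CMFormatDescription?
    let fdStatus: OSStatus = sps.withUnsafeBytes { spsBytes in
        pps.withUnsafeBytes { ppsBytes in
            let pointers: [UnsafePointer<UInt8>] = [
                spsBytes.bindMemory(to: UInt8.self).baseAddress!,
                ppsBytes.bindMemory(to: UInt8.self).baseAddress!
            ]
            let sizes = [sps.count, pps.count]
            return CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4, // matches the AVCC length-prefix size
                formatDescriptionOut: &formatDesc)
        }
    }
    guard fdStatus == noErr, let format = formatDesc else { return nil }

    // 2. Copy the access unit into a CMBlockBuffer.
    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(
        allocator: kCFAllocatorDefault, memoryBlock: nil,
        blockLength: avccData.count, blockAllocator: nil,
        customBlockSource: nil, offsetToData: 0,
        dataLength: avccData.count,
        flags: kCMBlockBufferAssureMemoryNowFlag,
        blockBufferOut: &blockBuffer) == kCMBlockBufferNoErr,
        let block = blockBuffer else { return nil }
    avccData.withUnsafeBytes {
        _ = CMBlockBufferReplaceDataBytes(with: $0.baseAddress!, blockBuffer: block,
                                          offsetIntoDestination: 0, dataLength: avccData.count)
    }

    // 3. Wrap it in a CMSampleBuffer carrying the PES presentation timestamp.
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: pts,
                                    decodeTimeStamp: .invalid)
    var size = avccData.count
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReady(allocator: kCFAllocatorDefault,
                              dataBuffer: block,
                              formatDescription: format,
                              sampleCount: 1,
                              sampleTimingEntryCount: 1, sampleTimingArray: &timing,
                              sampleSizeEntryCount: 1, sampleSizeArray: &size,
                              sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}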
This is my native module implementation. I'm receiving a base64-encoded PCM string from the server and passing it to my native PCM player module for playback.
App.tsx
PcmPlayer.writeChunk(e.data);
PcmPlayer.swift
import AVFoundation
@objc(PcmPlayer)
class PcmPlayer: RCTEventEmitter {
private var engine: AVAudioEngine?
private var playerNode: AVAudioPlayerNode?
private var format: AVAudioFormat?
private var bufferQueue = [Data]()
private var isPlaying = false
private var hasEnded = false
private var scheduledBufferCount = 0
private let minBufferBytes = 50000
private let pcmQueue = DispatchQueue(label: "pcm.queue")
override init() {
super.init()
}
override func supportedEvents() -> [String]! {
return ["onStatus", "onMessage"]
}
@objc(initPlayer:channels:bitsPerSample:)
func initPlayer(_ sampleRate: NSNumber,
channels: NSNumber,
bitsPerSample: NSNumber) {
pcmQueue.async {
self.stopInternal()
let session = AVAudioSession.sharedInstance()
do {
try session.setCategory(.playback, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setMode(.default)
print("🔈 Audio session active. Output route:", session.currentRoute.outputs)
} catch {
print("❌ Audio session setup failed:", error)
return
}
self.engine = AVAudioEngine()
self.playerNode = AVAudioPlayerNode()
guard let engine = self.engine, let playerNode = self.playerNode else {
print("❌ Engine or playerNode is nil")
return
}
engine.attach(playerNode)
self.format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
sampleRate: sampleRate.doubleValue,
channels: AVAudioChannelCount(channels.uintValue),
interleaved: false)
guard let format = self.format else {
print("❌ Failed to create AVAudioFormat")
return
}
engine.connect(playerNode, to: engine.mainMixerNode, format: format)
do {
try engine.start()
playerNode.play()
engine.mainMixerNode.outputVolume = 1.0
print("✅ AVAudioEngine started with format:", format)
} catch {
print("❌ Engine start failed:", error)
}
self.hasEnded = false
}
}
@objc(writeChunk:)
func writeChunk(_ base64Pcm: String) {
pcmQueue.async {
guard base64Pcm.count >= 10 else {
print("⚠️ Skipping short base64 string")
return
}
var padded = base64Pcm
let mod4 = base64Pcm.count % 4
if mod4 > 0 {
padded += String(repeating: "=", count: 4 - mod4)
}
guard let data = Data(base64Encoded: padded, options: .ignoreUnknownCharacters) else {
print("❌ Failed to decode base64")
return
}
self.bufferQueue.append(data)
print("📥 Received PCM chunk (\(data.count) bytes)")
print("📥 writeChunk called. isPlaying=\(self.isPlaying), bufferQueue.count=\(self.bufferQueue.count)")
if !self.isPlaying || self.scheduledBufferCount == 0 {
self.isPlaying = true
self.waitForBufferAndStartPlayback()
}
}
}
private func waitForBufferAndStartPlayback() {
DispatchQueue.global().async {
while self.queueSize() < self.minBufferBytes && !self.hasEnded {
Thread.sleep(forTimeInterval: 0.01)
}
self.writeLoop()
}
}
private func writeLoop() {
DispatchQueue.global().async {
writeLoop: while self.isPlaying {
// Read bufferQueue only via pcmQueue (queueSize() syncs) to avoid data races.
if self.queueSize() == 0 {
for _ in 0..<100 {
Thread.sleep(forTimeInterval: 0.01)
if self.queueSize() > 0 { break }
}
if self.queueSize() == 0 {
print("🔇 No more data to play after waiting")
self.isPlaying = false
break writeLoop
}
}
var data: Data?
self.pcmQueue.sync {
if !self.bufferQueue.isEmpty {
data = self.bufferQueue.removeFirst()
}
}
guard let chunk = data else {
print("⚠️ No data to process")
continue
}
if let buffer = self.pcmBufferFromData(chunk) {
// Mutate scheduledBufferCount on pcmQueue only; the completion handler decrements it there.
self.pcmQueue.sync { self.scheduledBufferCount += 1 }
self.playerNode?.scheduleBuffer(buffer, completionHandler: {
self.pcmQueue.async {
self.scheduledBufferCount -= 1
if self.bufferQueue.isEmpty && self.scheduledBufferCount == 0 {
print("ℹ️ Playback idle - waiting for more data")
self.isPlaying = false
}
}
})
}
}
}
}
private func pcmBufferFromData(_ data: Data) -> AVAudioPCMBuffer? {
guard let format = self.format else { return nil }
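// Assumes 16-bit little-endian PCM. With the Float32 format above, the loop
// below fills only channel 0, so multi-channel input will not render correctly
// (frameCount would also need to be divided by the channel count).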
let frameCount = UInt32(data.count / 2)
guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else {
print("❌ Failed to create AVAudioPCMBuffer")
return nil
}
buffer.frameLength = frameCount
guard let floatChannelData = buffer.floatChannelData?[0] else {
print("❌ floatChannelData is nil")
return nil
}
data.withUnsafeBytes { (rawBuffer: UnsafeRawBufferPointer) in
let int16Buffer = rawBuffer.bindMemory(to: Int16.self)
let count = min(int16Buffer.count, Int(frameCount))
for i in 0..<count {
floatChannelData[i] = Float32(int16Buffer[i]) / Float32(Int16.max)
}
}
return buffer
}
@objc(stopPlayer)
func stopPlayer() {
pcmQueue.async {
self.stopInternal()
}
}
private func stopInternal() {
print("🛑 stopInternal called")
self.playerNode?.stop()
self.engine?.stop()
self.engine?.reset()
self.playerNode = nil
self.engine = nil
self.format = nil
self.bufferQueue.removeAll()
self.isPlaying = false
self.hasEnded = true
self.scheduledBufferCount = 0
}
@objc(canWrite:rejecter:)
func canWrite(_ resolve: @escaping RCTPromiseResolveBlock,
rejecter reject: RCTPromiseRejectBlock) {
pcmQueue.async {
resolve(self.bufferQueue.count < 20)
}
}
@objc(flushPlayer:rejecter:)
func flushPlayer(_ resolve: @escaping RCTPromiseResolveBlock,
rejecter reject: RCTPromiseRejectBlock) {
pcmQueue.async {
self.bufferQueue.removeAll()
resolve(nil)
}
}
@objc
static override func requiresMainQueueSetup() -> Bool {
return false
}
private func queueSize() -> Int {
return pcmQueue.sync {
return self.bufferQueue.reduce(0) { $0 + $1.count }
}
}
}
I can't hear any audio on my real iOS device, although it works fine on the simulator.
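A hedged first check, since the simulator uses the Mac's audio stack and session problems often only show up on hardware: dump the active route and hardware state right after setActive(true). Purely a diagnostic sketch:

import AVFoundation

// If outputs is empty, or the hardware sample rate is far from the stream's,
// look at the audio session configuration before the engine.
let session = AVAudioSession.sharedInstance()
print("outputs:", session.currentRoute.outputs.map { "\($0.portType.rawValue) (\($0.portName))" })
print("hw sample rate:", session.sampleRate, "output volume:", session.outputVolume)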
AVPictureInPictureControllerContentSource *contentSource = [[AVPictureInPictureControllerContentSource alloc] initWithSampleBufferDisplayLayer:self.renderView.sampleBufferDisplayLayer playbackDelegate:self];
AVPictureInPictureController *pictureInPictureController = [[AVPictureInPictureController alloc] initWithContentSource:contentSource];
pictureInPictureController.delegate = self;
- (void)pictureInPictureController:(AVPictureInPictureController *)pictureInPictureController failedToStartPictureInPictureWithError:(NSError *)error
{
//error NSError * domain: @"PGPegasusErrorDomain" - code: -1003 0x00000002819fe3a0
}
When I first start PiP playback, I get the error above (PGPegasusErrorDomain, code -1003). Why?
The second start works fine.
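Not an authoritative answer, but a pattern that is sometimes suggested for first-start failures, sketched in Swift: a freshly created controller backed by a sample buffer display layer may not be ready yet, so defer the first startPictureInPicture() until isPictureInPicturePossible flips to true (the property is key-value observable):

import AVKit

var possibleObservation: NSKeyValueObservation?

func startWhenPossible(_ pip: AVPictureInPictureController) {
    if pip.isPictureInPicturePossible {
        pip.startPictureInPicture()
    } else {
        // Wait for the controller to become ready instead of failing the first attempt.
        possibleObservation = pip.observe(\.isPictureInPicturePossible, options: [.new]) { controller, change in
            if change.newValue == true {
                DispatchQueue.main.async { controller.startPictureInPicture() }
            }
        }
    }
}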
We are encountering an issue with AVPlayer when an EXT-X-DISCONTINUITY is misaligned between the audio and video playlists after gaps are inserted.
The initial objective is to introduce an EXT-X-DISCONTINUITY in the audio playlist after some missing segments (EXT-X-GAP) whose durations are aligned with the video segment durations, in order to handle irregular audio segment durations.
Please find below an example of corresponding video and audio playlists:
video:
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-MEDIA-SEQUENCE:872524632
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-TARGETDURATION:2
#USP-X-TIMESTAMP-MAP:MPEGTS=7096045027,LOCAL=2025-05-09T12:38:32.369100Z
#EXT-X-MAP:URI="hls/StreamingBasic-video=979200.m4s"
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:32.369111Z
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524632.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524633.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524634.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524635.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524636.m4s
## Media sequence discontinuity
#EXT-X-GAP
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524637.m4s
## Media sequence discontinuity
#EXT-X-GAP
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524638.m4s
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:46.383111Z
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524639.m4s
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524640.m4s
audio:
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-MEDIA-SEQUENCE:872524632
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-TARGETDURATION:2
#USP-X-TIMESTAMP-MAP:MPEGTS=7096045867,LOCAL=2025-05-09T12:38:32.378400Z
#EXT-X-MAP:URI="hls/StreamingBasic-audio_99500_eng=98800.m4s"
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:32.378444Z
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524632.m4s
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524633.m4s
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524634.m4s
#EXTINF:1.984, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524635.m4s
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524636.m4s
## Media sequence discontinuity
#EXT-X-GAP
#EXTINF:2.002, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524637.m4s
## Media sequence discontinuity
#EXT-X-GAP
#EXTINF:2.002, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524638.m4s
#EXT-X-DISCONTINUITY
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:46.778444Z
#EXTINF:1.6213, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524639.m4s
#EXTINF:2.0053, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524640.m4s
In this case playback is broken in AVPlayer.
Does this conform to the HTTP Live Streaming specification?
Is it an AVPlayer bug?
What are the guidelines for handling such gaps?
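For reference, Apple's HLS authoring guidance generally expects discontinuities to be aligned across renditions, i.e. if the audio playlist introduces an EXT-X-DISCONTINUITY at media sequence 872524639, the video playlist should carry one at the same media sequence. A hedged sketch of what the aligned tails might look like, reusing the segment names above:

## video playlist, aligned tail
#EXT-X-DISCONTINUITY
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:46.383111Z
#EXTINF:2.002, no desc
hls/StreamingBasic-video=979200-872524639.m4s

## audio playlist, aligned tail
#EXT-X-DISCONTINUITY
#EXT-X-PROGRAM-DATE-TIME:2025-05-09T12:38:46.778444Z
#EXTINF:1.6213, no desc
hls/StreamingBasic-audio_99500_eng=98800-872524639.m4s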
Hi everyone!
Here's what I observed so far:
On device it's reproducible on iOS/iPadOS 18.5, but it works on iPadOS 17.7.
On the iPhone 16 iOS 18.5 simulator that I had been using extensively for development, it was reproducible until I reset content and settings.
On the iPhone 16 iOS 18.4 simulator, which was also used a lot during development, it still always works, so I tend to think it's an 18.5 issue.
Setting config.websiteDataStore = .nonPersistent() doesn't help.
Cleaning WKWebsiteDataStore doesn't help.
It works fine using direct URL from the embedded code (see the code below).
Can someone provide some insight on how this could be fixed?
Here's the code:
import SwiftUI
import WebKit
@main
struct IGVideoApp: App {
var body: some Scene {
WindowGroup {
WebView()
}
}
}
private struct WebView: UIViewRepresentable {
func makeUIView(context: Context) -> WKWebView {
let config = WKWebViewConfiguration()
config.allowsInlineMediaPlayback = true
return .init(frame: .zero, configuration: config)
}
func updateUIView(_ uiView: WKWebView, context: Context) {
let urlString = "https://www.instagram.com/reel/DKHFOGct3z7/?utm_source=ig_embed&utm_campaign=loading"
/// It works when loading from the data-instgrm-permalink URL directly
// uiView.load(.init(url: .init(string: "\(urlString)")!))
/// It doesn't work with embedding
/// Note: the code part for embedding (<blockquote>...</blockquote>) is taken from my
/// Instagram post (https://www.instagram.com/p/DKHFOGct3z7/)
/// and stripped down. The urlString was also extracted for demonstration of direct loading.
let string = """
<!doctype html>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1">
<html>
<head />
<body style="background-color:black; margin:0px">
<blockquote class="instagram-media"
data-instgrm-captioned
data-instgrm-version="14"
data-instgrm-permalink="\(urlString)">
</blockquote>
<script async src="https://www.instagram.com/embed.js"></script>
</body>
</html>
"""
uiView.loadHTMLString(string, baseURL: .init(string: "https://www.instagram.com"))
}
}
Hello, our application is unable to output FairPlay-protected content over HDMI to a TV via the official Lightning Digital AV Adapter. Checking the mediaplaybackd console log, we found that a CoreMediaErrorDomain Code=-19156 error is raised, but we are unable to find out what this error code means.
default 11:18:15.121584+0800 mediaplaybackd keyboss ckb_customURLReadCallback: 0x7fa62f800 60/0 customURLReqID 4 isComplete 1 err -19156 error <private> (0) dokeyCallbacksExist 0
default 11:18:15.121670+0800 mediaplaybackd keyboss ckb_processErrorForRequest: 0x7fa62f800 60/0 handler 4 err 0
default 11:18:15.121752+0800 mediaplaybackd <<<< FigCustomURLHandling >>>> curll_cancelRequestOnQueue: 0x7fa031360: requestID: 4
default 11:18:15.121932+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 reqFin err Error Domain=CoreMediaErrorDomain Code=-19156 (-19156) dokeyCallbacksExist 0
default 11:18:15.122025+0800 mediaplaybackd keyboss ckb_transitionRequestToTerminalState: 0x7fa62f800 60/0 retry
default 11:18:15.123195+0800 mediaplaybackd <<<< FigCPECryptorPKD >>>> PostKeyRequestErrorOccurred: 0x7fab7be80 029592C2-093D-400D-B57F-7AB06CC292D1 key request error: Error Domain=CoreMediaErrorDomain Code=-19160 (-19160)
I am developing an app that plays HLS audio.
When using AVPlayerItem with AVURLAsset, can AVAssetResourceLoaderDelegate correctly handle HLS segments?
My goal is to use AVAssetResourceLoaderDelegate to add authentication HTTP headers when accessing HLS .m3u8 and .ts files.
I can successfully download the files, but playback fails with errors.
Specifically, I am observing the following cases:
A. AVAssetResourceLoaderDelegate is canceled, and CoreMediaErrorDomain -12881 occurs
In NSURLConnectionDataDelegate’s didReceiveResponse method, set contentInformationRequest
In didReceiveData, call dataRequest respondWithData
resourceLoader didCancelLoadingRequest is called
CoreMediaErrorDomain -12881 occurs
B. CoreMediaErrorDomain -12881 occurs
In NSURLConnectionDataDelegate’s didReceiveResponse method, set contentInformationRequest
In connection didReceiveData, buffer all received data until the end
In connectionDidFinishLoading, pass the buffered data to respondWithData
Call loadingRequest finishLoading
CoreMediaErrorDomain -12881 occurs
In both cases, dataRequest.requestsAllDataToEndOfResource is YES.
For this use case, I am not using AVURLAssetHTTPHeaderFieldsKey because I need to apply the most up-to-date authentication data at the moment each file is accessed.
I would appreciate any advice or suggestions you might have. Thank you in advance!
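One detail worth double-checking, since it commonly produces cancelled requests: AVFoundation only consults AVAssetResourceLoaderDelegate for URL schemes it cannot load itself, so the https playlist URL must be rewritten to a custom scheme (and any URIs you also want intercepted must use that scheme too). A minimal sketch of the wiring, with every identifier and URL made up:

import AVFoundation

// Hypothetical delegate: restore https, attach the current auth headers,
// perform the request, then hand the bytes back to the loading request.
final class AuthLoaderDelegate: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        // Rebuild the https URL, fetch with up-to-date auth, then call
        // loadingRequest.dataRequest?.respond(with:) and finishLoading().
        return true
    }
}

let loaderDelegate = AuthLoaderDelegate()
let loaderQueue = DispatchQueue(label: "auth.loader") // delegate callbacks arrive here

let original = URL(string: "https://example.com/stream/master.m3u8")! // placeholder
var components = URLComponents(url: original, resolvingAgainstBaseURL: false)!
components.scheme = "authstream" // arbitrary custom scheme

let asset = AVURLAsset(url: components.url!)
asset.resourceLoader.setDelegate(loaderDelegate, queue: loaderQueue)
let playerItem = AVPlayerItem(asset: asset)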
Hello,
My company has an in-store app with FPS SDK 4.x (1024) keys. We've handed those keys over to a trusted third-party and we do not have them. We've been in-store for several years.
The person that created the keys in our organization mistakenly stored them encrypted to our third-party's PGP keys, so we cannot decrypt them, and the third party also has no mechanism to provide us with the keys even though it is in their runtime environment. They only have secure mechanisms for us to upload keys onto their servers.
We are trying to migrate to a different third-party DRM provider, and would like to obtain new keys. Unfortunately, the developer portal won't let me create new keys, saying that we have exceeded the number of keys allowed, which I assume is one.
Additionally, the new DRM provider can only support SDK 4.x keys, and it appears that we can only request SDK 5.x keys on the Apple Developer portal, as the SDK 4.0 option is grayed out. Regardless, it seems that we are not able to request any keys.
We've submitted a request to the support e-mail address and received an automated e-mail that the response should take a few days, but may take longer on occasion. It's now been a month. The e-mail says that the reply address is not monitored. Is there any way we can accelerate this?
Thank you,
Carlos
I'm having a crash in an app that plays videos when the user activates closed captions.
I was able to replicate the issue in an empty project. The crash happens when the AVPlayerLayer is used to instantiate an AVPictureInPictureController.
This is the example project where I reproduced the crash:
struct ContentView: View {
var body: some View {
VStack {
VideoPlaylistView()
}
.frame(maxWidth: .infinity, maxHeight: .infinity)
.background(Color.black.ignoresSafeArea())
}
}
class VideoPlaylistViewModel: ObservableObject {
// Test with other videos
var player: AVPlayer? = AVPlayer(url: URL(string:"https://d2ufudlfb4rsg4.cloudfront.net/newsnation/WIpkLz23h/adaptive/WIpkLz23h_master.m3u8")!)
}
struct VideoPlaylistView: View {
@StateObject var viewModel = VideoPlaylistViewModel()
var body: some View {
ScrollView {
VideoCellView(player: viewModel.player)
.onAppear {
viewModel.player?.play()
}
}
.scrollTargetBehavior(.paging)
.ignoresSafeArea()
}
}
struct VideoCellView: View {
let player: AVPlayer?
@State var isCCEnabled: Bool = false
var body: some View {
ZStack {
PlayerView(player: player)
.accessibilityIdentifier("Player View")
}
.containerRelativeFrame([.horizontal, .vertical])
.overlay(alignment: .bottom) {
Button {
player?.currentItem?.asset.loadMediaSelectionGroup(for: .legible) { group,error in
if let group {
let option = !isCCEnabled ? group.options.first : nil
player?.currentItem?.select(option, in: group)
isCCEnabled.toggle()
}
}
} label: {
Text("Close Captions")
.font(.subheadline)
.foregroundStyle(isCCEnabled ? .red : .primary)
.buttonStyle(.bordered)
.padding(8)
.background(Color.blue.opacity(0.75))
}
.padding(.bottom, 48)
.accessibilityIdentifier("Button Close Captions")
}
}
}
import Foundation
import UIKit
import SwiftUI
import AVFoundation
import AVKit
struct PlayerView: UIViewRepresentable {
let player: AVPlayer?
func updateUIView(_ uiView: UIView, context: UIViewRepresentableContext<PlayerView>) {
}
func makeUIView(context: Context) -> UIView {
let view = PlayerUIView()
view.playerLayer.player = player
view.layer.addSublayer(view.playerLayer)
view.layer.backgroundColor = UIColor.red.cgColor
view.pipController = AVPictureInPictureController(playerLayer: view.playerLayer)
view.pipController?.requiresLinearPlayback = true
view.pipController?.canStartPictureInPictureAutomaticallyFromInline = true
view.pipController?.delegate = view
return view
}
}
class PlayerUIView: UIView, AVPictureInPictureControllerDelegate {
let playerLayer = AVPlayerLayer()
var pipController: AVPictureInPictureController?
override init(frame: CGRect) {
super.init(frame: frame)
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
override func layoutSubviews() {
super.layoutSubviews()
playerLayer.frame = bounds
playerLayer.backgroundColor = UIColor.green.cgColor
}
func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: any Error) {
print("Error starting Picture in Picture: \(error.localizedDescription)")
}
}
class AppDelegate: NSObject, UIApplicationDelegate {
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]? = nil) -> Bool {
let audioSession = AVAudioSession.sharedInstance()
do {
try audioSession.setCategory(.playback, mode: .moviePlayback)
try audioSession.setActive(true)
} catch {
print("ERR: \(error.localizedDescription)")
}
return true
}
}
UITest to make the app crash:
final class VideoPlaylistSampleUITests: XCTestCase {
func testCrashiOS26ToggleCloseCaptions() throws {
let app = XCUIApplication()
app.launch()
let videoPlayer = app.otherElements["Player View"]
XCTAssertTrue(videoPlayer.waitForExistence(timeout: 30))
let closeCaptionButton = app.buttons["Button Close Captions"]
for _ in 0..<2000 {
closeCaptionButton.tap()
}
}
}
On iOS 26, when we download any DRM content for the first time, it downloads again after we edit the audio tracks and video quality; when we start that download, the complete app freezes. It neither crashes nor gives any error.
Hi,
After updating to iOS 26, our app is experiencing playback failures with AVPlayer. The same code and streams work fine on iOS 18 and earlier.
Error:
Domain [CoreMediaErrorDomain]
Code [-15628]
Description [The operation couldn’t be completed.]
Underlying Error Domain [(null)]
Code [0]
Description [(null)]
Environment:
iOS version: iOS 26
Stream type: HLS (m3u8) with segment (.ts) files
Observed behaviour:
We don’t have concrete steps to reproduce the issue, but so far, we have observed that this error tends to occur under low network conditions.
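While chasing this, the item's HLS error log sometimes says more than the terminal -15628 error; a small observation sketch, assuming playerItem is the AVPlayerItem being played (the print format is arbitrary):

import AVFoundation

// Log each new HLS error-log entry as it arrives; under poor networks the
// entries preceding the failure often identify the offending request.
let token = NotificationCenter.default.addObserver(
    forName: AVPlayerItem.newErrorLogEntryNotification,
    object: playerItem, queue: .main) { note in
    guard let item = note.object as? AVPlayerItem,
          let event = item.errorLog()?.events.last else { return }
    print("HLS error:", event.errorStatusCode, event.errorComment ?? "-", event.uri ?? "-")
}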
For devices that are still on iOS 17, playing FairPlay-encrypted content works fine. For devices that I've upgraded to iOS 26, playing the same content in the same app no longer works. I can advance and see the stream frames by tapping +10 and scrubbing, so I know the content is being decrypted, but tapping the AVPlayer play button for an AVPlayerItem now does nothing on iOS 26. Is this a breaking change, or is there a stricter requirement that I now have to implement?
Just updated my computer, phone, and dev tools to the latest versions of everything. Now when I run my app in a previously-working simulator (iPhone 16 w. iOS 18.5) I get:
Failed retrieving MusicKit tokens: fetching the developer token is not supported in the simulator when running on this version of macOS; please upgrade your Mac to macOS Ventura.
Also:
<ICCloudServiceStatusMonitor: 0x600003320e60>: Invoking 1 completion handler for MusicKit tokens. error=<ICError.DeveloperTokenFetchingFailed (-8200) "Failed to fetch media token from <AMSMediaTokenService: 0x6000029049a0>." { underlyingErrors: [ <AMSErrorDomain.300 "Token request encoding failed The token request encoder finished with an error." { userInfo: { AMSDescription : "Token request encoding failed", AMSFailureReason : "The token request encoder finished with an error." }; underlyingErrors: [ <AMSErrorDomain.5 "Anisette Failed Platform not supported" { userInfo: { AMSDescription : "Anisette Failed", AMSFailureReason : "Platform not supported" };
Anybody know what gives here? The Ventura message is absurd because I'm on Tahoe 26.1. The same code works on a physical phone running iOS 26.
Macs do not support Multi-Stream Transport (MST), which prevents using a single DisplayPort or USB-C port to daisy-chain multiple external monitors in extended display mode. As a result, virtual multiple-display modes do not work correctly on the Mac.
I am working on the screen-recording function on Apple Vision Pro. When I use a broadcast upload extension and tap the record button, the Xcode console shows this error:
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
We created and configured the project as follows:
Create an Apple Vision Pro project.
Create a Broadcast Upload Extension Target.
Add App Group for Project Target and Extension Target, both use the same identifier.
Add "Main Camera Access", "Passthrough in Screen Capture" Capabilities for all targets.
Add "NSScreenCaptureUsageDescription", "NSMicrophoneUsageDescription" in Plist.
Add a record button to the view.
Run a debug build on the Apple Vision Pro device; after tapping the record button, the error above is raised.
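For reference, the extension's entry point is just the stock RPBroadcastSampleHandler template, sketched below with the App Group hand-off left as a comment; this is the code path the record button ultimately drives:

import ReplayKit
import CoreMedia

// Stock broadcast upload extension entry point (sketch). The system invokes
// processSampleBuffer for each captured video frame and audio buffer.
class SampleHandler: RPBroadcastSampleHandler {
    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .video:
            // Hand video frames to the host app, e.g. via the shared App Group container.
            break
        case .audioApp, .audioMic:
            break
        @unknown default:
            break
        }
    }
}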
On iOS 17 we've had no problem playing Apple FairPlay-encrypted content with keys delivered from our key server running on FairPlay Streaming Server SDK 5.1 and subsequently FairPlay Streaming Server SDK 26. It's built and deployed using Xcode Version 26.1.1 (17B100) with no changes to the code, and, as expected, the content continued to be successfully decrypted and played (so far so good). However, as soon as a device was updated to iOS 26, that device would no longer play the encrypted content.
Devices remaining on iOS17 continue to work normally and the debugging logs are a sanity-check that proves that. Is anyone else experiencing this issue?
Here's the code (you should be able to drop it into a fresh iOS Xcode project and provide a server URL, content URL, and certificate).
We are experiencing an issue related to DepthData from the TrueDepth camera on a specific device.
On December 1, we tested with the affected user's device (iPhone 14, iOS 26.0.1) and observed that the depth image is received with empty values.
However, the same implementation works normally on iPhone 17 Pro Max (iOS 26.1) and iPhone 13 Pro Max (iOS 26.0.1), where depth data is delivered correctly.
In the problematic case:
TrueDepth camera is active
Face ID works normally
The app receives an AVDepthData object, but all values are empty (0), not nil
Because the DepthData object is not nil, this makes it difficult to detect the issue through software fallback handling.
We developed the feature with reference to the following Apple sample:
https://developer.apple.com/documentation/AVFoundation/streaming-depth-data-from-the-truedepth-camera
We would like to ask:
Are there known cases where Face ID functions normally but DepthData from the TrueDepth camera is returned as empty values?
If so, is there a recommended approach for identifying or handling this situation?
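On the second question, one possible fallback is to sample the depth map and flag the all-zero case explicitly; a hedged sketch (the sampling stride is an arbitrary choice):

import AVFoundation
import CoreVideo

// Returns true when every sampled depth value is zero, which matches the
// failure signature described above (non-nil AVDepthData, empty values).
func depthMapLooksEmpty(_ depthData: AVDepthData) -> Bool {
    let map = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32).depthDataMap
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(map) else { return true }
    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    for y in stride(from: 0, to: height, by: 8) {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in stride(from: 0, to: width, by: 8) where row[x] != 0 {
            return false // found at least one real depth sample
        }
    }
    return true
}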
Any guidance from Apple engineers or the community would be greatly appreciated.
Thank you.