We need help resolving an audio design conflict in our Push-to-Talk (PTT) application. Our current volume amplification strategy, which applies a gain factor to the PCM samples and sets the AVAudioSession category to Playback, works as expected when PTT is used on its own. However, once the same PTT call is reported through the CallKit framework, the amplification is lost. CallKit appears to force a different, non-amplifying audio session category or configuration, which lowers the user's perceived call volume. We need guidance on how to keep the AVAudioSessionCategoryPlayback setting, or an equivalent high-volume configuration, while operating under CallKit's control.
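For context, this is roughly what our setup looks like today, reduced to a minimal sketch (the gain handling, the session configuration, and the provider wiring are simplified stand-ins for our real code):

import AVFoundation
import CallKit

final class PTTAudioController: NSObject, CXProviderDelegate {
    // Simplified stand-in for our amplification stage: a fixed gain applied to PCM samples.
    func applyGain(_ gain: Float, to buffer: AVAudioPCMBuffer) {
        guard let channels = buffer.floatChannelData else { return }
        for channel in 0..<Int(buffer.format.channelCount) {
            for frame in 0..<Int(buffer.frameLength) {
                channels[channel][frame] *= gain
            }
        }
    }

    // Standalone PTT path: the Playback category gives us the louder output we want.
    func configureStandaloneSession() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
    }

    // CallKit path: the system hands us an already-configured session here, and the
    // category we set ourselves no longer applies, which is where the loudness is lost.
    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // What should we configure here to get back to Playback-level output?
    }

    func providerDidReset(_ provider: CXProvider) { }
}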
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
In my app, I use the API provided by the Photos framework to delete specified photos.
After upgrading to iOS 26, the delete function no longer works on some iOS devices.
The API never triggers the system confirmation dialog, and the completionHandler is never called.
In the iOS Photos app, deleting the same assets works correctly, but calling the API from my app does not.
Steps to Reproduce
Make sure the app has Full Photo Library Access.
Execute the following code:
PHPhotoLibrary.shared().performChanges({
    // delUrls holds the local identifiers of the assets to delete.
    let assetsToBeDeleted = PHAsset.fetchAssets(withLocalIdentifiers: delUrls, options: nil)
    PHAssetChangeRequest.deleteAssets(assetsToBeDeleted)
}, completionHandler: completionHandler)
Expected Behavior
The system should present a confirmation dialog asking the user to delete the selected photos.
After the user confirms, the deletion should occur, and the completionHandler should be called with success or error.
Actual Behavior
The system delete confirmation dialog does not appear.
The completionHandler is never called.
Environment
iOS Versions: 26.1 / 26.0.1
It looks like an API bug.
I want to check whether this is a known issue and whether it will be fixed. Thanks.
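For reference, this is roughly how we verify authorization before calling performChanges, in case it is relevant (a minimal sketch; the helper name and surrounding wiring are illustrative):

import Photos

// Illustrative helper: confirm read/write access before attempting deletion.
func deleteAssets(localIdentifiers: [String], completionHandler: @escaping (Bool, Error?) -> Void) {
    let status = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    guard status == .authorized else {
        print("Photo library access is not fully authorized: \(status)")
        return
    }
    PHPhotoLibrary.shared().performChanges({
        let assets = PHAsset.fetchAssets(withLocalIdentifiers: localIdentifiers, options: nil)
        PHAssetChangeRequest.deleteAssets(assets)
    }, completionHandler: completionHandler)
}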
I have a main app that saves preferences to UserDefaults.standard. There is one preference the user can toggle, isRawOn:
UserDefaults.standard.set(self.isRawOn, forKey: "isRawOn")
Now I have a LockedCameraCaptureExtension that needs to know whether that setting is on or off at launch. Also, if it's toggled within the extension, the main app should know about it on the next launch.
The main app and the extension run in separate containers, and their preferences are not shared for privacy reasons.
Apple mentions using the appContext of CameraCaptureIntent, but I'm not sure how the above scenario is possible through that, unless I am missing something.
Apple Reference
What I have for CameraCaptureIntent:
@available(iOS 18, *)
struct LaunchMyAppControlIntent: CameraCaptureIntent {
    typealias AppContext = MyAppContext

    static let title: LocalizedStringResource = "LaunchMyAppControlIntent"
    static let description = IntentDescription("Capture photos with MyApp.")

    @MainActor
    func perform() async throws -> some IntentResult {
        .result()
    }
}
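From what I can tell, the idea would be to make MyAppContext a Codable type holding the toggle, and read/write it through the intent's static appContext/updateAppContext members. A sketch of what I think that looks like (I'm assuming those are the right entry points; the helper functions are my own):

struct MyAppContext: Codable {
    var isRawOn: Bool = false
}

// Writing the current toggle state (from the main app or the extension):
func storeRawSetting(_ isRawOn: Bool) async {
    try? await LaunchMyAppControlIntent.updateAppContext(MyAppContext(isRawOn: isRawOn))
}

// Reading it back at launch (main app or extension):
func loadRawSetting() async -> Bool {
    if let context = try? await LaunchMyAppControlIntent.appContext {
        return context.isRawOn
    }
    return false
}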
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
iOS
Photos and Imaging
PhotoKit
AVFoundation
Part of the .swift file that controls the haptic buzz for each heart beat; however, it gives a 3-5 second delay between the actual heart beat and the watch haptic/buzz for that heart beat.
If I make a request to https://api.music.apple.com/v1/storefronts/us with the proper developer JWT token in the header, I receive a successful response with a list of storefronts. If I remove the token, I get back a 401 error, as expected.
If I call any catalog-based query, however, I get back a 500 error.
For instance: https://api.music.apple.com/v1/catalog/us/albums/310730204
returns a 500 error with the body being
{"message":"An unexpected error occurred"}
I'm not sure what I can do to fix this. Please help.
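For reference, the failing request is made along these lines (a minimal Swift sketch; the developer token value is a placeholder):

import Foundation

func fetchAlbum(developerToken: String) async throws {
    // Same catalog request that returns the 500; the storefront request succeeds
    // with the identical Authorization header.
    let url = URL(string: "https://api.music.apple.com/v1/catalog/us/albums/310730204")!
    var request = URLRequest(url: url)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")

    let (data, response) = try await URLSession.shared.data(for: request)
    if let http = response as? HTTPURLResponse {
        print("Status: \(http.statusCode)")            // 500 in our case
        print(String(decoding: data, as: UTF8.self))   // {"message":"An unexpected error occurred"}
    }
}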
iPhone 7: iOS 14.0 Beta 5
Xcode-beta
macOS: 10.15.5 (19F101)
Crash info:
** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[PHPhotoLibrary presentLimitedLibraryPickerFromViewController:]: unrecognized selector sent to instance xxxxxx'
terminating with uncaught exception of type NSException
my code:
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    if (@available(iOS 14, *)) {
        [[PHPhotoLibrary sharedPhotoLibrary] presentLimitedLibraryPickerFromViewController:self];
    }
}
My app is properly configured with MusicKit. I've generated a JWT using my valid credentials (Team ID, Key ID, private key), and I’ve ensured the time settings are correct via NTP.
When I call:
https://api.music.apple.com/v1/catalog/jp/search?term=ado&types=songs
I consistently receive a 500 Internal Server Error.
The JWT is generated using ES256 with valid iat and exp values. I’ve confirmed the token decodes properly using jwt.io, and it's passed via the Authorization: Bearer header.
Things I’ve confirmed:
Key ID, Team ID, private key are correct
App ID is configured with MusicKit capability
JWT is generated and signed correctly
macOS time is synced via NTP
Used both curl and Python to test — same result
Is there anything else I should check on the Apple Developer Console (like App ID, Certificates, or provisioning profile)?
Or could this be a backend issue on Apple’s side?
Any guidance would be appreciated.
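For reference, the developer token is generated along these lines (a simplified Swift sketch using CryptoKit; the key ID, team ID, and key contents are placeholders):

import Foundation
import CryptoKit

// Placeholders; the real values come from the Apple Developer account and the .p8 file.
let keyID = "YOUR_KEY_ID"
let teamID = "YOUR_TEAM_ID"
let privateKeyPEM = "<contents of the .p8 file>"

func base64URL(_ data: Data) -> String {
    data.base64EncodedString()
        .replacingOccurrences(of: "+", with: "-")
        .replacingOccurrences(of: "/", with: "_")
        .replacingOccurrences(of: "=", with: "")
}

func makeDeveloperToken() throws -> String {
    let now = Int(Date().timeIntervalSince1970)
    let header: [String: Any] = ["alg": "ES256", "kid": keyID]
    let payload: [String: Any] = ["iss": teamID, "iat": now, "exp": now + 12 * 60 * 60]

    let signingInput = base64URL(try JSONSerialization.data(withJSONObject: header))
        + "." + base64URL(try JSONSerialization.data(withJSONObject: payload))

    // ES256 over the header.payload string; rawRepresentation is the r||s form JWT expects.
    let key = try P256.Signing.PrivateKey(pemRepresentation: privateKeyPEM)
    let signature = try key.signature(for: Data(signingInput.utf8))
    return signingInput + "." + base64URL(signature.rawRepresentation)
}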
My app encountered problems when trying to open an x86 Audio Unit v2 on an Apple Silicon Mac (although Rosetta is installed).
There seems to be an XPC connection issue with the AUHostingService that I don't know how to fix.
I observed other host apps opening the same plugins without problems, so there is probably something wrong or incompatible in my code.
I noticed that:
The issue occurs whether or not the app is sandboxed.
The issue no longer occurs when the app itself runs under Rosetta.
No error is reported by CoreAudio during allocation and initialization of the audio unit. The first errors appear when the unit calls AudioUnitRender from the rendering callback.
With most x86 plugins, the error is on first call:
kAudioUnitErr_RenderTimeout
and on any subsequent call:
kAudioComponentErr_InstanceInvalidated
On the UI side, when the Cocoa view is loaded, it appears briefly, then disappears immediately, leaving its superview empty.
With another x86 plugin, the Cocoa View is loaded normally, but CoreAudio still emits
kAudioUnitErr_NoConnection
from AudioUnitRender, whether the view has been loaded or not, and the plugin produces no sound.
I also find these messages in the console (printed in that order):
CLIENT ERROR: RemoteAUv2ViewController does not override - and thus cannot react to catastrophic errors beyond logging them
AUAudioUnit_XPC.mm:641 Crashed AU possible component description: aumu/Helm/Tyte
My app uses the AUv2 API and I suspect that working with the AUv3 API would spare me these problems.
However, considering how my audio system is built (audio units are wrapped into C++ classes and most connections between units are managed on the fly from the rendering callback), it would be a lot of work to convert, and I'm not even sure that everything I do with the AUv2 API would be possible with the AUv3 API.
I could possibly find an intermediate solution, but in the immediate future I'm looking for the simplest and fastest possible fix. If I cannot find better, I see two fallback options:
In this part of the doc: “Beginning with macOS 11, the system loads audio units into a separate process that depends on the architecture or host preference”, does “host preference” mean that it would be possible to disable the “out of process” behavior, for example from the app entitlements or Info.plist?
Otherwise, as a last resort, I could completely disable the use of x86 audio units when my app runs under ARM64, at least to make things cleaner. But the Audio Component API doesn't give any info about the plugin architecture; how could I find it?
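(For that second option, one avenue I'm looking at is querying AVAudioUnitComponentManager, assuming its availableArchitectures property reports what I need; a sketch:)

import AVFoundation

// Sketch: look up a component by description and check which CPU architectures
// its executable provides (NSBundleExecutableArchitectureARM64 / ...X86_64).
func architectures(for description: AudioComponentDescription) -> [NSNumber] {
    AVAudioUnitComponentManager.shared()
        .components(matching: description)
        .first?
        .availableArchitectures ?? []
}

func supportsARM64(_ description: AudioComponentDescription) -> Bool {
    architectures(for: description)
        .contains(NSNumber(value: NSBundleExecutableArchitectureARM64))
}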
Any tip or idea about this issue will be much appreciated.
Thanks in advance!
Hi,
I'm working on a project that uses the AVSpeechSynthesizer and AVSpeechUtterance.
I discovered by chance that AVSpeechSynthesizer automatically expands some words instead of just speaking the text it is given.
These are abbreviations for days of the week or months, though not all of them. I don't want any of them expanded automatically; only the specified text should be spoken. The expansion happens across languages.
I have written a short example program for demonstration purposes.
import SwiftUI
import AVFoundation
import Foundation

let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer()

struct ContentView: View {
    var body: some View {
        VStack {
            Button {
                utter("mon")
            } label: {
                Text("mon")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("tue")
            } label: {
                Text("tue")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("thu")
            } label: {
                Text("thu")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("feb")
            } label: {
                Text("feb")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("feb", lang: "de-DE")
            } label: {
                Text("feb DE")
            }
            .buttonStyle(.borderedProminent)
            Button {
                utter("wed")
            } label: {
                Text("wed")
            }
            .buttonStyle(.borderedProminent)
        }
        .padding()
    }

    private func utter(_ text: String, lang: String = "en-US") {
        let utterance = AVSpeechUtterance(string: text)
        let voice = AVSpeechSynthesisVoice(language: lang)
        utterance.voice = voice
        synthesizer.speak(utterance)
    }
}

#Preview {
    ContentView()
}
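One workaround I'm considering is forcing a literal pronunciation through the IPA notation attribute on an attributed utterance. A sketch (the IPA value here is just a placeholder I would generate via Settings > Accessibility > Spoken Content > Pronunciations):

private func utterLiteral(_ text: String, ipa: String, lang: String = "en-US") {
    // Attach an IPA pronunciation to the whole string so the synthesizer
    // speaks it literally instead of expanding the abbreviation.
    let attributed = NSMutableAttributedString(string: text)
    attributed.addAttribute(
        NSAttributedString.Key(AVSpeechSynthesisIPANotationAttribute),
        value: ipa,
        range: NSRange(location: 0, length: attributed.length)
    )
    let utterance = AVSpeechUtterance(attributedString: attributed)
    utterance.voice = AVSpeechSynthesisVoice(language: lang)
    synthesizer.speak(utterance)
}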
Thank you
Christian
I'm developing an app that sends video captured on an iPhone to a browser app and displays it on screen.
When this app is used on an iPhone 17 Pro or 17 Pro Max, the video shown on the browser side becomes solid green, or a mostly green, colorful image.
Looking into it, the ultra-wide and telephoto cameras on the 17 Pro and 17 Pro Max changed from 12 MP to 48 MP, so I suspect the encoding is failing because of that.
Any information would be appreciated.
Environment
WebRTC library: GoogleWebRTC version 1.1 (installed via CocoaPods)
Signaling server: AWS Kinesis Video Streams
Devices where the issue occurs:
Model name: iPhone18,1, OS: 26.0
Model name: iPhone18,1, OS: 26.1
Devices where the issue does not occur:
Many models up to and including iPhone17,5
Model name: iPhone18,1, OS: 26.0
Model name: iPhone18,3, OS: 26.0
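In case it's related, this is roughly how I'm thinking of constraining the capture format before handing frames to WebRTC (a sketch; I'm assuming RTCCameraVideoCapturer's supportedFormats(for:) and startCapture(with:format:fps:), and that capping the dimensions sidesteps the encoder issue):

import AVFoundation
import WebRTC

// Sketch: pick the largest capture format that stays at or below 1920x1080,
// instead of whatever default format the capturer would otherwise use.
func selectFormat(for device: AVCaptureDevice) -> AVCaptureDevice.Format? {
    RTCCameraVideoCapturer.supportedFormats(for: device)
        .filter { format in
            let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            return dims.width <= 1920 && dims.height <= 1080
        }
        .max { a, b in
            let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
            let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
            return da.width * da.height < db.width * db.height
        }
}

func startCapture(capturer: RTCCameraVideoCapturer, device: AVCaptureDevice) {
    guard let format = selectFormat(for: device) else { return }
    capturer.startCapture(with: device, format: format, fps: 30)
}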
Hello!
We stumbled upon a problem with our karaoke app where a user on an iPhone 16e running iOS 18.5 has a problem with mic capture: other users cannot hear him. Mic capture works fine on iOS 17.5 and 16.8. Maybe there is something else we need when configuring AVAudioSession for iOS 18.5?
Currently it's set up like this:
override func viewDidLoad() {
    super.viewDidLoad()
    UIApplication.shared.isIdleTimerDisabled = true
    mRoomId = appDelegate.getRoomId()

    let audioSession = AVAudioSession.sharedInstance()
    try! audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
    try! audioSession.setPreferredSampleRate(48000)
    try! audioSession.setActive(true, options: [])
}
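For comparison, a more defensive variant we're considering, which checks record permission and surfaces configuration errors instead of crashing on try! (a sketch; requestRecordPermission is the pre-iOS 17 API we still use):

func configureAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    // A denied mic permission produces silent capture rather than an error,
    // so verify it was actually granted before configuring the session.
    audioSession.requestRecordPermission { granted in
        guard granted else {
            print("Microphone permission denied")
            return
        }
        do {
            try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
            try audioSession.setPreferredSampleRate(48000)
            try audioSession.setActive(true, options: [])
        } catch {
            print("AVAudioSession configuration failed: \(error)")
        }
    }
}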
Topic:
Media Technologies
SubTopic:
Audio
We have a Low-Latency HLS stream. On iOS 26, even though the available bandwidth is sufficient, AVPlayer (presented with AVPlayerViewController) still selects a low-bandwidth variant (e.g., RESOLUTION=640x360) for playback instead of a higher-bandwidth one (e.g., RESOLUTION=1920x1080).
This works fine on iOS version 18 and previous versions. What could be the solution to this issue?
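For what it's worth, we're diagnosing the variant selection by reading the player item's access log; a small sketch of that check (the properties come from AVPlayerItemAccessLogEvent):

import AVFoundation

// Dump the throughput the player measured versus the variant bitrate it chose.
func logVariantSelection(for item: AVPlayerItem) {
    guard let events = item.accessLog()?.events else { return }
    for event in events {
        print("observed: \(event.observedBitrate) bps, indicated: \(event.indicatedBitrate) bps, stalls: \(event.numberOfStalls)")
    }
}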
Topic:
Media Technologies
SubTopic:
Streaming
We're distributing a virtual camera with our app that does not profit in the slightest from automatically applied system video effects both to the video going in (physical camera device) or out (virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and determining active video effects via AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX.
To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I'm seeing properties like
AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats I was hoping to find something in the extensions, but wasn't successful so far.
Can this be done?
I’m implementing FairPlay offline streaming on iOS and ran into a question about DRM expiration handling.
As far as I understand, when issuing a FairPlay offline license, there are typically two time windows:
1. The period during which the user can start offline playback (the longer “rental window”).
2. Once playback starts, the duration allowed to complete playback (the shorter “playback window”).
I’d like to display this information (the remaining validity or expiration time) in the app’s UI next to each downloaded asset.
My question is:
👉 Is there a way to programmatically check or retrieve the expiration time for a FairPlay offline asset on the client side (via AVFoundation or AVContentKeySession)?
Any guidance or best practices for surfacing DRM expiration info in the UI would be greatly appreciated.
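In case it helps frame the question, this is the kind of bookkeeping I'd otherwise do on the app side, persisting the windows my key server reports when it issues the offline license (a sketch; LicenseInfo and its fields are my own model, not an AVFoundation API):

import Foundation

// Illustrative app-side model for surfacing expiration in the UI.
struct LicenseInfo: Codable {
    let issuedAt: Date                 // when the offline license was delivered
    let rentalWindow: TimeInterval     // how long the user may start playback
    let playbackWindow: TimeInterval   // how long playback may continue once started
    var firstPlayedAt: Date?

    var expirationDate: Date {
        if let started = firstPlayedAt {
            return started.addingTimeInterval(playbackWindow)
        }
        return issuedAt.addingTimeInterval(rentalWindow)
    }

    var remainingTime: TimeInterval {
        max(0, expirationDate.timeIntervalSinceNow)
    }
}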
How can I use an iPhone 16 Pro to capture HDR Dolby Vision video and use VideoToolbox to encode it with Dolby Vision metadata in the bitstream?
I know the iOS system can achieve this, but I would like to see source code showing how to do it.
Topic:
Media Technologies
SubTopic:
Video
Hi everyone,
I’m exploring using the iPhone 17 Pro with the Blackmagic ProDock in a custom capture app. The genlock functionality seems accessible via AVExternalSyncDevice and related APIs, which is great.
I’m specifically curious about external timecode coming in from the ProDock:
• Is there a public way to access the timecode feed in a custom app via AVFoundation or another Apple API?
• If so, what is the recommended approach to read or apply that timecode during capture?
• Are there any current limitations or entitlements required to access timecode from ProDock in a third-party app?
I’m excited to start integrating synchronized capture in my app, and any guidance or sample patterns would be greatly appreciated.
Thanks in advance!
— [Artem]
Topic:
Media Technologies
SubTopic:
Video
When I'm on FaceTime, my phone will randomly end my call. I have an iPhone 17 on iOS 26.1. Sometimes I won't even be touching my phone screen and it'll hang up. I'm not sure if this is a universal issue or just a me problem. It's getting really annoying.
Topic:
Media Technologies
SubTopic:
General
Hey everyone,
I'm stuck on a really frustrating AVFoundation problem. I'm building a video editor that uses a custom AVVideoCompositor to add effects, and I need the final output to be 60 FPS.
So basically, I create an AVMutableComposition to sequence my video clips. I create an AVMutableVideoComposition and set the frame rate to 60 FPS: videoComposition.frameDuration = CMTime(value: 1, timescale: 60)
I assign my CustomVideoCompositor class to the videoComposition.
I create an AVPlayerItem with the composition and video composition.
The Problem:
Playback Works: When I play the AVPlayerItem in an AVPlayer, it's perfect. It plays at a smooth 60 FPS, and my custom compositor's startRequest method is called 60 times per second.
Export Fails: When I try to export the exact same composition and video composition using AVAssetExportSession, the final .mp4 file is always 30 FPS (or 29.97).
I've logged inside my custom compositor during the export, and it's definitely being called only 30 times per second, so only 30 frames per second are being generated. It seems like AVAssetExportSession is dropping every other frame when it encodes the video.
My source videos are screen recordings that I captured with ScreenCaptureKit, with the minimum frame interval set for 60 FPS.
Here is my export function. I'm using the AVAssetExportPresetHighestQuality preset:
func exportVideo(to outputURL: URL) async throws {
    guard let composition = composition,
          let videoComposition = videoComposition else {
        throw VideoCompositionError.noValidVideos
    }

    try? FileManager.default.removeItem(at: outputURL)

    guard let exportSession = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality // Is this the problem?
    ) else {
        throw VideoCompositionError.trackCreationFailed
    }

    exportSession.outputFileType = .mp4
    exportSession.videoComposition = videoComposition // This has the 60fps setting
    try await exportSession.export(to: outputURL, as: .mp4)
}
I've created a bare bones sample project that shows this exact bug in action. The resulting video is 60fps during playback, but only 30fps during the export. https://github.com/zaidbren/SimpleEditor
My Question:
Why is AVAssetExportSession ignoring my 60 FPS frameDuration and defaulting to 30 FPS, even though AVPlayer respects it?
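The workaround I'm evaluating in the meantime is to drive the export myself with AVAssetReader/AVAssetWriter, so that the video composition (and, I'm assuming, its 60 FPS frameDuration) is applied by AVAssetReaderVideoCompositionOutput instead of the export session. A rough, untested sketch that reuses my VideoCompositionError type and simplifies the H.264 settings:

import AVFoundation

func exportWithReaderWriter(composition: AVComposition,
                            videoComposition: AVVideoComposition,
                            to outputURL: URL) async throws {
    try? FileManager.default.removeItem(at: outputURL)

    // Reader side: the video composition (custom compositor included) is applied here.
    let reader = try AVAssetReader(asset: composition)
    let videoTracks = try await composition.loadTracks(withMediaType: .video)
    let readerOutput = AVAssetReaderVideoCompositionOutput(videoTracks: videoTracks,
                                                           videoSettings: nil)
    readerOutput.videoComposition = videoComposition
    reader.add(readerOutput)

    // Writer side: simplified H.264 settings at the composition's render size.
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(videoComposition.renderSize.width),
        AVVideoHeightKey: Int(videoComposition.renderSize.height)
    ])
    writerInput.expectsMediaDataInRealTime = false
    writer.add(writerInput)

    guard reader.startReading(), writer.startWriting() else {
        throw writer.error ?? reader.error ?? VideoCompositionError.noValidVideos
    }
    writer.startSession(atSourceTime: .zero)

    // Pull composed frames from the reader and hand them to the writer.
    await withCheckedContinuation { (continuation: CheckedContinuation<Void, Never>) in
        let queue = DispatchQueue(label: "export.writer.queue")
        writerInput.requestMediaDataWhenReady(on: queue) {
            while writerInput.isReadyForMoreMediaData {
                guard let sampleBuffer = readerOutput.copyNextSampleBuffer(),
                      writerInput.append(sampleBuffer) else {
                    writerInput.markAsFinished()
                    continuation.resume()
                    return
                }
            }
        }
    }

    await writer.finishWriting()
    if writer.status == .failed {
        throw writer.error ?? VideoCompositionError.trackCreationFailed
    }
}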
Environment
Device: iPhone 16e
iOS Version: 18.4.1 - 18.7.1
Framework: AVFoundation (AVAudioEngine)
Problem Summary
On iPhone 16e (iOS 18.4.1-18.7.1), the installTap callback stops being invoked after resuming from a phone call interruption. This issue is specific to phone call interruptions and does not occur on iPhone 14, iPhone SE 3, or earlier devices.
Expected Behavior
After a phone call interruption ends and audioEngine.start() is called, the previously installed tap should continue receiving audio buffers.
Actual Behavior
After resuming from phone call interruption:
Tap callback is no longer invoked
No audio data is captured
No errors are thrown
Engine appears to be running normally
Note: Normal pause/resume (without phone call interruption) works correctly.
Steps to Reproduce
Start audio recording on iPhone 16e
Receive or make a phone call (triggers AVAudioSession interruption)
End the phone call
Resume recording with audioEngine.start()
Result: Tap callback is not invoked
Tested devices:
iPhone 16e (iOS 18.4.1-18.7.1): Issue reproduces ✗
iPhone 14 (iOS 18.x): Works correctly ✓
iPhone SE 3 (iOS 18.x): Works correctly ✓
Code
Initial Setup (Works)
let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
    self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try audioEngine.start()
Interruption Handling
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: nil
) { notification in
    guard let userInfo = notification.userInfo,
          let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
        return
    }
    if type == .began {
        self.audioEngine.pause()
    } else if type == .ended {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start()
        // Tap callback doesn't work after this on iPhone 16e
    }
}
Workaround
Full engine restart is required on iPhone 16e:
func resumeAfterInterruption() throws {
    audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
        self.processAudioBuffer(buffer, at: time)
    }
    audioEngine.prepare()
    try audioSession.setActive(true)
    try audioEngine.start()
}
This works but adds latency and complexity compared to simple resume.
Questions
Is this expected behavior on iPhone 16e?
What is the recommended way to handle phone call interruptions?
Why does this only affect iPhone 16e and not iPhone 14 or SE 3?
Any guidance would be appreciated!
Hello, we have a video streaming app with HLS VOD content. We supply 1080p and 4K content to users. Users were able to watch 1080p content before tvOS 26, but they can no longer watch 1080p content after updating to tvOS 26. We have not changed anything on the HLS playlist side or in the application version. This problem only occurs on the Apple TV 4th generation (A1625) running tvOS 26; there is no problem with newer Apple TV devices. Could you help us resolve this problem? Thanks in advance.
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
FairPlay Streaming
Apple TV
tvOS
HTTP Live Streaming