Hi. I am working on an audio app for iOS. I have implemented UI and handling that allow the user to change the playback rate of audio. When the user selects a different rate, I update the rate property on my AVQueuePlayer. This works well on device.
When I use AirPlay, it works for some devices and not for others. Some devices won't change the playback rate and always play at 1x speed.
Is this possibly a limitation of those third-party devices? Or is there something I'm missing or should check? I'd love to get playback rate changes working across all AirPlay devices with our app.
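For reference, here's roughly how I apply the rate, with a capability check added (a minimal sketch; player is my AVQueuePlayer, and I'm assuming the AVPlayerItem capability flags reflect what the current AirPlay route supports):
import AVFoundation

func apply(rate: Float, to player: AVQueuePlayer) {
    guard let item = player.currentItem else { return }
    // Rates above 1x require canPlayFastForward; rates between 0 and 1
    // require canPlaySlowForward.
    let supported: Bool
    if rate > 1.0 {
        supported = item.canPlayFastForward
    } else if rate > 0.0 && rate < 1.0 {
        supported = item.canPlaySlowForward
    } else {
        supported = true
    }
    if supported {
        player.rate = rate
    } else {
        print("Current item/route does not support playback at \(rate)x")
    }
}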
Kind regards.
I have seen demos that convert HDR video to an SDR pixel buffer using AVAssetReader, AVVideoComposition, AVComposition, and other AVFoundation APIs.
But in some cases I want to render the HDR pixel buffers directly and record the video.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([videoDevice isVideoHDRSupported]) {
    NSError *error = nil;
    if ([videoDevice lockForConfiguration:&error]) {
        videoDevice.automaticallyAdjustsVideoHDREnabled = NO;
        videoDevice.videoHDREnabled = YES; // Enable HDR
        [videoDevice unlockForConfiguration];
    } else {
        NSLog(@"Error: %@", error.localizedDescription);
    }
}
Real-time processing of HDR data requires operating on the video frame data (for example, applying filters) while ensuring the processing chain preserves 10-bit color depth and the HDR metadata. I also need to use the image buffers for object tracking, etc.
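For context, this is how I imagine requesting 10-bit frames from the data output (a minimal sketch in Swift; session is my configured capture session, and I'm assuming its active format supports this pixel format):
import AVFoundation

let videoOutput = AVCaptureVideoDataOutput()
// Request the 10-bit bi-planar format so HDR depth survives the output.
videoOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String:
        kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
]
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}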
How can I solve this problem?
Hi folks,
When doing HLS v6 live streaming with fMP4 chunks, we noticed that when the encoder timestamps drift slightly and an #EXT-X-DISCONTINUITY tag is created in either the audio or the video playlist (in an ABR setup), the tag is not handled correctly by the player, leading to broken playback with a black screen or no audio (depending on which playlist the tag appears in).
We noticed that this is often the case when the number of tags differs between the playlists (e.g., if the audio playlist contains one tag and the video playlist contains two, the result is a black screen with audio).
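To illustrate the kind of mismatch we're seeing (hypothetical playlist excerpts; segment names and durations are made up):
# video media playlist (excerpt): two discontinuities
#EXTINF:6.006,
video_0001.m4s
#EXT-X-DISCONTINUITY
#EXTINF:6.006,
video_0002.m4s
#EXT-X-DISCONTINUITY
#EXTINF:6.006,
video_0003.m4s

# audio media playlist (excerpt): only one discontinuity
#EXTINF:6.006,
audio_0001.m4s
#EXT-X-DISCONTINUITY
#EXTINF:6.006,
audio_0002.m4s
#EXTINF:6.006,
audio_0003.m4s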
Playing the same "broken" source with Shaka Player instead doesn't break playback at all.
Is there any existing (or upcoming) fix for AVPlayer?
Hello Apple Community,
We are working on a real-time streaming feature where we receive chunks of raw MP4 data through a custom protocol and store them in a buffer (array). Our goal is to use these data chunks to play a continuous video stream in AVPlayer.
What We've Tried:
Custom URL Scheme with AVAssetResourceLoaderDelegate:
We implemented a custom URL scheme (customscheme://) to serve the buffered data using AVAssetResourceLoaderDelegate.
The shouldWaitForLoadingOfRequestedResource delegate method is called only for the initial request; it doesn't get triggered again when new chunks are appended to the buffer.
Despite appending new data to the buffer, AVPlayer doesn't request further chunks from the delegate.
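For reference, our delegate is roughly shaped like this (a simplified sketch; the class and method names below are ours, and content-information handling is omitted). It retains pending loading requests and tries to service them as chunks arrive:
import AVFoundation

final class ChunkedResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
    private var buffer = Data()
    private var pendingRequests = [AVAssetResourceLoadingRequest]()
    private let queue = DispatchQueue(label: "resource.loader.queue")

    // Called by our networking layer whenever a new MP4 chunk arrives.
    func append(_ chunk: Data) {
        queue.async {
            self.buffer.append(chunk)
            self.servicePendingRequests()
        }
    }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource
                        loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        queue.async {
            self.pendingRequests.append(loadingRequest)
            self.servicePendingRequests()
        }
        return true // keep the request open; we fulfill it as data arrives
    }

    private func servicePendingRequests() {
        for request in pendingRequests where !request.isFinished {
            guard let dataRequest = request.dataRequest else { continue }
            let offset = Int(dataRequest.currentOffset)
            if offset < buffer.count {
                dataRequest.respond(with: buffer.subdata(in: offset..<buffer.count))
            }
        }
        // contentInformationRequest handling (contentType, contentLength,
        // isByteRangeAccessSupported) and request completion are omitted here.
    }
}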
What We Need:
We are looking for a solution where:
The player continuously fetches data from the buffer as new chunks are added.
The playback remains smooth and uninterrupted, even with real-time data being appended.
Ideally, this solution works with AVPlayer while adhering to HLS-like behavior without implementing an HLS server.
Questions:
Is AVAssetResourceLoaderDelegate the right approach for this use case?
If so, how can we ensure shouldWaitForLoadingOfRequestedResource is called whenever new data is available in the buffer?
Are there alternative APIs or recommended patterns for playing real-time MP4 data chunks in AVPlayer?
Would implementing a custom FFmpeg-based player be necessary, or can this be achieved using AVPlayer and its APIs?
We appreciate any guidance, suggestions, or examples that can help us achieve this. Thank you!
Hello All,
It seems it should be "very easy" (😬) to implement a little Swift code inside the prepared AU using Xcode 16.2 on Sequoia 15.1.1 and a Mac Studio M1 Ultra, but my issue is that I just don't know... where.
The documentation says that I have to find the AudioUnitViewController.swift file and then modify the render block:
audioUnit.renderBlock = { (numFrames, ioData) in
    // Process audio here
}
in the automatically generated Xcode project, but I can't find such a file...
If somebody can show me where the file to be modified is, I'll be very grateful!
Thank you very much.
J
Hi I'm new to the forum,
I'm planning an app just for Apple Watch. I would like to use Bluetooth audio in the background; how can I do it?
The messages I send via Bluetooth stop as soon as the watch display turns off.
Thank you!
Nax
I want my iOS app to be able to use USB client mode to send LiDAR data and camera frames to another device. What are my options for doing this? I've found IOUSBHost for host mode, but I want to see all my options. The device I want driving the bus is a Meta Quest 3, which, I mention for the sake of clarity, is inherently an Android device. The iPhone is to be used as a sensor hub, sending data to the Quest 3 for further processing.
As a workaround, I could let the iPhone drive the bus and have the Quest 3 use Android's accessory mode, which lets other devices drive the USB bus. But there are more USB devices I want to attach for my project, and doing this would make that more difficult, so I want to avoid it.
I wanted to ask whether I can use a Camera Capture Extension with AVMultiCamPiP. LockedCameraCaptureUIScene returns only one session in its closure, and I can't use the same session for both the front and the back camera at the same time.
Songs can be unavailable (greyed out) in Apple Music. How can I check whether a song is unavailable via the MusicKit framework? Obviously, playback will fail with MPMusicPlayerControllerErrorDomain Code=6 "Failed to prepare to play", but how can I know that in advance? I need to check the availability of hundreds of albums, so initiating playback for each of them is not an option.
Things I have tried:
Checking if the release date property is set to a future date. This filters out all future releases but doesn't solve the problem for already released songs.
Checking if the duration is 0. This does not work since the duration of unavailable songs does not have to be 0.
Initiating a playback and checking for the "Failed to prepare to play" error. This is not suitable for a huge amount of Albums.
I haven't found a solution yet, but somehow other third-party apps are able to ignore or not show these albums. I believe the Apple Music app only displays albums where at least one song is available.
I am using this function to fetch all albums of an artist.
private func fetchAlbumsFor(_ artist: Artist) async throws -> [Album] {
    let artistWithAlbums = try await artist.with(.albums)
    var allAlbums = [Album]()
    guard var currentBatch = artistWithAlbums.albums else {
        return []
    }
    allAlbums.append(contentsOf: currentBatch)
    while currentBatch.hasNextBatch {
        if let nextBatch = try await currentBatch.nextBatch() {
            currentBatch = nextBatch
            allAlbums.append(contentsOf: nextBatch)
        } else {
            break
        }
    }
    return allAlbums
}
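For what it's worth, the closest thing to a check I can think of is filtering on playParameters (a sketch; I'm assuming, but haven't verified, that playParameters is nil for items that can't be played with a subscription):
import MusicKit

// Hypothetical filter: keep only albums MusicKit reports as playable.
func filterPlayable(_ albums: [Album]) -> [Album] {
    albums.filter { $0.playParameters != nil }
}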
Here is an example album where I am unable to detect its unavailability (at least in Germany):
https://music.apple.com/de/album/die-haferhorde-immer-den-n%C3%BCstern-nach-h%C3%B6rspiel-zu-band-3/1755774804
Furthermore, I was unable to navigate to this album directly via the Apple Music app.
Thanks for any help
Edit: Apparently this album is not included in an Apple Music subscription but can be bought separately. The question remains: how can I check for that?
I have added this extension to my app and followed the steps in the documentation, but it didn't work. When I press the shortcut from Control Center, it only opens the app, and when I put it on the Lock Screen, it doesn't do anything. I don't know what I am missing.
The documentation for AVCaptureDeviceRotationCoordinator says it is designed to work with any CALayer, but it seems to be designed to work only with AVCaptureVideoPreviewLayer. Can someone confirm whether it is possible to make it work with other layers, such as CAMetalLayer or AVSampleBufferDisplayLayer?
Hi all!
I have been experiencing some issues when using the AVAudioEngine to play audio and record input while doing a voice chat (through the PTT Interface).
I noticed that if I connect any players to the audio graph, or call start, the audio session becomes active (this is on iOS).
I don't see anything in the docs or the header files in AVFoundation, but is it possible that calling the stop method on an engine deactivates the audio session too?
In a normal app this behavior seems logical, but when using PTT all activation and deactivation of the audio session must go through the framework and its delegate methods.
The issue I am debugging: when the engine with the input node tapped gets stopped, and there is a gap between the input ending and the server replying with inbound audio to be played, something seems to get the hardware/audio session into a jammed state.
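For reference, the sequence in question looks roughly like this (a simplified sketch; engine is our AVAudioEngine running inside a PTT channel):
import AVFoundation

// Tap the input to stream microphone audio to the server.
engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
    // send buffer to the server...
}
try engine.start()  // observed: this activates the audio session

// ...later, when transmission ends:
engine.inputNode.removeTap(onBus: 0)
engine.stop()       // question: does this also deactivate the session?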
Thanks for any feedback and/or confirmation on this behavior!
Hello, I'm wondering how to capture 24MP photos.
I'm currently testing on an iPhone 16 Pro Max. By default, the device's activeFormat supports 24MP (photo dimensions: {4032x3024, 5712x4284}). For the photoOutput, I'm setting maxPhotoDimensions to videoDevice.activeFormat.supportedMaxPhotoDimensions.lastObject, and setting maxPhotoQualityPrioritization to quality.
When capturing, I'm applying the same maxPhotoDimensions and photoQualityPrioritization settings from the photoOutput directly to the AVCapturePhotoSettings.
What could be the issue?
// Objective-C
// setup
[self.photoOutput setMaxPhotoQualityPrioritization:AVCapturePhotoQualityPrioritizationQuality];
CMVideoDimensions maxPhotoDimensions = [(NSValue *)videoDevice.activeFormat.supportedMaxPhotoDimensions.lastObject CMVideoDimensionsValue];
[self.photoOutput setMaxPhotoDimensions:maxPhotoDimensions];
// capturing
AVCapturePhotoSettings *photoSettings = [AVCapturePhotoSettings photoSettings];
photoSettings.maxPhotoDimensions = self.photoOutput.maxPhotoDimensions;
photoSettings.photoQualityPrioritization = self.photoOutput.maxPhotoQualityPrioritization;
[self.photoOutput capturePhotoWithSettings:photoSettings delegate:photoCaptureDelegate];
...
I am new to using Swift for a Mac application. I am trying to control the focus and other capabilities of an external UVC-compliant camera. However, I'm having trouble with this and don't know where to start. I have downloaded an application from the App Store that can control the focus and other capabilities.
I've tried IOKit, but it seems complicated, and it does not return any capabilities or control the camera.
I also tried AVFoundation and was able to open the camera, but the following code did not work for me, as device.isFocusPointOfInterestSupported returns false, and without that check the app crashes.
@IBAction func focusChanged(_ sender: NSSlider) {
    do {
        guard let device = videoDevice else { return }
        try device.lockForConfiguration()
        // Check if focus mode and point of interest are supported
        if device.isFocusModeSupported(.locked) {
            device.focusMode = .locked
        }
        if device.isFocusPointOfInterestSupported {
            // Map the slider value (0.0 to 1.0) to the focus point's X coordinate
            let focusX = CGFloat(sender.doubleValue)
            let focusPoint = CGPoint(x: focusX, y: 0.5) // Y coordinate is typically 0.5 (centered vertically)
            device.focusPointOfInterest = focusPoint
        } else {
            print("Focus point of interest is not supported on this device.")
        }
        device.unlockForConfiguration()
        // Log focus settings
        print("Focus point: \(device.focusPointOfInterest)")
        print("Focus mode: \(device.focusMode.rawValue)")
    } catch {
        print("Error adjusting focus: \(error)")
    }
}
Any help or advice is much appreciated.
How can I implement a custom CIFilter equivalent to Lightroom's Color Grading tool, with separate adjustments for the shadows, midtones, highlights, and global ranges?
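In case it helps frame the question, this is the direction I mean (my own rough sketch using a CIColorKernel, not Lightroom's actual algorithm: weight a per-range tint by luminance):
import CoreImage

// Rough sketch: tint shadows/midtones/highlights by luminance-based weights.
let kernelSource = """
kernel vec4 colorGrade(__sample s, vec3 shadowTint, vec3 midTint, vec3 highTint) {
    float luma = dot(s.rgb, vec3(0.2126, 0.7152, 0.0722));
    float sw = 1.0 - smoothstep(0.0, 0.5, luma);  // shadow weight
    float hw = smoothstep(0.5, 1.0, luma);        // highlight weight
    float mw = 1.0 - sw - hw;                     // midtone weight
    vec3 graded = s.rgb + sw * shadowTint + mw * midTint + hw * highTint;
    return vec4(clamp(graded, 0.0, 1.0), s.a);
}
"""
let gradeKernel = CIColorKernel(source: kernelSource)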
Dear Apple Developer Forum,
I have a question regarding AVCaptureDevice on iOS. We're trying to capture photos in the best quality possible, along with depth data of the highest accuracy possible. We were delighted to see that AVCaptureDevice can be initialized with AVMediaType .depthData, which works as expected (depthData is part of the AVCapturePhoto). When setting AVMediaType to .video, we still receive depth data (of the same quality, according to our own internal tests). That confused us.
Mind you, we set the device format and depth format as well:
private func getDeviceFormat() throws -> AVCaptureDevice.Format {
    // Ensures a high-photo-quality format with depth support and an appropriate color profile.
    guard let format = camera?.formats.first(where: {
        $0.isHighPhotoQualitySupported &&
        $0.supportedDepthDataFormats.count > 0 &&
        $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    }) else {
        throw CaptureDeviceError.necessaryFormatNotAvailable
    }
    return format
}
private func getDepthDataFormat(for format: AVCaptureDevice.Format) throws -> AVCaptureDevice.Format {
    // Access the 32-bit float depth format.
    guard let depthDataFormat = format.supportedDepthDataFormats.first(where: {
        $0.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat32
    }) else {
        throw CaptureDeviceError.necessaryFormatNotAvailable
    }
    return depthDataFormat
}
We're wondering what steps we can take to ensure the best quality photo along with the most accurate depth data. Which properties are the most important? Which have an effect, and which don't? Are there any ways we can optimize our current configuration? We find it difficult, as there are very limited guides and explanations of the media subtypes, for example kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. Is it the best? Is it the best for our use case of the highest quality photo plus the most accurate depth data?
Important comment: Our App only runs on iPhone 14 Pro, iPhone 15 Pro, iPhone 16 Pro on the latest iOS versions.
We hope someone with greater knowledge at Apple can help us and guide us on how we can have the photos of best quality and depth data with most accuracy.
Thank you very much!
Kind regards.
Hi all,
with my app ScreenFloat, you can record your screen, along with system- and microphone audio.
Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on.
Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly. So far, so good, but some apps, like Slack, or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one.
So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly bake the two individual audio tracks together into one and offer that new file as the drag-and-drop file, so that all audio plays in the target app.
But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.
My approach is this:
"Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession
Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession
For a quick benchmark, a 3'40'' movie, step 1 takes ~1.7 seconds, and step two adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback.
I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds then, because I assume it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).
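For concreteness, step 2 looks roughly like this (a simplified sketch; the function and variable names are mine, and error handling is trimmed). The passthrough preset is what keeps the video data untouched:
import AVFoundation

func mux(videoURL: URL, mixedAudioURL: URL, to outputURL: URL) async throws {
    let composition = AVMutableComposition()
    let videoAsset = AVURLAsset(url: videoURL)
    let audioAsset = AVURLAsset(url: mixedAudioURL)

    // Copy the video track into the composition.
    if let srcVideo = try await videoAsset.loadTracks(withMediaType: .video).first,
       let dstVideo = composition.addMutableTrack(withMediaType: .video,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) {
        let duration = try await videoAsset.load(.duration)
        try dstVideo.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                     of: srcVideo, at: .zero)
    }

    // Add the baked-together audio file as the single audio track.
    if let srcAudio = try await audioAsset.loadTracks(withMediaType: .audio).first,
       let dstAudio = composition.addMutableTrack(withMediaType: .audio,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) {
        let duration = try await audioAsset.load(.duration)
        try dstAudio.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                     of: srcAudio, at: .zero)
    }

    // Passthrough avoids re-encoding the video data.
    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov
    await export.export()
}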
So, my question is, is there a faster way?
The best idea I can come up with right now is, when initially recording the screen with system- and microphone audio as separate tracks, to also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step one and just ripping the two single audio tracks out of the movie and only have the video and the "hidden" track (then unmuted), but I'd still have a ~1.5 second delay there. Also, there's the processing and data overhead (basically doubling the movie's audio data).
All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal.
I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.
I'd appreciate any ideas or pointers.
Thank you kindly,
Matthias
In my app SexyPeri (https://apps.apple.com/fr/app/id6738291985), I create an album with some pics.
The album is called SexyPeri.
Now, I wish to redirect the user from my app SexyPeri DIRECTLY to the SexyPeri album.
There is no documentation about your photos-navigation scheme, or I didn't see it. Some people have reverse-engineered it, but I couldn't make it work: photos-navigation://album?name=SexyPeri doesn't work.
So my question is: how can I redirect to the album directly?
I am using AVMulti, so the user captures two images. How can I access those images if there is only one URL that stores the captured images for the LockedCameraCapture extension? Also, how can I detect whether the user opened the app from the extension, so I can navigate the user to the right screen?
I'm developing an iOS app using AVFoundation for real-time video capture and object detection.
While implementing torch functionality with camera switching (between Wide and Ultra-Wide lenses), I encountered a critical issue where the camera freezes when toggling the torch while the Ultra-Wide camera is active.
Issue
If the torch is ON and I switch from Wide to Ultra-Wide, the camera freezes
If the Ultra-Wide camera is active and I try to turn the torch ON, the camera freezes
The iPhone Camera app allows using the torch while recording video with the Ultra-Wide lens, so this should be possible via AVFoundation as well.
Code snippet
DispatchQueue.global(qos: .userInitiated).async { [weak self] in
    guard let self = self else { return }
    let isSwitchingToUltraWide = !self.isUsingFisheyeCamera
    let cameraType: AVCaptureDevice.DeviceType = isSwitchingToUltraWide ? .builtInUltraWideCamera : .builtInWideAngleCamera
    let cameraName = isSwitchingToUltraWide ? "Ultra Wide" : "Wide"
    guard let selectedCamera = AVCaptureDevice.default(cameraType, for: .video, position: .back) else {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "\(cameraName) camera is not available on this device.")
        }
        return
    }
    do {
        let currentInput = self.videoCapture.captureSession.inputs.first as? AVCaptureDeviceInput
        self.videoCapture.captureSession.beginConfiguration()
        if isSwitchingToUltraWide && self.isFlashlightOn {
            self.forceEnableTorchThroughWide()
        }
        if let currentInput = currentInput {
            self.videoCapture.captureSession.removeInput(currentInput)
        }
        let videoInput = try AVCaptureDeviceInput(device: selectedCamera)
        self.videoCapture.captureSession.addInput(videoInput)
        self.videoCapture.captureSession.commitConfiguration()
        self.videoCapture.updateVideoOrientation()
        DispatchQueue.main.async {
            if let barButton = sender as? UIBarButtonItem {
                barButton.title = isSwitchingToUltraWide ? "Wide" : "Ultra Wide"
                barButton.tintColor = isSwitchingToUltraWide ? UIColor.systemGreen : UIColor.white
            }
            print("Switched to \(cameraName) camera.")
        }
        self.isUsingFisheyeCamera.toggle()
    } catch {
        DispatchQueue.main.async {
            self.showAlert(title: "Camera Error", message: "Failed to switch to \(cameraName) camera: \(error.localizedDescription)")
        }
    }
}
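For reference, the torch toggle itself uses the standard AVCaptureDevice pattern (a simplified sketch, not my exact helper; torch changes go through lockForConfiguration on the device that currently owns the hardware):
import AVFoundation

func setTorch(_ on: Bool, for device: AVCaptureDevice) {
    // The torch can only be configured on a device that has one available.
    guard device.hasTorch, device.isTorchAvailable else { return }
    do {
        try device.lockForConfiguration()
        device.torchMode = on ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}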
Expected Behavior
Torch should be able to work when Ultra-Wide is active, just like the iPhone Camera app does.
The camera should not freeze when switching between Wide and Ultra-Wide with the torch ON.
AVCaptureSession should not crash when toggling the torch while Ultra-Wide is active.
Questions & Help Needed
Is this a known issue with AVFoundation?
How does the iPhone Camera app allow using the torch while recording in Ultra-Wide?
What’s the correct way to switch between Wide and Ultra-Wide cameras without freezing when the torch is active?
Info
Devices tested: iPhone 13 Pro / iPhone 15 Pro / iPhone 15
iOS Version: iOS 17.3 / iOS 18.0
Xcode Version: 16.2