Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

All subtopics
Posts under Media Technologies topic

iOS Radio App: Need to extract and stream audio-only from HLS streams with video content
I'm developing an iOS radio app that plays various HLS streams. The challenge is that some stations broadcast HLS streams containing both audio and video (example: https://svs.itworkscdn.net/smcwatarlive/smcwatar/chunks.m3u8), but I want to:

- Extract and play only the audio track
- Support AirPlay for audio-only streaming
- Minimize data usage by not downloading video content

Technical details: iOS 17+, Swift 5.9, AVFoundation for playback. The current implementation uses AVPlayer with AVPlayerItem.

Current code structure:

class StreamPlayer: ObservableObject {
    @Published var isPlaying = false
    private var player: AVPlayer?
    private var playerItem: AVPlayerItem?

    func playStream(url: URL) {
        let asset = AVURLAsset(url: url)
        playerItem = AVPlayerItem(asset: asset)
        player = AVPlayer(playerItem: playerItem)
        player?.play()
    }
}

Stream analysis: when analyzing the stream with FFmpeg:

Input #0, hls, from 'https://svs.itworkscdn.net/smcwatarlive/smcwatar/chunks.m3u8':
  Stream #0:0: Video: h264, yuv420p(tv, bt709), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps
  Stream #0:1: Audio: aac, 44100 Hz, stereo, fltp

Attempted solutions:

1. Using MobileFFmpeg to re-stream the audio over UDP:

let command = [
    "-i", streamUrl,
    "-vn",
    "-acodec", "aac",
    "-ac", "2",
    "-ar", "44100",
    "-b:a", "128k",
    "-f", "mpegts",
    "udp://127.0.0.1:12345"
].joined(separator: " ")
ffmpegProcess = MobileFFmpeg.execute(command)

Issue: while FFmpeg successfully extracts the audio, playback through AVPlayer doesn't work reliably.

2. Using an HLS output:

let command = [
    "-i", streamUrl,
    "-vn",
    "-acodec", "aac",
    "-ac", "2",
    "-ar", "44100",
    "-b:a", "128k",
    "-f", "hls",
    "-hls_time", "2",
    "-hls_list_size", "3",
    outputUrl.path
]

Issue: this creates temporary files but runs into synchronization issues with live streams.

Requirements:
- Real-time audio extraction from the HLS stream
- Maintain live streaming capabilities
- Full AirPlay support
- Minimal data usage (avoid downloading video content)
- Handle network interruptions gracefully

Questions:
1. What's the most efficient way to extract only audio from an HLS stream in real time?
2. Is there a way to tell AVPlayer to ignore video tracks completely?
3. Are there better alternatives to FFmpeg for this specific use case?
4. What's the recommended approach for handling AirPlay with modified streams?

Any guidance or alternative approaches would be greatly appreciated. Thank you!
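One low-effort check worth doing before reaching for FFmpeg, sketched below as an assumption rather than a known fix: since iOS 15, AVURLAsset can report the variants declared in the HLS multivariant playlist, so you can look for an audio-only rendition and cap preferredPeakBitRate to steer AVPlayer toward it. If the playlist declares no audio-only variant (quite possible for this stream), AVPlayer will still fetch video and a server-side or proxy approach remains necessary. The function name below is mine.

import AVFoundation

// Sketch: prefer an audio-only HLS variant, if the playlist declares one.
// Assumes iOS 15+ for AVAssetVariant loading.
func playPreferringAudioOnlyVariant(url: URL) async throws -> AVPlayer {
    let asset = AVURLAsset(url: url)
    let variants = try await asset.load(.variants)

    // A variant with no video attributes is an audio-only rendition.
    let audioOnlyBitRates = variants
        .filter { $0.videoAttributes == nil }
        .compactMap(\.averageBitRate)

    let item = AVPlayerItem(asset: asset)
    if let lowest = audioOnlyBitRates.min() {
        // Cap the peak bit rate just above the audio-only variant so the
        // heavier video variants are unattractive to the ABR logic.
        item.preferredPeakBitRate = lowest + 1
    }

    let player = AVPlayer(playerItem: item)
    player.allowsExternalPlayback = true   // keeps AirPlay available
    player.play()
    return player
}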
Replies: 1 · Boosts: 1 · Views: 489 · Dec ’24
Can you add pictures with the camera using the new photos picker instead of the old UI View Controller?
I'm a new app developer and am trying to add a button that adds pictures from the photo library AND the camera. I added the first function (adding pictures from the photo library) using the new-ish PhotosPicker, but I can't find a way to do the same thing for the camera. Should I just tough it out and use the UIViewController-based approach that I've seen in all of the YouTube tutorials I've come across? I also want the user to be able to crop the picture in the app after they take it. Thanks in advance.
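For what it's worth, PhotosPicker only reads from the existing library; it has no camera mode, so camera capture still goes through UIKit. A common pattern is to keep PhotosPicker for the library and wrap UIImagePickerController in a small UIViewControllerRepresentable for the camera. A minimal sketch (type and property names are mine; Info.plist needs NSCameraUsageDescription):

import SwiftUI
import UIKit

// Minimal camera wrapper for SwiftUI. allowsEditing gives the built-in
// square crop; a richer crop UI would need a separate editor screen.
struct CameraPicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    @Environment(\.presentationMode) private var presentationMode

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.allowsEditing = true
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        let parent: CameraPicker
        init(_ parent: CameraPicker) { self.parent = parent }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            parent.image = (info[.editedImage] ?? info[.originalImage]) as? UIImage
            parent.presentationMode.wrappedValue.dismiss()
        }

        func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}

Presenting it is then just .fullScreenCover(isPresented: $showCamera) { CameraPicker(image: $capturedImage) }.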
Replies: 1 · Boosts: 0 · Views: 629 · Dec ’24
MusicKit media player missing output device selection
Hi all, I am working on a DJ playout app (macOS). The app has a few AVAudioPlayerNodes combined with the ApplicationMusicPlayer from MusicKit. I can route the output of the AVAudioPlayerNodes to a hardware device so that the audio files are directed to their own dedicated output on my Mac. The ApplicationMusicPlayer, however, follows the default output, and this is pretty annoying. Has anyone found a solution to chain the ApplicationMusicPlayer and get it set to an output device? Thanks, Pancras
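For readers hitting the first half of this setup, the per-engine routing described above is typically done by pinning the engine's output unit to a device ID; a rough macOS sketch follows (the deviceID lookup is assumed to come from an AudioObjectGetPropertyData query and is not shown). ApplicationMusicPlayer does not expose an equivalent hook, which is exactly the gap being asked about.

import AVFoundation
import AudioToolbox
import CoreAudio

// Sketch: pin an AVAudioEngine's output to a specific macOS audio device.
func route(engine: AVAudioEngine, to deviceID: AudioDeviceID) {
    guard let outputUnit = engine.outputNode.audioUnit else { return }
    var device = deviceID
    AudioUnitSetProperty(outputUnit,
                         kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global,
                         0,
                         &device,
                         UInt32(MemoryLayout<AudioDeviceID>.size))
}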
Replies: 2 · Boosts: 0 · Views: 568 · Dec ’24
Why AVAssetWriter adds PTS drift when writing fMP4
Hi, I'm working on a project that requires video frame PTS values to be consistent between the original video and a transcoded one. It works fairly well with regular MP4 output; however, if I set preferredOutputSegmentInterval to generate fMP4 output, even with initialSegmentStartTime specified as 0, a one-frame PTS offset is added to all frames. For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and then use

ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8

to display the PTS of the output, it doesn't start from 0 but carries a one-frame offset. Opening the output with MP4Box also shows that the first frame's DTS and CTS do not start from 0. However, if I use AVAssetReader to read the same output video and get the PTS of the first frame, it returns 0, so I can't use it to calculate the PTS difference between the two videos either. Can I get some help understanding why the fMP4 PTS seen by AVAssetWriter/AVAssetReader differs from what tools like ffprobe report?
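For reference, the writer configuration being described (per the linked WWDC 2020 session) looks roughly like the sketch below; the segment interval, delegate, and input settings here are assumptions, and the sample-appending loop is elided:

import AVFoundation
import UniformTypeIdentifiers

// Sketch of an fMP4/HLS segmenting writer. Segment data is delivered to
// the AVAssetWriterDelegate rather than written to a file URL.
func makeSegmentedWriter(delegate: AVAssetWriterDelegate) -> AVAssetWriter {
    let writer = AVAssetWriter(contentType: .mpeg4Movie)
    writer.outputFileTypeProfile = .mpeg4AppleHLS
    writer.preferredOutputSegmentInterval = CMTime(seconds: 6, preferredTimescale: 1)
    writer.initialSegmentStartTime = .zero      // segments expected to start at PTS 0
    writer.delegate = delegate

    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: nil) // passthrough
    videoInput.expectsMediaDataInRealTime = false
    writer.add(videoInput)
    return writer
}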
Replies: 0 · Boosts: 0 · Views: 593 · Dec ’24
Sound randomly
Hello all! I've been having this issue for a while on my iPhone 12 Pro. When listening to music or watching YouTube, TikTok, etc., the volume will randomly drop: the audio slider stays at max, but the sound gets very quiet. I've followed other instructions, such as turning off audio awareness and other settings, but nothing seems to be working. It happens on phone calls too. Has anyone else had this issue and managed to fix it?
Replies: 1 · Boosts: 0 · Views: 422 · Dec ’24
Unknown error -12881 when using AVAssetResourceLoader
We are focusing on changing the cookie every 120 seconds while playing. With AVPlayer we can't modify the cookie after initialization, so we followed the approach of using a resource loader delegate to pass the cookie as a header value. What I notice is that the playlist file (.m3u8) gets downloaded correctly, and some media chunks also get downloaded. I know that the .ts file is downloaded because I can see the GET request completing on the web server with status 200. I also set a breakpoint at the following line:

loadingRequest.dataRequest?.respond(with: data)

Immediately afterwards I get an error from the AVPlayer status: "The operation could not be completed. An unknown error occurred (-12881)" from Core Media. I need confirmation on why I am unable to load HLS using the resource loader, and whether it is possible to update the cookie value while playback continues on AVPlayer.

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.

    let urlString = "localhost://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.ism/.m3u8"
    guard let url = URL(string: urlString) else {
        print("Invalid URL")
        return
    }

    // Create cookie to prepare for the player asset
    let cookie = HTTPCookie(properties: [
        .name: "dazn-token",
        .value: "cookie value",
        .domain: url.host() ?? "",
        .path: "/",
        .discard: true
    ])

    // Create cookie key to set on AVURLAsset
    let options = [AVURLAssetHTTPCookiesKey: [cookie]]
    let asset = AVURLAsset(url: url, options: options)

    proxy = ReverseProxyResourceLoader()
    proxy?.cookie = "exampleCookie"

    // Set resource loader delegate to monitor the chunks
    asset.resourceLoader.setDelegate(proxy, queue: DispatchQueue.global())

    // Load asset keys asynchronously (e.g., "playable")
    let keys = ["playable"]

    // Initialize the AVPlayer with the asset
    let playerItem = AVPlayerItem(asset: asset)
    self.player = AVPlayer(playerItem: playerItem)

    playerItem.addObserver(self, forKeyPath: "status", options: [.new, .initial], context: nil)
    // Observe 'error' property (if needed)
    playerItem.addObserver(self, forKeyPath: "error", options: [.new], context: nil)

    let contentKeySessionDelegate = ContentKeyDelegate()
    // Initialize AVContentKeySession
    let contentKeySession = AVContentKeySession(keySystem: .clearKey)
    self.contentKeySession = contentKeySession
    contentKeySession.setDelegate(contentKeySessionDelegate, queue: DispatchQueue.main)

    // Associate the asset with the content key session
    contentKeySession.addContentKeyRecipient(asset)

    // Create a layer for the AVPlayer and add it to the view
    playerLayer = AVPlayerLayer(player: player)
    playerLayer?.frame = view.bounds
    playerLayer?.videoGravity = .resizeAspect
    if let playerLayer = playerLayer {
        view.layer.addSublayer(playerLayer)
    }

    NotificationCenter.default.addObserver(
        self,
        selector: #selector(playerDidFinishPlaying),
        name: .AVPlayerItemDidPlayToEndTime,
        object: player?.currentItem
    )

    // Start playback
    player?.play()
}

// Update the cookie whenever needed
func updateCookie() {
    proxy?.cookie = "update exampleCookie"
}

@objc private func playerDidFinishPlaying(notification: Notification) {
    print("Playback finished!")
    // Optionally, handle end-of-playback actions here
}

//
//  ReverseProxyResourceLoader.swift
//  HLSDemo
//
//  Created by Gajje.Venkatarao on 12/12/24.
//

import Foundation
import AVKit
import AVFoundation

class ReverseProxyResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
    var cookie = ""

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        resourceLoader.preloadsEligibleContentKeys = true

        guard let interceptedURL = loadingRequest.request.url else {
            loadingRequest.finishLoading(with: NSError(domain: "ReverseProxy", code: -1, userInfo: [NSLocalizedDescriptionKey: "Invalid URL"]))
            return false
        }

        if interceptedURL.scheme == "skd" {
            print("Token updated Cookie:", interceptedURL)
            return false
        }

        var components = URLComponents(url: interceptedURL, resolvingAgainstBaseURL: false)
        components?.scheme = "https" // Replace with the original scheme

        guard let originalURL = components?.url else {
            loadingRequest.finishLoading(with: NSError(domain: "ReverseProxy", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to map URL"]))
            return false
        }

        var request = URLRequest(url: originalURL)
        request.httpMethod = "GET"

        if let storedCookie = HTTPCookie(properties: [
            .name: "dazn-token",
            .value: cookie,
            .domain: originalURL.host ?? "",
            .path: "/",
            .discard: true
        ]) {
            HTTPCookieStorage.shared.setCookie(storedCookie)
        }

        let headers = loadingRequest.request.allHTTPHeaderFields ?? [:]
        for (key, value) in headers {
            request.addValue(value, forHTTPHeaderField: key)
        }
        request.addValue(cookie, forHTTPHeaderField: "Cookie")

        URLSession.shared.configuration.httpShouldSetCookies = true
        request.httpShouldHandleCookies = true

        let task = URLSession.shared.dataTask(with: originalURL) { data, response, error in
            if let error = error {
                print("Error Received:", error)
                loadingRequest.finishLoading(with: error)
                return
            }
            print(originalURL)
            guard let data = data, let url = response?.url else {
                loadingRequest.finishLoading(with: NSError(domain: "ReverseProxy", code: -1, userInfo: [NSLocalizedDescriptionKey: "No data received"]))
                return
            }
            loadingRequest.dataRequest?.respond(with: data)
            loadingRequest.finishLoading()
        }
        task.resume()

        return true
    }
}

Example project
Replies: 0 · Boosts: 0 · Views: 603 · Dec ’24
Custom Share Destination stopped working in FCP X 11
We integrate with FCP X using a custom share destination and the AppleScript interface. This has been working fine until the recent version 11 update of FCP X. With this update we are no longer receiving the open event when the export has completed. We get the Apple event to create the asset, and the file is exported to the location we set in the response; there is just no open event after that. I suspect something is wrong with our scripting support, but I have no idea what or how to troubleshoot. This works fine in 10.8.1 and below.
Replies: 0 · Boosts: 0 · Views: 378 · Dec ’24
Is Apple Log open to developers for 3rd party apps?
Hello! I am building a video camera app and trying to implement Apple Log for the iPhone 15 Pro and 16 Pro. I am not seeing a lot of documentation on it, and I notice the number of apps on the App Store that use it is rather limited, fewer than 5 to be exact. Is Apple Log recording a feature that is accessible to developers? Here is a link to the documentation: https://developer.apple.com/documentation/avfoundation/avcapturecolorspace/applelog
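For what it's worth, the color space in that documentation link is surfaced through AVCaptureDevice, so a capability check from a third-party app looks roughly like this sketch (the function name is mine; whether a given device and format actually report .appleLog is the open question, and the capture session must not be left to reconfigure the color space automatically):

import AVFoundation

// Sketch: find a capture format that advertises Apple Log and opt into it.
func enableAppleLogIfAvailable(on device: AVCaptureDevice) throws {
    guard let logFormat = device.formats.first(where: {
        $0.supportedColorSpaces.contains(.appleLog)
    }) else {
        print("No format on this device advertises Apple Log")
        return
    }

    try device.lockForConfiguration()
    device.activeFormat = logFormat
    device.activeColorSpace = .appleLog
    device.unlockForConfiguration()
}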
Replies: 1 · Boosts: 0 · Views: 496 · Dec ’24
Inquiry about Potential Core Audio Improvements
Hi everyone, I wanted to bring up a question about Core Audio and its potential for future updates or improvements, specifically regarding latency optimization. As someone who relies on Core Audio for real-time audio processing, any enhancements in this area would be incredibly beneficial for professionals in the industry. Does anyone know if Apple has shared any plans or updates regarding Core Audio’s performance, particularly for low-latency applications? I’d appreciate any insights or advice from the community! Thanks so much! Best, Michael
Replies: 1 · Boosts: 0 · Views: 500 · Dec ’24
resources for image cleanup
What is the purpose of AdjustmentsSecondary.data included in the PHAssetResource set for a cleaned-up image? When using creationRequest.addResource, what should be set for the PHAssetResourceType? If I set the PHAssetResourceType as follows to create an asset, it appears correctly in the camera roll; however, when attempting to edit the image in the Photos app, the app crashes:

IMG_5332.HEIC → .photo
FullSizeRender.HEIC → .fullSizePhoto
Adjustments.plist → .adjustmentData
AdjustmentsSecondary.data → .adjustmentData
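To make the mapping above concrete, the asset creation being described looks roughly like this sketch; the file URLs and function name are placeholders, and the double .adjustmentData mapping is reproduced from the question rather than endorsed (it is presumably what triggers the crash on edit):

import Photos

// Sketch: create an asset from exported edit resources (placeholder URLs).
func saveCleanedUpImage(original: URL, fullSize: URL, adjustments: URL, adjustmentsSecondary: URL) {
    PHPhotoLibrary.shared().performChanges {
        let creationRequest = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()

        creationRequest.addResource(with: .photo, fileURL: original, options: options)
        creationRequest.addResource(with: .fullSizePhoto, fileURL: fullSize, options: options)
        creationRequest.addResource(with: .adjustmentData, fileURL: adjustments, options: options)
        // The question maps AdjustmentsSecondary.data to .adjustmentData as well.
        creationRequest.addResource(with: .adjustmentData, fileURL: adjustmentsSecondary, options: options)
    } completionHandler: { success, error in
        print("Saved:", success, error ?? "no error")
    }
}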
Replies: 0 · Boosts: 0 · Views: 461 · Dec ’24
[VisionOS Audio] AVAudioPlayerNode occasionally produces loud popping/distortion when playing PCM data
I'm experiencing audio issues while developing for visionOS when playing PCM data through AVAudioPlayerNode.

Issue description:
- Occasionally, the speaker produces loud popping sounds or distorted noise
- This occurs during PCM audio playback using AVAudioPlayerNode
- The issue is intermittent and doesn't happen every time

Technical details:
- Platform: visionOS
- Device: Vision Pro / Simulator
- Audio framework: AVFoundation
- Audio node: AVAudioPlayerNode
- Audio format: PCM

I would appreciate any insights on:
- Common causes of audio distortion with AVAudioPlayerNode
- Recommended best practices for handling PCM playback on visionOS
- Potential configuration issues that might cause this behavior

Has anyone encountered similar issues or found solutions? Any guidance would be greatly helpful. Thank you in advance!
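As a point of reference for the setup described above, a minimal PCM scheduling path is sketched below; two common sources of pops with AVAudioPlayerNode are scheduling buffers whose format doesn't match the connection format and buffers with abrupt amplitude edges, so the sketch matches the output format and ramps the signal (all names and values are assumptions):

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()

// Sketch: schedule one PCM buffer whose format matches the connection.
func playTestTone(frequency: Double = 440, seconds: Double = 1) throws {
    let format = engine.outputNode.outputFormat(forBus: 0)  // device rate/channels
    engine.attach(playerNode)
    engine.connect(playerNode, to: engine.mainMixerNode, format: format)
    try engine.start()

    let frameCount = AVAudioFrameCount(format.sampleRate * seconds)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return }
    buffer.frameLength = frameCount

    // Ramp in/out so the buffer doesn't start or end on a discontinuity.
    for channel in 0..<Int(format.channelCount) {
        guard let samples = buffer.floatChannelData?[channel] else { continue }
        for frame in 0..<Int(frameCount) {
            let t = Double(frame) / format.sampleRate
            let ramp = min(1.0, Double(min(frame, Int(frameCount) - frame)) / 512.0)
            samples[frame] = Float(sin(2 * .pi * frequency * t) * 0.2 * ramp)
        }
    }

    playerNode.scheduleBuffer(buffer, completionHandler: nil)
    playerNode.play()
}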
Replies: 2 · Boosts: 1 · Views: 642 · Dec ’24
photo album widget won’t work
For a while I had one photo widget (no special app, just the standard Apple one) set to shuffle through an album of pics of my bf, with no problems at all. A few weeks later I added one to shuffle through an album of pics of my cat. That one worked fine, but it made the one of my bf stop working: it just showed a blank white widget, no error message or anything. So I removed the one of my cat hoping the one of my bf would go back to working, and it didn't. Otherwise I only have the widgets for Find My, my bank, and then apps on my home screen.
Replies: 1 · Boosts: 0 · Views: 656 · Dec ’24
App recedes to the background, audioEngine.start() fails
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!

func startAnalyzing() {
    inputNode = audioEngine.inputNode

    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("Sample rates differ; building a new format.")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print("Failed to start audio engine: \(error)")
    }
}

I send the app to the background and then call startAnalyzing(), which reports an error even though background recording permissions are configured. Error:

[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187]
Audio engine couldn't start.

Is starting the engine from the background not allowed?
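One detail that may help frame this: 561145187 decodes to the four-character code '!rec', i.e. the audio session was not permitted to start recording. A typical checklist before audioEngine.start() for background capture is sketched below, under the assumption of a UIKit app whose UIBackgroundModes includes audio and whose microphone usage description is set; whether iOS allows recording to begin while the app is already backgrounded is a separate policy question.

import AVFoundation

// Sketch: session configuration that normally precedes audioEngine.start()
// when input is expected to keep running in the background.
func configureRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .measurement,
                            options: [.mixWithOthers])
    try session.setActive(true)
}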
Replies: 1 · Boosts: 0 · Views: 515 · Dec ’24
Enabling MIDINetworkSession in a catalyst app
Hi, I am trying to enable the default MIDINetworkSession in a Catalyst app on macOS like this:

MIDINetworkSession.default().isEnabled = true
MIDINetworkSession.default().connectionPolicy = .anyone

In the App Sandbox I have both incoming and outgoing network connections enabled, and I also added the NSLocalNetworkUsageDescription key to the Info.plist. Bonjour services are also added to the Info.plist (NSBonjourServices: _apple-midi._udp.). Nevertheless, the session stays disabled. Running the same code works just fine on iOS. Is there any special setup I need to make on macOS to enable the MIDINetworkSession? Thanks!
Replies: 0 · Boosts: 0 · Views: 450 · Dec ’24
Music Kit initialisation, Uncaught TypeError: Cannot read properties of undefined (reading 'node')
I'm trying to load MusicKit on the server with SolidJS. I can confirm that my implementation has been sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time?). My question is: what is wrong with my implementation? Removing the async keyword ensures it loads every time, but playing and queuing music no longer works. I'm currently assuming this is an SSR issue, but the docs haven't explicitly specified this isn't possible. I have the following boilerplate:

export default createHandler(() => (
  <StartServer
    document={({ assets, children, scripts }) => {
      return (
        <html lang="en">
          <head>
            <meta name="apple-music-developer-token" content={authResult.token} />
            <meta name="apple-music-app-name" content="app name" />
            <meta name="apple-music-app-build" content="1978.4.1" />
            {assets}
            <script src="https://js-cdn.music.apple.com/musickit/v3/musickit.js" async />
          </head>
          <body>
            <div id="app">{children}</div>
            {scripts}
          </body>
        </html>
      )
    }}
  />
))

When I first load my app, I'll encounter:

musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
    at musickit.js:13:10194
    at musickit.js:13:140
    at musickit.js:13:209

The intermittence signals an issue relating to the async keyword. An expansion on this issue can be found here.
Replies: 0 · Boosts: 0 · Views: 552 · Dec ’24
AVAssetWriter append audio/video streams concurrently in Real time recording setup
I see in most of the older sample code from Apple that, when using AVAssetWriter to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronized using an NSLock or all the samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation stating that calls to assetWriterInput.append(sampleBuffer) for different media types, such as audio and video, should not be made concurrently. Is it not valid for these methods to be executed in parallel? For instance:

videoSamplesAssetWriterInput.append(videoSampleBuffer)  // from DispatchQueue 1
audioSamplesAssetWriterInput.append(audioSampleBuffer)  // from DispatchQueue 2
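For comparison, the pull-model pattern in Apple's samples gives each input its own serial queue via requestMediaDataWhenReady(on:queue:), so appends to a given input are serialized without an explicit lock; whether appends to different inputs may overlap in time is still the question above. A rough sketch, assuming the writer has already had startWriting() and startSession(atSourceTime:) called, with hypothetical nextVideoSample/nextAudioSample closures standing in for the capture pipeline:

import AVFoundation

// Sketch: one serial queue per AVAssetWriterInput.
func startPulling(videoInput: AVAssetWriterInput,
                  audioInput: AVAssetWriterInput,
                  nextVideoSample: @escaping () -> CMSampleBuffer?,
                  nextAudioSample: @escaping () -> CMSampleBuffer?) {
    let videoQueue = DispatchQueue(label: "writer.video")
    let audioQueue = DispatchQueue(label: "writer.audio")

    videoInput.requestMediaDataWhenReady(on: videoQueue) {
        while videoInput.isReadyForMoreMediaData {
            guard let sample = nextVideoSample() else { videoInput.markAsFinished(); return }
            videoInput.append(sample)
        }
    }

    audioInput.requestMediaDataWhenReady(on: audioQueue) {
        while audioInput.isReadyForMoreMediaData {
            guard let sample = nextAudioSample() else { audioInput.markAsFinished(); return }
            audioInput.append(sample)
        }
    }
}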
Replies: 1 · Boosts: 0 · Views: 636 · Dec ’24
[Request] Support for Spotify-like Audio Analysis API for Apple Music.
Hi, I have been working on a project that enables users to listen to their favorite music through a streaming service, which so far has been Spotify. The app has a programmable 3D/2D interface with the ability to connect to devices in your home and have them react to music. As of September 2024, Spotify decommissioned their Audio Analysis API. I have seen other posts mention playing Apple Music through AVFoundation, which would break DRM and so isn't supported. However, the Spotify Audio Analysis API did not allow for a full frequency reconstruction: it is entirely temporal data on beats, kicks, loudness, and timbre changes, which are themselves operators on the spectral data from the FFT. It would be very useful for the developer community to get this ability, and it would probably make Apple Music more popular among developers and those who use their apps. Would love to hear your thoughts about this, and Happy New Year!
Replies: 0 · Boosts: 2 · Views: 624 · Dec ’24
Slow performance decoding large images with Core Image.
I'm building a camera app that does some post-processing after the photo has been taken. With 12MP images the processing is pretty fast, but larger 24MP images are very slow. I created a very simple example to demonstrate the issue: loading an image and then rendering it to JPEG data.

let context = CIContext()

let imageUrl = Bundle.main.url(forResource: "12mp", withExtension: "jpg")!
let data = try! Data(contentsOf: imageUrl)
let ciImage = CIImage(data: data)!

let start = CFAbsoluteTimeGetCurrent()
let jpegData = context.jpegRepresentation(of: ciImage, colorSpace: context.workingColorSpace!)
print(jpegData?.count)
print("Resize Completed: " + String(CFAbsoluteTimeGetCurrent() - start))

Running this code on an iPhone 16 Pro with different images produces these benchmarks:

12MP → 0.03s
24MP → 1.22s
48MP → 2.98s

I understand that processing time will increase with resolution, but it doesn't seem linear. I have tried setting different CIContext options, such as .useSoftwareRenderer: false, but it has made no difference. From profiling the process, it looks like the JPEG decoding is the bottleneck; this profile is for a 48MP image. Is there any way this can be improved?
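One workaround sometimes tried when decode time dominates, offered here as an assumption rather than a known fix for this case: decode (and optionally downsample) the JPEG with ImageIO first, then wrap the resulting CGImage in a CIImage, so Core Image renders from already-decoded pixels instead of re-decoding the full-resolution JPEG. Function and parameter names are mine:

import CoreImage
import ImageIO

// Sketch: eager ImageIO decode, optionally capped to maxPixelSize,
// then handed to Core Image as a CGImage.
func makeCIImage(from data: Data, maxPixelSize: Int? = nil) -> CIImage? {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil) else { return nil }

    var options: [CFString: Any] = [
        kCGImageSourceShouldCacheImmediately: true   // decode now, not lazily at render time
    ]

    let cgImage: CGImage?
    if let maxPixelSize {
        options[kCGImageSourceCreateThumbnailFromImageAlways] = true
        options[kCGImageSourceCreateThumbnailWithTransform] = true
        options[kCGImageSourceThumbnailMaxPixelSize] = maxPixelSize
        cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
    } else {
        cgImage = CGImageSourceCreateImageAtIndex(source, 0, options as CFDictionary)
    }
    return cgImage.map { CIImage(cgImage: $0) }
}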
Replies: 0 · Boosts: 0 · Views: 614 · Dec ’24