How do we export a spatial video at a higher resolution than 2200 x 2200?
For example, I want to export a video that I edited on a spatial timeline at 4096 x 4096 resolution, with spatial metadata, as an MV-HEVC file.
I looked under both Final Cut and Compressor export settings, but couldn't find a way to do this.
Thanks for your help!
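For context, outside of Final Cut and Compressor, MV-HEVC can be written programmatically with AVFoundation and VideoToolbox. Below is a minimal sketch, assuming the public multiview compression keys and a two-layer left/right stereo layout; the 4096 x 4096 size and the function name are illustrative, not a confirmed Final Cut workflow:

```swift
import AVFoundation
import VideoToolbox

// Sketch only: a 4096 x 4096 MV-HEVC writer input using the public
// VideoToolbox multiview keys. The two-layer left/right layout and the
// function name are assumptions; frames would be appended one group per
// eye pair via AVAssetWriterInputTaggedPixelBufferGroupAdaptor.
func makeMVHEVCVideoInput() -> AVAssetWriterInput {
    let compressionProperties: [String: Any] = [
        // Two HEVC layers, one per eye, mapped to view IDs 0 (left) and 1 (right).
        kVTCompressionPropertyKey_MVHEVCVideoLayerIDs as String: [0, 1],
        kVTCompressionPropertyKey_MVHEVCViewIDs as String: [0, 1],
        kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs as String: [0, 1],
        kVTCompressionPropertyKey_HasLeftStereoEyeView as String: true,
        kVTCompressionPropertyKey_HasRightStereoEyeView as String: true,
    ]
    let outputSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: 4096,
        AVVideoHeightKey: 4096,
        AVVideoCompressionPropertiesKey: compressionProperties,
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)
    input.expectsMediaDataInRealTime = false
    return input
}
```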
Can anyone explain how AVAssetExportSession works in iOS 18 and earlier versions?
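For context, iOS 18 replaced the completion-handler flow with an async throwing API. A minimal sketch of both paths; the preset and `.mp4` file type are placeholder choices:

```swift
import AVFoundation

// Sketch of both export flows; preset and file type are placeholders.
func exportAsset(_ asset: AVAsset, to url: URL) async throws {
    guard let session = AVAssetExportSession(
        asset: asset,
        presetName: AVAssetExportPresetHighestQuality
    ) else { return }

    if #available(iOS 18.0, *) {
        // iOS 18: async API that throws on failure; no status polling needed.
        try await session.export(to: url, as: .mp4)
    } else {
        // iOS 17 and earlier: configure the session, kick off the export with a
        // completion handler, then inspect status/error afterwards.
        session.outputURL = url
        session.outputFileType = .mp4
        await withCheckedContinuation { continuation in
            session.exportAsynchronously {
                continuation.resume()
            }
        }
        if session.status == .failed, let error = session.error {
            throw error
        }
    }
}
```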
In most of Apple's older sample code, when AVAssetWriter is used to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronised with an NSLock or funneled through a single dispatch queue, preventing concurrent writes. However, I can't find any documentation stating that calls to assetWriterInput.append(sampleBuffer) for different media types, such as audio and video, must not be made concurrently. Is it invalid to execute these calls in parallel, for instance:
`videoSamplesAssetWriterInput.append(videoSampleBuffer)` from DispatchQueue 1
`audioSamplesAssetWriterInput.append(audioSampleBuffer)` from DispatchQueue 2
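The pattern in Apple's samples is to funnel every append through one serial queue, which sidesteps the question of whether concurrent appends are safe. A minimal sketch of that pattern; the class name and queue label are illustrative:

```swift
import AVFoundation

// Sketch of the serial-queue pattern; class name and queue label are illustrative.
final class SampleWriterCoordinator {
    private let writerQueue = DispatchQueue(label: "asset-writer-appends")
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput

    init(videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.videoInput = videoInput
        self.audioInput = audioInput
    }

    // Both capture callbacks route here, so appends to the two inputs
    // can never run concurrently.
    func append(video sampleBuffer: CMSampleBuffer) {
        writerQueue.async { [videoInput] in
            guard videoInput.isReadyForMoreMediaData else { return }
            videoInput.append(sampleBuffer)
        }
    }

    func append(audio sampleBuffer: CMSampleBuffer) {
        writerQueue.async { [audioInput] in
            guard audioInput.isReadyForMoreMediaData else { return }
            audioInput.append(sampleBuffer)
        }
    }
}
```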
Hello!
I am building a video camera app and am trying to implement Apple Log recording for the iPhone 15 Pro and 16 Pro.
I am not seeing much documentation on it, and the number of apps on the App Store that use it is rather limited: fewer than 5, to be exact.
Is Apple Log recording a feature that is accessible to developers?
Here is a link to documentation: https://developer.apple.com/documentation/avfoundation/avcapturecolorspace/applelog
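For reference, enabling Apple Log programmatically comes down to selecting a device format whose supportedColorSpaces includes .appleLog and then setting activeColorSpace. A minimal sketch, with error handling and session wiring omitted; it assumes the capture session's automatic wide-color configuration has been disabled so the session doesn't override the choice:

```swift
import AVFoundation

// Sketch: select a format that advertises Apple Log, then opt into it.
// Assumes session.automaticallyConfiguresCaptureDeviceForWideColor = false,
// so the session won't reset the color space.
func enableAppleLog(on device: AVCaptureDevice) throws {
    guard let logFormat = device.formats.first(where: {
        $0.supportedColorSpaces.contains(.appleLog)
    }) else {
        // No Apple Log support on this device/format combination.
        return
    }
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    device.activeFormat = logFormat
    device.activeColorSpace = .appleLog
}
```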
We integrate with FCP X using a custom share destination and the AppleScript interface. This was working fine until the recent version 11 update of FCP X.
Since this update we no longer receive the open event when the export has completed. We get the Apple event to create the asset, and the file is exported to the location we set in the response; there is just no open event after that. I suspect something is wrong with our scripting support, but I have no idea what, or how to troubleshoot it.
This works fine in 10.8.1 and earlier.
Hi, I'm working on a project that requires video frame PTS to be consistent between the original video and a transcoded one. It works fairly well for regular MP4, but if I set preferredOutputSegmentInterval to generate fMP4 output, even though I specify initialSegmentStartTime as 0, a one-frame PTS offset is added to all frames.
For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and run `ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8` to display the PTS of the output, it doesn't start from 0 but carries a one-frame offset. Opening the output with MP4Box likewise shows that the first frame's DTS and CTS do not start at 0.
However, if I use AVAssetReader to read the same output video, the PTS of the first frame comes back as 0, so I can't use it to calculate the PTS difference between the two videos either.
Can I get some help understanding why AVAssetWriter/AVAssetReader report different PTS values for fMP4 output than tools like ffprobe and MP4Box?
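One plausible explanation, not confirmed by documentation: ffprobe and MP4Box show the raw packet timestamps stored in the fragments, while AVAssetReader applies the track's edit list, which compensates for the B-frame PTS/DTS shift, so the first frame reads back as 0. For reference, the writer configuration from the linked WWDC20 sample looks roughly like this; the segment delegate and sample-appending loop are omitted:

```swift
import AVFoundation
import UniformTypeIdentifiers

// Sketch of the fMP4 configuration from the linked WWDC20 sample; the
// AVAssetWriterDelegate that receives segment data and the sample-appending
// loop are omitted.
let writer = AVAssetWriter(contentType: .mpeg4Movie)
writer.outputFileTypeProfile = .mpeg4AppleHLS
writer.preferredOutputSegmentInterval = CMTime(seconds: 6, preferredTimescale: 1)
writer.initialSegmentStartTime = .zero
// writer.delegate = segmentDelegate  // hands back each segment's data
```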
After the iOS 18.2 update, videos are not playing in Netflix, Amazon Prime, or YouTube.