Sorry about the delay, I didn't see your message on the day it was posted (that little bell icon doesn't do much).
I figured out what I was doing wrong but hadn't got around to replying to my own post.
I was using this initializer:
@nonobjc public convenience init(formatDescription: CMFormatDescription, maxFrameDuration: CMTime, minFrameDuration: CMTime, validFrameDurations: [CMTime]?)
In previous code, I'd created a format like this (pseudocode):
CMIOExtensionStreamFormat(formatDescription: description, maxFrameDuration: 30 fps, minFrameDuration: 30 fps, validFrameDurations: nil)
which seemed to work fine - my virtual camera had a single format with 30fps. When I made a generator with multiple formats, I tried to add another format with a different frame rate like this:
CMIOExtensionStreamFormat(formatDescription: description, maxFrameDuration: 29.97 fps, minFrameDuration: 29.97 fps, validFrameDurations: nil)
and ended up with an AVCaptureDevice that offered two formats, both with a min and max frame duration of 30 fps. That was the wrong approach.
I only need a new CMIOExtensionStreamFormat for a different output stream size (all my streams use the same pixel format). Each size gets one format with multiple frame rates - pseudocode:
CMIOExtensionStreamFormat(formatDescription: description, maxFrameDuration: 29.97 fps, minFrameDuration: 60 fps, validFrameDurations: [29.97fps, 30fps, 59.94fps, 60fps])
That works.
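In Swift it looks roughly like this (simplified sketch - makeStreamFormat is a placeholder name and 32BGRA stands in for whatever pixel format your streams actually use):

import CoreMedia
import CoreMediaIO
import CoreVideo

// One CMIOExtensionStreamFormat per output size, with every supported
// frame rate listed in validFrameDurations.
func makeStreamFormat(width: Int32, height: Int32) -> CMIOExtensionStreamFormat? {
    var description: CMFormatDescription?
    guard CMVideoFormatDescriptionCreate(
            allocator: kCFAllocatorDefault,
            codecType: kCVPixelFormatType_32BGRA,   // placeholder pixel format
            width: width,
            height: height,
            extensions: nil,
            formatDescriptionOut: &description) == noErr,
          let description = description
    else { return nil }

    // Frame *durations* are the reciprocals of frame *rates*, so the slowest
    // rate (29.97 fps) supplies maxFrameDuration and the fastest (60 fps)
    // supplies minFrameDuration.
    let validDurations: [CMTime] = [
        CMTime(value: 1001, timescale: 30000),  // 29.97 fps
        CMTime(value: 1,    timescale: 30),     // 30 fps
        CMTime(value: 1001, timescale: 60000),  // 59.94 fps
        CMTime(value: 1,    timescale: 60),     // 60 fps
    ]

    return CMIOExtensionStreamFormat(
        formatDescription: description,
        maxFrameDuration: CMTime(value: 1001, timescale: 30000), // 29.97 fps (slowest)
        minFrameDuration: CMTime(value: 1, timescale: 60),       // 60 fps (fastest)
        validFrameDurations: validDurations)
}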
Apple's built-in webcam on my laptop can provide arbitrary frame rates between 1 and 30 fps, while most UVC cameras only support a limited, fixed set of frame rates.
With that mystery solved, my remaining question is how best to deliver an actual frame rate that is as close as possible to the promised value. Currently I do this by counting frames since the start of generation and setting a one-shot timer to fire at the anticipated time of the next frame (startTime + frameCount * frameDuration), rather than using a repeating timer.
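Roughly, the pacing looks like this (simplified sketch - FramePacer and emitFrame() are placeholder names, the latter standing in for whatever actually delivers the pixel buffer). Each deadline is computed from the original start time, so rounding errors don't accumulate the way they can with a repeating timer:

import Foundation

final class FramePacer {
    private let queue = DispatchQueue(label: "frame-pacer")
    private let frameDuration: TimeInterval      // e.g. 1.0 / 29.97
    private let emitFrame: () -> Void            // delivers one frame
    private var startTime: DispatchTime = .now()
    private var frameCount: UInt64 = 0
    private var timer: DispatchSourceTimer?

    init(frameDuration: TimeInterval, emitFrame: @escaping () -> Void) {
        self.frameDuration = frameDuration
        self.emitFrame = emitFrame
    }

    func start() {
        startTime = .now()
        frameCount = 0
        scheduleNext()
    }

    func stop() {
        timer?.cancel()
        timer = nil
    }

    private func scheduleNext() {
        frameCount += 1
        // Anticipated time of the next frame: startTime + frameCount * frameDuration.
        let deadline = startTime + Double(frameCount) * frameDuration
        let source = DispatchSource.makeTimerSource(queue: queue)
        source.schedule(deadline: deadline, leeway: .milliseconds(1))
        source.setEventHandler { [weak self] in
            self?.emitFrame()
            self?.scheduleNext()
        }
        source.resume()
        timer = source
    }
}

A reusable timer source rescheduled each frame would work just as well; the key point is that every deadline is anchored to startTime rather than to the previous firing.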