Adding both AVCaptureMovieFileOutput and AVCaptureVideoDataOutput is supported in AVCaptureSession as seen in documentation (copied snippet below) but then when AVCaptureDevice is configured with ProRes422 codec, it fails unless one of the two outputs is removed from the capture session. It is very much reproducible on iPhone 14 pro running iOS 26.0.
Prior to iOS 16, you can add an AVCaptureVideoDataOutput and an AVCaptureMovieFileOutput to the same session, but only one may have its connection active. If you attempt to enable both connections, the system chooses the movie file output as the active connection and disables the video data output’s connection. For apps that link against iOS 16 or later, this restriction no longer exists.
Selecting any option will automatically load the page
Post
Replies
Boosts
Views
Activity
I want to know under what conditions does -[AVAsynchronousVideoCompositionRequest sourceFramebytrackID] returns nil. I have a custom compositor and when seeking AVPlayer, I find the method sometimes returns nil, particularly when seek tolerance is set to zero. No issues are seen if I simply play the composition. Only seeking throws these errors and only some of the times.
I deleted Derived data of project and after that I can't remove any line of code in the project. The moment I delete any character in function, XCode 13 is duplicating that line as seen in the below image.
And I can't delete even a comment. The code doesn't even build now. What do I do?
A big chunk of commented code I tried to delete, it deletes but then shows that deleted code again upwards in another function.
I have an AVComposition playback via AVPlayer where AVComposition has multiple audio tracks with audioMix applied. My question is how is it possible to compute audio meter values for the audio playing back through AVPlayer? Using MTAudioProcessingTap it seems you can only get callback for one track at a time. But if that route has to be used, it's not clear how to get sample values of all the audio tracks at a given time in a single callback?
I have tried everything but it looks to be impossible to get MTKView to display full range of colors of HDR CIImage made from CVPixelBuffer (in 10bit YUV format). Only builtin layers such as AVCaptureVideoPreviewLayer, AVPlayerLayer, AVSampleBufferDisplayLayer are able to fully display HDR images on iOS. Is MTKView incapable of displaying full BT2020_HLG color range? Why does MTKView clip colors no matter even if I set pixel Color format to bgra10_xr or bgra10_xr_srgb?
convenience init(frame: CGRect, contentScale:CGFloat) {
self.init(frame: frame)
contentScaleFactor = contentScale
}
convenience init(frame: CGRect) {
let device = MetalCamera.metalDevice
self.init(frame: frame, device: device)
colorPixelFormat = .bgra10_xr
self.preferredFramesPerSecond = 30
}
override init(frame frameRect: CGRect, device: MTLDevice?) {
guard let device = device else {
fatalError("Can't use Metal")
}
guard let cmdQueue = device.makeCommandQueue(maxCommandBufferCount: 5) else {
fatalError("Can't make Command Queue")
}
commandQueue = cmdQueue
context = CIContext(mtlDevice: device, options: [CIContextOption.cacheIntermediates: false])
super.init(frame: frameRect, device: device)
self.framebufferOnly = false
self.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
}
And then rendering code:
override func draw(_ rect: CGRect) {
guard let image = self.image else {
return
}
let dRect = self.bounds
let drawImage: CIImage
let targetSize = dRect.size
let imageSize = image.extent.size
let scalingFactor = min(targetSize.width/imageSize.width, targetSize.height/imageSize.height)
let scalingTransform = CGAffineTransform(scaleX: scalingFactor, y: scalingFactor)
let translation:CGPoint = CGPoint(x: (targetSize.width - imageSize.width * scalingFactor)/2 , y: (targetSize.height - imageSize.height * scalingFactor)/2)
let translationTransform = CGAffineTransform(translationX: translation.x, y: translation.y)
let scalingTranslationTransform = scalingTransform.concatenating(translationTransform)
drawImage = image.transformed(by: scalingTranslationTransform)
let commandBuffer = commandQueue.makeCommandBufferWithUnretainedReferences()
guard let texture = self.currentDrawable?.texture else {
return
}
var colorSpace:CGColorSpace
if #available(iOS 14.0, *) {
colorSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
} else {
// Fallback on earlier versions
colorSpace = drawImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
}
NSLog("Image \(colorSpace.name), \(image.colorSpace?.name)")
context.render(drawImage, to: texture, commandBuffer: commandBuffer, bounds: dRect, colorSpace: colorSpace)
commandBuffer?.present(self.currentDrawable!, afterMinimumDuration: 1.0/Double(self.preferredFramesPerSecond))
commandBuffer?.commit()
}
I get the following error when configuring MCBrowserViewController to look for nearby peers. And this is despite appending the required info in info.plist namely,
[MCNearbyServiceBrowser] NSNetServiceBrowser did not search with error dict [{
NSNetServicesErrorCode = "-72008";
NSNetServicesErrorDomain = 10;
}].
NSLocalNetworkUsageDescription
<string>Need permission to discover and connect to My Service running on peer iOS device</string>
NSBonjourServices
<array>
<string>_my-server._tcp</string>
<string>_my-server._udp</string>
</array>
Here is my code:
let browser = MCBrowserViewController(serviceType: "my-server", session: session)
browser.delegate = self
browser.minimumNumberOfPeers = kMCSessionMinimumNumberOfPeers
browser.maximumNumberOfPeers = 1
self.present(browser, animated: true, completion: nil)
In the WWDC 2021 video 10047, it was mentioned to look for availability of Lossless CVPixelBuffer format and fallback to normal BGRA32 format if it is not available. But in the updated AVMultiCamPiP sample code, it first looks for Lossy format than the lossless. Why is it so and whats the exact difference it would make if we select lossy vs lossless?
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments but the program crashes. Here is my shader code which compiles successfully.
extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time) {
float2 coord1 = t1.coord();
float2 coord2 = t2.coord();
float4 innerRect = t2.extent();
float minX = innerRect.x + time*innerRect.z;
float minY = innerRect.y + time*innerRect.w;
float cropWidth = (1 - time) * innerRect.w;
float cropHeight = (1 - time) * innerRect.z;
float4 s1 = t1.sample(coord1);
float4 s2 = t2.sample(coord2);
if ( coord1.x > minX && coord1.x < minX + cropWidth && coord1.y > minY && coord1.y <= minY + cropHeight) {
return s1;
} else {
return s2;
}
}
And it crashes on initialization.
class CIWipeRenderer: CIFilter {
var backgroundImage:CIImage?
var foregroundImage:CIImage?
var inputTime: Float = 0.0
static var kernel:CIColorKernel = { () -> CIColorKernel in
let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
let data = try! Data(contentsOf: url)
return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) //Crashes here!!!!
}()
override var outputImage: CIImage? {
guard let backgroundImage = backgroundImage else {
return nil
}
guard let foregroundImage = foregroundImage else {
return nil
}
return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
}
}
It crashes in the try line with the following error:
Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError
If I replace the kernel code with the following, it works like a charm:
extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time)
{
return mix(s1, s2, time);
}
I am trying to develop tone curves filter using Metal or Core Image as I find CIToneCurve filter is having limitations (number of points are atmost 5, spline curve it is using is not documented, and sometimes output is a black image even with 4 points). Moreover it's not straightforward to have separate R,G,B curves independently. I decided to explore other libraries that implement tone curve and the only one that I know is GPUImage (few others borrow code from the same library). But the source code is too cryptic to understand and I have [doubts] about the manner in which it is generating look up texture (https://stackoverflow.com/questions/70516363/gpuimage-tone-curve-rgbcomposite-filter).
Can someone explain how to correctly implement R,G,B, and RGB composite curves filter like in Mac Photos App?
Is it possible to pass MTLTexture to Metal Core Image Kernel? How can Metal resources be shared with Core Image?
I have a subclass of UIScrollView called MyScrollView. There is a subview called contentViewinside MyScrollView. The width constraint is set to be the contentSize of MyScrollView.
private func setupSubviews() {
contentView = ContentView()
contentView.backgroundColor = UIColor.blue
contentView.translatesAutoresizingMaskIntoConstraints = false
contentView.isUserInteractionEnabled = true
self.addSubview(contentView)
contentView.leadingAnchor.constraint(equalTo: self.leadingAnchor).isActive = true
contentView.trailingAnchor.constraint(equalTo: self.trailingAnchor).isActive = true
contentView.topAnchor.constraint(equalTo: self.topAnchor).isActive = true
contentView.bottomAnchor.constraint(equalTo: self.bottomAnchor).isActive = true
// create contentView's Width and Height constraints
cvWidthConstraint = contentView.widthAnchor.constraint(equalToConstant: 0.0)
cvHeightConstraint = contentView.heightAnchor.constraint(equalToConstant: 0.0)
// activate them
cvWidthConstraint.isActive = true
cvHeightConstraint.isActive = true
cvWidthConstraint.constant = myWidthConstant //<--- problem here if myWidthConstant is very high, such as 512000
cvHeightConstraint.constant = frame.height
contentView.layoutIfNeeded()
}
The problem is if I set cvWidthConstraint.constant to a very high value such as 521000, I get a warning:
This NSLayoutConstraint is being configured with a constant that exceeds internal limits. A smaller value will be substituted, but this problem should be fixed. Break on BOOL _NSLayoutConstraintNumberExceedsLimit(void) to debug. This will be logged only once. This may break in the future.
I wonder how does one set UIScrollView content size to be very high values?
I have a RemoteIO unit that successfully playbacks the microphone samples in realtime via attached headphones. I need to get the same functionality ported using AVAudioEngine, but I can't seem to make a head start. Here is my code, all I do is connect inputNode to playerNode which crashes.
var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var engineRunning = false
private func setupAudioSession() {
var options:AVAudioSession.CategoryOptions = [.allowBluetooth, .allowBluetoothA2DP]
do {
try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.default, options: options)
try AVAudioSession.sharedInstance().setAllowHapticsAndSystemSoundsDuringRecording(true)
} catch {
MPLog("Could not set audio session category")
}
let audioSession = AVAudioSession.sharedInstance()
do {
try audioSession.setActive(false)
try audioSession.setPreferredSampleRate(Double(44100))
} catch {
print("Unable to deactivate Audio session")
}
do {
try audioSession.setActive(true)
} catch {
print("Unable to activate AudioSession")
}
}
private func setupAudioEngine() {
self.engine = AVAudioEngine()
self.playerNode = AVAudioPlayerNode()
self.engine.attach(self.playerNode)
engine.connect(self.engine.inputNode, to: self.playerNode, format: nil)
do {
try self.engine.start()
}
catch {
print("error couldn't start engine")
}
engineRunning = true
}
But starting AVAudioEngine causes a crash:
libc++abi: terminating with uncaught exception of type NSException
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason:
'required condition is false: inDestImpl->NumberInputs() > 0 || graphNodeDest->CanResizeNumberOfInputs()'
terminating with uncaught exception of type NSException
How do I get realtime record and playback of mic samples via headphones working?
I have an AVPlayerLayer and AVPlayer setup for playback on external screen as follows:
var player = AVPlayer()
playerView.player = player
player.usesExternalPlaybackWhileExternalScreenIsActive = true
player.allowsExternalPlayback = true
playerView is just a UIView that has AVPlayerLayer as it's main layer. This code works and automatically starts displaying and playing video on external screen. The thing is I want an option to invert the AVPlayerLayer on the external screen. I tried setting transform on playerView but that is ignored on the external screen. How do I gain more control on the external screen window?
I also tried to manually add playerView to external screen window and set
player.usesExternalPlaybackWhileExternalScreenIsActive = true
I can also display AVPlayerLayer manually this way. But again, setting a transform on this screen has no effect on external display. So it may also be a UIKit issue.
I need to implement a text editor using UITextView that supports:
Bold/Italic/Underline
Color,Font,font size changes
Paragraph alignment
List format (bullets, numbers, etc.)
Custom selection of text anywhere in the text view and change the properties
So far I have managed to do it without NSTextStorage but it seems I am hitting limits. For instance, to change font, I use UIFontPickerViewController and change the font as follows:
func fontPickerViewControllerDidPickFont(_ viewController: UIFontPickerViewController) {
if let selectedFontDesc = viewController.selectedFontDescriptor {
let font = UIFont(descriptor: selectedFontDesc, size: selectedFontDesc.pointSize)
self.selectedFont = font
self.textView.typingAttributes = [NSAttributedString.Key.foregroundColor: self.selectedColor ?? UIColor.white, NSAttributedString.Key.font: self.selectedFont ?? UIFont.preferredFont(forTextStyle: .body, compatibleWith: nil)]
if let range = self.textView.selectedTextRange, let selectedFont = selectedFont {
let attributedText = NSMutableAttributedString(attributedString: self.textView.attributedText)
let location = textView.offset(from: textView.beginningOfDocument, to: range.start)
let length = textView.offset(from: range.start, to: range.end)
let nsRange = NSRange(location: location, length: length)
attributedText.setAttributes([NSAttributedString.Key.font : selectedFont], range: nsRange)
self.textView.attributedText = attributedText
}
}
}
This works but the problem is it resets the color of the selected text and other properties. I need to understand a way in which the existing attributed of the text under selection are not disturbed. I suspect the way to do is with using NSTextStorage but I can't find anything good on internet that explains the right use of NSTextStorage to achieve this.
This issue is driving me crazy. I load an NSAttributedString in UITextView and within moments after loading the foregroundColor attribute of text is erased(i.e becomes white) without me doing anything. Here is the code and NSLog dump. How do I debug this I wonder?
class ScriptEditingView: UITextView, UITextViewDelegate {
var defaultFont = UIFont.preferredFont(forTextStyle: .body)
var defaultTextColor = UIColor.white
private func commonInit() {
self.font = UIFont.preferredFont(forTextStyle: .body)
self.allowsEditingTextAttributes = true
self.textColor = defaultTextColor
self.backgroundColor = UIColor.black
self.isOpaque = true
self.isEditable = true
self.isSelectable = true
self.dataDetectorTypes = []
self.showsHorizontalScrollIndicator = false
}
}
And then in my ViewController that contains the UITextView, I have this code:
textView = ScriptEditingView(frame: newTextViewRect, textContainer: nil)
textView.delegate = self
view.addSubview(textView)
textView.allowsEditingTextAttributes = true
let guide = view.safeAreaLayoutGuide
// 5
textView.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
textView.leadingAnchor.constraint(equalTo: guide.leadingAnchor),
textView.trailingAnchor.constraint(equalTo: guide.trailingAnchor),
textView.topAnchor.constraint(equalTo: view.topAnchor),
textView.bottomAnchor.constraint(equalTo: view.bottomAnchor)
])
textView.attributedText = attributedString
NSLog("Attributed now")
dumpAttributesOfText()
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
NSLog("Attributes after 1 sec")
self.dumpAttributesOfText()
}
And here is code to dump attributes of text:
private func dumpAttributesOfText() {
textView.attributedText?.enumerateAttributes(in: NSRange(location: 0, length: textView.attributedText!.length), options: .longestEffectiveRangeNotRequired, using: { dictionary, range, stop in
NSLog(" range \(range)")
if let font = dictionary[.font] as? UIFont {
NSLog("Font at range \(range) - \(font.fontName), \(font.pointSize)")
}
if let foregroundColor = dictionary[.foregroundColor] as? UIColor {
NSLog("Foregroundcolor \(foregroundColor) at range \(range)")
}
if let underline = dictionary[.underlineStyle] as? Int {
NSLog("Underline \(underline) at range \(range)")
}
})
}
The logs show this:
2022-07-02 13:16:02.841199+0400 MyApp[12054:922491] Attributed now
2022-07-02 13:16:02.841370+0400 MyApp[12054:922491] range {0, 14}
2022-07-02 13:16:02.841486+0400 MyApp[12054:922491] Font at range {0, 14} - HelveticaNeue, 30.0
2022-07-02 13:16:02.841586+0400 MyApp[12054:922491] Foregroundcolor UIExtendedGrayColorSpace 1 1 at range {0, 14}
2022-07-02 13:16:02.841681+0400 MyApp[12054:922491] range {14, 6}
2022-07-02 13:16:02.841770+0400 MyApp[12054:922491] Font at range {14, 6} - HelveticaNeue, 30.0
2022-07-02 13:16:02.841855+0400 MyApp[12054:922491] Foregroundcolor kCGColorSpaceModelRGB 0.96863 0.80784 0.27451 1 at range {14, 6}
2022-07-02 13:16:03.934816+0400 MyApp[12054:922491] Attributes after 1 sec
2022-07-02 13:16:03.935087+0400 MyApp[12054:922491] range {0, 20}
2022-07-02 13:16:03.935183+0400 MyApp[12054:922491] Font at range {0, 20} - HelveticaNeue, 30.0
2022-07-02 13:16:03.935255+0400 MyApp[12054:922491] Foregroundcolor UIExtendedGrayColorSpace 1 1 at range {0, 20}