I am trying to use the AVCaptureDevice.RotationCoordinator API to observe rotation angles for preview and capture, and there seems to be an issue with the API when it is used with an arbitrary CALayer (one that is not an AVCaptureVideoPreviewLayer) and the cameras are switched.
Here is my setup. The function below is defined in an actor called CameraManager and sets up the rotation coordinator.
func updateRotationCoordinator(_ callback: @escaping @MainActor (CGFloat) -> Void) {
    guard let device = sessionConfiguration.activeVideoInput?.device,
          let displayLayer = displayLayer else { return }
    cancellables.removeAll()

    rotationCoordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: displayLayer)
    guard let coordinator = rotationCoordinator else { return }

    coordinator.publisher(for: \.videoRotationAngleForHorizonLevelPreview)
        .receive(on: DispatchQueue.main)
        .sink { degrees in
            let radians = degrees * .pi / 180
            MainActor.assumeIsolated {
                callback(radians)
            }
        }
        .store(in: &cancellables)
}
This works the very first time, but when I switch cameras and call this function again, it throws a runtime error saying the view's layer is being modified from a non-main thread. This happens at the very line where the rotation coordinator is recreated. It's not clear why initialising the rotation coordinator should modify CALayer properties right in its init method.
Modifying properties of a view's layer off the main thread is not allowed: view <MyApp.DisplayLayerView: 0x102ffaf40> with nearest ancestor view controller <_TtGC7SwiftUI19UIHostingControllerGVS_15ModifiedContentVS_7AnyViewVS_12RootModifier__: 0x101f7fb80>; backtrace:
(
0 UIKitCore 0x0000000194a977b4 575E5140-FA6A-37C2-B00B-A4EACEDFDA53 + 22509492
1 UIKitCore 0x000000019358594c 575E5140-FA6A-37C2-B00B-A4EACEDFDA53 + 416076
2 QuartzCore 0x00000001927f5bd8 D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 43992
3 QuartzCore 0x00000001927f5a4c D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 43596
4 QuartzCore 0x000000019283a41c D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 324636
5 QuartzCore 0x000000019283a0a8 D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 323752
6 AVFCapture 0x00000001af072a18 09192166-E0B6-346C-B1C2-7C95C3EFF7F7 + 420376
7 MyApp.debug.dylib 0x0000000105fa3914 $s10MyApp15CapturePipelineC25updateRotationCoordinatoryyy12CoreGraphics7CGFloatVScMYccF + 972
8 MyApp.debug.dylib 0x00000001063ade40 $s10MyApp11CameraModelC18switchVideoDevicesyyYaFTY3_ + 72
9 MyApp.debug.dylib 0x0000000105fe3cbd $s10MyApp11ContentViewV4bodyQrvg7SwiftUI6VStackVyAE05TupleE0VyAE6HStackVyAIyAE6SpacerV_AE6ButtonVyAE0E0PAEE5frame5width6height9alignmentQr12CoreGraphics7CGFloatVSg_AyE9AlignmentVtFQOyAqEE11scaledToFitQryFQOyAqEE10imageScaleyQrAE5ImageV0Z0OFQOyA3__Qo__Qo__Qo_GtGG_AmKyAIyAKyAIyAqEE7paddingyQrAE4EdgeO3SetV_AYtFQOyAA07CaptureM0V_Qo__AOyAE4TextVGAmKyAIyA9__AqEEArstUQrAY_AYA_tFQOyAM_Qo_A9_tGGtGG_AmqEE10background_AUQrqd___A_tAePRd__lFQOyAqEEArstUQrAY_AYA_tFQOyA21__Qo__AqEEArstUQrAY_AYA_tFQOyAE06_ShapeE0VyAE9RectangleVAE5ColorVG_Qo_Qo_SgtGGtGGyXEfU0_A42_yXEfU_A10_yXEfU_yyScMYccfU_yyYacfU_TQ1_ + 1
10 MyApp.debug.dylib 0x0000000105ff06d9 $s10MyApp11ContentViewV4bodyQrvg7SwiftUI6VStackVyAE05TupleE0VyAE6HStackVyAIyAE6SpacerV_AE6ButtonVyAE0E0PAEE5frame5width6height9alignmentQr12CoreGraphics7CGFloatVSg_AyE9AlignmentVtFQOyAqEE11scaledToFitQryFQOyAqEE10imageScaleyQrAE5ImageV0Z0OFQOyA3__Qo__Qo__Qo_GtGG_AmKyAIyAKyAIyAqEE7paddingyQrAE4EdgeO3SetV_AYtFQOyAA07CaptureM0V_Qo__AOyAE4TextVGAmKyAIyA9__AqEEArstUQrAY_AYA_tFQOyAM_Qo_A9_tGGtGG_AmqEE10background_AUQrqd___A_tAePRd__lFQOyAqEEArstUQrAY_AYA_tFQOyA21__Qo__AqEEArstUQrAY_AYA_tFQOyAE06_ShapeE0VyAE9RectangleVAE5ColorVG_Qo_Qo_SgtGGtGGyXEfU0_A42_yXEfU_A10_yXEfU_yyScMYccfU_yyYacfU_TATQ0_ + 1
11 MyApp.debug.dylib 0x0000000105f9c595 $sxIeAgHr_xs5Error_pIegHrzo_s8SendableRzs5NeverORs_r0_lTRTQ0_ + 1
12 MyApp.debug.dylib 0x0000000105f9fb3d $sxIeAgHr_xs5Error_pIegHrzo_s8SendableRzs5NeverORs_r0_lTRTATQ0_ + 1
13 libswift_Concurrency.dylib 0x000000019c49fe39 E15CC6EE-9354-3CE5-AF91-F641CA8283E0 + 433721
)
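A possible workaround I am experimenting with is sketched below: hop to the main actor before recreating the coordinator, since its init apparently touches the layer. This is only an illustration inside CameraManager, not confirmed intended usage; the function becomes async, and CALayer/AVCaptureDevice.RotationCoordinator are not Sendable, so strict concurrency checking will warn about it.
// Sketch of a possible workaround: recreate the coordinator on the main actor.
func updateRotationCoordinator(_ callback: @escaping @MainActor (CGFloat) -> Void) async {
    guard let device = sessionConfiguration.activeVideoInput?.device,
          let displayLayer = displayLayer else { return }
    cancellables.removeAll()

    // Create the coordinator on the main thread, since its init appears to touch the layer.
    let coordinator = await MainActor.run {
        AVCaptureDevice.RotationCoordinator(device: device, previewLayer: displayLayer)
    }
    rotationCoordinator = coordinator

    coordinator.publisher(for: \.videoRotationAngleForHorizonLevelPreview)
        .receive(on: DispatchQueue.main)
        .sink { degrees in
            let radians = degrees * .pi / 180
            MainActor.assumeIsolated {
                callback(radians)
            }
        }
        .store(in: &cancellables)
}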
I see the SwiftUI body being called repeatedly in an infinite loop in the presence of Environment variables such as horizontalSizeClass or verticalSizeClass. This happens after the device is rotated from portrait to landscape and then back to portrait. The deinit method of TestPlayerVM is called repeatedly. A minimal reproducible sample is pasted below.
The infinite loop is not seen if I remove the size class environment references, OR if I skip the addPlayerObservers call in the TestPlayerVM initialiser.
import AVKit
import Combine
import SwiftUI

struct InfiniteLoopView: View {
    @Environment(\.verticalSizeClass) var verticalSizeClass
    @Environment(\.horizontalSizeClass) var horizontalSizeClass
    @State private var openPlayer = false
    @State var playerURL: URL = URL(fileURLWithPath: Bundle.main.path(forResource: "Test_Video", ofType: ".mov")!)

    var body: some View {
        PlayerView(playerURL: playerURL)
            .ignoresSafeArea()
    }
}

struct PlayerView: View {
    @Environment(\.dismiss) var dismiss
    var playerURL: URL
    @State var playerVM = TestPlayerVM()

    var body: some View {
        VideoPlayer(player: playerVM.player)
            .ignoresSafeArea()
            .background {
                Color.black
            }
            .task {
                let playerItem = AVPlayerItem(url: playerURL)
                playerVM.playerItem = playerItem
            }
    }
}

@Observable
class TestPlayerVM {
    private(set) public var player: AVPlayer = AVPlayer()

    var playerItem: AVPlayerItem? {
        didSet {
            player.replaceCurrentItem(with: playerItem)
        }
    }

    private var cancellable = Set<AnyCancellable>()

    init() {
        addPlayerObservers()
    }

    deinit {
        print("Deinit Video player manager")
        removeAllObservers()
    }

    private func removeAllObservers() {
        cancellable.removeAll()
    }

    private func addPlayerObservers() {
        player.publisher(for: \.timeControlStatus, options: [.initial, .new])
            .receive(on: DispatchQueue.main)
            .sink { timeControlStatus in
                print("Player time control status \(timeControlStatus)")
            }
            .store(in: &cancellable)
    }
}
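For reference, the second workaround mentioned above (skipping addPlayerObservers in the initialiser) looks roughly like this as a sketch. The method name startObserving is mine; PlayerView would call it once from its .task before assigning playerItem.
// Sketch: keep init empty and attach the Combine subscription explicitly from the view,
// so nothing is subscribed while SwiftUI is still evaluating body.
@Observable
class TestPlayerVM {
    private(set) public var player = AVPlayer()
    var playerItem: AVPlayerItem? {
        didSet { player.replaceCurrentItem(with: playerItem) }
    }
    private var cancellable = Set<AnyCancellable>()

    func startObserving() {
        player.publisher(for: \.timeControlStatus, options: [.initial, .new])
            .receive(on: DispatchQueue.main)
            .sink { status in
                print("Player time control status \(status)")
            }
            .store(in: &cancellable)
    }
}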
I have a discrete scrubber implementation (range 0–100) using ScrollView in SwiftUI that fails at the end points. For instance, scrolling it all the way to the bottom shows a value of 87 instead of 100. If I instead scroll down by tapping the + button incrementally until it reaches the end, it shows the correct value of 100, but then tapping the − button doesn't scroll the scrubber back until the button has been tapped three times.
I understand this is related to the .viewAligned scroll target behaviour, but I don't understand what exactly the issue is, or whether it's a bug in SwiftUI.
import SwiftUI

struct VerticalScrubber: View {
    var config: ScrubberConfig
    @Binding var value: CGFloat
    @State private var scrollPosition: Int?

    var body: some View {
        GeometryReader { geometry in
            let verticalPadding = geometry.size.height / 2 - 8

            ZStack(alignment: .trailing) {
                ScrollView(.vertical, showsIndicators: false) {
                    VStack(spacing: config.spacing) {
                        ForEach(0...(config.steps * config.count), id: \.self) { index in
                            horizontalTickMark(for: index)
                                .id(index)
                        }
                    }
                    .frame(width: 80)
                    .scrollTargetLayout()
                    .safeAreaPadding(.vertical, verticalPadding)
                }
                .scrollTargetBehavior(.viewAligned)
                .scrollPosition(id: $scrollPosition, anchor: .top)

                Capsule()
                    .frame(width: 32, height: 3)
                    .foregroundColor(.accentColor)
                    .shadow(color: .accentColor.opacity(0.3), radius: 3, x: 0, y: 1)
            }
            .frame(width: 100)
            .onAppear {
                DispatchQueue.main.async {
                    scrollPosition = Int(value * CGFloat(config.steps))
                }
            }
            .onChange(of: value, { oldValue, newValue in
                let newIndex = Int(newValue * CGFloat(config.steps))
                print("New index \(newIndex)")
                if scrollPosition != newIndex {
                    withAnimation {
                        scrollPosition = newIndex
                        print("\(scrollPosition)")
                    }
                }
            })
            .onChange(of: scrollPosition, { oldIndex, newIndex in
                guard let pos = newIndex else { return }
                let newValue = CGFloat(pos) / CGFloat(config.steps)
                if abs(value - newValue) > 0.001 {
                    value = newValue
                }
            })
        }
    }

    private func horizontalTickMark(for index: Int) -> some View {
        let isMajorTick = index % config.steps == 0
        let tickValue = index / config.steps

        return HStack(spacing: 8) {
            Rectangle()
                .fill(isMajorTick ? Color.accentColor : Color.gray.opacity(0.5))
                .frame(width: isMajorTick ? 24 : 12, height: isMajorTick ? 2 : 1)
            if isMajorTick {
                Text("\(tickValue * 5)")
                    .font(.system(size: 12, weight: .medium))
                    .foregroundColor(.primary)
                    .fixedSize()
            }
        }
        .frame(maxWidth: .infinity, alignment: .trailing)
        .padding(.trailing, 8)
    }
}

#Preview("Vertical Scrubber") {
    struct VerticalScrubberPreview: View {
        @State private var value: CGFloat = 0
        private let config = ScrubberConfig(count: 20, steps: 5, spacing: 8)

        var body: some View {
            VStack {
                Text("Vertical Scrubber (0–100 in steps of 5)")
                    .font(.title2)
                    .padding()

                HStack(spacing: 30) {
                    VerticalScrubber(config: config, value: $value)
                        .frame(width: 120, height: 300)
                        .background(Color(.systemBackground))
                        .border(Color.gray.opacity(0.3))

                    VStack {
                        Text("Current Value:")
                            .font(.headline)
                        Text("\(value * 5, specifier: "%.0f")")
                            .font(.system(size: 36, weight: .bold))
                            .padding()

                        HStack {
                            Button("−5") {
                                let newValue = max(0, value - 1)
                                if value != newValue {
                                    value = newValue
                                    UISelectionFeedbackGenerator().selectionChanged()
                                }
                                print("Value \(newValue), \(value)")
                            }
                            .disabled(value <= 0)

                            Button("+5") {
                                let newValue = min(CGFloat(config.count), value + 1)
                                if value != newValue {
                                    value = newValue
                                    UISelectionFeedbackGenerator().selectionChanged()
                                }
                                print("Value \(newValue), \(value)")
                            }
                            .disabled(value >= CGFloat(config.count))
                        }
                        .buttonStyle(.bordered)
                    }
                }
                Spacer()
            }
            .padding()
        }
    }

    return VerticalScrubberPreview()
}
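ScrubberConfig itself is not shown above; a hypothetical definition, inferred only from how it is used (count major ticks, steps minor ticks per major tick, spacing between tick marks), would be:
// Hypothetical definition, inferred from usage; not part of the original sample.
struct ScrubberConfig {
    let count: Int        // number of major ticks; value range is 0...count (displayed ×5)
    let steps: Int        // minor ticks per major tick
    let spacing: CGFloat  // spacing between tick marks, in points
}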
I tried using the sample code "Applying Matte Effects to People in Images and Videos" on an iPhone 12 mini, but the segmentation is not accurate near boundaries (especially hair). I even tried the .accurate segmentation quality level, which makes the iPhone overheat quickly, and the segmentation is still not good enough for live video. One thing that may matter: segmentation results are not as good as matting, which produces an alpha channel so the hair blends accurately with the background. But if I am missing something, please do point it out.
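For reference, setting the .accurate quality level on VNGeneratePersonSegmentationRequest looks roughly like this; this is a minimal sketch of the API used by the sample, where segmentPerson and framePixelBuffer are my own placeholder names.
import CoreVideo
import Vision

// Sketch: request a person-segmentation mask at the highest quality level.
func segmentPerson(in framePixelBuffer: CVPixelBuffer) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate                      // .fast / .balanced are cheaper but coarser
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: framePixelBuffer, options: [:])
    try handler.perform([request])
    return request.results?.first?.pixelBuffer            // single-channel mask, no soft alpha matte
}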
On an iPhone 12 mini running iOS 15 beta 3, the microphone samples from the built-in microphone are all silent in the RemoteIO unit callbacks. This happens when the AVAudioSession preferred input data source is set to the back or front mic. It is NOT seen when the preferred data source is the bottom mic, or with external microphones.
Is this a known bug in iOS 15?
The AVAudioSession API for setting stereo orientation on supported devices (iPhone XS and above) is completely broken on the iOS 15 betas. Even the 'Stereo Audio Capture' sample code no longer works on iOS 15, and AVCaptureAudioDataOutput also fails when stereo orientation is set on the AVAudioSession. I am wondering whether Apple engineers are aware of this issue and whether it will be fixed in upcoming betas and, most importantly, in the iOS 15 release.
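For reference, the stereo configuration from the sample boils down to roughly the following sequence, and it is this that stops working on the iOS 15 betas. This is a sketch with my own names (configureStereoCapture), not the sample code verbatim.
import AVFoundation

// Sketch: select the built-in mic, pick the back data source, enable the stereo
// polar pattern, and set the stereo input orientation.
func configureStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let backSource = builtInMic.dataSources?.first(where: { $0.orientation == .back }),
          let stereoPattern = backSource.supportedPolarPatterns?.first(where: { $0 == .stereo }) else {
        return
    }
    try backSource.setPreferredPolarPattern(stereoPattern)
    try builtInMic.setPreferredDataSource(backSource)
    try session.setPreferredInput(builtInMic)
    // The stereo orientation should match the device/UI orientation for a correct channel layout.
    try session.setPreferredInputOrientation(.portrait)
}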
Does a custom AVVideoCompositionInstruction need at least one video asset track? If one just needs to generate video from motion graphics and pictures, does one still need to add a dummy video track to the composition first?
I filed a bug, and its status in Feedback Assistant now shows "Potential fix identified - In iOS 15", but the bug is still present in iOS 15 beta 6. What does this status mean? Does it mean the fix will ship in the final iOS 15 build?
I want to know under what conditions -[AVAsynchronousVideoCompositionRequest sourceFrameByTrackID:] returns nil. I have a custom compositor, and when seeking the AVPlayer I find the method sometimes returns nil, particularly when the seek tolerance is set to zero. No issues are seen if I simply play the composition; only seeking produces these errors, and only some of the time.
I had a project that was created in Xcode 12. I then opened it in Xcode 13 beta 5 and made a lot of edits. Now it does not build at all in Xcode 12.5; I tried a clean build, but it still doesn't work.
Command CompileSwift failed with a nonzero exit code
1. Apple Swift version 5.4 (swiftlang-1205.0.26.9 clang-1205.0.19.55)
2. Running pass 'Module Verifier' on function '@"xxxxxxxxxxxxxxxxH0OSo014AVAsynchronousA18CompositionRequestCtF"'
0 swift-frontend 0x000000010796fe85 llvm::sys::PrintStackTrace(llvm::raw_ostream&) + 37
1 swift-frontend 0x000000010796ee78 llvm::sys::RunSignalHandlers() + 248
2 swift-frontend 0x0000000107970446 SignalHandler(int) + 262
3 libsystem_platform.dylib 0x00007fff204fbd7d _sigtramp + 29
4 libdyld.dylib 0x00007fff204d0ce8 _dyld_fast_stub_entry(void*, long) + 65
5 libsystem_c.dylib 0x00007fff2040b406 abort + 125
6 swift-frontend 0x0000000102b92a31 swift::performFrontend(llvm::ArrayRef<char const*>, char const*, void*, swift::FrontendObserver*)::$_1::__invoke(void*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool) + 1169
7 swift-frontend 0x00000001078c52d0 llvm::report_fatal_error(llvm::Twine const&, bool) + 288
8 swift-frontend 0x00000001078c51ab llvm::report_fatal_error(char const*, bool) + 43
9 swift-frontend 0x000000010786537f (anonymous namespace)::VerifierLegacyPass::runOnFunction(llvm::Function&) + 111
10 swift-frontend 0x00000001077ff0b9 llvm::FPPassManager::runOnFunction(llvm::Function&) + 1353
11 swift-frontend 0x00000001077fe3a0 llvm::legacy::FunctionPassManagerImpl::run(llvm::Function&) + 112
12 swift-frontend 0x0000000107805835 llvm::legacy::FunctionPassManager::run(llvm::Function&) + 341
13 swift-frontend 0x0000000102f3e3e8 swift::performLLVMOptimizations(swift::IRGenOptions const&, llvm::Module*, llvm::TargetMachine*) + 1688
14 swift-frontend 0x0000000102f3f486 swift::performLLVM(swift::IRGenOptions const&, swift::DiagnosticEngine&, llvm::sys::SmartMutex<false>*, llvm::GlobalVariable*, llvm::Module*, llvm::TargetMachine*, llvm::StringRef, swift::UnifiedStatsReporter*) + 2582
15 swift-frontend 0x0000000102b9e863 performCompileStepsPostSILGen(swift::CompilerInstance&, std::__1::unique_ptr<swift::SILModule, std::__1::default_delete<swift::SILModule> >, llvm::PointerUnion<swift::ModuleDecl*, swift::SourceFile*>, swift::PrimarySpecificPaths const&, int&, swift::FrontendObserver*) + 3683
16 swift-frontend 0x0000000102b8fd22 swift::performFrontend(llvm::ArrayRef<char const*>, char const*, void*, swift::FrontendObserver*) + 6370
17 swift-frontend 0x0000000102b11e82 main + 1266
18 libdyld.dylib 0x00007fff204d1f3d start + 1
error: Abort trap: 6 (in target 'VideoEditing' from project 'VideoEditing')
I have an AVVideoComposition with a custom compositor. The issue is that AVPlayer sometimes crashes on seeking, especially when the seek tolerance is set to CMTime.zero. The reason for the crash is that request.sourceFrame(byTrackID: trackId) returns nil even though it should not. Below is a sample of 3 instructions and their time ranges; all of them contain only track 1.
2021-09-09 12:27:50.773825+0400 VideoApp[86227:6913831] Instruction 0.0, 4.0
2021-09-09 12:27:50.774105+0400 VideoApp[86227:6913831] ...Present TrackId 1 in this instruction
2021-09-09 12:27:50.774196+0400 VideoApp[86227:6913831] Instruction 4.0, 5.0
2021-09-09 12:27:50.774258+0400 VideoApp[86227:6913831] ...Present TrackId 1 in this instruction
2021-09-09 12:27:50.774312+0400 VideoApp[86227:6913831] ...Present TrackId 1 in this instruction
2021-09-09 12:27:50.774369+0400 VideoApp[86227:6913831] Instruction 5.0, 18.845
2021-09-09 12:27:50.774426+0400 VideoApp[86227:6913831] ...Present TrackId 1 in this instruction
VideoApp /VideoEditingCompositor.swift:141: Fatal error: No pixel buffer for track 1, 4.331427
Here is the simple line of code that produces this error:
guard let pixelBuffer = request.sourceFrame(byTrackID: trackId) else {
    fatalError("No pixel buffer for track \(trackId), \(request.compositionTime.seconds)")
}
As can be seen, the time 4.331427 seconds is well within the range of the second instruction, which runs from 4.0 to 5.0 seconds. Why does the custom compositor get a nil pixel buffer then? And the times are random (the time values at which it crashes keep changing); the next time I run the program and seek to exactly that time, it returns a valid pixel buffer! So it has nothing to do with a particular time instant. Playback without seeking is also totally fine. It seems to be something inside the AVFoundation framework rather than the app.
Has anyone ever seen such an error?
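One mitigation I am considering (only a sketch; it sidesteps the missing frame rather than explaining it) is to finish the affected request with an error instead of crashing the whole app:
// Sketch: fail the single composition request instead of calling fatalError when
// the source frame is unexpectedly missing during a tight-tolerance seek.
guard let pixelBuffer = request.sourceFrame(byTrackID: trackId) else {
    let error = NSError(domain: "VideoEditingCompositor", code: -1, userInfo: [
        NSLocalizedDescriptionKey: "No pixel buffer for track \(trackId) at \(request.compositionTime.seconds)"
    ])
    request.finish(with: error)
    return
}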
I have prototyped a multilayer timeline with custom cells, where:
a. Each cell can have a different size, and some cells can be larger than the visible rect of the ScrollView,
b. The gap between cells may vary (even though it appears the same in the picture below), except on the first (base) layer, where the cell gap is fixed at 2 points,
c. Each cell can be selected and trimmed/expanded from either end using a UIPanGestureRecognizer. Trimming/expansion has custom rules: for the base layer, a cell simply pushes the other cells as it expands or contracts; for other layers, trimming or expansion has to respect the boundaries of neighbouring cells,
d. The timeline can be zoomed horizontally, which has the effect of scaling the cells,
e. Cells can be dragged and dropped to other rows, subject to custom rules.
I have implemented all of this with UIScrollView. Currently, all cells are initialized and added to the UIScrollView whether they are visible or not, but I am hitting limits as I draw more content in each cell, which means I need to reuse cells and draw only visible content. I discussed this with Apple engineers in the WWDC labs, and one of them suggested I use UICollectionView with a custom layout, where I get a lot of functionality for free (such as cell reuse and drag and drop). He suggested looking at the WWDC 2018 video (session 225) on UICollectionView. But as I look at custom layouts for UICollectionView, it's not clear to me:
Q1. How do I manually trim/expand selected cells in a UICollectionView with a custom layout using a UIPanGestureRecognizer? In the UIScrollView case, I just attach a UIPanGestureRecognizer to the cell and trim/expand its frame (respecting the given boundary conditions).
Q2. How do I scale all the cells by a given zoom factor? With UIScrollView, I simply scale the frame of each cell and then calculate the contentOffset to reposition the UIScrollView around the point of zoom. (A rough custom-layout sketch for this is below.)
Even with a UICollectionView containing just one cell whose width is, say, 10x the UICollectionView's frame width, I would need further optimisation to draw only the visible portion of the content rather than the whole cell. How can a UICollectionViewCell draw only the part that's visible on screen?
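Regarding Q2, the direction I understood from the lab is roughly the following; this is a minimal sketch with assumed names (TimelineLayout, itemWidths), not working code from my project, and it ignores multiple layers, gaps per layer and supplementary views.
import UIKit

// Sketch: a custom layout that owns the zoom factor, so zooming becomes an
// invalidation instead of manual frame math on every cell.
final class TimelineLayout: UICollectionViewLayout {
    var zoomScale: CGFloat = 1.0 {
        didSet { invalidateLayout() }
    }
    // Unscaled cell widths, supplied by whoever owns the timeline model.
    var itemWidths: [CGFloat] = []

    private var cache: [UICollectionViewLayoutAttributes] = []

    override func prepare() {
        super.prepare()
        cache.removeAll()
        var x: CGFloat = 0
        let height = collectionView?.bounds.height ?? 0
        for (index, width) in itemWidths.enumerated() {
            let attributes = UICollectionViewLayoutAttributes(forCellWith: IndexPath(item: index, section: 0))
            attributes.frame = CGRect(x: x, y: 0, width: width * zoomScale, height: height)
            cache.append(attributes)
            x = attributes.frame.maxX + 2 // fixed 2-point gap, as on the base layer
        }
    }

    override var collectionViewContentSize: CGSize {
        CGSize(width: cache.last?.frame.maxX ?? 0, height: collectionView?.bounds.height ?? 0)
    }

    override func layoutAttributesForElements(in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
        cache.filter { $0.frame.intersects(rect) }
    }

    override func layoutAttributesForItem(at indexPath: IndexPath) -> UICollectionViewLayoutAttributes? {
        indexPath.item < cache.count ? cache[indexPath.item] : nil
    }
}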
I have the following class in Swift:
public class EffectModel {
    var type: String
    var keyframeGroup: [Keyframe<EffectParam>] = []
}
public enum EffectParam<Value: Codable>: Codable {
    case scalar(Value)
    case keyframes([Keyframe<Value>])

    public enum CodingKeys: String, CodingKey {
        case rawValue, associatedValue
    }
    ...
    ...
}

public class Keyframe<T: Codable>: Codable {
    public var time: CMTime
    public var property: String
    public var value: T

    enum CodingKeys: String, CodingKey {
        case time
        case property
        case value
    }
    ...
}
The problem is that the compiler doesn't accept the generic EffectParam in EffectModel and gives the error
Generic parameter 'Value' could not be inferred
One way to solve the problem would be to redeclare the class EffectModel as
public class EffectModel<EffectParam: Codable>
But the problem is that this class is used in so many other classes that I would need to add a generic parameter to every class that holds an EffectModel, then to every class that uses those classes, and so on. That is not a solution for me. Is there any other way to solve this in Swift using other language constructs (such as protocols)?
I have an AVComposition played back via AVPlayer, where the AVComposition has multiple audio tracks with an audioMix applied. My question is: how can I compute audio meter values for the audio playing back through AVPlayer? With MTAudioProcessingTap it seems you can only get a callback for one track at a time, and if that route has to be used, it's not clear how to get the sample values of all the audio tracks at a given time in a single callback.
I have tried everything, but it seems impossible to get MTKView to display the full range of colors of an HDR CIImage made from a CVPixelBuffer (in 10-bit YUV format). Only built-in layers such as AVCaptureVideoPreviewLayer, AVPlayerLayer, and AVSampleBufferDisplayLayer are able to fully display HDR images on iOS. Is MTKView incapable of displaying the full BT.2020 HLG color range? Why does MTKView clip colors even when I set colorPixelFormat to bgra10_xr or bgra10_xr_srgb?
convenience init(frame: CGRect, contentScale: CGFloat) {
    self.init(frame: frame)
    contentScaleFactor = contentScale
}

convenience init(frame: CGRect) {
    let device = MetalCamera.metalDevice
    self.init(frame: frame, device: device)
    colorPixelFormat = .bgra10_xr
    self.preferredFramesPerSecond = 30
}

override init(frame frameRect: CGRect, device: MTLDevice?) {
    guard let device = device else {
        fatalError("Can't use Metal")
    }
    guard let cmdQueue = device.makeCommandQueue(maxCommandBufferCount: 5) else {
        fatalError("Can't make Command Queue")
    }
    commandQueue = cmdQueue
    context = CIContext(mtlDevice: device, options: [CIContextOption.cacheIntermediates: false])

    super.init(frame: frameRect, device: device)

    self.framebufferOnly = false
    self.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
}
And then rendering code:
override func draw(_ rect: CGRect) {
    guard let image = self.image else {
        return
    }

    let dRect = self.bounds
    let drawImage: CIImage
    let targetSize = dRect.size
    let imageSize = image.extent.size

    let scalingFactor = min(targetSize.width / imageSize.width, targetSize.height / imageSize.height)
    let scalingTransform = CGAffineTransform(scaleX: scalingFactor, y: scalingFactor)
    let translation: CGPoint = CGPoint(x: (targetSize.width - imageSize.width * scalingFactor) / 2,
                                       y: (targetSize.height - imageSize.height * scalingFactor) / 2)
    let translationTransform = CGAffineTransform(translationX: translation.x, y: translation.y)
    let scalingTranslationTransform = scalingTransform.concatenating(translationTransform)
    drawImage = image.transformed(by: scalingTranslationTransform)

    let commandBuffer = commandQueue.makeCommandBufferWithUnretainedReferences()

    guard let texture = self.currentDrawable?.texture else {
        return
    }

    var colorSpace: CGColorSpace
    if #available(iOS 14.0, *) {
        colorSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
    } else {
        // Fallback on earlier versions
        colorSpace = drawImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    }

    NSLog("Image \(colorSpace.name), \(image.colorSpace?.name)")

    context.render(drawImage, to: texture, commandBuffer: commandBuffer, bounds: dRect, colorSpace: colorSpace)

    commandBuffer?.present(self.currentDrawable!, afterMinimumDuration: 1.0 / Double(self.preferredFramesPerSecond))
    commandBuffer?.commit()
}
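One thing that might also be worth checking, though it is an assumption on my part and not a confirmed fix: MTKView's backing CAMetalLayer has a colorspace property, and tagging it with the HLG color space in addition to using the extended-range pixel format might change how the rendered values are interpreted by Core Animation.
// Sketch (untested assumption): tag the drawable's layer with the HLG color space,
// e.g. inside the view's initialiser after the pixel format is configured.
if #available(iOS 14.0, *), let metalLayer = self.layer as? CAMetalLayer {
    metalLayer.pixelFormat = .bgra10_xr
    metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)
}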