Post

Replies

Boosts

Views

Activity

What is a robust approach to deleting a CKRecord associated with an IndexPath in a table view or collection view?
When synchronizing model objects, local CKRecords, and CKRecords in CloudKit during swipe-to-delete, how can I make this as robust as possible? Error handling is omitted for the sake of the example.

    override func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCell.EditingStyle, forRowAt indexPath: IndexPath) {
        if editingStyle == .delete {
            let record = self.records[indexPath.row]
            privateDatabase.delete(withRecordID: record.recordID) { recordID, error in
                self.records.remove(at: indexPath.row)
            }
        }
    }

Since indexPath could change due to other changes in the table view / collection view during the time it takes to delete the record from CloudKit, how could this be improved upon?
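One direction (a sketch, not from the original post): resolve the row by record ID inside the completion handler instead of reusing the captured indexPath, and hop back to the main queue before mutating the data source. Error handling is still elided.

```swift
override func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCell.EditingStyle, forRowAt indexPath: IndexPath) {
    guard editingStyle == .delete else { return }
    let record = self.records[indexPath.row]
    privateDatabase.delete(withRecordID: record.recordID) { deletedID, error in
        DispatchQueue.main.async {
            guard error == nil, let deletedID = deletedID else { return } // real code would surface the error
            // Look the row up again by ID: the captured indexPath may be stale by now.
            if let row = self.records.firstIndex(where: { $0.recordID == deletedID }) {
                self.records.remove(at: row)
                tableView.deleteRows(at: [IndexPath(row: row, section: indexPath.section)], with: .automatic)
            }
        }
    }
}
```

A diffable data source would make the remove-and-update step more robust still, since it keys rows by identifier rather than position.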
1
0
667
Jul ’21
Migrating from value semantics model stored on iCloud to Core Data model
I have an app that currently depends on fetching the model through CloudKit, and is composed of value types. I'm considering adding Core Data support so that record modifications are robust regardless of network conditions. Core Data resources seem to always assume a model layer with reference semantics, so I'm not sure where to begin. Should I keep my top-level model type a struct? Can I? If I move my model to reference semantics, how might I bridge from past model instances that are fetched through CloudKit and then decoded? Thank you in advance.
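One bridging shape worth sketching (hypothetical names throughout; `ItemMO` stands in for a Core Data entity): keep the value-type model as the app-facing layer and treat the managed object purely as a storage record, converting at the boundary.

```swift
import CoreData

// `ItemMO` is a hypothetical NSManagedObject subclass standing in for a
// Core Data entity. The struct below is the value-type model the app keeps
// using; the managed object is only touched when loading or saving.
final class ItemMO: NSManagedObject {
    @NSManaged var title: String?
    @NSManaged var score: Int64
}

struct Item {
    var title: String
    var score: Int

    init(_ mo: ItemMO) {              // managed object -> value type
        self.title = mo.title ?? ""
        self.score = Int(mo.score)
    }

    func apply(to mo: ItemMO) {       // value type -> managed object
        mo.title = title
        mo.score = Int64(score)
    }
}
```

With this shape, model instances previously decoded from CloudKit can be migrated by constructing an `ItemMO` per struct via `apply(to:)` in a one-time import pass.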
0
0
576
Sep ’21
When observing a notification that may be posted "on a thread other than the one used to register the observer," how should I ensure thread-safe UI work?
I observe when an AVPlayer finishes playing in order to present a UIAlert at the end time.

    NotificationCenter.default.addObserver(
        self,
        selector: #selector(presentAlert),
        name: .AVPlayerItemDidPlayToEndTime,
        object: nil
    )

I've had multiple user reports of the alert appearing where it's not intended, such as the middle of the video after replaying, and on other views. I'm unable to reproduce this myself, but my guess is that it's a threading issue, since AVPlayerItemDidPlayToEndTime says "the system may post this notification on a thread other than the one used to register the observer." How then do I make sure the alert is presented on the main thread? Should I dispatch to the main queue from within my presentAlert function, or add the above observer with addObserver(forName:object:queue:using:) instead, passing in the main operation queue?
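A sketch of the block-based registration (assuming a stored `player` and a `presentAlert` method on the observing controller): passing `.main` as the queue makes the system deliver the notification on the main queue for you.

```swift
// Keep the returned token so the observer can be removed later.
var endObserver: NSObjectProtocol?

endObserver = NotificationCenter.default.addObserver(
    forName: .AVPlayerItemDidPlayToEndTime,
    object: player.currentItem,   // scope to this item rather than `nil`
    queue: .main                  // UI work is safe inside the block
) { [weak self] _ in
    self?.presentAlert()
}
```

Note that the original `object: nil` registration fires for every AVPlayerItem in the process, which could by itself explain alerts appearing on other views; the token also needs `NotificationCenter.default.removeObserver(_:)` (e.g. in `deinit`).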
1
0
1.3k
Oct ’21
Should a delegate property passed into a struct also be declared as weak in the struct?
The Swift book says that "to prevent strong reference cycles, delegates are declared as weak references."

    protocol SomeDelegate: AnyObject { }

    class ViewController: UIViewController, SomeDelegate {
        weak var delegate: SomeDelegate?

        override func viewDidLoad() {
            delegate = self
        }
    }

Say the class parameterizes a struct with that delegate:

    class ViewController: UIViewController, SomeDelegate {
        weak var delegate: SomeDelegate?

        override func viewDidLoad() {
            delegate = self
            let exampleView = ExampleView(delegate: delegate)
            let hostingController = UIHostingController(rootView: exampleView)
            self.present(hostingController, animated: true)
        }
    }

    struct ExampleView: View {
        var delegate: SomeDelegate!

        var body: some View {
            Text("")
        }
    }

Should the delegate property in the struct also be marked with weak?
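For what it's worth, Swift does accept `weak` on a struct's stored property as long as the property's type is class-constrained, so the struct can hold the delegate weakly. A standalone sketch (SwiftUI omitted so it compiles on its own):

```swift
protocol SomeDelegate: AnyObject { }

// Copies of the struct never keep the delegate alive: the reference is weak
// and zeroed automatically when the delegate deallocates.
struct ExampleModel {
    weak var delegate: SomeDelegate?
}

final class Owner: SomeDelegate { }

var model = ExampleModel()
do {
    let owner = Owner()
    model.delegate = owner
    print(model.delegate != nil)   // true while `owner` is alive
}
print(model.delegate == nil)       // true: the weak reference was zeroed
```

Whether a cycle can actually form here depends on whether the class ends up owning the struct (as it would through a retained `UIHostingController`), so marking it `weak` is the conservative choice.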
1
0
2.2k
Oct ’21
How do you extend an iOS app's background execution time when continuing an upload operation?
I'd like a user's upload operation that's started in the foreground to continue when they leave the app. Apple's article Extending Your App's Background Execution Time has the following code listing:

    func sendDataToServer(data: NSData) {
        // Perform the task on a background queue.
        DispatchQueue.global().async {
            // Request the task assertion and save the ID.
            self.backgroundTaskID = UIApplication.shared.beginBackgroundTask(withName: "Finish Network Tasks") {
                // End the task if time expires.
                UIApplication.shared.endBackgroundTask(self.backgroundTaskID!)
                self.backgroundTaskID = UIBackgroundTaskInvalid
            }

            // Send the data synchronously.
            self.sendAppDataToServer(data: data)

            // End the task assertion.
            UIApplication.shared.endBackgroundTask(self.backgroundTaskID!)
            self.backgroundTaskID = UIBackgroundTaskInvalid
        }
    }

The call to self.sendAppDataToServer(data: data) is unclear. Is this where the upload operation would go, wrapped in DispatchQueue.global().sync { }?
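A sketch of the same listing in modern Swift (assuming a stored `var backgroundTaskID: UIBackgroundTaskIdentifier = .invalid` property and the article's `sendAppDataToServer(data:)` placeholder): the upload goes exactly where that placeholder sits, with no extra `sync { }` wrapper, since the enclosing closure already runs off the main thread.

```swift
func sendDataToServer(data: Data) {
    DispatchQueue.global().async {
        // Request extra background runtime before starting the work.
        self.backgroundTaskID = UIApplication.shared.beginBackgroundTask(withName: "Finish Network Tasks") {
            // Expiration handler: end the task assertion if time runs out.
            UIApplication.shared.endBackgroundTask(self.backgroundTaskID)
            self.backgroundTaskID = .invalid
        }

        // The upload itself goes here, performed synchronously on this
        // background queue, in place of sendAppDataToServer(data:).
        self.sendAppDataToServer(data: data)

        // Balance the assertion once the upload returns.
        UIApplication.shared.endBackgroundTask(self.backgroundTaskID)
        self.backgroundTaskID = .invalid
    }
}
```

For uploads that should survive even app suspension or termination, a background `URLSessionConfiguration` upload task is the longer-lived alternative to a task assertion.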
1
0
681
Nov ’21
How do you create a new AVAsset video that consists of only frames from given `CMTimeRange`s of another video?
Apple's sample code Identifying Trajectories in Video contains the following delegate callback:

    func cameraViewController(_ controller: CameraViewController, didReceiveBuffer buffer: CMSampleBuffer, orientation: CGImagePropertyOrientation) {
        let visionHandler = VNImageRequestHandler(cmSampleBuffer: buffer, orientation: orientation, options: [:])
        if gameManager.stateMachine.currentState is GameManager.TrackThrowsState {
            DispatchQueue.main.async {
                // Get the frame of rendered view
                let normalizedFrame = CGRect(x: 0, y: 0, width: 1, height: 1)
                self.jointSegmentView.frame = controller.viewRectForVisionRect(normalizedFrame)
                self.trajectoryView.frame = controller.viewRectForVisionRect(normalizedFrame)
            }
            // Perform the trajectory request in a separate dispatch queue.
            trajectoryQueue.async {
                do {
                    try visionHandler.perform([self.detectTrajectoryRequest])
                    if let results = self.detectTrajectoryRequest.results {
                        DispatchQueue.main.async {
                            self.processTrajectoryObservations(controller, results)
                        }
                    }
                } catch {
                    AppError.display(error, inViewController: self)
                }
            }
        }
    }

However, instead of drawing UI whenever detectTrajectoryRequest.results exist (https://developer.apple.com/documentation/vision/vndetecttrajectoriesrequest/3675672-results), I'm interested in using the CMTimeRange provided by each result to construct a new video. In effect, this would filter down the original video to only frames with trajectories. How might I accomplish this, perhaps by writing only specific time ranges' frames from one AVFoundation video to a new AVFoundation video?
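One approach worth sketching (not from the sample): copy only the collected time ranges of the source asset into an `AVMutableComposition`, then export the composition. This assumes the ranges are already sorted, merged, and expressed in the source asset's timeline.

```swift
import AVFoundation

// Build a new asset containing only `ranges` from `sourceAsset`.
func makeFilteredAsset(from sourceAsset: AVAsset,
                       keeping ranges: [CMTimeRange]) throws -> AVAsset {
    let composition = AVMutableComposition()
    var cursor = CMTime.zero
    for range in ranges {
        // Copies the range from every compatible track of the source,
        // appending it at the current end of the composition.
        try composition.insertTimeRange(range, of: sourceAsset, at: cursor)
        cursor = CMTimeAdd(cursor, range.duration)
    }
    return composition
}

// The result can then be written out with AVAssetExportSession, e.g.:
// let export = AVAssetExportSession(asset: filtered, presetName: AVAssetExportPresetHighestQuality)
```

The composition route re-encodes nothing until export and avoids hand-rolling a reader/writer loop; a reader/writer pipeline is only needed if per-frame processing is also required.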
1
0
835
Nov ’21
Trying to modernize Apple's sample code AVReaderWriter. Why is this `DispatchGroup` notify block not being called?
Apple's sample code "AVReaderWriter: Offline Audio / Video Processing" has the following listing in CyanifyOperation.swift (an NSOperation subclass that stylizes imported video and exports it):

    let writingGroup = dispatch_group_create()

    // Transfer data from input file to output file.
    self.transferVideoTracks(videoReaderOutputsAndWriterInputs, group: writingGroup)
    self.transferPassthroughTracks(passthroughReaderOutputsAndWriterInputs, group: writingGroup)

    // Handle completion.
    let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
    dispatch_group_notify(writingGroup, queue) {
        // `readingAndWritingDidFinish()` is guaranteed to call `finish()` exactly once.
        self.readingAndWritingDidFinish(assetReader, assetWriter: assetWriter)
    }

How would I go about writing this part in modern Swift so that it compiles and works? I've tried writing it as:

    let writingGroup = DispatchGroup()

    // Transfer data from input file to output file.
    self.transferVideoTracks(videoReaderOutputsAndWriterInputs: videoReaderOutputsAndWriterInputs, group: writingGroup)
    self.transferPassthroughTracks(passthroughReaderOutputsAndWriterInputs: passthroughReaderOutputsAndWriterInputs, group: writingGroup)

    // Handle completion.
    writingGroup.notify(queue: .global()) {
        // `readingAndWritingDidFinish()` is guaranteed to call `finish()` exactly once.
        self.readingAndWritingDidFinish(assetReader: assetReader, assetWriter: assetWriter)
    }

However, it's taking an extremely long time for self.readingAndWritingDidFinish(assetReader:assetWriter:) to be called, and my UI is stuck in the ProgressViewController with a loading spinner. Is there something I wrote incorrectly or missed conceptually in the Swift 5 version?
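For context, the Swift translation of the group calls themselves looks faithful: `notify` fires only once every `enter()` on the group has been balanced by a `leave()`, so a notify block that never runs usually points to an unbalanced `enter()` inside one of the transfer functions (for example, a reader output that never reaches its end-of-data path). A minimal, runnable sketch of that contract:

```swift
import Foundation
import Dispatch

// The notify/wait step completes only after every enter() is balanced by a
// leave(). An enter() with no matching leave() leaves the group waiting
// forever -- the classic cause of a stuck spinner.
let writingGroup = DispatchGroup()
let workQueue = DispatchQueue(label: "transfer")   // serial stand-in queue
var transferred: [String] = []

for track in ["video", "passthrough"] {
    writingGroup.enter()             // one enter per in-flight transfer
    workQueue.async {
        transferred.append(track)    // stand-in for copying samples
        writingGroup.leave()         // must be reached on every code path
    }
}

writingGroup.wait()                  // the real code uses notify(queue:) here
print(transferred.count)             // 2
```

In the sample, it is worth verifying that each `requestMediaDataWhenReady(on:using:)` callback eventually calls `markAsFinished()` and `leave()` even when `copyNextSampleBuffer()` returns nil.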
1
0
499
Nov ’21
How do you restrict pan gesture recognizers to when a pinch gesture is occurring?
How do you only accept pan gestures when the user is in the process of a pinch gesture? In other words, I'd like to avoid delivering one-finger pan gestures.

    @IBAction func pinchPiece(_ pinchGestureRecognizer: UIPinchGestureRecognizer) {
        guard pinchGestureRecognizer.state == .began || pinchGestureRecognizer.state == .changed,
              let piece = pinchGestureRecognizer.view else {
            // After pinch releases, zoom back out.
            if pinchGestureRecognizer.state == .ended {
                UIView.animate(withDuration: 0.3, animations: {
                    pinchGestureRecognizer.view?.transform = CGAffineTransform.identity
                })
            }
            return
        }
        adjustAnchor(for: pinchGestureRecognizer)

        let scale = pinchGestureRecognizer.scale
        piece.transform = piece.transform.scaledBy(x: scale, y: scale)
        pinchGestureRecognizer.scale = 1 // Clear scale so that it is the right delta next time.
    }

    @IBAction func panPiece(_ panGestureRecognizer: UIPanGestureRecognizer) {
        guard panGestureRecognizer.state == .began || panGestureRecognizer.state == .changed,
              let piece = panGestureRecognizer.view else { return }

        let translation = panGestureRecognizer.translation(in: piece.superview)
        piece.center = CGPoint(x: piece.center.x + translation.x, y: piece.center.y + translation.y)
        panGestureRecognizer.setTranslation(.zero, in: piece.superview)
    }

    public func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        true
    }
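A sketch via the delegate's `gestureRecognizerShouldBegin(_:)` (assuming the controller keeps a reference to `pinchGestureRecognizer` and is the delegate of both recognizers): refuse to let the pan begin unless the pinch is currently active.

```swift
// UIGestureRecognizerDelegate: gate the pan on the pinch's current state.
func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
    if gestureRecognizer is UIPanGestureRecognizer {
        let pinchState = pinchGestureRecognizer.state
        return pinchState == .began || pinchState == .changed
    }
    return true   // leave other recognizers (including the pinch) alone
}
```

Alternatively, setting `panGestureRecognizer.minimumNumberOfTouches = 2` drops one-finger pans without any delegate involvement, if "two fingers down" is an acceptable proxy for "pinching."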
1
0
891
Nov ’21
What is a good way to reset a UIPanGestureRecognizer's view to its original frame once the pan gesture is done?
Say you have a pinch gesture recognizer and pan gesture recognizer on an image view:

    @IBAction func pinchPiece(_ pinchGestureRecognizer: UIPinchGestureRecognizer) {
        guard pinchGestureRecognizer.state == .began || pinchGestureRecognizer.state == .changed,
              let piece = pinchGestureRecognizer.view else {
            // After pinch releases, zoom back out.
            if pinchGestureRecognizer.state == .ended {
                UIView.animate(withDuration: 0.3, animations: {
                    pinchGestureRecognizer.view?.transform = CGAffineTransform.identity
                })
            }
            return
        }
        adjustAnchor(for: pinchGestureRecognizer)

        let scale = pinchGestureRecognizer.scale
        piece.transform = piece.transform.scaledBy(x: scale, y: scale)
        pinchGestureRecognizer.scale = 1 // Clear scale so that it is the right delta next time.
    }

    @IBAction func panPiece(_ panGestureRecognizer: UIPanGestureRecognizer) {
        guard panGestureRecognizer.state == .began || panGestureRecognizer.state == .changed,
              let piece = panGestureRecognizer.view else { return }

        let translation = panGestureRecognizer.translation(in: piece.superview)
        piece.center = CGPoint(x: piece.center.x + translation.x, y: piece.center.y + translation.y)
        panGestureRecognizer.setTranslation(.zero, in: piece.superview)
    }

    public func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        true
    }

The pinch gesture's view resets to its original state after the gesture is done, which occurs in its else clause. What would be a good way to do the same for the pan gesture recognizer? Ideally I'd like the gesture recognizers to be in an extension of UIImageView, which would also mean that I can't add a stored property to the extension for tracking the initial state of the image view.
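One option (a sketch): drive the pan with the view's transform instead of its center. Then the same `.identity` animation already used by the pinch also undoes the pan, and no stored property is needed in the extension because the original state is simply the identity transform.

```swift
@IBAction func panPiece(_ panGestureRecognizer: UIPanGestureRecognizer) {
    guard let piece = panGestureRecognizer.view else { return }
    switch panGestureRecognizer.state {
    case .began, .changed:
        let translation = panGestureRecognizer.translation(in: piece.superview)
        // Note: concatenated with a scale, the translation is applied in the
        // piece's scaled coordinate space; divide by the current scale if
        // one-to-one finger tracking while zoomed matters.
        piece.transform = piece.transform.translatedBy(x: translation.x, y: translation.y)
        panGestureRecognizer.setTranslation(.zero, in: piece.superview)
    case .ended, .cancelled, .failed:
        UIView.animate(withDuration: 0.3) {
            piece.transform = .identity   // undoes pan and pinch together
        }
    default:
        break
    }
}
```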
5
0
1.2k
Dec ’21
Strategy for avoiding or removing duplicate Vision trajectory request results
I am saving time ranges from an input video asset where trajectories are found, then exporting only those segments to an output video file. Currently I track these time ranges in a stored property var timeRangesOfInterest: [Double: CMTimeRange], which is set in the trajectory request's completion handler:

    func completionHandler(request: VNRequest, error: Error?) {
        guard let request = request as? VNDetectTrajectoriesRequest else { return }
        if let results = request.results,
           results.count > 0 {
            for result in results {
                var timeRange = result.timeRange
                timeRange.start = timeRange.start - self.assetWriterStartTime
                self.timeRangesOfInterest[timeRange.start.seconds] = timeRange
            }
        }
    }

Then these time ranges of interest are used in an export session to only export those segments:

    /*
     Finish writing, and asynchronously evaluate the results from writing
     the samples.
    */
    assetReaderWriter.writingCompleted { result in
        self.exportVideoTimeRanges(timeRanges: self.timeRangesOfInterest.map { $0.1 }) { result in
            completionHandler(result)
        }
    }

Unfortunately, I'm getting repeated trajectory video segments in the output video. Is this maybe because trajectory requests return "in progress" repeated trajectory results with slightly different time range start times? What might be a good strategy for avoiding or removing them? I noticed trajectory segments will appear out of order in the output as well.
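One strategy (a sketch, using plain seconds rather than CMTimeRange so it stands alone): before exporting, sort the collected ranges by start time and merge any that overlap or nearly abut. This collapses the "in progress" near-duplicates and fixes the out-of-order segments in one pass; dictionary iteration order alone never guarantees ordering.

```swift
import Foundation

// Merge overlapping or near-duplicate (start, end) intervals in seconds.
// `tolerance` absorbs slightly different start times from repeated results.
func mergeIntervals(_ intervals: [(start: Double, end: Double)],
                    tolerance: Double = 0.0) -> [(start: Double, end: Double)] {
    let sorted = intervals.sorted { $0.start < $1.start }
    var merged: [(start: Double, end: Double)] = []
    for interval in sorted {
        if var last = merged.last, interval.start <= last.end + tolerance {
            // Overlaps (or nearly abuts) the previous interval: extend it.
            last.end = max(last.end, interval.end)
            merged[merged.count - 1] = last
        } else {
            merged.append(interval)
        }
    }
    return merged
}

print(mergeIntervals([(0.0, 1.0), (0.5, 2.0), (5.0, 6.0)]))
// two intervals survive: 0...2 (merged) and 5...6
```

The CMTimeRange version would convert each range to seconds (or compare with `CMTimeRangeGetUnion` on overlapping ranges) before export.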
0
0
877
Dec ’21
What are best practices for storing API keys / access tokens?
A quick web search shows that storing them in a plist is not recommended. What are the best practices here?
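For secrets that must live on the device at all, the keychain is the usual alternative to a plist. A sketch (generic-password item; the `account` name is arbitrary):

```swift
import Foundation
import Security

// Store a secret in the keychain rather than shipping it in a plist.
// For keys that must stay off the device entirely, the common advice is to
// keep them on your own server and vend short-lived tokens instead.
func saveToken(_ token: String, account: String) -> OSStatus {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecValueData as String: Data(token.utf8)
    ]
    SecItemDelete(query as CFDictionary)   // replace any existing item
    return SecItemAdd(query as CFDictionary, nil)
}
```

Note that nothing embedded in an app binary is truly secret from a determined attacker; the keychain protects against casual extraction, not against reverse engineering.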
1
1
2.4k
Jul ’21
Code signing failed on latest non-beta Xcode in macOS Monterey beta 1
Xcode reports: "Code signing 'WatchDeuce Extension.appex' failed." "View distribution logs for more information." Does anyone have any suggestions for a solution or workaround? I've filed this as FB9171462 with the logs attached.
10
0
4k
Jul ’21
What is the simplest approach to execute a Swift statement only after an asynchronous operation finishes?
For example, Operation A both fetches model data over the network and updates a UICollectionView backed by it. Operation B filters model data. What is a good approach to executing B only after A is finished?
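The simplest shape is a completion handler: A takes a closure and calls it when done, and B lives in that closure. A runnable sketch (`fetchModel` and the `[Int]` payload are stand-ins for the real fetch):

```swift
import Foundation

// Operation A: fetch asynchronously, then report completion.
func fetchModel(completion: @escaping ([Int]) -> Void) {
    DispatchQueue.global().async {
        let model = [3, 1, 2]     // stand-in for fetched model data
        completion(model)         // real UI code would hop to the main queue here
    }
}

// Operation B runs only after A has finished.
let done = DispatchSemaphore(value: 0)   // only needed to keep this script alive
fetchModel { model in
    let filtered = model.filter { $0 > 1 }   // operation B: filter the model
    print(filtered)                          // [3, 2]
    done.signal()
}
done.wait()
```

When several independent As must finish before B, a `DispatchGroup` with `notify(queue:)` generalizes the same idea; in an app, the semaphore is unnecessary because the process keeps running.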
2
0
618
Jul ’21
In UIKit, how do you present a confirmation dialog when the user is swiping to delete a table view cell?
Is there a UIKit equivalent to SwiftUI's confirmationDialog(_:isPresented:titleVisibility:actions:)?
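There is no single-call equivalent, but the same effect can be sketched with a trailing swipe action whose handler presents a `UIAlertController` and defers the action's completion until the user decides:

```swift
// Confirm before deleting: the contextual action's completion handler is
// only called with `true` after the user taps Delete in the action sheet.
func tableView(_ tableView: UITableView,
               trailingSwipeActionsConfigurationForRowAt indexPath: IndexPath) -> UISwipeActionsConfiguration? {
    let delete = UIContextualAction(style: .destructive, title: "Delete") { _, _, completion in
        let alert = UIAlertController(title: "Delete item?", message: nil, preferredStyle: .actionSheet)
        alert.addAction(UIAlertAction(title: "Delete", style: .destructive) { _ in
            // ...remove the model object and the row here...
            completion(true)
        })
        alert.addAction(UIAlertAction(title: "Cancel", style: .cancel) { _ in
            completion(false)   // snaps the cell back without deleting
        })
        self.present(alert, animated: true)
    }
    return UISwipeActionsConfiguration(actions: [delete])
}
```

On iPad, an `.actionSheet` needs a `popoverPresentationController` source (e.g. the swiped cell) before presenting.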
Topic: UI Frameworks, SubTopic: UIKit
3
0
1.2k
Jul ’21
Picker driven by enum
How do you create a picker where the user's selection corresponds to different values of an enumerated type?
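The common SwiftUI pattern is a `CaseIterable`, `Identifiable` enum whose cases drive the rows, with the selection bound directly to a case. A sketch (`Flavor` is an illustrative enum):

```swift
import SwiftUI

// Each row is a case of the enum; `selection` is the chosen case itself.
enum Flavor: String, CaseIterable, Identifiable {
    case chocolate, vanilla, strawberry
    var id: Self { self }
}

struct FlavorPicker: View {
    @State private var selection: Flavor = .chocolate

    var body: some View {
        Picker("Flavor", selection: $selection) {
            ForEach(Flavor.allCases) { flavor in
                Text(flavor.rawValue.capitalized).tag(flavor)
            }
        }
    }
}
```

Because `id` is the case itself, each row's tag matches the selection type and the binding updates without any string round-tripping.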
2
1
11k
Aug ’21