
How do you apply a diffable data source UI snapshot only after awaiting (with async/await) data fetched from the network?
I'm new to async/await, and am currently migrating my completion handler code to Swift 5.5's concurrency features. After generating an async alternative in Xcode for my function

```swift
func fetchMatchRecords(completion: @escaping ([Match]) -> Void)
```

it becomes

```swift
func fetchMatchRecords() async -> [Match]
```

I'm not sure how it would be used in the context of UIKit and diffable data sources. In viewDidLoad, previously it would be

```swift
MatchHistoryController.shared.fetchMatchRecords() { matches in
    DispatchQueue.main.async {
        self.dataSource.apply(self.initialSnapshot(), animatingDifferences: false)
    }
}
```

but I'm not sure how it would be used now:

```swift
Task {
    await MatchHistoryController.shared.fetchMatchRecords()
}
self.dataSource.apply(self.initialSnapshot(), animatingDifferences: false)
```

How would I make sure that the snapshot is applied only after awaiting a successful fetch result? Here's the definition of initialSnapshot() that I used:

```swift
func initialSnapshot() -> NSDiffableDataSourceSnapshot<Section, Match> {
    var snapshot = NSDiffableDataSourceSnapshot<Section, Match>()
    snapshot.appendSections([.main])
    snapshot.appendItems(MatchHistoryController.shared.matches)
    return snapshot
}
```
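A minimal sketch of one approach, assuming viewDidLoad's main-actor context is inherited by the Task: move the apply call inside the Task, after the await, so it can only run once the fetch has finished.

```swift
Task {
    // Suspends here until the fetch completes.
    _ = await MatchHistoryController.shared.fetchMatchRecords()
    // Created from viewDidLoad, this Task inherits the main-actor context,
    // so the snapshot is applied on the main thread, after the fetch.
    self.dataSource.apply(self.initialSnapshot(), animatingDifferences: false)
}
```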
Replies: 1 · Boosts: 0 · Views: 2.1k · Sep ’23
With the Vision framework, is it possible to get the time ranges or frames for which the video contains trajectories?
As far as I can tell from Identifying Trajectories in Video (https://developer.apple.com/documentation/vision/identifying_trajectories_in_video), trajectory detection lets you use characteristics of the detected trajectories for, say, drawing over the video as it plays. However, is it possible to mark the time ranges in which the video has detected trajectories, or perhaps access the frames for which there are trajectories?
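A hedged sketch of one possible angle: VNTrajectoryObservation inherits a timeRange property (a CMTimeRange) from VNObservation, so the observations delivered to the request's completion handler could be collected to mark when trajectories occur. The collectedRanges accumulator here is hypothetical.

```swift
import Vision
import CoreMedia

var collectedRanges: [CMTimeRange] = []

let trajectoryRequest = VNDetectTrajectoriesRequest(frameAnalysisSpacing: .zero,
                                                    trajectoryLength: 10) { request, error in
    guard let observations = request.results as? [VNTrajectoryObservation] else { return }
    // Each observation reports the range of video time it covers.
    collectedRanges.append(contentsOf: observations.map { $0.timeRange })
}
```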
Replies: 1 · Boosts: 0 · Views: 758 · Feb ’21
Should a delegate property passed into a struct also be declared as weak in the struct?
The Swift book says that "to prevent strong reference cycles, delegates are declared as weak references."

```swift
protocol SomeDelegate: AnyObject { }

class viewController: UIViewController, SomeDelegate {
    weak var delegate: SomeDelegate?

    override func viewDidLoad() {
        delegate = self
    }
}
```

Say the class parameterizes a struct with that delegate:

```swift
class viewController: UIViewController, SomeDelegate {
    weak var delegate: SomeDelegate?

    override func viewDidLoad() {
        delegate = self
        let exampleView = ExampleView(delegate: delegate)
        let hostingController = UIHostingController(rootView: exampleView)
        self.present(hostingController, animated: true)
    }
}

struct ExampleView: View {
    var delegate: SomeDelegate!

    var body: some View {
        Text("")
    }
}
```

Should the delegate property in the struct also be marked with weak?
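For what it's worth, a hedged sketch: Swift does allow weak on class-constrained properties inside structs, so the view could hold the delegate weakly rather than as an implicitly unwrapped strong reference.

```swift
struct ExampleView: View {
    // weak is permitted here because SomeDelegate is class-constrained
    // (AnyObject), so the struct doesn't extend the delegate's lifetime.
    weak var delegate: SomeDelegate?

    var body: some View {
        Text("")
    }
}
```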
Replies: 1 · Boosts: 0 · Views: 2.2k · Oct ’21
How do you restrict pan gesture recognizers to when a pinch gesture is occurring?
How do you only accept pan gestures when the user is in the process of a pinch gesture? In other words, I'd like to avoid delivering one-finger pan gestures.

```swift
@IBAction func pinchPiece(_ pinchGestureRecognizer: UIPinchGestureRecognizer) {
    guard pinchGestureRecognizer.state == .began || pinchGestureRecognizer.state == .changed,
          let piece = pinchGestureRecognizer.view else {
        // After pinch releases, zoom back out.
        if pinchGestureRecognizer.state == .ended {
            UIView.animate(withDuration: 0.3, animations: {
                pinchGestureRecognizer.view?.transform = CGAffineTransform.identity
            })
        }
        return
    }
    adjustAnchor(for: pinchGestureRecognizer)

    let scale = pinchGestureRecognizer.scale
    piece.transform = piece.transform.scaledBy(x: scale, y: scale)
    pinchGestureRecognizer.scale = 1 // Clear scale so that it is the right delta next time.
}

@IBAction func panPiece(_ panGestureRecognizer: UIPanGestureRecognizer) {
    guard panGestureRecognizer.state == .began || panGestureRecognizer.state == .changed,
          let piece = panGestureRecognizer.view else {
        return
    }
    let translation = panGestureRecognizer.translation(in: piece.superview)
    piece.center = CGPoint(x: piece.center.x + translation.x, y: piece.center.y + translation.y)
    panGestureRecognizer.setTranslation(.zero, in: piece.superview)
}

public func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                              shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
    true
}
```
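A hedged sketch of one approach, assuming an outlet to the pan recognizer (panRecognizer is a hypothetical name): requiring two touches means one-finger pans are never delivered, which limits panning to the two-finger interaction that accompanies a pinch.

```swift
@IBOutlet var panRecognizer: UIPanGestureRecognizer! // Hypothetical outlet to the pan recognizer.

override func viewDidLoad() {
    super.viewDidLoad()
    // A single finger can no longer begin the pan, so pans are only
    // delivered during the two-finger interaction that drives the pinch.
    panRecognizer.minimumNumberOfTouches = 2
    panRecognizer.maximumNumberOfTouches = 2
}
```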
Replies: 1 · Boosts: 0 · Views: 860 · Nov ’21
General guidelines for improving body pose action classifier performance
I just got an app feature working where the user imports a video file, each frame is fed to a custom action classifier, and only frames with a certain classified action are exported. However, I'm finding that testing with a one-hour 4K video at 60 fps takes an unreasonably long time: it has been processing for 7 hours now on a MacBook Pro with an M1 Max, running the Mac Catalyst app. Are there any techniques or general guidance that would help improve performance? As much as possible, I'd like to preserve the input video quality, especially the frame rate. A one-hour video is expected input, as it's of a tennis session (which could run anywhere from 10 minutes to a couple of hours). I made the body pose action classifier with Create ML.
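One technique worth sketching (my assumption, not from the post): body pose detection rarely needs full 4K input, so downscaling each frame before the Vision request can cut analysis time while the exported video keeps its original resolution and frame rate.

```swift
import CoreImage
import CoreMedia
import Vision

// `request` stands in for the existing VNDetectHumanBodyPoseRequest
// whose observations feed the action classifier.
func detectPose(in sampleBuffer: CMSampleBuffer, request: VNDetectHumanBodyPoseRequest) throws {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    // Analyze a half-scale copy (4K -> roughly 1080p); the writer still
    // appends the untouched original buffer, so export quality is unchanged.
    let scaled = CIImage(cvPixelBuffer: pixelBuffer)
        .transformed(by: CGAffineTransform(scaleX: 0.5, y: 0.5))
    try VNImageRequestHandler(ciImage: scaled).perform([request])
}
```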
Replies: 2 · Boosts: 0 · Views: 1.3k · Jan ’22
How can I improve the speed of running a `VNDetectHumanBodyPoseRequest` on a `VNImageRequestHandler` for every `CMSampleBuffer` of an imported video?
Below, the sampleBufferProcessor closure is where the Vision body pose detection occurs.

```swift
/// Transfers the sample data from the AVAssetReaderOutput to the AVAssetWriterInput,
/// processing via a CMSampleBufferProcessor.
///
/// - Parameters:
///   - readerOutput: The source sample data.
///   - writerInput: The destination for the sample data.
///   - queue: The DispatchQueue.
///   - sampleBufferProcessor: The processor to run on each sample buffer.
///   - completionHandler: The completion handler to run when the transfer finishes.
/// - Tag: transferSamplesAsynchronously
private func transferSamplesAsynchronously(from readerOutput: AVAssetReaderOutput,
                                           to writerInput: AVAssetWriterInput,
                                           onQueue queue: DispatchQueue,
                                           sampleBufferProcessor: SampleBufferProcessor?,
                                           completionHandler: @escaping () -> Void) {
    /*
     The writerInput continuously invokes this closure until finished or cancelled.
     It throws an NSInternalInconsistencyException if called more than once for
     the same writer.
     */
    writerInput.requestMediaDataWhenReady(on: queue) {
        var isDone = false

        /*
         While the writerInput accepts more data, process the sampleBuffer
         and then transfer the processed sample to the writerInput.
         */
        while writerInput.isReadyForMoreMediaData {
            if self.isCancelled {
                isDone = true
                break
            }

            // Get the next sample from the asset reader output.
            guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
                // The asset reader output has no more samples to vend.
                isDone = true
                break
            }

            // Process the sample, if requested.
            do {
                try sampleBufferProcessor?(sampleBuffer)
            } catch {
                // The `readingAndWritingDidFinish()` function picks up this error.
                self.sampleTransferError = error
                isDone = true
            }

            // Append the sample to the asset writer input.
            guard writerInput.append(sampleBuffer) else {
                /*
                 The writer could not append the sample buffer.
                 The `readingAndWritingDidFinish()` function handles any error
                 information from the asset writer.
                 */
                isDone = true
                break
            }
        }

        if isDone {
            /*
             Calling `markAsFinished()` on the asset writer input does the following:
             1. Unblocks any other inputs needing more samples.
             2. Cancels further invocations of this "request media data" callback block.
             */
            writerInput.markAsFinished()

            /*
             Tell the caller the reader output and writer input finished
             transferring samples.
             */
            completionHandler()
        }
    }
}
```

The processor closure runs body pose detection on every sample buffer so that later, in the VNDetectHumanBodyPoseRequest completion handler, VNHumanBodyPoseObservation results are fed into a custom Core ML action classifier.

```swift
private func videoProcessorForActivityClassification() -> SampleBufferProcessor {
    let videoProcessor: SampleBufferProcessor = { sampleBuffer in
        do {
            let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
            try requestHandler.perform([self.detectHumanBodyPoseRequest])
        } catch {
            print("Unable to perform the request: \(error.localizedDescription).")
        }
    }
    return videoProcessor
}
```

How could I improve the performance of this pipeline? After testing with an hour-long 4K video at 60 fps, it took several hours to process running as a Mac Catalyst app on an M1 Max.
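A hedged sketch of one direction (my assumption, not from the post): run the Vision work on several frames concurrently instead of strictly serially, bounded by a semaphore. Each in-flight frame gets its own request, since a single request shouldn't be performed from multiple threads at once; observation ordering for the classifier would then need to be restored downstream.

```swift
import CoreMedia
import Dispatch
import Vision

let inFlightFrames = DispatchSemaphore(value: 4)
let visionQueue = DispatchQueue(label: "body-pose", attributes: .concurrent)

func processConcurrently(_ sampleBuffer: CMSampleBuffer) {
    inFlightFrames.wait() // Blocks once four frames are already being analyzed.
    visionQueue.async {
        defer { inFlightFrames.signal() }
        let request = VNDetectHumanBodyPoseRequest()
        let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
        try? handler.perform([request])
        // request.results now holds the VNHumanBodyPoseObservations for this
        // frame; hand them to the action classifier in frame order elsewhere.
    }
}
```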
Replies: 1 · Boosts: 0 · Views: 1.1k · Jan ’22
Why does Accelerate appear so out of place in terms of naming style?
Reading a solution given in a book for summing the elements of an input array of doubles, an example is given with Accelerate:

```swift
func challenge52c(numbers: [Double]) -> Double {
    var result: Double = 0.0
    vDSP_sveD(numbers, 1, &result, vDSP_Length(numbers.count))
    return result
}
```

I can understand why Accelerate APIs don't adhere to the Swift API design guidelines, but why do they not seem to follow the Cocoa guidelines either? Are there other conventions or precedents that I'm missing?
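As a point of comparison (a hedged aside, not part of the original question): Accelerate's newer Swift overlay wraps many of these C-style vDSP routines in names that do follow Swift conventions.

```swift
import Accelerate

let numbers: [Double] = [1.5, 2.5, 3.0]
// The Swift-overlay equivalent of vDSP_sveD.
let sum = vDSP.sum(numbers) // 7.0
```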
Replies: 2 · Boosts: 0 · Views: 916 · Apr ’22
When decoding a Codable struct from JSON, how do you initialize a property not present in the JSON?
Say that in this example, this struct

```swift
struct Reminder: Identifiable {
    var id: String = UUID().uuidString
    var title: String
    var dueDate: Date
    var notes: String? = nil
    var isComplete: Bool = false
}
```

is instead decoded from JSON array values (rather than constructed like in the linked example). If each JSON value were missing an "id", how would id then be initialized? When trying this myself I got an error:

```
keyNotFound(CodingKeys(stringValue: "id", intValue: nil), Swift.DecodingError.Context(codingPath: [_JSONKey(stringValue: "Index 0", intValue: 0)], debugDescription: "No value associated with key CodingKeys(stringValue: \"id\", intValue: nil) (\"id\").", underlyingError: nil))
```
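A hedged sketch of one approach: a custom init(from:) that uses decodeIfPresent and falls back to a fresh UUID when the key is absent (the compiler still synthesizes CodingKeys even with a hand-written initializer).

```swift
struct Reminder: Identifiable, Codable {
    var id: String = UUID().uuidString
    var title: String
    var dueDate: Date
    var notes: String? = nil
    var isComplete: Bool = false

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Generate an id when the JSON doesn't provide one.
        id = try container.decodeIfPresent(String.self, forKey: .id) ?? UUID().uuidString
        title = try container.decode(String.self, forKey: .title)
        dueDate = try container.decode(Date.self, forKey: .dueDate)
        notes = try container.decodeIfPresent(String.self, forKey: .notes)
        isComplete = try container.decodeIfPresent(Bool.self, forKey: .isComplete) ?? false
    }
}
```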
Replies: 2 · Boosts: 0 · Views: 2.8k · May ’22
How do you configure collection view list cells to look inset with rounded corners?
In the Health app, it appears that individual cells, and not whole sections, are styled this way. The closest I know of to getting this appearance is setting the section to be inset grouped:

```swift
let listConfiguration = UICollectionLayoutListConfiguration(appearance: .insetGrouped)
let listLayout = UICollectionViewCompositionalLayout.list(using: listConfiguration)
collectionView.collectionViewLayout = listLayout
```

but I'm not sure of a good approach to giving each cell this appearance like in the screenshot. I'm assuming the list style collection view shown is two sections with three total cells, rather than three inset grouped sections.
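A hedged sketch of one possible direction (the registration names here are illustrative): give each cell its own rounded background through UIBackgroundConfiguration, so the rounding belongs to the cell rather than to the section.

```swift
let cellRegistration = UICollectionView.CellRegistration<UICollectionViewListCell, String> { cell, indexPath, item in
    var content = cell.defaultContentConfiguration()
    content.text = item
    cell.contentConfiguration = content

    // Each cell draws its own rounded, inset-style background,
    // independent of how its section is configured.
    var background = UIBackgroundConfiguration.listGroupedCell()
    background.backgroundColor = .secondarySystemGroupedBackground
    background.cornerRadius = 10
    cell.backgroundConfiguration = background
}
```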
Topic: UI Frameworks · SubTopic: UIKit
Replies: 2 · Boosts: 0 · Views: 2.8k · May ’22