"Code signing 'WatchDeuce Extension.appex' failed."
"View distribution logs for more information."
Does anyone have any suggestions for a solution or workaround? I've filed this as FB9171462 with the logs attached.
I'm new to async/await, and am currently migrating my completion handler code to Swift 5.5's concurrency features.
After generating an async alternative in Xcode for my function func fetchMatchRecords(completion: @escaping ([Match]) -> Void), it becomes func fetchMatchRecords() async -> [Match].
I'm not sure how it would be used in the context of UIKit and diffable data sources.
In a viewDidLoad, previously it would be
MatchHistoryController.shared.fetchMatchRecords { matches in
    DispatchQueue.main.async {
        self.dataSource.apply(self.initialSnapshot(), animatingDifferences: false)
    }
}
But I'm not sure how it would be used now
Task {
    await MatchHistoryController.shared.fetchMatchRecords()
}
self.dataSource.apply(self.initialSnapshot(), animatingDifferences: false)
How would I make sure that the snapshot is applied only after awaiting a successful fetch result?
Here's the definition of initialSnapshot() that I used:
func initialSnapshot() -> NSDiffableDataSourceSnapshot<Section, Match> {
    var snapshot = NSDiffableDataSourceSnapshot<Section, Match>()
    snapshot.appendSections([.main])
    snapshot.appendItems(MatchHistoryController.shared.matches)
    return snapshot
}
How do I resolve this issue when trying to re-import a custom SF Symbol into Apple's SF Symbols app? Is there an exact export configuration I'm missing in Sketch or Figma?
How do you create a picker where the user's selection corresponds to different values of an enumerated type?
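For concreteness, here's a sketch of what I've been trying, with a hypothetical Flavor enum; I'm not sure it's the recommended approach:

import SwiftUI

enum Flavor: String, CaseIterable, Identifiable {
    case chocolate, vanilla, strawberry
    var id: Self { self }
}

struct FlavorPicker: View {
    // The selection is a value of the enum itself.
    @State private var selectedFlavor: Flavor = .chocolate

    var body: some View {
        Picker("Flavor", selection: $selectedFlavor) {
            ForEach(Flavor.allCases) { flavor in
                Text(flavor.rawValue.capitalized).tag(flavor)
            }
        }
    }
}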
I created a distinct icon set for visionOS, and specified its name in build settings. This is a Designed for iPad app. When I run it in the simulator, only the existing app icon shows. Is this supported for existing iPad apps, or am I missing something? There are no warnings in the asset catalog for this.
A quick web search shows that storing them in a plist is not recommended. What are the best practices here?
As far as I can tell from https://developer.apple.com/documentation/vision/identifying_trajectories_in_video, trajectory detection lets you use characteristics of the detected trajectories for, say, drawing over the video as it plays. However, is it possible to mark which time ranges of the video contain detected trajectories, or perhaps access the frames for which there are trajectories?
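For reference, here's roughly how I'm running the request per frame (a sketch; detectTrajectoriesRequest and analyze(sampleBuffer:) are names I made up). If I'm reading the documentation correctly, VNObservation has a timeRange property that VNTrajectoryObservation inherits; is relying on that the supported way to mark those ranges?

import CoreMedia
import Vision

// Sketch: a single stateful request reused across the video's frames.
let detectTrajectoriesRequest = VNDetectTrajectoriesRequest(
    frameAnalysisSpacing: .zero,   // analyze every frame
    trajectoryLength: 10
) { request, error in
    guard let observations = request.results as? [VNTrajectoryObservation] else { return }
    for observation in observations {
        // Is this timeRange meant to mark where in the source video the trajectory occurs?
        print(observation.timeRange, observation.detectedPoints)
    }
}

func analyze(sampleBuffer: CMSampleBuffer) throws {
    let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
    try handler.perform([detectTrajectoriesRequest])
}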
Is there a UIKit equivalent to SwiftUI's confirmationDialog(_:isPresented:titleVisibility:actions:)?
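For context, the SwiftUI side I'm coming from looks roughly like this (a sketch):

import SwiftUI

struct DeleteButton: View {
    @State private var isConfirming = false

    var body: some View {
        Button("Delete") { isConfirming = true }
            .confirmationDialog("Delete this item?",
                                isPresented: $isConfirming,
                                titleVisibility: .visible) {
                Button("Delete", role: .destructive) { /* perform the delete */ }
                Button("Cancel", role: .cancel) { }
            }
    }
}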
The Swift book says that "to prevent strong reference cycles, delegates are declared as weak references."
protocol SomeDelegate: AnyObject {
}

class ViewController: UIViewController, SomeDelegate {
    weak var delegate: SomeDelegate?

    override func viewDidLoad() {
        super.viewDidLoad()
        delegate = self
    }
}
Say the class then passes that delegate into a struct it creates:
class ViewController: UIViewController, SomeDelegate {
    weak var delegate: SomeDelegate?

    override func viewDidLoad() {
        super.viewDidLoad()
        delegate = self
        let exampleView = ExampleView(delegate: delegate)
        let hostingController = UIHostingController(rootView: exampleView)
        self.present(hostingController, animated: true)
    }
}
struct ExampleView: View {
    var delegate: SomeDelegate!

    var body: some View {
        Text("")
    }
}
Should the delegate property in the struct also be marked with weak?
How do you accept pan gestures only while the user is in the middle of a pinch gesture? In other words, I'd like to avoid delivering one-finger pan gestures.
@IBAction func pinchPiece(_ pinchGestureRecognizer: UIPinchGestureRecognizer) {
    guard pinchGestureRecognizer.state == .began || pinchGestureRecognizer.state == .changed,
          let piece = pinchGestureRecognizer.view else {
        // After the pinch releases, zoom back out.
        if pinchGestureRecognizer.state == .ended {
            UIView.animate(withDuration: 0.3, animations: {
                pinchGestureRecognizer.view?.transform = CGAffineTransform.identity
            })
        }
        return
    }
    adjustAnchor(for: pinchGestureRecognizer)

    let scale = pinchGestureRecognizer.scale
    piece.transform = piece.transform.scaledBy(x: scale, y: scale)
    pinchGestureRecognizer.scale = 1 // Clear the scale so that it is the right delta next time.
}
@IBAction func panPiece(_ panGestureRecognizer: UIPanGestureRecognizer) {
    guard panGestureRecognizer.state == .began || panGestureRecognizer.state == .changed,
          let piece = panGestureRecognizer.view else {
        return
    }
    let translation = panGestureRecognizer.translation(in: piece.superview)
    piece.center = CGPoint(x: piece.center.x + translation.x, y: piece.center.y + translation.y)
    panGestureRecognizer.setTranslation(.zero, in: piece.superview)
}
public func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                              shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
    true
}
I just got an app feature working where the user imports a video file, each frame is fed to a custom action classifier, and only the frames in which a certain action is classified are exported.
However, I'm finding that testing a one hour 4K video at 60 FPS is taking an unreasonably long time - it's been processing for 7 hours now on a MacBook Pro with M1 Max running the Mac Catalyst app. Are there any techniques or general guidance that would help with improving performance? As much as possible I'd like to preserve the input video quality, especially frame rate. One hour length for the video is expected, as it's of a tennis session (could be anywhere from 10 minutes to a couple hours). I made the body pose action classifier with Create ML.
Below, the sampleBufferProcessor closure is where the Vision body pose detection occurs.
/// Transfers the sample data from the AVAssetReaderOutput to the AVAssetWriterInput,
/// processing via a CMSampleBufferProcessor.
///
/// - Parameters:
///   - readerOutput: The source sample data.
///   - writerInput: The destination for the sample data.
///   - queue: The DispatchQueue.
///   - sampleBufferProcessor: An optional closure that processes each sample buffer before it is appended.
///   - completionHandler: The completion handler to run when the transfer finishes.
/// - Tag: transferSamplesAsynchronously
private func transferSamplesAsynchronously(from readerOutput: AVAssetReaderOutput,
                                           to writerInput: AVAssetWriterInput,
                                           onQueue queue: DispatchQueue,
                                           sampleBufferProcessor: SampleBufferProcessor?,
                                           completionHandler: @escaping () -> Void) {
    /*
     The writerInput continuously invokes this closure until finished or
     cancelled. It throws an NSInternalInconsistencyException if called more
     than once for the same writer.
     */
    writerInput.requestMediaDataWhenReady(on: queue) {
        var isDone = false

        /*
         While the writerInput accepts more data, process the sampleBuffer
         and then transfer the processed sample to the writerInput.
         */
        while writerInput.isReadyForMoreMediaData {
            if self.isCancelled {
                isDone = true
                break
            }

            // Get the next sample from the asset reader output.
            guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
                // The asset reader output has no more samples to vend.
                isDone = true
                break
            }

            // Process the sample, if requested.
            do {
                try sampleBufferProcessor?(sampleBuffer)
            } catch {
                // The `readingAndWritingDidFinish()` function picks up this error.
                self.sampleTransferError = error
                isDone = true
            }

            // Append the sample to the asset writer input.
            guard writerInput.append(sampleBuffer) else {
                /*
                 The writer could not append the sample buffer.
                 The `readingAndWritingDidFinish()` function handles any
                 error information from the asset writer.
                 */
                isDone = true
                break
            }
        }

        if isDone {
            /*
             Calling `markAsFinished()` on the asset writer input does the following:
             1. Unblocks any other inputs needing more samples.
             2. Cancels further invocations of this "request media data" callback block.
             */
            writerInput.markAsFinished()

            // Tell the caller the reader output and writer input finished transferring samples.
            completionHandler()
        }
    }
}
The processor closure runs body pose detection on every sample buffer so that later in the VNDetectHumanBodyPoseRequest completion handler, VNHumanBodyPoseObservation results are fed into a custom Core ML action classifier.
private func videoProcessorForActivityClassification() -> SampleBufferProcessor {
    let videoProcessor: SampleBufferProcessor = { sampleBuffer in
        do {
            let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
            try requestHandler.perform([self.detectHumanBodyPoseRequest])
        } catch {
            print("Unable to perform the request: \(error.localizedDescription).")
        }
    }
    return videoProcessor
}
How could I improve the performance of this pipeline?
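One thing I've considered is running the body pose request on only a subset of frames, while still appending every sample buffer to the writer so the exported video keeps its frame rate, though I'm unsure how much that would hurt the action classifier's accuracy. A sketch of that change, using a frameIndex property and stride value I'd add:

private var frameIndex = 0
private let poseDetectionStride = 2   // assumption: run Vision on every 2nd frame

private func videoProcessorForActivityClassification() -> SampleBufferProcessor {
    let videoProcessor: SampleBufferProcessor = { sampleBuffer in
        defer { self.frameIndex += 1 }
        // Skip Vision work on the in-between frames; they are still written out unchanged.
        guard self.frameIndex % self.poseDetectionStride == 0 else { return }
        do {
            let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
            try requestHandler.perform([self.detectHumanBodyPoseRequest])
        } catch {
            print("Unable to perform the request: \(error.localizedDescription).")
        }
    }
    return videoProcessor
}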
Reading a book's solution for summing the elements of an input array of doubles, I found this example that uses Accelerate:
import Accelerate

func challenge52c(numbers: [Double]) -> Double {
    var result: Double = 0.0
    vDSP_sveD(numbers, 1, &result, vDSP_Length(numbers.count))
    return result
}
I can understand why the Accelerate APIs don't adhere to the Swift API design guidelines, but why don't they seem to follow the Cocoa guidelines either? Are there other conventions or precedents that I'm missing?
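For what it's worth, I'm aware that the newer Swift overlay in Accelerate reads much more like standard Swift, so my question is really about the older C-style symbols such as vDSP_sveD:

import Accelerate

let numbers: [Double] = [1.5, 2.5, 3.0]
let total = vDSP.sum(numbers)   // 7.0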
Say that, in this example, the struct
struct Reminder: Identifiable {
    var id: String = UUID().uuidString
    var title: String
    var dueDate: Date
    var notes: String? = nil
    var isComplete: Bool = false
}
is instead decoded from JSON array values (rather than constructed as in the linked example). If a JSON value were missing an "id", how would id then be initialized? When I tried this myself, I got the error keyNotFound(CodingKeys(stringValue: "id", intValue: nil), Swift.DecodingError.Context(codingPath: [_JSONKey(stringValue: "Index 0", intValue: 0)], debugDescription: "No value associated with key CodingKeys(stringValue: \"id\", intValue: nil) (\"id\").", underlyingError: nil)).
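Is a custom init(from:) that falls back to a generated UUID the right way to handle a missing "id"? Something like this sketch:

extension Reminder: Decodable {
    private enum CodingKeys: String, CodingKey {
        case id, title, dueDate, notes, isComplete
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Fall back to a fresh UUID when the JSON value has no "id" key.
        id = try container.decodeIfPresent(String.self, forKey: .id) ?? UUID().uuidString
        title = try container.decode(String.self, forKey: .title)
        dueDate = try container.decode(Date.self, forKey: .dueDate)
        notes = try container.decodeIfPresent(String.self, forKey: .notes)
        isComplete = try container.decodeIfPresent(Bool.self, forKey: .isComplete) ?? false
    }
}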
In the Health app, it appears that cells and not sections are styled in this way:
The closest I know of to this appearance is using an inset-grouped list configuration:
let listConfiguration = UICollectionLayoutListConfiguration(appearance: .insetGrouped)
let listLayout = UICollectionViewCompositionalLayout.list(using: listConfiguration)
collectionView.collectionViewLayout = listLayout
but I'm not sure of a good approach to giving each individual cell this appearance, as in the screenshot above. I'm assuming the list shown is two sections with three total cells, rather than three inset-grouped sections.
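Is the intended route instead to configure each cell's background, something like this sketch (the corner radius value is just a guess at matching the screenshot)?

var background = UIBackgroundConfiguration.listGroupedCell()
background.cornerRadius = 10
cell.backgroundConfiguration = background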