
Reply to Using CBPeripheralManager while using AccessorySetupKit framework
I've been pulling my hair out, even after pulling everything out into a simple Multi-platform project with two demos: ASK and not-ASK. If ASK isn't completely branched, it burns the BT radios.

In the flagship sample project I only see bluetoothServiceUUID provided:

```swift
private static let pinkDice: ASPickerDisplayItem = {
    let descriptor = ASDiscoveryDescriptor()
    descriptor.bluetoothServiceUUID = DiceColor.pink.serviceUUID

    return ASPickerDisplayItem(
        name: DiceColor.pink.displayName,
        productImage: UIImage(named: DiceColor.pink.diceName)!,
        descriptor: descriptor
    )
}()
```

In the docs, however:

> Each display item’s descriptor, a property of type ASDiscoveryDescriptor, needs to have a bluetoothCompanyIdentifier or bluetoothServiceUUID, and at least one of the following accessory identifiers:
> - bluetoothNameSubstring
> - A bluetoothManufacturerDataBlob and bluetoothManufacturerDataMask set to the same length.
> - A bluetoothServiceDataBlob and bluetoothServiceDataMask set to the same length.

It wasn't until I removed bluetoothNameSubstring and ignored the documentation that I got the picker to do something. No clue why, or how to debug it.

The service UUIDs given to me are downcased, and that's what I entered in my NSAccessorySetupBluetoothServices array. It crashed, because it must be running an exact match with CBUUID(string: "my-downcased-uuid").uuidString:

```swift
import CoreBluetooth

let uuid = UUID()
let uuidStringDowncased = uuid.uuidString.lowercased()
let uuidString = uuid.uuidString

let uuidFromDowncased = UUID(uuidString: uuidStringDowncased)
let uuidFromString = UUID(uuidString: uuidString)
let cbuuidFromDowncased = CBUUID(string: uuidStringDowncased)
let cbuuidFromString = CBUUID(string: uuidString)

let result = uuidFromDowncased == uuidFromString ? "They are the same" : "They are different"
// => They are the same
let result2 = cbuuidFromDowncased == cbuuidFromString ? "They are the same" : "They are different"
// => They are the same
```

If CBUUID thinks they are the same, shouldn't ASK too, when parsing undocumented plist keys?

For a Project -> New -> Multi-platform App (so the macOS target must init CBCentralManager with privacy plist keys): what keys do I include, and what are the consequences? What breaks what, depending on what's included?

- NSAccessorySetupBluetoothNames
- NSAccessorySetupKitSupports
- NSAccessorySetupBluetoothServices
- NSBluetoothAlwaysUsageDescription

Within the same binary of an iOS 18 target, how do you demonstrate both implementations with two separate peripherals? Dice one with ASK, dice two without.

The overload for this signature... whyyyyyyy:

- session.showPicker(): Present a picker that shows accessories managed by a Device Discovery Extension in your app.
- session.showPicker(for:): Present a picker that shows discovered accessories matching an array of display items.

invalidate() sounds important; the demo/docs should talk about it more.
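For anyone else stuck here, this is the session lifecycle I ended up sketching for the ASK branch of the demo. It's a minimal sketch under my own assumptions; the controller shape and the event cases I handle are mine, not the sample project's:

```swift
import AccessorySetupKit

final class DiceSessionController {
    private let session = ASAccessorySession()

    func start() {
        // Activate before presenting the picker; events arrive on the given queue.
        session.activate(on: DispatchQueue.main) { event in
            switch event.eventType {
            case .activated:
                // Previously authorized accessories are in session.accessories here.
                break
            case .accessoryAdded:
                print("Accessory added:", event.accessory as Any)
            case .pickerDidDismiss:
                break
            default:
                break
            }
        }
    }

    func presentPicker(with items: [ASPickerDisplayItem]) {
        // The `for:` overload takes display items; the parameterless showPicker()
        // is for accessories managed by a Device Discovery Extension.
        session.showPicker(for: items) { error in
            if let error {
                print("Picker failed:", error)
            }
        }
    }

    func stop() {
        // Tear the ASK session down before anything else touches the radio.
        session.invalidate()
    }
}
```

My working assumption is that invalidate() is what makes the "completely branched" part real: stop the ASK session before the non-ASK dice ever touches a plain CBCentralManager.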
Topic: App & System Services SubTopic: Hardware
Aug ’25
Reply to ITMS-91109: Invalid package contents
Weird, just got this for App.app/Contents/Resources/DepthAnythingV2SmallF16.mlmodelc/weights/weight.bin, which is DepthAnythingV2SmallF16.mlpackage from https://developer.apple.com/machine-learning/models/. But it's only for the macOS build of a Multi-platform project, so I just assumed I needed a macOS plist blessing or a signing + capability. I'll poke around some more and try to get a better understanding.
Jun ’25
Reply to Simulate Background Fetch Not Working on Real Device, Works on Simulator
This is a very confusing reply. Nowhere does OP imply they are unfamiliar with iOS Background Execution Limits. Are you implying that "Debug -> Simulate Background Fetch" should be renamed to "Debug -> DO NOT Simulate Background Fetch"? Or maybe "Debug -> Simulate Background Fetch on Simulator"? (Remember, this is an Xcode.app menu item, not a Simulator.app menu item.)

What if the question is re-framed to: "How do I execute code that normally, and by normally I mean may or may not, gets scheduled, with absolutely no misconceptions about the magical vibe heuristics the conclave uses to schedule said task, right now?" Or, said another way: how can a developer, who is developing, simulate being scheduled by the all-knowing, wise, beautiful system scheduler? I'll sign a new agreement that affirms my task is not worthy and volunteer an earliestBeginDate of INFINITE just to get a block run.

The only useful non-slop answer I have found is to set a breakpoint after submit() and run:

```
e -l objc -- (void)[[BGTaskScheduler sharedScheduler] _simulateLaunchForTaskWithIdentifier:@"framework.this.love.i"]
```

I strongly recommend that you watch WWDC 2020 Session 10063, Background execution demystified, and read https://developer.apple.com/forums/thread/775182. It’s an excellent resource.
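For context on where that breakpoint lives, here's roughly the register/submit shape I test with; a minimal sketch that reuses the placeholder identifier from the lldb one-liner above:

```swift
import BackgroundTasks

let refreshTaskID = "framework.this.love.i"  // placeholder identifier from the lldb command

// Call this before application(_:didFinishLaunchingWithOptions:) returns.
func registerRefreshTask() {
    let registered = BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshTaskID, using: nil) { task in
        // Whatever the refresh actually does goes here.
        task.setTaskCompleted(success: true)
    }
    if !registered {
        print("Is the identifier missing from BGTaskSchedulerPermittedIdentifiers?")
    }
}

func scheduleAppRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshTaskID)
    request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)
    do {
        // Breakpoint goes right after this call, then run the lldb one-liner above.
        try BGTaskScheduler.shared.submit(request)
    } catch {
        print("Could not schedule app refresh:", error)
    }
}
```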
May ’25
Reply to CreateML crashes with Unexpected Error on Feature Extraction
What's the point of even shipping CreateML if it's not capable of extracting 20,000 360x360 embeddings on a flagship M2?

> This has the added advantage that you will only need to do this once, and can then train multiple models, or one model over multiple iterations.

C'mon man, we're talking about 20k images here; no one is going to deploy a vector db for IFPv2. I've literally never even seen Create ML Components in the wild besides the WWDC session classifying 50 bananas. The framework documentation is abysmal. Half the "snippets" and transformer generics in the auto-generated documentation were obviously never tried in an actual compiler. lol

```swift
import CreateMLComponents

// List all the labels.
let labels = ["aloe", "cactus", "person", "pot", "window_sill"]

// Compose the estimator.
let estimator = ImageReader()
    .appending(ImageFeaturePrint())
    .appending(FullyConnectedNetworkMultiLabelClassifier<Float, String>(labels: labels))
```
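For what it's worth, my reading of "do this once" is to precompute the feature prints myself with Vision and cache them, instead of letting the Create ML app redo the extraction every run. A rough sketch of that, with placeholder URLs, and it's my assumption of the suggested workflow, not a quote of it:

```swift
import Foundation
import Vision

// Extract one Image Feature Print embedding per image, once, and cache the raw
// bytes so repeated training runs don't redo the extraction.
func featurePrint(for imageURL: URL) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])
    return request.results?.first
}

let imageURLs: [URL] = []                  // placeholder: the 20k training images
var embeddings: [URL: Data] = [:]
for url in imageURLs {
    if let observation = try? featurePrint(for: url) {
        embeddings[url] = observation.data // raw feature print bytes
    }
}
```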
Jan ’25
Reply to The yolo11 object detection model I exported to coreml stopped working in macOS15.2 beta.
After reading a bunch of related posts, it looks like it might be related to compute units and flexible shapes, and that the ANE doesn't support them. I bet disabling the MLE5Engine flag is coincidentally the equivalent of config.computeUnits = .cpuAndGPU, and that the arch running on the Neural Engine is producing the nonsensical outputs, while constrained to CPU/GPU it returns normal behavior. Going to test and see if I can confirm.
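For anyone who wants to run the same test, this is roughly the harness I mean; a minimal sketch with a placeholder model path, not the actual YOLO11 export:

```swift
import CoreML

// A/B check: load the same compiled model with and without the Neural Engine
// allowed, then compare outputs on identical inputs.
func loadModel(at modelURL: URL, computeUnits: MLComputeUnits) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = computeUnits
    return try MLModel(contentsOf: modelURL, configuration: config)
}

let modelURL = URL(fileURLWithPath: "/path/to/yolo11.mlmodelc")          // placeholder path
let aneModel = try? loadModel(at: modelURL, computeUnits: .all)          // ANE allowed
let cpuGpuModel = try? loadModel(at: modelURL, computeUnits: .cpuAndGPU) // ANE excluded
// Feed the same MLFeatureProvider to both via prediction(from:) and diff the outputs.
```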
Topic: Machine Learning & AI SubTopic: Core ML
Jan ’25
Reply to Getting ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
I don't understand how it's being added in during the TorchScript conversion. I can add the new softmax with a validating lossLayer, but I can't figure out how to swap/pop the original SoftmaxND. I thought del builder.layer[-1] would be enough, but it's a NOOP. So now I'm stuck with protobuf validation errors saying that multiple layers are consuming the same outputs.
Topic: Machine Learning & AI SubTopic: General
Nov ’24
Reply to Build Export differences between Xcode 15.2 and 15.3
This is happening everywhere. This is the only useful information I could find; no clue how they came up with that temp workaround, though. Seems like they might have used the red phone.

https://github.com/googleads/swift-package-manager-google-mobile-ads/issues/62#issuecomment-1981219533

Re-tooling commits, branching on platform for the path differences:

https://github.com/firebase/firebase-ios-sdk/pull/12517/files
https://github.com/facebook/facebook-ios-sdk/issues/2353
https://github.com/google/GoogleAppMeasurement/issues/62

etc, etc.
Mar ’24
Reply to [Create ML Components] The transformer is not representable as a CoreML model (ImageReader).
Somehow I got it to export by composing the transformer explicitly, which gave a ComposedTransformer<ImageFeaturePrint, FullyConnectedNetworkMultiLabelClassifierModel>, so I'm guessing the chaining used in the documentation was never actually exported, since the return type is invalid.

```swift
import CreateMLComponents

let annotatedFeatures = detectionFiles.map {
    AnnotatedFeature(
        feature: directoryURL.appending(component: $0.filename),
        annotation: $0.labels
    )
}

let reader = ImageReader()
let (training, validation) = annotatedFeatures.randomSplit(by: 0.8)
let featurePrint = ImageFeaturePrint(revision: 2)
let classifier = FullyConnectedNetworkMultiLabelClassifier<Float, String>(labels: labels)
let task = featurePrint.appending(classifier)

Task {
    let trainingImages = try await reader.applied(to: training)
    let validationImages = try await reader.applied(to: validation)

    let model = try await task.fitted(
        to: trainingImages,
        validateOn: validationImages,
        eventHandler: { event in
            debugPrint(event)
        }
    )
    try! model.export(to: modelFile)
}
```
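Separately, a small sanity check I run after exporting (my own habit, not something from the Create ML Components docs); it assumes modelFile above ends up as a plain Core ML model on disk:

```swift
import CoreML

// Compile the exported model, load it, and dump the input/output descriptions
// to confirm the export is actually usable before wiring it into anything.
func inspectExportedModel(at exportedModelURL: URL) throws {
    let compiledURL = try MLModel.compileModel(at: exportedModelURL)
    let model = try MLModel(contentsOf: compiledURL)
    print("Inputs:", model.modelDescription.inputDescriptionsByName)
    print("Outputs:", model.modelDescription.outputDescriptionsByName)
}

// Usage, assuming modelFile from above:
// try inspectExportedModel(at: modelFile)
```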
Sep ’23