Hey
We are testing a project on Xcode 14 beta 5 and we have an issue with a model that is simply Apple's Vision Feature Print (embeddings). The model takes a 299x299 image as input, passes it through a visionFeaturePrint layer, and outputs a float64[2048] array.
The model is a Core ML Package (v3) and was created with Core ML Tools by cutting away the classification layer that Create ML adds on top of the feature extractor of a classification model.
In the simulator (Apple M1 Mac), the result depends solely on which call invokes the prediction, not on the input image. On a physical device the model works as expected.
let config = MLModelConfiguration()
#if targetEnvironment(simulator)
config.computeUnits = .cpuOnly
#else
config.computeUnits = .all
#endif

model = try! ImageSemanticInfo_iOS(configuration: config)

let buffer = thumb!.toCVPixelBuffer()!
for _ in 0..<3 {
    let results = try! model!.prediction(image: buffer).sceneprint
    print(results[0]) // first entry of the 2048-element embedding
}
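toCVPixelBuffer() above is a custom UIImage extension, not a system API. For reference, here is a minimal sketch of such a helper (assuming a 32BGRA buffer rendered with Core Graphics; not necessarily the exact code we use):

import UIKit
import CoreVideo

extension UIImage {
    // Renders the image into a newly created 32BGRA CVPixelBuffer.
    func toCVPixelBuffer() -> CVPixelBuffer? {
        let width = Int(size.width)
        let height = Int(size.height)
        let attrs: [CFString: Any] = [
            kCVPixelBufferCGImageCompatibilityKey: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey: true
        ]
        var pixelBuffer: CVPixelBuffer?
        guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                  kCVPixelFormatType_32BGRA,
                                  attrs as CFDictionary, &pixelBuffer) == kCVReturnSuccess,
              let buffer = pixelBuffer else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(buffer),
            width: width, height: height,
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
        ) else { return nil }

        // Flip the coordinate system so UIKit drawing comes out upright.
        UIGraphicsPushContext(context)
        context.translateBy(x: 0, y: CGFloat(height))
        context.scaleBy(x: 1, y: -1)
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()

        return buffer
    }
}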
For example, if we take just the first entry of the embedding, we will always get the following results, regardless of the input image used:
0.474750816822052 - First call
0.3231460750102997 - Second call
0.37376347184181213 - Third call
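The easiest way to see this is to feed two different images back to back: in the simulator the values follow the per-call sequence above no matter which buffer is passed. A minimal sketch of that check (the asset names are placeholders, and toCVPixelBuffer() is the helper shown earlier):

// Hypothetical check with two different input images (placeholder asset names).
let bufferA = UIImage(named: "thumbA")!.toCVPixelBuffer()!
let bufferB = UIImage(named: "thumbB")!.toCVPixelBuffer()!

let first = try! model!.prediction(image: bufferA).sceneprint
let second = try! model!.prediction(image: bufferB).sceneprint

// On a device these differ per image; in the simulator they reproduce the
// per-call sequence above (0.4747..., 0.3231..., ...) regardless of the input.
print(first[0], second[0])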