Is anyone else seeing their apps crash on iOS 17.4 / macOS 14.4 and newer when building a project that simply includes the iOS 18 @AssistantIntent macro?
The beta 4 releases still have this problem, and I have not seen anything about it in the beta release notes. This is the crash message shown in the console when trying to run on 17.4, 17.5, 17.5.1, etc.:
dyld[21935]: Symbol not found: _$s10AppIntents15AssistantSchemaV06IntentD0VAC0E0AAWP Referenced from: <F7A1FEF0-F3B0-379C-A914-D1FB0BA7C693> /Users/jonathan/Library/Developer/CoreSimulator/Devices/CA308F47-BCA8-4429-8599-1BB1CCEAB5B6/data/Containers/Bundle/Application/D7DC8E16-90DB-406A-A521-20F18326E4A7/IntentDemo.app/IntentDemo.debug.dylib Expected in: <88E18E38-24EC-364E-94A1-E7922AD247AF> /Library/Developer/CoreSimulator/Volumes/iOS_21F79/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.5.simruntime/Contents/Resources/RuntimeRoot/System/Library/Frameworks/AppIntents.framework/AppIntents
Obviously, the new Apple Intelligence AssistantIntents only work on the 2024 OS releases. However, even when these new App Intents are marked with @available(iOS 18, macOS 15, *), the app crashes on any earlier OS version. But it runs just fine on iOS 18 and macOS 15...
I would love it if I had simply done something wrong, but I don't think I have. Here is the sample project: https://github.com/JTostitos/FB14323923
Maybe it's a compiler issue that fails to strip out the macro when building for older OSes, or an Xcode issue - I have no idea. I would just like to know why it's not working and how to resolve it.
Thanks in advance for anyone's help.
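For reference, the crash reproduces even with a minimal assistant intent along these lines - a sketch based on the documented .system.search schema, written from memory, so the exact shape may differ from what is in the sample project:

import AppIntents

// Minimal assistant intent of the kind that triggers the crash on pre-iOS 18 /
// pre-macOS 15 systems. Names are illustrative, not copied from the repo.
@available(iOS 18.0, macOS 15.0, *)
@AssistantIntent(schema: .system.search)
struct SearchItemsIntent: ShowInAppSearchResultsIntent {
    static let searchScopes: [StringSearchScope] = [.general]
    var criteria: StringSearchCriteria

    @MainActor
    func perform() async throws -> some IntentResult {
        // Navigate to the in-app search results for the given criteria here.
        return .result()
    }
}

Nothing else in the target needs to reference this type for the dyld failure above to occur.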
I wanted to join Apple Intelligence after I updated my iPhone 15 Pro to the iOS 18.1 beta, but it still shows that I'm on the waitlist. It has been almost a day now. Why? Is this normal?
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
I've been attempting to install tensorflow-metal on my computer so that I can use the GPU instead of the CPU. I already have tensorflow-macos installed, and I am fully up to date with pip and TensorFlow. I'm currently 2 months into building and training a TensorFlow CNN, and I'm at the point where training a single epoch of my network will take a week (I have a lot of data that I need to use). I desperately need to use the GPU but am stuck with the CPU for now. I can't get access to a cluster, so the best I can do is continue to use my M2 MacBook. Is there any other way I can install tensorflow-metal? Is there a way I can use the GPU (rather than the CPU) with TensorFlow if I can't install tensorflow-metal?
I keep getting this error message:
"ERROR: Could not find a version that satisfies the requirement tensorflow-metal (from versions: none) ERROR: No matching distribution found for tensorflow-metal"
I looked on the Apple forums, tried to download it from GitHub (the page is down), and tried anything else I could think of or find on the internet, but it still isn't installing.
I've used the following commands and still no luck:
python -m pip install tensorflow-metal
pip install https://github.com/apple/tensorflow_metal/releases/download/v0.5.0/tensorflow_metal-0.5.0-py3-none-any.whl
pip install tensorflow-metal
pip3 install tensorflow-metal
SYSTEM_VERSION_COMPAT=0 python -m pip install tensorflow-metal
SYSTEM_VERSION_COMPAT=0 pip install tensorflow-macos tensorflow-metal
conda install -c anaconda tensorflow-gpu
Any help would be appreciated! Thanks so much!
With iOS 18, Writing Tools are enabled for text fields all over the system. Under the hood, this uses Apple's on-device LLM to summarize a piece of text. Is there any kind of Swift API to access this LLM summarization feature for text that I provide to the API, instead of forcing the user to select the text?
All errors in TranslationError return the same error code, making it difficult to differentiate between them. How can this issue be resolved?
Topic: Machine Learning & AI
SubTopic: Core ML
Tags: Swift Student Challenge, iOS, Machine Learning, Core ML
I keep getting this error again and again, even after reinstalling.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/__init__.py", line 439, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: _OBJC_CLASS_$_MPSGraphRandomOpDescriptor
  Referenced from: /Users/aman/LLM/env/lib/python3.8/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: /System/Library/Frameworks/MetalPerformanceShadersGraph.framework/Versions/A/MetalPerformanceShadersGraph
Recently, deep learning models have been getting larger, and loading them has sometimes become a bottleneck. I download a Core ML model in .mlpackage format from the internet, use compileModelAtURL to convert the .mlpackage into an .mlmodelc, and then call modelWithContentsOfURL to turn the .mlmodelc into a model handle. Generating the handle with modelWithContentsOfURL is generally very slow. I noticed from WWDC 2023 that it is possible to cache the compiled results (see https://developer.apple.com/videos/play/wwdc2023/10049/?time=677, which states "This compilation includes further optimizations for the specific compute device and outputs an artifact that the compute device can run. Once complete, Core ML caches these artifacts to be used for subsequent model loads."). However, I couldn't find how to control this caching in the documentation.
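For reference, this is roughly how I cache the compiled .mlmodelc myself so the compile step only runs once - a simplified sketch, with packageURL assumed to point at the downloaded .mlpackage. What I'm really after is the device-specific cache the session mentions, which the OS seems to manage on its own after the first load:

import CoreML

// Compile the .mlpackage on first use, keep the resulting .mlmodelc in the app's
// caches directory, and load from that stable location on later launches.
func loadCachedModel(packageURL: URL) async throws -> MLModel {
    let cachesDir = try FileManager.default.url(
        for: .cachesDirectory, in: .userDomainMask,
        appropriateFor: nil, create: true)
    let cachedModelURL = cachesDir
        .appendingPathComponent(packageURL.deletingPathExtension().lastPathComponent)
        .appendingPathExtension("mlmodelc")

    if !FileManager.default.fileExists(atPath: cachedModelURL.path) {
        // First launch after download: compile, then keep the compiled bundle.
        let compiledURL = try await MLModel.compileModel(at: packageURL)
        try FileManager.default.copyItem(at: compiledURL, to: cachedModelURL)
    }

    return try MLModel(contentsOf: cachedModelURL, configuration: MLModelConfiguration())
}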
Topic: Machine Learning & AI
SubTopic: Core ML
We have code that crashed. The crash stack is as follows:
Thread 26 Crashed:
0 CoreFoundation 0x0000000198b0569c CFRelease + 44
1 CoreFoundation 0x0000000198b12334 __CFBasicHashRehash + 1172
2 CoreFoundation 0x0000000198b015dc __CFBasicHashAddValue + 100
3 CoreFoundation 0x0000000198b232e4 CFDictionarySetValue + 208
4 Foundation 0x00000001979b0378 _getStringAtMarker + 464
5 Foundation 0x00000001979b016c _NSXPCSerializationStringForObject + 56
6 Foundation 0x00000001979cec4c __44-[NSXPCDecoder _decodeArrayOfObjectsForKey:]_block_invoke + 52
7 Foundation 0x00000001979ceb90 _NSXPCSerializationIterateArrayObject + 208
8 Foundation 0x00000001979cda7c -[NSXPCDecoder _decodeArrayOfObjectsForKey:] + 240
9 Foundation 0x00000001979cd1bc -[NSDictionary(NSDictionary) initWithCoder:] + 176
10 Foundation 0x00000001979ae6e8 _decodeObject + 1264
11 Foundation 0x00000001979cec4c __44-[NSXPCDecoder _decodeArrayOfObjectsForKey:]_block_invoke + 52
12 Foundation 0x00000001979ceb90 _NSXPCSerializationIterateArrayObject + 208
13 Foundation 0x00000001979cda7c -[NSXPCDecoder _decodeArrayOfObjectsForKey:] + 240
14 Foundation 0x00000001979cd1a4 -[NSDictionary(NSDictionary) initWithCoder:] + 152
15 Foundation 0x00000001979ae6e8 _decodeObject + 1264
16 Foundation 0x00000001979ad030 -[NSXPCDecoder _decodeObjectOfClasses:atObject:] + 148
17 Foundation 0x0000000197a0a7f0 _NSXPCSerializationDecodeTypedObjCValuesFromArray + 892
18 Foundation 0x0000000197a0a1f8 _NSXPCSerializationDecodeInvocationArgumentArray + 412
19 Foundation 0x0000000197a0866c -[NSXPCDecoder __decodeXPCObject:allowingSimpleMessageSend:outInvocation:outArguments:outArgumentsMaxCount:outMethodSignature:outSelector:isReply:replySelector:] + 700
20 Foundation 0x0000000197a61078 -[NSXPCDecoder _decodeReplyFromXPCObject:forSelector:] + 76
21 Foundation 0x0000000197a5f690 -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 252
22 Foundation 0x0000000197a63664 __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
23 Foundation 0x0000000197a08058 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] + 2244
24 CoreFoundation 0x0000000198b19d88 ___forwarding___ + 1016
25 CoreFoundation 0x0000000198b198d0 _CF_forwarding_prep_0 + 96
26 AppleNeuralEngine 0x00000001e912ab1c -[_ANEDaemonConnection loadModel:sandboxExtension:options:qos:withReply:] + 332
27 AppleNeuralEngine 0x00000001e912a674 __44-[_ANEClient doLoadModel:options:qos:error:]_block_invoke + 360
28 libdispatch.dylib 0x00000001a0a21dd4 _dispatch_client_callout + 20
29 libdispatch.dylib 0x00000001a0a312c4 _dispatch_lane_barrier_sync_invoke_and_complete + 56
30 AppleNeuralEngine 0x00000001e9129ef0 -[_ANEClient doLoadModel:options:qos:error:] + 500
31 Espresso 0x00000001a7e02034 Espresso::ANERuntimeEngine::compiler::build_segment(std::__1::shared_ptr<Espresso::abstract_batch> const&, int, Espresso::net_compiler_segment_based::segment_t const&) + 3736
32 Espresso 0x00000001a7e010cc Espresso::net_compiler_segment_based::build(std::__1::shared_ptr<Espresso::abstract_batch> const&, int, int) + 384
33 Espresso 0x00000001a7df02a4 Espresso::ANERuntimeEngine::compiler::build(std::__1::shared_ptr<Espresso::abstract_batch> const&, int, int) + 120
34 Espresso 0x00000001a7e1b3a4 Espresso::net::__build(std::__1::shared_ptr<Espresso::abstract_batch> const&, int, int) + 360
35 Espresso 0x00000001a7e178e0 Espresso::abstract_context::compute_batch_sync(void (std::__1::shared_ptr<Espresso::abstract_batch> const&) block_pointer) + 112
36 Espresso 0x00000001a7e198b8 EspressoLight::espresso_plan::prepare_compiler_if_needed() + 3208
37 Espresso 0x00000001a7e183f4 EspressoLight::espresso_plan::prepare() + 1712
38 Espresso 0x00000001a7da8e78 espresso_plan_build_with_options + 300
39 Espresso 0x00000001a7da8d30 espresso_plan_build + 44
40 CoreML 0x00000001b346645c -[MLNeuralNetworkEngine rebuildPlan:error:] + 536
41 CoreML 0x00000001b3464294 -[MLNeuralNetworkEngine _setupContextAndPlanWithConfiguration:usingCPU:reshapeWithContainer:error:] + 3132
42 CoreML 0x00000001b34797a0 -[MLNeuralNetworkEngine initWithContainer:configuration:error:] + 196
43 CoreML 0x00000001b347962c +[MLNeuralNetworkEngine loadModelFromCompiledArchive:modelVersionInfo:compilerVersionInfo:configuration:error:] + 164
44 CoreML 0x00000001b34792a0 +[MLLoader _loadModelWithClass:fromArchive:modelVersionInfo:compilerVersionInfo:configuration:error:] + 144
45 CoreML 0x00000001b3478c64 +[MLLoader _loadModelFromArchive:configuration:modelVersion:compilerVersion:loaderEvent:useUpdatableModelLoaders:loadingClasses:error:] + 532
46 CoreML 0x00000001b34650c8 +[MLLoader _loadWithModelLoaderFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:] + 424
47 CoreML 0x00000001b3474bc8 +[MLLoader _loadModelFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:] + 460
48 CoreML 0x00000001b347a024 +[MLLoader _loadModelFromAssetAtURL:configuration:loaderEvent:error:] + 244
49 CoreML 0x00000001b3479cbc +[MLLoader loadModelFromAssetAtURL:configuration:error:] + 104
50 CoreML 0x00000001b347ac2c -[MLModelAsset load:] + 564
51 CoreML 0x00000001b347a9c4 -[MLModelAsset modelWithError:] + 24
52 CoreML 0x00000001b347a7b4 +[MLModel modelWithContentsOfURL:configuration:error:] + 172
53 CoreML 0x00000001b37afbc4 +[MLModel modelWithContentsOfURL:error:] + 76
Core code:
MLModel *model = nil;
NSError *error = nil;
@try {
    model = [MLModel modelWithContentsOfURL:modelURL error:&error];
}
@catch (NSException *exception) {
    model = nil;
    return Ret_OperationErr_InvalidInit;
}
Two questions:
What does this stack mean?
I added @try/@catch, so why is it still crashing?
Topic: Machine Learning & AI
SubTopic: Core ML
I am searching for a method to remove the background from a video. The video can come from a camera session fileOutput URL or from the photo library.
I was able to get a live preview of the removed background using the depth data and some Metal code from the sample Enhancing Live Video by Leveraging TrueDepth Camera Data. However, I couldn't figure out a way to save this as a video so that I can upload it.
Also, this method uses over 150% CPU (Xcode CPU usage), which seems like a lot; the device heats up quickly and drops frames once it is hot.
I also found something similar on GitHub, a Core ML example by Dmitry Voitekh, which uses less than 40% CPU.
Any information regarding this would be helpful.
Objective: remove the background from a video and save it.
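For the saving part, a rough sketch of one possible approach: feed each processed frame into an AVAssetWriter through a pixel buffer adaptor. This assumes the background-removed frames are available as 32BGRA CVPixelBuffers with presentation timestamps:

import AVFoundation
import CoreVideo

// Writes processed frames to an .mp4. Sketch only - error handling and
// real-time frame dropping are kept minimal.
final class ProcessedVideoWriter {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
            ])
        writer.add(input)
        guard writer.startWriting() else {
            throw writer.error ?? NSError(domain: "ProcessedVideoWriter", code: -1)
        }
        writer.startSession(atSourceTime: .zero)
    }

    // Call once per background-removed frame.
    func append(_ pixelBuffer: CVPixelBuffer, at time: CMTime) {
        guard input.isReadyForMoreMediaData else { return } // simplest possible back-pressure
        if !adaptor.append(pixelBuffer, withPresentationTime: time) {
            print("Failed to append frame: \(writer.error?.localizedDescription ?? "unknown error")")
        }
    }

    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}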
I am checking the actual behavior on iOS 18.1 beta 3 devices, but the following items are not functioning:
Image Playground
Image Wand
Genmoji
Please let me know the following:
Are the above 3 items available in iOS 18.1 beta 3?
If they are, are any steps other than enabling Apple Intelligence required to use these features?
Xcode Version: Version 15.2 (15C500b)
com.github.apple.coremltools.source: torch==1.12.1
com.github.apple.coremltools.version: 7.2
Compute: Mixed (Float16, Int32)
Storage: Float16
The input to the mlpackage is MultiArray (Float16 1 × 1 × 544 × 960)
The flexibility is: 1 × 1 × 544 × 960 | 1 × 1 × 384 × 640 | 1 × 1 × 736 × 1280 | 1 × 1 × 1088 × 1920
I tested this on iPhone XR, iPhone 11, iPhone 12, iPhone 13, and iPhone 14. On all devices except the iPhone 11, the model runs correctly on the NPU. However, on the iPhone 11, the model runs on the CPU instead.
Here is the CoreMLTools conversion code I used:
mlmodel = ct.convert(
    trace,
    inputs=[ct.TensorType(shape=input_shape, name="input", dtype=np.float16)],
    outputs=[ct.TensorType(name="output", dtype=np.float16, shape=output_shape)],
    convert_to='mlprogram',
    minimum_deployment_target=ct.target.iOS16,
)
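To help narrow down the iPhone 11 behaviour, a small sketch of loading the compiled model with an explicit compute-unit preference - .cpuAndNeuralEngine excludes the GPU, so a CPU fallback on a given device is easier to attribute to the ANE path. modelURL is assumed to point at the compiled .mlmodelc:

import CoreML

// Load the same model on each test device with the GPU excluded, then compare:
// if the iPhone 11 still runs on CPU here, the fallback is likely coming from
// the ANE compiler rather than a scheduling choice.
func loadForANEComparison(modelURL: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine
    return try MLModel(contentsOf: modelURL, configuration: config)
}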
We have an application that receives a message (through MQTT) from an external system to snap a photo, runs a Core ML Vision request on the image, and then sends the results back. The customer has hundreds of devices, and recently on a couple of them (iPhone 13 Pros) the customer encountered an issue in which the devices were not responding in time. There was no crash; some individual inferences were just slowed down. The device performs thousands of requests per day. Upon further evaluation of the requests before and after in the device logs, I noticed that Apple loads the following:
default 2024-09-04 13:18:31.310401 -0400 ProcessName Processing image for reference: XXX
default 2024-09-04 13:18:31.403606 -0400 ProcessName Found matching service: H1xANELoadBalancer
default 2024-09-04 13:18:31.403646 -0400 ProcessName Found matching service: H11ANEIn
default 2024-09-04 13:18:31.403661 -0400 ProcessName Found ANE device :1
default 2024-09-04 13:18:31.403681 -0400 ProcessName Total num of devices 1
default 2024-09-04 13:18:31.403681 -0400 ProcessName (Single-ANE System) Opening H11ANE device at index 0
default 2024-09-04 13:18:31.403681 -0400 ProcessName H11ANEDevice::H11ANEDeviceOpen, usage type: 1
In a good scenario (above), these actions are performed very quickly (in a split second), and the app does nothing else until the Core ML inference result is returned. In the bad scenario (below), there is a delay of about 4 seconds between the app handing control to the Vision request and getting the response back (leading to timeouts for the customer):
default 2024-09-04 13:19:08.777468 -0400 ProcessName Processing image for reference: ZZZ
default 2024-09-04 13:19:12.199758 -0400 ProcessName Found matching service: H1xANELoadBalancer
default 2024-09-04 13:19:12.199800 -0400 ProcessName Found matching service: H11ANEIn
default 2024-09-04 13:19:12.199812 -0400 ProcessName Found ANE device :1
default 2024-09-04 13:19:12.199832 -0400 ProcessName Total num of devices 1
default 2024-09-04 13:19:12.199834 -0400 ProcessName (Single-ANE System) Opening H11ANE device at index 0
default 2024-09-04 13:19:12.199834 -0400 ProcessName H11ANEDevice::H11ANEDeviceOpen, usage type: 1
The logs are in order, I haven't removed anything. The code is fairly simple, it's just running a vision request without doing much. Has anyone encountered this before?
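For reference, this is roughly how I've been timing an individual request around the point where the delay shows up (a simplified sketch - the model and pixel buffer here stand in for the app's actual ones):

import Foundation
import Vision

// Time one Vision/Core ML request so a slow ANE bring-up (like the ~4 second
// gap in the logs above) is visible per inference.
func timedInference(visionModel: VNCoreMLModel, pixelBuffer: CVPixelBuffer) throws -> TimeInterval {
    let request = VNCoreMLRequest(model: visionModel)
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])

    let start = CFAbsoluteTimeGetCurrent()
    try handler.perform([request])          // returns once results are available
    let elapsed = CFAbsoluteTimeGetCurrent() - start

    print("Inference took \(elapsed) s, results: \(request.results?.count ?? 0)")
    return elapsed
}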
Topic: Machine Learning & AI
SubTopic: Core ML
Hi,
I have an existing app with AppEntities defined, that works on iOS16 and iOS17. The AppEntities also have EntityPropertyQuery defined, so they work as 'find intents'. I want to use the new @AssistantEntity on iOS18, while supporting the previous versions. What's the best way to do this?
For example, I have a 'person' AppEntity:
@available(iOS 16.0, macOS 13.0, watchOS 9.0, tvOS 16.0, *)
struct CJLogAppEntity: AppEntity {
    static var defaultQuery = CJLogAppEntityQuery()
    ....
}

struct CJLogAppEntityQuery: EntityPropertyQuery {
    ...
}
How do I adopt this with @AssistantEntity(schema: .journal.entry) for iOS18, while maintaining compatibility with iOS16 and 17?
Hello everyone,
I have a PyTorch model that outputs an image. I converted this model to CoreML using coremltools, and the resulting CoreML model can be used in my iOS project to perform inference using the MLModel's prediction function, which returns a result of type CVPixelBuffer.
I want to avoid allocating memory every time I call the prediction function. Instead, I would like to use a pre-allocated buffer. I noticed that MLModel provides an overloaded prediction function that accepts an MLPredictionOptions object. This object has an outputBackings member, which allows me to pass a pre-allocated CVPixelBuffer.
However, when I attempt to do this, I encounter the following error:
Copy from tensor to pixel buffer (pixel_format_type: BGRA, image_pixel_type: BGR8, component_dtype: INT, component_pack: FMT_32) is not supported.
Could someone point out what I might be doing wrong? How can I make MLModel use my pre-allocated CVPixelBuffer instead of creating a new one each time?
Here is the Python code I used to convert the PyTorch model to CoreML, where I specified the color_layout as coremltools.colorlayout.BGR:
def export_ml(model, resolution="640x360"):
    ml_path = f"model.mlpackage"
    print("exporting ml model")
    width, height = map(int, resolution.split('x'))
    img0 = torch.randn(1, 3, height, width)
    img1 = torch.randn(1, 3, height, width)
    traced_model = torch.jit.trace(model, (img0, img1))
    input_shape = ct.Shape(shape=(1, 3, height, width))
    output_type_img = ct.ImageType(name="out", scale=1.0, bias=[0, 0, 0], color_layout=ct.colorlayout.BGR)
    ml_model = ct.convert(
        traced_model,
        inputs=[input_type_img0, input_type_img1],
        outputs=[output_type_img]
    )
    ml_model.save(ml_path)
Here is the Swift code in my iOS project that calls the MLModel's prediction function:
func prediction(image1: CVPixelBuffer, image2: CVPixelBuffer, model: MLModel) -> CVPixelBuffer? {
    let options = MLPredictionOptions()
    guard let outputBuffer = outputBacking else {
        fatalError("Failed to create CVPixelBuffer.")
    }
    options.outputBackings = ["out": outputBuffer]

    // Perform the prediction
    guard let prediction = try? model.prediction(from: RifeInput(img0: image1, img1: image2), options: options) else {
        Log.i("Failed to perform prediction")
        return nil
    }

    // Extract the result
    guard let cvPixelBuffer = prediction.featureValue(for: "out")?.imageBufferValue else {
        Log.i("Failed to get results from the model")
        return nil
    }
    return cvPixelBuffer
}
Here is the code I used to create the outputBacking:
let attributes: [String: Any] = [
    kCVPixelBufferCGImageCompatibilityKey as String: true,
    kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
    kCVPixelBufferWidthKey as String: Int(640),
    kCVPixelBufferHeightKey as String: Int(360),
    kCVPixelBufferIOSurfacePropertiesKey as String: [:]
]
let status = CVPixelBufferCreate(kCFAllocatorDefault, 640, 360, kCVPixelFormatType_32BGRA, attributes as CFDictionary, &outputBacking)
guard let outputBuffer = outputBacking else {
    fatalError("Failed to create CVPixelBuffer.")
}
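A small diagnostic sketch for comparing the buffer above with what the model actually expects (the output name "out" matches the code above):

import CoreML

// Print the pixel format and size Core ML expects for the "out" image output,
// to compare against the 32BGRA buffer created above.
func dumpOutputImageConstraint(of model: MLModel) {
    guard let description = model.modelDescription.outputDescriptionsByName["out"],
          let constraint = description.imageConstraint else {
        print("No image constraint found for output 'out'")
        return
    }
    // pixelFormatType is a four-character OSType (e.g. kCVPixelFormatType_32BGRA).
    print("Expected pixel format: \(constraint.pixelFormatType), size: \(constraint.pixelsWide)x\(constraint.pixelsHigh)")
}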
Any help or guidance would be greatly appreciated!
Thank you!
Topic: Machine Learning & AI
SubTopic: Core ML
I've got Apple Intelligence working on my iPhone 15 Pro Max, and Siri 2.0 works as expected. However, the options below don't seem to be working / appearing for me:
AI in Mail
AI in Notes
Clean Up in Photos (just stuck on downloading)
Not sure if my setup is wrong or it's just not available for me yet.
When I try to run basically any Core ML model using MLPredictionOptions.outputBackings, inference throws the following error:
2024-09-11 15:36:00.184740-0600 run_demo[4260:64822] [coreml] Unrecognized ANE execution priority (null)
2024-09-11 15:36:00.185380-0600 run_demo[4260:64822] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Unrecognized ANE execution priority (null)'
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Unrecognized ANE execution priority (null)'
*** First throw call stack:
(
0 CoreFoundation 0x000000019812cec0 __exceptionPreprocess + 176
1 libobjc.A.dylib 0x0000000197c12cd8 objc_exception_throw + 88
2 CoreFoundation 0x000000019812cdb0 +[NSException exceptionWithName:reason:userInfo:] + 0
3 CoreML 0x00000001a1bf6504 _ZN12_GLOBAL__N_141espressoPlanPriorityFromPredictionOptionsEP19MLPredictionOptions + 264
4 CoreML 0x00000001a1bf68c0 -[MLNeuralNetworkEngine _matchEngineToOptions:error:] + 236
5 CoreML 0x00000001a1be254c __62-[MLNeuralNetworkEngine predictionFromFeatures:options:error:]_block_invoke + 68
6 libdispatch.dylib 0x0000000197e20658 _dispatch_client_callout + 20
7 libdispatch.dylib 0x0000000197e2fcd8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
8 CoreML 0x00000001a1be2450 -[MLNeuralNetworkEngine predictionFromFeatures:options:error:] + 304
9 CoreML 0x00000001a1c9e118 -[MLDelegateModel _predictionFromFeatures:usingState:options:error:] + 776
10 CoreML 0x00000001a1c9e4a4 -[MLDelegateModel predictionFromFeatures:options:error:] + 136
11 libMLBackend_coreml.dylib 0x00000001002f19f0 _ZN6CoreML8runModelENS_5ModelERNSt3__16vectorIPvNS1_9allocatorIS3_EEEES7_ + 904
12 libMLBackend_coreml.dylib 0x00000001002c56e8 _ZZN8ModelImp9runCoremlEPN2ML7Backend17ModelIoBindingImpEENKUlvE_clEv + 120
13 libMLBackend_coreml.dylib 0x00000001002c1e40 _ZNSt3__110__function6__funcIZN2ML4Util10WorkThread11runInThreadENS_8functionIFvvEEEEUlvE_NS_9allocatorIS8_EES6_EclEv + 40
14 libMLBackend_coreml.dylib 0x00000001002bc3a4 _ZZN2ML4Util10WorkThreadC1EvENKUlvE_clEv + 160
15 libMLBackend_coreml.dylib 0x00000001002bc244 _ZNSt3__114__thread_proxyB7v160006INS_5tupleIJNS_10unique_ptrINS_15__thread_structENS_14default_deleteIS3_EEEEZN2ML4Util10WorkThreadC1EvEUlvE_EEEEEPvSC_ + 52
16 libsystem_pthread.dylib 0x0000000197fd32e4 _pthread_start + 136
17 libsystem_pthread.dylib 0x0000000197fce0fc thread_start + 8
)
libc++abi: terminating due to uncaught exception of type NSException
Interestingly, if I don't use MLPredictionOptions to set pre-allocated output backings, then inference appears to run as expected.
A similar issue seems to have been discussed and fixed here: https://developer.apple.com/forums/thread/761649; however, I'm seeing this issue on a beta build that I downloaded today (Sept 11, 2024).
Will this be fixed? Any advice would be greatly appreciated.
Thanks
As a user, when viewing a photo or image, I want to be able to tell Siri, “add this to [app]”, similar to the example from the WWDC presentation where a photo is added to a note in the Notes app.
Is this... possible with app domains as they are documented?
I see domains like open-file and open-photo, but I don't know if those are appropriate for this kind of functionality?
Hello,
I'm using DockKit within my SwiftUI application with GetStream. Before updating to iOS 18 yesterday, the custom tracking using DockKit worked like a charm, but after updating it stopped working unexpectedly.
What's more curious: using the official GetStream video calls application, it still works on iOS 18, but not within my application. I can confirm that my iPhone is still paired, I receive logs about the current docking state, and everything seems fine.
Any suggestions on what I'm missing here?
I need to add the AI Image Playground to my iOS app with UIKit. WWDC 2024 introduced a new Image Playground API, but I haven't found any official documentation for it yet, so how can I add it?
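If ImagePlaygroundViewController from the ImagePlayground framework is the intended entry point - I haven't verified this against official documentation, so treat the type and delegate names below as assumptions - presenting it from UIKit might look roughly like this:

import UIKit
import ImagePlayground

// Rough, unverified sketch: present the system Image Playground sheet from a
// UIKit view controller and receive the generated image's file URL back.
@available(iOS 18.1, *)
final class PlaygroundHostViewController: UIViewController, ImagePlaygroundViewController.Delegate {

    func showPlayground() {
        let playground = ImagePlaygroundViewController()
        playground.delegate = self
        playground.concepts = [.text("a cat astronaut floating above Earth")]  // assumed concept API
        present(playground, animated: true)
    }

    func imagePlaygroundViewController(_ imagePlaygroundViewController: ImagePlaygroundViewController,
                                       didCreateImageAt imageURL: URL) {
        // Use the generated image, e.g. load it into an image view, then dismiss.
        dismiss(animated: true)
    }

    func imagePlaygroundViewControllerDidCancel(_ imagePlaygroundViewController: ImagePlaygroundViewController) {
        dismiss(animated: true)
    }
}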
I want to understand how my model's confidence behaves. When I detect an object in real time with the camera using my ML model on Android, it gives me varied confidence values such as 75, 40, 30, or 95, not always in the 95-100 range. But when I use the same model on iOS, it always gives me confidence above 95 in every case. What do you think the reason could be?
Topic: Machine Learning & AI
SubTopic: Core ML