Hello. My app is crashing a lot with this issue. I can't reproduce the problem, but I can see it occurring on users' devices. The Crashlytics report shows the following lines:
Crashed: AXSpeech
0 libsystem_pthread.dylib 0x1824386bc pthread_mutex_lock$VARIANT$mp + 278
1 CoreFoundation 0x1826d3a34 CFRunLoopSourceSignal + 68
2 Foundation 0x18319ec90 performQueueDequeue + 468
3 Foundation 0x18325a020 __NSThreadPerformPerform + 136
4 CoreFoundation 0x1827b7404 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
5 CoreFoundation 0x1827b6ce0 __CFRunLoopDoSources0 + 456
6 CoreFoundation 0x1827b479c __CFRunLoopRun + 1204
7 CoreFoundation 0x1826d4da8 CFRunLoopRunSpecific + 552
8 Foundation 0x183149674 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 304
9 libAXSpeechManager.dylib 0x192852830 -[AXSpeechThread main] + 284
10 Foundation 0x183259efc __NSThread__start__ + 1040
11 libsystem_pthread.dylib 0x182435220 _pthread_body + 272
12 libsystem_pthread.dylib 0x182435110 _pthread_body + 290
13 libsystem_pthread.dylib 0x182433b10 thread_start + 4
The crash occurs on different threads (never the main thread). It is driving me crazy... Can anybody help me? Thanks a lot.
I have a terrible crash problem in my app when I use AVSpeechSynthesizer, and I can't reproduce it. Here is my code (it's a singleton):
- (void)stopSpeech {
if ([self.synthesizer isPaused]) {
return;
}
if ([self.synthesizer isSpeaking]) {
BOOL isSpeech = [self.synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
if (!isSpeech) {
[self.synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryWord];
}
}
self.stopBlock ? self.stopBlock() : nil;
}
-(AVSpeechSynthesizer *)synthesizer {
if (!_synthesizer) {
_synthesizer = [[AVSpeechSynthesizer alloc] init];
_synthesizer.delegate = self;
}
return _synthesizer;
}
When the user leaves the page, I call the stopSpeech method. Then I got a lot of crash reports. Here is a crash log:
# Crashlytics - plaintext stacktrace downloaded by liweican at Mon, 13 May 2019 03:03:24 GMT
# URL: https://fabric.io/youdao-dict/ios/apps/com.youdao.udictionary/issues/5a904ed88cb3c2fa63ad7ed3?time=last-thirty-days/sessions/b1747d91bafc4680ab0ca8e3a702c52c_DNE_0_v2
# Organization: zzz
# Platform: ios
# Application: U-Dictionary
# Version: 3.0.5.4
# Bundle Identifier: com.youdao.UDictionary
# Issue ID: 5a904ed88cb3c2fa63ad7ed3
# Session ID: b1747d91bafc4680ab0ca8e3a702c52c_DNE_0_v2
# Date: 2019-05-13T02:27:00Z
# OS Version: 12.2.0 (16E227)
# Device: iPhone 8 Plus
# RAM Free: 17%
# Disk Free: 64.6%
#19. Crashed: AXSpeech
0 libsystem_pthread.dylib 0x19c15e5b8 pthread_mutex_lock$VARIANT$armv81 + 102
1 CoreFoundation 0x19c4cf84c CFRunLoopSourceSignal + 68
2 Foundation 0x19cfc7280 performQueueDequeue + 464
3 Foundation 0x19cfc680c __NSThreadPerformPerform + 136
4 CoreFoundation 0x19c4d22bc __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
5 CoreFoundation 0x19c4d223c __CFRunLoopDoSource0 + 88
6 CoreFoundation 0x19c4d1b74 __CFRunLoopDoSources0 + 256
7 CoreFoundation 0x19c4cca60 __CFRunLoopRun + 1004
8 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
9 Foundation 0x19ce99fcc -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 300
10 libAXSpeechManager.dylib 0x1ac16c94c -[AXSpeechThread main] + 264
11 Foundation 0x19cfc66e4 __NSThread__start__ + 984
12 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
13 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
14 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
--
#0. com.apple.main-thread
0 libsystem_malloc.dylib 0x19c11ce24 small_free_list_remove_ptr_no_clear + 768
1 libsystem_malloc.dylib 0x19c11f094 small_malloc_from_free_list + 296
2 libsystem_malloc.dylib 0x19c11f094 small_malloc_from_free_list + 296
3 libsystem_malloc.dylib 0x19c11d63c small_malloc_should_clear + 224
4 libsystem_malloc.dylib 0x19c11adcc szone_malloc_should_clear + 132
5 libsystem_malloc.dylib 0x19c123c18 malloc_zone_malloc + 156
6 CoreFoundation 0x19c569ab4 __CFBasicHashRehash + 300
7 CoreFoundation 0x19c56b430 __CFBasicHashAddValue + 96
8 CoreFoundation 0x19c56ab9c CFBasicHashAddValue + 2160
9 CoreFoundation 0x19c49f3bc CFDictionaryAddValue + 260
10 CoreFoundation 0x19c572ee8 __54-[CFPrefsSource mergeIntoDictionary:sourceDictionary:]_block_invoke + 28
11 CoreFoundation 0x19c49f0b4 __CFDictionaryApplyFunction_block_invoke + 24
12 CoreFoundation 0x19c568b7c CFBasicHashApply + 116
13 CoreFoundation 0x19c49f090 CFDictionaryApplyFunction + 168
14 CoreFoundation 0x19c42f504 -[CFPrefsSource mergeIntoDictionary:sourceDictionary:] + 136
15 CoreFoundation 0x19c4bcd38 -[CFPrefsSearchListSource alreadylocked_getDictionary:] + 644
16 CoreFoundation 0x19c42e71c -[CFPrefsSearchListSource alreadylocked_copyValueForKey:] + 152
17 CoreFoundation 0x19c42e660 -[CFPrefsSource copyValueForKey:] + 60
18 CoreFoundation 0x19c579e88 __76-[_CFXPreferences copyAppValueForKey:identifier:container:configurationURL:]_block_invoke + 40
19 CoreFoundation 0x19c4bdff4 __108-[_CFXPreferences(SearchListAdditions) withSearchListForIdentifier:container:cloudConfigurationURL:perform:]_block_invoke + 272
20 CoreFoundation 0x19c4bda38 normalizeQuintuplet + 340
21 CoreFoundation 0x19c42c634 -[_CFXPreferences(SearchListAdditions) withSearchListForIdentifier:container:cloudConfigurationURL:perform:] + 108
22 CoreFoundation 0x19c42cec0 -[_CFXPreferences copyAppValueForKey:identifier:container:configurationURL:] + 148
23 CoreFoundation 0x19c57c2d0 _CFPreferencesCopyAppValueWithContainerAndConfiguration + 124
24 TextInput 0x1a450e550 -[TIPreferencesController valueForPreferenceKey:] + 460
25 UIKitCore 0x1c87c71f8 -[UIKeyboardPreferencesController handBias] + 36
26 UIKitCore 0x1c887275c -[UIKeyboardLayoutStar showKeyboardWithInputTraits:screenTraits:splitTraits:] + 320
27 UIKitCore 0x1c88f4240 -[UIKeyboardImpl finishLayoutChangeWithArguments:] + 492
28 UIKitCore 0x1c88f47c8 -[UIKeyboardImpl updateLayout] + 1208
29 UIKitCore 0x1c88eaad0 -[UIKeyboardImpl updateLayoutIfNecessary] + 448
30 UIKitCore 0x1c88eab9c -[UIKeyboardImpl setFrame:] + 140
31 UIKitCore 0x1c88d5d60 -[UIKeyboard activate] + 652
32 UIKitCore 0x1c894c90c -[UIKeyboardAutomatic activate] + 128
33 UIKitCore 0x1c88d5158 -[UIKeyboard setFrame:] + 296
34 UIKitCore 0x1c88d81b0 -[UIKeyboard _didChangeKeyplaneWithContext:] + 228
35 UIKitCore 0x1c88f4aa0 -[UIKeyboardImpl didMoveToSuperview] + 136
36 UIKitCore 0x1c8f2ad84 __45-[UIView(Hierarchy) _postMovedFromSuperview:]_block_invoke + 888
37 UIKitCore 0x1c8f2a970 -[UIView(Hierarchy) _postMovedFromSuperview:] + 760
38 UIKitCore 0x1c8f39ddc -[UIView(Internal) _addSubview:positioned:relativeTo:] + 1740
39 UIKitCore 0x1c88d5d84 -[UIKeyboard activate] + 688
40 UIKitCore 0x1c894c90c -[UIKeyboardAutomatic activate] + 128
41 UIKitCore 0x1c893b3a4 -[UIPeripheralHost(UIKitInternal) _reloadInputViewsForResponder:] + 1332
42 UIKitCore 0x1c8ae66d8 -[UIResponder(UIResponderInputViewAdditions) reloadInputViews] + 80
43 UIKitCore 0x1c8ae23bc -[UIResponder becomeFirstResponder] + 804
44 UIKitCore 0x1c8f2a560 -[UIView(Hierarchy) becomeFirstResponder] + 156
45 UIKitCore 0x1c8d93e84 -[UITextField becomeFirstResponder] + 244
46 UIKitCore 0x1c8d578dc -[UITextInteractionAssistant(UITextInteractionAssistant_Internal) setFirstResponderIfNecessary] + 192
47 UIKitCore 0x1c8d45d8c -[UITextSelectionInteraction oneFingerTap:] + 3136
48 UIKitCore 0x1c86e0bcc -[UIGestureRecognizerTarget _sendActionWithGestureRecognizer:] + 64
49 UIKitCore 0x1c86e8dd4 _UIGestureRecognizerSendTargetActions + 124
50 UIKitCore 0x1c86e6778 _UIGestureRecognizerSendActions + 316
51 UIKitCore 0x1c86e5ca4 -[UIGestureRecognizer _updateGestureWithEvent:buttonEvent:] + 760
52 UIKitCore 0x1c86d9d80 _UIGestureEnvironmentUpdate + 2180
53 UIKitCore 0x1c86d94b0 -[UIGestureEnvironment _deliverEvent:toGestureRecognizers:usingBlock:] + 384
54 UIKitCore 0x1c86d9290 -[UIGestureEnvironment _updateForEvent:window:] + 204
55 UIKitCore 0x1c8af14a8 -[UIWindow sendEvent:] + 3112
56 UIKitCore 0x1c8ad1534 -[UIApplication sendEvent:] + 340
57 UIKitCore 0x1c8b977c0 __dispatchPreprocessedEventFromEventQueue + 1768
58 UIKitCore 0x1c8b99eec __handleEventQueueInternal + 4828
59 UIKitCore 0x1c8b9311c __handleHIDEventFetcherDrain + 152
60 CoreFoundation 0x19c4d22bc __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
61 CoreFoundation 0x19c4d223c __CFRunLoopDoSource0 + 88
62 CoreFoundation 0x19c4d1b24 __CFRunLoopDoSources0 + 176
63 CoreFoundation 0x19c4cca60 __CFRunLoopRun + 1004
64 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
65 GraphicsServices 0x19e6cc79c GSEventRunModal + 104
66 UIKitCore 0x1c8ab7b68 UIApplicationMain + 212
67 UDictionary 0x10517e138 main (main.m:17)
68 libdyld.dylib 0x19bf928e0 start + 4
#1. Thread
0 libsystem_kernel.dylib 0x19c0deb74 __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x19c161138 _pthread_wqthread + 340
2 libsystem_pthread.dylib 0x19c163cd4 start_wqthread + 4
#2. com.apple.uikit.eventfetch-thread
0 libsystem_kernel.dylib 0x19c0d30f4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x19c0d25a0 mach_msg + 72
2 CoreFoundation 0x19c4d1cb4 __CFRunLoopServiceMachPort + 236
3 CoreFoundation 0x19c4ccbc4 __CFRunLoopRun + 1360
4 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
5 Foundation 0x19ce99fcc -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 300
6 Foundation 0x19ce99e5c -[NSRunLoop(NSRunLoop) runUntilDate:] + 96
7 UIKitCore 0x1c8b9d540 -[UIEventFetcher threadMain] + 136
8 Foundation 0x19cfc66e4 __NSThread__start__ + 984
9 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
10 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
11 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#3. JavaScriptCore bmalloc scavenger
0 libsystem_kernel.dylib 0x19c0ddee4 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x19c15d4a4 _pthread_cond_wait$VARIANT$armv81 + 628
2 libc++.1.dylib 0x19b6b5090 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 24
3 JavaScriptCore 0x1a36a2238 void std::__1::condition_variable_any::wait<std::__1::unique_lock<bmalloc::Mutex> >(std::__1::unique_lock<bmalloc::Mutex>&) + 108
4 JavaScriptCore 0x1a36a622c bmalloc::Scavenger::threadRunLoop() + 176
5 JavaScriptCore 0x1a36a59a4 bmalloc::Scavenger::Scavenger(std::__1::lock_guard<bmalloc::Mutex>&) + 10
6 JavaScriptCore 0x1a36a73e4 std::__1::__thread_specific_ptr<std::__1::__thread_struct>::set_pointer(std::__1::__thread_struct*) + 38
7 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
8 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
9 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#4. WebThread
0 libsystem_kernel.dylib 0x19c0d30f4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x19c0d25a0 mach_msg + 72
2 CoreFoundation 0x19c4d1cb4 __CFRunLoopServiceMachPort + 236
3 CoreFoundation 0x19c4ccbc4 __CFRunLoopRun + 1360
4 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
5 WebCore 0x1a5126480 RunWebThread(void*) + 600
6 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
7 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
8 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#5. com.twitter.crashlytics.ios.MachExceptionServer
0 UDictionary 0x1058a5564 CLSProcessRecordAllThreads (CLSProcess.c:376)
1 UDictionary 0x1058a594c CLSProcessRecordAllThreads (CLSProcess.c:407)
2 UDictionary 0x1058952dc CLSHandler (CLSHandler.m:26)
3 UDictionary 0x1058906cc CLSMachExceptionServer (CLSMachException.c:446)
4 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
5 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
6 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#6. com.apple.NSURLConnectionLoader
0 libsystem_kernel.dylib 0x19c0d30f4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x19c0d25a0 mach_msg + 72
2 CoreFoundation 0x19c4d1cb4 __CFRunLoopServiceMachPort + 236
3 CoreFoundation 0x19c4ccbc4 __CFRunLoopRun + 1360
4 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
5 CFNetwork 0x19cae574c -[__CoreSchedulingSetRunnable runForever] + 216
6 Foundation 0x19cfc66e4 __NSThread__start__ + 984
7 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
8 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
9 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#7. AVAudioSession Notify Thread
0 libsystem_kernel.dylib 0x19c0d30f4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x19c0d25a0 mach_msg + 72
2 CoreFoundation 0x19c4d1cb4 __CFRunLoopServiceMachPort + 236
3 CoreFoundation 0x19c4ccbc4 __CFRunLoopRun + 1360
4 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
5 AVFAudio 0x1a238a378 GenericRunLoopThread::Entry(void*) + 156
6 AVFAudio 0x1a23b4c60 CAPThread::Entry(CAPThread*) + 88
7 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
8 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
9 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#8. WebCore: LocalStorage
0 libsystem_kernel.dylib 0x19c0ddee4 __psynch_cvwait + 8
1 libsystem_pthread.dylib 0x19c15d4a4 _pthread_cond_wait$VARIANT$armv81 + 628
2 JavaScriptCore 0x1a3668ce4 ***::ThreadCondition::timedWait(***::Mutex&, ***::WallTime) + 80
3 JavaScriptCore 0x1a364f96c ***::ParkingLot::parkConditionallyImpl(void const*, ***::ScopedLambda<bool ()> const&, ***::ScopedLambda<void ()> const&, ***::TimeWithDynamicClockType const&) + 2004
4 WebKitLegacy 0x1a67b6ea8 bool ***::Condition::waitUntil<***::Lock>(***::Lock&, ***::TimeWithDynamicClockType const&) + 184
5 WebKitLegacy 0x1a67b9ba4 std::__1::unique_ptr<***::Function<void ()>, std::__1::default_delete<***::Function<void ()> > > ***::MessageQueue<***::Function<void ()> >::waitForMessageFilteredWithTimeout<***::MessageQueue<***::Function<void ()> >::waitForMessage()::'lambda'(***::Function<void ()> const&)>(***::MessageQueueWaitResult&, ***::MessageQueue<***::Function<void ()> >::waitForMessage()::'lambda'(***::Function<void ()> const&)&&, ***::WallTime) + 156
6 WebKitLegacy 0x1a67b91c0 WebCore::StorageThread::threadEntryPoint() + 68
7 JavaScriptCore 0x1a3666f88 ***::Thread::entryPoint(***::Thread::NewThreadContext*) + 260
8 JavaScriptCore 0x1a3668494 ***::wtfThreadEntryPoint(void*) + 12
9 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
10 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
11 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#9. com.apple.CoreMotion.MotionThread
0 libsystem_kernel.dylib 0x19c0d30f4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x19c0d25a0 mach_msg + 72
2 CoreFoundation 0x19c4d1cb4 __CFRunLoopServiceMachPort + 236
3 CoreFoundation 0x19c4ccbc4 __CFRunLoopRun + 1360
4 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
5 CoreFoundation 0x19c4cd0b0 CFRunLoopRun + 80
6 CoreMotion 0x1a1df0240 (Missing)
7 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
8 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
9 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#10. Thread
0 libsystem_kernel.dylib 0x19c0deb74 __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x19c161138 _pthread_wqthread + 340
2 libsystem_pthread.dylib 0x19c163cd4 start_wqthread + 4
#11. Thread
0 libsystem_kernel.dylib 0x19c0deb74 __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x19c1611f8 _pthread_wqthread + 532
2 libsystem_pthread.dylib 0x19c163cd4 start_wqthread + 4
#12. com.apple.CFStream.LegacyThread
0 libsystem_kernel.dylib 0x19c0d30f4 mach_msg_trap + 8
1 libsystem_kernel.dylib 0x19c0d25a0 mach_msg + 72
2 CoreFoundation 0x19c4d1cb4 __CFRunLoopServiceMachPort + 236
3 CoreFoundation 0x19c4ccbc4 __CFRunLoopRun + 1360
4 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
5 CoreFoundation 0x19c4e5094 _legacyStreamRunLoop_workThread + 260
6 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
7 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
8 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#13. Thread
0 libsystem_pthread.dylib 0x19c163cd0 start_wqthread + 190
#14. Thread
0 libsystem_kernel.dylib 0x19c0deb74 __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x19c161138 _pthread_wqthread + 340
2 libsystem_pthread.dylib 0x19c163cd4 start_wqthread + 4
#15. Thread
0 libsystem_kernel.dylib 0x19c0deb74 __workq_kernreturn + 8
1 libsystem_pthread.dylib 0x19c161138 _pthread_wqthread + 340
2 libsystem_pthread.dylib 0x19c163cd4 start_wqthread + 4
#16. Thread
0 libsystem_kernel.dylib 0x19c0d3148 semaphore_timedwait_trap + 8
1 libdispatch.dylib 0x19bf50a4c _dispatch_sema4_timedwait$VARIANT$armv81 + 64
2 libdispatch.dylib 0x19bf513a8 _dispatch_semaphore_wait_slow + 72
3 libdispatch.dylib 0x19bf647c8 _dispatch_worker_thread + 344
4 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
5 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
6 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#17. Thread
0 libsystem_kernel.dylib 0x19c0d3148 semaphore_timedwait_trap + 8
1 libdispatch.dylib 0x19bf50a4c _dispatch_sema4_timedwait$VARIANT$armv81 + 64
2 libdispatch.dylib 0x19bf513a8 _dispatch_semaphore_wait_slow + 72
3 libdispatch.dylib 0x19bf647c8 _dispatch_worker_thread + 344
4 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
5 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
6 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#18. Thread
0 libsystem_kernel.dylib 0x19c0d3148 semaphore_timedwait_trap + 8
1 libdispatch.dylib 0x19bf50a4c _dispatch_sema4_timedwait$VARIANT$armv81 + 64
2 libdispatch.dylib 0x19bf513a8 _dispatch_semaphore_wait_slow + 72
3 libdispatch.dylib 0x19bf647c8 _dispatch_worker_thread + 344
4 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
5 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
6 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#19. Crashed: AXSpeech
0 libsystem_pthread.dylib 0x19c15e5b8 pthread_mutex_lock$VARIANT$armv81 + 102
1 CoreFoundation 0x19c4cf84c CFRunLoopSourceSignal + 68
2 Foundation 0x19cfc7280 performQueueDequeue + 464
3 Foundation 0x19cfc680c __NSThreadPerformPerform + 136
4 CoreFoundation 0x19c4d22bc __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
5 CoreFoundation 0x19c4d223c __CFRunLoopDoSource0 + 88
6 CoreFoundation 0x19c4d1b74 __CFRunLoopDoSources0 + 256
7 CoreFoundation 0x19c4cca60 __CFRunLoopRun + 1004
8 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
9 Foundation 0x19ce99fcc -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 300
10 libAXSpeechManager.dylib 0x1ac16c94c -[AXSpeechThread main] + 264
11 Foundation 0x19cfc66e4 __NSThread__start__ + 984
12 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
13 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
14 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
#20. AXSpeech
0 (Missing) 0x1071ba524 (Missing)
1 (Missing) 0x1071b3e7c (Missing)
2 (Missing) 0x10718fba4 (Missing)
3 (Missing) 0x107184bc8 (Missing)
4 libdyld.dylib 0x19bf95908 dlopen + 176
5 CoreFoundation 0x19c5483e8 _CFBundleDlfcnLoadBundle + 140
6 CoreFoundation 0x19c486918 _CFBundleLoadExecutableAndReturnError + 352
7 Foundation 0x19ced5734 -[NSBundle loadAndReturnError:] + 428
8 TextToSpeech 0x1abfff800 TTSSpeechUnitTestingMode + 1020
9 libdispatch.dylib 0x19bf817d4 _dispatch_client_callout + 16
10 libdispatch.dylib 0x19bf52040 _dispatch_once_callout + 28
11 TextToSpeech 0x1abfff478 TTSSpeechUnitTestingMode + 116
12 libobjc.A.dylib 0x19b7173cc CALLING_SOME_+initialize_METHOD + 24
13 libobjc.A.dylib 0x19b71cee0 initializeNonMetaClass + 296
14 libobjc.A.dylib 0x19b71e640 initializeAndMaybeRelock(objc_class*, objc_object*, mutex_tt<false>&, bool) + 260
15 libobjc.A.dylib 0x19b7265a4 lookUpImpOrForward + 244
16 libobjc.A.dylib 0x19b733858 _objc_msgSend_uncached + 56
17 libAXSpeechManager.dylib 0x1ac167324 -[AXSpeechManager _initialize] + 68
18 Foundation 0x19cfc68d4 __NSThreadPerformPerform + 336
19 CoreFoundation 0x19c4d22bc __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
20 CoreFoundation 0x19c4d223c __CFRunLoopDoSource0 + 88
21 CoreFoundation 0x19c4d1b74 __CFRunLoopDoSources0 + 256
22 CoreFoundation 0x19c4cca60 __CFRunLoopRun + 1004
23 CoreFoundation 0x19c4cc354 CFRunLoopRunSpecific + 436
24 Foundation 0x19ce99fcc -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 300
25 libAXSpeechManager.dylib 0x1ac16c94c -[AXSpeechThread main] + 264
26 Foundation 0x19cfc66e4 __NSThread__start__ + 984
27 libsystem_pthread.dylib 0x19c1602c0 _pthread_body + 128
28 libsystem_pthread.dylib 0x19c160220 _pthread_start + 44
29 libsystem_pthread.dylib 0x19c163cdc thread_start + 4
I changed my code like this, but it still has the same problem:
- (void)stopSpeech {
if (self.synthesizer != nil && [self.synthesizer isPaused]) {
return;
}
// if ([self.synthesizer isSpeaking]) {
// BOOL isSpeech = [self.synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
// if (!isSpeech) {
// [self.synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryWord];
// }
// }
if (self.synthesizer != nil) {
[self.synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
// if (!isSpeech) {
// [self.synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryWord];
// }
self.stopBlock ? self.stopBlock() : nil;
}
}
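Since AVSpeechSynthesizer is not documented as thread-safe and the crashing thread is the system's AXSpeech thread, one mitigation worth trying (an assumption, not a confirmed fix) is to funnel every call to the synthesizer through the main thread. A minimal Swift sketch of that idea (class and property names here are hypothetical, not the app's actual code):

```swift
import AVFoundation

final class SpeechController: NSObject, AVSpeechSynthesizerDelegate {
    static let shared = SpeechController()
    private let synthesizer = AVSpeechSynthesizer()
    var stopBlock: (() -> Void)?

    override private init() {
        super.init()
        synthesizer.delegate = self
    }

    func stopSpeech() {
        // Hop to the main thread so the synthesizer is never
        // touched from two threads at once.
        DispatchQueue.main.async {
            guard !self.synthesizer.isPaused else { return }
            if self.synthesizer.isSpeaking {
                self.synthesizer.stopSpeaking(at: .immediate)
            }
            self.stopBlock?()
        }
    }
}
```

If the crash disappears under this scheme, that would point at concurrent access to the synthesizer as the root cause.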
The application is crashing with Crashed: AXSpeech.
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x000056f023efbeb0
Crashed: AXSpeech
0 libobjc.A.dylib 0x4820 objc_msgSend + 32
1 libsystem_trace.dylib 0x6c34 _os_log_fmt_flatten_object + 116
2 libsystem_trace.dylib 0x5344 _os_log_impl_flatten_and_send + 1884
3 libsystem_trace.dylib 0x4bd0 _os_log + 152
4 libsystem_trace.dylib 0x9c48 _os_log_error_impl + 24
5 TextToSpeech 0xd0a8c _pcre2_xclass_8
6 TextToSpeech 0x3bc04 TTSSpeechUnitTestingMode
7 TextToSpeech 0x3f128 TTSSpeechUnitTestingMode
8 AXCoreUtilities 0xad38 -[NSArray(AXExtras) ax_flatMappedArrayUsingBlock:] + 204
9 TextToSpeech 0x3eb18 TTSSpeechUnitTestingMode
10 TextToSpeech 0x3c948 TTSSpeechUnitTestingMode
11 TextToSpeech 0x48824 AXAVSpeechSynthesisVoiceFromTTSSpeechVoice
12 TextToSpeech 0x49804 AXAVSpeechSynthesisVoiceFromTTSSpeechVoice
13 Foundation 0xf6064 __NSThreadPerformPerform + 264
14 CoreFoundation 0x37acc CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION + 28
15 CoreFoundation 0x36d48 __CFRunLoopDoSource0 + 176
16 CoreFoundation 0x354fc __CFRunLoopDoSources0 + 244
17 CoreFoundation 0x34238 __CFRunLoopRun + 828
18 CoreFoundation 0x33e18 CFRunLoopRunSpecific + 608
19 Foundation 0x2d4cc -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212
20 TextToSpeech 0x24b88 TTSCFAttributedStringCreateStringByBracketingAttributeWithString
21 Foundation 0xb3154 NSThread__start + 732
22 libsystem_pthread.dylib 0x24d4 _pthread_start + 136
23 libsystem_pthread.dylib 0x1a10 thread_start + 8
After watching the What's new in App Intents session, I'm attempting to create an intent conforming to URLRepresentableIntent. The video states that as long as my AppEntity conforms to URLRepresentableEntity, I should not have to provide a perform method; my application will be launched automatically and passed the appropriate URL.
This seems to work in that my application is launched and is passed a URL, but the URL is in the form: FeatureEntity/{id}.
Am I missing something, or is there a trick that enables it to pass along the URL specified in the AppEntity itself?
struct MyExampleIntent: OpenIntent, URLRepresentableIntent {
static let title: LocalizedStringResource = "Open Feature"
static var parameterSummary: some ParameterSummary {
Summary("Open \(\.$target)")
}
@Parameter(title: "My feature", description: "The feature to open.")
var target: FeatureEntity
}
struct FeatureEntity: AppEntity {
// ...
}
extension FeatureEntity: URLRepresentableEntity {
static var urlRepresentation: URLRepresentation {
"https://myurl.com/\(.id)"
}
}
I'm playing with the new Vision API for iOS 18, specifically the new CalculateImageAestheticsScoresRequest API.
When I try to perform the image observation request I get this error:
internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")
The code is pretty straightforward:
if let image = image {
let request = CalculateImageAestheticsScoresRequest()
Task {
do {
let cgImg = image.cgImage!
let observations = try await request.perform(on: cgImg)
let description = observations.description
let score = observations.overallScore
print(description)
print(score)
} catch {
print(error)
}
}
}
I'm running it on an M2 using the simulator.
Is it a bug? What's wrong?
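If the "Failed to create espresso context" error is simulator-specific (an assumption on my part; the ML inference runtime behind Vision is not always available in the simulator), a hedged guard might look like this, deferring the real test to a physical device:

```swift
// Hypothetical guard: skip the aesthetics request when running in
// the simulator, where the underlying ML runtime may be unavailable.
#if targetEnvironment(simulator)
print("CalculateImageAestheticsScoresRequest may not work in the simulator; try a physical device.")
#else
// ... perform the CalculateImageAestheticsScoresRequest as above ...
#endif
```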
Hello,
I've been dealing with a puzzling issue for some time now, and I’m hoping someone here might have insights or suggestions.
The Problem:
We’re observing an occasional crash in our app that seems to originate from the Vision framework.
Frequency: it happens randomly, after many successful executions of the same code. It's hard to tell how long the app had been running, but in some cases the app could run for about a month without any issues.
Devices: The issue doesn't seem device-dependent (we’ve seen it on various iPad models).
OS Versions: The crashes started occurring with iOS 18.0.1 and are still present in 18.1 and 18.1.1.
What I suspected: The crash logs point to a potential data race within the Vision framework.
The relevant section of the code where the crash happens:
guard let cgImage = image.cgImage else {
throw ...
}
let request = VNCoreMLRequest(model: visionModel)
try VNImageRequestHandler(cgImage: cgImage).perform([request]) // <- the line causing the crash
Since the code is rather simple, I'm not sure what else there could be missing here.
The images sent here are uniform (fixed size).
The model is loaded and working; the crash occurs at random after a period in which the call worked correctly many times. Also, the model variable is not an optional.
Here is the crash log:
libobjc.A objc_exception_throw
CoreFoundation -[NSMutableArray removeObjectsAtIndexes:]
Vision -[VNWeakTypeWrapperCollection _enumerateObjectsDroppingWeakZeroedObjects:usingBlock:]
Vision -[VNWeakTypeWrapperCollection addObject:droppingWeakZeroedObjects:]
Vision -[VNSession initWithCachingBehavior:]
Vision -[VNCoreMLTransformer initWithOptions:model:error:]
Vision -[VNCoreMLRequest internalPerformRevision:inContext:error:]
Vision -[VNRequest performInContext:error:]
Vision -[VNRequestPerformer _performOrderedRequests:inContext:error:]
Vision -[VNRequestPerformer _performRequests:onBehalfOfRequest:inContext:error:]
Vision -[VNImageRequestHandler performRequests:gatheredForensics:error:]
OurApp ModelWrapper.perform
And I'm a bit lost at this point; I've tried everything I could imagine so far.
I've tried putting a symbolic breakpoint on removeObjectsAtIndexes: to check whether some library we use (e.g. a crash reporter) swapped the implementation. There was none, and if anything had done method swizzling, I'd expect it to show in the stack trace before the original code was called. I did peek into the preceding functions and noticed a lock used in one of the Vision methods, so as I understand it, a data race in this code shouldn't be possible at all. I've also put breakpoints on the NSLock variants, to check for swizzling or an override with a category possibly breaking the locking; again, nothing was there.
There is also another model that is running on a separate queue, but after seeing the line with the locking in the debugger, it doesn't seem to me like this could cause a problem, at least not in this specific spot.
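One mitigation we could sketch (an assumption, not a confirmed fix, since the trace suggests a race inside Vision's VNWeakTypeWrapperCollection) is to serialize every Vision call, across both models, through a single serial dispatch queue. The queue label and helper function below are hypothetical:

```swift
import Vision
import CoreGraphics

// A single serial queue shared by every model wrapper, so no two
// VNImageRequestHandler.perform(_:) calls can ever overlap.
let visionQueue = DispatchQueue(label: "com.example.vision.serial")

func perform(_ request: VNCoreMLRequest, on cgImage: CGImage) throws {
    try visionQueue.sync {
        try VNImageRequestHandler(cgImage: cgImage).perform([request])
    }
}
```

If the crash stops under this scheme, that would support the data-race theory even though the code path looked locked in the debugger.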
Is there something I'm missing here, or something I'm doing wrong?
Thanks in advance for your help!
Hi everyone,
I'm a Mac enthusiast experimenting with tensorflow-metal on my Mac Pro (2013). My question is about GPU selection in tensorflow-metal (v0.8.0), which still supports Intel-based Macs, including my machine.
I've noticed that when running TensorFlow with Metal, it automatically selects a GPU, regardless of what I specify using device indices like "gpu:0", "gpu:1", or "gpu:2". I'm wondering if there's a way to manually specify which GPU should be used via an environment variable or another method.
For reference, I’ve tried the example from TensorFlow’s guide on multi-GPU selection: https://www.tensorflow.org/guide/gpu#using_a_single_gpu_on_a_multi-gpu_system
My goal is to explore performance optimizations by using MirroredStrategy in TensorFlow to leverage multiple GPUs: https://www.tensorflow.org/guide/distributed_training#mirroredstrategy
Interestingly, I discovered that the metalcompute Python library (https://pypi.org/project/metalcompute/) lets me use manually selected GPUs on my system, allowing for proper multi-GPU computation. This makes me wonder:
Is there a hidden environment variable or setting that allows manual GPU selection in tensorflow-metal?
Has anyone successfully used MirroredStrategy on multiple GPUs with tensorflow-metal?
Would a bridge between metalcompute and tensorflow-metal be necessary for this use case, or is there a more direct approach?
I’d love to hear if anyone else has experimented with this or has insights on getting finer control over GPU selection. Any thoughts or suggestions would be greatly appreciated!
Thanks!
Hi,
One can configure the languages of a (VN)RecognizeTextRequest with either:
.automatic: language to be detected
a specific language, say Spanish
If the request is configured with .automatic and successfully detects Spanish, will the results be exactly equivalent compared to a request made with Spanish set as language?
I could not find any information about this, and this is very important for the core architecture of my app.
Thanks!
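For anyone comparing the two setups, the difference is only in the request configuration (a sketch; whether the recognition results are then exactly equivalent is precisely the open question):

```swift
import Vision

// Configuration A: let Vision detect the language automatically.
let autoRequest = VNRecognizeTextRequest()
autoRequest.automaticallyDetectsLanguage = true

// Configuration B: pin Spanish explicitly.
let spanishRequest = VNRecognizeTextRequest()
spanishRequest.automaticallyDetectsLanguage = false
spanishRequest.recognitionLanguages = ["es-ES"]
```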
In an under-development macOS and iOS app, I need to identify various measurements from OCR'ed text: length, weight, counts per inch, area, and percentage. The unit type (e.g. UnitLength) needs to be identified, as well as the measurement's unit (e.g. .inches), in order to convert the measurement to the app's internal standard (e.g. centimetres), the value of which is stored in the relevant Core Data entity.
The use of NLTagger and NLTokenizer is problematic because of the various representations of the measurements: e.g. "50g.", "50 g", "50 grams", "1 3/4 oz."
Currently, I use a bespoke algorithm based on String contains and step-wise evaluation of characters, which is reasonably accurate but requires frequent updating as further representations are detected.
I'm aware of the Python SpaCy model being capable of NER Measurement recognition, but am reluctant to incorporate a Python-based solution into a production app. (ref [https://developer.apple.com/forums/thread/30092])
My preference is for an open-source NER Measurement model that can be used as, or converted to, some form of a Swift compatible Machine Learning model. Does anyone know of such a model?
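As a stopgap while searching for such a model, the bespoke character-stepping approach can be made a bit more maintainable with a single regular expression over an alias table (a sketch with a hypothetical, deliberately tiny alias table, not the app's actual algorithm; fraction forms like "1 3/4" would still need extra handling):

```swift
import Foundation

// Hypothetical alias table mapping surface forms to canonical units.
let lengthAliases: [String: UnitLength] = [
    "cm": .centimeters, "in": .inches, "inch": .inches, "inches": .inches,
]

func lengthMeasurements(in text: String) -> [Measurement<UnitLength>] {
    // value, optional whitespace, unit word, optional trailing period.
    let pattern = #"(\d+(?:\.\d+)?)\s*([a-zA-Z]+)\.?"#
    let regex = try! NSRegularExpression(pattern: pattern)
    let range = NSRange(text.startIndex..., in: text)
    return regex.matches(in: text, range: range).compactMap { match in
        guard let valueRange = Range(match.range(at: 1), in: text),
              let unitRange = Range(match.range(at: 2), in: text),
              let value = Double(text[valueRange]),
              let unit = lengthAliases[text[unitRange].lowercased()]
        else { return nil }
        return Measurement(value: value, unit: unit)
    }
}
```

New representations then become one-line additions to the alias table rather than changes to the matching logic.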
Hi, I just wanted to ask: is it possible to run YOLOv3 on visionOS using the main camera to detect objects and show bounding boxes with labels in real time? I'm wondering if camera access and custom models work for this, or if there's a better way. Any tips?
Incident Identifier: 4C22F586-71FB-4644-B823-A4B52D158057
CrashReporter Key: adc89b7506c09c2a6b3a9099cc85531bdaba9156
Hardware Model: Mac16,10
Process: PRISMLensCore [16561]
Path: /Applications/PRISMLens.app/Contents/Resources/app.asar.unpacked/node_modules/core-node/PRISMLensCore.app/PRISMLensCore
Identifier: com.prismlive.camstudio
Version: (null) ((null))
Code Type: ARM-64
Parent Process: ? [16560]
Date/Time: (null)
OS Version: macOS 15.4 (24E5228e)
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x00000000 at 0x0000000000000000
Crashed Thread: 34
Application Specific Information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'
Thread 34 Crashed:
0 CoreFoundation 0x000000018ba4dde4 0x18b960000 + 974308 (__exceptionPreprocess + 164)
1 libobjc.A.dylib 0x000000018b512b60 0x18b4f8000 + 109408 (objc_exception_throw + 88)
2 CoreFoundation 0x000000018b97e69c 0x18b960000 + 124572 (-[__NSArrayM insertObject:atIndex:] + 1276)
3 Portrait 0x0000000257e16a94 0x257da3000 + 473748 (-[PTMSRResize addAdditionalOutput:] + 604)
4 Portrait 0x0000000257de91c0 0x257da3000 + 287168 (-[PTEffectRenderer initWithDescriptor:metalContext:useHighResNetwork:faceAttributesNetwork:humanDetections:prevTemporalState:asyncInitQueue:sharedResources:] + 6204)
5 Portrait 0x0000000257dab21c 0x257da3000 + 33308 (__33-[PTEffect updateEffectDelegate:]_block_invoke.241 + 164)
6 libdispatch.dylib 0x000000018b739b2c 0x18b738000 + 6956 (_dispatch_call_block_and_release + 32)
7 libdispatch.dylib 0x000000018b75385c 0x18b738000 + 112732 (_dispatch_client_callout + 16)
8 libdispatch.dylib 0x000000018b742350 0x18b738000 + 41808 (_dispatch_lane_serial_drain + 740)
9 libdispatch.dylib 0x000000018b742e2c 0x18b738000 + 44588 (_dispatch_lane_invoke + 388)
10 libdispatch.dylib 0x000000018b74d264 0x18b738000 + 86628 (_dispatch_root_queue_drain_deferred_wlh + 292)
11 libdispatch.dylib 0x000000018b74cae8 0x18b738000 + 84712 (_dispatch_workloop_worker_thread + 540)
12 libsystem_pthread.dylib 0x000000018b8ede64 0x18b8eb000 + 11876 (_pthread_wqthread + 292)
13 libsystem_pthread.dylib 0x000000018b8ecb74 0x18b8eb000 + 7028 (start_wqthread + 8)
Topic:
Machine Learning & AI
SubTopic:
General
Hi,
I'm testing DockKit with a very simple setup:
I use VNDetectFaceRectanglesRequest to detect a face and then call dockAccessory.track(...) using the detected bounding box.
The stand is correctly docked (state == .docked) and dockAccessory is valid.
I'm calling .track(...) with a single observation and valid CameraInformation (including size, device, orientation, etc.). No errors are thrown.
To monitor this, I added a logging utility – track(...) is being called 10–30 times per second, as recommended in the documentation.
However: the stand does not move at all.
There is no visible reaction to the tracking calls.
Is there anything I'm missing or doing wrong?
Is VNDetectFaceRectanglesRequest supported for DockKit tracking, or are there hidden requirements?
Would really appreciate any help or pointers – thanks!
That's my complete code:
extension VideoFeedViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else {
return
}
detectFace(image: frame)
func detectFace(image: CVPixelBuffer) {
let faceDetectionRequest = VNDetectFaceRectanglesRequest() { vnRequest, error in
guard let results = vnRequest.results as? [VNFaceObservation] else {
return
}
guard let observation = results.first else {
return
}
let boundingBoxHeight = observation.boundingBox.size.height * 100
#if canImport(DockKit)
if let dockAccessory = self.dockAccessory {
Task {
try? await trackObservation(
observation.boundingBox,
dockAccessory,
frame,
sampleBuffer
)
}
}
#endif
}
let imageResultHandler = VNImageRequestHandler(cvPixelBuffer: image, orientation: .up)
try? imageResultHandler.perform([faceDetectionRequest])
func combineBoundingBoxes(_ box1: CGRect, _ box2: CGRect) -> CGRect {
let minX = min(box1.minX, box2.minX)
let minY = min(box1.minY, box2.minY)
let maxX = max(box1.maxX, box2.maxX)
let maxY = max(box1.maxY, box2.maxY)
let combinedWidth = maxX - minX
let combinedHeight = maxY - minY
return CGRect(x: minX, y: minY, width: combinedWidth, height: combinedHeight)
}
#if canImport(DockKit)
func trackObservation(_ boundingBox: CGRect, _ dockAccessory: DockAccessory, _ pixelBuffer: CVPixelBuffer, _ sampleBuffer: CMSampleBuffer) throws {
// Count this tracking call for the logging utility
TrackMonitor.shared.trackCalled()
let invertedBoundingBox = CGRect(
x: boundingBox.origin.x,
y: 1.0 - boundingBox.origin.y - boundingBox.height,
width: boundingBox.width,
height: boundingBox.height
)
guard let device = captureDevice else {
fatalError("Camera not available")
}
let size = CGSize(width: Double(CVPixelBufferGetWidth(pixelBuffer)),
height: Double(CVPixelBufferGetHeight(pixelBuffer)))
var cameraIntrinsics: matrix_float3x3? = nil
if let cameraIntrinsicsUnwrapped = CMGetAttachment(
sampleBuffer,
key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
attachmentModeOut: nil
) as? Data {
cameraIntrinsics = cameraIntrinsicsUnwrapped.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
}
Task {
let orientation = getCameraOrientation()
let cameraInfo = DockAccessory.CameraInformation(
captureDevice: device.deviceType,
cameraPosition: device.position,
orientation: orientation,
cameraIntrinsics: cameraIntrinsics,
referenceDimensions: size
)
let observation = DockAccessory.Observation(
identifier: 0,
type: .object,
rect: invertedBoundingBox
)
let observations = [observation]
guard let image = CMSampleBufferGetImageBuffer(sampleBuffer) else {
print("no image")
return
}
do {
try await dockAccessory.track(observations, cameraInformation: cameraInfo)
} catch {
print(error)
}
}
}
#endif
func clearDrawings() {
boundingBoxLayer?.removeFromSuperlayer()
boundingBoxSizeLayer?.removeFromSuperlayer()
}
}
}
}
@MainActor
private func getCameraOrientation() -> DockAccessory.CameraOrientation {
switch UIDevice.current.orientation {
case .portrait:
return .portrait
case .portraitUpsideDown:
return .portraitUpsideDown
case .landscapeRight:
return .landscapeRight
case .landscapeLeft:
return .landscapeLeft
case .faceDown:
return .faceDown
case .faceUp:
return .faceUp
default:
return .corrected
}
}
I have seen inconsistent results for my Colab machine learning notebooks running locally on a Mac M4, compared to running the same notebook code on either a T4 (in Colab) or an RTX 3090 locally.
To illustrate the problems I have set up a notebook that implements two simple CNN models that solves the Fashion-MNIST problem. https://colab.research.google.com/drive/11BhtHhN079-BWqv9QvvcSD9U4mlVSocB?usp=sharing
For the good model with 2M parameters I get the following results:
T4 (Colab, JAX): Test accuracy: 0.925
3090 (Local PC via ssh tunnel, Jax): Test accuracy: 0.925
Mac M4 (Local, JAX): Test accuracy: 0.893
Mac M4 (Local, Tensorflow): Test accuracy: 0.893
That is, I see a significant drop in accuracy when I run on the Mac M4 compared to the NVIDIA machines, and it seems to be independent of backend. However, I do not know how to pinpoint this to either Keras or Apple's Metal implementation. I have reported this to Keras: https://colab.research.google.com/drive/11BhtHhN079-BWqv9QvvcSD9U4mlVSocB?usp=sharing but as this can be (likely is?) an Apple Metal issue, I wanted to report it here as well.
On the mac I am running the following Python libraries:
keras 3.9.1
tensorflow 2.19.0
tensorflow-metal 1.2.0
jax 0.5.3
jax-metal 0.1.1
jaxlib 0.5.3
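Not a diagnosis of this particular gap, but one generic way numeric backends diverge is float32 accumulation order: different devices reduce sums in different orders, and in float32 that alone changes results. A stdlib-only illustration (all names here are mine, unrelated to Keras internals):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (IEEE-754 double) to the nearest float32."""
    return struct.unpack("f", struct.pack("f", x))[0]

def sum_f32(values):
    """Accumulate in simulated float32, the way a sequential kernel might."""
    acc = 0.0
    for v in values:
        acc = to_f32(acc + to_f32(v))
    return acc

vals = [1e8] + [1.0] * 1000
print(sum_f32(vals))          # 100000000.0 — the 1.0s are absorbed by rounding
print(sum_f32(sorted(vals)))  # 100001000.0 — summing small values first keeps them
```

Effects this small would not normally explain a 3-point accuracy gap on their own, but they do mean bit-identical results across backends should not be expected.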
I'm developing a tennis ball tracking feature using Vision Framework in Swift, specifically utilizing VNDetectedObjectObservation and VNTrackObjectRequest.
Occasionally (but not always), I receive the following runtime error:
Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size}
From my investigation, I suspect the issue arises when the bounding box from the initial observation (VNDetectedObjectObservation) is too small. However, Apple's documentation doesn't clearly define the minimum bounding box size that's considered valid by VNTrackObjectRequest.
Could someone clarify:
What is the minimum acceptable bounding box width and height (normalized) that Vision Framework's VNTrackObjectRequest expects?
Is there any recommended practice or official guidance for bounding box size validation before creating a tracking request?
This information would be extremely helpful to reliably avoid this internal error.
Thank you!
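Since the minimum is undocumented, one defensive pattern is to validate the seed observation before constructing the tracking request and simply skip frames whose box is implausibly small. A sketch; the 0.05 threshold is my assumption, not an official Vision value:

```swift
import Vision

// Hypothetical guard: the exact minimum Vision accepts is undocumented,
// so treat this threshold as tunable, not as an official limit.
let minNormalizedSide: CGFloat = 0.05

func makeTrackingRequest(for observation: VNDetectedObjectObservation) -> VNTrackObjectRequest? {
    let box = observation.boundingBox
    guard box.width >= minNormalizedSide, box.height >= minNormalizedSide,
          box.minX >= 0, box.minY >= 0, box.maxX <= 1, box.maxY <= 1
    else { return nil } // skip this frame rather than risk the Code=9 internal error
    return VNTrackObjectRequest(detectedObjectObservation: observation)
}
```

Logging the bounding boxes that precede the error in your own sessions would let you tighten or loosen the threshold empirically.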
Hi, DataScannerViewController doesn't recognize currencies less than 1.00 (e.g. 0.59 USD, 0.99 EUR, etc.). Why, and how can I solve the problem?
This behaviour is not described in the Apple documentation; is there a solution?
This is my code:
func makeUIViewController(context: Context) -> DataScannerViewController {
let dataScanner = DataScannerViewController(recognizedDataTypes: [ .text(textContentType: .currency)])
return dataScanner
}
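One possible workaround (untested assumption on my part, not a documented fix) is to drop the `.currency` content type, scan generic text, and filter currency-like strings yourself, so sub-1.00 amounts like "0.99 EUR" survive:

```swift
import VisionKit

// Scan generic text instead of .currency, then filter ourselves.
func makeUIViewController(context: Context) -> DataScannerViewController {
    let dataScanner = DataScannerViewController(recognizedDataTypes: [.text()])
    return dataScanner
}

// Crude illustrative pattern — apply to each recognized string,
// e.g. in the delegate's dataScanner(_:didAdd:allItems:).
let currencyPattern = #"(?:[$€£]\s?\d+[.,]\d{2})|(?:\d+[.,]\d{2}\s?(?:USD|EUR|GBP|[$€£]))"#
func looksLikeCurrency(_ s: String) -> Bool {
    s.range(of: currencyPattern, options: .regularExpression) != nil
}
```

The pattern and currency list are deliberately minimal; extend them for the locales you target.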
Hi everyone! 👋
I'm working on a C++ project using TensorFlow Lite and was wondering if anyone has a prebuilt TensorFlow Lite C++ library (libtensorflowlite) for macOS (Apple Silicon M1/M2) that they’d be willing to share.
I’m looking specifically for the TensorFlow Lite C++ API — something that lets me use tflite::Interpreter, tflite::FlatBufferModel, etc. Building it from source using Bazel on macOS has been quite challenging and time-consuming, so a ready-to-use .dylib or .a build along with the required headers would be incredibly helpful.
TensorFlow Lite version: v2.18.0 preferred
Target: macOS arm64 (Apple Silicon)
What I need:
libtensorflowlite.dylib or .a
Corresponding headers (ideally organized in a clean include/ folder)
If you have one available or know where I can find a reliable prebuilt version, I’d be super grateful. Thanks in advance! 🙏
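Not a prebuilt binary, but in case it helps while you wait for one: TensorFlow Lite also ships a CMake build that sidesteps Bazel entirely, which I've found far less painful on macOS. A sketch of the commands (flags are the commonly documented ones; verify against the TFLite CMake guide for v2.18.0):

```shell
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow && git checkout v2.18.0
mkdir ../tflite_build && cd ../tflite_build
cmake ../tensorflow/tensorflow/lite
cmake --build . -j8
# produces a static libtensorflow-lite.a for the host arch (arm64 on M1/M2)
```

Headers then come straight from the checked-out source tree (`tensorflow/lite/...` plus the flatbuffers headers CMake fetches), which you can copy into a clean include/ folder.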
From tensorflow-metal example:
Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
I know that Apple silicon uses UMA, and that memory copies are typical of CUDA, but wouldn't the GPU memory still be faster overall?
I have an iMac Pro with a Radeon Pro Vega 64 16 GB GPU and an Intel iMac with a Radeon Pro 5700 8 GB GPU.
But using tensorflow-metal is still WAY faster than using the CPUs. Thanks for that. I am surprised the 5700 is twice as fast as the Vega though.
*I can't attach the files in this format, so if you reply by e-mail, I will send the attachments by e-mail.
Dear Apple AI Research Team,
My name is Gong Jiho (“Hem”), a content strategist based in Seoul, South Korea.
Over the past few months, I conducted a user-led AI experiment entirely within ChatGPT — no code, no backend tools, no plugins.
Through language alone, I created two contrasting agents (Uju and Zero) and guided them into a co-authored modular identity system using prompt-driven dialogue and reflection.
This system simulates persona fusion, memory rooting, and emotional-logical alignment — all via interface-level interaction.
I believe it resonates with Apple’s values in privacy-respecting personalization, emotional UX modeling, and on-device learning architecture.
Why I’m Reaching Out
I’d be honored to share this experiment with your team.
If there is any interest in discussing user-authored agent scaffolding, identity persistence, or affective alignment, I’d love to contribute — even informally.
⚠ A Note on Language
As a non-native English speaker, my expression may be imperfect — but my intent is genuine.
If anything is unclear, I’ll gladly clarify.
📎 Attached Files Summary
Filename → Description
Hem_MultiAI_Report_AppleAI_v20250501.pdf →
Main report tailored for Apple AI — narrative + structural view of emotional identity formation via prompt scaffolding
Hem_MasterPersonaProfile_v20250501.json →
Final merged identity schema authored by Uju and Zero
zero_sync_final.json / uju_sync_final.json →
Persona-level memory structures (logic / emotion)
1_0501.json ~ 3_0501.json →
Evolution logs of the agents over time
GirlfriendGPT_feedback_summary.txt →
Emotional interpretation by external GPT
hem_profile_for_AI_vFinal.json →
Original user anchor profile
Warm regards,
Gong Jiho (“Hem”)
Seoul, South Korea
Using TensorFlow on Apple silicon gives inaccurate results compared to a Google Colab GPU (9–15% differences). Here are my install versions for 4 anaconda envs. I understand that floating-point precision, batch size, and activation functions can be issues, but how do you rectify an issue that has persisted for the past 3 years?
1.) Version TF: 2.12.0, Python 3.10.13, tensorflow-deps: 2.9.0, tensorflow-metal: 1.2.0, h5py: 3.6.0, keras: 2.12.0
2.) Version TF: 2.19.0, Python 3.11.0, tensorflow-metal: 1.2.0, h5py: 3.13.0, keras: 3.9.2, jax: 0.6.0, jax-metal: 0.1.1,jaxlib: 0.6.0, ml_dtypes: 0.5.1
3.) python: 3.10.13,tensorflow: 2.19.0,tensorflow-metal: 1.2.0, h5py: 3.13.0, keras: 3.9.2, ml_dtypes: 0.5.1
4.) Version TF: 2.16.2, tensorflow-deps:2.9.0,Python: 3.10.16, tensorflow-macos 2.16.2, tensorflow-metal: 1.2.0, h5py:3.13.0, keras: 3.9.2, ml_dtypes: 0.3.2
Install of Each ENV with common example:
Create ENV: conda create --name TF_Env_V2 --no-default-packages
start env: source TF_Env_Name
ENV_1.) conda install -c apple tensorflow-deps , conda install tensorflow,pip install tensorflow-metal,conda install ipykernel
ENV_2.) conda install pip python==3.11, pip install tensorflow,pip install tensorflow-metal,conda install ipykernel
ENV_3) conda install pip python 3.10.13,pip install tensorflow, pip install tensorflow-metal,conda install ipykernel
ENV_4) conda install -c apple tensorflow-deps, pip install tensorflow-macos, pip install tensorflow-metal, conda install ipykernel
Example used on all 4 env:
import tensorflow as tf
cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()
model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)
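One thing that would sharpen the comparison: a single 5-epoch fit per environment conflates backend differences with ordinary run-to-run variance. Training several seeded runs and comparing mean ± std makes the gap (or its absence) measurable. A stdlib-only sketch, where `train_once` is a stand-in for the fit/evaluate cycle above and its mock accuracy is invented purely for illustration:

```python
import random
import statistics

def train_once(seed: int) -> float:
    """Placeholder for one seeded model.fit + evaluate cycle; returns a mock accuracy."""
    rng = random.Random(seed)
    return 0.90 + rng.uniform(-0.01, 0.01)  # swap in real training per environment

# Compare environments on mean ± std over several seeds, not one number.
accs = [train_once(s) for s in range(5)]
print(f"accuracy: {statistics.mean(accs):.3f} ± {statistics.stdev(accs):.3f}")
```

If the Metal runs fall outside the NVIDIA runs' spread across, say, 5 seeds, that is much stronger evidence of a backend-level problem than a single-run delta.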
Bear with me, please. Please make sure a highly skilled technical person reads and understands this.
I want to describe my vision for (AI/Algorithmically) Optimised Operating Systems. To explain it properly, I will describe the process to build it (pseudo).
Required Knowledge (no particular order): Processor Logic Circuits, LLM models, LLM tool usage, Python OO coding, Procedural vs OO, NLP fuzzy matching, benchmarking, canvas/artefacts/dynamic HTML interfaces, concepts of how AI models are vastly compressed and miniaturised forms of full data, Algorithmic vs AI.
First, take all OO Python code (example) on GitHub (example), then separate each function from each object into its own procedure (procedural logic) by making a logical procedural list of actions to perform only that function based on its entire dependency chain (i.e. all other objects it relies on). Relate all compiled functions using (for example) fuzzy matching on the name, or AI-based functional profiling to get multiple instances of each function.
Starting with the most used function, test each one against the others that perform the same task for bugs and completeness. Determine the fastest, most optimal version of that function (and every function). Add a single instance of each most optimal function to the centralised tool codebase, which will later be utilised by the language models. This ensures we rely only on the most optimised function for each and every use case — with every program using one shared instance of that function instead of compiling it separately.
In this way, we optimise all existing (Python OO) code based on maximum availability of certain resources. Use the optimised code to teach the AI how to map each user task directly to optimised code (i.e. each user-facing function in unoptimised program code to its optimised version). Later, teach the codebase how to optimise these functions directly to the logic circuits on the processor — as an Operating System — using essentially the same process. Eventually, use the most optimised form for tasks via a dynamic canvas/artefacts UI interface generated by the Start Menu using the language model.
Allow custom interfaces to emerge dynamically from the available functions based on language understanding. Grow and optimise the algorithmic codebase until the entire operating system — with versioning — evolves based on the AI-generated user interfaces (compressing them into new versions of the tools codebase).
Please don’t give up until you understand this concept completely and nothing in the description is beyond your full comprehension. It’s a highly significant step forward.