Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under the Machine Learning & AI topic (all subtopics). Each entry below shows the post title and body, followed by its reply count, boost count, view count, and most recent activity.

Integer arithmetic with Accelerate
Almost all the functions in Accelerate are for single precision (Float) and double precision (Double) operations. However, I stumbled upon three integer arithmetic functions which operate on Int32 values. Are there any more functions in Accelerate that operate on integer values? If not, then why aren't there more functions that work with integers?
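For reference, a minimal sketch of the two usual patterns, under the assumption that the handful of Int32 routines (such as vDSP_vaddi) plus the vDSP_vflt32 / vDSP_vfix32 conversions are the relevant ones; the arrays here are made up for illustration:

import Accelerate

// Illustrative data only.
let a: [Int32] = [1, 2, 3, 4]
let b: [Int32] = [10, 20, 30, 40]
let n = vDSP_Length(a.count)

// Direct integer add, one of the few Int32 vDSP routines.
var sum = [Int32](repeating: 0, count: a.count)
vDSP_vaddi(a, 1, b, 1, &sum, 1, n)

// Round-trip through Float to reach the much larger single-precision API surface.
var aF = [Float](repeating: 0, count: a.count)
var bF = [Float](repeating: 0, count: a.count)
vDSP_vflt32(a, 1, &aF, 1, n)          // Int32 -> Float
vDSP_vflt32(b, 1, &bF, 1, n)
let sumF = vDSP.add(aF, bF)           // any Float routine could go here
var back = [Int32](repeating: 0, count: a.count)
vDSP_vfix32(sumF, 1, &back, 1, n)     // truncating Float -> Int32

Converting to Float, operating, and converting back is one common workaround for the scarcity of integer routines, since the conversions themselves are vectorized.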
1 reply · 0 boosts · 526 views · Oct ’24
"failed to processImage" in videoProcessor
Hello, I’m working on a program that analyzes video files frame by frame to detect human poses in each frame. However, during the process of reading observations from the stream, the analysis frequently stops with the following error:

[LOG_ERROR] /Library/Caches/com.apple.xbs/Sources/MediaAnalysis/VideoProcessing/VCPHumanPoseImageRequest.mm[85]: code -18
[LOG_ERROR] /Library/Caches/com.apple.xbs/Sources/MediaAnalysis/VideoProcessing/VCPHumanPoseImageRequest.mm[178]: code -18

The error was caught and printed using a do-catch block, and here is the output:

Error Domain=NSOSStatusErrorDomain Code=-18 "Error: failed to processImage" UserInfo={NSLocalizedDescription=Error: failed to processImage}

While the do-catch block helps prevent the app from crashing, the frames following the error cannot be analyzed. I’m hoping to understand the cause of this error, or find a way to skip the problematic frames and continue analyzing the subsequent ones. My development environment is Xcode Version 16.0 (16A242d) and iOS 18.0. Thank you for your help. (Attaching my code below.)

let videoProcessor = VideoProcessor(videoURL)
let bodyPoseRequest = DetectHumanBodyPoseRequest()
let asset = AVURLAsset(url: videoURL)
let videoTrack = try await asset.loadTracks(withMediaType: .video).first
let bodyPoseStream = try await videoProcessor.addRequest(bodyPoseRequest)
videoProcessor.startAnalysis()
do {
    for try await observations in bodyPoseStream {
        guard let observation = observations.first else { continue }
        if let timeRange = observation.timeRange {
            /// do something...
        }
    }
} catch {
    print("\(error.localizedDescription)")
}
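Not the original VideoProcessor flow, but a minimal sketch of one way to skip bad frames: pull frames yourself (here with AVAssetImageGenerator; the sample times are assumed to come from your own pipeline) and run the request per frame, so a failure on one frame only skips that frame instead of ending the stream.

import AVFoundation
import Vision

// Sketch only: per-frame analysis so a -18 / "failed to processImage" error
// is caught for that frame and iteration continues.
func analyzeFrames(of videoURL: URL, at times: [CMTime]) async {
    let asset = AVURLAsset(url: videoURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero
    let request = DetectHumanBodyPoseRequest()

    for time in times {
        do {
            let (cgImage, actualTime) = try await generator.image(at: time)
            let observations = try await ImageRequestHandler(cgImage).perform(request)
            print("\(actualTime.seconds)s: \(observations.count) pose observation(s)")
        } catch {
            // Only this frame is skipped; the loop keeps going.
            print("Skipping frame at \(time.seconds)s: \(error.localizedDescription)")
            continue
        }
    }
}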
0 replies · 1 boost · 377 views · Oct ’24
Unable to Get Result from DetectHorizonRequest - Result is nil
I am using Apple’s Vision framework with DetectHorizonRequest to detect the horizon in an image. Here is my code:

func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        let result = try await request.perform(on: ciImage)
        print(result)
    } catch {
        print(error)
    }
}

After calling the perform method, I am getting result as nil. To ensure the request's correctness, I have verified the following:
• The input CIImage is valid and contains a visible horizon.
• No errors are being thrown.
• The relevant frameworks are properly imported.

Given that my image contains a clear horizon, why am I still not getting any results? I would appreciate any help or suggestions to resolve this issue. Thank you for your support! (The image was attached to the original post.)
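A minimal sketch of handling the optional result, assuming a nil return simply means the detector found no horizon in that image rather than an error condition:

import Vision
import CoreImage

func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        if let observation = try await request.perform(on: ciImage) {
            // A horizon was found; inspect the observation here.
            print("Horizon found:", observation)
        } else {
            // nil is not an error: no horizon was detected in this image.
            // Check the image extent and orientation before retrying.
            print("No horizon detected; image extent =", ciImage.extent)
        }
    } catch {
        print(error)
    }
}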
0 replies · 0 boosts · 625 views · Oct ’24
Object capture
Hi all, is it possible to record a video using Object Capture instead of taking a series of pictures? And is it possible to get the bounding box coordinates of the object we capture?
0 replies · 0 boosts · 499 views · Oct ’24
Create ML not recognizing Acceleration and Rotation Features
Hi, I'm training a model that should detect a forehand and a backhand stroke. The data looks like this:

activity,timestamp,Acceleration_X,Acceleration_Y,Acceleration_Z,Rotation_X,Rotation_Y,Rotation_Z
forehand,0.0,0.08,-0.08,0.03,0.18,0.26,0.32

I can load it in Create ML, but it shows the acceleration and rotation X, Y, Z values as separate Doubles and not as one feature. What do I have to change to make this work? Thank you
0 replies · 0 boosts · 441 views · Oct ’24
Will Apple Intelligence gather feedback from users out of beta?
I had assumed that Apple Intelligence features would not allow users to give a thumbs up or down once they are released later this year. But I recently stumbled upon new marketing material for the iPad mini (A17 Pro), and an embedded video on the marketing page shows the ability to give a thumbs up or down on an image generated with Image Wand. https://www.apple.com/ipad-mini/ Was my assumption wrong that non-beta users would not be able to submit feedback on the model’s outputs, or was Apple perhaps using a screen recording of an unreleased beta and forgot to disable the feedback UI? I assume it can’t be the latter.
1 reply · 1 boost · 428 views · Oct ’24
Seeking Feedback on an Idea: Real-Time Siri Running Coach for iOS
Hello everyone, I hope you’re all doing well. I’m not a developer, but I have an idea for an iOS app that I’d love to get your thoughts on. I wanted to share it here to gather feedback from this knowledgeable community and to learn from your expertise.

Idea Overview: Real-Time AI Running Coach for iOS
The concept is an iOS application that provides personalized, real-time running coaching by leveraging on-device data sources and Apple’s latest technologies. The app aims to offer an adaptive and motivating running experience while ensuring user privacy through on-device processing.

Key Features:
• Personalized Coaching: Utilize real-time biometric data and personal insights to deliver AI-driven coaching tailored to the user’s mental and physical state. Analyze health metrics, activity data, mood check-ins, and more to provide context-based motivational feedback.
• Privacy First: All data processing occurs on-device using Apple’s frameworks like Core ML, ensuring no personal data leaves the device.
• Adaptive Motivation: Implement natural language processing to analyze user inputs like journal entries or mood check-ins, and generate personalized coaching cues based on historical performance and mood trends.
• Performance Enhancement: Offer dynamic adjustments to pace, route, and strategy in real time to help improve running performance, with seamless integration with Apple Watch for real-time data collection and haptic feedback.

Technologies and Frameworks Involved:
• HealthKit: Access health metrics such as heart rate, distance run, VO₂ max, sleep patterns, etc.
• Core ML: On-device machine learning for real-time data analysis without latency.
• Natural Language Processing: Analyze personal inputs for better coaching personalization.
• Core Motion & Core Location: Track motion data and location services for runs.
• AVFoundation & Speech: Provide real-time voice feedback and coaching cues.
• SiriKit Integration: Allow users to initiate workouts and receive updates via Siri.

Target Audience:
• Runners of all levels seeking personalized coaching that adapts to their mental and physical states.
• Users who prioritize privacy and want AI-driven insights without their data leaving the device.
• Tech-savvy fitness enthusiasts who use iOS devices and Apple wearables.

Questions for the Community:
1. Feasibility: Is this idea technically achievable using current iOS frameworks and technologies?
2. Data Access: Are there limitations in accessing and processing the necessary data on-device, especially regarding privacy and permissions?
3. Potential Challenges: What hurdles might developers face in creating such an app, and how could they be addressed?
4. Advice: As someone without a technical background, what steps would you recommend I take to move this idea forward?

I truly appreciate any feedback or insights you can provide. I’m excited about the potential of this idea but also aware there may be complexities I’m not considering. Thank you for taking the time to read this!

Best regards,
Paul
0 replies · 0 boosts · 462 views · Oct ’24
CreateML
I'm trying to use the Spatial template to perform Object Tracking on a .usdz file that I created. After loading the file, which I can view correctly in the console, I start the training. Initially, disk usage on my Mac increases; after several GB the usage stops, but the training progress then sits at 0.00% for hours with the message "About 8hr." How can I figure out what the issue is? Has anyone else experienced the same problem? Thanks, Diego
1 reply · 1 boost · 587 views · Oct ’24
Apple Intelligence is not available on an iPhone 15 Pro bought in China.
My iPhone 15 Pro is from Hong Kong (China). I am outside of China, and Asia in general; I have never been to China myself, and the iPhone was activated in another country, which is not in the EU. My iPhone's language, Siri language, and region are set to US English, and I have updated to the iOS 18.1 RC, but Apple Intelligence doesn't show up in the Siri settings.
1 reply · 1 boost · 724 views · Oct ’24
Keras 3 and TensorFlow do not have GPU support on Apple silicon
Hi, I am currently running an LSTM on TensorFlow. However, when I switched from Keras 2 to Keras 3, the running time increased about 10x -- it seems there is no GPU acceleration. Here is my setup:

batch size = 256
optimiser = adam
activation = tanh

Model summary:
Layer (type)                      Output Shape       Param #
input_1 (InputLayer)              [(None, 7, 16)]    0
bidirectional (Bidirectional)     (None, 7, 320)     226560
bidirectional_1 (Bidirectional)   (None, 7, 512)     1181696
bidirectional_2 (Bidirectional)   (None, 256)        656384
dense (Dense)                     (None, 1)          257
Total params: 2064897 (7.88 MB)
Trainable params: 2064897 (7.88 MB)
Non-trainable params: 0 (0.00 Byte)

This is the training status with Keras 3.6.0 + TensorFlow 2.17.0 + tensorflow-metal 1.1.0:
Epoch 1/200
28/681 ━━━━ 8:13 756ms/step - loss: 0.5901 - mape: 338.6876 - mse: 0.8591

This is the training status with Keras 2.14.0 + TensorFlow 2.14.0 + tensorflow-metal 1.1.0:
Epoch 1/200
681/681 [==============================] - 37s 49ms/step - loss: 3.6345 - mape: 499038.7500 - mse: 34.4148 - val_loss: 3.5452 - val_mape: 41.7964 - val_mse: 32.0133 - lr: 0.0010

Is this because Keras 3 has no GPU support on macOS? Apart from that, if I change the LSTM activation from tanh to sigmoid in Keras 2, it does not get GPU support either. My system is macOS 15.0.1 and the code was running on Python 3.11. I am not sure why these happen. Thanks
2 replies · 0 boosts · 1.4k views · Oct ’24
Question about Apple Intelligence
I downloaded the RC beta version on my MacBook and joined the waitlist. So far I haven't received any message or notification that I'm in, but I have a question; it's kind of silly, but I just want confirmation. Since I already joined Apple Intelligence through the beta on my MacBook, will I have Apple Intelligence on my iPhone whenever the official version is released? Kind of curious about it.
1 reply · 0 boosts · 324 views · Oct ’24
Apple Intelligence missing Image features released in 18.2 today
I'm using an iPhone 15 Pro Max running developer beta 18.2, released today. I've already been an Apple Intelligence user, and I've now been able to link it with my paid ChatGPT account. However, I'm searching for the Image features everyone seems to be posting about and cannot find them anywhere. I'm apparently supposed to sign up for beta access to the Image features through some new app from Apple that was supposedly included in this build update, but I cannot find it. What gives?
4 replies · 2 boosts · 1.5k views · Oct ’24
New Vision API - Core ML - "The VNDetectorProcessOption_ScenePrints required option was not found"
I'm trying to run a Core ML model. This is an image classifier generated using:

let parameters = MLImageClassifier.ModelParameters(
    validation: .dataSource(validationDataSource),
    maxIterations: 25,
    augmentation: [],
    algorithm: .transferLearning(
        featureExtractor: .scenePrint(revision: 2),
        classifier: .logisticRegressor
    )
)
let model = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir.url),
    parameters: parameters
)

I'm trying to run it with the new async Vision API:

let model = try MLModel(contentsOf: modelUrl)
guard let modelContainer = try? CoreMLModelContainer(model: model) else {
    fatalError("The model is missing")
}
let request = CoreMLRequest(model: modelContainer)
let image = NSImage(named: "testImage")!
let cgImage = image.toCGImage()!
let handler = ImageRequestHandler(cgImage)
do {
    let results = try await handler.perform(request)
    print(results)
} catch {
    print("Failed: \(error)")
}

This gives me:

Failed: internalError("Error Domain=com.apple.Vision Code=7 "The VNDetectorProcessOption_ScenePrints required option was not found" UserInfo={NSLocalizedDescription=The VNDetectorProcessOption_ScenePrints required option was not found}")

Please help! Am I missing something?
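Not a confirmed fix, but a minimal sketch of a cross-check: running the same .mlmodel through the older VNCoreMLRequest / VNImageRequestHandler path to see whether the scene-print classifier works outside the new API. The modelUrl and cgImage values are assumed to be the same ones used above.

import Vision
import CoreML

// Sketch only: classify cgImage with the legacy Vision API as a workaround/diagnostic.
func classifyWithLegacyVision(modelUrl: URL, cgImage: CGImage) throws {
    let mlModel = try MLModel(contentsOf: modelUrl)
    let vnModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: vnModel)
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    if let results = request.results as? [VNClassificationObservation] {
        for result in results.prefix(3) {
            // Top labels and confidences from the classifier.
            print(result.identifier, result.confidence)
        }
    }
}

If this path succeeds with the same model, the issue is specific to how the new CoreMLRequest handles the scene-print feature extractor rather than to the model itself.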
2 replies · 0 boosts · 499 views · Oct ’24
Core ML model prediction at 120 FPS is faster than at 60 FPS
Hi, I found that continuously predicting with the same Core ML model at 120 FPS is faster than at 60 FPS. I use a MacBook Pro M2. With ProMotion turned on, running Core ML model prediction against a 120 FPS video gives an average prediction time of 7.46 ms. With ProMotion turned off and a 60 Hz refresh rate, running the same prediction against a 60 FPS video gives an average prediction time of 10.91 ms. (Screenshots of both measurements were attached to the original post.) What could be the technical explanation for these results? Is there any documentation or technical literature that addresses this behavior?
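A minimal sketch of how the latency could be measured independently of the display: a tight timing loop around MLModel.prediction(from:), decoupled from any frame-paced video loop. The model and inputFeatures (an MLFeatureProvider matching the model's inputs) are assumed to exist.

import CoreML
import QuartzCore

// Sketch: average prediction latency outside of any display-paced loop,
// so the number is not influenced by the refresh rate or frame pacing.
func averagePredictionTimeMs(model: MLModel,
                             inputFeatures: MLFeatureProvider,
                             iterations: Int = 500) throws -> Double {
    // Warm up once so model load/compile time is not counted.
    _ = try model.prediction(from: inputFeatures)

    let start = CACurrentMediaTime()
    for _ in 0..<iterations {
        _ = try model.prediction(from: inputFeatures)
    }
    let elapsed = CACurrentMediaTime() - start
    return elapsed / Double(iterations) * 1000.0   // milliseconds per prediction
}

Comparing this number against the per-frame figures would show how much of the difference comes from the model itself versus how the surrounding loop is scheduled.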
2 replies · 0 boosts · 558 views · Oct ’24
Switched region from China to the US, but still unable to use Apple Intelligence
When Apple Intelligence originally launched, there were terms and conditions for activating it. To activate Apple Intelligence in China, the iPhone must be a non-Chinese model, meaning it was not purchased in mainland China (Hong Kong and Macau do not count), and a Chinese Apple account also cannot activate Apple Intelligence. To meet these requirements, I traveled to Hong Kong and purchased an iPhone 16 Pro Max, and I decided to switch my account region from China to the United States.

I started the region switch on October 19 at 2:00 a.m. CST (3:00 p.m. Shanghai time). As of October 24, 8:30 a.m. CST (9:30 p.m. Shanghai time), I still can't join the Apple Intelligence waitlist. I have also upgraded my phone to iOS 18.2.

I contacted Apple Support using my Chinese phone number and was transferred to the Philippines Apple Support team, which hasn't been able to help me at all. They keep saying that the iOS beta on my phone has a problem. But when I log out of this Apple ID and sign in with another Apple ID from the UK, I can successfully enable Apple Intelligence. What does that say? It shows that my Apple account is the problem: it didn't switch successfully to the United States server. The Philippine support team keeps asking me to restore my iPhone. I've told them that I have used several Apple accounts from the United States and the United Kingdom that can successfully enable Apple Intelligence, but they insist that my Apple account doesn't have any problem.

Apple, please solve this problem! Anyone who is facing this kind of problem, please share it with us. Cheers!
1 reply · 0 boosts · 3.3k views · Oct ’24