Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and find out what's possible for your app here.

All subtopics
Posts under Machine Learning & AI topic

Post · Replies · Boosts · Views · Activity
About VisionKit DataScannerViewController
Hi, I'm having a problem with DataScannerViewController. I'm using the volume barcode scanning feature in my app, and before this I was using an AVCaptureDevice with the ultra-wide-angle camera set. After discovering DataScannerViewController, we planned to replace the previous, now obsolete code with it. Overall the switch went fine, but when I want to set the ultra-wide angle, I don't know where to start. When I read minZoomFactor I get 0.0, and when I try to set zoomFactor to 1.0 I find that it is not valid. Note: inside func dataScannerDidZoom(_ dataScanner: DataScannerViewController), reading minZoomFactor and setting zoomFactor in this delegate method does work. What should I do next? I want to use only DataScannerViewController and still get the ultra-wide angle. Thanks a lot.
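One approach that seems consistent with the behavior described above is to apply the zoom only after scanning has started, since the zoom-related properties apparently become valid once the camera session is running. A rough sketch; the 0.5 target factor is an assumption about what maps to an ultra-wide-style field of view:

import UIKit
import VisionKit

func presentUltraWideScanner(from presenter: UIViewController) {
    let scanner = DataScannerViewController(
        recognizedDataTypes: [.barcode()],
        qualityLevel: .balanced,
        isHighlightingEnabled: true
    )
    presenter.present(scanner, animated: true) {
        do {
            try scanner.startScanning()
            // minZoomFactor reportedly reads 0.0 before the session runs; once scanning
            // is active, clamp the requested ultra-wide-style factor to the valid range.
            scanner.zoomFactor = max(scanner.minZoomFactor, 0.5)
        } catch {
            print("Failed to start scanning: \(error)")
        }
    }
}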
Replies: 1 · Boosts: 0 · Views: 636 · Activity: Jan ’25
AI-Powered Feed Customization via User-Defined Algorithm
Hey guys 👋 I’ve been thinking about a feature idea for iOS that could totally change the way we interact with apps like Twitter/X. Imagine if we could define our own recommendation algorithm, and have an AI on the iPhone that replaces the suggested tweets in the feed with ones that match our personal interests — based on public tweets, and without hacking anything. Kinda like a personalized "AI skin" over the app that curates content you actually care about. Feels like this would make content way more relevant and less algorithmically manipulative. Would love to know what you all think — and if Apple could pull this off 🔥
Replies: 1 · Boosts: 0 · Views: 72 · Activity: Jun ’25
Help Needed: SwiftUI View with Camera Integration and Core ML Object Recognition
Hi everyone, I'm working on a SwiftUI app and need help building a view that integrates the device's camera and uses a pre-trained Core ML model for real-time object recognition. Here's what I want to achieve:

Open the device's camera from a SwiftUI view.
Capture frames from the camera feed and analyze them using a Create ML-trained Core ML model.
If a specific figure/object is recognized, automatically close the camera view and navigate to another screen in my app.

I'm looking for guidance on:

Setting up live camera capture in SwiftUI.
Using Core ML and Vision frameworks for real-time object recognition in this context.
Managing navigation between views when the recognition condition is met.

Any advice, code snippets, or examples would be greatly appreciated! Thanks in advance!
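Not a full answer, but a sketch of the Vision piece may help frame the rest. The model class name, target label, and confidence threshold below are placeholders; the AVCaptureSession and SwiftUI navigation still need to be wired around it (for example, onMatch can flip a published flag that a NavigationStack observes):

import AVFoundation
import CoreML
import Vision

// "MyClassifier" is a placeholder for the Create ML-generated model class, and
// "targetFigure" for whatever label the app should react to.
final class FrameClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    var onMatch: (() -> Void)?
    private let targetLabel = "targetFigure"

    private lazy var request: VNCoreMLRequest? = {
        guard let mlModel = try? MyClassifier(configuration: MLModelConfiguration()).model,
              let vnModel = try? VNCoreMLModel(for: mlModel) else { return nil }
        return VNCoreMLRequest(model: vnModel) { [weak self] request, _ in
            guard let self,
                  let top = (request.results as? [VNClassificationObservation])?.first,
                  top.identifier == self.targetLabel,
                  top.confidence > 0.8 else { return }
            // Trigger navigation (close the camera view) on the main thread.
            DispatchQueue.main.async { self.onMatch?() }
        }
    }()

    // Called for every frame delivered by an AVCaptureVideoDataOutput.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let request,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
}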
Replies: 1 · Boosts: 0 · Views: 732 · Activity: Jan ’25
CoreML: Model loading utilities
Hello, we find that models sometimes load very fast (<< 1 second) and sometimes hit very long load times (>> 120 seconds). During the slow loads, the model is being compiled. We would greatly appreciate the ability to check cache validity via CoreML and detect that we are about to hit a long load, so that we can mitigate it and provide a good user experience. A secondary issue: sometimes the cache is corrupted (typically a .mpsgraphpackage yielding Metal cold asserts). This yields load failures and OS errors that persist between launches, and we have to manually delete the cached CoreML assets (~/Library/..../my-app/...). A CoreML API for clearing caches and hardening against asserts across the load paths would be appreciated.
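For what it's worth, a mitigation that can be built today is to load asynchronously and time the load, so the slow (compiling) path can at least be detected after the fact and surfaced to the user. There is no public API I'm aware of for checking cache validity up front, and the 5-second threshold below is arbitrary:

import CoreML

func loadModel(at url: URL) async throws -> MLModel {
    let clock = ContinuousClock()
    let start = clock.now
    let model = try await MLModel.load(contentsOf: url, configuration: MLModelConfiguration())
    let elapsed = clock.now - start
    if elapsed > .seconds(5) {
        // Likely recompiled from scratch (cold or invalidated cache).
        print("Slow model load: \(elapsed)")
    }
    return model
}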
Replies: 1 · Boosts: 0 · Views: 118 · Activity: Jun ’25
What is the Foundation Models support for basic math?
I am experimenting with Foundation Models in my time-tracking app to analyze users' tracked events, but I am finding that the model struggles with even basic computation of time, specifically converting from seconds to hours and minutes. To give just one example, when I prompt: "Convert 3672 seconds to hours, minutes, and seconds. Don't include the calculations in the resulting output", I get: "3672 seconds is equal to 1 hour, 0 minutes, and 36 seconds", which is clearly wrong; it should be 1 hour, 1 minute, and 12 seconds. Another issue I saw a lot is that seconds were treated as minutes, or that the hours were just completely off. What can I do to make the support for math better? Or is that just something that the model is not meant to be used for?
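Until the model handles arithmetic reliably, one option is to keep the unit math out of the model entirely and only let it phrase the result. The conversion itself can be done deterministically, for example:

import Foundation

// Deterministic duration formatting: 3672 seconds -> "1 hour, 1 minute, 12 seconds".
let formatter = DateComponentsFormatter()
formatter.allowedUnits = [.hour, .minute, .second]
formatter.unitsStyle = .full
let phrase = formatter.string(from: 3672) ?? ""

The pre-computed phrase can then be interpolated into the prompt, or exposed to the model as a Foundation Models tool, so it never does the arithmetic itself.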
Replies: 1 · Boosts: 0 · Views: 194 · Activity: Jun ’25
CoreML Conversion Display Issues
Hello! I have a TrackNet model that I have converted to CoreML (.mlpackage) using coremltools, and the conversion process appears to go smoothly as I get the .mlpackage file I am looking for with the weights and model.mlmodel file in the folder. However, when I drag it into Xcode, it just shows up as 4 script tags instead of the model "interface" that is typically expected. I initially was concerned that my model was not compatible with CoreML, but upon logging the conversions, everything seems to be converted properly. I have some code that may be relevant in debugging this issue:

How I use the model:

model = BallTrackerNet() # this is the model architecture which will be referenced later
device = self.device # cpu
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location=device)) # balltrackerbest is the weights
model = model.to(device)
model.eval()

Here is the BallTrackerNet() model itself:

import torch.nn as nn
import torch


class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pad=1, stride=1, bias=True):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=bias),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return self.block(x)


class BallTrackerNet(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.out_channels = out_channels
        self.conv1 = ConvBlock(in_channels=9, out_channels=64)
        self.conv2 = ConvBlock(in_channels=64, out_channels=64)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = ConvBlock(in_channels=64, out_channels=128)
        self.conv4 = ConvBlock(in_channels=128, out_channels=128)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5 = ConvBlock(in_channels=128, out_channels=256)
        self.conv6 = ConvBlock(in_channels=256, out_channels=256)
        self.conv7 = ConvBlock(in_channels=256, out_channels=256)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv8 = ConvBlock(in_channels=256, out_channels=512)
        self.conv9 = ConvBlock(in_channels=512, out_channels=512)
        self.conv10 = ConvBlock(in_channels=512, out_channels=512)
        self.ups1 = nn.Upsample(scale_factor=2)
        self.conv11 = ConvBlock(in_channels=512, out_channels=256)
        self.conv12 = ConvBlock(in_channels=256, out_channels=256)
        self.conv13 = ConvBlock(in_channels=256, out_channels=256)
        self.ups2 = nn.Upsample(scale_factor=2)
        self.conv14 = ConvBlock(in_channels=256, out_channels=128)
        self.conv15 = ConvBlock(in_channels=128, out_channels=128)
        self.ups3 = nn.Upsample(scale_factor=2)
        self.conv16 = ConvBlock(in_channels=128, out_channels=64)
        self.conv17 = ConvBlock(in_channels=64, out_channels=64)
        self.conv18 = ConvBlock(in_channels=64, out_channels=self.out_channels)
        self.softmax = nn.Softmax(dim=1)
        self._init_weights()

    def forward(self, x, testing=False):
        batch_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ups1(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.ups2(x)
        x = self.conv14(x)
        x = self.conv15(x)
        x = self.ups3(x)
        x = self.conv16(x)
        x = self.conv17(x)
        x = self.conv18(x)
        # x = self.softmax(x)
        out = x.reshape(batch_size, self.out_channels, -1)
        if testing:
            out = self.softmax(out)
        return out

    def _init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.uniform_(module.weight, -0.05, 0.05)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)
            elif isinstance(module, nn.BatchNorm2d):
                nn.init.constant_(module.weight, 1)
                nn.init.constant_(module.bias, 0)

I have been struggling with this conversion for almost 2 weeks now so any help, ideas or pointers would be greatly appreciated! Thanks! Michael
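One way to check whether the package itself is usable, independent of how Xcode previews it, is to compile and load it directly with Core ML at runtime. A minimal sketch; the package path is a placeholder:

import Foundation
import CoreML

func verifyConvertedPackage() throws {
    // Placeholder path to the converted package.
    let packageURL = URL(fileURLWithPath: "/path/to/BallTracker.mlpackage")
    let compiledURL = try MLModel.compileModel(at: packageURL)
    let model = try MLModel(contentsOf: compiledURL)
    // If this prints the expected 1 x 9 x 360 x 640 MultiArray input, the conversion
    // itself is fine and the problem is limited to Xcode's model preview.
    print(model.modelDescription.inputDescriptionsByName)
}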
Replies: 13 · Boosts: 0 · Views: 1.1k · Activity: Jan ’25
Xcode AI Coding Assistance Option(s)
Not finding a lot on the Swift Assist technology announced at WWDC 2024. Does anyone know the latest status? Also, currently I use OpenAI's macOS app and its 'Work With...' functionality to assist with Xcode development, and this is okay, certainly saves copying code back and forth, but it seems like AI should be able to do a lot more to help with Xcode app development. I guess I'm looking at what people are doing with AI in Visual Studio, Cline, Cursor and other IDEs and tools like those and feel a bit left out working in Xcode. Please let me know if there are AI tools or techniques out there you use to help with your Xcode projects. Thanks in advance!
Replies: 6 · Boosts: 0 · Views: 11k · Activity: Mar ’25
How to encode Tool.Output (aka PromptRepresentable)?
Hey, I've been trying to write an AI agent for OpenAI's GPT-5, but using the @Generable Tool types from the FoundationModels framework, which is super awesome btw! I'm having trouble implementing the tool calling, though. When I receive a tool call from the OpenAI API, I do the following:

1. Find the tool in my [any Tool] array via the tool name I get from the model:

if let tool = tools.first(where: { $0.name == functionCall.name }) {
    // ...
}

2. Parse the arguments of the tool call via GeneratedContent(json:):

let generatedContent = try GeneratedContent(json: functionCall.arguments)

3. Pass the tool and arguments to a function that calls tool.call(arguments: arguments) and returns the tool's output type:

private func execute<T: Tool>(_ tool: T, with generatedContent: GeneratedContent) async throws -> T.Output {
    let arguments = try T.Arguments.init(generatedContent)
    return try await tool.call(arguments: arguments)
}

Up to this point, everything is working as expected. However, the tool's output type is any PromptRepresentable, and I have no idea how to turn that into something that I can encode and send back to the model. I assumed there might be a way to turn it into a GeneratedContent, but there is no fitting initializer. Am I missing something, or is this not supported? Without a way to return the output to an external provider, it wouldn't really be possible to use the FoundationModels Tool type, I think. That would be unfortunate because it's implemented so elegantly. Thanks!
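Not an answer from the framework side, but one workaround (an application-level assumption, not a FoundationModels API) is to constrain the helper so it only accepts tools whose output can be rendered as a plain string, and send that string back to the external provider:

import FoundationModels

// Workaround sketch: restrict the generic helper to outputs that can be rendered as a
// String (which String itself satisfies), so the result can be JSON-encoded for OpenAI.
private func execute<T: Tool>(_ tool: T, with generatedContent: GeneratedContent) async throws -> String
where T.Output: CustomStringConvertible {
    let arguments = try T.Arguments(generatedContent)
    let output = try await tool.call(arguments: arguments)
    return output.description
}

Tools whose Output is a richer @Generable type would need their own conversion; this only covers string-like outputs.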
Replies: 2 · Boosts: 0 · Views: 228 · Activity: Aug ’25
iOS 18's new RecognizeTextRequest DEADLOCKS if more than 2 are run in parallel
Following the recommendations of the WWDC24 video "Discover Swift enhancements in the Vision framework" (cf. the video at 10'41"), I used the following code to perform multiple of the new iOS 18 RecognizeTextRequests in parallel.

Problem: if more than 2 requests are run in parallel, the requests hang, leaving the app in a state where no more requests can be started -> deadlock.

I tried other ways to run the requests, but no matter the method employed, or what device I use, no more than 2 requests can ever be run in parallel.

func triggerDeadlock() async throws {
    try await withThrowingTaskGroup(of: Void.self) { group in
        // See: WWDC 2024 "Discover Swift enhancements in the Vision framework" at 10:41
        // ############## THIS IS KEY
        let maxOCRTasks = 5 // On a real device, if more than 2 RecognizeTextRequests are launched in parallel using tasks, the requests hang
        // ############## THIS IS KEY

        for idx in 0..<maxOCRTasks {
            let url = ... // URL to some image
            group.addTask {
                // Perform OCR
                let _ = try await performOCRRequest(on: url)
            }
        }

        var nextIndex = maxOCRTasks
        for try await _ in group { // Wait for the result of the next child task that finished
            if nextIndex < pageCount {
                group.addTask {
                    let url = ... // URL to some image
                    // Perform OCR
                    let _ = try await performOCRRequest(on: url)
                }
                nextIndex += 1
            }
        }
    }
}

// MARK: - ASYNC/AWAIT version with iOS 18

@available(iOS 18, *)
func performOCRRequest(on url: URL) async throws -> [RecognizedText] {
    // Create the request (single request: no need for an ImageRequestHandler)
    var request = RecognizeTextRequest()

    // Configure the request
    request.recognitionLevel = .accurate
    request.automaticallyDetectsLanguage = true
    request.usesLanguageCorrection = true
    request.minimumTextHeightFraction = 0.016

    // Perform the request
    let textObservations: [RecognizedTextObservation] = try await request.perform(on: url)

    // Convert [RecognizedTextObservation] to [RecognizedText]
    return textObservations.compactMap { observation in
        observation.topCandidates(1).first
    }
}

I also found this Swift forums post mentioning something very similar. I also opened a feedback: FB17240843
Replies: 7 · Boosts: 0 · Views: 243 · Activity: Aug ’25
How to confirm whether Create ML is training
I am currently training a Tabular Classification model in Create ML. The dataset comprises 30 features, with 1,000,000 training data points and 1,000,000 validation data points. Could you please estimate the approximate training time on an M4 Max MacBook Pro? During the training process, Create ML has been displaying the "Processing" status, but there is no progress bar. I would like to confirm whether the training is still ongoing, as I have often suspected that it has stopped.
Replies: 1 · Boosts: 0 · Views: 644 · Activity: Jan ’25
Request for Agentic AI Mode (MCP Protocol) Support in Future Versions of iOS or Xcode
Hello Apple Team, Thank you for the recent Group Lab and for your continued work on advancing Xcode and developer tools. I’d like to submit a feature request: Are there any plans to introduce support for Agentic AI Mode (MCP protocol) in future versions of iOS or Xcode? As developer tools evolve toward more intelligent and context-aware environments, the integration of agentic AI capabilities could significantly enhance productivity and unlock new creative workflows. Looking forward to your consideration, and thank you again for the excellent session. Best regards
Replies: 3 · Boosts: 0 · Views: 174 · Activity: Jun ’25
VNCoreMLTransform - request failed
I keep getting this error: "Prediction Failed: The VNCoreMLTransform request failed". I have tried a file picker and the photo library; both give the same result. I have been debugging the resize to 360x360 but am still facing this error. The model I'm trying to implement was created with CreateMLComponents, following the WWDC 2022 banana-ripeness example, and I used an index for each .jpg. Is there a possible way to solve this, or is the error somewhere in the training of the model?
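For comparison, this is roughly the classification path I would expect to work; the model class name is a placeholder, and the imageCropAndScaleOption line is the main thing worth checking when the model expects a fixed 360x360 input:

import CoreML
import Vision
import UIKit

// "RipenessClassifier" is a placeholder for the trained model's generated class.
func classify(_ image: UIImage) throws {
    let mlModel = try RipenessClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: vnModel) { request, error in
        let results = request.results as? [VNClassificationObservation]
        print(results?.first?.identifier ?? "no result", error ?? "")
    }
    // Let Vision handle the resize to the model's 360x360 input.
    request.imageCropAndScaleOption = .scaleFill

    guard let cgImage = image.cgImage else { return }
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}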
Replies: 1 · Boosts: 0 · Views: 462 · Activity: Mar ’25
FoundationModels Content Sanitizer Blocking Legitimate Text Processing
I'm developing a macOS application using the FoundationModels framework (LanguageModelSession) and encountering issues with the content sanitizer blocking legitimate text input.

Issue Description:
The content sanitizer is flagging text strings that contain certain substrings, even when they represent legitimate technical content. For example:
F_SEEL_SEX1S.wav (sE Electronics SEX1S microphone model)
Technical product identifiers
Serial numbers and version codes

Broader Concern:
The content sanitizer appears to be applying restrictions that seem inappropriate for user-owned content. Even if a filename were something like "human sex.wav", users should have the right to process their own legitimate files on their own devices without content filtering interference.

Error Messages:
SensitiveContentSettings: Sanitizer model found unsafe content in value
FoundationModels.LanguageModelSession.GenerationError error 2

Questions:
1. Is there a way to disable content sanitization for processing user-owned content?
2. What's the recommended approach for applications that need to handle arbitrary user text?
3. Are there APIs to process personal content without filtering restrictions?

Environment:
macOS 26.0
FoundationModels framework
LanguageModelSession

Any guidance would be appreciated.
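One mitigation that doesn't depend on any FoundationModels setting (purely an application-level workaround) is to mask filenames with neutral tokens before the text reaches the session and substitute them back in the model's output afterwards. A sketch with a deliberately rough filename pattern:

import Foundation

// Swap filenames for neutral tokens before prompting, then map the tokens back later.
func maskFilenames(in text: String) -> (masked: String, map: [String: String]) {
    let pattern = #"[\w\-.]+\.(wav|aiff|mp3|flac)"#
    guard let regex = try? NSRegularExpression(pattern: pattern, options: [.caseInsensitive]) else {
        return (text, [:])
    }
    var masked = text
    var map: [String: String] = [:]
    let matches = regex.matches(in: text, range: NSRange(text.startIndex..., in: text))
    // Replace from the end so earlier ranges stay valid while the string is mutated.
    for (index, match) in matches.enumerated().reversed() {
        guard let range = Range(match.range, in: masked) else { continue }
        let token = "FILE_\(index)"
        map[token] = String(masked[range])
        masked.replaceSubrange(range, with: token)
    }
    return (masked, map)
}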
Replies: 1 · Boosts: 0 · Views: 311 · Activity: Jun ’25
DataScannerViewController doesn't recognize currency values less than 1.00
Hi, DataScannerViewController doesn't recognize currency values less than 1.00 (e.g. 0.59 USD, 0.99 EUR, etc.). Why? How can I solve the problem? This behavior is not described in the Apple documentation; is there a solution? This is my code:

func makeUIViewController(context: Context) -> DataScannerViewController {
    let dataScanner = DataScannerViewController(recognizedDataTypes: [.text(textContentType: .currency)])
    return dataScanner
}
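A hedged workaround sketch, if the .currency content type keeps dropping sub-1.00 amounts: scan plain text instead and filter the transcripts yourself. The regular expression below is an assumption to adjust for the locales you support, and the coordinator must be retained somewhere since the scanner's delegate is weak:

import Foundation
import VisionKit

final class CurrencyScannerCoordinator: NSObject, DataScannerViewControllerDelegate {
    // Rough pattern for amounts like "0.59 USD" or "0,99 €".
    private let amountPattern = #"\d+[.,]\d{2}\s?(USD|EUR|\$|€)"#

    func makeScanner() -> DataScannerViewController {
        let scanner = DataScannerViewController(recognizedDataTypes: [.text()])
        scanner.delegate = self
        return scanner
    }

    func dataScanner(_ dataScanner: DataScannerViewController,
                     didAdd addedItems: [RecognizedItem],
                     allItems: [RecognizedItem]) {
        for case let .text(text) in addedItems
        where text.transcript.range(of: amountPattern, options: .regularExpression) != nil {
            print("Found amount: \(text.transcript)")
        }
    }
}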
Replies: 4 · Boosts: 0 · Views: 150 · Activity: Apr ’25
Converted Model Preview Issues in Xcode
Hello! I have a TrackNet model that I have converted to CoreML (.mlpackage) using coremltools, and the conversion process appears to go smoothly as I get the .mlpackage file I am looking for with the weights and model.mlmodel file in the folder. However, when I drag it into Xcode, it just shows up as 4 script tags (as pictured) instead of the model "interface" that is typically expected. I initially was concerned that my model was not compatible with CoreML, but upon logging the conversions, everything seems to be converted properly. I have some code that may be relevant in debugging this issue:

How I use the model:

model = BallTrackerNet() # this is the model architecture which will be referenced later
device = self.device # cpu
model.load_state_dict(torch.load("models/balltrackerbest.pt", map_location=device)) # balltrackerbest is the weights
model = model.to(device)
model.eval()

Here is the BallTrackerNet() model itself:

import torch.nn as nn
import torch


class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, pad=1, stride=1, bias=True):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=pad, bias=bias),
            nn.ReLU(),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        return self.block(x)


class BallTrackerNet(nn.Module):
    def __init__(self, out_channels=256):
        super().__init__()
        self.out_channels = out_channels
        self.conv1 = ConvBlock(in_channels=9, out_channels=64)
        self.conv2 = ConvBlock(in_channels=64, out_channels=64)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv3 = ConvBlock(in_channels=64, out_channels=128)
        self.conv4 = ConvBlock(in_channels=128, out_channels=128)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv5 = ConvBlock(in_channels=128, out_channels=256)
        self.conv6 = ConvBlock(in_channels=256, out_channels=256)
        self.conv7 = ConvBlock(in_channels=256, out_channels=256)
        self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv8 = ConvBlock(in_channels=256, out_channels=512)
        self.conv9 = ConvBlock(in_channels=512, out_channels=512)
        self.conv10 = ConvBlock(in_channels=512, out_channels=512)
        self.ups1 = nn.Upsample(scale_factor=2)
        self.conv11 = ConvBlock(in_channels=512, out_channels=256)
        self.conv12 = ConvBlock(in_channels=256, out_channels=256)
        self.conv13 = ConvBlock(in_channels=256, out_channels=256)
        self.ups2 = nn.Upsample(scale_factor=2)
        self.conv14 = ConvBlock(in_channels=256, out_channels=128)
        self.conv15 = ConvBlock(in_channels=128, out_channels=128)
        self.ups3 = nn.Upsample(scale_factor=2)
        self.conv16 = ConvBlock(in_channels=128, out_channels=64)
        self.conv17 = ConvBlock(in_channels=64, out_channels=64)
        self.conv18 = ConvBlock(in_channels=64, out_channels=self.out_channels)
        self.softmax = nn.Softmax(dim=1)
        self._init_weights()

    def forward(self, x, testing=False):
        batch_size = x.size(0)
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.pool2(x)
        x = self.conv5(x)
        x = self.conv6(x)
        x = self.conv7(x)
        x = self.pool3(x)
        x = self.conv8(x)
        x = self.conv9(x)
        x = self.conv10(x)
        x = self.ups1(x)
        x = self.conv11(x)
        x = self.conv12(x)
        x = self.conv13(x)
        x = self.ups2(x)
        x = self.conv14(x)
        x = self.conv15(x)
        x = self.ups3(x)
        x = self.conv16(x)
        x = self.conv17(x)
        x = self.conv18(x)
        # x = self.softmax(x)
        out = x.reshape(batch_size, self.out_channels, -1)
        if testing:
            out = self.softmax(out)
        return out

    def _init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Conv2d):
                nn.init.uniform_(module.weight, -0.05, 0.05)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)
            elif isinstance(module, nn.BatchNorm2d):
                nn.init.constant_(module.weight, 1)
                nn.init.constant_(module.bias, 0)

Here is also the metadata of my model:

[
  {
    "metadataOutputVersion" : "3.0",
    "storagePrecision" : "Float16",
    "outputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float32",
        "formattedType" : "MultiArray (Float32 1 × 256 × 230400)",
        "shortDescription" : "",
        "shape" : "[1, 256, 230400]",
        "name" : "var_462",
        "type" : "MultiArray"
      }
    ],
    "modelParameters" : [ ],
    "specificationVersion" : 6,
    "mlProgramOperationTypeHistogram" : {
      "Cast" : 2,
      "Conv" : 18,
      "Relu" : 18,
      "BatchNorm" : 18,
      "Reshape" : 1,
      "UpsampleNearestNeighbor" : 3,
      "MaxPool" : 3
    },
    "computePrecision" : "Mixed (Float16, Float32, Int32)",
    "isUpdatable" : "0",
    "availability" : {
      "macOS" : "12.0",
      "tvOS" : "15.0",
      "visionOS" : "1.0",
      "watchOS" : "8.0",
      "iOS" : "15.0",
      "macCatalyst" : "15.0"
    },
    "modelType" : {
      "name" : "MLModelType_mlProgram"
    },
    "userDefinedMetadata" : {
      "com.github.apple.coremltools.source_dialect" : "TorchScript",
      "com.github.apple.coremltools.source" : "torch==2.5.1",
      "com.github.apple.coremltools.version" : "8.1"
    },
    "inputSchema" : [
      {
        "hasShapeFlexibility" : "0",
        "isOptional" : "0",
        "dataType" : "Float32",
        "formattedType" : "MultiArray (Float32 1 × 9 × 360 × 640)",
        "shortDescription" : "",
        "shape" : "[1, 9, 360, 640]",
        "name" : "input_frames",
        "type" : "MultiArray"
      }
    ],
    "generatedClassName" : "BallTracker",
    "method" : "predict"
  }
]

I have been struggling with this conversion for almost 2 weeks now so any help, ideas or pointers would be greatly appreciated! Let me know if any other information would be helpful to see as well. Thanks! Michael
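Since the generated interface isn't showing, one way to exercise the model anyway is to call it through the untyped MLModel API, using the input name and shape from the metadata above. A minimal sketch; the package path is a placeholder:

import Foundation
import CoreML

func runBallTracker() throws {
    // Compile and load the package directly, bypassing the Xcode-generated class.
    let packageURL = URL(fileURLWithPath: "/path/to/BallTracker.mlpackage") // placeholder
    let model = try MLModel(contentsOf: MLModel.compileModel(at: packageURL))

    // Input name and shape come from the metadata above: "input_frames", [1, 9, 360, 640].
    let frames = try MLMultiArray(shape: [1, 9, 360, 640], dataType: .float32)
    let input = try MLDictionaryFeatureProvider(dictionary: ["input_frames": frames])

    // The output is the [1, 256, 230400] MultiArray named "var_462" in the metadata.
    let output = try model.prediction(from: input)
    print(output.featureValue(for: "var_462")?.multiArrayValue?.shape ?? [])
}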
Replies: 1 · Boosts: 0 · Views: 662 · Activity: Jan ’25