Reply to iCloud Drive silent upload deadlock caused by stale HTTP/3 session in nsurlsessiond (FB22476701)
Thank you for providing such a detailed account of the iCloud Drive file upload deadlock issue on macOS 26.4.1. It sounds like you've done extensive troubleshooting and analysis to identify the root cause. Here's a summary of what you've discovered, plus some additional thoughts that might help refine your approach or assist others experiencing similar issues.

Summary of Findings:

- Root cause: A stale HTTP/3 (QUIC) session in nsurlsessiond's BackgroundConnectionPool leads to a deadlock during file uploads.
- Behavior: The deadlock occurs specifically with HTTP/3, while HTTP/1.1 works without issues post-restart. Larger files (>100 KB) are affected; smaller files may occasionally succeed. Restarting both cloudd and nsurlsessiond resolves the issue temporarily by clearing the poisoned session.
- Reproduction: Consistent behavior observed across multiple tests with varied file sizes.
- Diagnosis: Log analysis can identify occurrences using specific grep patterns.
- Recovery: A targeted kill command for user-level instances of cloudd and nsurlsessiond provides a quick fix.

Potential Enhancements for Apple:

- Automatic session management: Invalidate QUIC sessions after a threshold of failures (as you suggested), potentially integrated directly into CFNetwork or NSURLSession.
- Improved logging: Surface errors like these to users in Finder or System Settings, with actionable suggestions or clearer error messages.
- API for pool invalidation: Expose APIs that allow services like CloudKit to explicitly invalidate problematic session pools without a full daemon restart.
- Diagnostic tools: Add built-in diagnostics that users can run to identify and resolve such deadlocks without manual intervention.

For Users and Administrators:

- Script automation: For frequent occurrences, consider a monitoring script that runs the recovery command automatically whenever the specific deadlock pattern appears in the logs.
- Alternative protocols: Temporarily disabling QUIC (if feasible and supported) might be a workaround until a permanent fix is applied, though this may impact performance for other QUIC-enabled applications.
- Feedback loop: Encourage affected users to submit feedback through Apple's Feedback Assistant, including the collected logs, so the issue is prioritized and tracked.

Further Debugging:

- Network packet analysis: Capturing packets during a deadlock might provide additional insight into what exactly fails mid-transfer.
- System state snapshots: Snapshots taken before and after the deadlock could help Apple engineers diagnose the session cache corruption.

Your detailed documentation and methodical approach are invaluable both for addressing the current issue and for helping Apple refine their systems. Keep monitoring for updates from Apple regarding this problem, as they may release patches or guidance based on feedback like yours.
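The script-automation idea can be sketched in shell. Note that the match pattern and the sample log line below are illustrative assumptions, not real nsurlsessiond output; substitute the grep pattern from your own log analysis, and the recovery command you verified:

```shell
#!/bin/sh
# Hedged sketch: watch log output for the (assumed) deadlock signature and
# trigger recovery when it appears. PATTERN is a placeholder, not the real
# log text; replace it with the grep pattern from your analysis.
PATTERN='BackgroundConnectionPool'

# A sample line stands in for `log show --last 5m` output so the matching
# logic is demonstrable anywhere; on an affected Mac, pipe the real log in.
sample_log='quic_conn: upload stalled in BackgroundConnectionPool (session reused)'

if printf '%s\n' "$sample_log" | grep -q "$PATTERN"; then
    echo "deadlock pattern detected"
    # Recovery step (uncomment on an affected Mac; kills only user-level
    # instances, which relaunch automatically):
    # killall -u "$USER" cloudd nsurlsessiond
fi
```

Running this from launchd on an interval would automate the workaround, at the cost of masking the underlying bug, so keep the Feedback Assistant report open regardless.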
2w
Reply to Images added in Reality Composer look darker in AR
When working with images in Reality Composer and noticing that they appear darker in AR compared to other objects, it's likely due to how images are handled in terms of lighting. Here are some insights and recommendations for achieving a more consistent lighting response for image-based artworks.

Expected Behavior and Lighting Model

- Images: Images are typically treated as materials with an albedo texture, meaning they reflect the light that hits them without more complex interactions like specular highlights or subsurface scattering. This can make them appear darker if the scene lighting is dim or the image lacks bright areas.
- Other objects: 3D models often use richer materials that include specular reflections and other lighting properties, which can make them appear brighter and more dynamic under varying lighting conditions.
- Lighting model: Reality Composer uses a simplified lighting model compared to full-fledged 3D engines, relying primarily on ambient and directional lights, which might not fully illuminate textured images as expected.

Recommendations for Consistent Lighting

- Increase ambient light: Raise the ambient light in your scene so images receive enough illumination; higher ambient levels reduce their perceived darkness.
- Add point or spot lights: Place additional lights that directly illuminate the images, positioned to mimic real-world light sources and bring out image detail.
- Adjust image material properties: Where Reality Composer allows it, adjust the material properties of your images; increasing the brightness or contrast of the texture can help it stand out better in AR.
- Use image effects: Apply adjustments such as increased exposure or a slight glow to make images appear more vibrant.
- Preprocess images: Edit your images before importing them, enhancing brightness, contrast, and saturation so they remain visible under various lighting conditions.
- Real-world calibration: Test your AR experience in different real-world environments to see how lighting conditions affect your images, and calibrate your scene's lighting for the typical conditions where the experience will run.
- Feedback and iteration: Continuously test, gather feedback on how images appear in different environments, and adjust lighting and materials accordingly.

By understanding how images and other objects are lit differently in Reality Composer and applying these recommendations, you can achieve a more consistent and visually appealing AR experience for your image-based artworks.
Topic: Design SubTopic: General Tags:
Mar ’26
Reply to First attempt at a PKPass
Creating a custom Wallet pass, especially one intended to display a simple vertical address layout, can be tricky due to the constraints and specific formatting requirements of Apple's Wallet pass format. Based on the JSON you've provided, here are some steps and considerations to help you achieve the desired layout.

Key Considerations

- Field configuration: The primaryFields and auxiliaryFields in your JSON display information on the pass; for a simple address layout, you can use these fields creatively.
- Text styling: Wallet passes have very limited styling options. You can control text alignment per field with the textAlignment attribute, but font size, weight, and placement are largely fixed by the pass style.
- Layout adjustments: Wallet passes don't support arbitrary layouts, so you'll need to fit your content into the available fields and may have to compromise on the exact layout you envisioned.

JSON Adjustments

Here's an adjusted version of your JSON that attempts a vertical address layout using primaryFields:

```json
{
  "formatVersion": 1,
  "passTypeIdentifier": "pass.org.danleys.4KSBarcode",
  "serialNumber": "__SERIAL__",
  "teamIdentifier": "----",
  "organizationName": "4 K.I.D.S. Sake",
  "description": "4KSBarcode",
  "logoText": "4 K.I.D.S. Sake",
  "foregroundColor": "rgb(255, 255, 255)",
  "backgroundColor": "rgb(255,0,0)",
  "storeCard": {
    "primaryFields": [
      {
        "key": "ClientID",
        "label": "Address",
        "value": "339 Remington Blvd"
      },
      {
        "key": "city",
        "label": "",
        "value": "Bolingbrook, IL 60440"
      }
    ],
    "auxiliaryFields": []
  },
  "barcode": {
    "format": "PKBarcodeFormatCode128",
    "message": "__SERIAL__",
    "altText": "__SERIAL__",
    "messageEncoding": "iso-8859-1"
  }
}
```

Additional Tips

- Preview and testing: Continuously test your pass on both the Simulator and a physical device, as the appearance can differ slightly between them. Inspect the pass on a Mac with Quick Look and make iterative adjustments to your JSON.
- Documentation and examples: Refer to Apple's official Wallet passes documentation for detailed specifications and examples of how to structure the JSON for various layouts.
- Community and forums: If you're still facing issues, ask on the Apple Developer Forums or Stack Overflow; other developers have encountered similar constraints and can offer insights.

By carefully configuring your JSON and testing across devices, you should be able to achieve a pass layout that closely meets your expectations.
Topic: Design SubTopic: General Tags:
Mar ’26
Reply to NavigationTitle in LiquidGlass style
To achieve a LiquidGlass navigation title style in SwiftUI similar to what you see in macOS apps like Photos, you'll need to customize the appearance beyond what's available with standard SwiftUI modifiers. The LiquidGlass effect typically involves a semi-transparent, blurred background that reacts dynamically to the content beneath it.

While SwiftUI doesn't directly support a LiquidGlass effect out of the box, you can create a custom view modifier to mimic the behavior. Here's a basic approach to get you started:

Custom LiquidGlass Navigation Title

```swift
import SwiftUI

struct LiquidGlassEffect: ViewModifier {
    var color: Color = .white
    var opacity: Double = 0.6

    func body(content: Content) -> some View {
        content
            .background(
                GeometryReader { geometry in
                    Color.clear
                        .blur(radius: 10)
                        .frame(width: geometry.size.width, height: geometry.size.height)
                        .opacity(opacity)
                        .blendMode(.destinationOver)
                        .background(color.opacity(0.1))
                }
                .edgesIgnoringSafeArea(.top)
            )
    }
}
```

Apply it to your navigation title area with `.modifier(LiquidGlassEffect())` and tune the blur radius, opacity, and tint color until the result matches the look you're after.
Topic: Design SubTopic: General Tags:
Mar ’26
Reply to tensorflow-metal ReLU activation fails to clip negative values on M4 Apple Silicon
It sounds like you're encountering an issue with the ReLU activation function on the Mac M4 using TensorFlow-Metal. Let's go through a few steps to troubleshoot and potentially resolve it.

Potential Causes and Solutions

1. TensorFlow-Metal version compatibility: Ensure you are using the latest compatible version of TensorFlow-Metal; bugs like this are sometimes fixed in newer releases. Check for updates via pip:

```shell
pip install --upgrade tensorflow-macos tensorflow-metal
```

2. ReLU implementation in Metal: TensorFlow-Metal might implement ReLU in a way that mishandles edge cases such as floating-point comparisons with exactly zero. As a workaround, try an explicit element-wise maximum with a tiny positive offset in place of the built-in activation:

```python
def custom_relu(x):
    return tf.maximum(x, 1e-10)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(5, activation=custom_relu)
])
```

3. Environment and configuration: Double-check that TensorFlow is configured the way you expect. To rule the Metal backend in or out, hide the GPU so the model runs on the CPU and compare results:

```python
tf.config.set_visible_devices([], 'GPU')
```

Restart your Python interpreter or Jupyter session after changing configurations so the changes take effect.

4. Test with explicit data types: Precision issues sometimes surface only with particular floating-point types. Try tf.float16 or tf.float32 explicitly and see if the behavior changes:

```python
data = np.ones((1, 10), dtype=np.float32)
weights = [np.ones((10, 5), dtype=np.float32) * -1,
           np.ones(5, dtype=np.float32) * -1]
```

5. Check for known issues: Look up related TensorFlow-Metal issues on GitHub or community forums; there may be specific patches or advice for your hardware configuration.

6. Fallback to CPU: As a temporary measure, force the model to run on the CPU to bypass the issue until a fix is available:

```python
tf.config.set_visible_devices([], 'GPU')
```

Conclusion

Try these suggestions and see whether any of them resolve the issue with negative values not being clipped to zero. If the problem persists, consider reaching out to the TensorFlow-Metal maintainers or community for further assistance, providing details about your setup and the reproduction script.
Topic: Machine Learning & AI SubTopic: Core ML Tags:
Mar ’26
Reply to Two errors in debug: com.apple.modelcatalog.catalog sync and nw_protocol_instance_set_output_handler
Based on the error messages you're encountering in Xcode, there are two main issues to address.

1. nw_protocol_instance_set_output_handler: "Not calling remove_input_handler"

This error typically indicates a problem with how network connections are being managed by Apple's Network framework. It suggests that an output handler was set on a network protocol instance without a corresponding removal of the input handler, which can lead to resource leaks or crashes.

Steps to resolve:
- Ensure that every handler installed on a protocol instance is removed when you're done with the connection or when the connection is closed.
- Review the lifecycle of your network connections so handlers are properly managed, especially in asynchronous or error-prone paths.
- Check whether any closures or network operations might inadvertently leave handlers dangling.

2. com.apple.modelcatalog.catalog sync: connection error

This error points to a failure accessing a service named com.apple.modelcatalog.catalog, likely due to sandbox restrictions, as the error message indicates.

Steps to resolve:
- Entitlements: Verify that your app's entitlements file includes the permissions needed to reach this service, in particular network access.
- App Sandbox: Since the error mentions sandbox restrictions, ensure your sandbox configuration allows the required network connections to the specific service or domain.
- Error handling: Implement robust handling for cases where the connection fails due to permissions or other issues, including retry logic or user notifications.
- Debugging: Temporarily disable the feature that interacts with com.apple.modelcatalog.catalog to confirm it is the source of the problem. If the error persists without it, investigate other parts of your network code.

Additional considerations for your FoundationRepo:
- Conditional logic: FoundationRepo appears to decide between local and remote models based on system capabilities; make sure those checks are accurate and the switching logic is sound.
- Network requests: Ensure requests inside FoundationRepo are properly configured, especially URLs and headers. The base URL you're using looks placeholder-like, so confirm it's correct before making requests.
- Concurrency: Since FoundationRepo is an actor, ensure all access to its properties and methods goes through the actor so it stays properly synchronized.

By addressing these areas, you should be able to resolve both errors. If they persist, review recent changes to your codebase or consult Apple's documentation and developer forums for more specific guidance on the frameworks you're using.
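As a concrete illustration of the entitlements point: a sandboxed macOS app that opens outgoing connections generally needs the network client entitlement. A minimal sketch of the relevant keys in the app's `.entitlements` file (your file will contain other keys as well):

```xml
<!-- Sketch of an entitlements fragment, not a complete plist. -->
<key>com.apple.security.app-sandbox</key>
<true/>
<!-- Permits outgoing network connections from a sandboxed app. -->
<key>com.apple.security.network.client</key>
<true/>
```

If the missing entitlement is the cause, the sandbox denial should also appear in Console.app, which is a quick way to confirm before changing the configuration.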
Mar ’26
Reply to Possible Bug - Hover Effects/Spatial Event Compatibilty with PSVR2 Controllers?
The hover effects feature introduced in visionOS 26 is a significant enhancement for creating immersive experiences, allowing users to interact with virtual objects using spatial gestures. However, the behavior of these gestures, particularly with PSVR 2 controllers, can vary based on how they are integrated into different types of views and applications.

Clarification on Hover Effects and PSVR 2 Controllers

- Hover effects in visionOS: Hover effects provide feedback when a user's gaze is directed at a virtual object, akin to a mouse hover on a desktop, and can trigger spatial events or visual highlights.
- Controller gestures: On the PSVR 2, traditional button presses (like pulling the trigger) are well supported across interfaces, including SwiftUI. Gaze-based interactions like hover effects, however, require additional handling to map spatial gestures to those events.

Discrepancies in Behavior

- Sample application behavior: The sample you referenced likely relies on Compositor Services and Metal for rendering, which might not map the trigger press to hover events the way SwiftUI maps it to button clicks. It may instead expect pinch gestures or explicit gaze-based input to trigger spatial events.
- SwiftUI views: SwiftUI abstracts much of the complexity of handling different input methods, providing a consistent interface for gestures like trigger presses, which is why you observe trigger presses being recognized as button clicks in SwiftUI views.

Possible Explanations and Solutions

- Gesture mapping: Ensure your sample application correctly maps hover events to the appropriate PSVR 2 controller gestures; this might involve customizing gesture recognizers to handle gaze-based input and trigger presses separately.
- Compositor Services API: Dive into the Compositor Services documentation to understand how hover events are triggered and how they can be customized for different input devices. You may need to handle spatial events manually and map them to the desired actions.
- Event handling updates: Check for updates or patches to visionOS or Compositor Services that address discrepancies in gesture handling; Apple frequently updates its APIs to improve compatibility and functionality.
- Community and support: Engage the Apple Developer Forums or Apple Support for specific guidance on implementing hover effects with PSVR 2 controllers; they may provide insights or workarounds tailored to your application's needs.
- Custom implementations: Consider implementing custom gesture recognizers that combine gaze tracking with controller input to achieve the desired hover behavior, which can offer more flexibility and control over how interactions are handled.

In summary, while hover effects and PSVR 2 controller gestures are designed to work together, discrepancies can arise from differences in how input is processed across frameworks. By carefully mapping gestures and leveraging the latest API features, you can create a more seamless and intuitive user experience in your visionOS application.
Topic: Spatial Computing SubTopic: General Tags:
Mar ’26
Reply to ApplePay Payment Sheet for onfile payment method
For Apple Pay on the web, recurring (on-file, merchant-initiated) payments are configured through the payment request you pass to the payment sheet, not anything the user sets up themselves. At a high level:

- Declare recurring support in the payment request: When building your ApplePayPaymentRequest, include a recurring payment request describing the arrangement: a payment description, the regular billing line item (amount and frequency), and a management URL where the user can later manage or cancel it.
- Present the sheet: Begin the Apple Pay session as usual; Safari renders the payment sheet with the recurring details so the user can review the amount, frequency, and payment method.
- Authorize the payment method: The user authorizes with Touch ID, Face ID, or their passcode, and you receive a payment token intended for repeat, merchant-initiated charges, which you pass to your payment processor to store on file.
- Confirm: Complete merchant validation and payment authorization on your server, then confirm to the user that recurring payments have been set up.

If you encounter issues, check Apple's Apple Pay on the Web documentation or contact Apple Developer Support directly for assistance.
Topic: Safari & Web SubTopic: General Tags:
Mar ’26
Reply to macOS system autocomplete cannot be disabled on via standard HTML attributes
On macOS, the behavior of system-level autocomplete suggestions in form elements can indeed be frustrating for developers who wish to have full control over this feature. The behavior is primarily controlled by the operating system and the browser, rather than being fully customizable through standard HTML attributes or JavaScript APIs. Here are some insights and potential workarounds.

Intentional Behavior

- User experience: Apple designs macOS to enhance user convenience by offering autocomplete suggestions for common inputs such as email addresses, phone numbers, and verification codes, streamlining interactions and reducing typing effort.
- Security and privacy: By controlling autocomplete at the system level, Apple aims to protect user privacy and security, ensuring that sensitive information is not filled in automatically without explicit user consent.

Limitations of HTML Attributes

Attributes like autocomplete="off", autocorrect="off", autocapitalize="off", and spellcheck="false" are intended to guide browser behavior for form fields. On macOS, however, these attributes may not fully override the system-level autocomplete suggestions, especially for specific types of content like email verification codes.

Potential Workarounds

While you cannot completely disable system-level autocomplete on macOS, the following approaches can mitigate its impact or improve the user experience:

- Custom input fields: Instead of a standard input element, build a custom field (for example, from a contenteditable element), style it to resemble a native input, and implement your own logic for handling line breaks and input expansion.
- JavaScript interception: Listen for input events on the field and manually manipulate the content to remove or reformat autocomplete suggestions as they appear. This is a workaround and may not be foolproof, especially with rapid input or complex suggestions.
- Instructional text: Provide clear instructions near the field explaining that autocomplete suggestions may appear and how users can manage or ignore them.
- App-specific settings: If your application runs in a controlled environment (e.g., within an enterprise), consider contacting Apple or exploring enterprise policies that might allow more granular control over autocomplete settings.

Ultimately, while you have limited control over system-level autocomplete on macOS, these strategies can help you manage its impact and enhance the user experience within your web application.
Topic: Safari & Web SubTopic: General
Mar ’26
Reply to app-site-association.cdn-apple.com | Cache not updating
When dealing with Apple's App Site Association (ASA) file and encountering delays in cache updates, several steps can help troubleshoot and potentially resolve the issue:

- Verify the ASA file format: Ensure your apple-app-site-association file strictly adheres to the format Apple specifies; any syntax error can prevent Apple's servers from parsing and caching it. Run the file through a JSON validator to rule out syntax problems.
- Use cache busting carefully: Cache busting can ensure your local development environment sees the latest version, but it might interfere with Apple's caching mechanism, which expects a consistent URL. Use it only temporarily for testing.
- Check for typos in domain names: Double-check that the domain specified in your ASA file matches the domain used in your app exactly, with no typographical errors.
- Ensure proper hosting and accessibility: The file must be accessible via HTTPS at the exact path /.well-known/apple-app-site-association. Verify your server configuration serves it from this path without restrictions, and test with curl or wget that it returns a 200 OK status with the correct content.
- Reduce TTL for your DNS records: Temporarily lowering the TTL on your own domain's DNS records can speed up propagation after server-side changes. Note that you cannot influence the caching of Apple's app-site-association.cdn-apple.com CDN itself, which refreshes on Apple's schedule.
- Force a refresh on device: On iOS, reinstalling your app or clearing its data can sometimes prompt the device to fetch the latest ASA, though this isn't guaranteed.
- Monitor Apple's developer forums and status page: Check the forums for known issues with ASA caching, and the Apple System Status page for outages affecting app-site-association services.
- Contact Apple Developer Support: If the issue persists beyond 72 hours with no resolution, reach out to Apple Developer Support; they can investigate server-side problems and provide specific guidance.
- Review recent changes: Reflect on any recent changes to your network configuration, server settings, or code that might have inadvertently affected how the ASA file is served.

By systematically checking these areas, you can identify potential issues and ensure your universal links are updated correctly by Apple's systems.
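The format and hosting checks can be combined into a quick shell sketch. The domain and the sample ASA contents below are illustrative placeholders; substitute your real file and domain:

```shell
# Hedged sketch: validate an apple-app-site-association file locally, then
# (manually) compare what your server and Apple's CDN return.
# The JSON below is a stand-in; use your real file and identifiers.
cat > /tmp/apple-app-site-association <<'EOF'
{
  "applinks": {
    "details": [
      { "appIDs": ["TEAMID.com.example.app"],
        "components": [ { "/": "/links/*" } ] }
    ]
  }
}
EOF

# The file must be strict JSON: no comments, no trailing commas.
if python3 -m json.tool < /tmp/apple-app-site-association > /dev/null; then
    echo "ASA file is valid JSON"
fi

# Run these manually against your real domain:
#   curl -sI https://example.com/.well-known/apple-app-site-association
#   curl -s  https://app-site-association.cdn-apple.com/a/v1/example.com
```

Comparing the two curl responses tells you whether the stale copy lives on your server or in Apple's CDN cache, which narrows down where to focus.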
Topic: Safari & Web SubTopic: General Tags:
Mar ’26
Reply to How to enable MIE on MacOS
On macOS with Apple Silicon, Memory Integrity Enforcement (MIE) combined with the Arm Memory Tagging Extension (MTE) can be a fantastic way to spot heap memory issues. While stack sanitizer support for MTE is generally easier to set up, enabling heap tagging usually requires specific settings or environment variables, and these are not as well documented or accessible as on other platforms like Linux. A few avenues to try:

- Environment variables: Apple's documentation and developer tools might not list variables for controlling MTE heap tagging as clearly as they do for other sanitizers, but experimenting with the usual debugging and runtime variables can help. Two common ones: MallocStackLogging (mainly for stack logging, though related variables can affect allocator behavior) and DYLD_INSERT_LIBRARIES (to load a custom library that changes allocator behavior to enable MTE, which requires a good understanding of both MTE and macOS internals).
- Compiler and linker flags: Besides the stack sanitizer flags, there may be other compiler or linker options specific to enabling MTE features on Apple Silicon. Check the latest Xcode and LLVM documentation for any new MTE-related flags.
- Custom allocator: Consider a custom memory allocator that supports MTE. Libraries like jemalloc or tcmalloc have been adapted for MTE on other platforms, and it might be possible to port or adapt them for your needs.
- System configuration: Check whether any system-level configurations or kernel settings enable or control MTE features. This could involve some low-level debugging or reaching out to Apple Developer Support about undocumented features.
Mar ’26
Reply to How can I change the output dimensions of a CoreML model in Xcode when the outputs come from a NonMaximumSuppression layer?
In CoreML, setting fixed output dimensions for layers like NonMaximumSuppression (NMS) isn't straightforward, because these layers produce outputs whose size depends on the input data. The NMS layer reduces the number of detected objects based on overlap and confidence scores, so the number of outputs is inherently dynamic. However, you can work around this limitation with a few strategies.

1. Padding outputs in the model: Modify your model so the maximum possible number of detections is produced, then pad the results to a fixed size. This involves editing the model's architecture in a framework like TensorFlow or PyTorch before converting it to CoreML, for example by adding a layer that pads the detection outputs to a fixed size like 100.

2. Post-processing in Swift: After obtaining the outputs from CoreML, pad the results to the desired dimensions in Swift. This approach is flexible and doesn't require modifying the CoreML model itself. Here's an example of how you might handle this:

```swift
import CoreML

func processDetections(confidence: MLMultiArray,
                       coordinates: MLMultiArray) -> (confidence: [[Float]], coordinates: [[Float]]) {
    let maxDetections = 100

    // MLMultiArray has no direct Float-array accessor, so copy element by element.
    var paddedConfidence = (0..<confidence.count).map { Float(truncating: confidence[$0]) }
    var paddedCoordinates = (0..<coordinates.count).map { Float(truncating: coordinates[$0]) }

    // Pad confidence to maxDetections rows of 5 class scores each.
    while paddedConfidence.count < maxDetections * 5 {
        paddedConfidence.append(0.0)
    }
    // Pad coordinates to maxDetections rows of 4 box values each.
    while paddedCoordinates.count < maxDetections * 4 {
        paddedCoordinates.append(0.0)
    }

    // Convert the flat arrays back to 2D arrays.
    let confidenceArray = stride(from: 0, to: paddedConfidence.count, by: 5).map {
        Array(paddedConfidence[$0..<min($0 + 5, paddedConfidence.count)])
    }
    let coordinatesArray = stride(from: 0, to: paddedCoordinates.count, by: 4).map {
        Array(paddedCoordinates[$0..<min($0 + 4, paddedCoordinates.count)])
    }
    return (confidenceArray, coordinatesArray)
}
```

3. Custom layer implementation: If you have control over the model training process, implement a custom layer in TensorFlow or PyTorch that mimics the behavior of NMS but outputs fixed-size results, and include it in the model before conversion to CoreML.

By using one of these strategies, you can ensure that your CoreML model outputs have fixed dimensions, even though the NMS layer itself produces variable-sized outputs.
Topic: Machine Learning & AI SubTopic: Core ML Tags:
Mar ’26
Reply to Translation framework use in Swift 6
The error you're encountering comes from Swift's concurrency model, introduced in Swift 5.5, which aims to prevent data races by enforcing actor isolation. In Swift 6 these checks have become more stringent, especially when concurrent and actor-isolated contexts mix.

The Translation framework's translate method runs asynchronously, and the session object you're given is isolated to the main actor. When you pass that session into a concurrently executing task, Swift flags it as a potential data race.

How to Fix the Issue

1. Wrap the session in an actor: If possible, serialize access to the session through an actor so all interactions are safe from data races. A sketch:

```swift
actor Translator {
    private let session: TranslationSession

    init(session: TranslationSession) {
        self.session = session
    }

    func translate(_ text: String) async throws -> String {
        let response = try await session.translate(text)
        return response.targetText
    }
}
```

Then use the actor from your concurrent context:

```swift
await withTaskGroup(of: Void.self) { group in
    group.addTask {
        let translator = Translator(session: session)
        if let translated = try? await translator.translate(sourceText) {
            targetText = translated
        }
    }
}
```

2. Explicitly isolate to the main actor: If wrapping the session in an actor isn't feasible, ensure the translate call happens on the main actor, where the session lives:

```swift
await withTaskGroup(of: Void.self) { group in
    group.addTask { @MainActor in
        if let response = try? await session.translate(sourceText) {
            targetText = response.targetText
        }
    }
}
```

3. Check for actor conformance in frameworks: Ensure the Translation framework's methods are correctly annotated for concurrency in the SDK you're building against. If they are not, you might be hitting a limitation where the framework itself doesn't yet fully conform to Swift's concurrency model; in that case, consider reaching out to Apple or checking for framework updates.

By aligning your use of the translation session with Swift's concurrency model, either by serializing access through an actor or by keeping operations on the main actor, you can resolve the data race warning and successfully use the Translation framework in your Swift 6 project with strict concurrency enabled.
Topic: Machine Learning & AI SubTopic: General Tags:
Feb ’26
Reply to iCloud Drive silent upload deadlock caused by stale HTTP/3 session in nsurlsessiond (FB22476701)
Thank you for providing such a detailed account of the iCloud Drive file upload deadlock issue on macOS 26.4.1. It sounds like you've done extensive troubleshooting and analysis to identify the root cause. Here's a summary of what you've discovered and some additional thoughts that might help refine your approach or assist others experiencing similar issues:

Summary of Findings

- Root Cause: A stale HTTP/3 (QUIC) session in nsurlsessiond's BackgroundConnectionPool leads to a deadlock during file uploads.
- Behavior:
  - The deadlock occurs specifically with HTTP/3, while HTTP/1.1 works without issues post-restart.
  - It affects larger files (>100 KB); smaller files may occasionally succeed.
  - Restarting both cloudd and nsurlsessiond resolves the issue temporarily by clearing the poisoned session.
- Reproduction: Consistent behavior observed across multiple tests with varied file sizes.
- Diagnosis: Log analysis can identify occurrences using specific grep patterns.
- Recovery: A targeted kill command for the user-level instances of cloudd and nsurlsessiond provides a quick fix.

Additional Thoughts and Suggestions

Potential Enhancements for Apple:

- Automatic Session Management: Implement automatic invalidation of QUIC sessions after a threshold of failures (as you suggested), potentially integrated into CFNetwork or NSURLSession directly.
- Improved Logging: Surface errors like these to users in Finder or System Settings, ideally with actionable suggestions or clearer error messages.
- API for Pool Invalidation: Expose APIs that allow services like CloudKit to explicitly invalidate problematic session pools without needing a full daemon restart.
- Diagnostic Tools: Add built-in diagnostics that users can run to identify and potentially resolve such deadlocks without manual intervention.

For Users and Administrators:

- Script Automation: For frequent occurrences, consider a monitoring script that automatically runs the recovery command when the specific deadlock pattern is detected in the logs.
- Alternative Protocols: Temporarily disabling QUIC in network settings (if feasible and supported) might be a workaround until a permanent fix lands, though this may impact performance for other QUIC-enabled applications.
- Feedback Loop: Encourage affected users to submit feedback through Apple's Feedback Assistant, including the collected logs, so the issue is prioritized and tracked.

Further Debugging:

- Network Packet Analysis: Capturing network packets during a deadlock might reveal what exactly fails mid-transfer.
- System State Snapshots: Taking system snapshots before and after the deadlock could help Apple engineers diagnose what corrupts the session cache.

Your detailed documentation and methodical approach are invaluable both for addressing the current issue and for helping Apple refine their systems. Keep monitoring for updates from Apple, as they may release patches or guidance based on feedback like yours.
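The script-automation idea above can be sketched in a few lines of Python. The log regex below is a placeholder (the exact grep strings from the original report are not reproduced here), and the recovery command is an assumption about what "kill the user-level instances" looks like, so treat this as a template rather than a drop-in monitor:

```python
import os
import re

# Hypothetical deadlock signature: a line mentioning the daemon, the
# connection pool, and a stalled QUIC transfer. Replace with the exact
# grep pattern from the original report.
DEADLOCK_PATTERN = re.compile(
    r"nsurlsessiond.*BackgroundConnectionPool.*stalled", re.IGNORECASE
)

# The recovery described above, expressed as argv for subprocess.run():
# kill the user-level daemons so launchd respawns them with a fresh pool.
RECOVERY_CMD = ["killall", "-u", os.environ.get("USER", ""), "cloudd", "nsurlsessiond"]

def looks_like_deadlock(line: str) -> bool:
    """Return True if a log line matches the (assumed) deadlock signature."""
    return bool(DEADLOCK_PATTERN.search(line))
```

A real monitor would stream `log show`/`log stream` output through `looks_like_deadlock` and invoke `subprocess.run(RECOVERY_CMD)` on a match, ideally with a cooldown so the daemons aren't killed repeatedly.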
Replies
Boosts
Views
Activity
2w
Reply to tensorflow-metal ReLU activation fails to clip negative values on M4 Apple Silicon
Did this help?
Topic: Machine Learning & AI SubTopic: Core ML Tags:
Mar ’26
Reply to Images added in Reality Composer look darker in AR
When images added in Reality Composer appear darker in AR than other objects, it's likely due to how images are handled in terms of lighting. Here are some insights and recommendations for achieving a more consistent lighting response for image-based artworks:

Expected Behavior and Lighting Model

- Images: Typically treated as materials with an albedo texture, meaning they reflect the light that hits them without more complex interactions like specular highlights or subsurface scattering. This can make them appear darker if the scene lighting is dim or the image lacks bright areas.
- Other objects: 3D models often use more complex materials that include specular reflections and other lighting properties, which can make them appear brighter and more dynamic under varying lighting conditions.
- Lighting model: Reality Composer uses a simplified lighting model compared to full-fledged 3D engines, relying primarily on ambient and directional lights, which might not fully illuminate textured images as expected.

Recommendations for Consistent Lighting

- Increase ambient light: Adjust the ambient light in your scene so it provides enough illumination for images; higher ambient levels can reduce their perceived darkness.
- Add point or spot lights: Place additional point or spot lights to directly illuminate the images, positioned to mimic real-world light sources and bring out image detail.
- Adjust image material properties: If Reality Composer allows, increase the brightness or contrast of the image texture so it stands out better in AR.
- Use image effects: Consider applying image adjustments to enhance visibility; for example, increasing exposure or adding a slight glow can make images appear more vibrant.
- Preprocess images: Edit your images before importing them into Reality Composer; boosting brightness, contrast, and saturation helps them stay visible under varied lighting conditions.
- Real-world calibration: Test your AR experience in different real-world environments to understand how lighting conditions affect your images, and calibrate the scene's lighting for the conditions where the experience will typically run.
- Feedback and iteration: Continuously test, gather feedback on how the images appear in different environments, and adjust lighting and materials accordingly.

By understanding how images and other objects are lit differently in Reality Composer and applying these recommendations, you can achieve a more consistent and visually appealing AR experience for your image-based artworks.
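The image-preprocessing step above can be sketched without any AR tooling. This toy function operates on plain 8-bit RGB tuples purely to show the brighten-and-clamp idea; in practice you'd apply the same adjustment with an image editor or a library such as Pillow before importing the asset:

```python
def brighten(pixels, factor=1.3):
    """Scale 8-bit RGB pixel values by `factor`, clamping each channel to 0-255.

    `pixels` is a list of (r, g, b) tuples. A real pipeline would do this
    via an image library rather than on raw tuples.
    """
    return [
        tuple(min(255, round(channel * factor)) for channel in px)
        for px in pixels
    ]
```

A modest factor (1.2 to 1.4) is usually enough; anything higher tends to blow out highlights, which the clamp above makes visible as channels pinned at 255.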
Topic: Design SubTopic: General Tags:
Mar ’26
Reply to First attempt at a PKPass
Creating a custom Wallet pass, especially one intended to display a simple vertical address layout, can be a bit tricky due to the constraints and specific formatting requirements of Apple's Wallet pass format. Based on the JSON you've provided, here are some steps and considerations to help you achieve the desired layout:

Key Considerations

- Field configuration: The primaryFields and auxiliaryFields in your JSON display information on the pass. For a simple address layout, you may need to use these fields creatively.
- Text styling: Wallet passes have limited styling options, but you can influence font size, weight, and color to some extent through field attributes.
- Layout adjustments: Wallet passes don't support arbitrary layouts, so you'll need to fit your content into the available fields and styles, and may have to compromise on the exact layout you envisioned.

JSON Adjustments

Here's an adjusted version of your JSON that attempts a vertical address layout using primaryFields:

```json
{
  "formatVersion": 1,
  "passTypeIdentifier": "pass.org.danleys.4KSBarcode",
  "serialNumber": "__SERIAL__",
  "teamIdentifier": "----",
  "organizationName": "4 K.I.D.S. Sake",
  "description": "4KSBarcode",
  "logoText": "4 K.I.D.S. Sake",
  "foregroundColor": "rgb(255, 255, 255)",
  "backgroundColor": "rgb(255,0,0)",
  "storeCard": {
    "primaryFields": [
      {
        "key": "ClientID",
        "label": "Address",
        "value": "339 Remington Blvd"
      },
      {
        "key": "city",
        "label": "",
        "value": "Bolingbrook, IL 60440"
      }
    ],
    "auxiliaryFields": []
  },
  "barcode": {
    "format": "PKBarcodeFormatCode128",
    "message": "__SERIAL__",
    "altText": "__SERIAL__",
    "messageEncoding": "iso-8859-1"
  }
}
```

Additional Tips

- Text style: You can try different text-style values like title, subheadline, or body to adjust the appearance, but these styles have predefined appearances and might not perfectly match your vision.
- Preview and testing: Test your pass on both the simulator and a physical device, as the appearance can differ slightly between them; inspect the pass and make iterative adjustments to your JSON.
- Documentation and examples: Refer to Apple's official Wallet pass documentation for detailed specifications and guidance on structuring your JSON for various layouts and styles.
- Community and forums: If you're still facing issues, the Apple Developer Forums or communities like Stack Overflow may have developers who've solved similar layout challenges.

By carefully configuring your JSON and testing across different devices, you should be able to achieve a pass layout that closely meets your expectations.
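Before signing and previewing, it's worth sanity-checking the pass JSON for the required top-level keys, since a missing one fails silently in some workflows. A minimal sketch (the required-key list follows Apple's pass package format documentation; it is a quick check, not a full validator):

```python
import json

# Top-level keys every pass.json must carry, per the PassKit pass format.
REQUIRED_KEYS = {
    "formatVersion", "passTypeIdentifier", "serialNumber",
    "teamIdentifier", "organizationName", "description",
}

def missing_pass_keys(pass_json: str) -> set:
    """Return the required top-level keys absent from a pass.json string."""
    data = json.loads(pass_json)
    return REQUIRED_KEYS - data.keys()
```

Running this over the JSON above should return an empty set; anything it reports is worth fixing before you try to build the .pkpass bundle.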
Topic: Design SubTopic: General Tags:
Mar ’26
Reply to NavigationTitle in LiquidGlass style
To achieve a Liquid Glass navigation title style in SwiftUI similar to what you see in macOS apps like Photos, you'll need to customize the appearance beyond the standard SwiftUI modifiers. The Liquid Glass effect typically involves a semi-transparent, blurred background that reacts dynamically to the content beneath it. SwiftUI doesn't directly expose this exact effect out of the box, but you can create a custom view modifier to mimic the behavior. Here's a basic approach to get you started:

Custom LiquidGlass Navigation Title

```swift
import SwiftUI

struct LiquidGlassEffect: ViewModifier {
    var color: Color = .white
    var opacity: Double = 0.6

    func body(content: Content) -> some View {
        content
            .background(
                GeometryReader { geometry in
                    Color.clear
                        .blur(radius: 10)
                        .frame(width: geometry.size.width, height: geometry.size.height)
                        .opacity(opacity)
                        .blendMode(.destinationOver)
                        .background(color.opacity(0.1))
                }
                .edgesIgnoringSafeArea(.top)
            )
    }
}
```

Apply it to the view that hosts your navigation title, and tune the opacity and blur radius until it matches the look you're after.
Topic: Design SubTopic: General Tags:
Mar ’26
Reply to Qwen3 VL CoreML
Anyone know how to read these?
Mar ’26
Reply to tensorflow-metal ReLU activation fails to clip negative values on M4 Apple Silicon
It sounds like you're encountering an issue with the ReLU activation function on the M4 using tensorflow-metal. Let's go through a few steps to troubleshoot and potentially resolve it:

Potential Causes and Solutions

TensorFlow-Metal version compatibility

Ensure that you are using the latest compatible version of tensorflow-metal; bugs like this are sometimes fixed in newer releases:

```shell
pip install --upgrade tensorflow-macos tensorflow-metal
```

ReLU implementation in Metal

TensorFlow-Metal's ReLU kernel may handle edge cases differently, such as floating-point values at exactly zero. As a workaround, you can clamp to a tiny positive epsilon instead of zero:

```python
def custom_relu(x):
    return tf.maximum(x, 1e-10)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(5, activation=custom_relu)
])
```

Environment and configuration

To confirm the bug is Metal-specific, hide the GPU so the ops fall back to the CPU, then compare results:

```python
tf.config.set_visible_devices([], 'GPU')
```

Restart your Python interpreter or notebook session after changing configurations to ensure they take effect.

Test with explicit data types

Precision issues sometimes depend on the floating-point type; try tf.float16 or tf.float32 explicitly and see if the behavior changes:

```python
data = np.ones((1, 10), dtype=np.float32)
weights = [np.ones((10, 5), dtype=np.float32) * -1,
           np.ones(5, dtype=np.float32) * -1]
```

Check for known issues

Look for related issues or discussions about tensorflow-metal on GitHub or community forums; there may be patches or advice specific to your hardware configuration.

Fall back to the CPU

As a temporary measure, force the model to run on the CPU to bypass the issue until a fix is available:

```python
tf.config.set_visible_devices([], 'GPU')
```

Conclusion

Try these suggestions and see whether any of them get negative values clipped to zero. If the problem persists, consider reaching out to the tensorflow-metal maintainers or community for further assistance, providing details about your setup and the reproduction script.
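When comparing the Metal and CPU paths, it helps to have a framework-free reference for what ReLU must produce. This plain-Python check (no TensorFlow needed) encodes the failing case from the report: all-ones input through a Dense layer whose weights and biases are all -1 gives -11 per unit pre-activation, so a correct ReLU must return exactly zero everywhere:

```python
def relu_reference(values):
    """Reference ReLU: clamp negatives to zero, pass positives through."""
    return [max(v, 0.0) for v in values]

# Pre-activation for the reported setup: sum of ten (1 * -1) terms plus a
# bias of -1 gives -11.0 for each of the five output units.
pre_activation = [1.0 * 10 * -1.0 + -1.0] * 5
```

Diffing `relu_reference(pre_activation)` against the tensor tensorflow-metal returns makes the bug unambiguous: any nonzero entry in the Metal output is a failure to clip.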
Topic: Machine Learning & AI SubTopic: Core ML Tags:
Mar ’26
Reply to Two errors in debug: com.apple.modelcatalog.catalog sync and nw_protocol_instance_set_output_handler
Based on the error messages you're encountering in Xcode, there are two main issues to address:

1. nw_protocol_instance_set_output_handler not calling remove_input_handler

This typically indicates a problem with how network connections are being managed by Apple's Network framework. It suggests that an output handler was set on a network protocol instance without the corresponding input handler being removed, which could lead to resource leaks or crashes.

Steps to resolve:
- If you interact with these APIs directly, ensure every call to nw_protocol_instance_set_output_handler is balanced by removing the corresponding input handler when you're done with the connection or when it closes.
- Review the lifecycle of your network connections so handlers are properly torn down, especially in asynchronous or error-prone paths.
- Check whether any closures or network operations might inadvertently leave handlers dangling.

2. com.apple.modelcatalog.catalog sync: connection error

This points to a failure to reach the com.apple.modelcatalog.catalog service, likely due to sandbox restrictions, as the error message indicates.

Steps to resolve:
- Entitlements: Verify that your app's entitlements file includes the necessary permissions to access this service; you might need entitlements related to model catalogs or network services.
- App Sandbox: Since the error mentions sandbox restrictions, ensure your sandbox configuration allows the required network connections; you may need to adjust the profile to permit connections to the specific service or domain.
- Error handling: Implement robust error handling for cases where the connection fails due to permissions or other issues, including retry logic or user notifications.
- Debugging: Temporarily disable the feature that talks to com.apple.modelcatalog.catalog to confirm it's the source of the problem; if the error persists without it, investigate other parts of your network code.

Additional Considerations for Your FoundationRepo

- Conditional logic: Your FoundationRepo appears to choose between local and remote models based on system capabilities; make sure those checks are accurate and the switching logic is sound.
- Network requests: Ensure that network requests are properly configured, especially URLs and headers. The base URL you're using looks like a placeholder, so confirm it's correct before making requests.
- Concurrency: Since FoundationRepo is an actor, ensure all access to its properties and methods goes through the actor to avoid concurrency issues.

By addressing these areas, you should be able to resolve both errors. If they persist, review recent changes to your codebase or consult Apple's documentation and developer forums for guidance specific to the frameworks you're using.
Mar ’26
Reply to Possible Bug - Hover Effects/Spatial Event Compatibilty with PSVR2 Controllers?
The hover-effects feature introduced in visionOS 26 is a significant enhancement for immersive experiences, letting users interact with virtual objects using spatial gestures. However, the behavior of these gestures, particularly with the PSVR2 controllers, can vary depending on how they are integrated into different types of views and applications.

Clarification on Hover Effects and PSVR2 Controllers

- Hover effects in visionOS: Designed to provide feedback when a user's gaze is directed at a virtual object, akin to a mouse hover on a desktop, and can trigger spatial events or visual highlights.
- Controller gestures: On the PSVR2, traditional button presses (like pulling the trigger) are well supported across various interfaces, including SwiftUI. Gaze-based interactions like hover effects, however, require additional handling to map spatial gestures to those events.

Discrepancies in Behavior

- Sample application behavior: The sample you referenced likely renders via Compositor Services and Metal, which might not map the trigger press to hover events the way SwiftUI maps it to button clicks. It may instead expect pinch gestures or explicit gaze-based input to trigger spatial events.
- SwiftUI views: SwiftUI abstracts much of the complexity of handling different input methods and provides a consistent interface for gestures like trigger presses, which is why trigger presses register as button clicks in SwiftUI views.

Possible Explanations and Solutions

- Gesture mapping: Ensure your sample correctly maps hover events to the appropriate PSVR2 controller gestures; this might involve customizing gesture recognizers to handle gaze-based input and trigger presses separately.
- Compositor Services API: Dive deeper into the Compositor Services documentation to understand how hover events are triggered and how they can be customized for different input devices; you may need to handle spatial events manually and map them to the desired actions.
- Event-handling updates: Check for visionOS or Compositor Services updates that address discrepancies in gesture handling; Apple frequently updates its APIs to improve compatibility.
- Community and support: Engage the Apple Developer Forums or Apple Support for specific guidance on implementing hover effects with PSVR2 controllers; they may offer insights or workarounds tailored to your application.
- Custom implementations: Consider custom gesture recognizers that combine gaze tracking with controller input to achieve the desired hover behavior, which may offer more flexibility and control.

In summary, while hover effects and PSVR2 controller gestures are designed to work together, discrepancies can arise from differences in how input is processed across frameworks. By carefully mapping gestures and leveraging the latest API features, you can create a more seamless and intuitive user experience in your visionOS application.
Topic: Spatial Computing SubTopic: General Tags:
Mar ’26
Reply to ApplePay Payment Sheet for onfile payment method
Recurring and card-on-file charges with Apple Pay on the web are configured through the Apple Pay JS API on the merchant side, not something the user sets up themselves. At a high level (check the Apple Pay on the Web documentation for the exact property names in the API version you target):

- Declare the recurring request: Recent Apple Pay JS API versions (14 and later) let you attach a recurring payment request to your payment request, describing the billing label, the regular billing line item, and a management URL where the user can later manage the plan. The payment sheet then presents these terms for the user to authorize with Face ID, Touch ID, or passcode.
- Use a merchant token for later charges: When the user authorizes a recurring request, the resulting payment token can be provisioned as a merchant token, which your payment service provider can charge for subsequent merchant-initiated (on-file) transactions without the user re-authenticating each time.
- Review the details: Make sure the amount, billing interval, and any trial line items are correct before presenting the sheet, since they are shown to the user for confirmation.

If the sheet doesn't show the recurring terms, confirm that the Safari and Apple Pay JS API versions you're targeting support recurring payment requests, and consult the Apple Pay on the Web documentation or Apple Developer Support if issues persist.
Topic: Safari & Web SubTopic: General Tags:
Mar ’26
Reply to macOS system autocomplete cannot be disabled on via standard HTML attributes
On macOS, the behavior of system-level autocomplete suggestions in text input elements can indeed be frustrating for developers who wish to have full control over this feature. It is primarily controlled by the operating system and the browser, rather than being fully customizable through standard HTML attributes or JavaScript APIs. Here are some insights and potential workarounds:

Intentional Behavior

- User experience: Apple designs macOS to enhance user convenience by offering autocomplete suggestions for common inputs, such as email addresses, phone numbers, and verification codes, to streamline interactions and reduce typing effort.
- Security and privacy: By controlling autocomplete at the system level, Apple aims to protect user privacy and security, ensuring that sensitive information is not filled in without explicit user consent.

Limitations of HTML Attributes

Standard attributes like autocomplete="off", autocorrect="off", autocapitalize="off", and spellcheck="false" are intended to guide browser behavior for form fields. On macOS, however, these attributes may not fully override the system-level suggestions, especially for specific types of content like email verification codes.

Potential Workarounds

While you cannot completely disable system-level autocomplete on macOS, you can try the following approaches to mitigate its impact or improve the user experience:

- Custom input fields: Instead of a native input element, build a custom field (for example, a contenteditable element) styled to resemble a standard text field, and implement your own input-handling logic.
- JavaScript interception: Listen for input events on the field and manually normalize the content as suggestions appear. This is a workaround and may not be foolproof, especially with rapid input or complex suggestions.
- Instructional text: Provide clear instructions near the field explaining that autocomplete suggestions may appear and how users can dismiss or ignore them.
- App-specific settings: If your application runs in a controlled environment (e.g., within an enterprise), explore managed policies that might allow more granular control over autocomplete settings.

Ultimately, while you have limited control over system-level autocomplete on macOS, these strategies can help you manage its impact and enhance the user experience within your web application.
Topic: Safari & Web SubTopic: General
Mar ’26
Reply to app-site-association.cdn-apple.com | Cache not updating
When dealing with Apple's App Site Association (AASA) CDN and encountering delays in cache updates, here are several steps to troubleshoot and potentially resolve the issue:

- Verify the AASA file format: Ensure your apple-app-site-association file strictly follows Apple's documented format; any syntax error can prevent Apple's servers from parsing and caching it. Third-party AASA validator tools can help check the file.
- Use cache busting carefully: Cache-busting query strings can help your local environment see the latest version, but Apple's CDN expects the file at a consistent URL, so use cache busting only temporarily for testing.
- Check for typos in domain names: Double-check that the domain in your associated-domains entitlement matches the domain serving the file exactly, with no typographical errors.
- Ensure proper hosting and accessibility: The file must be accessible via HTTPS at exactly /.well-known/apple-app-site-association; verify your server serves it from that path without restrictions, and test with curl or wget to confirm a 200 OK with the correct content.
- Understand the CDN's refresh cycle: You cannot force app-site-association.cdn-apple.com to refetch on demand; it refreshes on its own schedule, which can take hours to days. For development, the associated-domains entitlement supports an alternate mode (for example, applinks:example.com?mode=developer together with enabling associated-domains development on the device) so devices fetch the file directly from your server instead of the CDN.
- Force a refresh on the device: Reinstalling the app or clearing its data can sometimes prompt the device to fetch the latest association data, though this isn't guaranteed.
- Monitor Apple's forums and status page: Check the Developer Forums for known issues with AASA caching, and the Apple System Status page for outages affecting the service.
- Contact Apple Developer Support: If the issue persists beyond about 72 hours, reach out to Developer Support; they can investigate on their end and provide specific guidance or resolve server-side problems.
- Review recent changes: Reflect on recent changes to your network configuration, server settings, or code that might have affected how the file is served.

By systematically checking these areas, you can identify where the update is stuck and take the appropriate action so your universal links are refreshed by Apple's systems.
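A quick structural check of the file itself often catches the problem before any CDN debugging. A minimal sketch following the classic `applinks` shape from Apple's associated-domains documentation (newer files may use `appIDs`/`components` instead of `appID`/`paths`, which this also accepts; it is a sanity check, not a full validator):

```python
import json

def check_aasa(text: str) -> list:
    """Return a list of problems found in an apple-app-site-association
    document; an empty list means the basic `applinks` shape looks OK."""
    try:
        data = json.loads(text)
    except ValueError:
        return ["file is not valid JSON"]

    applinks = data.get("applinks")
    if not isinstance(applinks, dict):
        return ["missing top-level 'applinks' object"]

    details = applinks.get("details")
    if not isinstance(details, list) or not details:
        return ["'applinks.details' must be a non-empty array"]

    problems = []
    for i, entry in enumerate(details):
        if not (entry.get("appID") or entry.get("appIDs")):
            problems.append(f"details[{i}]: missing 'appID'/'appIDs'")
        if not (entry.get("paths") or entry.get("components")):
            problems.append(f"details[{i}]: missing 'paths'/'components'")
    return problems
```

Run it against the exact bytes your server returns (e.g. the body of a `curl` fetch), since a misconfigured server can serve an error page where the JSON should be.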
Topic: Safari & Web SubTopic: General Tags:
Mar ’26
Reply to How to enable MIE on MacOS
On macOS with Apple Silicon, Memory Integrity Enforcement (MIE) built on the Arm Memory Tagging Extension (MTE) can be a fantastic way to spot heap memory issues. Stack sanitizer support for MTE is generally easier to set up, while enabling heap tagging usually requires specific settings or environment variables that are not as well documented or accessible as on platforms like Linux. Here are a few avenues to try:

- Environment variables: Apple's documentation and developer tools don't clearly list variables for controlling MTE heap tagging, but you can often make progress by experimenting with the variables usually used for debugging and runtime settings:
  - MallocStackLogging: Mainly for stack logging, but related Malloc* variables can affect how the memory allocator behaves.
  - DYLD_INSERT_LIBRARIES: You might try loading a custom library that changes allocator behavior to enable MTE, though this requires a good understanding of both MTE and macOS internals.
- Compiler and linker flags: Besides the stack sanitizer flags, there might be compiler or linker options specific to enabling MTE features on Apple Silicon; check the latest Xcode and LLVM documentation for any new MTE-related flags.
- Custom allocator: Consider a custom memory allocator that supports MTE. Allocators like jemalloc or tcmalloc have been adapted for MTE on other platforms, and it might be possible to port or adapt them for your needs.
- System configuration: Check whether any system-level configurations or kernel settings enable or control MTE features; this could involve low-level debugging or reaching out to Apple Developer Support about undocumented switches.
Mar ’26
Reply to How can I change the output dimensions of a CoreML model in Xcode when the outputs come from a NonMaximumSuppression layer?
In Core ML, setting fixed output dimensions for layers like NonMaximumSuppression (NMS) isn't straightforward, because NMS reduces the number of detected objects based on overlap and confidence scores, so the number of outputs is inherently dynamic. You can, however, work around this limitation with a few strategies:

- Pad the outputs in the model: Modify the model so it always emits the maximum possible number of detections, padding the results to a fixed size. This involves editing the architecture in a framework like TensorFlow or PyTorch before converting to Core ML, for example by adding a layer that pads the detection outputs to a fixed count such as 100.
- Post-process in Swift: After obtaining the outputs from Core ML, pad the results to the desired dimensions in Swift. This approach is flexible and doesn't require modifying the Core ML model itself.

Here's an example of how you might handle the padding in Swift (note that MLMultiArray contents are read element-by-element; it has no direct array accessor):

```swift
import CoreML

func processDetections(confidence: MLMultiArray,
                       coordinates: MLMultiArray) -> (confidence: [[Float]], coordinates: [[Float]]) {
    let maxDetections = 100

    // Copy the flat multiarray contents into Swift arrays.
    var paddedConfidence = (0..<confidence.count).map { confidence[$0].floatValue }
    var paddedCoordinates = (0..<coordinates.count).map { coordinates[$0].floatValue }

    // Pad to the fixed size: 5 class scores per detection...
    while paddedConfidence.count < maxDetections * 5 {
        paddedConfidence.append(0.0)
    }
    // ...and 4 box values per detection.
    while paddedCoordinates.count < maxDetections * 4 {
        paddedCoordinates.append(0.0)
    }

    // Re-chunk the flat arrays into per-detection rows.
    let confidenceArray = stride(from: 0, to: paddedConfidence.count, by: 5).map {
        Array(paddedConfidence[$0..<min($0 + 5, paddedConfidence.count)])
    }
    let coordinatesArray = stride(from: 0, to: paddedCoordinates.count, by: 4).map {
        Array(paddedCoordinates[$0..<min($0 + 4, paddedCoordinates.count)])
    }
    return (confidenceArray, coordinatesArray)
}
```

- Custom layer implementation: If you control the training process, you could implement a custom layer in TensorFlow or PyTorch that mimics NMS but outputs fixed-size results, and include it in the model before conversion to Core ML.

By using one of these strategies, your Core ML model can expose fixed output dimensions even though the NMS layer itself produces variable-sized outputs.
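The same padding strategy on the model-preparation side can be sketched framework-agnostically. This toy function (names and the 100-slot default are illustrative, matching the fixed size discussed above) shows the contract a padding layer must satisfy: output shape is always (max_detections, 4) for boxes and (max_detections,) for scores, regardless of how many detections NMS returned:

```python
def pad_detections(boxes, scores, max_detections=100):
    """Pad variable-length NMS output to a fixed size.

    `boxes` is a list of [x, y, w, h] rows and `scores` a parallel list of
    confidences. Extra slots are filled with zero boxes and zero scores;
    surplus rows beyond `max_detections` are truncated.
    """
    boxes = boxes[:max_detections]
    scores = scores[:max_detections]
    pad = max_detections - len(boxes)
    return (
        boxes + [[0.0] * 4 for _ in range(pad)],
        scores + [0.0] * pad,
    )
```

Consumers can then recover the real detections by dropping rows whose score is zero, which mirrors what the Swift post-processing above produces from the other direction.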
Topic: Machine Learning & AI SubTopic: Core ML Tags:
Mar ’26