When I wanted to load a Reality Composer Pro scene that uses Object Tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this alone is not enough. Some additional configuration is needed to enable Object Tracking in the RealityView. What do we need to add?
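For reference, here is a minimal sketch of one way to drive the tracking with ARKit, assuming visionOS 2, an immersive space, and a reference-object file bundled with the app. The file name "MyObject.referenceobject", the view name, and the overall structure are placeholders for illustration, not the only possible setup: an ARKitSession runs an ObjectTrackingProvider, and each ObjectAnchor update is applied to the root of the loaded scene.

import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct ObjectTrackingView: View {
    // Root entity that is re-positioned as the tracked object moves.
    @State private var sceneRoot = Entity()

    var body: some View {
        RealityView { content in
            // Load the Reality Composer Pro scene as before.
            if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                sceneRoot.addChild(model)
            }
            content.add(sceneRoot)
        }
        .task { await runObjectTracking() }
    }

    private func runObjectTracking() async {
        let session = ARKitSession()
        do {
            // "MyObject.referenceobject" is a placeholder for a reference object bundled with the app.
            guard let url = Bundle.main.url(forResource: "MyObject",
                                            withExtension: "referenceobject") else { return }
            let referenceObject = try await ReferenceObject(from: url)
            let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
            try await session.run([provider])

            // Follow the tracked object's pose and apply it to the scene root.
            for await update in provider.anchorUpdates {
                sceneRoot.setTransformMatrix(update.anchor.originFromAnchorTransform,
                                             relativeTo: nil)
            }
        } catch {
            print("Object tracking failed: \(error)")
        }
    }
}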
I followed the WWDC video to learn SharePlay. I understood the first part, creating the seats, but I couldn't follow some of the later content very well, so I hope you can give me a code listing. Here is what I have so far; the seats are already defined:
struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}
I hope you can show me a SharePlay button that, when pressed, assigns everyone in the FaceTime call to one of the seats defined in TeamSelectionTemplate. Thank you very much.
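A minimal sketch of what such a button could look like, assuming visionOS 2's custom spatial templates and a placeholder GroupActivity called TeamSelectionActivity: when a session arrives, it sets the spatial template preference on the system coordinator and joins, and the system then places participants in the seats defined in TeamSelectionTemplate.

import SwiftUI
import GroupActivities

// Hypothetical activity type used for this example.
struct TeamSelectionActivity: GroupActivity {
    static let activityIdentifier = "com.example.team-selection"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Team Selection"
        metadata.type = .generic
        return metadata
    }
}

struct SharePlayButton: View {
    var body: some View {
        Button("Start SharePlay") {
            Task {
                // Start the activity for the current FaceTime call.
                _ = try? await TeamSelectionActivity().activate()
            }
        }
        .task {
            // Configure each incoming session to use the custom spatial template,
            // then join so participants are placed in its seats.
            for await session in TeamSelectionActivity.sessions() {
                if let coordinator = await session.systemCoordinator {
                    var configuration = SystemCoordinator.Configuration()
                    configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())
                    coordinator.configuration = configuration
                }
                session.join()
            }
        }
    }
}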
A Reality Composer Pro scene can contain many models, but when I use RealityView to display it and attach modifiers in SwiftUI, the modifiers affect the entire scene, which is not what I want. I would like the effect to apply to only a single model in the Reality Composer Pro scene.
How can I apply a modifier to just one model in the Reality Composer Pro scene?
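SwiftUI modifiers apply to the whole RealityView, so per-model effects are usually added with RealityKit components and targeted gestures instead. Here is a sketch, assuming one of the models in the scene is named "Robot" (a placeholder) and should get a hover effect and respond to taps on its own:

import SwiftUI
import RealityKit
import RealityKitContent

struct SingleModelEffectView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)

                // "Robot" is a placeholder for the name of one model in the scene.
                if let robot = scene.findEntity(named: "Robot") {
                    // Components affect only this entity, unlike SwiftUI modifiers,
                    // which apply to the entire RealityView.
                    robot.components.set(HoverEffectComponent())
                    robot.components.set(InputTargetComponent())
                    robot.components.set(CollisionComponent(
                        shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
                }
            }
        }
        .gesture(
            TapGesture()
                .targetedToEntity(where: .has(InputTargetComponent.self))
                .onEnded { value in
                    // Fires only for taps on entities that carry InputTargetComponent.
                    value.entity.scale *= 1.1
                }
        )
    }
}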
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
SwiftUI
RealityKit
Reality Composer Pro
visionOS
My app needs to send and receive messages from my server, but the server does not have SSL, so I have disabled ATS during development. If I want to ship the app while the server still has no SSL and ATS stays disabled, will the build be flagged or blocked by Xcode when packaging? Will the app be rejected by App Review? Can it be published on the App Store normally and made available to all users?
Note: my server is completely safe, with no security risks. I haven't set up SSL only because I don't have enough funds.
Topic:
App & System Services
SubTopic:
Networking
Tags:
Sign in with Apple
Authentication Services
App Store Server API
visionOS
Apple Intelligence is here, but I have some problems. First, the Settings page often shows that something is being downloaded. Is this normal? Also, the Predictive Code Completion model in Xcode seems to have been suddenly deleted and needs to be re-downloaded, and the error "The operation couldn't be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)" has occurred. Detailed log:
The operation couldn’t be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)
Domain: ModelCatalog.CatalogErrors.AssetErrors
Code: 1
User Info: {
DVTErrorCreationDateKey = "2024-08-27 14:42:54 +0000";
}
--
Failed to find asset: com.apple.fm.code.generate_small_v1.base - no asset
Domain: ModelCatalog.CatalogErrors.AssetErrors
Code: 1
--
System Information
macOS Version 15.1 (Build 24B5024e)
Xcode 16.0 (23049) (Build 16A5230g)
Timestamp: 2024-08-27T22:42:54+08:00
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Beta
Xcode
Developer Tools
Apple Intelligence
Hello,
I would like to inquire about the release date of Swift Assist’s beta version. Apple has stated that it will be released later this year, but they have not provided a specific date or time.
Could you please provide information on the beta version’s release date? Additionally, is there a trial version available? If so, when was it released?
Thank you for your assistance.
In visionOS, virtual content is occluded by the user's hands by default. In a mixed immersive space, if an entity sits behind a real object, how can that real object in the room occlude the entity, the same way hands occlude virtual content?
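One approach is RealityKit's OcclusionMaterial: geometry that uses it draws nothing but still hides virtual content behind it. The full solution usually applies OcclusionMaterial to the meshes produced by ARKit's SceneReconstructionProvider; below is a simpler sketch, assuming a mixed immersive space, that anchors an occluder plane to a real table so a sphere placed below the tabletop is hidden when seen from above, much like hand occlusion.

import SwiftUI
import RealityKit

struct OcclusionExampleView: View {
    var body: some View {
        RealityView { content in
            // Anchor to a horizontal table surface that is at least 0.5 m x 0.5 m.
            let tableAnchor = AnchorEntity(.plane(.horizontal,
                                                  classification: .table,
                                                  minimumBounds: [0.5, 0.5]))

            // OcclusionMaterial renders nothing but still writes depth,
            // so virtual content behind it is cut away.
            let occluder = ModelEntity(mesh: .generatePlane(width: 1, depth: 1),
                                       materials: [OcclusionMaterial()])
            tableAnchor.addChild(occluder)

            // A sphere below the tabletop is hidden by the occluder when viewed
            // from above, similar to how hands hide virtual content.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            sphere.position = [0, -0.3, 0]
            tableAnchor.addChild(sphere)

            content.add(tableAnchor)
        }
    }
}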
WWDC24 introduced a new visionOS hand-tracking option that lets an entity follow the hand faster (at the expense of some accuracy), but the video only explains how to implement it with ARKit. How can I implement it with an AnchorEntity in a RealityView?
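If I understand the feature correctly, the RealityKit-side equivalent is the AnchorEntity tracking mode added in visionOS 2. A minimal sketch, assuming a left-palm anchor and a placeholder sphere as the content:

import SwiftUI
import RealityKit

struct HandTrackedView: View {
    var body: some View {
        RealityView { content in
            // .predicted asks RealityKit to extrapolate the hand pose so the
            // entity follows with less latency, at some cost in accuracy.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm),
                                          trackingMode: .predicted)

            // Placeholder content attached to the hand.
            let marker = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                     materials: [SimpleMaterial(color: .green, isMetallic: false)])
            handAnchor.addChild(marker)
            content.add(handAnchor)
        }
    }
}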
I have been concentrating on developing a visionOS application. While I am now quite familiar with RealityKit, CompositorServices has also captured my attention, though I have not learned it yet. Could you please clarify whether it is essential for me to learn CompositorServices? I would also appreciate insight into the respective advantages of RealityKit and CompositorServices.
I have created a portal and attached it to a wall using the AnchorEntity. However, I am seeking guidance on how to determine the size of the wall so that the portal can fully occupy it. Initially, I attempted to locate relevant information within the demo code, but I encountered difficulties in comprehending certain sections. I would appreciate it if someone could provide a step-by-step explanation or a reference to the appropriate code. Thank you for your assistance.
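One way to measure the wall, assuming an immersive space with world-sensing authorization, is ARKit plane detection: the extent of a detected vertical plane classified as a wall gives a width and height you could use to size the portal. A sketch (the function name is a placeholder):

import ARKit

// Prints the size of detected walls; the reported width/height could size the portal.
func observeWallSizes() async {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.vertical])
    do {
        try await session.run([planes])
        for await update in planes.anchorUpdates {
            let anchor = update.anchor
            guard anchor.classification == .wall else { continue }
            // Extent is the plane's current size in meters.
            let width = anchor.geometry.extent.width
            let height = anchor.geometry.extent.height
            print("Wall \(anchor.id): \(width) m x \(height) m")
        }
    } catch {
        print("Plane detection failed: \(error)")
    }
}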
In visionOS, ARKit's role is to integrate the virtual with the real. However, most of its functionality can be implemented easily with RealityKit alone (except for scene reconstruction, room tracking, and the enterprise APIs), so do I still need to use ARKit? What is the difference between them?
I intend to participate in the Swift Student Challenge 2025. The rules mention that the app playground should be something that can be experienced within three minutes, but my work does not meet this requirement:
Create an interactive scene in an app playground that can be experienced within three minutes.
Initially, my work was not intended for the Challenge but for the App Store. However, I decided to submit it to the Challenge, and both my work and I meet the eligibility requirements. Because my work is a complete application, though, it is impossible for the judges to experience it fully within three minutes; it may take more time. Does this have any impact?
Topic:
Community
SubTopic:
Swift Student Challenge
Tags:
Swift Student Challenge
Swift
Swift Playground
SwiftUI
I am interested in learning the Metal framework for rendering development. However, most of Apple’s official documentation uses Objective-C code. Therefore, I am seeking guidance on whether it is more advantageous for me to focus solely on learning Swift to gain proficiency in Metal.
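For what it's worth, the Metal API is fully available from Swift even though many samples are written in Objective-C. A minimal sketch that builds a compute pipeline entirely in Swift (the kernel name "fill" and the helper function are just placeholders):

import Metal

// Builds a trivial compute pipeline to show that Metal is usable from Swift.
func makeTrivialComputePipeline() throws -> MTLComputePipelineState {
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("Metal is not supported on this device")
    }

    // Compile a tiny compute kernel from source at run time.
    let source = """
    #include <metal_stdlib>
    using namespace metal;
    kernel void fill(device float *out [[buffer(0)]],
                     uint id [[thread_position_in_grid]]) {
        out[id] = float(id);
    }
    """
    let library = try device.makeLibrary(source: source, options: nil)
    guard let function = library.makeFunction(name: "fill") else {
        fatalError("Kernel 'fill' not found")
    }
    return try device.makeComputePipelineState(function: function)
}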
I am currently preparing my submission for the Swift Student Challenge, and my app playground is quite comprehensive. Based on my estimations, it may take approximately 4 to 5.5 minutes for the reviewers to fully experience the interactive elements of my app. Every component is integral to the overall experience, and I would prefer not to remove any content, as each part not only contributes to the overall interactivity but also effectively demonstrates my abilities across different technical and creative domains.
However, I noticed the guideline on https://developer.apple.com/swift-student-challenge/eligibility stating that the interactive scene should be “experienced within three minutes.” While this does not appear to be a main requirement, my app playground significantly exceeds this timeframe.
Could you kindly clarify whether exceeding the three-minute guideline could result in my submission being rejected, or if it might negatively impact the evaluation process? I would greatly appreciate any insights you can provide.
Thank you for your time and consideration. I look forward to your response.
Topic:
Community
SubTopic:
Swift Student Challenge
Tags:
Swift Student Challenge
Swift Playground
Swans Quest
Playground Support
The charging port of my iPhone may have been damaged by water, so it can no longer charge or transfer data over a cable; it can only be charged wirelessly, which does not support data transfer. Since Xcode supports wireless debugging, I was still able to test my app. However, I recently switched to a new Mac, and because the new Mac has no previous connection record with the iPhone, wireless debugging is not possible.
So how can I set up wireless debugging on a device like this, with no prior pairing record?