At WWDC24, visionOS hand tracking gained a new option that lets an entity follow the hand faster (at the cost of some accuracy). The session video only explains how to implement this with ARKit, so may I ask how to implement it with an AnchorEntity in a RealityView?
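For reference, a minimal sketch of what this might look like on the RealityKit side, assuming the visionOS 2 AnchorEntity initializer that takes a trackingMode, where .predicted trades some accuracy for lower latency (the view name is hypothetical):

import SwiftUI
import RealityKit

struct HandFollowerView: View {
    var body: some View {
        RealityView { content in
            // Anchor a small sphere to the left palm; .predicted favors speed over accuracy.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .predicted)
            handAnchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.02),
                                            materials: [SimpleMaterial()]))
            content.add(handAnchor)
        }
    }
}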
I have been concentrating on developing a visionOS application. While I am currently quite familiar with RealityKit, CompositorServices has also caught my attention, though I have not learned it yet. Could you please clarify whether it is essential for me to learn CompositorServices? I would also appreciate insights into the respective advantages of RealityKit and CompositorServices.
I have created a portal and attached it to a wall using an AnchorEntity. However, I am seeking guidance on how to determine the size of the wall so that the portal can fully occupy it. I tried to find the relevant information in the demo code, but I had difficulty understanding certain sections. I would appreciate a step-by-step explanation or a pointer to the appropriate code. Thank you for your assistance.
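One hedged way to read the wall's dimensions is ARKit's PlaneDetectionProvider, rather than the demo code mentioned above; this sketch only prints the extent, and resizing the portal is left out:

import ARKit

let session = ARKitSession()
let planeData = PlaneDetectionProvider(alignments: [.vertical])

func observeWalls() async throws {
    // Plane detection needs world-sensing authorization in the app's Info.plist.
    try await session.run([planeData])
    for await update in planeData.anchorUpdates where update.anchor.classification == .wall {
        let extent = update.anchor.geometry.extent
        print("wall size:", extent.width, "x", extent.height)   // meters
    }
}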
In visionOS, ARKit's role is to integrate the virtual with the real. However, most of its functionality can be implemented easily with RealityKit alone (except for scene reconstruction, room tracking, and the enterprise APIs), so do I still need to use ARKit? What is the difference between them?
I intend to participate in the Swift Student Challenge 25. The rules state that an app playground submission should be experienceable within three minutes, but my work does not meet this requirement.
Create an interactive scene in an app playground that can be experienced within three minutes.
Initially, my work was not intended for the Challenge but for the App Store. However, I decided to submit it to the Challenge, and both my work and I meet its eligibility requirements. Because my work is a complete application, the judges will not be able to experience it within three minutes; it may take more time. Does this have any impact?
Topic:
Community
SubTopic:
Swift Student Challenge
Tags:
Swift Student Challenge
Swift
Swift Playground
SwiftUI
I am interested in learning the Metal framework for rendering development. However, most of Apple's official documentation uses Objective-C code. I am therefore seeking guidance on whether learning Swift alone is enough to become proficient in Metal.
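For what it's worth, the Metal API surface is fully callable from Swift even though much of the documentation shows Objective-C; a minimal Swift setup sketch:

import Metal

// Create the GPU device and a command queue, the starting point of any Metal app.
guard let device = MTLCreateSystemDefaultDevice(),
      let commandQueue = device.makeCommandQueue() else {
    fatalError("Metal is not supported on this device")
}
// Shaders are still written in the Metal Shading Language (C++-based),
// but they are loaded and driven entirely from Swift.
let library = device.makeDefaultLibrary()
print(device.name, library?.functionNames ?? [])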
I am currently preparing my submission for the Swift Student Challenge, and my app playground is quite comprehensive. Based on my estimations, it may take approximately 4 to 5.5 minutes for the reviewers to fully experience the interactive elements of my app. Every component is integral to the overall experience, and I would prefer not to remove any content, as each part not only contributes to the overall interactivity but also effectively demonstrates my abilities across different technical and creative domains.
However, I noticed the guideline on https://developer.apple.com/swift-student-challenge/eligibility stating that the interactive scene should be “experienced within three minutes.” While this does not appear to be a main requirement, my app playground significantly exceeds this timeframe.
Could you kindly clarify whether exceeding the three-minute guideline could result in my submission being rejected, or if it might negatively impact the evaluation process? I would greatly appreciate any insights you can provide.
Thank you for your time and consideration. I look forward to your response.
Topic:
Community
SubTopic:
Swift Student Challenge
Tags:
Swift Student Challenge
Swift Playground
Swans Quest
Playground Support
The charging port of my iPhone may have been damaged by water: it can no longer charge or transfer data over the cable, and it can only charge wirelessly, which does not support data transfer. Since Xcode supports wireless debugging, I was able to keep testing my app. However, I recently switched to a new Mac, which has no previous connection record with this iPhone, so I cannot pair it for wireless debugging.
So I want to know: how can I set up wireless debugging on such a device when there is no prior debugging record?
I'm developing a visionOS app. I want to know how to play spatial audio other than through RealityKit. And on iOS or macOS, how can spatial audio be played outside of RealityKit?
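A hedged sketch of one non-RealityKit route, using AVAudioEngine with AVAudioEnvironmentNode, which is available on iOS, macOS, and visionOS (the file URL and positions are assumptions):

import AVFoundation

let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

func playSpatial(url: URL) throws {
    let file = try AVAudioFile(forReading: url)   // a mono source spatializes best
    engine.attach(environment)
    engine.attach(player)
    engine.connect(player, to: environment, format: file.processingFormat)
    engine.connect(environment, to: engine.mainMixerNode, format: nil)

    player.renderingAlgorithm = .HRTFHQ           // binaural rendering
    player.position = AVAudio3DPoint(x: 1, y: 0, z: -2)
    environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)

    player.scheduleFile(file, at: nil)
    try engine.start()
    player.play()
}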
Hello, I'm adding a CollisionComponent to an entity in a RealityView. A CollisionComponent requires a shape to be provided as the reference for collision detection. To achieve more accurate detection, I would like that shape to come from the actual geometry of a USDZ model. Is there any way to do this? Thank you!
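A minimal sketch, assuming the USDZ has been loaded as a ModelEntity (here called usdzEntity, a placeholder name) and that a convex hull of its render mesh is accurate enough; ShapeResource.generateStaticMesh(from:) may be an option on newer OS versions for fully concave geometry:

import RealityKit

func addMeshBasedCollision(to usdzEntity: ModelEntity) async throws {
    guard let model = usdzEntity.components[ModelComponent.self] else { return }
    // Build a collision shape from the model's render mesh (convex hull approximation).
    let shape = try await ShapeResource.generateConvex(from: model.mesh)
    usdzEntity.components.set(CollisionComponent(shapes: [shape]))
}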
I have a USDZ model called 'GooseNModel' in my visionOS app project. I'm sure the model contains an animation, so I wrote the following code to display it with the animation playing:
import SwiftUI
import RealityKit

struct GooseNModelView: View {
    var body: some View {
        RealityView { content in
            if let gooseNModel = try? await Entity(named: "GooseNModel"),
               let animation = gooseNModel.availableAnimations.first {
                gooseNModel.playAnimation(animation)
                content.add(gooseNModel)
            }
        }
    }
}
But when I ran it, the model appeared static and no animation played. How can I solve this?
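One common cause is that the animation lives on a descendant entity rather than on the loaded root, so availableAnimations on the root is empty; a hedged sketch that walks the hierarchy (the helper name is hypothetical):

import RealityKit

func playFirstAnimation(in root: Entity) {
    var stack: [Entity] = [root]
    while let entity = stack.popLast() {
        // Play the first animation found anywhere in the hierarchy, looping it.
        if let animation = entity.availableAnimations.first {
            entity.playAnimation(animation.repeat())
            return
        }
        stack.append(contentsOf: entity.children)
    }
}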
When I wanted to load a Reality Composer Pro scene that contains Object Tracking, I tried the following code:
RealityView { content in
if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
content.add(model)
}
}
Obviously, this alone is not enough. Some configuration must be added to the RealityView to enable Object Tracking. What do we need to add?
Note: I have watched https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't understand much of it.
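For comparison, a hedged sketch of the plain ARKit route using ObjectTrackingProvider, rather than the Reality Composer Pro anchor the question is about; the .referenceobject file name is an assumption:

import ARKit
import RealityKit

let session = ARKitSession()

func startObjectTracking(content: RealityViewContent) async throws {
    // Load a .referenceobject file bundled with the app.
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    let marker = ModelEntity(mesh: .generateSphere(radius: 0.01))
    content.add(marker)
    for await update in provider.anchorUpdates {
        // Move the marker to the tracked object's pose.
        marker.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
    }
}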
In a RealityView I have two entities, each with a tracking component and a collision component, used to follow the hand and detect collisions. In the Behaviors component of one of the entities, an OnCollision trigger is set up to run an action. However, when I test, the action is not executed after a collision. Why is this?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
AR / VR
RealityKit
Reality Composer Pro
visionOS
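Regarding the OnCollision question above, a hedged first check is to subscribe to collision events in code inside the RealityView closure and confirm that contacts are reported at all (handEntity stands in for one of the two entities):

// Inside RealityView { content in ... }:
let subscription = content.subscribe(to: CollisionEvents.Began.self, on: handEntity) { event in
    print("collision:", event.entityA.name, "<->", event.entityB.name)
}
// Keep `subscription` alive (e.g. in @State or an app model), or the handler stops firing.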
How to create a visionOS project in Xcode 15.0 beta
I want to make visionOS games. Which is better, SwiftUI or UIKit? What are the advantages of each?