In this sample code:
https://developer.apple.com/documentation/visionos/incorporating-real-world-surroundings-in-an-immersive-experience
That sample implements physical collisions between virtual objects and the real world by generating a mesh of the surroundings and giving it physics and collision components. However, I don't fully understand the explanation in the documentation. Can anyone walk me through the approach? Thank you!
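For reference, this is my rough understanding of how the sample does it (a sketch only; it assumes an ARKitSession that is already running the SceneReconstructionProvider, and the function and variable names are my own):

import ARKit
import RealityKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
// Assumes `try await session.run([sceneReconstruction])` has been called elsewhere.

// Build invisible static collision entities from the reconstructed mesh of the room,
// so virtual objects with physics bodies can collide with real-world surfaces.
func processReconstructionUpdates(rootEntity: Entity) async {
    var meshEntities: [UUID: ModelEntity] = [:]
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        // Convert the anchor's mesh geometry into a static collision shape.
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            rootEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities[meshAnchor.id] = nil
        }
    }
}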
In RealityView, I want to move entity A to the position of entity B, but I can't determine entity B's coordinates (for example, entity B is tracking the hand). What's the solution?
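To illustrate what I expected to work (a sketch; entityA and entityB are placeholder names for my entities):

// Read entity B's position in world space (relativeTo: nil) and move entity A there.
let worldPositionOfB = entityB.position(relativeTo: nil)
entityA.setPosition(worldPositionOfB, relativeTo: nil)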
Lights in RealityView only affect virtual objects. I would like their light to also be projected onto the real world. Is there an API that can achieve this?
Apple Intelligence is here, but I have some problems. First, the Settings page often shows that something is being downloaded. Is this normal? Also, the Predictive Code Completion model in Xcode seems to have been suddenly deleted and needs to be re-downloaded, and the error "The operation couldn't be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)" occurred. Detailed log:
The operation couldn’t be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)
Domain: ModelCatalog.CatalogErrors.AssetErrors
Code: 1
User Info: {
DVTErrorCreationDateKey = "2024-08-27 14:42:54 +0000";
}
--
Failed to find asset: com.apple.fm.code.generate_small_v1.base - no asset
Domain: ModelCatalog.CatalogErrors.AssetErrors
Code: 1
--
System Information
macOS Version 15.1 (Build 24B5024e)
Xcode 16.0 (23049) (Build 16A5230g)
Timestamp: 2024-08-27T22:42:54+08:00
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Beta
Xcode
Developer Tools
Apple Intelligence
In RealityView, I have two entities with tracking components and collision components, used to follow the hand and detect collisions. The Behaviors component of one of the entities has an OnCollision trigger that should execute an action. However, when I test, the action does not execute after a collision. Why is this?
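In case it helps clarify what I'm after, this is the code-level equivalent I would otherwise write (a sketch; `content` is the RealityViewContent and `handEntity` is a placeholder name):

// Subscribe to collision-began events on one of the entities and run the action
// manually, instead of relying on the Behaviors OnCollision trigger.
let subscription = content.subscribe(to: CollisionEvents.Began.self, on: handEntity) { event in
    print("Collision between \(event.entityA.name) and \(event.entityB.name)")
    // The action the Behaviors component was supposed to trigger would go here.
}
// Keep `subscription` alive (for example, store it in a property), otherwise the handler stops firing.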
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
AR / VR
RealityKit
Reality Composer Pro
visionOS
Is there an action that can clone an entity in RealityView as many times as I want? If there is, please let me know. Thank you!
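A code-level workaround I'm considering, rather than a Reality Composer Pro action (a sketch; `sourceEntity`, the count, and the offset are placeholders):

let copyCount = 5
for index in 1...copyCount {
    // clone(recursive: true) copies the entity together with its children and components.
    let copy = sourceEntity.clone(recursive: true)
    copy.position.x += Float(index) * 0.1   // Offset each copy so they don't overlap.
    sourceEntity.parent?.addChild(copy)
}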
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
ARKit
RealityKit
Reality Composer Pro
visionOS
I can execute an action by having Xcode send a notification to Reality Composer Pro via NotificationCenter, or send a notification back to Xcode through the Notification action in Reality Composer Pro. However, in my project neither direction receives its notifications. To rule out an error in my code, I created a simple demo project using the same code, and it works normally there. It is perplexing that I cannot resolve this issue. Do I need any additional configuration?
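For completeness, this is the receiving side I use in the demo project (the notification name and userInfo key here are my assumption of what the Notification action posts; I'm not certain they are correct in every configuration):

import SwiftUI
import RealityKit

struct NotificationListeningView: View {
    // Listen for the notification the Reality Composer Pro Notification action is assumed to post.
    private let notificationTrigger = NotificationCenter.default.publisher(
        for: Notification.Name("RealityKit.NotificationTrigger"))

    var body: some View {
        RealityView { content in
            // Load and add the Reality Composer Pro scene here.
        }
        .onReceive(notificationTrigger) { output in
            guard let identifier = output.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
            else { return }
            print("Received notification action:", identifier)
        }
    }
}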
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Notification Center
SwiftUI
Reality Composer Pro
visionOS
In visionOS 2, there is a feature that lets users raise their hand to display the Home button. However, this functionality conflicts with the interaction required in the mixed immersive space that my application uses. I am therefore looking for a way to disable it.
The entities in my RealityView have tracking components that let them track different locations on the hand. However, I found that apart from the index fingertip, the thumb tip, the palm, and the wrist, no other locations can be tracked properly (for example, the middle fingertip). How can I solve this? (I suspect this may be a bug in the beta.)
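The workaround I'm considering is driving the entity from ARKit hand tracking directly instead of the anchoring component (a sketch; it assumes an ARKitSession already running the HandTrackingProvider, and the function name is my own):

import ARKit
import RealityKit

let handTracking = HandTrackingProvider()
// Assumes `try await session.run([handTracking])` has been called elsewhere.

// Move an entity to the right hand's middle-finger tip on every hand-tracking update.
func followMiddleFingerTip(with entity: Entity) async {
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        guard handAnchor.chirality == .right,
              let joint = handAnchor.handSkeleton?.joint(.middleFingerTip),
              joint.isTracked else { continue }
        // World transform of the joint = origin_from_anchor * anchor_from_joint.
        let worldTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        entity.setTransformMatrix(worldTransform, relativeTo: nil)
    }
}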
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
AR / VR
RealityKit
Reality Composer Pro
visionOS
In my volume there is a RealityView that includes lighting effects. However, when the user drags the window back and forth, the farther the volume is from the user, the brighter the light effect becomes. (I believe this may be a bug in the beta.)
Note: The volume's WindowGroup has the .defaultWorldScaling(.dynamic) modifier.
In RealityView, I found that an entity cannot cast a shadow onto the real world. What configuration should I add to achieve this?
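For reference, the only shadow-related configuration I've found so far is GroundingShadowComponent, which I tried like this (a sketch; I'm not sure it is the intended way to cast shadows onto real-world surfaces):

// Opt the entity in to casting a grounding shadow beneath it.
entity.components.set(GroundingShadowComponent(castsShadow: true))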
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
SwiftUI
RealityKit
Reality Composer Pro
visionOS
I am attempting to execute actions after tapping an entity in RealityView using the Behaviors component. I have added the Input Target component and the Tap gesture as follows:
.gesture(
    TapGesture().targetedToAnyEntity()
        .onEnded { value in
            _ = value.entity.applyTapForBehaviors()
        }
)
However, during testing, I have observed that the entity does not appear to recognize the tap gesture. Could you kindly provide any relevant documentation or guidance on this matter?
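In case the entity setup matters, this is roughly how it is configured in code (my assumption is that a CollisionComponent is required alongside InputTargetComponent for hit-testing; the box size is a placeholder):

tappedEntity.components.set(InputTargetComponent())
tappedEntity.components.set(
    CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])])
)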
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
SwiftUI
RealityKit
Reality Composer Pro
visionOS
Apple Intelligence is now available in macOS 15.1 and iOS 18.1 (beta). However, it is currently not supported on visionOS, even though Vision Pro runs on M2 silicon with 16 GB of unified memory. I want to enable Apple Intelligence for my visionOS app.
During testing, I encountered an issue with SharePlay. Since SharePlay requires multi-device testing, I intend to use my Mac and Vision Pro. However, these two devices are also my primary devices, so I am reluctant to switch Apple IDs just for testing; I would like to test with my original Apple ID. But because both devices are signed in to the same Apple ID and rely on the same phone number, they cannot FaceTime each other. I am at a loss as to how to proceed.
I have written code that triggers a Timeline in the Reality Composer Pro scene every 12.93 seconds.
RealityView { … }
    .onAppear {
        startTimer()
    }
    .onDisappear {
        stopTimer()
    }

func startTimer() {
    timer = Timer.scheduledTimer(withTimeInterval: 12.93, repeats: true) { _ in
        action()
    }
}

func stopTimer() {
    timer?.invalidate()
}

func action() {
    print("SunUpDown")
    NotificationCenter.default.post(
        name: NSNotification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene as Any,
            "RealityKit.NotificationTrigger.Identifier": "SunUpDown"
        ]
    )
}
Upon receiving the "SunUpDown" notification, the Timeline runs. Everything works normally while the scene is running and the loop keeps repeating, but when I zoom in on the window, the looping stops. Could you please explain this behavior?
Note: The window type is volumetric, and the parameter of the defaultWorldScaling modifier is dynamic.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
SwiftUI
RealityKit
Reality Composer Pro
visionOS