AudioToolbox's AudioServicesPlaySystemSound function takes a SystemSoundID and plays one of the sound effects that ship with the system. However, I can't tell which sound each number corresponds to, so I would like to know all of the sound effects available in visionOS and their corresponding SystemSoundID values.
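For context, this is the call I mean (a minimal sketch; the ID used below is a commonly cited iOS value and may not match visionOS, which is exactly what this question is about):

```swift
import AudioToolbox

// Sketch only: plays a built-in system sound by numeric ID.
// Whether visionOS uses the same ID numbering as iOS is an open question here.
func playSystemSound(_ id: UInt32) {
    AudioServicesPlaySystemSound(SystemSoundID(id))
}

// Usage (assumed iOS-style ID):
// playSystemSound(1104)
```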
When I wanted to load a Reality Composer Pro scene that contains Object Tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this alone is not enough. Some additional configuration is needed to enable Object Tracking in the RealityView. What do we need to add?
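For reference, here is my rough understanding of what might be missing, as a sketch only: running an ARKitSession with an ObjectTrackingProvider built from a reference object, so the object anchor in the Reality Composer Pro scene can start tracking. The file name "MyObject.referenceobject" is a hypothetical placeholder.

```swift
import ARKit
import Foundation

// Sketch, not a confirmed answer: start object tracking so the scene's
// object anchor can resolve. Keep the session alive while tracking is needed.
let objectTrackingSession = ARKitSession()

func startObjectTracking() async throws {
    // "MyObject.referenceobject" is a hypothetical file exported for tracking.
    guard let url = Bundle.main.url(forResource: "MyObject",
                                    withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await objectTrackingSession.run([provider])
}
```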
In Apple Maps, some areas offer a very realistic, photorealistic 3D map. I would like to bring that content into visionOS as true 3D (something like Model3D). How can I access it?
Note: I am not asking for an iOS-style 3D effect rendered on a flat screen, but for displaying the content as a 3D (USDZ-like) model in visionOS.
This is a visionOS app. I attached a contextMenu to a composite view, but when I long-press the view nothing happens. The same contextMenu works normally on other views, so I suspect something is wrong with this composite view, but I can't tell what. Any hints would be appreciated. Thank you!
The view with the problem:
struct NAMEView: View {
    @StateObject private var placeStore = PlaceStore()

    var body: some View {
        ZStack {
            Group {
                HStack(spacing: 2) {
                    Image(systemName: "mappin.circle.fill")
                        .font(.system(size: 50))
                        .symbolRenderingMode(.multicolor)
                        .accessibilityLabel("your location")
                        .accessibilityAddTraits([.isHeader])
                        .padding(.leading, 5.5)
                    VStack {
                        Text("\(placeStore.locationName)")
                            .font(.title3)
                            .accessibilityLabel(placeStore.locationName)
                        Text("You are here in App")
                            .font(.system(size: 13))
                            .foregroundColor(.secondary)
                            .accessibilityLabel("You are here in App")
                    }
                    .hoverEffect { effect, isActive, _ in
                        effect.opacity(isActive ? 1 : 0)
                    }
                    .padding()
                }
            }
            .onAppear {
                placeStore.updateLocationName()
            }
            .glassBackgroundEffect()
            .hoverEffect { effect, isActive, proxy in
                effect.clipShape(.capsule.size(
                    width: isActive ? proxy.size.width : proxy.size.height,
                    height: proxy.size.height,
                    anchor: .leading
                ))
                .scaleEffect(isActive ? 1.05 : 1.0)
            }
        }
    }
}
Xcode 16 Beta 4 includes Predictive Code Completion, and it appears alongside the other SDKs on the download page that Xcode shows on first launch, waiting to be downloaded.
However, I would like to know: 1. What is Predictive Code Completion? 2. I didn't download it from that first-launch SDK download page. Where can I download it later?
When I build and run my visionOS app, RealityKitContent reports an error:
Tool terminated by signal 'Segmentation fault: 11'
The error points to a USDZ model I imported, yet the model displays normally in the scene and doesn't appear to be damaged. Why does this error occur, and how can I check and repair it?
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: USDZ, RealityKit, Reality Composer Pro, visionOS
A sample program shown at WWDC24 demonstrates that users can make the robot walk by pinching and sliding. However, I haven't found any documentation or videos covering this feature. If you know of any, please let me know. Thank you!
I use the following gesture inside a RealityView.
DragGesture().targetedToAnyEntity()
    .onChanged { value in
        print("DragGesture")
        self.dragOffset = value.translation
        self.startTimer()
    }
    .onEnded { _ in
        self.dragOffset = .zero
        self.direction = "None"
        self.stopTimer()
    }
However, because of how RealityView works, the gesture is not detected as expected, so I suspect some modifier needs to be applied after value.translation, but I don't know which. Could you point me in the right direction? Thank you.
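One thing I'm assuming might matter here (a sketch, not a confirmed fix): targeted gestures such as .targetedToAnyEntity() only hit entities that carry both an InputTargetComponent and a CollisionComponent, so the missing piece may be on the entity itself rather than a modifier after value.translation.

```swift
import RealityKit

// Sketch: make an entity eligible for targeted SwiftUI gestures.
// The box size here is illustrative only.
func makeEntityHittable(_ entity: Entity) {
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
}
```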
I was building a gesture to make the goose (my character) walk, but I ran into two problems.
1: I added collision and physics body components to both the goose and the entities it collides with, but those physics shapes cannot fully block the goose's path. For example, with a tree in front of it, the goose is stopped at first, but if it moves a little faster it passes through the tree or ends up on top of it.
2: My knowledge here is still incomplete. I can move the goose along the z-axis, and I want that movement to be driven by the user dragging forward and back (along the z-axis), but at the moment I can only drive it by dragging up and down (along the y-axis). I would appreciate guidance on this line (see also the sketch after the code below):
gooseOriginalPosition.z + Float(translation.height / 10000)
This is the complete code:
@State var goose: Entity?
@State var isDraggingGoose = false
@State var gooseOriginalPosition = SIMD3<Float>(repeating: 0)

RealityView { content in
    if let model = try? await Entity(named: "WorldScene", in: realityKitContentBundle) {
        content.add(model)
    }
    if let gooseEntity = try? await Entity(named: "Goose", in: realityKitContentBundle) {
        gooseEntity.scale = SIMD3<Float>(repeating: 0.3)
        content.add(gooseEntity)
        goose = gooseEntity
    }
}
.simultaneousGesture(DragGesture()
    .targetedToAnyEntity()
    .onChanged { value in
        handleDrag(value)
    }
    .onEnded { _ in
        isDraggingGoose = false
        gooseTimer?.invalidate()
    })

func handleDrag(_ value: EntityTargetValue<DragGesture.Value>) {
    guard let goose = goose else { return }
    if !isDraggingGoose {
        isDraggingGoose = true
        gooseOriginalPosition = goose.position(relativeTo: nil)
    }
    let translation = value.gestureValue.translation
    let newPosition = SIMD3<Float>(
        gooseOriginalPosition.x + Float(translation.width / 10000),
        gooseOriginalPosition.y,
        gooseOriginalPosition.z + Float(translation.height / 10000) // I want this to be driven by a z-axis (forward/back) drag
    )
    goose.setPosition(newPosition, relativeTo: nil)
}
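As a sketch of what I think I'm after (assuming visionOS exposes the 3D drag values, i.e. that DragGesture.Value provides translation3D): use the z component of the 3D translation instead of translation.height, so forward/back hand movement drives the z-axis. The helper name and the divisor are placeholders for tuning.

```swift
import SwiftUI
import RealityKit

// Hypothetical helper: derive a new position from the drag's 3D translation.
// Assumes translation3D (in points) is available on visionOS.
func draggedPosition(from value: EntityTargetValue<DragGesture.Value>,
                     original: SIMD3<Float>) -> SIMD3<Float> {
    let t = value.gestureValue.translation3D
    return SIMD3<Float>(
        original.x + Float(t.x / 1000),  // left–right
        original.y,                      // keep height fixed
        original.z + Float(t.z / 1000)   // forward–back (z-axis)
    )
}
```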
How can I make a descriptive text label appear after the user gazes at a view for a while, as shown in the attached image?
In a Reality Composer Pro scene, I want one of the entities to display a blue material when the user looks at it. To achieve this, I added the following Shader Graph to the materials associated with this entity:
Additionally, I added a HoverEffectComponent to the RealityView in code:
RealityView { content in
    if let model = try? await Entity(named: "WorldScene", in: realityKitContentBundle) {
        let hoverEffect = HoverEffectComponent(.shader(.default))
        model.components.set(hoverEffect)
        content.add(model)
    }
}
However, when I hover over this entity, I don't see any visual reaction. Could you please provide guidance on how to resolve this?
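A sketch of what I'm guessing could be missing (assumptions: the shader hover effect needs collision and input-target components on the entity, and setting the component on the loaded root may not reach the intended child). It applies the components directly to the specific entity; "BlueHighlightEntity" is a hypothetical name from my scene.

```swift
import RealityKit

// Sketch under those assumptions; the entity name and collision shape are placeholders.
func enableHoverHighlight(on root: Entity) {
    guard let target = root.findEntity(named: "BlueHighlightEntity") else { return }
    target.components.set(InputTargetComponent())
    target.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    target.components.set(HoverEffectComponent(.shader(.default)))
}
```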
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, Shader Graph Editor, visionOS
During testing, I encountered an issue with SharePlay. Since SharePlay requires multi-device testing, I intend to use my Mac and Vision Pro. However, these two devices are also my primary devices, and I am reluctant to switch Apple IDs just for testing; I would like to test with my original Apple ID. But because both devices are signed in to the same Apple ID and use the same phone number, they cannot FaceTime each other. I am at a loss as to how to proceed.
I wrote code that triggers the Timeline in my Reality Composer Pro scene every 12.93 seconds.
// Backing state for the timer referenced below.
@State private var timer: Timer?

RealityView { … }
    .onAppear {
        startTimer()
    }
    .onDisappear {
        stopTimer()
    }

func startTimer() {
    timer = Timer.scheduledTimer(withTimeInterval: 12.93, repeats: true) { _ in
        action()
    }
}

func stopTimer() {
    timer?.invalidate()
}

func action() {
    print("SunUpDown")
    NotificationCenter.default.post(
        name: NSNotification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            // `scene` refers to the RealityKit scene of the loaded content.
            "RealityKit.NotificationTrigger.Scene": scene as Any,
            "RealityKit.NotificationTrigger.Identifier": "SunUpDown"
        ]
    )
}
When the "SunUpDown" notification is received, the Timeline plays.
Everything worked normally while the scene was running and it kept looping, until I tried to zoom in on the window and found that the looping stopped. Could you please explain this behavior?
Note: the window style is volumetric, and the defaultWorldScaling modifier's parameter is dynamic.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: SwiftUI, RealityKit, Reality Composer Pro, visionOS
Apple Intelligence is now available in macOS 15.1 and iOS 18.1 (Beta). However, it is currently not supported on visionOS. Even though the device runs on M2 silicon with 16GB of unified memory, Apple Intelligence cannot be used on this platform. I want to enable Apple Intelligence for my visionOS app.
I've found that my visionOS Simulator behaves strangely: many functions and features are missing. For example, I learned online that in other developers' visionOS Simulators the immersive Environments scenes can be opened, but when I click them nothing happens. It's not just that; many other system features that other developers appear to have are missing for me. I'm worried this will affect testing my app. Why might this be?
Some information:
Xcode version: the latest Xcode 15.1 Beta
Device: iMac (2021)
Simulator build number: 21N305