I used the following gesture on a RealityView:
DragGesture().targetedToAnyEntity()
    .onChanged { value in
        print("DragGesture")
        self.dragOffset = value.translation
        self.startTimer()
    }
    .onEnded { _ in
        self.dragOffset = .zero
        self.direction = "None"
        self.stopTimer()
    }
However, because of how RealityView handles input, the gesture is never detected. I suspect some modifiers (or setup) need to be added beyond value.translation, but I don't know which ones. Could you suggest some? Thank you.
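For reference, my current understanding (which may be incomplete) is that the target entity itself also needs an input-target component and a collision shape before a targeted gesture can hit it. A minimal sketch under that assumption, using the usual SwiftUI/RealityKit/RealityKitContent imports; the scene name "Scene" and the collision box size are placeholders from my project:

RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        // Assumption: the entity needs both components below to receive targeted gestures.
        model.components.set(InputTargetComponent())
        model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.3, 0.3, 0.3])]))
        content.add(model)
    }
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            self.dragOffset = value.translation
            self.startTimer()
        }
        .onEnded { _ in
            self.dragOffset = .zero
            self.direction = "None"
            self.stopTimer()
        }
)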
In a sample program shown at WWDC24, users can control a robot and make it walk by pinching and sliding. However, I haven't found any documentation or videos related to this feature. If you know of any, please let me know. Thank you!
When I run my visionOS app, RealityKitContent reports an error:
Tool terminated by signal 'Segmentation fault: 11'
The error points to a USDZ model I imported, but the model displays normally in the scene and does not appear to be damaged. Why does this error occur, and how can I check and repair the model?
Topic: Spatial Computing / SubTopic: Reality Composer Pro
Tags: USDZ, RealityKit, Reality Composer Pro, visionOS
Xcode 16 Beta 4 includes Predictive Code Completion; it is listed alongside the other SDKs on the component-download page the first time Xcode is opened, waiting to be downloaded.
However, I want to know: 1. What is Predictive Code Completion? 2. I didn't download it on that first-launch download page. Where can I download it later?
In some areas, Apple Maps shows a very realistic, photo-real 3D map. I would like to display that content as true 3D in visionOS (similar to Model3D). How can I do this?
Note: I am not asking for a flat-screen 3D effect like on iOS; I want it displayed in visionOS the way a USDZ model is displayed.
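For reference, this is the kind of presentation I mean, a minimal sketch that simply shows a bundled USDZ asset; "Robot" is a placeholder asset name, not an actual Apple Maps resource:

import SwiftUI
import RealityKit

struct MapModelView: View {
    var body: some View {
        // Model3D loads a bundled USDZ and shows it as true 3D content in visionOS.
        // "Robot" is only a placeholder name for illustration.
        Model3D(named: "Robot") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}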
This is a visionOS app. I added a contextMenu to a composite view, but when I long-press the view, nothing happens. I tried the same contextMenu on other views and it works normally, so I think something is wrong with this composite view, but I don't know what the problem is. I hope you can point it out. Thank you!
The problematic view:
struct NAMEView: View {
    @StateObject private var placeStore = PlaceStore()

    var body: some View {
        ZStack {
            Group {
                HStack(spacing: 2) {
                    Image(systemName: "mappin.circle.fill")
                        .font(.system(size: 50))
                        .symbolRenderingMode(.multicolor)
                        .accessibilityLabel("your location")
                        .accessibilityAddTraits([.isHeader])
                        .padding(.leading, 5.5)
                    VStack {
                        Text("\(placeStore.locationName)")
                            .font(.title3)
                            .accessibilityLabel(placeStore.locationName)
                        Text("You are here in App")
                            .font(.system(size: 13))
                            .foregroundColor(.secondary)
                            .accessibilityLabel("You are here in App")
                    }
                    .hoverEffect { effect, isActive, _ in
                        effect.opacity(isActive ? 1 : 0)
                    }
                    .padding()
                }
            }
            .onAppear {
                placeStore.updateLocationName()
            }
            .glassBackgroundEffect()
            .hoverEffect { effect, isActive, proxy in
                effect.clipShape(.capsule.size(
                    width: isActive ? proxy.size.width : proxy.size.height,
                    height: proxy.size.height,
                    anchor: .leading
                ))
                .scaleEffect(isActive ? 1.05 : 1.0)
            }
        }
    }
}
In visionOS, the tab bar of a TabView sits outside the window by default.
If my program switches from a page without a TabView to a page that needs one, the tab bar suddenly appears on the left side of the window without any animation. I would like it to animate in (for example easeIn or a move transition). I tried adding .animation and other animation-related modifiers to the TabView and its content, but the tab bar itself never animates; only the views inside the tabs do, which is not what I want. What I want is for the tab bar outside the window to animate. What should I do?
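For reference, a minimal sketch of the kind of thing I tried; showTabs is a hypothetical state flag standing in for how my program decides which page to show:

import SwiftUI

struct RootView: View {
    @State private var showTabs = false   // hypothetical flag controlling which page is shown

    var body: some View {
        if showTabs {
            TabView {
                Text("Home").tabItem { Label("Home", systemImage: "house") }
                Text("Settings").tabItem { Label("Settings", systemImage: "gear") }
            }
            // These animate the tab *content*, but the tab bar outside the window
            // still pops in without animation, which is the problem described above.
            .animation(.easeIn(duration: 0.3), value: showTabs)
            .transition(.move(edge: .leading))
        } else {
            Button("Show tabs") {
                withAnimation(.easeIn) { showTabs = true }
            }
        }
    }
}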
The AudioServicesPlaySystemSound function in AudioToolbox plays built-in system sound effects when given the corresponding SystemSoundID. However, I can't tell which sound each number corresponds to, so I would like a list of all the sound effects in visionOS and their corresponding SystemSoundIDs.
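For context, this is how I am calling it, a minimal sketch; 1104 is just an example ID I tried, and which sound (if any) each ID maps to on visionOS is exactly what the question asks:

import AudioToolbox

// Plays whatever built-in system sound is registered under the given ID.
// 1104 is only an example value; the ID-to-sound mapping is the open question.
func playExampleSystemSound() {
    let soundID: SystemSoundID = 1104
    AudioServicesPlaySystemSound(soundID)
}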
Since my question exceeds 700 words, please check it in the attachment. Thank you!
Question
My app needs to send and receive messages to and from my server, but the server does not have SSL, so I can only disable ATS during development. If I want to release the app with ATS still disabled and the server still without SSL: will Xcode warn about it or block the app when packaging? Will it be rejected by App Review? Can it be published to the App Store normally and provided to all users?
Note: My server is completely safe and has no security risks; I just haven't applied for an SSL certificate because of insufficient funds.
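For clarity, this is what I mean by "disabling ATS": the standard Info.plist exception, shown here only to make the question concrete.

<key>NSAppTransportSecurity</key>
<dict>
    <!-- Allows plain HTTP connections; this is the setting the question is about. -->
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>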
Topic: App & System Services / SubTopic: Networking
Tags: Sign in with Apple, Authentication Services, App Store Server API, visionOS
We can add many models to a Reality Composer Pro scene, but when I display the scene with a RealityView in SwiftUI and add modifiers, the modifiers affect the whole scene, which is not what I want. I would like a modifier to apply to just a single model in the Reality Composer Pro scene.
How can I apply modifiers to a single model in a Reality Composer Pro scene?
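For context, a minimal sketch of the direction I have been exploring: looking up one entity by the name it has in the Reality Composer Pro hierarchy and changing only its components. "Robot" is a placeholder for one model's name in my scene, and OpacityComponent is only an example of a per-entity change:

RealityView { content in
    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(scene)

        // Look up a single model by the name it has in the Reality Composer Pro
        // hierarchy and adjust only that entity, instead of modifying the whole view.
        if let robot = scene.findEntity(named: "Robot") {
            robot.components.set(OpacityComponent(opacity: 0.5))
        }
    }
}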
Topic: Spatial Computing / SubTopic: Reality Composer Pro
Tags: SwiftUI, RealityKit, Reality Composer Pro, visionOS
I learned SharePlay from the WWDC video. I understand the part about creating seats, but I couldn't follow some of the content that comes after it, so I hope you can help me. The details are as follows: I have already set up the seats.
struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}
In one of my previous posts I wrote: "I hope you can give me a SharePlay button. After pressing it, it assigns all users in the FaceTime call to the seats defined in TeamSelectionTemplate." Someone replied asking me to try systemCoordinator.configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate()); however, Xcode reports the error "Cannot find 'systemCoordinator' in scope". How can I solve this? Thank you!
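For context, a minimal sketch of how I understand the system coordinator is obtained: it lives on the active GroupSession rather than being a global. MyActivity is a placeholder for my GroupActivity type, and this reflects my understanding of the WWDC material rather than confirmed working code:

import GroupActivities

func configureSpatialTemplate(for session: GroupSession<MyActivity>) async {
    // The system coordinator is a property of the group session; it is not a global,
    // which is presumably why "Cannot find 'systemCoordinator' in scope" appears.
    guard let systemCoordinator = await session.systemCoordinator else { return }

    var configuration = SystemCoordinator.Configuration()
    configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())
    systemCoordinator.configuration = configuration

    session.join()
}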
When I wanted to load a Reality Composer Pro scene containing Object Tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this is not enough. Some configuration that enables Object Tracking needs to be added to the RealityView, but what exactly do I need to add?
Note: I have watched https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't understand much of it.
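For context, my current understanding from that session is that tracking is driven by an ARKitSession running an ObjectTrackingProvider loaded from a .referenceobject file; a rough sketch of that idea is below. "MyObject" is a placeholder file name, and how this is supposed to connect to the Reality Composer Pro anchor is the part I don't understand:

import ARKit
import RealityKit

// Rough sketch of the ARKit-based object-tracking setup described in WWDC24 session 10101.
// "MyObject" is a placeholder reference-object file name.
func startObjectTracking() async throws {
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let session = ARKitSession()
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // Each update carries an ObjectAnchor whose transform follows the tracked object.
        print(update.anchor.originFromAnchorTransform)
    }
}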
I followed the WWDC video to learn SharePlay. I understood the first part about creating seats, but I couldn't follow some of the later content very well, so I hope you can give me some sample code. The details are as follows:
I have already set up the seats.
struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}
I hope you can give me a SharePlay button: after pressing it, it assigns all users in the FaceTime call to the seats defined in TeamSelectionTemplate. Thank you very much.
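For reference, a minimal sketch of the kind of button I mean: it simply activates a GroupActivity for the FaceTime call. MyActivity is a placeholder type, and wiring the activation to the seat assignment is the part I am asking about:

import SwiftUI
import GroupActivities

struct MyActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Team Selection"
        metadata.type = .generic
        return metadata
    }
}

struct SharePlayButton: View {
    var body: some View {
        Button("Start SharePlay") {
            Task {
                // Offers the activity to everyone in the current FaceTime call.
                _ = try? await MyActivity().activate()
            }
        }
    }
}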