I am developing an immersive app in which hands interact with my virtual objects. When my hand passes through an object, the rendered result looks like the hand color and the object color blended together, both semi-transparent. Is it possible to keep my hand always "opaque", i.e. the alpha of the rendered hand (since this is video see-through) is always 1, while the object's alpha varies depending on whether it is intersecting the hand?
(I expected this kind of feature to be provided by a dedicated component, similar to HoverEffectComponent, but I couldn't find one.)
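For reference, the only related knob I'm aware of is the scene-level upperLimbVisibility modifier; a minimal sketch is below (the view name is a placeholder, it isn't per-entity, and I'm not sure it removes the blending described above):
ImmersiveSpace(id: "ImmersiveSpace") {
    GameContentView() // placeholder content view
}
// .visible asks the system to keep showing the user's hands and arms while
// this space is presented; .hidden would hide them entirely.
.upperLimbVisibility(.visible)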
Currently I am using a mixed-style immersive space to show my window (plain style) and my ImmersiveView content together. The issue is that depth testing lets the virtual content occlude my regular window. Is it possible to force the window to always render in front of the virtual content in mixed immersion? (I know about ModelSortGroup, but it doesn't quite fit here.)
Alternatively, can I dynamically change the .progressive immersion amount while the immersive space is open (setting it to zero would effectively be .mixed, right)?
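For the second question, a sketch of what I mean by switching styles at runtime (the same pattern HappyBeam uses, passing a Binding instead of .constant; names are placeholders):
@State private var immersionStyle: ImmersionStyle = .mixed

var body: some Scene {
    ImmersiveSpace(id: "ImmersiveSpace") {
        ImmersiveContentView()
    }
    // Allowing both styles and driving the selection from state lets the
    // style change while the space is open, e.g. immersionStyle = .progressive.
    .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive)
}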
My project is structured just like HappyBeam: after a short game round, a main menu pops up letting the player play again or return to the main menu.
The first game after installing the app on my Vision Pro works fine. Exactly like HappyBeam, it counts down from 3 to 1, then await openImmersiveSpace(id: "xxx") opens my game. After the round finishes, I call await dismissImmersiveSpace() and reset the game, letting the player choose to play again or go back to the menu.
However, the next time the counter counts down from 3 to 1, the immersive space doesn't show up and the visionOS home menu appears instead (I guess because the immersive space failed to open). The errors from my logger are below:
<FBSWorkspaceScenesClient:0x281820e00 com.apple.frontboard.systemappservices> scene request failed to return scene with error response : <NSError: 0x2839bc270; domain: FBSWorkspaceErrorDomain; code: 1 ("InvalidScene"); "scene invalidated before create completion">
------------------------------------------------
Unable to present an Immersive Space for id 'ImmersiveSpace': Error Domain=FBSWorkspaceErrorDomain Code=1 "scene invalidated before create completion" UserInfo={BSErrorCodeDescription=InvalidScene, NSLocalizedFailureReason=scene invalidated before create completion}
------------------------------------------------
Error: BSLogAddStateCaptureBlockWithTitle(EventDeferringState:com.milanow.mygame:SFBSystemService-C90B0828-4522-4098-9E6A-0D5968CFCEB8) state data format error: <NSError: 0x283947360; domain: BSSharedStateCapturing; code: 1; "Input generated no data"> {
NSUnderlyingError = <__NSCFError: 0x2839451d0; domain: NSCocoaErrorDomain; code: 3851> {
NSDebugDescription = Property list invalid for format: 200 (property lists cannot contain NULL);
};
}
I wonder what is happening here, since I could not find any helpful information online.
(The openImmersiveSpace code is unremarkable, but I'll post it here anyway.)
var body: some Scene {
    Group {
        WindowGroup(id: "MainUI") {
            MainView()
        }
        .windowStyle(.plain)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveSpace") {
            GameView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
    .onChange(of: gameModel.state) { oldValue, newValue in
        guard oldValue != newValue else {
            return
        }
        if case let .dismissingImmersiveSpace(finish) = newValue {
            Task {
                await dismissImmersiveSpace()
                openWindow(id: "MainUI")
                finish()
            }
        } else if case let .openingImmersiveSpace(startPlaying) = newValue {
            Task {
                await openImmersiveSpace(id: "ImmersiveSpace")
                dismissWindow(id: "MainUI")
                startPlaying()
            }
        } else if case .playing = oldValue {
            openWindow(id: "MainUI")
        } else if case .playing = newValue {
            dismissWindow(id: "MainUI")
        }
    }
}
As you can see, there is nothing magical here; it just follows what HappyBeam does.
(I'd really like to know what makes a scene invalid.)
Unity's PolySpatial supports a HoverEffect on a GameObject. As I understand it, even though the developer never learns exactly which entity the user is looking at, they can still ask RealityKit to "change this entity's mesh to another color" when it is gazed at, just like the hoverEffect modifier on a SwiftUI view.
So I wonder: is there a closure I can hand to RealityKit so that the system fires my callback on gaze?
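For context, the RealityKit-side hover setup I'm comparing against looks like this (a minimal sketch; as far as I understand, the system draws the highlight itself and, for privacy, never tells app code which entity is being looked at, so there is no callback to register):
// Assumes "myEntity" already exists in the scene.
myEntity.components.set(InputTargetComponent())
myEntity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
// The system applies a standard highlight when the user looks at the entity;
// the app is not notified which entity is hovered.
myEntity.components.set(HoverEffectComponent())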
I was trying to load an Entity with Entity(named: sceneName, in: realityKitContentBundle), which works for many of my .usda files. But this time I got an error:
Error loading asset from scene PinballTable.usda: failed to load '7058602595919186152 Scene (RealityFileAsset)Bundle/RealityKitContent-RealityKitContent-resources/RealityKitContent.reality/Scene_14.compiledscene' (Asset provider load failed: type 'RealityFileAsset' -- Failed to load compiled data for asset path 'Scene_14.compiledscene', due to error: Failed to deserialize asset data.)
Any ideas on why this fails? The .usda file itself is only about 42 KB, so I don't think file size is the issue. Because the scene references many other .usda files, I suspect the bundle may not be able to locate those references, so I exported the whole scene as a .usdz, which comes out to 118 KB. I'm not sure whether the references are really the problem, but that's what I've tried so far.
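One more thing I may try next, sketched below: loading the exported .usdz directly from the app bundle inside a do/catch so the full error gets printed (this assumes PinballTable.usdz was added to the app target itself, not to the RealityKitContent package):
do {
    guard let url = Bundle.main.url(forResource: "PinballTable", withExtension: "usdz") else {
        fatalError("PinballTable.usdz not found in the app bundle")
    }
    // If the flattened .usdz loads fine, the problem likely sits in how the
    // package compiles the referencing .usda files into its .reality archive.
    let table = try await Entity(contentsOf: url)
    content.add(table)
} catch {
    print("PinballTable load failed: \(error)")
}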
visionOS: beta 2 / simulator 1.1 (neither works)
Xcode: 15.4 / 16.0 beta (neither works)
This is all about using notifications to trigger actions from RCP's new Timeline system. After watching "Compose interactive 3D content in Reality Composer Pro", I'm confused about why we need to call Entity.applyTapForBehaviors in code to trigger content in a Behaviors Component, given that in the Behaviors Component we already chose OnTap so that a "Tap Notification" triggers our action (on a selected target object).
By analogy, with the OnCollision trigger I would expect to write something like CollisionEvent.entityA.applyCollisionForBehaviors, which doesn't exist. And of course a collision on my object doesn't trigger the action (because I only set things up in RCP, not in code).
Setting aside for now that this post points out we could use the Behaviors Component's OnNotification trigger instead.
I found that I can still use the OnTap trigger and just call Entity.applyTapForBehaviors from my subscribed CollisionEvents.Began handler. That actually works better than OnCollision.
So what is the design principle here? And how can I fire a collision notification so that my Behaviors Component's OnCollision trigger actually works?
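For reference, the OnNotification route mentioned above can be sketched like this (identifiers are mine; it subscribes to collision begins in code and forwards them as a notification that a Behaviors Component trigger named "BallHit" could receive):
// Inside RealityView { content in ... }, after adding "myEntity" to the scene.
// Store the returned EventSubscription (e.g. in a property) so it stays alive.
collisionSubscription = content.subscribe(to: CollisionEvents.Began.self, on: myEntity) { event in
    guard let scene = event.entityA.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": "BallHit"
        ]
    )
}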
I created a simple Timeline in RCP containing only a "Play Audio" action, plus a Behaviors Component with an "OnTap" trigger that fires this Timeline.
In code, I simply call Entity.applyTapForBehaviors() when something happens. The audio plays normally in the simulator but not on the device.
Is there a known bug that could cause this?
Environment:
Simulator: visionOS 2.0 (22N5286g)
Xcode: Version 16.0 beta 4 (16A5211f)
Device: visionOS 2.0 beta (latest)
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: SwiftUI, RealityKit, Reality Composer Pro, visionOS
I have three basic elements on this UI page: a View, an Alert, and a Toolbar. The Toolbar and Alert are attached to the View, and when I tap a button on the Toolbar, my alert shows up. A simplified version of my code is below:
@State private var showAlert = false

HStack {
    // ...
}
.alert(Text("Quit the game?"), isPresented: $showAlert) {
    MyAlertWindow()
} message: {
    Text("Description text about this alert")
}
.toolbar {
    ToolbarItem(placement: .bottomOrnament) {
        MyToolBarButton(showAlert: $showAlert)
    }
}
In MyToolBarButton I just toggle the bound showAlert variable to open or close the alert.
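For completeness, MyToolBarButton is roughly this (reconstructed for the post, not verbatim):
struct MyToolBarButton: View {
    @Binding var showAlert: Bool

    var body: some View {
        Button {
            showAlert.toggle()
        } label: {
            Label("Quit", systemImage: "xmark.circle")
        }
    }
}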
On both the simulator and the device the behavior is strange: when I toggle MyToolBarButton, the alert takes about 2-3 seconds to show up, and all of its elements are grayed out, as if the whole window has lost focus. I have to grab the window's move bar below it (with a drag gesture) to bring the window back into focus.
That isn't the only issue: MyToolBarButton also cannot be pressed to close the alert (even though the button inside the alert does close it).
By the way, I'm not sure whether it matters, but this window is open while my immersive space is also open (though in my testing that doesn't seem to affect anything here).
Any idea what's going on here?
Xcode 16.1 / visionOS 2 beta 6
RealityKitContent bundle resource issue
Recently I keep running into weird loading bugs with the RealityKitContent bundle. I am trying to load an audio resource as an AudioFileResource or AudioFileGroupResource from a *.usda in the RealityKitContent bundle, using this method. My code is nothing complicated:
let primPath: String = "/SampleAudios/SE_bounce_audio"
guard let resource = try? AudioFileGroupResource.load(named: primPath, from: "MyScene.usda", in: realityKitContentBundle) else {
    return
}
At runtime the app "sometimes" (whenever I change something in RCP it sometimes works again, but the behavior is unpredictable) reports "Cannot find MyScene.usda:/SampleAudios/SE_bounce_audio in RealityKitContent.bundle".
I put MyScene.usda in the root folder of the RealityKitContent package because I found that RealityKit simply cannot find any *.usda scene that isn't at the root level (possibly a bug in the way it indexes its files).
I even double-checked the .usda file with usdview, and the primPath is definitely correct. I suspect there is some unknown issue in how the RealityKitContent package copies resources when it builds. I tried tweaking Package.swift to manually copy my resources (everything) into the package, but that didn't work either, so for now I keep the file essentially untouched, as below (only upgrading swift-tools-version to 6.0, since only that supports .visionOS(.v2)):
// swift-tools-version:6.0
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2)
    ],
    products: [
        // Products define the executables and libraries a package produces, and make them visible to other packages.
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    dependencies: [
        // Dependencies declare other packages that this package depends on.
        // .package(url: /* package url */, from: "1.0.0"),
    ],
    targets: [
        // Targets are the basic building blocks of a package. A target can define a module or a test suite.
        // Targets can depend on other targets in this package, and on products in packages this package depends on.
        .target(
            name: "RealityKitContent"
        ),
    ]
)
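For reference, the kind of manual resource declaration I experimented with looked roughly like this (a sketch; the folder name is an assumption from my project layout, and it did not fix the lookup for me):
.target(
    name: "RealityKitContent",
    resources: [
        // Copy an extra folder of .usda/audio files verbatim into the bundle.
        .copy("SampleAudios")
    ]
),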
That was issue one, the RealityKitContent package build issue.
Audio file format issue
The other issue is about which audio file formats RCP supports. I remember reading somewhere (WWDC?) that .wav and .mp4 are supported as audio sources. But when I set up Spatial Audio, I find that *.wav or *.mp3 files can sometimes also be imported as an audio source file, though the behavior is unpredictable. Of two *.wav files, SE_ball_hit_01.wav and SE_ball_hit_02.wav, only SE_ball_hit_01.wav is accepted; 02 is rejected with "format is not supported". See my screenshots for the details of the two files: they have almost exactly the same format (same sample rate and channel count).
I understand there may be different requirements for a source file to be used as Spatial versus Ambient audio, but I haven't figured them out, and I can't find anything helpful in Apple's documentation. So what are the rules?
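In case it helps anyone compare the two files, a small sketch like this dumps the exact on-disk format of each (the URLs are placeholders):
import AVFoundation

func dumpAudioFormat(of url: URL) {
    do {
        let file = try AVAudioFile(forReading: url)
        // fileFormat reflects the on-disk encoding; settings includes bit depth etc.
        print(url.lastPathComponent,
              "sampleRate:", file.fileFormat.sampleRate,
              "channels:", file.fileFormat.channelCount,
              "settings:", file.fileFormat.settings)
    } catch {
        print("Could not open \(url.lastPathComponent): \(error)")
    }
}

// dumpAudioFormat(of: URL(fileURLWithPath: "/path/to/SE_ball_hit_01.wav"))
// dumpAudioFormat(of: URL(fileURLWithPath: "/path/to/SE_ball_hit_02.wav"))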
Thanks for reading; any thoughts are welcome.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Swift Packages, Audio, RealityKit, Reality Composer Pro
My visionOS app requires access to the user's photo library. The flow is: when the user first opens FooView, a task attached to that view calls let status = PHPhotoLibrary.authorizationStatus(for: .readWrite); if the status is .notDetermined, it then calls PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler) so that visionOS presents the photo-access request.
However, the app crashes every time the user selects Limited Access and the system tries to present the limited photo library picker. By the way, I have set "Prevent limited photos access alert" to Yes, but I don't think that should affect the behavior here.
The debugger message is:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Presentations are not permitted within volumetric window scenes.'
However, the window this view belongs to is a .plain-style window (though 3D objects do appear in another view of the same WindowGroup).
Here is my code snippet, in case it helps.
checkAndUpdatePhotoAuthorization is just a wrapper around PHPhotoLibrary.authorizationStatus(for: .readWrite):
private func checkAndUpdatePhotoAuthorization() -> PHAuthorizationStatus {
    let currentStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    switch currentStatus {
    case .authorized:
        print("Photo library access authorized.")
        isPhotoGalleryAuthorized = true
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .limited:
        print("Photo library access limited.")
        isPhotoGalleryLimited = true
        isPhotoGalleryAuthorized = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .notDetermined:
        isPhotoGalleryDetermined = false
        print("Photo library access not determined.")
    case .denied:
        print("Photo library access denied.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        showSettingsAlert = true
        isPhotoGalleryDetermined = true
    case .restricted:
        print("Photo library access restricted.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = true
        showPhotoAuthExplainationAlert = true
        isPhotoGalleryDetermined = true
    @unknown default:
        print("Photo library Unknown authorization status.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    }
    return currentStatus
}
FooView then attaches a task that calls checkAndUpdatePhotoAuthorization():
var body: some View {
    EmptyView()
        .task {
            try? await Task.sleep(for: .seconds(1.0))
            let status = self.checkAndUpdatePhotoAuthorization()
            if status == .notDetermined {
                DispatchQueue.main.async {
                    PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler)
                }
            }
        }
}
Another thing worth mentioning: SOMETIMES it doesn't crash in a debug build, but it does crash in the TestFlight (TF) build.
Any other ideas? Big thanks in advance.
Xcode version: 16.2 beta 3
visionOS version: 2.2
Suppose there is an ImmersiveSpace, and an Entity() is added to the space as a child entity of the content. This entity is responsible for playing background music: it calls prepareAudio to get a controller and then plays the music (see the basic code below).
While the music is playing, a .plain window and the immersive space are both presented. I assumed the immersive space holds the controller, so as long as the immersive space stays open the music should not stop.
However, if I close the .plain window (with the system close button), the music just stops, even though the immersive space is still open. If I check controller.isPlaying at that point, it is still true, but you simply can't hear the music anymore.
To reproduce: open a visionOS app template project, choose a volume plus a full immersive space, replace the code in ImmersiveView.swift with the code below, drag in any .mp3 file, and substitute its name in the AudioFileResource call. That reproduces the bug.
RealityView { content in
    // Add the initial RealityKit content
    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)

        // Put skybox here. See example in World project available at
        // https://developer.apple.com/
        if let audioResource = try? await AudioFileResource(named: "anyMP3file.mp3") {
            let ent = Entity()
            immersiveContentEntity.addChild(ent)
            let controller = ent.prepareAudio(audioResource)
            controller.play()
        }
    }
}
Why does this happen? More to the point, how can I keep the music playing after closing the .plain window?
Thanks!
I implemented a ShaderGraphMaterial and load it from my .usda scene with ShaderGraphMaterial.init(named:from:in:). I want to set a TextureResource on that material dynamically, so I tried to expose a texture as a uniform input of the ShaderGraphMaterial. But RCP's Shader Graph apparently doesn't support a texture input as a promoted parameter, as the image shows:
At the code level, ShaderGraphMaterial doesn't seem to expose a way to set TextureResources either: its parameterNames is an empty array if I haven't defined any custom input parameters. The texture comes from my backend, so saving it to a file and loading it back would be too awkward.
Is there something I am missing?
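For reference, the kind of call I was hoping to make looks like this (a sketch only; it works only if the graph actually exposes an image/texture input promoted as a parameter, which is exactly what I can't create in RCP, and the material path and parameter name are made up):
var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                             from: "MyScene.usda",
                                             in: realityKitContentBundle)
// cgImage would be decoded in memory from my backend response.
let texture = try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
try material.setParameter(name: "baseTexture", value: .textureResource(texture))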
I have a huge sphere with the camera inside it, and I enable front-face culling on the ShaderGraphMaterial applied to that sphere so I can place other 3D content inside. However, when it comes to attachments, occlusion never works the way I expect: my attachments are occluded by the sphere (and some are not, so the behavior isn't even deterministic).
I suspected a depth-testing issue, so I tried ModelSortGroup to reorder the rendering order, but it doesn't help. Searching online, the comments on this post indicate that ModelSortGroup simply doesn't work on attachments.
So how should I tackle this now, so that my attachments appear inside my sphere?
OS/Sys: visionOS 2.3 / Xcode 16.3
I'm creating an immersive experience with RealityView (think of it as a Fruit Ninja-like experience). Say I have randomly generated fruits, spawned according to certain criteria inside a System.update function, and I want to interact with these generated fruits using hand gestures.
It simply doesn't work: the gesture's onChanged closure never fires as expected. I added both InputTargetComponent and CollisionComponent to make the fruits detectable in the immersive view. Everything works fine if the fruits are already set up in the scene with Reality Composer Pro before the app runs.
Here is what I did.
First I load the fruit template:
let tempScene = try await Entity(named: "fruitPrefab.usda", in: realityKitContentBundle)
fruitTemplate = tempScene.findEntity(named: "fruitPrefab")
Then I clone it inside the System.update(context:) function. parent is an invisible entity placed at .zero in my loaded immersive scene:
let fruitClone = fruitTemplate!.clone(recursive: true)
fruitClone.position = pos
fruitClone.scale = scale
parent.addChild(fruitClone)
I attach my gesture to the RealityView like this:
.gesture(
    DragGesture(minimumDistance: 0.0)
        .targetedToAnyEntity()
        .onChanged { value in
            print("dragging")
        }
        .onEnded { tapEnd in
            print("dragging ends")
        }
)
I was wondering whether the runtime-generated entity simply isn't tracked by the RealityView, but since I add it as a child of a placeholder entity that is already in the scene, that should be fine... right?
Or do I just need to put a new AnchorEntity there?
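For what it's worth, the next thing I plan to try is re-adding the interaction components on the clone itself, in case the ones authored in RCP live on a child entity or don't survive the clone; a sketch:
let fruitClone = fruitTemplate!.clone(recursive: true)
fruitClone.position = pos
fruitClone.scale = scale

// Make sure the top-level clone itself is targetable by gestures.
fruitClone.components.set(InputTargetComponent())
// Build collision shapes from the mesh hierarchy (or set an explicit shape,
// e.g. CollisionComponent(shapes: [.generateSphere(radius: 0.05)])).
fruitClone.generateCollisionShapes(recursive: true)

parent.addChild(fruitClone)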
Thanks in advance for any advice. I've been trying to figure this out all day.
Topic: App & System Services
SubTopic: Core OS
Tags: visionOS, Reality Composer Pro, iPad and iOS apps on visionOS
I cannot figure out how SharePlay + Spatial Persona place the origin of RealityKit's coordinate system.
I have a visionOS app with a mixed-style ImmersiveSpace. In that scene I use ARKit to track my hand, creating a virtual object that follows my palm. Every frame I query the HandAnchor and use originFromAnchorTransform to place the object correctly in the scene.
However, when I adapt this to a SharePlay experience with Spatial Personas, the position updates become a mess. With different templates (.sideBySide or .conversational), the origin of my space seems to land with no discernible pattern. The virtual object no longer follows my hand; it ends up in some seemingly random place. It appears there is an extra transform between the HandAnchor's origin and the immersive space origin when Spatial Personas + SharePlay are active. Is that right?
Or should I be using something like convert(displacement.inverse.rotation, from: .immersiveSpace, to: .scene), as mentioned here: https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting and https://developer.apple.com/documentation/swiftui/environmentvalues/immersivespacedisplacement, to compute the offset and apply it to my virtual object? I haven't gotten that to work yet. Can someone tell me how to do this correctly?