Suppose there is an ImmersiveSpace, and an Entity() added to the space as a child of its content. This entity is responsible for playing background music by calling prepareAudio, getting back an AudioPlaybackController, and playing the music. (See the basic code below.)
While the music is playing, a .plain window and the ImmersiveSpace are both presented. I believe the ImmersiveSpace holds the reference to the controller, so as long as the ImmersiveSpace is open, the music shouldn't stop.
However, if I close the .plain window (using the system-level close button), the music just stops, even though the ImmersiveSpace is still open. If I check controller.isPlaying at that point, it is still true, but you can no longer hear the music.
To reproduce, create a project from the visionOS app template (selecting Volume and Full immersive), replace the code in ImmersiveView.swift with the code below, then drag in any .mp3 file and update the AudioFileResource name to match. That reproduces the bug.
RealityView { content in
    // Add the initial RealityKit content.
    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)

        // Put skybox here. See example in World project available at
        // https://developer.apple.com/

        if let audioResource = try? await AudioFileResource(named: "anyMP3file.mp3") {
            // Attach a child entity and play the audio resource on it.
            let ent = Entity()
            immersiveContentEntity.addChild(ent)
            let controller = ent.prepareAudio(audioResource)
            controller.play()
        }
    }
}
Why does this happen, and how can I keep the music playing after the .plain window is closed?
Thanks!
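For what it's worth, here is a minimal sketch of one possible mitigation (untested; the AudioState type and its properties are hypothetical, not from the post above): keep the AudioPlaybackController in app-level state instead of letting the window's view own it, so closing the window cannot tear it down.

import SwiftUI
import RealityKit

// Hypothetical app-level state holder; the names are illustrative.
@Observable
final class AudioState {
    // Keeping these references at app scope means a closing window can't release them.
    var audioEntity: Entity?
    var controller: AudioPlaybackController?
}

struct ImmersiveView: View {
    // Assumes AudioState is injected from the App struct via .environment(_:).
    @Environment(AudioState.self) private var audioState

    var body: some View {
        RealityView { content in
            let ent = Entity()
            content.add(ent)
            // "anyMP3file.mp3" is a placeholder resource name, as in the post above.
            if let resource = try? await AudioFileResource(named: "anyMP3file.mp3") {
                let controller = ent.prepareAudio(resource)
                controller.play()
                // Store the references at app scope so window teardown can't drop them.
                audioState.audioEntity = ent
                audioState.controller = controller
            }
        }
    }
}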
Currently I want to recreate a window similar to a system window inside an ImmersiveSpace, but RealityKit only works in meters. I created a plane entity, but I don't know what size in meters would make the plane match the system window exactly.
Also, I want to know the y and z position of the system window within the immersive space.
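A minimal sketch of a plane entity with an explicit size in meters (the 0.9 x 0.6 m dimensions and the placement are guesses, since Apple doesn't document the system window's actual size):

import RealityKit

// A vertical plane sized in meters; the dimensions are assumptions,
// not the real system window size, which isn't documented.
let plane = ModelEntity(
    mesh: .generatePlane(width: 0.9, height: 0.6),
    materials: [SimpleMaterial(color: .white, isMetallic: false)]
)
// Rough placement guess: ~1.2 m up and ~1.5 m in front of the space's origin.
plane.position = SIMD3<Float>(0, 1.2, -1.5)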
I started a new project using RealityKit and RealityView, intended as an AR app on iPhone and iPad, and eventually visionOS as well. I'm challenged because much of the recent documentation, WWDC videos, etc., covers features that are visionOS only.
Right now, I would simply like to create gesture functionality similar to the AR Quick Look defaults: drag to reposition, two fingers to rotate or zoom. In the past, this would be implemented with something like:
arView.installGestures([.all], for: entity)
However, with RealityView I don't know how (or whether it's even possible) to access an ARView.
In RealityKit, I have found this doc: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
However, many of the features in that article are visionOS only, and I've found no good documentation on the topic that is specific to, or at least compatible with, iOS.
I know reverting to ARView is an option, but I want to use RealityView if at all possible, as I see it as more forward-looking.
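A minimal sketch of a drag-to-reposition gesture on RealityView, assuming an iOS 18 deployment target (where SwiftUI's entity-targeted gestures became available on iOS); the box entity is purely illustrative:

import SwiftUI
import RealityKit

struct DraggableEntityView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking   // AR passthrough camera on iOS
            // Illustrative content: a small box the user can drag around.
            let box = ModelEntity(mesh: .generateBox(size: 0.1))
            box.generateCollisionShapes(recursive: true)
            // InputTargetComponent lets the entity receive SwiftUI gestures.
            box.components.set(InputTargetComponent())
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Move the entity to follow the touch, converting the gesture
                    // location into the entity's parent coordinate space.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}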
Hi, are we allowed to raise the default deployment target in Package.swift to iOS 18 to get access to the latest APIs?
And within the terms of the competition, can we use stock 3D USDZ assets?
Thank you!
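For reference, a minimal sketch of the Package.swift change in question (assuming Swift tools 6.0; the package name is hypothetical):

// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "MyChallengeApp",   // hypothetical name
    platforms: [
        .iOS(.v18)   // raises the minimum deployment target to iOS 18
    ],
    targets: [
        .executableTarget(name: "MyChallengeApp")
    ]
)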
Hi everyone,
I've been exploring an idea that involves using virtual light sources in visionOS/RealityKit to interact with real-world objects. Specifically, I'd like to simulate a scenario where a virtual spotlight or other light source casts light or shadows onto the real-world environment, creating the effect of virtual lighting interacting with physical surroundings. Is this currently feasible within visionOS/RealityKit?
Thank you!
I have a Mac mini M4 with 16 GB of memory and Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows the memory is under high pressure.
How do I run/test/debug the application on a Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I cannot find the UDID; when I connect USB to the battery, it only shows a Battery device ID.
I want to create a screenshot (static image) of the current view on the Apple Vision Pro from code in visionOS. Unfortunately, I currently can't find a way to achieve this. The only option I've found so far is through Reality Composer Pro. However, since I want to accomplish this directly in code, that approach is not an option for me.
Is there any way to detect whether an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent(), which highlights the entity slightly when you gaze at it, but there doesn't seem to be any way to call a function from that. There is also no GazeGesture or anything similar.
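For context, a minimal sketch of the HoverEffectComponent setup described above (the sphere is illustrative); note this produces only the visual highlight, since the system doesn't expose gaze data to apps:

import RealityKit

// Illustrative entity; any entity works as long as it has collision
// shapes and an input target, which the hover (gaze) effect requires.
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
sphere.generateCollisionShapes(recursive: true)
sphere.components.set(InputTargetComponent())
sphere.components.set(HoverEffectComponent())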
In visionOS, once an immersive space is opened, the background color is solid black. Is it possible to make this background transparent?
FYI, immersive spaces on visionOS use Compositor Services for drawing 3D content.
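If the space itself is declared in SwiftUI, here is a sketch of selecting the mixed immersion style, which shows passthrough instead of the black background (whether the Compositor Services layer renders over passthrough also depends on its own configuration; the scene and app names are illustrative):

import SwiftUI

@main
struct MyImmersiveApp: App {   // hypothetical app name
    // Start in mixed immersion so passthrough shows instead of black.
    @State private var style: any ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            // CompositorLayer or RealityView content goes here.
        }
        .immersionStyle(selection: $style, in: .mixed, .full)
    }
}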
If I change the Vision Pro window size, the text field and button do not update to match the new size.
It works on some screens but not on others.
Please refer to the screenshot below for reference.
Hi team, I don't know why this error has popped up. Any suggestions, please?
Thanks,
Zippy Games
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView.
Also, the RealityView does not close when I move to a different tab. It keeps everything on and tracking, leaving the model in the same location I left it.
import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")
            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal,
                                             classification: .any,
                                             minimumBounds: SIMD2<Float>(0.2, 0.2)))

            // Look up a named entity in the Lake scene and attach it to the anchor.
            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + " entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items where item.enabled {
                addEntity(name: item.value)
            }

            // Add the horizontal plane anchor to the scene.
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
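A minimal sketch of how the view could pick up SwiftData changes without a restart, using RealityView's update closure (whether this covers every case depends on what @Query publishes; the entity names are carried over from the code above):

RealityView { content in
    // ... initial setup as above ...
} update: { content in
    // Runs again whenever observed SwiftUI state (including @Query results) changes.
    // Toggle visibility of already-added entities to match SwiftData.
    for item in items {
        if let entity = content.entities.first?.findEntity(named: item.value) {
            entity.isEnabled = item.enabled
        }
    }
} placeholder: {
    ProgressView()
}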
We use Unity 6 + visionOS 2.2 to develop an MR application.
App mode: RealityKit with PolySpatial.
In actual testing, we found that when I move more than 80-100 meters away from the application's starting position, my current position gets reset to Vector3.zero, which makes the experience very bad. Is anyone experiencing the same problem? Is there a solution?
In visionOS, there are existing modifiers that can completely conceal the hands. However, I am interested in learning how to achieve the effect of only one hand disappearing while the other hand remains visible.
.upperLimbVisibility(.hidden)
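For context, a minimal sketch of where that modifier applies (the scene setup is illustrative); as far as I know it is all-or-nothing, with no per-hand variant:

import SwiftUI

@main
struct HandsApp: App {   // hypothetical app name
    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            // RealityView content would go here.
        }
        // Scene-wide modifier: hides both hands together; there is
        // no parameter to hide only one hand.
        .upperLimbVisibility(.hidden)
    }
}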
Hi,
since RealityKit 4 now supports blend shapes, I was wondering whether there are any workflow or tooling recommendations for baking/exporting them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented, and I would appreciate any hints. Thank you!
I have a MeshResource and I would like to create a collision component from it.
let childBounds = child.visualBounds(relativeTo: self)
var childShape: ShapeResource
do {
    // Crashes on the following line instead of throwing a Swift error.
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} catch {
    // Fall back to a box shape matching the visual bounds.
    childShape = ShapeResource.generateBox(size: childBounds.extents)
    childShape = childShape.offsetBy(translation: childBounds.center)
}
Based on this document: https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9
"Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar."
However, the method crashes the app instead of throwing a Swift error:
Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C
CrashReporter Key: FE6790CA-6481-BEFD-CB26-F4E27652BEAE
Hardware Model: Mac15,11
...
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd_sim [2057]
Coalition: com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496]
Responsible Process: SimulatorTrampoline [910]
Date/Time: 2025-01-26 16:13:17.5053 +0800
Launch Time: 2025-01-26 16:13:09.5755 +0800
OS Version: macOS 15.2 (24C101)
Release Type: User
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x00000001abf841d0
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [17316]
Triggered by Thread: 0
Thread 0 Crashed:
0 CoreRE 0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232
1 CoreRE 0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868
2 RealityFoundation 0x1d25613bc static ShapeResource.generateConvex(from:) + 148
Here is the message on the app console from Xcode
/Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar.
Failed to cook convex mesh (0x3)
assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356) Bad parameters passed for convex mesh creation.
Message from debugger
The above crash happened on a visionOS simulator (visionOS 2.2 (22N840)).
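Until this is fixed, a defensive pre-check is one possible workaround. The sketch below uses a bounds-extents heuristic (it cannot catch every coplanar case, e.g. a tilted flat mesh; the childModel/childBounds names come from the code above, and the epsilon is arbitrary):

// Heuristic guard: a (near-)zero extent on any axis means the vertices are
// certainly coplanar, so skip generateConvex(from:) and use the box fallback.
// Note: a tilted flat mesh can still have a full 3D bounding box, so this
// check is not exhaustive.
let extents = childBounds.extents
let epsilon: Float = 1e-5   // arbitrary threshold, an assumption
if extents.x > epsilon, extents.y > epsilon, extents.z > epsilon {
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} else {
    childShape = ShapeResource.generateBox(size: extents)
        .offsetBy(translation: childBounds.center)
}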
Hello, I am currently developing a Vision Pro VR application with Unreal Engine 5.5. Is it possible to interact with objects (grabbing, clicking on buttons)? I cannot find any information on this. Thank you.
I'm working on an iOS app that needs to measure the area of planes or surfaces, like the length and width of objects, just as the Apple Measure app does. I've been exploring ARKit, but I'm curious whether there are any APIs or techniques that can help automate the process of detecting and measuring planes.
Specifically, I'm looking for a way to automatically detect and measure planes (e.g., from a top-down view), for example measuring a box's width and length. I have attached a screenshot and a video of the Apple Measure app doing it.
Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I'd love to hear from anyone who's tackled something similar.
Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
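A minimal sketch of the ARKit building block this usually starts from: plane detection, where each ARPlaneAnchor reports its size in meters (the session wiring is illustrative, and planeExtent requires iOS 16+):

import ARKit

final class PlaneMeasurer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // planeExtent gives the detected plane's dimensions in meters.
            print("Plane: \(plane.planeExtent.width) m x \(plane.planeExtent.height) m")
        }
    }
}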
I am getting the error "Initializing hosting entity without a context" in the console when I build and run my game in Xcode 16.0 beta, targeting visionOS 2.0 (22N5252n).
I am not sure where the error is originating.
I would like to integrate the Object Capture API with an ML model for analysis, so I will need to get the current frame as a CGImage for further processing.
Thanks in advance!
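Object Capture doesn't hand frames to you directly as far as I know, but if you can obtain a CVPixelBuffer (for example from an ARKit session running alongside), a standard conversion to CGImage looks like this sketch:

import CoreImage
import CoreVideo

// Shared context; creating a CIContext per frame would be wasteful.
let ciContext = CIContext()

// Converts a CVPixelBuffer (e.g., ARFrame.capturedImage) into a CGImage.
func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return ciContext.createCGImage(ciImage, from: ciImage.extent)
}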