iOS 16.2: Cannot load underlying module for 'ARKit'
I have a strange warning in Xcode associated with ARKit. It isn't a big issue because there is a workaround, but I am curious why it is happening and how I can avoid it in the future. I opened an old AR project in Xcode, and the editor showed a strange error on the "import ARKit" line:

    Cannot load underlying module for 'ARKit'

Despite the error message, the code continues to build and run. I've quit and restarted Xcode, rebooted the Mac, and even deleted and redownloaded Xcode, but the error/warning is still there. With some additional testing, I discovered that I only get this message when targeting iOS 16.2, not when targeting 16.0, 16.1, 16.3, or 16.4. (I did not try pre-iOS 16.) Any idea why my Xcode no longer likes ARKit on iOS 16.2 on my Mac?

Development platform:
Xcode: Version 14.3.1 (14E300c)
macOS: 13.4 (22F66)
Mac: Mac Studio 2022
iOS on iPhone 14 Pro Max: 16.5 (20F66)
Replies: 2 · Boosts: 1 · Views: 1.4k · Aug ’23

Xcode Beta RC didn't have an option for vision simulator
I just downloaded the latest Xcode beta, Version 15.0 (15A240d), and ran into some issues:

- On startup, I was not given an option to download the visionOS simulator.
- I cannot create a project targeting visionOS.
- I cannot build/run a hello-world app for visionOS.

In my previous Xcode beta (Version 15.0 beta 8 (15A5229m)), there was an option to download the visionOS simulator, and I could create projects for visionOS and run the code in the simulator.

The Xcode file downloaded was named "Xcode" instead of "Xcode-beta". I didn't want to get rid of the existing Xcode, so I selected Keep Both. Now I have three Xcodes in the Applications folder:

- Xcode
- Xcode copy
- Xcode-beta

That is the only thing I see that might have been different about my install.

Hardware:
Mac Studio 2022 with M1 Max
macOS Ventura 13.5.2

Any idea what I did wrong?
Replies: 2 · Boosts: 0 · Views: 1.8k · Sep ’23

Setting volumetric window size at openWindow()
When defining a volumetric WindowGroup, I can set the defaultSize(). Is it possible to set a different volume size when opening a window with openWindow()? In my use case, I want to display potentially different models of different sizes inside the volumetric window, and I want to preserve each model's size, so I would like to create a volumetric window that is optimally sized for each model. Alternatively, I could create a volumetric window large enough to fit the largest model and then reposition smaller models inside the volume to be at the front and bottom of the volume, but I haven't figured out how to do that either (see my separate post on that question).
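For reference, a minimal sketch of the setup I'm describing; the "modelVolume" ID and ModelDisplayView are placeholder names of mine, and as far as I can tell openWindow(id:) takes no size parameter, which is what prompts the question:

    // App scene definition: the volume's size is fixed at declaration time
    WindowGroup(id: "modelVolume") {
        ModelDisplayView()   // hypothetical view showing the current model
    }
    .windowStyle(.volumetric)
    .defaultSize(width: 1.0, height: 1.0, depth: 1.0, in: .meters)

    // Elsewhere, in a view:
    @Environment(\.openWindow) private var openWindow
    // ...
    openWindow(id: "modelVolume")   // no parameter here to override the volume size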
Replies: 0 · Boosts: 0 · Views: 746 · Sep ’23

Portals and ImmersiveSpace?
I've added a simple visionOS portal to an app's initial WindowGroup (a window with an attached portal is all that is displayed), but I've had trouble adding a portal to an ImmersiveSpace. For example, using the boilerplate code that Xcode creates for a mixed spatial experience, I'd like to toggle on and off an ImmersiveSpace that has a portal in it. So far, the portal isn't showing up. Is it possible to add a portal to an ImmersiveSpace? Are there any restrictions on where portals can be added?
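For context, this is roughly the portal setup that works for me in the WindowGroup's RealityView — a sketch assuming a simple 1 m plane as the portal surface; the entity names are my own:

    // World the portal looks into
    let world = Entity()
    world.components.set(WorldComponent())
    // ... add skybox, models, etc. as children of `world` ...
    content.add(world)

    // Portal surface that renders a view into `world`
    let portal = Entity()
    portal.components.set(ModelComponent(
        mesh: .generatePlane(width: 1.0, height: 1.0),
        materials: [PortalMaterial()]
    ))
    portal.components.set(PortalComponent(target: world))
    content.add(portal)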
Replies: 1 · Boosts: 0 · Views: 784 · Feb ’24

Does RealityKit support clipping planes?
Does RealityKit support a clipping plane, where I can define a plane and have all content on one side of the plane not rendered?
Replies: 0 · Boosts: 1 · Views: 610 · Feb ’24

Occlusion material and progressive ImmersiveSpace
In a progressive ImmersiveSpace, I created an object (a cylinder) and applied an OcclusionMaterial to it. It does hide the virtual content behind it, but it does not show the content of my room; the cylinder just appears black. In a progressive (or full?) ImmersiveSpace, is it possible to apply an occlusion material (or something else) so I can see the room behind the virtual content? Basically, I want to punch a hole through the virtual content and see the room behind it. As a practical example, imagine being in a progressive ImmersiveSpace but having a plane with an occlusion mesh applied to it above your Apple Magic Keyboard so you can see your keyboard. Is this possible?
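For reference, this is essentially what I'm doing — a minimal sketch with arbitrary dimensions:

    // Cylinder with occlusion material: it hides virtual content behind it,
    // but in a progressive ImmersiveSpace it renders black instead of
    // revealing the room passthrough
    let cylinder = ModelEntity(
        mesh: .generateCylinder(height: 1.0, radius: 0.2),
        materials: [OcclusionMaterial()]
    )
    content.add(cylinder)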
Replies: 0 · Boosts: 0 · Views: 670 · Feb ’24

SpatialTapGesture and collision surface's normal?
I see example code converting the results of a SpatialTap to a SIMD3 location. For example, from the WWDC session Meet ARKit for spatial computing:

    let location3D = value.convert(value.location3D, from: .global, to: .scene)

What I really want is a simd_float4x4 that includes the orientation of the surface the tap gesture/raycast collided with. My goal is to place an object with its Y-axis along the normal of the surface that was tapped. For example, in the referenced WWDC session, they create a CollisionComponent from the MeshAnchor data. If that mesh data is covering a curved couch cushion, I would like the normal from that curved cushion (i.e., from the closest triangle approximating it). Is this possible?

My planned fallback is to only use planes as collision surfaces for tap gestures, extract the tap gesture value's entity (which I am hoping is the plane), and grab its transform for the orientation information. I am hoping Apple has a simple function call that is more general than my fallback approach.
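To make the fallback concrete, here is a sketch of what I have in mind — it assumes the tapped entity is a flat plane entity I created myself from a PlaneAnchor:

    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            // Only meaningful if the hit entity is a flat plane I control;
            // its world transform then encodes the surface orientation
            let surfaceTransform: simd_float4x4 =
                value.entity.transformMatrix(relativeTo: nil)
            // place the object using surfaceTransform's rotation,
            // with the converted tap point as the translation
        }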
Replies: 1 · Boosts: 0 · Views: 799 · Mar ’24

visionOS 3D tap location offset by ~0.35m?
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors. I can look at a location on the floor or ceiling, tap, and place an object at that location (currently a small cube with X-Y-Z axes sticking out of it). The tap locations are consistently about 0.35 m off along the horizontal plane (never off vertically) from where I was looking. Has anyone else run into a spatial tap gesture resulting in a location offset from where they are looking?

If I move to different locations, the offset is the same in real space, so it doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g., it isn't off a little to the left of wherever the headset is looking).

Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location. I stood in 4 different locations (the orange squares), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, ~0.35 m away from the look location.

Here is how I captured the tap gesture and extracted the 3D location:

    var myTapGesture: some Gesture {
        SpatialTapGesture()
            .targetedToAnyEntity()
            .onEnded { event in
                let location3D = event.convert(event.location3D, from: .global, to: .scene)
                let entity = event.entity
                model.handleTap(location: location3D, entity: entity)
            }
    }

Here is how I set the position of the purple cube:

    func handleTap(location: SIMD3<Float>, entity: Entity) {
        let positionEntity = Entity()
        positionEntity.setPosition(location, relativeTo: nil)
        ...
    }
Replies: 5 · Boosts: 0 · Views: 1.6k · Apr ’24

Portion of Canvas that is visible in ScrollView?
I have a Canvas inside a ScrollView on a Mac. The Canvas's size is determined by a model (for the example below, I simply draw a grid of circles of a given radius). Everything appears to work fine. However, I am wondering if it is possible for the Canvas rendering code to know what portion of the Canvas is actually visible in the ScrollView? For example, if the Canvas is large but the visible portion is small, I would like to avoid drawing content that is not visible. Is this possible?

Example of a Canvas in a ScrollView I am using for testing:

    struct MyCanvas: View {
        @ObservedObject var model: MyModel

        var body: some View {
            ScrollView([.horizontal, .vertical]) {
                Canvas { context, size in
                    // Placeholder rendering code
                    for row in 0..<model.numOfRows {
                        for col in 0..<model.numOfColumns {
                            let left: CGFloat = CGFloat(col * model.radius * 2)
                            let top: CGFloat = CGFloat(row * model.radius * 2)
                            let size: CGFloat = CGFloat(model.radius * 2)
                            let rect = CGRect(x: left, y: top, width: size, height: size)
                            let path = Circle().path(in: rect)
                            context.fill(path, with: .color(.red))
                        }
                    }
                }
                .frame(width: CGFloat(model.numOfColumns * model.radius * 2),
                       height: CGFloat(model.numOfRows * model.radius * 2))
            }
        }
    }
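One pattern I've been experimenting with (not sure it's the intended approach): read the Canvas's frame in the ScrollView's coordinate space via a PreferenceKey, and derive the scroll offset from the negative origin. VisibleRectKey is my own name, and visibleRect / scrollViewSize are hypothetical @State properties:

    struct VisibleRectKey: PreferenceKey {
        static var defaultValue: CGRect = .zero
        static func reduce(value: inout CGRect, nextValue: () -> CGRect) {
            value = nextValue()
        }
    }

    // Inside body, replacing the plain ScrollView above:
    ScrollView([.horizontal, .vertical]) {
        Canvas { context, size in
            // ... drawing code, skipping circles that fall outside `visibleRect` ...
        }
        .frame(width: canvasWidth, height: canvasHeight)
        .background(
            GeometryReader { geo in
                Color.clear.preference(key: VisibleRectKey.self,
                                       value: geo.frame(in: .named("scroll")))
            }
        )
    }
    .coordinateSpace(name: "scroll")
    .onPreferenceChange(VisibleRectKey.self) { frame in
        // frame.origin goes negative as the content scrolls; -origin is the offset
        visibleRect = CGRect(origin: CGPoint(x: -frame.origin.x, y: -frame.origin.y),
                             size: scrollViewSize) // scrollViewSize measured separately
    }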
Replies: 0 · Boosts: 0 · Views: 485 · May ’24

Triangle count and texture size budget for RealityKit on visionOS
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a texture size of 2048x2048 for Apple QuickLook (and, I think, for RealityKit on iOS in general). Does Apple have any recommended maximum polygon counts for visionOS? Is it the same for models running in a volumetric window in the Shared Space as in an ImmersiveSpace? What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now.)
Replies: 2 · Boosts: 0 · Views: 1.6k · Jun ’24

Xbox controller and visionOS 2
I am having problems getting button input from an Xbox game controller. I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView, following the instructions from the WWDC session Explore game input in visionOS.

The game-controller notification picks up the controller and finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement:

    buttonA.valueChangedHandler = { button, value, pressed in
        os_log("Got valueChangedHandler")
    }

At the end of the RealityView, I have the modifier:

    RealityView { content in
        // stuff
    }
    .handlesGameControllerEvents(matching: .gamepad)

But I never see the log message appear in the console when I press the 'A' button (or any other button). Any ideas what I might be doing wrong? The Xbox controller is pretty old; Settings reports it as firmware version 9.0.3.
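For comparison, here is the shape of the setup I'd expect to work, going through GCController's connect notification and the extendedGamepad profile rather than looking up GCInputButtonA directly — a sketch, not a confirmed fix:

    import GameController
    import os

    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { note in
        guard let controller = note.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }
        // Handler fires on press and release with the current pressed state
        gamepad.buttonA.valueChangedHandler = { _, _, pressed in
            os_log("buttonA valueChangedHandler, pressed=%d", pressed ? 1 : 0)
        }
    }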
Replies: 1 · Boosts: 1 · Views: 1.2k · Jun ’24

Getting the Wi-Fi's SSID on macOS
I want to extend an existing macOS app distributed through the Mac App Store with the capability to track the Wi-Fi's noise and signal strength, along with the SSID it is connected to, over time. Using CWWiFiClient.shared().interface(), I can get noiseMeasurement() and rssiValue() fine, but ssid() always returns nil. I am assuming this is a privacy issue(?). Are there specific entitlements I can request, or ways to prompt the user to grant the app the privilege to access the SSID values?
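One thing I've seen suggested (unverified): recent macOS versions treat the SSID as location-revealing data, so the app may need Location Services authorization via CoreLocation before ssid() returns a value — roughly:

    import CoreLocation
    import CoreWLAN

    // Keep a strong reference to the manager in real code
    let locationManager = CLLocationManager()
    // Triggers the Location permission prompt; also requires a location
    // usage-description key in Info.plist
    locationManager.requestWhenInUseAuthorization()

    if let interface = CWWiFiClient.shared().interface() {
        print("SSID:", interface.ssid() ?? "nil")  // may be non-nil once authorized
    }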
Replies: 1 · Boosts: 0 · Views: 1.2k · Jul ’24

WKWebView for general purpose web browser
I created a simple web browser using WKWebView but, as far as I can tell, there is not a way to auto-populate credentials or to save credentials a user enters into a login form at a third-party website like Netflix (i.e., not my own app domain). Is this correct? If this is wrong, what are the APIs to support this?

My use case is that I want to create an immersive app in visionOS that includes a window that lets the user surf the web (among other things). Ideally, I could just use a Safari window in my immersive app, but I don't think that is possible either. My workaround is to create my own web browser... which works, minus the credential issue.

Is it possible to bring a Safari window into an immersive visionOS app's experience? (IMHO, that would be a great feature.)
Replies: 0 · Boosts: 0 · Views: 626 · Jul ’24

RealityView in macOS, Skybox, and lighting issue
I am testing RealityView on a Mac, and I am having trouble controlling the lighting. I initially add a red cube, and everything is fine (see Figure 1). I then activate a skybox with a star field; the star field appears, and the red cube is now lit only by the star field. Then I deactivate the skybox, expecting the original lighting to return, but the cube continues to be lit by the skybox. The background no longer shows the skybox, but the cube is never lit the way it originally was.

Is there a way to return the model's lighting to the original lighting I had before adding the skybox? I seem to recall ARView's environment property had both a lighting.resource and a background, but I don't see both of those properties in RealityViewCameraContent's environment.

Sample code for macOS 15.1 Beta (24B5024e), Xcode 16.0 beta (16A5171c):

    struct MyRealityView: View {
        @Binding var isSwitchOn: Bool
        @State private var blueNebulaSkyboxResource: EnvironmentResource?

        var body: some View {
            RealityView { content in
                // Create a red cube 10cm on a side
                let mesh = MeshResource.generateBox(size: 0.1)
                let simpleMaterial = SimpleMaterial(color: .red, isMetallic: false)
                let model = ModelComponent(
                    mesh: mesh,
                    materials: [simpleMaterial]
                )
                let redBoxEntity = Entity()
                redBoxEntity.components.set(model)
                content.add(redBoxEntity)

                // Load skybox
                let blueNeb2Name = "BlueNeb2"
                blueNebulaSkyboxResource = try? await EnvironmentResource(named: blueNeb2Name)
            } update: { content in
                if (blueNebulaSkyboxResource != nil) && (isSwitchOn == true) {
                    content.environment = .skybox(blueNebulaSkyboxResource!)
                } else {
                    content.environment = .default
                }
            }
            .realityViewCameraControls(CameraControls.orbit)
        }
    }

Figure 1 (default lighting before adding the skybox)
Figure 2 (after activating the skybox with the star field; the cube is lit by / reflects the skybox)
Figure 3 (after removing the skybox by setting content.environment to .default; the cube still reflects the skybox and is hard to see)
Replies: 1 · Boosts: 0 · Views: 813 · Aug ’24

App Environment SkyDome's UV values
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318. Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (halfway up the texture map). Is this the standard UV mapping for half a SkyDome? It has caused some issues when I've applied some HDRIs.
Replies: 1 · Boosts: 0 · Views: 714 · Sep ’24