I've added a simple visionOS Portal to an app's initial WindowGroup (a window with an attached portal is all that is displayed), but I've had trouble adding a portal to an ImmersiveSpace.
For example, starting from the boilerplate code that Xcode creates for a mixed spatial experience, I'd like to toggle on and off an ImmersiveSpace that has a portal in it.
So far, the portal isn't showing up.
Is it possible to add a portal to an ImmersiveSpace? Are there any restrictions on where portals can be added?
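For reference, this is roughly what I'm attempting inside the ImmersiveSpace's RealityView (a trimmed sketch; the entity names, sizes, and materials are placeholders I made up, not confirmed working code):

import SwiftUI
import RealityKit

struct PortalImmersiveView: View {
    var body: some View {
        RealityView { content in
            // The hidden "world" the portal looks into.
            let world = Entity()
            world.components.set(WorldComponent())

            // Placeholder content inside the world: an inward-facing sphere.
            let backdrop = ModelEntity(
                mesh: .generateSphere(radius: 5),
                materials: [UnlitMaterial(color: .cyan)]
            )
            backdrop.scale *= SIMD3<Float>(-1, 1, 1) // flip so we see the inside
            world.addChild(backdrop)

            // The portal surface itself, floating in front of the user.
            let portal = ModelEntity(
                mesh: .generatePlane(width: 1, height: 1),
                materials: [PortalMaterial()]
            )
            portal.components.set(PortalComponent(target: world))
            portal.position = [0, 1.5, -2]

            content.add(world)
            content.add(portal)
        }
    }
}

I then hand this view to the template's ImmersiveSpace scene, which the app opens and dismisses with openImmersiveSpace/dismissImmersiveSpace.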
I see example code converting the results of a SpatialTap to a SIMD3 location. For example, from WWDC session Meet ARKit for spatial computing:
let location3D = value.convert(value.location3D, from: .global, to: .scene)
What I really want is a simd_float4x4 that includes the orientation of the surface the tap gesture/raycast hit.
My goal is to place an object with its Y-axis along the normal of the surface that was tapped.
For example, in the referenced WWDC session, they create a CollisionComponent from the MeshAnchor data. If that mesh data is covering a curved couch cushion, I would like the normal from that curved cushion (i.e., the closest triangle approximating it).
Is this possible?
My planned fallback is to only use planes for collision surfaces for tap gestures, extract the tap gesture value's entity (which I am hoping is the plane), and grab its transform for the orientation information.
I am hoping Apple has a simple function call that is more general than my fallback approach.
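To make that fallback concrete, here is roughly what I have in mind (a sketch that assumes the tapped entity is a plane whose local +Y is the surface normal):

import SwiftUI
import RealityKit

// Build a placement transform from a SpatialTapGesture value, borrowing the
// tapped entity's orientation so the placed object's Y-axis matches the
// entity's +Y (the plane normal, under the assumption above).
func placementTransform(for value: EntityTargetValue<SpatialTapGesture.Value>) -> Transform {
    // Tap location converted into the RealityKit scene's coordinate space.
    let location = value.convert(value.location3D, from: .global, to: .scene)

    // Orientation of the entity that was hit, in world space.
    let orientation = value.entity.orientation(relativeTo: nil)

    return Transform(scale: .one, rotation: orientation, translation: location)
}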
I have a Canvas inside a ScrollView on a Mac. The Canvas's size is determined by a model (for the example below, I am simply drawing a grid of circles of a given radius). Everything appears to work fine.
However, I am wondering if it is possible for the Canvas rendering code to know what portion of the Canvas is actually visible in the ScrollView?
For example, if the Canvas is large but the visible portion is small, I would like to avoid drawing content that is not visible.
Is this possible?
Example of Canvas in a ScrollView I am using for testing:
struct MyCanvas: View {
    @ObservedObject var model: MyModel

    var body: some View {
        ScrollView([.horizontal, .vertical]) {
            Canvas { context, size in
                // Placeholder rendering code
                for row in 0..<model.numOfRows {
                    for col in 0..<model.numOfColumns {
                        let left: CGFloat = CGFloat(col * model.radius * 2)
                        let top: CGFloat = CGFloat(row * model.radius * 2)
                        let size: CGFloat = CGFloat(model.radius * 2)
                        let rect = CGRect(x: left, y: top, width: size, height: size)
                        let path = Circle().path(in: rect)
                        context.fill(path, with: .color(.red))
                    }
                }
            }
            .frame(width: CGFloat(model.numOfColumns * model.radius * 2),
                   height: CGFloat(model.numOfRows * model.radius * 2))
        }
    }
}
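One workaround I've been considering (rather than a built-in API) is to read the content's frame in the ScrollView's named coordinate space with a GeometryReader and derive the visible region from that. A rough sketch of the idea:

import SwiftUI

struct MyVisibleRectCanvas: View {
    @ObservedObject var model: MyModel
    @State private var contentFrame: CGRect = .zero

    var body: some View {
        ScrollView([.horizontal, .vertical]) {
            Canvas { context, _ in
                // The scroll offset is -contentFrame.origin, so rows/columns
                // whose rects fall outside the visible region could be
                // skipped here instead of always drawing the full grid.
            }
            .frame(width: CGFloat(model.numOfColumns * model.radius * 2),
                   height: CGFloat(model.numOfRows * model.radius * 2))
            .background(
                GeometryReader { proxy in
                    Color.clear.preference(key: ContentFrameKey.self,
                                           value: proxy.frame(in: .named("scroll")))
                }
            )
        }
        .coordinateSpace(name: "scroll")
        .onPreferenceChange(ContentFrameKey.self) { contentFrame = $0 }
    }
}

struct ContentFrameKey: PreferenceKey {
    static var defaultValue: CGRect = .zero
    static func reduce(value: inout CGRect, nextValue: () -> CGRect) {
        value = nextValue()
    }
}

In principle the visible portion is then the scroll view's bounds offset by -contentFrame.origin, but this feels indirect, hence the question.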
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318.
Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (half way up the texture map).
Is this the standard UV mapping for half a SkyDome?
It has caused some issues when I've applied some HDRIs.
I created a simple web browser using WKWebView, but as far as I can tell, there is no way to auto-populate credentials, or to save credentials a user enters into a login form, at a third-party website like Netflix (i.e., not my own app's domain).
Is this correct?
If this is wrong, what are the APIs to support this?
My use case is that I want to create an immersive app in visionOS that includes a window that lets the user surf the web (among other things). Ideally, I could just use a Safari window in my immersive app, but I don't think that is possible either. My workaround is to create my own web browser... which works, minus the credential issue.
Is it possible to bring a Safari window into an immersive visionOS app's experience? (IMHO, that would be a great feature)
Is there an equivalent to MultipeerConnectivityService that implements SynchronizationService over TCP/IP connections?
I'd like to have two users in separate locations, each with a local ARAnchor but then have a synchronized RealityKit scene graph attached to their separate ARAnchors.
Is this possible?
Thanks,
Given an AnchorEntity from say RealityKit's Scene anchors collection, is it possible to retrieve the ARAnchor that was used when creating the AnchorEntity?
Looking through the AnchorEntity documentation (https://developer.apple.com/documentation/realitykit/anchorentity), it seems that while you can create an AnchorEntity using an ARAnchor, there is no way to retrieve that ARAnchor afterwards.
Alternatively, the ARSession delegate functions receive a list of ARAnchors or an ARFrame that has ARAnchors, but I could not find an approach to retrieve AnchorEntities that might be associated with any of these ARAnchors.
Given an ARAnchor, is there a way to get an AnchorEntity associated with it?
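For reference, the workaround I'm leaning toward is to do the bookkeeping myself, along these lines (my own helper type, not an Apple API):

import ARKit
import RealityKit

// Maps ARAnchor identifiers to the AnchorEntities I created for them, so a
// later delegate callback can get back to the entity.
final class AnchorEntityRegistry {
    private var entitiesByAnchorID: [UUID: AnchorEntity] = [:]

    func register(_ entity: AnchorEntity, for anchor: ARAnchor) {
        entitiesByAnchorID[anchor.identifier] = entity
    }

    func entity(for anchor: ARAnchor) -> AnchorEntity? {
        entitiesByAnchorID[anchor.identifier]
    }

    func unregister(_ anchor: ARAnchor) {
        entitiesByAnchorID.removeValue(forKey: anchor.identifier)
    }
}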
Thanks,
In ARKit+RealityKit I do a raycast from the ARView's center, then create an AnchorEntity at the result and add a target ModelEntity (a flattened cube) to the AnchorEntity.
guard let result = session.raycast(query).first else { return }
let newAnchor = AnchorEntity(raycastResult: result)
newAnchor.addChild(placementTargetEntity)
arView.scene.addAnchor(newAnchor)
I repeat this for each frame update via the ARSessionDelegate session(_:didUpdate:), removing the previous AnchorEntity first.
I use this as a target to let the user know where the full model will be placed when they tap the screen.
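For completeness, here is a condensed, self-contained sketch of the whole update path (the class and property names are my own; the real project is structured a bit differently):

import ARKit
import RealityKit

final class PlacementTargetController: NSObject, ARSessionDelegate {
    let arView: ARView
    let placementTargetEntity: Entity   // the flattened-cube target
    private var currentTargetAnchor: AnchorEntity?

    init(arView: ARView, placementTargetEntity: Entity) {
        self.arView = arView
        self.placementTargetEntity = placementTargetEntity
        super.init()
        arView.session.delegate = self
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Raycast from the center of the view each frame.
        let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
        guard let query = arView.makeRaycastQuery(from: center,
                                                  allowing: .estimatedPlane,
                                                  alignment: .any),
              let result = session.raycast(query).first else { return }

        // Remove last frame's target anchor, then re-anchor at the new hit.
        if let previous = currentTargetAnchor {
            arView.scene.removeAnchor(previous)
        }
        let newAnchor = AnchorEntity(raycastResult: result)
        newAnchor.addChild(placementTargetEntity)
        arView.scene.addAnchor(newAnchor)
        currentTargetAnchor = newAnchor
    }
}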
This works fine under iOS 14, but I get strange results on iPadOS 15: two different placements are created on different screen updates, offset and slightly rotated from each other.
Has anyone else had issues with raycast() or creating an AnchorEntity from the result?
Is the use of session(_:didUpdate:) via ARSessionDelegate to update virtual content considered bad style now? (I noticed in the WWDC21 they used a different mechanism to update their virtual content.)
(If any Apple engineers read this, I filed a feedback with sample code and video of the issue at FB9535616)
During testing of my app, the frames per second (shown either in the Xcode debug navigator or via ARView's .showStatistics) sometimes drops by half and stays down there.
This low FPS will continue even when I kill the app completely and restart.
However, after giving my phone a break, the fps returns to 60 fps.
Does ARKit automatically throttle down FPS when the device gets too hot?
If so, is there a signal my program can catch from ARKit or the OS that can tell me this is happening?
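For reference, the only OS-level signal I've found so far is the thermal state notification, sketched below. I don't know whether ARKit's throttling is actually tied to it, which is really what I'm asking:

import Foundation

final class ThermalStateObserver {
    private var token: NSObjectProtocol?

    init(onChange: @escaping (ProcessInfo.ThermalState) -> Void) {
        token = NotificationCenter.default.addObserver(
            forName: ProcessInfo.thermalStateDidChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            // .serious / .critical are where I'd expect throttling to kick in.
            onChange(ProcessInfo.processInfo.thermalState)
        }
    }

    deinit {
        if let token = token {
            NotificationCenter.default.removeObserver(token)
        }
    }
}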
I've been playing with Apple's StoreKit 2 demo code (buying the cars, subscriptions, ...), and sometimes when I purchase a car, one or more of the other buttons visually flip state (e.g., purchased checkmark changes back to the price).
Leaving the StoreView and returning to it shows the correct state for each of the buttons.
I am using the StoreKit Configuration Products.storekit (for the scheme), so testing in Xcode.
I get this in both the simulator and on my actual phone.
The issue is random. The vast majority of the time everything works perfectly.
Is anyone else seeing this issue?
Does anyone know how to address it?
Dev environment:
Xcode 13.0 beta 5 (13A5212g)
macOS 12.0 Beta (21A5534d)
Mac mini (M1, 2020)
During my first external test using TestFlight for an In-App Purchase (iPadOS), the user was
(1) Prompted for their Apple ID & password
(2) Prompted for their password a second time
(3) (User believes) prompted for their password a third time
Are these multiple prompts for their password expected behavior, or have I done something wrong?
I'm looking for documentation/guidance on USDZ and scene model sizes. My focus is on RealityKit-based apps.
I found the 2018 WWDC presentation
Integrating Apps and Content with AR Quick Look
which mentions a rule of thumb for a USDZ model of:
100K polygons
One set of 2048x2048 textures
10 seconds of animations
Are these numbers still recommended in 2021?
Are these numbers just for AR Quick Look, or do they apply to RealityKit-based apps too?
If a RealityKit scene loads several USDZ models, should the cumulative number of polygons across all models be 100K, or is the 100K number on a per-model basis?
The talk mentioned that AR Quick Look will dynamically downsample textures for devices with less memory. Does RealityKit do this as well?
If so, can I err on the side of providing a larger texture (e.g., 4096 x 4096) and trust RealityKit to downsample as appropriate for me?
(I am hoping there is some documentation covering questions like this)
In a previous post I asked if 100,000 polygons is still the recommended size for USDZ Quick Look models on the web. (The answer is yes)
But I realize my polygons are 4-sided but are not planar, so they have to be broken down into 2 triangles when rendered.
Given that, should I shoot for 50,000 polygons (i.e., 100,000 triangles)?
Or does the 100,000 polygon statistic already assume polygons will be subdivided into triangles?
(The models are generated from digital terrain (GeoTIFF) data, not a 3D modeling tool)
Does Apple have any documentation on using Reality Converter to convert FBX to USDZ on an M1 Max?
I'm trying to convert an .fbx file to USDZ with Apple's Reality Converter on an M1 Mac (macOS 12.3 Beta), but everything I've tried so far has failed.
When I try to convert .fbx files on my Intel-based iMac Pro, it succeeds.
Following some advice on these forums, I tried to install all packages from Autodesk
https://www.autodesk.com/developer-network/platform-technologies/fbx-sdk-2020-0
FBX SDK 2020.0.1 Clang
FBX Python SDK Mac
FBX SDK 2020.0.1 Python Mac
FBX Extensions SDK 2020.0.1 Mac
Still no joy.
I have a workaround: I still have my Intel-based iMac. But I'd like to switch over to my M1 Mac for all my development.
Any pointers?
Note: I couldn't get the usdzconvert command line tool to work on my M1 Mac either. /usr/bin/python isn't there.
I am running into a strange bug where the exact same code compiles fine in one project but generates a compiler error in another project.
In particular, I am trying to create an AnchorEnity from an ARAnchor.
func addModelTo(anchor: ARAnchor) {
    let entityAnchor = AnchorEntity(anchor: anchor)
    ...
}
The compiler error message is not even consistent. Sometimes I get a single error message:
Cannot convert value of type 'ARAnchor' to expected argument type 'AnchoringComponent.Target'
Other times I get an error with two possible issues:
No exact matches in call to initializer
Candidate '() -> AnchorEntity' requires 0 arguments, but 1 was provided (RealityFoundation.AnchorEntity)
Candidate expects value of type 'AnchoringComponent.Target' for parameter #1 (got '(anchor: ARAnchor)')
I'm trying to track down why this sometimes causes an error and sometimes it does not.
Any pointers?
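In case it helps, here is a minimal, self-contained version of the call in question (the ARView plumbing is my own addition for context):

import ARKit
import RealityKit

func addModel(to anchor: ARAnchor, in arView: ARView) {
    // This is the initializer that sometimes fails to resolve.
    let entityAnchor = AnchorEntity(anchor: anchor)
    arView.scene.addAnchor(entityAnchor)
}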