In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a texture size of 2048x2048 for Apple QuickLook (and, I think, for RealityKit on iOS in general).
Does Apple have any recommended maximum polygon counts for visionOS? Are they the same for models running in a volumetric window in the Shared Space as for models in an ImmersiveSpace?
What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now)
Is there an equivalent to MultipeerConnectivityService that implements SynchronizationService over TCP/IP connections?
I'd like to have two users in separate locations, each with a local ARAnchor but then have a synchronized RealityKit scene graph attached to their separate ARAnchors.
Is this possible?
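For reference, attaching the built-in MultipeerConnectivityService looks roughly like this (a sketch with assumed setup details, not code from my project); I'm hoping for something that can be assigned to scene.synchronizationService in the same way but runs over TCP/IP:

import MultipeerConnectivity
import RealityKit
import UIKit

// Rough sketch (assumed names and session settings) of attaching the built-in
// MultipeerConnectivityService. A TCP/IP equivalent would presumably be assigned
// to scene.synchronizationService in the same way.
func attachMultipeerSync(to arView: ARView) throws {
    let peerID = MCPeerID(displayName: UIDevice.current.name)
    let mcSession = MCSession(peer: peerID,
                              securityIdentity: nil,
                              encryptionPreference: .required)
    arView.scene.synchronizationService = try MultipeerConnectivityService(session: mcSession)
}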
Thanks,
I don't know if this is an issue with Apple's Reality Converter app or Blender (I'm using 3.0 on the Mac), but when I export a model as .obj and import it to Reality Converter, the scale is off by a factor of 100.
That is, the following workflow creates tiny (1/100 scale) entities:
Blender > [.obj] > Reality Converter > [USDZ]
But this workflow is OK:
Blender > [.glb] > Reality Converter > [USDZ]
Two workarounds are:
export as .glb/.gltf instead of .obj, or
when exporting as .obj, set the scale factor to 100 in Blender.
Is this a known issue, or am I doing something wrong?
If it is an issue, should I file a bug report?
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces.
This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
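For what it's worth, this is the approach I've been experimenting with. It's a rough sketch that assumes the classifications GeometrySource stores one byte per face whose value lines up with MeshAnchor.MeshClassification's raw values; I haven't found documentation confirming that layout:

import ARKit

// Sketch: read per-face classification values out of the classifications
// GeometrySource. Assumes one UInt8 per face whose value maps onto
// MeshAnchor.MeshClassification's Int raw values (an assumption on my part).
func faceClassifications(for meshAnchor: MeshAnchor) -> [MeshAnchor.MeshClassification] {
    guard let source = meshAnchor.geometry.classifications else { return [] }
    let basePointer = source.buffer.contents().advanced(by: source.offset)
    return (0..<source.count).map { faceIndex in
        let rawValue = basePointer
            .advanced(by: faceIndex * source.stride)
            .assumingMemoryBound(to: UInt8.self)
            .pointee
        return MeshAnchor.MeshClassification(rawValue: Int(rawValue))
            ?? MeshAnchor.MeshClassification.none
    }
}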
In ARKit+RealityKit I do a raycast from the ARView's center, then create an AnchorEntity at the result and add a target ModelEntity (a flattened cube) to the AnchorEntity.
// Raycast from the query point and anchor the placement target at the first hit.
guard let result = session.raycast(query).first else { return }
let newAnchor = AnchorEntity(raycastResult: result)
newAnchor.addChild(placementTargetEntity)
arView.scene.addAnchor(newAnchor)
I repeat this for each frame update via the ARSessionDelegate session(_:didUpdate:), removing the previous AnchorEntity first.
I use this as a target to let the user know where the full model will be placed when they tap the screen.
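Concretely, the per-frame update is roughly this (a sketch rather than my exact code; currentTargetAnchor is just an illustrative name for the property tracking the previous frame's anchor, and the query parameters are assumptions):

// Sketch of the per-frame update described above. `placementTargetEntity` and
// `currentTargetAnchor` are stand-in names; the query parameters are assumptions.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
    guard let query = arView.makeRaycastQuery(from: center,
                                              allowing: .estimatedPlane,
                                              alignment: .any),
          let result = session.raycast(query).first else { return }

    // Remove the previous frame's placement target before adding the new one.
    if let oldAnchor = currentTargetAnchor {
        arView.scene.removeAnchor(oldAnchor)
    }

    let newAnchor = AnchorEntity(raycastResult: result)
    newAnchor.addChild(placementTargetEntity)
    arView.scene.addAnchor(newAnchor)
    currentTargetAnchor = newAnchor
}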
This works fine under iOS 14, but I get strange results with iPadOS 15 - two different placements are created on different screen updates, offset and slightly rotated from each other.
Has anyone else had issues with raycast() or creating an AnchorEntity from the result?
Is using session(_:didUpdate:) via ARSessionDelegate to update virtual content considered bad style now? (I noticed that in the WWDC21 sessions they used a different mechanism to update their virtual content.)
(If any Apple engineers read this, I filed a feedback with sample code and video of the issue at FB9535616)
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors.
I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location).
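For reference, the plane-detection setup is roughly this (a simplified sketch; the alignments and anchor handling here are placeholders rather than my exact code):

import ARKit

// Simplified sketch of the plane-detection setup described above.
let arkitSession = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])

func runPlaneDetection() async throws {
    try await arkitSession.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        // update.anchor is a PlaneAnchor for a detected floor or ceiling.
        // (My app adds/updates entities for these anchors here.)
    }
}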
The tap locations are consistently about 0.35m off along the horizontal plane (never off vertically) from where I was looking.
Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking?
And if I move to different locations, the offset is the same in real space, so it doesn't appear to be tied to the orientation of the Apple Vision Pro (e.g. it isn't off a little to the left of where the headset was looking).
Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location.
I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, roughly 0.35m away from the look location.
Here is how I captured the tap gesture and extracted the 3D location:
var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}
Here is how I set the position of the purple cube:
func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
Does Apple have any documentation on using Reality Converter to convert FBX to USDZ on an M1 Max?
I'm trying to convert an .fbx file to USDZ with Apple's Reality Converter on an M1 Mac (macOS 12.3 Beta), but everything I've tried so far has failed.
When I try to convert .fbx files on my Intel-based iMac Pro, it succeeds.
Following some advice on these forums, I tried installing all of the packages from Autodesk:
https://www.autodesk.com/developer-network/platform-technologies/fbx-sdk-2020-0
FBX SDK 2020.0.1 Clang
FBX Python SDK Mac
FBX SDK 2020.0.1 Python Mac
FBX Extensions SDK 2020.0.1 Mac
Still no joy.
I have a workaround - I still have my Intel-based iMac. But I'd like to switch over to my M1 Mac for all my development.
Any pointers?
Note: I couldn't get the usdzconvert command line tool to work on my M1 Mac either. /usr/bin/python isn't there.
When testing In-App Purchases in Xcode with a .storekit file, I can delete past purchase transactions, so I can re-test the purchase experience.
I've switched to using a Sandbox tester and made purchases. However, I cannot find a way to delete previous purchase transactions made in the sandbox so I can re-run the tests.
Is this possible?