The ARMeshGeometry documentation (https://developer.apple.com/documentation/arkit/armeshgeometry) references ARMeshClassification (https://developer.apple.com/documentation/arkit/armeshclassification), but I cannot find any obvious way to get classification information for the mesh data.
I found the classificationOf(faceWithIndex: index) function in the Xcode sample project Visualizing and Interacting with a Reconstructed Scene (https://developer.apple.com/documentation/arkit/content_anchors/visualizing_and_interacting_with_a_reconstructed_scene), but it seems pretty complex.
Is there something simpler that I am missing?
It also seems from the code that a mesh doesn't have a classification, but only individual geometry faces in the mesh have a classification.
Is it common for a single mesh to represent many different objects (e.g., a chair, floor, and wall) all at the same time?
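For reference, the sample's helper boils down to something like the following. This is my condensed reading of the sample code, written as an ARMeshGeometry extension; the one-byte-per-face layout is what the sample asserts, not something I've verified independently.

import ARKit

extension ARMeshGeometry {
    // Reads the per-face classification byte out of the classification geometry source.
    func classificationOf(faceWithIndex index: Int) -> ARMeshClassification {
        guard let classification = classification else { return .none }
        let pointer = classification.buffer.contents()
            .advanced(by: classification.offset + classification.stride * index)
        let value = Int(pointer.assumingMemoryBound(to: UInt8.self).pointee)
        return ARMeshClassification(rawValue: value) ?? .none
    }
}

Even condensed like this, it is a lot of pointer work just to learn that one face is a wall, which is why I'm hoping there is a simpler API I've overlooked.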
Thanks,
Is it possible to turn occlusion on and off selectively for different parts of the scene mesh when using Scene Understanding with LiDAR and RealityKit?
For example, if ARKit identifies a wall, I don't want that mesh to be used for occlusion (but I do want occlusion for other things, like the couch or the floor).
If I could do this, it would essentially make my walls transparent, and I could see the RealityKit objects that extend beyond the room I am in.
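For context, here is roughly my current setup (a sketch, not my exact code); scene-understanding occlusion seems to apply to the entire reconstructed mesh at once:

import ARKit
import RealityKit

let arView = ARView(frame: .zero)

let config = ARWorldTrackingConfiguration()
// Request the mesh with per-face classification so walls, floors, etc. are labeled.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
    config.sceneReconstruction = .meshWithClassification
}
// This enables occlusion for the whole scene mesh; I haven't found a way to
// exclude faces classified as .wall while keeping occlusion for everything else.
arView.environment.sceneUnderstanding.options.insert(.occlusion)
arView.session.run(config)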
Thanks,
When I create an AnchorEntity like this:
let entityAnchor = AnchorEntity(plane: [.horizontal], classification: [.floor], minimumBounds: [0.2,0.2])
and add a USDZ model to it, I get a nice ground shadow.
But if I create an AnchorEntity using an ARAnchor like this:
let entityAnchor = AnchorEntity(anchor: anchor)
I do not get that nice ground shadow.
Is there a way to get the ground shadow I get from a plane anchor, but with an AnchorEntity whose position I can specify or that I can attach to an ARAnchor?
[Note: for LiDAR devices, I can get a nice shadow using
config.sceneReconstruction = .mesh
arView.environment.sceneUnderstanding.options.insert(.occlusion)
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
but creating the environment mesh is computationally expensive. I'd like to avoid that if possible.]
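For completeness, here is a sketch of the ARAnchor-based path that loses the shadow. The raycast placement and the model parameter are placeholders standing in for my real code:

import ARKit
import RealityKit

func place(_ model: Entity, in arView: ARView) {
    // Raycast from the screen center onto an estimated horizontal plane.
    guard let result = arView.raycast(from: arView.center,
                                      allowing: .estimatedPlane,
                                      alignment: .horizontal).first else { return }
    let anchor = ARAnchor(transform: result.worldTransform)
    arView.session.add(anchor: anchor)

    // Anchoring this way positions the model where I want it, but no ground shadow appears.
    let entityAnchor = AnchorEntity(anchor: anchor)
    entityAnchor.addChild(model)
    arView.scene.addAnchor(entityAnchor)
}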
I've been creating USDA files manually and converting them to USDZ via Apple's usdzconvert tool (version 0.64).
In the file I set unit size to be 1 meter
metersPerUnit = 1.0
but the USDZ keeps the unit size at 1 cm.
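For context, the layer metadata at the top of my .usda looks like this (trimmed to the relevant line):

#usda 1.0
(
    metersPerUnit = 1.0
)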
Apple's Reality Converter does process the metersPerUnit metadata, so that is a viable work-around for me. But sometimes I'd prefer the command-line tool.
Is there an update to the usdzconvert tool? I couldn't find one.
RealityKit has a CollisionFilter to determine which entities can collide with which other ones.
Perchance, is there something similar for OcclusionMaterial?
In effect, I'd like to have the ability to have a model with an OcclusionMaterial "occlude this entity but not that entity".
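For comparison, this is roughly how I scope collisions today with CollisionFilter; the group names and shapes below are placeholders. I'd like the same kind of per-entity filtering for OcclusionMaterial:

import RealityKit

// Placeholder collision groups.
let wallsGroup = CollisionGroup(rawValue: 1 << 0)
let propsGroup = CollisionGroup(rawValue: 1 << 1)

let prop = ModelEntity(mesh: .generateBox(size: 0.2))
prop.collision = CollisionComponent(
    shapes: [.generateBox(size: [0.2, 0.2, 0.2])],
    filter: CollisionFilter(group: propsGroup, mask: wallsGroup) // collide only with walls
)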
I've recently added some USDZ files to a web page, and I can download and display them fine via AR Quick Look on an iPhone or iPad.
I've noticed full occlusion is active in the AR view.
Over time, the device appears to heat up and the frame rate drops.
Are there any properties I can set in the <a rel="ar" ...> HTML tag to control things like occlusion or autofocus (i.e., turn them off)?
Has anyone successfully:
- imported Apple's (FBX) robot for Apple's CapturingBodyMotionIn3D demo into Blender,
- exported it back out (glTF or another format),
- converted it back to USDZ via Reality Converter,
- and gotten it to work in Apple's demo app again?
I have run into numerous problems, and each attempt to fix one leads to new ones. For example, importing Apple's FBX robot leaves the bones pointing in odd directions (see attachment).
When I try to correct this on import by realigning the bones, the robot in the Apple app looks like it went through a Star Trek transporter accident, with limbs at weird angles.
I'm sharing this in case someone else wants to use Apple's RoomPlan to create a model and import it into Blender.
The problem: I could not successfully import a USDZ model from the RoomPlan app into Blender. (I went through the usual process for importing a USDZ file into Blender: changed the file extension from ".usdz" to ".zip", unzipped the file, then tried to import the ".usda" file.) No surfaces appeared.
The solution: In Apple's source code from here, in the file RoomCaptureViewController.swift, I changed the line
try finalResults?.export(to: destinationURL, exportOptions: .parametric)
to
try finalResults?.export(to: destinationURL, exportOptions: .mesh)
recompiled, and went through the USDZ to USDA conversion process again. This time it worked.
Apparently Blender cannot import parametric USDA models.
When setting ARView's environment camera feed exposure to a negative value to make the camera feed dimmer, for example
arView.environment.background = .cameraFeed(exposureCompensation: -3)
can this negatively affect ARKit's tracking, i.e., the device's localization and mapping?
That is, is the device's use of the camera for SLAM purposes independent of the exposureCompensation value?
I’m embarking on a new project that will involve animating 3D faces & mouths. I’m looking at using ARFaceAnchors and blendShapes to capture data that will be used to animate the models’ facial expressions.
I have a few basic questions:
(1) As far as I can tell, Apple has not supported exporting Memojis to rigged 3D models. Is this still the case?
(2) I did find one web site that said Apple’s AvatarKit is now public, but everywhere else I’ve checked, it is still a private framework (and Xcode complains). Is AvatarKit still private?
(3) It looks like all 52 blendShapes for an ARFaceAnchor are updated every frame, at 60 frames per second. That is 3,120 data points per second. Are there any best-practice guides for reducing the data? For example, "These 10 blendShapes capture the most important features for animating a face." (See the sketch after this list.)
(4) It appears that visionOS does not support ARFaceAnchor. If I want to present a remote user as a Memoji (or other rigged model) in a shared experience, is there any way to do that at the current time?
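To make question (3) concrete, here is the kind of reduction I have in mind. The particular subset of blend shapes below is just my guess, which is exactly what I'm asking about:

import ARKit

// A guessed subset of blend shapes to forward; which ones matter most is the open question.
let keyShapes: [ARFaceAnchor.BlendShapeLocation] = [
    .jawOpen, .mouthSmileLeft, .mouthSmileRight,
    .eyeBlinkLeft, .eyeBlinkRight, .browInnerUp
]

func reducedBlendShapes(for faceAnchor: ARFaceAnchor) -> [ARFaceAnchor.BlendShapeLocation: Float] {
    var result: [ARFaceAnchor.BlendShapeLocation: Float] = [:]
    for key in keyShapes {
        // blendShapes values are NSNumbers in the range 0.0 to 1.0.
        if let value = faceAnchor.blendShapes[key] {
            result[key] = value.floatValue
        }
    }
    return result
}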
When defining a volumetric WindowGroup, I can set the defaultSize().
Is it possible to set a different volume size when opening a window with openWindow()?
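For reference, this is roughly how I declare the volume today; the app name, window id, and size are placeholders:

import SwiftUI

@main
struct ModelApp: App {
    var body: some Scene {
        // A volumetric window whose size is fixed at declaration time.
        WindowGroup(id: "modelVolume") {
            ModelVolumeView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}

struct ModelVolumeView: View {
    var body: some View {
        // Placeholder content; in my app this shows the currently selected model.
        Text("Model goes here")
    }
}

When I later call openWindow(id: "modelVolume"), I don't see a way to pass a different size.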
In my use case, I want to display potentially different models that are of different sizes inside the volumetric window, and I want to preserve each model's size. I would like to create a volumetric window that is optimally sized for each model.
Alternatively I could create a volumetric window that is large enough to fit the largest model, and then reposition smaller models inside the volume to be at the front & bottom of the volume, but I haven't figured out how to do that either (Post on that question)
Does RealityKit support a clipping plane, where I can define a plane and have all content on one side of the plane not rendered?
In a progressive ImmersiveSpace, I created an object (a cylinder) and applied an OcclusionMaterial to it. It does hide my virtual content behind it, but does not show the content of my room. The cylinder just appears black.
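Here is roughly what I'm doing; the dimensions are placeholders:

import RealityKit

// A cylinder with OcclusionMaterial: it hides virtual content behind it, but in a
// progressive ImmersiveSpace it renders as black instead of revealing the room.
let cylinder = ModelEntity(
    mesh: .generateCylinder(height: 1.0, radius: 0.2),
    materials: [OcclusionMaterial()]
)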
In progressive (or full?) ImmersiveSpace, is it possible to apply occlusion material (or something else), so I can see the room behind the virtual content?
Basically, I want to punch a hole through the virtual content and see the room behind it.
As a practical example, imagine being in a progressive ImmersiveSpace, but you have a plane with an occlusion mesh applied to it above your Apple Magic Keyboard so you can see your keyboard.
Is this possible?
I have a Canvas inside a ScrollView on a Mac. The Canvas's size is determined by a model (for the example below, I am simply drawing a grid of circles of a given radius). Everything appears to work fine.
However, is it possible for the Canvas rendering code to know what portion of the Canvas is actually visible in the ScrollView?
For example, if the Canvas is large but the visible portion is small, I would like to avoid drawing content that is not visible.
Is this possible?
Example of Canvas in a ScrollView I am using for testing:
import SwiftUI

// MyModel (not shown here) is an ObservableObject providing numOfRows, numOfColumns,
// and radius as integers.
struct MyCanvas: View {
    @ObservedObject var model: MyModel

    var body: some View {
        ScrollView([.horizontal, .vertical]) {
            Canvas { context, size in
                // Placeholder rendering code: draw a grid of red circles sized by the model.
                for row in 0..<model.numOfRows {
                    for col in 0..<model.numOfColumns {
                        let left = CGFloat(col * model.radius * 2)
                        let top = CGFloat(row * model.radius * 2)
                        let diameter = CGFloat(model.radius * 2)
                        let rect = CGRect(x: left, y: top, width: diameter, height: diameter)
                        let path = Circle().path(in: rect)
                        context.fill(path, with: .color(.red))
                    }
                }
            }
            .frame(width: CGFloat(model.numOfColumns * model.radius * 2),
                   height: CGFloat(model.numOfRows * model.radius * 2))
        }
    }
}
I created a simple web browser using WKWebView, but as far as I can tell, there is no way to auto-populate credentials, or to save credentials a user enters into a login form, at a third-party website like Netflix (i.e., not my own app's domain).
Is this correct?
If this is wrong, what are the APIs to support this?
My use case is that I want to create an immersive app in visionOS that includes a window that lets the user surf the web (among other things). Ideally, I could just use a Safari window in my immersive app, but I don't think that is possible either. My workaround is to create my own web browser (sketched below)... which works, minus the credential issue.
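For what it's worth, the workaround is essentially a minimal WKWebView wrapper; the view name and start page are placeholders:

import SwiftUI
import WebKit

struct BrowserView: UIViewRepresentable {
    let url: URL

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }

    func updateUIView(_ webView: WKWebView, context: Context) {}
}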
Is it possible to bring a Safari window into an immersive visionOS app's experience? (IMHO, that would be a great feature)