PHAsset of older photos is missing GPS info in Swift
I've been working in Swift on iOS to access images via UIImagePickerController, pulling the PHAsset from the picker delegate's "info" dictionary, and then pulling GPS information from the PHAsset. For newer photos, asset.location is populated with GPS information, and the CIImage's properties dictionary has a "{GPS}" entry. So all is good with newer photos. But when I go back to images taken in 2017, asset.location is nil and there is no "{GPS}" entry in the CIImage properties. However, if I export the photo from the Photos app on my Mac and then view it in Preview, there *is* GPS information. Am I missing some setting needed to find the GPS information in older photos using PHAsset on iOS? Thanks,
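For reference, here is a minimal sketch of the lookup described above (my own flow; gpsInfo(for:) is just an illustrative helper name):

```swift
import Photos
import ImageIO
import CoreImage

// Sketch of the GPS lookup described above; gpsInfo(for:) is an illustrative helper.
func gpsInfo(for asset: PHAsset, completion: @escaping ([String: Any]?) -> Void) {
    // Newer photos: PHAsset.location is populated directly.
    if let location = asset.location {
        completion(["Latitude": location.coordinate.latitude,
                    "Longitude": location.coordinate.longitude])
        return
    }
    // Fallback: load the image data and read the "{GPS}" dictionary from its properties.
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true   // older originals may only be in iCloud
    PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
        guard let data = data, let ciImage = CIImage(data: data) else {
            completion(nil)
            return
        }
        completion(ciImage.properties[kCGImagePropertyGPSDictionary as String] as? [String: Any])
    }
}
```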
2 replies · 0 boosts · 1.3k views · Sep ’21
Getting ARMeshClassification information
The ARMeshGeometry documentation (https://developer.apple.com/documentation/arkit/armeshgeometry) references ARMeshClassification (https://developer.apple.com/documentation/arkit/armeshclassification), but I cannot find any obvious way to get classification information for the mesh data. I found the classificationOf(faceWithIndex:) function in the Xcode sample project Visualizing and Interacting with a Reconstructed Scene (https://developer.apple.com/documentation/arkit/content_anchors/visualizing_and_interacting_with_a_reconstructed_scene), but it seems pretty complex. Is there something simpler that I am missing? It also seems from the code that a mesh doesn't have a classification; only individual geometry faces in the mesh do. Is it common for a single mesh to represent many different objects (e.g., a chair, floor, and wall) at the same time? Thanks,
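For context, this is roughly what the per-face lookup in that sample project boils down to (a condensed sketch; it assumes the session was run with mesh classification enabled):

```swift
import ARKit

// Condensed sketch of the sample project's per-face lookup.
// Returns .none when the mesh carries no classification buffer.
func classification(ofFace faceIndex: Int, in geometry: ARMeshGeometry) -> ARMeshClassification {
    guard let source = geometry.classification else { return .none }
    // The buffer holds one UInt8 per face: the raw value of ARMeshClassification.
    let pointer = source.buffer.contents()
        .advanced(by: source.offset + source.stride * faceIndex)
    let rawValue = Int(pointer.assumingMemoryBound(to: UInt8.self).pointee)
    return ARMeshClassification(rawValue: rawValue) ?? .none
}
```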
0 replies · 0 boosts · 716 views · Mar ’21
Is the scene geometry OcclusionMaterial accessible in RealityKit?
Is it possible to turn occlusion on and off for different parts of the mesh when using Scene Understanding with LiDAR and RealityKit? For example, if ARKit identifies a wall, I don't want that mesh to be used during occlusion (but I do want occlusion for other things, like the couch or the floor). If I could do this, it would essentially make my walls transparent, and I could see the RealityKit objects that extend beyond the room I am in. Thanks,
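For reference, this is the all-or-nothing setup I'm using today (a minimal sketch; enableSceneOcclusion is just an illustrative name):

```swift
import ARKit
import RealityKit

// Sketch of my current setup: occlusion from scene understanding is a single
// ARView-wide option, with no per-classification control (e.g. "skip walls") that I can find.
func enableSceneOcclusion(on arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .meshWithClassification
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(config)
}
```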
0 replies · 0 boosts · 629 views · Mar ’21
RealityKit Transform rotation: choosing clockwise vs. anti-clockwise
I'm using move(to:relativeTo:duration:timingFunction:) with a Transform to rotate an Entity around the Y axis in an animated fashion (e.g., a duration of 2 seconds). Unfortunately, when I rotate from 6 radians (343.7°) to 6.6 radians (378.2°), the rotation does not continue anti-clockwise past 2 pi (360°); instead it goes backwards to 0.317 radians (18.2°). Is there a way to force an animated rotation about an axis to go in a clockwise or anti-clockwise direction?
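Here's roughly what the call looks like (a sketch of my setup; rotate(_:toYAngle:) is an illustrative helper). Presumably, because the target orientation is a quaternion, 6.6 rad and 6.6 − 2π rad describe the same orientation, and the animation is free to take the shorter arc:

```swift
import RealityKit

// Sketch of the animated Y-axis rotation described above; rotate(_:toYAngle:) is
// just an illustrative helper. Target angles that differ by 2π are the same orientation.
func rotate(_ entity: Entity, toYAngle radians: Float) {
    var transform = entity.transform
    transform.rotation = simd_quatf(angle: radians, axis: [0, 1, 0])
    entity.move(to: transform, relativeTo: entity.parent, duration: 2.0, timingFunction: .easeInOut)
}
```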
2 replies · 0 boosts · 1.7k views · Apr ’21
RealityKit playAnimation with transitionDuration causes a blink/glitch frame
I am experiencing a single video frame glitch when transitioning from one RealityKit Entity animation to another when transitionDuration is non-zero. This is with the current RealityKit and iOS 14.6 (i.e., not the betas). Is this a known issue? Have people succeeded in transitioning from one animation to another with a non-zero transition time and no strange blink?

Background: I loaded two USDZ models, each with a different animation. One model will be shown, but the AnimationResource from the second model will (at some point) be applied to the first model. I originally created the models with Adobe's Mixamo site (they are characters moving), downloaded the .fbx files, and then converted them to USDZ with Apple's Reality Converter. I start the first model (robot) with its animation, then at some point I apply the animation from the second model (nextAnimationToPlay) to the original model (robot). If transitionDuration is set to something other than 0, a single video frame glitch (or blink) appears before the animation transition occurs (that single frame may be the model's original T-pose, but I'm not certain).

robot.playAnimation(nextAnimationToPlay, transitionDuration: 1.0, startsPaused: false)

If transitionDuration is set to 0, there is no glitch, but then I lose the smooth transition. I have tried variations: for example, setting startsPaused to true and then calling resume() on the playback controller, and also waiting until the current animation completes before calling playAnimation() with the next animation. Still, I get the quick blink. Any suggestions or pointers would be appreciated. Thanks,
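For completeness, here's a sketch of the setup described above (the asset names and setUpAnimations() wrapper are placeholders, not the actual files):

```swift
import RealityKit

// Sketch of the flow described above; "robot_idle" and "robot_wave" are placeholder
// asset names, and setUpAnimations() is just an illustrative wrapper.
func setUpAnimations() throws {
    let robot = try Entity.load(named: "robot_idle")   // the model that stays on screen
    let donor = try Entity.load(named: "robot_wave")   // loaded only for its animation
    let nextAnimationToPlay = donor.availableAnimations[0]

    // Start the first model's own animation.
    robot.playAnimation(robot.availableAnimations[0])

    // Later: transition to the donor's animation. A non-zero transitionDuration
    // is where the single-frame blink shows up.
    robot.playAnimation(nextAnimationToPlay, transitionDuration: 1.0, startsPaused: false)
}
```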
1 reply · 0 boosts · 1.3k views · Aug ’21
AnchorEntity(plane:) ground shadow without the plane?
When I create an AnchorEntity like this:

let entityAnchor = AnchorEntity(plane: [.horizontal], classification: [.floor], minimumBounds: [0.2, 0.2])

and add a USDZ model to it, I get a nice ground shadow. But if I create an AnchorEntity from an ARAnchor like this:

let entityAnchor = AnchorEntity(anchor: anchor)

I do not get that nice ground shadow. Is there a way to get the ground shadow I get from a plane anchor, but with an AnchorEntity where I can specify where it goes or attach it to an ARAnchor?

[Note: on LiDAR devices, I can get a nice shadow using

config.sceneReconstruction = .mesh
arView.environment.sceneUnderstanding.options.insert(.occlusion)
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)

but creating the environment mesh is computationally expensive. I'd like to avoid that if possible.]
0 replies · 0 boosts · 870 views · Oct ’21
OcclusionMaterial filter?
RealityKit has a CollisionFilter to determine which entities can collide with which other ones. Is there, perchance, something similar for OcclusionMaterial? In effect, I'd like a model with an OcclusionMaterial to "occlude this entity but not that entity".
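For comparison, this is the kind of per-entity control CollisionFilter gives for collisions (a minimal sketch; the group names and makeFilteredBox() are made up), which is what I'm hoping exists for occlusion:

```swift
import RealityKit

// Sketch of the CollisionFilter pattern mentioned above; the group names and
// makeFilteredBox() are made up for illustration.
func makeFilteredBox() -> ModelEntity {
    let furniture = CollisionGroup(rawValue: 1 << 0)
    let decor = CollisionGroup(rawValue: 1 << 1)

    let box = ModelEntity(mesh: .generateBox(size: 0.1))
    box.generateCollisionShapes(recursive: false)
    // This box collides only with entities in the "decor" group.
    box.collision?.filter = CollisionFilter(group: furniture, mask: decor)
    return box
}
```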
0 replies · 0 boosts · 540 views · Nov ’21
AR Quick Look additional controls?
I've recently added some USDZ files to a web page, and I can download and display them fine via AR Quick Look on an iPhone or iPad. I've noticed full occlusion is active in the AR view. Over time, the device appears to heat up and the frame rate drops. Are there any properties I can set in the <a rel="ar" ...> HTML tag to control things like occlusion or autofocus (i.e., turn them off)?
0 replies · 0 boosts · 928 views · Nov ’21
Reality Converter scale issue
I don't know if this is an issue with Apple's Reality Converter app or with Blender (I'm using 3.0 on the Mac), but when I export a model as .obj and import it into Reality Converter, the scale is off by a factor of 100. That is, the following workflow creates tiny (1/100 scale) entities:

Blender > [.obj] > Reality Converter > [USDZ]

But this workflow is OK:

Blender > [.glb] > Reality Converter > [USDZ]

Two workarounds are: export as .glb/.gltf, or set the scale factor to 100 in Blender when exporting the .obj. Is this a known issue, or am I doing something wrong? If it is an issue, should I file a bug report?
3 replies · 0 boosts · 1.9k views · Dec ’21
Setting lensPosition to focus at infinity
I am creating a fixed-focus camera app with the focus distance at infinity (or at least 30+ feet away). When I set lensPosition to 1.0, the images were blurry. Some tests letting autofocus do the job showed that a lensPosition of about 0.808 for the wide and telephoto lenses, and 0.84 for the ultra-wide lens, did the trick (iPhone 13 Max). Will the lensPosition needed to focus at infinity vary between devices, and between the lenses on a device? Is there a way to determine the appropriate lensPosition at run time?
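This is roughly how I'm locking the lens (a sketch; the 0.808 default is the empirical value from my own tests, not a documented constant):

```swift
import AVFoundation

// Sketch of the fixed-focus setup described above. The default lensPosition is the
// value autofocus settled on in my tests, not a documented constant.
func lockFocusAtInfinity(on device: AVCaptureDevice, lensPosition: Float = 0.808) throws {
    try device.lockForConfiguration()
    device.setFocusModeLocked(lensPosition: lensPosition) { _ in
        // Called once the lens has finished moving; the CMTime parameter is unused here.
    }
    device.unlockForConfiguration()
}
```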
1 reply · 0 boosts · 875 views · May ’22
State of the Union video on iOS or Apple TV
When I try to watch the WWDC22 "Platforms State of the Union" video on my Apple TV, iPhone, or iPad using the Developer app, the video only shows a single frame every few seconds and there is no audio. The video plays fine in the Developer app on my Mac. Has anyone else had this problem? Is there a workaround? (I've tried deleting the Developer app on my Apple TV and reinstalling it, but no joy.)
1 reply · 0 boosts · 830 views · Jan ’23
Body tracking robot and Blender
Has anyone successfully imported Apple's (FBX) robot from Apple's CapturingBodyMotionIn3D demo into Blender, exported it back out (glTF or another format), converted it back to USDZ via Reality Converter, and gotten it to work in Apple's demo app again? I have run into numerous problems, and each effort to fix a problem leads to new ones. For example, importing Apple's FBX robot leaves the bones pointing in odd directions (see attachment). When I try to correct this on import by aligning the bones, the robot in the Apple app looks like it went through a Star Trek transporter accident: limbs at weird angles.
0 replies · 0 boosts · 905 views · Jan ’23
Importing RoomPlan output into Blender
I'm sharing this in case someone else wants to use Apple's RoomPlan to create a model and import it into Blender.

The problem: I could not successfully import a USDZ model from the RoomPlan app into Blender. (I went through the normal process of importing a USDZ file into Blender: changed the file extension from ".usdz" to ".zip", unzipped the file, then tried to import the ".usda" file.) No surfaces appeared.

The solution: In Apple's source code from here, in the file RoomCaptureViewController.swift, I changed the line

try finalResults?.export(to: destinationURL, exportOptions: .parametric)

to

try finalResults?.export(to: destinationURL, exportOptions: .mesh)

recompiled, and went through the USDZ-to-USDA conversion process again. This time it worked. Apparently Blender cannot import parametric USDA models.
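In context, the changed export call looks something like this (a sketch based on the sample's RoomCaptureViewController; exportForBlender(_:to:) is just an illustrative wrapper):

```swift
import Foundation
import RoomPlan

// Sketch of the export change described above; exportForBlender(_:to:) is an
// illustrative wrapper around the one line changed in RoomCaptureViewController.swift.
func exportForBlender(_ finalResults: CapturedRoom?, to destinationURL: URL) throws {
    // .mesh writes plain geometry that Blender can import; .parametric did not work for me.
    try finalResults?.export(to: destinationURL, exportOptions: .mesh)
}
```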
0 replies · 0 boosts · 1.8k views · Jan ’23
AnchorEntity a child of an Entity?
Reviewing Apple's AnchorEntity documentation, I see that an AnchorEntity can be a child of an Entity in the RealityKit hierarchy. Has it always been this way? In my memory, an AnchorEntity was always just the base element in a Scene. If this was a change by Apple at some point, has Apple given examples where making an AnchorEntity a child of an Entity lets you do cool things you couldn't do before?
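To make sure I'm reading the documentation right, this is the hierarchy I mean (a trivial sketch; makeNestedAnchor() is just for illustration):

```swift
import RealityKit

// The hierarchy in question (sketch): an AnchorEntity as a child of a plain Entity,
// rather than as the root element of the scene.
func makeNestedAnchor() -> Entity {
    let parent = Entity()
    let anchor = AnchorEntity(plane: .horizontal)
    parent.addChild(anchor)
    return parent
}
```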
1 reply · 0 boosts · 876 views · Feb ’23
Animating faces
I'm embarking on a new project that will involve animating 3D faces and mouths. I'm looking at using ARFaceAnchors and blendShapes to capture data that will be used to animate the models' facial expressions. I have a few basic questions:

(1) As far as I can tell, Apple has not supported exporting Memojis to rigged 3D models. Is this still the case?

(2) I did find one web site that said Apple's AvatarKit is now public, but everywhere else I've checked, it is still a private framework (and Xcode complains). Is AvatarKit still private?

(3) It looks like all 52 blendShapes for an ARFaceAnchor are updated every frame, 60 times a second, which is 3,120 data points per second. Are there any best practice guides to reduce the data? For example, "These 10 blendShapes capture the most important features for animating a face."

(4) It appears that visionOS does not support ARFaceAnchor. If I want to present a remote user as a Memoji (or other rigged model) in a shared experience, is there any way to do that at the current time?
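For question (3), this is the capture path I have in mind (a sketch; FaceCapture is just an illustrative class):

```swift
import ARKit

// Sketch of the blend shape capture I'm planning; FaceCapture is an illustrative name.
// Each session update delivers the full set of ~52 coefficients per face anchor.
final class FaceCapture: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            let blendShapes = faceAnchor.blendShapes          // [BlendShapeLocation: NSNumber]
            let jawOpen = blendShapes[.jawOpen]?.floatValue ?? 0
            _ = jawOpen // forward the selected coefficients to the rigged model here
        }
    }
}
```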
0 replies · 0 boosts · 646 views · Jul ’23