I had a weird case today where an endpoint system extension remained running even after I deleted the .app bundle. If I tried killing the process with "sudo kill -9 <pid>", the extension respawned. If I tried "sudo launchctl remove <name>", I was told I didn't have the privilege. Searching my hard drive, I found a copy of the system extension in /Macintosh HD/Library/System Extensions/...

I rebooted into recovery mode, deleted the extension bundle, and restarted. Everything initially looked fine, and the process did not come back. But then when I tried to re-build, re-package, re-install, and re-launch the application, the operating system complained that it could not find the system extension, even though it was there in the .app bundle.

The operating system seems to (A) create a cache/copy of the system extension bundle and (my guess) (B) maintain a link to that cache location somewhere and try to launch that cached system extension bundle.

[My hacked solution was to rename the extension, including creating a new bundle ID and associated provisioning profile.]

Has anyone encountered a system extension that would not die? Did you figure out how to kill it and clear out any caches of it?

Thanks,
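For reference, the supported deactivation path I'm aware of has to be run from the container app that originally activated the extension, which would explain why deleting the .app first leaves the extension orphaned. A minimal Swift sketch (the bundle identifier is a placeholder):

import SystemExtensions

// Ask macOS to deactivate the extension; this only works while the container
// app that activated it is still installed. Set a delegate conforming to
// OSSystemExtensionRequestDelegate to receive the result.
let request = OSSystemExtensionRequest.deactivationRequest(
    forExtensionWithIdentifier: "com.example.app.extension",  // placeholder bundle ID
    queue: .main
)
OSSystemExtensionManager.shared.submitRequest(request)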
I noticed an issue when testing my AR software on my iPhone 11 Pro: when I hold the phone horizontally to the right, Xcode's console starts printing this error message over and over.
[Technique] Could not transform image.
I created a new Xcode project with the ARKit template, made no changes, and just built it and ran it on my iPhone 11 Pro (connected over USB).
Phone vertical - no problem
Phone horizontal to the left - no problem
Phone horizontal to the right - the above error message repeated
Any idea (1) why this is occurring? And (2) why it occurs when the phone is horizontal to the right but not the left?
Any known work-arounds (besides "Don't hold it to the right")?
I've been working in Swift on iOS to access images via UIImagePickerController, pulling the PHAsset from the picker delegate's "info" dictionary, and then pulling GPS information from the PHAsset.
For newer photos, the asset.location is populated with GPS information.
Also, with newer photos, CIImage's property dictionary has {GPS} information.
So all is good with newer photos.
But when I go back to images taken in 2017, asset.location is nil and there is no {GPS} information in the CIImage.
However, if I export the photo from Photos app on my Mac and then view it in Preview, there *is* GPS information.
So am I missing some settings to find the GPS information in older photos using PHAsset on iOS?
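For reference, a condensed sketch of the access pattern described above (picker setup and delegate boilerplate omitted; this is illustrative, not the exact code):

import UIKit
import Photos
import CoreImage

func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    // Newer photos report a location; photos from 2017 come back nil here.
    if let asset = info[.phAsset] as? PHAsset {
        print("asset.location =", asset.location ?? "nil")
    }
    // Same story for the {GPS} dictionary on the image itself.
    if let url = info[.imageURL] as? URL, let ciImage = CIImage(contentsOf: url) {
        print("{GPS} =", ciImage.properties["{GPS}"] ?? "nil")
    }
    picker.dismiss(animated: true)
}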
Thanks,
Is there an equivalent to MultipeerConnectivityService that implements SynchronizationService over TCP/IP connections?
I'd like to have two users in separate locations, each with a local ARAnchor but then have a synchronized RealityKit scene graph attached to their separate ARAnchors.
Is this possible?
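For reference, the existing MultipeerConnectivityService setup I'd like an equivalent of looks roughly like this (MCSession configuration abbreviated):

import MultipeerConnectivity
import RealityKit

// Local-network synchronization today: hand an MCSession to RealityKit.
let peerID = MCPeerID(displayName: UIDevice.current.name)
let mcSession = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)
arView.scene.synchronizationService = try MultipeerConnectivityService(session: mcSession)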
Thanks,
The ARMeshGeometry documentation - https://developer.apple.com/documentation/arkit/armeshgeometry - references ARMeshClassification - https://developer.apple.com/documentation/arkit/armeshclassification - but I cannot find any obvious way to get classification information for the mesh data.
I found the classificationOf(faceWithIndex: index) function in the Xcode sample project Visualizing and Interacting with a Reconstructed Scene - https://developer.apple.com/documentation/arkit/content_anchors/visualizing_and_interacting_with_a_reconstructed_scene, but it seems pretty complex.
Is there something simpler that I am missing?
It also seems from the code that a mesh doesn't have a classification, but only individual geometry faces in the mesh have a classification.
Is it common for a single mesh to represent many different objects (e.g., a chair, floor, and wall) all at the same time?
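For reference, the per-face lookup in that sample essentially reads one byte per face out of the classification geometry source; a condensed sketch (error handling omitted, and it assumes sceneReconstruction = .meshWithClassification):

// Condensed version of the sample's per-face classification lookup.
func classification(ofFace faceIndex: Int, in meshAnchor: ARMeshAnchor) -> ARMeshClassification {
    guard let source = meshAnchor.geometry.classification else { return .none }
    // One UInt8 per face, located using the source's offset and stride.
    let byteOffset = source.offset + faceIndex * source.stride
    let rawValue = source.buffer.contents()
        .advanced(by: byteOffset)
        .assumingMemoryBound(to: UInt8.self)
        .pointee
    return ARMeshClassification(rawValue: Int(rawValue)) ?? .none
}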
Thanks,
Given an AnchorEntity from say RealityKit's Scene anchors collection, is it possible to retrieve the ARAnchor that was used when creating the AnchorEntity?
Looking through the AnchorEntity documentation - https://developer.apple.com/documentation/realitykit/anchorentity - it seems that while you can create an AnchorEntity using an ARAnchor, there is no way to retrieve that ARAnchor afterwards.
Alternatively, the ARSession delegate functions receive a list of ARAnchors or an ARFrame that has ARAnchors, but I could not find an approach to retrieve AnchorEntities that might be associated with any of these ARAnchors.
Given an ARAnchor, is there a way to get an AnchorEntity associated with it?
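The closest workaround I can think of is matching on identifiers, under the assumption that an AnchorEntity created from an ARAnchor exposes that anchor's UUID via anchorIdentifier (I have not verified this holds in every case):

// Assumption: AnchorEntity(anchor:) records the ARAnchor's identifier, so the
// two objects can be paired up by UUID.
func anchorEntity(for arAnchor: ARAnchor, in arView: ARView) -> AnchorEntity? {
    arView.scene.anchors
        .compactMap { $0 as? AnchorEntity }
        .first { $0.anchorIdentifier == arAnchor.identifier }
}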
Thanks,
Is it possible to turn on and off different occlusion material when using Scene Understanding with LiDAR and RealityKit?
For example, if ARKit identifies a wall, I don't want that mesh to be used during occlusion (but I do want occlusion for other things, like the couch or the floor)
If I could do this, it would essentially make my walls transparent, and I could see the RealityKit objects that extend beyond the room I am in.
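For context, the only switch I'm aware of is the global scene-understanding option, which does not distinguish classifications:

// Global on/off only, as far as I can tell; no per-classification variant.
arView.environment.sceneUnderstanding.options.insert(.occlusion)   // occlude with the full mesh
arView.environment.sceneUnderstanding.options.remove(.occlusion)   // no occlusion at all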
Thanks,
I'm using Entity's move(to:relativeTo:duration:timingFunction:) with a Transform to rotate an Entity around the Y axis in an animated fashion (e.g., duration 2 seconds).
Unfortunately, when I rotate from 6 radians (343.7°) to 6.6 radians (378.2°), the rotation does not continue anti-clockwise past 2 pi (360°) but instead goes backwards to 0.317 radians (18.2°).
Is there a way to force a rotation about an axis to go in a clockwise or anti-clockwise direction when animating?
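One workaround sketch, assuming the interpolation always takes the shortest path: split the move into several steps of well under pi radians each, so every step can only rotate the intended way (the helper below is mine, not an official API):

// Hypothetical helper: chain several small moves so each interpolation step is
// unambiguous about direction.
func animateYaw(of entity: Entity, from start: Float, to end: Float, duration: TimeInterval) {
    let steps = 4
    let stepDuration = duration / Double(steps)
    for i in 1...steps {
        var target = entity.transform
        target.rotation = simd_quatf(angle: start + (end - start) * Float(i) / Float(steps),
                                     axis: [0, 1, 0])
        DispatchQueue.main.asyncAfter(deadline: .now() + stepDuration * Double(i - 1)) {
            entity.move(to: target, relativeTo: entity.parent,
                        duration: stepDuration, timingFunction: .linear)
        }
    }
}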
I am experiencing a single video frame glitch when transitioning from one RealityKit Entity animation to another when transitionDuration is non-zero.
This is with the current RealityKit and iOS 14.6 (i.e., not the betas).
Is this a known issue?
Have people succeeded in transitioning from one animation to another with a non-zero transition time and no strange blink?
Background:
I loaded two USDZ models, each with a different animation. One model will be shown, but the AnimationResource from the second model will (at some point) be applied to the first model.
I originally created the models with Adobe's Mixamo site (they are characters moving), downloaded the .fbx files, and then converted them to USDZ with Apple's "Reality Converter".
I start the first model (robot) with its animation, then at some point I apply the animation from the second model (nextAnimationToPlay) to the original model (robot).
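Roughly how the two models and animations are set up (abbreviated; the resource names are placeholders):

// Load both USDZ models; only the first is added to the scene, the second just
// donates its AnimationResource. (Resource names are placeholders.)
let robot = try Entity.load(named: "robot_idle")
let donor = try Entity.load(named: "robot_wave")
let nextAnimationToPlay = donor.availableAnimations[0]
robot.playAnimation(robot.availableAnimations[0], transitionDuration: 0, startsPaused: false)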
If the transitionDuration is set to something other than 0, there appears a single video frame glitch (or blink) before the animation transition occurs (that single frame may be the model's original T-pose, but I'm not certain).
robot.playAnimation(nextAnimationToPlay, transitionDuration: 1.0, startsPaused: false)
If transitionDuration is set to 0, there is no glitch, but then I lose the smooth transition.
I have tried variations. For example, setting startsPaused to true and then calling resume() on the playback controller; also, waiting until the current animation completes before calling playAnimation() with the next animation. Still, I get the quick blink.
Any suggestions or pointers would be appreciated.
Thanks,
In ARKit+RealityKit I do a raycast from the ARView's center, then create an AnchorEntity at the result and add a target ModelEntity (a flattened cube) to the AnchorEntity.
// query is an ARRaycastQuery built from the ARView's center (construction not shown here)
guard let result = session.raycast(query).first else { return }
let newAnchor = AnchorEntity(raycastResult: result)
newAnchor.addChild(placementTargetEntity)
arView.scene.addAnchor(newAnchor)
I repeat this for each frame update via the ARSessionDelegate session(_:didUpdate:), removing the previous AnchorEntity first.
I use this as a target to let the user know where the full model will be placed when they tap the screen.
This works fine under iOS 14, but I get strange results with iPadOS 15 - two different placements are created on different screen updates, offset and slightly rotated from each other.
Has anyone else had issues with raycast() or creating an AnchorEntity from the result?
Is the use of session(_:didUpdate:) via ARSessionDelegate to update virtual content considered bad style now? (I noticed that in the WWDC21 sessions they used a different mechanism to update their virtual content.)
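If the different mechanism in question is the per-frame scene event, the subscription looks roughly like this (my assumption about what was shown; the handler body is illustrative):

import Combine
import RealityKit

// Alternative to ARSessionDelegate for per-frame work: subscribe to the scene's
// update event and keep the returned token alive.
var updateSubscription: Cancellable?
updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { event in
    // Move/refresh the placement target here, once per rendered frame.
}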
(If any Apple engineers read this, I filed a feedback with sample code and video of the issue at FB9535616)
Does RealityKit have an API to test if a ModelEntity (or its CollisionComponent) is currently visible on the screen?
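A rough approximation of what I'm after, in case it helps frame the question (not an official visibility test; the helper is hypothetical and ignores occlusion by other content):

// Hypothetical helper: project the entity's bounds center into view coordinates
// and check whether it lands inside the view.
func isRoughlyOnScreen(_ entity: Entity, in arView: ARView) -> Bool {
    let worldCenter = entity.visualBounds(relativeTo: nil).center
    guard let screenPoint = arView.project(worldCenter) else { return false } // behind the camera
    return arView.bounds.contains(screenPoint)
}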
During testing of my app, the frames per second -- shown either in the Xcode debug navigator or via ARView's .showStatistics -- sometimes drops by half and stays down there.
This low FPS will continue even when I kill the app completely and restart.
However, after giving my phone a break, the fps returns to 60 fps.
Does ARKit automatically throttle down FPS when the device gets too hot?
If so, is there a signal my program can catch from ARKit or the OS that can tell me this is happening?
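Not ARKit-specific, but the OS-level signal I know of is the process thermal state; a minimal sketch of watching it, to correlate with the FPS drop:

import Foundation

// Observe system thermal state changes (.nominal, .fair, .serious, .critical).
let thermalObserver = NotificationCenter.default.addObserver(
    forName: ProcessInfo.thermalStateDidChangeNotification,
    object: nil,
    queue: .main
) { _ in
    print("Thermal state changed:", ProcessInfo.processInfo.thermalState.rawValue)
}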
I've been playing with Apple's StoreKit 2 demo code (buying the cars, subscriptions, ...), and sometimes when I purchase a car, one or more of the other buttons visually flip state (e.g., purchased checkmark changes back to the price).
Leaving the StoreView and returning to it shows the correct state for each of the buttons.
I am using the StoreKit Configuration Products.storekit (for the scheme), so testing in Xcode.
I get this in both the simulator and on my actual phone.
The issue is random. The vast majority of the time everything works perfectly.
Is anyone else seeing this issue?
Does anyone know how to address it?
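In case it's useful context for an answer: the direction I'd try is re-deriving each button's purchased state from current entitlements whenever the view appears, rather than trusting cached flags. A sketch (the helper is mine, not from the demo code):

import StoreKit

// Hypothetical helper: rebuild the set of purchased product IDs from StoreKit 2's
// current entitlements instead of relying on in-memory state.
func purchasedProductIDs() async -> Set<String> {
    var purchased: Set<String> = []
    for await result in Transaction.currentEntitlements {
        if case .verified(let transaction) = result {
            purchased.insert(transaction.productID)
        }
    }
    return purchased
}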
Dev environment:
Xcode 13.0 beta 5 (13A5212g)
macOS 12.0 Beta (21A5534d)
Mac mini (M1, 2020)
When testing In-App Purchases in Xcode with a .storekit file, I can delete past purchase transactions, so I can re-test the purchase experience.
I've switched to using a Sandbox tester and made purchases. However, I cannot find how to delete previous purchase transactions made in the sandbox so I can re-run the tests.
Is this possible?
When I create an AnchorEntity like this:
let entityAnchor = AnchorEntity(plane: [.horizontal], classification: [.floor], minimumBounds: [0.2,0.2])
and add a USDZ model to it, I get a nice ground shadow.
But if I create an AnchorEntity using an ARAnchor like this:
let entityAnchor = AnchorEntity(anchor: anchor)
I do not get that nice ground shadow.
Is there a way to get that ground shadow I get from a plane anchor, but with an AnchorEntity where I can specify where it goes or attach it to an ARAnchor?
[Note: for LiDAR devices, I can get a nice shadow using
config.sceneReconstruction = .mesh
arView.environment.sceneUnderstanding.options.insert(.occlusion)
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
but creating the environment mesh is computationally expensive. I'd like to avoid that if possible.]
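One direction I have been wondering about (untested, and it may not reproduce the automatic grounding shadow) is attaching a shadow-casting directional light to the same ARAnchor-based AnchorEntity; the values and orientation below are illustrative only:

// Untested idea: a shadow-casting directional light parented to the same anchor.
let entityAnchor = AnchorEntity(anchor: anchor)
let light = DirectionalLight()
light.light = DirectionalLightComponent(color: .white, intensity: 2000, isRealWorldProxy: false)
light.shadow = DirectionalLightComponent.Shadow(maximumDistance: 3, depthBias: 2)
light.look(at: .zero, from: [0, 2, 0], relativeTo: entityAnchor)
entityAnchor.addChild(light)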