Post

Replies

Boosts

Views

Activity

ASPasswordCredential Returns a Blank Password with Apple Password App
Using the simplified sign-in with tvOS and a third-party password manager, I receive a complete ASPasswordCredential, and I can easily log into my app. When I do the same thing but with Apple's password manager as the source, I receive an ASPasswordCredential that includes the email address, but the password is an empty string. I have tried deleting the credentials from Apple Passwords and regenerating them with a new login to the app's website. I have tried restarting my iPhone. Is this the expected behavior? How should I get a password from Apple's Passwords app with an ASAuthorizationPasswordRequest?
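For context, a minimal sketch of the request flow being described (the function name and delegate wiring are illustrative, not taken from the post):

    import AuthenticationServices

    // Sketch: ask the system for a saved password credential.
    func requestSavedCredential(delegate: ASAuthorizationControllerDelegate & ASAuthorizationControllerPresentationContextProviding) {
        let passwordRequest = ASAuthorizationPasswordProvider().createRequest()
        let controller = ASAuthorizationController(authorizationRequests: [passwordRequest])
        controller.delegate = delegate
        controller.presentationContextProvider = delegate
        controller.performRequests()
    }

    // In the delegate, the returned credential carries the user name and password:
    // func authorizationController(controller: ASAuthorizationController,
    //                              didCompleteWithAuthorization authorization: ASAuthorization) {
    //     if let credential = authorization.credential as? ASPasswordCredential {
    //         // credential.user is populated; credential.password is the empty string in the reported case
    //     }
    // }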
0
0
114
6d
Getting ShinyTV Example to Work
I have downloaded the ShinyTV example to test simplified sign-in on tvOS since it is not working in my own app, and I am having the same issue there. After assigning my team to the sample app, the bundle ID updates with my Team ID. I copy the bundle ID into a file entitled "apple-app-site-association" with this format:

    {
      "webcredentials": {
        "apps": [ "{MyTeamID}.com.example.apple-samplecode.ShinyTV{MyTeamID}" ]
      }
    }

I upload the file to my personal site, ensuring that the content type is application/json. I adjust the Associated Domains entitlement to:

    webcredentials:*.{personal-site.com}?mode=developer

using the alternate mode to force it to load from my site rather than from the CDN. When I run the build on tvOS and click the Sign In button, it fails with these errors:

    Failed to start session: Error Domain=com.apple.CompanionServices.CPSErrorDomain Code=205 "Failed to prepare authorization requests" UserInfo={NSMultipleUnderlyingErrorsKey=(
        "Error Domain=com.apple.CompanionServices.CPSErrorDomain Code=205 \"Missing associated web credentials domains\" UserInfo={NSLocalizedDescription=Missing associated web credentials domains}"
    ), NSLocalizedDescription=Failed to prepare authorization requests}
    Session failed: Error Domain=com.apple.CompanionServices.CPSErrorDomain Code=205 "Failed to prepare authorization requests" UserInfo={NSMultipleUnderlyingErrorsKey=(
        "Error Domain=com.apple.CompanionServices.CPSErrorDomain Code=205 \"Missing associated web credentials domains\" UserInfo={NSLocalizedDescription=Missing associated web credentials domains}"
    ), NSLocalizedDescription=Failed to prepare authorization requests}
    ASAuthorizationController credential request failed with error: Error Domain=com.apple.AuthenticationServices.AuthorizationError Code=1004 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=(
        "Error Domain=com.apple.CompanionServices.CPSErrorDomain Code=205 \"(null)\""
    )}
    Failed with error: Error Domain=com.apple.AuthenticationServices.AuthorizationError Code=1004 "Failed to prepare authorization requests" UserInfo={NSMultipleUnderlyingErrorsKey=(
        "Error Domain=com.apple.CompanionServices.CPSErrorDomain Code=205 \"Missing associated web credentials domains\" UserInfo={NSLocalizedDescription=Missing associated web credentials domains}"
    ), NSLocalizedDescription=Failed to prepare authorization requests}

What am I missing here?
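For reference, the associated-domains documentation describes the site file and entitlement in roughly this shape; the domain, bundle ID, and Team ID below are placeholders, not values from the post:

    // Served from https://example.com/.well-known/apple-app-site-association with content type application/json
    {
      "webcredentials": {
        "apps": [ "ABCDE12345.com.example.MyApp" ]
      }
    }

    // Matching Associated Domains entitlement entry (developer mode):
    webcredentials:example.com?mode=developer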
5
0
165
1w
Correct formatting of webcredentials app id
I have been trying to add improved tvOS login using an Associated Domain and web credentials. In some places the format is given as <TEAM_ID>.<BUNDLE_ID>, and in other places as <APP_ID>.<BUNDLE_ID>. I have not been able to get either to work, but in order to troubleshoot properly I want to make sure that I am using the correct identifier. Can someone give me a definitive answer? The documentation says App ID, but I have seen Apple engineers in this forum say Team ID, and many other posts around the internet also say Team ID.
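For illustration, the two formulations look like this in the apple-app-site-association file (all identifiers are placeholders); for most developer accounts the App ID prefix is the same string as the Team ID, which may be why the two are used interchangeably:

    "webcredentials": {
      "apps": [
        "<TeamID>.com.example.MyTVApp",      // e.g. ABCDE12345.com.example.MyTVApp
        "<AppIDPrefix>.com.example.MyTVApp"  // the App ID prefix is usually, but not always, identical to the Team ID
      ]
    }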
2
0
40
1w
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor gets continually interrupted when looking at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
0
0
396
Nov ’24
Custom 3D Window Using RealityView
I have a RealityView displaying a Reality Composer Pro scene in a window. Things are generally working fine, but the content appears in front of and blocks the visionOS window, rather than being contained inside it. Do I need to switch to a volumetric view for this to work? My scene simply contains a flat display that renders 3D content (it has a material that sends different imagery to each eye).
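For reference, a minimal sketch of the volumetric alternative mentioned above (window identifier, scene name, and sizes are illustrative):

    import SwiftUI
    import RealityKit
    import RealityKitContent  // default name of the Reality Composer Pro package in the visionOS template; adjust to the project

    @main
    struct ViewerApp: App {
        var body: some Scene {
            // A volumetric window gives the scene a bounded 3D region to live in,
            // instead of clipping content against a flat 2D window plane.
            WindowGroup(id: "viewer") {
                RealityView { content in
                    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                        content.add(scene)
                    }
                }
            }
            .windowStyle(.volumetric)
            .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
        }
    }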
3
0
533
Nov ’24
RealityKit ShaderGraphMaterial parameters in Reality Composer Pro
I have a custom material built with Shader Graph in Reality Composer Pro, and I am trying to rig up sliders to control the shader's parameter values. I am able to read the values from the shader graph without a problem, and I can even update them from the LLDB command line and read the new values back. But the changes are not reflected in the rendered graphics. Is there some sort of update() method or similar that is required for changed parameter values to take effect? On a related note, I am trying to understand what the MaterialParameters.Handle type is, and why one would access a material parameter via a handle rather than just by name.
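In case it helps frame the question, ShaderGraphMaterial is a value type, so one pattern that may be relevant is writing the mutated copy back into the entity's ModelComponent. A minimal sketch (the parameter name is a placeholder, not from the post):

    import RealityKit

    func setSliderValue(_ value: Float, on entity: Entity) {
        guard var model = entity.components[ModelComponent.self],
              var material = model.materials.first as? ShaderGraphMaterial else { return }

        // "myParameter" is illustrative; it must match a promoted input in the shader graph.
        try? material.setParameter(name: "myParameter", value: .float(value))

        // Writing the mutated material (and component) back is what makes the change visible.
        model.materials = [material]
        entity.components.set(model)
    }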
1
0
801
Aug ’24
RealityKit Subdivide
In the Discover RealityKit APIs for iOS, macOS, and visionOS presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have further details on how this works in RealityKit?
4
1
922
Jul ’24
Using Core Location in App Intent
I would like to retrieve the user's current location when they are logging some information with my App Intent. When the app has just been run, this works fine, but if it has been force quit or not run recently, the Core Location lookup times out. I have tried logging the information and using the Core Location background mode, and I can verify that the background mode is triggering because there is an indicator on the status bar, but the background mode does not seem to fire the delegate. Is there a good way to debug this? When I run the app, everything works fine, but I can't confirm that delegate calls are going through because I can't debug from an App Intent launch. Here is the perform method from my App Intent:

    func perform() async throws -> some ProvidesDialog {
        switch PersistenceController.shared.addItem(name: name, inBackground: true) {
        case .success(_):
            return .result(dialog: "Created new pin called \(name)")
        case .failure(let error):
            return .result(dialog: "There was a problem: \(error.localizedDescription)")
        }
    }

addItem calls LocationManager.shared.getCurrentCoordinates:

    func getCurrentCoordinates(inBackground: Bool = false, callback: @escaping (CLLocation?) -> Void) {
        if lastSeenLocation != nil {
            callback(lastSeenLocation)
            return
        }
        if inBackground {
            locationManager.allowsBackgroundLocationUpdates = true
            locationManager.showsBackgroundLocationIndicator = false
        }
        let status = CLLocationManager.authorizationStatus()
        guard status == .authorizedAlways || status == .authorizedWhenInUse else {
            DispatchQueue.main.async { [weak self] in
                self?.callback?(nil)
                self?.locationManager.allowsBackgroundLocationUpdates = false
            }
            return
        }
        self.callback = callback
        locationManager.startUpdatingLocation()
    }

The CLLocationManager delegate's didUpdateLocations then calls the callback with the location and sets allowsBackgroundLocationUpdates back to false, and the callback saves the location data to Core Data. What is the best practice for using Core Location in an App Intent?
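One pattern that may be worth sketching here (not from the original post): bridging the callback into async/await with a continuation so perform() can await the location directly. The helper name is hypothetical, and it assumes the callback fires exactly once:

    import CoreLocation

    // Hypothetical helper: wraps the callback-based lookup so an async perform() can await it.
    func currentLocation(inBackground: Bool) async -> CLLocation? {
        await withCheckedContinuation { continuation in
            LocationManager.shared.getCurrentCoordinates(inBackground: inBackground) { location in
                continuation.resume(returning: location)
            }
        }
    }

    // Inside perform():
    // let location = await currentLocation(inBackground: true)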
0
0
728
Sep ’23
RealityViewContent update
I am working on a project where changes in a window are reflected in a volumetric view that includes a RealityView. I have a shared data model between the window and the volumetric view, but it is unclear to me how I can programmatically refresh the RealityViewContent. Initially I tried holding the RealityViewContent passed from the RealityView closure in the data model, and I also tried embedding a .sink into the closure, but because the RealityViewContent is inout, neither of those works. And changes to the window's contents do not cause the RealityView's update closure to fire. Is there a way to notify the RealityViewContent to update?
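A minimal sketch of the pattern where the update closure is driven by observed SwiftUI state rather than by touching RealityViewContent directly (the model type and property names are placeholders):

    import SwiftUI
    import RealityKit

    @Observable
    class SharedModel {
        var scale: Float = 1.0
    }

    struct VolumeView: View {
        @Environment(SharedModel.self) private var model

        var body: some View {
            RealityView { content in
                // Build the initial scene once.
                content.add(ModelEntity(mesh: .generateSphere(radius: 0.1)))
            } update: { content in
                // Reading model.scale here makes SwiftUI re-run this closure
                // whenever the observed property changes in the other window.
                for entity in content.entities {
                    entity.scale = .init(repeating: model.scale)
                }
            }
        }
    }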
4
0
1.3k
Jun ’23
Override traitCollection of UIView
I am using a UIView with a nib as a template for a UIImage that I am generating, and I want to handle the output on iPad in landscape differently than in portrait. The best way I have figured I can do that is by setting up my landscape view for horizontally regular, vertically compact traits in the nib and assigning those traits before generating the image. I have tried using the performAsCurrent method, which successfully changes the UITraitCollection.current value but does not affect my UIView's traitCollection property, and I have tried overriding the traitCollection getter in the UIView subclass, which returns the error: "Class overrides the -traitCollection getter, which is not supported. If you're trying to override traits, you must use the appropriate API." Is there a way to do this for a UIView that is never drawn to the screen?
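One approach that may be worth sketching (not from the post): host the template view in a throwaway container view controller and use setOverrideTraitCollection(_:forChild:), which is the supported override path on these iOS versions. The helper below is illustrative, and depending on setup the hierarchy may still need to be attached to a window for the traits to propagate and for drawHierarchy to render:

    import UIKit

    // Hypothetical helper: renders templateView as if it were in a
    // horizontally regular, vertically compact environment.
    func renderImage(from templateView: UIView, size: CGSize) -> UIImage {
        let child = UIViewController()
        child.view = templateView

        let parent = UIViewController()
        parent.addChild(child)
        parent.view.addSubview(child.view)
        parent.setOverrideTraitCollection(
            UITraitCollection(traitsFrom: [
                UITraitCollection(horizontalSizeClass: .regular),
                UITraitCollection(verticalSizeClass: .compact)
            ]),
            forChild: child
        )
        child.didMove(toParent: parent)

        templateView.frame = CGRect(origin: .zero, size: size)
        templateView.setNeedsLayout()
        templateView.layoutIfNeeded()

        return UIGraphicsImageRenderer(size: size).image { _ in
            templateView.drawHierarchy(in: templateView.bounds, afterScreenUpdates: true)
        }
    }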
1
0
3.7k
Jul ’21
ARKit 5 Motion Capture Enabled?
I have an existing ARKit motion capture app that I recompiled with the new Xcode beta and ran on an iPad Pro (LiDAR model) running the iOS 15 beta. I ran it alongside my iPhone 12 Pro running iOS 14.6 and made a video recording of both to see the improvements to motion capture. The recordings are identical, which suggests to me that the ARKit 5 improvements were somehow not enabled. Is there something more I need to do? Does the current iOS 15 beta include the ARKit 5 changes?
4
0
1.2k
Jul ’21
Distinguishing Between Horizontal and Vertical ARRaycastResults when using .any
I am using ARView.raycast to find out information about a given estimatedPlane in front of the camera, and I want to treat horizontal results differently than vertical ones. It seems that when using .any for the alignment argument, a lot of the results returned are .any (not .horizontal or .vertical), and very few are explicitly horizontal or vertical. Conversely, if I raycast for just one alignment, I get plenty of results in the alignment I request, so I think many of those .any results are being inferred to a .horizontal or .vertical result when you don't use the .any alignment option. So what is the best way for me to do that same kind of inference on .any results so that I can categorize them by horizontal and vertical alignment? I see there is a worldTransform, and when I print the description of a result I get something like this:

    <ARRaycastResult: 0x281598370 target=estimatedPlane worldTransform=<translation=(-0.176499 -0.960197 -1.208480) rotation=(90.00° 0.00° -3.60°)>>

So under the hood the description is converting the quaternion into Euler degrees. Is there a built-in function to do this? The one I have tried does not return results that line up with the raycast result description values.
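One way to classify results without going through Euler angles at all (a sketch, assuming the result's local Y axis points along the surface normal, as it does for plane anchors):

    import ARKit
    import simd

    enum PlaneAlignmentGuess { case horizontal, vertical, other }

    func classify(_ result: ARRaycastResult, tolerance: Float = 0.25) -> PlaneAlignmentGuess {
        // Column 1 of the world transform is the result's local +Y axis.
        let normal = simd_normalize(simd_make_float3(result.worldTransform.columns.1))
        let up = SIMD3<Float>(0, 1, 0)
        let alignment = abs(simd_dot(normal, up))

        if alignment > 1 - tolerance { return .horizontal } // normal roughly parallel to world up: floor/table
        if alignment < tolerance     { return .vertical }   // normal roughly perpendicular to world up: wall
        return .other
    }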
2
0
1k
Jun ’21
Using NSURLSession delegate with BackgroundTasks
It seems that apps using background processing are required to implement BackgroundTasks, but I am struggling to figure out how to do that when continuing a URLSession upload task after the device enters the background. Currently, I use BGTaskScheduler to register a new task, and schedule that task when I enter the background:

    BGTaskScheduler.shared.register(forTaskWithIdentifier: backgroundTaskIdentifier, using: nil) { task in
        self.handleUploadTask(task)
    }

The actual content of the task uses a stored uploadIdentifier to recreate the session configuration and get the relevant upload tasks. Then I add a cancel call for each to the BGTask's expiration handler, and (unnecessarily?) resume each of those ongoing upload tasks:

    func handleUploadTask(_ task: BGTask) {
        let uploader = UploadService()
        if let identifier = uploader.uploadIdentifier {
            let sessionConfig = URLSessionConfiguration.background(withIdentifier: identifier)
            let session = URLSession(configuration: sessionConfig, delegate: firebase, delegateQueue: OperationQueue.main)
            task.expirationHandler = {
                session.getTasksWithCompletionHandler { (_, uploadTasks, _) in
                    for uploadTask in uploadTasks {
                        uploadTask.cancel()
                    }
                }
            }
            session.getTasksWithCompletionHandler { (_, uploadTasks, _) in
                for uploadTask in uploadTasks {
                    uploadTask.resume()
                }
            }
        }
    }

How do I complete the BGTask when I get a callback from my URLSession delegate in my UploadService? Are there other pieces of the puzzle I am missing?
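A sketch of one way to close that loop (property and type names are illustrative, not from the post): keep a reference to the running BGTask and mark it completed from the background-session delegate callbacks once no tasks remain.

    import BackgroundTasks
    import Foundation

    final class UploadService: NSObject, URLSessionDelegate, URLSessionTaskDelegate {
        // Hypothetical property: the BGTask currently being serviced, set in handleUploadTask.
        var currentBackgroundTask: BGTask?

        func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
            // Called once per upload task; when the last one finishes,
            // report the BGTask's outcome so the system can account for it.
            session.getAllTasks { remaining in
                if remaining.isEmpty {
                    self.currentBackgroundTask?.setTaskCompleted(success: error == nil)
                    self.currentBackgroundTask = nil
                }
            }
        }
    }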
1
0
1.1k
Feb ’21
ARKit Body Detection in Squats
From my tests, one of the major limitations of body detection is a lack of environmental awareness. For example, body detection has no relationship to scene detection, like the floor plane, so a detected figure can move in unnatural ways, like sliding backwards, and even through the ground plane, to solve for the detected position. This is particularly true when a user squats, which causes the root joint to move down in space and child joints to move up relative to the root. During a real squat, the root moves backward and down in space, but body detection often moves the root backward one or more meters and rotates the upper leg joints outward to match what it sees, causing the knees and feet to widen even though the feet are not moving in reality. I have tried many techniques to correct for this, from feeding motion data through a SceneKit IK robot to trying to use geometry to correct for the root's backward movement, but have not been able to convincingly correct for ARKit's body detection anomalies. Is there any way to put limits on ARKit's body detection (don't move the feet), or maybe some other technique for correcting it after the fact that anyone has devised?
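As one example of the kind of after-the-fact correction being described (a sketch, assuming the floor height is already known from plane detection; joint names come from ARSkeleton.JointName):

    import ARKit

    // Shift the detected body's root so the lower foot never sinks below the floor.
    func correctedRootTransform(for bodyAnchor: ARBodyAnchor, floorY: Float) -> simd_float4x4 {
        let skeleton = bodyAnchor.skeleton
        var root = bodyAnchor.transform  // root joint in world space

        let feet: [ARSkeleton.JointName] = [.leftFoot, .rightFoot]
        let footWorldYs: [Float] = feet.compactMap { joint in
            guard let local = skeleton.modelTransform(for: joint) else { return nil }
            return (bodyAnchor.transform * local).columns.3.y
        }

        // If either foot ends up below the floor, push the whole root up by the difference.
        if let lowest = footWorldYs.min(), lowest < floorY {
            root.columns.3.y += floorY - lowest
        }
        return root
    }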
1
0
1.1k
Nov ’20
iOS 14 change in performance cost of SCNNode creation
Code that had been working for many months, including on iOS 14 beta builds, became unusable with the release of iOS 14. In the past, the performance cost of creating an SCNNode was small enough that I could use its many transform properties as a convenience tool for converting between quaternions and Euler angles, among other things, at runtime. After the release of iOS 14, the cost grew dramatically, slowing my app's frame rate to under 20 fps, down from 60-120 fps. I have found mathematical replacements for my own usage of SCNNode, but it seems like there are plenty of circumstances that could not be solved so readily.
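For anyone hitting the same thing, a sketch of the node-free conversion being referred to (the standard yaw-pitch-roll extraction; SceneKit's eulerAngles may use a different axis ordering, so treat the mapping to SCNNode behavior as an assumption to verify):

    import Foundation
    import simd

    // Convert a quaternion to Euler angles without creating an SCNNode.
    // Returns rotations about the x, y, and z axes (in radians), using the common ZYX convention.
    func eulerAngles(from q: simd_quatf) -> SIMD3<Float> {
        let x = q.imag.x, y = q.imag.y, z = q.imag.z, w = q.real

        let roll  = atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
        let sinp  = 2 * (w * y - z * x)
        let pitch = abs(sinp) >= 1 ? Float(signOf: sinp, magnitudeOf: .pi / 2) : asin(sinp)
        let yaw   = atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))

        return SIMD3<Float>(roll, pitch, yaw)
    }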
2
0
1.4k
Sep ’20