Hi, I was working on my company's ARKit application and noticed that even when the ARView is closed, the memory allocated for it never gets deallocated. I made a simple app to isolate the issue: github link
Please let me know whether this is expected behavior, and if it is, any insights on how to reclaim that memory would be highly appreciated!
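A minimal sketch of the kind of teardown I would expect to release that memory, assuming a UIKit-hosted ARView (the container class and lifecycle hook here are placeholders):

```swift
import UIKit
import ARKit
import RealityKit

final class ARContainerViewController: UIViewController {
    private var arView: ARView?

    override func viewDidLoad() {
        super.viewDidLoad()
        let view = ARView(frame: self.view.bounds)
        self.view.addSubview(view)
        arView = view
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Tear down explicitly so nothing keeps the ARView alive after dismissal.
        arView?.session.pause()
        arView?.scene.anchors.removeAll()
        arView?.removeFromSuperview()
        arView = nil
    }
}
```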
I am trying to make the immersive version of AVPlayerViewController bigger, but I can't find any information on how to go about it. It seems that if I want to change the immersive video viewing experience, the only thing I can do is use a VideoMaterial on a ModelEntity built with .generatePlane (see the sketch below). Is there a way to change the video size in immersive mode for AVPlayerViewController?
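For reference, the VideoMaterial fallback I mean is roughly this (a sketch; the URL and plane dimensions are placeholders):

```swift
import RealityKit
import AVFoundation

// Builds a plane entity that plays video through a VideoMaterial,
// so its apparent size in the immersive space is whatever the plane size is.
func makeVideoPlane(url: URL) -> ModelEntity {
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 4.0, height: 2.25),
                            materials: [material])
    player.play()
    return plane
}
```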
Hi, I have a SwiftData class that has an enum, PromptFont, as a property.
The data class is hosted in CloudKit.
Upon launch, I started getting this error:
Below are the property and the enum.
When I released this app, there was no error, and it was building and running flawlessly.
When I build this on a simulator (iPadOS 17.5), it runs just fine.
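For illustration only, the general shape of a CloudKit-backed SwiftData model with an enum property is something like this (everything except the PromptFont name is a placeholder, not my actual code):

```swift
import SwiftData

// Enums stored by SwiftData need to be Codable.
enum PromptFont: String, Codable, CaseIterable {
    case serif, sansSerif, monospaced
}

@Model
final class Prompt {
    // CloudKit-synced models need default values (or optionals) on stored properties.
    var text: String = ""
    var font: PromptFont = PromptFont.serif

    init(text: String = "", font: PromptFont = .serif) {
        self.text = text
        self.font = font
    }
}
```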
In visionOS, I have been trying to implement this view as a background for an information view, but I cannot find any information about it anywhere. Does anyone know what this is called, or any workaround to achieve this look?
Hi, I added a DockingRegion to my scene in Reality Composer Pro, and I am able to load the scene, but the DockingRegion is getting ignored and the scene renders with no change to the AVPlayerViewController window. As can be seen in the Reality Composer Pro screenshot below, I set the width of the player to 666 and moved it back by 300 cm, but the actual result does not reflect the position I set in Reality Composer Pro.
Is there anything else I should do other than loading the Entity and adding it to the RealityView (sketched below)? Specifically, do I have to get the DockingRegion from within the usda file and somehow enable it?
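For reference, the loading step is essentially the stock RealityView pattern, roughly like this sketch ("Scene" is the default scene name from the Reality Composer Pro package and may differ):

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Load the scene authored in Reality Composer Pro and add it as-is.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}
```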
Sample repo: https://github.com/ckse93/VideoDiffusionIssueSHowcase
The repo has a detailed step-by-step workflow, as well as screenshots, the Python script's computed result, and the parameters used.
After running computeDiffuseReflectionUVs.py and mapping the textures and diffuse reflection to the objects, I noticed that the diffuse reflection does not produce any color.
The expected result is shown below; the diffused light has color.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
Hi, I downloaded a few files from the Apple Developer website, and they are in the .reality file format. I wanted to see how they are constructed, but there is no way to open them and look at the content inside (3D models, shaders, animations, etc.). Is there a way to look at the contents of a .reality file?
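One option that seems possible is loading the file in code and dumping the entity hierarchy, which would at least show the entity names and component counts (a sketch; the file path is a placeholder):

```swift
import Foundation
import RealityKit

// Recursively print each entity's name, type, and component count.
func dumpHierarchy(of entity: Entity, indent: String = "") {
    print("\(indent)\(entity.name) [\(type(of: entity))] components: \(entity.components.count)")
    for child in entity.children {
        dumpHierarchy(of: child, indent: indent + "  ")
    }
}

let url = URL(fileURLWithPath: "/path/to/Downloaded.reality")
if let root = try? Entity.load(contentsOf: url) {
    dumpHierarchy(of: root)
}
```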
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
Hi, I'm trying to achieve a 3D photo effect using a photo and a depth map with a GeometryModifier. The offset is getting applied correctly in Reality Composer Pro and in Xcode, but when I launch the app, the model offset is not applied. Here are the screenshots of the shader graphs and how it looks in the app.
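For context, the kind of runtime setup I mean is roughly this (a sketch; the material path and scene file name are placeholders for the actual Reality Composer Pro asset):

```swift
import RealityKit
import RealityKitContent

// Load the shader graph material authored in Reality Composer Pro and
// apply it to a plane created in code.
func makePhotoPlane() async throws -> ModelEntity {
    let material = try await ShaderGraphMaterial(named: "/Root/DepthPhotoMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    return ModelEntity(mesh: .generatePlane(width: 1.0, height: 1.0),
                       materials: [material])
}
```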
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
Is it possible to dynamically update the ModelPositionOffset of a GeometryModifier with a depth map image?
In my code I set up the parameter for the "DepthMapTexture" universal input node,
and tried setting the depth map for depthTextureResource. I have two DrawableQueues: one for setting InputTexture, and one for setting DepthMapTexture. This only shows the part that concerns setting DepthMapTexture.
This is where I define the plane entity,
and this is the shader graph.
What I noticed with GeometryModifier is that the depth map image has to have the same dimensions as the input image.
When I applied this material to a usdz file with a pre-assigned image and depth map in RCP, and loaded that Entity from code, the depth map was applied correctly.
What I am unsure about is whether it is impossible to define a model entity from code, apply the ShaderGraphMaterial from RCP, and dynamically update the image used in the GeometryModifier.
Maybe I'm missing something when defining the Entity, something that allows geometric modifications? (A sketch of the update path I mean is below.)
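To be concrete, the update path I'm trying to get working is roughly this (a sketch; "DepthMapTexture" is the promoted input name from my graph, the rest is simplified):

```swift
import RealityKit

// Push a new depth texture into the entity's ShaderGraphMaterial and
// write the material back so the change takes effect.
func updateDepthMap(on entity: ModelEntity, with depthTexture: TextureResource) throws {
    guard var material = entity.model?.materials.first as? ShaderGraphMaterial else { return }
    try material.setParameter(name: "DepthMapTexture",
                              value: .textureResource(depthTexture))
    entity.model?.materials = [material]
}
```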
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
I have been trying to implement this look where a component looks "pushed in", but I could not find any resources regarding this effect. The closest I got was a combination of a RoundedRectangle and .glassBackgroundEffect(), but this makes the view look pushed out instead of pushed in.
I was wondering if this is achievable at the SwiftUI level, or even at the UIKit level.
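The closest approximation I can think of at the SwiftUI level is an inner shadow on the fill, something like this sketch (values are arbitrary, and it's not a real recessed glass material):

```swift
import SwiftUI

struct RecessedCard: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 24, style: .continuous)
            .fill(
                // An inner shadow on the fill suggests depth going into the surface.
                Color.black.opacity(0.2)
                    .shadow(.inner(color: .black.opacity(0.5), radius: 6, x: 0, y: 3))
            )
            .frame(width: 320, height: 180)
    }
}
```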
Hi, I'm trying to place an object in front of an AVPlayer that is docked in a VideoDockingRegion, but when launched in immersive space, the video shows through the objects placed in front of it. How do I make sure these objects are visible?
Image for reference:
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
ARKit
RealityKit
Reality Composer Pro
Shader Graph Editor
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light falloff effect just covers over the objects. I used the Apple-provided attenuation map (I did not specify an attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate the vertices and create shadows for, say, the backs of the chairs. Am I understanding this wrong?
Prefetching logic for UICollectionView on visionOS does not work.
I have set up a standalone test repo to demonstrate this issue. The repo is basically a visionOS version of Apple's guide project on implementing prefetching logic.
In the repo you will see a simple view controller that has a UICollectionView, wrapped inside a UIViewControllerRepresentable (condensed sketch below).
On scroll, it should print 🕊️ prefetch start to the console to demonstrate that func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) is called. However, it never happens on visionOS devices.
With the same code, it behaves correctly on iOS devices.
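For reference, the setup in the repo boils down to something like this condensed sketch (cell contents and item counts are placeholders):

```swift
import SwiftUI
import UIKit

final class GridViewController: UIViewController,
                                UICollectionViewDataSource,
                                UICollectionViewDataSourcePrefetching {
    private var collectionView: UICollectionView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let layout = UICollectionViewFlowLayout()
        layout.itemSize = CGSize(width: 200, height: 200)
        collectionView = UICollectionView(frame: view.bounds, collectionViewLayout: layout)
        collectionView.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "cell")
        collectionView.dataSource = self
        collectionView.prefetchDataSource = self   // hook up the prefetching delegate
        view.addSubview(collectionView)
    }

    func collectionView(_ collectionView: UICollectionView,
                        numberOfItemsInSection section: Int) -> Int { 1000 }

    func collectionView(_ collectionView: UICollectionView,
                        cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath)
    }

    // Expected to fire while scrolling; it never gets called on visionOS in my tests.
    func collectionView(_ collectionView: UICollectionView,
                        prefetchItemsAt indexPaths: [IndexPath]) {
        print("🕊️ prefetch start")
    }
}

struct GridView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> GridViewController { GridViewController() }
    func updateUIViewController(_ uiViewController: GridViewController, context: Context) {}
}
```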
Topic:
Spatial Computing
SubTopic:
General
Tags:
SwiftUI
UIKit
visionOS
iPad and iOS apps on visionOS