What is the recommended best practice for importing a Blender 3D file into Reality Composer Pro (RCP)? I assume it should be exported as a .usdz file? Is there a WWDC24 session or other Apple resource that best explains this? I want to make sure I provide the right format/file to RCP from Blender.
I found some snapshot APIs in the developer documentation, like the ones below:
RealityKit / Views and attachments / ARView / /snapshot(saveToHDR:completion:)
SceneKit / SCNView / snapshot()
Is there a similar API in visionOS? And if not, how can I implement a snapshot for RealityView and USDZ content?
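For reference, here is a minimal sketch of how I call those two existing APIs today on iOS/iPadOS; I'm treating it only as a comparison point for whatever the visionOS equivalent might be:
import RealityKit
import SceneKit
import UIKit

// ARView (iOS/iPadOS): asynchronous snapshot of the currently rendered frame.
func captureARView(_ arView: ARView) {
    arView.snapshot(saveToHDR: false) { image in
        // `image` is the rendered frame as a UIImage, or nil on failure.
    }
}

// SCNView: synchronous snapshot of the current frame.
func captureSCNView(_ scnView: SCNView) -> UIImage {
    scnView.snapshot()
}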
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4; great!
https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics
I have read in the answer of another post that this extrinsics represents the pose of the physical camera relative to the device anchor.
Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display?
What is the coordinate system in which this offset is defined, which axis is left, which one is up, which one is forward?
The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis, and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras to find out whether the offset from the "middle" is 2 cm or 5 cm; it looks more like 5 cm, so I assume the z axis points to the right (from the user's perspective). Is that so? For x and y, I assume the physical camera is approximately 2 cm in front of the user and 2 cm below; which of x and y is horizontal and which is vertical?
How is the camera image indexed, is it row-major and is the origin on the top left?
I am looking forward to learning all the details of these extrinsics in order to make use of them.
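For reference, this is how I'm currently pulling the translation out of that matrix (plain simd; no assumptions yet about which axis means what, which is exactly my question):
import simd

// Extract the translation (last column) of the extrinsics matrix.
func translation(of extrinsics: simd_float4x4) -> SIMD3<Float> {
    let t = extrinsics.columns.3
    return SIMD3(t.x, t.y, t.z)   // in meters; roughly (0.02, -0.02, -0.05) in my tests
}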
I'm developing an app in which I need to render pictures and display some models in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
Topic: Spatial Computing
SubTopic: General
Tags: SwiftUI, RealityKit, Reality Composer Pro, Shader Graph Editor
We're developing a visionOS application where we would like to do product recognition (like food items).
We have enterprise entitlements and therefore main camera access on visionOS. We send these live camera frames to a trained Core ML model and receive 2D coordinates from the model's detection predictions.
Now, we would like to create a 3D anchor on each detected item so it is visible to the user. The 3D anchor will display the class name of the detected item.
How do we transform this 2D coordinate from the model prediction to a 3D anchor?
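A minimal sketch of the kind of math we assume is needed (the intrinsics/extrinsics parameters are placeholders taken from the camera frame; the depth source for actually placing the anchor is still the open question):
import simd

// Unproject a 2D pixel into a camera-space ray, then rotate it into world space
// using the camera pose. Placing a 3D anchor still needs a depth value or a
// raycast against scene geometry; that part is what we're asking about.
func worldRay(forPixel pixel: SIMD2<Float>,
              intrinsics K: simd_float3x3,        // fx, fy, cx, cy from the camera frame
              cameraToWorld: simd_float4x4) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    let fx = K[0][0], fy = K[1][1]
    let cx = K[2][0], cy = K[2][1]

    // Pinhole model, assuming -Z is the camera's forward direction.
    let dirCamera = simd_normalize(SIMD3<Float>((pixel.x - cx) / fx,
                                                -(pixel.y - cy) / fy,
                                                -1))

    let origin = SIMD3<Float>(cameraToWorld.columns.3.x,
                              cameraToWorld.columns.3.y,
                              cameraToWorld.columns.3.z)
    let rotated = cameraToWorld * SIMD4<Float>(dirCamera, 0)
    return (origin, simd_normalize(SIMD3<Float>(rotated.x, rotated.y, rotated.z)))
}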
Hey there, I am working on an app that displays environmental data using PNG color channels to represent data ranges, which gets overlaid on a map. The sampled values aren't what I'm expecting, though: for example, an RGB value of 0x7F0000 (R = 0.5, G = 0, B = 0) comes through as (0.21, 0, 0) in the shader. This basically makes it unusable for showing scientific data. I'm half wondering if I'm completely misunderstanding how sampling works in RealityKit / Reality Composer Pro. Anybody have any idea why it works like this?
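For what it's worth, the value I'm seeing is suspiciously close to a standard sRGB-to-linear decode of 0.5, which makes me wonder whether the texture is being treated as color data instead of raw data; this is just my own arithmetic, not a confirmed explanation:
import Foundation

// sRGB-to-linear transfer function applied to the red channel 0x7F (127/255 ≈ 0.498).
let srgb = 127.0 / 255.0
let linear = srgb <= 0.04045 ? srgb / 12.92 : pow((srgb + 0.055) / 1.055, 2.4)
print(linear)   // ≈ 0.212, which matches the ~0.21 I see in the shader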
Attached images: actual result (chart labels added in Photoshop), expected result, and the "Red > 0.1" shader graph.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, visionOS
We have a plane model (baseCircle) without physics or rigid body components, and no gestures are implemented. However, when tapped, the model unexpectedly falls into infinity.
func fetchEnvResource() {
    var simpleMaterial = SimpleMaterial()
    env = try! Entity.loadModel(named: "bgMain5")
    env.position += [0, 0, -10]
    envTexture = PhysicallyBasedMaterial()
    envTextureUnlit = UnlitMaterial()
    envTexture.baseColor = .init(texture: .init(try! .load(named: "bgMain5")))
    envTextureUnlit.color = .init(texture: .init(try! .load(named: "bgMain5")))
    env.isEnabled = false

    let anchor = AnchorEntity(world: [0, 0, -3])
    baseCircle = ModelEntity(
        mesh: .generatePlane(width: 1.5, depth: 1.5, cornerRadius: 0.75),
        materials: [SimpleMaterial(color: .green, isMetallic: false)]
    )
    env.components.set(InputTargetComponent())

    baseMaterial = PhysicallyBasedMaterial()
    baseMaterialUnlit = UnlitMaterial()
    baseMaterial.baseColor = .init(texture: .init(try! .load(named: "groundTexture")))
    baseMaterial.baseColor.tint = UIColor(white: 1.0, alpha: CGFloat(textureOpacity))
    baseCircle.model?.materials = [baseMaterial]
    baseCircle.generateCollisionShapes(recursive: false)
    baseCircle.components.set(InputTargetComponent())
    baseCircle.components[PhysicsBodyComponent.self] = .init(PhysicsBodyComponent(massProperties: .default, mode: .static))
    baseCircle.physicsBody = PhysicsBodyComponent(mode: .kinematic)
    anchorEntity.addChild(baseCircle)
    baseCircle.position = [0, 0, -3]
    baseCircle.isEnabled = false

    let cylinder = ModelEntity(
        mesh: .generateCylinder(height: 0.2, radius: 0.5),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    cylinder.position = [0, -0.1, 0]
    cylinder.generateCollisionShapes(recursive: false)
    cylinder.components[PhysicsBodyComponent.self] = .init(PhysicsBodyComponent(massProperties: .default, mode: .static))
    cylinder.physicsBody = nil
    cylinder.scale = [500, 100, 100]
    anchor.addChild(cylinder)
}
The plane model in this issue is baseCircle. Any suggestions on how to solve this or potential fixes would be greatly appreciated.
In RealityView, physics components only apply to rigid solids. How can I simulate the physical behavior of water and cloth?
entity.components.set(PhysicsBodyComponent())
Hey, I have Enterprise Access on the account and have added the passthrough capability and the entitlement on the main project and the "Broadcast Upload" extension, too.
The broadcast works except it returns a black screen.
I am attaching some screenshots below of the entitlement file. I have tried searching online to no avail, so any help would be greatly appreciated. I am also attaching the code.
import Foundation
import AVFoundation
import ReplayKit
import CoreImage
import UIKit

class VideoAssetWriter {
    private var isRecording = false
    private var outputStream: OutputStream?

    private func setupConnection() {
        guard outputStream == nil else { return }
        print("setting up connection.")
        let serverIP = macIP
        let port = 12345

        var readStream: Unmanaged<CFReadStream>?
        var writeStream: Unmanaged<CFWriteStream>?
        CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                           serverIP as CFString,
                                           UInt32(port),
                                           &readStream,
                                           &writeStream)
        guard let writeStream = writeStream?.takeRetainedValue() else {
            print("Failed to create write stream")
            return
        }
        self.outputStream = writeStream as OutputStream
        self.outputStream?.open()
    }

    func startRecording() {
        isRecording = true
    }

    func processVideoSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        print("Processing Sample 1")
        guard isRecording else { return }
        print("Processing Sample 2")
        sendVideoChunkToServer(sampleBuffer)
    }

    private func sendVideoChunkToServer(_ sampleBuffer: CMSampleBuffer) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        print("Processing Sample 3")

        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        print("Processing Sample 4")

        let image = UIImage(cgImage: cgImage)
        if let imageData = image.jpegData(compressionQuality: 0.5) {
            guard imageData.count <= 10_000_000 else {
                print("Frame too large: \(imageData.count) bytes")
                return
            }
            if outputStream == nil {
                setupConnection()
            }
            print("sending frame size up connection.")
            // Convert to network byte order (big-endian)
            var frameSize = UInt32(imageData.count).bigEndian
            let sizeData = Data(bytes: &frameSize, count: MemoryLayout<UInt32>.size)
            _ = sizeData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: sizeData.count)
            }
            print("sending image data up connection.")
            // Send frame data
            _ = imageData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: imageData.count)
            }
        }
    }

    func stopRecording() {
        isRecording = false
        outputStream?.close()
        outputStream = nil
    }
}
This is the broadcast picker view wrapper:
// Broadcast Picker View wrapper
struct BroadcastButtonView: UIViewRepresentable {
    func makeUIView(context: Context) -> RPSystemBroadcastPickerView {
        let broadcastPickerView = RPSystemBroadcastPickerView(
            frame: CGRect(x: 0, y: 0, width: 200, height: 200)
        )
        // Make sure this matches your broadcast extension bundle identifier
        broadcastPickerView.preferredExtension = "my-extension-bundle-identifier"
        broadcastPickerView.showsMicrophoneButton = false
        return broadcastPickerView
    }

    func updateUIView(_ uiView: RPSystemBroadcastPickerView, context: Context) {
    }
}
The extension SampleHandler:
override func broadcastPaused() {
    print("paused broadcast")
    // User has requested to pause the broadcast. Samples will stop being delivered.
}

override func broadcastResumed() {
    print("resumed broadcast")
    // User has requested to resume the broadcast. Samples delivery will resume.
}

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    print("broadcast received")
    assetWriter?.processVideoSampleBuffer(sampleBuffer)
}
Looking forward to any and all help.
Attached screenshots: the information property list, the information property list for the extension, and the capabilities.
I’m developing an app using RealityKit and RealityView. On newer iPhones, such as the iPhone 15 Pro, Object Occlusion appears to be enabled by default, which causes 3D entities to be hidden behind real-world objects in the scene. However, I need to disable this behavior to ensure proper rendering of my 3D content.
This issue does not occur on older devices like the iPhone 13, where the app works as intended. I haven’t been able to find a solution to explicitly disable object occlusion on the newer devices for RealityView.
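For what it's worth, with ARView the switch lives in the scene-understanding options; a minimal sketch of that approach is below, though whether RealityView on iOS exposes the same control is exactly what I haven't figured out, so treat this as an assumption rather than a fix:
import RealityKit

// ARView-based sketch: removing occlusion from scene understanding stops virtual
// content from being hidden behind real-world geometry on LiDAR devices.
func disableOcclusion(in arView: ARView) {
    arView.environment.sceneUnderstanding.options.remove(.occlusion)
}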
Any guidance or suggestions to resolve this issue would be greatly appreciated! Thanks!
After I play audio on the entity, the sound is very quiet, and I want to adjust the volume, but I can't find an API for it. What should I do?
if let audio = audioResources {
    entity.playAudio(audio)
}
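What I'm considering as a sketch (assuming the controller returned by playAudio(_:) can be kept around to adjust playback; the decibel value is only illustrative):
if let audio = audioResources {
    // playAudio(_:) returns an AudioPlaybackController; keep it to adjust playback.
    let controller = entity.playAudio(audio)
    controller.gain = 10   // relative gain in decibels; positive values raise the level
}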
Hello All,
We're going to build a scene now, kind of like a time-travel door. When the user selects the scene, they pass through the door into the new scene, and the transition in the middle needs to feel natural. It would be even better if the user could walk through it in an immersive space...
There is very little information available on this. How can I start building it? Is there any material I can refer to?
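One possible starting point might be RealityKit's portal APIs; a rough sketch of what I mean is below (the mesh size and entity setup are just illustrative, and I'm not sure this is the right approach for an actual walk-through transition):
import RealityKit

func makeTimeTravelDoor() -> Entity {
    // The destination scene: content here is only visible through the portal.
    let world = Entity()
    world.components.set(WorldComponent())
    // ... add the destination scene's entities as children of `world` ...

    // The door itself: a plane that acts as a window into `world`.
    let door = Entity()
    door.components.set(ModelComponent(mesh: .generatePlane(width: 1, height: 2),
                                       materials: [PortalMaterial()]))
    door.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(door)
    return root
}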
thanks
I'm building a weather app where users can search locations to get the weather, but the results only show locations in my country, not globally. For example, I'm in China and I can't search for New York; it just shows nothing. Here's my code:
import SwiftUI
import MapKit

@Observable
class SearchPlaceManager: NSObject {
    var searchText: String = ""
    let searchCompleter = MKLocalSearchCompleter()
    var searchResults: [MKLocalSearchCompletion] = []

    override init() {
        super.init()
        searchCompleter.resultTypes = .address
        searchCompleter.delegate = self
    }

    @MainActor
    func searchLocation() {
        if !searchText.isEmpty {
            searchCompleter.queryFragment = searchText
        }
    }
}

extension SearchPlaceManager: MKLocalSearchCompleterDelegate {
    func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        withAnimation {
            self.searchResults = completer.results
        }
    }
}
Also, I've tried to set searchCompleter.region = MKCoordinateRegion( center: CLLocationCoordinate2D(latitude: 0, longitude: 0), span: MKCoordinateSpan(latitudeDelta: 180, longitudeDelta: 360) ), but it doesn't work.
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project"
I do not find this to be true, sadly.
Does anyone have any luck or insight on how one can build just a simple macOS app that will import a scene from a .reality file?
The documentation suggests that the simple act of bringing a .reality file in (what about .realitycomposerpro?) will generate loading code, but that doesn't seem to happen.
The sample code (Spaceship) does not compile for macOS.
I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the visionOS default immersive project.
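In case it helps clarify what I'm after, here's roughly the minimal thing I'm trying to write (assuming macOS 15's RealityView and a bundled file named Scene.reality; both the view and the file name are my assumptions, not generated code):
import SwiftUI
import RealityKit

struct ContentView: View {
    @State private var showScene = false

    var body: some View {
        VStack {
            Button("Open Scene") { showScene = true }
            if showScene {
                RealityView { content in
                    // Load a .reality (or .usdz) file from the app bundle by URL.
                    if let url = Bundle.main.url(forResource: "Scene", withExtension: "reality"),
                       let entity = try? await Entity(contentsOf: url) {
                        content.add(entity)
                    }
                }
            }
        }
    }
}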
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Swift, AR / VR, RealityKit, Reality Composer Pro
This might be a very silly question; anyway, I've tried many things and haven't found a solution.
My Mac mini M4 is the base model with only 16 GB of memory, which is very tight for developing a Vision Pro application and testing in the simulator (CleanMacX keeps warning me about low memory). I want to debug and test the application directly on the Vision Pro instead of the simulator (my app also needs two-handed gestures, which the simulator might not support). Are there proper instructions on how to test/debug/run the app directly on the Vision Pro device, for the sake of speed?
Topic: Spatial Computing
SubTopic: General
Dear Apple Engineers,
I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface.
I have the following specific questions:
How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user? (A rough sketch of what I've been exploring is included after these questions.)
Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
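Here is the sketch mentioned above for question 2: building a cylindrical strip with MeshDescriptor so the video UVs run 0...1 across the arc. The radius, arc angle, and segment count are arbitrary values of mine, not anything from a sample.
import RealityKit
import simd

// Build a vertical strip bent around the Y axis; adjusting `arc` changes the width.
func makeCurvedPlane(radius: Float = 2.0,
                     arc: Float = .pi / 2,
                     height: Float = 1.0,
                     segments: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)          // 0...1 across the width
        let angle = (t - 0.5) * arc                 // centered arc around -Z
        let x = radius * sin(angle)
        let z = -radius * cos(angle)
        positions.append(SIMD3(x, -height / 2, z))  // bottom edge
        positions.append(SIMD3(x,  height / 2, z))  // top edge
        uvs.append(SIMD2(t, 0))
        uvs.append(SIMD2(t, 1))
    }
    for i in 0..<segments {
        let a = UInt32(i * 2)
        indices.append(contentsOf: [a, a + 2, a + 1,  a + 1, a + 2, a + 3])
    }

    var descriptor = MeshDescriptor(name: "curvedPlane")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
My idea is to apply a VideoMaterial(avPlayer:) to this mesh, which is part of why I'm asking whether this is a reasonable path or whether a shader-based approach would be better.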
Thank you very much for your help!
In RealityKit's particle effects, there is a type:
ParticleEmitterComponent.Presets
It provides preset particle effects, but I am interested in learning how to modify these particle emitters (such as adjusting the color, the number of particles, the emission range, and so on).
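A minimal sketch of what I mean, starting from a preset and overriding a few emitter properties (the preset name and the specific values are just examples):
import RealityKit
import UIKit

func makeTunedParticles() -> Entity {
    // Start from a preset, then override individual properties.
    var particles = ParticleEmitterComponent.Presets.fireworks

    particles.mainEmitter.birthRate = 500                      // particles per second
    particles.mainEmitter.lifeSpan = 2.0                       // seconds
    particles.mainEmitter.color = .constant(.single(.cyan))    // single constant color
    particles.emitterShape = .sphere                           // generation volume
    particles.emitterShapeSize = [0.5, 0.5, 0.5]               // meters

    let entity = Entity()
    entity.components.set(particles)
    return entity
}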
Does anyone have any idea if Apple plans to add UDIM support to its 3D development tools? It is a real bummer not to have this feature, and it makes an otherwise clean USD pipeline kind of suck.
I have been digging through the docs and the developer videos, and I have noticed mentions of RealityView having some potential limitations with anchors and world tracking. However, I haven't been able to find my answers.
Does anyone know (or can anyone point me to) whether RealityView supports everything ARView does, and if not, what the differences are?
I was fooling around with RealityView today with a simple plane anchor, and the stability of that anchor didn't seem to be as steady as I recall ARView being in the past on iPhone.
I'm trying to determine whether I should roll over to RealityView or stay with ARView for this little educational project. I would imagine the answer is to go with RealityView, but I want to make sure I'm not setting myself up for failure based on any current limitations around anchors and world data.
Topic: Spatial Computing
SubTopic: ARKit
Hello,
I checked the following documentation:
Vision | Apple Developer Documentation
Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw that the Vision framework is available on visionOS.
So I want to know whether it's possible to use the Vision framework on visionOS for tracking human and animal body poses, or whether there are limits to using it on visionOS.
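For context, this is the kind of request I'm hoping to run (a minimal sketch using the VN-prefixed API; I still need to confirm availability on visionOS, which is the core of my question):
import Vision
import CoreGraphics

func detectHumanBodyPoses(in cgImage: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results ?? []
}

// For animals, VNDetectAnimalBodyPoseRequest would be the corresponding request type.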