
GroupActivity: Dropping activity as there is no active conversation:
Hi, I am having trouble getting SharePlay to work. When I build and run the GroupActivity sample from this SharePlay tutorial, I get the following message and the GroupActivity does not start: https://mitemmetim.medium.com/shareplay-tutorial-share-custom-data-between-ios-and-macos-a50bfecf6e64

Dropping activity as there is no active conversation: <TUMutableConversationActivityCreateSessionRequest 0x2836731c0 activityIdentifier=jp.co.1planet.sample.SharePlayTutorial.SharePlayActivity applicationContext={length = 42, bytes = 0x62706c69 73743030 d0080000 00000000 ... 00000000 00000009 } metadata=<TUConversationActivityMetadata 0x28072d380 context=CPGroupActivityGenericContext title=SharePlay Example sceneAssociationBehavior=<TUConversationActivitySceneAssociationBehavior 0x28237a740 targetContentIdentifier=(null) shouldAssociateScene=1 preferredSceneSessionRole=(null)>> UUID=3137DDE4-F5B2-46B2-9097-30DD6CAE79A3>

I tried running it on Mac and iOS, but it did not work as expected. We are also trying the approach in the following thread: https://developer.apple.com/forums/thread/683624

I have little experience with GroupActivity. I have added the Group Activities capability; do I need to set anything else? Please let me know if you can find any solution to this message.

By the way, I am using Xcode 15.2 Beta, iOS 17.1.1 and iOS 17.3 Beta, and macOS 14.2.1 (23C71).

Best Regards.
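For reference, the log's "no active conversation" usually means no FaceTime call (or other SharePlay-capable session) is active at the moment the activity is activated. Below is a minimal sketch of the activity definition and activation flow; the identifier and title mirror the log above, everything else is illustrative:

import GroupActivities

// Minimal custom activity; identifier and title are taken from the log.
struct SharePlayActivity: GroupActivity {
    static let activityIdentifier = "jp.co.1planet.sample.SharePlayTutorial.SharePlayActivity"

    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "SharePlay Example"
        meta.type = .generic
        return meta
    }
}

// Activation only succeeds while a FaceTime call (or another
// SharePlay-capable conversation) is active; without one, the system
// drops the activity exactly as the log describes.
func startSharePlay() async {
    let activity = SharePlayActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try? await activity.activate()
    default:
        break
    }
}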
Replies: 0 · Boosts: 0 · Views: 894 · Jan ’24
How to create luminous (emissive) material effects
Hi, I am investigating how to achieve glowing (emissive) effects like the following in my visionOS app:
https://www.hiroakit.com/archives/1432
https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/

Right now, I'm trying various things with Shader Graph in Reality Composer Pro, but from the official documentation and the WWDC session videos I can't tell what the individual Shader Graph nodes do, or what effects their combinations produce. I have a feeling that such luminous materials and expressions may not be possible in visionOS to begin with. If there is a way to achieve this, please let me know. Thanks.
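As a point of comparison, RealityKit's PhysicallyBasedMaterial does expose emissive properties in code, separate from Shader Graph. A minimal sketch (note that on visionOS an emissive surface renders at full brightness regardless of scene lighting, but does not automatically get a bloom halo):

import RealityKit
import UIKit

// A glowing sphere built with an emissive material in code.
func makeEmissiveSphere() -> ModelEntity {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .black)
    material.emissiveColor = .init(color: .green)
    material.emissiveIntensity = 2.0  // values > 1.0 brighten the emission
    return ModelEntity(
        mesh: .generateSphere(radius: 0.1),
        materials: [material])
}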
Replies: 0 · Boosts: 0 · Views: 617 · Mar ’24
How to disable the pinch gesture for the left hand only
Hi, I have a question. In visionOS, when a user looks at a button and performs a pinch gesture with their index finger and thumb, the button responds. By default, this works with both the left and right hands. However, I want to disable the pinch gesture when performed with the left hand while keeping it functional with the right hand. I understand that the system settings allow users to configure input for both hands, the left hand only, or the right hand only. However, I would like to control this behavior within the app itself. Is this possible? Best regards.
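As far as I know there is no public per-hand switch for the system pinch gesture. One workaround to prototype is to detect pinches yourself with ARKit hand tracking and simply ignore the left hand. A rough sketch, assuming a Full Space with hand-tracking permission (the 2 cm threshold is a guess to tune):

import ARKit
import simd

// Detect pinches manually via hand tracking so the left hand can be
// ignored. Requires NSHandsTrackingUsageDescription in Info.plist and
// an open ImmersiveSpace.
func monitorRightHandPinch() async throws {
    let session = ARKitSession()
    let hands = HandTrackingProvider()
    try await session.run([hands])

    for await update in hands.anchorUpdates {
        let anchor = update.anchor
        // Skip the left hand entirely.
        guard anchor.chirality == .right,
              let skeleton = anchor.handSkeleton else { continue }

        // Thumb tip and index tip, both in the hand anchor's space.
        let thumb = skeleton.joint(.thumbTip).anchorFromJointTransform.columns.3
        let index = skeleton.joint(.indexFingerTip).anchorFromJointTransform.columns.3
        let distance = simd_distance(
            SIMD3(thumb.x, thumb.y, thumb.z),
            SIMD3(index.x, index.y, index.z))

        if distance < 0.02 {  // ~2 cm: treat as a pinch
            // Trigger the button action here.
        }
    }
}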
Replies: 1 · Boosts: 0 · Views: 296 · Feb ’25
Why don’t the dinosaurs in “Encounter Dinosaurs” respond to real-world light intensity?
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.” In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment. Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same. If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
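One plausible explanation (a guess, not confirmed for this app): immersive apps can give their content a fixed image-based light so it ignores the estimated real-world lighting. A sketch of that technique; "DinoLighting" is a hypothetical EnvironmentResource name:

import RealityKit

// Attach an app-supplied image-based light so the entity's shading stays
// constant regardless of the room's actual brightness.
func applyFixedLighting(to entity: Entity) async throws {
    let environment = try await EnvironmentResource(named: "DinoLighting")
    entity.components.set(ImageBasedLightComponent(source: .single(environment)))
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}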
Replies: 1 · Boosts: 0 · Views: 613 · Nov ’25
How to speed up build time when placing large USDZ files in RCP scenes
I’m currently developing a visionOS app that includes an RCP scene with a large USDZ file (around 2GB). Each time I make adjustments to the CG model in Blender, I export it as USDZ again, place it in the RCP scene, and then build the app using Xcode. However, because the USDZ file is quite large, the build process takes a long time, significantly slowing down my development speed.

For example, I’d like to know if there are any effective ways to:
- Improve overall build performance
- Reduce the time between updating the USDZ file and completing the build

Any advice or best practices for optimizing this workflow would be greatly appreciated.

Best regards,
Sadao
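One workaround worth testing (a sketch, not an RCP feature): keep the huge USDZ out of the Reality Composer Pro package and load it at runtime, so a re-export from Blender only replaces a file instead of forcing a full content rebuild. The Documents-directory path below is illustrative; the file could also be downloaded or copied onto the device:

import Foundation
import RealityKit

// Load a large USDZ at runtime instead of compiling it into the RCP
// package; updating the model then skips the long Xcode build step.
func loadLargeModel() async throws -> Entity {
    let url = URL.documentsDirectory.appending(path: "Model.usdz")  // hypothetical location
    return try await Entity(contentsOf: url)
}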
Replies: 1 · Boosts: 0 · Views: 209 · Nov ’25
Does an app using WKWebView require encryption compliance for worldwide App Store release?
I have two questions regarding releasing an app that uses an in-app browser (WKWebView) on the App Store worldwide.

Question 1: Encryption usage
Our app uses WKWebView and relies on standard encryption. Should this be declared as using encryption during the App Store submission?

Question 2: If the answer to Question 1 is yes
If it must be declared as using encryption, do we need to prepare and upload additional documentation when submitting the app in France? Also, would this require us to redo the entire build and upload process, even for an app version that has already been uploaded?

Goal / request:
We want to release an app using WKWebView worldwide, including France. We would like to understand all the necessary steps and requirements for completing the App Store release without unexpected rework.

Best regards,
Sadao

P.S.: A similar question was posted a few years ago, but it seems there was no response: https://developer.apple.com/forums/thread/725047
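For what it's worth, when an app's only encryption is the standard TLS/HTTPS the OS provides (the usual case for WKWebView), my understanding is that the exemption can be declared once in Info.plist, which stops App Store Connect from asking the compliance question on every upload. Whether France's additional declaration applies should still be checked against Apple's current export-compliance documentation. The key looks like this:

<!-- Info.plist: the app uses no non-exempt encryption (standard
     TLS/HTTPS only), so no per-upload compliance prompt appears. -->
<key>ITSAppUsesNonExemptEncryption</key>
<false/>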
Replies: 1 · Boosts: 0 · Views: 103 · Nov ’25
Building a Full Space app that enables sharing a visionOS experience with nearby users.
Hello, I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users.

Intended features:
- A Mixed Full Space app in which dozens of 3D models are placed in the space.
- These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines.
- The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space. Some media elements play automatically, while others are triggered by user interaction.

However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as: audio only, spatial audio, video without audio, and video with audio. I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario.

Questions:
1. Is it technically possible to implement the experience described above using visionOS?
2. Are there any important implementation considerations or limitations that should be taken into account? For example, when two participants experience the app simultaneously, how is the content positioned for each participant? Is the spatial placement of content shared across participants, or is it positioned relative to each participant's viewpoint?
3. For nearby participants, is it necessary to register a spatial Persona? My understanding is that spatial Personas are not visible to nearby users during the experience; is this correct?
4. When experiencing SharePlay with nearby users, is it possible to share the experience without registering the other participant's contact information?

I have watched the following session, but I was unable to fully understand the feasibility of the above use case or the concrete implementation details: https://developer.apple.com/videos/play/wwdc2025/318/

Thank you.
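On the playback question, the documented pattern is to attach each AVPlayer to the GroupSession; whether every media combination listed above stays in sync is something I would verify in a prototype. A minimal sketch:

import AVFoundation
import GroupActivities

// AVPlayerPlaybackCoordinator synchronizes play/pause/seek for whatever
// item the AVPlayer carries (video with or without audio, audio-only);
// per-media-type behavior is worth testing explicitly.
func coordinatePlayback(player: AVPlayer, session: GroupSession<some GroupActivity>) {
    player.playbackCoordinator.coordinateWithSession(session)
}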
Replies: 1 · Boosts: 0 · Views: 268 · 3w
Simple question about visionOS and Shared Space
Hi, I have one question: you explained the Shared Space at the beginning of the session video, but I didn't really understand it. Is the Shared Space like the Dock on a Mac? Are applications placed in the Shared Space, and is the operation to launch an application placed there? Why is the word "Shared" included? Is there some sharing function?

"By default, apps launch into the Shared Space." What is the default? What is the non-default state?

"People remain connected to their surroundings through passthrough." What does this mean on visionOS?

By the way, is the application that starts in the Shared Space something like the Clock app, or does the Safari browser also run in the Shared Space? What kinds of applications can only run in a Full Space? I don't have a clear picture of the role of each feature on visionOS. If possible, it would be easier to understand if there were an actual image of an application running, not just a diagram.

Best regards.
Sadao Tokuyama
https://1planet.co.jp/
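For context, the distinction shows up directly in an app's scene declaration; a minimal sketch (the names here are illustrative):

import SwiftUI

// Scenes declared with WindowGroup run in the Shared Space, side by side
// with other apps (Safari, Clock, ...). Opening an ImmersiveSpace enters
// a Full Space, where only this app's content is shown.
@main
struct ExampleApp: App {
    var body: some Scene {
        WindowGroup {
            Text("A window in the Shared Space")
        }

        ImmersiveSpace(id: "immersive") {
            Text("Exclusive Full Space content")
        }
    }
}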
Replies: 6 · Boosts: 0 · Views: 3.7k · Jun ’23
How can I share space in Volumes?
"Volumes allow an app to display 3D content in defined bounds, sharing the space with other apps." What does it mean to share space in Volumes? What are the benefits of being able to do this? Do you mean the Shared Space? I don't understand the Shared Space very well to begin with.

"They can be viewed from different angles." Does this mean that because it is 3D content with depth, I can see that depth when I change the angle? That seems obvious to me because it is 3D content. Is this related to Volumes?
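For reference, a volume is declared as a window with a volumetric style; a minimal sketch (the model name is a hypothetical bundled asset):

import SwiftUI
import RealityKit

// A volume: a window with bounded 3D extent that coexists with other
// apps' windows and volumes in the Shared Space.
struct VolumeExampleApp: App {
    var body: some Scene {
        WindowGroup {
            Model3D(named: "Globe")  // hypothetical asset in the app bundle
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}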
Replies: 1 · Boosts: 0 · Views: 716 · Jun ’23
Value of type '(@escaping (TapGesture.Value) -> Void) -> _EndedGesture<TapGesture>' has no member 'targetedToAnyEntity'
Hi, if you write the gesture sample code from the following document and try to build it, an error occurs:
https://developer.apple.com/documentation/visionos/adding-3d-content-to-your-app

Value of type '(@escaping (TapGesture.Value) -> Void) -> _EndedGesture<TapGesture>' has no member 'targetedToAnyEntity'

Is something missing?

Xcode 15.0 beta 2 (15A516b)
visionOS 1.0 beta

import SwiftUI
import RealityKit
import RealityKitContent

struct SphereSecondView: View {
    @State var scale = false

    /*
    var tap: some Gesture {
        TapGesture()
            .onEnded { _ in
                print("Tap")
                scale.toggle()
            }
    }
    */

    var body: some View {
        RealityView { content in
            let model = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .yellow, isMetallic: true)])
            content.add(model)
        } update: { content in
            if let model = content.entities.first {
                model.transform.scale = scale ? [2.0, 2.0, 2.0] : [1.0, 1.0, 1.0]
            }
        }
        .gesture(TapGesture().onEnded
            .targetedToAnyEntity() { _ in
                scale.toggle()
            })
    }
}

#Preview {
    SphereSecondView()
}

Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
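The error message points at the chaining order: targetedToAnyEntity() is a modifier on the gesture itself, so it must come before onEnded (whose result no longer offers that member). A compiling sketch of the same view; the rename and the input-target/collision components are additions that the tap needs in order to actually hit the entity:

import SwiftUI
import RealityKit

struct SphereFixedView: View {  // illustrative rename
    @State var scale = false

    var body: some View {
        RealityView { content in
            let model = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .yellow, isMetallic: true)])
            // Gestures only target entities that have input-target and
            // collision components.
            model.components.set(InputTargetComponent())
            model.generateCollisionShapes(recursive: true)
            content.add(model)
        } update: { content in
            if let model = content.entities.first {
                model.transform.scale = scale ? [2.0, 2.0, 2.0] : [1.0, 1.0, 1.0]
            }
        }
        // targetedToAnyEntity() chains on the gesture, before onEnded.
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in scale.toggle() }
        )
    }
}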
Replies: 2 · Boosts: 0 · Views: 1.2k · Jul ’23
How to place a 3D model in front of the user in a Full Space app
Hi, I am currently developing a Full Space app. I have a question about how to display an Entity or ModelEntity in front of the user. I want to move the Entity or ModelEntity in front of the user not only at the initial display, but also when the user takes an action such as tapping (animation is not required). I also want to run the same in-front placement when a reset button is tapped. Thanks.

Sadao Tokuyama
https://twitter.com/tokufxug
https://www.linkedin.com/in/sadao-tokuyama/
https://1planet.co.jp/tech-blog/category/applevisionpro
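One approach to prototype: query the headset pose via ARKit's world tracking and reposition the entity a fixed distance along the device's forward axis. A sketch, assuming an open ImmersiveSpace (names and the 1 m distance are illustrative):

import ARKit
import RealityKit
import QuartzCore

// Places an entity a given distance in front of the current device pose.
final class Repositioner {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking])
    }

    // Call at launch and again whenever the reset button is tapped.
    func moveInFrontOfUser(_ entity: Entity, distance: Float = 1.0) {
        guard let device = worldTracking.queryDeviceAnchor(
            atTimestamp: CACurrentMediaTime()) else { return }
        let transform = Transform(matrix: device.originFromAnchorTransform)
        // The device looks along its local -Z axis, so step "distance"
        // meters opposite the matrix's Z column.
        let zAxis = transform.matrix.columns.2
        let position = transform.translation - distance * SIMD3(zAxis.x, zAxis.y, zAxis.z)
        entity.setPosition(position, relativeTo: nil)
    }
}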
Replies: 1 · Boosts: 0 · Views: 739 · Oct ’23
How to access Persona Virtual Camera features
How do I access the Persona Virtual Camera features from an app? Information about the required permissions or a simple implementation example would be appreciated. I know that this feature is probably only available on the Apple Vision Pro device, but it would be helpful to share details about the Persona Virtual Camera, including whether or not it works with the visionOS simulator, along with a solid description of how it works. If you have a page or video that explains the Persona Virtual Camera well, please share it as well.

Best Regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Replies: 0 · Boosts: 0 · Views: 1.1k · Dec ’23
Rendering bug when layering transparent textures front and back
If I put an alpha image texture on a model created in Blender and run it in RCP or visionOS, the front-to-back rendering of the alpha areas produces unintended results. Details are below.

I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro. When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The order of the images becomes incorrect

Best regards.
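Overlapping transparent surfaces are drawn in an order the renderer chooses, which can produce exactly these artifacts. One mitigation to try is pinning the draw order with RealityKit's model sort groups; a sketch, with availability and exact behavior to be verified on the target visionOS release:

import RealityKit

// Force an explicit draw order for two transparent surfaces so the
// renderer composites them back-to-front.
func fixDrawOrder(inner: ModelEntity, outer: ModelEntity) {
    let group = ModelSortGroup(depthPass: nil)
    // Lower "order" draws first: inner surface before the outer one.
    inner.components.set(ModelSortGroupComponent(group: group, order: 0))
    outer.components.set(ModelSortGroupComponent(group: group, order: 1))
}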
Replies: 1 · Boosts: 0 · Views: 760 · Nov ’24