Hi,
I am having trouble getting SharePlay to work.
When I build and run the GroupActivity sample from the tutorial below, I get the following message and the activity does not start.
https://mitemmetim.medium.com/shareplay-tutorial-share-custom-data-between-ios-and-macos-a50bfecf6e64
Dropping activity as there is no active conversation: <TUMutableConversationActivityCreateSessionRequest 0x2836731c0 activityIdentifier=jp.co.1planet.sample.SharePlayTutorial.SharePlayActivity applicationContext={length = 42, bytes = 0x62706c69 73743030 d0080000 00000000 ... 00000000 00000009 } metadata=<TUConversationActivityMetadata 0x28072d380 context=CPGroupActivityGenericContext title=SharePlay Example sceneAssociationBehavior=<TUConversationActivitySceneAssociationBehavior 0x28237a740 targetContentIdentifier=(null) shouldAssociateScene=1 preferredSceneSessionRole=(null)>> UUID=3137DDE4-F5B2-46B2-9097-30DD6CAE79A3>
I tried running it on both macOS and iOS, but it did not work as expected.
By the way, I am also trying the approach from the following thread:
https://developer.apple.com/forums/thread/683624
I am new to GroupActivities; I have added the Group Activities capability to the app. Do I need to set anything else? Please let me know if you can find any solution to this message. By the way, I am using Xcode 15.2 beta, iOS 17.1.1 and iOS 17.3 beta, and macOS 14.2.1 (23C71).
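For reference, here is a simplified sketch of the activity definition from the tutorial, matching the identifier and title shown in the log above:

import GroupActivities

// Simplified sketch of the GroupActivity from the tutorial.
struct SharePlayActivity: GroupActivity {
    // The identifier defaults to the bundle ID plus the type name, which
    // matches jp.co.1planet.sample.SharePlayTutorial.SharePlayActivity in the log.
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "SharePlay Example"
        metadata.type = .generic
        return metadata
    }
}

// Activation attempt; the "no active conversation" message appears here.
// Task { _ = try await SharePlayActivity().activate() }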
Best Regards.
Hi,
I am investigating how to achieve emissive (glowing) effects like the following in my visionOS app.
https://www.hiroakit.com/archives/1432
https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/
Right now I'm trying various things with Shader Graph in Reality Composer Pro, but I can't tell from the official documentation and the WWDC session videos what the individual Shader Graph nodes do or what effects their combinations produce, so I'm having a hard time making progress.
I have a feeling that such luminous materials and expressions are not possible in visionOS to begin with. If there is a way to achieve this, please let me know.
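To clarify what I am after in code terms: I believe the closest RealityKit equivalent is the emissive properties of PhysicallyBasedMaterial. A sketch (untested, values arbitrary):

import RealityKit
import UIKit

// Sketch of a self-lit material via PhysicallyBasedMaterial's emissive
// properties (my assumption of the closest equivalent to an emission map).
func makeGlowingMaterial() -> PhysicallyBasedMaterial {
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .black)
    material.emissiveColor = .init(color: .cyan)
    // Higher intensity makes the surface appear brighter, though as far as
    // I can tell it does not bloom or cast light onto nearby objects.
    material.emissiveIntensity = 2.0
    return material
}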
Thanks.
Hi,
I have a question.
In visionOS, when a user looks at a button and performs a pinch gesture with their index finger and thumb, the button responds. By default, this works with both the left and right hands. However, I want to disable the pinch gesture when performed with the left hand while keeping it functional with the right hand.
I understand that the system settings allow users to configure input for both hands, the left hand only, or the right hand only. However, I would like to control this behavior within the app itself.
Is this possible?
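The only lead I have found in the SwiftUI API is the chirality property on spatial events. A sketch of the kind of filtering I have in mind (I have not confirmed that chirality is reported for indirect gaze-and-pinch input):

import SwiftUI

// Sketch: inspect the chirality of spatial events and react only to
// right-hand pinches (assuming chirality is populated for pinches).
struct RightHandOnlyView: View {
    var body: some View {
        Circle()
            .fill(.blue)
            .gesture(
                SpatialEventGesture()
                    .onChanged { events in
                        for event in events where event.chirality == .right {
                            print("Right-hand pinch at \(event.location)")
                        }
                    }
            )
    }
}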
Best regards.
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.”
In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment.
Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same.
If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
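For context, the only mechanism I can think of that would produce such fixed shading is an image-based light that overrides the real-world environment lighting. A sketch of what I mean (the resource name is a placeholder):

import RealityKit

// Sketch: light an entity with a fixed EnvironmentResource so its shading
// no longer follows the real room. "StudioLighting" is a placeholder name.
func applyFixedLighting(to entity: Entity) async throws {
    let environment = try await EnvironmentResource(named: "StudioLighting")
    entity.components.set(ImageBasedLightComponent(source: .single(environment)))
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}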
I’m currently developing a visionOS app that includes an RCP scene with a large USDZ file (around 2GB).
Each time I make adjustments to the CG model in Blender, I export it as USDZ again, place it in the RCP scene, and then build the app using Xcode.
However, because the USDZ file is quite large, the build process takes a long time, significantly slowing down my development speed.
For example, I’d like to know if there are any effective ways to:
Improve overall build performance
Reduce the time between updating the USDZ file and completing the build
Any advice or best practices for optimizing this workflow would be greatly appreciated.
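One workaround I am considering is keeping the large USDZ out of the Reality Composer Pro package and loading it from the app bundle at runtime, so model changes do not trigger an RCP scene rebuild. A sketch (the file name is a placeholder):

import RealityKit

// Sketch: load LargeModel.usdz directly from the app bundle at runtime
// instead of embedding it in the RCP scene. "LargeModel" is a placeholder.
func loadLargeModel() async throws -> Entity {
    try await Entity(named: "LargeModel")
}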
Best regards,
Sadao
I have two questions regarding releasing an app that uses an in-app browser (WKWebView) on the App Store worldwide.
Question 1: Encryption usage
Our app uses WKWebView and relies on standard encryption. Should this be declared as using encryption during the App Store submission?
Question 2: If the answer to Question 1 is YES
If it must be declared as using encryption, do we need to prepare and upload additional documentation when submitting the app in France?
Also, would this require us to redo the entire build and upload process, even for an app version that has already been uploaded?
Goal / request:
We want to release an app using WKWebView worldwide, including France. We would like to understand all the necessary steps and requirements for completing the App Store release without unexpected rework.
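For reference, my current understanding is that an app using only standard encryption can declare this once with the ITSAppUsesNonExemptEncryption key in Info.plist, which also avoids the export-compliance question on each upload; I am unsure whether this alone satisfies the French requirements:

<key>ITSAppUsesNonExemptEncryption</key>
<false/>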
Best regards,
P.S.: A similar question was posted a few years ago, but it seems there was no response.
https://developer.apple.com/forums/thread/725047
Sadao
Hi,
I have one question: you explained Shared Space at the beginning of the session video, but I didn't really understand it.
Is this Shared Space like the Dock on a Mac?
Are applications placed in the Shared Space, and is the basic operation to launch an application that has been placed there? Why is the word "Shared" used? Is there some sharing functionality?
"By default, apps launch into Shared Space."
What does "by default" mean here? What is the non-default state?
"People remain connected to their surroundings through passthrough."
What does the above mean on visionOS?
By the way, are the applications that run in the Shared Space things like the Clock app, or does Safari also run in the Shared Space?
What kind of applications can only run in a Full Space?
I don't yet have a clear picture of the role each of these features plays on visionOS.
If possible, it would be easier to understand if there were actual images of applications running, not just diagrams.
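To show where my current understanding stands, here is how I picture the Shared Space versus a Full Space in code (a sketch; please correct me if this is wrong):

import SwiftUI

struct SpacesExampleApp: App {
    var body: some Scene {
        // My understanding: a plain window launches into the Shared Space,
        // side by side with other apps' windows.
        WindowGroup {
            Text("Hello, Shared Space")
        }
        // My understanding: an ImmersiveSpace is a Full Space that hides
        // other apps while it is open.
        ImmersiveSpace(id: "full") {
            // RealityView 3D content would go here.
        }
    }
}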
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
"Volumes allow an app to display 3D content in defined bounds, sharing the space with other apps."
What does it mean that volumes can share the space with other apps? What are the benefits of being able to do this?
Do you mean Shared Space?
I don't understand Shared Space very well to begin with.
"they can be viewed from different angles."
Does this simply mean that because the content is 3D and has depth, I can see that depth when I view it from a different angle?
It seems obvious to me because it is 3D content.
Is this related to Volumes?
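For reference, this is how I currently picture a volume in code: a volumetric window that gets a bounded 3D region while still coexisting with other apps in the Shared Space (a sketch):

import SwiftUI
import RealityKit

struct VolumeExampleApp: App {
    var body: some Scene {
        WindowGroup {
            RealityView { content in
                // 3D content is clipped to the volume's bounds.
                let sphere = ModelEntity(
                    mesh: .generateSphere(radius: 0.2),
                    materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
                content.add(sphere)
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}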
Hi,
Will the Apple Vision Pro developer kit be sold only in the US? I live in Japan. Is it possible to get the Apple Vision Pro developer kit in Japan?
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
If I write the gesture sample code from the following document and try to build it, an error occurs.
https://developer.apple.com/documentation/visionos/adding-3d-content-to-your-app
Value of type '(@escaping (TapGesture.Value) -> Void) -> _EndedGesture' has no member 'targetedToAnyEntity'
Is something missing?
Xcode 15.0 beta 2 (15A516b)
visionOS 1.0 beta
import SwiftUI
import RealityKit
import RealityKitContent
struct SphereSecondView: View {
    @State var scale = false
    /*
    var tap: some Gesture {
        TapGesture()
            .onEnded { _ in
                print("Tap")
                scale.toggle()
            }
    }
    */
    var body: some View {
        RealityView { content in
            let model = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .yellow, isMetallic: true)])
            content.add(model)
        } update: { content in
            if let model = content.entities.first {
                model.transform.scale = scale ? [2.0, 2.0, 2.0] : [1.0, 1.0, 1.0]
            }
        }
        .gesture(TapGesture().onEnded
            .targetedToAnyEntity() { _ in
                scale.toggle()
            })
    }
}

#Preview {
    SphereSecondView()
}
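Update: reordering the modifiers so that targetedToAnyEntity() comes before onEnded compiles for me; I assume this is the order the document intended:

.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { _ in
            scale.toggle()
        }
)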
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
I am currently developing a Full Space app. I have a question about how to place an Entity or ModelEntity in front of the user. I want to move the entity in front of the user not only at the initial display, but also when the user takes an action such as tapping (animation is not required). For example, when a reset button is tapped, I want to re-run the initial front-of-user placement.
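For reference, the approach I am experimenting with queries the device (head) pose via ARKit and places the entity a fixed distance along the forward direction. A sketch (assumes a running ARKitSession with a WorldTrackingProvider):

import ARKit
import QuartzCore
import RealityKit

// Sketch: place an entity about 1 m in front of the user's head.
// Assumes `worldTracking` comes from a running ARKitSession.
func placeInFrontOfUser(_ entity: Entity, worldTracking: WorldTrackingProvider) {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
    let transform = device.originFromAnchorTransform
    let position = SIMD3<Float>(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
    // The device's forward direction is -Z in its local frame.
    let forward = -SIMD3<Float>(transform.columns.2.x, transform.columns.2.y, transform.columns.2.z)
    entity.position = position + forward * 1.0
}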
Thanks.
Sadao Tokuyama
https://twitter.com/tokufxug
https://www.linkedin.com/in/sadao-tokuyama/
https://1planet.co.jp/tech-blog/category/applevisionpro
Feedback has been filed, but when viewing Model3D, Entity, ModelEntity, and AR Quick Look content in the latest visionOS simulator, everything appears dimmed.
https://feedbackassistant.apple.com/feedback/13235272
image:
https://ibb.co/GVLBKv7
How do I access the Persona Virtual Camera features from an app? Information about the required permissions or a simple implementation example would be appreciated.
I know this feature is probably only available on an Apple Vision Pro device, but it would be helpful to share information about the Persona Virtual Camera, including whether it works in the visionOS simulator, along with a solid description of how it works. If there is a page or video that explains the Persona Virtual Camera well, please share that as well.
Best Regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
If I apply an alpha image texture to a model created in Blender and run it in Reality Composer Pro or on visionOS, the front-to-back rendering of the transparent areas comes out wrong. Details are below.
I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro.
When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The rendering order of the images becomes incorrect
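If it helps narrow things down: my understanding is that alpha-blended transparency is draw-order dependent, and that switching the material to alpha masking avoids the sorting problem at the cost of hard cutout edges. A sketch in RealityKit (the threshold value is arbitrary):

import RealityKit

// Sketch: use alpha masking instead of alpha blending so transparent
// pixels are discarded outright and no longer depend on draw order.
func applyAlphaMasking(to material: inout PhysicallyBasedMaterial) {
    // Pixels with opacity below the threshold are cut out entirely.
    material.opacityThreshold = 0.5
}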
Best regards.
Hello,
I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode.
Specifically:
Lighting inconsistency: When CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in.
Unnatural transition visuals: During the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural.
IBL adjustment attempts: I’ve tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree.
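For reference, my portal setup looks roughly like this (simplified; names are placeholders, and I may be misusing the API):

import RealityKit

// Simplified sketch of my portal setup. `world` is the entity carrying
// the WorldComponent with the virtual-space content.
func makePortal(world: Entity) -> Entity {
    let portal = Entity()
    portal.components.set(ModelComponent(
        mesh: .generatePlane(width: 1.0, height: 1.0),
        materials: [PortalMaterial()]))
    // crossingMode is the visionOS 2.0 behavior in question.
    portal.components.set(PortalComponent(
        target: world,
        clippingMode: .plane(.positiveZ),
        crossingMode: .plane(.positiveZ)))
    return portal
}

// Entities that should cross between spaces also get:
// entity.components.set(PortalCrossingComponent())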
My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals.
Has anyone else experienced similar issues?
Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0?
Any guidance would be greatly appreciated.
Thank you!