Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic

SKTexture renders SF Symbols image always black
Hi, I'm creating an SF Symbols image like this:
var img = UIImage(systemName: "x.circle", withConfiguration: symbolConfig)!.withTintColor(.red)
In the debugger the image really is red, and I'm using this image to create an SKTexture:
let shuffleTexture = SKTexture(image: img)
The texture image is ALWAYS black and I have no idea how to change its color. Nothing I've tried so far works. Any ideas how to solve this? Thank you! Best Regards, Frank
3 replies · 1 boost · 1.3k views · Aug ’23
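A possible workaround for the SF Symbols post above, sketched under the assumption that the tint is lost because SpriteKit ignores the symbol's template rendering: rasterize the tinted symbol into a plain bitmap with UIGraphicsImageRenderer before creating the texture. texture(forSymbol:tint:config:) is a hypothetical helper name, not an existing API.

import SpriteKit
import UIKit

// Hypothetical helper: bake the tint into the pixels so SpriteKit can't drop it.
func texture(forSymbol name: String, tint: UIColor, config: UIImage.SymbolConfiguration) -> SKTexture? {
    guard let symbol = UIImage(systemName: name, withConfiguration: config)?
        .withTintColor(tint, renderingMode: .alwaysOriginal) else { return nil }
    let renderer = UIGraphicsImageRenderer(size: symbol.size)
    let flattened = renderer.image { _ in
        symbol.draw(in: CGRect(origin: .zero, size: symbol.size))
    }
    return SKTexture(image: flattened)
}

// Usage: let shuffleTexture = texture(forSymbol: "x.circle", tint: .red, config: symbolConfig)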
How to convert `DragGesture().onEnded`'s velocity CGSize to the SIMD3<Float> required in `PhysicsMotionComponent(linearVelocity, angularVelocity)`?
So if I drag an entity in RealityView I have to disable the PhysicsBodyComponent to make sure nothing fights dragging the entity around. This makes sense. When I finish a drag, this closure gets executed:
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { e in
            // ...
        }
        .onEnded { e in
            let velocity: CGSize = e.gestureValue.velocity
        }
)
If I now re-add PhysicsBodyComponent to the entity I just dragged and make it mode: .dynamic, it will lose all velocity and drop straight down through gravity. Instead the solution is to apply mode: .kinematic and also add a PhysicsMotionComponent to the entity. This should retain velocity after letting go of the object. However, I need to instantiate it with PhysicsMotionComponent(linearVelocity: SIMD3<Float>, angularVelocity: SIMD3<Float>). How can I calculate the linearVelocity and angularVelocity when the e.gestureValue.velocity I get is just a CGSize? Is there another property of gestureValue I should be looking at?
1 reply · 0 boosts · 767 views · Aug ’23
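Not a definitive answer to the question above, but a minimal sketch of the heuristic I would try: treat the 2D gesture velocity (points per second) as motion in the view plane and scale it into meters per second with a hand-tuned constant. pointsPerMeter is a made-up tuning value, not an API constant, and the mapping of screen axes onto scene axes is an assumption.

import SwiftUI
import RealityKit

let pointsPerMeter: CGFloat = 1000   // hypothetical tuning constant, adjust by eye

func linearVelocity(from gestureVelocity: CGSize) -> SIMD3<Float> {
    SIMD3<Float>(Float(gestureVelocity.width / pointsPerMeter),
                 Float(-gestureVelocity.height / pointsPerMeter),  // SwiftUI's y axis points down
                 0)                                                // no depth information in a 2D drag
}

// In .onEnded:
// let v = linearVelocity(from: e.gestureValue.velocity)
// entity.components.set(PhysicsMotionComponent(linearVelocity: v, angularVelocity: .zero))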
Metal cpp compile errors
Hi, I'm trying to use metal-cpp, but I get compile errors:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162): template <class _Class> _NS_INLINE NS::SharedPtr<_Class>::~SharedPtr() { if (m_pObject) { m_pObject->release(); } }
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149): template <class _Dst> _NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj) { #ifdef __OBJC__ return (__bridge _Dst)pObj; #else return (_Dst)pObj; #endif // __OBJC__ }
The Xcode project was generated using CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME} PRIVATE "-Wgnu-anonymous-struct" "-Wold-style-cast" "-Wdtor-name" "-Wpedantic" "-Wno-gnu")
Maybe I need to set some CMake flags for the C++ compiler?
3 replies · 0 boosts · 1.3k views · Sep ’23
How to attach point cloud (or depth data) to HEIC?
I'm developing a 3D scanner that works on iPad. I'm using AVCapturePhoto and PhotogrammetrySession. My photo capture delegate looks like this:
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [.auxiliaryDepth: true, .properties: photo.metadata])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [.avDepthData: depthData])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
But the Photogrammetry session spits out warning messages: Sample 0 missing LiDAR point cloud! Sample 1 missing LiDAR point cloud! Sample 2 missing LiDAR point cloud! Sample 3 missing LiDAR point cloud! Sample 4 missing LiDAR point cloud! Sample 5 missing LiDAR point cloud! Sample 6 missing LiDAR point cloud! Sample 7 missing LiDAR point cloud! Sample 8 missing LiDAR point cloud! Sample 9 missing LiDAR point cloud! Sample 10 missing LiDAR point cloud! The session creates a USDZ 3D model, but the scale is not correct. I think the point cloud could help the Photogrammetry session find the right scale, but I don't know how to attach it.
3 replies · 2 boosts · 1.2k views · Nov ’23
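Not a confirmed fix for the post above, but one direction worth noting: on platforms where PhotogrammetrySession accepts a sequence of PhotogrammetrySample values, the depth map can be passed in memory instead of being embedded in the HEIC file. The sketch below assumes PhotogrammetrySample is available to you, and that depth alone (rather than a LiDAR point cloud) is enough to recover scale, which may not silence the warnings.

import RealityKit
import AVFoundation

// Hypothetical helper: build a sample that carries depth alongside the color image.
func makeSample(id: Int, from photo: AVCapturePhoto) -> PhotogrammetrySample? {
    guard let pixelBuffer = photo.pixelBuffer else { return nil }
    var sample = PhotogrammetrySample(id: id, image: pixelBuffer)
    sample.depthDataMap = photo.depthData?
        .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        .depthDataMap
    return sample
}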
Animate RealityKit Component Property
Is it possible to animate a property on a RealityKit component? For example, the OpacityComponent has an opacity property that lets you modify the opacity of the entities it's attached to. I would like to animate that property so the entity fades in and out. I've been looking at the animation API for RealityKit, and it either assumes the animation is coming from a USDZ (which this is not), or it allows properties of entities themselves to be animated using a BindTarget. I'm not sure how either can be adapted to modify component properties. Am I missing something? Thanks!
3 replies · 0 boosts · 1.4k views · Jan ’24
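There may be a built-in bind target for this by now, but one workaround I'm fairly confident about for the post above is to drive the component value yourself from a custom System. FadeComponent and FadeSystem below are hypothetical names, not RealityKit API; register both (FadeComponent.registerComponent(), FadeSystem.registerSystem()) before use.

import RealityKit

// Hypothetical fade state attached to any entity that should animate its opacity.
struct FadeComponent: Component {
    var target: Float          // opacity value to fade towards
    var speed: Float = 1.0     // opacity units per second
}

struct FadeSystem: System {
    static let query = EntityQuery(where: .has(FadeComponent.self) && .has(OpacityComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let fade = entity.components[FadeComponent.self],
                  var opacity = entity.components[OpacityComponent.self] else { continue }
            let step = fade.speed * Float(context.deltaTime)
            let remaining = fade.target - opacity.opacity
            if abs(remaining) <= step {
                opacity.opacity = fade.target                 // reached the target, stop fading
                entity.components.remove(FadeComponent.self)
            } else {
                opacity.opacity += remaining > 0 ? step : -step
            }
            entity.components.set(opacity)
        }
    }
}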
Animated USD file | Starting position of the USD is not the same as first frame of animation
Hello all, I am building for visionOS with another engineer and using Reality Composer Pro to validate USD files. The starting position of my animated usdz (its position when it's first loaded) is not the same as the first frame of the animation on the usdz file. For testing, I am using the AR Quick Look asset 'toy_biplane_idle.usdz', which demonstrates the same 'error' we're currently getting with our own usdz files. When the usdz is loaded, it sits on the ground plane, but when the animation is played, the plane 'snaps' to the position of the first frame of the animation. This snapping behavior is giving us problems. We want the user to see the plane in its static 'load' position with the option to play the animation, but we don't want it to snap when the user presses play. Is it possible to load the .usdz in the position specified by the first frame of the animation? What is the best way to fix this issue? Thanks!
1 reply · 0 boosts · 864 views · Jan ’24
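For the snapping issue above, one mitigation worth trying (a sketch, not a confirmed answer) is to start the animation immediately but paused at time zero, so the visible pose already matches the animation's first frame while the user decides whether to press play.

import RealityKit

func prepareBiplane(_ entity: Entity) {
    guard let animation = entity.availableAnimations.first else { return }
    // Play paused so the entity holds the animation's first frame instead of its rest pose.
    let controller = entity.playAnimation(animation, startsPaused: true)
    controller.time = 0
    // Later, when the user taps play:
    // controller.resume()
}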
PortalComponent – allow world content to peek out
Hello, I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?). I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it. If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this. Thanks for any hints!
5 replies · 0 boosts · 1.5k views · Feb ’24
How can I simultaneously apply the drag gesture to multiple entities?
I wanted to drag EntityA while also dragging EntityB independently. I've tried to separate them by entity, but only the latest drag gesture is recognized:
RealityView { content, attachments in ... }
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in ... }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in ... }
)
I also tried using simultaneously(with:), but that didn't work either; maybe I'm missing something:
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in ... }
        .simultaneously(with:
            DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in ... }
        )
)
1 reply · 1 boost · 606 views · Feb ’24
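One pattern that has worked for me with the question above, sketched under the assumption that both entities already have CollisionComponent and InputTargetComponent: attach a single gesture targeted to any entity and branch on which entity was actually hit, instead of stacking per-entity gestures. It won't track two drags at literally the same instant, but each entity becomes individually draggable.

import SwiftUI
import RealityKit

struct TwoEntityDragView: View {
    let entityA: Entity
    let entityB: Entity

    var body: some View {
        RealityView { content in
            content.add(entityA)
            content.add(entityB)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // One gesture handles both entities; branch on the hit entity.
                    if value.entity === entityA {
                        // update entityA's position from value
                    } else if value.entity === entityB {
                        // update entityB's position from value
                    }
                }
        )
    }
}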
physx cache crash using generated static collision with many entities
I'm using RealityKit for a scene with many static and dynamic ModelEntities simulating physics. When all the entities have simple collision generated from .generateCollisionShapes I don't see any issues, but some entities need much more complex and accurate collision. For those I've been using ShapeResource.generateStaticMesh with the mesh's data (2769 positions, 16272 face indices in this case), which works exactly as desired at a low entity count. However, once there are 600+ dynamic entities, introducing even one static entity with complex collision will reliably trigger a crash when it collides with one of the dynamic entities (not necessarily on first contact, but inevitably after multiple collisions). If I arbitrarily limit the number of entities to a maximum of around 500 it seems to prevent the issue, though the likelihood seems to increase with the number of entities, so there may be a low probability of it triggering even at 500 entities that I haven't hit while testing. If PhysX imposes some kind of entity or collision face/shape limit, I'd at least like to know exactly what it is, but ideally there's a way to work around this. Right now my "fix" is just arbitrarily restricting the entity count in a way that limits what my app can do. The crash triggers at 0x00000001a6790dfc in physx::PxcDiscreteNarrowPhasePCM(physx::PxcNpThreadContext&, physx::PxcNpWorkUnit const&, physx::Gu::Cache&, physx::PxsContactManagerOutput&), which looks like this (the crashing line is marked with an -> arrow at the bottom):
CoreRE`physx::PxcDiscreteNarrowPhasePCM:
...
0x1a6790df0 <+668>: mov x1, x24
0x1a6790df4 <+672>: bl 0x1a67913d8 ; physx::PxcNpCacheStreamPair::reserve(unsigned int)
0x1a6790df8 <+676>: ldrb w8, [x23]
-> 0x1a6790dfc <+680>: str w8, [x0, #0x20]
2 replies · 0 boosts · 523 views · Mar ’24
Error: apple/apple/game-porting-toolkit 1.1 did not build
My M1 MacBook Air has macOS Sonoma 14.3.1 installed, and I tried to install the game-porting-toolkit tonight. At the step that requires entering the command "brew -v install apple/apple/game-porting-toolkit", Terminal ran for minutes, but at the end this error appeared: Error: apple/apple/game-porting-toolkit 1.1 did not build. I don't know anything about coding and software. Could someone please tell me what caused this error and how to fix it? I will appreciate your help!
32 replies · 8 boosts · 18k views · Mar ’24
Disable Foveation for ImmersiveSpace?
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene. Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
1 reply · 1 boost · 580 views · Mar ’24
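I'm not aware of a way to turn foveation off for a plain RealityKit ImmersiveSpace; the only knob I know of is the CompositorLayer route the post above already mentions. A minimal sketch of that route, in case it helps:

import SwiftUI
import CompositorServices

struct NoFoveationConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        // Explicitly opt out of foveated rendering for this layer.
        configuration.isFoveationEnabled = false
    }
}

// ImmersiveSpace(id: "metalSpace") {
//     CompositorLayer(configuration: NoFoveationConfiguration()) { layerRenderer in
//         // drive your own Metal render loop with layerRenderer
//     }
// }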
Guideline 4.3(a) Original game misinterpreted as Spam
Hello, our game Nerd Survivors has been identified as spam. The game is based on an original IP, and the only games that should share any resemblance with it are from our company. Today we were notified of the rejection due to a violation of Guideline 4.3, but there is no reference to the other application or even a contact for the other developer. We do use Unity as our game engine, so some parts of the code might be shared across different games, but I cannot find any other justification for the rejection.
"Guideline 4.3(a) - Design - Spam
We noticed your app shares a similar binary, metadata, and/or concept as apps submitted to the App Store by other developers, with only minor differences. Submitting similar or repackaged apps is a form of spam that creates clutter and makes it difficult for users to discover new apps."
1 reply · 0 boosts · 989 views · May ’24
Specific barcode is not recognized
Hi, I'm facing a problem where I cannot scan a specific Code 39 barcode with the Vision framework. We have multiple barcodes on a label and almost all of the Code 39 barcodes can be scanned, but not this specific one. One more piece of information: the barcode that Vision does not recognize can be read by a general barcode scanner. Has anyone faced a similar situation? Is there a particular condition (size, intensity, etc.) that makes a barcode hard to scan with Vision? Regards,
1 reply · 0 boosts · 828 views · May ’24
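Not a guaranteed fix for the scanning question above, but one thing worth trying, sketched below: constrain the request to Code 39 so Vision doesn't split its effort across other symbologies. The helper name is made up for illustration.

import Vision

// Hypothetical helper: restrict detection to Code 39 and return the decoded payloads.
func detectCode39(in cgImage: CGImage) throws -> [String] {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.code39]   // narrow the search to the symbology we expect
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.compactMap { $0.payloadStringValue } ?? []
}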
iOS issues... Guideline 4.3(a) - Design - Spam
Hey everyone, My game got rejected due to Guideline 4.3. Here's the gameplay video: https://www.youtube.com/watch?v=Vb6uBUsOSDg Does anyone have experience dealing with Guideline 4.3 and can share some insights? I already appealed, explaining that my game has online multiplayer battles, but no response yet. Feeling frustrated after spending six months on this game. Any advice or suggestions would be appreciated. Thanks!
1 reply · 0 boosts · 1.1k views · May ’24
Using the same texture for both input & output of a fragment shader
Hello, This exact question was already asked in this forum (8 years ago), but I can't find a definitive answer: does Metal allow using the same color texture as both an input and an output (color attachment) of a fragment shader? Is the behavior defined somewhere? I believe this results in undefined behavior under both DirectX and OpenGL, so I'd assume the same for Metal, but then why doesn't Metal warn me about this as it does about so many other "misconfigurations"? It also seems to work correctly in my case, as I found out by accident. Would love to get a clarification! Thanks ahead!
8 replies · 1 boost · 1.1k views · May ’24
[CAMetalLayer nextDrawable] returning nil because allocation failed.
Why do I get this error almost immediately on starting my rendering pass?
2024-05-29 20:02:22.744035-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
2024-05-29 20:02:22.744455-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
2024-05-29 20:05:54.079981-0500 RoomPlanExampleApp[491:10025] [CAMetalLayer nextDrawable] returning nil because allocation failed.
2024-05-29 20:05:54.080144-0500 RoomPlanExampleApp[491:10341] [] <<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
7 replies · 1 boost · 2k views · May ’24
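The exact cause can't be diagnosed from the log above, but here is a hedged sketch of the frame structure that usually avoids this error: keep each drawable's lifetime inside an autoreleasepool and never retain it past the frame, since the layer only has a small fixed pool of drawables and nextDrawable() fails once they are all held.

import Metal
import QuartzCore

func renderFrame(layer: CAMetalLayer, queue: MTLCommandQueue) {
    autoreleasepool {
        // nextDrawable() can return nil when every drawable in the pool is still in flight.
        guard let drawable = layer.nextDrawable(),
              let commandBuffer = queue.makeCommandBuffer() else { return }
        // ... encode the render pass against drawable.texture here ...
        commandBuffer.present(drawable)
        commandBuffer.commit()
        // Do not store `drawable` anywhere that outlives this frame.
    }
}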
Does anyone know if HDR video is supported in a RealityView?
I have attempted to use VideoMaterial with HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial. I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float. I don't believe it's displaying HDR properly or behaving like a raw AVPlayer. Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough? Is expecting the 64RGBAHalf capture to handle HDR properly a mistake and should I capture YUV and do the conversion myself? Thank you
7 replies · 0 boosts · 1.4k views · Jun ’24
Placing text over images
What is the best way to display text over images? I'd like the image to fade to white underneath the text so that the text is easier to read, since I have no control over the contents of the images. I thought about having a second label behind the actual label with the same text in a slightly larger font and white color, but I'd rather have a gradual fading of the image just under the text than what looks like 3D text. Any suggestions?
3 replies · 0 boosts · 984 views · Jun ’24
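A minimal SwiftUI sketch of the "fade to white under the text" idea from the post above (assuming SwiftUI; the same effect in UIKit could use a CAGradientLayer): a white-to-clear gradient sits between the image and the label so the text stays readable regardless of the photo.

import SwiftUI

struct CaptionedImage: View {
    let image: Image
    let caption: String

    var body: some View {
        image
            .resizable()
            .scaledToFill()
            .overlay(alignment: .bottom) {
                Text(caption)
                    .padding()
                    .frame(maxWidth: .infinity)
                    .background(
                        // The image fades to white just underneath the text.
                        LinearGradient(colors: [.white.opacity(0.9), .white.opacity(0.0)],
                                       startPoint: .bottom, endPoint: .top)
                    )
            }
            .clipped()
    }
}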
Wrong hitTest results in iOS 17.2
We’re experiencing an issue with wrong SceneKit hit-testing results in iOS 17.2 compared with iOS 16.1 when using either the Metal or the OpenGL ES 2 rendering engine. Tapping on a 3D model to place an SCNNode:
// pointInScene: tapped point
let hitResults = sceneView.hitTest(pointInScene, options: nil)
return hitResults.first { $0.node.name?.compare("node_name") == .orderedSame }
3 replies · 0 boosts · 982 views · Jun ’24
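Not a fix for the regression itself, but a hedged workaround sketch for the hit-testing post above: broaden the search so every intersection is returned, then filter for the node yourself rather than relying on the default closest-hit ordering. The helper name is hypothetical.

import SceneKit

func placementHit(in sceneView: SCNView, at pointInScene: CGPoint) -> SCNHitTestResult? {
    let options: [SCNHitTestOption: Any] = [
        .searchMode: SCNHitTestSearchMode.all.rawValue,  // return all hits, not just the nearest
        .ignoreHiddenNodes: true
    ]
    let hitResults = sceneView.hitTest(pointInScene, options: options)
    return hitResults.first { $0.node.name?.compare("node_name") == .orderedSame }
}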