
RealityKit 3D texture
In my Metal-based app, I ray-march a 3D texture. I'd like to use RealityKit instead of my own code. I see there is a LowLevelTexture (beta) where I could specify a 3D texture. However, on the Metal side there doesn't seem to be any way to access a 3D texture (realitykit::texture::textures::custom returns a texture2d). Any workarounds? Could I even do something icky like cast the texture2d to a texture3d in MSL (is that even possible)? Could I encode the 3D texture into an argument buffer and get it in that way?
Replies: 1 · Boosts: 0 · Views: 736 · Aug ’24
Can an SDF be rendered using RealityKit?
I'm trying to ray-march an SDF inside a RealityKit surface shader. For the SDF primitive to render correctly with other primitives, the fragment's depth needs to be set according to the ray-surface intersection point. Is there a way to do that within a RealityKit surface shader? It seems the only values I can set are within surface::surface_properties. If not, can an SDF still be rendered in RealityKit using ray-marching?
Replies: 1 · Boosts: 1 · Views: 760 · Sep ’24
pro app rejected for 2.3.2
"Specifically, your App Description and screenshot references paid features but does not inform users that a purchase is required to access this content." My App Description (Pro 3D art app) doesn't mention that the entire app is a subscription. I didn't think I needed to because Final Cut Pro and Logic Pro don't do that either. Anyone had experience with this? Is there a double-standard or did App Review just make a mistake? Suppose I can add some language at the end of the App Description like "All Features unlocked with subscription"
Replies: 1 · Boosts: 0 · Views: 535 · Sep ’24
document-based sample code doesn't work... workaround?
I just tried the "Building a document-based app with SwiftUI" sample code for iOS 18: https://developer.apple.com/documentation/swiftui/building-a-document-based-app-with-swiftui I can create a document and then close it. But once I open it back up, I can't navigate back to the document browser. It also struggles to open documents (I tap multiple times and nothing happens). This happens on both simulator and device. I'll file a bug, but does anyone know of a workaround? I can't use a document browser that is this broken.
Replies: 1 · Boosts: 1 · Views: 430 · Nov ’24
how to get a null acceleration structure w/o triggering an API validation error
I want to turn off my ray-tracing conditionally. There's is_null_acceleration_structure, but when I don't bind an acceleration structure (or pass nil to setFragmentAccelerationStructure), I get the following API validation error:

```
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5782: failed assertion
`Draw Errors Validation
Fragment Function(vol_deferred_lighting): missing instanceAccelerationStructure
binding at index 6 for accelerationStructure[0].
```

I can turn off API validation and it works, but it seems like I should be able to use nil for the acceleration structure without triggering a validation error. Seems like a bug, right?

I suppose I can work around this by creating a separate pipeline with the ray-tracing disabled via a function constant, instead of using is_null_acceleration_structure.

(Can we get a ray-tracing tag for questions?)
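A sketch of that function-constant workaround in MSL (the function name and buffer index come from the error message above; everything else is illustrative). Specializing the pipeline with the constant set to false compiles out the binding entirely, so nothing needs to be bound:

```metal
#include <metal_stdlib>
using namespace metal;
using namespace raytracing;

// Compile-time switch: specialize one pipeline variant with this set to false
// so the fragment function never references the acceleration structure.
constant bool useRayTracing [[function_constant(0)]];

fragment float4 vol_deferred_lighting(
    instance_acceleration_structure accel
        [[buffer(6), function_constant(useRayTracing)]])
{
    float shadow = 1.0f;
    if (useRayTracing) {
        // build an intersector and query `accel` here
    }
    return float4(shadow);
}
```

The argument marked with [[function_constant(...)]] is only present in variants where the constant is true, which is what avoids the missing-binding assertion.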
Replies: 1 · Boosts: 0 · Views: 540 · Nov ’24
can't get Xcode not to build x86_64 for Swift Packages
I'm trying to improve my build time on macOS by not building for x86_64. I've got the following settings:

This gets Xcode not to build x86_64 for my app, but not for all the package dependencies. I've updated most of the packages to swift-tools-version: 6.0, but FlatBuffers is still on 5.8 and .macOS(.v10_14). GPT claims: "If your deployment target is set to macOS 10.15 or earlier, Xcode may force x86_64 support for compatibility reasons." But Xcode is building x86_64 for ALL my packages, even the ones that don't depend on FlatBuffers. And when I open a package that depends on FlatBuffers directly in Xcode, it builds arm64 only, so that may be a red herring. Not sure what else to try.
Replies: 1 · Boosts: 0 · Views: 307 · Mar ’25
testing multichannel AudioUnit output with AVAudioEngine
I'm extending an AudioUnit to generate multi-channel output and trying to write a unit test using AVAudioEngine. My test installs a tap on the AVAudioNode's output bus and ensures the output is not silence. This works for stereo. I've currently got:

```objc
auto avEngine = [[AVAudioEngine alloc] init];
[avEngine attachNode:avAudioUnit];
auto format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.
                                                             channels:channelCount];
[avEngine connect:avAudioUnit to:avEngine.mainMixerNode format:format];
```

where avAudioUnit is my AU. It seems I need to do more than simply set the channel count on the connection format, because after this code, [avAudioUnit outputFormatForBus:0].channelCount is still 2. Printing the graph yields:

```
AVAudioEngineGraph 0x600001e0a200: initialized = 1, running = 1, number of nodes = 3

******** output chain ********

node 0x600000c09a80 {'auou' 'ahal' 'appl'}, 'I'
  inputs = 1
    (bus0, en1) <- (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

node 0x600000c09e00 {'aumx' 'mcmx' 'appl'}, 'I'
  inputs = 1
    (bus0, en1) <- (bus0) 0x600000c14300, {'augn' 'brnz' 'brnz'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
  outputs = 1
    (bus0, en1) -> (bus0) 0x600000c09a80, {'auou' 'ahal' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

node 0x600000c14300 {'augn' 'brnz' 'brnz'}, 'I'
  outputs = 1
    (bus0, en1) -> (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
```

So AVAudioEngine silently ignores whatever channel count I pass to it. If I do:

```objc
auto numHardwareOutputChannels =
    [avEngine.outputNode outputFormatForBus:0].channelCount;
NSLog(@"hardware output channels %d\n", numHardwareOutputChannels);
```

I get 30, because I have an audio interface connected. So I would think AVAudioEngine would support this. I've also tried setting the format explicitly on the connection between the mainMixerNode and the outputNode, to no avail.
Replies: 0 · Boosts: 2 · Views: 1.6k · Jun ’22
"Linking two modules of different data layouts"
I'm getting this error when using fragmentLinkedFunctions in Metal:

```
Compiler failed to build request exception: Error Domain=CompilerError Code=2
"Linking two modules of different data layouts: '' is '' whereas '1' is
'e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v16:16:16-v24:32:32-v32:32:32-v48:64:64-v64:64:64-v96:128:128-v128:128:128-v192:256:256-v256:256:256-v512:512:512-v1024:1024:1024-n8:16:32'
SC compilation failure
More boolean const than hw allows"
```

(The UserInfo NSLocalizedDescription repeats the same message.)

Anyone know what that all means? If I replace the body of my intersection function with just `return {false, 0.0f};`, I get only the "More boolean const than hw allows" part.
Replies: 0 · Boosts: 0 · Views: 943 · Oct ’22
is the MPSDynamicScene example correctly computing the motion vector texture?
I'm trying to implement de-noising of AO in my app, using the MPSDynamicScene example as a guide: https://developer.apple.com/documentation/metalperformanceshaders/animating_and_denoising_a_raytraced_scene

In that example, motion vectors are computed in UV coordinates, resulting in very small values:

```metal
// Compute motion vectors
if (uniforms.frameIndex > 0) {
    // Map current pixel location to 0..1
    float2 uv = in.position.xy / float2(uniforms.width, uniforms.height);

    // Unproject the position from the previous frame then transform it from
    // NDC space to 0..1
    float2 prevUV = in.prevPosition.xy / in.prevPosition.w * float2(0.5f, -0.5f) + 0.5f;

    // Next, remove the jittering which was applied for antialiasing from both
    // sets of coordinates
    uv -= uniforms.jitter;
    prevUV -= prevUniforms.jitter;

    // Then the motion vector is simply the difference between the two
    motionVector = uv - prevUV;
}
```

Yet the documentation for MPSSVGF seems to indicate the offsets should be expressed in texels:

"The motion vector texture must be at least a two channel texture representing how many texels each texel in the source image(s) have moved since the previous frame. The remaining channels will be ignored if present. This texture may be nil, in which case the motion vector is assumed to be zero, which is suitable for static images."

Is this a mistake in the example code? I'm asking because doing something similar in my own app leaves AO trails, which would indicate the motion vector texture values are too small in magnitude. I don't really see trails in the example, even when I speed up the animation, but that could be due to the example being monochrome.

Update: If I multiply the UV offsets by the size of the texture, I get a bad result, which seems to indicate the header is misleading and they are in fact in UV coordinates. So perhaps the trails I'm seeing in my app have some other cause. I also wonder who is actually using this API other than me? I would think most game engines are doing their own thing. Perhaps some of Apple's own code uses it.
Replies: 0 · Boosts: 1 · Views: 780 · Aug ’23
Does ModelIO export materials to USD?
I've got the following code to generate an MDLMaterial from my own material data model:

```swift
public extension MaterialModel {
    var mdlMaterial: MDLMaterial {
        let f = MDLPhysicallyPlausibleScatteringFunction()
        f.metallic.floatValue = metallic
        f.baseColor.color = CGColor(red: CGFloat(color.x),
                                    green: CGFloat(color.y),
                                    blue: CGFloat(color.z),
                                    alpha: 1.0)
        f.roughness.floatValue = roughness
        return MDLMaterial(name: name, scatteringFunction: f)
    }
}
```

When exporting to OBJ, I get the expected material properties:

```
# Apple ModelI/O MTL File: testExport.mtl

newmtl material_1
Kd 0.163277 0.0344635 0.229603
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 0
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0

newmtl material_2
Kd 0.814449 0.227477 0.124541
Ka 0 0 0
Ks 0
ao 0
subsurface 0
metallic 0
specularTint 0
roughness 1
anisotropicRotation 0
sheen 0.05
sheenTint 0
clearCoat 0
clearCoatGloss 0
```

However when exporting USD I just get:

```
#usda 1.0
(
    defaultPrim = "_0"
    endTimeCode = 0
    startTimeCode = 0
    timeCodesPerSecond = 60
    upAxis = "Y"
)

def Xform "Obj0"
{
    def Mesh "_"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(896, 896, 896), (1152, 1152, 1148.3729)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
    }

    def Mesh "_0"
    {
        uniform bool doubleSided = 0
        float3[] extent = [(898.3113, 896.921, 1014.4961), (1082.166, 1146.7178, 1152)]
        int[] faceVertexCounts = ...
        int[] faceVertexIndices = ...
        point3f[] points = ...
        matrix4d xformOp:transform = ( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )
        uniform token[] xformOpOrder = ["xformOp:transform"]
    }
}
```

There aren't any material properties. FWIW, this specifies a set of common material parameters for USD: https://openusd.org/release/spec_usdpreviewsurface.html

(Note: there is no tag for ModelIO, so using SceneKit, etc.)
Replies: 0 · Boosts: 0 · Views: 871 · Sep ’23
SwiftUI crash in AG::swift::existential_type_metadata::project_value
Anyone have a sense of what could cause this? Running on iOS 17.0.2. This seems to be a regression in iOS 17.

```
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x100ad4437fff8)
  * frame #0: 0x00000001ca2264ec AttributeGraph`AG::swift::existential_type_metadata::project_value(void const*) const + 40
    frame #1: 0x00000001ca2349a8 AttributeGraph`AG::LayoutDescriptor::compare_existential_values(AG::swift::existential_type_metadata const*, unsigned char const*, unsigned char const*, unsigned int) + 108
    frame #2: 0x00000001ca21b938 AttributeGraph`AG::LayoutDescriptor::Compare::operator()(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 560
    frame #3: 0x00000001ca21b9b8 AttributeGraph`AG::LayoutDescriptor::Compare::operator()(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 688
    frame #4: 0x00000001ca21b674 AttributeGraph`AG::LayoutDescriptor::compare(unsigned char const*, unsigned char const*, unsigned char const*, unsigned long, unsigned int) + 96
    frame #5: 0x00000001ca21afb0 AttributeGraph`AGGraphSetOutputValue + 268
    frame #6: 0x00000001a7bdd924 SwiftUI`___lldb_unnamed_symbol227590 + 72
    frame #7: 0x00000001a6ce9194 SwiftUI`___lldb_unnamed_symbol111702 + 20
    frame #8: 0x000000019bca3994 libswiftCore.dylib`Swift.withUnsafePointer<τ_0_0, τ_0_1>(to: inout τ_0_0, _: (Swift.UnsafePointer<τ_0_0>) throws -> τ_0_1) throws -> τ_0_1 + 28
    frame #9: 0x00000001a6c6d70c SwiftUI`___lldb_unnamed_symbol110270 + 1592
    frame #10: 0x00000001a7bdeb3c SwiftUI`___lldb_unnamed_symbol227617 + 408
    frame #11: 0x00000001a7bde698 SwiftUI`___lldb_unnamed_symbol227614 + 876
    frame #12: 0x00000001a7619cfc SwiftUI`___lldb_unnamed_symbol184045 + 32
    frame #13: 0x00000001ca21e854 AttributeGraph`AG::Graph::UpdateStack::update() + 512
    frame #14: 0x00000001ca215504 AttributeGraph`AG::Graph::update_attribute(AG::data::ptr<AG::Node>, unsigned int) + 424
    frame #15: 0x00000001ca21ff58 AttributeGraph`AG::Subgraph::update(unsigned int) + 848
    frame #16: 0x00000001a7a621d4 SwiftUI`___lldb_unnamed_symbol216794 + 384
    frame #17: 0x00000001a7a63610 SwiftUI`___lldb_unnamed_symbol216852 + 24
    frame #18: 0x00000001a710a638 SwiftUI`___lldb_unnamed_symbol143862 + 28
    frame #19: 0x00000001a7b55a0c SwiftUI`___lldb_unnamed_symbol223201 + 108
    frame #20: 0x00000001a7b481f4 SwiftUI`___lldb_unnamed_symbol223031 + 96
    frame #21: 0x00000001a710187c SwiftUI`___lldb_unnamed_symbol143639 + 84
    frame #22: 0x00000001a7a635d8 SwiftUI`___lldb_unnamed_symbol216851 + 200
    frame #23: 0x00000001a7a634c4 SwiftUI`___lldb_unnamed_symbol216850 + 72
    frame #24: 0x00000001a74514c0 SwiftUI`___lldb_unnamed_symbol170645 + 28
    frame #25: 0x00000001a6d196d4 SwiftUI`___lldb_unnamed_symbol114472 + 120
    frame #26: 0x00000001a6d19780 SwiftUI`___lldb_unnamed_symbol114473 + 72
    frame #27: 0x00000001a490ad94 UIKitCore`_UIUpdateSequenceRun + 84
    frame #28: 0x00000001a490a484 UIKitCore`schedulerStepScheduledMainSection + 144
    frame #29: 0x00000001a490a540 UIKitCore`runloopSourceCallback + 92
    frame #30: 0x00000001a2684acc CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28
    frame #31: 0x00000001a2683d48 CoreFoundation`__CFRunLoopDoSource0 + 176
    frame #32: 0x00000001a26824fc CoreFoundation`__CFRunLoopDoSources0 + 244
    frame #33: 0x00000001a2681238 CoreFoundation`__CFRunLoopRun + 828
    frame #34: 0x00000001a2680e18 CoreFoundation`CFRunLoopRunSpecific + 608
    frame #35: 0x00000001e51415ec GraphicsServices`GSEventRunModal + 164
    frame #36: 0x00000001a4a8f350 UIKitCore`-[UIApplication _run] + 888
    frame #37: 0x00000001a4a8e98c UIKitCore`UIApplicationMain + 340
    frame #38: 0x00000001a7457354 SwiftUI`___lldb_unnamed_symbol171027 + 176
    frame #39: 0x00000001a7457198 SwiftUI`___lldb_unnamed_symbol171025 + 152
    frame #40: 0x00000001a70d4434 SwiftUI`___lldb_unnamed_symbol142421 + 128
```
Replies: 0 · Boosts: 0 · Views: 692 · Oct ’23
compiling boost for iOS simulator on apple silicon
I'm trying to get boost to compile for the iOS simulator on my M2 Mac. I've got this script:

```shell
set -euxo pipefail

# See https://formulae.brew.sh/formula/boost
# See https://stackoverflow.com/questions/1577838/how-to-build-boost-libraries-for-iphone

wget https://boostorg.jfrog.io/artifactory/main/release/1.83.0/source/boost_1_83_0.tar.bz2
tar zxf boost_1_83_0.tar.bz2
mv boost_1_83_0 boost

root=`pwd`
cd boost

B2_ARGS="-a -j12 --with-iostreams --with-regex"

# Build for simulator
./bootstrap.sh --prefix=$root/install-ios-sim

IOSSIM_SDK_PATH=$(xcrun --sdk iphonesimulator --show-sdk-path)

cat << EOF >> project-config.jam
# IOS Arm Simulator
using clang : iphonesimulatorarm64 : xcrun clang++ -arch arm64 -stdlib=libc++ -std=c++20 -miphoneos-version-min=16.0 -fvisibility-inlines-hidden -target arm64-apple-ios16.0-simulator -isysroot $IOSSIM_SDK_PATH ;
EOF

./b2 $B2_ARGS --prefix=$root/install-ios-sim toolset=clang-iphonesimulatorarm64 link=static install
```

xcodebuild -create-xcframework thinks ./install-ios-sim/libboost_iostreams.a is not for the simulator. Specifically, if you run the following after the build script, it will show the binary is ios-arm64:

```shell
xcodebuild -create-xcframework \
  -library install-ios-sim/lib/libboost_iostreams.a \
  -headers install-ios-sim/include \
  -output boost.xcframework
```

I know how to use lipo, etc. to determine the architecture of a library, but I don't know how create-xcframework differentiates a simulator binary from an iOS binary.

Note: I've also tried using the boost build script by Pete Goodliffe, which generates an xcframework. However, I need a normal install of boost because I'm compiling other libraries against it, and I couldn't get the script to do that. I also don't understand how that script successfully generates a simulator binary.
Replies: 0 · Boosts: 0 · Views: 856 · Nov ’23