The document-based SwiftUI example app (https://developer.apple.com/documentation/swiftui/building-a-document-based-app-with-swiftui) doesn't specify a launch image.
Per the HIG, the "pinkJungle" background in the app seems like a decent candidate for a launch image, since it's what sits behind the document browser when it first comes up.
However, when I specify it as the UIImageName, it isn't aligned the same way as the in-app background image. I'm having trouble figuring out how to align it to match; the launch image appears to be scaled up a bit beyond what scaledToFill would produce.
I suppose a launch storyboard might make this more explicit, but I should still be able to do it without one.
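For reference, this is the Info.plist configuration I mean (a minimal sketch, assuming the asset is named "pinkJungle" as in the sample):

<key>UILaunchScreen</key>
<dict>
    <key>UIImageName</key>
    <string>pinkJungle</string>
    <key>UIImageRespectsSafeAreaInsets</key>
    <true/>
</dict>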
This is the image when displayed as the launch image:
and this is how it's rendered in the background right before the document browser comes up:
I'm modifying <1 MB of a 256 MB managed buffer (calling didModifyRange), but according to Metal System Trace, the GPU copies the whole buffer (SDMA0 channel, "Page On 268435456 bytes"), taking 13 ms.
I'm making lots of small modifications (~4k) per frame. I also tried coalescing them into a single didModifyRange call (~66 MB), and the entire buffer is still copied. If I call didModifyRange for just the first byte, the amount of data copied is small.
So I'm wondering: why doesn't didModifyRange seem to be efficient for many small updates to a big buffer?
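To be concrete, the per-frame update pattern looks roughly like this (a simplified sketch; the edit list and buffer are stand-ins for my real code):

import Metal

// buffer is a 256 MB MTLBuffer created with .storageModeManaged (macOS).
func applyEdits(_ edits: [(offset: Int, bytes: [UInt8])], to buffer: MTLBuffer) {
    let contents = buffer.contents()
    for edit in edits {
        // Write the small change into the CPU-side copy.
        edit.bytes.withUnsafeBytes { src in
            contents.advanced(by: edit.offset)
                .copyMemory(from: src.baseAddress!, byteCount: src.count)
        }
        // Mark only the touched range as modified. Despite the small range,
        // Metal System Trace shows the whole 256 MB buffer being paged on.
        buffer.didModifyRange(edit.offset..<(edit.offset + edit.bytes.count))
    }
}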
I've got a scene which renders as I expect:
but in the acceleration structure inspector, the kraken primitive doesn't render:
In the list on the left, the structure is there. As expected, there is just one bounding-box primitive, since a lot happens in the intersection function (I'm doing it this way because I've already built my own octree, and rebuilding BVHs for dynamic geometry takes too long).
This is just based on the SimplePathTracer example.
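For reference, the Swift-side setup for that single bounding-box primitive looks roughly like this (a sketch following the SimplePathTracer pattern; the buffer and index names are mine):

import Metal

// One axis-aligned bounding box enclosing the whole octree, stored as
// six floats (min.xyz, max.xyz) in boundingBoxBuffer.
let geometry = MTLAccelerationStructureBoundingBoxGeometryDescriptor()
geometry.boundingBoxBuffer = boundingBoxBuffer
geometry.boundingBoxCount = 1
// Routes rays that hit this box to octreeIntersectionFunction's slot
// in the intersection function table.
geometry.intersectionFunctionTableOffset = octreeFunctionIndex
geometry.opaque = false

let accelDescriptor = MTLPrimitiveAccelerationStructureDescriptor()
accelDescriptor.geometryDescriptors = [geometry]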
The signatures of the sphereIntersectionFunction and octreeIntersectionFunction aren't that different:
[[intersection(bounding_box, triangle_data, instancing)]]
BoundingBoxIntersection sphereIntersectionFunction(
    // Ray parameters passed to the ray intersector below
    float3 origin [[origin]],
    float3 direction [[direction]],
    float minDistance [[min_distance]],
    float maxDistance [[max_distance]],
    // Information about the primitive.
    unsigned int primitiveIndex [[primitive_id]],
    unsigned int geometryIndex [[geometry_intersection_function_table_offset]],
    // Custom resources bound to the intersection function table.
    device void *resources [[buffer(0), function_constant(useResourcesBuffer)]]
#if SUPPORTS_METAL_3
    ,const device Sphere* perPrimitiveData [[primitive_data]]
#endif
    ,ray_data IntersectionPayload& payload [[payload]])
{
vs.
[[intersection(bounding_box, triangle_data, instancing)]]
BoundingBoxIntersection octreeIntersectionFunction(
    // Ray parameters passed to the ray intersector below
    float3 origin [[origin]],
    float3 direction [[direction]],
    float minDistance [[min_distance]],
    float maxDistance [[max_distance]],
    // Information about the primitive.
    unsigned int primitiveIndex [[primitive_id]],
    unsigned int geometryIndex [[geometry_intersection_function_table_offset]],
    // Custom resources bound to the intersection function table.
    device void *resources [[buffer(0)]],
    const device BlockInfo* perPrimitiveData [[primitive_data]],
    ray_data IntersectionPayload& payload [[payload]])
Note: I'm running Xcode 15.0 beta 5 (15A5209g), since even the unmodified SimplePathTracer example project hangs the acceleration structure viewer on Xcode 14.
Update:
Replacing the octreeIntersectionFunction's body with just a hard-coded sphere does render. Perhaps the viewer imposes a time (or instruction-count) limit on intersection functions so as not to hang the GPU?
I received a rejection for "Your app spawns processes that continue running after the user has quit the app."
The process in question is the app's Thumbnail extension.
When I remove all of my own code from the thumbnail extension, it still continues to run after I exit my app. This is the entirety of the extension's code, which now renders blank thumbnails:
import QuickLookThumbnailing

class ThumbnailProvider: QLThumbnailProvider {

    override init() { }

    override func provideThumbnail(for request: QLFileThumbnailRequest,
                                   _ handler: @escaping (QLThumbnailReply?, Error?) -> Void) {
        let reply = QLThumbnailReply(contextSize: request.maximumSize) { (context: CGContext) -> Bool in
            return true
        }
        handler(reply, nil)
    }
}
Presumably Thumbnail extensions continue to run so that Finder (among others) can generate thumbnails as necessary. AFAIK, I have no direct control over the extension's lifecycle.
Is this just App Review's mistake? The "Next Steps" are clueless:
"You can resolve this by leaving this option unchecked by default, providing the user the option to turn it on."
The app uses its own thumbnail extension to render thumbnails for document templates, which may be an uncommon thing.
Adding an inspector and toolbar to Xcode's app template, I have:
import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .toolbar {
            Text("test")
        }
        .inspector(isPresented: .constant(true)) {
            Text("this is a test")
        }
    }
}
In the preview canvas, this renders as I would expect:
However when running the app:
Am I missing something?
(Relevant wwdc video is wwdc2023-10161. I couldn't add that as a tag)
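For reference, here's a sketch of one variation I'm wondering about: wrapping the content in a NavigationStack so the toolbar has a bar to attach to when the app actually runs (not a confirmed fix, just what I plan to try):

struct ContentView: View {
    var body: some View {
        NavigationStack {
            VStack {
                Image(systemName: "globe")
                    .imageScale(.large)
                    .foregroundStyle(.tint)
                Text("Hello, world!")
            }
            .padding()
            .toolbar {
                Text("test")
            }
            .inspector(isPresented: .constant(true)) {
                Text("this is a test")
            }
        }
    }
}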
I've had to ditch the SwiftUI app lifecycle due to this issue: https://developer.apple.com/forums/thread/742580
After creating a new UIKit document app as a test, I see it doesn't have a toolbar when opening a document. How can I add one along the lines of https://developer.apple.com/wwdc22/10069 ?
It seems the UIDocumentViewController isn't already embedded in a UINavigationController.
To reproduce: New Project -> iOS -> Document App. Select Interface: Storyboard. Add an empty "untitled.txt" resource to the project. Change the first line in documentBrowser(_:didRequestDocumentCreationWithHandler:) to
let newDocumentURL: URL? = Bundle.main.url(forResource: "untitled", withExtension: "txt")
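Coming back to the toolbar question, the direction I'm considering is wrapping the document view controller in a UINavigationController when presenting it, so there's a navigation bar to hang bar items off of. A sketch (based on my recollection of the template's presentDocument(at:); the Close item is just a placeholder):

func presentDocument(at documentURL: URL) {
    let storyBoard = UIStoryboard(name: "Main", bundle: nil)
    let documentViewController = storyBoard.instantiateViewController(withIdentifier: "DocumentViewController") as! DocumentViewController
    documentViewController.document = Document(fileURL: documentURL)

    // Wrapping in a UINavigationController provides a navigation bar,
    // which is where toolbar / bar button items can live.
    documentViewController.navigationItem.rightBarButtonItem = UIBarButtonItem(systemItem: .close)
    let nav = UINavigationController(rootViewController: documentViewController)
    nav.modalPresentationStyle = .fullScreen
    present(nav, animated: true, completion: nil)
}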
Is this an uncaught C++ exception that could have originated from my code, or something else? (This report is from a tester.)
(Also, why can't the crash reporter tell you anything about what exception wasn't caught?)
(Per the instructions here, you'll need to rename the attached .txt to .ips to view the crash report.)
thanks!
AudulusAU-2024-02-14-020421.txt
In my app, I only get one interruption notification when a phone call comes in, and nothing after that. The app uses AVAudioEngine. Is this a bug?
A very simple repro is to just register for the notification, but not do anything else with audio:
import SwiftUI
import AVFoundation

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .onReceive(NotificationCenter.default.publisher(for: AVAudioSession.interruptionNotification)) { event in
            handleAudioInterruption(event: event)
        }
    }

    private func handleAudioInterruption(event: Notification) {
        print("handleAudioInterruption")
        guard let info = event.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
            print("missing the stuff")
            return
        }
        if type == .began {
            print("interruption began")
        } else if type == .ended {
            print("interruption ended")
            guard let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt else { return }
            if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
                print("should resume")
            }
        }
    }
}
And do this in the app's init:
import SwiftUI
import AVFoundation

@main
struct InterruptionsApp: App {
    init() {
        try! AVAudioSession.sharedInstance().setCategory(.playback, options: [])
        try! AVAudioSession.sharedInstance().setActive(true)
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
In my Metal-based app, I ray-march a 3D texture. I'd like to use RealityKit instead of my own code. I see there is a LowLevelTexture (beta) with which I could specify a 3D texture. However, on the Metal side there doesn't seem to be any way to access a 3D texture (realitykit::texture::textures::custom returns a texture2d).
Any workarounds? Could I even do something icky like casting the texture2d to a texture3d in MSL? (Is that even possible?) Could I encode the 3D texture into an argument buffer and get it in that way somehow?
"Specifically, your App Description and screenshot references paid features but does not inform users that a purchase is required to access this content."
My App Description (it's a pro 3D art app) doesn't mention that the entire app requires a subscription. I didn't think I needed to, because Final Cut Pro and Logic Pro don't do that either. Has anyone had experience with this? Is there a double standard, or did App Review just make a mistake?
I suppose I can add some language at the end of the App Description like "All features unlocked with subscription."
I want to turn off my ray tracing conditionally. There's is_null_acceleration_structure, but when I don't bind an acceleration structure (or pass nil to setFragmentAccelerationStructure), I get the following API validation error:
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5782: failed assertion `Draw Errors Validation
Fragment Function(vol_deferred_lighting): missing instanceAccelerationStructure binding at index 6 for accelerationStructure[0].
I can turn off API validation and it works, but it seems like I should be able to use nil for the acceleration structure without triggering a validation error. Seems like a bug, right?
I suppose I can work around this by creating a separate pipeline with the ray tracing disabled via a function constant instead of using is_null_acceleration_structure, as sketched below.
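Roughly, that workaround would look like this (a sketch; the useRayTracing constant name is hypothetical, while vol_deferred_lighting is the fragment function from the error above):

import Metal

// Compile two variants of the fragment function: one with ray tracing
// enabled, one with it disabled via a function constant.
func makeFragmentFunction(library: MTLLibrary, rayTracingEnabled: Bool) throws -> MTLFunction {
    let constants = MTLFunctionConstantValues()
    var enabled = rayTracingEnabled
    // "useRayTracing" would be declared as a function constant in the shader.
    constants.setConstantValue(&enabled, type: .bool, withName: "useRayTracing")
    return try library.makeFunction(name: "vol_deferred_lighting", constantValues: constants)
}

// Build one pipeline per variant, and only bind the acceleration structure
// when encoding with the ray-traced pipeline.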
(Can we get a ray-tracing tag for questions?)
I have on the order of 50k small meshes (~64 vertices each), all with different connectivity, some subset of which changes each frame (generated by a compute kernel). Can I render those in a performant way with Metal?
I'm assuming 50k separate draw calls would be too slow. I have a few ideas:
- encode those draw calls on the GPU (see the sketch after this list)
- or lay out the meshes linearly in blocks, with some maximum size, and use a single draw call, wasting vertex shader threads on the blocks that aren't full
- or use another kernel to combine the little meshes into a big mesh
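For the first idea, a minimal sketch of the indirect-command-buffer route (the counts and bind limits here are just placeholders):

import Metal

let device = MTLCreateSystemDefaultDevice()!

// Describe an indirect command buffer that a compute kernel can fill
// with one draw per small mesh.
let icbDescriptor = MTLIndirectCommandBufferDescriptor()
icbDescriptor.commandTypes = [.draw]          // non-indexed draws
icbDescriptor.inheritBuffers = false
icbDescriptor.inheritPipelineState = true     // reuse the pipeline set on the render encoder
icbDescriptor.maxVertexBufferBindCount = 2
icbDescriptor.maxFragmentBufferBindCount = 0

let meshCount = 50_000
let icb = device.makeIndirectCommandBuffer(descriptor: icbDescriptor,
                                           maxCommandCount: meshCount,
                                           options: [])!

// A compute kernel would then write one draw command per mesh into icb,
// and the render pass executes them with executeCommandsInBuffer.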
thanks!
I'm getting the following error on Intel Iris integrated graphics. The same code works fine on newer Mac GPUs as well as Apple GPUs.
Execution of the command buffer was aborted due to an error during execution. Invalid Resource (00000009:kIOAccelCommandBufferCallbackErrorInvalidResource)
The error is for a compute command, not a draw command.
The error constant isn't in the documentation. All buffers and textures seem to be created successfully, and I've also checked that the GPU supports the required threadgroup size for the compute pipeline.
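For reference, the threadgroup-size check I mentioned is roughly this (a sketch; the 8x8 size is just an example from my dispatches):

import Metal

func checkThreadgroupSize(pipeline: MTLComputePipelineState) {
    let threadsPerGroup = MTLSize(width: 8, height: 8, depth: 1)
    let total = threadsPerGroup.width * threadsPerGroup.height * threadsPerGroup.depth
    // Older integrated GPUs can report a smaller limit than Apple silicon,
    // so this is asserted per compute pipeline.
    assert(total <= pipeline.maxTotalThreadsPerThreadgroup,
           "Threadgroup of \(total) exceeds pipeline limit \(pipeline.maxTotalThreadsPerThreadgroup)")
}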
thanks!
I'm trying to implement de-noising of AO in my app, using the MPSDynamicScene example as a guide: https://developer.apple.com/documentation/metalperformanceshaders/animating_and_denoising_a_raytraced_scene
In that example, it computes motion vectors in UV coordinates, resulting in very small values:
// Compute motion vectors
if (uniforms.frameIndex > 0) {
    // Map current pixel location to 0..1
    float2 uv = in.position.xy / float2(uniforms.width, uniforms.height);

    // Unproject the position from the previous frame then transform it from
    // NDC space to 0..1
    float2 prevUV = in.prevPosition.xy / in.prevPosition.w * float2(0.5f, -0.5f) + 0.5f;

    // Next, remove the jittering which was applied for antialiasing from both
    // sets of coordinates
    uv -= uniforms.jitter;
    prevUV -= prevUniforms.jitter;

    // Then the motion vector is simply the difference between the two
    motionVector = uv - prevUV;
}
Yet the documentation for MPSSVGF seems to indicate the offsets should be expressed in texels:
The motion vector texture must be at least a two channel texture representing how many texels each texel in the source image(s) have moved since the previous frame. The remaining channels will be ignored if present. This texture may be nil, in which case the motion vector is assumed to be zero, which is suitable for static images.
Is this a mistake in the example code?
I ask because doing something similar in my own app leaves AO trails, which would indicate the motion vector texture values are too small in magnitude. I don't really see trails in the example, even when I speed up the animation, but that could be because the example scene is monochrome.
Update:
If I multiply the UV offsets by the size of the texture, I get a bad result, which seems to indicate the header comment is misleading and the values are in fact expected in UV coordinates. So perhaps the trails I'm seeing in my app have some other cause.
I also wonder who else is actually using this API. I would think most game engines are doing their own thing; perhaps some of Apple's own code uses it.
I'm trying to ray-march an SDF inside a RealityKit surface shader. For the SDF primitive to correctly render with other primitives, the depth of the fragment needs to be set according to the ray-surface intersection point. Is there a way to do that within a RealityKit surface shader? It seems the only values I can set are within surface::surface_properties.
If not, can an SDF still be rendered in RealityKit using ray-marching?