I’ve implemented a basic custom view controller presentation: the presented view controller slides up from the bottom in a manner similar to UISheetPresentationController, and can be dismissed by tapping the dimming view added on top of the presenting view controller.
I want my custom presentation to be interruptible: the user should be able to tap the dimming view while the presentation transition is in progress and have it smoothly convert into a dismissal transition. I’m using additive spring animations (via UIViewPropertyAnimator) to make the change in direction less jarring.
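For reference, here’s roughly how the presentation half of the transition is driven (the dismissal is the mirror image, and the dimming view’s tap handler simply calls dismissViewControllerAnimated:completion: even while the presentation is in flight). The names, duration, and damping below are simplified stand-ins rather than the exact code from the test project:

// From my UIViewControllerAnimatedTransitioning object (simplified).
- (void)animateTransition:(id<UIViewControllerContextTransitioning>)transitionContext
{
    UIViewController *presentedVC =
        [transitionContext viewControllerForKey:UITransitionContextToViewControllerKey];
    UIView *presentedView = [transitionContext viewForKey:UITransitionContextToViewKey];

    // Start the presented view just below its final frame, sheet-style.
    CGRect finalFrame = [transitionContext finalFrameForViewController:presentedVC];
    [[transitionContext containerView] addSubview:presentedView];
    presentedView.frame = CGRectOffset(finalFrame, 0, CGRectGetHeight(finalFrame));

    // Spring timing so that, when a dismissal interrupts this animation,
    // the additive animations blend instead of jumping.
    UISpringTimingParameters *spring =
        [[UISpringTimingParameters alloc] initWithDampingRatio:1.0];
    UIViewPropertyAnimator *animator =
        [[UIViewPropertyAnimator alloc] initWithDuration:0.5 timingParameters:spring];
    [animator addAnimations:^{
        presentedView.frame = finalFrame;
    }];
    [animator addCompletion:^(UIViewAnimatingPosition position) {
        [transitionContext completeTransition:![transitionContext transitionWasCancelled]];
    }];
    [animator startAnimation];
}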
Unfortunately, it seems that the interruption-triggered dismissal transition is abruptly terminated by the system based on some kind of timer. The later in the presentation transition I tap the dimming view, the earlier the dismissal transition is terminated.
Steps to reproduce
After launching the test project:
Tap the “Present” button.
Tap the dimming view just before the transition completes.
The test project includes a screen recording of the broken behavior.
An XPC service’s process has a system-managed lifecycle: the process is launched on-demand when another process tries to connect to it, and the system can decide to kill it when system resources are low. XPC services can tell the system when they shouldn’t be killed using xpc_transaction_begin/end.
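For reference, this is the pattern I mean on the service side (a hypothetical sketch; HandleMessage is a made-up name):

#include <xpc/xpc.h>

// Sketch: an XPC service bracketing its work in a transaction so the system
// knows not to terminate the process mid-request.
static void HandleMessage(xpc_object_t message)
{
    xpc_transaction_begin();    // "I'm busy; please keep this process alive."
    // ... handle the request here ...
    xpc_transaction_end();      // Idle again; the system may reclaim the process.
}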
Do extensions created with ExtensionFoundation and/or ExtensionKit have the same behavior?
I have an app which contains a bundled launch agent that I register using SMAppService.agent(plistName:). I’ve packaged the launch agent executable in the typical Mac app bundle structure so I can embed a framework in it. So, the launch agent lives in Contents/SharedSupport/MyLaunchAgent.app/Contents/MacOS/MyLaunchAgent.
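Registration itself looks roughly like this (a sketch; the plist name is a stand-in):

#import <ServiceManagement/ServiceManagement.h>

// Sketch of the registration. The plist's Program key points at the
// executable inside Contents/SharedSupport/MyLaunchAgent.app.
static void RegisterLaunchAgent(void)
{
    SMAppService *agent = [SMAppService agentServiceWithPlistName:@"MyLaunchAgent.plist"];
    NSError *error = nil;
    if (![agent registerAndReturnError:&error]) {
        NSLog(@"Failed to register launch agent: %@", error);
    }
}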
However, I suspect this approach might be falling afoul of the scheduler, since the taskinfo tool reports my launch agent has a requested & effective role of TASK_DEFAULT_APPLICATION (PRIO_DARWIN_ROLE_UI), rather than the TASK_UNSPECIFIED (PRIO_DARWIN_ROLE_DEFAULT) value I see with system daemons.
I tried setting the LSUIElement Info.plist key of my launch agent to YES, but this seems to have had no effect.
What’s the recommended approach here?
Hi,
I have a sandboxed app with a bundled sandboxed XPC service. When the XPC service is launched, it registers a repeating XPC activity with the system. The activity’s handler block does get called regularly, as I’d expect, but it stops being called once the main app terminates.
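The registration looks roughly like this (a sketch; the identifier and interval are made up, and error handling is omitted):

#include <xpc/xpc.h>

// Runs when the bundled XPC service starts up.
static void RegisterRepeatingActivity(void)
{
    xpc_object_t criteria = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_int64(criteria, XPC_ACTIVITY_INTERVAL, 15 * 60);
    xpc_dictionary_set_bool(criteria, XPC_ACTIVITY_REPEATING, true);

    xpc_activity_register("com.example.MyService.refresh", criteria,
                          ^(xpc_activity_t activity) {
        // Do the periodic work, then tell the system we're done.
        // This fires regularly while the main app is running, but stops
        // once the app quits.
        xpc_activity_set_state(activity, XPC_ACTIVITY_STATE_DONE);
    });
}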
What’s the recommended way to fix this issue? Could I have a bundled XPC service double as a launch agent, or would that cause other problems?
I have a sandboxed Mac app which I can grant access to a folder using an NSOpenPanel. Once it’s been granted access it can enumerate the contents of the folder just fine. If I rename the folder while the app is open and then make the app enumerate the folder’s contents again, though, it seems to have lost access.
What’s the recommended way to have an app’s sandbox “track” files as they’re moved around the filesystem? (NSDocument handles this for you, from what I can tell.) I’ve managed to hack something together with a combination of Dispatch sources and security-scoped bookmarks, but it feels like there must be an easier solution …
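For concreteness, my hack is roughly this shape (simplified, error handling omitted; not the exact code):

#import <Foundation/Foundation.h>
#include <fcntl.h>

// `folderURL` is the URL returned by the NSOpenPanel.
static dispatch_source_t WatchGrantedFolder(NSURL *folderURL)
{
    // Stash a security-scoped bookmark while we still have access.
    NSData *bookmark = [folderURL bookmarkDataWithOptions:NSURLBookmarkCreationWithSecurityScope
                            includingResourceValuesForKeys:nil
                                             relativeToURL:nil
                                                     error:NULL];

    // Watch the folder's vnode so we hear about renames and moves.
    int fd = open(folderURL.fileSystemRepresentation, O_EVTONLY);
    dispatch_source_t source =
        dispatch_source_create(DISPATCH_SOURCE_TYPE_VNODE, fd,
                               DISPATCH_VNODE_RENAME, dispatch_get_main_queue());
    dispatch_source_set_event_handler(source, ^{
        // Re-resolve the bookmark to find the folder's new location and
        // refresh our security-scoped access to it.
        BOOL stale = NO;
        NSURL *newURL = [NSURL URLByResolvingBookmarkData:bookmark
                                                  options:NSURLBookmarkResolutionWithSecurityScope
                                            relativeToURL:nil
                                      bookmarkDataIsStale:&stale
                                                    error:NULL];
        [newURL startAccessingSecurityScopedResource];
    });
    dispatch_source_set_cancel_handler(source, ^{ close(fd); });
    dispatch_resume(source);
    return source;   // The caller keeps this alive for as long as it wants updates.
}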
I’ve been experimenting with Dispatch, and workloops in particular. I gather that they’re similar to serial queues, except that they reorder work items by QoS. I suspect there’s more to workloops than meets the eye, though; calling dispatch_set_target_queue on one has no effect, in spite of the <dispatch/workloop.h> header saying that workloops “can be passed to all APIs accepting a dispatch queue, except for functions from the dispatch_sync() family”.
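For example, this is the sort of experiment I mean (a minimal sketch; the label and QoS classes are arbitrary):

#import <Foundation/Foundation.h>

static void ExperimentWithWorkloop(void)
{
    dispatch_workloop_t workloop = dispatch_workloop_create("com.example.workloop");

    // This is the part that surprises me: it appears to have no effect.
    dispatch_set_target_queue(workloop,
                              dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0));

    // Work items are dequeued highest-QoS-first rather than strictly FIFO.
    dispatch_async(workloop, dispatch_block_create_with_qos_class(
        DISPATCH_BLOCK_ENFORCE_QOS_CLASS, QOS_CLASS_UTILITY, 0,
        ^{ NSLog(@"utility work"); }));
    dispatch_async(workloop, dispatch_block_create_with_qos_class(
        DISPATCH_BLOCK_ENFORCE_QOS_CLASS, QOS_CLASS_USER_INITIATED, 0,
        ^{ NSLog(@"user-initiated work"); }));
}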
Workloops keep showing up in odd places like Metal and Network.framework backtraces, and <dispatch/workloop.h> includes functionality for tying workloops to os_workgroups (?!).
What exactly is a workloop beyond just a serial queue with priority ordering, and why can’t I set the target queue of one?
Suppose I want to draw a red rectangle onto my render target using a compute shader.
id<MTLComputeCommandEncoder> encoder = [commandBuffer computeCommandEncoder];
[encoder setComputePipelineState:pipelineState];

// Where to draw the rectangle, and how big it is.
simd_ushort2 position = simd_make_ushort2(100, 100);
simd_ushort2 size = simd_make_ushort2(50, 50);

[encoder setBytes:&position length:sizeof(position) atIndex:0];
[encoder setTexture:drawable.texture atIndex:0];

// One thread per pixel of the rectangle.
[encoder dispatchThreads:MTLSizeMake(size.x, size.y, 1)
   threadsPerThreadgroup:MTLSizeMake(32, 32, 1)];
[encoder endEncoding];
#include <metal_stdlib>
using namespace metal;
kernel void
Compute(ushort2 position_in_grid [[thread_position_in_grid]],
        constant ushort2 &position,
        texture2d<half, access::write> texture)
{
texture.write(half4(1, 0, 0, 1), position_in_grid + position);
}
This works just fine.
Now, say for whatever reason I want to start using imageblocks in my compute kernel. First, I set the imageblock size on the CPU side:
id<MTLComputeCommandEncoder> encoder = [commandBuffer computeCommandEncoder];
[encoder setComputePipelineState:pipelineState];

// New: declare the imageblock dimensions for this dispatch.
MTLSize threadgroupSize = MTLSizeMake(32, 32, 1);
[encoder setImageblockWidth:threadgroupSize.width
                     height:threadgroupSize.height];

simd_ushort2 position = simd_make_ushort2(100, 100);
simd_ushort2 size = simd_make_ushort2(50, 50);
[encoder setBytes:&position length:sizeof(position) atIndex:0];
[encoder setTexture:drawable.texture atIndex:0];

MTLSize gridSize = MTLSizeMake(size.x, size.y, 1);
[encoder dispatchThreads:gridSize threadsPerThreadgroup:threadgroupSize];
[encoder endEncoding];
And then I update the compute kernel to simply declare the imageblock – note I never actually read from or write to it:
#include <metal_stdlib>
using namespace metal;
struct Foo
{
int foo;
};
kernel void
Compute(ushort2 position_in_grid [[thread_position_in_grid]],
        constant ushort2 &position,
        texture2d<half, access::write> texture,
        imageblock<Foo> imageblock)
{
texture.write(half4(1, 0, 0, 1), position_in_grid + position);
}
And now out of nowhere Metal’s shader validation starts complaining about mismatched texture usage flags:
2024-06-22 00:57:15.663132+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
2024-06-22 00:57:15.672004+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
2024-06-22 00:57:15.682422+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
2024-06-22 00:57:15.687587+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
2024-06-22 00:57:15.698106+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
The texture I’m writing to comes from a CAMetalDrawable whose associated CAMetalLayer has framebufferOnly set to NO. What am I missing?
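For completeness, the layer is configured roughly like this (the pixel format is just what I happen to use):

// Sketch of the layer setup.
CAMetalLayer *layer = [CAMetalLayer layer];
layer.device = MTLCreateSystemDefaultDevice();
layer.pixelFormat = MTLPixelFormatBGRA8Unorm;
layer.framebufferOnly = NO;   // so the drawable's texture can be written from a compute kernel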