I'm unsure what could be causing this, but it appears that all widgets I have built with Xcode 16 replace image content with solid-color views that take on the tint color.
Is this... fixable?
Note: None of the subviews in my widgetUI view have widgetAccentable() on them.
Adding it to the Image Views did not appear to change anything.
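For reference, the affected widget views are basically just this (names below are placeholders), and the Image is where I tried adding the modifier:
import SwiftUI
import WidgetKit
// Placeholder sketch of one of the affected widget views.
struct PhotoWidgetView: View {
    var body: some View {
        Image("widgetPhoto") // hypothetical asset; this is what renders as a flat tint-colored shape
            .resizable()
            .scaledToFill()
            .widgetAccentable() // adding this here didn't appear to change anything
            .containerBackground(for: .widget) { Color.black }
    }
}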
As a user, when viewing a photo or image, I want to be able to tell Siri, “add this to ”, similar to the example from the WWDC presentation where a photo is added to a note in the Notes app.
Is this... possible with app domains as they are documented?
I see domains like open-file and open-photo, but I don't know whether those are appropriate for this kind of functionality.
Given
I do not understand much at all about how to write shaders
I do not understand the math associated with page-curl effects
I am trying to:
implement a page-curl shader for use on SwiftUI views.
I've lifted a shader from HIROKI IKEUCHI that I believe they lifted from a non-Metal shader resource online, and I'm trying to digest it.
One thing I want to do is to paint the "underside" of the view with a given color and maintain the transparency of rounded corners when they are flipped over.
So, if an underside pixel is "clear" then I want to sample the pixel at that position on the original layer instead of the "curl effect" pixel.
There are two comments in the shader below where I check the alpha and underside flags and paint the pixel red as a debug test.
The shader gives this result:
The outside of those rounded corners is appropriately red, and the white border pixels are detected as "not clear". But the "inner" portion of the border is... mistakenly red?
I don't get it. Any help would be appreciated. I feel tapped out and I don't have any IRL resources I can ask.
//
// PageCurl.metal
// ShaderDemo3
//
// Created by HIROKI IKEUCHI on 2023/10/17.
//
#include <metal_stdlib>
#include <SwiftUI/SwiftUI_Metal.h>
using namespace metal;
#define pi float(3.14159265359)
#define blue half4(0.0, 0.0, 1.0, 1.0)
#define red half4(1.0, 0.0, 0.0, 1.0)
#define radius float(0.4)
// Returns the color for this pixel
[[ stitchable ]] half4 pageCurl
(
float2 _position,
SwiftUI::Layer layer,
float4 bounds,
float2 _clickedPoint,
float2 _mouseCursor
) {
half4 undersideColor = half4(0.5, 0.5, 1.0, 1.0);
float2 originalPosition = _position;
// Correct the y-coordinate (flip to a bottom-left origin)
float2 position = float2(_position.x, bounds.w - _position.y);
float2 clickedPoint = float2(_clickedPoint.x, bounds.w - _clickedPoint.y);
float2 mouseCursor = float2(_mouseCursor.x, bounds.w - _mouseCursor.y);
float aspect = bounds.z / bounds.w;
float2 uv = position * float2(aspect, 1.) / bounds.zw;
float2 mouse = mouseCursor.xy * float2(aspect, 1.) / bounds.zw;
float2 mouseDir = normalize(abs(clickedPoint.xy) - mouseCursor.xy);
float2 origin = clamp(mouse - mouseDir * mouse.x / mouseDir.x, 0., 1.);
float mouseDist = clamp(length(mouse - origin)
+ (aspect - (abs(clickedPoint.x) / bounds.z) * aspect) / mouseDir.x, 0., aspect / mouseDir.x);
if (mouseDir.x < 0.)
{
mouseDist = distance(mouse, origin);
}
float proj = dot(uv - origin, mouseDir);
float dist = proj - mouseDist;
float2 linePoint = uv - dist * mouseDir;
half4 pixel = layer.sample(position);
if (dist > radius)
{
pixel = half4(0.0, 0.0, 0.0, 0.0); // background behind curling layer (note: 0.0 opacity)
pixel.rgb *= pow(clamp(dist - radius, 0., 1.) * 1.5, .2);
}
else if (dist >= 0.0)
{
// THIS PORTION HANDLES THE CURL SHADED PORTION OF THE RESULT
// map to cylinder point
float theta = asin(dist / radius);
float2 p2 = linePoint + mouseDir * (pi - theta) * radius;
float2 p1 = linePoint + mouseDir * theta * radius;
bool underside = (p2.x <= aspect && p2.y <= 1. && p2.x > 0. && p2.y > 0.);
uv = underside ? p2 : p1;
uv = float2(uv.x, 1.0 - uv.y); // invert y
pixel = layer.sample(uv * float2(1. / aspect, 1.) * float2(bounds[2], bounds[3])); // ME<----
if (underside && pixel.a == 0.0) { //<---- PIXEL.A IS 0.0 WHYYYYY
pixel = red;
}
// Commented out while debugging alpha issues
// if (underside && pixel.a == 0.0) {
// pixel = layer.sample(originalPosition);
// } else if (underside) {
// pixel = undersideColor; // underside
// }
// Shadow the pixel being returned
pixel.rgb *= pow(clamp((radius - dist) / radius, 0., 1.), .2);
}
else
{
// THIS PORTION HANDLES THE NON-CURL-SHADED PORTION OF THE SAMPLING.
float2 p = linePoint + mouseDir * (abs(dist) + pi * radius);
bool underside = (p.x <= aspect && p.y <= 1. && p.x > 0. && p.y > 0.);
uv = underside ? p : uv;
uv = float2(uv.x, 1.0 - uv.y); // invert y
pixel = layer.sample(uv * float2(1. / aspect, 1.) * float2(bounds[2], bounds[3])); // ME
if (underside && pixel.a == 0.0) { //<---- PIXEL.A IS 0.0 WHYYYYY
pixel = red;
}
// Commented out while debugging alpha issues
// if (underside && pixel.a == 0.0) {
// // If the new underside pixel is clear, we should sample the original image's pixel.
// pixel = layer.sample(originalPosition);
// } else if (underside) {
// pixel = undersideColor;
// }
}
return pixel;
}
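For completeness, this is roughly how I'm attaching the shader to a SwiftUI view (a simplified sketch; sizes and gesture handling are placeholders, and the extra arguments line up with the shader parameters after position and layer):
import SwiftUI
struct PageCurlView: View {
    @State private var clickedPoint: CGPoint = .zero
    @State private var mouseCursor: CGPoint = .zero
    var body: some View {
        RoundedRectangle(cornerRadius: 24)
            .fill(.white)
            .frame(width: 300, height: 400)
            .layerEffect(
                ShaderLibrary.pageCurl(
                    .boundingRect,          // bounds
                    .float2(clickedPoint),  // _clickedPoint
                    .float2(mouseCursor)    // _mouseCursor
                ),
                maxSampleOffset: CGSize(width: 500, height: 500)
            )
            .gesture(
                DragGesture(minimumDistance: 0)
                    .onChanged { value in
                        if clickedPoint == .zero { clickedPoint = value.startLocation }
                        mouseCursor = value.location
                    }
                    .onEnded { _ in clickedPoint = .zero }
            )
    }
}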
Is there a way to present a non-anchored confirmation dialog in iOS 26? Maybe some style modifier I haven't noticed?
Things like menu bar commands should be able to trigger confirmable actions, but there's not always a convenient view to anchor a popover to.
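For reference, this is roughly what I'm doing today (names are placeholders); it's fine when there's a button to anchor to, and awkward when the action comes from a menu bar command:
import SwiftUI
// Placeholder example: a destructive action confirmed with confirmationDialog.
struct DeleteAllButton: View {
    @State private var isConfirming = false
    var body: some View {
        Button("Delete All") { isConfirming = true }
            .confirmationDialog("Delete everything?", isPresented: $isConfirming, titleVisibility: .visible) {
                Button("Delete", role: .destructive) { /* perform the deletion */ }
                Button("Cancel", role: .cancel) { }
            }
    }
}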
Hey All,
Been digging around the internet looking for this one, and while Stack Overflow has some relevant solutions, none are working for me.
My view hierarchy is the following:
View
---> UISplitViewController.view (set as a child view controller)
--------> rootViewController.view (set as the main view controller of the split view)
--------> detailViewController.view (set as the detail view controller of the split view)
Via the iPhone 6 simulator (where the split view is always collapsed) I present a modal view controller with the following code:
UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:viewController];
[navigationController.navigationBar setBarStyle:UIBarStyleBlack];
[navigationController setModalPresentationStyle:UIModalPresentationPopover];
navigationController.popoverPresentationController.sourceView = view;
navigationController.popoverPresentationController.barButtonItem = barButtonItem;
navigationController.popoverPresentationController.delegate = self;
[self presentViewController:navigationController animated:YES completion:nil];
I dismiss the presented controller from that view controller by calling:
[self dismissViewControllerAnimated:YES completion:nil];
If I set animated to NO I don't have any problems, but it looks bad and doesn't make sense.
I see some posts regarding this and custom presentation methods, but I'm not using anything custom here.
Any help is appreciated!
EDIT: On iPhone the modal presentation style should default to UIModalPresentationOverFullScreen, so I tried setting the presentation style directly to that, and it worked! If I set the presentation style to FullScreen I get the same behavior: a black screen after dismissing.
I'd like an Image subview of a lock screen widget to render as itself, and not with the multiply-like effect it gets today.
I've tried .widgetAccentable(true) and .widgetAccentable(false), but neither has the appearance I'm looking for.
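Here's roughly the view in question (asset and type names are placeholders), with the modifier I've been toggling:
import SwiftUI
import WidgetKit
// Placeholder sketch of the accessory widget view.
struct LockScreenStatusView: View {
    var body: some View {
        Image("statusGlyph") // hypothetical asset; this is the image that gets the multiply-like treatment
            .resizable()
            .scaledToFit()
            .widgetAccentable(false) // also tried .widgetAccentable(true); neither looks right
    }
}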
Is there maybe a new modifier that lets me "force" the rendering mode? Hoping there is and it's just not jumping out at me.
Thanks for your help.
I have the following parameter:
@Parameter(title: "Image", description: "Image to copy", supportedTypeIdentifiers: ["com.image"], inputConnectionBehavior: .connectToPreviousIntentResult)
var imageFile: IntentFile?
When I drop my AppIntent into a shortcut, though, I am unable to connect this parameter to the output of the previous step.
Given the documentation, I have no idea how to achieve this if the above is not the correct way to do so.
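For completeness, here is the whole intent, trimmed down (everything outside the @Parameter declaration is a placeholder):
import AppIntents
// Trimmed-down sketch of the intent; only the @Parameter declaration is verbatim from my code.
struct CopyImageIntent: AppIntent {
    static var title: LocalizedStringResource = "Copy Image"

    @Parameter(title: "Image", description: "Image to copy", supportedTypeIdentifiers: ["com.image"], inputConnectionBehavior: .connectToPreviousIntentResult)
    var imageFile: IntentFile?

    func perform() async throws -> some IntentResult {
        // Placeholder: the real intent copies the incoming image somewhere.
        return .result()
    }
}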
It is possible via the new AppIntents framework to open your app from a Shortcuts intent, but I am currently very confused about how to ensure that a particular window is opened in a SwiftUI-lifecycle app.
If the user says "Open View A" via a shortcut or Siri, I'd like to make sure it opens the window for "View A", though a duplicate window could be acceptable too.
The WWDC22 presentation has the following:
@MainActor
func perform() async throws -> some IntentResult {
    Navigator.shared.openShelf(.currentlyReading)
    return .result()
}
Where, from the perform method of the Intent structure, they tell an arbitrary Navigator (code not provided) to just open a view of the app. (How convenient!)
But for a multiwindow swiftUI app, I'm not sure how to make this work. @Environment variables are not available within the Intent struct, and even if I did have a "Navigator Singleton", I'm not sure how it could get the @Environment for openWindow since it's a View environment. AppIntents exist outside the View environment tree AFAIK.
Any Ideas? I'd be a little shocked if this is a UIKit only sort of thing, but at the same time... ya never know.
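To make the question concrete, this is the kind of bridge I've been sketching (all names are mine, not from the session, and I have no idea whether this is the intended pattern):
import SwiftUI
import AppIntents
// A shared model the intent can poke from outside the view hierarchy.
@MainActor
final class Navigator: ObservableObject {
    static let shared = Navigator()
    @Published var requestedWindowID: String?
}
struct OpenViewAIntent: AppIntent {
    static var title: LocalizedStringResource = "Open View A"
    static var openAppWhenRun: Bool = true

    @MainActor
    func perform() async throws -> some IntentResult {
        Navigator.shared.requestedWindowID = "view-a"
        return .result()
    }
}
// A view somewhere inside the main WindowGroup that can see the openWindow action.
struct WindowOpeningBridge: View {
    @ObservedObject private var navigator = Navigator.shared
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Color.clear
            .onChange(of: navigator.requestedWindowID) { id in
                guard let id else { return }
                openWindow(id: id) // assumes a WindowGroup(id: "view-a") declared elsewhere in the App
                navigator.requestedWindowID = nil
            }
    }
}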
Disclaimer: I am new to all things 3D. There could be a variety of things wrong with what I'm doing that are not unique to RealityKit. Any domain info would be appreciated.
So I'm following what I think are the recommended steps to import a shader graph material from Reality Composer Pro and apply it to another ModelEntity.
I do the following:
guard let entity = try? Entity.load(named: "Materials", in: RealityKitContent.realityKitContentBundle) else { return model }
let materialEntity = entity.findEntity(named: "materialModel") as? ModelEntity
guard let materialEntity else { return model }
I then configure a property on it like so:
guard var material = materialEntity.model?.materials[0] as? ShaderGraphMaterial else { return model }
try material.setParameter(name: "BaseColor", value: .color(matModel.matCoreUIColor))
I then apply it.
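"Apply it" here means roughly the following (model is the ModelEntity I'm configuring; error handling trimmed):
model.model?.materials = [material]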
This is what my texture looks like in Reality Composer Pro:
I notice that my rendered object has distortions in the actual RealityView. Note the diagonal lines that appear "stretched".
What could be causing this? I thought shader graph materials were supposed to be more resilient to distortions like this. I'm not sure if I've got a bug or if I'm using it wrong.
FWIW, this is a shader based on Apple's felt material shader. My graph looks like this:
Thanks
I am trying to make a world anchor where a user taps a detected plane.
How am I trying this?
First, I add an entity to a RealityView like so:
let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
let interactionEntity = Entity()
interactionEntity.name = "PLANE"
let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())
anchor.addChild(interactionEntity)
content.add(anchor)
This:
Declares an anchor that requires a wall 2 meters by 2 meters to appear in the scene with continuous tracking
Makes an empty entity and gives it a 2m by 2m by 2cm collision box
Attaches the collision entity to the anchor
Finally adds the anchor to the scene
It appears in the scene like this:
Great! Appears to sit right on the wall.
I then add a tap gesture recognizer like this:
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        guard value.entity.name == "PLANE" else { return }
        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
        let worldAnchor = WorldAnchor(transform: simd_float4x4(pose))
        let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03), materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        model.transform = Transform(matrix: worldAnchor.transform)
        realityViewContent?.add(model)
    }
I ASSUME This:
Makes a world position from where the tap connects with the collision entity.
Integrates the position and the collision plane's rotation to create a Pose3D.
Makes a world anchor from that pose (so it can be persisted with a world tracking provider)
Then I make a basic cube entity and give it that transform.
Weird stuff: it doesn't appear on the plane... it appears behind it.
Why? What have I done wrong?
The X and Y of the tap location appears spot on, but something is "off" about the z position.
Also, is there a recommended way to debug this with the available tools?
I'm guessing I'll have to file a DTS about this because feedback on the forum has been pretty low since labs started.
On Xcode 15.1.0b2, when raycasting to a collision surface, the collisions appear to be inconsistent.
Here are my results. Green cylinders are hits, and red cylinders are raycasts that returned no collision results.
NOTE: This raycast is triggered by a tap gesture recognizer registering on the cube... so it's weird to me that the tap would work, but the raycast not collide with anything.
Is this something that just performs poorly in the simulator?
My RayCasting command is:
guard let pose = self.arSessionController.worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
    print("FAILED TO GET POSITION")
    return
}
let transform = Transform(matrix: pose.originFromAnchorTransform)
let locationOfDevice = transform.translation
let raycastResult = scene.raycast(from: locationOfDevice, to: destination, relativeTo: nil)
where destination is retrieved in a tap gesture handler via:
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
Any findings would be appreciated.
Hello,
I’ve got a few questions about drag gestures on VisionOS in Immersive scenes.
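For context, the kind of gesture I mean is roughly this (a simplified sketch; the content setup is elided and the view name is a placeholder):
import SwiftUI
import RealityKit
struct ImmersiveDragExample: View {
    var body: some View {
        RealityView { content in
            // ... add the draggable entities (with CollisionComponent + InputTargetComponent) here ...
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Follow the (possibly indirect) drag with the targeted entity.
                    let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
                    value.entity.setPosition(worldPosition, relativeTo: nil)
                }
        )
    }
}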
Once a user initiates a drag gesture, are their eyes involved in the gesture anymore?
If not, and the user is dragging something farther away, how far can they move it using indirect gestures? I assume the user’s range of motion is limited because their hands are in their lap, so could they move something multiple meters along a distant wall?
How can the user cancel the gesture if they don’t like the anticipated / telegraphed result?
I’m trying to craft a good experience and it’s difficult without some of these details. I have still not heard back on my devkit application.
Thank you for any help.
Hello, I'm trying to determine whether my application is failing to release all of its security-scoped resources, and I'm curious if there's a way to view the count of all currently accessed URLs.
I am balancing every startAccessingSecurityScopedResource call that returns true with a stopAccessingSecurityScopedResource call, but sometimes my application is unresponsive when my Mac wakes from sleep.
Console logs indicate some Sandboxing issues.
Unresponsiveness is resolved by a force-quit and restart of the application.
I'd like to try and observe what's going on with the number of Security Scoped resources to get to the bottom of this. Is it possible?
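To make it concrete, the best I can think of is counting my own calls with a little wrapper like the sketch below (my own helper, not a system API), but that only reflects my bookkeeping, not what the sandbox thinks is still open:
import Foundation
// Sketch of a counting wrapper around security-scoped access, purely for logging/debugging.
final class ScopedAccessTracker {
    static let shared = ScopedAccessTracker()
    private let lock = NSLock()
    private var activeCount = 0

    // Begins access; returns false if startAccessingSecurityScopedResource() failed.
    func begin(_ url: URL) -> Bool {
        guard url.startAccessingSecurityScopedResource() else { return false }
        lock.lock()
        activeCount += 1
        let count = activeCount
        lock.unlock()
        print("Security-scoped resources in use: \(count)")
        return true
    }

    // Ends access for a URL previously passed to begin(_:) that returned true.
    func end(_ url: URL) {
        url.stopAccessingSecurityScopedResource()
        lock.lock()
        activeCount -= 1
        let count = activeCount
        lock.unlock()
        print("Security-scoped resources in use: \(count)")
    }
}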
So, I've declared an AppIntent that indicates my app can "Open files" that conform to UTType.image.
I've got a @AssistantEntity(schema: .files.file) and a
@AssistantIntent(schema: .files.openFile) declared.
So I navigate to the Files app, Quick Look an image, and open Type to Siri.
I tell Siri "open this in " and all it does is act like "open ". No breakpoint is hit in my intent's perform method.
Am I doing something wrong? How can I test these cross-app behaviors?
Are they... not actually possible? Does an "OpenIntent" only work on my app's own URLs and not on file URLs from other apps?
For my first build, my package.resolved was not committed to the repository. I've fixed that, and if I check my main branch on GitHub I can see the package.resolved file in the xcshareddata directory.
Even so, Xcode cloud is telling me that the file is missing and is failing to start my builds.
Could there be a caching issue going on?
My .gitignore file is empty.