(Using macOS 26 Beta 9 and Xcode 26 Beta 7.) I am trying to support basic onDrop from a source app into my app. I want the closest "source" representation of the dragged item, e.g. a JPEG file dropped into my app shouldn't be converted, but stored as JPEG bytes in a Data. Otherwise, everything gets converted into TIFF, and modern iPhone photos get huge. I also try to be a good citizen and provide asynchronous support.
Alas, I've been running in circles for days. I can now handle drag-and-drop from the Finder, and from uncached iCloud files with a progress bar, but drag-and-drop from Safari still eludes me.
My onDrop support looks like this:
```swift
Image(nsImage: data.image).onDrop(of: Self.supportedDropItemUTIs, delegate: self)
```
The UTIs are as follows:
```swift
public static let supportedDropItemUTIs: [UTType] = [
    .image,
    .heif,
    .rawImage,
    .png,
    .tiff,
    .svg,
    .heic,
    .jpegxl,
    .bmp,
    .gif,
    .jpeg,
    .webP,
]
```
Finally, the performDrop implementation is as follows:
```swift
public func performDrop(info: DropInfo) -> Bool {
    let itemProviders = info.itemProviders(for: Self.supportedDropItemUTIs)
    guard let itemProvider = itemProviders.first else {
        return false
    }
    let registeredContentTypes = itemProvider.registeredContentTypes
    guard let contentType = registeredContentTypes.first else {
        return false
    }
    // Fall back to a generic file name when the provider doesn't suggest one.
    var suggestedName = itemProvider.suggestedName
    if suggestedName == nil {
        switch contentType {
        case UTType.bmp: suggestedName = "image.bmp"
        case UTType.gif: suggestedName = "image.gif"
        case UTType.heic: suggestedName = "image.heic"
        case UTType.jpeg: suggestedName = "image.jpeg"
        case UTType.jpegxl: suggestedName = "image.jxl"
        case UTType.png: suggestedName = "image.png"
        case UTType.rawImage: suggestedName = "image.raw"
        case UTType.svg: suggestedName = "image.svg"
        case UTType.tiff: suggestedName = "image.tiff"
        case UTType.webP: suggestedName = "image.webp"
        default: break
        }
    }
    let progress = itemProvider.loadInPlaceFileRepresentation(forTypeIdentifier: contentType.identifier) { url, _, error in
        if let error {
            print("Failed to get URL from dropped file: \(error)")
            return
        }
        guard let url else {
            print("Failed to get URL from dropped file!")
            return
        }
        // Coordinate the read so uncached iCloud files get downloaded first.
        let queue = OperationQueue()
        queue.underlyingQueue = .global(qos: .utility)
        let intent = NSFileAccessIntent.readingIntent(with: url, options: .withoutChanges)
        let coordinator = NSFileCoordinator()
        coordinator.coordinate(with: [intent], queue: queue) { error in
            if let error {
                print("Failed to coordinate data from dropped file: \(error)")
                return
            }
            do {
                // Load the file contents into a Data object.
                let data = try Data(contentsOf: intent.url)
                DispatchQueue.main.async {
                    self.data.data = data
                    self.data.fileName = suggestedName
                }
            } catch {
                print("Failed to load coordinated data from dropped file: \(error)")
            }
        }
    }
    DispatchQueue.main.async {
        self.progress = progress
    }
    return true
}
```
For reference, this code is in the state it was when I gave up and posted it here, because I cannot find a solution to my issue.
Now, this code works everywhere, except for dragging and dropping from Safari.
Let's pretend I go to this web site:
https://commons.wikimedia.org/wiki/File:Tulip_Tulipa_clusiana_%27Lady_Jane%27_Rock_Ledge_Flower_Edit_2000px.jpg
and try to drag and drop the image, it fails with the following error:
```
URL https://upload.wikimedia.org/wikipedia/commons/c/cf/Tulip_Tulipa_clusiana_%27Lady_Jane%27_Rock_Ledge_Flower_Edit_2000px.jpg is not a file:// URL.
```
And then it fails with the dreaded:
```
Failed to get URL from dropped file: Error Domain=NSItemProviderErrorDomain Code=-1000
```
As far as I can tell, the problem lies in the opaque NSItemProvider receiving a web site URL from Safari. I tried most of the usual solutions, but I couldn't retrieve that URL. The error happens in the callback of loadInPlaceFileRepresentation, and it also fails in loadFileRepresentation. I tried hard-requesting a loadObject of type URL, but there's only one representation, for the JPEG file. I tried putting only .url in the requested types, but then nothing would transfer.
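For completeness, here is the kind of fallback I've been sketching (my own experiment, not a confirmed fix; the helper name and wiring are illustrative). The idea is that when the in-place file load fails, I ask the provider for the raw bytes of the same type instead, which Safari should be able to deliver even when the only representation is the remote JPEG:

```swift
import Foundation
import UniformTypeIdentifiers

// Fallback sketch: request a plain data representation of the same content
// type when loadInPlaceFileRepresentation cannot produce a file:// URL.
func loadDroppedImageData(from itemProvider: NSItemProvider,
                          contentType: UTType,
                          completion: @escaping (Data?) -> Void) {
    _ = itemProvider.loadDataRepresentation(forTypeIdentifier: contentType.identifier) { data, error in
        if let error {
            print("Data representation fallback failed: \(error)")
        }
        completion(data)
    }
}
```

This would be called from the error branches of the loadInPlaceFileRepresentation callback, at the cost of losing the in-place/progress behaviour for those drops.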
Anyone solved this mystery?
On iOS, I want to add undo/redo and a close button. On iPadOS, I only need to add a close button.
What's your experience with adding a close button to the ToolPicker? Or at least getting the position of its window, so I can add an overlapping box (even when floating)?
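For reference, the only placement hook I can find (assuming this is PencilKit's PKToolPicker) is observing the obscured frame. A sketch, with the caveat that frameObscured(in:) returns CGRect.null while the picker floats, which is exactly the case I care about:

```swift
import PencilKit
import UIKit

// Sketch: track the tool picker's frame via PKToolPickerObserver and park a
// (hypothetical) close button at its top-right corner. Limitation: while the
// picker is floating, frameObscured(in:) is CGRect.null, so there is no
// frame to anchor to.
final class ToolPickerCloseButtonPlacer: NSObject, PKToolPickerObserver {
    let closeButton: UIView      // hypothetical close button view
    weak var container: UIView?  // the view the picker obscures

    init(closeButton: UIView, container: UIView) {
        self.closeButton = closeButton
        self.container = container
    }

    func toolPickerFramesObscuredDidChange(_ toolPicker: PKToolPicker) {
        guard let container else { return }
        let obscured = toolPicker.frameObscured(in: container)
        guard !obscured.isNull else {
            closeButton.isHidden = true // floating picker: no obscured frame
            return
        }
        closeButton.isHidden = false
        closeButton.frame = CGRect(x: obscured.maxX - 44, y: obscured.minY,
                                   width: 44, height: 44)
    }
}
```

The placer would be registered with toolPicker.addObserver(_:), but it still does nothing for the floating case.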
I want to be able to access a writable file directly in my Xcode project when I am running under Xcode, but access the bundled version (read-only) when running a normal build.
The solution is easy: I provide "-EditBootstrapFile $(PROJECT_DIR)/SomePath" as a launch argument in my Xcode Run scheme, then access that file. At runtime, if I don't find "-EditBootstrapFile", I use the Bundle.main.url version as read-only.
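Concretely, the lookup is just this (a sketch; the resource name is a stand-in):

```swift
import Foundation

// Pick the writable project file when launched from Xcode with the
// -EditBootstrapFile argument, otherwise the read-only bundled copy.
func bootstrapFileURL() -> URL? {
    let arguments = ProcessInfo.processInfo.arguments
    if let index = arguments.firstIndex(of: "-EditBootstrapFile"),
       arguments.indices.contains(index + 1) {
        return URL(fileURLWithPath: arguments[index + 1]) // writable, debug runs
    }
    return Bundle.main.url(forResource: "SomePath", withExtension: nil) // read-only
}
```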
However, the Sandbox interferes with my best intentions. I can't seem to find a way to allow the sandbox system to access that particular file.
Since it's a bootstrapping file, it rather defeats the purpose to add a new window and delay loading the list of parameters; the file is meant to always be present.
So far, the only workaround I found is adding com.apple.security.temporary-exception.files.home-relative-path.read-write with /Developer/myproduct/SomePath in the entitlements, which is dumb, and means I need two different entitlements files (one for Xcode debugging and one for archiving), with the value hard-coded.
Does anyone have experience with this? I'm sure there's something easy I am missing, but I've lost a few hours and still could not figure it out.
Question says it all.
I want the transparent pixels to pass through the taps/clicks/gestures, while the opaque pixels catch them.
Obviously, being able to control the behaviour would be even better, so I could ignore slightly translucent pixels too.
Pre-processing is not possible; these are user images, so it's not easy.
So far, the best idea I've had is to add a global gesture recognizer and try to figure out where in my complex hierarchy the tap falls, and whether the image is underneath. But that seems overly complicated for something so simple and basic, really.
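For reference, the per-pixel alpha test itself is the easy half. This is a sketch of the classic point(inside:with:) override (assuming UIKit and an image that fills the view with no contentMode letterboxing):

```swift
import UIKit

// Hit testing that ignores (near-)transparent pixels: draw the tapped pixel
// into a 1x1 RGBA bitmap and reject the touch when alpha is at or below a
// threshold.
final class AlphaHitImageView: UIImageView {
    var alphaThreshold: CGFloat = 0.1 // raise to also ignore translucent pixels

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard super.point(inside: point, with: event),
              let cgImage = image?.cgImage else { return false }
        // Map the view point to image pixel coordinates (image fills bounds).
        let px = point.x * CGFloat(cgImage.width) / bounds.width
        let py = point.y * CGFloat(cgImage.height) / bounds.height
        var pixel: [UInt8] = [0, 0, 0, 0] // RGBA
        return pixel.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: 1, height: 1,
                                          bitsPerComponent: 8, bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }
            // Shift the image so the target pixel lands in the 1x1 context
            // (Core Graphics' origin is bottom-left, hence the flip).
            context.translateBy(x: -px, y: py - CGFloat(cgImage.height))
            context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                             width: cgImage.width,
                                             height: cgImage.height))
            return CGFloat(buffer[3]) / 255.0 > self.alphaThreshold
        }
    }
}
```

But that only covers a plain UIImageView; wiring it into a complex SwiftUI hierarchy is the part I'd like a simpler answer for.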
Consider the following sample code:
```swift
import SwiftUI

struct ContentView: View {
    @State var text = ""
    @State var portrait = true

    func updateInterfaceOrientation() {
        guard let currentWindowScene = UIApplication.shared.connectedScenes.first(
            where: { $0.activationState == .foregroundActive }) as? UIWindowScene else {
            return
        }
        self.portrait = currentWindowScene.interfaceOrientation.isPortrait
    }

    var body: some View {
        ZStack {
            if portrait {
                VStack {
                    Spacer()
                    TextField("", text: $text)
                        .background(Color.gray)
                }
            } else {
                VStack {
                    Spacer()
                    TextField("", text: $text)
                        .background(Color.gray)
                }
            }
        }
        .onAppear(perform: updateInterfaceOrientation)
        .onReceive(NotificationCenter.default.publisher(for: UIDevice.orientationDidChangeNotification)) { _ in
            updateInterfaceOrientation()
        }
    }
}
```
The code is pretty straightforward and follows the proposed approach for dynamic SwiftUI interfaces, i.e. you double down on your elements, and the interface gets switched from one to the other.
Now this piece of code, compared to the one without the if portrait, has multiple issues because of the keyboard.
When the orientation changes between portrait and landscape, you lose the on-screen keyboard. I can kind of live with that.
iOS 14: when the on-screen keyboard is up in portrait mode and you rotate to landscape, the safe area stays raised on screen, as if the keyboard were still there, but it isn't. If you rotate back to portrait, the safe area becomes 3/4 of the screen. I cannot live with that!
Any clues? This happens with the latest official release and the latest beta (.2).
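One workaround I've been experimenting with (iOS 14 only, and it sidesteps rather than fixes the bug) is opting the layout out of the keyboard safe area entirely, at the cost of losing automatic keyboard avoidance:

```swift
import SwiftUI

// Workaround sketch: ignore the keyboard safe area so the stale inset left
// behind after rotation cannot distort the layout. A sidestep, not a fix.
struct KeyboardSafeAreaWorkaround: View {
    @State private var text = ""

    var body: some View {
        VStack {
            Spacer()
            TextField("", text: $text)
                .background(Color.gray)
        }
        .ignoresSafeArea(.keyboard, edges: .bottom)
    }
}
```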
Good day, simple question: in TextKit 2, there seems to be a Rendering Surface Bounds property.
We currently call text.boundingRect(with:...) to determine some bounding properties, and this is the accepted approach for most bounding retrieval, but I am not sure that operation actually yields the rendering surface bounds of the provided text, as described in the video.
So... let's say I have such a text and I want its fully enclosing bounds: what would be my best bet (cross-platform)?
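For the record, the closest TextKit 2 equivalent I can come up with is laying the text out off-screen and unioning renderingSurfaceBounds over the layout fragments. A sketch (the container width and the fragment-frame offsetting are my assumptions):

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#else
import AppKit
#endif

// Sketch: off-screen TextKit 2 layout that unions renderingSurfaceBounds
// (fragment-relative, so offset by each fragment's frame) over all fragments.
func renderingSurfaceBounds(of text: NSAttributedString, width: CGFloat) -> CGRect {
    let contentStorage = NSTextContentStorage()
    let layoutManager = NSTextLayoutManager()
    contentStorage.addTextLayoutManager(layoutManager)
    layoutManager.textContainer = NSTextContainer(
        size: CGSize(width: width, height: .greatestFiniteMagnitude))
    contentStorage.attributedString = text

    var bounds = CGRect.null
    _ = layoutManager.enumerateTextLayoutFragments(from: nil,
                                                   options: [.ensuresLayout]) { fragment in
        bounds = bounds.union(fragment.renderingSurfaceBounds
            .offsetBy(dx: fragment.layoutFragmentFrame.minX,
                      dy: fragment.layoutFragmentFrame.minY))
        return true // keep enumerating
    }
    return bounds
}
```

Whether this matches what boundingRect(with:...) computes is exactly what I'm unsure about.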
In Core Audio, there is a description saying that kAudioChannelLayoutTag_TMH_10_2_full contains these two channels: HI and VI.
These are not described anywhere else, including in the list of channel abbreviations.
Does anyone know what they really are?
This is awkward. I have an app shipped on iOS that works properly there. I have a gesture recognizer on the double-tap: .gesture(TapGesture(count: 2)).
This sits on top of an image that needs to be in a UIImageView() through a UIViewRepresentable (because it's an SVG, and Image will not scale it properly, using the prerendered version instead).
Now, if I put a .gesture(TapGesture()) instead, it works well. If I put a DragGesture(), it gets picked up immediately. But I cannot get the system to detect the double-tap to save my life.
I tried going inside the UIViewRepresentable and adding a #selector, but it never fires. I tried overriding touchesEnded globally, and that does not work either.
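For reference, this is roughly the #selector wiring I tried inside the representable (a simplified reconstruction; the type and callback names are mine):

```swift
import SwiftUI
import UIKit

// A UITapGestureRecognizer with numberOfTapsRequired = 2, added directly to
// the UIImageView through the representable's coordinator.
struct DoubleTappableImage: UIViewRepresentable {
    let image: UIImage
    let onDoubleTap: () -> Void

    func makeCoordinator() -> Coordinator { Coordinator(onDoubleTap: onDoubleTap) }

    func makeUIView(context: Context) -> UIImageView {
        let view = UIImageView(image: image)
        view.isUserInteractionEnabled = true // UIImageView disables this by default
        let tap = UITapGestureRecognizer(target: context.coordinator,
                                         action: #selector(Coordinator.handleDoubleTap))
        tap.numberOfTapsRequired = 2
        view.addGestureRecognizer(tap)
        return view
    }

    func updateUIView(_ uiView: UIImageView, context: Context) {}

    final class Coordinator: NSObject {
        let onDoubleTap: () -> Void
        init(onDoubleTap: @escaping () -> Void) { self.onDoubleTap = onDoubleTap }
        @objc func handleDoubleTap() { onDoubleTap() }
    }
}
```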
I also tried sequencing TapGesture() with TapGesture(count: 2), and it (incredibly) sometimes works, but something makes it stop working, so it's not reliable.
It might be because of my scene, too: it's really complex, and something might interfere, but I have not found a way to debug the scene to know where the events are being sent, or what interferes with what. And remember, it works perfectly on iOS, and it also works well with a normal SwiftUI Image.
contentShape doesn't seem to have any effect on the issue either. And again, a drag or a single tap works perfectly. It's just the double-tap that fails me.
Fixing it is one thing, but I'm also very curious about how to debug such issues.
Thank you!
In the latest iOS, the automatic keyWindow version of SKStoreReviewController.requestReview() got deprecated in favour of the new SKStoreReviewController.requestReview(in windowScene: UIWindowScene) version.
Then, in the latest SwiftUI, the UIWindowScene got removed in favour of a new SwiftUI some Scene family.
How do I efficiently tie the two together in a View without resorting to the key UIWindow?
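For context, the closest I've come is reaching back through UIApplication for the active scene, which feels like exactly the key-window detour I'm trying to avoid. A sketch:

```swift
import SwiftUI
import StoreKit

// Sketch: hop from SwiftUI to the foreground-active UIWindowScene via
// connectedScenes, then call the scene-based review API.
func requestReviewInActiveScene() {
    guard let scene = UIApplication.shared.connectedScenes
        .first(where: { $0.activationState == .foregroundActive }) as? UIWindowScene else {
        return
    }
    SKStoreReviewController.requestReview(in: scene)
}
```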