Hello,
I'm experiencing an intermittent issue with Apple Pay merchant domain verification. As you know, Apple requires domain verification every two months to maintain Apple Pay functionality.
The problem is that while the verification sometimes happens automatically without any issues, other times it fails to complete, even though the required file "apple-developer-merchantid-domain-association.txt" is correctly available on our server.
When automatic verification fails, the Apple Pay service becomes non-functional on our website, forcing us to perform a manual verification to restore the service.
Is it normal to encounter such inconsistent automatic verification processes?
What could be causing these intermittent verification failures when manual verification always succeeds? This suggests the problem may not be related to the IP address restrictions described in the Apple documentation.
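For context, a minimal reachability check of the association file can be scripted like this (a sketch; the domain below is a placeholder, and this only confirms the file is publicly served, not that Apple's verification servers can reach it):
import Foundation
// Hypothetical check: fetch the domain-association file from the .well-known path.
let url = URL(string: "https://example.com/.well-known/apple-developer-merchantid-domain-association.txt")!
let task = URLSession.shared.dataTask(with: url) { data, response, error in
    if let http = response as? HTTPURLResponse {
        print("Status: \(http.statusCode), bytes: \(data?.count ?? 0)")
    } else {
        print("Request failed: \(error?.localizedDescription ?? "unknown error")")
    }
}
task.resume()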
Thank you in advance,
Hi all,
I'm using the Apple sample code below to create an application using DockKit.
"Controlling a DockKit accessory using your camera app"
https://developer.apple.com/documentation/dockkit/controlling-a-dockkit-accessory-using-your-camera-app?changes=_8
I used Vision hand recognition and passed the observation data to dockAccessory.track, but neither the Belkin nor the Insta360 device ever moves on an iPhone 16 Pro Max with iOS 18.3.
If I use other functions like face search (system tracking) in the app, those work ok.
I used Belkin and Insta360 Flow 2 Pro to reproduce the problem.
A friend also says that the custom tracking feature was working fine on the iOS 18.0 beta, but on the current iOS 18.3 that feature no longer works.
If I could get the iOS 18.0 beta we could test that feature again, but I cannot revert from iOS 18.3 back to the 18.0 beta.
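For reference, the hand-observation path is roughly like this (a simplified sketch, not the exact sample code; the helper name, bounding-box calculation, and fixed identifier are placeholders):
import Vision
import DockKit
// Build a normalized bounding box from a hand-pose observation and hand it to DockKit.
func trackHand(_ hand: VNHumanHandPoseObservation,
               with dockAccessory: DockAccessory,
               cameraInfo: DockAccessory.CameraInformation) async {
    guard let points = try? hand.recognizedPoints(.all), !points.isEmpty else { return }
    let xs = points.values.map { $0.location.x }
    let ys = points.values.map { $0.location.y }
    let box = CGRect(x: xs.min()!, y: ys.min()!,
                     width: xs.max()! - xs.min()!,
                     height: ys.max()! - ys.min()!)
    // Flip Y because Vision uses a bottom-left origin.
    let flipped = CGRect(x: box.origin.x,
                         y: 1.0 - box.origin.y - box.height,
                         width: box.width,
                         height: box.height)
    let observation = DockAccessory.Observation(identifier: 0, type: .object, rect: flipped)
    do {
        try await dockAccessory.track([observation], cameraInformation: cameraInfo)
    } catch {
        print("track failed: \(error)")
    }
}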
Regards,
TO
Hi,
I'm testing DockKit with a very simple setup:
I use VNDetectFaceRectanglesRequest to detect a face and then call dockAccessory.track(...) using the detected bounding box.
The stand is correctly docked (state == .docked) and dockAccessory is valid.
I'm calling .track(...) with a single observation and valid CameraInformation (including size, device, orientation, etc.). No errors are thrown.
To monitor this, I added a logging utility – track(...) is being called 10–30 times per second, as recommended in the documentation.
However: the stand does not move at all.
There is no visible reaction to the tracking calls.
Is there anything I'm missing or doing wrong?
Is VNDetectFaceRectanglesRequest supported for DockKit tracking, or are there hidden requirements?
Would really appreciate any help or pointers – thanks!
That's my complete code:
extension VideoFeedViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else {
return
}
detectFace(image: frame)
func detectFace(image: CVPixelBuffer) {
let faceDetectionRequest = VNDetectFaceRectanglesRequest() { vnRequest, error in
guard let results = vnRequest.results as? [VNFaceObservation] else {
return
}
guard let observation = results.first else {
return
}
let boundingBoxHeight = observation.boundingBox.size.height * 100
#if canImport(DockKit)
if let dockAccessory = self.dockAccessory {
Task {
try? await trackObservation(
observation.boundingBox,
dockAccessory,
frame,
sampleBuffer
)
}
}
#endif
}
let imageResultHandler = VNImageRequestHandler(cvPixelBuffer: image, orientation: .up)
try? imageResultHandler.perform([faceDetectionRequest])
func combineBoundingBoxes(_ box1: CGRect, _ box2: CGRect) -> CGRect {
let minX = min(box1.minX, box2.minX)
let minY = min(box1.minY, box2.minY)
let maxX = max(box1.maxX, box2.maxX)
let maxY = max(box1.maxY, box2.maxY)
let combinedWidth = maxX - minX
let combinedHeight = maxY - minY
return CGRect(x: minX, y: minY, width: combinedWidth, height: combinedHeight)
}
#if canImport(DockKit)
func trackObservation(_ boundingBox: CGRect, _ dockAccessory: DockAccessory, _ pixelBuffer: CVPixelBuffer, _ sampleBuffer: CMSampleBuffer) throws {
// Count each call to track(...)
TrackMonitor.shared.trackCalled()
let invertedBoundingBox = CGRect(
x: boundingBox.origin.x,
y: 1.0 - boundingBox.origin.y - boundingBox.height,
width: boundingBox.width,
height: boundingBox.height
)
guard let device = captureDevice else {
fatalError("Kamera nicht verfügbar")
}
let size = CGSize(width: Double(CVPixelBufferGetWidth(pixelBuffer)),
height: Double(CVPixelBufferGetHeight(pixelBuffer)))
var cameraIntrinsics: matrix_float3x3? = nil
if let cameraIntrinsicsUnwrapped = CMGetAttachment(
sampleBuffer,
key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
attachmentModeOut: nil
) as? Data {
cameraIntrinsics = cameraIntrinsicsUnwrapped.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
}
Task {
let orientation = getCameraOrientation()
let cameraInfo = DockAccessory.CameraInformation(
captureDevice: device.deviceType,
cameraPosition: device.position,
orientation: orientation,
cameraIntrinsics: cameraIntrinsics,
referenceDimensions: size
)
let observation = DockAccessory.Observation(
identifier: 0,
type: .object,
rect: invertedBoundingBox
)
let observations = [observation]
guard let image = CMSampleBufferGetImageBuffer(sampleBuffer) else {
print("no image")
return
}
do {
try await dockAccessory.track(observations, cameraInformation: cameraInfo)
} catch {
print(error)
}
}
}
#endif
func clearDrawings() {
boundingBoxLayer?.removeFromSuperlayer()
boundingBoxSizeLayer?.removeFromSuperlayer()
}
}
}
}
@MainActor
private func getCameraOrientation() -> DockAccessory.CameraOrientation {
switch UIDevice.current.orientation {
case .portrait:
return .portrait
case .portraitUpsideDown:
return .portraitUpsideDown
case .landscapeRight:
return .landscapeRight
case .landscapeLeft:
return .landscapeLeft
case .faceDown:
return .faceDown
case .faceUp:
return .faceUp
default:
return .corrected
}
}
View Layout
Add the following views in a view controller:
Label
View A, with a subview of the same size: MTKView A
View B, with a subview of the same size: MTKView B
Refresh Rates of Each View
The label view refreshes at 60fps (driven by CADisplayLink).
MTKView A and B refresh at 15fps.
MTKView Implementation Details
The corresponding CAMetalLayer's maximumDrawableCount is set to 2, switching to double buffering.
The scheduling mechanism is modified; drawing is not driven by the internal loop but is done manually. The draw call is triggered immediately upon receiving a frame.
self.metalView.enableSetNeedsDisplay = NO;
self.metalView.paused = YES;
A new high-priority queue is created for drawing, instead of handling it on the main queue.
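In Swift terms, that configuration amounts to roughly the following (a sketch; metalView and the queue label are placeholders):
import MetalKit
metalView.enableSetNeedsDisplay = false
metalView.isPaused = true                                      // draw() is triggered manually per frame
(metalView.layer as? CAMetalLayer)?.maximumDrawableCount = 2   // double buffering
let drawQueue = DispatchQueue(label: "render.queue", qos: .userInteractive)  // drawing off the main queue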
MTKView Latency Tracking
The GPU completion time T1 is observed through the addCompletedHandler callback of the CommandBuffer.
The presentation time T2 of the frame is observed through the addPresentedHandler callback of the currentDrawable in MTKView.
Testing shows that T2 - T1 > 16.6ms (the Vsync period at 60Hz). This means that after the GPU rendering in the MTKView is finished, the frame is not actually displayed at the next Vsync, but only at the Vsync after that.
I believe there is an extra 16.6ms of latency here, which I want to eliminate by adjusting the rendering mechanism.
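For reference, the two timestamps are captured roughly like this (a simplified sketch; the encoder work is elided and the names are placeholders):
import MetalKit
// T1 comes from the command buffer's completion handler, T2 from the drawable's presented handler.
func drawFrame(in metalView: MTKView, commandQueue: MTLCommandQueue) {
    guard let drawable = metalView.currentDrawable,
          let descriptor = metalView.currentRenderPassDescriptor,
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { return }
    // ... encode the actual rendering work here ...
    encoder.endEncoding()
    drawable.addPresentedHandler { d in
        print("T2 presented at \(d.presentedTime)")        // when the frame reaches the display
    }
    commandBuffer.addCompletedHandler { cb in
        print("T1 GPU finished at \(cb.gpuEndTime)")        // when the GPU finishes this command buffer
    }
    commandBuffer.present(drawable)
    commandBuffer.commit()
}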
Observation from Instruments
From Instruments, the surface presentation aligns with the above test results. After the Metal encoder finishes, the surface in Display switches only at the Vsync after next. See the image in the link for details.
Questions
As I understand it (as a beginner), once MTKView's GPU rendering is finished, the frame should become visible at the next Vsync. However, that is not what I observe. Does a subview MTKView need to wait an extra Vsync cycle before it is drawn to the actual display buffer?
The label updates its text at 60 fps, so the entire interface should be composited at 60 fps. Is the MTKView content simply not picked up in the same display update?
Explanation of the Reasoning Behind Some MTKView Code Details
Changing from the default triple buffering to double buffering helps reduce the latency introduced by rendering.
I trigger the draw method manually instead of using MTKView's own scheduling mechanism because that mechanism is driven by CADisplayLink: if a frame arrives within a Vsync window, it has to wait for the next Vsync window before the draw is triggered, which introduces waiting latency.
On macOS I'm seeing that only one .fileImporter modifier is called when two are defined. Anybody seeing the same issue?
The scenario I have is two different file sources share the same file extension but they need to be loaded by two slightly different processes.
Select the first option. Nothing happens. Select the second option, it works. Seeing this also in another project.
Because the isPresented value is a binding, it isn't straightforward to logically OR the boolean @States and conditionally extract within the import closure.
@main
struct Dual_File_Importer_ExpApp: App {
@State private var showFirstDialog = false
@State private var showSecondDialog = false
var body: some Scene {
DocumentGroup(newDocument: Dual_File_Importer_ExpDocument()) { file in
ContentView(document: file.$document)
.fileImporter(isPresented: $showFirstDialog, allowedContentTypes: [.commaSeparatedText]) { result in
print("first")
}
.fileImporter(isPresented: $showSecondDialog, allowedContentTypes: [.commaSeparatedText]) { result in
print("second")
}
}
.commands {
CommandGroup(after: .importExport)
{
Button("Import First")
{
showFirstDialog.toggle()
}
Button("Import Second")
{
showSecondDialog.toggle()
}
}
}
}
}
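One workaround sketch (not tested against this exact project; the enum and view names are mine): drive a single .fileImporter from an optional enum that records which import was requested, and branch in the completion handler.
import SwiftUI
import UniformTypeIdentifiers
// Hypothetical single-importer workaround; ImportKind and DualImportView are placeholder names.
enum ImportKind { case first, second }
struct DualImportView: View {
    @State private var pendingImport: ImportKind?
    var body: some View {
        VStack {
            Button("Import First") { pendingImport = .first }
            Button("Import Second") { pendingImport = .second }
        }
        .fileImporter(
            isPresented: Binding(
                get: { pendingImport != nil },
                set: { if !$0 { pendingImport = nil } }
            ),
            allowedContentTypes: [.commaSeparatedText]
        ) { result in
            switch pendingImport {
            case .first:  print("first: \(result)")
            case .second: print("second: \(result)")
            case nil:     break
            }
            pendingImport = nil
        }
    }
}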
Topic:
UI Frameworks
SubTopic:
SwiftUI
My team has developed an app with a Matter commissioner feature (for own ecosystem) using the Matter framework on the MatterSupport extension.
Recently, we've noticed that commissioning Matter devices with the MatterSupport extension has become very unstable. Occasionally, the HomeUIService stops the flow after successfully commissioning to the first fabric, displaying the error: "Failed to perform Matter device setup: Error Domain=HMErrorDomain Code=2." (Normally, it should send an Open Commissioning Window command to the device and then add the device to the second fabric.) The issue was never seen before the last few weeks, and there have been no code changes in the app. We suspect that some data fails to download from iCloud or the Apple account, causing this problem.
For evaluation, we tried removing the MatterSupport extension and running the Matter framework directly in developer mode; the issue disappears and commissioning works without any problems.
Topic:
App & System Services
SubTopic:
Core OS
Tags:
HomeKit
Provisioning Profiles
Matter
ThreadNetwork
Can I use them in SK and do the animations work?
Thanks, Patrick
Hi everyone,
I'm running into an issue with AVAudioRecorder when handling interruptions such as phone calls or alarms.
Problem:
When the app is recording audio and an interruption occurs:
I handle the interruption with audioRecorder?.pause() inside AVAudioSession.interruptionNotification (on .began).
On .ended, I check for .shouldResume and call audioRecorder?.record() again.
The recorder resumes successfully, but only the audio recorded after the interruption is saved. The audio recorded before the interruption is lost, even though I'm using the same file URL and not recreating the recorder.
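For reference, the handler is essentially the standard interruption pattern (a simplified sketch; audioRecorder is the active AVAudioRecorder and error handling is omitted):
import AVFoundation
// Pause on .began, resume on .ended when the system says we should.
@objc private func handleInterruption(_ notification: Notification) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
    switch type {
    case .began:
        audioRecorder?.pause()
    case .ended:
        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
        if options.contains(.shouldResume) {
            try? AVAudioSession.sharedInstance().setActive(true)
            audioRecorder?.record()   // should resume into the same file after pause()
        }
    @unknown default:
        break
    }
}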
Repro:
Start a recording with AVAudioRecorder
Simulate a system interruption (e.g., incoming call)
Resume recording after the interruption
Stop and inspect the output audio file
Expected: Full audio (before and after interruption) should be saved.
Actual: Only the audio after interruption is saved; the earlier part is missing
Notes:
According to the documentation, calling .record() after .pause() should resume recording into the same file.
I confirmed that the file URL does not change, and I do not recreate the recorder instance.
No error is thrown by the system during this process.
This behavior happens consistently when the app is interrupted and resumed.
Question:
Is this a known issue? Is there a recommended workaround for preserving the full recording when interruptions happen?
Thanks in advance!
I am able to symbolicate kernel backtraces for addresses that belong to my kext.
Is it possible to symbolicate kernel backtraces for addresses that lie beyond my kext and reference kernel code?
Sample kernel panic log
Is it possible to use the Matter.xcframework without the MatterSupport extension for onboarding a Matter device to our own ecosystem (own OTBR and Matter controller) for an official App Store release?
Currently, we can achieve this in developer mode by adding the Bluetooth Central Matter Client Developer mode profile (as outlined here https://github.com/project-chip/connectedhomeip/blob/master/docs/guides/darwin.md). For an official release, what entitlements or capabilities do we need to request approval from Apple to replace the Bluetooth Central Matter Client Developer mode profile?
Thank you for your assistance.
A functioning Multiplatform app, which includes use of Continuity Camera on an M1 Mac Mini running Sequoia 15.5, works correctly capturing photos with AVCapturePhoto. However, that app (and a test app just for Continuity Camera) crashes in the delegate callback when run on a 2017 MacBook Pro under macOS 13.7.5. The app was created with Xcode 16 (various releases) using Swift 6 (also tried with 5). Compiling and running the test app with Xcode 15.2 on the 13.7.5 machine also crashes at the delegate callback.
The iPhone 15 Continuity Camera gets detected and set up correctly, and preview video works correctly. It's when the CapturePhoto code is run that the crash occurs.
The relevant capture code is:
func capturePhoto() {
let captureSettings = AVCapturePhotoSettings()
captureSettings.flashMode = .auto
photoOutput.maxPhotoQualityPrioritization = .quality
photoOutput.capturePhoto(with: captureSettings, delegate: PhotoDelegate.shared)
print("**** CameraManager: capturePhoto")
}
and the delegate callbacks are:
class PhotoDelegate: NSObject, AVCapturePhotoCaptureDelegate {
nonisolated(unsafe) static let shared = PhotoDelegate()
// MARK: - Delegate callbacks
func photoOutput(
_ output: AVCapturePhotoOutput,
didFinishProcessingPhoto photo: AVCapturePhoto,
error: (any Error)?
) {
print("**** CameraManager: didFinishProcessingPhoto")
guard let pData = photo.fileDataRepresentation() else {
print("**** photoOutput is empty")
return
}
print("**** photoOutput data is \(pData.count) bytes")
}
func photoOutput(
_ output: AVCapturePhotoOutput,
willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings
) {
print("**** CameraManager: willBeginCaptureFor")
}
func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
print("**** CameraManager: willCaptureCapturePhotoFor")
}
}
The crash report significant parts are.....
Crashed Thread: 3 Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
Exception Codes: 0x0000000000000001, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [30850]
VM Region Info: 0 is not in any region. Bytes before following region: 4296495104
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
__TEXT 100175000-10017f000 [ 40K] r-x/r-x SM=COW ...tinuityCamera
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x7ff803aed552 mach_msg2_trap + 10
1 libsystem_kernel.dylib 0x7ff803afb6cd mach_msg2_internal + 78
2 libsystem_kernel.dylib 0x7ff803af4584 mach_msg_overwrite + 692
3 libsystem_kernel.dylib 0x7ff803aed83a mach_msg + 19
4 CoreFoundation 0x7ff803c07f8f __CFRunLoopServiceMachPort + 145
5 CoreFoundation 0x7ff803c06a10 __CFRunLoopRun + 1365
6 CoreFoundation 0x7ff803c05e51 CFRunLoopRunSpecific + 560
7 HIToolbox 0x7ff80d694f3d RunCurrentEventLoopInMode + 292
8 HIToolbox 0x7ff80d694d4e ReceiveNextEventCommon + 657
9 HIToolbox 0x7ff80d694aa8 _BlockUntilNextEventMatchingListInModeWithFilter + 64
10 AppKit 0x7ff806ca59d8 _DPSNextEvent + 858
11 AppKit 0x7ff806ca4882 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214
12 AppKit 0x7ff806c96ef7 -[NSApplication run] + 586
13 AppKit 0x7ff806c6b111 NSApplicationMain + 817
14 SwiftUI 0x7ff90e03a9fb 0x7ff90dfb4000 + 551419
15 SwiftUI 0x7ff90f0778b4 0x7ff90dfb4000 + 17578164
16 SwiftUI 0x7ff90e9906cf 0x7ff90dfb4000 + 10340047
17 ContinuityCamera 0x10017b49e 0x100175000 + 25758
18 dyld 0x7ff8037d1418 start + 1896
Thread 1:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 2:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 3 Crashed:: Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
0 ??? 0x0 ???
1 AVFCapture 0x7ff82045996c StreamAsyncStillCaptureCallback + 61
2 CoreMediaIO 0x7ff813a4358f __94-[CMIOExtensionProviderHostContext captureAsyncStillImageWithStreamID:uniqueID:options:reply:]_block_invoke + 498
3 libxpc.dylib 0x7ff803875b33 _xpc_connection_reply_callout + 36
4 libxpc.dylib 0x7ff803875ab2 _xpc_connection_call_reply_async + 69
5 libdispatch.dylib 0x7ff80398b099 _dispatch_client_callout3 + 8
6 libdispatch.dylib 0x7ff8039a6795 _dispatch_mach_msg_async_reply_invoke + 387
7 libdispatch.dylib 0x7ff803991088 _dispatch_lane_serial_drain + 393
8 libdispatch.dylib 0x7ff803991d6c _dispatch_lane_invoke + 417
9 libdispatch.dylib 0x7ff80399c3fc _dispatch_workloop_worker_thread + 765
10 libsystem_pthread.dylib 0x7ff803b28c55 _pthread_wqthread + 327
11 libsystem_pthread.dylib 0x7ff803b27bbf start_wqthread + 15
Of course, the MacBook Pro is an old device, but Continuity Camera works with the installed Photo Booth app, so it should be possible.
Any thoughts on solving this situation would be appreciated.
Regards, Michaela
Hi
When attempting to upload a React Native app (version 0.77) we encountered the following error:
ITMS-90426: Invalid Swift Support - The SwiftSupport folder is missing. Rebuild your app using the current public (GM) version of Xcode and resubmit it.
If we inspect the .ipa contents, we can see that it includes only two folders:
Payload
Symbols
Could you please tell us why it does not include the SwiftSupport folder?
We tried Xcode 16.2 and 16.3.
Thank you
Hi,
We've noticed that this issue occurs more frequently after upgrading to iOS 18.4.1 and can result in one-way audio.
Our app uses CallKit with WebRTC to establish VoIP connections.
However, on iOS 18.4.1, CallKit no longer triggers:
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession)
We're currently comparing the occurrence rate across different iOS versions to better understand the impact.
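For context, our provider delegate is essentially the standard pattern (a simplified sketch; CallManager and the commented-out WebRTC calls are placeholders, not our exact implementation):
import CallKit
import AVFAudio
// With WebRTC, audio is typically only started once CallKit activates the audio session.
final class CallManager: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) { }
    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // On iOS 18.4.1 this is sometimes never called, so audio is never started (one-way audio).
        print("CallKit activated the audio session")
        // rtcAudioSession.audioSessionDidActivate(audioSession)
    }
    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        print("CallKit deactivated the audio session")
        // rtcAudioSession.audioSessionDidDeactivate(audioSession)
    }
}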
Could you please help analyze the root cause of this issue?
I have a SwiftUI app. It fetches records through Core Data, and I want to show some records on a widget. I understand that I need to use an App Group to share data between an app and its associated widget.
import Foundation
import CoreData
import CloudKit
class DataManager {
static let instance = DataManager()
let container: NSPersistentContainer
let context: NSManagedObjectContext
init() {
container = NSPersistentCloudKitContainer(name: "DataMama")
container.persistentStoreDescriptions = [NSPersistentStoreDescription(url: FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: group identifier)!.appendingPathComponent("Trash.sqlite"))]
container.loadPersistentStores(completionHandler: { (description, error) in
if let error = error as NSError? {
print("Unresolved error \(error), \(error.userInfo)")
}
})
context = container.viewContext
context.automaticallyMergesChangesFromParent = true
context.mergePolicy = NSMergePolicy(merge: .mergeByPropertyObjectTrumpMergePolicyType)
}
func save() {
do {
try container.viewContext.save()
print("Saved successfully")
} catch {
print("Error in saving data: \(error.localizedDescription)")
}
}
}
// ViewModel //
import Foundation
import CoreData
import WidgetKit
class ViewModel: ObservableObject {
let manager = DataManager()
@Published var records: [Little] = []
init() {
fetchRecords()
}
func fetchRecords() {
let request = NSFetchRequest<Little>(entityName: "Little")
do {
records = try manager.context.fetch(request)
records.sort { lhs, rhs in
lhs.trashDate! < rhs.trashDate!
}
} catch {
print("Fetch error for DataManager: \(error.localizedDescription)")
}
WidgetCenter.shared.reloadAllTimelines()
}
}
So I have a view model that fetches data for the app as shown above.
Now, my question is how my widget should get data from Core Data. Should the widget get data from Core Data through DataManager? I have read some questions here and also some articles elsewhere. This article (https://dev.classmethod.jp/articles/widget-coredate-introduction/) suggests letting the Widget struct access Core Data through DataManager. If that's the correct approach, how should the getTimeline function in the TimelineProvider struct get data? This question also suggests the same. Thank you for reading my question.
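For illustration only, a TimelineProvider could fetch through the shared DataManager roughly like this (a sketch; the entry type, refresh policy, and sort key are my assumptions):
import WidgetKit
import CoreData
// Placeholder entry type carrying the fetched records.
struct LittleEntry: TimelineEntry {
    let date: Date
    let records: [Little]
}
struct LittleProvider: TimelineProvider {
    func placeholder(in context: Context) -> LittleEntry {
        LittleEntry(date: Date(), records: [])
    }
    func getSnapshot(in context: Context, completion: @escaping (LittleEntry) -> Void) {
        completion(LittleEntry(date: Date(), records: fetchRecords()))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<LittleEntry>) -> Void) {
        let entry = LittleEntry(date: Date(), records: fetchRecords())
        // Refresh again in an hour; .never would also work if the app reloads timelines itself.
        completion(Timeline(entries: [entry], policy: .after(Date().addingTimeInterval(60 * 60))))
    }
    private func fetchRecords() -> [Little] {
        // Reuse the app's DataManager, which points the store at the shared App Group container.
        let request = NSFetchRequest<Little>(entityName: "Little")
        request.sortDescriptors = [NSSortDescriptor(key: "trashDate", ascending: true)]
        return (try? DataManager.instance.context.fetch(request)) ?? []
    }
}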
When I use my iPhone to scan the Apple Pay QR code in Chrome, the URL is https://applepaydemo.apple.com/apple-pay-js-api, and I keep getting a "Service Unavailable" error.
I wonder if anyone else has seen this error as well. By the way, the QR code feature requires iOS 18.
Hi, I have a couple of questions about background app refresh. First, is the function RefreshAppContentsOperation() where I should implement the code that needs to run in the background? Second, despite importing BackgroundTasks, I am getting the error "cannot find operationQueue in scope". What can I do to resolve that? Thank you.
func scheduleAppRefresh() {
let request = BGAppRefreshTaskRequest(identifier: "peaceofmindmentalhealth.RoutineRefresh")
// Fetch no earlier than 15 minutes from now.
request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)
do {
try BGTaskScheduler.shared.submit(request)
} catch {
print("Could not schedule app refresh: \(error)")
}
}
func handleAppRefresh(task: BGAppRefreshTask) {
// Schedule a new refresh task.
scheduleAppRefresh()
// Create an operation that performs the main part of the background task.
let operation = RefreshAppContentsOperation()
// Provide the background task with an expiration handler that cancels the operation.
task.expirationHandler = {
operation.cancel()
}
// Inform the system that the background task is complete
// when the operation completes.
operation.completionBlock = {
task.setTaskCompleted(success: !operation.isCancelled)
}
// Start the operation.
operationQueue.addOperation(operation)
}
func RefreshAppContentsOperation() -> Operation {
}
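For what it's worth, one way to fill in the missing pieces (a sketch, not necessarily the intended design): operationQueue in Apple's sample is assumed to be a property you declare yourself, and the refresh work can be wrapped in a BlockOperation.
// Assumed declarations, not from the original code:
let operationQueue = OperationQueue()
func RefreshAppContentsOperation() -> Operation {
    // BlockOperation wrapping the actual background work.
    BlockOperation {
        print("Refreshing app contents…")
        // e.g. fetch new data and update local storage here
    }
}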
I don't know if this is the right place to raise this, so apologies if not.
For years now, I have exported an NFS share from a host Mac which I connect to from a Raspberry Pi on the same network. I configure this by adding a line in /etc/exports - /Users/Pi -mapall=myusername
This has always worked flawlessly, but since updating my Mac (M4 Mac Mini) to Sequoia 15.5 last week, it has developed a problem. If the NFS share is not accessed from the Pi for five minutes, it dies, and the Pi's file manager locks up necessitating a complete reboot.
If I run a script on the Pi which does an ls on the mounted share every 5 minutes, the lockup does not happen. But if I extend the period to 6 minutes, the lockup occurs.
Something on the Mac NFS server seems to be dying in an unrecoverable fashion after five minutes of idle. Even with nfsd logging set to verbose, there is nothing helpful in the console logs.
I am open to suggestions to further investigate or to try and fix this, but this is basically a showstopper for me - I need to be able to share data between Mac and Pi, and this is now broken.
Topic:
Community
SubTopic:
Apple Developers
There are multiple reports of crashes in URLConnectionLoader::loadWithWhatToDo. The crashed thread in the stack traces points to calls inside CFNetwork, which seems to be an internal library in iOS.
The crash has been happening for quite a while (we cannot tell exactly when it started) and affects multiple iOS versions, from iOS 15.4 to 18.4.1, recorded so far in the Xcode crash report organizer.
Unfortunately, we have no idea how to reproduce it yet, but the crash count keeps increasing and affects more iOS 18 users (which makes sense, since many people have updated to the newer version), and we haven't found any clue in the crash reports about what actually happened or how to fix it. What we understand is that it seems to come from a network request, but we need more information on what condition actually causes it and how to solve it.
Hereby, I attach sample crash report for both iOS 15 and 18.
I also have submitted a report (that include more crash reports) with number: FB17775979.
We would appreciate any insight regarding this issue and any resolution we can apply to avoid it.
iOS 15.crash
iOS 18.crash
In previous versions of the simulator, it was possible to import files into the Files app by dragging them from the Finder into the Simulator. It appears that in the iOS 26 Simulator, this opens the file in Safari.
I've only tried it with .json files so far.
The documentation at https://developer.apple.com/documentation/xcode/sharing-data-with-simulator says that the original behaviour should happen:
To add files to Simulator, select one or more files in Finder on your Mac, then click the Share button. Select Simulator from the share destination list. Choose the simulated device from the drop-down list. Simulator opens the Files app, and lets you select where to save the files.
I'd love to learn if this is intentional behaviour, and if so, what workarounds there might be. I use this pattern quite a lot, as I have a HealthKit app, and I've built a system that allows me to export workouts as JSON files from a real device, that I can then import into a simulator for testing.
Edit: I found a workaround. Make a folder in Files.app, then search for it within ~/Library/Developer/CoreSimulator/Devices. Open the folder in Finder, then add any files you want to be available in the Simulator.
It looks like ExtensionKit (and ExtensionFoundation) is fully available on iOS 26, but there is no mention of this at WWDC.
From my testing, it seems that as of beta 1, ExtensionKit allows an app from one dev team to launch an extension provided by another dev team. Before we start building on this, can someone from Apple help confirm this is intentional behavior and not just a beta 1 thing?