The documentation for the Contacts framework specifies that when it returns unified contacts, each fetched unified contact object (CNContact) has its own unique identifier, distinct from the identifier of any individual contact in the set of linked contacts, and that this identifier should be used when refetching the unified contact.
There is also an analogous identifier on each entry in the contactRelations list, but these don't seem to correspond to the unified contacts. For example, a new contact (Sheryl Zakroff) is created in the simulator's Contacts app and her spouse is set to Hank Zakroff. However, the GUID created for the contactRelations identifier does not correlate with the original Hank Zakroff GUID and cannot be searched.
Is this a bug, or what is the intent of the contactRelations identifier?
Here's a debug output of walking the unifiedContacts:
Name: Hank Zakroff
2E73EE73-C03F-4D5F-B1E8-44E85A70F170
- Other : (555) 766-4823
- Other : (707) 555-1854
Name: David Taylor
E94CD15C-7964-4A9B-8AC4-10D7CFB791FD
- Other : 555-610-6679
Name: Sheryl Zakroff
DE783BC8-7917-4138-93F6-3AF0FD4CE083
- Other : (707) 555-1854
- Spouse: <CNContactRelation: 0x60000000dd60: name=Hank M. Zakroff>
- 534B467D-CA00-46D3-897C-16EEA782C9CF
- Looking for ["534B467D-CA00-46D3-897C-16EEA782C9CF"]
[]
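For reference, the walk that produced this output looks roughly like the sketch below. One thing I noticed while reducing it: contactRelations is an array of CNLabeledValue<CNContactRelation>, so the identifier printed for the spouse entry may simply be the labeled value's own identifier rather than a contact identifier, which would explain why the lookup comes back empty:
import Contacts

let store = CNContactStore()
let keys = [CNContactGivenNameKey, CNContactFamilyNameKey,
            CNContactPhoneNumbersKey, CNContactRelationsKey] as [CNKeyDescriptor]

let request = CNContactFetchRequest(keysToFetch: keys)
try store.enumerateContacts(with: request) { contact, _ in
    print("Name: \(contact.givenName) \(contact.familyName)")
    print(contact.identifier)
    for relation in contact.contactRelations {
        // relation is a CNLabeledValue<CNContactRelation>; its identifier
        // identifies the labeled value itself, not the related contact.
        let predicate = CNContact.predicateForContacts(withIdentifiers: [relation.identifier])
        let matches = (try? store.unifiedContacts(matching: predicate, keysToFetch: keys)) ?? []
        print("Looking for \([relation.identifier])")
        print(matches) // empty, as shown above
    }
}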
I filed FB19631435 about this just now. Basically: starting with 15.6, we've had reports (internally and externally) that after some period of time, networking fails so badly that the system can't even acquire a DHCP lease, and it needs to be rebooted to recover. The systems in question all have at least two VPN applications installed; ours is a transparent proxy provider, and the affected system also had Crowdstrike's Falcon installed. A customer reported seemingly identical failures on their systems; they don't have Crowdstrike, but they do have Cyberhaven's.
Has anyone else seen something like this? Since it seems to involve three different networking extensions, I'm assuming it's due to an interaction between them, not a bug in any individual one. But what do I know? 😄
Export archive step fails in Xcode Cloud when using Xcode 26.2 (17C48) RC.
The same project exports successfully when switching back to Xcode 26.1 in Xcode Cloud workflow settings.
The same project exports successfully when using Xcode 26.2 RC locally.
Projects without an Apple Watch app do not seem to encounter this issue (not entirely sure about this).
From Xcode Cloud UI:
Exporting for App Store Distribution failed. Please download the logs artifact for more information.
Run command: 'xcodebuild -exportArchive ...
Command exited with non-zero exit-code: 70
From xcodebuild-export-archive.log:
error: exportArchive Automatic signing cannot update bundle identifier "io.***.***.watchkitapp".
error: exportArchive No profiles for 'io.***.***.watchkitapp' were found
error: exportArchive Automatic signing cannot update bundle identifier "io.***.***".
error: exportArchive No profiles for 'io.***.***' were found
** EXPORT FAILED **
IDEDistribution: App Store Connect request for store configuration failed for account Session Proxy Provider
(Account "Session Proxy Provider": Unable to authenticate with App Store Connect
(Error Domain=DVTITunesSoftwareServiceFoundation.DVTServicesSessionProviderCredentialITunesAuthenticationContextError Code=1 "(null)"))
DVTServices: Sending request A7605D4E-2892-4B6D-9197-90BD3AB53D67 to <http://172.16.57.4:8089/services/v1/capabilities>
Payload: {"urlEncodedQueryParams":"teamId=984L9QX9X5&filter%5BreferenceType%5D=bundle&filter%5BincludeRequestable%5D=true&limit=200"}
{
  "errors": [{
    "id": "fb67ecdb-103b-4446-a2db-618fd6bd99e7",
    "status": "400",
    "code": "PARAMETER_ERROR.INVALID",
    "title": "A parameter has an invalid value",
    "detail": "A parameter 'filter[includeRequestable]' has an invalid value : ''includeRequestable' is not a valid field name.'",
    "source": {
      "parameter": "filter[includeRequestable]"
    }
  }]
}
DVTServices: Could not fetch capabilities from network due to error: error = 'A parameter has an invalid value'
Workaround
Switching Xcode Cloud workflow to use Xcode 26.1 works around the issue.
Using Xcode 26.2 RC locally works around the issue.
I've spent a few months writing an app that uses SwiftData with inheritance. Everything worked well until I tried adding CloudKit support. To do so, I had to make all relationships optional, which exposed what appears to be a bug. Note that this isn't a CloudKit issue -- it happens even when CloudKit is disabled -- but it's due to the requirement for optional relationships.
In the code below, I get the following error on the second call to modelContext.save() when the button is clicked:
Could not cast value of type 'SwiftData.PersistentIdentifier' (0x1ef510b68) to 'SimplePersistenceIdentifierTest.Computer' (0x1025884e0).
I was surprised to find zero hits when Googling "Could not cast value of type 'SwiftData.PersistentIdentifier'".
Some things to note:
Calling teacher.computers?.append(computer) instead of computer.teacher = teacher results in the same error.
It only happens when Teacher inherits Person.
It only happens if modelContext.save() is called both times.
It works if the first modelContext.save() is commented out.
If the second modelContext.save() is commented out, the error occurs the second time the model context is saved (whether explicitly or implicitly).
Keep in mind this is a super simple repro written to generate on demand the error I'm seeing in a normal app. In my app, modelContext.save() must be called in some places to update the UI immediately, sometimes resulting in the error seconds later when the model context is saved automatically. Not calling modelContext.save() doesn't appear to be an option.
To be sure, I'm new to this ecosystem so I'd be thrilled if I've missed something obvious! Any thoughts are appreciated.
import Foundation
import SwiftData
import SwiftUI

struct ContentView: View {
    @Environment(\.modelContext) var modelContext

    var body: some View {
        VStack {
            Button("Do it") {
                let teacher = Teacher()
                let computer = Computer()
                modelContext.insert(teacher)
                modelContext.insert(computer)
                try! modelContext.save()
                // Linking the two models after the first save is what triggers
                // the cast error on the save below.
                computer.teacher = teacher
                try! modelContext.save()
            }
        }
    }
}

@Model
class Computer {
    @Relationship(deleteRule: .nullify)
    var teacher: Teacher?
    init() {}
}

@Model
class Person {
    init() {}
}

@available(iOS 26.0, macOS 26.0, *)
@Model
class Teacher: Person {
    @Relationship(deleteRule: .nullify, inverse: \Computer.teacher)
    public var computers: [Computer]? = []
    override init() {
        super.init()
    }
}
"EnableLiveAssetServerV2-com.apple.MobileAsset.MetalToolchain" = on;
ProductName: macOS
ProductVersion: 26.0.1
BuildVersion: 25A362
The Metal Toolchain is installed; however, I keep getting an error that the Metal Toolchain cannot be found by Xcode:
"Command CompileMetalFile failed with a nonzero exit code"
error: error: cannot execute tool 'metal' due to missing Metal Toolchain; use: xcodebuild -downloadComponent MetalToolchain
❯ xcodebuild -downloadComponent MetalToolchain
2025-10-31 11:18:29.004 xcodebuild[6605:45524] IDEDownloadableMetalToolchainCoordinator: Failed to remount the Metal Toolchain: The file “com.apple.MobileAsset.MetalToolchain-v17.1.324.0.k9JmEp” couldn’t be opened because you don’t have permission to view it.
Beginning asset download...
2025-10-31 11:18:29.212 xcodebuild[6605:45523] IDEDownloadableMetalToolchainCoordinator: Failed to remount the Metal Toolchain: The file “com.apple.MobileAsset.MetalToolchain-v17.1.324.0.k9JmEp” couldn’t be opened because you don’t have permission to view it.
Downloaded asset to: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/4ab058bc1c53034b8c0a9baca6fba2d2b78bb965.asset/AssetData/Restore/022-17211-415.dmg
Done downloading: Metal Toolchain 17A324.
I'm trying to train a text classifier model in Create ML. The Create ML app/framework offers five algorithms. I can successfully train the model with all of the algorithms except the BERT transfer learning option. When I select this algorithm, Create ML simply stops the training process immediately after the initial feature extraction phase (with no reported error).
What I've tried:
I tried simplifying the dataset to just a few classes and short examples in case there was a problem with the data.
I tried experimenting with the number of iterations and language/script options.
I checked Console.app for logged errors and found the following for the Create ML app:
error 10:38:28.385778+0000 Create ML Couldn't read event column - category is invalid. Format string is : <private>
error 10:38:30.902724+0000 Create ML Could not encode the entity <private>. Error: <private>
I'm not sure if these errors are normal or indicative of a problem. I don't know what it means by the "event" column – I don't have an event column in my data and I don't believe there should be one. These errors are not reported when using the other algorithms.
Given that I couldn't get the app to work with BERT, I switched over to the CreateML framework and followed the code samples given in the documentation. (By the way, there's an error in the docs: the line let (trainingData, testingData) = data.stratifiedSplit(on: "text", by: 0.8) should be stratifying on "label", not on "text").
The main chunk of code looks like this:
var parameters = MLTextClassifier.ModelParameters(
    validation: .split(strategy: .automatic),
    algorithm: .transferLearning(.bertEmbedding, revision: 1),
    language: .english
)
parameters.maxIterations = 100

let sentimentClassifier = try MLTextClassifier(
    trainingData: trainingData,
    textColumn: "text",
    labelColumn: "label",
    parameters: parameters
)
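For completeness, the surrounding data handling is sketched below, assuming a JSON file with "text" and "label" columns (stratifying on "label", per the doc fix mentioned above; the paths are placeholders):
import CreateML
import Foundation

let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/data.json"))
// Stratify on the label column, not the text column.
let (trainingData, testingData) = data.stratifiedSplit(on: "label", by: 0.8)

// ... train sentimentClassifier as above, then evaluate and save:
let metrics = sentimentClassifier.evaluation(on: testingData,
                                             textColumn: "text",
                                             labelColumn: "label")
print(metrics)
try sentimentClassifier.write(to: URL(fileURLWithPath: "/path/to/TextClassifier.mlmodel"))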
Ultimately I want to train a single multilingual model, and I believe that BERT is the best choice for this. The problem is that there doesn't seem to be a way to choose the multilingual Latin script option in the API. In the Create ML app you can theoretically do this by selecting the Latin script with language set to "Automatic", as recommended in this WWDC video (relevant section starts at around 8:02). But, as far as I can tell, ModelParameters only lets you pick a specific language.
I presume the framework must provide some way to do this, since the Create ML app uses the framework under the hood, but I can't see a way to do it. Another possibility is that the Create ML app might be misrepresenting the framework – perhaps selecting a specific language in the app doesn't actually make any difference – for example, maybe all Latin languages actually use the same model under the hood and the language selector is just there to guide people to the right choice (but this is just my speculation).
Any help would be much appreciated! If possible, I'd prefer to use the Create ML app if I can get the BERT option to work – is this actually working for anyone? Or failing that, I want to use the framework to train a multilingual Latin model with BERT, so I'm looking for instructions on how to choose that specific option or confirmation that I can just choose .english to get the correct Latin multilingual model.
I'm running Xcode 26.2 on Tahoe 21.1 on an M1 Pro MacBook Pro. I have version 6.2 of the Create ML app.
Hello,
I created an app using the latest version of Xcode (16.2), and I'm having a problem testing this app on my iPhones.
I created this app just like all the others I've made for my clients, including one on 03/24/25, but when I went to upload another app to App Store Connect on 03/26/25, I couldn't test it on my phone.
Points to consider:
The app runs perfectly in the Simulator.
All my terms and agreements with Apple are up to date.
I tried to download the app through TestFlight on 2 different devices, an iPhone 7 Plus (iOS 15.8.1) and an iPhone 15 Pro Max (iOS 18.3.2), and both attempts were unsuccessful.
I can send my app to the internal testers, but when I click download, the following message appears:
Could not install [APP NAME].
The requested App is not available or doesn't exist.
I imagine this problem is on Apple's side. I have already contacted support, but I need to resolve this urgently for my customers.
Has anyone experienced this and resolved it?
I’m currently developing an application using WKWebView.
After updating to iOS 26.2 Developer Beta, the following Web API started returning false:
isUserVerifyingPlatformAuthenticatorAvailable
MDN: https://developer.mozilla.org/ja/docs/Web/API/PublicKeyCredential/isUserVerifyingPlatformAuthenticatorAvailable_static
This issue did not occur on iOS 26.1 — it only happens on the beta version.
Has anyone else encountered this problem or is aware of any related changes?
OS: iOS 26.2 beta 3 (23C5044b)
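For anyone who wants to reproduce this from the native side, here is a minimal sketch that runs the check inside a WKWebView (webView is assumed to be an existing, already-loaded instance):
import WebKit

// callAsyncJavaScript awaits the returned promise, so the Bool result of
// isUserVerifyingPlatformAuthenticatorAvailable() comes back directly.
webView.callAsyncJavaScript(
    "return await PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable();",
    in: nil,
    in: .page
) { result in
    switch result {
    case .success(let value):
        print("UVPA available:", value as? Bool ?? false) // false on 26.2 beta
    case .failure(let error):
        print("JS evaluation failed:", error)
    }
}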
In Xcode, under Signing & Capabilities (Release) for our bundle ID, the selected provisioning profile does include the entitlement com.apple.developer.payment-pass-provisioning.
However, when we upload a new build to TestFlight, the Build Metadata → Entitlements section for the same bundle ID does not include com.apple.developer.payment-pass-provisioning.
Because of this, PKAddPaymentPassViewController does not open in TestFlight builds.
This suggests that while the entitlement is enabled for the App ID and visible in Xcode, it may not yet be propagated to App Store Connect’s signing service for TestFlight/App Store builds.
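For reference, this is how the failure surfaces at runtime (a minimal sketch; as we understand it, the availability check reports false when the signed binary lacks the entitlement):
import PassKit

// canAddPaymentPass() reports false (and the controller's initializer returns
// nil) when the build is missing com.apple.developer.payment-pass-provisioning.
if PKAddPaymentPassViewController.canAddPaymentPass() {
    // Build a PKAddPaymentPassRequestConfiguration and present the controller.
} else {
    print("Payment pass provisioning unavailable - check the build's entitlements.")
}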
Please note: the Wallet Entitlements team has confirmed that they granted the entitlements for our team and the Apple IDs.
Xcode: 26.0.1
Profile being used: Distribution Profile
I have some questions about Liquid Glass and iOS 26 on the iPhone.
Routine scrolling in any view causes the title to change from Light Mode colors to Dark Mode colors. Is this now standard behavior? The column headers also display a black stripe across the top of the screen when scrolling. And why doesn't the display shift the other way, from Dark Mode to Light Mode, when the device is in Dark Mode?
Scrolling causes everything in the header (navigation title, time, battery status, and Wi-Fi status) to change from black to white.
Is this an accessibility action that I may have turned on by accident?
I'm not very thrilled by this behavior!
I keep getting an error saying the tester has 'an invalid name or email address and wasn't added'.
I never had this problem until maybe 1-2 days ago.
Please fix this; I'm unable to publish new builds to my early access users.
Hi,
I am building a new app for the App Store; the app is not live yet.
I have set up an annual subscription product in App Store Connect. Our problem is that we are unable to retrieve the product from our app, even though we've made sure there is no missing metadata (e.g. price, availability).
Has anyone encountered this before? Appreciate any help provided.
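For context, the retrieval we're attempting boils down to the following (a minimal StoreKit 2 sketch; the product identifier is a placeholder for our real one):
import StoreKit

Task {
    // "com.example.app.annual" stands in for the product ID configured in
    // App Store Connect.
    let products = try await Product.products(for: ["com.example.app.annual"])
    if products.isEmpty {
        // This is what we see: no error thrown, just an empty result for the
        // configured identifier.
        print("No products returned")
    }
}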
Thanks
There is a problem with the launchscreen of my application on iPadOS 26.
After the release of the windowed mode feature, the launch screen of my fullscreen, landscape-only application is cropped and doesn't stretch to the screen's size when the device is in portrait mode.
How can I fix this?
Hi there,
I have a SwiftUI app that opens a user-selected audio file (wave). For each audio file, an additional file exists containing events that were extracted from the audio; it has the same filename and uses the extension bcCalls. I load the audio file using the fileImporter view modifier and access it with a security-scoped bookmark, which works well. After loading the audio, I create a CallsSidecar NSFilePresenter with the URL of the audio file, register it with NSFileCoordinator, and then create a coordinator with it. This fails with: NSFileSandboxingRequestRelatedItemExtension: Failed to issue extension for; Error Domain=NSPOSIXErrorDomain Code=3 "No such process"
My Info.plist contains an entry for the document type with NSIsRelatedItemType set to YES.
I have used this kind of file presenter code in various live apps developed some years ago. Now, starting from scratch on a fresh macOS 26 system with the most current Xcode, I can't get it running. Any ideas welcome!
Here is the code:
import SwiftUI
import UniformTypeIdentifiers

struct ContentView: View {
    @State private var sonaImg: CGImage?
    @State private var calls: Array<CallMeasurements> = Array()
    @State private var soundContainer: BatSoundContainer?
    @State private var importPresented: Bool = false

    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
            if self.sonaImg != nil {
                Image(self.sonaImg!, scale: 1.0, orientation: .left, label: Text("Sonagram"))
            }
            if !(self.calls.isEmpty) {
                List(calls) { aCall in
                    Text("\(aCall.callNumber)")
                }
            }
            Button("Load sound file") {
                importPresented.toggle()
            }
        }
        .fileImporter(isPresented: $importPresented, allowedContentTypes: [.audio, UTType(filenameExtension: "raw")!], onCompletion: { result in
            switch result {
            case .success(let url):
                let gotAccess = url.startAccessingSecurityScopedResource()
                if !gotAccess { return }
                if let soundContainer = try? BatSoundContainer(with: url) {
                    self.soundContainer = soundContainer
                    self.sonaImg = soundContainer.overviewSonagram(expectedWidth: 800)
                    let callsSidecar = CallsSidecar(withSoundURL: url)
                    let data = callsSidecar.readData()
                    print(data)
                }
                url.stopAccessingSecurityScopedResource()
            case .failure(let error):
                // handle error
                print(error)
            }
        })
        .padding()
    }
}
The file presenter according to the WWDC 19 example:
import Foundation

class CallsSidecar: NSObject, NSFilePresenter {
    lazy var presentedItemOperationQueue = OperationQueue.main
    var primaryPresentedItemURL: URL?
    var presentedItemURL: URL?

    init(withSoundURL audioURL: URL) {
        primaryPresentedItemURL = audioURL
        presentedItemURL = audioURL.deletingPathExtension().appendingPathExtension("bcCalls")
    }

    func readData() -> Data? {
        var data: Data?
        var error: NSError?
        // Register the presenter (once) so the related-item extension for the
        // sidecar file can be issued.
        NSFileCoordinator.addFilePresenter(self)
        let coordinator = NSFileCoordinator(filePresenter: self)
        coordinator.coordinate(readingItemAt: presentedItemURL!, options: [], error: &error) { url in
            data = try! Data(contentsOf: url)
        }
        return data
    }
}
And from Info.plist
<key>CFBundleDocumentTypes</key>
<array>
    <dict>
        <key>CFBundleTypeExtensions</key>
        <array>
            <string>bcCalls</string>
        </array>
        <key>CFBundleTypeName</key>
        <string>bcCalls document</string>
        <key>CFBundleTypeRole</key>
        <string>None</string>
        <key>LSHandlerRank</key>
        <string>Alternate</string>
        <key>LSItemContentTypes</key>
        <array>
            <string>com.apple.property-list</string>
        </array>
        <key>LSTypeIsPackage</key>
        <false/>
        <key>NSIsRelatedItemType</key>
        <true/>
    </dict>
    <dict>
        <key>CFBundleTypeExtensions</key>
        <array>
            <string>wav</string>
            <string>wave</string>
        </array>
        <key>CFBundleTypeName</key>
        <string>Windows wave</string>
        <key>CFBundleTypeRole</key>
        <string>Editor</string>
        <key>LSHandlerRank</key>
        <string>Alternate</string>
        <key>LSItemContentTypes</key>
        <array>
            <string>com.microsoft.waveform-audio</string>
        </array>
        <key>LSTypeIsPackage</key>
        <integer>0</integer>
        <key>NSDocumentClass</key>
        <string></string>
    </dict>
</array>
Note that BatSoundContainer is a custom class for loading audio of various undocumented formats as well as wave, Flac etc. and this is working well displaying a sonogram of the audio.
Thx, Volker
TL;DR: How does one use DNSServiceReconfirmRecord() to invalidate mDNS state of a device that's gone offline?
I'm using the DNSServiceDiscovery API (dns_sd.h) for a local P2P service. The problem I'm trying to solve is how to deal with a peer that abruptly loses connectivity, i.e. by turning off WiFi or simply by moving out of range or otherwise losing connectivity. In this situation there is of course no notification that the peer device has gone offline; it simply stops sending any packets.
After my own timeout mechanism determines the peer is not responding, I mark it as offline in my own data structures. The problem is how to discover when/if it comes back online later. My DNSServiceBrowse callback won't be invoked because mDNS doesn't know the device went offline in the first place.
I am trying to use DNSServiceReconfirmRecord, which appears to be for exactly this use case -- "Instruct the daemon to verify the validity of a resource record that appears to be out of date (e.g. because TCP connection to a service's target failed.)" However my attempts always return a BadReference error (-65541). The function requires me to pass a DNS record, and the only one I know is the TXT record; perhaps it needs a different one? Which, and how would I get it?
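In case it helps frame the question, here's roughly what I'm attempting (a minimal sketch; the service instance name is a placeholder, and querying for the SRV record first is my guess at obtaining rdata the daemon will accept):
import dnssd

// Re-query the record so we have rdata exactly as the daemon cached it,
// then hand those same bytes to DNSServiceReconfirmRecord.
let queryReply: DNSServiceQueryRecordReply = { _, _, interfaceIndex, error,
                                               fullname, rrtype, rrclass,
                                               rdlen, rdata, _, _ in
    guard error == DNSServiceErrorType(kDNSServiceErr_NoError) else { return }
    let result = DNSServiceReconfirmRecord(0, interfaceIndex, fullname,
                                           rrtype, rrclass, rdlen, rdata)
    print("Reconfirm result: \(result)")
}

var queryRef: DNSServiceRef?
// "My Peer._myproto._tcp.local." stands in for the real service instance name.
DNSServiceQueryRecord(&queryRef, 0, 0, "My Peer._myproto._tcp.local.",
                      UInt16(kDNSServiceType_SRV), UInt16(kDNSServiceClass_IN),
                      queryReply, nil)
// queryRef still has to be driven with DNSServiceProcessResult and eventually
// released with DNSServiceRefDeallocate.
My actual attempts (using the TXT record) return BadReference (-65541), hence the question about which record, and which rdata, the reconfirm call actually wants.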
Thanks!
Hi, we're having issues uploading our build to App Store Connect for use in TestFlight. The build succeeds in Xcode, but after it uploads via the archive manager we get an email stating "ITMS-90426: Invalid Swift Support - The SwiftSupport folder is missing. Rebuild your app using the current public (GM) version of Xcode and resubmit it." We have confirmed that we don't have any Swift dependencies and that we are running the latest Xcode (v11.3.1). We've also tried to build/deploy this on multiple computers with the same error. Our app was built using Unity, then brought into Xcode to build/deploy from there; we have never had any Swift dependencies in the project. We are able to deploy directly to our devices; the only time we run into issues is when uploading to App Store Connect. Any help is appreciated.
The documentation says:
The caching behavior of the NSURL and CFURL APIs differ. For NSURL, all cached values (not temporary values) are automatically removed after each pass through the run loop. You only need to call the removeCachedResourceValueForKey: method when you want to clear the cache within a single execution of the run loop. The CFURL functions, on the other hand, do not automatically clear cached resource values. The client has complete control over the cache lifetimes, and you must use CFURLClearResourcePropertyCacheForKey or CFURLClearResourcePropertyCache to clear cached resource values.
https://developer.apple.com/documentation/foundation/nsurl/removeallcachedresourcevalues()?language=objc
Is this really true? In my experience I've had to explicitly remove cached resource values via -removeAllCachedResourceValues or -removeCachedResourceValueForKey:, otherwise the URL contains stale values.
For example, on a URL that no longer exists, I attempted to read NSURLIsHiddenKey and the last value was still cached. Instead of getting an NSFileNoSuchFileError, I get the old cached value unless I explicitly call -removeCachedResourceValueForKey: first, and I'm fairly certain the value was cached on a previous pass through the run loop.
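To illustrate, a minimal sketch of what I'm seeing (the path is hypothetical):
import Foundation

// A file that gets deleted after its resource values are first read.
let url = NSURL(fileURLWithPath: "/tmp/example.txt")
var value: AnyObject?
try? url.getResourceValue(&value, forKey: .isHiddenKey) // caches the value

// ... the file is deleted and the run loop turns over ...

// Per the docs the cache should have been cleared by now, but the read below
// still returns the stale value unless this call is made first:
url.removeCachedResourceValue(forKey: .isHiddenKey)

do {
    try url.getResourceValue(&value, forKey: .isHiddenKey)
} catch {
    print(error) // NSFileNoSuchFileError, as expected, once the cache is cleared
}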
I have Mac apps that embed “Helper Apps” inside their main bundle. The helper apps do work on behalf of the main application.
The helper app doesn't show a Dock icon, though it does show minimal UI, such as an open panel, in certain situations (part of an NSService implementation). It also makes use of the NSApplication lifecycle and auto-quits after it completes all its work.
Currently the helper app is inside the main app bundle at: /Contents/Applications/HelperApp.app
Prior to Tahoe these were never displayed to the user in Launchpad, but now the Spotlight-based app launcher displays them.
What’s the recommended way to get these out of the Spotlight App list on macOS Tahoe?
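For context, helper/agent apps like this typically hide their Dock icon via the LSUIElement key in Info.plist, and I would have expected the Spotlight launcher to skip such apps the way Launchpad did:
<key>LSUIElement</key>
<true/>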
Thanks in advance.
We recently migrated our entire product to Apple Unified Logging due to the various benefits it provides. However, we immediately started hitting the "log quarantine" problem ("QUARANTINED DUE TO HIGH LOGGING VOLUME"). This is partly because we are indeed over-logging in a few cases (which we have to work on fixing), but also partly because it's a complicated product with potentially hundreds of libraries, and some of the code can legitimately be very busy. For example, we have a system extension that's implemented both as a NetworkExtension client and an EndpointSecurity client; if we log decent information about each network or file system event so we can troubleshoot something, the logs are bound to be high volume.
Now, when our app is running in a normal user environment, this is not a problem. We can disable certain heavy log levels, or at least disable persistence for certain logs (one of the benefits of Apple Unified Logging we really like is that it allows very flexible control: the log config command, OSLogPreferences, configuration profiles; we can employ whatever suits a specific case). But ultimately, the question is: what if we end up with a troubleshooting case where we don't know exactly where the problem is, so we just need the full logs at debug level? And because we might not know when the issue will happen either, we also need to persist the full set of logs for as long as possible, not just enable them. We will start hitting log quarantine again. Granted, this is a very extreme case, but if worst comes to worst, how can we even do that with Apple Unified Logging? Is there an option that allows us to override the quarantine, even if only temporarily?
I've searched a few relevant forum posts; some describe log quarantine, but no one has mentioned any solution for it (besides logging less from the app, but as I explained we have legitimate cases where log volume can still be huge). I've also read The Eskimo's "Your Friend the System Log" and browsed some of the troubleshooting configuration profiles provided by Apple, hoping to discover some hidden payloads, but found none so far.
There is an OSLogRateLimit environment variable that I noticed when running launchctl print system/<a-launch-daemon-label>, and it's usually 64. Is this relevant? And knowing Apple, it's probably something that can't be tampered with?
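For reference, the kind of per-subsystem control mentioned above looks like this; as far as I can tell it adjusts capture levels and persistence but does nothing about the quarantine itself (com.example.sysext is a placeholder for our real subsystem):
sudo log config --subsystem com.example.sysext --mode "level:debug,persist:debug"
sudo log config --subsystem com.example.sysext --status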
I have an app that has been using the following code to download audio files:
if let url = URL(string: episode.fetchPath()) {
    var request = URLRequest(url: url)
    request.httpMethod = "get"
    let task = session.downloadTask(with: request)
And then the following completionHandler code:
func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask, didFinishDownloadingTo location: URL) {
    try FileManager.default.moveItem(at: location, to: localUrl)
In the spirit of modernization, I'm trying to update this code to use async await:
var request = URLRequest(url: url)
request.httpMethod = "get"
let (data, response) = try await URLSession.shared.data(for: request)
try data.write(to: localUrl, options: [.atomicWrite, .completeFileProtection])
Both code paths use the same url value, and both return the same Data blobs (they produce the same hash value).
Unfortunately, the second code path (using await) introduces a problem: when the audio is playing and the iPhone goes to sleep, the audio stops after 15 seconds. This does not happen with the first code path (using the didFinish completion handler).
Same data, stored at the same URL, but using different URLSession calls. I would like to use async/await without the audio ending after just 15 seconds of the device screen being asleep. Any guidance greatly appreciated.
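One difference worth noting between the two paths: the original delegate-based code never applies .completeFileProtection, and complete file protection makes a file unreadable while the device is locked, so that write option is a plausible suspect. A closer async equivalent of the original download-task path is URLSession's download method (iOS 15+), sketched here, which streams to a temporary file and moves it into place without changing its protection class:
// A sketch of the async download-task equivalent. Unlike
// URLSession.shared.data(for:), this streams to disk and mirrors the
// delegate-based path's moveItem, applying no file-protection option.
var request = URLRequest(url: url)
request.httpMethod = "get"
let (tempURL, response) = try await URLSession.shared.download(for: request)
try FileManager.default.moveItem(at: tempURL, to: localUrl)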