The CloudKit Console includes a Unique Users table in the Usage section.
The numbers here are lower than I would expect. Does this table only track a certain percentage of users, e.g., only users who have opted in to share analytics with developers?
The new test report, with the automatic video recording and scrubber, is great. I'm setting up different configurations for different languages to improve localization testing, but I was wondering if it was possible to make the simulator device type part of the configuration.
For example, I'd like to have a single test plan with an "iPhone 14" configuration, an "iPad Air" configuration, etc. Then I would just press Cmd-U, and Xcode would run through each device in sequence, leaving me with videos of each test run that I could review in the test report.
Is that possible?
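For context, the closest workflow I know of is on the command line, where xcodebuild accepts multiple destinations and runs the tests on each of them (the scheme and test plan names below are made up):
xcodebuild test \
  -scheme MyApp \
  -testPlan Localization \
  -destination 'platform=iOS Simulator,name=iPhone 14' \
  -destination 'platform=iOS Simulator,name=iPad Air (5th generation)'
But that runs outside Xcode, so I'd lose the in-IDE test report with the videos, which is the part I really want.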
SwiftData includes support for CloudKit sync. However, I don't see any way to add conflict resolution behavior. For example, if different devices set different values for a field, or if a relationship is orphaned because of a deletion on another device, the application has to handle this somehow.
In Core Data (which SwiftData wraps), you can handle this with the conflict resolution system (docs) and classes like NSMergePolicy.
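For reference, opting into one of those policies in plain Core Data is a one-line setting on the context. A minimal sketch, with a hypothetical container name:
import CoreData

let container = NSPersistentCloudKitContainer(name: "Model")
container.loadPersistentStores { _, error in
    if let error { fatalError("Store failed to load: \(error)") }
}
// Prefer the in-memory object's values when the same attribute changed in both places.
container.viewContext.mergePolicy = NSMergePolicy.mergeByPropertyObjectTrump
container.viewContext.automaticallyMergesChangesFromParent = true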
Is any of this accessible in SwiftData? If not, how do you deal with conflicts when syncing a SwiftData model with the cloud?
I have a project with some legacy networking code that uses the Stream (formerly NSStream) family of classes, including Stream, InputStream, OutputStream, and StreamDelegate.
None of these are Sendable, so I get a lot of warnings when implementing delegate methods in a @MainActor class.
These classes seem like they could be Sendable. Is this something that will happen soon? Is it a bug I should report?
The networking code that uses these classes runs great, and hasn't needed changes for years, so my current solution is to just mark these unchecked:
extension Stream: @unchecked Sendable { }
extension InputStream: @unchecked Sendable { }
extension OutputStream: @unchecked Sendable { }
This makes the compiler happy, but makes me feel kind of bad. Is there something else I could do?
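One wrinkle I've noticed: under the Swift 6 language mode, retroactive conformances to protocols from another module produce a warning unless they're marked @retroactive, so (as I understand it) the extensions would become:
extension Stream: @retroactive @unchecked Sendable { }
extension InputStream: @retroactive @unchecked Sendable { }
extension OutputStream: @retroactive @unchecked Sendable { }
That's still the same "trust me" assertion, though, which is why I'm asking whether there's a better option.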
I'm working on adding CFBundleDocumentTypes to my Info.plist so that a user can share an image from other apps on their device and have it open inside my iPhone app.
I seem to be able to get this to work for sharing a single photo from the Photos app, but not for (1) multiple photos from the Photos app or (2) images from Safari.
One thing that makes this difficult is that my changes to Info.plist sometimes have no effect. I can remove CFBundleDocumentTypes and still see the icon, for example. Or I can add a new accepted UTI, but it has no effect. I've tried cleaning and rebuilding, deleting and reinstalling the app...no success. I tried in the simulator, too, and even Erase Content and Settings didn't force changes to be applied. I'm not sure what else to try here.
Anyway, I'd like my app to appear in the Share sheet for photos from the Photos app, Mail, and Safari in particular, but really from any app that supports sharing photos.
I can share the config that seemed to work, but since changing it doesn't always have an effect, I can't guarantee that this was the one that worked. At the moment, it doesn't work, but I'm not sure why. Here it is:
<key>CFBundleDocumentTypes</key>
<array>
    <dict>
        <key>CFBundleTypeIconFiles</key>
        <array/>
        <key>CFBundleTypeName</key>
        <string>Image</string>
        <key>CFBundleTypeRole</key>
        <string>Viewer</string>
        <key>LSHandlerRank</key>
        <string>Alternate</string>
        <key>LSItemContentTypes</key>
        <array>
            <string>public.data</string>
            <string>public.jpeg</string>
            <string>public.png</string>
            <string>public.image</string>
            <string>public.gif</string>
            <string>public.url</string>
            <string>public.content</string>
        </array>
    </dict>
</array>
<key>LSSupportsOpeningDocumentsInPlace</key>
<true/>
In my code, I also defined the following method in my SceneDelegate (though I think the problem is just with Info.plist):
func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
    // ... loads the incoming file and presents it (body omitted)
}
Here are my questions:
1. How do I make sure that my changes to Info.plist apply? Is there a cache somewhere that I have to force-clear? This is the trickiest part, because I can't reliably run an experiment and see whether it worked.
2. Adding the specific image UTIs (public.jpeg, public.png) seemed to help, even though those types should conform to public.image, which conforms to public.data and public.content. Is it actually necessary to specify them?
3. If the user selects multiple photos in the Photos app, my app doesn't appear, but other third-party apps on my phone do. How can I support multiple photos?
4. This configuration doesn't reliably show my app for Safari images. What do I need to do to make that happen?
5. I had to use "public.data" when I briefly had Safari sharing working, but there doesn't seem to be a way to get a UTI from UIOpenURLContext. Right now, my code just tries to load the data as an image and aborts if UIImage(data:) returns nil. Is this a safe way of doing this? Is there a way to get the UTI for the data?
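To make question 5 concrete, this is roughly my current approach, plus the best type-guessing fallback I've found, UTType(filenameExtension:) from UniformTypeIdentifiers (the helper name is just for illustration):
import UIKit
import UniformTypeIdentifiers

func imageFromIncoming(_ context: UIOpenURLContext) -> UIImage? {
    let url = context.url
    // With LSSupportsOpeningDocumentsInPlace, this may also require
    // url.startAccessingSecurityScopedResource() before reading.
    // Best-effort type, guessed from the file extension (nil if there is none).
    if let type = UTType(filenameExtension: url.pathExtension) {
        print("Guessed type:", type.identifier, "conforms to image:", type.conforms(to: .image))
    }
    // Current fallback: try to decode the bytes and abort if it's not an image.
    guard let data = try? Data(contentsOf: url) else { return nil }
    return UIImage(data: data)
}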
Topic: App & System Services
SubTopic: General
Tags: iOS, UIKit, Core Services, Uniform Type Identifiers
In my app, I have a tab bar controller whose first tab is a navigation controller. Taking a certain action in that controller will push a new controller onto the navigation stack. The new controller has hidesBottomBarWhenPushed set to true, which hides the tab bar and shows the new controller's toolbar.
It's worked like this for years. But in the iOS 26 simulator (I don't have the beta installed on any physical iPhones yet), when I tried this behavior in my app, I instead saw:
the tab bar remained exactly where it was when I pushed the new controller
the toolbar never appeared at all and all of its buttons were inaccessible
If you set the deployment target to iOS 18 and run the code in an iOS 18 simulator:
when you tap "Tap Me", the new controller is pushed onto the screen
simultaneously, the tab bar hides and the second controller's toolbar appears.
If you set the deployment target to iOS 26 and run the code in an iOS 26 simulator:
when you tap "Tap Me", the new controller is pushed onto the screen
the toolbar never appears and the tab bar remains unchanged after the push animation completes
Is this a bug in the iOS 26 beta, or is it an intentional behavior change in how hidesBottomBarWhenPushed works in these cases?
Below is sample code that reproduces the problem:
class TabController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let button = UIButton(type: .roundedRect, primaryAction: UIAction(title: "Test Action") { action in
            let newController = SecondaryController()
            // Hide the tab bar while the pushed controller is on top.
            newController.hidesBottomBarWhenPushed = true
            self.navigationController!.pushViewController(newController, animated: true)
        })
        button.setTitle("Tap Me", for: .normal)
        button.translatesAutoresizingMaskIntoConstraints = false
        self.view.addSubview(button)
        NSLayoutConstraint.activate([
            self.view.centerXAnchor.constraint(equalTo: button.centerXAnchor),
            self.view.centerYAnchor.constraint(equalTo: button.centerYAnchor),
        ])
    }
}
class SecondaryController: UIViewController {
    override func loadView() {
        super.loadView()
        self.toolbarItems = [
            UIBarButtonItem(image: UIImage(systemName: "plus"), style: .plain, target: nil, action: nil)
        ]
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        self.navigationController?.isToolbarHidden = false
    }
}
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?
    var tabController: UITabBarController?

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = (scene as? UIWindowScene) else { return }
        let tab1 = UITab(title: "Test 1", image: UIImage(systemName: "globe"), identifier: "test1") { _ in
            UINavigationController(rootViewController: TabController())
        }
        let tab2 = UITab(title: "Test 2", image: UIImage(systemName: "globe"), identifier: "test2") { _ in
            UINavigationController(rootViewController: TabController())
        }
        window = UIWindow(windowScene: windowScene)
        self.tabController = UITabBarController(tabs: [tab1, tab2])
        self.window!.rootViewController = self.tabController
        self.window!.makeKeyAndVisible()
    }
}
I can see that iOS 26 beta 2 was released yesterday. I'm not running the beta on any physical iPhones yet, but I wanted to see if an issue I reported in beta 1 was fixed.
Is there a way to update my iOS 26 simulator for beta 2? Usually, there's a new Xcode download that includes the new simulator, but it looks like the most recent Xcode beta release was June 9.
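For what it's worth, the only mechanism I've found so far is the platform download support in xcodebuild and simctl, though I don't know whether it ever fetches a runtime newer than the one bundled with the installed Xcode:
# list the simulator runtimes currently installed
xcrun simctl runtime list
# download the latest iOS simulator runtime available to this Xcode
xcodebuild -downloadPlatform iOS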
I want to use Foundation Models in a project, but I know my users will want to avoid environmentally intensive AI work in data centers.
Does Foundation Models ever use Private Cloud Compute or any other kind of cloud-based AI system?
I'd like to be able to assure my users that the LLM usage is relatively environmentally friendly. It would be great to be able to cite a specific Apple page explaining that Foundation Models work is always done locally.
If there's any chance that work can be done in the cloud, is there a way to opt out of that?
Topic: Machine Learning & AI
SubTopic: Foundation Models
In the Meet AsyncSequence talk, there's a very cool use case shown in one of the slides: the new notifications(named:) API on NotificationCenter returns an async sequence, and the code sample does something like:
let notification = await center.notifications(named: ....).first { ... }
This seems really intriguing and useful to me but I had a few questions about the details of how this works:
What is the type of notification in this snippet? A Task? Where would I store this value?
What context should this be invoked in, especially if I want to have a long-running notification filter running that will remain active for the lifetime of the app?
Basically, I'm curious to see an example of the code surrounding this snippet.
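To make the question concrete, here's roughly what I imagine the long-running version looks like; the notification name and class are placeholders, and I may well be holding this wrong:
import Foundation

extension Notification.Name {
    static let somethingChanged = Notification.Name("somethingChanged")
}

final class Observer {
    private var task: Task<Void, Never>?

    func start() {
        // Keep a Task alive for the app's lifetime and iterate the sequence.
        task = Task {
            for await note in NotificationCenter.default.notifications(named: .somethingChanged) {
                // handle each matching Notification here
                print(note.name)
            }
        }
    }

    deinit {
        task?.cancel()
    }
}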
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions.
Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens.
That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
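To illustrate what I mean: one way I can imagine structuring it is a retrieval step in front of the model, where our existing search narrows the docs down to a few chunks that fit in the context. A minimal sketch, assuming searchDocs is our own search function and that I have the FoundationModels API roughly right:
import FoundationModels

func answer(question: String, searchDocs: (String) -> [String]) async throws -> String {
    // Keep only the most relevant chunks so instructions + prompt stay under the limit.
    let excerpts = searchDocs(question).prefix(3).joined(separator: "\n\n")
    let session = LanguageModelSession(instructions: """
        Answer the user's question using only these documentation excerpts:
        \(excerpts)
        """)
    let response = try await session.respond(to: question)
    return response.content
}
Even if something like that works, I don't know whether running several of these rounds per question would be fast enough for interactive use, hence my question.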
Topic: Machine Learning & AI
SubTopic: Foundation Models
I have a few view controllers in a large UIKit application that previously started showing their content right below the navigation bar at the top of the screen.
When testing the same code on iOS 26, these same views have their content extend under the navigation bar and toolbar. I was able to fix it with:
if #available(iOS 26, *) {
    self.edgesForExtendedLayout = [.bottom]
}
when running on iOS 26. I also fixed one or two places where the main view was anchored to self.view.topAnchor instead of self.view.safeAreaLayoutGuide.topAnchor.
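Concretely, the constraint fix looked like this (contentView stands in for the real view):
// Before: pinned to the raw view edge, so content slides under the bar on iOS 26.
contentView.topAnchor.constraint(equalTo: view.topAnchor).isActive = true
// After: pinned to the safe area, which accounts for the navigation bar.
contentView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor).isActive = true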
Although this seems to work, I wonder if this was an intended change in iOS 26 or just a temporary bug in the beta that will be resolved.
Were changes made to the safe area and edgesForExtendedLayout logic in iOS 26? If so, is there a place I can see what the specific changes were, so I know my code is handling it properly?
Thanks!
Topic: UI Frameworks
SubTopic: UIKit
I occasionally get this error in Xcode’s console:
Potential Structural Swift Concurrency Issue: unsafeForcedSync called from Swift Concurrent context.
What does this mean, and how can I resolve it? Googling it doesn’t turn up any results.
This doesn't crash the app; it's just a diagnostic that appears in the Xcode console, and the app keeps running normally before and after it's logged.
Is there a way I can set a breakpoint to catch this where it happens?
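The only idea I've had so far is a symbolic breakpoint on the function named in the message, something like this in lldb; I haven't confirmed that this symbol is actually breakable:
(lldb) breakpoint set --name unsafeForcedSync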