macOS Shortcuts App Needs an Export to PDF Feature
I am very new to the macOS Shortcuts application. In my opinion, the documentation is sparse and totally inadequate; the internet seems to be the only way to figure out how to use this application. I have recently created a shortcut that is working well for me. It has several steps and is too large to fit in the Shortcuts editor window, so I cannot grab a screenshot of it for documentation purposes. I also cannot copy the contents of the editor window and paste them anywhere, such as into a new Note or TextEdit document. I think Apple should add a means to create a PDF document of the Shortcuts editor window's contents. I went to the official Apple feedback page to leave comments but, irony of ironies, the Shortcuts app is not listed there! I have no idea what I am doing at this point, but I am excited to learn how to use Shortcuts to automate and simplify tasks that I have to perform frequently. Here's hoping the documentation and features for this application will one day be comprehensive, comprehensible, and complete.
2 replies · 0 boosts · 91 views · 4w
Push To Talk framework doesn't activate audio session in background
We are trying to extend our app with Push To Talk functionality by integrating the Push To Talk framework. We are extensively testing what happens when the app is running in the foreground, in the background, or not running at all.

When the app is in the foreground and the user has joined a channel, we maintain an open connection to our server. When a remote participant starts streaming audio, we immediately call setActiveRemoteParticipant on our PTChannelManager instance. The PTT system then calls our delegate's channelManager:didActivate audioSession method and we can successfully play the incoming audio.

When the app is not running at all, there is of course no active connection initially. When another participant starts talking, we send a push notification. The PTT system starts our app in the background and calls the incomingPushResult method on our delegate. After we return the remote participant, the PTT framework calls the channelManager:didJoin delegate method, which we use to re-establish the server connection. The PTT framework then calls our channelManager:didActivate audioSession delegate method and we can successfully play audio.

Now the problem. When the application was initially in the foreground and has an established server connection, we keep that connection active when the app enters the background, until a certain timeout elapses or the system decides our app needs to be killed / removed from memory. This allows us to finish an incoming audio stream, react quickly to incoming responses, and so on. When we then receive an incoming audio stream after a certain delay (for example 5 seconds), we call the channelManager.setActiveRemoteParticipant method (using try await syntax). This finishes successfully, without any error, but the channelManager:didActivate audioSession delegate method is never called. Manually setting up an audio session is not allowed either and returns an error.

Our current workaround is to disconnect the server connection as soon as the app goes into the background. This makes sure our server sends a push notification, which does succeed in activating the audio session, after which we can play audio. However, this means we need to re-establish the connection, which introduces an unnecessary delay before we can start playback (and currently means we lose some audio). It also means we need extra checks when going to the background to make sure there is no active incoming stream, and after each incoming stream we have to check again whether we are in the background and disconnect immediately so that we get a push notification next time. This can of course lead to race conditions in an active conversation, where we might need to disconnect between incoming streams; if we don't do this in time, we might never get an activated audio session.

This might be by design, as Apple might not want us to keep the server connection active when the application enters the background. But if that's the case, I would expect the setActiveRemoteParticipant method to throw an error, and it doesn't: it returns successfully, after which we would expect the audio session to be activated as well. So maybe we are not setting our project's capabilities correctly (we might need other background permissions as well, although we have already experimented with that), or we need to do something else to make this work? A stripped-down sketch of the relevant path follows.
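For reference, here is the sketch. Channel creation (via PTChannelManager.channelManager(delegate:restorationDelegate:completionHandler:)), our socket code, and the restoration delegate are omitted; channelUUID and the playback hooks are placeholders, so treat this as an illustration rather than our exact implementation:

import PushToTalk
import AVFAudio
import UIKit

final class PTTCoordinator: NSObject, PTChannelManagerDelegate {
    var channelManager: PTChannelManager? // created elsewhere at app start
    let channelUUID = UUID()              // placeholder for the joined channel

    // Called from our own socket handling when a remote participant starts streaming.
    func handleIncomingStream(participantName: String) async throws {
        let participant = PTParticipant(name: participantName, image: nil)
        // Returns without error even in the background-with-open-socket case,
        // yet channelManager(_:didActivate:) is never delivered there.
        try await channelManager?.setActiveRemoteParticipant(participant, channelUUID: channelUUID)
    }

    // Playback may only start once the system activates the audio session.
    func channelManager(_ channelManager: PTChannelManager, didActivate audioSession: AVAudioSession) {
        // start playing the incoming audio stream
    }

    func channelManager(_ channelManager: PTChannelManager, didDeactivate audioSession: AVAudioSession) {
        // stop playback
    }

    // Remaining delegate requirements, stubbed for brevity.
    func channelManager(_ channelManager: PTChannelManager, receivedEphemeralPushToken pushToken: Data) {}
    func channelManager(_ channelManager: PTChannelManager, didJoinChannel channelUUID: UUID, reason: PTChannelJoinReason) {}
    func channelManager(_ channelManager: PTChannelManager, didLeaveChannel channelUUID: UUID, reason: PTChannelLeaveReason) {}
    func channelManager(_ channelManager: PTChannelManager, channelUUID: UUID, didBeginTransmittingFrom source: PTChannelTransmitRequestSource) {}
    func channelManager(_ channelManager: PTChannelManager, channelUUID: UUID, didEndTransmittingFrom source: PTChannelTransmitRequestSource) {}
}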
6 replies · 0 boosts · 140 views · 4w
NSFileProviderPartialContentFetching and high-latency API calls
I am adding NSFileProviderPartialContentFetching support to an existing NSFileProviderReplicatedExtension. My backend has a high time-to-first-byte latency (approx. 3 seconds) but reasonable throughput once the connection is established. I am observing a critical behavior difference between partial content fetching and standard materialization that causes sequential reads (e.g., dd, Finder copies, Adobe apps) to fail with timeouts.

The scenario: I have a 2.8 GB file. I attempt to read it sequentially using dd.

Baseline (working): partial fetching disabled. I do not conform to NSFileProviderPartialContentFetching. The system triggers fetchContents(for:version:request:completionHandler:). My extension takes 3 seconds to connect, then streams the entire 2.8 GB file. Result: success. The OS waits patiently for the entire download (minutes) without timing out, then dd reads the file instantly from the local disk.

The issue: partial fetching enabled. I add conformance to NSFileProviderPartialContentFetching. The system requests small, aligned chunks (e.g., 16 KB or 128 KB). My extension fetches the requested range, which takes ~3 seconds due to network latency. The first few chunks succeed, but shortly after, the operation fails with "Operation timed out." It appears the VFS kernel watchdog treats these repeated 3-second delays during read() syscalls as a stalled drive and kills the operation.

My questions:

Is there a documented timeout limit for fetchPartialContents completion handlers? It seems strictly enforced (similar to a local disk I/O timeout) compared to the lenient timeout for full materialization.

Is NSFileProviderPartialContentFetching inherently unsuitable for high-latency backends (e.g., cold storage, slow handshakes), or is there a mechanism to signal "progress" to the kernel to reset the I/O watchdog during a slow partial fetch?

Does the system treat partial fetching as "online/direct I/O" (blocking the user application) whereas full fetch is treated as "offline/syncing" (pausing the application), explaining the difference in tolerance?

Any insights into the VFS lifecycle differences between these two modes would be appreciated.
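For context, here is roughly what my conformance looks like. MyFileProviderExtension and the backend.read(range:of:) helper are stand-ins for my real code, and I have paraphrased the method signature from the NSFileProviderPartialContentFetching documentation, so check it against the SDK headers:

import FileProvider

extension MyFileProviderExtension: NSFileProviderPartialContentFetching {
    func fetchPartialContents(
        for itemIdentifier: NSFileProviderItemIdentifier,
        version requestedVersion: NSFileProviderItemVersion,
        request: NSFileProviderRequest,
        minimalRange requestedRange: NSRange,
        aligningTo alignment: Int,
        options: NSFileProviderFetchContentsOptions,
        completionHandler: @escaping (URL?, NSFileProviderItem?, NSRange, NSFileProviderMaterializationFlags, Error?) -> Void
    ) -> Progress {
        let progress = Progress(totalUnitCount: Int64(requestedRange.length))
        Task {
            do {
                // High-latency backend call: ~3 s before the first byte arrives.
                // This is the delay that appears to trip the I/O watchdog.
                let (fileURL, item) = try await backend.read(range: requestedRange, of: itemIdentifier)
                completionHandler(fileURL, item, requestedRange, [], nil)
            } catch {
                completionHandler(nil, nil, NSRange(), [], error)
            }
        }
        return progress
    }
}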
2 replies · 0 boosts · 192 views · 4w
My dev account is blocked
For an unknown reason, my Apple ID has been blocked, so I can no longer sign in to my developer account. I have asked to unblock it (iforgot.apple.com), but according to support the unlock sequence is not sent because this Apple ID is in use. I need help unblocking my ID so I can manage my app. Note: I am using another account to post this message.
1 reply · 0 boosts · 331 views · 4w
Missing Content from my app
Hi there! I am not a coder, but I built an app in Adalo just for in-house use at my company, PetroImaging. The app looked the way I wanted it to in preview mode in Adalo, but when my app was approved and ready for download in the App Store, a lot of content was missing. Many buttons that linked to screens were just gone, even though they were there in the preview and are formatted correctly in Adalo. How can I get the content I need back in my app so employees can begin using it?
1 reply · 0 boosts · 124 views · 4w
App transfer
I am no longer in contact with the software contracting company that developed and deployed my mobile app to the App Store via their account. How do I gain ownership of the app? Will the store allow me to develop and deploy the app under the same name?
2 replies · 0 boosts · 98 views · 4w
Access UltraWideCamera when ARSession is running
ARSession provides a video stream from the wide-angle camera. If ARSession uses the ultra wide camera at the same time, it may provide a video stream from that camera; otherwise, an AVCaptureSession using the ultra wide camera should be allowed to launch. It would be very useful if we could access different cameras while an ARSession is running. We'd like to cooperate with you if possible. Steps to reproduce: run an AVCaptureSession and then run an ARSession; the AVCaptureSession stops. A minimal sketch of the reproduction is below.
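The sketch assumes camera permission is already granted; preview layers, outputs, and error handling are omitted:

import ARKit
import AVFoundation

final class Repro {
    let captureSession = AVCaptureSession()
    let arSession = ARSession()

    func run() {
        // 1. Start an AVCaptureSession on the ultra wide camera.
        if let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back),
           let input = try? AVCaptureDeviceInput(device: device),
           captureSession.canAddInput(input) {
            captureSession.addInput(input)
        }
        captureSession.startRunning()

        // 2. Start an ARSession, which uses the wide-angle camera.
        // Observed result: the AVCaptureSession above stops.
        arSession.run(ARWorldTrackingConfiguration())
    }
}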
1 reply · 0 boosts · 476 views · 4w
Does Apple Home support Matter commissioning using concatenated / multi-device QR codes?
Hi, I’m developing a Matter commissioning flow and would like to clarify Apple Home’s support for concatenated (multi-device) QR codes. In my implementation, I generate a single QR code that contains multiple Matter onboarding payloads (concatenated payloads), intended to commission multiple devices in one scan, similar to a multi-pack / multi-accessory flow.

What I’ve tested:
Standard single-device Matter QR codes work as expected in the Apple Home app.
A concatenated QR code (multiple Matter payloads combined into one QR) does not get recognized / commissioned by Apple Home.

My questions:
Does Apple Home officially support commissioning via concatenated or multi-device Matter QR codes?
If yes, is there a specific payload format or delimiter that Apple Home expects?
If not, is this a known limitation or something planned for future iOS/Home releases?
1 reply · 0 boosts · 94 views · 4w
Custom GCController subclass for new hardware?
Hi all, wondering how I would go about creating a plugin/class to support a new (physical/hardware) device with the Game Controller framework? Between GCVirtualController on iOS and the "KeyboardAndMouseSupport.bundle" I see inside GameController.framework on my Mac, it looks like the framework must be designed to support this, but I can't find any documentation. Thanks!
1 reply · 0 boosts · 774 views · 4w
hapticpatternlibrary.plist error with Text entry fields in Simulator only
When I have a TextField or TextEditor, tapping into it produces these two console entries about 18 times each:

CHHapticPattern.mm:487 +[CHHapticPattern patternForKey:error:]: Failed to read pattern library data: Error Domain=NSCocoaErrorDomain Code=260 "The file “hapticpatternlibrary.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Library/Audio/Tunings/Generic/Haptics/Library/hapticpatternlibrary.plist, NSURL=file:///Library/Audio/Tunings/Generic/Haptics/Library/hapticpatternlibrary.plist, NSUnderlyingError=0x600000ca1b30 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}

<_UIKBFeedbackGenerator: 0x600003505290>: Error creating CHHapticPattern: Error Domain=NSCocoaErrorDomain Code=260 "The file “hapticpatternlibrary.plist” couldn’t be opened because there is no such file." UserInfo={NSFilePath=/Library/Audio/Tunings/Generic/Haptics/Library/hapticpatternlibrary.plist, NSURL=file:///Library/Audio/Tunings/Generic/Haptics/Library/hapticpatternlibrary.plist, NSUnderlyingError=0x600000ca1b30 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}

My app does not use haptics. This doesn't appear to cause any issues, although entering text can feel a bit sluggish (even on device), but I am unable to determine relatedness. Nonetheless, it definitely is a lot of log noise.

Code to reproduce in the Simulator (Xcode 26.2; iOS 26 or 18, with iPhone 16 Pro or iPhone 17 Pro):

import SwiftUI

struct ContentView: View {
    @State private var textEntered: String = ""
    @State private var textEntered2: String = ""
    @State private var textEntered3: String = ""

    var body: some View {
        VStack {
            Spacer()
            TextField("Tap Here", text: $textEntered)
            TextField("Tap Here Too", text: $textEntered2)
            TextEditor(text: $textEntered3)
                .overlay(RoundedRectangle(cornerRadius: 8).strokeBorder(.primary, lineWidth: 1))
                .frame(height: 100)
            Spacer()
        }
    }
}

#Preview {
    ContentView()
}

Tapping back and forth in these fields generates the errors each time. Thanks, Steve
2 replies · 0 boosts · 312 views · 4w
Guidance on implementing Declared Age Range API in response to Texas SB2420
I've spent the last few days researching the upcoming laws in Texas and other US states, and how these laws will impact developers around the world. I want to share what I've learned so far with the community and get feedback on my current understanding. This post is not so much focused on a single API, but more on the bigger picture.

Background

The law essentially mandates that: (1) app store platforms implement age categorization and verification mechanisms, and (2) developers implement logic to listen to age categorization signals provided by the platform and respond accordingly. You can read the law itself here: https://capitol.texas.gov/tlodocs/89R/billtext/html/SB02420S.HTM

Most people seem to interpret the law as follows: all developers who distribute apps in the USA are effectively required to implement the new APIs (required by Texas, not by Apple). The penalties are heavy, but it's unclear whether developers would actually be pursued and by whom (e.g., would someone seriously pursue an alarm clock app because it could be accessed by a minor?).

Putting aside the ethical, privacy, and legal issues (and the damaging precedents this law sets), most people seem to agree that, from a technical perspective, this is a very silly way to implement age blocking (the app store collects the info and passes it to the developer, who is responsible for blocking access). It would make far more sense for the platform to block the app directly for affected users (with optional API support for developers who wish to use it). However, I believe the law has specifically mandated that this is how the system must work, so Apple's hands are tied. Apple has basically complied with its obligations by providing the relevant APIs to developers.

Because the law is vague and open-ended, there are a lot of legal and technical uncertainties about what developers actually need to do to be compliant. Understandably, Apple seems reluctant to provide any guidance to developers that could be interpreted as legal advice. Apple's docs simply describe what the APIs do, with no guidance on what the overall flow is meant to look like or how and when the APIs should actually be used in practice. Americans familiar with the political situation seem to think there's the possibility of an injunction before this law goes into effect, but that looks increasingly unlikely given that it's two weeks away.

Developer solutions

Many devs seem to be exploring two main workarounds, at least as temporary solutions:

(1) Raise your app's rating to 18+. Putting aside the fact that Texas law would effectively be forcing developers to raise their global age rating (resulting in lost revenue that extends far beyond Texas), it remains unclear whether this solution is actually legally compliant, since the law specifically mandates that apps must implement logic to respond to signals from the platform.

(2) Geo-block Texas. Again, it remains unclear whether this is compliant, because geo-blocking is not 100% accurate and it doesn't actually do what the law says you have to do. It also creates issues if you already have users in Texas, and it means performing additional privacy-hostile checks (i.e., detecting the user's location, even for users who are not subject to the law).

The DeclaredAgeRange API is actually pretty straightforward to use, although there is still a lack of documentation on certain edge cases and it's difficult to test.
In addition, the new APIs are only available in iOS 26.2, so it's unclear what you need to do if you're still supporting earlier versions. Some people are of the opinion that developers can only reasonably respond to the signals that are available, thus pushing responsibility back to the platforms for earlier OS versions.

The API provides a bool (AgeRangeService.shared.isEligibleForAgeFeatures), which lets you determine whether the user is someone to whom age checks need to be applied: https://developer.apple.com/documentation/declaredagerange/agerangeservice/iseligibleforagefeatures

I'm not 100% sure, but perhaps the simplest action you can take is to check this bool on launch and block access if it's true. In any case, it looks like this API will be very useful because it means we can avoid applying the checks in other jurisdictions and for grandfathered-in users without needing to implement custom geo-tracking code (albeit only in iOS 26.2+).

To implement the API, my current thinking is that, on every launch, I should first check the above bool and, if it's true, do the following: (1) get the App Store age rating with let appStoreAgeRating = await AppStore.ageRatingCode ?? 18, (2) request the user's age with let ageRangeResponse = try await AgeRangeService.shared.requestAgeRange(ageGates: appStoreAgeRating), (3) check that the user has agreed to share their age, (4) check that lowerBound >= appStoreAgeRating, and (5) check that the verification method is not one of the self-declared methods. If this procedure fails, I should block access to the app and provide a link to Apple's support page: https://support.apple.com/en-us/122770. A rough sketch of this flow is included at the end of this post. I stress, however, that this is just my current idea and there are some edge cases I'm unsure about.

Other issues

It is possible to do some basic testing of the API, but only using a sandbox App Store account on a physical device. From the Developer section in iOS Settings, you can select from a few different scenarios, like "Texas user aged 14 without parental consent", etc.

There's also a whole separate aspect of this law relating to "significant updates". Everyone seems somewhat confused about this, but the general idea appears to be that, if your app's age classification changes in the future, the app should be responsive to that change. My current interpretation is that using AppStore.ageRatingCode as the age gate (as described above) should allow me to comply, but I haven't really looked into this aspect of the law yet.

There's also another aspect of this law requiring developers to revoke access to the app when requested by the parent. I have not looked into this yet, but as noted above, it doesn't make sense to me why this is the developer's responsibility given that the platforms already provide solid parental controls. Do I need to do something else in addition to what I've sketched out above?

It goes without saying, of course, that everything above is not legal advice, and I still have some gaps in my understanding. I would really appreciate any feedback on the above, perhaps with recommendations about better ways to approach this.
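Here is the rough sketch I mentioned above, to make the flow concrete. It uses only the calls named earlier (isEligibleForAgeFeatures, AppStore.ageRatingCode, requestAgeRange(ageGates:)); the response-handling spellings (.sharing / .declinedSharing, lowerBound) are my paraphrase of the response type and should be verified against the shipping SDK, and the self-declaration check (step 5) is left as a placeholder:

import DeclaredAgeRange
import StoreKit

// Returns true if the user may proceed; false means block access and link to
// https://support.apple.com/en-us/122770. Response member names are paraphrased
// and may not match the SDK exactly.
func passesAgeGate() async -> Bool {
    // Only apply checks where the platform says they are needed.
    guard AgeRangeService.shared.isEligibleForAgeFeatures else { return true }

    // (1) Use the App Store age rating as the gate; fall back to 18.
    let appStoreAgeRating = await AppStore.ageRatingCode ?? 18

    do {
        // (2) Ask the system for the user's declared age range.
        let response = try await AgeRangeService.shared.requestAgeRange(ageGates: appStoreAgeRating)
        switch response {
        case .sharing(let range):
            // (4) The declared lower bound must clear the gate.
            guard let lower = range.lowerBound, lower >= appStoreAgeRating else { return false }
            // (5) Placeholder: also reject purely self-declared ranges here.
            return true
        case .declinedSharing:
            // (3) The user declined to share an age range.
            return false
        @unknown default:
            return false
        }
    } catch {
        return false
    }
}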
10 replies · 1 boost · 1.4k views · 4w
WWDC21 Demystify SwiftUI question
When the presenter was talking about structural identity, starting at about 8:53, he mentioned that SwiftUI needs to guarantee that the two views can't swap places, and it does this by looking at the view's type structure. It guarantees that the true view will always be an A, and the false view will always be a B. I'm not sure exactly what he means, because views can't "swap places" like dogs. Why isn't just knowing that some view is shown in the true branch, and another is shown in the false branch, enough for its identity? E.g., the identity could be "the view on true" vs. "the view on false", the same as his example with "the dog on the left" vs. "the dog on the right".
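For concreteness, here is the kind of conditional I'm asking about (ViewA and ViewB are stand-ins for his two dog views). As I understand it, ViewBuilder turns the if/else into a single ConditionalContent<ViewA, ViewB> value (spelled _ConditionalContent in the SDK), so the type itself records which branch a view came from:

import SwiftUI

// Stand-ins for the two dog views in the session.
struct ViewA: View { var body: some View { Text("A") } }
struct ViewB: View { var body: some View { Text("B") } }

struct Example: View {
    let condition: Bool

    // The true branch can only ever hold a ViewA and the false branch a ViewB,
    // so "the view on true" vs. "the view on false" is encoded in the type.
    var body: some View {
        if condition {
            ViewA()
        } else {
            ViewB()
        }
    }
}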
0 replies · 0 boosts · 63 views · 4w
Apple Developer Program not accepting my cards
Hi, I have been trying to purchase the Apple Developer Program membership for a week now and it's not working. I have tried multiple cards, debit and credit (Visa, Mastercard), and my bank is not blocking the payment either. It keeps saying authorization failed on Apple's end. I tried using different Wi-Fi, still the same, and a different Apple ID does the same thing. Anyone know what's going on?
1 reply · 0 boosts · 200 views · 4w
AppleScript run as a Quick Action errors out when run inside a Shortcut
I have a Quick Action which flattens a PDF via AppleScript. The code works extremely well: I right-click in Finder and the PDF is flattened, annotations are burned in, no other applications are opened, and the action completes in less than 2 seconds. Here is the AppleScript code:

I have a Shortcut which completes several operations on already-flattened PDFs. Presently, (1) I run my Quick Action by right-clicking a PDF in Finder in order to flatten it, and (2) I then right-click that saved PDF in Finder to run my Shortcut on the now-flattened PDF. Ideally I'd like to add the AppleScript code which flattens the PDF to the beginning of my Shortcut, for the sake of convenience and because I sometimes run my Shortcut having forgotten to flatten the PDF first. However, I'm finding that the AppleScript code, when placed into a Run AppleScript action in Shortcuts, gives this error message:

The exact same code, when placed into a Run AppleScript action in Automator and then run as a Quick Action by right-clicking the PDF in Finder, does not give an error and works perfectly. Does anybody have an explanation (and possible solution) for why this is the case?
2 replies · 0 boosts · 250 views · 4w