I'm seeing crashes in _MPRemoteCommandEventDispatch on iOS 26.x devices in 3 apps. According to the Bugsnag logs, the exception is:
NSInternalInconsistencyException: event dispatch <_MPRemoteCommandEventDispatch: <MPRemoteCommandEvent: 0x11c049500 commandID=THV0 command=<MPRemoteCommand: 0x109ad1ea0 type=Play (0) enabled=YES handlers=[0x109b6a310]> sourceID=(null) ([HostedRoutingSessionDataSource] handleControlSendingCommand<2W5E>)> state:201> deallocated without calling continuation
I've attached a log from the Xcode Organizer that matches the Bugsnag crash.
mpr_remote_command_event.crash
When I set a breakpoint on -[_MPRemoteCommandEventDispatch dealloc], I can see it's hit every time I tap the play or pause button on the Lock Screen.
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000002370420cc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001e975c810 pthread_kill + 268 (pthread.c:1721)
2 libsystem_c.dylib 0x0000000198f8ff64 abort + 124 (abort.c:122)
3 libc++abi.dylib 0x000000018a7cf808 __abort_message + 132 (abort_message.cpp:66)
4 libc++abi.dylib 0x000000018a7be484 demangling_terminate_handler() + 304 (cxa_default_handlers.cpp:76)
5 libobjc.A.dylib 0x000000018a6cff78 _objc_terminate() + 156 (objc-exception.mm:496)
6 xxxxxxxxxxxxxx 0x00000001003a7db8 CPPExceptionTerminate() + 416 (BSG_KSCrashSentry_CPPException.mm:156)
7 libc++abi.dylib 0x000000018a7cebdc std::__terminate(void (*)()) + 16 (cxa_handlers.cpp:59)
8 libc++abi.dylib 0x000000018a7ceb80 std::terminate() + 108 (cxa_handlers.cpp:88)
9 CoreFoundation 0x000000018d7341c4 __CFRunLoopPerCalloutARPEnd + 256 (CFRunLoop.c:769)
10 CoreFoundation 0x000000018d70bb5c __CFRunLoopRun + 1976 (CFRunLoop.c:3179)
11 CoreFoundation 0x000000018d70aa6c _CFRunLoopRunSpecificWithOptions + 532 (CFRunLoop.c:3462)
12 GraphicsServices 0x000000022e31c498 GSEventRunModal + 120 (GSEvent.c:2049)
13 UIKitCore 0x00000001930ceba4 -[UIApplication _run] + 792 (UIApplication.m:3902)
14 UIKitCore 0x0000000193077a78 UIApplicationMain + 336 (UIApplication.m:5577)
15 xxxxxxxxxxxxxx 0x00000001000c0134 main + 308 (main.swift:15)
16 dyld 0x000000018a722e28 start + 7116 (dyldMain.cpp:1477)
Is the crash happening when the app is being terminated?
Thank you!
I’ve been struggling with a very frustrating issue using the new iOS 26 Swift Concurrency APIs for video processing. My pipeline reads frames using AVAssetReader, processes them via CIContext (Lanczos upscale), and then appends the result to an AVAssetWriter using the new PixelBufferReceiver.
The Problem: The execution randomly stops at the await append(...) call. The task suspends and never resumes.
It is completely unpredictable: It might hang on the very first run, or it might work fine for 4-5 runs and then hang on the next one.
It is independent of video duration: It happens with 5-second clips just as often as with long videos.
No feedback from the system: There is no crash, no error thrown, and CPU usage drops to zero. The thread just stays in the suspended state indefinitely.
If I manually cancel the operation and restart the VideoEngine, it usually starts working again for a few more attempts, which makes me suspect some internal resource exhaustion or a deadlock between the GPU context and the writer's input.
The Code: Here is a simplified version of my processing loop:
private func processVideoPipeline(
    readerOutputProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>>,
    pixelBufferReceiver: AVAssetWriterInput.PixelBufferReceiver,
    nominalFrameRate: Float,
    targetSize: CGSize
) async throws {
    while !Task.isCancelled, let payload = try await readerOutputProvider.next() {
        let sampleBufferInfo: (imageBuffer: CVPixelBuffer?, presentationTimeStamp: CMTime) = payload.withUnsafeSampleBuffer { sampleBuffer in
            return (sampleBuffer.imageBuffer, sampleBuffer.presentationTimeStamp)
        }
        guard let currentPixelBuffer = sampleBufferInfo.imageBuffer else {
            throw AsyncFrameProcessorError.missingImageBuffer
        }
        guard let pixelBufferPool = pixelBufferReceiver.pixelBufferPool else {
            throw NSError(domain: "PixelBufferPool", code: -1, userInfo: [NSLocalizedDescriptionKey: "No pixel buffer pool available"])
        }
        let newPixelBuffer = try pixelBufferPool.makeMutablePixelBuffer()
        let newCVPixelBuffer = newPixelBuffer.withUnsafeBuffer({ $0 })
        try upscale(currentPixelBuffer, outputPixelBuffer: newCVPixelBuffer, targetSize: targetSize)
        let presentationTime = sampleBufferInfo.presentationTimeStamp
        try await pixelBufferReceiver.append(.init(unsafeBuffer: newCVPixelBuffer), with: presentationTime)
    }
}
Does anyone know how to fix it?
I need to extract PCM data from AAC data, but this API has me completely stumped.
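To make the question concrete, this is the kind of result I'm after. A minimal sketch, assuming the AAC data is in a file at url (for raw AAC packets one would instead feed an AVAudioConverter with AVAudioCompressedBuffer): AVAudioFile creates the decoder and exposes PCM via its processingFormat.
import AVFoundation

// Sketch (assumes the AAC lives in a file such as .m4a/.aac).
// AVAudioFile decodes it for you; processingFormat is uncompressed PCM.
func decodeAACFileToPCM(url: URL) throws -> AVAudioPCMBuffer? {
    let file = try AVAudioFile(forReading: url)
    let format = file.processingFormat
    let frameCount = AVAudioFrameCount(file.length)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else {
        return nil
    }
    try file.read(into: buffer) // fills the buffer with decoded PCM samples
    return buffer
}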
I’ve been using Apple Music API for quite a while now and a recent change must have happened which is quite disruptive.
On many occasions, artists release singles from an album as part of promoting that album. As recent examples, Harry Styles released “Aperture” (a single) to promote his upcoming album “Kiss All The Time. Disco, Occasionally”, and Bruno Mars released “I Just Might”, a single from the upcoming album “The Romantic”.
Previously, those would be returned by the “artists/{artist_id}/albums” endpoint with a “- Single” suffix. But it seems a recent change happened, and now they only appear as playable tracks inside the album.
This behavior is also evident in the Apple Music app itself. Those singles no longer appear under “Singles & EPs”. Instead, they would only be visible if the single becomes popular enough to be shown on “Top Songs“. Otherwise one would have to know to tap on the (future) album to discover if there are released singles.
Meanwhile Spotify’s API returns those as singles properly, just like Apple Music API used to.
This change must be recent, but the question is whether it's intentional, and if so, how the API can be used from here on out to “extract” those singles and represent them.
Hi everyone,
I’m seeing recurring internal AVFoundation camera logs on iOS 26.2 and I’m trying to understand whether this is expected behavior or a regression in the capture pipeline.
These logs appear shortly after starting an AVCaptureSession, while video frames are being delivered, and also when the camera is stopped or the capture session is torn down.
<<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)
To rule out issues caused by my own code, I had a minimal SwiftUI example generated from scratch (via GPT). Even in this clean, minimal setup, the same logs appear on iOS 26.2, while the exact same logic did not produce these logs on iOS 18.x.
My primary interest is to perform real-time processing on the video frames delivered by the camera (via AVCaptureVideoDataOutput), for tasks such as analysis, computer vision, or custom frame handling, while simultaneously displaying the live preview.
Thanks in advance for any insight.
Example Code
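For reference, here is a minimal sketch of the setup (simplified from the example; the class and queue names are arbitrary): an AVCaptureSession delivering frames to an AVCaptureVideoDataOutput on a background queue for real-time processing.
import AVFoundation

// Minimal capture setup: back wide camera -> AVCaptureVideoDataOutput -> delegate callback.
final class CameraController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames")

    enum CameraError: Error { case noDevice }

    func configure() throws {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
            throw CameraError.noDevice
        }
        let input = try AVCaptureDeviceInput(device: device)
        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        videoOutput.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }
        session.commitConfiguration()
    }

    func start() { queue.async { self.session.startRunning() } }
    func stop() { queue.async { self.session.stopRunning() } }

    // Called once per delivered frame; real-time processing goes here.
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        _ = CMSampleBufferGetImageBuffer(sampleBuffer) // hand the pixel buffer to Vision / Core Image, etc.
    }
}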
I'm looking to implement UI in our app that tells the user to clean their lens.
I implemented KVO for cameraLensSmudgeDetectionStatus, but I'm having trouble reliably triggering it, both in our app and in the built-in Camera app. I tried to get inventive by putting Tupperware over the lens, but I think the model driving this (or the LiDAR sensor) might be smart enough to detect that there is something close to the lens.
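For reference, the observation is wired up roughly like this (a sketch using string-based KVO on the key path named above; the value type and the exact conditions under which it changes are assumptions on my side):
import AVFoundation

// Sketch: observe the key path named above on the active AVCaptureDevice.
final class SmudgeObserver: NSObject {
    private let device: AVCaptureDevice
    private let keyPath = "cameraLensSmudgeDetectionStatus"

    init(device: AVCaptureDevice) {
        self.device = device
        super.init()
        device.addObserver(self, forKeyPath: keyPath, options: [.initial, .new], context: nil)
    }

    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
        guard keyPath == self.keyPath else {
            super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
            return
        }
        // Show or hide the "clean your lens" UI based on the new status.
        print("smudge status changed:", change?[.newKey] ?? "nil")
    }

    deinit {
        device.removeObserver(self, forKeyPath: keyPath)
    }
}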
Is there any way to trigger this change in a similar way we can trigger thermal changes in debug?
Thanks.
Hi Apple Developer Support Team,
We are developing an iOS application using a camera package within a hybrid (cross-platform) framework, and we would like to confirm whether it is possible to disable the camera shutter sound programmatically.
As per our understanding, the shutter sound on iOS is system-controlled and depends on the device’s silent/ring mode, and there is no App Store–approved API available to force-disable this sound. Kindly confirm whether this understanding is correct or if any supported alternative approach exists for hybrid or native implementations.
Thank you for your clarification.
Best regards,
ParkhyaSolutions
Hello Apple team and developer community,
I am preparing a visionOS app for a fair environment, where we want to automatically stream the current experience to a nearby monitor via AirPlay, without requiring guests or staff to manually interact with the Control Center or AirPlay pickers all the time.
The goal is to provide a smooth, frictionless setup so attendees can focus on the demo, not the configuration.
Feature Request:
A supported API or method to programmatically start/stop AirPlay video streaming (mirroring or external playback) from within a visionOS app, allowing the current experience to be instantly displayed on an external monitor or Apple TV for the audience.
Context & Rationale:
In a trade fair or exhibition setting, rapid guest turnaround and minimal staff intervention are crucial. Having to manually guide each visitor through AirPlay setup is impractical.
As I understand it, AVRoutePickerView can be used for this on iOS/macOS, but it is not available on visionOS. Enabling similar automated streaming on visionOS would make the device far more suitable for live demos and public showcases.
Questions:
Are there any supported workarounds or best practices for enabling automated screen streaming or AirPlay initiation on visionOS in public demo environments that I missed?
Is Apple considering adding programmatic AirPlay control or accessibility features to support such use cases in future visionOS releases?
Thank you for considering this request! If there are recommended patterns, entitlements, or accessibility solutions we could explore for trade fair scenarios, your guidance would be greatly appreciated.
Best regards,
Julian Zürn - IPI, HS Kempten
I occasionally receive reports from users that photo import from the Photos library gets stuck and the progress appears to stop indefinitely.
I’m using the following APIs:
func fetchAsset(_ asset: PHAsset) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .exact
    options.isSynchronous = false
    options.isNetworkAccessAllowed = true
    options.progressHandler = { (progress, error, stop, info) in
        // 🚨 never called
    }
    let requestId = PHImageManager.default().requestImageDataAndOrientation(
        for: asset,
        options: options
    ) { data, _, _, info in
        // 🚨 never called
    }
}
Due to repeated reports, I added detailed logs inside the callback closures. Based on the logs, it looks like the request keeps waiting without any callbacks being invoked — neither the progressHandler nor the completion block of requestImageDataAndOrientation is called.
This happens not only with the PHImageManager approach, but also when using PHAsset with PHContentEditingInputRequestOptions — the completion callback is not invoked as well.
func fetchAssetByContentEditingInput(_ asset: PHAsset) {
    let options = PHContentEditingInputRequestOptions()
    options.isNetworkAccessAllowed = true
    asset.requestContentEditingInput(with: options) { contentEditingInput, info in
        // 🚨 never called
    }
}
I suspect this is related to iCloud Photos.
Here is what I confirmed from affected users:
1. Using the native picker (my app also provides the native picker as an alternative option for attaching photos), the iCloud download proceeds normally and the photo can be attached. However, using the PHImageManager-based approach in my app, the same photo cannot be attached.
2. Even after verifying that the photo has been fully downloaded from iCloud (e.g., by trying “Export Unmodified Originals” in the Photos app as described here: https://support.apple.com/en-us/111762, and confirming the iCloud download progress completed), the callback is still not invoked for that asset.
Detailed flow for (1):
I asked the user to attach the problematic photo (the one where callbacks never fire) using the native photo picker (UIImagePickerController).
The UI showed “Downloading from iCloud” progress.
The progress advanced and the photo was attached successfully.
Then I asked the user to attach the same photo again using my custom photo picker (which uses the PHImageManager APIs mentioned above).
The progress did not advance (No callbacks were invoked).
The operation waited indefinitely and never completed.
Workaround / current behavior:
If I ask users to reboot the device and try again, about 6 out of 10 users can attach successfully afterward.
The remaining ~4 out of 10 users still cannot attach even after rebooting.
For users who are not fixed immediately after reboot, it seems to resolve naturally after some time.
I’ve seen similar reports elsewhere, so I’m wondering if Apple is already aware of an internal issue related to this. If there is any known information, guidance, or recommended workaround, I would appreciate it.
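In the meantime, one defensive mitigation I'm considering (a minimal sketch, not a confirmed fix) is to cancel the request via cancelImageRequest(_:) if neither callback fires within a timeout, so the UI can at least surface an error instead of waiting forever:
import Photos

// Sketch: give up and cancel the request if neither callback fires within `timeout`.
func fetchAssetWithWatchdog(_ asset: PHAsset, timeout: TimeInterval = 30) {
    let options = PHImageRequestOptions()
    options.deliveryMode = .highQualityFormat
    options.isNetworkAccessAllowed = true

    var finished = false
    let requestId = PHImageManager.default().requestImageDataAndOrientation(
        for: asset, options: options
    ) { data, _, _, info in
        DispatchQueue.main.async {
            finished = true
            // handle `data`, or info?[PHImageErrorKey] on failure
        }
    }

    DispatchQueue.main.asyncAfter(deadline: .now() + timeout) {
        guard !finished else { return }
        PHImageManager.default().cancelImageRequest(requestId)
        // surface an error / retry UI instead of waiting indefinitely
    }
}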
I also logged the properties of affected PHAssets (metadata) when the issue occurs, and I can share them below if that helps troubleshooting:
[size=3.91MB] [PHAssetMediaSubtype(rawValue: 528)+DepthEffect | userLibrary | (4284x5712) | adjusted=true]
[size=3.91MB] [PHAssetMediaSubtype(rawValue: 528)+DepthEffect | userLibrary | (4284x5712) | adjusted=true]
[size=2.72MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.72MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.49MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
[size=2.49MB] [PHAssetMediaSubtype(rawValue: 16)+DepthEffect | userLibrary | (3024x4032) | adjusted=true]
I want to develop an app for real-time spatial video streaming from one Apple Vision Pro to another Apple Vision Pro, with playback on the receiving device (e.g., using MV-HEVC). Is this possible, and if so, how can it be done?
I am a graduate student conducting research in speech/audio signal processing and multimodal interaction.
Apple Vision Pro is widely recognized as a multimodal interactive system supporting voice, eye, and gesture inputs. However, I could not find detailed specifications or documentation about the audio input sampling rate used by the device’s built-in microphone array when capturing user audio.
Specifically, I would like to understand:
What is the default audio input sampling rate (e.g., 16 kHz, 44.1 kHz, 48 kHz, etc.) for the Vision Pro’s microphones?
When developing with visionOS / AVAudioSession / AVAudioEngine, is there a documented or recommended sampling rate for audio capture?
Are there any best practices or settings for enabling high-quality voice capture on Vision Pro (especially for voice research tasks)?
For context, my work involves voice processing, analysis, and possibly on-device real-time speech recognition. Any pointers to relevant APIs, documentation or examples (especially regarding audio capture buffer size or available formats on visionOS) would be very helpful.
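One thing I can check on-device is which format the system actually grants at runtime; a minimal sketch of that check (the preferred rate below is just an example value):
import AVFoundation

// Sketch: request a rate, then log what the session/engine actually report.
func logCaptureFormat() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement)
    try session.setPreferredSampleRate(48_000) // a request only; the OS may grant something else
    try session.setActive(true)

    print("session sample rate:", session.sampleRate)

    let engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)
    print("input node format:", format.sampleRate, "Hz,", format.channelCount, "channel(s)")
}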
Thank you in advance!
Best regards.
I’m seeing what appears to be an iOS audio-session issue that occurs only when a phone call happens while the app is in the background.
API: AVAudioSession, AVAudioRecorder
Background Modes: Audio enabled (UIBackgroundModes = audio)
Category: .playAndRecord
Microphone permission: granted
Expected Behavior
If the app is recording audio in the background and a phone call interrupts it:
AVAudioSession.interruptionNotification(.began) fires
Call ends
AVAudioSession.interruptionNotification(.ended) fires
App should be able to re-activate its audio session and resume or restart recording
Apple documentation suggests this should be supported for background audio apps.
Actual Behavior
When the app is in the background and the phone call ends:
AVAudioSession.interruptionNotification(.ended) does fire
Attempting to reactivate the audio session always fails:
Error Domain=NSOSStatusErrorDomain
Code=560557684 ("!int")
"Session activation failed"
The session appears to remain permanently “interrupted”
Retrying activation (with delays) does not help
Recreating AVAudioRecorder does not help
Reactivation works only after the app is opened again
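For completeness, here is a minimal sketch of the interruption handling described under Expected Behavior (recorder stands in for my existing AVAudioRecorder); the setActive(true) call in the .ended branch is the one that fails with "!int" while in the background:
import AVFoundation

// Sketch of the interruption handling described above.
final class InterruptionHandler {
    private let session = AVAudioSession.sharedInstance()
    private var observer: NSObjectProtocol?
    var recorder: AVAudioRecorder?

    func startObserving() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: session, queue: .main
        ) { [weak self] note in
            self?.handle(note)
        }
    }

    private func handle(_ note: Notification) {
        guard let info = note.userInfo,
              let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

        switch type {
        case .began:
            // The system has already stopped the recording at this point.
            recorder?.pause()
        case .ended:
            let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
            guard options.contains(.shouldResume) else { return }
            do {
                try session.setActive(true) // in the background, this is the call that fails with "!int"
                recorder?.record()
            } catch {
                print("Session re-activation failed:", error)
            }
        @unknown default:
            break
        }
    }
}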
Hello, I'm wondering how to capture 24MP photos.
I'm currently testing on an iPhone 16 Pro Max. By default, the device's activeFormat supports 24MP (photo dimensions: {4032x3024, 5712x4284}). For the photoOutput, I'm setting the maxPhotoDimensions to videoDevice.activeFormat.supportedMaxPhotoDimensions.lastObject, and setting MaxPhotoQualityPrioritization to quality.
When capturing, I'm applying the same maxPhotoDimensions and photoQualityPrioritization settings from the photoOutput directly to the AVCapturePhotoSettings.
What could be the issue?
// Objective-C
// setup
[self.photoOutput setMaxPhotoQualityPrioritization:AVCapturePhotoQualityPrioritizationQuality];
CMVideoDimensions maxPhotoDimensions = [(NSValue *)videoDevice.activeFormat.supportedMaxPhotoDimensions.lastObject CMVideoDimensionsValue];
[self.photoOutput setMaxPhotoDimensions:maxPhotoDimensions];
// capturing
AVCapturePhotoSettings *photoSettings = [AVCapturePhotoSettings photoSettings];
photoSettings.maxPhotoDimensions = self.photoOutput.maxPhotoDimensions;
photoSettings.photoQualityPrioritization = self.photoOutput.maxPhotoQualityPrioritization;
[self.photoOutput capturePhotoWithSettings:photoSettings delegate:photoCaptureDelegate];
...
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.
Observed behavior:
When the app is in the foreground, I read audioSession.outputVolume (for example, 0.1).
The app is then moved to the background.
While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5).
When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1).
From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground. Volume changes made while the app is in the background are not reflected when the app returns to the foreground.
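For reference, a minimal sketch of how the value is observed (Apple documents outputVolume as key-value observable):
import AVFoundation

// Sketch: KVO on outputVolume; in my testing this only fires while the app is in the foreground.
final class VolumeObserver {
    private let session = AVAudioSession.sharedInstance()
    private var observation: NSKeyValueObservation?

    func start() throws {
        try session.setActive(true)
        observation = session.observe(\.outputVolume, options: [.initial, .new]) { _, change in
            print("outputVolume:", change.newValue ?? -1)
        }
    }
}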
According to Apple’s documentation for AVAudioSession.outputVolume:
“The systemwide output volume set by the user.”
https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume
However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior seems to differ from this description.
Questions:
The documentation states that outputVolume represents the system-wide volume set by the user. In our testing, the value does not reflect volume changes made while the app is in the background and only updates when the app is in the foreground. Is this the expected behavior of AVAudioSession.outputVolume?
Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background?
Any clarification on the intended behavior or recommended handling would be greatly appreciated.
Hello,
The search functionality of the coreaudio-api mailing list archive has been broken for a very long time. Several of the lower-level audio APIs have only been discussed on this mailing list, making it critical for those of us maintaining old audio code.
Steps to reproduce:
Open https://lists.apple.com/archives/list/coreaudio-api@lists.apple.com/ in your web browser.
Enter a search term in the "Search this list" field in the top-right corner of the page.
The search will eventually time out with "502 Bad Gateway"
Can somebody please forward this information to the current maintainer? I've tried to contact developer support but they weren't sure what to do.
Thanks!
Hello,
I am wondering how one can play music videos (with the actual video playing) with ApplicationMusicPlayer, using MusicKit for Swift.
There is not much documentation on this, so any help would be appreciated.
I’m trying to build a playlist editor on macOS. I can create playlists via the Apple Music HTTP API, but DELETE always returns 401, even immediately after creation with the same tokens.
Minimal repro:
#!/usr/bin/env bash
set -euo pipefail
BASE_URL="https://api.music.apple.com/v1"
PLAYLIST_NAME="${PLAYLIST_NAME:-blah}"
: "${APPLE_MUSIC_DEV_TOKEN:?}"
: "${APPLE_MUSIC_USER_TOKEN:?}"
create_body="$(mktemp)"
delete_body="$(mktemp)"
trap 'rm -f "$create_body" "$delete_body"' EXIT
curl -sS --compressed -o "$create_body" -w "Create status: %{http_code}\n" \
-X POST "${BASE_URL}/me/library/playlists" \
-H "Authorization: Bearer ${APPLE_MUSIC_DEV_TOKEN}" \
-H "Music-User-Token: ${APPLE_MUSIC_USER_TOKEN}" \
-H "Content-Type: application/json" \
-d "{\"attributes\":{\"name\":\"${PLAYLIST_NAME}\"}}"
playlist_id="$(python3 - "$create_body" <<'PY'
import json, sys
with open(sys.argv[1], "r", encoding="utf-8") as f:
    data = json.load(f)
print(data["data"][0]["id"])
PY
)"
curl -sS --compressed -o "$delete_body" -w "Delete status: %{http_code}\n" \
-X DELETE "${BASE_URL}/me/library/playlists/${playlist_id}" \
-H "Authorization: Bearer ${APPLE_MUSIC_DEV_TOKEN}" \
-H "Music-User-Token: ${APPLE_MUSIC_USER_TOKEN}" \
-H "Content-Type: application/json"
I capture the response bodies like this:
cat "$create_body"
cat "$delete_body"
Result:
Create: 201
Delete: 401
I also checked the latest macOS SDK’s MusicKit interfaces and MusicLibrary.createPlaylist/edit/add(to:) are marked @available(macOS, unavailable), so I can’t create/delete via MusicKit on macOS either.
Question: How can I implement a playlist editor on macOS (create/delete/modify) if:
MusicKit write APIs are unavailable on macOS, and
The HTTP API can create but DELETE returns 401?
Any guidance or official workaround would be hugely appreciated.
Hi,
I'm trying to create a FairPlay Streaming Certificate for the SDK 26.x version.
It's worth mentioning that we already have two certificates (1024-bit and 2048-bit), but we are only offered the option to keep using our previous 1024-bit certificate, which we do not want because we need a 2048-bit cert.
Our main issue is that when I upload a new CSR file, the "Continue" button stays grayed out and I cannot move forward in the process.
The CSR file has been created with this command:
openssl req -out csr_2048.csr -new -newkey rsa:2048 \
  -keyout priv_key_2048.pem \
  -subj /CN=SubjectName/OU=OrganizationalUnit/O=Organization/C=US
Some help will be appreciated.
Thanks in advance
Best,
Hi,
I’m trying to implement the new PhotoKit PHBackgroundResourceUploadExtension. I created the extension, enabled full photo library access in the host app, and registered the extension point using the string: com.apple.photos.background-upload.
However, when I attempted to enable the extension with:
try library.setUploadJobExtensionEnabled(true)
I received the following error:
Error Domain=PHPhotosErrorDomain Code=-1 "(null)"
This happens when running the app on Xcode 26.1 and 26.2 Beta, using the iPhone 17 Pro Max simulator (iOS 26.1 and 26.2).
My question is: Is this extension supported on the simulator?
I’m asking because at the moment it’s difficult for me to test this on a physical device.
Also, what does this error mean?
Thanks.