I'm reaching out regarding a recurring issue I'm experiencing with MusicKit developer tokens.
I'm using a valid .p8 private key to sign JWTs for Apple MusicKit integration. Each token I generate includes the appropriate claims (iss, iat, exp) and is signed with the ES256 algorithm, with an expiration date set approximately 6 months ahead.
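For context, the token is generated roughly like this (a simplified CryptoKit sketch; the key ID, team ID, and .p8 PEM contents are placeholders):

import CryptoKit
import Foundation

// Minimal sketch of the developer token generation (keyID / teamID / p8PEM are placeholders).
func makeDeveloperToken(p8PEM: String, keyID: String, teamID: String) throws -> String {
    func base64URL(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }

    let now = Int(Date().timeIntervalSince1970)
    let header: [String: Any] = ["alg": "ES256", "kid": keyID]
    let claims: [String: Any] = ["iss": teamID, "iat": now, "exp": now + 180 * 24 * 60 * 60] // ~6 months

    let headerPart = base64URL(try JSONSerialization.data(withJSONObject: header))
    let claimsPart = base64URL(try JSONSerialization.data(withJSONObject: claims))
    let signingInput = headerPart + "." + claimsPart

    // The .p8 file contains a PKCS#8 PEM private key.
    let key = try P256.Signing.PrivateKey(pemRepresentation: p8PEM)
    let signature = try key.signature(for: Data(signingInput.utf8))
    return signingInput + "." + base64URL(signature.rawRepresentation) // raw r||s form, as ES256 JWTs expect
}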
Everything works as expected immediately after generating the token. However, after a few days, the same JWT (still well within its expiration period) suddenly begins returning invalid/unauthorized responses when used in Postman and other API clients.
Importantly:
I did not delete or revoke the .p8 key during this time.
I verified the JWT contains valid claims and a proper structure.
The issue consistently resolves only when I create a new .p8 file and regenerate a fresh JWT with it—after which the cycle repeats.
This issue occurs even when the environment and app identifiers remain unchanged.
I would greatly appreciate it if you could help me understand:
Why these tokens become invalid after a few days, despite having a long exp value and an unchanged key.
Whether there's any automatic revocation or timeout policy on .p8 keys that could explain this behavior.
If there's a better way to maintain long-lived developer tokens without requiring new .p8 key generation every few days.
Thank you for your help and clarification on this issue.
Best regards,
Liad Altif
When I create an SFSpeechRecognizer object, I find that SFLocalSpeechRecognitionClient remains in memory and never gets released.
You can create a demo with a single UIButton whose touch action is
SFSpeechRecognizer(locale: Locale(identifier: "zh_CN"))
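A minimal repro sketch (the class and action names are just for illustration):

import UIKit
import Speech

// Each tap creates a recognizer and lets it go out of scope,
// yet SFLocalSpeechRecognitionClient allocations remain in the memory graph.
final class ViewController: UIViewController {
    @IBAction func didTapButton(_ sender: UIButton) {
        _ = SFSpeechRecognizer(locale: Locale(identifier: "zh_CN"))
    }
}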
I maintain a couple of CoreImage libraries that provide custom Metal kernel backed CIFilters. In iOS/iPadOS 26, the CIColorKernel.apply() method invoked in the CIFilter subclass fails to add the coreimage::destination parameter to the Metal function call:
-[CIColorKernel applyWithExtent:arguments:options:] argument count mismatch for kernel 'FractalNoise3D', expected 13 but saw 12.
I've compiled the code with Xcode 26 and deployed to iOS 18 devices without any breakage, so this is definitely an iOS problem, not an Xcode problem.
Library here: https://github.com/JoshuaSullivan/SimplexNoiseFilter
Feedback ID: FB17874311
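For reference, the calling pattern that now fails looks roughly like this (simplified sketch; the kernel name is taken from the error above, the metallib resource name and the filter's explicit inputs are elided/assumed):

import CoreImage
import Foundation

// Sketch of the CIFilter subclass pattern that breaks on iOS/iPadOS 26.
final class FractalNoiseFilter: CIFilter {
    private static let kernel: CIColorKernel = {
        // Compiled Metal library bundled with the framework (resource name assumed).
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "FractalNoise3D", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        // On iOS 18 Core Image appends the implicit coreimage::destination argument;
        // on iOS/iPadOS 26 the same call reports an argument count mismatch (expected 13, saw 12).
        let extent = CGRect(x: 0, y: 0, width: 512, height: 512)
        let arguments: [Any] = [] // the filter's 12 explicit inputs go here
        return Self.kernel.apply(extent: extent, arguments: arguments)
    }
}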
I am getting high error rates from the Apple Music API. This has been happening for months now, and it is quite frustrating. It is a mix of 404, 504, and random 500 errors. I hit these endpoints all of the time, so it is not like I am hitting a resource that doesn't exist. Why is this happening? Is this a known issue that is getting worked on?
I have an app (currently in the development stage) that needs to use FFmpeg, so I searched for how to embed FFmpeg in Apple apps and found this article: https://doc.qt.io/qt-6/qtmultimedia-building-ffmpeg-ios.html
It works correctly for iOS but not for macOS (I have made macOS-specific changes using ChatGPT and traditional web searching).
Drive link for the file and instructions which I'm following: https://drive.google.com/drive/folders/11wqlvb8SU2thMSfII4_Xm3Kc2fPSCZed?usp=share_link
Could someone from Apple, or anyone else, please help me figure out what I'm doing wrong?
Use case: When SharePlay-ing a fully immersive 3D scene (e.g. a virtual stage), I would like to shine lights on specific Personas so they show up brighter when someone in the scene is recording the feed (think of a camera person in the scene wearing Vision Pro).
Note: This spotlight effect only needs to render in the camera person's headset and does NOT need to be journaled or shared.
Before I dive into this, my technical question: Can environmental and/or scene lighting affect Persona brightness in a SharePlay? If not, is there a way to programmatically make Personas "brighter" when recording?
My screen recordings always seem to turn out darker than what's rendered in the environment, and manually adjusting the contrast tends to blow out the details in a Persona's face (especially in visionOS 26).
Hi Apple Music API / MusicKit / MediaPlayer Team,
Similar to how currentPlaybackRate keeps the same pitch, it would be great to have a currentPlaybackPitch parameter as well. Alternatively, adding a preservesPitch parameter would also work.
I see that iOS 26 AutoMix on Apple Music currently does pitch shifting during music transitions, so maybe this is something that could be exposed on the later betas of iOS 26?
The main feature request we get is for simple pitch changes to the Apple Music content we play through our app. Is this being considered?
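For illustration, here is today's rate control next to the shape of the pitch control being requested (the pitch properties below do not exist; they are only the proposed API):

import MediaPlayer

func adjustPlayback() {
    // Today: changing the rate keeps the pitch constant.
    let player = MPMusicPlayerController.applicationMusicPlayer
    player.currentPlaybackRate = 1.25

    // Hypothetical API this request asks for (does not exist today):
    // player.currentPlaybackPitch = 2.0   // e.g. a semitone offset or ratio
    // player.preservesPitch = false
}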
Hello everyone,
I'm working on implementing a screen sharing feature using RPSystemBroadcastPickerView and a Broadcast Upload Extension to share the entire app screen in an iOS application.
The Broadcast Upload Extension is set up following Apple's ReplayKit guidelines. However, I’m encountering an issue during the broadcast startup sequence:
❗ Problem Description
The Screen Broadcast UI appears as expected
I tap “Start Broadcast”
The countdown (3 → 2 → 1) completes
Then it immediately reverts to the "Start Broadcast" screen, and screen sharing does not begin
No error messages are displayed
None of the extension lifecycle methods (broadcastStarted(withSetupInfo:), processSampleBuffer, etc.) are called (see the skeleton after this list)
There are no logs or crash reports, neither in the main app nor in the extension
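For reference, the extension's principal class is the standard sample-handler skeleton (simplified sketch; the class name is whatever NSExtensionPrincipalClass points to in our setup):

import ReplayKit
import CoreMedia

// Standard Broadcast Upload Extension principal class; in our case
// none of these callbacks ever fire after the countdown finishes.
final class SampleHandler: RPBroadcastSampleHandler {
    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        // Expected after the 3-2-1 countdown, but never called.
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        // Never called either.
    }
}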
✅ What Has Been Verified
Info.plist of the Broadcast Upload Extension includes:
NSExtensionPointIdentifier = com.apple.broadcast-services-upload
NSExtensionPrincipalClass set correctly
RPBroadcastProcessMode = RPBroadcastProcessModeSampleBuffer
preferredExtension is set properly to the extension’s bundle identifier
Extension is listed in the main app's build settings under "Frameworks, Libraries, and Embedded Content"
⚠️ Additional Concern
We noticed that in Xcode (latest version), the Broadcast Upload Extension is listed under "Embedded Frameworks" with the setting "Embed Without Signing", and there is no option to change it to "Embed & Sign". We're wondering if this could be the reason the extension fails to launch correctly at runtime, despite being detected by the broadcast picker.
❓ Questions
Has anyone faced similar issues where the broadcast never starts despite correct setup?
Could the "Embed Without Signing" be causing the system to silently cancel or ignore the extension at runtime?
Are there any provisioning profile or entitlement requirements specific to Broadcast Upload Extensions that might trigger this behavior silently?
Any insights, suggestions, or workarounds would be greatly appreciated.
Thank you in advance!
I am working on updating a live blog in Apple News. As far as I know, there is an update endpoint to update content in Apple News. Is there any feature in Apple News to trigger an event when the original content is updated, so I can pull the updated content?
When making a call to https://api.music.apple.com/v1/me/library/artists to get a user's library artists, it returns the following (as an example):
[
  {
    id: 'r.FCwruQb',
    type: 'library-artists',
    href: '/v1/me/library/artists/r.FCwruQb?l=en-US',
    attributes: { name: 'A Great Big World' }
  },
  {
    id: 'r.7VSWOgj',
    type: 'library-artists',
    href: '/v1/me/library/artists/r.7VSWOgj?l=en-US',
    attributes: { name: 'Aaliyah' }
  },
  ...
]
If I try to use an artist id from that returned data to look up additional information about the artist by calling https://api.music.apple.com/v1/catalog/us/artists/{id}, it fails.
User Library Artists don't seem to equal Catalog Artists.
It'd be great if there was a way to use these interchangeably. Am I missing something?
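For reference, the failing catalog lookup looks roughly like this (URLSession sketch; the function name and developer token are placeholders):

import Foundation

// Sketch of the catalog lookup that fails when given a library artist id.
func fetchCatalogArtist(id: String, developerToken: String) async throws {
    var request = URLRequest(url: URL(string: "https://api.music.apple.com/v1/catalog/us/artists/\(id)")!)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    let (data, response) = try await URLSession.shared.data(for: request)
    let status = (response as? HTTPURLResponse)?.statusCode ?? -1
    // With a library id such as "r.FCwruQb" this does not resolve to a catalog artist.
    // (The /v1/me/library/artists endpoint, which does accept these ids, additionally
    // requires a Music-User-Token header.)
    print(status, String(data: data, encoding: .utf8) ?? "")
}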
I donate an INPlayMediaIntent to the system (the donation succeeds), but it does not show up in Control Center.
My code is as follows:
let mediaItems = mediaItems.map { $0.inMediaItem }
let intent = if #available(iOS 13.0, *) {
    INPlayMediaIntent(mediaItems: mediaItems,
                      mediaContainer: nil,
                      playShuffled: false,
                      playbackRepeatMode: .none,
                      resumePlayback: true,
                      playbackQueueLocation: .now,
                      playbackSpeed: nil,
                      mediaSearch: nil)
} else {
    INPlayMediaIntent(mediaItems: mediaItems,
                      mediaContainer: nil,
                      playShuffled: false,
                      playbackRepeatMode: .none,
                      resumePlayback: true)
}
intent.suggestedInvocationPhrase = "播放音乐" // "Play music"
let interaction = INInteraction(intent: intent, response: nil)
interaction.donate { error in
    if let error = error {
        print("Intent donation failed: \(error.localizedDescription)")
    } else {
        print("Intent donation succeeded ✅")
    }
}
How can media resources in my app be recommended to the system's media Control Center, just like TikTok in the picture?
I have implemented fetching Apple Music preview songs using a Swift framework integrated into a Unity app.
My requirement is to fetch full tracks from a user’s Apple Music library and play them inside Unity.
To do this, I understand that I need to handle authentication, generate a Developer Token, and then obtain a Music User Token to access the user’s Apple Music content.
Currently, I have an Individual Apple Developer account (not Organization).
Based on my research, it seems that:
With an Individual account, I can implement this functionality and even upload builds to TestFlight for internal testing.
However, when releasing the app publicly on the App Store, full-track playback may be restricted for Individual accounts and allowed only for Organization accounts.
👉 Can you confirm if this understanding is correct?
👉 Specifically, is it possible for an Individual account to fetch and play full-length tracks from a subscribed Apple Music user’s library (at least for internal/TestFlight testing)?
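For context, the flow in question is roughly the following MusicKit sketch (authorization, then playing a full-length item from the user's library; the function name is just for illustration):

import MusicKit

// Rough sketch: request authorization, fetch one song from the user's library, and play it.
func playFirstLibrarySong() async throws {
    let status = await MusicAuthorization.request()
    guard status == .authorized else { return }

    var request = MusicLibraryRequest<Song>()
    request.limit = 1
    let response = try await request.response()
    guard let song = response.items.first else { return }

    let player = ApplicationMusicPlayer.shared
    player.queue = [song]
    try await player.play()
}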
Hello,
I am trying to access the Apple Music Feed API, but I am receiving a 401 Unauthorized error whenever I try to access it.
I have tried using my own code to generate a JWT and directly call the API (which can call the standard Apple Music API successfully).
> GET /v1/feed/song/latest HTTP/2
> Host: api.media.apple.com
> user-agent: insomnia/2023.5.8
> authorization: Bearer [REDACTED]
> accept: */*
< HTTP/2 401
< content-type: application/json; charset=utf-8
< content-length: 0
< x-apple-jingle-correlation-key: AV5IOHBNM2UUJVOFQ4HZ2TGF6Q
< x-daiquiri-instance: daiquiri:10001:daiquiri-all-shared-ext-7bb7c9b9bb-r459v:7987:25RELEASE91:daiquiri-amp-kubernetes-shared-ext-ak8s-prod-pv4-amp-daiquiri-ingress-prod
I also tried the Apple-provided Python example code, which gives me authentication errors too.
$ python3 ./apple_music_feed_example.py --key-id NMBH[...] --team-id 3TNZ[...] --secret-key-file-path "/Users/foxt/Documents/am-feed/NMBH[...].p8" --out-dir .
running....
INFO:__main__:Sending requests to https://api.media.apple.com
INFO:__main__:Getting the latest export for feed artist
Exception: Authentication Failed. Did you provide the correct team id, key id, and p8 file?
Does this API need to be enabled on my account separately from the main Apple Music API? The documentation reads to me as if anyone with an Apple Developer Programme membership can use this API, and I did not see any information regarding any other requirements.
Hello, I want to know whether there are any restrictions with MusicKit, when used in a mobile app, on manipulating audio with an EQ for tracks coming from Apple Music, without modifying the actual track structure/data of course, just the audio output.
My Environment:
Device: Mac (Apple Silicon, arm64)
OS: macOS 15.6.1
Description:
I'm developing a music app and have encountered an issue where I cannot update the playbackState in MPNowPlayingInfoCenter after my app loses audio focus to another app. Even though my app correctly calls [MPNowPlayingInfoCenter defaultCenter].playbackState = .paused, the system's Now Playing UI (Control Center, Lock Screen, AirPods controls) does not reflect this change. The UI remains stuck until the app that currently holds audio focus also changes its playback state.
I've observed this same behavior in other third-party music apps from the App Store, which suggests it might be a system-level issue.
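For reference, the update path on our side looks roughly like this (simplified sketch; the function name and the elapsed time parameter are placeholders fed from our own player):

import MediaPlayer

// What the app does when the user pauses (simplified).
func reportPaused(elapsed: TimeInterval) {
    let center = MPNowPlayingInfoCenter.default()
    var info = center.nowPlayingInfo ?? [:]
    info[MPNowPlayingInfoPropertyPlaybackRate] = 0.0
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = elapsed
    center.nowPlayingInfo = info
    center.playbackState = .paused // not reflected in the system UI while another app holds audio focus
}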
Steps to Reproduce:
Use the two most popular music apps in the Chinese App Store, NetEase Cloud Music and QQ Music (let's call them App A and App B):
Start playback in App A.
Start playback in App B. (App B now has audio focus, and App A is still playing).
Attempt to pause App A via the system's Control Center or its own UI.
Observed Behavior: App A's audio stream stops, but in the system's Now Playing controls, App A still appears to be playing. The progress bar continues to advance, and the pause button becomes unresponsive.
If you then pause App B, the Now Playing UI for App A immediately corrects itself and displays the proper "paused" state.
My Questions:
Is there a specific procedure required to update MPNowPlayingInfoCenter when an app is not the current "Now Playing" application?
Is this a known issue or expected behavior in macOS?
Are there any official workarounds or solutions to ensure the UI updates correctly?
I wanted to know when iOS 26 will be officially released.
I have some questions about TTS.
My device's default language is zh-HK (Cantonese).
My device is an iPhone 16 Pro running iOS 18.6.
I created a function speakMandarin.
I want the device to speak zh-CN (Putonghua); however, the device can only speak zh-HK (Cantonese).
I already set the AVSpeechSynthesisVoice language to zh-CN.
func speakMandarin(text: String) {
    print("speakMandarin, \(text)")
    lastError = nil // Reset error

    // Stop any ongoing speech before starting new
    if synthesizer.isSpeaking {
        synthesizer.stopSpeaking(at: .immediate)
    }

    // Configure speech utterance
    let utterance = AVSpeechUtterance(ssmlRepresentation: text)!
    utterance.rate = 0.5 // Natural speaking speed
    utterance.pitchMultiplier = 1.0
    utterance.volume = 1.0
    utterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")

    let preferredLanguages = ["zh-CN", "zh-TW"]
    var selectedVoice: AVSpeechSynthesisVoice?
    for lang in preferredLanguages {
        if let voice = AVSpeechSynthesisVoice(language: lang) {
            print(lang)
            selectedVoice = voice
            utterance.voice = voice
            break
        }
    }

    // If no Mandarin voice found, use system default
    if selectedVoice == nil {
        selectedVoice = AVSpeechSynthesisVoice(language: nil)
        lastError = "未偵測到普通話語音包,將使用系統預設語音" // "No Mandarin voice pack detected; the system default voice will be used"
        print(lastError)
    }

    utterance.voice = selectedVoice
    print(utterance)
    synthesizer.speak(utterance)
}
here is my log
speakMandarin, <speak>你好!我們來聊聊喜歡的動物吧。你喜歡什麼動物呢?</speak>
zh-CN
[AVSpeechUtterance 0x1194efb80] String: 你好!我們來聊聊喜歡的動物吧。你喜歡什麼動物呢?
Voice: [AVSpeechSynthesisVoice 0x104ceff90] Language: zh-CN, Name: Tingting, Quality: Default [com.apple.voice.compact.zh-CN.Tingting]
Rate: 0.50
Volume: 1.00
Pitch Multiplier: 1.00
Delays: Pre: 0.00(s) Post: 0.00(s)
Hello Apple Engineers,
I am developing a feature related to SpeechRecognizer (import Speech), and I have two questions:
After adding the NSSpeechRecognitionUsageDescription key in my Info.plist, when I initialize an SFSpeechRecognizer instance, the system shows an authorization alert. The alert contains a pre-defined message from Apple:
“Speech data from this app will be sent to Apple to process your requests. This will also help Apple improve its speech recognition technology.”
Is it possible to remove or customize this message?
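For reference, a minimal sketch of the authorization request around which this alert is presented (the function name is just for illustration):

import Speech

// Requesting speech recognition authorization; the system alert shows the
// NSSpeechRecognitionUsageDescription text together with Apple's fixed message.
func requestSpeechAuthorization() {
    SFSpeechRecognizer.requestAuthorization { status in
        switch status {
        case .authorized:
            print("Speech recognition authorized")
        case .denied, .restricted, .notDetermined:
            print("Speech recognition not available: \(status.rawValue)")
        @unknown default:
            break
        }
    }
}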
My app runs in a network environment with a whitelist. I need to know which URL the SFSpeechRecognizer instance connects to, and which port it uses, so that I can add it to the whitelist.
Thank you very much for your support!
Best regards,
Yu Cheng
Hello,
I'm working on a Flutter app targeting both Android and iOS, where I implemented ShazamKit.
In order to achieve that, I first tried with the flutter_shazam_kit package, but since it's not maintained anymore, I forked it here, and tried to update it to meet the Google Play Store requirements, as you can see here:
https://github.com/mregnauld/flutter_shazam_kit/tree/fix-16k
Unfortunately, after trying everything, my app still doesn't meet the (not so) new 16 KB native library alignment. Also, I'm 100% sure it comes from that because the error message disappears if I remove that package from my app.
So after investigating, it seems that the problem comes from ShazamKit for Android (which you can find here: https://developer.apple.com/download/all/?q=Android%20ShazamKit), and especially from the .so files in the .aar file.
Is there anything I can do to fix that, or should I wait for the ShazamKit team to fix it?
I'm totally stuck with that so any help is highly appreciated.
Thanks.