Hi,
Recently I ran into trouble uploading a binary with the temporary-exception.audio-unit-host entitlement using Transporter.
I would love to make use of it in my application and distribute it on the App Store.
What is necessary to do so?
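For reference, this is roughly how I declare the entitlement (a sketch; the exact set of keys is an assumption on my part, the sandbox key is only there because the app is sandboxed):
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.temporary-exception.audio-unit-host</key>
<true/>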
regards, Joël
Hi,
I need to bundle an additional binary alongside my already published application.
It is an Audio Unit test application.
My published application implements Audio Unit plugin support.
But the upload is always rejected:
Validation failed (409)
Invalid Provisioning Profile. The provisioning profile included in the bundle com.gsequencer.GSequencer [com.gsequencer.GSequencer.pkg/Payload/com.gsequencer.GSequencer.app] is invalid. [Missing code-signing certificate.] For more information, visit the macOS Developer Portal. (ID: ****)
I have followed the instructions here: Embedding a helper tool in a sandboxed app
but no luck. Does anyone know what's going on?
I use Transporter to upload the application; the embedded.provisioningprofile is copied from the Xcode build, and code signing is done manually.
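For context, my manual signing roughly follows this sequence (a sketch; the certificate names, entitlement files and the helper path are placeholders, not my actual values):
$ codesign --force --sign "Apple Distribution: Placeholder (TEAMID)" --entitlements helper.entitlements com.gsequencer.GSequencer.app/Contents/MacOS/helper-binary
$ codesign --force --sign "Apple Distribution: Placeholder (TEAMID)" --entitlements GSequencer.entitlements com.gsequencer.GSequencer.app
$ productbuild --component com.gsequencer.GSequencer.app /Applications --sign "3rd Party Mac Developer Installer: Placeholder (TEAMID)" GSequencer.pkg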
Hi,
Is there an Audio Unit logo I can show on my website? I would love to show that my application is able to host Audio Unit plugins.
regards, Joël
Hi,
I have just implemented an Audio Unit v3 host.
AgsAudioUnitPlugin *audio_unit_plugin;
AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;
AudioComponentDescription description;
guint i, i_stop;

if(!AGS_IS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
  return;
}

audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];

/* effects */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_Effect;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}

/* instruments */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_MusicDevice;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}
But this doesn't show me any Audio Unit v2 plugins. Why?
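A minimal sketch of the cross-check I have in mind, enumerating without a description filter via componentsPassingTest: (my assumption is that this should match v2 and v3 components alike):
av_component_arr = [audio_unit_component_manager componentsPassingTest:^BOOL(AVAudioUnitComponent *component, BOOL *stop){
    /* accept every registered component */
    return(YES);
  }];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}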
regards, Joël
Hi,
I just started to develop Audio Unit hosting support in my application.
Offline rendering seems to work, except that I hear no output. Why?
I suspect something goes wrong with the player.
I connect to Core Audio at a different location in the code.
Here are some error messages I faced so far:
2025-08-14 19:42:04.132930+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:42:04.151171+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:43:08.344530+0200 com.gsequencer.GSequencer[34358:18614927] AUAudioUnit.mm:1417 Cannot set maximumFramesToRender while render resources allocated.
2025-08-14 19:43:08.346583+0200 com.gsequencer.GSequencer[34358:18614927] [avae] AVAEInternal.h:104 [AVAudioSequencer.mm:121:-[AVAudioSequencer(AVAudioSequencer_Player) startAndReturnError:]: (impl->Start()): error -10852
** (<unknown>:34358): WARNING **: 19:43:08.346: error during audio sequencer start - -10852
I have implemented an AVAudioEngine-based Audio Unit host. Here I instantiate the player and the effect:
/* audio engine */
audio_engine = [[AVAudioEngine alloc] init];

fx_audio_unit_audio->audio_engine = (gpointer) audio_engine;

av_format = (AVAudioFormat *) fx_audio_unit_audio->av_format;

/* av audio player node */
av_audio_player_node = [[AVAudioPlayerNode alloc] init];

/* av audio unit */
av_audio_unit_effect = [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:[((AVAudioUnitComponent *) AGS_AUDIO_UNIT_PLUGIN(base_plugin)->component) audioComponentDescription]];

av_audio_unit = (AVAudioUnit *) av_audio_unit_effect;
fx_audio_unit_audio->av_audio_unit = av_audio_unit;

/* audio sequencer */
av_audio_sequencer = [[AVAudioSequencer alloc] initWithAudioEngine:audio_engine];
fx_audio_unit_audio->av_audio_sequencer = (gpointer) av_audio_sequencer;

/* output node */
[[AVAudioOutputNode alloc] init];

/* audio player and audio unit */
[audio_engine attachNode:av_audio_player_node];
[audio_engine attachNode:av_audio_unit];

[audio_engine connect:av_audio_player_node to:av_audio_unit format:av_format];
[audio_engine connect:av_audio_unit to:[audio_engine outputNode] format:av_format];

ns_error = NULL;
[audio_engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                                 format:av_format
                      maximumFrameCount:buffer_size
                                  error:&ns_error];

if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("enable manual rendering mode error - %d", [ns_error code]);
}

ns_error = NULL;
[[av_audio_unit AUAudioUnit] allocateRenderResourcesAndReturnError:&ns_error];

if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("Audio Unit allocate render resources returned error - ErrorCode %d", [ns_error code]);
}
Then I render in a dedicated thread.
ns_error = NULL;
[audio_engine startAndReturnError:&ns_error];

if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("error during audio engine start - %d", [ns_error code]);
}

[av_audio_sequencer prepareToPlay];

ns_error = NULL;
[av_audio_sequencer startAndReturnError:&ns_error];

if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("error during audio sequencer start - %d", [ns_error code]);
}

[av_audio_player_node play];

while(is_running){
  /* pre sync */

  /* IO buffers */
  av_output_buffer = (AVAudioPCMBuffer *) scope_data->av_output_buffer;
  av_input_buffer = (AVAudioPCMBuffer *) scope_data->av_input_buffer;

  /* fill input buffer */

  /* schedule av input buffer */
  frame_position = 0; // (gint64) ((note_offset * absolute_delay) + delay_counter) * buffer_size;

  av_audio_player_node = (AVAudioPlayerNode *) fx_audio_unit_audio->av_audio_player_node;

  AVAudioTime *av_audio_time = [[AVAudioTime alloc] initWithHostTime:frame_position sampleTime:frame_position atRate:((double) samplerate)];

  [av_audio_player_node scheduleBuffer:av_input_buffer atTime:av_audio_time options:0 completionHandler:nil];

  /* render */
  ns_error = NULL;
  status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE toBuffer:av_output_buffer error:&ns_error];

  if(ns_error != NULL &&
     [ns_error code] != noErr){
    g_warning("render offline error - %d", [ns_error code]);
  }
}
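My current understanding, which may be wrong: in manual offline rendering mode the engine is detached from the output hardware, so the rendered buffers are never audible by themselves and would have to be written to a file or forwarded to a device separately. Roughly like this (a sketch; the file path and the av_output_file variable are hypothetical):
/* sketch: create the output file once before the render loop (hypothetical path) */
AVAudioFile *av_output_file = [[AVAudioFile alloc] initForWriting:[NSURL fileURLWithPath:@"/tmp/render.caf"]
                                                          settings:[av_format settings]
                                                             error:&ns_error];

/* sketch: after each successful renderOffline: call, append the rendered buffer */
if(status == AVAudioEngineManualRenderingStatusSuccess){
  [av_output_file writeFromBuffer:av_output_buffer error:&ns_error];
}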
regards, Joël
Hi,
I am curious whether hyper-threading is enabled or disabled on my MacBook Pro M1 or M4. How do I figure that out?
I am using macOS 15.5.
Further, I develop a multi-threaded audio sequencer that creates a thread per instrument. I use vector operations to increase performance.
I noticed that lowering the synchronization rate from 250 Hz to 60 Hz gives an additional performance advantage.
How do I programmatically check whether hyper-threading is enabled, and how do I enable or disable it programmatically?
After some research I found sysctl() and nvram SMTDisable=%01.
https://support.apple.com/en-us/101870
Can anyone provide me with an Objective-C example?
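For context, this is as far as I got with sysctlbyname() (a sketch; my assumption is that a logical core count greater than the physical core count indicates SMT, and that SMTDisable is a firmware/nvram setting rather than something an application can toggle at run time):
#include <sys/types.h>
#include <sys/sysctl.h>

int physical_cpu, logical_cpu;
size_t len;

/* physical vs. logical core count */
physical_cpu = 0;
len = sizeof(physical_cpu);
sysctlbyname("hw.physicalcpu", &physical_cpu, &len, NULL, 0);

logical_cpu = 0;
len = sizeof(logical_cpu);
sysctlbyname("hw.logicalcpu", &logical_cpu, &len, NULL, 0);

/* more logical than physical cores -> SMT/hyper-threading is active */
NSLog(@"physical = %d, logical = %d, SMT %@", physical_cpu, logical_cpu,
      (logical_cpu > physical_cpu) ? @"enabled" : @"disabled");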
regards, Joël
Hi,
On macOS I used to open MP3 and MP4 files with ExtAudioFile. For a few years now it hasn't worked anymore.
So I decided to try a different macOS API, using the AudioFileID of the AudioToolbox framework.
I decided to write a test:
https://gist.github.com/joelkraehemann/7f5b241b52ca38c3a765c138fb647588
It fails right here:
AudioFileOpenWithCallbacks()
It returns OSStatus error 1954115647, which means kAudioFileUnsupportedFileTypeError.
The filename was set to an MP4 file:
~/Music/test.mp4
How do I fix this?
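For reference, this is a simpler cross-check I intend to run with AudioFileOpenURL(), passing an explicit type hint (a sketch; the expanded path and the kAudioFileMPEG4Type hint are assumptions):
#include <AudioToolbox/AudioToolbox.h>

CFURLRef url;
AudioFileID audio_file;
OSStatus retval;

/* open the same file with an explicit file type hint */
url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                    CFSTR("/Users/joel/Music/test.mp4"),
                                    kCFURLPOSIXPathStyle,
                                    false);

audio_file = NULL;
retval = AudioFileOpenURL(url,
                          kAudioFileReadPermission,
                          kAudioFileMPEG4Type,
                          &audio_file);

if(retval != noErr){
  g_warning("AudioFileOpenURL failed - %d", (int) retval);
}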
regards, Joël
Hi,
After updating macOS to Sequoia 15.5, I encounter problems with plugins loaded via dlopen() and dlsym().
$ file /Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib
/Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit bundle x86_64] [arm64:Mach-O 64-bit bundle arm64]
/Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib (for architecture x86_64): Mach-O 64-bit bundle x86_64
/Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib (for architecture arm64): Mach-O 64-bit bundle arm64
I am currently investigating what goes wrong. My application runs in a sandboxed environment.
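The loading pattern is essentially the following (a sketch; ladspa_descriptor is the standard LADSPA entry point, and logging dlerror() is where I am looking for the actual failure reason):
#include <dlfcn.h>
#include <ladspa.h>

void *handle;
LADSPA_Descriptor_Function ladspa_descriptor_fn;

/* open the plugin bundle and resolve the LADSPA entry point */
handle = dlopen("/Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib",
                RTLD_NOW);

if(handle == NULL){
  g_warning("dlopen failed - %s", dlerror());
}else{
  ladspa_descriptor_fn = (LADSPA_Descriptor_Function) dlsym(handle, "ladspa_descriptor");

  if(ladspa_descriptor_fn == NULL){
    g_warning("dlsym failed - %s", dlerror());
  }
}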
Hi,
I am running into a trap. Please check the stack trace; how do I fix this?
regards, Joël
stack-trace with ExtAudioFileWrite
My audio and MIDI sequencer application consumes about 600% CPU with 10 different instruments during playback; while idle, approximately 100%.
What is the maximum CPU usage an application can consume? Are there any limits, and can they be modified?
I am asking because if I add more instruments, real-time behaviour degrades at around 700% CPU.
I have the following hardware:
MacBook Pro
14-inch, Nov 2024
Apple M4 Pro
24 GB
Hi,
I detect dark mode on macOS as follows:
NSAppearance *appearance = NSApp.mainWindow.effectiveAppearance;
NSString *interface_style = appearance.name;
NSAppearanceName basicAppearance = [appearance bestMatchFromAppearancesWithNames:@[
    NSAppearanceNameAqua,
    NSAppearanceNameDarkAqua
  ]];

if([basicAppearance isEqualToString:NSAppearanceNameDarkAqua]){
  theme = "Adwaita:dark";
  dark_mode = TRUE;
}

if([interface_style isEqualToString:NSAppearanceNameDarkAqua]){
  theme = "Adwaita:dark";
  dark_mode = TRUE;
}else if([interface_style isEqualToString:NSAppearanceNameVibrantDark]){
  theme = "Adwaita:dark";
  dark_mode = TRUE;
}else if([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastAqua]){
  theme = "HighContrast";
}else if([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastDarkAqua]){
  theme = "HighContrast:dark";
  dark_mode = TRUE;
}else if([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastVibrantDark]){
  theme = "HighContrast:dark";
  dark_mode = TRUE;
}
But this doesn't work when my window is in the background: as soon as the application window goes to the background, it loses dark mode. How do I fix this?
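A variant I am considering (a sketch; my assumption is that the application-wide effectiveAppearance on NSApplication does not depend on any window being key or in the foreground):
/* read the app-level appearance instead of the main window's */
NSAppearance *app_appearance = [NSApp effectiveAppearance];

NSAppearanceName app_basic_appearance = [app_appearance bestMatchFromAppearancesWithNames:@[
    NSAppearanceNameAqua,
    NSAppearanceNameDarkAqua
  ]];

if([app_basic_appearance isEqualToString:NSAppearanceNameDarkAqua]){
  theme = "Adwaita:dark";
  dark_mode = TRUE;
}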
regards, Joël
Hi,
I was notified about quarantined files, but I don't know why.
ITMS-91109: Invalid package contents - The package contains one or more files with the com.apple.quarantine extended file attribute
The mentioned file is the project logo as a PNG image. The PNG file is located here:
MyApp.app/Contents/Resources
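For reference, this is how I plan to inspect and strip the attribute before packaging (a sketch using the xattr tool; the logo file name is a placeholder):
$ xattr -l MyApp.app/Contents/Resources/logo.png
$ xattr -d com.apple.quarantine MyApp.app/Contents/Resources/logo.png
$ xattr -cr MyApp.app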
Hi,
I have configured the stream as interleaved, but I am unsure whether the function produces interleaved samples. So here is my question:
Does AudioDeviceCreateIOProcID produce interleaved samples with microphone input?
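This is the check I have in mind inside the IOProc (a sketch; the callback name is hypothetical, and my assumption is that a single buffer carrying more than one channel means the samples in it are interleaved, while one buffer per channel means non-interleaved):
static OSStatus ags_check_io_proc(AudioObjectID device,
                                  const AudioTimeStamp *now,
                                  const AudioBufferList *input_data,
                                  const AudioTimeStamp *input_time,
                                  AudioBufferList *output_data,
                                  const AudioTimeStamp *output_time,
                                  void *client_data)
{
  if(input_data != NULL &&
     input_data->mNumberBuffers > 0){
    UInt32 channels_per_buffer = input_data->mBuffers[0].mNumberChannels;

    /* interleaved if one buffer carries several channels */
    g_message("%u buffers, %u channels in first buffer -> %s",
              (guint) input_data->mNumberBuffers,
              (guint) channels_per_buffer,
              (channels_per_buffer > 1) ? "interleaved" : "non-interleaved");
  }

  return(noErr);
}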
Hi,
I use AudioQueueNewInput() with my own run loop and a dedicated thread. But now it doesn't show the microphone permission alert.
How do I fix this?
AudioQueueNewInput(&(core_audio_port->record_format),
                   ags_core_audio_port_handle_input_buffer,
                   core_audio_port,
                   ags_core_audio_port_input_run_loop, kCFRunLoopDefaultMode,
                   0,
                   &(core_audio_port->record_aq_ref));
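One thing I am experimenting with is requesting microphone access explicitly before creating the queue (a sketch; my assumption is that AVCaptureDevice's authorization API triggers the prompt when no other path does):
#import <AVFoundation/AVFoundation.h>

/* NSMicrophoneUsageDescription has to be present in Info.plist for the prompt to appear */
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio
                         completionHandler:^(BOOL granted){
    if(!granted){
      g_warning("microphone access denied");
    }
  }];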
Hi,
My application ships a copy of the following cryptographic libraries:
libp11-kit.0.dylib
libnettle.8.dylib
libgnutls.30.dylib
Their purpose is to connect via secured HTTP to an optional server that might be turned on to receive HTTP requests.
I think this is standard encryption, but do I need to declare it explicitly with App Uses Non-Exempt Encryption?
The application doesn't encrypt content; it just uses secured HTTP connections.
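If a declaration turns out to be needed, my understanding is that it goes into Info.plist roughly like this (a sketch; whether false is the right value for HTTPS-only use of these libraries is exactly my question):
<key>ITSAppUsesNonExemptEncryption</key>
<false/>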
regards, Joël