Hi, I would love to code with the Neural Engine on my MacBook Pro M1 (2020).
Is there any low-level API to create my very own workloads?
I am working with audio and MIDI, as well as sound synthesis and mixing.
Can I use the Neural Engine to offload the CPU? I am especially interested in parallelism using threads.
My programming languages of choice are ANSI C and Objective-C.
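While researching, a fallback I am considering for spreading DSP work across cores in plain C is GCD's dispatch_apply (a sketch using Apple's blocks extension; this runs on the CPU, not the Neural Engine, which as far as I can tell is only reachable through Core ML):
#include <dispatch/dispatch.h>
#include <stddef.h>
/* process n_blocks audio blocks in parallel on a concurrent global queue */
void process_blocks(float **blocks, size_t n_blocks, size_t block_size)
{
  dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);
  dispatch_apply(n_blocks, queue, ^(size_t i){
    size_t j;
    for(j = 0; j < block_size; j++){
      blocks[i][j] *= 0.5f; /* placeholder gain stage */
    }
  });
}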
Hi,
I build my application bundle with a makefile and then use productbuild to create an application installer. This works fine.
But when uploading it to Apple using Transporter, I get these 2 error messages.
#01 error message:
Asset validation failed (90242)
The product archive is invalid. The Info.plist must contain a LSApplicationCategoryType key, whose value is the UTI for a valid category. For more details, see "Submitting your Mac apps to the App Store". (ID: f25a48a4-8e89-4824-9f22-a6bc2fb90193)
#02 error message:
Asset validation failed (90259)
Bad Bundle Executable. You must include a valid CFBundleExecutable key in your bundle's information property list file. (ID: c3d9f35a-5a27-4cc6-bba8-56cf514c3ad9)
Here is my Info.plist
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDevelopmentRegion</key>
<string>English</string>
<key>CFBundleExecutable</key>
<string>GSequencer</string>
<key>CFBundleGetInfoString</key>
<string>6.4.3, Copyright 2005-2024 Joel Kraehemann</string>
<key>CFBundleIconFile</key>
<string>GSequencer.icns</string>
<key>CFBundleIdentifier</key>
<string>com.gsequencer.GSequencer</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>6.4.3</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>6.4.3</string>
<key>NSHumanReadableCopyright</key>
<string>Copyright 2005-2024 Joel Kraehemann, GNU General Public License.</string>
<key>LSMinimumSystemVersion</key>
<string>11.0</string>
</dict>
</plist>
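Presumably the first error just means the category key is missing; I am testing this minimal addition to the dict (assuming public.app-category.music is the right category for a sequencer):
<key>LSApplicationCategoryType</key>
<string>public.app-category.music</string>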
Here are the 2 binaries in the MacOS directory:
ls -1 gsequencer-macos/build/universal/GSequencer.app/Contents/MacOS/
GSequencer
gsequencer_macos_install
Please help me fix this.
regards, Joël
Hi,
My application doesn't start playback anymore after signing it with these entitlements:
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.files.user-selected.read-only</key>
<true/>
<key>com.apple.security.device.audio-input</key>
<true/>
<key>com.apple.security.device.microphone</key>
<true/>
<key>com.apple.security.assets.music.read-write</key>
<true/>
<key>com.apple.security.network.server</key>
<true/>
</dict>
</plist>
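To narrow it down, I am watching the unified log for sandbox denials while starting playback (command sketch; assumes the denial messages mention the process name):
log stream --style compact --predicate 'eventMessage CONTAINS "deny"' | grep GSequencer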
regards, Joël
Hi,
For the very same plugin, dlopen() sometimes fails. The library is present, but sometimes it returns NULL and sometimes a valid plugin handle.
Here is the code:
char *filename;
void *plugin_so;

filename = "/Applications/com.example.MyApp/Contents/Plugins/myplugin.dylib";

/* resolve all symbols immediately */
plugin_so = dlopen(filename,
                   RTLD_NOW);
I have debugged using lldb and checked filename; the file is present.
The myplugin.dylib file supports 2 architectures: arm64 and x86_64.
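When it returns NULL I am now also capturing dlerror(), which should state the reason (minimal sketch):
#include <stdio.h>
#include <dlfcn.h>

plugin_so = dlopen(filename,
                   RTLD_NOW);

if(plugin_so == NULL){
  /* dlerror() describes the most recent dlopen() failure */
  fprintf(stderr, "dlopen failed: %s\n", dlerror());
}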
regards, Joël
Hi,
I am running into a trap. Please check the stack trace; how can I fix this?
regards, Joël
stack-trace with ExtAudioFileWrite
Hi,
I use AudioQueueNewInput() with my very own run loop and a dedicated thread. But now it doesn't show the microphone alert window.
How can I fix this?
AudioQueueNewInput(&(core_audio_port->record_format),
                   ags_core_audio_port_handle_input_buffer,
                   core_audio_port,
                   ags_core_audio_port_input_run_loop,
                   kCFRunLoopDefaultMode,
                   0,
                   &(core_audio_port->record_aq_ref));
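One thing I am verifying, assuming the alert only appears once the app explicitly requests access (and that NSMicrophoneUsageDescription is present in Info.plist): triggering the TCC request up front via AVFoundation.
#import <AVFoundation/AVFoundation.h>

/* triggers the system microphone alert on first run, provided
   NSMicrophoneUsageDescription is present in Info.plist */
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio
                         completionHandler:^(BOOL granted){
  if(granted){
    /* safe to create and start the audio queue */
  }
}];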
Hi,
I was notified about quarantined files, but I don't know why.
ITMS-91109: Invalid package contents - The package contains one or more files with the com.apple.quarantine extended file attribute
The mentioned file is the project logo, a PNG image. The PNG file is located here:
MyApp.app/Contents/Resources
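My current plan is to strip the attribute before building the package (command sketch; recursive, in case other resources carry it too):
xattr -rd com.apple.quarantine MyApp.app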
Hi,
I have configured the stream as interleaved, but I am unsure whether the function produces interleaved samples. So here is my question:
Does AudioDeviceCreateIOProcID produce interleaved samples with microphone input?
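To check for myself, I am reading the device's input stream format and testing the non-interleaved flag (sketch; assumes macOS 12+ for kAudioObjectPropertyElementMain, and device_id is the AudioDeviceID passed to AudioDeviceCreateIOProcID):
#include <CoreAudio/CoreAudio.h>

AudioObjectPropertyAddress property_address = {
  kAudioDevicePropertyStreamFormat,
  kAudioObjectPropertyScopeInput,
  kAudioObjectPropertyElementMain
};
AudioStreamBasicDescription stream_format;
UInt32 data_size = sizeof(stream_format);

if(AudioObjectGetPropertyData(device_id, &property_address, 0, NULL, &data_size, &stream_format) == noErr){
  if((stream_format.mFormatFlags & kAudioFormatFlagIsNonInterleaved) != 0){
    /* one buffer per channel in the IOProc's AudioBufferList */
  }else{
    /* interleaved samples in a single buffer */
  }
}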
Hi,
I detect dark mode on macOS as follows:
NSAppearance *appearance = NSApp.mainWindow.effectiveAppearance;

NSString *interface_style = appearance.name;

NSAppearanceName basicAppearance = [appearance bestMatchFromAppearancesWithNames:@[
  NSAppearanceNameAqua,
  NSAppearanceNameDarkAqua
]];

if([basicAppearance isEqualToString:NSAppearanceNameDarkAqua]){
  theme = "Adwaita:dark";
  dark_mode = TRUE;
}

if([interface_style isEqualToString:NSAppearanceNameDarkAqua]){
  theme = "Adwaita:dark";
  dark_mode = TRUE;
}else if([interface_style isEqualToString:NSAppearanceNameVibrantDark]){
  theme = "Adwaita:dark";
  dark_mode = TRUE;
}else if([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastAqua]){
  theme = "HighContrast";
}else if([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastDarkAqua]){
  theme = "HighContrast:dark";
  dark_mode = TRUE;
}else if([interface_style isEqualToString:NSAppearanceNameAccessibilityHighContrastVibrantDark]){
  theme = "HighContrast:dark";
  dark_mode = TRUE;
}
But this doesn't work when my window is in the background: as soon as the application window is put into the background, it loses dark mode. How can I fix it?
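A workaround I am testing, assuming the application-wide appearance is what I actually need (NSApp.mainWindow can be nil or report a different appearance while the window is in the background): query NSApp.effectiveAppearance instead.
NSAppearance *app_appearance = NSApp.effectiveAppearance;

NSAppearanceName app_best_match = [app_appearance bestMatchFromAppearancesWithNames:@[
  NSAppearanceNameAqua,
  NSAppearanceNameDarkAqua
]];

dark_mode = [app_best_match isEqualToString:NSAppearanceNameDarkAqua];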
regards, Joël
My audio and MIDI sequencer application consumes about 600% CPU with 10 different instruments during playback, and approximately 100% while idle.
What is the maximum CPU usage an application can consume? Are there any limits, and can they be modified?
I am asking because when I add more instruments, real-time behaviour degrades at around 700% CPU.
I have the following hardware:
MacBook Pro
14-inch, Nov 2024
Apple M4 Pro
24 GB
Hi,
I encounter problems with plugins loaded via dlopen() and dlsym() after updating macOS to Sequoia 15.5.
$ file /Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib
/Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit bundle x86_64] [arm64:Mach-O 64-bit bundle arm64]
/Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib (for architecture x86_64): Mach-O 64-bit bundle x86_64
/Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib (for architecture arm64): Mach-O 64-bit bundle arm64
I am currently investigating what goes wrong. My application runs in a sandboxed environment.
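To rule out a signing or library-validation issue, I am verifying the plugin's signature after the update (command sketch):
codesign --verify --verbose=4 /Applications/com.gsequencer.GSequencer.app/Contents/Plugins/ladspa/cmt.dylib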
Hi,
I am curious whether hyper-threading is enabled or disabled on my MacBook Pro (M1 or M4). How can I figure this out?
I am using macOS 15.5.
Further, I develop a multi-threaded audio sequencer that creates threads per instrument. I use vector operations to increase performance.
I noticed that lowering the synchronization rate from 250 Hz to 60 Hz gives additional performance advantages.
How can I programmatically check whether hyper-threading is enabled or disabled, and how can I enable or disable it programmatically?
After some research I found sysctl() and nvram SMTDisable=%01.
https://support.apple.com/en-us/101870
Can anyone provide me with an Objective-C example?
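For the check, a minimal sketch of what I have so far, assuming that comparing logical to physical core counts is a valid SMT indicator (on Apple Silicon both counts should be equal, since as far as I can tell the M-series chips do not implement SMT and the nvram SMTDisable route targets Intel Macs):
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int main(void)
{
  int physical_cpu = 0;
  int logical_cpu = 0;
  size_t len;

  /* number of physical cores */
  len = sizeof(physical_cpu);
  sysctlbyname("hw.physicalcpu", &physical_cpu, &len, NULL, 0);

  /* number of logical cores; exceeds the physical count when SMT is active */
  len = sizeof(logical_cpu);
  sysctlbyname("hw.logicalcpu", &logical_cpu, &len, NULL, 0);

  printf("physical: %d, logical: %d -> SMT %s\n",
         physical_cpu, logical_cpu,
         (logical_cpu > physical_cpu) ? "enabled" : "disabled or unavailable");

  return(0);
}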
regards, Joël
Hi,
I just started to develop Audio Unit hosting support in my application.
Offline rendering seems to work, except that I get no output. Why?
I suspect something goes wrong with the player.
I connect to CoreAudio at a different location in the code.
Here are some error messages I faced so far:
2025-08-14 19:42:04.132930+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:42:04.151171+0200 com.gsequencer.GSequencer[34358:18611871] [avae] AVAudioEngineGraph.mm:4668 Can't retrieve source node to play sequence because there is no output node!
2025-08-14 19:43:08.344530+0200 com.gsequencer.GSequencer[34358:18614927] AUAudioUnit.mm:1417 Cannot set maximumFramesToRender while render resources allocated.
2025-08-14 19:43:08.346583+0200 com.gsequencer.GSequencer[34358:18614927] [avae] AVAEInternal.h:104 [AVAudioSequencer.mm:121:-[AVAudioSequencer(AVAudioSequencer_Player) startAndReturnError:]: (impl->Start()): error -10852
** (<unknown>:34358): WARNING **: 19:43:08.346: error during audio sequencer start - -10852
I have implemented an AVAudioEngine based Audio Unit host. Here I instantiate the player and the effect:
/* audio engine */
audio_engine = [[AVAudioEngine alloc] init];
fx_audio_unit_audio->audio_engine = (gpointer) audio_engine;

av_format = (AVAudioFormat *) fx_audio_unit_audio->av_format;

/* AV audio player node */
av_audio_player_node = [[AVAudioPlayerNode alloc] init];

/* AV audio unit */
av_audio_unit_effect = [[AVAudioUnitEffect alloc] initWithAudioComponentDescription:[((AVAudioUnitComponent *) AGS_AUDIO_UNIT_PLUGIN(base_plugin)->component) audioComponentDescription]];
av_audio_unit = (AVAudioUnit *) av_audio_unit_effect;
fx_audio_unit_audio->av_audio_unit = av_audio_unit;

/* audio sequencer */
av_audio_sequencer = [[AVAudioSequencer alloc] initWithAudioEngine:audio_engine];
fx_audio_unit_audio->av_audio_sequencer = (gpointer) av_audio_sequencer;

/* output node (note: the result is discarded; the engine provides its own outputNode) */
[[AVAudioOutputNode alloc] init];

/* audio player and audio unit */
[audio_engine attachNode:av_audio_player_node];
[audio_engine attachNode:av_audio_unit];

[audio_engine connect:av_audio_player_node to:av_audio_unit format:av_format];
[audio_engine connect:av_audio_unit to:[audio_engine outputNode] format:av_format];
ns_error = NULL;
[audio_engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                                 format:av_format
                      maximumFrameCount:buffer_size
                                  error:&ns_error];

if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("enable manual rendering mode error - %ld", (long) [ns_error code]);
}

ns_error = NULL;
[[av_audio_unit AUAudioUnit] allocateRenderResourcesAndReturnError:&ns_error];

if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("Audio Unit allocate render resources returned error - %ld", (long) [ns_error code]);
}
Then I render in a dedicated thread.
ns_error = NULL;
[audio_engine startAndReturnError:&ns_error];

if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("error during audio engine start - %ld", (long) [ns_error code]);
}

[av_audio_sequencer prepareToPlay];

ns_error = NULL;
[av_audio_sequencer startAndReturnError:&ns_error];

if(ns_error != NULL &&
   [ns_error code] != noErr){
  g_warning("error during audio sequencer start - %ld", (long) [ns_error code]);
}
[av_audio_player_node play];
while(is_running){
  /* pre sync */

  /* IO buffers */
  av_output_buffer = (AVAudioPCMBuffer *) scope_data->av_output_buffer;
  av_input_buffer = (AVAudioPCMBuffer *) scope_data->av_input_buffer;

  /* fill input buffer */

  /* schedule AV input buffer */
  frame_position = 0; // (gint64) ((note_offset * absolute_delay) + delay_counter) * buffer_size;

  av_audio_player_node = (AVAudioPlayerNode *) fx_audio_unit_audio->av_audio_player_node;

  AVAudioTime *av_audio_time = [[AVAudioTime alloc] initWithHostTime:frame_position
                                                          sampleTime:frame_position
                                                              atRate:((double) samplerate)];

  [av_audio_player_node scheduleBuffer:av_input_buffer
                                atTime:av_audio_time
                               options:0
                     completionHandler:nil];

  /* render */
  ns_error = NULL;

  status = [audio_engine renderOffline:AGS_FX_AUDIO_UNIT_AUDIO_FIXED_BUFFER_SIZE
                              toBuffer:av_output_buffer
                                 error:&ns_error];

  if(ns_error != NULL &&
     [ns_error code] != noErr){
    g_warning("render offline error - %ld", (long) [ns_error code]);
  }
}
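For comparison, the canonical ordering I am checking my code against (a minimal sketch assuming a single player routed through the engine's own mainMixerNode/outputNode instead of a separately allocated output node; format stands in for my AVAudioFormat):
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
NSError *error = nil;

[engine attachNode:player];

/* connecting to mainMixerNode implicitly wires the mixer to the engine's output node */
[engine connect:player to:[engine mainMixerNode] format:format];

/* manual rendering mode must be enabled while the engine is stopped */
[engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                           format:format
                maximumFrameCount:4096
                            error:&error];

[engine startAndReturnError:&error];
[player play];

/* render into a buffer matching the manual rendering format */
AVAudioPCMBuffer *out_buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:[engine manualRenderingFormat]
                                                             frameCapacity:4096];
AVAudioEngineManualRenderingStatus status = [engine renderOffline:4096
                                                         toBuffer:out_buffer
                                                            error:&error];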
regards, Joël
Hi,
I have just implemented an Audio Unit v3 host.
AgsAudioUnitPlugin *audio_unit_plugin;

AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;
AudioComponentDescription description;

guint i, i_stop;

if(!AGS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
  return;
}

audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];

/* effects */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_Effect;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}

/* instruments */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_MusicDevice;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
  ags_audio_unit_manager_load_component(audio_unit_manager,
                                        (gpointer) av_component_arr[i]);
}
But this doesn't show me any Audio Unit v2 plugins. Why?
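As a cross-check I plan to enumerate with the older C API, which should see v2 components as well (minimal sketch):
#include <string.h>
#include <AudioToolbox/AudioToolbox.h>

AudioComponentDescription desc;
AudioComponent component;
CFStringRef component_name;

memset(&desc, 0, sizeof(desc));
desc.componentType = kAudioUnitType_Effect;

component = NULL;

/* iterate every registered effect component */
while((component = AudioComponentFindNext(component, &desc)) != NULL){
  component_name = NULL;

  if(AudioComponentCopyName(component, &component_name) == noErr){
    CFShow(component_name);

    CFRelease(component_name);
  }
}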
regards, Joël
Hi,
I need to bundle an additional binary alongside my already published application.
It is an Audio Unit test application.
My published application already implements Audio Unit plugin support.
But the upload is always rejected:
Validation failed (409)
Invalid Provisioning Profile. The provisioning profile included in the bundle com.gsequencer.GSequencer [com.gsequencer.GSequencer.pkg/Payload/com.gsequencer.GSequencer.app] is invalid. [Missing code-signing certificate.] For more information, visit the macOS Developer Portal. (ID: ****)
I have followed the instructions here: Embedding a helper tool in a sandboxed app
but no luck. Does anyone know what's going on?
I use Transporter to upload the application; the embedded.provisioningprofile is copied from the Xcode build and code signing is done manually.
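For reference, the inside-out signing order I am trying: nested executables first, then the enclosing bundle, each with its own entitlements and the same distribution identity (sketch; the certificate name, entitlements files, and helper path are placeholders):
codesign --force --timestamp \
  --entitlements helper.entitlements \
  --sign "Apple Distribution: Example (TEAMID)" \
  GSequencer.app/Contents/MacOS/helper

codesign --force --timestamp \
  --entitlements GSequencer.entitlements \
  --sign "Apple Distribution: Example (TEAMID)" \
  GSequencer.app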