Still have not heard back from Apple, but have discovered a few interesting things. It looks like:
SNClassifySoundRequest(classifierIdentifier: .version1)
was never meant to run on the GPU in the background, and it was a fluke that it worked on iOS 17 and below. We are using the built-in model provided by the Sound Analysis framework; that one seems to be optimized to run on the GPU only, which causes the explosion we are seeing. If you look at:
SNClassifySoundRequest(mlModel: mlModel)
where you supply your own model, you can specify that it run on the CPU only, and that might work in the background. (We don't have our own model to test with, since we are doing a proof of concept first using Apple's built-in sounds.) Leaving this here in case anyone wants to try it.
import CoreML
import SoundAnalysis

// modelURL and audioFormat are assumed to be defined elsewhere
// create the configuration for the model
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuOnly // force computations to use only the CPU

do {
    // load your Core ML model with the configuration
    let mlModel = try MLModel(contentsOf: modelURL, configuration: configuration)

    // wrap the model in a Sound Analysis request
    let request = try SNClassifySoundRequest(mlModel: mlModel)

    // create your SNAudioStreamAnalyzer
    let analyzer = SNAudioStreamAnalyzer(format: audioFormat)

    // add the request to the analyzer
    try analyzer.add(request, withObserver: self)
} catch {
    print("Error setting up Sound Analysis: \(error)")
}