I was generating models using this code:
import Foundation
import CreateML
import TabularData
import CoreML
....
func makeTheModel(columntopredict: String, training: DataFrame, colstouse: [String], numberofmodels: Int) -> [MLLinearRegressor] {
    var returnmodels = [MLLinearRegressor]()
    for _ in 0..<numberofmodels { // 0...numberofmodels would train one model too many
        // Hold out an automatic validation split for each model
        let pms = MLLinearRegressor.ModelParameters(validation: .split(strategy: .automatic))
        do {
            let tm = try MLLinearRegressor(trainingData: training,
                                           targetColumn: columntopredict,
                                           featureColumns: colstouse,
                                           parameters: pms)
            returnmodels.append(tm)
        } catch {
            print("Error: \(error.localizedDescription)")
        }
    }
    return returnmodels
}
This worked absolutely fine on Sonoma, but after upgrading the OS to 15.3.1 (Sequoia) it does absolutely nothing.
I get no error messages, I get nothing; the code just pauses. If I watch CPU usage, the moment execution hits the try MLLinearRegressor(trainingData:...) call, CPU usage drops to 0%.
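For reference, here is a minimal, self-contained sketch of the test I'd use to confirm the stall is in the initializer itself rather than in my data (the column names and values here are synthetic placeholders, not my real dataset):

import CreateML
import TabularData

// Tiny synthetic DataFrame: if training stalls here too, the problem is
// in MLLinearRegressor itself rather than in the real training data
var df = DataFrame()
df.append(column: Column(name: "x", contents: (0..<100).map(Double.init)))
df.append(column: Column(name: "y", contents: (0..<100).map { Double($0) * 2.0 + 1.0 }))

do {
    let model = try MLLinearRegressor(trainingData: df, targetColumn: "y")
    print("Trained OK, RMSE:", model.trainingMetrics.rootMeanSquaredError)
} catch {
    print("Training failed:", error)
}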
What am I doing wrong? Is there a flag I need to set somewhere in Xcode?
This is on an M1 MacBook Pro
Any help would be greatly appreciated.
I have code that ran roughly 7x faster on Ventura than it does now on Sonoma.
For the basic model training I used:
let pmst = MLBoostedTreeRegressor.ModelParameters(validation: .split(strategy: .automatic), maxIterations: 10000)
let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)
This took around 2 seconds on Ventura and now takes between 10 and 14 seconds on Sonoma.
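For what it's worth, this is roughly how I time it; only the Date-based stopwatch is added here, and trainingdata (a DataFrame) and columntopredict are assumed to exist in my project:

import Foundation

let start = Date()
let pmst = MLBoostedTreeRegressor.ModelParameters(validation: .split(strategy: .automatic), maxIterations: 10000)
let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)
// ~2 s on Ventura, 10-14 s on Sonoma with the same data
print("Training took \(Date().timeIntervalSince(start)) seconds")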
I have tried to investigate why, and have noticed that when I dump the underlying model configuration during training, I see these results:
useWatchSPIForScribble: NO,
allowLowPrecisionAccumulationOnGPU: NO,
allowBackgroundGPUComputeSetting: NO,
preferredMetalDevice: (null),
enableTestVectorMode: NO,
parameters: (null),
rootModelURL: (null),
profilingOptions: 0,
usePreloadedKey: NO,
trainWithMLCompute: NO,
parentModelName: ,
modelName: Unnamed_Model,
experimentalMLE5EngineUsage: Enable,
preparesLazily: NO,
predictionConcurrencyHint: 0,
Why is the preferred Metal Device null?
If I do
let devices = MTLCopyAllDevices() // requires import Metal
for device in devices {
    // Note: assigning inside the loop leaves config pointing at whichever
    // device MTLCopyAllDevices() happens to list last
    config.preferredMetalDevice = device
    print(device.name)
}
I can see that the M1 GPU is available but not selected (from reading the documentation, the default is nil, which should mean the system chooses a device automatically).
Is this the reason it is so slow? Is there a way to force a change in the config, or elsewhere? And why has the default behaviour changed, if it has?
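For clarity, this is the kind of explicit pinning I have been experimenting with. MLModelConfiguration.preferredMetalDevice and computeUnits are real Core ML API, but whether CreateML training honours this configuration at all is exactly what I am unsure about:

import CoreML
import Metal

let config = MLModelConfiguration()
config.computeUnits = .all                    // allow CPU, GPU and Neural Engine
if let gpu = MTLCreateSystemDefaultDevice() { // the built-in Apple Silicon GPU on an M1
    config.preferredMetalDevice = gpu         // pin it explicitly instead of leaving it nil
}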
I have accidentally updated to Sonoma, and found my CoreML models are training nearly 7x slower since the update. I also no longer get the verbose information in the terminal (i.e. time taken per cycle, deviation from the actual result, etc.). This is using Xcode and Swift, developing for macOS.
The M1 laptop I am using is also under considerably less load (i.e. it no longer gets warm).
Is there a flag I need to set to increase performance, or a button I need to press? Any suggestions would be helpful.
Please note this is 24hr+ since the update, so it should no longer be affected by the usual background tasks that follow an upgrade.