The debugger in Xcode 16.x is extremely slow, and it turns out it's only this way when Xcode is connected to my iPhone via WiFi. If I disable WiFi on my iPhone, everything is fine. But that's not a solution.
An engineer posted this supposed solution: https://developer.apple.com/documentation/xcode-release-notes/xcode-15-release-notes.
Forgive me, but that's not a solution, especially since we used to be able to shut off "Connect via WiFi."
I've seen so many posts here and everywhere else, with no one giving a clear answer.
Does anyone know why this has been removed? And is Apple even aware of the issue?
I've filed this in Feedback Assistant, as many others have.
What gives?
I find Xcode's Debugging Variable View too cluttered, even when just watching "local" variables.
Is there any way to set it so that it only shows the variables I want to watch? That would make debugging so much easier.
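The closest thing I've found is filtering by hand in the LLDB console rather than in the Variable View itself. A couple of commands that print just one variable at a time (the variable names here are placeholders):

    (lldb) frame variable totalCount          # print a single local variable
    (lldb) v user.name                        # `v` is shorthand for `frame variable`
    (lldb) watchpoint set variable counter    # stop whenever `counter` changes

But that's console-only; I'd still like the Variable View itself to support this.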
So I believe my machine just updated to Xcode 16.3 (16E140), and it definitely just installed the latest iOS 18.4 simulator.
However, my preview will now sometimes give me the error: Failed to launch app "Picker.app" in reasonable time.
If I add a space in my code, or hit refresh on the Preview, it will run on the second or third attempt. Sometimes, in between refreshes, the preview will crash, and then it will work again.
Anyone else experiencing this? Any ideas?
Thanks
I am writing a SwiftUI-based app, and errors can occur anywhere. I've got a function that logs the error.
But it would be nice to be able to present an alert message no matter where I am, and then gracefully exit the app.
Sure, I could write the alert into every view, but that seems ridiculously unnecessary.
Am I missing something?
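For what it's worth, here is a minimal sketch of the pattern I've been experimenting with: a single shared observable error holder, with one .alert modifier attached at the root of the view hierarchy. The type and property names are my own placeholders, not a standard API:

    import SwiftUI

    // Shared holder for the most recent fatal error (placeholder name).
    @MainActor
    final class ErrorCenter: ObservableObject {
        static let shared = ErrorCenter()
        @Published var message: String?

        // Call from anywhere; hop to the main actor from background code.
        func report(_ error: Error) {
            // logError(error)  // your existing logging function would go here
            message = error.localizedDescription
        }
    }

    @main
    struct MyApp: App {
        @StateObject private var errorCenter = ErrorCenter.shared

        var body: some Scene {
            WindowGroup {
                ContentView()
                    // One alert here covers every view beneath it.
                    .alert("Error", isPresented: Binding(
                        get: { errorCenter.message != nil },
                        set: { if !$0 { errorCenter.message = nil } }
                    )) {
                        // Apple discourages programmatic exit on iOS, so
                        // treat this as a placeholder for whatever a
                        // "graceful exit" means in your app.
                        Button("Quit") { exit(0) }
                    } message: {
                        Text(errorCenter.message ?? "")
                    }
            }
        }
    }

With that, any error path can call ErrorCenter.shared.report(error) and the one root alert appears, without each view declaring its own.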
If I try to dynamically load WhisperKit's models, as below, the download never occurs. There's no error or anything. At the same time, I can still reach the huggingface.co hosting site without any headaches, so it's not a blocking issue.
let config = WhisperKitConfig(
    model: "openai_whisper-large-v3",
    modelRepo: "argmaxinc/whisperkit-coreml"
)
So I have to fall back to the tiny model, as seen below.
I have tried many ways, with ChatGPT and others, to build the models on my Mac, but hit too many failures, because I have never dealt with builds like that before.
Are there any hosting sites that have the models (small, medium, large) already built, where I can download them and just bundle them into my project? I've wasted quite a lot of time trying to get this done.
import Foundation
import WhisperKit

@MainActor
class WhisperLoader: ObservableObject {
    var pipe: WhisperKit?

    init() {
        Task {
            await self.initializeWhisper()
        }
    }

    private func initializeWhisper() async {
        do {
            Logging.shared.logLevel = .debug
            Logging.shared.loggingCallback = { message in
                print("[WhisperKit] \(message)")
            }

            let pipe = try await WhisperKit() // defaults to "tiny"
            self.pipe = pipe
            print("initialized. Model state: \(pipe.modelState)")

            guard let audioURL = Bundle.main.url(forResource: "44pf", withExtension: "wav") else {
                fatalError("not in bundle")
            }

            let result = try await pipe.transcribe(audioPath: audioURL.path)
            print("result: \(result)")
        } catch {
            print("Error: \(error)")
        }
    }
}
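Meanwhile, the workaround I'm going to try is downloading a pre-converted model folder manually from the argmaxinc/whisperkit-coreml repo and bundling it in the app. This is only a sketch: it assumes the bundled folder is named "openai_whisper-small" (my placeholder) and that the modelFolder parameter of WhisperKit's initializer accepts a local path, which is how I read its README:

    import WhisperKit

    // Sketch: load a pre-built CoreML model folder bundled with the app,
    // instead of letting WhisperKit download it at runtime. The folder
    // name is a placeholder; add it to the target as a folder reference.
    func loadBundledWhisper() async throws -> WhisperKit {
        guard let modelFolder = Bundle.main.url(forResource: "openai_whisper-small",
                                                withExtension: nil) else {
            throw CocoaError(.fileNoSuchFile) // folder missing from the bundle
        }
        // Assumption: a local modelFolder path makes WhisperKit skip the download.
        return try await WhisperKit(modelFolder: modelFolder.path)
    }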
I have Xcode 16, have set the minimum deployment target everywhere to iOS 17.5, and am using import Speech.
Nevertheless, Xcode can't find it.
At ChatGPT's urging I tried going back to Xcode 15.3, but that won't run on Sequoia.
Am I misunderstanding something?
Here's how I am trying to use it:
if templateItems.isEmpty {
    templateItems = dbControl?.getAllItems(templateName: templateName) ?? []
    items = templateItems.compactMap { $0.itemName?.components(separatedBy: " ") }.flatMap { $0 }
    let phrases = extractContextualWords(from: templateItems)

    Task {
        do {
            // 1. Get your items and extract words
            templateItems = dbControl?.getAllItems(templateName: templateName) ?? []
            let phrases = extractContextualWords(from: templateItems)

            // 2. Build the custom model and export it
            let modelURL = try await buildCustomLanguageModel(from: phrases)

            // 3. Prepare the model (STATIC method)
            try await SFSpeechRecognizer.prepareCustomLanguageModel(at: modelURL)

            // ✅ Ready to use in recognition request
            print("✅ Model prepared at: \(modelURL)")

            // Save modelURL to use in Step 5 (speech recognition)
            // e.g., self.savedModelURL = modelURL
        } catch {
            print("❌ Error preparing model: \(error)")
        }
    }
}
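For reference, here is the alternative shape I ran across from Apple's WWDC23 session "Customize on-device speech recognition", where the prepare call is a static method on SFSpeechLanguageModel rather than SFSpeechRecognizer, and takes a configuration. I'm treating the identifiers and URLs below as placeholders, not verified usage:

    import Speech

    // Sketch of the WWDC23 flow as I understand it: the static
    // prepareCustomLanguageModel lives on SFSpeechLanguageModel and takes
    // the exported training data plus a configuration. The client
    // identifier and URLs are placeholders.
    func prepareCustomModel(trainingDataURL: URL, compiledModelURL: URL) async throws {
        let lmConfiguration = SFSpeechLanguageModel.Configuration(languageModel: compiledModelURL)

        try await SFSpeechLanguageModel.prepareCustomLanguageModel(
            for: trainingDataURL,
            clientIdentifier: "com.example.myapp",
            configuration: lmConfiguration
        )

        // Later, attach the same configuration to a recognition request:
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.requiresOnDeviceRecognition = true
        request.customizedLanguageModel = lmConfiguration
    }

If that's right, it would also explain why Xcode can't find the method in my snippet above, since I'm calling it on SFSpeechRecognizer.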