Is there a Finder-type app that will read through my iPhone files?
I'm working on an app that records audio files to my iPhone, and it would be much easier if I could find an app that lets me scroll through the files on my iPhone from my desktop, as opposed to doing it on the iPhone itself.
I'm new enough to SwiftUI that there are still some concepts that confuse me. Case in point: .background
The code below is meant to detect when the user drags their finger over different areas, in this case three different size circles placed over each other.
The code works, but I get lost trying to figure out how the logic works.
.background calls a function that's a view builder, yet doesn't return an actual view? Unless Color.clear is the view it's returning?
I have more questions, but might as well start with .background since it comes first? I think?
Thanks
import SwiftUI

struct ContentView: View {
    @State private var dragLocation = CGPoint.zero
    @State private var dragInfo = " "
    @State private var secondText = "..."

    private func dragDetector(for name: String) -> some View {
        GeometryReader { proxy in
            let frame = proxy.frame(in: .global)
            let isDragLocationInsideFrame = frame.contains(dragLocation)
            let isDragLocationInsideCircle = isDragLocationInsideFrame &&
                Circle().path(in: frame).contains(dragLocation)
            Color.clear
                .onChange(of: isDragLocationInsideCircle) { oldVal, newVal in
                    if dragLocation != .zero {
                        dragInfo = "\(newVal ? "entering" : "leaving") \(name)..."
                    }
                }
        }
    }

    var body: some View {
        ZStack {
            Color(white: 0.2)
            VStack(spacing: 50) {
                Text(dragInfo)
                    .padding(.top, 60)
                    .foregroundStyle(.white)
                Text(secondText)
                    .foregroundStyle(.white)
                Spacer()
                ZStack {
                    Circle()
                        .fill(.red)
                        .frame(width: 200, height: 200)
                        .background { dragDetector(for: "red") }
                    Circle()
                        .fill(.white)
                        .frame(width: 120, height: 120)
                        .background { dragDetector(for: "white") }
                    Circle()
                        .fill(.blue)
                        .frame(width: 50, height: 50)
                        .background { dragDetector(for: "blue") }
                }
                .padding(.bottom, 30)
            }
        }
        .ignoresSafeArea()
        .gesture(
            DragGesture(coordinateSpace: .global)
                .onChanged { val in
                    dragLocation = val.location
                    secondText = "\(Int(dragLocation.x)) ... \(Int(dragLocation.y))"
                }
                .onEnded { val in
                    dragLocation = .zero
                    dragInfo = " "
                }
        )
    }
}

#Preview {
    ContentView()
}
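To check my understanding, here is a stripped-down sketch of the same pattern (my own illustration, not part of the code above; BackgroundSketch and the text labels are made up): the closure passed to .background is a view builder, and the view it produces, here a GeometryReader whose content resolves to Color.clear, seems to be what gets placed behind the modified view at that view's size.

import SwiftUI

struct BackgroundSketch: View {
    @State private var measuredSize = CGSize.zero

    var body: some View {
        VStack(spacing: 20) {
            Text("Hello")
                .padding()
                .background {
                    // The closure is a ViewBuilder; GeometryReader is the view it
                    // returns, and Color.clear is the invisible content that fills
                    // the same space as the Text it sits behind.
                    GeometryReader { proxy in
                        Color.clear
                            .onAppear { measuredSize = proxy.size }
                    }
                }
            Text("background size: \(Int(measuredSize.width)) x \(Int(measuredSize.height))")
        }
    }
}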
While using the Speech framework's speech recognition capabilities, I get the error below. However, the app successfully transcribes the audio file.
So I'm not sure how worried I should be. Also, when that error occurred, did it mean the app went to the internet to transcribe the file? (Yes, requiresOnDeviceRecognition is set to false.)
I'd like to know what that error means and how much I need to worry about it.
Received an error while accessing com.apple.speech.localspeechrecognition service: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"
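For context, here is roughly how the recognition request is set up (a simplified sketch, not my actual code; transcribeFile, the locale, and audioURL are placeholders, and authorization is assumed to have been requested already):

import Foundation
import Speech

func transcribeFile(at audioURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) else { return }

    // Whether the on-device (localspeechrecognition) service is available at all.
    print("supportsOnDeviceRecognition: \(recognizer.supportsOnDeviceRecognition)")

    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    // false allows the request to fall back to the server if the local
    // service is unavailable or errors out.
    request.requiresOnDeviceRecognition = false

    _ = recognizer.recognitionTask(with: request) { result, error in
        if let error {
            print("Recognition error: \(error)")
        }
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}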
I find Xcode's Debugging Variable View to be too cluttered, even when just watching "local" variables.
Is there any way to set it so that it shows only the variables I want to watch? That would make debugging so much easier.
If I try to dynamically load WhisperKit's models, as shown below, the download never occurs. No error or anything. At the same time I can still get to the huggingface.co hosting site without any headaches, so it's not a network-blocking issue.
let config = WhisperKitConfig(
    model: "openai_whisper-large-v3",
    modelRepo: "argmaxinc/whisperkit-coreml"
)
So I have to default to the tiny model as seen below.
I have tried many ways, with help from ChatGPT and others, to build the models on my Mac, but hit too many failures, since I have never dealt with builds like that before.
Are there any hosting sites that have the models (small, medium, large) already built, where I can download them and just bundle them into my project? I've wasted quite a lot of time trying to get this done.
import Foundation
import WhisperKit

@MainActor
class WhisperLoader: ObservableObject {
    var pipe: WhisperKit?

    init() {
        Task {
            await self.initializeWhisper()
        }
    }

    private func initializeWhisper() async {
        do {
            Logging.shared.logLevel = .debug
            Logging.shared.loggingCallback = { message in
                print("[WhisperKit] \(message)")
            }
            let pipe = try await WhisperKit() // defaults to "tiny"
            self.pipe = pipe
            print("initialized. Model state: \(pipe.modelState)")
            guard let audioURL = Bundle.main.url(forResource: "44pf", withExtension: "wav") else {
                fatalError("not in bundle")
            }
            let result = try await pipe.transcribe(audioPath: audioURL.path)
            print("result: \(result)")
        } catch {
            print("Error: \(error)")
        }
    }
}
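For what it's worth, what I'm hoping to end up with is something like the sketch below, where an already-built model folder bundled with the app is handed straight to WhisperKit. This assumes WhisperKitConfig accepts a modelFolder path and that a folder reference named openai_whisper-large-v3 has been added to the app bundle; both are assumptions on my part, not working code.

import Foundation
import WhisperKit

// Sketch only: assumes WhisperKitConfig exposes a modelFolder parameter for an
// already-compiled model directory bundled with the app (verify against the
// WhisperKit version in use).
func loadBundledWhisperModel() async throws -> WhisperKit {
    guard let modelFolder = Bundle.main.url(forResource: "openai_whisper-large-v3",
                                            withExtension: nil) else {
        throw CocoaError(.fileNoSuchFile)
    }
    let config = WhisperKitConfig(
        model: "openai_whisper-large-v3",
        modelFolder: modelFolder.path  // point at the bundled folder; no download needed
    )
    return try await WhisperKit(config)
}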
I have Xcode 16, have set the minimum deployment target to 17.5, and am using import Speech.
Nevertheless, Xcode can't find it.
At ChatGPT's urging I tried going back to Xcode 15.3, but that won't work with Sequoia.
Am I misunderstanding something?
Here's how I am trying to use it:
if templateItems.isEmpty {
    templateItems = dbControl?.getAllItems(templateName: templateName) ?? []
    items = templateItems.compactMap { $0.itemName?.components(separatedBy: " ") }.flatMap { $0 }
    let phrases = extractContextualWords(from: templateItems)
    Task {
        do {
            // 1. Get your items and extract words
            templateItems = dbControl?.getAllItems(templateName: templateName) ?? []
            let phrases = extractContextualWords(from: templateItems)
            // 2. Build the custom model and export it
            let modelURL = try await buildCustomLanguageModel(from: phrases)
            // 3. Prepare the model (STATIC method)
            try await SFSpeechRecognizer.prepareCustomLanguageModel(at: modelURL)
            // ✅ Ready to use in recognition request
            print("✅ Model prepared at: \(modelURL)")
            // Save modelURL to use in Step 5 (speech recognition)
            // e.g., self.savedModelURL = modelURL
        } catch {
            print("❌ Error preparing model: \(error)")
        }
    }
}
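For comparison, here is the flow as I currently understand the iOS 17 API (a hedged sketch worth verifying against the current Speech headers): the static prepare call appears to live on SFSpeechLanguageModel rather than SFSpeechRecognizer, which might be why Xcode can't find it. assetURL, the output location, and the client identifier below are placeholders.

import Foundation
import Speech

// Hedged sketch of preparing a custom language model on iOS 17; assetURL is the
// exported custom LM data file, and outputURL is where the prepared model is kept.
func prepareCustomModel(assetURL: URL) async throws -> SFSpeechLanguageModel.Configuration {
    let outputURL = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("CustomLM")
    let lmConfiguration = SFSpeechLanguageModel.Configuration(languageModel: outputURL)

    try await SFSpeechLanguageModel.prepareCustomLanguageModel(
        for: assetURL,                          // exported custom LM data
        clientIdentifier: "com.example.myapp",  // placeholder identifier
        configuration: lmConfiguration
    )
    return lmConfiguration
}

// Later, on the recognition request:
//     request.requiresOnDeviceRecognition = true
//     request.customizedLanguageModel = lmConfiguration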