Within my UIViewController I have a UITextView which I use to dump current status and info into. Obviously, every time I add text to the UITextView I would like it to scroll to the bottom.
So I've created this function, which I call from the UIViewController whenever I have new data.
func updateStat(status: String, tView: UITextView) {
    db.status = db.status + status + "\n"
    tView.text = db.status
    let range = NSMakeRange(tView.text.count - 1, 0)
    tView.scrollRangeToVisible(range)
    tView.flashScrollIndicators()
}
The only thing that does not work is the tView.scrollRangeToVisible call. However, if from the UIViewController I call:
updateStat(status: "...new data...", tView: mySession)
let range = NSMakeRange(mySession.text.count - 1, 0)
mySession.scrollRangeToVisible(range)
then the UITextView's scrollRangeToVisible does work.
I'm curious if anyone knows why this works when called within the UIViewController, but not when called from a function?
p.s. I have also tried making updateStat an extension on UIViewController, but that doesn't work either.
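If the difference turns out to be layout timing, one common workaround is to defer the scroll until after the text view has processed the new text. A sketch of that idea (an assumption, not a confirmed fix):

func updateStat(status: String, tView: UITextView) {
    db.status = db.status + status + "\n"
    tView.text = db.status
    tView.layoutIfNeeded()                 // force the pending layout pass
    DispatchQueue.main.async {             // scroll on the next run-loop turn
        let range = NSMakeRange(tView.text.count - 1, 0)
        tView.scrollRangeToVisible(range)
        tView.flashScrollIndicators()
    }
}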
I am looking for a proper tutorial on how to write, let's say, a messaging app.
In other words, the user doesn't have to be running the messaging app to get messages. I would like to build that type of structured app. I realize that push notifications appear to be the way to go.
But at this point I still can't find a decent tutorial that seems to cover all the bases.
Thank you
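For what it's worth, the client-side entry point for push is small; the tutorial-worthy parts are the server side and the provisioning. A minimal sketch, assuming a UIKit app with the Push Notifications capability already enabled:

import UIKit
import UserNotifications

func registerForPush() {
    UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { granted, _ in
        guard granted else { return }
        DispatchQueue.main.async {
            UIApplication.shared.registerForRemoteNotifications()   // device token arrives in the app delegate
        }
    }
}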
I am looping through an audio file; below is my very simple code.
I'm looping through 400 frames each time, but I picked 400 as an arbitrary number.
I would prefer to read in by time instead, let's say a quarter of a second. So I was wondering how I can determine the time length of each frame in the audio file?
I am assuming that determining this might differ based on the audio format? I know almost nothing about audio.
guard var buffer = AVAudioPCMBuffer(pcmFormat: input.processingFormat, frameCapacity: AVAudioFrameCount(input.length)) else {
    return nil
}
var myAudioBuffer = AVAudioPCMBuffer(pcmFormat: input.processingFormat, frameCapacity: 400)!
while input.framePosition < input.length - 1 {
    let fcIndex = (input.length - input.framePosition > 400) ? 400 : input.length - input.framePosition
    try? input.read(into: myAudioBuffer, frameCount: AVAudioFrameCount(fcIndex))
    let volUme = getVolume(from: myAudioBuffer, bufferSize: myAudioBuffer.frameLength)
    // ...manipulation code
}
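For the time question: a frame is one sample per channel, so the duration of one frame is 1 / sampleRate seconds, and a quarter of a second is sampleRate / 4 frames. A sketch using the names above:

let sampleRate = input.processingFormat.sampleRate                  // frames per second, e.g. 44100
let framesPerQuarterSecond = AVAudioFrameCount(sampleRate / 4.0)    // 0.25 s worth of frames
let quarterSecondBuffer = AVAudioPCMBuffer(pcmFormat: input.processingFormat,
                                           frameCapacity: framesPerQuarterSecond)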
I have this long, kludgy bit of code that works. I've outlined it below, rather than pasting it all, so as not to be confusing; there is no error within my code.
I just need to know if there's a method that already exists to copy a specific part of an AVAudioPCMBuffer to a new AVAudioPCMBuffer?
So if I have an AVAudioPCMBuffer of 10,000 frames, and I just want frames 500 through 3,000 copied into a new buffer (formatting and all) without altering the old buffer...is there a method to do this?
My code detects silent moments in an audio recording:
- I currently read an audio file into an AVAudioPCMBuffer (audBuffOne).
- I loop through the buffer and detect the starts and ends of silence.
- I record their positions in an array. This array holds the frame position where I detect voice starting (A) and the frame position where the voice ends (B), with some padding of course.
...new loop...
- I loop through my array to go through each A and B framePosition.
- Using the sample size from the audio file's formatting info, I create a new AVAudioPCMBuffer (audBuffTwo) large enough to hold from A to B and having the same formatting as audBuffOne.
- I go back to audBuffOne.
- I set the audio file's framePosition to A.
- I read into audBuffTwo for the proper length to reach frame B.
- I save audBuffTwo to a new file.
- ...keep looping.
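As far as I know there is no single built-in "copy this frame range" method on AVAudioPCMBuffer, so a small hand-rolled helper is the usual route. A sketch for non-interleaved float formats (copyFrames and its parameter names are hypothetical):

import AVFoundation

func copyFrames(from source: AVAudioPCMBuffer,
                start: AVAudioFramePosition,
                count: AVAudioFrameCount) -> AVAudioPCMBuffer? {
    guard let src = source.floatChannelData,
          let dest = AVAudioPCMBuffer(pcmFormat: source.format, frameCapacity: count),
          let dst = dest.floatChannelData else { return nil }
    for ch in 0..<Int(source.format.channelCount) {
        dst[ch].update(from: src[ch] + Int(start), count: Int(count))   // copy one channel's slice
    }
    dest.frameLength = count        // mark the copied frames as valid
    return dest                     // the source buffer is left untouched
}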
I'm trying to convert my old URL functionality to SwiftUI with async/await. While the skeleton below works, I'm puzzled as to why I must place my call to the method within a Task. When I don't use Task I get the error:
Cannot pass function of type '() async -> Void' to parameter expecting synchronous function type
From the examples I've seen, this wasn't necessary.
So either I've misunderstood something, or I am doing this incorrectly, and I'm looking for a little bit of guidance.
Thank you
Within my SwiftUI view:
Button {
    Task {
        let request = xFile.makePostRequest(xUser: xUser, script: "requestTicket.pl")
        let result = await myNewURL.asyncCall(request: request)
        print("\(result)")
    }
}
From a separate class:
class MyNewURL: NSObject, ObservableObject {
    func asyncCall(request: URLRequest) async -> Int {
        do {
            let (data, response) = try await URLSession.shared.data(for: request)
            guard let httpResponse = response as? HTTPURLResponse else {
                print("error")
                return -1
            }
            if httpResponse.statusCode == 200 {
                // ...
            }
        } catch {
            return -2
        }
        return 0
    }
}
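One reading of that error: Button's action parameter is a synchronous () -> Void closure, so an async function can't be passed to it directly, and Task is the bridge from the synchronous action into async code. A sketch reusing the names above; the .task alternative ties the call to the view's lifetime instead:

// Explicit bridge inside the synchronous action:
Button("Request") {
    Task {
        let result = await myNewURL.asyncCall(request: request)
        print(result)
    }
}

// Or let SwiftUI manage the task:
Text("status")
    .task {
        _ = await myNewURL.asyncCall(request: request)
    }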
Within my SwiftUI view I have multiple view items: buttons, text, etc.
Within the view, the user selects a ticket and then clicks a button to upload it. The app then sends the ticket to my server, where the server takes time to import the ticket. So my SwiftUI app needs to make repeated URL calls depending on how far the server has gotten with the ticket.
I thought the code below would work. I update the text to show what percent the server is at. However, it only works once. I guess I thought that onAppear would fire with every refresh, since it's in a switch statement.
If I'm reading my debugger correctly, SwiftUI remembers which switch case was shown last and therefore treats my code below as a refresh rather than a fresh creation.
So is there a modifier that gets fired repeatedly on every view refresh? Or do I have to do all my multiple URL calls from outside the SwiftUI view?
Something like onInitialize, but for child views (buttons, text, etc.) within the main view?
switch myNewURL.stage {
case .makeTicket:
    Text(myNewURL.statusMsg)
        .padding(.all, 30)
case .importing:
    Text(myNewURL.statusMsg)
        .onAppear {
            Task {
                try! await Task.sleep(nanoseconds: 7_000_000_000)
                print("stage two")
                let request = xFile.makeGetReq(xUser: xUser, script: "getTicketStat.pl")
                let result = await myNewURL.asyncCall(request: request, xFile: xFile)
            }
        }
        .padding(.all, 30)
case .uploadOriginal:
    Text(myNewURL.statusMsg)
        .padding(.all, 30)
case .JSONerr:
    Text(myNewURL.statusMsg)
        .padding(.all, 30)
}
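If the goal is to re-run the call whenever the stage changes, one option I'm aware of is .task(id:), which cancels and restarts its body each time the id value changes. A sketch, assuming stage is Equatable:

Text(myNewURL.statusMsg)
    .task(id: myNewURL.stage) {              // restarts whenever stage changes
        guard myNewURL.stage == .importing else { return }
        try? await Task.sleep(nanoseconds: 7_000_000_000)
        let request = xFile.makeGetReq(xUser: xUser, script: "getTicketStat.pl")
        _ = await myNewURL.asyncCall(request: request, xFile: xFile)
    }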
I hope this isn't too off-topic, but I'm in a bind.
My HD is so full that I can't download any new iOS runtimes in Xcode. Actually, there's a lot I can't do. According to the storage section of System Details, Developer is over 100 GB.
I have a huge external HD and would like to completely remove and clean out Xcode, and then reinstall it on the external HD.
1 - Is this allowed? Some people say that all apps must be installed in the Applications folder. Checking the web, I get both answers.
2 - If it is allowed, is there a way to properly clean everything out? Every website I found that describes this procedure is touting its own cleanup app.
Thank you
Is there any app out there that lets you browse through a CoreData database? When I first started to learn Swift, an app called Liya seemed to work. But alas, no longer.
It would just make things easier if there were something out there that let me browse the data directly.
Thanks
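In the meantime, one workaround: a Core Data store is normally just a SQLite file, so printing its location lets a generic SQLite browser open it (read-only inspection is safest). A sketch, where container stands for your NSPersistentContainer:

if let url = container.persistentStoreCoordinator.persistentStores.first?.url {
    print("Core Data store at: \(url)")     // open this file in any SQLite browser
}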
Can I open two separate Xcode windows with the same project?
I have multiple monitors, so would love to be able use them to view different files of the same project.
Is there a way?
Before Xcode 15, when I could access the Info.plist file, I was able to add exceptions to the App Transport Security settings so I could connect to my home server, which has no HTTPS, just HTTP.
But in Xcode 15 I have no idea how to do this, nor can I buy a clue with Google.
Please help!
Thanks
p.s. I should probably add that one site mentioned that going to the Targets section of your project allows easy access to Info.plist. Yet for some strange reason there is no item in Targets, which is odd, as I can debug my project.
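For reference, the exception itself is still the same set of Info.plist keys as before; something like the following, where the host name is a placeholder:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>myhomeserver.local</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>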
I'm new enough to SwiftUI that there are still some concepts that confuse me. Case in point: .background
The code below is meant to detect when the user drags their finger over different areas, in this case three different-size circles placed over each other.
The code works, but I get lost trying to figure out how the logic works.
.background calls a function that's a view builder, yet doesn't return an actual view? Unless Color.clear is the view it's returning?
I have more questions, but I might as well start with .background, since it comes first? I think?
Thanks
import SwiftUI

struct ContentView: View {
    @State private var dragLocation = CGPoint.zero
    @State private var dragInfo = " "
    @State private var secondText = "..."

    private func dragDetector(for name: String) -> some View {
        GeometryReader { proxy in
            let frame = proxy.frame(in: .global)
            let isDragLocationInsideFrame = frame.contains(dragLocation)
            let isDragLocationInsideCircle = isDragLocationInsideFrame &&
                Circle().path(in: frame).contains(dragLocation)
            Color.clear
                .onChange(of: isDragLocationInsideCircle) { oldVal, newVal in
                    if dragLocation != .zero {
                        dragInfo = "\(newVal ? "entering" : "leaving") \(name)..."
                    }
                }
        }
    }

    var body: some View {
        ZStack {
            Color(white: 0.2)
            VStack(spacing: 50) {
                Text(dragInfo)
                    .padding(.top, 60)
                    .foregroundStyle(.white)
                Text(secondText)
                    .foregroundStyle(.white)
                Spacer()
                ZStack {
                    Circle()
                        .fill(.red)
                        .frame(width: 200, height: 200)
                        .background { dragDetector(for: "red") }
                    Circle()
                        .fill(.white)
                        .frame(width: 120, height: 120)
                        .background { dragDetector(for: "white") }
                    Circle()
                        .fill(.blue)
                        .frame(width: 50, height: 50)
                        .background { dragDetector(for: "blue") }
                }
                .padding(.bottom, 30)
            }
        }
        .ignoresSafeArea()
        .gesture(
            DragGesture(coordinateSpace: .global)
                .onChanged { val in
                    dragLocation = val.location
                    secondText = "\(Int(dragLocation.x)) ... \(Int(dragLocation.y))"
                }
                .onEnded { val in
                    dragLocation = .zero
                    dragInfo = " "
                }
        )
    }
}

#Preview {
    ContentView()
}
It just feels as if my debugger is running super slow when I step over each line. Each line is doing string comparison or splitting text into words; really nothing fancy.
It appears that every time I hit F6, the Variables View (local variables) takes 4 seconds or more to refresh. But I don't know if that's the cause or a symptom.
Just curious if anyone can shed any light on this.
Specs:
- MacBook Pro 2019, 2.6 GHz 6-Core Intel Core i7, 16 GB 2667 MHz DDR4
- macOS Sequoia Version 15.1.1 (24B91)
- iPhone running the app: 13 Pro, iOS 18.1.1
- Xcode Version 16.2 (16C5032a)
So I believe my machine just updated to Xcode 16.3 (16E140). It definitely just installed the latest iOS simulator, 18.4.
However, now my preview will sometimes give me the error: Failed to launch app “Picker.app” in reasonable time.
If I add a space in my code, or hit refresh on the Preview, then it will run on the second or third attempt. Sometimes in between the refreshes the preview will crash, and then it will work again.
Anyone else experiencing this? Any ideas?
Thanks
I have Xcode 16, have set the minimum deployment target to iOS 17.5, and am using import Speech.
Nevertheless, Xcode can't find it.
At ChatGPT's urging I tried going back to Xcode 15.3, but that won't work with Sequoia.
Am I misunderstanding something?
Here's how I am trying to use it:
if templateItems.isEmpty {
    templateItems = dbControl?.getAllItems(templateName: templateName) ?? []
    items = templateItems.compactMap { $0.itemName?.components(separatedBy: " ") }.flatMap { $0 }
    let phrases = extractContextualWords(from: templateItems)
    Task {
        do {
            // 1. Get your items and extract words
            templateItems = dbControl?.getAllItems(templateName: templateName) ?? []
            let phrases = extractContextualWords(from: templateItems)
            // 2. Build the custom model and export it
            let modelURL = try await buildCustomLanguageModel(from: phrases)
            // 3. Prepare the model (STATIC method)
            try await SFSpeechRecognizer.prepareCustomLanguageModel(at: modelURL)
            // ✅ Ready to use in recognition request
            print("✅ Model prepared at: \(modelURL)")
            // Save modelURL to use in Step 5 (speech recognition)
            // e.g., self.savedModelURL = modelURL
        } catch {
            print("❌ Error preparing model: \(error)")
        }
    }
}
This was something I needed so much help with in this post:
https://developer.apple.com/forums/thread/666997
I want to add a user-defined setting. As before (from the answer in the previous post), I made sure I was adding it to SWIFT_ACTIVE_COMPILATION_CONDITIONS.
I added MYDEBUG01, which does not work. This is in Xcode 12.4.
I put both projects up in a picture; the one where it does work is the bottom one, MYSCENES. That was in Xcode 11.x.
The screens do look a little different, but I can't figure out where my error is.
Here is the screenshot
http://98.7.37.117/index.html
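For context, a condition listed under SWIFT_ACTIVE_COMPILATION_CONDITIONS is consumed in code like this, which is also a quick way to verify the setting took effect:

#if MYDEBUG01
print("MYDEBUG01 is active")    // compiles only when the flag is set for this build configuration
#endif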