Can I have multiple extensions for the same class? Obviously I have already tried it and it works. But that doesn't make it right?
So I want to make sure this doesn't blow up in my face at a later date.
p.s. I keep typing "blow up" as two separate words, but the site changes the b word to "****". What's up with that?
Updated info below, in bold.
I went and changed one of the entities in my app's Core Data model. For all my entities, Codegen is set to "Manual".
So I deleted all four files (for two entities), cleaned the build folder, regenerated the CoreData files with Editor -> Create NSManagedObject Subclass.
Now every time I run the app I get a fatalError in the following code in the AppDelegate:
lazy var persistentContainer: NSPersistentContainer = {
let container = NSPersistentContainer(name: "Invoice_Gen")
container.loadPersistentStores(completionHandler: { (storeDescription, error) in
if let error = error as NSError? {
fatalError("Unresolved error \(error), \(error.userInfo)")
}
})
return container
}()
The error code being
[error] error: addPersistentStoreWithType:configuration:URL:options:error: returned error NSCocoaErrorDomain (134140)
Even if I remove the files for the Core Data entities and comment out anything related to them code-wise, I still get this crash.
If someone has any idea whether I have to delete something else, or what I'm missing, I would so appreciate it. This one has me more stumped than anything before it.
The change I made was to turn one of the entity's attributes from String to Int.
When I changed it back, everything worked. From my research on Google, it seems to involve the mapping model, but I cannot find anything on it at all.
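For reference, a minimal sketch of explicitly enabling automatic (lightweight) migration on the persistent container. These options already default to true on NSPersistentStoreDescription, and a String to Int type change may still need a new model version or a mapping model, so this is only a sketch:
lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentContainer(name: "Invoice_Gen")
    if let description = container.persistentStoreDescriptions.first {
        description.shouldMigrateStoreAutomatically = true      // migrate the store if the model changed
        description.shouldInferMappingModelAutomatically = true // try to infer a lightweight mapping
    }
    container.loadPersistentStores { _, error in
        if let error = error as NSError? {
            fatalError("Unresolved error \(error), \(error.userInfo)")
        }
    }
    return container
}()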
I am using the code below to create my own debug log for my app. On the simulator, I have no problem viewing that log. I simply print out the documents directory in the debugger, then open it in Finder.
However, I do not know how to access the created log on my iPhone itself. Even if I go to Window -> Devices and Simulators, when I look at my app's container it's empty. I would also like to be able to access the file from any actual device in the future.
Am I using the wrong directory? I even used allDomainsMask in place of userDomainMask below, but to no avail.
debugFileURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("myApp.log")
if let handle = try? FileHandle(forWritingTo: dbClass.debugFileURL!) {
handle.seekToEndOfFile() // moving pointer to the end
handle.write(text.data(using: .utf8)!) // adding content
handle.closeFile() // closing the file
} else {
try! text.write(to: dbClass.debugFileURL!, atomically: false, encoding: .utf8)
}
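For reference, a sketch of the two Info.plist entries commonly used to expose an app's Documents folder in the Files app and in Finder on a connected device (shown in plist source form):
<key>UIFileSharingEnabled</key>
<true/>
<key>LSSupportsOpeningDocumentsInPlace</key>
<true/>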
Am trying to distinguish, in Swift, the difference in volume between background noise and someone speaking.
Previously, I had come across a tutorial which had me looking at the power levels in each channel. It came out as the code listed in Sample One, which I call within the installTap closure. It was OK, but the variance between background noise and the voice I intended to record wasn't that great. Sure, it could have been the math used to calculate it, but since I have no experience with audio data, it was like reading another language.
Then I came across another demo. Its code was much simpler, and the difference in values between background noise and speaking voice was much greater, and therefore much more detectable. It's listed here in Sample Two, which I also call within the installTap closure.
My issue here is wanting to understand what is happening in the code. In all my experiences with other languages, voice was something I never dealt with before, so this is way over my head.
Not looking for someone to explain this to me line by line. But if someone could let me know where I can find decent documentation so I can better grasp what is going on, I would appreciate it.
Thank you
Sample One
func audioMetering(buffer: AVAudioPCMBuffer) {
    // buffer.frameLength = 1024
    let inNumberFrames: UInt = UInt(buffer.frameLength)
    if buffer.format.channelCount > 0 {
        let samples = (buffer.floatChannelData![0])
        var avgValue: Float32 = 0
        // mean of the absolute sample values (average magnitude) for channel 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            // convert the linear amplitude to decibels
            v = 20.0 * log10f(avgValue)
        }
        // low-pass (smoothing) filter so the level doesn't jump around from frame to frame
        self.averagePowerForChannel0 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0)
        self.averagePowerForChannel1 = self.averagePowerForChannel0
    }
    if buffer.format.channelCount > 1 {
        // same calculation for channel 1 when the buffer is stereo
        let samples = buffer.floatChannelData![1]
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel1 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel1)
    }
}
Sample Two
private func getVolume(from buffer: AVAudioPCMBuffer, bufferSize: Int) -> Float {
    // channel 0 samples as a plain Float array
    guard let channelData = buffer.floatChannelData?[0] else {
        return 0
    }
    let channelDataArray = Array(UnsafeBufferPointer(start: channelData, count: bufferSize))
    var outEnvelope = [Float]()
    var envelopeState: Float = 0
    let envConstantAtk: Float = 0.16   // attack: how quickly the envelope rises toward a louder sample
    let envConstantDec: Float = 0.003  // decay: how slowly it falls back toward a quieter sample
    for sample in channelDataArray {
        // rectify: only the magnitude of the sample matters for loudness
        let rectified = abs(sample)
        if envelopeState < rectified {
            envelopeState += envConstantAtk * (rectified - envelopeState)
        } else {
            envelopeState += envConstantDec * (rectified - envelopeState)
        }
        outEnvelope.append(envelopeState)
    }
    // threshold below which the signal is treated as background noise
    // coming in from the microphone
    if let maxVolume = outEnvelope.max(),
       maxVolume > Float(0.015) {
        return maxVolume
    } else {
        return 0.0
    }
}
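For what it's worth, a minimal sketch of a related measure, the RMS (root mean square) level in decibels, using Accelerate; the function name is a placeholder and it assumes a non-interleaved Float32 buffer:
import Accelerate
import AVFoundation

func rmsPower(of buffer: AVAudioPCMBuffer) -> Float {
    guard let samples = buffer.floatChannelData?[0] else { return -100 }
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength)) // root mean square of channel 0
    return rms > 0 ? 20 * log10f(rms) : -100                      // convert to decibels
}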
Within my UIViewController I have a UITextView which I use to dump current status and info into. Obviously, every time I add text to the UITextView I would like it to scroll to the bottom.
So I've created this function, which I call from UIViewController whenever I have new data.
func updateStat(status: String, tView: UITextView) {
db.status = db.status + status + "\n"
tView.text = db.status
let range = NSMakeRange(tView.text.count - 1, 0)
tView.scrollRangeToVisible(range)
tView.flashScrollIndicators()
}
The only thing that does not work is the tView.scrollRangeToVisible. However, if from UIViewController I call:
updateStat(status: "...new data...", tView: mySession)
let range = NSMakeRange(mySession.text.count - 1, 0)
mySession.scrollRangeToVisible(range)
then the UITextView's scrollRangeToVisible does work.
I'm curious if anyone knows why this works when called within the UIViewController, but not when called from a function?
p.s. I have also tried making updateStat an extension on UIViewController, but that doesn't work either.
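For reference, a minimal sketch of one guess at the cause: the text view may not have laid out the new text yet when the helper calls scrollRangeToVisible, so forcing layout and deferring the scroll to the next main-queue pass may behave differently:
func updateStat(status: String, tView: UITextView) {
    db.status = db.status + status + "\n"
    tView.text = db.status
    tView.layoutIfNeeded()                     // make sure the new text has been measured
    DispatchQueue.main.async {                 // defer the scroll until after this layout pass
        let range = NSRange(location: tView.text.count - 1, length: 0)
        tView.scrollRangeToVisible(range)
        tView.flashScrollIndicators()
    }
}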
I am looking for a proper tutorial on how to write let's say a messaging app.
In other words, the user doesn't have to be running the messaging app to get messages. I would like to build that type of structured app. I realize that "push notifications" appear to be the way to go.
But at this point I still can't find a decent tutorial that seems to cover all the bases.
Thank you
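For reference, a minimal sketch of the client-side piece of push notifications (the server side, storing device tokens and sending through APNs, is separate and not shown):
import UIKit
import UserNotifications

func registerForPush(application: UIApplication) {
    UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { granted, _ in
        guard granted else { return }
        DispatchQueue.main.async {
            application.registerForRemoteNotifications()   // asks APNs for a device token
        }
    }
}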
I am looping through an audio file, below is my very simple code.
Am looping through 400 frames each time, but I picked 400 as a random number.
I would prefer to read in by time instead, let's say a quarter of a second. So I was wondering: how can I determine the time length of each frame in the audio file?
I am assuming that determining this might differ based on audio formats? I know almost nothing about audio.
guard var buffer = AVAudioPCMBuffer(pcmFormat: input.processingFormat, frameCapacity: AVAudioFrameCount(input.length)) else {
return nil
}
var myAudioBuffer = AVAudioPCMBuffer(pcmFormat: input.processingFormat, frameCapacity: 400)!
while (input.framePosition < input.length - 1 ) {
let fcIndex = ( input.length - input.framePosition > 400) ? 400 : input.length - input.framePosition
try? input.read(into: myAudioBuffer, frameCount: AVAudioFrameCount(fcIndex))
let volUme = getVolume(from: myAudioBuffer, bufferSize: myAudioBuffer.frameLength)
...manipulation code
}
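For reference, a sketch of how frames relate to time: the sample rate of the file's processing format is the number of frames per second, so a quarter of a second is simply sampleRate * 0.25 frames:
let sampleRate = input.processingFormat.sampleRate                // frames per second, e.g. 44100
let framesPerQuarterSecond = AVAudioFrameCount(sampleRate * 0.25) // e.g. 11025 frames
let quarterSecondBuffer = AVAudioPCMBuffer(pcmFormat: input.processingFormat,
                                           frameCapacity: framesPerQuarterSecond)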
I have this long, kludgy bit of code that works. I've outlined it below, rather than pasting it all, so as not to be confusing, since there is no error within my code.
I just need to know if there's a method that already exists to copy a specific part of an AVAudioPCMBuffer to a new AVAudioPCMBuffer?
So if I have an AVAudioPCMBuffer of 10,000 frames, and I just want frames 500 through 3,000 copied into a new buffer (formatting and all) without altering the old buffer...is there a method to do this?
My code detects silent moments in an audio recording
I currently read an audio file into an AVAudioPCMBuffer (audBuffOne)
I loop through the buffer and detect the starts and ends of silence
I record their positions in an array
This array holds what frame position I detect voice starting (A) and which frame position the voice ends (B), with some padding of course
... new loop ...
I loop through my array to go through each A and B framePositions
Using the sample size from the audio file's formatting info, I create a new AVAudioPCMBuffer (audBuffTwo) large enough to hold from A to B and having the same formatting as audBuffOne
I go back to audBuffOne
Set the framePosition on the audio file to A
Read into audBuffTwo for the proper length to reach frame B
Save audBuffTwo to a new file
...keep looping
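For reference, a minimal sketch of a manual frame-range copy, assuming a non-interleaved Float32 format; the function and parameter names are placeholders:
func copyFrames(from src: AVAudioPCMBuffer, start: AVAudioFrameCount, count: AVAudioFrameCount) -> AVAudioPCMBuffer? {
    guard start + count <= src.frameLength,
          let dst = AVAudioPCMBuffer(pcmFormat: src.format, frameCapacity: count),
          let srcData = src.floatChannelData,
          let dstData = dst.floatChannelData else { return nil }
    for ch in 0..<Int(src.format.channelCount) {
        // each channel is a contiguous run of Float samples in a non-interleaved buffer
        for i in 0..<Int(count) {
            dstData[ch][i] = srcData[ch][Int(start) + i]
        }
    }
    dst.frameLength = count   // mark how many of the copied frames are valid
    return dst
}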
Trying to convert my old URL functionality to SwiftUI with async/await. While the skeleton below works, am puzzled as to why I must place my call to the method within a Task. When I don't use Task I get the error:
Cannot pass function of type '() async -> Void' to parameter expecting synchronous function type
From the examples I've seen, this wasn't necessary.
So either I've misunderstood something, or I am doing this incorrectly, and am looking for a little bit of guidance.
Thank you
Within my SwiftUI view:
Button {
Task {
let request = xFile.makePostRequest(xUser: xUser, script: "requestTicket.pl")
var result = await myNewURL.asyncCall(request: request)
print("\(result)")
}
}
From a separate class:
class MyNewURL: NSObject, ObservableObject {
func asyncCall(request: URLRequest) async -> Int {
do {
let (data, response) = try await URLSession.shared.data(for: request)
guard let httpResponse = response as? HTTPURLResponse
else {
print("error")
return -1
}
if httpResponse.statusCode == 200 {
...
}
} catch {
return -2
}
return 0
}
}
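For comparison, a sketch of the same call using the .task modifier, which accepts an async closure directly (Button's action closure is a synchronous () -> Void, which is why the explicit Task is needed there). Note that .task runs when the view appears rather than on a tap, and the Text here is just a placeholder view:
Text("status")
    .task {
        let request = xFile.makePostRequest(xUser: xUser, script: "requestTicket.pl")
        let result = await myNewURL.asyncCall(request: request)
        print("\(result)")
    }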
Within my SwiftUI view I have multiple view items, buttons, text, etc.
Within the view, the user selects a ticket, and then clicks a button to upload it. The app then sends the ticket to my server, where the server takes time to import the ticket. So my SwiftUI app needs to make repeated URL calls depending on how far the server has gotten with the ticket.
I thought the code below would work. I update the text to show what percentage the server has reached. However, it only works once. I had thought that onAppear would fire with every refresh, since it's in a switch statement.
If I'm reading my debugger correctly, SwiftUI recalls which switch statement was called last and therefore it views my code below as a refresh rather than a fresh creation.
So is there a modifier that would get fired repeatedly on every view refresh? Or do I have to do all my multiple URL calls from outside the SwiftUI view?
Something like onInitialize but for child views (buttons, text, etc.) within the main view?
switch myNewURL.stage {
case .makeTicket:
Text(myNewURL.statusMsg)
.padding(.all, 30)
case .importing:
Text(myNewURL.statusMsg)
.onAppear{
Task {
try! await Task.sleep(nanoseconds: 7000000000)
print ("stage two")
let request = xFile.makeGetReq(xUser: xUser, script: "getTicketStat.pl")
var result = await myNewURL.asyncCall(request: request, xFile: xFile)
}
}
.padding(.all, 30)
case .uploadOriginal:
Text(myNewURL.statusMsg)
.padding(.all, 30)
case .JSONerr:
Text(myNewURL.statusMsg)
.padding(.all, 30)
}
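For reference, a sketch of two modifiers that re-fire when a value changes rather than only when the view first appears (this assumes myNewURL.stage is Equatable, which an enum would be):
Text(myNewURL.statusMsg)
    .task(id: myNewURL.stage) {               // cancels and restarts whenever stage changes
        guard myNewURL.stage == .importing else { return }
        try? await Task.sleep(nanoseconds: 7_000_000_000)
        let request = xFile.makeGetReq(xUser: xUser, script: "getTicketStat.pl")
        _ = await myNewURL.asyncCall(request: request, xFile: xFile)
    }
    .onChange(of: myNewURL.stage) { _, newStage in
        print("stage changed to \(newStage)")  // fires on every change of the observed value
    }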
I hope this isn't too off-topic, but I'm in a bind
My HD is so full that I can't download any new iOS versions in Xcode. Actually, there's a lot I can't do. According to the storage section of System Details, Developer is over 100 GB.
I have a huge external HD and would like to completely remove and clean out Xcode, and then reinstall it on the external HD.
1 - Is this allowed? Some people say that all apps must be installed in the Applications folder. Checking the web I get both answers.
2 - If it is allowed, is there a way to properly clean it out? Every website I found that describes this procedure is touting its own cleanup app.
Thank you
Is there any app out there that lets you browse through a CoreData database? When I first started to learn Swift, an app called Liya seemed to work. But alas, no longer.
It would just make it easier if there were anything out there that let you browse the data directly.
Thanks
Can I open two separate Xcode windows with the same project?
I have multiple monitors, so would love to be able use them to view different files of the same project.
Is there a way?
Before Xcode 15, when I could access the info.plist file, I was able to add exceptions to the App Transport Security Settings so I could connect with my home server, which has no HTTPS, just HTTP.
But in Xcode 15 I have no idea, nor can I buy a clue with Google, how to do this.
Please help!
Thanks
p.s. I should probably add that one site mentioned that going to the Target section of your project allows easy access to Info.plist. Yet for some strange reason, there is no such item under Targets, which is odd, as I can debug my project.
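For reference, the raw keys involved, as they appear in an Info.plist source view (the domain below is a placeholder for a local server):
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>myhomeserver.local</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>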
Am new enough to SwiftUI that there are still some concepts that confuse me. Case in point: .background
The code below is meant to detect when the user drags their finger over different areas, in this case three different size circles placed over each other.
The code works, but I get lost trying to figure out how the logic works.
.background calls a function that's a view builder, yet doesn't return an actual view? Unless Color.clear is the view it's returning?
I have more questions, but might as well start with .background since it comes first? I think?
Thanks
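For reference, a minimal illustration of the modifier's shape, separate from the full code below (the Text and print are placeholders): .background { } takes a @ViewBuilder closure, and whatever view the closure returns is laid out behind the modified view, sized to that view's frame.
Text("Hello")
    .background {
        Color.clear                                        // an invisible view that still gets the Text's frame
            .onAppear { print("background view exists") }  // and can run its own modifiers
    }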
import SwiftUI
struct ContentView: View {
@State private var dragLocation = CGPoint.zero
@State private var dragInfo = " "
@State private var secondText = "..."
private func dragDetector(for name: String) -> some View {
    GeometryReader { proxy in
        // the frame (in global coordinates) of whatever view this detector is the background of
        let frame = proxy.frame(in: .global)
        let isDragLocationInsideFrame = frame.contains(dragLocation)
        // refine the hit test from the rectangular frame to the circle inscribed in it
        let isDragLocationInsideCircle = isDragLocationInsideFrame &&
            Circle().path(in: frame).contains(dragLocation)
        // Color.clear is the view this builder returns; it's invisible but fills the frame
        Color.clear
            .onChange(of: isDragLocationInsideCircle) { oldVal, newVal in
                if dragLocation != .zero {
                    dragInfo = "\(newVal ? "entering" : "leaving") \(name)..."
                }
            }
    }
}
var body: some View {
ZStack {
Color(white: 0.2)
VStack(spacing: 50) {
Text(dragInfo)
.padding(.top, 60)
.foregroundStyle(.white)
Text(secondText)
.foregroundStyle(.white)
Spacer()
ZStack {
Circle()
.fill(.red)
.frame(width: 200, height: 200)
.background { dragDetector(for: "red") }
Circle()
.fill(.white)
.frame(width: 120, height: 120)
.background { dragDetector(for: "white") }
Circle()
.fill(.blue)
.frame(width: 50, height: 50)
.background { dragDetector(for: "blue") }
}
.padding(.bottom, 30)
}
}
.ignoresSafeArea()
.gesture(
DragGesture(coordinateSpace: .global)
.onChanged { val in
dragLocation = val.location
secondText = "\(Int(dragLocation.x)) ... \(Int(dragLocation.y))"
}
.onEnded { val in
dragLocation = .zero
dragInfo = " "
}
)
}
}
#Preview {
ContentView()
}