Thanks to people on this board I am able to successfully call up a child UIViewController via animation.
This is the buttonAction from the main UIViewController, which calls up setController:
@objc func buttonAction(sender: UIButton!) {
    guard sender is MyButton else { return }
    UIView.transition(with: self.view, duration: 0.5, options: .transitionCurlDown, animations: { [self] in
        self.addChild(setController)
        self.view.addSubview(setController.view)
    }, completion: { [self] _ in
        setController.didMove(toParent: self)
        setController.doLayout()
    })
}
The doLayout method lives within the child:
func doLayout() {
    guard let parent = cView!.view.superview else { return }
    // make sure the child view honors the safe area layout guides
    setConstraints(vc: self, pc: parent)
}
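For reference, a minimal sketch of what a setConstraints(vc:pc:) helper could look like, assuming it simply pins the child's view to the parent view's safe area (the helper itself isn't shown here, so this is only a guess at its intent):

func setConstraints(vc: UIViewController, pc: UIView) {
    vc.view.translatesAutoresizingMaskIntoConstraints = false
    NSLayoutConstraint.activate([
        vc.view.topAnchor.constraint(equalTo: pc.safeAreaLayoutGuide.topAnchor),
        vc.view.bottomAnchor.constraint(equalTo: pc.safeAreaLayoutGuide.bottomAnchor),
        vc.view.leadingAnchor.constraint(equalTo: pc.safeAreaLayoutGuide.leadingAnchor),
        vc.view.trailingAnchor.constraint(equalTo: pc.safeAreaLayoutGuide.trailingAnchor)
    ])
}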
A button within the child, setController, dismisses the child:
@objc func buttonAction(sender: UIButton!) {
    self.willMove(toParent: nil)
    self.removeFromParent()
    self.view.removeFromSuperview()
    self.dismiss(animated: false, completion: nil)
}
Everything works great the first time I call up the child view: it curls down while covering the first/parent view, etc. (Figure 1). But after I dismiss the child view and call it again, the child view scrolls down without really covering the main view; it's a mishmash (Figure 2). Only after all is said and done does the child view cover everything.
So am curious whether I am dismissing something incorrectly.
I've learned the hard way that the specific commands to add a child UIViewController must come in a certain order, especially if I am bringing in the child using animation.
So I'd like to be sure that the order I use to remove a child UIViewController is correct. Yes, what I have below works. But that doesn't mean it's correct.
Thank you
UIView.transition(with: parent, duration: 0.5, options: .transitionCurlUp, animations: { [self] in
    self.willMove(toParent: nil)
    self.removeFromParent()
    self.view.removeFromSuperview()
    self.dismiss(animated: false, completion: nil)
}, completion: nil)
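For comparison, Apple's documentation lists the containment calls in this order: willMove(toParent: nil) first, then removing the view, then removeFromParent(). And since the child was added with addChild rather than presented, the dismiss(animated:completion:) call should be unnecessary; that last point is my reading, not something established above:

self.willMove(toParent: nil)
self.view.removeFromSuperview()
self.removeFromParent()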
It's a great tool from Apple, but I want to delve deeper into its engine as I need to. The documentation doesn't seem to go there. For instance, I can't figure out how to clear the bestTranscription object in speechRecognizer, as it always contains the entire transcription. There are other things I would like to work with as well.
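In case it is useful, bestTranscription accumulates per recognition request, so as far as I can tell the only way to "clear" it is to end the current request and start a fresh one. A sketch, assuming recognitionRequest and recognitionTask properties like the ones in Apple's sample code:

recognitionTask?.cancel()
recognitionTask = nil
recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest!) { result, _ in
    if let result = result {
        // bestTranscription starts empty again for the new request
        print(result.bestTranscription.formattedString)
    }
}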
Has anyone worked with this heavily enough to recommend proper books or paid tutorials?
Many thanks
I was under the impression that, with offline speech-to-text, there was no limit, since the app wouldn't be using Apple's servers in real time.
Yet when I run speechRecognizer.recognitionTask, it quits after one minute.
Did I misread something?
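From what I have read, the one-minute ceiling applies to server-based requests; forcing on-device recognition (iOS 13+) is supposed to lift it. A sketch, assuming an SFSpeechAudioBufferRecognitionRequest:

let request = SFSpeechAudioBufferRecognitionRequest()
if speechRecognizer.supportsOnDeviceRecognition {
    // keep recognition off Apple's servers, avoiding the one-minute limit
    request.requiresOnDeviceRecognition = true
}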
I’m trying to do something really complex with audio streams, i.e. process the stream live, edit it, and then save it in snippets, all while the user is still speaking.
I’m a book person, and reading hardcopy documentation is much easier for me.
Am trying to go from installTap straight to AVAudioFile(forWriting:).
I call:
let recordingFormat = node.outputFormat(forBus: 0)
and I get back:
<AVAudioFormat 0x60000278f750: 1 ch, 48000 Hz, Float32>
But AVAudioFile has a settings parameter of [String : Any], and am curious how to place those values into it so the recording uses the required format.
Hopefully these are the values I need?
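If it helps, AVAudioFormat exposes a settings dictionary that can be passed straight through, so hand-building the [String : Any] shouldn't be necessary. A sketch, with fileURL standing in for wherever the file should go:

let recordingFormat = node.outputFormat(forBus: 0)
let audioFile = try AVAudioFile(forWriting: fileURL, settings: recordingFormat.settings)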
Hello,
Am starting to work with/learn the AVAudioEngine.
Currently am at the point where I would like to be able to read an audio file of a speech and determine whether there are any moments of silence in it.
Does this framework provide any such properties, such as power level, decibels, etc., that I can use to find long enough moments of silence?
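AVAudioEngine itself doesn't meter for you, but the raw Float32 samples in each buffer are enough to compute an average power; a minimal sketch (my own helper, not an AVAudioEngine API):

import AVFoundation

func averagePowerDB(_ buffer: AVAudioPCMBuffer) -> Float {
    guard let samples = buffer.floatChannelData?[0], buffer.frameLength > 0 else { return -160 }
    var sum: Float = 0
    for i in 0..<Int(buffer.frameLength) {
        sum += samples[i] * samples[i]
    }
    let rms = sqrt(sum / Float(buffer.frameLength))
    return 20 * log10(max(rms, .leastNonzeroMagnitude)) // dBFS; values near -160 read as silence
}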
I have a Swift app that records audio in chunks across multiple files; each M4A file is approx. 1 minute long. I would like to go through those files and detect silence, or the lowest level.
While I am able to read a file into a buffer, my problem is deciphering it. Even with Google, all that comes up is "audio players" instead of sites that describe the header and the data.
Where can I find what to look for? Or even whether I should be converting it to a WAV file? But even then I cannot seem to find a tool, or a site, that tells me how to decipher what I am reading.
Obviously it exists, since Siri knows when you've stopped speaking. Just trying to find the key.
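For what it's worth, there should be no need to parse the M4A container by hand: AVAudioFile decodes compressed files to PCM through its processingFormat. A sketch, with url standing in for one of the recorded files:

import AVFoundation

let file = try AVAudioFile(forReading: url)
let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                              frameCapacity: AVAudioFrameCount(file.length))!
try file.read(into: buffer)
// buffer.floatChannelData now holds plain Float32 samples; no header deciphering required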
Am working on a recording app from scratch and it just has the basics. Within my Info.plist I do set the Privacy - Microphone Usage Description key.
Still, I always want to check the microphone privacy permission, because I know people can hit "No" by accident.
However, whatever I try, the app keeps running without waiting for the iOS permission alert to pop up and complete.
let mediaType = AVMediaType.audio
let mediaAuthorizationStatus = AVCaptureDevice.authorizationStatus(for: mediaType)
switch mediaAuthorizationStatus {
case .denied:
    print(".denied")
case .authorized:
    print("authorized")
case .restricted:
    print("restricted")
case .notDetermined:
    print("huh?")
    let myQue = DispatchQueue(label: "get perm")
    myQue.sync {
        AVCaptureDevice.requestAccess(for: .audio, completionHandler: { (granted: Bool) in
            if granted {
            } else {
            }
        })
    }
default:
    print("not a clue")
}
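For what it's worth, requestAccess returns immediately and delivers its answer later on an arbitrary queue, so wrapping it in a sync dispatch won't make the app wait; the usual pattern is to continue only inside the completion handler. A sketch, where startRecording() and showDeniedUI() are hypothetical stand-ins for your own next steps:

case .notDetermined:
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        DispatchQueue.main.async {
            granted ? self.startRecording() : self.showDeniedUI()
        }
    }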
Working on a recording app. I started from scratch, and basically jump right into recording. I made sure to add the Privacy - Microphone Usage Description string.
What strikes me as odd is that the app launches straight into recording. No alert comes up the first time asking the user for permission, which I thought was the norm.
Have I misunderstood something?
override func viewDidLoad() {
    super.viewDidLoad()
    record3()
}

func record3() {
    print("recording")
    let node = audioEngine.inputNode
    let recordingFormat = node.inputFormat(forBus: 0)
    var silencish = 0
    var wordsish = 0
    makeFile(format: recordingFormat)
    node.installTap(onBus: 0, bufferSize: 8192, format: recordingFormat) { [self] (buffer, _) in
        do {
            try audioFile!.write(from: buffer)
            x += 1
            if x > 300 {
                print("it's over sergio")
                endThis()
            }
        } catch { return }
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch let error {
        print("oh catch \(error)")
    }
}
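If the prompt isn't appearing on its own, one way to guarantee it before any recording starts is to ask explicitly through AVAudioSession and only call record3() once permission is granted; a minimal sketch:

AVAudioSession.sharedInstance().requestRecordPermission { [weak self] granted in
    DispatchQueue.main.async {
        if granted {
            self?.record3()   // start the engine only after the user has answered
        }
    }
}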
Xcode 13.3
Whenever I hit Enter in Xcode, it starts the new line with an indentation. It doesn't matter whether I am creating a new line or moving lines down; it always starts the new (or moved) line with an indentation.
This happens when I have:
Prefer Indent Using set to Tabs
regardless of whether I have
Syntax-Aware Indenting: Return checked or unchecked
Any thoughts?
Update: now it somehow does that auto-indent on every line, no matter whether I set it to tabs or spaces. It's as if I broke it. Very annoying; please help!
Am trying to add a file uploader to my iPhone app in Swift, and need help, as am unsure how to save the data returned from the UIDocumentPickerViewController.
My whole document picking code works, and ends with this line of code:
let data = try Data.init(contentsOf: url)
The thing is, I don't do my uploading until the user clicks another button, so I need to save that data. But am unsure how to cast a variable to hold it, then release the original data, and finally free the copy.
I thought this would work
var dataToSend : AnyObject?
but it doesn't
Yes, still have casting issues to learn in Swift
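For what it's worth, Data can be held directly in an optional property; setting the property back to nil releases the bytes once nothing else references them, so no AnyObject cast should be needed. A sketch, with pendingUploadData as a hypothetical property name:

var pendingUploadData: Data?

// in the picker callback:
pendingUploadData = try Data(contentsOf: url)

// after the upload finishes, release it:
pendingUploadData = nil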
I realize I could declare a global var, such as:
var mySpecialSubClass : MySpecialSubClass?
..and then check if it is defined.
Don't know of any reason to do it one way or another, but I was wondering whether there is a search-for-an-instance-of-MySpecialSubClass function or method available?
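There is no built-in find-an-instance-of-a-type lookup that I know of; if the instance lives in the view hierarchy, a recursive search is one option. A sketch, assuming MySpecialSubClass is a UIView subclass:

func findInstance(in root: UIView) -> MySpecialSubClass? {
    for subview in root.subviews {
        if let match = subview as? MySpecialSubClass { return match }
        if let match = findInstance(in: subview) { return match }
    }
    return nil
}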
From a tutorial I pulled the extension below to allow the end user to select a file from the iPhone's storage system.
Everything works fine.
However, I noticed that in the delegate, dismiss is only called if the user chooses cancel. While the view does disappear when a file is selected, I am not sure the view is being properly dismissed internally.
Since I do call present, am I responsible for dismissing it when the user chooses a file as well?
let supportedTypes: [UTType] = [UTType.item]
let pickerViewController = UIDocumentPickerViewController(forOpeningContentTypes: supportedTypes)
pickerViewController.delegate = self
pickerViewController.allowsMultipleSelection = false
present(pickerViewController, animated: true, completion: nil)

extension uploadFile: UIDocumentPickerDelegate {
    func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
        for url in urls {
            guard url.startAccessingSecurityScopedResource() else {
                print("error")
                return
            }
            // ...save chosen file url here in an existing structure
            url.stopAccessingSecurityScopedResource()
        }
    }

    func documentPickerWasCancelled(_ controller: UIDocumentPickerViewController) {
        controller.dismiss(animated: true, completion: nil)
    }
}
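My understanding is that the picker dismisses itself after a selection and the delegate fires afterwards, but if you would rather be explicit, dismissing in the selection callback as well should be harmless:

func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
    controller.dismiss(animated: true, completion: nil)
    // ...then handle urls as above
}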
Below is a quick snippet of where I record audio. I would like to get a sampling of the background audio so that later I can filter out background noise. I figure 10 to 15 seconds should be a good amount of time.
Although I am assuming that it can change depending on the iOS device, the format returned from .inputFormat is :
<AVAudioFormat 0x600003e8d9a0: 1 ch, 48000 Hz, Float32>
Based on the format info, is it possible to make the bufferSize for .installTap just the right size for whatever time I wish to record?
I realize I could set a timer for 10 seconds, stop recording, paste the files together, etc. But if I can avoid all that extra coding, it would be nice.
let node = audioEngine.inputNode
let recordingFormat = node.inputFormat(forBus: 0)
makeFile(format: recordingFormat)
node.installTap(onBus: 0, bufferSize: 8192, format: recordingFormat) { [self] (buffer, _) in
    audioMetering(buffer: buffer)
    print("\(self.averagePowerForChannel0) \(self.averagePowerForChannel1)")
    if self.averagePowerForChannel0 < -50 && self.averagePowerForChannel1 < -50 {
        ...
    } else {
        ...
    }
    do {
        ... write audio file
    } catch { return }
}
audioEngine.prepare()
do {
    try audioEngine.start()
} catch let error {
    print("oh catch \(error)")
}