I handled everything through touchesEnded. I was curious whether I could hand off the call from touchesCancelled to touchesEnded?
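Something like this rough sketch is what I had in mind, just forwarding the cancelled touches on to my existing handler:

override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
    // A cancelled touch (incoming call, gesture recognizer taking over, etc.)
    // never gets a touchesEnded, so forward it to the same clean-up path.
    touchesEnded(touches, with: event)
}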
I have an SKSpriteNode with an SKAction being run on it:
theGem!.run(premAction, completion: {theGem!.run(repeatAction)})
I can't seem to find the proper steps to run another action, such as:
theGem!.run(endsequence, completion: {theGem!.removeAllActions(); theGem!.run(stopAction)})
Should I stop the previous action first?
Is there a way to turn the repeat part off so that the first SKAction ends smoothly?
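What I'm picturing is something like this rough sketch, where the repeating part runs under a key so it can be pulled off later without removing everything else (premAction, repeatAction, endsequence, and stopAction are just my own actions):

theGem!.run(premAction, completion: {
    // run the repeating action under a key so it can be removed by itself later
    theGem!.run(repeatAction, withKey: "gemLoop")
})

// later, when it is time to wind down:
theGem!.removeAction(forKey: "gemLoop")   // stops only the looping action
theGem!.run(endsequence, completion: { theGem!.run(stopAction) })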
In my app, I have greyBars and one border bar. The border bar keeps the greyBars from falling off the screen.
Only one greyBar is used at a time. When it is completely filled with colored gems, a new bar is created and brought up from the bottom of the screen with the code below.
The result I want is that the new bar (the one with all white diamonds) pushes up the old bar while the new bar remains on the bottom. You can see from the screenshot that somehow the new bar ended up on top, even though it was coming from the bottom.
I slowed down the duration of the greyBar's movement to see what was going wrong.
While the two greyBars do clash with each other, the new one (the one on the bottom) ends up pushing THROUGH the top bar.
My assumption was that the new greyBar would just push up the old bar(s), and remain on the bottom.
Is there some "solidity" type property that I am missing?
// physics setup for the new bar: for now it only collides with other grey bars
myGreyBar[0].physicsBody?.categoryBitMask = bodyMasks.greyBarMask.rawValue
myGreyBar[0].physicsBody?.contactTestBitMask = bodyMasks.blankMask.rawValue
myGreyBar[0].physicsBody?.collisionBitMask = bodyMasks.greyBarMask.rawValue

// unhide the bar, add it to the scene, and slide it up from the bottom
myGreyBar[0].isHidden = false
myGV.gameScene?.addChild(myGreyBar[0])
let moveAction = SKAction.move(to: CGPoint(x: (myGV.safeSceneRect.width/2) - (size.width/2), y: (myGemBase?.size.height)! + (myGV.border?.size.height)! + 200), duration: 10.0)

// once the move finishes, also let the bar collide with the border
myGreyBar[0].run(moveAction, completion: { myGreyBar[0].physicsBody?.collisionBitMask = bodyMasks.borderMask.rawValue | bodyMasks.greyBarMask.rawValue })
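One thing I've been experimenting with (just a rough sketch, and I'm not sure it's the right approach) is driving the new bar with its physics body instead of a move action, on the theory that SKAction.move keeps forcing the position each frame and so overrides whatever the collision wants to do:

// give the new bar an upward velocity and let the physics engine resolve
// the collision against the old bar, instead of forcing the position
myGreyBar[0].physicsBody?.affectedByGravity = false
myGreyBar[0].physicsBody?.allowsRotation = false
myGreyBar[0].physicsBody?.velocity = CGVector(dx: 0, dy: 40)   // slow upward drift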
I know it's uncool to ask vague questions here, but what do they call it when you create a world and follow it with a camera in Swift? Like an RPG? Like Doom?
I want to try and learn that now. More importantly, can it be done without using the Xcode scene builder? Can it be done all via code?
Thanks, as always. Without the forum I would never have gotten much farther than "Hello World!"
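To be concrete, what I'm picturing is something like this rough sketch, built entirely in code with no .sks files (the scene and player names are just placeholders of mine):

import SpriteKit

class WorldScene: SKScene {
    let player = SKSpriteNode(color: .red, size: CGSize(width: 32, height: 32))
    let cam = SKCameraNode()

    override func didMove(to view: SKView) {
        addChild(player)
        // attach a camera to the scene; whatever it points at fills the screen
        addChild(cam)
        camera = cam
    }

    override func update(_ currentTime: TimeInterval) {
        // keep the camera centered on the player as the world scrolls by
        cam.position = player.position
    }
}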
I have a very simple app. All SKSpriteNodes, myBall, myBlue, and myRed.
Only myBall moves, affected by gravity, and bounces off of different objects (myRed and myBlue).
What I can't figure out is how to make myBall bounce harder or softer depending on which body it hits.
I have been playing with the density of all the objects, but it doesn't seem to make any difference. Is there some property I am unaware of? Or are there other methods?
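If I'm reading the SKPhysicsBody docs right, restitution (rather than density) is the bounciness setting; a rough sketch of the per-body setup I have in mind:

// restitution controls how much energy a collision keeps (0 = dead stop, 1 = fully elastic)
myRed.physicsBody?.restitution = 0.2    // ball should come off myRed softly
myBlue.physicsBody?.restitution = 0.9   // ball should come off myBlue hard
myBall.physicsBody?.restitution = 0.5   // the ball has its own restitution as well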
When my app starts up I have my ViewController, which automatically creates my MainScreen (also a view controller). Right after
self.addChild(mainController)
I call a function which sets my constraints
func setConstraints(vc: UIViewController) {
    vc.view.translatesAutoresizingMaskIntoConstraints = false
    var constraints = [NSLayoutConstraint]()
    constraints.append(vc.view.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor))
    constraints.append(vc.view.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor))
    constraints.append(vc.view.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor))
    constraints.append(vc.view.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor))
    NSLayoutConstraint.activate(constraints)
}
All is fine up to this point; the MainScreen is bound by the top and bottom safe areas.
At some point from MainScreen I create another UIViewController.
countController.modalPresentationStyle = .fullScreen
self.present(countController, animated: true, completion: {})
Yet, no matter how hard I try to apply the constraints to the new controller, I crash with the following message:
Unable to activate constraint with anchors <NSLayoutXAxisAnchor...> because they have no common ancestor. Does the constraint or its anchors reference items in different view hierarchies? That's illegal.
I'm too new to figure out where my error is.
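My current guess (just a sketch of what I've been trying, not something I'm sure is right) is that because present() puts countController's view into its own hierarchy, any safe-area constraints would have to live inside countController itself, against its own view:

// inside countController (the presented controller), e.g. in its viewDidLoad
let content = UIView()   // hypothetical content view of mine
content.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(content)
NSLayoutConstraint.activate([
    content.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor),
    content.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor),
    content.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
    content.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor)
])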
Thanks to people on this board I am able to successfully call up a child UIViewController via animation.
This is the buttonAction from the main UIViewController, which calls up setController:
@objc func buttonAction(sender: UIButton!) {
    guard let theButton = sender as? MyButton else { return }
    UIView.transition(with: self.view, duration: 0.5, options: .transitionCurlDown, animations: { [self] in
        self.addChild(setController)
        self.view.addSubview(setController.view)
    }, completion: { [self] _ in
        setController.didMove(toParent: self)
        setController.doLayout()
    })
}
The doLayout method lives within the child:
func doLayout() {
    guard let parent = cView!.view.superview else { return }
    // make sure the UIViewController honors safeAreaLayouts
    setConstraints(vc: self, pc: parent)
}
A button within the child, setController, dismisses itself:
@objc func buttonAction(sender: UIButton!) {
    self.willMove(toParent: nil)
    self.removeFromParent()
    self.view.removeFromSuperview()
    self.dismiss(animated: false, completion: nil)
}
Everything works great the first time I call up the child view controller: it curls down while covering the parent view, etc. (Figure 1). But after I dismiss the child view and call it up again, the child view scrolls down without really covering the main view; it's like a mishmash (Figure 2). Only after all is said and done does the child view cover everything.
So am curious if I am dismissing something incorrectly.
I've learned the hard way that the specific calls to add a child view controller must come in a certain order, especially if I am bringing the child in using animation.
So I'd like to be sure that the order I use to remove a child view controller is correct. Yes, what I have below works, but that doesn't mean it's correct.
Thank you
UIView.transition(with: parent, duration: 0.5, options: .transitionCurlUp, animations: { [self] in
    self.willMove(toParent: nil)
    self.removeFromParent()
    self.view.removeFromSuperview()
    self.dismiss(animated: false, completion: nil)
}, completion: nil)
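For comparison, this is the bare-bones order as I understand Apple's containment docs, with no transition and no dismiss(animated:) call, since this is a child controller rather than a presented one (I'm not sure whether mixing dismiss in is part of my problem):

// standard child view controller removal order
self.willMove(toParent: nil)       // tell the child it is about to leave its parent
self.view.removeFromSuperview()    // take its view out of the hierarchy
self.removeFromParent()            // finally sever the containment relationship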
It's a great tool from Apple, but I want to delve deeper into its engine as I need to, and the documentation doesn't seem to go there. For instance, I can't figure out how to clear the bestTranscription object in speechRecognizer, as it always contains the entire transcription. There are other things I would like to work with as well.
Has anyone worked with this heavily enough to recommend proper books or paid tutorials?
Many thanks
I was under the impression that with offline speech-to-text there was no limit, since the app wouldn't be using Apple's servers in real time.
Yet when I run speechRecognizer.recognitionTask, it quits after one minute.
Did I misread something?
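In case it matters, this is roughly how I have been trying to force offline recognition (just a sketch; I'm not certain this is what lifts the limit, and the audio buffers from my engine tap still have to be appended to the request):

import Speech

let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechAudioBufferRecognitionRequest()

// only ask for on-device recognition where the device/locale supports it
if recognizer.supportsOnDeviceRecognition {
    request.requiresOnDeviceRecognition = true   // keep recognition off Apple's servers
}

let task = recognizer.recognitionTask(with: request) { result, error in
    if let result = result {
        print(result.bestTranscription.formattedString)
    }
}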
I'm trying to do something really complex with audio streams, i.e. process the stream live, edit it, and then save it in snippets, all while the user is still speaking.
I’m a book person, and reading hardcopy documentation is much easier for me.
I am trying to go from the installTap straight to AVAudioFile(forWriting:).
I call:
let recordingFormat = node.outputFormat(forBus: 0)
and I get back:
<AVAudioFormat 0x60000278f750: 1 ch, 48000 Hz, Float32>
But AVAudioFile takes a settings parameter of type [String : Any], and I'm curious how to get those format values into the settings needed to record in the required format.
Hopefully these are the values I need?
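What I'm hoping works is passing the tap format's own settings dictionary straight through (a rough sketch; the file URL is made up):

import AVFoundation

let recordingFormat = node.outputFormat(forBus: 0)
let url = FileManager.default.temporaryDirectory.appendingPathComponent("tap.caf")

// AVAudioFormat already exposes itself as a [String: Any] settings dictionary
let outFile = try! AVAudioFile(forWriting: url, settings: recordingFormat.settings)

node.installTap(onBus: 0, bufferSize: 4096, format: recordingFormat) { buffer, _ in
    // write each tapped buffer straight into the file
    try? outFile.write(from: buffer)
}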
Hello,
I am starting to work with and learn AVAudioEngine.
Currently I am at the point where I would like to be able to read an audio file of a speech and determine whether there are any moments of silence in it.
Does this framework provide any properties, such as power level, decibels, etc., that I can use to find long enough moments of silence?
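The closest thing I've come up with is computing the level myself from the samples in each buffer (a rough sketch; the silence threshold is a guess on my part):

import AVFoundation
import Accelerate

// Rough per-buffer level in decibels, computed from the raw Float32 samples.
func levelInDecibels(_ buffer: AVAudioPCMBuffer) -> Float {
    guard let samples = buffer.floatChannelData?[0] else { return -160 }
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength))   // root-mean-square of the buffer
    return 20 * log10(max(rms, 0.000_000_01))                       // convert to dBFS, avoid log(0)
}

// e.g. levelInDecibels(buffer) < -50 could be treated as "silence" for that stretch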
I have a Swift app that records audio in chunks of multiple files; each M4A file is approximately 1 minute long. I would like to go through those files and detect silence, or the lowest level.
While I am able to read a file into a buffer, my problem is deciphering it. Even with Google, all that comes up is "audio players" instead of sites that describe the header and the data.
Where can I find what to look for? Or should I even be reading it into a WAV file? Even then I cannot seem to find a tool, or a site, that tells me how to decipher what I am reading.
Obviously it exists, since Siri knows when you've stopped speaking. Just trying to find the key.
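As far as I can tell, AVAudioFile will decode the M4A into plain PCM samples for me, so I may not need to parse any headers myself; a rough sketch of what I mean (the path is a placeholder):

import AVFoundation

let url = URL(fileURLWithPath: "/path/to/chunk.m4a")   // placeholder path
let file = try! AVAudioFile(forReading: url)

// processingFormat is uncompressed PCM, even though the file on disk is M4A/AAC
let format = file.processingFormat
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(file.length))!
try! file.read(into: buffer)

// buffer.floatChannelData now holds plain Float32 samples that can be scanned for low levels
print("decoded \(buffer.frameLength) frames at \(format.sampleRate) Hz")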
I am working on a recording app from scratch and it just has the basics. Within my Info.plist I do set Privacy - Microphone Usage Description.
Still, I always want to check the microphone privacy permission, because I know people can hit "No" by accident.
However, whatever I try, the app keeps running without waiting for the iOS permission alert to pop up and complete.
let mediaType = AVMediaType.audio
let mediaAuthorizationStatus = AVCaptureDevice.authorizationStatus(for: mediaType)

switch mediaAuthorizationStatus {
case .denied:
    print(".denied")
case .authorized:
    print("authorized")
case .restricted:
    print("restricted")
case .notDetermined:
    print("huh?")
    let myQue = DispatchQueue(label: "get perm")
    myQue.sync {
        AVCaptureDevice.requestAccess(for: .audio, completionHandler: { (granted: Bool) in
            if granted {
            } else {
            }
        })
    }
default:
    print("not a clue")
}
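My current understanding (which may be wrong) is that requestAccess returns immediately and the completion handler fires later, so wrapping it in a sync queue doesn't make anything wait; the only code guaranteed to run after the user answers the alert is the handler itself. A rough sketch of what I'm trying instead (startRecording() is just a placeholder of mine):

import AVFoundation

func startRecording() { print("start recording") }   // placeholder for whatever comes next

func checkMicrophonePermission() {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        startRecording()
    case .notDetermined:
        // the alert is shown asynchronously; continue only inside the handler
        AVCaptureDevice.requestAccess(for: .audio) { granted in
            DispatchQueue.main.async {
                if granted {
                    startRecording()
                } else {
                    print("microphone access refused")
                }
            }
        }
    case .denied, .restricted:
        print("microphone access unavailable")
    @unknown default:
        break
    }
}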