I have a scene on top of the main ViewController. While that scene will have objects in it, I'd like the background to be clear.
However,
myView.backgroundColor = .clear
myView.allowsTransparency = true
produces a black box.
Am I missing some step?
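For reference, here is a minimal sketch of what I believe the full setup needs, assuming the SKScene's own backgroundColor also has to be cleared (myView and myScene are my names for the SKView and scene):

myView.allowsTransparency = true    // let the view composite over what's behind it
myView.backgroundColor = .clear     // clear the UIKit layer

myScene.backgroundColor = .clear    // the scene paints its own background too
myView.presentScene(myScene)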
I wanted to use a png image to create a pattern for an SKSpriteNode. Supposedly:
Pattern images are not supported via UIColor in SpriteKit
So I am supposed to use an .fsh file for shading. The thing is, can I create such a file from an image? Everywhere I've looked shows only mathematical methods for creating those files.
I hope this is something that is possible.
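For what it's worth, here is the direction I'm imagining, a sketch that assumes SKShader can sample the image through a texture uniform (the uniform name u_pattern and the file pattern.png are mine):

// fragment shader source: tile the sampled image across the node
let source = """
void main() {
    vec2 tiled = fract(v_tex_coord * 4.0);      // repeat the pattern 4x
    gl_FragColor = texture2D(u_pattern, tiled);
}
"""
let shader = SKShader(source: source)
shader.uniforms = [SKUniform(name: "u_pattern",
                             texture: SKTexture(imageNamed: "pattern.png"))]
mySprite.shader = shader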
I am using the SQLite.swift wrapper in Xcode. I got it from the link below and did install it.
But I was hoping there would be better documentation or tutorials out there for it. Am new enough at Swift and its syntax. Whatever can make this easier for me would be a big help.
https://git.pado.name/reviewspur/ios/tree/fd2486cf91e422e2df8d048ffd2d40ea89527685/Carthage/Checkouts/SQLite.swift/Documentation#building-type-safe-sql
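In case it helps anyone answer: the basic type-safe pattern from that documentation, as far as I can make out (the table and column names are just examples):

import SQLite

let db = try Connection("path/to/db.sqlite3")

let users = Table("users")
let id = Expression<Int64>("id")
let name = Expression<String>("name")

// CREATE TABLE "users" ("id" INTEGER PRIMARY KEY NOT NULL, "name" TEXT NOT NULL)
try db.run(users.create { t in
    t.column(id, primaryKey: true)
    t.column(name)
})

// INSERT INTO "users" ("name") VALUES ('Alice')
try db.run(users.insert(name <- "Alice"))

// SELECT * FROM "users"
for user in try db.prepare(users) {
    print(user[name])
}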
Am making a game which will have 6 interactive SKSpriteNodes. Is there anything wrong with having each node handle its own user interactions? Or is it better to have all touch interactions handled through one scene (GameScene)?
Or is there, perhaps, no difference?
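To make the question concrete, by "each node handles its own" I mean something like this sketch (GemNode is a made-up name):

import SpriteKit

class GemNode: SKSpriteNode {
    override init(texture: SKTexture?, color: UIColor, size: CGSize) {
        super.init(texture: texture, color: color, size: size)
        // nodes ignore touches unless they opt in
        isUserInteractionEnabled = true
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // react here instead of routing everything through GameScene
    }
}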
I have an SKSpriteNode with an SKAction being run on it:
theGem!.run(premAction, completion: {theGem!.run(repeatAction)})
I can't seem to find the proper steps to run another action later, such as:
theGem.run(endsequence, completion: {theGem.removeAllActions(); theGem.run(stopAction)})
Should I stop the previous action first?
Is there a way to turn the repeat part off so that the first SKAction ends smoothly?
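One idea I'm toying with, a sketch using action keys so the repeating part can be removed on its own (the key name is mine):

// start the looping part under a key
theGem.run(SKAction.sequence([premAction, SKAction.repeatForever(repeatAction)]),
           withKey: "gemLoop")

// later: pull just the loop, then play the ending
theGem.removeAction(forKey: "gemLoop")
theGem.run(endsequence, completion: { theGem.run(stopAction) })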
In my app, I have greyBars and one border bar. The border bar keeps the greyBars from falling off the screen.
Only one greyBar is used at a time. When it is completely filled with colored gems, a new bar is created and brought up from the bottom of the screen with the code below.
The result I want is that the new bar (the one with all white diamonds) pushes up the old bar while the new bar remains on the bottom. You can see from the screenshot that somehow the new bar ended up on top, even though it was coming from the bottom.
I slowed down the duration of the greyBar's movement to see what was going wrong.
While the two greyBars do clash with each other, the new one (the one on the bottom) ends up pushing THROUGH the top bar.
My assumption was that the new greyBar would just push up the old bar(s), and remain on the bottom.
Is there some "solidity" type property that I am missing?
// the new bar starts out colliding only with other grey bars
myGreyBar[0].physicsBody?.categoryBitMask = bodyMasks.greyBarMask.rawValue
myGreyBar[0].physicsBody?.contactTestBitMask = bodyMasks.blankMask.rawValue
myGreyBar[0].physicsBody?.collisionBitMask = bodyMasks.greyBarMask.rawValue
myGreyBar[0].isHidden = false
myGV.gameScene?.addChild(myGreyBar[0])

// slide the new bar up from below the border
let moveAction = SKAction.move(to: CGPoint(x: (myGV.safeSceneRect.width / 2) - (size.width / 2),
                                           y: (myGemBase?.size.height)! + (myGV.border?.size.height)! + 200),
                               duration: 10.0)
myGreyBar[0].run(moveAction, completion: {
    // once in place, also collide with the border so it can't fall off screen
    myGreyBar[0].physicsBody?.collisionBitMask = bodyMasks.borderMask.rawValue | bodyMasks.greyBarMask.rawValue
})
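For reference, the "solid" setup I assumed would make the bars push each other rather than overlap, sketched out below. My understanding (which may be the flaw) is that a node moved by an SKAction is not itself stopped by collisions, even when the masks line up:

for bar in myGreyBar {
    bar.physicsBody?.isDynamic = true          // let physics push it around
    bar.physicsBody?.allowsRotation = false    // keep the bars level
    bar.physicsBody?.categoryBitMask = bodyMasks.greyBarMask.rawValue
    // collide with both other bars and the border from the start
    bar.physicsBody?.collisionBitMask = bodyMasks.greyBarMask.rawValue | bodyMasks.borderMask.rawValue
}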
I have a very simple app with three SKSpriteNodes: myBall, myBlue, and myRed.
Only myBall moves, affected by gravity, and bounces off of different objects (myRed and myBlue).
What I can't figure out is how to make myBall bounce harder or softer depending on which body it hits.
I have been playing with the density of all the objects, but it doesn't seem to make any difference. Is there some property I am unaware of? Or are there other methods?
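For concreteness, the kind of per-body tuning I'm after, assuming restitution turns out to be the relevant property (the values are guesses):

myBall.physicsBody?.restitution = 0.5   // the ball's own bounciness
myRed.physicsBody?.restitution = 0.9    // rebound hard off red
myBlue.physicsBody?.restitution = 0.1   // rebound soft off blue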
When my app starts up I have my ViewController, which automatically creates my MainScreen (also a view controller). Right after
self.addChild(mainController)
I call a function which sets my constraints
func setConstraints(vc: UIViewController) {
    // pin the child controller's view to this controller's safe area
    vc.view.translatesAutoresizingMaskIntoConstraints = false
    var constraints = [NSLayoutConstraint]()
    constraints.append(vc.view.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor))
    constraints.append(vc.view.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor))
    constraints.append(vc.view.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor))
    constraints.append(vc.view.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor))
    NSLayoutConstraint.activate(constraints)
}
All is fine up to this point; the MainScreen is bound by the top and bottom safe areas.
At some point from MainScreen I create another UIViewController.
countController.modalPresentationStyle = .fullScreen
self.present(countController, animated: true, completion: {})
Yet, no matter how hard I try to apply the constraints to the new controller, I crash with the following message:
"Unable to activate constraint with anchors <NSLayoutXAxisAnchor...> because they have no common ancestor. Does the constraint or its anchors reference items in different view hierarchies? That's illegal."
Am too new to figure out where my error is.
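My current guess, sketched below: a controller presented with .fullScreen is not a subview of the presenter, so its anchors can't be mixed with the presenter's; any constraints would have to stay inside countController's own hierarchy, something like this (the class name and content view are illustrative):

class CountController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let content = UIView()
        content.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(content)
        // anchors never leave this controller's own view hierarchy
        NSLayoutConstraint.activate([
            content.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor),
            content.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor),
            content.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
            content.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor)
        ])
    }
}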
Building a very simple voice-to-text app, which I got from an online demo.
What I can't seem to find is how to reset the response back to nil. This demo just keeps transcribing from the very beginning till it finally stalls.
While I don't know if the stall is related to my question, I still need to find out how to code "OK, got the first 100 words. Reset the response text to nil. Continue."
func startSpeechRecognition() {
    let node = audioEngine.inputNode
    let recordingFormat = node.outputFormat(forBus: 0)

    // feed microphone buffers into the recognition request
    node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
        self.request.append(buffer)
    }

    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        alertView(message: "audioEngine start error: \(error.localizedDescription)")
    }

    guard let myRecognition = SFSpeechRecognizer() else {
        self.alertView(message: "Recognition is not on your phone")
        return
    }
    if !myRecognition.isAvailable {
        self.alertView(message: "recognition is not available right now")
        return   // don't start a task that can't deliver results
    }

    // note: the task runs on the speechRecognizer property, while the
    // availability check above used a fresh SFSpeechRecognizer instance
    task = speechRecognizer?.recognitionTask(with: request) { response, error in
        guard let response = response else {
            if let error = error {
                self.alertView(message: error.localizedDescription)
            } else {
                self.alertView(message: "Unknown error in creating task")
            }
            return
        }
        // bestTranscription holds everything recognized so far
        let message = response.bestTranscription.formattedString
        self.label.text = message
    }
}
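The closest thing to a reset I can picture, sketched here: since the transcription accumulates for the life of a request, finish the current task and start over with a fresh request (assuming request is a var of type SFSpeechAudioBufferRecognitionRequest):

func restartRecognition() {
    audioEngine.inputNode.removeTap(onBus: 0)   // the tap gets reinstalled below
    request.endAudio()    // close out the current request
    task?.cancel()        // stop results from the old task
    task = nil
    // a fresh request means bestTranscription starts empty again
    request = SFSpeechAudioBufferRecognitionRequest()
    startSpeechRecognition()
}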
It's a great tool from Apple, but I want to delve deeper into its engine than the documentation goes. For instance, I can't figure out how to clear the bestTranscription object in speechRecognizer, as it always contains the entire transcription. There are other things I would like to work with as well.
Has anyone worked with this heavily enough to recommend proper books or paid tutorials?
Many thanks
I’m trying to do something really complex with audio streams, i.e. process the stream live, edit it, and then save it in snippets, all while the user is still speaking.
I’m a book person, and reading hardcopy documentation is much easier for me.
Am trying to go from the installTap straight to AVAudioFile(forWriting:
I call:
let recordingFormat = node.outputFormat(forBus: 0)
and I get back :
<AVAudioFormat 0x60000278f750: 1 ch, 48000 Hz, Float32>
But AVAudioFile has a settings parameter of [String : Any], and am curious how to turn those format values into the settings the recording requires.
Hopefully these are the values I need?
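What I've pieced together so far, as a sketch: AVAudioFormat seems to expose a matching .settings dictionary, which would avoid building the [String : Any] by hand (fileURL is a placeholder):

let node = audioEngine.inputNode
let recordingFormat = node.outputFormat(forBus: 0)

// hand the format's own settings dictionary straight to AVAudioFile
let file = try AVAudioFile(forWriting: fileURL, settings: recordingFormat.settings)

node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
    try? file.write(from: buffer)
}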
I have my Swift app that records audio in chunks of multiple files, each M4A file is approx 1 minute long. I would like to go through those files and detect silence, or the lowest level.
While I am able to read the file into a buffer, my problem is deciphering it. Even with Google, all that comes up is "audio players" instead of sites that describe the header and the data.
Where can I find what to look for? Or should I be converting it to a WAV file first? But even then I cannot seem to find a tool, or a site, that tells me how to decipher what I am reading.
Obviously it exists, since Siri knows when you've stopped speaking. Just trying to find the key.
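The direction I've started sketching, assuming I let AVAudioFile do the M4A decoding rather than parsing headers myself: read decoded PCM chunks and measure their RMS level (the 0.01 threshold is a guess to tune):

import AVFoundation

func findQuietChunks(in url: URL) throws {
    let file = try AVAudioFile(forReading: url)   // decodes the M4A to PCM
    let format = file.processingFormat            // deinterleaved Float32
    let chunkFrames: AVAudioFrameCount = 4096

    while file.framePosition < file.length {
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                            frameCapacity: chunkFrames) else { return }
        try file.read(into: buffer)
        guard let samples = buffer.floatChannelData?[0], buffer.frameLength > 0 else { break }

        // root-mean-square of the chunk as a rough loudness measure
        let n = Int(buffer.frameLength)
        var sum: Float = 0
        for i in 0..<n { sum += samples[i] * samples[i] }
        let rms = (sum / Float(n)).squareRoot()

        if rms < 0.01 {
            print("near-silence ending at frame \(file.framePosition)")
        }
    }
}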
Working on a recording app. I started from scratch and basically jump right into recording. I made sure to add the Privacy - Microphone Usage Description string.
What strikes me as odd is that the app launches straight into recording. No alert comes up the first time asking the user for permission, which I thought was the norm.
Have I misunderstood something?
override func viewDidLoad() {
    super.viewDidLoad()
    record3()
}

func record3() {
    print("recording")
    let node = audioEngine.inputNode
    let recordingFormat = node.inputFormat(forBus: 0)
    var silencish = 0
    var wordsish = 0
    makeFile(format: recordingFormat)

    // write every microphone buffer straight to the file,
    // bailing out after roughly 300 buffers
    node.installTap(onBus: 0, bufferSize: 8192, format: recordingFormat) { [self] buffer, _ in
        do {
            try audioFile!.write(from: buffer)
            x += 1
            if x > 300 {
                print("it's over sergio")
                endThis()
            }
        } catch {
            return
        }
    }

    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch let error {
        print("oh catch \(error)")
    }
}
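In case the automatic prompt is simply not being triggered the way I assumed, here is the explicit request I'm planning to try, with record3() deferred until permission is granted:

override func viewDidLoad() {
    super.viewDidLoad()
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        DispatchQueue.main.async {
            if granted {
                self.record3()
            } else {
                print("microphone permission denied")
            }
        }
    }
}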
Am trying to add a file uploader to my iPhone app in Swift, and need help, as am unsure how to save the data returned from the UIDocumentPickerViewController.
My whole document picking code works, and ends with this line of code:
let data = try Data.init(contentsOf: url)
The thing is, I don't do my uploading till the user clicks another button, so I need to save that data. But am unsure how to cast a variable to hold it, then release the original data, and then finally free the copy.
I thought this would work:
var dataToSend : AnyObject?
but it doesn't.
Yes, I still have casting issues to learn in Swift.
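What I'm now guessing the fix looks like, sketched under the assumption that a plain Data? property (rather than AnyObject) is the right holder; the delegate method is the standard UIDocumentPickerDelegate one, and uploadTapped is my placeholder:

var dataToSend: Data?

// UIDocumentPickerDelegate callback
func documentPicker(_ controller: UIDocumentPickerViewController,
                    didPickDocumentsAt urls: [URL]) {
    guard let url = urls.first else { return }
    dataToSend = try? Data(contentsOf: url)   // hold on to it for the upload
}

@objc func uploadTapped() {
    guard let data = dataToSend else { return }
    // ... upload `data` here ...
    dataToSend = nil   // let ARC free the copy once the upload is done
}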