
AVAudioSession lifecycle management
I have a question about AVAudioSession lifecycle management. I already got a lot of help in yesterday's lab appointment - thanks a lot for that - but two questions remain. In particular, I'm wondering how to deal with exceptions thrown by session.setActive() and session.setCategory().

Here is my current understanding, assuming that the session configuration is intended to remain constant throughout the app's life cycle:

- a session needs to be configured when the app first launches, or when the media services are reset
- a session needs to be activated for first use, and after every interruption (e.g. phone call, another app getting access, the app being suspended)

Because we cannot guarantee that a setCategory or setActive call will succeed at all times, our app currently checks whether the session is configured and activated before starting playback, and if not, configures/activates it:

```swift
extension Conductor {
    @discardableResult
    private func configureSessionIfNeeded() -> Bool {
        guard !isAudioSessionConfigured else { return true }
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord,
                                    options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay])
            isAudioSessionConfigured = true
        } catch {
            Logging.capture(error)
        }
        return isAudioSessionConfigured
    }

    @discardableResult
    func activateSessionIfNeeded() -> Bool {
        guard !isAudioSessionActive else { return true }
        guard configureSessionIfNeeded() else { return false }
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setActive(true)
            isAudioSessionActive = true
        } catch {
            Logging.capture(error)
        }
        return isAudioSessionActive
    }
}
```

This, however, requires keeping track of the state of the session:

```swift
class Conductor {
    // Singleton
    static let shared: Conductor = Conductor()

    private var isAudioSessionActive = false
    private var isAudioSessionConfigured = false
}
```

This feels error-prone. Here is how we currently deal with interruptions:

```swift
extension Conductor {
    // AVAudioSession.interruptionNotification
    @objc private func handleInterruption(_ notification: Notification) {
        guard let info = notification.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue)
        else { return }

        if type == .began {
            // WWDC session advice: ignore the "app was suspended" reason. By the time it is
            // delivered (when the app re-enters the foreground), it is outdated and useless
            // anyway. They probably should not have introduced it in the first place, but
            // thought too much information is better than too little and erred on the safe side.
            //
            // While the app is in the background, the user could interact with it from
            // Control Center and, for example, start playback. This will resume the app, and
            // we will receive both the command from Control Center (resume) and the
            // interruption notification (pause), but in undefined order. It's a race
            // condition, solved by simply ignoring the app-was-suspended notification.
            if let wasSuspended = info[AVAudioSessionInterruptionWasSuspendedKey] as? NSNumber,
               wasSuspended == true {
                return
            }
            // FIXME: in the app-was-suspended case, isAudioSessionActive remains true but should be false.
            if playbackState == .playing { pausePlayback() }
            if isRecording { stopRecording() }
            isAudioSessionActive = false
        } else if type == .ended {
            // Resume playback
            guard let optionsValue = notification.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt
            else { return }
            let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
            if options.contains(.shouldResume) {
                startPlayback()
            }
            // NOTE: imagine the session was active, the user stopped playback, and then we got
            // interrupted. When the interruption ends, we will still get a .shouldResume, but
            // we should check our own state to see whether we were even playing before that.
        }
    }

    // AVAudioSession.mediaServicesWereResetNotification
    @objc private func handleMediaServicesWereReset(_ notification: Notification) {
        // We need to completely reinitialise the audio stack here,
        // including redoing the session configuration.
        pausePlayback()
        isAudioSessionActive = false
        isAudioSessionConfigured = false
        configureSessionIfNeeded()
    }
}
```

And here, for full reference, is the rest of this example class: https://gist.github.com/tcwalther/8999e19ab7e3c952d6763f11c984ef70

With the above design, we check at every playback whether we need to configure or activate the session. If we do, and configuration or activation fails, we ignore the playback request and silently fail. We feel that this is a better user experience ("play button not working") than crashing the app or landing in an inconsistent UI state.

I think we could simplify this dramatically if we knew that:

- we'll get an interruption-ended notification alongside the interruption-began notification in case the app was suspended,
- if the app was resumed because of a Control Center media control, the interruption-ended notification will come before the playback request,
- we can trust session.setActive() and session.setCategory() to never throw an exception.

How would you advise simplifying and/or improving this code to correctly deal with AVAudioSession interruption and error cases?
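For illustration, here is the direction I have in mind: drop the cached flags entirely and attempt configuration/activation on demand before each playback. This is only a minimal sketch, and it assumes that re-applying the same category on every playback is safe and cheap - which is part of my question. startPlayback() and Logging.capture() are the same members as in the code above.

```swift
import AVFoundation

extension Conductor {
    /// Sketch: attempt activation on demand instead of caching session state.
    /// If this is safe, isAudioSessionActive and isAudioSessionConfigured go away.
    func startPlaybackIfPossible() {
        let session = AVAudioSession.sharedInstance()
        do {
            // Assumption: re-applying an unchanged category is safe and cheap.
            try session.setCategory(.playAndRecord,
                                    options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay])
            try session.setActive(true)
            startPlayback()
        } catch {
            // Silently fail, as before: the play button does nothing,
            // but the UI stays consistent.
            Logging.capture(error)
        }
    }
}
```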
Replies: 0 · Boosts: 0 · Views: 1.3k · Created: Jun ’21
UIViewPropertyAnimator and application lifecycle
I'm trying to use UIViewPropertyAnimator to animate a progress bar. The progress bar displays the playback time of an audio file.

Approach

0) Given a progress bar and a playback duration:

```swift
let progressBar: UIViewSubclass = ProgressBar(...)
let duration: TimeInterval = ...
```

1) Create an animator from start to finish on the progress bar:

```swift
let animator = UIViewPropertyAnimator(duration: duration, curve: .linear)
animator.pausesOnCompletion = true
progressBar.setProgress(0)
animator.addAnimations { [weak self] in
    guard let self = self else { return }
    self.setProgress(1)
}
animator.pauseAnimation()
```

2) When a file is played, start it with:

```swift
let startTime: TimeInterval = ...
animator.fractionComplete = startTime / duration
animator.continueAnimation(withTimingParameters: nil, durationFactor: 0)
```

This works well. It is CPU-efficient and, with a bit of extra code, supports more features, such as dragging the progress bar to a different playback position.

Problem: App/View Lifecycle

Unfortunately, this approach breaks when sending the app to the background and reopening it. After that, animator.continueAnimation() no longer works, and the animation is stuck at the finish state.

Here is an example project that reproduces the problem: https://github.com/JanNash/AnimationTest/tree/apple-developer-forum-660767

The main logic is in the ViewController: https://github.com/JanNash/AnimationTest/blob/apple-developer-forum-660767/AnimationTest/ViewController.swift

In this project, a simple progress bar is animated after a button press, and the button press restarts the animation from the beginning. After the app is sent to the background and restored to the foreground, the animation no longer works.

Question

How do I fix this problem? Is there maybe something inherent to animations that I didn't understand? I could, for example, imagine that the render server loses the animation when the app goes into the background, and that, as such, animations always have to be recreated when the app - or even a view - re-enters the foreground; a sketch of that workaround follows below. It would be good to know whether this just requires a simple code change to fix, or whether I have misunderstood something conceptually.
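This is the recreate-on-foreground workaround I mean, as a minimal sketch. ProgressBar and setProgress are the types from the example above; ProgressAnimationController is a hypothetical wrapper I introduce here only for illustration, and whether rebuilding is actually necessary is exactly my question.

```swift
import UIKit

/// Sketch: rebuild the animator whenever the app re-enters the foreground,
/// on the assumption that the render server dropped it while backgrounded.
final class ProgressAnimationController {
    private let progressBar: ProgressBar
    private let duration: TimeInterval
    private var animator: UIViewPropertyAnimator
    private var observer: NSObjectProtocol?

    init(progressBar: ProgressBar, duration: TimeInterval) {
        self.progressBar = progressBar
        self.duration = duration
        self.animator = Self.makeAnimator(for: progressBar, duration: duration)

        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.willEnterForegroundNotification,
            object: nil, queue: .main
        ) { [weak self] _ in self?.rebuildAnimator() }
    }

    private static func makeAnimator(for bar: ProgressBar,
                                     duration: TimeInterval) -> UIViewPropertyAnimator {
        // Same setup as step 1 above.
        let animator = UIViewPropertyAnimator(duration: duration, curve: .linear)
        animator.pausesOnCompletion = true
        bar.setProgress(0)
        animator.addAnimations { bar.setProgress(1) }
        animator.pauseAnimation()
        return animator
    }

    private func rebuildAnimator() {
        // Preserve the current position, discard the (presumably dead) animator,
        // and resume from where we left off. Real code would also check whether
        // playback was actually running before resuming.
        let fraction = animator.fractionComplete
        animator.stopAnimation(true)
        animator = Self.makeAnimator(for: progressBar, duration: duration)
        animator.fractionComplete = fraction
        animator.continueAnimation(withTimingParameters: nil, durationFactor: 0)
    }
}
```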
Replies: 2 · Boosts: 0 · Views: 2.8k · Created: Sep ’20
AVAudioEngine stops running when changing input to AirPods
I have trouble understanding AVAudioEngine's behaviour when switching audio input sources.

Expected Behaviour

When switching input sources, AVAudioEngine's inputNode should adopt the new input source seamlessly.

Actual Behaviour

When switching from AirPods to the iPhone speaker, AVAudioEngine stops working. No audio is routed through anymore, yet querying engine.isRunning still returns true. When subsequently switching back to AirPods, it still isn't working, but now engine.isRunning returns false.

Stopping and starting the engine on a route change does not help. Neither does calling reset(). Disconnecting and reconnecting the input node does not help, either. The only thing that reliably helps is discarding the whole engine and creating a new one.

OS

This is on iOS 14, beta 5. I can't test this on previous versions, I'm afraid; I only have one device around.

Code to Reproduce

Here is a minimal code example. Create a simple app project in Xcode (it doesn't matter whether you choose SwiftUI or Storyboard), and give it permission to access the microphone in Info.plist. Create the following file, Conductor.swift:

```swift
import AVFoundation

class Conductor {
    static let shared: Conductor = Conductor()

    private let _engine = AVAudioEngine()

    init() {
        // Session
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(false)
        try! session.setCategory(.playAndRecord, options: [.defaultToSpeaker,
                                                           .allowBluetooth,
                                                           .allowAirPlay])
        try! session.setActive(true)

        _engine.connect(_engine.inputNode, to: _engine.mainMixerNode, format: nil)
        _engine.prepare()
    }

    func start() { try! _engine.start() }
}
```

And in AppDelegate, call:

```swift
Conductor.shared.start()
```

This example routes the input straight to the output. If you don't have headphones, it will trigger a feedback loop.

Question

What am I missing here? Is this expected behaviour? If so, it does not seem to be documented anywhere.
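For completeness, here is roughly the discard-and-recreate workaround that does work for us, as a sketch. RouteChangeWorkaround is a hypothetical standalone class for illustration only; rebuilding the entire engine on every route change feels heavy-handed, which is why I'm asking.

```swift
import AVFoundation

/// Sketch of the only workaround we found: observe route changes and
/// rebuild the engine from scratch. makeEngine() repeats the node setup
/// from Conductor.init above.
final class RouteChangeWorkaround {
    private var engine = RouteChangeWorkaround.makeEngine()
    private var observer: NSObjectProtocol?

    init() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil, queue: .main
        ) { [weak self] _ in
            guard let self = self else { return }
            // Stopping/resetting the old engine is not enough in our testing;
            // we have to throw it away and build a fresh one.
            self.engine.stop()
            self.engine = Self.makeEngine()
            try? self.engine.start()
        }
    }

    private static func makeEngine() -> AVAudioEngine {
        let engine = AVAudioEngine()
        engine.connect(engine.inputNode, to: engine.mainMixerNode, format: nil)
        engine.prepare()
        return engine
    }

    func start() { try? engine.start() }
}
```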
Replies: 2 · Boosts: 1 · Views: 2.4k · Created: Aug ’20
SwiftUI and frequent state updates for animation
I have a use case where I want to animate a sort of progress view based on the current playback position of an audio playback node. I draw the view myself using primitive shapes (in case that matters). Let's say our view consists of two rectangles:

```swift
struct ProgressView: View {
    let progress: CGFloat

    var body: some View {
        GeometryReader { g in
            HStack(spacing: 0) {
                Rectangle().fill(Color.red)
                    .frame(width: g.size.width * self.progress, height: g.size.height)
                Rectangle().fill(Color.blue)
                    .frame(width: g.size.width * (1 - self.progress), height: g.size.height)
            }
        }
    }
}
```

In a different class, I have the following code (simplified; player, playbackTimer, and totalTime are elided):

```swift
class Conductor: ObservableObject {
    @Published var progress: Double = 0

    func play() {
        self.player.play()
        self.playbackTimer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
            self.progress = self.player.currentTime / self.totalTime
        }
    }
}
```

Then I can update my view above as follows:

```swift
struct UpdatedProgressView: View {
    @EnvironmentObject private var conductor: Conductor

    var body: some View {
        ProgressView(progress: CGFloat(conductor.progress))
    }
}
```

This works (assuming I have no typos in the example code), but it's very inefficient. At this point, SwiftUI has to redraw my ProgressView at 20 Hz. In reality, my progress view is not just two rectangles but a more complex shape (a waveform visualisation), and as a result, this simple playback costs 40% CPU time. It makes no difference whether I use drawingGroup() or not.

Then again, I'm quite certain this is not how it's supposed to be done. I'm not using any animation primitives here, and as far as I understand it, the system has to redraw the entire ProgressView every single time, even though only a tiny number of pixels actually changed.

Any hints on how I should change my code to make it more efficient with SwiftUI?
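One direction I've considered is to stop ticking at 20 Hz and instead kick off a single linear animation to the end of the file, letting the system interpolate the frame widths. Below is a minimal sketch of that idea; AnimatedProgressView is hypothetical, seeking and pausing would need extra handling, and whether this actually avoids per-frame body evaluation for a complex shape is part of what I'd like to understand.

```swift
import SwiftUI

struct AnimatedProgressView: View {
    @State private var progress: CGFloat = 0
    let duration: TimeInterval   // remaining playback time, assumed known up front

    var body: some View {
        // ProgressView here is my own struct from the example above.
        ProgressView(progress: progress)
            .onAppear {
                // One state change, one linear animation:
                // no timer and no 20 Hz published updates.
                withAnimation(.linear(duration: duration)) {
                    progress = 1
                }
            }
    }
}
```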
Replies: 4 · Boosts: 1 · Views: 1.9k · Created: Jul ’20