Ideally (at least in my case) I could just use the when parameter passed to the tap block to figure out where in the audio I am, but the when needs to be converted to player time in order to get a time that makes sense (I need to determine my time relative to the entire audio buffer I scheduled on the player node, not the buffer delivered to the tap block).
Also, when the playerNode is paused, the when parameter doesn't factor the paused time in. The tap block continues to fire while the player node is paused, and since the when time knows nothing about the paused/stopped state, any UI synchronization you do will jump all over the place once you start pausing/resuming. -playerTimeForNodeTime: does account for all of this, but....
I don't think it's safe to call any of the audio engine/player node APIs in the tap block without risking a deadlock. If I'm wrong about all this I'd be grateful for an education. The documentation seems a bit scarce, and the dev forums have been pretty quiet lately.
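For reference, the conversion I'd ideally do per tap callback looks something like this (the bus, buffer size, and wrapper function are just illustrative, not verbatim from my code); it's exactly the kind of call into the player node that I'm worried about making from the tap's thread:

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: playerNode is assumed to be an AVAudioPlayerNode already
// attached to a running AVAudioEngine.
static void installIdealTap(AVAudioPlayerNode *playerNode) {
    [playerNode installTapOnBus:0
                     bufferSize:4096
                         format:nil
                          block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        // Converting node time to player time accounts for pauses/stops,
        // but this calls back into the player node from the tap's thread,
        // which is what I suspect can deadlock.
        AVAudioTime *playerTime = [playerNode playerTimeForNodeTime:when];
        if (playerTime == nil) {
            return; // nil when the node isn't playing
        }
        double seconds = (double)playerTime.sampleTime / playerTime.sampleRate;
        // ...dispatch `seconds` to the main thread to drive the UI...
    }];
}
```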
What I've come up with for now is to sync my own atomic_bool with my calls that pause/play the player node, and read that atomic bool in the tap block instead of if (!playerNode.isPlaying) { return; }
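Roughly, as a sketch (the class name and ivar names are just illustrative):

```objc
#import <AVFoundation/AVFoundation.h>
#import <stdatomic.h>
#import <stdbool.h>

@interface MyPlayer : NSObject
- (void)play;
- (void)pause;
- (void)installTap;
@end

@implementation MyPlayer {
    AVAudioPlayerNode *_playerNode;
    atomic_bool _tapIsPlaying; // mirrors my play/pause calls; safe to read from the tap
}

- (void)play {
    [_playerNode play];
    atomic_store(&_tapIsPlaying, true);
}

- (void)pause {
    // Clear the flag first so the tap stops reporting as soon as possible.
    atomic_store(&_tapIsPlaying, false);
    [_playerNode pause];
}

- (void)installTap {
    atomic_bool *isPlaying = &_tapIsPlaying; // valid as long as self is alive
    [_playerNode installTapOnBus:0
                      bufferSize:4096
                          format:nil
                           block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        // Read my own flag instead of playerNode.isPlaying, so the tap
        // never calls into the node from its own thread.
        if (!atomic_load(isPlaying)) {
            return;
        }
        // ...time bookkeeping goes here (see the counter sketch below)...
    }];
}

@end
```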
Then, to account for the node time/player time situation: every time I schedule a buffer, I reset a counter to 0 that's synchronized with the tap block. In the tap block I increment it on each invocation to compute the sample time relative to the entire scheduled buffer. I only schedule one buffer at a time (for now); I suppose I'd need to figure out a good place to reset the counter to 0 if I scheduled two buffers at once.
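A sketch of the counter part, extending the one above, with the caveat that I'm advancing the counter by each tap buffer's frameLength rather than by 1, since that's what gives me a sample position directly (the buffer size and frame math are illustrative):

```objc
#import <AVFoundation/AVFoundation.h>
#import <stdatomic.h>

@interface MyPlayer : NSObject
- (void)scheduleBuffer:(AVAudioPCMBuffer *)buffer;
- (void)installTap;
@end

@implementation MyPlayer {
    AVAudioPlayerNode *_playerNode;
    atomic_bool _tapIsPlaying;          // set from play/pause as in the sketch above
    atomic_int_fast64_t _framesPlayed;  // frames the tap has seen since the last schedule
}

- (void)scheduleBuffer:(AVAudioPCMBuffer *)buffer {
    // Reset the counter on every schedule, so the tap's sample time is
    // relative to the start of this (single) scheduled buffer.
    atomic_store(&_framesPlayed, 0);
    [_playerNode scheduleBuffer:buffer completionHandler:nil];
}

- (void)installTap {
    atomic_bool *isPlaying = &_tapIsPlaying;
    atomic_int_fast64_t *framesPlayed = &_framesPlayed;
    // Assumes the node is already connected, so the bus format is valid here.
    double sampleRate = [_playerNode outputFormatForBus:0].sampleRate;
    [_playerNode installTapOnBus:0
                      bufferSize:4096
                          format:nil
                           block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        if (!atomic_load(isPlaying)) {
            return; // paused: don't advance the counter, so pauses don't skew the position
        }
        // Position (in frames) at the start of this tap buffer, then advance.
        int64_t startFrame = atomic_fetch_add(framesPlayed, (int64_t)buffer.frameLength);
        double seconds = (double)startFrame / sampleRate;
        dispatch_async(dispatch_get_main_queue(), ^{
            // ...update a label/scrubber with `seconds`...
            (void)seconds;
        });
    }];
}

@end
```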
If the system ever stops my player node (on an error or something), my tap block could be out of sync, since the flag I use to track playerNode.isPlaying isn't bound to 'the truth', but it's better than deadlocking.
If there is a cleaner way to achieve this, I'm all ears.