No. That doesn’t buy you any thread safety above and beyond what URLCache already supports.
Got it, I won't bother with the locks then.
Yep, I’ve seen that myself. I believe it comes about because URLCache defers write operations to a background queue, and so your subsequent read operation races with that.
I suspected this. A better behaviour would be: update the in-memory representation synchronously, return that updated representation to readers, and, if needed, do the disk write on a background queue (without affecting readers).
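As a minimal sketch of that memory-first behaviour (this is a hypothetical `MemoryFirstCache` type, not URLCache): writes land in the in-memory store under a briefly-held lock before the method returns, so a read immediately after a store can never race the deferred disk write.

```swift
import Foundation

// Hypothetical cache that applies writes to its in-memory store
// synchronously; only the disk persistence is deferred.
final class MemoryFirstCache {
    private var storage: [URL: Data] = [:]
    private let lock = NSLock()
    private let diskQueue = DispatchQueue(label: "cache.disk", qos: .utility)

    func store(_ data: Data, for url: URL) {
        // 1. Update the in-memory representation immediately.
        lock.lock()
        storage[url] = data
        lock.unlock()
        // 2. Persist on a background queue; readers are unaffected.
        diskQueue.async {
            // writeToDisk(data, url) — omitted in this sketch
        }
    }

    func data(for url: URL) -> Data? {
        lock.lock()
        defer { lock.unlock() }
        return storage[url]   // always reflects the latest store(_:for:)
    }
}
```

With this shape, a `store(_:for:)` followed by a `data(for:)` from any thread returns the new value, which is exactly the property the URLCache behaviour described above lacks.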
Again, keep in mind the big picture here: URLCache was designed to meet the needs of an HTTP client, Safari basically, not as a general-purpose caching API.
I hear you. There are pros and cons of course.
It’s certainly doable but that code is not sufficient. To make this work you need a primitive other than a lock because other threads have to be able to block on the reservation waiting for the ‘lead thread’ to finish the load.
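One way to get that "reservation" semantic in Swift concurrency, without a lock, is an actor that records the in-flight `Task` per URL: the first caller becomes the lead and starts the load, and later callers find the task and await its value. A sketch under that assumption (the `CoalescingLoader` name and shape are hypothetical):

```swift
import Foundation

// Sketch: coalesce concurrent loads so only one 'lead' load runs per URL;
// followers suspend on the same Task rather than blocking a thread.
actor CoalescingLoader {
    private var inFlight: [URL: Task<Data, Error>] = [:]

    func load(
        _ url: URL,
        using fetch: @escaping @Sendable (URL) async throws -> Data
    ) async throws -> Data {
        if let task = inFlight[url] {
            // A lead load is already running: await its result.
            return try await task.value
        }
        // This caller is the lead: reserve the slot and start the load.
        let task = Task { try await fetch(url) }
        inFlight[url] = task
        defer { inFlight[url] = nil }
        return try await task.value
    }
}
```

Because followers `await` rather than block, no thread is parked while the lead load runs, which sidesteps the usual worry about holding a lock across a slow operation.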
I generally hate locking, especially long-term locking; IMHO that's a recipe for deadlocks. I would give the other clients some placeholder data (e.g. a "loading" image, if it's an image) without blocking. Once the data is finally loaded by the lead thread, the corresponding model state changes and the relevant updates are sent so that all interested parties update their representations (e.g. via SwiftUI's observation machinery, etc.).
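That non-blocking placeholder pattern might be sketched like this, with a plain callback standing in for SwiftUI's observation machinery (all names here, `CacheEntry`, `ImageModel`, are hypothetical; the lock is held only for the brief dictionary access, never across a load):

```swift
import Foundation

enum CacheEntry {
    case loading          // placeholder handed out while the lead thread loads
    case ready(Data)      // real value, published once the load completes
}

final class ImageModel {
    private var entries: [URL: CacheEntry] = [:]
    private var observers: [(URL, CacheEntry) -> Void] = []
    private let lock = NSLock()

    func observe(_ handler: @escaping (URL, CacheEntry) -> Void) {
        lock.lock(); observers.append(handler); lock.unlock()
    }

    // Returns immediately with either the cached value or a placeholder;
    // the first caller for a URL becomes the lead and starts the load.
    func entry(for url: URL, startLoad: (URL) -> Void) -> CacheEntry {
        lock.lock()
        if let entry = entries[url] { lock.unlock(); return entry }
        entries[url] = .loading
        lock.unlock()
        startLoad(url)        // lead caller kicks off the real load
        return .loading
    }

    // Called by the lead thread once the data arrives: update the model
    // state, then notify observers so they can replace their placeholders.
    func finishLoad(of url: URL, with data: Data) {
        lock.lock()
        entries[url] = .ready(data)
        let toNotify = observers
        lock.unlock()
        toNotify.forEach { $0(url, .ready(data)) }
    }
}
```

No caller ever blocks: followers render the `.loading` placeholder and get the `.ready` value through the observer callback, which is the same flow SwiftUI's observation would drive.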