Thanks again!
I think "very fast" actually understates how significant the performance difference is.
Ha, true. In practice it seems “instant”, to the extent that on APFS, updating huge zip files is not much slower than in-place saving into a package.
I don't know if anyone has ever shipped a solution that worked like this, but... it might be worth thinking about using DiskImages as a "file format".
Interesting! Although cross-platform compatibility might be an issue here.
The "replaceItem(at:...)" documentation actually answers this…
Sorry, I should have been clearer. Thinking about it, I had been tying myself up in knots, and the solution was indeed here all along. I was referring to the circumstances we were discussing before, where we don’t want to do the temp work on the same volume as the destination because the destination volume is slow.
In other words, we have deliberately created the temp folder for updating our file on another volume (e.g. one that supports APFS), because the one created using url(for: .itemReplacementDirectory…) would be too slow, and now we need to move that temp file into place on the other volume.
From your answer I realise I was overlooking the obvious: after doing the work in the fast temp directory, I then need to create a second temp directory on the slower destination volume using url(for: .itemReplacementDirectory…), copy the file across, and then use replaceItemAt from there.
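To make the sequence concrete, here is a minimal sketch of that dance (the function and variable names are mine, not from the sample project): do the heavy work elsewhere, stage the result in an item-replacement directory on the destination’s volume, then swap it into place.

```swift
import Foundation

// Sketch, assuming `fastTempURL` already holds the finished file on the
// fast volume, and `destinationURL` lives on the slower target volume.
func save(updatedFileAt fastTempURL: URL, to destinationURL: URL) throws {
    let fm = FileManager.default
    // The replacement directory is created on the same volume as destinationURL.
    let replacementDir = try fm.url(for: .itemReplacementDirectory,
                                    in: .userDomainMask,
                                    appropriateFor: destinationURL,
                                    create: true)
    defer { try? fm.removeItem(at: replacementDir) }
    // Copy the finished file across to the destination volume…
    let stagedURL = replacementDir.appendingPathComponent(destinationURL.lastPathComponent)
    try fm.copyItem(at: fastTempURL, to: stagedURL)
    // …then atomically swap it into place.
    _ = try fm.replaceItemAt(destinationURL, withItemAt: stagedURL)
}
```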
Yes, it will, at least in my testing. More specifically, I modified your test project to this:
while !finishedSave {
    _ = try fileManager.replaceItemAt(savingURL, withItemAt: tempURL)
    // …
}
This approach wouldn’t work anyway. The nature of this specific error means that you cannot retry replaceItemAt on the same URLs like this, because after the error, savingURL and tempURL have swapped places. So in your sample code, if the second replaceItemAt succeeds, you’ve just replaced the newer version with the older version again, so that the save has effectively done nothing. We’ll only get the result we want when failCount % 2 == 0.
You can test this by logging the expected and actual final content of the file (i.e. log the content of tempURL before the loop, and the content of savingURL after it). Whenever failCount % 2 == 1, you’ll end up with old content at the destination, because of the alternate swapping of the original and new files.
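The parity behaviour can be illustrated with a toy model (a pure simulation, no file I/O; the names and structure are mine): each failed replace has already exchanged the two files, so retrying on the same pair only leaves the new content in place when the failure count is even.

```swift
import Foundation

// Simulates retrying replaceItemAt on the same URL pair, where every
// attempt (failing or not) swaps the contents of destination and temp.
func finalDestinationContent(afterFailures failures: Int) -> String {
    var destination = "old content"
    var temp = "new content"
    var remainingFailures = failures

    var finished = false
    while !finished {
        // The exchange happens whether or not the call "succeeds".
        swap(&destination, &temp)
        if remainingFailures > 0 {
            remainingFailures -= 1 // simulated error; the loop retries
        } else {
            finished = true
        }
    }
    return destination
}
```

With an even failure count the destination ends up holding the new content; with an odd count the stale content is swapped back, matching the failCount % 2 observation above.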
The other problem with retrying replaceItemAt on the same URLs is that, as you note, tempURL (which, after the initial replaceItemAt error, contains the older file that was previously in the ubiquitous storage) still has whatever lock (?) caused the permissions error, so any attempt to use it will continue to fail until the kernel (?) has finished with it.
For these reasons, we were previously talking about making a fresh copy of the updated temp file before trying replace, and calling replaceItemAt on that, so that we keep around a valid copy of the new file with which we can try again. (E.g. Have a working copy in the temp dir, update that, clone it, try replace using the clone, if that fails, try again with a fresh clone of the working copy.)
To update your code using this sort of approach:
var tempCopyURL = tempURL.deletingLastPathComponent().appending(path: UUID().uuidString)
var finishedSave = false
var failCount = 0
while !finishedSave {
    do {
        // Create a clone of our new file for replace.
        try fileManager.copyItem(at: tempURL, to: tempCopyURL)
        // Try to replace using the clone.
        _ = try fileManager.replaceItemAt(savingURL, withItemAt: tempCopyURL)
        try? fileManager.removeItem(at: replacementDirURL) // Clean up.
        finishedSave = true
    } catch {
        failCount += 1
        if failCount == 1 {
            NSLog("First fail on \(count - 1)")
        }
        // Try again on the next pass with a fresh clone.
        tempCopyURL = tempURL.deletingLastPathComponent().appending(path: UUID().uuidString)
    }
}
if failCount > 0 {
    NSLog("\(count - 1) cleared after \(failCount) retries")
}
For me, this succeeds on the first retry every time, because we’re working with a fresh temp file, not the one we’re denied access to. Out of 50,000 saves, I hit the error 150 times, and each time it resolved on the first retry. (It also ensures we end up with the correct version of the file being moved into place.) The disadvantage, of course, is that you’re adding an extra copy of the temp file, which adds overhead on volumes without copy-on-write cloning (i.e. non-APFS volumes).
To return to my original question:
I’m curious though as to whether the bug could occur twice in immediate succession, so that the resave also triggers the error.
Here I was wondering whether we could, on rare occasions, encounter the error twice in immediate succession even with the approach of using a fresh clone of the temp file for each attempt. My suspicion is that this shouldn’t happen, because here’s my wild (and completely uneducated!) guess as to what is happening:
Given that this weird error only happens for ubiquitous files, I’m guessing that the problem occurs when the kernel is intermittently doing something cloud-related with the original file, putting some sort of lock on it that prevents us from deleting it, but not, for some reason, from moving it.
replaceItemAt successfully swaps out the original ubiquitous file for the replacement, but the kernel still has a lock on the original file (which is now in the temp folder) and so won’t allow it to be deleted, so replaceItemAt throws an error.
So if at this point we immediately retry replaceItemAt with a fresh clone, all should be good because the kernel shouldn’t be doing anything yet with the file that was, in the same run loop, just swapped into the destination URL. (At this point in fact the file at the destination URL and the fresh clone we’re replacing it with are identical.)
Does that sound reasonable?
Mostly, you'll want .fileResourceIdentifier. .fileContentIdentifier is an APFS-specific[1] identifier
Thank you. I realised my mistake on this late yesterday while testing.
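For anyone following along, reading that identifier looks roughly like this (a minimal sketch; the value is an opaque object, and per the documentation two URLs refer to the same file-system object if their identifiers compare equal, valid until reboot):

```swift
import Foundation

let fm = FileManager.default
let url = fm.temporaryDirectory.appendingPathComponent("id-demo.txt")
try "hello".write(to: url, atomically: true, encoding: .utf8)

// Fetch the resource identifier for the file at `url`.
let values = try url.resourceValues(forKeys: [.fileResourceIdentifierKey])
if let id = values.fileResourceIdentifier {
    // Identifiers for the same file-system object compare equal via isEqual.
    let again = try url.resourceValues(forKeys: [.fileResourceIdentifierKey])
    print(id.isEqual(again.fileResourceIdentifier))
}
```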
So, given all of the above, I think my approach should be:

1. Make a working copy in a temp dir (if the destination doesn’t support cloning but local storage does, make the working copy on local storage): workingCopyURL.
2. On save, update the working copy.
3. Copy the working copy to a folder created using url(for: .itemReplacementDirectory…): tempURL.
4. Use replaceItemAt, replacing destinationURL with tempURL.
5. If replaceItemAt fails, AND isUbiquitous is true for destinationURL, create a fresh copy of the working copy, and try replaceItemAt again with that. (If the file wasn’t ubiquitous, just throw the error.)
6. If replaceItemAt fails the second time, examine the error to check for this very specific bug, and if it all checks out, move on.
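The steps above might be sketched roughly as follows (all names are illustrative assumptions, and the error inspection for the specific bug is left as a stub):

```swift
import Foundation

// Assumes `workingCopyURL` already holds the updated document (steps 1–2).
func saveWorkingCopy(at workingCopyURL: URL, over destinationURL: URL) throws {
    let fm = FileManager.default
    // Replacement directory on the destination's volume (step 3).
    let replacementDir = try fm.url(for: .itemReplacementDirectory,
                                    in: .userDomainMask,
                                    appropriateFor: destinationURL,
                                    create: true)
    defer { try? fm.removeItem(at: replacementDir) }

    // Stage a fresh clone of the working copy for each attempt.
    func stageClone() throws -> URL {
        let clone = replacementDir.appendingPathComponent(UUID().uuidString)
        try fm.copyItem(at: workingCopyURL, to: clone)
        return clone
    }

    do {
        // First attempt (step 4).
        _ = try fm.replaceItemAt(destinationURL, withItemAt: try stageClone())
    } catch {
        // Only retry for ubiquitous (iCloud) items; otherwise rethrow (step 5).
        guard fm.isUbiquitousItem(at: destinationURL) else { throw error }
        do {
            _ = try fm.replaceItemAt(destinationURL, withItemAt: try stageClone())
        } catch {
            // Second failure: here one would inspect the error for the
            // specific bug discussed above before deciding to move on (step 6).
            throw error
        }
    }
}
```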