filecopy fails with errno 34 "Result too large" when copying from NAS

A user of my app reported that when the app copies files from a QNAP NAS to a folder on their Mac, they get the error "Result too large" (errno 34, ERANGE). Copying the same files from the Desktop works.

I asked them to reproduce the issue with the sample code below, and they confirmed that it reproduces. They contacted QNAP support, who in turn contacted me saying that they're not sure they can do anything about it and asking whether Apple can help.

Both the app user and QNAP are willing to help, but at this point I'm also unsure how to proceed. Can someone at Apple say anything about this? Is this something QNAP should solve, or is this a bug in macOS?

P.S.: I've had users in the past who reported the same issue with other brands, mostly Synology.

import Cocoa

@main
class AppDelegate: NSObject, NSApplicationDelegate {

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        // First panel: choose the source file or folder.
        let openPanel = NSOpenPanel()
        openPanel.canChooseDirectories = true
        openPanel.runModal()
        let source = openPanel.urls[0]
        
        // Second panel: choose the destination folder.
        openPanel.canChooseFiles = false
        openPanel.runModal()
        let destination = openPanel.urls[0]
        
        do {
            try copyFile(from: source, to: destination.appendingPathComponent(source.lastPathComponent, isDirectory: false))
        } catch {
            NSAlert(error: error).runModal()
        }
        
        NSApp.terminate(nil)
    }
    
    private func copyFile(from source: URL, to destination: URL) throws {
        if try source.resourceValues(forKeys: [.isDirectoryKey]).isDirectory == true {
            try FileManager.default.createDirectory(at: destination, withIntermediateDirectories: false)
            for source in try FileManager.default.contentsOfDirectory(at: source, includingPropertiesForKeys: nil) {
                try copyFile(from: source, to: destination.appendingPathComponent(source.lastPathComponent, isDirectory: false))
            }
        } else {
            try copyRegularFile(from: source, to: destination)
        }
    }
    
    private func copyRegularFile(from source: URL, to destination: URL) throws {
        let state = copyfile_state_alloc()
        defer {
            copyfile_state_free(state)
        }
        var bsize = UInt32(16_777_216) // 16 MiB copy buffer
        if copyfile_state_set(state, UInt32(COPYFILE_STATE_BSIZE), &bsize) != 0 {
            throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
        } else if copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CB), unsafeBitCast(copyfileCallback, to: UnsafeRawPointer.self)) != 0 {
            throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
        } else if copyfile(source.path, destination.path, state, copyfile_flags_t(COPYFILE_DATA | COPYFILE_SECURITY | COPYFILE_NOFOLLOW | COPYFILE_EXCL | COPYFILE_XATTR)) != 0 {
            throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
        }
    }

    private let copyfileCallback: copyfile_callback_t = { what, stage, state, src, dst, ctx in
        if what == COPYFILE_COPY_DATA {
            if stage == COPYFILE_ERR {
                return COPYFILE_QUIT
            }
        }
        return COPYFILE_CONTINUE
    }

}

Just read your reply again more carefully, as it's not the end of the day and I'm not tired :)

As the thread up above shows, the primary way to bypass the issue "copyfile" is having is to use the "..namedfork/rsrc" convention.

Interesting. I was using xattr API. So just skipping the resource fork during copyfile, then running copyfile again over the src and destination (with .namedfork/rsrc convention) seems to work on my dummy test file. Is that expected to be a reliable workaround?
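To make the two-step idea concrete, here's roughly what I'm doing now (a sketch only, and `copySkippingResourceFork` is my own helper name; I've only tried this on a dummy test file). Step 1 skips the "com.apple.ResourceFork" xattr by returning COPYFILE_SKIP from the status callback, and step 2 copies the fork itself through the ..namedfork/rsrc path:

```swift
import Foundation

// Step 1 callback: when copyfile is about to copy the resource-fork xattr,
// ask for the xattr name via COPYFILE_STATE_XATTRNAME and skip it.
let skipResourceFork: copyfile_callback_t = { what, stage, state, _, _, _ in
    if what == COPYFILE_COPY_XATTR, stage == COPYFILE_START {
        var name: UnsafeMutablePointer<CChar>? = nil
        if copyfile_state_get(state, UInt32(COPYFILE_STATE_XATTRNAME), &name) == 0,
           let name, String(cString: name) == "com.apple.ResourceFork" {
            return COPYFILE_SKIP
        }
    }
    return COPYFILE_CONTINUE
}

func copySkippingResourceFork(from source: URL, to destination: URL) throws {
    let state = copyfile_state_alloc()
    defer { copyfile_state_free(state) }
    let callback = unsafeBitCast(skipResourceFork, to: UnsafeRawPointer.self)
    guard copyfile_state_set(state, UInt32(COPYFILE_STATE_STATUS_CB), callback) == 0,
          copyfile(source.path, destination.path, state,
                   copyfile_flags_t(COPYFILE_DATA | COPYFILE_SECURITY |
                                    COPYFILE_XATTR | COPYFILE_NOFOLLOW | COPYFILE_EXCL)) == 0
    else { throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno)) }

    // Step 2: copy the resource fork, if any, via the ..namedfork/rsrc convention.
    var info = stat()
    let sourceFork = source.path + "/..namedfork/rsrc"
    if stat(sourceFork, &info) == 0, info.st_size > 0,
       copyfile(sourceFork, destination.path + "/..namedfork/rsrc",
                nil, copyfile_flags_t(COPYFILE_DATA)) != 0 {
        throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno))
    }
}
```

(I haven't verified this against every server/filesystem combination, which is why I'm asking whether it's expected to be reliable.)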

I guess if this actually works reliably, I wouldn't have to restart the copy from scratch or suppress deprecated API warnings. Or maybe there's a good reason to start the copy from scratch with the Carbon file manager?

This was always my understanding of xattrs.

Yes, and I think part of what's helpful here is to not think of the resource fork as "just another xattr". Its age makes it a "special" case.

Interesting. I was using the xattr API. So just skipping the resource fork during copyfile, then running copyfile again over the src and destination (with .namedfork/rsrc convention) seems to work on my dummy test file. Is that expected to be a reliable workaround?

So, the reason I keep asking the "what's actually happening" question is that IF this is in fact a compressed file, then just copying the resource fork won't actually work. There's an additional UF_COMPRESSED file flag that needs to be set, which then "marks" the file as compressed so that the system knows to retrieve and decompress the file contents when the file itself is opened. If you read through the previous threads, you can see that this flag is what ends up "hiding" the presence of the resource fork from the rest of the system.
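For reference, checking for that flag is straightforward: it lives in `st_flags`, so an `lstat` call is enough. A minimal sketch (`isMarkedCompressed` is just an illustrative name):

```swift
import Foundation

// Illustrative helper: report whether UF_COMPRESSED is set in a file's
// st_flags, i.e. whether the system treats it as a transparently
// compressed (decmpfs) file whose real contents live in the xattrs.
func isMarkedCompressed(_ path: String) -> Bool {
    var info = stat()
    guard lstat(path, &info) == 0 else { return false }
    return (info.st_flags & UInt32(UF_COMPRESSED)) != 0
}
```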

My guess is that this IS in fact a compressed file. I mentioned in an earlier message that the resource fork could get large, but that wasn't ACTUALLY that common. Applications often had large resource forks. Part of that was because it was particularly "useful" for apps, but the bigger reason was that apps didn't REALLY need to worry about being "used" anywhere else since, by definition, macOS apps only ever ran on the Mac.

However, if it was a compressed file then it should never have been written to the NAS device as a compressed file. All of our copy engines would have either automatically decompressed it (this is what happens if you copy the file without "knowing" about file compression) or noticed that the target DIDN'T support compression... and decompressed the file.

FYI, you can "fix" these files (by setting UF_COMPRESSED), but that is something the user would need to be involved with, not something you can do automatically[1].
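The re-marking itself is just a chflags(2) call that ORs UF_COMPRESSED into the existing flags, along the lines of the sketch below (`markCompressed` is an illustrative name). Whether the kernel accepts the flag depends on the filesystem and on the file actually carrying valid compression metadata, so failure should be treated as the normal case and surfaced to the user rather than retried:

```swift
import Foundation

// Illustrative sketch only: attempt to re-mark a file as compressed by
// setting UF_COMPRESSED via chflags(2). Returns false if the file can't
// be stat'ed or the kernel rejects the flag.
func markCompressed(_ path: String) -> Bool {
    var info = stat()
    guard lstat(path, &info) == 0 else { return false }
    return chflags(path, info.st_flags | UInt32(UF_COMPRESSED)) == 0
}
```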

That leads back to here:

Yeah, my concern is that if these files are really old, the user may be keeping them around for a long time because they consider them to be important. So I wouldn't want to be blamed if they copied the file in my app, discarded the original, and the copy turned out to be corrupted.

"Blame" is definitely the critical factor here. The worst-case scenario I'd be concerned about is that the files were already damaged when they hit the NAS and UF_COMPRESSED was stripped. You then copy the file "properly" (replicating the file as it exists on the NAS) and then get blamed for both the bad copy AND damaging the original (since you were the last one to "touch" it).

How you go about addressing that doesn't really have any single solution, as it really depends on the larger context of your app, user base, etc. A "basic" consumer-focused app might just post an error/warning and let the user sort it out. If the "job" of your app isn't actually "copying the data", then sometimes you’re better off just highlighting the failure and letting the user solve it with other tools. On the other hand, if copying IS the focus of your app, then that can end up making your app look broken/useless.

Finally, on the "how do I copy it" front:

I guess if this actually works reliably, I wouldn't have to restart copy from scratch or suppress deprecated API warnings. Or maybe there’s a good reason to start the copy from scratch with the Carbon file manager?

The main issue is that if it IS a compressed file, then the data fork will also be empty, so there won't really be much to "restart". My personal inclination would probably be to use Carbon, as that gets you closest to "do what the Finder does", which is the benchmark most developers are concerned with. If you want to go a level "beyond" that so you can copy things the Finder can't/won't, then I think I'd implement that as the last option, after Carbon failed. That path is then presented as "I couldn't properly copy the file and I don't think it's necessarily valid, but here is what I was able to get".

[1] Strictly speaking, you probably could try and identify these files by looking at the resource fork content, but it would always involve guesswork.

__
Kevin Elliott
DTS Engineer, CoreOS/Hardware

Interesting and really good information to know!

IF this is in fact a compressed file, then just copying the resource fork won't actually work. There's an additional UF_COMPRESSED file flag that needs to be set, which then "marks" the file as compressed so that the system knows to retrieve and decompress the file contents when the file itself is opened. If you read through the previous threads, you can see that this flag is what ends up "hiding" the presence of the resource fork from the rest of the system.

However, if it was a compressed file then it should never have been written to the NAS device as a compressed file. All of our copy engines would have either automatically decompressed it (this is what happens if you copy the file without "knowing" about file compression) or noticed that the target DIDN'T support compression... and decompressed the file.

"Blame" is definitely the critical factor here. The worst-case scenario I'd be concerned about is that the files were already damaged when they hit the NAS and UF_COMPRESSED was stripped. You then copy the file "properly" (replicating the file as it exists on the NAS) and then get blamed for both the bad copy AND damaging the original (since you were the last one to "touch" it). [...] My personal inclination would probably be to use Carbon, as that gets you closest to "do what the Finder does", which is the benchmark most developers are concerned with. FYI, you can "fix" these files (by setting UF_COMPRESSED), but that is something the user would need to be involved with, not something you can do automatically[1].

In regards to the potential compression situation, I'm assuming Finder copying doesn't involve the user or set UF_COMPRESSED? Or does it? If it just copies the file, and that file remains 'unusable', then I guess copyfile's behavior (failing) would be better in that case.

Now, if there is someone out there just hoarding really old files with resource forks, Finder's behavior is obviously superior.

As you demonstrated, all it takes to get copyfile to fail is to feed it a file with a resource fork. And the open source code indicates that it at least tries to copy it. So I guess I just want to copy the data the user feeds me. Whether the file is already broken due to a compression leak etc. is outside my app's bounds. My app isn't a "data repair tool," it isn't in that genre, so I'm just going to have to settle for copying whatever I get from the pasteboard. In the case of a file with a resource fork, I'd hope it's a very old file from ancient times and not a broken one.

It'd be great if Finder shared the same copy engine with the rest of us; at least then whatever issues exist would be consistent system-wide. But I get it... it probably won't ever happen.
