[quote='885813022, DTS Engineer, /thread/824156?answerId=885813022#885813022']
Some of the information here may already be obvious or well understood to you.
[/quote]
Thank you for taking the time to write all this down. That context is very helpful, especially as I'm not super familiar with the lower-level Unix APIs involved here.
[quote='885809022, DTS Engineer, /thread/824156?answerId=885809022#885809022']
I don't think that's a safe assumption and, in practice, I think you're very likely to see lots of cases where a lookup ISN'T generated. I don't think a basic "ls" will generate a lookup call and I'd expect/hope the Finder would avoid it at least some of the time.
[/quote]
The second expectation doesn't seem to match what happens in reality, at least on macOS 15.7.5. I've been testing with a sample file system that has a directory containing 10,000 items. When I run time ls /Volumes/MyFS/dir, I see a lookupItem call for every single item (actually, two per item: one for file1.txt and another for ._file1.txt, which doesn't exist). And if I open /Volumes/MyFS in the Finder, it immediately goes wild making lookupItem calls for the thousands of items in /Volumes/MyFS/dir before I even open dir (although I do have the "Calculate all sizes" view option enabled in the Finder; when I turn that off, it at least waits until I open dir).
Edit: OK, actually, it seems my shell adds extra stuff to a "default" ls invocation... probably oh-my-zsh or something. When I run it in a plain bash shell, a regular ls doesn't make all those lookup calls. But it also doesn't ask for any of the heavy attributes that reach my code path that instantiates the FSItem object, so that specific case doesn't trigger the leak in my code.
These operations (the ls and opening the big folder in the Finder) are really slow, and while a lot of it seems to simply be FSKit overhead (FB21069313), I'm currently trying to optimize my own code to avoid some of the bottleneck on my side.
I guess, writing this down, it really looks like the "FSKit overhead" might be the tons of lookupItem calls... so if that's not intended behavior, then yeah, sounds like my assumption wouldn't work and I should be trying to figure out when to prune these objects.
[quote='885809022, DTS Engineer, /thread/824156?answerId=885809022#885809022']
That bottleneck either provides the cached object it already has or creates a new object (adding it to the cache) if it doesn't have one, ensuring that there is never more than one object for any given file system object.
[/quote]
Yeah, I currently have something like that. I'm relying on reclaimItem calls to know when to release the bottleneck's hold on the object, but while rereading my code I realized I might never reclaim an item if the system never actually receives it! Hence why I came to the forums: to find out whether this is actually a problem or not.
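For reference, the shape of what I have is roughly this (a minimal sketch, not real FSKit API; the ItemCache type and inode keying are my own, and the real code hooks this into lookupItem/reclaimItem handlers):

```swift
import Foundation

// Hypothetical single-bottleneck cache: at most one live object per on-disk
// inode, held until the reclaim path drops it.
final class ItemCache<Item: AnyObject> {
    private var items: [UInt64: Item] = [:]  // keyed by inode number
    private let lock = NSLock()

    // Return the cached object for this inode, or create and cache one.
    // This is what the lookup path calls, so two concurrent lookups of the
    // same inode can never produce two distinct objects.
    func item(for inode: UInt64, make: () -> Item) -> Item {
        lock.lock(); defer { lock.unlock() }
        if let existing = items[inode] { return existing }
        let fresh = make()
        items[inode] = fresh
        return fresh
    }

    // Called from the reclaim path: drop the cache's strong hold so the
    // object can deinit once the system releases its own references.
    func reclaim(_ inode: UInt64) {
        lock.lock(); defer { lock.unlock() }
        items[inode] = nil
    }
}
```

The worry above is exactly the gap in this design: if an object goes into the cache during a lookup but the system never takes delivery of it, reclaim() is never called for that inode and the entry lives forever.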
[quote='885809022, DTS Engineer, /thread/824156?answerId=885809022#885809022']
First off, as background context, I have a post on this here which is worth reading.
[/quote]
Oh hey, that's an answer to one of my previous questions :)
[quote='885809022, DTS Engineer, /thread/824156?answerId=885809022#885809022']
As a block storage file system, one thing you should be looking at/planning is to transition to FSVolumeKernelOffloadedIOOperations and away from FSVolumeReadWriteOperations.
[/quote]
Got it. I do have that implemented, although right now I'm rejecting any request that has the write flag set. I also haven't implemented any of the supporting write functionality (creating files, changing attributes, etc.), which is probably going to be the hard part of that transition.