Yes, I believe you need to recursively descend through the directory tree. So you need to read each directory, determine which of its entries are sub-directories, and visit those. This boils down to readdir and stat.
I am aware of one possible optimisation, if you want to squeeze out the last bit of performance. In order to determine whether a directory entry is a sub-directory, rather than a regular file, in general you need to stat it. But when you stat-ed the parent directory you got st_nlink, which tells you the number of hard links to that directory. A directory's own . entry is one hard link to it, its entry in its parent is another, and each subdirectory's .. entry adds one more. So, on filesystems that follow this classic convention, st_nlink is 2 plus the number of subdirectories: if st_nlink is 4, the directory has exactly 2 subdirectories. You can therefore enumerate the directory contents, stat-ing each entry, until you have found st_nlink - 2 subdirectories. At that point you can stop stat-ing everything - the remaining entries must be non-directories.
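A minimal sketch of that approach, using POSIX opendir/readdir/lstat with the st_nlink short-cut. It assumes a filesystem that follows the st_nlink == 2 + subdirectories convention, uses lstat so symlinks are not followed, counts everything that is not a directory, and glosses over most of the robustness points listed further down:

#include <dirent.h>
#include <sys/stat.h>
#include <string>

long count_files_posix(const std::string& dir)
{
    struct stat dir_st;
    if (lstat(dir.c_str(), &dir_st) != 0) return 0;

    // Classic convention: st_nlink == 2 + number of subdirectories.
    nlink_t subdirs_left = dir_st.st_nlink >= 2 ? dir_st.st_nlink - 2 : 0;

    DIR* d = opendir(dir.c_str());
    if (!d) return 0;

    long n_files = 0;
    while (dirent* e = readdir(d)) {
        std::string name = e->d_name;
        if (name == "." || name == "..") continue;

        if (subdirs_left == 0) {
            // All subdirectories have already been found, so this entry
            // cannot be a directory - no need to stat it.
            ++n_files;
            continue;
        }

        std::string full = dir + "/" + name;
        struct stat st;
        if (lstat(full.c_str(), &st) != 0) continue;

        if (S_ISDIR(st.st_mode)) {
            --subdirs_left;
            n_files += count_files_posix(full);   // recurse into the subdirectory
        } else {
            ++n_files;                            // anything that isn't a directory
        }
    }
    closedir(d);
    return n_files;
}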
This is classic UNIX stuff that has worked since forever. It's not impossible that newer Apple filesystems have alternative APIs that make this more efficient; maybe someone else will comment. I also don't know where the bottleneck is: disk access speed, or the number of kernel calls?
Things you need to consider if you want a robust implementation:
Hard links
Symlinks
Hard or symbolic links that introduce cycles (one way to guard against these is sketched after this list)
Other special directory entries
Mount points
Whether to include . and .. and other dotfiles in your count
Unreadable (and unexecutable) directories
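If you do decide to follow symlinks, one common defence against cycles is to record the (st_dev, st_ino) pair of every directory you descend into and refuse to enter one you have already seen. A rough sketch of that idea:

#include <sys/stat.h>
#include <set>
#include <string>
#include <utility>

using dir_id = std::pair<dev_t, ino_t>;

// Returns true if 'path' is a directory we have not visited before,
// and records it; returns false for non-directories and for repeats
// (i.e. a link or mount has led us back somewhere we have already been).
bool enter_directory(const std::string& path, std::set<dir_id>& visited)
{
    struct stat st;
    if (stat(path.c_str(), &st) != 0 || !S_ISDIR(st.st_mode))
        return false;
    return visited.insert({st.st_dev, st.st_ino}).second;
}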
Personally, I'd do something like this:
#include <filesystem>

auto count_files(const std::filesystem::path& p)
{
    int n_files = 0;
    // skip_permission_denied stops the iterator throwing on unreadable directories
    auto opts = std::filesystem::directory_options::skip_permission_denied;
    for (auto& entry : std::filesystem::recursive_directory_iterator(p, opts)) {
        if (entry.is_regular_file()) ++n_files;   // only regular files are counted
    }
    return n_files;
}
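Calling it is then just (the path here is only a placeholder):

#include <iostream>

int main()
{
    std::cout << count_files("/some/path") << '\n';
}

Note that recursive_directory_iterator does not follow directory symlinks unless you ask it to with directory_options::follow_directory_symlink, so the simple version above is already safe against symlink cycles.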