Reply to NFS on VFS/ZFS with open(..., O_EXCL) ?
It is curious why the NFS server figures out that exclusive is set, then clears va_mode. Oh yeah, this came screaming back to me. When NFS has to do an exclusive create, it attempts to create the file with the metadata cleared, that is, va_mode 0 and the times (va_flags has VA_UTIMES_NULL). It uses atime to hold a create_verf (probably IP and counter), then uses getattr() to confirm the create_verf matches (this client won). If that goes well, it calls setattr() with the correct information.

# We are about to create the file, with O_EXCL, no existing file.
0  nfsrv_create:entry
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
# Which finally calls our vnop_create
0  zfs_vnop_create:entry
# we have created it!
0  zfs_vnop_create:return 0 nfsd
# nfsrv_create now calls setattr - probably to set atime
0  zfs_vnop_setattr:entry
0  zfs_vnop_setattr:return 0 nfsd
# done, and some getattr to verify
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
# send reply code to client
0  nfsrv_rephead:entry
            0 1 2 3 4 5 6 7 8 9 a b c d e f  0123456789abcdef
        0:  00 00 00 00                      ....
# nfsrv_create returns success! We have created the file.
0  nfsrv_create:return 0 nfsd
# this setattr appears to be the next client request, unsure where it is from,
# but probably to set mode/uid/gid
0  nfsrv_setattr:entry
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
# nfsrv_setattr calls mac_vnode_check_open
0  mac_vnode_check_open:entry
0  zfs_vnop_getattr:entry
0  zfs_vnop_getattr:return 0 nfsd
0  zfs_vnop_getattr:entry
1  zfs_vnop_getattr:return 0 nfsd
# this setattr is about to fail.
1  hook_vnode_check_open:return 2 nfsd
1  hook_vnode_check_open:return 0 nfsd
1  vng_vnode_check_open:return 0 nfsd
1  mac_vnode_check_open:return 2 nfsd
1  mac_vnode_check_open:return 2 nfsd
# mac_vnode_check_open failed, error = ESTALE
1  zfs_vnop_getattr:entry
1  zfs_vnop_getattr:return 0 nfsd
1  nfsrv_rephead:entry
            0 1 2 3 4 5 6 7 8 9 a b c d e f  0123456789abcdef
        0:  46 00 00 00                      F...

I guess if I knew where hook_vnode_check_open() is defined, I could perhaps figure out what goes wrong, but I get a bit lost in the MACF macros.
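To make the atime trick concrete, here is a minimal C sketch of the verifier scheme as I understand it. The 32/32 split of create_verf across tv_sec/tv_nsec is my assumption for illustration; xnu's actual nfsrv_create may pack it differently:

/*
 * Sketch of the exclusive-create verifier dance described above.
 * Assumption: the 8-byte create_verf is split high/low across the
 * atime timespec.
 */
#include <time.h>
#include <stdint.h>

/* Stash the client's create verifier in the new file's atime. */
static void
verf_to_atime(uint64_t create_verf, struct timespec *atime)
{
    atime->tv_sec  = (time_t)(create_verf >> 32);       /* high 32 bits */
    atime->tv_nsec = (long)(create_verf & 0xffffffffu); /* low 32 bits  */
}

/*
 * On a retransmitted exclusive CREATE, the server getattr()s the
 * existing file and compares: a match means this client already won
 * the race, so the create is reported as a success.
 */
static int
verf_matches(uint64_t create_verf, const struct timespec *atime)
{
    return atime->tv_sec  == (time_t)(create_verf >> 32) &&
           atime->tv_nsec == (long)(create_verf & 0xffffffffu);
}

Only after the verifier check succeeds does the client send the follow-up setattr() with the real mode/uid/gid, which is the request that trips over mac_vnode_check_open() in the trace above.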
Topic: App & System Services SubTopic: Core OS Tags:
Jan ’22
Reply to Current status of kernel symbolication on M1/arm64?
One of the users noticed something interesting:

objdump -S /Library/Extensions/zfs.kext/Contents/MacOS/zfs | less
0000000000088000 _abd_verify:

Using the first symbol's offset, 0x88000, the stack posted in the panic makes sense. So adjust the load address: 0xfffffe002641c000 - 0x88000 = 0xFFFFFE0026394000.

atos -o /Library/Extensions/zfs.kext/Contents/MacOS/zfs -arch arm64e -l 0xFFFFFE0026394000 0xfffffe00266449d4 0xfffffe002650ab60 0xfffffe002650fad4 0xfffffe002650dc88 0xfffffe0026517798 0xfffffe002763f82c
getf (in zfs) (spl-vnode.c:218)
zfs_file_get (in zfs) (zfs_file_os.c:412)
zfs_ioc_send_new (in zfs) (zfs_ioctl.c:6472)
zfsdev_ioctl_common (in zfs) (zfs_ioctl.c:7603)
zfsdev_ioctl (in zfs) (zfs_ioctl_os.c:234)

Similarly, lldb can be made to work with the offsets minus 0x88000, i.e. treating the image as loaded from 0 - but not when using the load address. The actual panic? "Called vnode_specrdev() on a vnode not CHR/BLK".
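The slide arithmetic in one tiny C program, with the values from this panic (nothing here beyond the subtraction described above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t load_addr    = 0xfffffe002641c000ULL; /* kext load address from the panic log   */
    uint64_t first_symbol = 0x88000ULL;            /* file offset of _abd_verify per objdump */

    /* Base address to hand to atos -l. */
    printf("0x%llx\n", (unsigned long long)(load_addr - first_symbol));
    return 0; /* prints 0xfffffe0026394000 */
}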
Topic: App & System Services SubTopic: Core OS Tags:
May ’21
Reply to kmem_alloc for ARM64
Oh, I was reading https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/vm/vm.html, the section "Allocating Memory In the Kernel Itself" - but it is probably old news. It looks like I can call OSMalloc with our own allocator, with the quantum set above kalloc_kernmap_size. I will try that first (rough sketch below). While I have your attention (do I, though?) - it would be nice if we could use xnu's kalloc.zones, but we struggled with it for quite a long time. The issue was always that the machine would simply panic if it ran out of a zone (say, zone.64). Surely there is a way to be told/warned/notified that a zone is getting full, or under pressure? I would really like to avoid the panic, even if it means stalling an allocation long enough for it to reap. We never found a memory-pressure mechanism we were allowed to call (or it only called us after it had spun looking for memory and was already calling panic).
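For what it's worth, the OSMalloc route I mean, sketched with the libkern/OSMalloc.h tag KPI. The function names and tag string are my own, and whether large requests here really come from the kernel map rather than kalloc.zones is exactly what I need to verify:

/*
 * Minimal sketch of allocating via the OSMalloc tag KPI
 * (libkern/OSMalloc.h) instead of going to kalloc.zones directly.
 */
#include <libkern/OSMalloc.h>

static OSMallocTag zfs_osm_tag; /* hypothetical tag for our allocator */

void
spl_osmalloc_init(void)
{
    zfs_osm_tag = OSMalloc_Tagalloc("org.openzfs.zfs", OSMT_DEFAULT);
}

void *
spl_osmalloc(uint32_t size)
{
    /* May block; OSMalloc_noblock() exists for atomic contexts. */
    return OSMalloc(size, zfs_osm_tag);
}

void
spl_osfree(void *addr, uint32_t size)
{
    /* OSFree() needs the original size passed back in. */
    OSFree(addr, size, zfs_osm_tag);
}

void
spl_osmalloc_fini(void)
{
    OSMalloc_Tagfree(zfs_osm_tag);
}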
Topic: App & System Services SubTopic: Core OS Tags:
Mar ’21