Does the panic happen when you're just the source, just the destination, or only when you're both? I suspect it will happen when you're just the destination and won't happen when you're just the source, but I'd like to confirm that.
You were right in assuming that the panic occurred when my filesystem was the destination. I was able to verify that.
How fast is the I/O path back to the server? Are you saturating that connection?
The connection is not likely to be saturated, as this is a 100 Gb link on a Thunderbolt 5 interface.
Is the I/O pipeline here simply "read compressed file from network -> decompress data -> write data out to network"? Or is there any intermediary in that process?
The I/O pipeline is as you described it with no intermediary involved.
What actually makes it "back" to the server before the panic occurs? How much data were you actually able to write?
On two subsequent runs, around 41-42 GB out of 64 GB of data were written before the panic ensued.
$ du -smx ./25116_CINEGRAPHER_MARCCAIN
41303	./25116_CINEGRAPHER_MARCCAIN
$ du -smx ./25116_CINEGRAPHER_MARCCAIN
42600	./25116_CINEGRAPHER_MARCCAIN
How does your write process actually "work"? Is there anything that would limit/constrain how much data you have pending (waiting to be sent over the network)?
The source uio_t buffer passed into vnop_write() is in userspace. Before passing the data down to sock_send(), which operates on kernel-resident memory buffers, we create a kernel-space copy of the userspace uio_t buffer, sized to uio_resid(uspace_uio). The copy is performed incrementally by uiomove(), in chunks equal to the smaller of the amount of data left in the userspace buffer and the kernel's copysize_limit_panic.
The kernel-space uio_t buffer is then split into smaller chunks, sized according to the filesystem's design, which end up being passed into sock_send().
Reading is done in a similar fashion, the only difference being the use of sock_receive_mbuf() instead of sock_receive(); the latter uses a uio_t buffer rather than an mbuf.
I'm on to the debugging strategies you suggested now.
I'll report back on my findings as they emerge.
Thanks once again for all your help.
Hopefully, we'll be able to resolve this soon.