Reply to Zsh kills Python process with plenty of available VM
Thank you so much for your reply, now I have a picture of what is going on. Could you also share how to use these functions? The only documentation I could find has no examples. Say I have, among others, this process running, labelled python3 with PID 33238. I tried typing os_proc_available_memory() in my terminal (bash shell), and all I get is a > prompt waiting for input. Same with getrlimit and setrlimit. I also tried os_proc_available_memory(33238) etc., but I get error messages. The documentation keeps mentioning 'the current process', but there are many; how do I run these functions against a specific running process?
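Is something like the following what you mean? This is a rough Python sketch I put together while experimenting (assuming getrlimit/setrlimit correspond to Python's resource module, and that os_proc_available_memory from <os/proc.h> is actually exported by libSystem on my macOS version; I may well be holding this wrong):

    import ctypes, ctypes.util, resource

    # getrlimit/setrlimit appear to be exposed by Python's standard "resource" module
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)   # address-space limit of this process
    print("RLIMIT_AS soft/hard:", soft, hard)

    # attempt to call os_proc_available_memory() via ctypes, assuming libSystem exports it here
    libsystem = ctypes.CDLL(ctypes.util.find_library("System"))
    libsystem.os_proc_available_memory.restype = ctypes.c_size_t
    print("os_proc_available_memory:", libsystem.os_proc_available_memory())

Even if this runs, it would only report on the process that executes it, which is exactly my confusion: I do not see how to point it at PID 33238.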
Dec ’25
Reply to Zsh kills Python process with plenty of available VM
I see, thank you for the explanation. Yes, my machine has 16 GB of RAM and I read about the VMM at https://developer.apple.com/library/archive/documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html#//apple_ref/doc/uid/20001880-BCICIHAB Is there a guide for macOS on the steps you describe at the end, that is, how to allocate a swapfile, mmap that swapfile into memory, and then use that mapped memory instead of system-allocated memory? I am familiar with dd from /dev/zero etc. and the usual declaration of a swapfile by appending to /etc/fstab, but that is Linux, and perhaps it does not work on macOS...
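If it helps make the question concrete, this is the kind of thing I am picturing, sketched in Python rather than in the shell (the path and size below are placeholders I made up):

    import mmap

    SIZE = 8 * 1024**3            # placeholder: an 8 GB backing file
    PATH = "/tmp/my_swapfile"     # placeholder path

    # create the backing file at the desired size (the rough equivalent of dd from /dev/zero)
    with open(PATH, "wb") as f:
        f.truncate(SIZE)

    # map it into this process's address space and treat it as ordinary memory;
    # reads and writes are paged in and out of the file by the kernel
    f = open(PATH, "r+b")
    buf = mmap.mmap(f.fileno(), SIZE)
    buf[0:11] = b"hello world"

But I do not know whether this is what you had in mind, or whether macOS needs something more specific than a plain file like this.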
Dec ’25
Reply to Zsh kills Python process with plenty of available VM
Apologies if I misunderstand things completely; I am no developer, so memory management is completely foreign to me. Are the steps you describe (creating a file and mmap'ing it) meant to be done in the shell, with the result then applied to a process that is already running (say, some Python code), or are they supposed to be performed before the Python code is run? In the first case, how do I hand this pointer to the process that is already running? I assume this would be extra virtual memory on top of what the VMM has already given the process. In the second case, does the pointer have to be used within the Python script (that is, do I need to modify my Python script in a certain way to tap into this file for virtual memory, as in the sketch at the end of this post)? Or, once I have it, do I have to launch the process in a certain way so that it uses it?

Feel free to refer me to some resource that explains how to map a file into a process; everything I could find, man pages included, was not helpful and does not go into enough detail for my background. There is no mention of the process's PID, so I am quite confused about how to let the process know that it can use this file via the pointer. Further, once this is set up, will the process still use the available physical RAM, or will it only use the file?

One important clarification for my use case, in the eventuality that the script has to be modified to use mmap: the Python process that needs to use the mapped file calls a Gurobi routine, and only this routine is the memory-heavy part. However, this is proprietary software (written in C) which I cannot modify and whose source I cannot access; I simply call it through a Python API with a single command: model.optimize(). So I fear that, in this case, mmap is not an option, as I do not find any mention of it in the Gurobi documentation.
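To make the "modify the script" case concrete, the only version of it I can picture is replacing ordinary in-memory arrays with file-backed ones, for example with numpy.memmap (a sketch under my own assumptions; the file path is a placeholder, and this clearly would not touch whatever Gurobi allocates internally through model.optimize()):

    import numpy as np

    # a file-backed array: the element data lives in the file, not in anonymous memory
    # ("/tmp/backing.dat" is just a placeholder path)
    arr = np.memmap("/tmp/backing.dat", dtype=np.float64,
                    mode="w+", shape=(25_000_000,))    # roughly 200 MB of backing storage
    arr[:] = np.random.rand(25_000_000)                # writes get paged out to the file
    arr.flush()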
Dec ’25
Reply to Zsh kills Python process with plenty of available VM
I understand now, thanks for your patience. Unfortunately this means that the only way for this to work is to have the source code of the Gurobi libraries and incorporate mmap there, because that is where the memory-heavy node files are created and put in memory. Too bad it is not open-source software, which makes this impossible. Is there an alternative to mmap that could perhaps squeeze some extra space out of the disk? I was wondering whether renicing the process to, say, -20 could have some indirect benefits...
Dec ’25
Reply to Zsh kills Python process with plenty of available VM
Thank you, I asked, but unfortunately Gurobi confirmed that their APIs do not support mmap. I am curious about your last suggestion regarding the boot parameter vm_compression_limit. The default is zero, but I cannot find any documentation about the other possible values, nor the corresponding behaviour. Is there any official documentation on it? Is there a command that supports modifying this parameter? On my Mac the kernel seems encrypted: in the kernels folder (I actually do not know which kernel is in use, there are multiple) the files display a bunch of random symbols, so it is not as easy to edit as the Linux kernel boot parameters. I assume there is a command to do it.
Dec ’25
Reply to Zsh kills Python process with plenty of available VM
Thank you for sharing. The following is an AI-generated overview, but if it is correct, it seems that the only thing vm_compression_limit would change is the compression of physical memory, not virtual memory. Am I misinterpreting what follows?

"The vm_compression_limit boot argument in macOS accepts integer values that define the maximum percentage of physical memory allowed for compressed memory. The values represent the percentage limit: 0: the default value, which allows the system to determine the appropriate limit dynamically based on system load. 1 through 100: sets the specific maximum percentage of physical RAM that the compressed memory store is allowed to consume. Setting a specific value overrides the system's dynamic management. Usage example: to set the compression limit to 50%, you would add the following to your boot arguments in your config.plist (for OpenCore users) or through the nvram command in recovery mode (with System Integrity Protection disabled): vm_compression_limit=50. Note that changing this value requires disabling System Integrity Protection (SIP) for it to take effect."

I am trying to find out in advance how messing with this boot-arg is likely to pan out, since I would not want to mess things up for nothing. Essentially, the above states that with 100 you get maximum compression (in my case it would be allowed to compress the whole 16 GB of RAM). But it does not seem that this would change the swapping behaviour; swapping would merely start a little later (since more RAM has been compressed). So once the system starts swapping, the VMM will again do its thing.
Dec ’25
Reply to Zsh kills Python process with plenty of available VM
I see, thank you for pointing this out. So it is not a percentage, but an actual number of pages. Could you expand a little on how to interpret it, in light of your previous answer? How does one find the available range? What does it mean to overcommit pages? Ideally, I would try to get as close as possible to a memory-overcommitment scenario; would that correspond to an "infinite" number of overcommitted pages? Is there a way to enter "infinite" for this parameter? Or is there a maximum number, which can change from machine to machine? If I am interpreting correctly the direction this parameter has to move in to get the desired behaviour, I need to retrieve this number exactly, not just compute it roughly from the known 4 KB page size and the capacity of the disk. Say the maximum number is 1200 pages. From the documentation, I am supposed to boot into recovery mode, disable SIP, run sudo nvram boot-args="vm_compression_limit=1200", and then restart to make the changes effective. Do I need to keep SIP disabled, or can I re-enable it after the changes take effect?
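Just so we are talking about the same numbers, this is how I was planning to turn a byte figure into a page count, using whatever page size the machine actually reports rather than assuming 4 KB (a quick Python sketch; the 16 GB figure is only an example):

    import resource

    page_size = resource.getpagesize()   # 4096 or 16384 bytes, depending on the machine
    budget_bytes = 16 * 1024**3          # example: my 16 GB of RAM; could equally be free disk space
    print(f"page size: {page_size} bytes")
    print(f"{budget_bytes} bytes = {budget_bytes // page_size} pages")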
Jan ’26
Reply to Zsh kills Python process with plenty of available VM
Thank you, that's interesting; do you also have Sequoia on your machine? When I first ran nvram -p it showed no boot-args existing on my machine, so I simply ran sudo nvram boot-args="vm_compression_limit=4000000000" without the debug part, which would be unnecessary if there is nothing to override. When subsequently trying different values, I would remove the previously set boot-arg with sudo nvram -d boot-args and then rerun the previous command with another value. Each time, double-checking with nvram -p showed the boot-arg set with the correct value, so I guess doing it this way is equivalent to using debug. Or is debug compulsory? It does not appear in the documentation at https://ss64.com/mac/nvram.html . I could give it a try, but what should I put as the value for debug if no boot-args are shown by nvram -p?
5d
Reply to Zsh kills Python process with plenty of available VM
Thank you so much for trying. By VMM I meant the Virtual Memory Manager; I am doing this on an actual MacBook Pro, not a VM (virtual machine). Then the source of the issue must be either that this Mac, which is managed by the company I work for, somehow does not honour boot-args properly because of their management software, even with SIP disabled (they did receive an alert when I disabled it); or that Python itself is somehow responsible for being capped so early. The Python script I used is quite simple, so that per se should not be the problem: it just appends arrays of 200 MB of random numbers to a list (sketched at the end of this post). A third option is that I am doing something wrong.

Just to double-check: I disable SIP from recovery mode (I read somewhere online that this is needed; do you do it too? I was not able to change boot-args with SIP on). Then I log into normal mode, with SIP kept off, and set the boot-args with sudo nvram boot-args="vm_compression_limit=4000000000". Then I check that nvram -p | grep boot-args reflects the change. Once the change displays correctly, the machine should behave accordingly, and I run the test Python script. Do you do anything different (besides the checks, of course, and the test script/language used)? I have read that to make boot-args changes permanent one needs to reboot the machine. However, I do not need them to be permanent for now, as I am just testing, and as far as the current session is concerned they should already be reflected in the VMM behaviour during the test, without having to reboot (in doubt, I did also try rebooting, but the behaviour was the same).

Soon I will also get my hands on an older Intel Mac which is not managed by the company, and I will be able to check whether the boot-args change the VMM behaviour correctly on that one. If they do, then it must be some hidden block on the managed device that the IT person I spoke to is not aware of, as they said that whatever restrictions they put on the machine should not interfere with its proper behaviour, and I suppose honouring tunable boot-args counts as proper behaviour.
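For reference, the test script is essentially equivalent to the following (a stripped-down rewrite, not the exact file I ran):

    import numpy as np

    chunks = []
    while True:
        # each chunk is roughly 200 MB of random float64 values (25 million values * 8 bytes)
        chunks.append(np.random.rand(25_000_000))
        print(f"allocated roughly {len(chunks) * 0.2:.1f} GB so far")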
2d