I am developing a library called MemoryCryptor for macOS. Its purpose is to protect sensitive data of the calling process (including launchd daemons), e.g. user passwords and other secrets, from being written to disk or read directly by debuggers or malware.
So, the first thing I need to clarify here is what your threat model is and to what degree you're willing to "trust" the operating system itself. The problem here is that there's a spectrum of trust that runs from:
-
If you trust the operating system, then the solution is fairly simple. As described here, if a process has "Get Task Allow" set to "false", then the system will not allow any process to retrieve that process's task port. Without that port, there's no way for an app to gain access to another app's process, solving the issue. There are many other mechanisms at work that reinforce this and/or prevent other attack vectors, but the general answer here is that the standard system configuration ensures that one process cannot read the memory of another process unless that process has specifically been configured to allow it.
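You can see that policy in action directly. The sketch below (macOS only; the function name is just for illustration) calls task_for_pid(), which is the gate everything else hinges on; for a system process like launchd it fails from an ordinary process:

```swift
import Darwin

// Illustrative helper: returns true only if we can obtain the target
// process's task port, which is the prerequisite for reading its memory.
func canReadMemory(ofPID pid: pid_t) -> Bool {
    var task: mach_port_t = 0
    // task_for_pid() fails for hardened/system processes unless the
    // target carries the "get-task-allow" entitlement (debug builds)
    // or the caller has special privileges.
    let kr = task_for_pid(mach_task_self_, pid, &task)
    return kr == KERN_SUCCESS
}

// launchd (pid 1) is never debuggable from an ordinary process.
print(canReadMemory(ofPID: 1))
</imports>
```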
-
If you don't trust the operating system... then this is a very, very difficult problem that I don't believe has any general solution on any platform.
Expanding on that last point, the issue here is that if an attacker TRULY controls the environment your app runs in, then NOTHING your app does or checks is truly trustworthy. As an obvious example, SIP (System Integrity Protection) is a critical component in our chain of security, but we don't provide ANY API for checking its current state. That's because, if an attacker gains control of the system and is able to disable SIP, they can then modify the system such that ANY API we could create... would simply lie and return "true". We provide "csrutil status", but its ONLY real utility is to tell you when SIP is "Off", NOT when it's "On".
The range of more "reasonable" threat models generally sits somewhere between those two points, which is why the EndpointSecurity API exists. It allows a properly authorized process to monitor and block a broad range of critical syscalls, allowing that process to watch for attacks and actively defend itself against tampering. However, to be clear, the security of that entire API rests on the assumption that the machine started in a secure, trusted state, just like every other case.
Looking at the specific APIs you mentioned:
Secure Enclave (CryptoKit.framework).
The role of the secure enclave is to bind keys to their source device, rendering the key useless if it moves off device.
Keys generated using the Secure Enclave are not bound to the creating app.
The problem here is that the Secure Enclave cannot securely validate the "creating" app, as its own isolation from the larger system prevents it from "seeing" the larger system. The only way it could "bind" keys is to trust the calling system, at which point it's just replicating what the keychain does.
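A short CryptoKit sketch makes the point concrete. A Secure Enclave key's persistent form is an opaque, device-bound blob, but note what it does NOT do: any process on the same machine that obtains the blob can rehydrate the key, because the enclave has no way to identify the "creating" app:

```swift
import Foundation
import CryptoKit

// Guard for machines (or VMs) without a Secure Enclave.
guard SecureEnclave.isAvailable else {
    print("no Secure Enclave on this machine")
    exit(0)
}

// Generate a P-256 key whose private half never leaves the enclave.
let key = try SecureEnclave.P256.Signing.PrivateKey()

// The persistent form is an opaque blob, useless on any other device...
let blob = key.dataRepresentation

// ...but "another process" on this device can reconstruct the same key:
let sameKey = try SecureEnclave.P256.Signing.PrivateKey(dataRepresentation: blob)
print(sameKey.publicKey.rawRepresentation == key.publicKey.rawRepresentation)
```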
Keychain API. Keys are always loaded into the calling process’s address space before any cryptographic operation,
As noted above, the keychain's role is to bind keys to "apps". By extension, that means it must return those keys to the app/process that asked it to store them. I don't know what the larger intention/design of your product is, but typically that would mean that your secure helper process was responsible for managing keys as well as encrypting/decrypting data.
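A minimal sketch of that division of labor, assuming a hypothetical helper that owns the key and exposes only seal/open operations (here the key is generated in place as a stand-in for one the helper would load from its own keychain), so plaintext key material never enters the client's address space:

```swift
import Foundation
import CryptoKit

// Hypothetical core of a secure helper process: clients hand it data,
// never the key.
struct SecretBox {
    // Stand-in for a key the helper loads from its own keychain item.
    private let key = SymmetricKey(size: .bits256)

    func seal(_ plaintext: Data) throws -> Data {
        // Default AES-GCM nonce is 12 bytes, so .combined is non-nil.
        try AES.GCM.seal(plaintext, using: key).combined!
    }

    func open(_ ciphertext: Data) throws -> Data {
        try AES.GCM.open(AES.GCM.SealedBox(combined: ciphertext), using: key)
    }
}

let box = SecretBox()
let secret = Data("my secret".utf8)
let sealed = try box.seal(secret)
print(try box.open(sealed) == secret)
```

In a real design, seal/open would be the helper's XPC interface, and the client would only ever see ciphertext.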
Separate helper via XPC. While this could isolate key material, it requires full control of IPC implementation - plaintext may remain in the implementation's internal buffers.
I'm not sure what your larger design here is, but XPC/mach provide a very large set of options for transferring data between processes. For larger operations, this often means that the VM system is being used to share memory between processes. This means there aren't any intermediate "buffers" within the IPC system, just two processes that happen to be sharing the same underlying pages.
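For example (macOS only, sketch): wrapping a large buffer in an xpc_data object is enough to get this behavior. For payloads above a page-size threshold, XPC moves the backing pages through the VM system (copy-on-write), so the receiver maps the same physical pages rather than reading a copy out of an intermediate buffer:

```swift
import Foundation
import XPC

// 1 MiB of zeroed stand-in "sensitive" data.
let payload = Data(count: 1 << 20)

// Wrap it in an xpc_data object; large payloads like this are
// transferred page-by-page via the VM system, not copied through
// an intermediate IPC buffer.
let xpcData = payload.withUnsafeBytes { raw in
    xpc_data_create(raw.baseAddress, raw.count)
}
print(xpc_data_get_length(xpcData) == payload.count)
```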
__
Kevin Elliott
DTS Engineer, CoreOS/Hardware