Thanks for the quick response, and apologies for my extra-long answer -- I thought it best to be thorough here! Admittedly, I'm still learning the ropes a little with these APIs, so you may be able to spot some key detail I'm missing.
Is that right?
Your understanding is mostly correct. However, from the container app, I'm specifically interested in being able to dynamically switch configurations (NETunnelProviderManager instances), e.g., going from a non-MDM configuration to an MDM configuration. Beyond starting and stopping connections, I don't need to modify the configurations themselves -- for example, changing a configuration's server address.
Is the provider packaged as an appex? Or a sysex? Or have you tried both and seen no difference?
Sorry, I should have specified. It's packaged as a sysex; I haven't experimented with an appex.
I presume you’re doing this configuration using NETunnelProviderManager from the container app. If so, what sequence of APIs are you calling?
Yes, the non-async NETunnelProviderManager APIs are being used from the container app.
I'll provide the example of transitioning from a non-MDM configuration to an MDM configuration as it feels like I have tighter control over it. The sequence is:
1. Call NETunnelProviderManager.loadAllFromPreferences, then parse the returned managers to detect whether an MDM configuration exists -- our MDM profile has a key for this.
2. (From the step 1 load completion handler) If found, call loadAllFromPreferences again, grabbing a reference to the VPN profile to be removed. From the completion handler, temporarily remove the NEVPNStatusDidChange and NEVPNConfigurationChange observers so they don't interfere with the upcoming logic.
3. (Still in the step 2 load completion handler) Confirm that the NEVPNConnection associated with the manager is not in the NEVPNStatus.disconnected state -- in the problem path, it is not.
4. (Still in the step 2 completion handler) Set up a temporary NEVPNStatusDidChange notification handler to watch for the NEVPNStatus.disconnected state -- this observer watches only the manager to be removed.
5. When the notification fires with the NEVPNStatus.disconnected state, call oldManager.removeFromPreferences().
6. Within the removal completion handler, call loadAllFromPreferences, re-register the previously removed VPN notification handlers, and call startVPNTunnel() on the MDM-added manager.
7. (Back in the step 2 completion handler) Call oldManager.connection.stopVPNTunnel(), which eventually triggers the code in step 4.
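Condensed into code, the sequence looks roughly like this. It's a sketch, not our actual implementation: `isMDMManaged(_:)`, `removeGlobalVPNObservers()`, and `reinstallGlobalVPNObservers()` are hypothetical stand-ins for our real helpers, and error handling is elided.

```swift
import NetworkExtension

// Sketch of the non-MDM -> MDM transition. Helper functions are
// placeholders for our real detection/observer logic.
func transitionToMDMConfiguration() {
    // Steps 1-2: load managers, find the MDM-added one and the one to remove.
    NETunnelProviderManager.loadAllFromPreferences { managers, _ in
        guard let managers = managers,
              let newManager = managers.first(where: { isMDMManaged($0) }),
              let oldManager = managers.first(where: { !isMDMManaged($0) }) else { return }

        // Step 2 (cont.): stop observing globally while we shuffle configurations.
        removeGlobalVPNObservers()

        // Step 3: in the problem path, the old connection is not disconnected.
        guard oldManager.connection.status != .disconnected else { return }

        // Step 4: temporary observer that watches only the old manager's connection.
        var token: NSObjectProtocol?
        token = NotificationCenter.default.addObserver(
            forName: .NEVPNStatusDidChange,
            object: oldManager.connection,
            queue: .main
        ) { _ in
            guard oldManager.connection.status == .disconnected else { return }
            if let token = token { NotificationCenter.default.removeObserver(token) }

            // Step 5: remove the old configuration.
            oldManager.removeFromPreferences { _ in
                // Step 6: reload, re-register observers, start the MDM tunnel.
                NETunnelProviderManager.loadAllFromPreferences { _, _ in
                    reinstallGlobalVPNObservers()
                    try? newManager.connection.startVPNTunnel()
                }
            }
        }

        // Step 7: kick everything off by stopping the old tunnel.
        oldManager.connection.stopVPNTunnel()
    }
}
```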
In practice, when I do this, I end up with a routing table that's missing the route for my VPN network. To "fix" this, I've been adding a hard-coded delay between entering the removeFromPreferences() completion handler and beginning the logic to start the new VPN tunnel.
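Concretely, the workaround is just an asyncAfter between the removal and the restart. In this sketch, `startupDelay` is a hypothetical name and the 2.0 is a placeholder, not a recommendation -- the actual value is found by trial and error.

```swift
import NetworkExtension

// Steps 5-6 with the workaround applied. `startupDelay` is a hypothetical
// constant tuned by hand; the default here is a placeholder value.
func removeThenStartLater(_ oldManager: NETunnelProviderManager,
                          startupDelay: TimeInterval = 2.0) {
    oldManager.removeFromPreferences { _ in
        // Without this delay, the new tunnel comes up with the VPN route
        // missing from the routing table.
        DispatchQueue.main.asyncAfter(deadline: .now() + startupDelay) {
            NETunnelProviderManager.loadAllFromPreferences { managers, _ in
                // ... re-register observers, then call startVPNTunnel()
                // on the MDM-added manager.
            }
        }
    }
}
```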
I’m asking because the usual way to change a tunnel’s settings from the container app is:
...
3. Save the preferences
Note: Since the newly added profile is added to the system preferences by MDM, I do not call saveToPreferences() after loading it and prior to calling startVPNTunnel() on it. I only call saveToPreferences when setting up non-MDM configurations.
The steps for transitioning from one MDM-managed VPN configuration to another are shorter and cruder, since that path involves yanking an active configuration out of the system:
1. Set up an NEVPNConfigurationChange observer, which calls loadAllFromPreferences() followed (from the load callback) by startVPNTunnel().
2. With one MDM-added configuration already active and connected on the system, add another MDM configuration (with Jamf in this case), so that there are now two VPN configurations in the system settings -- the current one in the .connected state and the new one in the .disconnected state.
3. Use the MDM management tool (Jamf) to remove the currently .connected profile, causing the system to put it into the .disconnected state (from what I've observed) and, at the same time, triggering the NEVPNConfigurationChange handler from step 1.
In this case, I've found that a roughly five-second wait after the NEVPNConfigurationChange notification is the margin I need to successfully configure and start the new tunnel without the aforementioned routing-table issues. Since the profile is simply yanked out in this case, I definitely understand why there's more room for error when taking down one configuration and adding another.
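For completeness, the step 1 observer with the five-second workaround looks roughly like this. As before, `isMDMManaged(_:)` is a hypothetical stand-in for our real profile-key check, and the code is a sketch rather than our exact implementation.

```swift
import NetworkExtension

// Step 1 observer: when MDM swaps configurations, reload and start the
// surviving tunnel. The five-second delay is the empirical margin noted
// above; without it, the new tunnel's route is missing.
let configChangeToken = NotificationCenter.default.addObserver(
    forName: .NEVPNConfigurationChange,
    object: nil,
    queue: .main
) { _ in
    DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
        NETunnelProviderManager.loadAllFromPreferences { managers, _ in
            // `isMDMManaged(_:)` stands in for our profile-key check.
            guard let manager = managers?.first(where: { isMDMManaged($0) }) else { return }
            try? manager.connection.startVPNTunnel()
        }
    }
}
```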