
Reply to How to reset (remove) apps from "Local Network" privacy settings?
We have seen similar results to those seen by the OP and others. We have submitted a separate FB incident (FB16512666) with our information and observations. For the most part it follows what others have said, though we have more to add on the inconsistency between the UI and the actual permission granted.

There are two related screenshots for this, the first of which shows the UI when the Local Network permission is OFF for our application called "Mbox": In this image you can see 10 instances of an application called "Mbox 5.2", circled in red. To the right you can see the contents of the /Library/Preferences/com.apple.networkextension.plist file that stores the data related to the permission granted via the UI. This plist holds an entry for each of the 10 instances of the application shown in the UI, each with the same BundleID (com.PRG.MboxExtreme in this case) and a unique file path. I have drawn a red arrow pointing to the one instance whose application name "Mbox 5.2" matches the name shown in the UI, and circled in red the state of its "DenyMulticast" boolean - it is set to YES, which represents the toggle switch in the UI being OFF. In the same image I have drawn a yellow arrow pointing to a separate instance with a different file path (and application name), and circled that instance's DenyMulticast key:value pair, which is NO, the opposite of the first instance.

This next screenshot shows the result of toggling the UI to grant permission to the app. As noted by others, toggling any one instance of the application called "Mbox 5.2" causes the toggles for all instances to move to the same state. In the image you can see the same instances highlighted in the plist file, with the DenyMulticast value for the first instance now set to NO:

In the first case outlined above, with permission turned off, the instance of our application called "Mbox 5.2" is NOT able to receive UDP multicast data, but ALL other instances are able to receive the same data. In the second case, with permission turned on, ALL instances of the application can receive UDP multicast.

Based on what we've seen, there is an obvious inconsistency between what is shown in the UI and the actual permission state. The UI seems to follow, and only affect, the first instance of the application. Either each instance of the application should have its own instance of the permission, or, as with other permissions, there should be a single entry per unique BundleID whose value affects all applications with that ID.

In addition to the inconsistency between the UI and actual operation, we have also seen in testing that even when an application has been granted Local Network permission, after a reboot it is unable to send/receive UDP multicast. To resolve this you can quit the affected app (or apps), toggle their Local Network permission off and then on again, and then relaunch the application. This state seems to hold until the next time the computer is rebooted. Our suspicion is that this issue is related to having multiple instances of the same application on the computer and the inconsistency between the UI and the plist, but we don't have any evidence of this yet.
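For anyone who wants to check the stored state on their own machine, here is a minimal sketch of the kind of dump we describe above. It assumes only that the DenyMulticast booleans live somewhere inside nested dictionaries in com.apple.networkextension.plist, as in our screenshots; the exact layout and the identifying key names (BundleID, Path, etc.) are undocumented assumptions and may differ between macOS versions, so the walk is deliberately generic.

```swift
import Foundation

// Hypothetical helper: recursively walk the plist and print any dictionary that
// contains a "DenyMulticast" key, along with nearby identifying values.
// The plist path and key names match what we observed; none of this is documented API.
func dumpLocalNetworkState() {
    let url = URL(fileURLWithPath: "/Library/Preferences/com.apple.networkextension.plist")
    guard let data = try? Data(contentsOf: url),
          let root = try? PropertyListSerialization.propertyList(from: data, format: nil) else {
        print("Could not read or parse the plist (it may require elevated privileges).")
        return
    }

    func walk(_ node: Any) {
        if let dict = node as? [String: Any] {
            if let deny = dict["DenyMulticast"] as? Bool {
                // Print whatever identifying fields happen to be present alongside the flag.
                let id = dict["BundleID"] ?? dict["Identifier"] ?? "?"
                let path = dict["Path"] ?? dict["AppPath"] ?? "?"
                print("DenyMulticast=\(deny)  id=\(id)  path=\(path)")
            }
            dict.values.forEach(walk)
        } else if let array = node as? [Any] {
            array.forEach(walk)
        }
    }
    walk(root)
}
```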
I'll also repeat what others here and in other related posts have stated: the concept of Local Network permission ought to come with the means to test/debug the current state, and also the ability to remove or reset the permissions in total or per app. As best we can tell, these items are already reported as FB8711182 and FB14944392 respectively.
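Until that kind of tooling exists, a quick way to observe the actual behavior is to have a small test process join a multicast group and log whatever arrives. Below is a minimal sketch using the Network framework; the group address and port are placeholders rather than the ones our product uses, and this is only the sort of probe we run after a toggle or reboot, not code from our shipping app.

```swift
import Network

// Minimal multicast probe: join a (placeholder) group and print every datagram received.
// Any output here means multicast is actually flowing to this process.
func startMulticastProbe() throws -> NWConnectionGroup {
    let group = try NWMulticastGroup(for: [.hostPort(host: "239.255.0.1", port: 6789)])
    let connectionGroup = NWConnectionGroup(with: group, using: .udp)
    connectionGroup.setReceiveHandler(maximumMessageSize: 1500,
                                      rejectOversizedMessages: true) { message, content, _ in
        print("received \(content?.count ?? 0) bytes from \(String(describing: message.remoteEndpoint))")
    }
    connectionGroup.stateUpdateHandler = { state in
        print("multicast group state: \(state)")
    }
    connectionGroup.start(queue: .main)
    return connectionGroup
}
```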
Mar ’25
Reply to Core Image Tiling and ROI
Thanks for your reply. I have indeed reviewed the programming guide (many times over the years), and I'm aware of how to provide an ROI function. The guide could really do with an update (and some fixes for errors) plus better examples with more detailed explanation; so much of this we've had to figure out by trial and error. For a long time we've been able to get away without supplying much in the way of custom ROIs, just returning destRect or destRect inset by -1, but we do have some custom ROIs when using more than one sampler where the two samplers may be different sizes.

After first reading your suggestion to reduce the ROI to only the portion being mirrored, I could see how some mirroring actions would require only a smaller portion of the input image (with a limited ROI) and that this could help in some cases, but I figured it would only help where the entire image was not needed (i.e. not the full-image mirror flips) and where the ROI itself would not exceed the 4096-pixel limit.

I've edited our code to try this out and the results are good up to a point. I've kept the method of using the affine transform for the full-image mirror flips, so the following comments only relate to mirroring a portion of the source image, either a half or a quarter. I've taken your suggestion and created a custom ROI that is the portion of the source image being mirrored (left half, top half, bottom-right quarter, etc.). This works fine until the source texture gets too big, and logically, the point at which it gets too big depends on how large the ROI needs to be; e.g. the ROI for a left-to-right mirror is larger than the ROI for a bottom-right-quarter mirror, so the quarter mirror can handle a larger source image. At the point where the source texture is too large, the entire rendering loop of our app (running at ~60Hz) stalls. I'm assuming this is because the texture being passed into the CI filter chain is so large that it doesn't return fast enough for our rendering to complete in time, and this just snowballs. Because of this, I've made the decision to use the affine transforms instead of a kernel with a custom ROI whenever the source image size is > 8192 in either dimension.

But I do want to double-check my assumptions about the use of "destRect". I know the clamping with larger images in our original code happens because tiling means the entire source is not available for each "pass" through the kernel, and that supplying a custom ROI means the correct portion of the source IS available. I just want to check my understanding that when you use "destRect" in the ROI you're always going to get a tiled rect, assuming the image is large enough to cause tiling?

I'd have to say, embarrassingly, that when I converted all of our effects from the old method of setting the ROI (using setROISelector) to applyWithExtent, I found examples somewhere that used destRect and followed along, clearly not fully appreciating what impact it could have. It would appear that in some cases this isn't right at all, and we want either the entire source (CGRectInfinite, I guess) or, as with this mirroring effect, the portion of the image that is being mirrored.
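For the record, the shape of what I ended up with for the partial mirrors is roughly the following. The kernel name and the "left half onto right half" case are just illustrative (not our actual kernel source); the only point is that the roiCallback returns the fixed source region being mirrored instead of destRect.

```swift
import CoreImage

// Rough sketch: apply a (hypothetical) mirror kernel with an ROI limited to the
// region of the source actually sampled, instead of returning destRect.
func applyLeftMirror(_ mirrorKernel: CIKernel, to source: CIImage) -> CIImage? {
    let extent = source.extent
    // For "mirror left half onto right half" the kernel only ever reads from the
    // left half of the source, regardless of which destination tile is being rendered.
    let leftHalf = CGRect(x: extent.minX, y: extent.minY,
                          width: extent.width / 2, height: extent.height)
    return mirrorKernel.apply(
        extent: extent,
        roiCallback: { _, _ in leftHalf },  // ignore destRect; always request the left half
        arguments: [source]
    )
}
```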
Oct ’24
Reply to Core Image Tiling and ROI
Update: I have managed to replicate the functionality of our existing custom mirror kernels using cropping, affine transform, and compositing operations. It seems to work so far, at least with large (> 4096) still images that would otherwise get tiled. There is definitely a noticeable (though temporary) performance hit with the still image. I haven't tested with a movie yet, but I'm assuming that will also suffer, perhaps constantly with each new frame. Therefore, I'd still be interested to know whether there's a way to keep the more optimized kernel approach with larger images/movies, if at all possible.
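In case it helps anyone else hitting the 4096 limit, the replacement looks roughly like this for the "mirror left half onto right half" case; the other variants differ only in which region is cropped and how the reflection transform is built. This is a simplified sketch under those assumptions, not the exact code from our app.

```swift
import CoreImage

// Simplified sketch of the crop + affine transform + composite version of the mirror.
// Built entirely from standard CIImage operations, so Core Image handles tiling itself.
func mirrorLeftOntoRight(_ source: CIImage) -> CIImage {
    let extent = source.extent
    let leftHalf = source.cropped(to: CGRect(x: extent.minX, y: extent.minY,
                                             width: extent.width / 2, height: extent.height))
    // Reflect about the image's vertical center line: x' = -x + (2 * minX + width).
    let reflect = CGAffineTransform(a: -1, b: 0, c: 0, d: 1,
                                    tx: 2 * extent.minX + extent.width, ty: 0)
    // The flipped left half now occupies the right half of the extent.
    let flipped = leftHalf.transformed(by: reflect)
    // Composite the mirrored half over the original and keep the original extent.
    return flipped.composited(over: source).cropped(to: extent)
}
```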
Oct ’24