Reply to Unable to send/receive IPv6 Multicast packets on NWConnectionGroup using Apple NF
Hi @DTS Engineer,

QUOTE (link) Our general advice is to prefer Network framework over BSD Sockets, but UDP broadcasts and multicasts are an exception to that rule. Network framework has very limited UDP broadcast support. And while its support for UDP multicasts is less limited, it's still not sufficient for all UDP applications. In cases where Network framework is not sufficient, BSD Sockets is your only option. UNQUOTE

The idea is to determine whether Network framework is sufficient for us or whether we should be using the BSD Sockets API for multicast. Our use case is not fancy: we want to send and receive multicast on all interfaces. The code we posted above is very simple and works for an IPv4 multicast group address, but produces the warnings and errors listed above for an IPv6 multicast group address. So we wanted to know: does Network framework have limited support for IPv6 multicast, or is this a bug?

Some limitations already known to us, which we may soon file bugs for as you suggested in the past (consolidated here to validate with you):

1. Unlike BSD Sockets, where a single socket can join multiple multicast groups (including both IPv4 and IPv6 groups when using a dual-stack v4/v6 socket), Network framework requires creating a separate NWConnectionGroup for each NWMulticastGroup you want to listen to.

2. When attempting to create two UDP NWListener instances on the same port with allowLocalEndpointReuse set to true (either within the same process or across different processes on the same device), the second listener fails to start with the error: POSIXErrorCode(rawValue: 48): Address already in use. Our expectation was that both listeners would be able to coexist, given that allowLocalEndpointReuse was enabled. If this behavior is not intended to work, we'd appreciate it if the documentation for allowLocalEndpointReuse could be clarified accordingly. The developer forum thread where we discussed this with a DTS engineer is here. If it is a bug, we will file one and share the bug number on that thread.

3. In the absence of an interface constraint (without setting requiredInterface), we are not able to send and receive multicast traffic on all interfaces. And when we do apply an interface constraint (via requiredInterface), we are not able to force the group to use that specific interface. In both cases, the group only sends and receives multicast on the primary active interface. If that is not expected to work, can we get the documentation for requiredInterface updated? The developer forum threads where we discussed this with a DTS engineer are Link1 / Link2. If it is a bug, we will file one and share the bug number on those threads.

For reference, the basic join-and-receive pattern we are using is sketched below.
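Here is a minimal sketch of the join-and-receive pattern in question. The group address and port are illustrative only (ff02::fb / 5353 is the mDNS link-local group, used here purely as an example, not our real group):

```swift
import Network

// Minimal sketch: join a single IPv6 multicast group via NWConnectionGroup.
// Address and port are placeholders, not our production values.
let descriptor = try NWMulticastGroup(for: [
    .hostPort(host: "ff02::fb", port: 5353)
])
let group = NWConnectionGroup(with: descriptor, using: .udp)

group.setReceiveHandler(maximumMessageSize: 1500, rejectOversizedMessages: true) { message, content, isComplete in
    // Delivered on the queue passed to start(queue:).
    print("received \(content?.count ?? 0) bytes from \(String(describing: message.remoteEndpoint))")
}
group.stateUpdateHandler = { state in
    print("group state: \(state)")
}
group.start(queue: .main)
```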
Jun ’25
Reply to Controlling the number of Pending Send Completions using NWConnection
Thanks @DTS Engineer / @harshal_goyal. This whole thread is to evaluate the following proposed approach for doing multiple sends (a stream of bytes, or datagrams) on an NWConnection inside a batch:

• Use the batch(_:) method.
• Inside the batch, send each datagram.
• Add a completion handler only to the last send.

The goals are to achieve, or understand:

1. How to efficiently learn about the sends' completion (success or failure), in terms of their acceptance by the OS network stack and their being on their way out on the wire.
2. Whether, if the sends are issued back to back on the same thread, we can assume the OS takes them in order (so we need not wait for one send's completion before issuing the next).
3. Some discussion of using the completion callback to deallocate the Data and its associated buffer, versus using a deallocator function (especially for TCP).

To conclude, a few questions / confirmations:

On 1: does the completion callback for the last send confirm acceptance or failure by the network stack of the previous sends as well (those for which no completion handler was given)?

On 2: does the OS take the sends in the same order in which they were initiated, irrespective of a serial / concurrent DispatchQueue?
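For concreteness, here is a minimal sketch of the batch pattern under discussion. The peer address and port are placeholders, and marking the earlier sends .idempotent (so that only the last send carries a completion handler) is my reading of the proposal, not confirmed guidance:

```swift
import Network

// Placeholder peer; 203.0.113.10 is a documentation address.
let connection = NWConnection(host: "203.0.113.10", port: 5000, using: .udp)
connection.start(queue: .main)

let datagrams: [Data] = ["one", "two", "three"].map { Data($0.utf8) }

// batch(_:) lets the framework coalesce the enqueued sends; only the final
// send gets a completion handler, per the approach being evaluated.
connection.batch {
    for (index, datagram) in datagrams.enumerated() {
        let isLast = (index == datagrams.count - 1)
        connection.send(
            content: datagram,
            completion: isLast
                ? .contentProcessed { error in
                    print("last send completed, error: \(String(describing: error))")
                }
                : .idempotent  // assumption: no per-datagram completion for earlier sends
        )
    }
}
```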
Feb ’25
Reply to What DispatchQueues should i use for my app's communication subsystem?
Thanks @DTS Engineer.

QUOTE I'm not sure I understand this. It sounds like you're asking whether you can run code on Network framework's internal queues. If so, the answer to that is "No." Indeed, there's no guarantee that Network framework has internal queues (-: If that's off the mark, please clarify your question. UNQUOTE

I understand that, when an event of interest arrives for a particular NWConnection/NWListener, Network framework dispatches the work (the state, send/receive completion, and incoming-connection handlers we associate with that NWConnection/NWListener) to the dispatch queue specified in the start call. Apart from these handlers, what other work does Network framework dispatch onto the specified dispatch queue for that particular NWConnection/NWListener? Does it use the DispatchQueue for any internal purposes?

QUOTE Network framework has multiple underlying implementations: There's a base implementation that uses the in-kernel networking stack via BSD Sockets. For standard networking — TCP and UDP connections to a remote peer — it'll use the user-space networking stack. UNQUOTE

What are the in-kernel networking stack and the user-space networking stack?
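To make the question concrete, here is a minimal sketch of the model I have in mind, using dispatchPrecondition to confirm which queue the framework delivers a handler on (the host and queue label are placeholders):

```swift
import Network

// A serial queue we own; Network framework delivers our handlers onto it.
let queue = DispatchQueue(label: "com.example.networking")
let connection = NWConnection(host: "example.com", port: 443, using: .tls)

connection.stateUpdateHandler = { state in
    // Verifies the handler really runs on the queue given to start(queue:).
    dispatchPrecondition(condition: .onQueue(queue))
    print("state: \(state)")
}
connection.start(queue: queue)
```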
Jan ’25
Reply to What DispatchQueues should i use for my app's communication subsystem?
Thanks @DTS Engineer. One last question: when we start an NWConnection with a DispatchQueue and then send on that NWConnection, we know for sure that the send completion handler is invoked on that DispatchQueue once the send is complete. What we want to know is: what about initiating the send operation, and any other aspects of the send? Are those also queued onto the DispatchQueue internally by Network framework? Also, how is the non-blocking socket model implemented internally in Network framework today, using BSD Sockets / kqueue?
Jan ’25
Reply to Some fundamental doubts about DisptachQueue and GCD
Thanks again @DTS Engineer (Kevin). This is very useful information.

QUOTE Concurrent queues are a relatively late addition to the API and, IMHO, are something that you should actively avoid, as they create exactly the same issues as the global concurrent queues. UNQUOTE

If the work we dispatch to a concurrent queue does not block (no blocking system calls, no I/O, and so on), even though it may run for a few microseconds in some worst cases, then it should not lead to overcommitting (which in turn leads to thread explosion)?

Is this the specific problem with concurrent queues: work that can block is dispatched onto them; when that work is picked up by threads, it eventually blocks; and if more work is pending in the queue, more threads are spawned to pick it up? Is that understanding correct? And what are the limits involved here, for example the total number of threads running in parallel, and the total number of threads running plus blocked?

QUOTE The ONE exception to that is cases where you're specifically trying to create a limited amount of parallel activity with a specific component ("up to 4 jobs at once"). IF that's the case, then the correct solution would be to use NSOperationQueue to set the width. UNQUOTE

Glad that you mentioned it. Can we look at how to use it for a problem we intend to solve in the networking subsystem of our app? Separate thread here. (A sketch of the width-limiting pattern follows.)
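For reference, a minimal sketch of the "up to 4 jobs at once" width-limiting pattern described above, using OperationQueue (the queue name and job count are illustrative):

```swift
import Foundation

// Bound parallelism with maxConcurrentOperationCount: at most 4 operations
// from this queue run at any one time, regardless of pending work.
let workQueue = OperationQueue()
workQueue.name = "com.example.parallel-work"  // hypothetical label
workQueue.maxConcurrentOperationCount = 4

for job in 0..<16 {
    workQueue.addOperation {
        // Non-blocking, CPU-bound work goes here.
        print("running job \(job)")
    }
}
workQueue.waitUntilAllOperationsAreFinished()
```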
Jan ’25
Reply to What DispatchQueues should i use for my app's communication subsystem?
Thanks @DTS Engineer. This helps, and I now understand what you are saying. At the same time, we would NOT like to be at either extreme:

On one extreme: serialize everything and take no advantage of parallelism.

On the other extreme: parallelize everything, leading to unpredictable CPU usage and choking under bursts.

The model you have proposed, a single serial queue (targeting, by default, a global concurrent queue where the actual work happens) associated with all server sockets (NWListeners), the client sockets accepted on those listeners (NWConnections), and outgoing client sockets (NWConnections), leads to extreme case 1 above.

Q1: We did not discuss QoS in the above model. Should one create the serial queue with a particular QoS? Since we don't queue work items onto that queue ourselves, we have no control over QoS at the work-item level. If we do create the serial queue with a particular QoS, how are we affected with respect to priority and scheduling against other workloads from our own app and from other apps running on the device?

Q2: What alternative models could give us decent parallelism? (One possibility is sketched below.)
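A minimal sketch of what I mean, under my own assumptions (the labels and the .userInitiated choice are illustrative, and this is a possibility to discuss, not something you have endorsed):

```swift
import Foundation

// Q1: the QoS is fixed at queue creation, since we never enqueue work items
// ourselves and so cannot set QoS per item.
let networkQueue = DispatchQueue(label: "com.example.net.serial", qos: .userInitiated)

// Q2 (one possible middle ground): several independent serial queues. Each
// stays internally ordered, but they do not share a serial target, so work
// on different queues can run in parallel.
let connectionQueues = (0..<4).map {
    DispatchQueue(label: "com.example.net.serial-\($0)", qos: .userInitiated)
}
```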
Jan ’25
Reply to Some fundamental doubts about DisptachQueue and GCD
Thanks @DTS Engineer. This is very helpful.

QUOTE First off, as background, Dispatch's "main queue" is NOT in fact a "dispatch queue"/dispatch_queue_main_t. Our interface frameworks (UIKit/AppKit) both have the concept of the "main thread", which is both the first thread created and is where those frameworks use a RunLoop to receive events. The "dispatch main queue" was created to provide a convenient way to send messages to that special thread. In an app that uses a main thread runloop, dispatching to the main thread does the same thing as "performSelectorOnMainThread". UNQUOTE

So, even in the case of non-interactive apps: (1) in a GUI session, our apps use NSApplicationMain(); (2) in a non-GUI session, as with daemons, we use CFRunLoopRun(). So it should be safe to dispatch work onto the main thread?

QUOTE [Main Dispatch Queue] [Link] Because the main queue doesn't behave entirely like a regular serial queue, it may have unwanted side-effects when used in processes that are not UI apps (daemons). For such processes, the main queue should be avoided. UNQUOTE

Is this guideline only for non-interactive apps that don't enter a RunLoop on the main thread? (A sketch of the daemon pattern I have in mind follows.)

QUOTE Putting that in more concrete terms, the conceptual idea here was/is that dispatch queues feed work "into the system" while the global queues are responsible for managing and scheduling work on to the entire thread pool. The design mistake here was that allowing work to be directly submitted to the global queues unnecessarily confused this API division and created a bug opportunity that did not really need to exist. UNQUOTE

Though the intent, and the problem it created, are now clear from the above response, can you explain how this has since been remedied, or at least improved, using overcommitting and non-overcommitting variants of the global concurrent queues?

I am also sharing my understanding of what I have learnt from the responses so far:

1. GCD manages a thread pool per process.
2. GCD custom queues can target other GCD custom queues, but eventually the leaf custom queue targets one of the global queues.
3. These GCD custom queues (whether serial or concurrent) exist only to feed work "into the system".
4. The actual work happens in the global queues, which are responsible for managing and scheduling work onto the process-wide thread pool.
5. This merging of work across the queue hierarchy, and its scheduling, preserves each queue's execution semantics (serial: one block at a time; concurrent: multiple blocks at a time).

Please let me know if this understanding is correct.
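To anchor the daemon question: here is a minimal sketch of the non-GUI pattern I have in mind, where the main thread is parked with dispatchMain() so the main queue gets serviced without a UIKit/AppKit run loop. This is my assumption about the pattern, not confirmed guidance for daemons:

```swift
import Dispatch

// Hypothetical daemon-style entry point. dispatchMain() parks the main
// thread and services the main queue, so DispatchQueue.main.async targets
// a live queue even with no UI run loop.
DispatchQueue.global().asyncAfter(deadline: .now() + 1) {
    DispatchQueue.main.async {
        print("ran on the main queue in a daemon-style process")
    }
}
dispatchMain() // never returns; the main thread now drains the main queue
```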
Jan ’25
Reply to What DispatchQueues should i use for my app's communication subsystem?
Written by @DTS Engineer: Don't use a global concurrent queue. See Avoid Dispatch Global Concurrent Queues for an explanation as to why. I recommend against using concurrent queues in general. They are largely more hassle than they're worth.

We don't intend to use them directly, say via DispatchQueue.global().async(). We would provide such a queue to an NWListener instance's start(queue:) call and an NWConnection instance's start(queue:) call. That is, we won't be queuing work on those queues ourselves; Apple's Network framework would, to notify us about state changes, incoming connections, send/receive completions, and so on.

Written by @DTS Engineer: Certainly don't target them directly from Network framework. If you do, you run the risk of two callbacks for the same object executing in parallel, which is gonna be super confusing.

I don't understand why that would be a problem. The two callbacks would be discrete callbacks that could be processed concurrently. For example, I might initiate multiple sends on a UDP NWConnection and receive the completions for those sends concurrently. Or I might receive multiple incoming connections to process on an NWListener.

Written by @DTS Engineer: Rather, set the queue to be a serial queue. It then might make sense to set the target queue of that serial queue to be another serial queue, depending on your specific requirements.

Why would one want to serialize the processing of incoming connections on an NWListener using a serial queue, when there could be CPU capacity available to process them concurrently? That does not sound like an efficient listener acting as a server. (A sketch of the arrangement I am questioning, with a serial accept path but parallel connections, follows.)

Written by @DTS Engineer: Beyond that, I recommend that you watch the WWDC 2017 session about Dispatch. There's a link to it in Concurrency Resources.

I have gone through all those resources, and hence raised this other thread to fill the gaps in my understanding of Dispatch.
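Here is a minimal sketch of the arrangement in question, as I understand it: the listener's own callbacks are serialized on one serial queue, while each accepted connection is started on its own serial queue so that different connections can still be serviced in parallel. All names and the port are placeholders:

```swift
import Network

let listenerQueue = DispatchQueue(label: "com.example.listener")
let listener = try NWListener(using: .tcp, on: 4242)

var nextID = 0
listener.newConnectionHandler = { connection in
    // Runs serially on listenerQueue, so accept-time state (nextID) is safe.
    nextID += 1
    let connectionQueue = DispatchQueue(label: "com.example.conn-\(nextID)")
    // Each connection's callbacks stay ordered on its own queue, but
    // distinct connections may be serviced in parallel.
    connection.start(queue: connectionQueue)
}
listener.start(queue: listenerQueue)
```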
Jan ’25
Reply to Inability to separate IPv4 and IPv6 Traffic on the Same Port Using Network Framework
@DTS Engineer The real question here is: when one opens two NWListeners with ipOptions.version set to .v6 and .v4 respectively, and we can confirm that the underlying sockets created are of type udp6 and udp4 (not udp46) respectively, then, given that the standard says one should be able to bind sockets to the same port as long as the transport and IP protocols differ, why can we not do the same in this case? (A sketch of the setup follows.)
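For reference, a minimal sketch of the two-listener setup being described. The port is a placeholder, and the helper function is purely illustrative:

```swift
import Network

// Build a UDP listener pinned to a single IP version via ipOptions.version.
func makeListener(version: NWProtocolIP.Options.Version) throws -> NWListener {
    let params = NWParameters.udp
    if let ipOptions = params.defaultProtocolStack.internetProtocol as? NWProtocolIP.Options {
        ipOptions.version = version  // .v4 -> udp4 socket, .v6 -> udp6 socket
    }
    return try NWListener(using: params, on: 5000)
}

let v4Listener = try makeListener(version: .v4)
let v6Listener = try makeListener(version: .v6) // reportedly fails: address already in use
```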
Aug ’24