How does SwiftUI order gesture processing?

Here's a cut-down example from my app (iOS 16.0, Xcode 14.0)...

Grid(horizontalSpacing: 0, verticalSpacing: 0) {
    GridRow {
        Text("  L :")
        decimalTextField($L, fraction: 1).onSubmit { setRGB() }
    }

    GridRow {
        Text("  a :")
        decimalTextField($a, fraction: 1).onSubmit { setRGB() }
    }

    GridRow {
        Text("  b :")
        decimalTextField($b, fraction: 1).onSubmit { setRGB() }
    }
}
.background(SwiftUI.Color.black.opacity(0.3))
.cornerRadius(8)
.onLongPressGesture {
    let pasteboard = UIPasteboard.general
    let LABtxt = String(format: "Lab: %.1f %.1f %.1f\n", L, a, b)
    pasteboard.string = LABtxt
}

I was expecting unprocessed gestures to be passed from a View to its container. In this example, decimalTextField is a TextField with a NumberFormatter: if you tap on the TextField, the keyboard should appear. Since the TextField does not handle a Long Press, I would expect a Long Press to drop through to the Grid container.
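
For context, decimalTextField is just a small wrapper around TextField with a NumberFormatter. A minimal sketch of the kind of helper it is (the exact code in my app may differ, and the Double binding is an assumption based on the %.1f formatting):

import SwiftUI

// Minimal sketch of the kind of helper decimalTextField is:
// a TextField bound to a Double through a NumberFormatter.
func decimalTextField(_ value: Binding<Double>, fraction: Int) -> some View {
    let formatter = NumberFormatter()
    formatter.numberStyle = .decimal
    formatter.maximumFractionDigits = fraction
    return TextField("", value: value, formatter: formatter)
        .keyboardType(.decimalPad)
}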

When I add the onLongPressGesture() to the Grid, the TextFields in the Grid stop responding entirely. Take it away, and they start working again.

Is this right? I can imagine reasons why the TextField might block or consume a Long Press event, stopping the container from seeing it, but I can't see how it would work the other way around.
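
For what it's worth, one variation I'd like to understand (sketched below, not verified to change the behaviour) attaches the long press with simultaneousGesture, so it is recognised alongside the TextFields' own interactions rather than competing with them:

// Sketch only: same Grid as above, but the long press is attached
// as a simultaneous gesture instead of via onLongPressGesture.
Grid(horizontalSpacing: 0, verticalSpacing: 0) {
    // ... same GridRows as above ...
}
.background(SwiftUI.Color.black.opacity(0.3))
.cornerRadius(8)
.simultaneousGesture(
    LongPressGesture(minimumDuration: 0.5)
        .onEnded { _ in
            UIPasteboard.general.string = String(format: "Lab: %.1f %.1f %.1f\n", L, a, b)
        }
)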

I have just thought of a special case. Suppose you were in a TabView, and wanted to swipe from one view to another. You would not want a swipe to accidentally trigger some action within the view. You would want any swipe to be offered to the TabView container first.
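
A minimal sketch of that scenario (illustrative only, not code from my app):

import SwiftUI

// A page-style TabView owns the horizontal swipe; I would expect the swipe
// to be offered to the container before any gesture inside the page content.
struct SwipePagesDemo: View {
    var body: some View {
        TabView {
            Text("Page 1")
                .onTapGesture { print("tap handled inside the page") }
            Text("Page 2")
        }
        .tabViewStyle(.page)
    }
}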

Or is the model in my head wrong? Do we identify the gesture, and then see what is listening for it; or do both happen at the same time? Do we decide whether we have a tap or a double-tap and then act, or is it more complicated?
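
Part of why I'm asking is that SwiftUI does expose explicit ordering controls. Here is a sketch (again not from my app) of the knobs I'm aware of, .gesture(_:including:) with a GestureMask and .highPriorityGesture, which makes me wonder what the default ordering actually is:

import SwiftUI

// Sketch of the explicit ordering controls attached to a single view.
struct GesturePriorityDemo: View {
    var body: some View {
        TextField("value", text: .constant(""))
            // .gesture attaches with lower precedence than gestures defined
            // inside the view, so the TextField's own handling wins on conflict.
            .gesture(LongPressGesture().onEnded { _ in print("plain .gesture") })

            // .highPriorityGesture is considered before the view's own gestures.
            .highPriorityGesture(LongPressGesture().onEnded { _ in print("high priority") })

            // A GestureMask controls which gestures are considered at all:
            // .all (default), .gesture (only the added one),
            // .subviews (only the view's own), .none (neither).
            .gesture(LongPressGesture().onEnded { _ in print("masked off") },
                     including: .subviews)
    }
}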
