OpenIntent not executed with Visual Intelligence

I'm building a new feature with the Visual Intelligence framework. My implementations of IndexedEntity and IntentValueQuery work as expected, and I can see a list of objects in the visual search results. However, my OpenIntent doesn't work. When I tap on an object, I get a message on screen saying "Sorry, something went wrong...", and the breakpoint in perform() is never triggered. Things I've tried:

  1. I added @MainActor before perform(); this didn't change anything.
  2. I set static let openAppWhenRun: Bool = true and static var supportedModes: IntentModes = [.foreground(.immediate)]; still nothing.
  3. I created a different intent for the "see more" button at the end of the feed. That AppIntent, with schema: .visualIntelligence.semanticContentSearch, worked; perform() is executed.
Answered by DTS Engineer in 853652022

I created a testing project where the issue can be reproduced: https://github.com/Helen-Xu/kitchen-app-visual-intelligence

Excellent, thank you for that.

With beta 5, I see the displayed error message has now changed to "Search results are limited", but the perform method is still not hit, and there are no errors or logs in the console.

This is because your EntityQuery is returning an empty array:

struct RecipeQuery: EntityQuery {
    func entities(for identifiers: [UUID]) async throws -> [RecipeEntity] {
        []
    }
}

However, there's one more important thing you need to do:

struct RecipeEntity: AppEntity, Sendable {
    var id: UUID = UUID()
    // …

AppEntity inherits from the Identifiable protocol, and one of the key requirements of Identifiable is that the identifier must be stable, which using the UUID initializer does not provide. This is a common mistake! That stability is foundational to a successful App Intents implementation, because these identifiers are sent to your EntityQuery implementation above from different invocations around the system. It could be from a Shortcut people have customized to open a specific recipe, or, in this circumstance, the customer tapping on one of the Visual Intelligence results to open that specific recipe from the search results. The broad control flow looks like this:

  1. Customer: Performs visual search and selects your app.
  2. Your app: Returns RecipeEntity values through RecipeIntentValueQuery.
  3. Customer: Picks a yummy recipe to get info on.
  4. The system: Gets the id of the AppEntity the customer tapped on and calls your RecipeQuery with that ID.
  5. Your app: Returns the RecipeEntity for that ID from the RecipeQuery. If your identifiers aren't stable, things fall apart here, because you can't guarantee you're returning the same AppEntity.
  6. The system: Creates the OpenRecipeIntent with the RecipeEntity from step 5 as its target and calls perform() on the intent.
  7. Your app: Your code in OpenRecipeIntent.perform() runs to configure your app's UI to display that recipe.
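The fix is twofold: give each entity a stable identifier, and make the query actually look entities up instead of returning an empty array. The AppIntents types can't run outside an app process, so here is a pure-Swift sketch of just that lookup pattern, with hypothetical Recipe and RecipeStore names standing in for the project's data layer:

```swift
import Foundation

// Hypothetical data layer: each recipe's UUID is created once, stored with
// the model, and reused, rather than minted with UUID() on every access.
struct Recipe {
    let id: UUID      // stable: persisted alongside the model
    let name: String
}

final class RecipeStore {
    private var storage: [UUID: Recipe] = [:]

    func insert(_ recipe: Recipe) {
        storage[recipe.id] = recipe
    }

    // Mirrors the role of EntityQuery.entities(for:): resolve the
    // system-supplied IDs back to the same objects surfaced earlier.
    func recipes(for identifiers: [UUID]) -> [Recipe] {
        identifiers.compactMap { storage[$0] }
    }
}
```

In the real RecipeQuery, entities(for:) would forward to a store like this, so the same IDs that went out with the Visual Intelligence results resolve back to the same recipes.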

I hope that clarifies things for you!

— Ed Ford,  DTS Engineer


It would be helpful to see a small, buildable test project focused on these intents, to understand what's happening with your project.

— Ed Ford,  DTS Engineer

@Helennn said in a comment:

Been trying to debug this one, but since it's not running in my app's process and there's no logs, it's really challenging to debug

In case you're not familiar with the idea of building a test project, take a look at Creating a test project for tips. Going through that process might help you debug this, and if not, then we have something that's focused on what you're working on to discuss further.

— Ed Ford,  DTS Engineer

After updating to Xcode 26 beta 4 and the latest iOS 26 beta, I could no longer reproduce the reported issue, because the entire Visual Intelligence framework stopped working for me without any code changes. Things that were working before, such as IntentValueQuery, are no longer executed. And I'm seeing new errors as below:

nw_read_request_report [C5] Receive failed with error "Operation timed out"
nw_endpoint_flow_fillout_data_transfer_snapshot copy_info() returned NULL
nw_read_request_report [C23] Receive failed with error "Operation timed out"
nw_endpoint_flow_fillout_data_transfer_snapshot copy_info() returned NULL
Failed to terminate process: Error Domain=com.apple.extensionKit.errorDomain Code=18 "(null)" UserInfo={NSUnderlyingError=0x10c9e1b60 {Error Domain=RBSRequestErrorDomain Code=3 "No such process found" UserInfo={NSLocalizedFailureReason=No such process found}}}

iOS 26 beta 5 is now available. Do you continue to get those "No such process found" errors?

— Ed Ford,  DTS Engineer


Thank you for the very detailed response!

I'm still a bit confused about the usage of IntentValueQuery versus EntityQuery. It seems that normally an AppEntity is only mapped to an EntityQuery, but in the Visual Intelligence framework, IntentValueQuery is added to provide search results based on a given image.

You mentioned that when an object is tapped, the system gets the id from the AppEntity object, uses it to search in EntityQuery.entities(for:), and gets an AppEntity back. Isn't that a redundant step if we already had the AppEntity object in the beginning?

Additionally, the search results provided by IntentValueQuery come from a network call. If we want to be able to search within these results in EntityQuery, does that mean we'll have to store a copy of these objects somewhere? It seems unsafe to do so, and there isn't a proper point at which to remove them from storage when they're no longer needed.

Not sure if I missed anything in the Visual Intelligence framework, but I'm looking forward to your answers!

I also updated the testing project based on your advice:

  1. Implementing EntityQuery.entities(for:) does enable the OpenIntent.
  2. Looks like I can cheat EntityQuery.entities(for:) by creating a new entity with the same id, which seems to add unnecessary risk. But returning the correct RecipeEntity instance relies on keeping a copy of RecipeEntityInventory, which does not seem ideal for a real-world case where the results come from the network.

You mentioned that when an object is tapped, the system gets the id from the AppEntity object, uses it to search in EntityQuery.entities(for:), and gets an AppEntity back. Isn't that a redundant step if we already had the AppEntity object in the beginning?

Since the results are displayed outside of your process, your app has no idea which object was tapped in Visual Intelligence, so it won't have the context of the returned results.

This feature is built on top of existing system infrastructure, App Intents, where, so long as there's that stable identifier, the system can come to your app's process, query for a specific recipe by ID through an EntityQuery, and use that in concert with an OpenIntent for this case. If your app adopts other significant system features that also use App Intents (Siri, Shortcuts, Spotlight, Widgets, and more), you'll find that these are the same code paths used in all those cases, where the system can query your app with an ID based on user interactions from outside of your process. Thus, you get to write your EntityQuery once and then get tons of value across all those different features, without extra code.
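For reference, a minimal OpenIntent shape that fits this flow might look like the following. This is a sketch only: the type names echo this thread, the title and other details are assumptions, and it requires the AppIntents framework on an Apple platform to build.

```swift
import AppIntents

// Hypothetical sketch: the system resolves `target` through RecipeEntity's
// default EntityQuery before perform() is ever called, which is why a
// working entities(for:) is a prerequisite for this intent to run at all.
struct OpenRecipeIntent: OpenIntent {
    static let title: LocalizedStringResource = "Open Recipe"
    static let openAppWhenRun: Bool = true

    @Parameter(title: "Recipe")
    var target: RecipeEntity

    @MainActor
    func perform() async throws -> some IntentResult {
        // Navigate the app's UI to show `target` here.
        return .result()
    }
}
```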

Additionally, the search results provided by IntentValueQuery come from a network call. If we want to be able to search within these results in EntityQuery, does that mean we'll have to store a copy of these objects somewhere? It seems unsafe to do so, and there isn't a proper point at which to remove them from storage when they're no longer needed.

Apps backed by a large networked search infrastructure have some specific decisions to make. If you have a data model object in your project that is very detailed, containing far more information than what you'd show in something like Visual Intelligence, you may want a separate AppEntity structure that holds only the relevant info shown throughout system experiences like Visual Intelligence. Our Trails sample code project demonstrates all of the core App Intents framework concepts, and one of the things it takes care to do is separate the larger data model Trail from the smaller AppEntity version named TrailEntity, in anticipation of needs like yours. There's a large code comment at the top of TrailEntity that explains a bit more. As for your statement about storage, you'll have to determine a cache eviction strategy as part of your app's data management that makes sense.
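The Trail/TrailEntity split described above can be sketched in plain Swift. The Recipe and RecipeSummary names here are hypothetical stand-ins: the full model carries everything the app needs, while the slim type keeps only what system experiences display, and crucially both share the same stable id.

```swift
import Foundation

// Full data model: detailed, potentially large, lives in the app's data layer.
struct Recipe {
    let id: UUID
    let name: String
    let ingredients: [String]
    let instructions: String
    let nutritionNotes: String
}

// Slim type playing the role of the AppEntity: only display-relevant fields.
struct RecipeSummary {
    let id: UUID      // same stable id as the full model
    let name: String
}

extension RecipeSummary {
    // Derive the slim version from the full model, preserving the id.
    init(_ recipe: Recipe) {
        self.init(id: recipe.id, name: recipe.name)
    }
}
```

Because the two types share an identifier, an EntityQuery can resolve an ID to the slim version without ever shipping the heavyweight model through system experiences.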

Looks like I can cheat EntityQuery.entities(for:) by creating a new entity with the same id

Since you're mentioning networked results above, the common strategy here is to maintain the same ID for a data object all the way from your backend infrastructure through all the layers of your app, including the AppEntity, if that's feasible for how your data query and server search services are implemented.
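Carrying the server's identifier through every layer can be sketched like this, under the assumption that the backend returns a stable UUID string for each recipe (the RecipeDTO name and JSON shape are hypothetical):

```swift
import Foundation

// Sketch: the id is server-assigned and stable across sessions, so the
// client reuses it for the entity instead of minting a fresh UUID().
struct RecipeDTO: Codable {
    let id: UUID      // decoded from the server's stable identifier
    let name: String
}

let json = #"{"id":"E621E1F8-C36C-495A-93FC-0C247A3E6E5F","name":"Pesto"}"#
let dto = try! JSONDecoder().decode(RecipeDTO.self, from: Data(json.utf8))
// dto.id is identical every time this payload is fetched, so an entity
// built from it resolves consistently in entities(for:).
```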

— Ed Ford,  DTS Engineer

I understand how it works now, thank you for your explanation!

It sounds like use of an EntityQuery is required by the App Intents framework, and entities used for Visual Intelligence are not exempt from it. I understand that it allows an AppEntity to be more versatile, although personally I think the Entity->ID->Entity retrieval process could be optimized in the case of Visual Intelligence, assuming there's an underlying collection view that maps each cell to its Entity instance.

Thanks again for your help!
