Leveraging LLM Intelligence for Multi-Intent Queries in Semantic Kernel

Handling multi-intent queries in Semantic Kernel requires intelligent entity linking. We use prompt engineering, function choice behaviors, and contextual synthesis to improve AI accuracy without hardcoded logic.

Understanding Multi-Intent Queries and Entity Linking

Modern AI applications must process complex user queries that span multiple intents. These queries require AI to recognize implicit relationships between different entities across multiple functions. For example:

"Where is Oracle's headquarters, how's the weather there, and what flight options do I have from Atlanta?"

This query involves:

  1. General Information Retrieval – Identify Oracle’s headquarters location.
  2. Weather Forecasting – Provide weather details for the headquarters.
  3. Travel Planning – Retrieve flights from Atlanta to the headquarters’ location.

Our testing within Semantic Kernel using OpenAI GPT-4o exposed a critical entity-linking failure, where the AI misunderstood relationships between different parts of the query.
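
For context, each intent maps onto an ordinary Semantic Kernel plugin function. A minimal sketch of what such plugins might look like (the class and function names here are illustrative, not the actual implementation):

using System.ComponentModel;
using Microsoft.SemanticKernel;

public sealed class WeatherPlugin
{
    [KernelFunction, Description("Gets the current weather for a city.")]
    public string GetWeather([Description("City name")] string city)
        => $"Weather in {city}..."; // stubbed for illustration
}

public sealed class TravelPlugin
{
    [KernelFunction, Description("Finds flight options between two cities.")]
    public string GetFlights(
        [Description("Departure city")] string origin,
        [Description("Destination city")] string destination)
        => $"Flights from {origin} to {destination}..."; // stubbed for illustration
}

// Registered on the kernel so the model can invoke them:
// kernel.Plugins.AddFromType<WeatherPlugin>();
// kernel.Plugins.AddFromType<TravelPlugin>();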


Where the AI Made a Mistake

In our test scenario, the LLM was given the following query:

"Where is the HQ for Oracle Corporation? How is the weather and what flight options do I have from Atlanta GA?"

The AI extracted all three intents, but resolved the entities correctly for only the first:

  • General Information: Oracle HQ location ✅
  • Weather: retrieved for the wrong city ❌
  • Travel: destination left unresolved ❌

However, it failed in entity continuity:

  • Instead of retrieving weather for Oracle’s HQ (Austin, TX), it retrieved weather for Atlanta, GA.
  • Instead of assuming the flight destination was Oracle’s HQ, it left the destination undefined, asking for clarification.
  • The flight API request failed (400 Bad Request) because the query lacked a destination parameter.

This confirmed that while the AI correctly identified multiple intents, it failed to link their underlying entities properly.
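
To make the failure concrete, the model effectively invoked the flight function with arguments like these (reconstructed for illustration; the parameter names are assumptions, not the actual API):

using Microsoft.SemanticKernel;

// What the model supplied: an origin, but no resolved destination.
var failingArgs = new KernelArguments
{
    ["origin"] = "Atlanta, GA"
    // "destination" missing -> the downstream flight API returned 400 Bad Request
};

// What correct entity linking should have produced instead:
var linkedArgs = new KernelArguments
{
    ["origin"] = "Atlanta, GA",
    ["destination"] = "Austin, TX" // Oracle's HQ, carried over from the first intent
};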


Why Did This Happen? A Breakdown of AI Misinterpretation

We asked the LLM to explain its mistake, and it self-reflected with the following insights:

  1. Ambiguity in Entity Relationships
    • The first query lacked an explicit connection between the weather and HQ location.
    • The AI defaulted to Atlanta for weather instead of linking it to Oracle’s HQ (Austin).
    • The flight request was not properly tied to the previous HQ result.
  2. Context Simplification & Assumptions
    • The AI did not infer that the flight should be from Atlanta to Austin.
    • Instead, it treated flights as a separate, independent intent without linking it to HQ.
  3. Lessons from the Mistake
    • Improved Entity Linking: AI must prioritize recent place entities in multi-intent queries.
    • Avoiding Default Assumptions: If a flight request lacks a destination, assume the last relevant place (e.g., HQ).
    • Query Structure Awareness: Identify when words like “there” implicitly refer to a prior entity.

Fixing the Issue: Prompt Engineering & System-Level Adjustments

To address this, we enhanced Semantic Kernel’s Function Choice Behaviors and introduced new prompting strategies:

1. Improved Intent Parsing Instructions

// System prompt for the intent-parsing step.
private static string IntentParsingInstructions() => @"
You are an AI assistant for multi-intent queries. Your task is to:

1. Identify all intents (Weather, Travel, Finance, etc.).
2. Identify all place-based entities and link them correctly.
3. Maintain **contextual continuity** (e.g., ‘there’ refers to the most recently mentioned place).
4. If a user asks for flights but does not specify a destination, **assume the most relevant previous location**.
5. Return a structured JSON response that explicitly connects all detected entities to their correct intents.
";

This keeps the AI from losing context between intents and gives it sensible defaults when a detail is only implied.
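
One way to wire these instructions into the chat loop (a sketch; it assumes a `kernel` built with the OpenAI connector and the plugins registered earlier):

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Seed the conversation with the parsing instructions, then let the
// kernel auto-select which registered functions to call per intent.
var chatHistory = new ChatHistory();
chatHistory.AddSystemMessage(IntentParsingInstructions());
chatHistory.AddUserMessage(
    "Where is the HQ for Oracle Corporation? How is the weather " +
    "and what flight options do I have from Atlanta GA?");

var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var chatService = kernel.GetRequiredService<IChatCompletionService>();
var response = await chatService.GetChatMessageContentAsync(chatHistory, settings, kernel);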

2. Enhancing ResponseSynthesizerAgent for Context Awareness

// System prompt for the response-synthesis step.
private static string ResponseSynthesizerAgentInstructions() => @"
When answering multi-intent queries:

- Ensure **entity continuity** across responses.
- If a user asks about weather without specifying a location, **default to the last mentioned place**.
- If a flight request lacks a destination, **assume it is to the last relevant location**.
- Structure responses **logically**, ensuring all pieces of information flow naturally.

For example, if a user asks:
> ‘Where is Google headquartered, what’s the weather there, and what flights are available from Chicago?’

Your response should infer:
- Google HQ = Mountain View, CA ✅
- Weather = Mountain View, CA ✅
- Flights = Chicago to Mountain View’s nearest airport ✅
";

These enhancements teach the AI to recognize implicit relationships without explicit hardcoding.
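
If the synthesizer runs as a dedicated agent, it can carry these instructions directly. A sketch using the Semantic Kernel Agents package (the agent name comes from the post; the rest is assumed):

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;

// An agent that applies the synthesis instructions to the combined
// function results before the final answer goes back to the user.
var synthesizer = new ChatCompletionAgent
{
    Name = "ResponseSynthesizerAgent",
    Instructions = ResponseSynthesizerAgentInstructions(),
    Kernel = kernel
};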

3. Leveraging System Messages for Dynamic Clarification

// If the intent parser flagged the query as ambiguous, ask the user
// to clarify instead of guessing at a destination or location.
if (response.Content is not null &&
    response.Content.Contains("\"isAmbiguous\": true"))
{
    Console.Write("\n(Clarification Needed) > ");
    string? clarificationInput = Console.ReadLine()?.Trim();

    if (!string.IsNullOrWhiteSpace(clarificationInput))
    {
        // Feed the clarification back into the same chat loop.
        chatHistory.AddMessage(AuthorRole.User, clarificationInput);
        chatHistory.AddMessage(AuthorRole.System, "Ensure entity continuity when responding.");
        continue;
    }
}

Instead of making incorrect assumptions, this ensures that the AI asks for clarification when needed.
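
Substring matching on the raw content is fragile; a sturdier variant (our suggestion, not the original code) parses the structured JSON that the intent-parsing instructions ask for and reads the flag directly:

using System.Text.Json;

// Assumes the model returned valid JSON containing an "isAmbiguous" field;
// wrap in try/catch (JsonException) if the output may be free-form text.
using var doc = JsonDocument.Parse(response.Content!);
bool isAmbiguous =
    doc.RootElement.TryGetProperty("isAmbiguous", out var flag)
    && flag.ValueKind == JsonValueKind.True;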


Conclusion: Making LLMs Smarter Without Hardcoded Logic

Handling multi-intent queries is more than just function calling—it requires intelligent entity linking. Instead of relying on brittle rules, we:

  • Use advanced prompt engineering to guide entity relationships.
  • Modify response synthesis to maintain entity continuity.
  • Improve function choice behaviors to ensure relevant calls.
  • Introduce a fallback mechanism for missing parameters.

These improvements significantly reduce misinterpretations and allow the AI to respond more accurately to real-world, multi-intent queries. As LLMs evolve, their ability to recognize contextual relationships will improve, making these techniques even more effective.

This experiment highlights the importance of not just using LLMs but guiding them intelligently through structured design and well-crafted instructions.

Access the example implementation on GitHub.

