
Query Processor

The AI brain that understands customer requests and intelligently routes them to the right destination.

Overview

The Query Processor is the central intelligence hub of your IVA flow. It listens to what customers say, analyzes their intent using advanced AI, and automatically routes them to the appropriate next step in your call flow.

Think of it as a smart receptionist that understands natural language - customers can speak normally, and the Query Processor figures out what they need and where to send them.

Configuration

System Prompt

Type: text Required: Yes Default: “You are a helpful assistant that routes customer calls based on their requests.”

The System Prompt defines how the AI should behave and make routing decisions. This is where you tell the AI what your business does and how to handle different types of requests.

Example:

You are a customer service representative for Acme Corporation. Your job is to understand what customers need and route them appropriately:
- For billing questions, route to the Billing FAQ
- For technical issues, route to Technical Support
- For sales inquiries, route to the Sales team
- If you can answer the question directly, do so
Be professional, friendly, and efficient.

Tips for effective prompts:

  • Be specific about your business and its services
  • Define clear routing criteria for each connected node
  • Include any business rules or escalation policies
  • Keep instructions concise but complete
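Conceptually, the Query Processor combines your System Prompt with the titles of the connected nodes before asking the model for a routing decision. The sketch below illustrates that idea only; the function and message format are hypothetical, not the product's actual API:

```python
def build_routing_prompt(system_prompt: str, connected_nodes: list[str]) -> str:
    """Combine the user-written System Prompt with the available
    routing destinations (illustrative only, not product code)."""
    options = "\n".join(f"- {name}" for name in connected_nodes)
    return (
        f"{system_prompt}\n\n"
        f"Available destinations:\n{options}\n\n"
        "Reply with the single best destination for the caller's request."
    )

prompt = build_routing_prompt(
    "You are a customer service representative for Acme Corporation.",
    ["Billing FAQ", "Technical Support", "Sales Team"],
)
print(prompt)
```

This is why descriptive node titles matter: in a setup like this, the titles are the only description of each destination the model sees.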

AI Model

Type: select Required: Yes Default: Fast LLM (Recommended)

Choose which AI model powers the routing decisions:

Model                    Description                                    Best For
Fast LLM (Recommended)   Quick, cost-effective routing decisions        Most use cases, high-volume calls
Advanced LLM             High-quality alternative with fast responses   Complex routing scenarios

Both models offer fast, cost-effective routing. The Fast LLM is recommended for most users.

Full Flow Level Knowledge Base

Type: toggle Required: No Default: ON

Controls whether the AI has access to your flow’s shared knowledge base.

Setting        Behavior
ON (default)   AI uses the flow’s knowledge base PLUS any connected FAQ Loaders
OFF            AI only uses knowledge from directly connected FAQ Loader nodes

When to turn OFF:

  • When you want to restrict AI knowledge to specific topics at this routing point
  • Example: A “Billing Support” branch should only answer billing questions, not general company FAQs
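The toggle's behavior can be summarized as a simple selection rule. This sketch is only an illustration of the described behavior, not actual product code:

```python
def knowledge_sources(flow_kb: list[str], faq_loaders: list[str],
                      full_flow_kb_on: bool) -> list[str]:
    """Return the knowledge sources visible to this Query Processor,
    mirroring the toggle described above (illustrative only)."""
    if full_flow_kb_on:
        return flow_kb + faq_loaders   # shared KB plus connected loaders
    return faq_loaders                 # only directly connected loaders

# A billing branch with the toggle OFF sees only billing content:
print(knowledge_sources(["Company FAQ"], ["Billing FAQ"],
                        full_flow_kb_on=False))
# → ['Billing FAQ']
```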

Temperature

Type: slider Range: 0.0 - 1.0 Default: 0.1

Controls how creative vs. focused the AI responses are:

Value       Behavior
0.0 - 0.3   Very focused, consistent responses (recommended for routing)
0.4 - 0.6   Balanced creativity and consistency
0.7 - 1.0   More creative, varied responses

For call routing, keep this low (0.1) to ensure consistent, predictable behavior.
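The intuition behind the slider is a general property of LLM sampling, not product-specific code: temperature rescales the model's scores before a choice is sampled, so low values make the top option overwhelmingly likely. A quick illustration:

```python
import math

def softmax_with_temperature(logits: list[float],
                             temperature: float) -> list[float]:
    """Convert raw scores to probabilities; low temperature sharpens
    the distribution toward the highest-scoring option."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numeric stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # e.g. scores for three routing options
print(softmax_with_temperature(logits, 0.1))  # near-certain top choice
print(softmax_with_temperature(logits, 1.0))  # noticeably more spread out
```

At 0.1 the top option's probability is effectively 1, which is why routing stays predictable; at 1.0 the other options retain real probability mass, so the same caller phrase can route differently on different calls.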

Max Tokens

Type: number Range: 50 - 2000 Default: 200

The maximum length of AI responses. Higher values allow longer responses but increase cost.

Recommended values:

  • 150-200: Standard routing decisions
  • 300-500: When AI needs to provide detailed answers
  • 500+: Complex conversational scenarios
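When sizing Max Tokens, a rough rule of thumb is that English text averages about four characters per token. This heuristic (not an exact tokenizer) lets you estimate the budget a typical answer needs:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: ~4 characters per English token (heuristic)."""
    return max(1, round(len(text) / chars_per_token))

answer = "Your bill is due on the first of each month. You can pay online."
print(estimate_tokens(answer))  # → 16, comfortably inside the 200 default
```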

How It Works

  1. Customer speaks or sends a message - The Query Processor receives the input
  2. AI analyzes the request - Determines what the customer needs using natural language understanding
  3. Routes to the right node - Based on connected nodes and your system prompt, the AI picks the best destination
  4. Can answer directly - If connected to a Knowledge Base or FAQ Loader, the AI can answer questions before routing
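The four steps above can be sketched as a simple routing function. Here `classify_intent` stands in for the LLM with a naive keyword match; everything in this sketch is hypothetical, for illustration only:

```python
def classify_intent(utterance: str, destinations: list[str]) -> str:
    """Stand-in for the LLM call: naive keyword matching for illustration."""
    keywords = {"bill": "Billing FAQ",
                "broken": "Technical Support",
                "buy": "Sales Team"}
    for word, dest in keywords.items():
        if word in utterance.lower():
            return dest
    return destinations[0]  # fall back to the first connected node

def route(utterance: str, destinations: list[str]) -> str:
    # 1. receive input  2. analyze the request  3. pick a destination
    return classify_intent(utterance, destinations)

print(route("My bill looks wrong", ["General FAQ", "Billing FAQ"]))
# → Billing FAQ
```

The real node uses natural language understanding rather than keywords, which is what lets callers speak normally instead of reciting menu options.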

Connection Rules

Direction   Allowed    Notes
Inputs      Multiple   Can receive from Start Node, TTS Response, other Query Processors
Outputs     Multiple   Each connected node becomes a routing option

Key concept: Every node you connect to a Query Processor becomes an available routing destination. The AI uses your System Prompt to decide which destination is best for each customer request.

Examples

Basic Example: Simple Routing

A straightforward setup where the Query Processor routes between FAQ answers and human transfer.

Start --> Query Processor
Query Processor --> FAQ Loader (for product questions)
Query Processor --> Transfer (for complex issues)
Query Processor --> End (when customer is satisfied)

Configuration:

  • System Prompt: “Route product questions to FAQ, transfer complex issues to support, end call when resolved”
  • AI Model: Fast LLM
  • Temperature: 0.1
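If you keep node settings in notes or version control, the configuration above might be captured as a small record. The field names here are illustrative, not the product's export format:

```python
# Illustrative settings record for the basic routing example above.
query_processor_config = {
    "system_prompt": ("Route product questions to FAQ, transfer complex "
                      "issues to support, end call when resolved"),
    "ai_model": "Fast LLM",
    "temperature": 0.1,
    "max_tokens": 200,                 # default from the Max Tokens section
    "full_flow_knowledge_base": True,  # default toggle state
}
```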

Advanced Example: Multi-Branch Routing

A comprehensive call center flow with specialized handlers for different departments.

Start --> Query Processor
Query Processor --> Billing FAQ Loader
Query Processor --> Technical Support Transfer
Query Processor --> Sales Team Transfer
Query Processor --> General FAQ Loader
Query Processor --> End Node

Configuration:

  • System Prompt: Detailed instructions for each department with specific keywords and scenarios
  • Knowledge Base: ON (for general questions)
  • Connected FAQ Loaders: Specialized knowledge for billing and general topics

Specialized Example: Restricted Knowledge

When you need the AI to only answer questions about specific topics at a certain point in the flow.

Main Query Processor --> Billing Query Processor (KB OFF) --> Billing FAQ only

Configuration:

  • Full Flow Level Knowledge Base: OFF
  • Only the connected Billing FAQ Loader knowledge is available
  • Prevents the AI from answering non-billing questions in this branch

Best Practices

Write clear, specific system prompts

  • Tell the AI exactly what your business does
  • Define specific routing criteria for each connected node
  • Include examples of customer phrases and where they should route

Connect only the nodes you need

  • Each connected node appears as a routing option
  • Too many options can slow down decisions
  • Remove connections that are rarely or never used

Use descriptive node titles

  • Give connected nodes clear names like “Billing Support” or “Schedule Appointment”
  • The AI uses these titles to understand routing options

Test with real customer phrases

  • Use the Chat tester to try different customer requests
  • Verify the AI routes to the expected destinations
  • Refine your system prompt based on testing

Keep temperature low for routing

  • Low temperature (0.1) ensures consistent, predictable routing
  • Higher values can lead to unexpected behavior

Common Issues

“The AI keeps routing to the wrong node”

Solution: Review your System Prompt and make routing criteria more specific. Add examples of phrases that should route to each destination. Test with the Chat feature to see how the AI interprets requests.

“Responses are slow”

Solution:

  • Use the Fast LLM model (recommended)
  • Reduce your System Prompt length
  • Lower the Max Tokens setting
  • Ensure you have a stable internet connection

“The AI can’t answer questions from my FAQ”

Solution:

  1. Verify an FAQ Loader or Knowledge Base node is connected
  2. Check that “Full Flow Level Knowledge Base” is ON (or an FAQ Loader is directly connected)
  3. Ensure the FAQ Loader has content saved
  4. Test the FAQ Loader independently to verify knowledge is loaded

“Low confidence in routing decisions”

Solution:

  • Make your System Prompt more specific
  • Add clear routing rules for edge cases
  • Give connected nodes descriptive titles
  • Consider splitting complex routing into multiple Query Processors

“The AI isn’t collecting caller information”

Solution: When connected to a Notification Node, ensure your System Prompt instructs the AI to gather the required information. The AI will naturally collect fields like name, email, or message when properly configured.

Related Nodes

  • Start Node - Entry point that typically connects to Query Processor first
  • FAQ Loader - Provides structured Q&A data for the AI to answer questions
  • Knowledge Base - Adds documents and URLs as knowledge sources
  • TTS Response - Plays audio responses; can loop back to Query Processor for more input
  • Transfer Node - Routes calls to human agents; common escalation destination
  • Conditional Transfer - Advanced routing with multiple transfer rules
  • Notification Node - Collects caller information and sends notifications
  • End Node - Terminates calls; used when conversation is complete

The Query Processor is the heart of intelligent call routing. Take time to craft your System Prompt carefully, and test thoroughly with the Chat feature before going live.
