
Build a Voice-First Insurance Claim Live Agent Team

Multi-agent voice-first FNOL app with Google ADK and Gemini Live (100% open source)

Filing an insurance claim by phone is messy: the claimant tells an emotional, unstructured story, and an agent on the other end tries to translate it into a rigid form in real time. Voice AI is built for exactly this gap.

In this tutorial, you will build a voice-first FNOL (first notice of loss) app where a claimant talks naturally and an agent assembles a structured claim packet live. The UI shows the transcript, extracted facts, missing items, routing, and an adjuster-ready handoff.

The stack is Google ADK for the workflow graph and Gemini Live for the voice.

What is Google ADK? It's Google's framework for building production-ready multi-agent systems. It provides model-agnostic agent orchestration, native tool integration (like Google Search), and a SequentialAgent pattern that lets you chain specialized agents into sophisticated workflows.

Don’t forget to share this tutorial on your social channels and tag Unwind AI (X, LinkedIn, Threads, Facebook) to support us!

What We’re Building

A voice and text FNOL intake app that:

  • Lets the claimant speak or type

  • Streams audio responses back in real time via Gemini Live

  • Extracts structured claim facts into Pydantic schemas

  • Classifies claim type and severity

  • Applies deterministic rules for missing fields, required documents, fraud signals, and safety escalations

  • Builds a Markdown adjuster handoff packet during the call

  • Avoids promising coverage, payment, or liability

How It Works

Every time the claimant speaks or types, the app does this in the background:

  1. Listen. Voice gets transcribed; text comes in directly.

  2. Understand. Gemini reads the full conversation so far and pulls out structured facts like name, policy, date, location, what happened, evidence, injuries.

  3. Classify. Gemini decides what kind of claim this is and how severe it looks.

  4. Apply rules. Python checks: are required fields missing? What documents will the adjuster need? Are there fraud or safety red flags?

  5. Decide routing. The rules pick one of four lanes: ready_for_adjuster, needs_docs, special_investigation, or emergency_escalation. A safety flag (injury, unsafe housing) always wins.

  6. Build the packet. A Markdown handoff packet is assembled, plus the next thing the agent should say.

  7. Update the UI. Fields, timeline, and packet refresh in the browser. The agent speaks back.

The key idea is the division of labor: Gemini handles messy human language, while Python handles decisions that have to be consistent. You don't want an LLM deciding whether a claim has all its documents or whether to escalate for safety; those are exactly the decisions that need to come out the same every time.
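The seven steps above can be sketched as a single per-turn function. This is a minimal, stdlib-only sketch, not the repo's code: the `llm_*` functions are keyword-based stand-ins for the real Gemini calls, and all names here are illustrative.

```python
def llm_extract_facts(transcript: str) -> dict:
    # Stand-in for the LLM extraction step (step 2).
    return {
        "loss_description": transcript,
        "injuries": ["injury"] if "hurt" in transcript or "injured" in transcript else [],
        "policy_number": "POL-123" if "POL-123" in transcript else "",
    }

def llm_classify(facts: dict) -> str:
    # Stand-in for the LLM classification step (step 3).
    return "auto_collision" if "car" in facts["loss_description"] else "other"

def missing_required_fields(facts: dict) -> list:
    # Deterministic check (step 4): rules, not an LLM, decide completeness.
    return [k for k in ("policy_number",) if not facts.get(k)]

def decide_routing(facts: dict, missing: list) -> str:
    # Deterministic routing (step 5): a safety flag always wins.
    if facts["injuries"]:
        return "emergency_escalation"
    if missing:
        return "needs_docs"
    return "ready_for_adjuster"

def process_turn(transcript: str) -> dict:
    facts = llm_extract_facts(transcript)      # LLM: messy language
    claim_type = llm_classify(facts)           # LLM: judgment call
    missing = missing_required_fields(facts)   # Python: consistent
    routing = decide_routing(facts, missing)   # Python: consistent
    return {"claim_type": claim_type, "missing": missing, "routing": routing}
```

Notice that the two LLM stubs could be swapped for real model calls without touching the deterministic half, which is the whole point of the split.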

Prerequisites

Before we begin, make sure you have the following:

  1. Python installed on your machine (version 3.12 is recommended)

  2. Your Gemini API key for using Gemini models

  3. A code editor of your choice

  4. Basic Python, FastAPI, and async familiarity

  5. A browser with microphone access for voice

Code Walkthrough

Setting Up the Environment

First, let's get our development environment ready:

  1. Clone the GitHub repository:

git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
  2. Go to the insurance_claim_live_agent_team folder:

cd awesome-llm-apps/voice_ai_agents/insurance_claim_live_agent_team
  3. Install the required dependencies:

pip install -r requirements.txt
  4. Grab your Gemini API key from Google AI Studio.

  5. Copy the env file and add your key:

cp .env.example .env
  6. In .env:

GOOGLE_GENAI_USE_VERTEXAI=False
GOOGLE_API_KEY=your-google-api-key

Creating the App

Project structure:

insurance_claim_live_agent_team/
├── __init__.py            # Exports root_agent for `adk web`
├── agent.py               # ADK graph + run_claim_workflow
├── schemas.py             # Pydantic data contracts
├── policies.py            # Deterministic insurance rules
├── examples.py            # Demo claimant prompts
├── requirements.txt
├── .env.example
├── live_demo/
│   ├── server.py          # FastAPI transport
│   ├── index.html         # Frontend
│   ├── styles.css         # Frontend
│   └── app.js             # Frontend
└── README.md

We skip the frontend code (index.html, styles.css, app.js). It's a static cockpit that talks to two endpoints (/api/message, /api/audio) and a WebSocket (/ws/live) — swap it for any UI you like as long as those three surfaces are honored.

schemas.py — Data Contracts

  1. Pydantic models define everything passed between steps. The headline schema, ClaimNarrative, is what the LLM extracts from the claimant's story:

class ClaimNarrative(BaseModel):
    policyholder_name: str
    policy_number: str
    contact_method: str
    date_of_loss: str
    loss_location: str
    loss_description: str
    estimated_loss_usd: Optional[float] = None
    injuries_or_safety_concerns: list[str] = Field(default_factory=list)
    evidence_available: list[str] = Field(default_factory=list)
    documents_mentioned: list[str] = Field(default_factory=list)
    # ...

One schema per pipeline step: FieldValidation, ClaimClassification, CoverageEvidenceDecision, DocumentChecklist, FraudSafetyGate, ClaimIntakePacket. Literal types lock down values like ClaimType and RoutingDecision so the rules can switch on them safely.
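Here is a stdlib-only sketch of how Literal types close off the value set. The routing values come straight from the routing lane list above; the ClaimType values beyond home_water_damage and auto_collision are illustrative assumptions, not copied from the repo:

```python
from typing import Literal, get_args

# Routing lanes named in the pipeline; the rules can switch on these safely.
RoutingDecision = Literal[
    "ready_for_adjuster", "needs_docs", "special_investigation", "emergency_escalation"
]

# First two values appear in the article; the rest are illustrative.
ClaimType = Literal["home_water_damage", "auto_collision", "theft", "travel", "other"]

def coerce_routing(value: str) -> str:
    # Because the value set is closed, anything an LLM might hallucinate
    # outside it is rejected before the rules ever see it.
    if value not in get_args(RoutingDecision):
        raise ValueError(f"unknown routing decision: {value!r}")
    return value
```

In the real schemas, Pydantic enforces these Literal fields automatically at validation time; this sketch just makes the mechanism explicit.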

policies.py — Deterministic Insurance Rules

  1. This is the boring file, by design. It exposes five public functions, each of which takes structured inputs and returns structured outputs:

validate_required_claim_fields(claim)
apply_coverage_and_evidence_rules(claim, validation, classification)
generate_document_checklist(claim, classification, coverage)
fraud_signal_and_safety_gate(claim, validation, classification, coverage)
build_claim_intake_packet(...)
  2. Required documents are mapped per claim type:

TYPE_REQUIRED_DOCS = {
    "home_water_damage": [
        ("Photos or video of damaged areas before cleanup", "..."),
        ("Mitigation or drying invoice", "..."),
        ("Repair estimate or contractor assessment", "..."),
    ],
    "auto_collision": [...],
    # ...
}

fraud_signal_and_safety_gate is the most consequential — if it sees injury or unsafe-living mentions, it forces routing to emergency_escalation regardless of the rest of the pipeline.
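The override behavior can be sketched like this. The function body and keyword list are assumptions for illustration, not the repo's implementation; only the precedence order (safety beats fraud beats everything upstream) comes from the article:

```python
# Illustrative keywords; the real gate reads structured fields, not raw text.
SAFETY_KEYWORDS = ("injury", "injured", "unsafe", "uninhabitable")

def safety_gate(claim_mentions: list, fraud_flags: list, upstream_routing: str) -> str:
    if any(any(k in m.lower() for k in SAFETY_KEYWORDS) for m in claim_mentions):
        return "emergency_escalation"   # safety always wins
    if fraud_flags:
        return "special_investigation"  # fraud signals divert for review
    return upstream_routing             # otherwise keep the pipeline's call
```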

agent.py — The ADK Graph and the Workflow Bridge

Two jobs: define the SequentialAgent (root_agent), and expose run_claim_workflow for the server.

  1. The graph is seven steps, alternating LLM and Python nodes:

def create_workflow() -> SequentialAgent:
    return SequentialAgent(
        name="insurance_claim_live_agent_team",
        sub_agents=[
            create_normalizer(),                            # LLM
            FunctionNode(name="ValidateRequiredClaimFields", ...),   # Python
            create_classifier(),                            # LLM
            FunctionNode(name="ApplyCoverageAndEvidenceRules", ...),
            FunctionNode(name="GenerateDocumentChecklist", ...),
            FunctionNode(name="FraudSignalAndSafetyGate", ...),
            FinalPacketNode(name="FinalClaimIntakePacket", ...),
        ],
    )

root_agent = create_workflow()
  2. LLM nodes are LlmAgent with a Pydantic output_schema — that's what guarantees structured output:

def create_normalizer() -> LlmAgent:
    return LlmAgent(
        name="NormalizeClaimNarrative",
        model=MODEL,  # "gemini-3-flash-preview"
        instruction="""You are the intake specialist...
        Read the claim narrative and produce a structured ClaimNarrative.
        Do not invent policy numbers, contacts, dates, or evidence.""",
        output_schema=ClaimNarrative,
        output_key="normalized_claim",
    )
  3. Python nodes wrap a handler that reads from and writes to ADK session state:

class FunctionNode(BaseAgent):
    handler: Callable[[InvocationContext], dict[str, Any]]
    output_key: str

    @override
    async def _run_async_impl(self, ctx):
        result = self.handler(ctx)
        ctx.session.state[self.output_key] = result
        yield _state_event(self.name, self.summary, {self.output_key: result})
  4. Handlers delegate straight to policies.py — no logic in the graph layer:

def _validate_claim_handler(ctx):
    return validate_required_claim_fields(ctx.session.state.get("normalized_claim"))
  5. The bridge that lets the live UI use this graph is run_claim_workflow:

async def run_claim_workflow(claimant_transcript, *, session_id=None, user_id="live-ui"):
    if not claimant_transcript.strip():
        return build_initial_workflow_state()

    session_service = InMemorySessionService()
    await session_service.create_session(app_name=APP_NAME, user_id=user_id, session_id=...)
    runner = Runner(app_name=APP_NAME, agent=root_agent, session_service=session_service)

    message = genai_types.Content(role="user", parts=[
        genai_types.Part(text=f"Use this claimant transcript as source of truth.\n\n{claimant_transcript}")
    ])

    async for _ in runner.run_async(user_id=user_id, session_id=..., new_message=message):
        pass

    state = (await session_service.get_session(...)).state
    # validate each output through Pydantic, return as a dict

live_demo/server.py — FastAPI Transport

  1. server.py does no claim reasoning of its own. It serves the frontend, manages per-claimant sessions, accepts text, audio, and WebSocket traffic, and calls run_claim_workflow:

from agent import MODEL, blank_claim, build_initial_workflow_state, run_claim_workflow
  2. The bridge function is tiny:

async def _process_with_adk_graph(session, *, add_claimant_facing_reply):
    workflow = await run_claim_workflow(_claimant_text(session), session_id=session.session_id)
    if add_claimant_facing_reply:
        packet = workflow["claim_intake_packet"]
        session.transcript.append({"speaker": "Agent", "text": packet["claimant_next_message"]})
    return _state_from_workflow(session, workflow)

The agent's reply comes from the deterministic packet's claimant_next_message — no separate LLM call.

  3. The three transport surfaces:

@app.post("/api/message")        # text turn
@app.post("/api/audio")          # uploaded audio (transcribed, then same path)
@app.websocket("/ws/live")       # Gemini Live bidirectional voice

For /ws/live, the graph runs as a side task (asyncio.create_task(...)), so audio keeps streaming back while the structured packet updates in the background. The user never waits on the rules.
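The side-task pattern can be sketched in a few lines. Names here are illustrative stand-ins (run_graph for run_claim_workflow, a list for the audio channel), not the repo's code:

```python
import asyncio

async def run_graph(text: str) -> dict:
    await asyncio.sleep(0.05)          # stands in for the slow ADK pipeline
    return {"routing": "needs_docs"}

async def handle_live_turn(text: str, sent: list) -> dict:
    graph_task = asyncio.create_task(run_graph(text))  # kick off in background
    sent.append("audio-chunk")         # audio streams back immediately
    return await graph_task            # structured packet lands later

async def main():
    sent = []
    packet = await handle_live_turn("basement flooded", sent)
    return sent, packet
```

The claimant hears audio as soon as it is available, and the UI picks up the packet whenever the graph finishes; neither waits on the other.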

__init__.py — Optional Surface for adk web

from .agent import root_agent
__all__ = ["root_agent"]

Two reasons it exists: it makes the folder a Python package (so the relative imports inside agent.py resolve), and it lets ADK's CLI tools — adk web, adk run, adk api_server — auto-discover root_agent.

The live demo imports run_claim_workflow from agent.py directly; adk web exercises the same root_agent through ADK's dev UI. Both hit the same graph.

Caveat: this tutorial is built around the FastAPI live demo. You don't need adk web to run anything here, but pointing it at this package after the app is running is a useful debugging trick to inspect what each node wrote into session state.

examples.py — Demo Prompts

Five canned claimant narratives: a basement flood, a car accident with injuries, a stolen laptop without a police report, a travel cancellation, and a deliberately vague claim. They are useful for adk web testing and rule sanity checks.

Running the App

With our code in place, it's time to launch the app.

Start the backend and frontend with one command:

python -m uvicorn live_demo.server:app --reload --host 127.0.0.1 --port 4177

Open:

http://127.0.0.1:4177/index.html

Click the mic to start a live claim, or type. Watch the right panel populate fields and build the handoff packet as you talk. Try a prompt from examples.py for a quick demo.

To inspect the graph step-by-step:

adk web

Working Application Demo

Conclusion

You've built a voice-first FNOL intake app with a clean separation: schemas as contracts, policies as deterministic logic, an ADK graph as the workflow, and FastAPI as pure transport.

A few directions to play with from here:

  • Support new claim types and see how far the pipeline carries you for free

  • Persist conversations so claims survive a backend restart

  • Expand the fraud and safety signals for richer routing

  • Wire the final packet into your real claims system, CRM, or messaging tool

  • Build an eval set to catch regressions when you tweak prompts or rules

The hybrid LLM-plus-rules pattern generalizes far beyond insurance!

Keep experimenting with different configurations and features to build more sophisticated AI applications.

We share hands-on tutorials like this 2-3 times a week to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

