Build an AI Domain Deep Research Agent
Fully functional agentic deep research app with step-by-step instructions (100% open source)
Let's face it — building good research tools is hard. When you're trying to create something that can actually find useful information and deliver it in a meaningful way, you're usually stuck cobbling together different search APIs, prompt engineering for hours, and then figuring out how to get the results into a shareable format. It's a headache, and the results are often inconsistent.
In this tutorial, we'll build an AI Domain Deep Research Agent that does all the heavy lifting for you. The app uses three specialized agents built with the Agno framework, powered by Qwen's new flagship model Qwen 3 235B via Together AI, and equipped with tools via Composio to generate targeted questions, search across multiple platforms, and compile professional reports, all behind a clean Streamlit interface.
What makes this deep research app different from other tools out there is its unique approach: it automatically breaks down topics into specific yes/no research questions, combines results from both Tavily and Perplexity AI for better coverage, and formats everything into a McKinsey-style report that's automatically saved to Google Docs.
What We’re Building
An advanced AI research agent built using the Agno Agent framework, Together AI's Qwen model, and Composio tools. This agent helps users conduct comprehensive research on any topic by generating research questions, finding answers through multiple search engines, and compiling professional reports with Google Docs integration.
Features
👫 Team of specialized AI agents:
- Question Generator Agent creates 5 specific yes/no research questions based on your topic and domain
- Research Agent leverages multiple search tools (Tavily and Perplexity AI) for comprehensive information gathering
- Report Compilation Agent transforms raw research into professional McKinsey-style reports
🧠 Intelligent Question Generation:
- Automatically generates 5 specific research questions about your topic
- Tailors questions to your specified domain
- Focuses on creating yes/no questions for clear research outcomes
🔎 Multi-Source Research:
- Uses Tavily Search for comprehensive web results
- Leverages Perplexity AI for deeper analysis
- Combines multiple sources for thorough research
📊 Professional Report Generation:
- Compiles research findings into a McKinsey-style report
- Structures content with executive summary, analysis, and conclusion
- Creates a Google Doc with the complete report
🖥️ User-Friendly Interface:
- Clean Streamlit UI with intuitive workflow
- Real-time progress tracking
- Expandable sections to view detailed results
How The App Works
User Input & Setup: Users begin by entering their API keys, research topic (e.g., "American Tariffs"), and domain (e.g., "Economics") in the Streamlit interface.
Multi-Agent Processing Pipeline: Behind the scenes, three specialized agents work in sequence to complete the research:
Question Generator Agent analyzes the topic and domain, then creates 5 specific yes/no research questions using the Qwen 3 235B model. These questions appear in the interface for the user to review.
Research Agent takes each question and conducts comprehensive searches using both Tavily Search and Perplexity AI simultaneously. This ensures broader coverage than single-source research tools. The agent processes the search results, synthesizes the information, and provides detailed answers for each question.
Report Compiler Agent takes all the research findings and transforms them into a structured, professional McKinsey-style report with an executive summary, detailed analysis sections, and a conclusion. The agent also automatically exports this report to Google Docs using the Composio integration.
Real-Time Progress Tracking: Throughout this process, the Streamlit interface displays real-time progress with status indicators, expandable results sections, and a final success message when the Google Doc is created.
This multi-agent architecture allows each component to specialize in a specific task, producing higher quality results than a single agent attempting to handle the entire workflow. The Agno framework coordinates these agents, managing the handoffs between stages and maintaining context throughout the research process.
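To make the handoffs concrete, here's a minimal, self-contained sketch of the three-stage pipeline. The stub functions are hypothetical stand-ins for the real Agno agents built later in this tutorial, not the actual implementation:

```python
# Hypothetical sketch of the three-stage handoff; each stub stands in
# for one of the real agents built later in this tutorial.
def generate_questions_stub(topic, domain):
    # Stage 1: break the topic into five yes/no research questions
    return [f"Q{i}: Is {topic} reshaping {domain}?" for i in range(1, 6)]

def research_stub(question):
    # Stage 2: answer one question (the real agent calls Tavily + Perplexity)
    return f"Finding for {question!r}"

def compile_report_stub(topic, domain, qa_pairs):
    # Stage 3: assemble the findings into a single HTML report
    sections = "\n".join(f"<h2>{q}</h2>\n<p>{a}</p>" for q, a in qa_pairs)
    return f"<h1>{topic} ({domain})</h1>\n{sections}"

def run_pipeline(topic, domain):
    questions = generate_questions_stub(topic, domain)
    answers = [research_stub(q) for q in questions]
    return compile_report_stub(topic, domain, list(zip(questions, answers)))
```

Each stage consumes exactly what the previous stage produced, which is the handoff pattern the Agno framework manages for the real agents.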
Prerequisites
Before we begin, make sure you have the following:
Python installed on your machine (version 3.10 or higher is recommended)
Your Together AI and Composio API keys
A code editor of your choice (we recommend VS Code or PyCharm for their excellent Python support)
Basic familiarity with Python programming
Code Walkthrough
Setting Up the Environment
First, let's get our development environment ready:
Clone the GitHub repository:
```bash
git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
```
Go to the ai_domain_deep_research_agent folder:
```bash
cd advanced_ai_agents/multi_agent_apps/ai_domain_deep_research_agent
```
Install the required dependencies:
```bash
pip install -r requirements.txt
```
Add the necessary Composio tools:
```bash
composio add googledocs
composio add perplexityai
```
Create a .env file in the project directory and add your API keys:
```
TOGETHER_API_KEY=your_together_api_key
COMPOSIO_API_KEY=your_composio_api_key
```
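Because the app falls back to empty sidebar fields when a key is missing, a quick sanity check before launching can save a confusing first run. Here's a small stdlib-only helper (hypothetical, not part of the repo) that reports unset keys:

```python
import os

# The two keys the app reads from .env (via python-dotenv) or the sidebar
REQUIRED_KEYS = ("TOGETHER_API_KEY", "COMPOSIO_API_KEY")

def missing_keys(env=None):
    # Return the names of required keys that are unset or empty
    env = os.environ if env is None else env
    return [k for k in REQUIRED_KEYS if not env.get(k)]

if __name__ == "__main__":
    missing = missing_keys()
    if missing:
        print("Missing keys:", ", ".join(missing))
    else:
        print("All required keys are set.")
```

Run it once with `python check_keys.py` (any filename works) before starting Streamlit.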
Creating the Streamlit App
Let's create our app. Create a new file ai_domain_deep_research_agent.py and add the following code:
Import necessary libraries and configure UI:
```python
import os
import asyncio
import streamlit as st
from dotenv import load_dotenv
from agno.agent import Agent
from composio_agno import ComposioToolSet, Action
from agno.models.together import Together

# Load environment variables
load_dotenv()

# Set page config
st.set_page_config(
    page_title="AI DeepResearch Agent",
    page_icon="🔍",
    layout="wide",
    initial_sidebar_state="expanded"
)
```
Setup Sidebar for API Keys and Information:
```python
# Sidebar for API keys
st.sidebar.header("⚙️ Configuration")

# API key inputs
together_api_key = st.sidebar.text_input(
    "Together AI API Key",
    value=os.getenv("TOGETHER_API_KEY", ""),
    type="password",
    help="Get your API key from https://together.ai"
)
composio_api_key = st.sidebar.text_input(
    "Composio API Key",
    value=os.getenv("COMPOSIO_API_KEY", ""),
    type="password",
    help="Get your API key from https://composio.ai"
)

# Sidebar info
st.sidebar.markdown("---")
st.sidebar.markdown("### About")
st.sidebar.info(
    "This AI DeepResearch Agent uses Together AI's Qwen model and Composio tools to perform comprehensive research on any topic. "
    "It generates research questions, finds answers, and compiles a professional report."
)
```
Initialize Session State and Agent Setup:
```python
# Initialize session state
if 'questions' not in st.session_state:
    st.session_state.questions = []
if 'question_answers' not in st.session_state:
    st.session_state.question_answers = []
if 'report_content' not in st.session_state:
    st.session_state.report_content = ""
if 'research_complete' not in st.session_state:
    st.session_state.research_complete = False

def initialize_agents(together_key, composio_key):
    # Initialize Together AI LLM
    llm = Together(id="Qwen/Qwen3-235B-A22B-fp8-tput", api_key=together_key)

    # Set up Composio tools
    toolset = ComposioToolSet(api_key=composio_key)
    composio_tools = toolset.get_tools(actions=[
        Action.COMPOSIO_SEARCH_TAVILY_SEARCH,
        Action.PERPLEXITYAI_PERPLEXITY_AI_SEARCH,
        Action.GOOGLEDOCS_CREATE_DOCUMENT_MARKDOWN
    ])

    return llm, composio_tools
```
Question Generation Agent:
```python
def create_agents(llm, composio_tools):
    # Create the question generator agent
    question_generator = Agent(
        name="Question Generator",
        model=llm,
        instructions="""
        You are an expert at breaking down research topics into specific questions.
        Generate exactly 5 specific yes/no research questions about the given topic in the specified domain.
        Respond ONLY with the text of the 5 questions formatted as a numbered list, and NOTHING ELSE.
        """
    )
    return question_generator

def generate_questions(llm, composio_tools, topic, domain):
    question_generator = create_agents(llm, composio_tools)
    with st.spinner("🤖 Generating research questions..."):
        questions_task = question_generator.run(
            f"Generate exactly 5 specific yes/no research questions about the topic '{topic}' in the domain '{domain}'."
        )
        questions_text = questions_task.content
        questions_only = extract_questions_after_think(questions_text)

        # Extract questions into a list
        questions_list = [q.strip() for q in questions_only.split('\n') if q.strip()]
        st.session_state.questions = questions_list
        return questions_list
```
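Note that generate_questions calls extract_questions_after_think, a helper not shown above. Qwen 3 models emit their chain-of-thought inside a `<think>...</think>` block before the actual answer, so the helper needs to strip everything up to the closing tag. Here's a minimal version that matches that behavior (a sketch, not necessarily the repo's exact code):

```python
def extract_questions_after_think(text):
    # Qwen 3 prefixes answers with a <think>...</think> reasoning block;
    # keep only what follows the closing tag (or the whole text if absent)
    marker = "</think>"
    if marker in text:
        return text.split(marker, 1)[1].strip()
    return text.strip()
```

Add this function near the top of ai_domain_deep_research_agent.py so generate_questions can find it.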
Research Question Agent:
```python
def research_question(llm, composio_tools, topic, domain, question):
    research_task = Agent(
        model=llm,
        tools=[composio_tools],
        instructions=f"You are a sophisticated research assistant. Answer the following research question about the topic '{topic}' in the domain '{domain}':\n\n{question}\n\nUse the PERPLEXITYAI_PERPLEXITY_AI_SEARCH and COMPOSIO_SEARCH_TAVILY_SEARCH tools to provide a concise, well-sourced answer."
    )
    research_result = research_task.run()
    return research_result.content
```
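The app researches the five questions one after another so it can show per-question progress. If you don't need the interactive progress updates, the question-level calls are independent and can run in parallel. A hedged, stdlib-only sketch (not part of the repo's code):

```python
from concurrent.futures import ThreadPoolExecutor

def research_all(questions, research_fn, max_workers=5):
    # Run research_fn over each question in parallel threads;
    # pool.map preserves the original question order in the results
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(research_fn, questions))
```

You would call it as `research_all(questions, lambda q: research_question(llm, composio_tools, topic, domain, q))`, trading the per-question spinners for overall speed.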
Report Compilation Agent:
```python
def compile_report(llm, composio_tools, topic, domain, question_answers):
    with st.spinner("📝 Compiling final report and creating Google Doc..."):
        qa_sections = "\n".join(
            f"<h2>{idx+1}. {qa['question']}</h2>\n<p>{qa['answer']}</p>"
            for idx, qa in enumerate(question_answers)
        )

        compile_report_task = Agent(
            name="Report Compiler",
            model=llm,
            tools=[composio_tools],
            instructions=f"""
            You are a sophisticated research assistant. Compile the following research findings into a professional, McKinsey-style report. The report should be structured as follows:
            1. Executive Summary/Introduction: Briefly introduce the topic and domain, and summarize the key findings.
            2. Research Analysis: For each research question, create a section with a clear heading and provide a detailed, analytical answer. Do NOT use a Q&A format; instead, weave the answer into a narrative and analytical style.
            3. Conclusion/Implications: Summarize the overall insights and implications of the research.
            Use clear, structured HTML for the report.

            Topic: {topic}
            Domain: {domain}

            Research Questions and Findings (for your reference):
            {qa_sections}

            Use the GOOGLEDOCS_CREATE_DOCUMENT_MARKDOWN tool to create a Google Doc with the report. The text should be in HTML format. You have to create the google document with all the compiled info. You have to do it.
            """
        )
        compile_result = compile_report_task.run()

        st.session_state.report_content = compile_result.content
        st.session_state.research_complete = True
        return compile_result.content
```
Main Application Flow:
```python
if together_api_key and composio_api_key:
    # Initialize agents
    llm, composio_tools = initialize_agents(together_api_key, composio_api_key)

    # Main content area
    st.header("Research Topic")

    # Input fields
    col1, col2 = st.columns(2)
    with col1:
        topic = st.text_input("What topic would you like to research?", placeholder="American Tariffs")
    with col2:
        domain = st.text_input("What domain is this topic in?", placeholder="Politics, Economics, Technology, etc.")

    # Generate questions section
    if topic and domain and st.button("Generate Research Questions", key="generate_questions"):
        # Generate questions
        questions = generate_questions(llm, composio_tools, topic, domain)

        # Display the generated questions
        st.header("Research Questions")
        for i, question in enumerate(questions):
            st.markdown(f"**{i+1}. {question}**")

    # Research section - only show if we have questions
    if st.session_state.questions and st.button("Start Research", key="start_research"):
        st.header("Research Results")

        # Reset answers
        question_answers = []

        # Research each question
        progress_bar = st.progress(0)
        for i, question in enumerate(st.session_state.questions):
            # Update progress
            progress_bar.progress(i / len(st.session_state.questions))

            # Research the question
            with st.spinner(f"🔍 Researching question {i+1}..."):
                answer = research_question(llm, composio_tools, topic, domain, question)
                question_answers.append({"question": question, "answer": answer})

            # Display the answer
            st.subheader(f"Question {i+1}:")
            st.markdown(f"**{question}**")
            st.markdown(answer)

            # Update progress again
            progress_bar.progress((i + 1) / len(st.session_state.questions))

        # Store the answers
        st.session_state.question_answers = question_answers

    # Compile report button - read the answers back from session state,
    # because Streamlit reruns the whole script on every button press
    if st.session_state.question_answers and st.button("Compile Final Report", key="compile_report"):
        report_content = compile_report(llm, composio_tools, topic, domain, st.session_state.question_answers)

        # Display the report content
        st.header("Final Report")
        st.success("Your report has been compiled and a Google Doc has been created.")

        # Show the full report content
        with st.expander("View Full Report Content", expanded=True):
            st.markdown(report_content)
```
Running the App
With our code in place, it's time to launch the app.
In your terminal, navigate to the project folder and run the following command:
```bash
streamlit run ai_domain_deep_research_agent.py
```
Streamlit will provide a local URL (typically http://localhost:8501). Open this in your web browser, configure your API keys, and start your deep research!
Conclusion
You've now built a powerful multi-agent AI research system that streamlines the entire research workflow. By using three specialized agents working in concert, the system delivers more focused questions, more comprehensive information gathering, and better-structured reports than a single agent could provide.
To enhance your multi-agent research system further, consider these practical improvements:
Create Domain-Specific Agents: Develop specialized research agents for fields like medicine, law, or technology that use field-specific prompts and knowledge bases.
Implement Memory Between Sessions: Add a vector database to store previous research results, allowing the system to build upon past findings for related topics.
Include Source Evaluation: Add an agent that evaluates the credibility of sources and weighs information accordingly, improving research quality.
Enable Interactive Research Refinement: Let users modify the generated questions or suggest additional search directions during the research process.
This architecture is a flexible foundation that you can adapt to various research needs and domains. By focusing each agent on what it does best, you can continue to improve individual components without rebuilding the entire system.
Keep experimenting with different configurations and features to build more sophisticated AI applications.
We share hands-on tutorials like this 2-3 times a week to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.