Custom AI Agents that Understand Your Customers
PLUS: Lovable for MCP servers, Vibe investing with AI finance agent
Today’s top AI Highlights:
Build custom AI Agents that literally understand your customers
Build RAG pipelines quickly with this open-source RAG-as-a-service
AI financial agent for vibe investing
Lovable for MCPs – No/low-code builder for MCP servers
AI that learns by watching you work – no code, no drag-and-drop
& so much more!
Read time: 3 mins
AI Tutorial
Integrating travel services as a developer often means wrestling with a patchwork of inconsistent APIs. Each API—whether for maps, weather, bookings, or calendars—brings its own implementation challenges, authentication systems, and maintenance burdens. The travel industry's fragmented tech landscape creates unnecessary complexity that distracts from building great user experiences.
In this tutorial, we’ll build a multi-agent AI travel planner using MCP servers as universal connectors. By using MCP as a standardized layer, we can focus on creating intelligent agent behaviors rather than getting bogged down in API-specific quirks. Our application will orchestrate specialized AI agents that handle different aspects of travel planning while using external services through the MCP. We'll be using the Agno framework to create and orchestrate our team of specialized AI agents.
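To get a feel for the pattern before the full tutorial, here's a minimal sketch of a single agent wired to an MCP server through Agno. It assumes Agno's Agent, OpenAIChat, and MCPTools interfaces, and the server command is a placeholder; swap in the maps, weather, or booking MCP servers you actually plan to use.

```python
import asyncio

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools


async def main() -> None:
    # MCPTools launches an MCP server over stdio and exposes its tools to the agent.
    # The command below is a placeholder, not a real package name.
    async with MCPTools(command="npx -y @your-org/your-travel-mcp-server") as mcp_tools:
        travel_agent = Agent(
            model=OpenAIChat(id="gpt-4o"),
            tools=[mcp_tools],
            instructions="You are a travel planner. Use the available tools "
                         "to look up real data before answering.",
            markdown=True,
        )
        await travel_agent.aprint_response("Plan a 3-day trip to Lisbon in June.", stream=True)


if __name__ == "__main__":
    asyncio.run(main())
```

The full tutorial extends this single-agent setup into a team of specialized agents, each handling a different aspect of the trip through services exposed over MCP.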
We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.
Latest Developments
Building AI agents that can actually talk to customers without being awkward is hard. Most of us end up with two frustrating options: deterministic flowcharts built in platforms like LangFlow or Botpress that customers routinely diverge from, or system prompts in AI agent frameworks like LangGraph that go completely off the rails in production due to AI’s non-deterministic nature.
Parlant, an open-source framework, tackles this with a third approach - "Conversation Modeling".
What is Conversation Modeling? Instead of forcing your AI down rigid paths or hoping a system prompt keeps it in line, you create a set of domain-specific guidelines that your agent can use while responding. Parlant’s Conversation Modeling Engine matches these guidelines to each customer interaction, so your agent gets just the guidance it needs at that moment. Your agent stays on-brand and follows protocols while still adapting to whatever tangents your customers throw at it.
Let’s see how it works:
Guidelines that make sense - Instead of mapping out every possible conversation path, just define what your agent should do in specific situations. These contextual nudges keep conversations flowing naturally while ensuring your agent handles important scenarios the way you want.
Tools that don't go rogue - Tired of your agent randomly calling APIs or making up data? Parlant ties tools to specific guidelines so they only run when appropriate and with the right parameters (see the sketch after this list). Your travel agent won't suddenly start searching flights when the customer is just asking about baggage policies.
Build it piece by piece - Add new capabilities or fix issues by tweaking individual guidelines without breaking everything else. This modular approach means you can start small, test with real users, and grow your agent organically.
Keep the lawyers happy - A new "Utterances" feature lets you pre-approve specific responses for sensitive situations. Instead of always generating responses dynamically, the agent first checks whether an appropriate Utterance exists; if one does, that pre-approved template is sent to the customer, filled in with any dynamic information from tool calls.
LLM-agnostic - Parlant works with all major LLM vendors, including OpenAI, Google, Anthropic, and Meta, letting you switch models to optimize for cost, performance, or capabilities.
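To make the guideline-plus-tool idea concrete, here's a rough sketch in the spirit of Parlant's Python SDK. Treat the exact names (Server, create_agent, create_guideline, the tool decorator) as assumptions drawn from the project's docs, and check the GitHub README for the current API.

```python
import asyncio

import parlant.sdk as p


# The tool is only invoked when a guideline that references it matches the
# conversation, so the agent won't call it on unrelated questions.
@p.tool
async def check_baggage_policy(context: p.ToolContext, airline: str) -> p.ToolResult:
    # Hypothetical lookup; replace with a real API call.
    return p.ToolResult(f"{airline} allows one carry-on bag up to 8 kg.")


async def main() -> None:
    async with p.Server() as server:
        agent = await server.create_agent(
            name="TravelSupport",
            description="Customer-facing travel support agent",
        )
        # A contextual guideline: 'condition' says when it applies,
        # 'action' says what the agent should do, and the tool is bound to it.
        await agent.create_guideline(
            condition="The customer asks about baggage allowances",
            action="Look up the airline's baggage policy and summarize it",
            tools=[check_baggage_policy],
        )


if __name__ == "__main__":
    asyncio.run(main())
```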
Stop compromising between control and adaptiveness – check out Parlant’s GitHub. If you’re building truly useful and reliable customer-facing AI agents, this might be the framework you have been searching for!
Unlock the Ultimate ChatGPT Toolkit
Struggling to leverage AI for real productivity gains? Mindstream has created a comprehensive ChatGPT bundle specifically for busy professionals.
Inside you'll find 5 battle-tested resources: decision frameworks, advanced prompt templates, and our exclusive 2025 AI implementation guide. These are the exact tools our 180,000+ subscribers use to automate tasks and streamline workflows.
Subscribe to our free daily AI newsletter and get immediate access to this high-value bundle.
Dcup is an open-source RAG-as-a-Service that turns your documents into a self-hostable, extensible AI-powered search engine. This platform connects to your data sources like AWS S3, Google Drive, and Dropbox, then handles the entire RAG pipeline from chunking to embedding to search. What makes Dcup stand out is its combination of hybrid search (semantic and keyword) with optional re-ranking, delivering precise, context-aware results.
The best part? You own everything – run it yourself with no vendor lock-in or black boxes.
Key Highlights:
Integration with Data Sources - Connect AWS S3, Google Drive, Dropbox, or upload files directly through clean APIs. Dcup handles the syncing automatically, so your data stays current without manual updates or complex pipelines.
Complete RAG Pipeline Out-of-the-Box - Skip building custom chunking, embedding, and indexing logic. Dcup automatically processes your files with OpenAI embeddings and stores everything in Qdrant's vector database, handling all the technical details for you (the sketch after this list shows the raw flow it replaces).
Search Capabilities - Get more relevant results with hybrid semantic+keyword search that goes beyond basic vector similarity. Optional LLM re-ranking prioritizes truly relevant content, while flexible filtering lets you narrow results using metadata.
Developer-Friendly - Deploy via Docker for quick self-hosting, or use the cloud version for zero setup. The modular architecture makes it easy to customize components for your specific needs while maintaining enterprise-grade scalability.
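To show what the pipeline in these highlights boils down to (chunk, embed with OpenAI, store in Qdrant, query), here's a minimal sketch using the openai and qdrant-client libraries directly. This is not Dcup's own API, just the raw flow that Dcup automates and wraps with syncing, hybrid search, and re-ranking.

```python
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

openai_client = OpenAI()            # requires OPENAI_API_KEY
qdrant = QdrantClient(":memory:")   # swap for your Qdrant server in production


def embed(text: str) -> list[float]:
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding


# 1. Chunk (naive fixed-size chunking, purely for illustration)
document = "..."  # your document text
chunks = [document[i:i + 500] for i in range(0, len(document), 500)]

# 2. Embed and index
qdrant.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)
qdrant.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=i, vector=embed(chunk), payload={"text": chunk})
        for i, chunk in enumerate(chunks)
    ],
)

# 3. Search (semantic only here; Dcup layers keyword matching and optional
#    LLM re-ranking on top of this step)
hits = qdrant.search(
    collection_name="docs",
    query_vector=embed("What does the report say about Q3 revenue?"),
    limit=3,
)
for hit in hits:
    print(hit.score, hit.payload["text"][:80])
```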
Quick Bites
Xynth is the first AI financial agent that can help you make investment decisions using agentic workflows and real-time market data. It autonomously pulls live data from finance websites, regulatory filings, and even social media channels, and conducts in-depth analysis tailored to your investment questions. This agent can do technical as well as fundamental analysis, macro as well as micro analysis, and also generate graphs and charts.
What sets this agent apart from other Deep Research tools is that it is deployed in a coding environment where it can freely interact with all the necessary financial APIs and run Python code to do analysis and generate charts.
Allen Institute has released OLMo 2 1B, the smallest model in the OLMo 2 family, built for local inference. Trained on 4T high-quality tokens, it follows the same training recipe as the larger OLMo 2 models and outperforms peer 1B models like Gemma and Llama 3.2. Full code, training logs, checkpoints, and model variants (Base, SFT, DPO, Instruct) are available on Hugging Face, with support for Transformers and vLLM.
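For local use, loading it through Transformers is the standard causal-LM flow. A quick sketch; the Hugging Face model id below is assumed from the release naming, so verify it on the OLMo 2 collection page before running.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id assumed from the OLMo 2 1B release; requires a recent transformers
# version with OLMo 2 support.
model_id = "allenai/OLMo-2-0425-1B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Explain retrieval-augmented generation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```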
Tools of the Trade
Generate MCP: Create and deploy MCP servers from a single prompt without writing code. It handles tool generation, hosting, and schema setup so MCP clients can call external actions directly.
Blast: A fast, multi-threaded serving engine built for web browsing AI agents, with support for automatic parallelism, prefix caching, and memory/cost budgeting. It exposes an OpenAI-compatible API that lets you stream real-time browser actions while keeping resource usage low.
Kairos: Automate repetitive tasks by actually showing your workflow. No code. No drag and drop. You share your screen and show Kairos your entire workflow while you’re explaining the reason behind each step. Kairos spins up an AI agent that can independently handle your task while understanding the nuance behind each step.
Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, MCP, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.
Hot Takes
I just realized something most people are going to lose when (as they inevitably will) they start using AIs to write everything for them. They'll lose the knowledge of how writing is constructed. ~ Paul Graham
I don't get why they all stick to this dumb $20/month model with a capped number of questions/follow-ups. The pay-per-token approach is so much more convenient, but they keep it only for developers. 🤦 ~ Andriy Burkov
That’s all for today! See you tomorrow with more such AI-filled content.
Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!
PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉