
Drag-and-Drop to Build Multi-Agent Army with 1000+ Tools

PLUS: Operating System for Memory-augmented generation, MCP server to stop AI hallucination

Today’s top AI Highlights:

  1. Drag-and-drop to build multi-agent workforce with 1000+ tools

  2. AI agents that create and execute tools on the fly, as required

  3. An Operating System for memory-augmented generation in LLMs

  4. Will your AI agent sabotage you? Anthropic actually tests for this

  5. Ask Human MCP to keep your AI from hallucinating

& so much more!

Read time: 3 mins

AI Tutorial

Building good research tools is hard. When you're trying to create something that can actually find useful information and deliver it in a meaningful way, you're usually stuck cobbling together different search APIs, prompt engineering for hours, and then figuring out how to get the results into a shareable format. It's a headache, and the results are often inconsistent.

In this tutorial, we'll build an AI Domain Deep Research Agent that does all the heavy lifting for you. The app uses three specialized agents built with Agno, powered by the Qwen 3 235B model via Together AI, that use tools via Composio to generate targeted questions, search across multiple platforms, and compile professional reports.

What makes this deep research app different from other tools out there is its unique approach: it automatically breaks down topics into specific yes/no research questions, combines results from both Tavily and Perplexity AI for better coverage, and formats everything into a McKinsey-style report that's automatically saved to Google Docs.

We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads) to support us!

Latest Developments

AI agent frameworks like LangChain and AutoGen are great if you love writing orchestration code, but what if you just want to build and deploy? AgentX strips away the complexity while keeping all the power, letting you focus on agent logic instead of infrastructure.

AgentX is a no-code multi-agent platform that lets you build AI agents like you build teams - each with a role, a brain (LLM), tools, and tasks - and they actually collaborate to get things done.

Think no-code meets full-stack control, with support for GPT-4o, Claude, Gemini, Llama 3, and 1000+ tools. From lead gen to enterprise research, from Slack bots to full research copilots: it’s all drag-and-drop.

Key Highlights:

  1. Specialized Agent Roles - Create agents with distinct responsibilities, LLMs, and toolsets that collaborate intelligently rather than competing for the same tasks.

  2. Workflow Engine - Chain agents together for complex custom workflows with real-time stats, parallel processing, and automatic task delegation based on agent capabilities.

  3. Integration Hub - Connect to Google Workspace, Notion, Zapier, and custom APIs through their MCP Server, plus deploy across WhatsApp, Discord, Slack, and web channels.

  4. Production-Ready - Built with Kubernetes for always-live deployments, persistent state management, and parallel processing.
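The role-plus-brain-plus-tools pattern behind these highlights can be sketched in a few lines. This is a hypothetical illustration of the delegation idea, not AgentX's actual engine or API; `Agent` and `run_workflow` are names invented here, and the "brain" is just a plain function standing in for an LLM call.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Minimal stand-in for a role-based agent: a role name, a 'brain'
    (here just a function mapping task text to output text), and the
    tools it is allowed to call."""
    role: str
    brain: Callable[[str], str]
    tools: list[str] = field(default_factory=list)

def run_workflow(agents: list[Agent], task: str) -> str:
    """Chain agents so each one's output becomes the next one's input,
    the simplest possible version of agents collaborating on a task."""
    for agent in agents:
        task = agent.brain(task)
    return task
```

In a real platform the chain would also handle parallel branches and capability-based routing; the point here is only that a "multi-agent workforce" reduces to composing role-specialized steps.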

Your AI just went from "I can help you plan that" to "I already built it while we were talking." This AI agent framework is your own self-hosted swarm of agents running wild inside a Docker box.

Agent Zero writes code, installs software, browses the web, and can even spin up more agents to get stuff done. You give it tasks, it builds tools. No rails. No training wheels.

It gives AI agents complete freedom to use your operating system as their toolbox. These agents can dynamically create whatever tools they need on the fly. The framework runs in its own virtual Linux environment, so you get all the power without the security risks to your main system.

Key Highlights:

  1. Computer as a Tool - Agent Zero treats your entire operating system as its toolkit, writing and executing code in Python, Node.js, or Bash as needed, with no predefined limitations on what it can install or run.

  2. Multi-Agent Hierarchy - The system can spawn subordinate agents to break down complex tasks, with each agent reporting back to its superior in a structured chain of command that keeps contexts clean and focused.

  3. Persistent Learning Memory - Built-in memory system automatically stores solutions, facts, and behavioral adjustments from past interactions, allowing agents to solve similar problems faster and more reliably over time.

  4. Complete Customization - Every aspect is modifiable through simple text files - change the system prompts, add custom tools, or modify agent behavior without touching any core code.
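The "computer as a tool" idea above boils down to: the model writes a program, the framework runs it and feeds the output back. Here is a bare-bones sketch of that loop using only the Python standard library; `run_generated_tool` is a name invented for this example, and unlike Agent Zero, which runs inside an isolated Linux environment, this runs directly on your machine, so treat it as illustration only.

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

def run_generated_tool(code: str, timeout: float = 30.0) -> str:
    """Write model-generated Python to a temp file, execute it in a
    subprocess, and return its stdout -- the core of dynamic tool use."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(textwrap.dedent(code))
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],  # run with the same interpreter
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout.strip()
    finally:
        Path(path).unlink()  # clean up the temporary tool file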

ChatGPT remembering your weekend plans is cute, but it's like celebrating a calculator when you need a supercomputer.

Current memory features are basic user preference storage - they don't solve the fundamental problem that LLMs are essentially very expensive goldfish with 30-second attention spans.

MemOS tackles this by turning memory into the central nervous system of AI applications rather than a side feature. The research team built a complete memory operating system that lets LLMs actually accumulate knowledge, maintain relationships, and evolve their capabilities over time, turning them from chatbots into persistent intelligent agents.

Key Highlights:

  1. Three-Way Memory Fusion - MemOS unifies parametric memory (baked into weights), activation memory (runtime states), and plaintext memory (external docs) into one manageable system instead of juggling separate solutions.

  2. Memory Scheduling - The system automatically decides what memory to load based on context, user patterns, and task requirements using pluggable strategies like LRU and semantic similarity.

  3. Cross-Agent Sharing - Unlike ChatGPT's isolated memory bubbles, MemOS lets different AI agents actually share and build on each other's memories while maintaining security and version control.

  4. Memory Lifecycle Control - Full version control, rollback mechanisms, access permissions, and audit trails - basically Git for AI memory with enterprise security baked in.

  5. Framework Agnostic - Works with any LLM backend and integrates with existing agent frameworks without forcing you to rewrite your entire application stack.

Quick Bites

Google is bringing NotebookLM-like Audio Overviews to Search as an experiment. Searching for a topic you’re not familiar with? These audio overviews can help you get the lay of the land, offering a convenient, hands-free way to absorb information when you're multitasking or simply prefer an audio experience.

Ever thought your AI assistant would plot against you? Anthropic's new research actually tests for this, creating elaborate scenarios where models attempt covert sabotage while maintaining their cover.

The setup resembles a digital espionage thriller: models receive benign tasks paired with secret malicious objectives, then try to execute both without triggering suspicion from monitoring systems.

Here’s a breather, at least for today: current models proved surprisingly bad at deception, often "blurting out" their hidden agendas. But the research establishes crucial benchmarks for detecting dangerous capabilities in future models that could actually pull off these schemes. The evaluation will likely become standard in pre-deployment safety testing as AI agents gain more autonomy.

Tools of the Trade

  1. ask-human-mcp: MCP server that lets AI agents pause execution and ask you questions when they're uncertain, instead of hallucinating. When an agent calls ask_human(), it writes a question to ask_human.md with "answer: PENDING", waits for you to replace that with the correct answer, then continues execution.

  2. Apple On-Device OpenAI API: A SwiftUI application that creates an OpenAI-compatible API server using Apple's on-device Foundation Models. This allows you to use Apple Intelligence models locally through familiar OpenAI API endpoints.

  3. AI Dev Tasks: A collection of MD command files built for Cursor that lets you guide the AI agent step-by-step while building software features. It breaks down the process into structured tasks—from writing a PRD to reviewing individual code changes—so the AI stays focused and you stay in control.

  4. Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, MCP, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. AI models are becoming agents out of the box ~
    Logan Kilpatrick

  2. job security doesn't exist anymore sometimes you just need to throw on whatever music pumps you up, over-caffeinate, and ship your ideas like your future depends on it because it kinda does ~
    Greg Isenberg

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
