
Hire an AI Employee with Just a Prompt

PLUS: Open-source self-improving model matches OpenAI o3, 5,000+ MCP servers with one line of code


Today’s top AI Highlights:

  1. The first AI Employee you can build with just a prompt

  2. Visually build AI agents that call other agents as tools

  3. Open-source model matches OpenAI o3 via self-improvement, not long reasoning

  4. Perplexity is using stealth, undeclared crawlers

  5. Connect to 5,000+ MCP servers and tools with just 1 line of code

& so much more!

Read time: 3 mins

AI Tutorial

Integrating travel services as a developer often means wrestling with a patchwork of inconsistent APIs. Each API - whether for maps, weather, bookings, or calendars - brings its own implementations, auth, and maintenance burdens. The travel industry's fragmented tech landscape creates unnecessary complexity that distracts from building great user experiences.

In this tutorial, we’ll build a multi-agent AI travel planner using MCP servers as universal connectors. By using MCP as a standardized layer, we can focus on creating intelligent agent behaviors rather than on API-specific quirks. Our application will orchestrate specialized AI agents that handle different aspects of travel planning while using external services via MCP.

We'll use the Agno framework to create a team of specialized AI agents that collaborate to create comprehensive travel plans, with each agent handling a specific aspect of travel planning - maps, weather, accommodations, and calendar events.
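Before wiring in Agno and real MCP connections, the orchestration pattern itself can be sketched in plain Python. Everything below is illustrative: the class names and stub agents stand in for the real Agno agents and their MCP tools, so the coordination logic stays testable without API keys.

```python
# Minimal sketch of the multi-agent travel-planner pattern (plain Python,
# no real LLM calls): each "agent" owns one aspect of the trip, and a
# team object fans the request out and merges results.

from dataclasses import dataclass, field

@dataclass
class TravelAgent:
    name: str          # e.g. "maps", "weather"
    instructions: str  # role prompt the real agent would receive

    def run(self, request: str) -> str:
        # A real agent would call an LLM plus its MCP tools here;
        # we return a placeholder so the flow is observable.
        return f"[{self.name}] plan for: {request}"

@dataclass
class TravelPlannerTeam:
    agents: list = field(default_factory=list)

    def plan(self, request: str) -> dict:
        # Fan the request out to every specialist and collect results.
        return {a.name: a.run(request) for a in self.agents}

team = TravelPlannerTeam(agents=[
    TravelAgent("maps", "Plan routes between stops."),
    TravelAgent("weather", "Check the forecast for each day."),
    TravelAgent("stays", "Suggest accommodations."),
    TravelAgent("calendar", "Create calendar events."),
])
itinerary = team.plan("3 days in Lisbon in May")
print(sorted(itinerary))  # one entry per specialist
```

Swapping each stub's `run` for a real Agno agent with an MCP toolset keeps this structure intact; only the body of `run` changes.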

We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads) to support us!

Latest Developments

Your next hire won't need a desk, benefits, or coffee breaks.

Lindy AI just dropped version 3.0, introducing what they're calling the "first AI employee" - virtual workers that you can create with simple prompts and deploy across unlimited use cases. These AI agents work natively in your browser, handling everything from sales outreach to automated research, and even spin up their own virtual computers in the cloud to autonomously perform tasks on your behalf.

The new release centers on three core improvements: Agent Builder for rapid creation, Autopilot for computer-based actions, and Team Accounts for organization-wide deployment. Anyone can now build custom agents in minutes without technical expertise or coding requirements.

Key Highlights:

  1. Agent Builder - Create custom AI agents instantly using natural language prompts, eliminating the need for technical skills or complex coding processes.

  2. Autopilot Integration - Agents operate their own cloud-based computers, performing any action possible on a standard interface without requiring specific API integrations.

  3. Team-Wide Deployment - Share and manage AI agents across entire organizations with centralized control, monitoring, and standardized workflows.

  4. Universal Capability - Handle diverse tasks from lead generation and social media management to customer support and content creation through browser-based operation.

Training Generative AI? It starts with the right data.

Your AI is only as good as the data you feed it. If you're building or fine-tuning generative models, Shutterstock offers enterprise-grade training data across images, video, 3D, audio, and templates—all rights-cleared and enriched with 20+ years of human-reviewed metadata.

With 600M+ assets and scalable licensing, our datasets help leading AI teams accelerate development, simplify procurement, and boost model performance—safely and efficiently.

Book a 30-minute discovery call to explore how our multimodal catalog supports smarter model training. Qualified decision-makers will receive a $100 Amazon gift card.

For complete terms and conditions, see the offer page.

You can now visually build multi-agent hierarchies natively in n8n, where multiple subagents are nested within one primary AI agent.

n8n’s new AI Agent Tool node allows a parent AI agent to call subagents, each with their own instructions and toolset, directly in the same workflow.

This is great for building complex systems that function like real-world teams, where a lead agent assigns parts of a task to specialists. You can even add multiple layers of agents directing other agents, just like you would have in a real multi-tiered organizational structure.

While similar orchestration was already possible using sub-workflows, AI Agent Tool nodes are a good choice when you want the interaction to happen within a single execution or prefer to manage and debug everything from a single canvas.

Key Highlights:

  1. Nested Agents as Tools - Treat an entire AI agent as a callable tool within another, allowing for a clear and logical separation of tasks and capabilities.

  2. Granular Model Control - Fine-tune performance and cost by assigning a specific LLM to each agent in the workflow, matching the model's strengths to its job.

  3. Unified Workflow and Debugging - Design and monitor complex agent interactions without switching contexts, using a single, intuitive canvas and a detailed execution log.

  4. Inter-Agent Communication - The platform simplifies passing instructions and context between parent and child agents, making the orchestration process more straightforward.
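The "agent as a callable tool" idea behind this node can be shown in a few lines of plain Python. This is a sketch of the pattern, not n8n's API: agents are stubbed, and the model names are placeholders for whichever LLM you would assign to each agent.

```python
# Sketch of the nested agent-as-tool pattern: a parent agent holds
# subagents in its tool registry and calls them like any other tool.

class Agent:
    def __init__(self, name, model, tools=None):
        self.name = name
        self.model = model          # which LLM this agent would use
        self.tools = tools or {}    # tool name -> callable (can be agents)

    def __call__(self, task: str) -> str:
        # Making the agent itself callable is what lets a parent
        # treat it exactly like a tool.
        return f"{self.name}({self.model}): {task}"

research = Agent("research", model="small-fast-model")
writer   = Agent("writer",   model="large-capable-model")

lead = Agent(
    "lead", model="router-model",
    tools={"research": research, "writer": writer},
)

# The lead agent delegates: research first, then hand the result to the writer.
draft = lead.tools["writer"](lead.tools["research"]("find sources on X"))
print(draft)
```

Because subagents are just entries in the parent's tool map, adding another tier (agents directing agents directing agents) is the same move repeated, which mirrors the multi-layer hierarchies n8n now supports on one canvas.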

A small San Francisco team just dropped a series of AI models that might change how AI intelligence scales.

Deep Cogito, an AI research company founded by former Google and DeepMind employees, just released 4 hybrid reasoning models (sizes 70B, 109B MoE, 405B, 671B MoE). These models don’t bet on sheer size or longer reasoning chains; they demonstrate how AI systems can get smarter by learning from their own thinking process, then baking that wisdom directly into their core parameters.

The company's flagship 671B MoE model outperforms DeepSeek's latest v3 and R1, and matches Claude 4 Opus and OpenAI o3 models, while using 60% shorter reasoning chains. You can toggle these hybrid models between instant responses and deep reasoning modes. What makes this release particularly impressive is the efficiency - all four models from 70B to 671B were trained for under $3.5M combined, challenging the assumption that breakthrough AI requires massive infrastructure investments.

Key Highlights:

  1. Four model sizes - The largest (671B MoE) outperforms DeepSeek v3 and R1 while using significantly shorter reasoning chains, demonstrating improved "intuition" over pure search-based approaches.

  2. Novel self-improvement paradigm - Instead of just scaling inference-time reasoning, the models internalize their reasoning process through iterative policy improvement, creating a feedback loop where each generation becomes inherently more intelligent.

  3. Remarkable cost efficiency - The entire development cycle for all eight Cogito models (including v1) cost under $3.5M total.

  4. Emergent multimodal reasoning - Despite being trained only on text, the models naturally developed visual reasoning capabilities through transfer learning from their multimodal base.

  5. Availability - All four models are available for download on Hugging Face or accessible via APIs through Together AI, Baseten, and RunPod, with local deployment options via Unsloth.

Quick Bites

Cloudflare caught Perplexity red-handed using stealth crawlers that masquerade as Chrome browsers to scrape content from websites that explicitly blocked its official bots. When confronted with robots.txt restrictions, Perplexity switches to generic user agents and rotates IP addresses across different networks to circumvent blocks, generating millions of requests daily across tens of thousands of domains. Cloudflare has removed Perplexity from its verified bot program and is now actively blocking this behavior.
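For context on what is being circumvented: robots.txt rules are matched per user agent, so a crawler that swaps its declared identity for a generic browser string simply stops matching the rule meant to block it. Python's stdlib `urllib.robotparser` makes this concrete (the robots.txt content below is made up for illustration):

```python
# How robots.txt matching works per user agent, and why switching
# to a generic browser user agent slips past a bot-specific block.

from urllib.robotparser import RobotFileParser

rules = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The declared bot is blocked everywhere...
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
# ...but a generic user agent falls through to the permissive wildcard rule.
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article"))    # True
```

This is why robots.txt is honor-system only, and why Cloudflare is resorting to behavioral blocking rather than relying on declared identities.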

Anthropic just figured out how to see personality traits forming inside AI neural networks in real-time. AI personalities aren't just emergent quirks; they're measurable patterns of neural activity that can now be mapped and manipulated. Anthropic’s new "persona vectors" technique can detect when models are shifting toward undesirable behaviors like sycophancy or hallucination, and even prevent these traits from developing during training by essentially giving models a controlled "vaccine" dose of the unwanted behavior. It’s a very interesting study!
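The geometry behind persona vectors can be illustrated with a toy example: average the activations from responses that exhibit a trait and from responses that don't, subtract to get a direction, then project new activations onto it to score the trait. The numbers below are made up and the dimensions tiny; this shows the idea, not Anthropic's implementation.

```python
# Toy persona-vector sketch: difference-of-means direction between
# trait-exhibiting and neutral activations, then projection to monitor.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

# Fake 4-dim hidden-state activations from two sets of responses.
sycophantic = [[1.0, 0.2, 0.9, 0.1], [0.8, 0.3, 1.1, 0.0]]
neutral     = [[0.1, 0.2, 0.1, 0.1], [0.0, 0.3, 0.2, 0.0]]

persona_vector = sub(mean(sycophantic), mean(neutral))  # trait direction

# Monitoring: a high projection onto the vector flags drift toward the trait.
new_activation = [0.9, 0.25, 1.0, 0.05]
score = dot(new_activation, persona_vector)
print(round(score, 3))
```

Steering and the "vaccine" idea then amount to adding or removing a scaled copy of this vector from activations, nudging the model along or away from the trait direction.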

Quora’s Poe has released a developer API that unlocks access to 100+ models from every major LLM provider like OpenAI, Anthropic, Google, xAI, and millions of community bots behind a single, OpenAI-compatible interface. Your existing Poe subscription powers the API through the same point system, with tools like Cursor working out of the box. The API covers text, image, video, and audio models, with additional points available at $30 per million tokens beyond subscription limits.

LangChain reverse-engineered why Claude Code, Deep Research, and Manus actually work for complex tasks. Their analysis points to four key components: detailed system prompts, planning tools (even no-op ones for context engineering), sub-agents for task decomposition, and file systems for persistent memory. They've open-sourced a "Deep Agents" package that implements these patterns so you can easily create a Deep Agent in your application that can handle complex, long-horizon tasks.
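Those four ingredients can be reduced to a stdlib-only sketch. The function names and file layout here are illustrative, not the Deep Agents package API; the point is how little machinery each component needs.

```python
# The four ingredients: a detailed system prompt, a no-op planning tool
# (its value is the context it persists), a sub-agent, and a file system
# acting as memory across steps.

import tempfile, pathlib

SYSTEM_PROMPT = "You are a careful long-horizon agent. Plan, delegate, record."

workspace = pathlib.Path(tempfile.mkdtemp())  # persistent scratch space

def write_todos(steps):
    # "No-op" planning tool: it takes no real action, but writing the
    # plan down keeps it in context across a long task.
    (workspace / "plan.md").write_text("\n".join(f"- {s}" for s in steps))
    return steps

def sub_agent(task):
    # A real sub-agent would get its own prompt, tools, and context
    # window; here it just completes the task and records the result.
    result = f"done: {task}"
    (workspace / "notes.md").write_text(result)
    return result

plan = write_todos(["research topic", "draft report"])
outcome = sub_agent(plan[0])

print((workspace / "plan.md").read_text())
print(outcome)
```

The interesting design choice the analysis highlights is the no-op planning tool: the tool call does nothing, yet materially improves long-horizon behavior because the plan itself becomes durable context.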

Tools of the Trade

  1. ToolSDK.ai: TypeScript SDK that connects over 5,000 MCP servers and AI tools to your agentic applications through a single line of code. You can dynamically fetch, configure, and execute tools without hardcoding endpoint logic.

  2. Cipher: Open-source memory layer that stores and retrieves coding knowledge across AI development tools like Cursor, Claude Desktop, and VS Code via MCP. It maintains both explicit programming concepts and reasoning patterns to provide persistent context.

  3. Claude Code Unified Agents: A comprehensive collection of Claude Code sub-agents combining the best features from multiple community repositories. It has 50+ production-ready agents across AI/ML, business, creative, meta-management, and more categories.

  4. ScreenCoder: UI-to-code generation system that converts screenshots or design mockups into HTML/CSS code using multiple AI agents. It combines visual understanding, layout planning, and adaptive code synthesis to generate accurate and editable front-end code.

  5. Awesome LLM Apps: Curated collection of LLM apps built with RAG, AI agents, MCP, and more that interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. How interesting that Meta's long-term AI research lab has "AI" in the name and its short-term one has "Superintelligence". ~
    Pedro Domingos

  2. All the technical language around AI obscures the fact that there are two paths to being good with AI:

    1) Deeply understanding LLMs

    2) Deeply understanding how you give people instructions & information they can act on.

    LLMs aren’t people but they operate enough like it to work ~
    Ethan Mollick

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
