Free AI Web Agent beats $200/month OpenAI Operator

PLUS: Mistral's enterprise package for vibe coding, Custom MCP connectors in ChatGPT

Today’s top AI Highlights:

  1. Free autonomous web AI agent beats $200/month OpenAI Operator

  2. Build NoEncode RAG that directly queries knowledge bases via MCP

  3. ChatGPT can now connect to workspace apps and custom MCP connectors for teams

  4. Mistral AI releases its own vibe coding client, Mistral Code

  5. Claude Code now available to Pro users

& so much more!

Read time: 3 mins

AI Tutorial

Building intelligence tools that can automatically gather, analyze, and synthesize competitive data is both challenging and incredibly valuable. But it's one of those projects that sounds straightforward until you realize you're juggling multiple APIs, parsing different data formats, and somehow making sense of scattered information across dozens of websites.

In this tutorial, we'll build a multi-agent Product Intelligence System using GPT-4o, Agno framework, and Firecrawl's new /search endpoint. This system deploys three specialized AI agents that work together to provide comprehensive competitive analysis, market sentiment tracking, and launch performance metrics - all through a clean Streamlit interface.
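
If you want a feel for the moving parts before the full tutorial, here is a minimal sketch of one such agent wired to Firecrawl search. The Agno Agent wiring and the FirecrawlApp.search call are our assumptions about those libraries' Python APIs, so treat it as a starting point rather than the tutorial's own code.

```python
# Minimal sketch of one of the three agents, assuming the Agno Agent API
# and the Firecrawl Python SDK; adapt names and keys to your setup.
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from firecrawl import FirecrawlApp

firecrawl = FirecrawlApp(api_key="fc-...")  # your Firecrawl API key

def search_web(query: str) -> str:
    """Search the web via Firecrawl's /search endpoint and return the raw results."""
    results = firecrawl.search(query)
    return str(results)

# A competitor-analysis agent: GPT-4o plus the search tool as its only capability.
competitor_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    tools=[search_web],
    instructions="Gather and summarize competitive data for the given product.",
)

competitor_agent.print_response("Compare the latest pricing pages of Notion and Coda.")
```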

We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads) to support us!

Latest Developments

The web AI agent space has gotten very competitive. We’ve seen impressive demos of OpenAI Operator and Google's Project Mariner, both gated behind hefty subscription fees, usage limits, and even lag. But Paris-based H Company has decided to skip the waitlists and paid barriers altogether. Its autonomous web AI agent, Runner H, is out in public beta: no subscriptions, no waitlists, and free with very generous limits.

Runner H can handle real-world multi-step web tasks like pulling live data into a spreadsheet, applying for jobs, and booking trips, all from a single simple prompt. Alongside Runner H, H Company has also open-sourced Holo-1, the vision-language model that powers its web automation, which delivers industry-leading performance. For anyone tired of doing the same web tasks day in and day out, this is your chance to see how AI can step in.

Key Highlights:

  1. Complete Workflow Automation - Runner H integrates directly with Google Workspace apps, Notion, Slack, and Zapier. The agent can scrape, browse, and interact with websites to complete your tasks: ask it to pull live data into a spreadsheet, summarize it in a doc, recap the results on Slack, or sweep your Gmail inbox for important emails.

  2. Multi-Agent Orchestration with Reasoning - Runner H has an intelligent orchestrator that delegates tasks to specialized sub-agents. At its core is Surfer H, their web browsing agent, along with document handlers, app connectors, and a reasoning engine that plans multi-step workflows.

  3. Open-Source Action Vision Model - H Company has released Holo-1 under the Apache 2.0 license, featuring 3B and 7B parameter models fine-tuned from Qwen2.5-VL specifically for UI localization and web interaction. Unlike general-purpose vision models, Holo-1 is trained to identify precise coordinates on user interfaces and predict the exact actions needed to complete tasks (see the loading sketch after this list).

  4. Industry-Leading Performance - Runner H with Holo-1-7B achieves 92.2% accuracy on WebVoyager benchmarks at just $0.13 per task, significantly outperforming OpenAI GPT-4o and other models. The 3B model variant delivers 89.7% accuracy at only $0.11 per task, establishing the best accuracy-to-cost ratio in the market.
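
If you'd rather try Holo-1 on its own, below is a hedged loading sketch. It assumes the weights are published on Hugging Face under an Hcompany organization (verify the exact repo ID) and that, being fine-tuned from Qwen2.5-VL, they load through transformers' Qwen2.5-VL classes.

```python
# Hedged sketch: point Holo-1 at a screenshot and ask for a UI action.
# The repo ID below is an assumption; check the actual Hugging Face listing.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image

model_id = "Hcompany/Holo1-7B"  # assumed repo ID
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

screenshot = Image.open("page.png")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Return the click coordinates for the 'Sign in' button."},
    ],
}]

# Build the chat prompt, run a short generation, and print the predicted action.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[screenshot], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```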

Automate Prospecting Local Businesses With Our AI BDR

Struggling to identify local prospects? Our AI BDR Ava taps into a database of 200M+ local Google businesses and does fully autonomous outreach—so you can focus on closing deals, not chasing leads.

Ava operates within the Artisan platform, which consolidates every tool you need for outbound:

  • 300M+ High-Quality B2B Prospects

  • Automated Lead Enrichment With 10+ Data Sources Included

  • Full Email Deliverability Management

  • Personalization Waterfall using LinkedIn, Twitter, Web Scraping & More

Traditional RAG systems have three components: a retriever that encodes queries and documents, a knowledge store containing pre-encoded chunks, and an LLM generator. When you ask a question, the system encodes your query, searches for similar encoded chunks in the vector store, and feeds both to the LLM for a response.

This RAG pipeline cuts out the encoding entirely: your knowledge stays in its original form, and you query it directly with natural language instead of vector embeddings. FedRAG's new NoEncode RAG implementation involves only defining a knowledge store and the usual generator model - no retriever model necessary. It works particularly well with MCP servers, letting you connect to live data sources without the overhead of maintaining embedding pipelines (a minimal sketch of the pattern follows the highlights below).

Key Highlights:

  1. No embedding pipeline required - Skip the retriever model and vector database setup entirely by querying knowledge sources directly with natural language instead of encoded representations.

  2. Native MCP server integration - Connect to multiple MCP servers simultaneously, accessing live data from various sources like databases, APIs, and file systems without separate encoding steps.

  3. Custom reranking callbacks - Add scoring and prioritization logic to handle results from multiple knowledge sources, ensuring the most relevant information gets surfaced first.

  4. Standard fine-tuning support - Use existing FedRAG training classes to adapt your NoEncode RAG system to work better with your specific knowledge sources and domain requirements.
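
To make the contrast with the embedding-based pipeline concrete, here is an illustrative sketch of the NoEncode pattern. The class and function names are made up for illustration (this is not FedRAG's actual API): the user's question goes to the knowledge source as plain natural language, and only a generator LLM is involved.

```python
# Illustrative NoEncode-style RAG loop (hypothetical names, not FedRAG's API):
# the query goes to the knowledge source as plain natural language -- no
# embeddings, no vector store -- and the raw hits are handed to the generator.
from dataclasses import dataclass
from openai import OpenAI

@dataclass
class MCPKnowledgeSource:
    """Stand-in for an MCP server tool that accepts natural-language queries."""
    name: str

    def query(self, question: str) -> list[str]:
        # A real setup would call the MCP server's search tool here;
        # this placeholder just echoes the question.
        return [f"[{self.name}] result for: {question}"]

def no_encode_rag(question: str, sources: list[MCPKnowledgeSource]) -> str:
    # 1. Retrieve: pass the question straight through to every source.
    passages = [p for src in sources for p in src.query(question)]
    # 2. (Optional) rerank passages here with a custom callback.
    # 3. Generate: feed question + passages to the generator LLM.
    client = OpenAI()
    context = "\n".join(passages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )
    return response.choices[0].message.content

print(no_encode_rag("What changed in the Q2 report?", [MCPKnowledgeSource("docs-server")]))
```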

Quick Bites

OpenAI has released pre-built and custom connectors for ChatGPT. It can now connect to workplace tools and pull real-time context from sources like Outlook, Teams, Google Drive, Gmail, and Linear. Team, Enterprise, and Education users get additional access to SharePoint, Dropbox, and Box connectors, and can build custom Deep Research connectors using MCP (in beta) so their teams can search, reason, and act on that knowledge alongside web results and pre-built connectors.
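
For teams curious what a custom connector involves: at its core it is a small MCP server exposing tools ChatGPT can call. The sketch below uses the official MCP Python SDK's FastMCP helper; the tool names and the in-memory document store are placeholders, and OpenAI's exact connector requirements (expected tools, supported transports) should be checked against their docs.

```python
# Minimal MCP server sketch for a custom ChatGPT / Deep Research connector,
# using the official MCP Python SDK (pip install mcp). Tool names and the
# in-memory "knowledge base" are placeholder assumptions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-kb")

DOCS = {
    "doc-1": "Q2 roadmap: ship the billing revamp by August.",
    "doc-2": "Support runbook: escalate P1 incidents to the on-call lead.",
}

@mcp.tool()
def search(query: str) -> list[str]:
    """Return IDs of documents whose text mentions the query."""
    return [doc_id for doc_id, text in DOCS.items() if query.lower() in text.lower()]

@mcp.tool()
def fetch(doc_id: str) -> str:
    """Return the full text of a document by ID."""
    return DOCS.get(doc_id, "not found")

if __name__ == "__main__":
    # ChatGPT connectors need a network-reachable transport; SSE is one option
    # the SDK supports (check OpenAI's connector docs for current requirements).
    mcp.run(transport="sse")
```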

The company is also rolling out a record mode feature for Team users on macOS that can transcribe meetings and voice notes and use them as additional context.

Claude Code is now available to Pro users. Just like on the Max plan, it comes with rate limits shared across Claude and Claude Code, meaning all activity in both tools counts against the same usage limits. On average, Pro users can send approximately 225 messages with Claude or 50-200 prompts with Claude Code every 5 hours.

LlamaIndex has launched a new MCP integration that lets you connect your agents to MCP servers and convert LlamaIndex agent workflows into MCP servers. Helper functions give agents access to MCP tools with authentication support, and the workflow_as_mcp function lets you serve any LlamaIndex workflow as an MCP server that works with any MCP client.
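
A hedged sketch of both directions is below; the import paths and class names reflect our reading of the llama-index-tools-mcp package and may differ slightly from the released API.

```python
# Hedged sketch of the LlamaIndex MCP integration, assuming the
# llama-index-tools-mcp package exposes BasicMCPClient, McpToolSpec,
# and workflow_as_mcp roughly as described in the announcement.
import asyncio
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec, workflow_as_mcp
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI

async def agent_with_mcp_tools():
    # Direction 1: give a LlamaIndex agent the tools served by an MCP server.
    client = BasicMCPClient("http://localhost:8000/sse")  # placeholder server URL
    tools = await McpToolSpec(client=client).to_tool_list_async()
    agent = FunctionAgent(tools=tools, llm=OpenAI(model="gpt-4o-mini"))
    print(await agent.run("Summarize the latest entries in the knowledge base."))

asyncio.run(agent_with_mcp_tools())

# Direction 2: expose an existing LlamaIndex workflow as an MCP server.
# `my_workflow` stands in for any Workflow instance you already have.
# mcp_app = workflow_as_mcp(my_workflow)
# mcp_app.run(transport="sse")
```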

Mistral AI has released an enterprise package for vibe coding, Mistral Code, that takes aim at AI coding tools like Cursor, Windsurf, and GitHub Copilot. Built on the open-source Continue project, it combines four specialized models - Codestral for autocomplete, Codestral Embed for code search, Devstral for multi-step coding tasks, and Mistral Medium for chat - into a single package.

What sets it apart is that companies can fine-tune the models on their private repositories and deploy everything within their own infrastructure. Mistral is also actively testing capabilities that go beyond code suggestions, such as opening files, writing new modules, updating tests, and executing shell commands.

Tools of the Trade

  1. Base44: Vibe-code fully functional full-stack web applications, software, games, dashboards - anything - with built-in authentication, databases, analytics, and infrastructure. It takes care of all the technical parts behind the scenes, so you can build, deploy, and share apps easily.

  2. Exosphere: Cloud platform for running large-scale, background AI workflows with built-in parallelism, retries, and integrations to data sources like S3 and Notion. It handles the infrastructure complexity of running AI agents on massive datasets and offers up to 75% cost savings.

  3. Clarm: AI deep research agent builder for high-trust environments. It connects across 40+ enterprise integrations - CRMs, email, knowledge bases - and lets you build precise, repeatable workflows with zero hallucinations.

  4. Tropir: Traces problems in AI pipelines like hallucinations, bad retrievals, or tool failures back to the exact step that caused them. It automatically suggests fixes, reruns the pipeline, and checks if the fix worked.

  5. Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, MCP, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. > that looks wrong claude

    > You're absolutely right to question this! ~
    anton

  2. Why o3-pro when GPT-5 will probably be released next month?

    My guess: Although GPT-5 unites all models, there will certainly still be different pay tiers: Free, Plus and Pro.

    Depending on which tier you subscribe to, different efforts will be made on the models (e.g. o3-pro to solve particularly difficult problems). By no means will GPT-5 perform the same test-time compute in the free tier as in the pro tier. In this respect, it still makes sense to publish o3-pro for OpenAI. ~
    Chubby♨️

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
