AI Super Agent in Your Browser
PLUS: AI agent course as MCP server, Always-on AI agents that monitor the web for you
Today’s top AI Highlights:
Fully-agentic, ad-free AI browser that can browse, watch, and take actions
This 15MB chunking library chunks tokens 33x faster than LangChain
Always-on AI agents that continuously monitor the web for you
Manus AI released a 100% free and unlimited AI chat mode
AI agents course as an MCP server
& so much more!
Read time: 3 mins
AI Tutorial
Traditional RAG has served us well, but it's becoming outdated for complex use cases. While vanilla RAG can retrieve and generate responses, agentic RAG adds a layer of intelligence and adaptability that transforms how we build AI applications. Also, most RAG implementations are still black boxes - you ask a question, get an answer, but have no idea how the system arrived at that conclusion.
In this tutorial, we'll build a multi-agent RAG system with transparent reasoning using Claude 4 Sonnet and OpenAI. You'll create a system where you can literally watch the AI agent think through problems, search for information, analyze results, and formulate answers - all in real-time.
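To make the idea concrete, here is a minimal sketch of a transparent RAG loop in Python: OpenAI embeddings for retrieval, Claude for answering, with every step printed so you can follow the agent's work. It illustrates the pattern rather than the tutorial's exact stack, and the model IDs are assumptions you should check against current docs.

```python
# Minimal sketch of a transparent RAG loop (not the tutorial's exact stack):
# OpenAI embeddings for retrieval, Claude for answering, with each step printed
# so you can watch the agent work. Model IDs below are assumptions.
import numpy as np
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()
claude = Anthropic()

docs = [
    "Agentic RAG adds planning and tool use on top of retrieval.",
    "Vanilla RAG retrieves chunks and stuffs them into a single prompt.",
]

def embed(texts):
    # Embed a list of strings with OpenAI and return a (n, d) array
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str) -> str:
    print(f"[step] embedding question: {question!r}")
    q_vec = embed([question])[0]
    # Cosine similarity between the question and each document chunk
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    best = docs[int(scores.argmax())]
    print(f"[step] retrieved chunk: {best!r} (score={scores.max():.3f})")
    msg = claude.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model ID, verify before use
        max_tokens=512,
        messages=[{"role": "user",
                   "content": f"Context:\n{best}\n\nQuestion: {question}\n"
                              "Explain your reasoning step by step, then answer."}],
    )
    return msg.content[0].text

print(answer("How does agentic RAG differ from vanilla RAG?"))
```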
We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.
Latest Developments
Forget everything you know about web browsers - Genspark just turned browsing into an agentic superpower.
Their fully agentic AI Browser doesn’t just display web pages; it embeds a Super Agent directly into every web page you visit, ready to shop smarter, research faster, watch videos for you, and automate workflows across 700+ connected apps.
Mac users can download it now (Windows coming soon). This feels like getting a personal assistant who lives inside your browser and knows exactly what you need on every webpage.
Key Highlights:
Embedded Intelligence Everywhere - The Super Agent appears on any webpage with contextual AI tools - find better deals while shopping, summarize YouTube videos into slides, or extract transcripts automatically.
Autopilot Browsing - Genspark can autonomously navigate websites, read your X feed and generate custom podcasts from trending topics, or access your premium subscriptions like SimilarWeb to pull competitive data. It handles the tedious browsing work while you focus on decision-making.
MCP Store - Connect over 700 tools seamlessly - from Discord and Slack to Notion and GitHub - letting you set up meetings with Zoom links, attach documents, and notify team members across platforms with a single command. Your entire workflow becomes voice-controlled through the browser.
Performance and Privacy - Built for speed with instant page loads and real-time AI responses, while sophisticated ad blocking eliminates hundreds of intrusive ads automatically. You get a clean, distraction-free experience that's faster than traditional browsers even with AI running constantly.
We've all been there: frantically refreshing booking pages for that perfect dinner reservation or setting phone alarms to check if concert tickets dropped in price.
Yutori's Scouts replace this chaos with AI agents that monitor the web continuously, understand context, and catch opportunities you'd otherwise miss. These always-on agents track everything from flight deals to price drops, operating browsers autonomously, spotting changes, and sending you alerts only when they find exactly what you're looking for.
It's like having a personal team that lives on the internet and works around the clock just for you.
Key Highlights:
Parallel Web Monitoring - Deploy multiple agents simultaneously to monitor different sites and topics, all running asynchronously in the cloud while you focus on other tasks.
Smart Agent Deployment - Scouts automatically break down complex tracking requests into specific tasks, visit relevant pages, and apply the right filters to gather precise information.
Complete Visibility - View detailed logs of agent actions including todos, pages visited, and content extraction methods, building confidence in Scout reliability through full transparency.
Roadmap - Beyond monitoring, Scouts will soon act on findings by completing purchases, making reservations, and handling transactions automatically when opportunities arise.
Picture this: you're building a RAG app and need to chunk some text, so you pip install a popular library only to watch it pull in 80MB+ of dependencies for what should be a 10-line task. Two developers felt this pain repeatedly while building personal LLM projects.
To fix this, they built Chonkie, a chunking library that actually respects your disk space and processing time. This 15MB powerhouse delivers 33x faster token chunking than LangChain.
Chonkie handles everything from document preprocessing through vector database ingestion, with 19+ integrations that connect seamlessly to your existing tech stack.
Key Highlights:
Lightning-fast performance - Achieves up to 33x faster token chunking while maintaining a 15MB install size, compared to alternatives that can exceed 170MB with significantly slower processing speeds.
Advanced chunking strategies - Implements cutting-edge methods like Late Chunking from recent papers, Semantic Double Pass for related chunk merging, and specialized Code Chunking using AST analysis for optimal split points.
Modular pipeline architecture - Features a flexible multi-stage system with Chefs for preprocessing, Chunkers for splitting, Refineries for post-processing, and Handshakes for seamless vector database integration across 19+ supported platforms.
Zero-dependency core - Works without external dependencies for basic functionality while implementing aggressive caching, precomputation, and running mean pooling for efficient semantic chunking across major tokenizers.
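To give a sense of how light the API is, here's a hedged sketch of basic token chunking based on Chonkie's README at the time of writing; parameter and attribute names may differ across versions, so check the docs before copying this.

```python
# Hedged sketch of Chonkie's basic token chunking (pip install chonkie).
# Parameter and attribute names may differ across versions.
from chonkie import TokenChunker

chunker = TokenChunker(chunk_size=256, chunk_overlap=32)
chunks = chunker.chunk("Long document text goes here ...")

for chunk in chunks:
    # Each chunk carries its text plus metadata like the token count
    print(chunk.token_count, chunk.text[:60])
```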
Quick Bites
Manus AI just rolled out a free and unlimited Chat Mode, no waitlist, no token caps. You can now ask questions, drop in files, and get instant responses without switching tabs or tools. And when you’re ready for more muscle (like full-blown multi-step workflows), a one-click upgrade takes you straight into their Agent Mode.
Google DeepMind’s AlphaEvolve taught us that algorithms can evolve, and now this evolutionary agent loop has come for creative writing. Someone just built AlphaWrite, an open-source framework that builds a population of short stories, ranks them using Elo scores, and evolves the best ones over multiple rounds.
It doesn’t stop there: the best stories are then used to fine-tune the base model, creating a loop where the model improves by learning from its own work. In tests with Llama 3.1 8B, readers preferred AlphaWrite’s outputs nearly 3 out of 4 times.
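The core loop is simple enough to sketch. Below is a rough, hypothetical Python outline of the generate-rank-evolve cycle described above; generate_story() and judge_pair() are stand-ins for the actual LLM calls, and the real AlphaWrite implementation will differ in its details.

```python
# Sketch of an AlphaWrite-style loop: generate a population of stories,
# rank them via Elo from pairwise judge comparisons, keep the best, evolve.
# generate_story() and judge_pair() are hypothetical stand-ins for LLM calls.
import random

def generate_story(seed_prompt: str) -> str:
    return f"story({seed_prompt}, v{random.randint(0, 999)})"  # stub: call your LLM here

def judge_pair(a: str, b: str) -> int:
    return random.choice([0, 1])  # stub: ask a judge model which story is better

def update_elo(ra, rb, winner, k=32):
    ea = 1 / (1 + 10 ** ((rb - ra) / 400))  # expected score for story a
    sa = 1.0 if winner == 0 else 0.0
    return ra + k * (sa - ea), rb + k * ((1 - sa) - (1 - ea))

population = [generate_story("a quiet heist") for _ in range(8)]
for _ in range(3):                                   # evolution rounds
    ratings = {s: 1000.0 for s in population}
    for _ in range(40):                              # random pairwise matches
        a, b = random.sample(population, 2)
        winner = judge_pair(a, b)
        ratings[a], ratings[b] = update_elo(ratings[a], ratings[b], winner)
    survivors = sorted(population, key=ratings.get, reverse=True)[:4]
    population = survivors + [generate_story(s) for s in survivors]  # evolve the best
```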
This has to be the smartest way to teach AI agents we've seen to date. Mastra just turned their course on using the framework into an MCP server. Meaning, your AI IDE becomes your instructor. Instead of watching videos that'll be outdated next month, you learn to build TypeScript AI agents by having an agent teach you interactively right inside Cursor, Windsurf, or VSCode. It's meta in the best way: learning about agents from an agent that guides you through building tools, memory systems, and MCP integrations while you code.
Tools of the Trade
Top Opensource AI agent and MCP GitHub repos to supercharge your AI workflows:
mcpo: A dead-simple proxy that takes an MCP server command and makes it accessible via standard RESTful OpenAPI, so your tools "just work" with LLM agents and apps expecting OpenAPI servers.
Unbody: Opensource, modular backend stack for building AI-native software that understands, reasons, and acts on knowledge, instead of just data(bases). Think of it as the Supabase of the AI era.
OWL: Multi-agent framework handles complex tasks like research, web browsing, and coding. Currently ranks #1 among opensource projects on the GAIA benchmark with a 58.18 average score. Works with multiple LLMs including Claude Sonnet, DeepSeek, GPT-4o, and even local models.
mcptools: A comprehensive command-line interface for interacting with MCP servers. Discover, call, and manage tools, resources, and prompts from any MCP-compatible server.
Self.so: Turn your resume or LinkedIn profile into a personal website in under a minute, hosted on Vercel. It’s a great demonstration of how different building blocks like an LLM, Clerk for auth, S3, Vercel, etc. can be combined together to build something simple yet useful.
VoiceStar: Opensource TTS system that allows you to specify exactly how long the output audio should be, super useful for applications like dubbing, advertisements, and accessibility features where timing precision is critical.
Second-Me: Opensource tool that lets you train a personalized AI agent locally using your own data and memories to create a "digital twin" that reflects your communication style and knowledge.
Agent File (.af): Open standard for serializing stateful AI agents into portable files that contain their memory, prompts, tools, and configurations. It enables agents to be shared, versioned, and transferred between frameworks like MemGPT, LangGraph, and CrewAI without losing their state or behavior.
Blender MCP: Connects Blender to Claude AI via MCP so Claude can directly interact with and control Blender, creating 3D models and scenes and manipulating them from simple prompts.
Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, MCP, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.
Hot Takes
Google will be the first one to get AGI because they are the ONLY big tech company for whom the rise of AI is an imminent existential threat. ~
Bojan Tunguz

Leaving SF is crazy. You forget that most people don’t even know what Claude is. ~
Kai Jarmon
That’s all for today! See you tomorrow with more such AI-filled content.
Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!
PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉