
NoCode AI Agents in your Browser

PLUS: OpenAI's o3-mini and Deep Research, Run DeepSeek R1 locally


Today’s top AI Highlights:

  1. Build AI agent workflows that scale to 1000s of API endpoints

  2. Open source end-to-end framework for simple to agentic RAG apps

  3. Run DeepSeek R1 671B quantized in 1.58-bit locally with Llama.cpp

  4. OpenAI releases o3-mini in ChatGPT and API, and agentic Deep Research

  5. Create custom AI agents that run in your browser without a single line of code

& so much more!

Read time: 3 mins

AI Tutorials

For businesses looking to stay competitive, understanding the competition is crucial. But manually gathering and analyzing competitor data is time-consuming and often yields incomplete insights. What if we could automate this process using AI agents that work together to deliver comprehensive competitive intelligence?

In this tutorial, we'll build a multi-agent competitor analysis team that automatically discovers competitors, extracts structured data from their websites, and generates actionable insights. You'll create a team of specialized AI agents that work together to deliver detailed competitor analysis reports with market opportunities and strategic recommendations.

This system combines web crawling, data extraction, and AI analysis to transform raw competitor website data into structured insights, using a team of coordinated AI agents, each specializing in a different aspect of competitive analysis.
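To make the discover → extract → analyze flow concrete, here is a minimal conceptual sketch in plain Python. This is not the tutorial's actual code: the three "agents" are stubs (a real system would use web search, a crawler, and an LLM), and all names here are illustrative.

```python
# Conceptual sketch of the agent-team flow: three stub "agents" pass
# structured data along a discover -> extract -> analyze pipeline.

def discover_competitors(company: str) -> list[str]:
    # A real discovery agent would query the web; this stub returns fixtures.
    return [f"{company}-rival-a.com", f"{company}-rival-b.com"]

def extract_site_data(url: str) -> dict:
    # A real extraction agent would crawl the site and pull structured fields.
    return {"url": url, "pricing": "unknown", "features": []}

def analyze(profiles: list[dict]) -> dict:
    # A real analysis agent would prompt an LLM with the extracted profiles.
    return {
        "competitors": [p["url"] for p in profiles],
        "recommendation": "Fill gaps competitors leave in pricing and features.",
    }

def run_team(company: str) -> dict:
    urls = discover_competitors(company)
    profiles = [extract_site_data(u) for u in urls]
    return analyze(profiles)

report = run_team("acme")
print(report["competitors"])
```

The point of the structure is that each stage hands a typed, structured payload to the next, which is what lets specialized agents be developed and swapped independently.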

We share hands-on tutorials like this 2-3 times a week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads, Facebook) to support us!

Latest Developments

Wildcard makes connecting tools to your AI agents refreshingly simple. It's a developer platform that lets you integrate and orchestrate APIs without wrestling with complex workflows or custom integrations. With access to over 2000 endpoints across 12 popular APIs, Wildcard handles the heavy lifting of tool selection and execution, while you focus on building features.

This system lets you use natural language to control and select API endpoints, instead of traditional methods like function calling which tend to falter with a large number of tools. The platform works seamlessly with agent frameworks like LangGraph, with more framework support on the way.

Key Highlights:

  1. Tool Integration - Drop in Wildcard's API and get instant access to optimized integrations for 12 popular services including Gmail, Slack, and Shopify. Connect your AI agents to APIs with minimal code and use natural language to interact with and select the right tool from an array of options. This is useful if your AI agent needs to handle thousands of endpoints, something that function calling struggles with.

  2. Customizable Tool Definitions - Wildcard provides tool definitions tailored for your LLM, allowing you to execute function calls on your own infrastructure. This gives you both flexibility in how your agent interacts with API endpoints and control over data security and management.

  3. Developer-First Implementation - Get up and running in under 5 minutes with just a few lines of code. Wildcard provides clean Python packages with type safety and comprehensive documentation. Start with the developer package of pre-integrated endpoints, or easily add your custom APIs while maintaining the same performance benefits.

  4. Built on agents.json - Wildcard uses the agents.json specification, an open-source standard built on OpenAPI that formally describes contracts for API and AI agent interactions. This means you get strongly-typed definitions, stateless execution, and the ability to use your existing agent architecture and RAG systems to handle state.

  5. Flexible Authentication - Choose between managed cloud authentication or self-hosted options for complete control over credentials. Wildcard supports OAuth2, API keys, bearer tokens, and basic authentication methods. Execute functions locally on your infrastructure to keep sensitive data secure and private.

The #1 AI Meeting Assistant

Typing manual meeting notes drains your energy. Let AI handle the tedious work so you can focus on the important stuff.

Fellow is the AI meeting assistant that:

✔️ Auto-joins your Zoom, Google Meet, and Teams calls to take notes for you.
✔️ Tracks action items and decisions so nothing falls through the cracks.
✔️ Answers questions about meetings and searches through your transcripts, just like ChatGPT.

Try Fellow today and get unlimited AI meeting notes for 90 days.

Haystack is an open-source framework for building production-ready LLM applications, RAG pipelines, and state-of-the-art search systems that work intelligently over large document collections.

The framework lets you combine components like retrievers, generators, and document stores to build everything from basic search systems to complex agent-based architectures. With built-in support for multiple LLM providers and databases, Haystack handles the heavy lifting of integrating AI technologies while giving you granular control over your implementation.

Key Highlights:

  1. Component-Based Architecture - Build your AI applications by connecting modular components for specific tasks - document processing, embedding generation, retrieval, and LLM interactions. Each component is interchangeable, letting you swap implementations without rewriting your entire codebase. You can also create custom components to extend functionality while maintaining compatibility with the rest of the ecosystem.

  2. Production-Ready Pipeline - Develop and deploy AI workflows through a pipeline system that supports branching, loops, and parallel processing. The pipelines are fully serializable for deployment, include comprehensive logging, and provide validation to catch potential issues before runtime. Built-in error handling and monitoring help maintain reliability in production environments.

  3. Flexible Data Integration - Choose from multiple document store options including Elasticsearch, Milvus, and Chroma to match your scaling needs. The framework provides unified interfaces for document processing, letting you handle various file formats (PDF, DOCX, XLSX) and implement custom preprocessing pipelines. Built-in vector search capabilities make it simple to implement semantic search and RAG applications.

  4. Framework-Agnostic LLM Support - Work with models from OpenAI, Anthropic, Cohere, or local models through a consistent interface. The framework handles provider-specific requirements behind the scenes, letting you focus on building features rather than managing multiple APIs. You can easily benchmark different models or implement fallbacks without significant code changes.

  5. Visual Studio - Build LLM apps faster with Haystack Studio's drag-and-drop interface. You can prototype quickly by connecting components visually, test with your own files or databases, and export production-ready pipelines. The Studio provides immediate feedback on pipeline performance and enables easy sharing with team members.
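The component-and-pipeline pattern above is easier to see in code. The sketch below is a generic illustration of the idea, not Haystack's actual API (see Haystack's docs for the real `Pipeline` and component classes): interchangeable components share a simple `run` interface, so a retriever or generator can be swapped without touching the rest of the pipeline.

```python
# Generic sketch of the component-pipeline pattern (NOT Haystack's real API).

class KeywordRetriever:
    """Toy retriever: returns documents containing the query string."""
    def __init__(self, docs):
        self.docs = docs
    def run(self, query: str) -> dict:
        hits = [d for d in self.docs if query.lower() in d.lower()]
        return {"documents": hits}

class TemplateGenerator:
    """Toy generator: a real one would call an LLM with the context."""
    def run(self, query: str, documents: list) -> dict:
        context = " | ".join(documents) or "no matches"
        return {"answer": f"Q: {query} -> context: {context}"}

class Pipeline:
    """Runs components in order, feeding each one's output forward."""
    def __init__(self, retriever, generator):
        self.retriever, self.generator = retriever, generator
    def run(self, query: str) -> dict:
        docs = self.retriever.run(query)["documents"]
        return self.generator.run(query, docs)

pipe = Pipeline(
    KeywordRetriever(["Haystack builds RAG pipelines", "LLMs need retrieval"]),
    TemplateGenerator(),
)
result = pipe.run("RAG")
print(result["answer"])
```

Because each component only sees dictionaries in and out, swapping `KeywordRetriever` for an embedding-based one changes nothing downstream, which is the property Haystack's modular design trades on.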

Quick Bites

You can run the full DeepSeek-R1 671B model in its dynamic 1.58-bit quantized form (compressed to 131GB) locally using Llama.cpp integrated with Open WebUI. The setup involves downloading UnslothAI's 1.58-bit quantized version from Hugging Face, running it through Llama.cpp's server mode, and connecting it to Open WebUI's interface. While inference speeds are modest on consumer hardware, this setup lets you experiment with one of the largest open-source reasoning models without enterprise-grade infrastructure.
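A rough sketch of those three steps is below. Treat the repo name, quant suffix, and split-file name as placeholders to verify against UnslothAI's Hugging Face page; exact names may differ, and `llama-server` flags vary by Llama.cpp version.

```shell
# 1. Fetch the 1.58-bit dynamic quant (~131GB) -- repo/filter are assumptions
huggingface-cli download unsloth/DeepSeek-R1-GGUF \
  --include "*UD-IQ1_S*" --local-dir ./DeepSeek-R1-GGUF

# 2. Serve it with Llama.cpp's OpenAI-compatible server
#    (llama.cpp loads the remaining split files automatically)
./llama-server \
  -m ./DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
  --port 8080 --ctx-size 8192

# 3. In Open WebUI, add an OpenAI-style connection pointing at
#    http://localhost:8080/v1 to chat with the model
```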

Google has launched "Daily Listen," an AI audio feature in the Google app that creates personalized 5-minute podcast episodes based on your Google Discover feed interests. The feature provides an audio overview of topics you usually read or follow, complete with a rolling transcript and links to related stories, following a similar concept to NotebookLM's Audio Overviews. It is currently available to limited U.S. users through Google's Search Labs experiment on Android and iOS.

OpenAI has released o3-mini, their newest cost-efficient reasoning model, available in ChatGPT (for all tiers) and in the API. o3-mini is a powerful and fast reasoning model that is particularly strong in science, math, and coding, consistently outperforming the o1-preview model.

For ChatGPT users:

  1. Free users can try o3-mini by selecting 'Reason' in the message composer. For paid users, o3-mini has replaced o1-mini in the model picker.

  2. Pro users have unlimited access to the model, while Plus and Team users gain 3x rate limits (from 50 for o1-mini to 150 messages per day). There’s also a higher intelligence version, 'o3-mini-high', for all paid users.

  3. o3-mini now works with search to find up-to-date answers with links to relevant web sources.

For developers:

  1. o3-mini becomes the first reasoning model with developer features like function calling, structured outputs, and developer messages, making it production-ready.

  2. Developers in API usage tiers 3-5 can access o3-mini via the Chat Completions, Assistants, and Batch APIs.

  3. The model offers three reasoning effort levels (low, medium, high) to allow for tailored performance, optimizing for either complex tasks or faster responses.
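For developers, the reasoning-effort knob is just a request parameter. The sketch below only builds the Chat Completions payload so it runs anywhere; actually sending it requires an OpenAI API key (shown commented out with the official `openai` client). The prompt text is illustrative.

```python
import json

# Sketch of a Chat Completions request using o3-mini's reasoning_effort knob.
payload = {
    "model": "o3-mini",
    "reasoning_effort": "high",   # "low" | "medium" | "high"
    "messages": [
        {"role": "developer", "content": "You are a concise math tutor."},
        {"role": "user", "content": "Prove that sqrt(2) is irrational."},
    ],
}

print(json.dumps(payload, indent=2))

# With the official client (needs `pip install openai` and OPENAI_API_KEY):
# from openai import OpenAI
# resp = OpenAI().chat.completions.create(**payload)
# print(resp.choices[0].message.content)
```

Lower effort trades reasoning depth for latency and cost, so the same code path can serve both quick lookups and harder multi-step problems.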

In another update, OpenAI has launched "deep research" in ChatGPT for Pro users (soon for Plus and Team users), an agentic feature powered by their upcoming o3 model optimized for web browsing across modalities and Python analysis. This tool performs multi-step research, synthesizing information from the web to produce comprehensive reports. Deep research is designed for complex tasks, like competitive analysis or specialized research, and compresses hours of manual investigation into tens of minutes, providing citations and a summary of its process.

Google also has a similar agentic product, Deep Research, available in the Gemini app for Pro users. A team of AI agents powered by the Gemini model works through the research - draft a plan > search the web (Google Search) > analyze results > create a research report - all in 2-3 minutes. 2025 is definitely the year of AI agents!

Tools of the Trade

  1. BrowserAgent: No-code platform to create and run browser-based AI agents through a drag-and-drop interface, using your browser's GPU to run AI models locally. It functions as a Chrome extension that enables you to automate web tasks while keeping data private and avoiding API costs, built on the open-source BrowserAI library.

  2. Goose: Open-source AI agent that runs locally on your machine, providing autonomous engineering capabilities by executing, editing, and testing code. You can customize Goose with your preferred LLM and enhance its capabilities by connecting it to any external MCP server or API.

  3. mcp-agent: A Python framework for building AI agents that implements Anthropic's MCP. It simplifies connecting to MCP servers by handling technical aspects like server lifecycle management and tool integration. It also offers composable workflow patterns like parallel execution, routing, and evaluation-optimization.

  4. Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. It’s amazing that no model has caught up with Claude on coding. Even if they look good on benchmark they’re still not as good at generating working good looking modern web apps.
    Whatever magic Anthropic did seems very durable. ~
    Amjad Masad

  2. Once AGI gets access to the root directory it’s over. ~
    Bojan Tunguz

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads | Facebook

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
