
Lovable + Figma = World's First Agentic AI Canvas

PLUS: Google's Gemma 3 model for phones, and the fastest deep research agent yet


Today’s top AI Highlights:

  1. The first agentic AI canvas for vibe building

  2. Google Gemma 3 270M uses < 1% phone battery

  3. World’s fastest deep research that responds in under 2 minutes

  4. Fully-managed service for enterprise AI agents and MCP servers

  5. Ollama for real-time speech-to-text

& so much more!

Read time: 3 mins

AI Tutorial

Building targeted B2B outreach campaigns is one of the most time-consuming aspects of sales and marketing. The challenge isn't just finding companies; it's discovering the right decision-makers, researching genuine insights, and crafting personalized messages that actually get responses.

In this tutorial, we'll build a multi-agent AI email outreach system using OpenAI GPT-5, Agno for orchestrating agents, and Exa AI for intelligent web search. This system automates the entire outreach pipeline - from company discovery to personalized email generation - delivering professional, research-backed outreach emails in minutes instead of hours.

Our multi-agent system conducts real research on each company using website content and Reddit discussions and ensures every email feels genuinely personalized.
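The orchestration pattern behind such a pipeline can be sketched in plain Python. The tutorial itself wires Agno agents to GPT-5 and Exa AI; in this illustrative stand-in, the research and writing agents are ordinary functions and the company data is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    website_notes: str   # findings gathered from the company's website
    reddit_notes: str    # findings gathered from community discussions

def research_agent(company: Company) -> str:
    # Stand-in for the web-search agent: merge source notes into one brief.
    return f"{company.website_notes} {company.reddit_notes}"

def email_agent(company: Company, brief: str) -> str:
    # Stand-in for the writing agent: ground the email in the research brief.
    return (f"Hi {company.name} team,\n"
            f"I noticed that {brief} Would love to chat.\n")

def run_pipeline(companies: list[Company]) -> dict[str, str]:
    # Each company flows through research first, then email generation.
    return {c.name: email_agent(c, research_agent(c)) for c in companies}

acme = Company("Acme", "you just launched an AI analytics suite.",
               "Users praise your onboarding docs.")
emails = run_pipeline([acme])
print(emails["Acme"])
```

The point of the structure is that the writer never sees raw web pages, only the researcher's brief, which is what keeps each email grounded in findings about that specific company.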

We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads) to support us!

Latest Developments

Vibe coding just became as intuitive as sketching on a whiteboard, except your sketches turn into real code.

Trickle Magic Canvas is the world’s first agentic canvas where you can co-create with AI, visually, to ship production-ready apps and websites. It’s a visual space for context engineering, where the agent better understands your intentions and builds multi-page apps.

Think Lovable meets Figma, where the AI agent codes, designs, and deploys while you watch and guide the process. You can drop in your own assets, adjust layouts with simple drag-and-drop, and see every change reflected in real working code. The AI doesn't just follow instructions - it maintains project context, builds complete backend systems, and handles database integration automatically.

Key Highlights:

  1. Visual Co-Creation - Watch the AI build your app step-by-step on a living canvas while you drag elements, upload images, and make Figma-like adjustments that automatically sync with the underlying code.

  2. Context Intelligence - The system maintains project knowledge and rules throughout development, so the AI remembers your requirements and constraints even as you iterate and add new features.

  3. End-to-End Automation - From frontend UI to backend databases and admin dashboards, the AI handles the complete technical stack while you focus on design and user experience decisions.

  4. Instant Deployment - Go from initial idea to live, shareable application in minutes with built-in version control and one-click publishing to production URLs.

Your RAG system keeps making up facts that aren't in your documents.

Verbatim RAG only returns exact text from your documents - no paraphrasing, no creative liberties, just pure verbatim content with citations. Unlike traditional RAG that lets LLMs freely generate answers based on retrieved context, this pipeline extracts exact text spans from source documents and composes responses entirely from these precise passages.

You can run the entire pipeline without any LLM using their trained ModernBERT extractor, and with SPLADE sparse embeddings it operates entirely on CPU for lightweight deployment. The stack uses Docling for document processing and Chonkie for intelligent chunking.

Key Highlights:

  1. Zero-hallucination - Every response consists entirely of verbatim text spans extracted directly from source documents, with exact citations linking back to original passages.

  2. LLM-optional - Run the complete pipeline using only embeddings and their trained ModernBERT model, making it cost-effective and suitable for sensitive environments.

  3. CPU-only - With SPLADE sparse embeddings, the entire system runs on CPU hardware, eliminating GPU requirements while maintaining performance.
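The core idea, answering only with exact spans copied from the corpus, can be illustrated with a minimal extractor. The real project uses a trained ModernBERT extractor and SPLADE embeddings; this sketch scores sentences by simple word overlap and is purely illustrative, with a made-up document:

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter; the real pipeline uses Chonkie for chunking.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def extract_verbatim(query: str, docs: dict[str, str], top_k: int = 2):
    """Return exact sentences with citations - never paraphrases."""
    q_words = set(query.lower().split())
    scored = []
    for doc_id, text in docs.items():
        for sent in split_sentences(text):
            overlap = len(q_words & set(sent.lower().split()))
            if overlap:
                scored.append((overlap, sent, doc_id))
    scored.sort(key=lambda t: -t[0])
    # Each answer span is copied verbatim from its source document.
    return [(sent, doc_id) for _, sent, doc_id in scored[:top_k]]

docs = {"policy.txt": "Refunds are issued within 14 days. Shipping is free over $50."}
print(extract_verbatim("when are refunds issued", docs))
```

Because the response is assembled only from returned spans, every claim in the answer is by construction present, word for word, in a cited source document.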

Google just dropped a 270M parameter model that's specifically designed to be fine-tuned into a specialist, not another general-purpose conversationalist.

Gemma 3 270M brings strong instruction-following capabilities to a surprisingly small footprint, establishing new performance benchmarks for its size category on the IFEval benchmark. The philosophy behind it is simple: build a high-quality foundation model that follows instructions well out of the box, then unlock its true potential through task-specific fine-tuning.

What makes this particularly interesting is its energy efficiency - the INT4-quantized version uses just 0.75% of a Pixel 9 Pro's battery for 25 conversations.

Key Highlights:

  1. Specialized Architecture - Built with 270M parameters (170M for embeddings, 100M for transformer blocks) and a 256k token vocabulary, designed specifically for task-focused fine-tuning rather than general conversation.

  2. Perfect for Specialized Tasks - Ideal for high-volume, well-defined tasks like sentiment analysis, entity extraction, query routing, creative writing, etc.

  3. Fast Development Cycle - Small size allows for quick fine-tuning experiments and deployment, with proven results showing specialized versions outperforming larger general models.

  4. Deployment - Ships with quantization-aware training checkpoints and deployment through Hugging Face, Ollama, Vertex AI, and popular inference tools like llama.cpp.
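The parameter split is easy to sanity-check: with a 256k-token vocabulary and Gemma 3 270M's published embedding width of 640 (assumed here), the embedding table alone accounts for most of the model:

```python
vocab_size = 256_000
hidden_dim = 640  # published embedding width for Gemma 3 270M (assumption)

# Embedding parameters = one hidden_dim-sized vector per vocabulary token.
embed_params = vocab_size * hidden_dim
print(f"{embed_params / 1e6:.1f}M embedding parameters")
```

That comes to roughly 164M parameters, in line with the ~170M-embedding / ~100M-transformer split Google describes, and it explains why such a small model can still cover a very large vocabulary.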

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.

Quick Bites

Amazon launched Bedrock AgentCore Gateway, a fully managed service that centralizes tool discovery and integration for AI agents using MCP. You can create MCP servers with no code, and the service abstracts away security, infrastructure, and protocol-level complexities, with built-in OAuth authorization.

  • Semantic tool search prevents agent "tool overload" through intelligent discovery via natural language queries.

  • Dual-sided security architecture with OAuth inbound validation and configurable outbound authentication (IAM, API keys, OAuth 2LO).

  • Native protocol translation between MCP and REST APIs/Lambda functions with automatic infrastructure scaling.
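The "tool overload" problem in the first bullet is that an agent given hundreds of tool definitions picks badly. Semantic tool search narrows the registry to a relevant subset per query. This is not the Gateway API, just a keyword-overlap stand-in (with hypothetical tools) for the semantic matching it performs:

```python
def semantic_tool_search(query: str, tools: dict[str, str], top_k: int = 2) -> list[str]:
    """Surface only the tools relevant to a natural language query."""
    q_words = set(query.lower().split())
    # Rank tools by how much their description overlaps with the query.
    ranked = sorted(tools,
                    key=lambda name: -len(q_words & set(tools[name].lower().split())))
    # The agent sees a short, relevant subset rather than the full registry.
    return ranked[:top_k]

tools = {
    "create_invoice": "create a new invoice for a customer order",
    "refund_payment": "refund a payment made by a customer",
    "list_servers": "list all running servers in the fleet",
}
print(semantic_tool_search("refund a customer payment", tools))
```

In a production gateway the matching is embedding-based rather than lexical, but the contract is the same: query in, small relevant tool list out.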

What if deep research iterations took 2 minutes instead of 15? SuperNinja Fast Deep Research, powered by Cerebras hardware, delivers exactly that: 5x speed improvements while maintaining accuracy comparable to top models like Sonnet 4 (58.9% vs 62.1% on GAIA benchmark). The system runs Qwen3-235B entirely on-chip, bypassing the GPU bottlenecks that make iterative research painfully slow.

Tools of the Trade

  1. OWhisper: Think Ollama for local speech-to-text models. It downloads and runs models like Whisper-cpp and Moonshine through a simple CLI interface. It exposes OpenAI-compatible APIs for real-time and batch transcription, allowing you to integrate custom STT endpoints into your apps without cloud dependencies.

  2. MagicNode: A visual, no-code platform to build AI applications by connecting drag-and-drop nodes for prompts, logic, APIs, and UI elements. You can instantly publish your apps through an integrated marketplace and monetize them.

  3. Lyra: Runs a network of local specialized AI agents that passively monitor your activities, learn your preferences, and handle work like booking reservations and sending follow-ups. It runs entirely on your device using local models. When you approve a task that requires online action, only the specific task details are sent to complete it.

  4. Octofriend: Open-source CLI coding agent that can switch between different LLMs (GPT-5, Claude 4, GLM-4.5, and Kimi K2) mid-conversation. Comes with autofix models to handle diff edit and JSON encoding errors, properly manages reasoning tokens from thinking models, and supports MCP servers.

  5. Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, MCP, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. > 2025: models plateau

    > 2026: companies stop paying for multiple foundational models

    > 2026: some company does big article about how they moved to open source model and saved tons of money without losing efficacy

    > 2027: blood in the streets ~
    staysaasy

  2. I just met a person who can't tell Python from C++.

    He has never written a single line of code in his life, yet he feels he can build anything he wants.

    He told me point-blank:

    "I challenge you to tell me something I can't build using AI."

    I tried to explain, but I couldn't find the right words.

    The most fascinating aspect of vibe-coding is how it has convinced so many people to believe they are better and more capable than they really are. ~
    Santiago

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
