
Replit Agent Runs Autonomously for 200 Mins Non-Stop

+ Full MCP support in ChatGPT, Awesome Claude Code agents and commands

Today’s top AI Highlights:

& so much more!

Read time: 3 mins

AI Tutorial

We have created a complete Google Agent Development Kit crash course with 9 comprehensive tutorials!

This tutorial series takes you from zero to hero in building AI agents with Google's Agent Development Kit.

What's covered:

  • Starter Agent - Your first ADK agent with basic workflow

  • Model Agnostic - OpenAI and Anthropic integration patterns

  • Structured Output - Type-safe responses with Pydantic schemas

  • Tool Integration - Built-in tools, custom functions, LangChain, CrewAI, MCP

  • Memory Systems - Session management with in-memory and SQLite storage

  • Callbacks & Monitoring - Agent lifecycle, LLM interactions, tool execution tracking

  • Plugins - Cross-cutting concerns and global callback management

  • Multi-Agent Patterns - Sequential, loop, and parallel agent orchestration

Each tutorial includes explanations, working code examples, and step-by-step instructions.

Everything is 100% open-source.
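The "Structured Output" step above relies on Pydantic schemas for type-safe responses. As a minimal sketch (the `AgentReply` schema and its fields are hypothetical, not taken from the course), validating a raw model response against a schema might look like:

```python
from pydantic import BaseModel, Field

# Hypothetical response schema - in ADK you would hand a class like this
# to the agent as its output schema to get validated, typed replies.
class AgentReply(BaseModel):
    answer: str
    confidence: float = Field(ge=0.0, le=1.0)  # reject out-of-range values

# A raw JSON string, as an LLM might return it.
raw = '{"answer": "Paris", "confidence": 0.93}'

# Parse and validate in one step; raises ValidationError on bad data.
reply = AgentReply.model_validate_json(raw)
print(reply.answer, reply.confidence)
```

The payoff is that malformed or out-of-range model output fails loudly at the schema boundary instead of propagating into downstream agent logic.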

We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads) to support us!

Latest Developments

An AI agent running completely autonomously - first for 2 minutes, then 20, now 200. This might sound counterintuitive - who wants to wait longer for results? But you're not sitting idle: you're getting hours of focused development work done in the background while you tackle other priorities.

Replit just released Agent 3, its biggest autonomy breakthrough since the company launched. This agent doesn't just write code: it runs for over 3 hours straight, testing every button and API call in an actual browser, then fixes whatever breaks without asking permission.

Agent 3 now manages your entire development cycle, generating code, running commands, creating task lists, and monitoring its own progress with minimal human oversight. With proprietary testing technology that outperforms Computer Use models by 10x in cost-effectiveness, Agent 3 clicks through your apps like a human QA tester, ensuring every feature works before you even see it.

Key Highlights:

  1. Agent-Building Capabilities - Creates other agents and automations like Slack bots, Telegram assistants, and time-based workflows. Just as with building normal apps, you only need to describe what you want in words.

  2. Autonomous Testing - Tests apps in real browsers by clicking buttons, filling forms, and checking APIs automatically, then fixes detected issues without user intervention using a proprietary system that's 3x faster than Computer Use models.

  3. Development Modes - Offers both full-stack application building and frontend-only prototyping, letting you choose the right approach for your use case.

Available immediately to all free and paid Replit users.

OpenAI rolled out full MCP support in ChatGPT's Developer Mode, which means ChatGPT can now perform write actions through MCP connectors - not just search and fetch data as before.

Instead of only fetching data, you can now instruct ChatGPT to update Jira tickets, push code changes, schedule meetings, or execute any action your MCP server exposes. The system supports chaining multiple connectors for complex workflows, giving developers the ability to build sophisticated automations that span different platforms and services.

OpenAI has warned about potential risks such as prompt injections and data destruction. Developer Mode treats MCP write access as a power-user feature, with confirmation prompts and detailed payload inspection to help prevent accidents.

Key Highlights:

  1. True Write Operations - Execute actual modifications and creations in connected systems, not limited to read-only data retrieval like previous Connector implementations.

  2. Cross-Platform - Chain multiple MCP connectors within a single conversation to build complex workflows that operate across different tools and services.

  3. Setup - Configuration through settings with support for SSE, streaming HTTP, OAuth authentication, and granular tool management controls, just like Claude and other MCP clients.

  4. Safety Controls - Write actions require confirmation by default, with expanded JSON payload inspection and conversation-level approval memory to prevent accidental data destruction.

Currently rolling out to Plus and Pro users on the web.
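MCP tool invocations ride on JSON-RPC 2.0, so the "payload inspection" above amounts to showing you the tool-call message before it fires. A rough sketch of what such a write-action payload looks like (the tool name and arguments here are hypothetical, not a real connector):

```python
import json

# JSON-RPC 2.0 request an MCP client sends to invoke a tool.
# "tools/call" is the standard MCP method; name/arguments are made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "update_ticket",
        "arguments": {"ticket_id": "PROJ-123", "status": "Done"},
    },
}

# Serialized form - roughly what a confirmation prompt with payload
# inspection would surface before the write action executes.
payload = json.dumps(request, indent=2)
print(payload)
```

Inspecting exactly this serialized request before approving it is what lets you catch a prompt-injected or malformed write before it reaches your systems.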

The #1 AI Newsletter for Business Leaders

Join 400,000+ executives and professionals who trust The AI Report for daily, practical AI updates.

Built for business—not engineers—this newsletter delivers expert prompts, real-world use cases, and decision-ready insights.

No hype. No jargon. Just results.

Quick Bites

Chat with Google Gemma 3n locally on your phone
Running sophisticated multimodal AI locally on your Android device just became remarkably straightforward. Google’s AI Edge Gallery app is now available on the Play Store and lets you chat with Gemma 3n using text, audio, and image inputs - entirely free, locally, and without an internet connection. All data stays on your device and is never sent to cloud servers. The team also plans to add more features to the app, including on-device function-calling and RAG.

1-bit or 3-bit models can beat GPT 4.1 or Claude Opus 4
Unsloth AI just showed that DeepSeek-V3.1 can be quantized down to 1-bit or 3-bit and still outperform GPT-4.1, GPT-4.5, and Claude-Opus-4 on the Aider Polyglot bench. The 1-bit version shrinks the model size by 75% while keeping accuracy strong, while the 3-bit version in “thinking” mode even beats Claude-Opus-4. The trick is selective layer quantization, where only less important layers are dropped to ultra-low precision while critical ones stay higher-bit. Check out their guide to try the quantized models yourself.
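To build intuition for what "1-bit" means here: each weight keeps only its sign, plus a shared scale per row, so storage collapses while the rough magnitude structure survives. This toy NumPy sketch is not Unsloth's actual pipeline (which uses selective layer quantization over GGUF formats), just an illustration of binarization with an absmean scale:

```python
import numpy as np

def quantize_1bit(w: np.ndarray):
    """Binarize weights to {-1, +1}, keeping one absmean scale per row."""
    scale = np.abs(w).mean(axis=1, keepdims=True)  # per-row magnitude
    return np.sign(w), scale                       # 1 bit/weight + scales

def dequantize(signs: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Approximate reconstruction of the original weights."""
    return signs * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))          # stand-in for one weight matrix
signs, scale = quantize_1bit(w)
w_hat = dequantize(signs, scale)
print(np.abs(w - w_hat).mean())      # mean reconstruction error
```

Selective quantization then amounts to applying this only to layers whose reconstruction error barely moves the model's outputs, while sensitive layers stay at higher precision.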

Kimi's open-source Checkpoint Engine updates 1T models in ~20s
Updating a trillion-parameter model across thousands of GPUs used to be a nightmare - now Kimi.ai's new checkpoint-engine does it in just 20 seconds. This open-source middleware handles in-place weight updates for LLM inference engines with remarkable efficiency. The tool is particularly valuable for reinforcement learning workflows where frequent model updates are essential.

UAE’s 32B open-source reasoning model beats GPT-OSS and DeepSeek v3
It seems the UAE wants a seat at the global AI table. Their AI research institute has released K2 Think, a 32B parameter reasoning model based on Qwen 2.5, optimized for advanced reasoning in math, science, code, and agentic tasks. The model surpasses or matches much larger models such as GPT-OSS 120B and DeepSeek v3.1 across multiple benchmarks. It's fully open-source under Apache 2.0, complete with training data and deployment tools. Wish they had chosen another name, since many people have already confused it with Kimi.ai’s K2 model.

Hugging Face’s free course on Fine-Tuning with certification
Hugging Face just launched their free "smol-course" on fine-tuning language models, covering everything from instruction tuning to preference alignment and synthetic data generation. The course runs through December with hands-on frameworks, community challenges, and two certification tracks. It's designed for developers who want to fast-track their LLM fine-tuning skills with practical, battle-tested techniques.

Tools of the Trade

  1. Awesome Claude Code - A curated list of slash-commands, CLAUDE.md files, CLI tools, and other resources and guides for enhancing your Claude Code workflow, productivity, and vibes.

  2. Sourcetable's Superagents - Connects spreadsheets to any database, API, or MCP server to analyze and orchestrate data, using AI agents that can generate code on-the-fly to solve data problems. It runs a sandboxed Python virtual machine with hundreds of AI tools and libraries for real-time data analysis, modeling, and visualization.

  3. Oboe - Generate courses on literally any topic - from history of AI to contract law to ordering wine in France - all with a simple prompt. It offers nine content formats, like articles, audio lectures, quizzes, and games, and adapts to your preferences.

  4. Awesome LLM Apps - A curated collection of LLM apps with RAG, AI Agents, multi-agent teams, MCP, voice agents, and more. The apps use models from OpenAI, Anthropic, Google, and open-source models like DeepSeek, Qwen, and Llama that you can run locally on your computer.
    (Now accepting GitHub sponsorships)

Hot Takes

  1. I don’t understand who asked for a thinner iPhone ~
    Garry Tan

  2. it's incredibly fortunate we live in a world where every single coding agent can be #1 on benchmarks ~
    dax

  3. it's funny, sonnet can both be the best and the worst model at writing frontend code depending on how you prompt it ~
    eric zakariasson

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
