Google's Agent Garden is Open to All
+ OpenAI GPT-5-Codex-Mini, Perplexity Comet on Android
Today’s top AI Highlights:
& so much more!
Read time: 3 mins
AI Tutorial
This is the Day 1 whitepaper for the AI Agents Intensive Course by Kaggle and Google, designed to take developers from prototypes to production-grade agent systems.
Learn about agentic design patterns, tool integration with MCP and A2A, multi-agent systems, RAG, and Agent Ops.
The 5-level taxonomy:
Level 0: Core reasoning system (LM in isolation)
Level 1: Connected problem-solver (LM + tools)
Level 2: Strategic problem-solver (context engineering + planning)
Level 3: Collaborative multi-agent systems
Level 4: Self-evolving systems that create new tools and agents
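The jump from Level 0 to Level 1 in the taxonomy above can be sketched in a few lines. This is a conceptual illustration, not code from the whitepaper; the function names and the `calc:` convention are made up for the example:

```python
# Conceptual sketch of taxonomy Levels 0 and 1 (hypothetical names, not from
# the whitepaper): a bare "model" answers only from its own knowledge, while
# a tool-connected agent can route out to an external capability.

def bare_model(question: str) -> str:
    """Level 0: core reasoning system in isolation -- no external access."""
    return "I don't have live data for that."

def calculator_tool(expression: str) -> str:
    """A stand-in external tool the agent can invoke."""
    return str(eval(expression, {"__builtins__": {}}))  # toy example only

def connected_agent(question: str) -> str:
    """Level 1: connected problem-solver -- uses a tool when one applies."""
    if question.startswith("calc:"):
        return calculator_tool(question.removeprefix("calc:").strip())
    return bare_model(question)

print(connected_agent("calc: 2 + 3 * 4"))    # -> 14
print(connected_agent("capital of France?")) # falls back to the bare model
```

Levels 2 through 4 layer planning, multi-agent coordination, and self-modification on top of this same loop.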
The paper covers real implementation details: how to handle context engineering, build memory systems, deploy with proper observability, and scale from one agent to enterprise fleets.
Advanced sections explore agent evolution, simulation environments, and case studies like Google Co-Scientist and AlphaEvolve.
Zero fluff. 100% free. Check it out now!
Latest Developments
Google's repository of production-grade AI agents just went public. You don’t even need a Google Cloud account to use it.
Agent Garden, a curated collection of ready-to-deploy agent samples built with their Agent Development Kit (ADK), is now available to all developers.
You can test, customize, and push these agents to production with just one click. The platform tackles the messiest part of building multi-agent systems: figuring out how agents should actually be architected, what tools they need, and how they coordinate with each other.
The collection spans use cases from data science to customer support and RAG. Every sample comes with architectural insights, use-case documentation, and source code on GitHub.
Key Highlights:
One-Click Deployment - Agent Garden ships with the open-source Agent Starter Pack for streamlined deployments. Click Deploy, and the sample agent is deployed to Agent Engine in your project and exposed through the Agent Engine playground UI.
Multi-Service Integration Examples - Sample agents demonstrate production-ready integrations with BigQuery, Vertex AI Search, and other cloud services, showing how to handle real enterprise data workflows rather than toy examples.
Framework-Level Learning - Each sample exposes ADK's multi-agent patterns, including sequential workflows, parallel execution, and LLM-driven routing, helping you understand orchestration architecture rather than just copying code.
Customizing the Agents - You can open any sample in Firebase Studio and modify its code to fit your own needs and use cases.
Model Flexibility Built-In - Samples work with Gemini models but support swapping to 200+ alternatives from Anthropic, Meta, Mistral AI, and others through LiteLLM integration, letting you test different models against real use cases.
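The three orchestration patterns the samples demonstrate can be sketched in plain Python. This is a conceptual stand-in, not the ADK API; the agent functions and routing rule are invented for the illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Pure-Python sketch of the three orchestration patterns the ADK samples
# expose: sequential workflows, parallel execution, and routing. The "agents"
# here are plain functions standing in for LLM-backed agents.

def researcher(task): return f"notes({task})"
def writer(task):     return f"draft({task})"
def reviewer(task):   return f"review({task})"

def sequential(agents, task):
    """Sequential workflow: each agent's output feeds the next."""
    for agent in agents:
        task = agent(task)
    return task

def parallel(agents, task):
    """Parallel execution: agents run concurrently on the same input."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda agent: agent(task), agents))

def route(task):
    """LLM-driven routing: a model would pick the sub-agent; a keyword
    check stands in for that decision here."""
    return writer(task) if "write" in task else researcher(task)

print(sequential([researcher, writer, reviewer], "topic"))
# -> review(draft(notes(topic)))
print(parallel([researcher, writer], "topic"))
```

In the real samples, the same shapes appear as ADK agent compositions rather than bare functions, with the routing decision made by a model.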
Attention spans are shrinking. Get proven tips on how to adapt:
Mobile attention is collapsing.
In 2018, mobile ads held attention for 3.4 seconds on average.
Today, it’s just 2.2 seconds.
That’s a 35% drop in only 7 years. And a massive challenge for marketers.
The State of Advertising 2025 shows what’s happening and how to adapt.
Get science-backed insights from a year of neuroscience research and top industry trends from 300+ marketing leaders. For free.
Imagine you’re building a complex software project with AI agents. You write the task descriptions, define all the branches, and hope you've covered everything. Every branch needs predefined instructions.
But what about discoveries you didn't anticipate?
This open-source framework, Hephaestus, solves this with a radically different approach: agents don't follow predefined workflows. They build the workflow as they discover what's needed.
You define phase types (Analysis → Implementation → Validation), and the agents dynamically create tasks within those phases based on what they discover. When a testing agent finds an optimization opportunity or architectural insight, it doesn't get stuck; it spawns new investigation tasks, implementation work, or validation checks on the fly. The workflow branches and adapts in real time, coordinated through Kanban tickets and monitored by a Guardian system that ensures agents stay aligned with phase goals.
Key Highlights:
Self-Building Workflow Trees - Agents spawn new tasks across any phase type when they discover optimization opportunities, bugs, or architectural improvements during their work, creating branching trees of coordinated tasks without predefined instructions for every scenario.
Parallel Execution - Multiple agents work simultaneously across different phases with Kanban-based coordination. One agent validates auth while another implements API routes, and a third investigates a caching pattern, all tracked through ticket dependencies.
Coherence Without Rigidity - Guardian system monitors workflow alignment and agent trajectories against phase goals (with coherence scoring), ensuring coordinated progress while allowing agents to explore discoveries and create new work branches autonomously.
Real-Time Observability - Watch agents work in isolated tmux sessions through a dashboard showing active tasks, phase distribution, dependency graphs, and agent decisions as the workflow tree builds itself based on actual project discoveries.
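The self-building workflow idea reduces to a queue of tickets where finishing one ticket can enqueue new ones. This is a conceptual sketch, not Hephaestus's actual API; the phase names match the article, but `run_workflow` and the toy worker are invented:

```python
from collections import deque

# Conceptual sketch (not Hephaestus code) of a self-building workflow:
# agents process Kanban-style tickets and may spawn new tickets in any
# phase when they discover follow-up work, so the task tree grows at
# runtime instead of being predefined.

PHASES = ("analysis", "implementation", "validation")

def run_workflow(initial_tickets, worker):
    """Drain a ticket queue; the worker may return newly spawned tickets."""
    queue = deque(initial_tickets)
    done = []
    while queue:
        phase, task = queue.popleft()
        assert phase in PHASES
        spawned = worker(phase, task)  # agent works, maybe discovers more
        done.append((phase, task))
        queue.extend(spawned)          # new branches join the board
    return done

def toy_worker(phase, task):
    # A validation agent that finds a bug spawns a fix plus a re-check --
    # work that no predefined branch had to anticipate.
    if phase == "validation" and task == "check auth":
        return [("implementation", "fix token expiry"),
                ("validation", "re-check auth")]
    return []

history = run_workflow([("analysis", "map codebase"),
                        ("validation", "check auth")], toy_worker)
print(history)
```

In Hephaestus the same loop is distributed across parallel agents and supervised by the Guardian's coherence scoring; the sketch keeps only the spawning mechanic.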
Quick Bites
OpenAI GPT-5-Codex-Mini gives 4x more usage than GPT-5-Codex
OpenAI quietly dropped GPT-5-Codex-Mini, giving developers 4x more usage than the full Codex at a slight capability hit. It is currently available only via Codex CLI and the VS Code extension until API access arrives. Simon Willison tested the model and found that it struggles with complex visual tasks like SVG generation, suggesting it's best suited for simpler coding operations where efficiency matters more than peak performance.
Google Agent Development Kit handcrafted for Go
Google's Agent Development Kit now supports Go, joining Python and Java as language options for building AI agents with code-first control. The Go implementation ships with Agent2Agent protocol support for multi-agent orchestration and includes MCP Toolbox integration for 30+ databases out of the box. If you've been waiting to build agents in Go with proper concurrency primitives and strong typing, ADK just made that possible.
Perplexity Comet for Android coming soon
Perplexity is sending out Android invites for Comet. Pro subscribers and heavy Perplexity users get first dibs on access, while everyone else can join the waitlist on Google Play.
PyTorch releases high-level DSL for performant ML Kernels
PyTorch just released Helion, a DSL that compiles high-level Python code into autotuned Triton kernels. Think "PyTorch with tiles" where you write familiar torch operations and the compiler handles the tedious indexing, memory management, and hardware-specific tuning. The interesting bit is the implicit search space: one kernel definition automatically generates thousands of Triton configurations to explore. Benchmarks show it beating torch.compile by 1.17x and hand-tuned Triton by 2.1x on B200, which is impressive given you're writing at a much higher level of abstraction. Available in beta.
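The "PyTorch with tiles" idea can be illustrated in plain Python. This is an analogy only, not Helion code (real kernels decorate a function with `@helion.kernel` and loop over `hl.tile` on GPU tensors); the point is that you write per-tile logic while the tile size is a tunable the compiler would search over:

```python
# Plain-Python analogy of Helion's tile model (not Helion code): the kernel
# author writes a loop over output tiles; Helion's autotuner, not the
# author, would choose tile_size and generate Triton configurations.

def tiled_add(x, y, tile_size=4):
    """Elementwise add computed one tile at a time."""
    n = len(x)
    out = [0] * n
    for start in range(0, n, tile_size):   # one iteration per tile
        end = min(start + tile_size, n)
        for i in range(start, end):        # work inside the tile
            out[i] = x[i] + y[i]
    return out

print(tiled_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))
# -> [11, 22, 33, 44, 55]
```

The implicit search space mentioned above comes from treating parameters like `tile_size` (and memory layout, pipelining, etc.) as free variables to autotune per hardware target.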
Tools of the Trade
Kosong - An LLM abstraction layer designed for modern AI agent applications. It unifies message structures, asynchronous tool orchestration, and pluggable chat providers so you can build agents with ease and avoid vendor lock-in.
MCP Tool Filter - Reduces tool context from 1000+ MCP server tools to the most relevant 10-20 in under 10ms, using semantic embeddings to narrow down to the contextually relevant subset. Comes with built-in caching and optimized vector operations.
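The core mechanic behind embedding-based tool filtering is simple to sketch: score every tool description against the query by cosine similarity and keep the top-k. The toy 3-dimensional vectors and function names below are invented for illustration; the real library uses learned embeddings and its own API:

```python
import math

# Conceptual sketch of embedding-based tool filtering (toy vectors, not the
# MCP Tool Filter's real embeddings or API): rank tools by cosine similarity
# to the query embedding and keep only the most relevant few.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def top_k_tools(query_vec, tool_vecs, k=2):
    """Return the names of the k tools most similar to the query."""
    ranked = sorted(tool_vecs,
                    key=lambda name: cosine(query_vec, tool_vecs[name]),
                    reverse=True)
    return ranked[:k]

tools = {
    "read_file":  [0.9, 0.1, 0.0],
    "send_email": [0.0, 0.9, 0.2],
    "query_db":   [0.8, 0.0, 0.3],
}
print(top_k_tools([1.0, 0.0, 0.1], tools))  # file/db tools outrank email
```

At 1000+ tools the same ranking is done over precomputed, cached embeddings with vectorized math, which is how the sub-10ms latency is achievable.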
Valdi - A cross-platform UI framework by Snapchat that delivers native performance without sacrificing developer velocity. Write your UI once in declarative TypeScript, and it compiles directly to native views on iOS, Android, and macOS. Used by Snapchat in production for 8 years.
OpenAI ChatGPT OAuth Plugin for opencode - This plugin lets opencode users authenticate with ChatGPT Plus/Pro OAuth to access OpenAI's Codex backend, bypassing separate API credits. It auto-fetches Codex instructions from GitHub releases, and includes 9 pre-configured reasoning variants (low/medium/high effort) for both gpt-5 and gpt-5-codex models.
Awesome LLM Apps - A curated collection of LLM apps with RAG, AI Agents, multi-agent teams, MCP, voice agents, and more. The apps use models from OpenAI, Anthropic, Google, and open-source models like DeepSeek, Qwen, and Llama that you can run locally on your computer.
(Now accepting GitHub sponsorships)
Hot Takes
Pretty soon LLMs will be mostly talking to themselves.
Crazy to think we’re on our way to having more tokens generated globally for LLM consumption (reasoning and Agent2Agent) vs for human consumption.
Can you imagine being a "frontier" lab that's raised like a billion dollars and now you can't release your latest model because it can't beat Kimi_Moonshot? 🗻
Sota can be a bitch if thats your target
~ Emad Mostaque
That’s all for today! See you tomorrow with more such AI-filled content.
Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!
PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉