
LangChain's No-Code Agent Builder

+ New agentic coding models from Cursor and Windsurf, GitHub Agent HQ

Today’s top AI Highlights:

  1. LangChain’s no-code agent builder
  2. New agentic coding models from Cursor and Windsurf
  3. GitHub’s Agent HQ

& so much more!

Read time: 3 mins

AI Tutorial

SEO optimization is both critical and time-consuming for teams building businesses. Manually auditing pages, researching competitors, and synthesizing actionable recommendations can eat up hours that you'd rather spend strategizing.

In this tutorial, we'll build an AI SEO Audit Team using Google's Agent Development Kit (ADK) and Gemini 2.5 Flash. This multi-agent system autonomously crawls any webpage, researches live search results, and delivers a polished optimization report through a clean web interface that traces every step of the workflow.
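The crawl, research, report pipeline can be sketched in plain Python. This is a conceptual stand-in, not Google ADK's actual API: the agent functions and their return shapes below are hypothetical, and in the real tutorial each step would be an ADK agent backed by Gemini 2.5 Flash.

```python
# Hypothetical sketch of the SEO audit pipeline: crawl -> research -> report.
# Function names and data shapes are illustrative stand-ins for ADK agents.

def crawl_agent(url: str) -> dict:
    """Stand-in for the crawler agent: extracts page facts to audit."""
    return {"url": url, "title": "Example Page", "word_count": 850}

def research_agent(page: dict) -> dict:
    """Stand-in for the search-research agent: gathers competitor signals."""
    return {"competitor_avg_words": 1400, "missing_keywords": ["pricing", "reviews"]}

def report_agent(page: dict, research: dict) -> str:
    """Stand-in for the report agent: synthesizes actionable recommendations."""
    recs = []
    if page["word_count"] < research["competitor_avg_words"]:
        recs.append(f"Expand content toward ~{research['competitor_avg_words']} words.")
    for kw in research["missing_keywords"]:
        recs.append(f"Cover the missing topic '{kw}'.")
    return "\n".join(recs)

# Run the pipeline end to end on a sample URL.
page = crawl_agent("https://example.com")
research = research_agent(page)
report = report_agent(page, research)
print(report)
```

The point of the multi-agent split is that each step has a narrow job and a traceable output, which is what makes the web interface able to show every step of the workflow.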

We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads) to support us!

Latest Developments

LangChain is putting AI agent building in the hands of every user, not just developers.

They just released LangSmith Agent Builder in private preview, a no-code platform to build sophisticated AI agents that connect to multiple platforms and execute tasks autonomously.

LangSmith Agent Builder is NOT a visual workflow builder like n8n or OpenAI Agent Builder. The team believes that rather than following a predetermined path, agents should delegate more decision-making to the LLM, allowing for more dynamic responses.

You describe what you want in plain language, answer a few follow-up questions, and the system generates a complete agent with prompts, tool connections, and triggers. What makes this different is the built-in memory system: when you correct the agent or point out an edge case, it updates its own instructions so the fix carries forward to future runs.
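The memory mechanism described above can be sketched in a few lines. This is a conceptual illustration of the pattern, not LangSmith Agent Builder's actual implementation; the class and method names are hypothetical.

```python
# Conceptual sketch of adaptive memory: user corrections get folded back
# into the agent's own instructions so they persist across future runs.
# Hypothetical structure, not LangSmith Agent Builder's real internals.

class AgentMemory:
    def __init__(self, base_instructions: str):
        self.instructions = base_instructions
        self.corrections: list[str] = []

    def record_correction(self, correction: str) -> None:
        """Persist a correction by appending it to the standing instructions."""
        self.corrections.append(correction)
        self.instructions += f"\nLearned rule: {correction}"

memory = AgentMemory("Summarize incoming emails each morning.")
memory.record_correction("Skip automated notification emails.")
print(memory.instructions)
```

The key design choice is that the fix lives in the instructions themselves, so the next run starts from the corrected behavior without anyone editing a prompt by hand.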

Key Highlights:

  1. Conversational Setup - The system guides you through agent creation with follow-up questions to refine your requirements, then auto-generates detailed prompts, connects necessary tools via MCP, and sets up triggers - no prompt engineering experience needed.

  2. Adaptive Memory - Agents can update their own instructions and tool configurations based on your corrections, so improvements stick without requiring you to manually edit prompts or rebuild the agent from scratch.

  3. Tool Integration - Connect agents to approved services like Gmail, Slack, Linear, and LinkedIn through built-in OAuth flows and MCP support, with Agent Authorization ensuring proper permissions for team tools.

  4. Agent Inbox for Monitoring - Track all agent threads with status indicators (idle, busy, interrupted, errored) and receive notifications when agents need your attention, creating a manageable oversight system for autonomous workflows.

The AI Insights Every Decision Maker Needs

You control budgets, manage pipelines, and make decisions, but you still have trouble keeping up with everything going on in AI. If that sounds like you, don’t worry, you’re not alone – and The Deep View is here to help.

This free, 5-minute daily newsletter covers everything you need to know about AI. The biggest developments, the most pressing issues, and how companies from Google and Meta to the hottest startups are using it to reshape their businesses… it’s all broken down for you each and every morning into easy-to-digest snippets.

If you want to up your AI knowledge and stay on the forefront of the industry, you can subscribe to The Deep View right here (it’s free!). 

AI browsers talk about agents. FlowithOS actually delivers them. While competitors struggle to book a simple flight, this OS is autonomously managing YouTube channels, posting on social platforms, and scoring near-perfect on the hardest web automation benchmarks.

FlowithOS is the first operating system natively designed for AI agents, running directly on your desktop and treating the entire web as its execution environment. Built on Flowith's infinite agent architecture, the system uses agentic flows to orchestrate web-wide resources into seamless on-demand workflows.

It handles complex multi-step processes autonomously, like creating content, managing uploads, writing descriptions, engaging with comments, and completing transactions, while a built-in reflective agent continuously reviews performance and learns from mistakes.

Results speak clearly: On the Online Mind2Web hardest level, FlowithOS hit 92.8% while ChatGPT Atlas managed only 75.7%.

Key Highlights:

  1. End-to-End Workflow Execution - Handles complete processes like YouTube content creation from video generation in Flowith Canvas through upload, metadata writing, and publishing, with autonomous comment management afterward.

  2. Adaptive Learning System - The reflective agent reviews every action, identifies improvement opportunities, and evolves the system through continuous reinforcement learning, automatically converting successful workflows into skills.

  3. Dual Memory Architecture - Short-term context and long-term storage work together to enable intelligent handling of recurring tasks, learning your habits and preferences to become more effective over time.

  4. Skills and Custom Instructions - Create Markdown-based instructions that guide the OS through specific sites or tasks, building a knowledge base that the agent references and expands as it learns new workflows.

Currently, access is via invite codes only. Follow Flowith on X to get updates on access codes.

Quick Bites

Cursor’s first in-house coding model
Cursor just dropped version 2.0, introducing Composer, their first in-house coding model built for low-latency agentic coding tasks. It runs 4x faster than comparable models and completes most tasks in under 30 seconds. The model was trained with codebase-wide semantic search tools, making it particularly adept at navigating large projects. The new Cursor 2.0 interface shifts focus from files to agents: via git worktrees or remote machines, you can run multiple agents in parallel, even pointing several models at the same problem and picking the best solution. Cursor 2.0 is available now.
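Cursor's own plumbing isn't public, but the git worktree mechanism it builds on is standard: each agent gets an isolated checkout of the same repository, so parallel edits never collide. A minimal illustration (assumes `git` is on PATH; the branch-per-agent naming is our own convention, not Cursor's):

```python
# Illustrates the git-worktree mechanism behind parallel coding agents:
# one isolated checkout (and branch) per agent attempt at the same task.
import os
import subprocess
import tempfile

def run(args, cwd):
    """Run a git command and return its stdout, raising on failure."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp(prefix="agents-repo-")
run(["git", "init", "-b", "main"], repo)
run(["git", "-c", "user.email=bot@example.com", "-c", "user.name=bot",
     "commit", "--allow-empty", "-m", "init"], repo)

# Create one worktree per agent; each lives in its own sibling directory.
for agent in ("agent-1", "agent-2"):
    path = os.path.join(os.path.dirname(repo), f"{os.path.basename(repo)}-{agent}")
    run(["git", "worktree", "add", "-b", agent, path], repo)

print(run(["git", "worktree", "list"], repo))
```

Because each worktree is a full checkout on its own branch, comparing the agents' solutions afterward is just an ordinary branch diff.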

And Windsurf/Cognition AI’s new model too
The same day, Cognition AI also released SWE-1.5, a frontier-scale coding model that’s 13x faster than Sonnet 4.5 while delivering near-SOTA performance. They partnered with Cerebras for inference and may be the first to publicly deploy a model trained on Nvidia's GB200 chips.

Rather than using off-the-shelf coding benchmarks, they manually created training environments that mirror real Devin and Windsurf tasks, complete with execution and web browsing. Their "reward hardening" process involves senior engineers deliberately trying to exploit the grading mechanisms, identifying false positives before the model learns bad patterns. SWE-1.5 is now available in Windsurf.

GitHub's Agent HQ: One platform for every AI coding agent
GitHub just announced Agent HQ, turning the platform into an open ecosystem where agents from Anthropic, OpenAI, Google, Cognition, and xAI will all run natively. No more juggling multiple subscriptions or interfaces. You get a single mission-control interface to orchestrate multiple agents in parallel: assigning tasks, monitoring progress, and managing them across GitHub, VS Code, mobile, and CLI, all included in your existing Copilot subscription. Plus custom agent configs that live in source control, automatic code review before you see agent output, and full MCP support in VS Code, all built on the Git primitives you already trust.

ChatGPT Go one-year subscription for free in India
OpenAI is offering its ChatGPT Go subscription free for one year to all users in India starting November 4. The plan, normally priced at ₹399/month (around US $4.50), is available to both new and existing subscribers in India.

OpenAI AGI timelines + Restructuring + $1T of Compute
OpenAI just closed the deal of the century. The restructuring is now complete, clearing the path for a likely IPO. Microsoft got their $135B stake (27%) but lost exclusivity, and the nonprofit Foundation kept control with 26% while committing $25B to health/safety initiatives.

The infrastructure math is genuinely insane. Altman disclosed $1.4T committed across 30GW of compute deals (Nvidia, AMD, Broadcom, Oracle), which works out to ~$47B per gigawatt. The endgame? Build 1GW per week at $20B per GW, roughly $1T in annual capex, to hit their target capacity.
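The arithmetic behind those headline figures checks out:

```python
# Verifying the compute-deal math quoted above.
committed = 1.4e12          # $1.4T committed across the deals
gigawatts = 30              # 30 GW of compute
per_gw = committed / gigawatts
print(f"${per_gw / 1e9:.1f}B per gigawatt")      # roughly $47B per GW

# The stated endgame: 1 GW per week at $20B per GW.
annual_capex = 52 * 20e9
print(f"${annual_capex / 1e12:.2f}T annual capex")
```

So "$1T annual capex" is actually a slight rounding down of ~$1.04T at a strict 1GW/week pace.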

Sam also dropped the AGI roadmap: intern-level AI researcher by September 2026, full autonomous "legitimate AI researcher" by March 2028, capable of independently running large research projects end-to-end.

And this one’s the most important for developers and engineers: despite hiring a CEO of Apps, Altman said that OpenAI is pivoting to be infrastructure for the next wave of AI companies, not just ChatGPT's parent company. If you're building applications and tools, this is the clearest "we're here to empower you, not compete" message Altman has sent yet.

Tools of the Trade

  1. Open ChatGPT Atlas - Open-source and free alternative to ChatGPT Atlas. This Chrome extension uses Composio’s Tool Router to connect to 500+ apps like Gmail and Slack to autonomously execute actions, and Gemini 2.5 Flash Computer Use model to navigate and use the web.

  2. MCP Scanner - Scans MCP servers for security vulnerabilities using three analysis methods: YARA rules, LLM-based evaluation, and Cisco's AI Defense API. It works as a CLI tool or REST API to check MCP tools, prompts, and resources for threats like prompt and command injection, and malicious code.

  3. V0 iOS app - Vibe-build full-stack web applications anywhere. The v0 iOS app brings the platform to your phone: describe your idea in plain language, and it builds your entire application from backend to UI.

  4. kvcached - A KV cache library for LLM serving and training on shared GPUs. By bringing OS-style virtual memory abstraction to LLM systems, it maps physical GPU memory only when needed at runtime, improving GPU utilization under dynamic workloads.
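The virtual-memory idea behind kvcached can be shown with a toy data structure. This is a conceptual sketch of the technique, not kvcached's real API: address space is reserved up front, but "physical pages" are only allocated when a token actually writes into them.

```python
# Conceptual sketch of OS-style virtual memory for a KV cache (not the
# real kvcached API): reserve a large virtual range, map pages lazily.

class VirtualKVCache:
    def __init__(self, virtual_pages: int, page_size: int):
        self.virtual_pages = virtual_pages   # reserved address space (cheap)
        self.page_size = page_size
        self.mapped: dict[int, list] = {}    # physical pages, mapped on demand

    def write(self, token_index: int, kv) -> None:
        page_id = token_index // self.page_size
        if page_id not in self.mapped:       # map a physical page on first touch
            self.mapped[page_id] = [None] * self.page_size
        self.mapped[page_id][token_index % self.page_size] = kv

cache = VirtualKVCache(virtual_pages=1024, page_size=16)
cache.write(0, "kv0")     # touches page 0
cache.write(40, "kv40")   # touches page 2
print(f"{len(cache.mapped)} pages mapped of {cache.virtual_pages} reserved")
```

Under a dynamic workload, idle reservations cost nothing until written, which is where the GPU-utilization win comes from.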

  5. Awesome LLM Apps - A curated collection of LLM apps with RAG, AI Agents, multi-agent teams, MCP, voice agents, and more. The apps use models from OpenAI, Anthropic, Google, and open-source models like DeepSeek, Qwen, and Llama that you can run locally on your computer.
    (Now accepting GitHub sponsorships)

Hot Takes

  1. both cursor and windsurf released models today heavily optimized for speed

    this is very different than the direction people have been pushing where they kick stuff off to codex for 45min

    but it's fast feedback loops are always what end up mattering
    ~ dax

  2. A real gap between what people using chatbots can do and what even non-coders can do with today’s CLI-like tools that have access to their computers, the web & the ability to execute long-term plans,

    Big opportunity for the AI lab that gets powerful & safe personal agents right

    ~ Ethan Mollick

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
