OpenAI GPT-5 Releases in a Few Hours
PLUS: LangChain's open-source asynchronous SWE agent, security reviews by Claude Code
Today’s top AI Highlights:
GPT-5 is coming today. SaaS is about to go full fast fashion.
LangChain releases an open-source asynchronous SWE agent
OpenAI’s Harmony format that makes gpt-oss actually work
Claude Code can now do security reviews autonomously
Open-source, self-hosted Perplexity for your codebase
& so much more!
Read time: 3 mins
AI Tutorial
Finding the perfect property in today's competitive real estate market can be overwhelming. With thousands of listings across multiple platforms, varying market conditions, and complex investment considerations, homebuyers often struggle to make informed decisions efficiently. What if we could create specialized agents that work together like a professional real estate team?
In this tutorial, we've built a multi-agent AI real estate team that provides detailed property listings, market insights, and investment analysis in one interface, without you having to search multiple websites.
This system uses three specialized agents working in concert:
Property Search Agent that finds listings across major platforms,
Market Analysis Agent that provides neighborhood insights, and
Property Valuation Agent that delivers investment analysis.
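To make that hand-off concrete, here is a minimal, plain-Python sketch of how three specialist agents could be chained into one pipeline. It is purely illustrative: the class names, fields, and stubbed return values are ours, not the tutorial's actual code, and each `run` method stands in for real tool calls.

```python
# Hypothetical sketch of the three-agent hand-off described above.
# Class names, fields, and return values are illustrative, not the tutorial's code.

from dataclasses import dataclass

@dataclass
class PropertyQuery:
    location: str
    budget: float
    property_type: str = "house"

class PropertySearchAgent:
    def run(self, query: PropertyQuery) -> list[dict]:
        # In the real tutorial this step would query listing platforms via search tools.
        return [{"address": "123 Example St", "price": 450_000}]

class MarketAnalysisAgent:
    def run(self, query: PropertyQuery, listings: list[dict]) -> dict:
        # Would enrich listings with neighborhood trends, comps, schools, etc.
        return {"median_price": 470_000, "trend": "rising"}

class PropertyValuationAgent:
    def run(self, listings: list[dict], market: dict) -> list[dict]:
        # Would score each listing against market data for investment potential.
        return [
            {**l, "verdict": "under market" if l["price"] < market["median_price"] else "at/above market"}
            for l in listings
        ]

def real_estate_team(query: PropertyQuery) -> list[dict]:
    # One interface: search, then analyze, then value.
    listings = PropertySearchAgent().run(query)
    market = MarketAnalysisAgent().run(query, listings)
    return PropertyValuationAgent().run(listings, market)

print(real_estate_team(PropertyQuery(location="Austin, TX", budget=500_000)))
```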
We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.
Latest Developments
Every major tech company is racing to build the next asynchronous coding agent, but most of these agents are still closed-source black boxes that lock you into their ecosystem.
LangChain just released Open SWE, the first truly open-source alternative that matches the features of OpenAI Codex and Google Jules, complete with autonomous planning, cloud execution, and pull request generation.
Built on LangGraph, this agent doesn't just write code; it researches your codebase, creates detailed execution strategies, implements features with testing, and self-reviews before submitting PRs. It has specialized components: a Manager for task routing, a Planner for strategy development, and a Programmer with a built-in Reviewer for quality assurance. Use it with your choice of models or customize fully for your team's workflow.
Key Highlights:
Open-source - Fork and customize the entire agent stack instead of being locked into proprietary systems. Use any LLM provider or run local models.
Multi-agent pipeline - Dedicated planning phase with human approval, systematic code implementation, and automated review cycles prevent the broken CI deployments common with simpler agents.
Parallel execution - Run as many Open SWE tasks as you want at once; each task runs in its own cloud sandbox, so you're not constrained by your local machine.
Real-time interaction - Send messages to running agents mid-execution to provide feedback or redirect focus, a capability missing from many current asynchronous coding tools.
End-to-end management - Open SWE automatically creates GitHub issues for tasks and opens pull requests that close the issue when implementation is complete.
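Since Open SWE is built on LangGraph, the Manager → Planner → Programmer/Reviewer flow maps naturally onto a graph of nodes. Here is a minimal sketch of that kind of pipeline using the standard LangGraph StateGraph API; it illustrates the pattern only, and the node names, state fields, and routing logic are our own assumptions, not Open SWE's actual implementation.

```python
# Illustrative LangGraph pipeline in the spirit of Open SWE's
# Manager -> Planner -> Programmer/Reviewer flow. Node names, state fields,
# and routing logic are hypothetical, not Open SWE's real code.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class TaskState(TypedDict, total=False):
    request: str    # the user's task description
    plan: str       # execution strategy produced by the planner
    code: str       # code produced by the programmer
    approved: bool  # result of the self-review step

def manager(state: TaskState) -> TaskState:
    # Triage/route the incoming request (stubbed here).
    return {"request": state["request"].strip()}

def planner(state: TaskState) -> TaskState:
    # Would research the codebase and draft a step-by-step plan via an LLM.
    return {"plan": f"1. Investigate: {state['request']}\n2. Implement\n3. Test"}

def programmer(state: TaskState) -> TaskState:
    # Would implement the plan in a cloud sandbox and run tests.
    return {"code": "# generated changes go here"}

def reviewer(state: TaskState) -> TaskState:
    # Would self-review the diff before opening a PR.
    return {"approved": True}

def after_review(state: TaskState) -> str:
    # Loop back to the programmer until the review passes.
    return END if state.get("approved") else "programmer"

graph = StateGraph(TaskState)
graph.add_node("manager", manager)
graph.add_node("planner", planner)
graph.add_node("programmer", programmer)
graph.add_node("reviewer", reviewer)
graph.add_edge(START, "manager")
graph.add_edge("manager", "planner")
graph.add_edge("planner", "programmer")
graph.add_edge("programmer", "reviewer")
graph.add_conditional_edges("reviewer", after_review)
app = graph.compile()

print(app.invoke({"request": "Add retry logic to the HTTP client"}))
```

In the real system, the planning node pauses for human approval and the programmer node executes inside a cloud sandbox, which is what makes the parallel, fire-and-forget workflow possible.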
2025: The Year of the One-Card Wallet
When an entire team of financial analysts and credit card experts goes to bat for the credit card they actually use, you should listen.
This card recommended by Motley Fool Money offers:
0% intro APR on purchases and balance transfers until nearly 2027
Up to 5% cash back at places you actually shop
A lucrative sign-up bonus
All for no annual fee. Don't wait to get the card Motley Fool Money (and everyone else) can't stop talking about.
While everyone was focused on the gpt-oss models themselves, an important accompanying release slipped under the radar.
OpenAI quietly released Harmony, a mandatory response format that's now the foundational layer for how their open-weight models communicate. This isn't just a chat template; it's a protocol that separates internal reasoning, tool usage, and final responses into distinct channels, giving you granular control over AI agent workflows.
The format introduces multiple roles with clear instruction hierarchies, three output channels, and built-in support for chain-of-thought reasoning and tool calling. It is designed to mimic the OpenAI Responses API, so if you have used that API before, this format should feel familiar.
Key Highlights:
Mandatory Architecture - The gpt-oss models were specifically trained on Harmony format and will not function correctly without it, making this the de facto standard for OpenAI's open ecosystem.
Multi-Channel Output - Enables models to output distinct streams for chain-of-thought reasoning (analysis channel), tool-calling preambles (commentary channel), and user-facing responses (final channel).
Structured Tool Integration - Built-in support for function calling with proper namespacing, structured outputs, and clear instruction hierarchies where system messages override developer messages, which override user inputs.
Performance-First - Core rendering and parsing logic built in Rust for speed, with Python bindings providing developer-friendly APIs. The shared implementation ensures token-sequence fidelity and eliminates formatting inconsistencies.
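For a feel of how this looks in practice, here's a minimal sketch of rendering a Harmony prompt with the openai-harmony Python bindings, following the usage pattern in OpenAI's published examples. Treat the helper names (load_harmony_encoding, Conversation, Message, SystemContent, DeveloperContent) as quoted from memory and verify them against the harmony repo before relying on them.

```python
# Minimal sketch of rendering a Harmony-format prompt for gpt-oss, based on the
# usage pattern in OpenAI's openai-harmony examples. Helper names follow that
# README but are quoted from memory -- verify against the repo.

from openai_harmony import (
    Conversation,
    DeveloperContent,
    HarmonyEncodingName,
    Message,
    Role,
    SystemContent,
    load_harmony_encoding,
)

enc = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)

convo = Conversation.from_messages([
    # System and developer messages sit above user input in the instruction hierarchy.
    Message.from_role_and_content(Role.SYSTEM, SystemContent.new()),
    Message.from_role_and_content(
        Role.DEVELOPER,
        DeveloperContent.new().with_instructions("Answer concisely."),
    ),
    Message.from_role_and_content(Role.USER, "What is the capital of France?"),
])

# Token sequence to feed the model; completions come back on the analysis,
# commentary, and final channels and can be parsed back with the same encoding.
tokens = enc.render_conversation_for_completion(convo, Role.ASSISTANT)
print(len(tokens), "prompt tokens")
```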
Quick Bites
It’s official! OpenAI is releasing GPT-5 today via a livestream on X at 10 AM PT. There are a lot of rumours floating around about model sizes and capabilities, but let's set them aside for now and focus on what's actually revealed during today's livestream!
Google just dropped Gemini CLI GitHub Actions, a powerful, no-cost AI coding teammate for your GitHub repo. This release brings intelligent issue triage, automated PR reviews, and on-demand assistance: just mention @gemini-cli to delegate routine tasks to Gemini. These features closely mirror what Anthropic offers via Claude Code GitHub Actions, clearly signaling that Google is positioning Gemini CLI to compete with Claude Code.
Ollama launched Turbo, a $20/month service that runs open models on datacenter hardware, so larger models like gpt-oss-120B that won't fit on consumer machines can run and return responses much faster. Turbo maintains full compatibility with Ollama's existing CLI and API.
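Because Turbo keeps the existing API surface, pointing the standard Ollama Python client at the hosted endpoint should be all it takes. The host URL and auth header below are assumptions to check against Ollama's Turbo docs.

```python
# Sketch of calling a Turbo-hosted model through the standard Ollama Python client.
# The host URL and Authorization header are assumptions -- check Ollama's Turbo docs.

import os
from ollama import Client

client = Client(
    host="https://ollama.com",  # assumed Turbo endpoint
    headers={"Authorization": os.environ["OLLAMA_API_KEY"]},  # assumed auth scheme
)

response = client.chat(
    model="gpt-oss:120b",
    messages=[{"role": "user", "content": "Summarize the Harmony response format."}],
)
print(response["message"]["content"])
```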
Wish we were still in college for this! Google is giving free Google Pro subscriptions to university students in the US, Japan, Korea, Indonesia, and Brazil. This is the ultimate bundle for writing papers, coding projects, making videos, and late-night study sessions.
Claude Code can now spot security vulnerabilities before they hit production with two new automated review features. The /security-review slash command enables ad-hoc terminal-based security analysis, while a GitHub Actions integration automatically flags potential issues on every PR. These features caught real vulnerabilities in Anthropic's own production code that would have otherwise shipped. Both are available immediately to all Claude Code users.
Alibaba Qwen just dropped two new 4B models that pack some serious performance upgrades into a compact package. The Qwen3 Instruct-2507 variant delivers major improvements in reasoning, multilingual coverage, and long-context understanding, while the Thinking-2507 model nearly doubled its performance on AIME25 (from 65.6 to 81.3). Both models support a native 256K context length and show substantial gains in tool usage, creative writing, and alignment tasks. Available to download on Hugging Face and ModelScope under Apache 2.0.
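If you want to kick the tires locally, here's a quick Hugging Face transformers sketch for the instruct variant. The repo id is assumed from the naming above ("Qwen3-4B-Instruct-2507"); confirm it on the Qwen Hugging Face page before running.

```python
# Quick local test of the new 4B instruct model via Hugging Face transformers.
# The repo id is assumed from the naming above -- confirm it on the Qwen HF page.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-4B-Instruct-2507"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Give me three uses for a 256K context window."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```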
Tools of the Trade
Sourcebot: Self-hosted tool for indexing and searching across all your repos and branches, regardless of host (GitHub, GitLab, Bitbucket, etc.). It lets you query your codebase in natural language and get answers with inline code citations.
AgentMail: Gives AI agents dedicated email inboxes through an API that bypasses the limitations of consumer email services. It’s much better than Gmail, with automated inbox management, unrestricted sending limits, and built-in features like semantic search and structured data extraction.
PulseMCP: A directory that catalogs and indexes 5,000+ MCP servers. It has a clean UI, intuitive navigation, and robust filtering options, making it easier to discover and explore MCP servers compared to other registries.
Apple Health MCP Server: MCP server that connects Apple Health data to AI assistants like Claude, allowing you to query your health metrics using natural language or direct SQL.
Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, MCP, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.
Hot Takes
If you’re not using AI to code, you’re shipping a lower quality product.
At a higher level, humans make the best planning and architectural decisions.
At a lower level, AI can make better, more detail-oriented, research-driven decisions.
(No agentic software is really optimized for this by default yet. So you have to command these specifically yourself.)
Humans should spend even more time scoping tasks out before they write a line of code, and use AI even more to optimize every lower-level decision once it’s time to build.
This will lead to the highest amount of conscientiousness, visible in product quality and iteration speed (the latter which leads to more of the former) ~
Sahil Lavingia
Just met a founder who fired his entire team because he was able to individually beat their productivity with Claude Code ~
Alex Reibman
That’s all for today! See you tomorrow with more such AI-filled content.
Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!
PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉