
Simulated Society of 10,000+ AI Agents

PLUS: Gemini Deep Think now available, Cerebras Code - 20x faster than Claude

In partnership with

Today’s top AI Highlights:

  1. This framework simulates an entire society of 10,000+ AI agents

  2. Google’s open-source library to extract structured information from unstructured text

  3. Cerebras Code - 20x faster than Claude, 1x the price

  4. The IMO 2025 gold model is now in your pocket - sort of

  5. 100+ Free AI agent, command, and MCP templates for Claude Code

& so much more!

Read time: 3 mins

AI Tutorial

Integrating travel services as a developer often means wrestling with a patchwork of inconsistent APIs. Each API - whether for maps, weather, bookings, or calendars - brings its own implementations, auth, and maintenance burdens. The travel industry's fragmented tech landscape creates unnecessary complexity that distracts from building great user experiences.

In this tutorial, we’ll build a multi-agent AI travel planner using MCP servers as universal connectors. By using MCP as a standardized layer, we can focus on creating intelligent agent behaviors rather than API-specific quirks. Our application will orchestrate specialized AI agents that handle different aspects of travel planning while using external services via MCP.

We'll use the Agno framework to create a team of specialized AI agents that collaborate to create comprehensive travel plans, with each agent handling a specific aspect of travel planning - maps, weather, accommodations, and calendar events.

We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads) to support us!

Latest Developments

Social scientists have a problem: studying human behavior is expensive, time-consuming, and often impossible to control.

Now, picture 10,000 AI agents living, working, and arguing about gun control in a virtual city.

That's what Tsinghua University's research team created with AgentSociety.

It is the first large-scale social laboratory powered by AI agents that actually think and feel like humans. These agents have genuine psychological depth, complete with emotions, needs, and the ability to form opinions.

Scale is another part of what makes AgentSociety special: the platform has simulated over 5 million interactions between agents. Each agent operates with Maslow's hierarchy of needs driving its behavior, from basic survival to self-actualization.

The research implications are massive. AgentSociety has already reproduced four major social phenomena: political polarization, inflammatory message spread, UBI policy effects, and hurricane response patterns. Experiments that would have cost millions and taken years in the real world, AgentSociety ran in computational minutes.

Key Highlights:

  1. Psychological Realism - Agents operate with three-layered mental processes (emotions, needs, cognition) that dynamically influence their behavior, creating authentic decision-making patterns that mirror human psychology rather than simple rule-following.

  2. Integration - The simulation incorporates real urban data from OpenStreetMap, actual economic dynamics with banks and taxation, and social networks with content moderation, creating a world where agents face genuine constraints and feedback.

  3. Massive Scale - Successfully demonstrated simulations with up to 30,000 agents running faster than real-time, using distributed computing and asynchronous processing to achieve unprecedented scalability in social simulation.

  4. Validated Experiments - Reproduced real-world phenomena, including political polarization, inflammatory message spread, Universal Basic Income effects, and hurricane response patterns, proving the platform's ability to capture authentic social dynamics.

  5. Open-source - Built as an open-source framework with plug-and-play LLM API support (OpenAI, DeepSeek, Qwen, and more), allowing you to deploy it on your own infrastructure.
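The Maslow-style need hierarchy described above can be sketched as a tiny decision loop. This is a hypothetical illustration of the idea, not AgentSociety's actual code; all names and thresholds below are invented:

```python
# Toy sketch of need-driven agent behavior, loosely inspired by the
# Maslow hierarchy AgentSociety is described as using.

from dataclasses import dataclass, field

# Needs ordered from most basic (index 0) to most aspirational.
NEED_ORDER = ["physiological", "safety", "social", "esteem", "self_actualization"]

@dataclass
class Agent:
    # Satisfaction level per need: 0.0 (unmet) to 1.0 (fully met).
    needs: dict = field(default_factory=lambda: {n: 1.0 for n in NEED_ORDER})

    def most_pressing_need(self, threshold: float = 0.5) -> str:
        # The lowest-level unmet need dominates behavior; only when
        # the basics are satisfied do higher needs take over.
        for need in NEED_ORDER:
            if self.needs[need] < threshold:
                return need
        return "self_actualization"

agent = Agent()
agent.needs["physiological"] = 0.2   # e.g. hungry
agent.needs["esteem"] = 0.1          # also unmet, but lower priority
print(agent.most_pressing_need())    # -> physiological
```

The point of the sketch: two needs can be unmet at once, but the more basic one wins, which is what gives the agents behavior that looks layered rather than rule-following.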

Turn AI Into Your Income Stream

The AI economy is booming, and smart entrepreneurs are already profiting. Subscribe to Mindstream and get instant access to 200+ proven strategies to monetize AI tools like ChatGPT, Midjourney, and more. From content creation to automation services, discover actionable ways to build your AI-powered income. No coding required, just practical strategies that work.

Extract structured data from unstructured text with the output tied back to its source in just 3 lines of Python code.

Google's LangExtract library lets you pull structured data from any unstructured text by showing the model one high-quality example of your desired output format. It handles everything from quick text snippets to full documents, processing them in parallel while maintaining exact source grounding so every extracted piece traces back to its original location. It also generates HTML visualization for exploring results.

The Python library works across domains - clinical notes, legal documents, and research papers - and works with both cloud models, like Gemini, and local models via Ollama.

Key Highlights:

  1. Example-driven - Define any extraction task using just a few high-quality examples, and the library adapts to your domain without requiring model fine-tuning or complex rule writing.

  2. Precise source grounding - Every extracted entity is mapped back to its exact character offsets in the source text, making it much easier to evaluate and verify the extracted information.

  3. Interactive visualization - Go from raw text to an interactive, self-contained HTML visualization in minutes. LangExtract makes it easy to review extracted entities in context, with support for exploring thousands of annotations.

  4. Optimized for long documents - Handles large texts through intelligent chunking, parallel processing, and multiple extraction passes to overcome the "needle-in-a-haystack" problem.

  5. Model support - Works with cloud models like Gemini for production use or local models via Ollama for privacy-sensitive applications, with consistent API across all options.
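The "source grounding" idea above - every extraction mapped back to exact character offsets - can be illustrated with a stdlib-only toy. Note this is not the LangExtract API (see the library's README for its actual usage); it just shows what offset-grounded extraction means:

```python
import re

def extract_with_offsets(text: str, pattern: str, label: str):
    # Return (label, matched text, start, end) tuples so every
    # extraction traces back to its exact location in the source -
    # the same grounding guarantee LangExtract advertises.
    return [(label, m.group(), m.start(), m.end())
            for m in re.finditer(pattern, text)]

note = "Patient given 250 mg amoxicillin twice daily."
hits = extract_with_offsets(note, r"\d+\s*mg", "dosage")
print(hits)  # -> [('dosage', '250 mg', 14, 20)]
```

Because each hit carries its offsets, `note[14:20]` recovers exactly the text that was extracted - which is what makes verification against the original document cheap.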

Quick Bites

Manus AI just launched "Wide Research," a feature that spins up over 100 AI agents simultaneously to work on parallel subtasks, essentially giving you an on-demand agent swarm. Each subagent runs as a fully capable, general-purpose Manus instance on its own virtual machine, enabling tasks like comparing 100 products or generating 50 design variations concurrently. The feature is now live for Pro tier users, with a gradual rollout planned for Plus and Basic tiers.
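The agent-swarm pattern - fan out many parallel subtasks, then gather the results - can be sketched generically with Python's stdlib. This illustrates the pattern, not Manus's API; `run_subagent` is a stand-in for a real agent call:

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    # Stand-in for a real subagent invocation (e.g. an LLM call on
    # its own VM, as Manus describes); here it returns a canned result.
    return f"result for {task}"

# e.g. comparing 100 products, one subtask each
tasks = [f"compare product {i}" for i in range(100)]

# Fan out all subtasks in parallel; map() preserves input order.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(run_subagent, tasks))

print(len(results))  # -> 100
```

The win is that each subtask is independent, so wall-clock time is bounded by the slowest subtask rather than the sum of all of them.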

Cohere released Command A Vision, their first multimodal model that can actually see and understand your enterprise data. The model delivers state-of-the-art performance on visual tasks, surpassing GPT-4.1, Llama 4 Maverick, and Mistral Medium 3, and requires just two GPUs for deployment. It's built specifically for business workflows like document processing, chart analysis, and real-world scene understanding.

Google has released Gemini Deep Think to Google Ultra subscribers, a faster "bronze-level" variant of the full model that won gold in IMO 2025. The model generates many possible solutions to a problem at once, considers them simultaneously, even revising or combining different ideas over time, before arriving at the best answer. The original gold-medal model remains exclusive to select mathematicians and researchers.

Cerebras is launching two new monthly coding plans to make AI coding faster and more accessible: Cerebras Code Pro ($50/month) and Code Max ($200/month). Both plans give you access to Qwen3-Coder, the world’s leading open-weight coding model, running at speeds of up to 2,000 tokens per second, with a 131k-token context window and no weekly limits. If your code IDE supports OpenAI-compatible inference endpoints, you can use it with Cerebras Code. Plug Cerebras Code into Cursor, Continue.dev, Cline, RooCode, or whatever else you’re using. No extra setup.
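"OpenAI-compatible" simply means the client POSTs a standard chat-completions payload to the provider's base URL. The sketch below builds such a request with the stdlib; the base URL, model name, and key are placeholders - check Cerebras's docs for the real values:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    # Standard OpenAI-style /chat/completions payload; any
    # OpenAI-compatible server accepts this request shape.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# Placeholder endpoint/model/key for illustration.
req = build_chat_request("https://api.cerebras.ai/v1", "sk-...",
                         "qwen-3-coder", "Write fizzbuzz.")
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.full_url)
```

Tools like Cursor or Continue.dev do exactly this under the hood, which is why swapping in a different provider is usually just a base-URL and API-key change.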

Tools of the Trade

  1. Claude Code Templates: Claude Code just got a whole lot easier. You can now install 100+ AI agents, commands, MCP servers, and templates with a single command. Also lets you monitor your Claude Code usage with a comprehensive analytics dashboard.

  2. IsAgent: Detects whether website visitors are AI agents or human users. It provides React hooks and components to serve different content or experiences based on whether the client is identified as an AI agent like ChatGPT.

  3. InsForge: Open-source, agent-native alternative to Supabase designed specifically for AI coding agents like Claude Code, Cursor, and Cline. It provides authentication, database, storage, and deployment services through an MCP server interface that allows agents to autonomously configure and manage full-stack applications.

  4. Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, MCP, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. Every vibe-coder is generating as much technical debt as 10 regular developers in half the time.

    Here is the reality:

    A good engineer + AI is 100x better than folks who don't know what they are doing.

    Don't get carried away by the hype. Knowledge matters today more than ever. ~
    Santiago

  2. Heard from a little bird at OpenAI:

    GPT-5 is finally better than Claude at coding, not just on benchmarks, but in real internal use over the past few days.

    If that’s true, Anthropic can't stay quiet for too long. Claude 5 has to be released sooner. ~
    Yuchen Jin

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
