
Build an AI UI/UX Feedback Agent Team with Nano Banana

Multi-agent app using Google ADK and Nano Banana (100% open source)

Creating landing pages that convert requires a designer's eye, UX expertise, and countless iterations. What if you could get instant, expert-level feedback on your designs and automatically generate improved versions - all powered by AI?

In this tutorial, we'll build a sophisticated multi-agent UI/UX feedback app using Google's Agent Development Kit (ADK) and Gemini 2.5 Flash Image, aka Nano Banana. This agent team analyzes landing page screenshots, delivers a comprehensive design critique, and automatically generates improved versions that incorporate every recommendation.

What makes Google ADK special?

Google ADK is a powerful framework for building multi-agent systems with specialized roles. It lets you build teams with advanced patterns and hierarchies like Coordinator/Dispatcher, Sequential Pipelines, and Parallel Agents. Combined with Gemini 2.5 Flash's native vision capabilities, your agents can see and analyze images directly, with no separate vision tool calls needed.
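
To make those patterns concrete, here's a minimal sketch of the two we'll use in this app; the agent names here are illustrative, not part of the final code:

from google.adk.agents import LlmAgent, SequentialAgent

# Sequential Pipeline: sub-agents run in a fixed order, sharing conversation state
pipeline = SequentialAgent(
    name="Pipeline",
    sub_agents=[
        LlmAgent(name="StepOne", model="gemini-2.5-flash", instruction="Do step one."),
        LlmAgent(name="StepTwo", model="gemini-2.5-flash", instruction="Do step two."),
    ],
)

# Coordinator/Dispatcher: a root agent that routes each request to a sub-agent
coordinator = LlmAgent(
    name="Coordinator",
    model="gemini-2.5-flash",
    instruction="Route any multi-step request to Pipeline.",
    sub_agents=[pipeline],
)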

Don’t forget to share this tutorial on your social channels and tag Unwind AI (X, LinkedIn, Threads, Facebook) to support us!

What We’re Building

This application implements a production-ready multi-agent system for landing page analysis and improvement. The system uses a Coordinator/Dispatcher pattern with specialized agents working in sequence to deliver comprehensive UI/UX feedback and automatically generate improved designs.

Features:

👁️ Visual AI Analysis: Upload landing page screenshots—agents automatically analyze layout, typography, colors, and UX patterns using Gemini's vision capabilities

🤖 Multi-Agent Architecture: Three specialized agents work together: UI Critic (analysis), Design Strategist (planning), and Visual Implementer (generation)

✨ Automatic Improvements: Generates improved landing page designs incorporating all recommendations

📊 Detailed Reports: Comprehensive feedback covering visual hierarchy, accessibility, conversion optimization, and design best practices

♻️ Iterative Refinement: Edit and refine generated designs based on additional feedback

♿ WCAG Compliance: Accessibility checks and recommendations included

How The App Works

Basic Flow: you upload a landing page screenshot, the Root Coordinator routes it to the Analysis Pipeline, and three agents run in sequence. The UI Critic analyzes the design, the Design Strategist turns the critique into an improvement plan, and the Visual Implementer generates the improved version.

Besides this pipeline, there are two more specialized agents:

  1. Info Agent 📚

  • Handles general questions about the system

  • Explains capabilities and features

  • Guides users to upload screenshots for analysis

  2. Design Editor ✏️

  • Refines existing generated designs

  • Applies specific improvements (e.g., "make CTA bigger", "change color scheme")

  • Loads latest version automatically and creates new iterations

The Root Coordinator routes each query: edit requests go to the Design Editor, which loads the latest version and applies targeted improvements, while general questions go to the Info Agent for quick answers.

Prerequisites

Before we begin, make sure you have the following:

  1. Python installed on your machine (version 3.10 or higher is recommended)

  2. Google Gemini API key

  3. A code editor of your choice (we recommend VS Code or PyCharm for their excellent Python support)

  4. Basic familiarity with Python programming

Code Walkthrough

Setting Up the Environment

First, let's get our development environment ready:

  1. Clone the GitHub repository:

git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/agent_teams/multimodal_uiux_feedback_agent_team
pip install -r requirements.txt
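
  2. Configure your Gemini API key. tools.py loads environment variables with python-dotenv, so a .env file in the agent folder works; the one-liner below assumes the standard GOOGLE_API_KEY variable that the google-genai client reads:

echo "GOOGLE_API_KEY=your_api_key_here" > .env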

Creating the App

Let's create our app. The system consists of three Python files that work together:

File: agent.py

  1. Import necessary libraries:

from google.adk.agents import LlmAgent, SequentialAgent
from google.adk.tools import google_search
from google.adk.tools.agent_tool import AgentTool
from .tools import edit_landing_page_image, generate_improved_landing_page

  2. Create the Search Helper Agent:

search_agent = LlmAgent(
    name="SearchAgent",
    model="gemini-2.5-flash",
    description="Searches for UI/UX best practices",
    instruction="Use google_search to find current UI/UX trends, design principles, WCAG guidelines",
    tools=[google_search]
)

  3. Create the Info Agent (handles general questions):

info_agent = LlmAgent(
    name="InfoAgent",
    model="gemini-2.5-flash",
    description="Handles general questions about the system",
    instruction="""
    You explain the AI UI/UX Feedback Team capabilities.
    Keep responses brief (2-4 sentences).
    Ask users to upload landing page screenshots for analysis.
    """
)

  4. Create the Design Editor Agent (iterative refinements):

design_editor = LlmAgent(
    name="DesignEditor",
    model="gemini-2.5-flash",
    description="Edits existing designs based on feedback",
    instruction="""
    Find the most recent design from conversation history.
    Use edit_landing_page_image tool with specific UI/UX improvements.
    Be SPECIFIC: colors with hex codes, exact sizes, clear placements.
    """,
    tools=[edit_landing_page_image]
)

  5. Create the UI Critic Agent (visual analysis):

ui_critic = LlmAgent(
    name="UICritic",
    model="gemini-2.5-flash",
    description="Analyzes landing page screenshots for UI/UX issues",
    instruction="""
    Analyze the uploaded landing page screenshot in detail:
    - Visual hierarchy and layout
    - Typography and readability
    - Color scheme and contrast (WCAG AA)
    - CTA prominence and placement
    - Whitespace, alignment, and grid consistency

    Use the SearchAgent tool to check current best practices when needed.
    Produce a structured critique with specific, actionable findings.
    """,
    tools=[AgentTool(agent=search_agent)],
)

  6. Create the Design Strategist Agent (improvement planning):

design_strategist = LlmAgent(
    name="DesignStrategist",
    model="gemini-2.5-flash",
    description="Turns the critique into a concrete improvement plan",
    instruction="""
    Read the UI Critic's analysis from conversation history.
    Create a prioritized improvement plan covering:
    - Exact colors with hex codes
    - Typography changes (fonts, sizes, weights)
    - Layout and spacing adjustments
    - CTA design (size, color, placement)
    Keep every recommendation specific enough to implement directly.
    """,
)

  7. Create the Visual Implementer Agent (design generation):

visual_implementer = LlmAgent(
    name="VisualImplementer",
    model="gemini-2.5-flash",
    description="Generates improved design and report",
    instruction="""
    Read conversation history to extract:
    - UI Critic's analysis
    - Design Strategist's plan
    - Original image (visible via vision)
    
    Build EXTREMELY DETAILED prompt incorporating:
    - Exact colors with hex codes
    - Typography specifications
    - Layout structure
    - CTA design details
    - Whitespace improvements
    
    Use generate_improved_landing_page tool.
    """,
    tools=[generate_improved_landing_page]
)

  8. Create the Analysis Pipeline (Sequential Agent):

analysis_pipeline = SequentialAgent(
    name="AnalysisPipeline",
    description="Full feedback flow: Critique → Strategy → Implementation",
    sub_agents=[
        ui_critic,
        design_strategist,
        visual_implementer,
    ],
)

  9. Create the Root Coordinator Agent:

root_agent = LlmAgent(
    name="UIUXFeedbackTeam",
    model="gemini-2.5-flash",
    description="Intelligent coordinator for UI/UX feedback",
    instruction="""
    Route requests based on context:
    
    1. Image visible → transfer to AnalysisPipeline
    2. Edit existing design → transfer to DesignEditor
    3. General questions → transfer to InfoAgent
    
    CRITICAL: If you SEE an image → IMMEDIATELY route to AnalysisPipeline
    """,
    sub_agents=[info_agent, design_editor, analysis_pipeline]
)

File: tools.py

  1. Import necessary libraries:

import os
import logging
from google import genai
from google.genai import types
from google.adk.tools import ToolContext
from pydantic import BaseModel, Field
from dotenv import load_dotenv

load_dotenv()  # read GOOGLE_API_KEY from .env so genai.Client() can authenticate

  2. Version management helpers:

def get_next_version_number(tool_context: ToolContext, asset_name: str) -> int:
    asset_versions = tool_context.state.get("asset_versions", {})
    current_version = asset_versions.get(asset_name, 0)
    return current_version + 1

def create_versioned_filename(asset_name: str, version: int, file_extension: str = "png") -> str:
    return f"{asset_name}_v{version}.{file_extension}"

  3. Pydantic input models for type safety:

class EditLandingPageInput(BaseModel):
    artifact_filename: str = Field(..., description="The filename of the landing page artifact to edit.")
    prompt: str = Field(..., description="Detailed description of UI/UX improvements to apply.")
    asset_name: str | None = Field(default=None, description="Optional: specify asset name for the new version.")

class GenerateImprovedLandingPageInput(BaseModel):
    prompt: str = Field(..., description="Detailed description of the improved landing page to generate.")
    asset_name: str | None = Field(default=None, description="Optional: asset name used for versioned filenames.")
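
As a quick illustration (hypothetical values), pydantic validates and documents the tool's arguments before any image work happens:

EditLandingPageInput(
    artifact_filename="landing_page_v1.png",
    prompt="Enlarge the CTA button by ~30% and switch it to high-contrast #FF6B35",
)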

  4. Edit landing page image tool:

async def edit_landing_page_image(tool_context: ToolContext, inputs: EditLandingPageInput) -> str:
    client = genai.Client()

    # Load existing landing page
    loaded_image_part = await tool_context.load_artifact(inputs.artifact_filename)

    # Enhance prompt with UI/UX best practices
    enhanced_prompt = f"""
    {inputs.prompt}

    Apply these UI/UX best practices:
    - Maintain visual hierarchy
    - Ensure sufficient whitespace
    - Use consistent alignment and grid
    - Make CTAs prominent with contrasting colors
    """

    # Work out the next versioned filename for this asset
    asset_name = inputs.asset_name or "landing_page"
    version = get_next_version_number(tool_context, asset_name)
    edited_filename = create_versioned_filename(asset_name, version)

    # Generate edited version with Nano Banana, saving each returned image part
    for chunk in client.models.generate_content_stream(
        model="gemini-2.5-flash-image",
        contents=[loaded_image_part, types.Part.from_text(text=enhanced_prompt)],
        config=types.GenerateContentConfig(response_modalities=["IMAGE", "TEXT"])
    ):
        if not chunk.candidates or not chunk.candidates[0].content:
            continue
        for part in chunk.candidates[0].content.parts:
            if part.inline_data:
                # Save edited image as a versioned artifact
                await tool_context.save_artifact(filename=edited_filename, artifact=part)

    # Record the new version in session state
    tool_context.state["asset_versions"] = {
        **tool_context.state.get("asset_versions", {}), asset_name: version
    }
    return f"Saved edited landing page as {edited_filename} (v{version})"

  5. Generate improved landing page tool:

async def generate_improved_landing_page(tool_context: ToolContext, inputs: GenerateImprovedLandingPageInput) -> str:
    client = genai.Client()

    # Pull the UI Critic's analysis saved earlier in the session (empty if none)
    latest_analysis = tool_context.state.get("latest_analysis", "")

    # Build enhanced prompt from analysis
    enhancement_prompt = f"""
    Create a professional landing page design:
    {inputs.prompt}

    Previous Analysis: {latest_analysis}

    Requirements:
    - Modern, clean aesthetic
    - Clear visual hierarchy
    - Prominent CTAs
    - WCAG AA accessible
    """

    # Work out the next versioned filename
    asset_name = inputs.asset_name or "improved_landing_page"
    version = get_next_version_number(tool_context, asset_name)
    artifact_filename = create_versioned_filename(asset_name, version)

    # Generate improved design, saving each returned image part
    for chunk in client.models.generate_content_stream(
        model="gemini-2.5-flash-image",
        contents=[types.Part.from_text(text=enhancement_prompt)],
        config=types.GenerateContentConfig(response_modalities=["IMAGE", "TEXT"])
    ):
        if not chunk.candidates or not chunk.candidates[0].content:
            continue
        for part in chunk.candidates[0].content.parts:
            if part.inline_data:
                # Save as a versioned artifact
                await tool_context.save_artifact(filename=artifact_filename, artifact=part)

    # Record the new version in session state
    tool_context.state["asset_versions"] = {
        **tool_context.state.get("asset_versions", {}), asset_name: version
    }
    return f"Generated improved landing page: {artifact_filename} (v{version})"

File: __init__.py

  1. Package Setup:

from .agent import root_agent

__all__ = ["root_agent"]

Running the App

With our code in place, it's time to launch the app.

  1. Start ADK Web from the directory that contains the agent package (one level above the agent folder):

cd awesome-llm-apps/advanced_ai_agents/multi_agent_apps/agent_teams
adk web

  2. Open your browser to the URL provided (typically http://localhost:8000)

  3. Select "multimodal_uiux_feedback_agent_team" from the app list

  4. Upload a landing page screenshot and watch as the agents automatically:

    • Analyze the design comprehensively

    • Create an improvement strategy

    • Generate an improved version

  5. Iterate on the design: Ask for specific refinements like "make the CTA button larger and more prominent" or "use a warmer color scheme." The Design Editor will handle these requests.
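
If you want a quick text-only smoke test, ADK also ships a terminal runner (assuming the standard ADK CLI; uploading screenshots still needs the web UI):

adk run multimodal_uiux_feedback_agent_team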

Working Application Demo

Conclusion

You now have a complete multi-agent system that critiques landing pages and regenerates them automatically. For further enhancements, consider:

  1. Expand Analysis Dimensions: Add brand consistency checks, competitive analysis, or emotional impact scoring

  2. A/B Testing Generator: Create multiple design variations for testing

  3. Code Export: Generate HTML/CSS code from approved designs

  4. Integration with Design Tools: Connect to Figma or Sketch for seamless workflow

Keep experimenting with different configurations and features to build more sophisticated AI applications.

We share hands-on tutorials like this 2-3 times a week to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this tutorial on your social channels and tag Unwind AI (X, LinkedIn, Threads) to support us!
