Artificial intelligence has entered a new phase where agents no longer operate in isolation. Instead, they collaborate with other autonomous entities to solve more complex tasks. This vision is made possible through Google’s Agent2Agent (A2A) Protocol, the AG‑UI Protocol, and CopilotKit, a framework for creating interactive, multi‑agent AI systems.
This guide demonstrates how to build a full‑stack multi‑agent system capable of seamless agent‑to‑agent (A2A) and agent‑to‑user (AG‑UI) communication, integrating frameworks and services such as LangGraph, Google ADK, and the OpenAI API, all connected in real time through CopilotKit.
Understanding the A2A Protocol
Introduced by Google in 2025, the A2A Protocol is an open communication standard for AI agents that allows them to discover one another, interact, and collaborate regardless of their underlying frameworks or technologies.
It establishes a universal language through which agents can exchange data, trigger actions, and achieve shared goals across distributed environments.
Core components of the A2A Protocol include:
- A2A Client: The primary orchestrator agent, responsible for managing communication and distributing tasks.
- A2A Agents: Independent service endpoints that process requests and return structured results.
- Agent Card: Metadata (in JSON format) that describes an agent’s capabilities, version, and connection path.
- Agent Skills: The functional descriptions of what each agent can perform—such as “generate itinerary” or “forecast weather.”
- A2A Executor: The processing layer that receives requests, executes them, and returns responses as events.
- A2A Server: The HTTP layer, built with frameworks such as Starlette and served by Uvicorn, that hosts each agent for network access.
This standardized architecture allows agents built with different frameworks—like LangGraph, Google ADK, or custom Python agents—to cooperate as part of the same distributed system.
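To make discovery concrete, the snippet below sketches how an A2A client might fetch another agent’s Agent Card and open a connection. It assumes the official `a2a` Python SDK and its `A2ACardResolver`/`A2AClient` helpers; exact names and signatures may differ between SDK versions.

```python
import asyncio

import httpx
from a2a.client import A2ACardResolver, A2AClient


async def discover_agent(base_url: str) -> None:
    async with httpx.AsyncClient() as http:
        # Fetch the Agent Card the remote agent publishes at its well-known URL
        resolver = A2ACardResolver(httpx_client=http, base_url=base_url)
        card = await resolver.get_agent_card()
        print(f"Discovered '{card.name}' v{card.version}: {card.description}")

        # The card alone is enough to construct a client and start sending tasks
        client = A2AClient(httpx_client=http, agent_card=card)
        # ... send messages to the agent via `client` here


asyncio.run(discover_agent("http://localhost:9001"))
```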
Setting Up Multi‑Agent Communication via CLI
To begin, developers can use the CopilotKit CLI to quickly scaffold a multi‑agent environment. Using a single command, the CLI sets up both the backend orchestrator and frontend interface connecting through AG‑UI.
```bash
npx copilotkit@latest create -f a2a
```
Once generated, simply install dependencies on both the frontend and backend, activate your Python virtual environment, and configure environment variables for Google API and OpenAI keys.
Running `npm run dev` launches the entire workspace, starting all services:
- Frontend UI at `localhost:3000`
- Orchestrator (A2A + ADK) at `localhost:9000`
- Specialized agents (e.g., itinerary, restaurants, weather, budgets), each running on its own port
This step establishes the foundation for real‑time, multi‑agent communication.
Integrating Google ADK with AG‑UI for Backend Control
Using Google’s Agent Development Kit (ADK), an orchestrator agent can coordinate multiple specialized agents within the A2A ecosystem.
For instance, a travel‑planning orchestrator uses Gemini Pro as its language model and delegates subtasks to agents built on ADK and LangGraph. Each request flows through the AG‑UI Protocol, which handles bi‑directional events between the frontend and AI backend.
The backend runs on FastAPI, wrapped with ADKAgent middleware for AG‑UI compatibility—allowing dynamic, event‑based updates to be streamed directly into the user interface.
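A rough sketch of this wiring is shown below. It assumes Google ADK’s `LlmAgent` plus an AG‑UI ADK bridge; the `ag_ui_adk` package name, the `ADKAgent` wrapper, the `add_adk_fastapi_endpoint` helper, and the model id are assumptions and may differ from the integration you install.

```python
from fastapi import FastAPI
from google.adk.agents import LlmAgent
from ag_ui_adk import ADKAgent, add_adk_fastapi_endpoint  # assumed AG-UI <-> ADK bridge

# Orchestrator backed by Gemini: it splits each travel request into subtasks
# and delegates them to the registered A2A agents (itinerary, weather, ...).
orchestrator = LlmAgent(
    model="gemini-2.5-pro",  # assumed model id
    name="orchestrator",
    instruction=(
        "You are a travel-planning orchestrator. Break the user's request into "
        "subtasks and delegate each one to the most suitable A2A agent."
    ),
)

app = FastAPI()

# Wrap the ADK agent so its events stream over AG-UI, then mount it on FastAPI.
adk_orchestrator = ADKAgent(adk_agent=orchestrator, app_name="travel_orchestrator")
add_adk_fastapi_endpoint(app, adk_orchestrator, path="/")

# Run with: uvicorn orchestrator:app --port 9000
```

The same pattern applies to each specialized agent: only the wrapped agent and the port change.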
Integrating Multiple Frameworks with the A2A Protocol
Each specialized agent—such as the Itinerary Agent or Weather Agent—is independently powered by frameworks like LangGraph or ADK, yet conforms to the A2A communication structure.
Defining Agent Skills and Cards
Every agent exposes an Agent Card, listing capabilities such as description, version, supported input/output types, and HTTP endpoints.
```python
from a2a.types import AgentCard

# Public metadata describing this agent to other A2A participants
public_agent_card = AgentCard(
    name='Itinerary Agent',
    description='Creates detailed travel itineraries using LangGraph',
    url='http://localhost:9001/',
    version='1.0.0',
    defaultInputModes=['text'],
    defaultOutputModes=['text'],
)
```
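The card is usually paired with one or more skills. A brief sketch assuming the `a2a` SDK’s `AgentSkill` type (field names may vary by SDK version):

```python
from a2a.types import AgentSkill

# Skill advertised on the Agent Card so the orchestrator knows what it can delegate
itinerary_skill = AgentSkill(
    id='generate_itinerary',
    name='Generate Itinerary',
    description='Builds a day-by-day travel itinerary for a destination and date range',
    tags=['travel', 'planning'],
    examples=['Plan a 5-day trip to Tokyo in March'],
)

# Attached to the card via its skills field, e.g. AgentCard(..., skills=[itinerary_skill])
```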
Implementing the A2A Executor
The AgentExecutor receives incoming requests, runs the underlying agent, and publishes results through the A2A event queue. It ensures messages pass seamlessly between the orchestrator and remote agents while preserving session context.
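A minimal executor might look like the sketch below, assuming the `a2a` SDK’s `AgentExecutor` base class and event helpers; a real implementation would invoke the LangGraph itinerary agent instead of returning a fixed string.

```python
from a2a.server.agent_execution import AgentExecutor, RequestContext
from a2a.server.events import EventQueue
from a2a.utils import new_agent_text_message


class ItineraryAgentExecutor(AgentExecutor):
    """Bridges incoming A2A requests to the underlying itinerary agent."""

    async def execute(self, context: RequestContext, event_queue: EventQueue) -> None:
        # In a real agent this would invoke the LangGraph graph with the
        # user's request (available via `context`) and stream its output.
        result = "Day 1: Arrive and check in. Day 2: City tour..."
        await event_queue.enqueue_event(new_agent_text_message(result))

    async def cancel(self, context: RequestContext, event_queue: EventQueue) -> None:
        # Called when the orchestrator cancels an in-flight task.
        raise NotImplementedError("Cancellation is not supported")
```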
Launching the A2A Server
Finally, a Starlette‑based A2A server exposes each agent as an HTTP endpoint for external invocation:
```python
import uvicorn

# Serve the A2A Starlette application on port 9001
uvicorn.run(server.build(), host='0.0.0.0', port=9001)
```
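The `server` object passed to Uvicorn above is typically assembled from the agent card, the executor, and a request handler. A rough sketch assuming the `a2a` SDK’s Starlette application and default request handler:

```python
from a2a.server.apps import A2AStarletteApplication
from a2a.server.request_handlers import DefaultRequestHandler
from a2a.server.tasks import InMemoryTaskStore

# Wire the executor into a request handler with in-memory task storage
request_handler = DefaultRequestHandler(
    agent_executor=ItineraryAgentExecutor(),
    task_store=InMemoryTaskStore(),
)

# Combine the public card and handler into a Starlette app;
# server.build() returns the ASGI application that Uvicorn serves.
server = A2AStarletteApplication(
    agent_card=public_agent_card,
    http_handler=request_handler,
)
```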
At this point, all agents are discoverable and callable within the A2A mesh, interoperating through standard HTTP‑JSON exchanges.
Connecting the Frontend with CopilotKit and AG‑UI
Once the backend is running, it’s time to connect it to the user interface. CopilotKit bridges the frontend to AG‑UI through an event‑driven communication layer, allowing real‑time conversation between users and orchestrator agents.
Implementing the CopilotKit API Route
Within a Next.js environment, you can establish a `/api/copilotkit` route configured with A2A middleware. This acts as the central hub that proxies messages between the Chat UI, orchestrator, and agents.
The A2AMiddlewareAgent within CopilotKit registers all participating A2A agent URLs and routes orchestration requests accordingly.
Setting Up the CopilotKit Provider
By wrapping UI components within the `<CopilotKit>` provider, the entire React application gains automatic access to agent sessions, runtime management, and AG‑UI events.
Rendering Real‑Time Agent‑to‑Agent Interactions
Using CopilotKit’s built‑in chat components such as `CopilotChat` or `CopilotSidebar`, developers can visualize ongoing agent communication in real time.
The UI renders both outgoing messages (user → orchestrator) and incoming responses (agents → orchestrator → user) using modular React components.
Through Generative UI integration, each agent response can dynamically generate interactive elements such as:
- Itinerary cards for travel schedules
- Real‑time budget breakdowns
- Weather forecasts with icons
- Restaurant recommendation lists
Enabling Human‑in‑the‑Loop (HITL) Interaction
A major advantage of AG‑UI + CopilotKit is support for Human‑in‑the‑Loop (HITL) workflows. This ensures agents can pause execution to request approvals or clarifications from real users before proceeding.
For example, in a travel planning scenario, the Budget Agent may generate a breakdown and await user approval using interactive chat components before finalizing financial outputs.
This capability enhances transparency, compliance, and reliability when deploying AI in production-grade environments.
Streaming Agent Responses with Generative UI
Agent results stream live into the frontend through AG‑UI’s event system. The interface automatically parses structured JSON coming from A2A responses and renders appropriate UI components.
Developers can tap into these results using the `useCopilotChat()` hook, enabling features like automated data extraction and visualization for itinerary, budget, and weather information.
The combination of streaming data and dynamic UI unlocks a natural, conversational development experience for complex workflows.
Conclusion
The integration of A2A Protocol, AG‑UI Protocol, and CopilotKit marks a turning point for intelligent applications.
By combining Google ADK for back‑end orchestration, CopilotKit for frontend visualization, and A2A for agent interoperability, developers can now build scalable multi‑agent systems that collaborate, learn, and reason together across frameworks.
This full‑stack approach allows teams to go beyond static assistants—toward interactive, distributed systems where every agent communicates like part of a connected ecosystem.