# A2UI: Agent-to-User Interface
A2UI is an open-source project that allows agents to generate or populate
rich user interfaces. It provides a format optimized for representing
updateable, agent-generated UIs, along with an initial set of renderers.
*A gallery of A2UI rendered cards, showing a variety of UI compositions that A2UI can achieve.*
## Status: Early Stage Public Preview
> **Note:** A2UI is currently in **v0.8 (Public Preview)**. The specification and
> implementations are functional but are still evolving. We are opening the project to
> foster collaboration, gather feedback, and solicit contributions (e.g., on client renderers).
> Expect changes.
## Summary
Generative AI excels at creating text and code, but agents can struggle to
present rich, interactive interfaces to users, especially when those agents
are remote or running across trust boundaries.
**A2UI** is an open standard and set of libraries that allows agents to
"speak UI." Agents send a declarative JSON format describing the *intent* of
the UI. The client application then renders this using its own native
component library (Flutter, Angular, Lit, etc.).
This approach ensures that agent-generated UIs are
**safe like data, but expressive like code**.
## High-Level Philosophy
A2UI was designed to address the specific challenges of interoperable,
cross-platform, generative or template-based UI responses from agents.
The project's core philosophies:
* **Security first**: Running arbitrary code generated by an LLM may present a
security risk. A2UI is a declarative data format, not executable
code. Your client application maintains a "catalog" of trusted, pre-approved
UI components (e.g., Card, Button, TextField), and the agent can only request
to render components from that catalog.
* **LLM-friendly and incrementally updateable**: The UI is represented as a flat
list of components with ID references, which is easy for LLMs to generate
incrementally, allowing for progressive rendering and a responsive user
experience. An agent can efficiently make incremental changes to the UI based
on new user requests as the conversation progresses (see the payload sketch after this list).
* **Framework-agnostic and portable**: A2UI separates the UI structure from
the UI implementation. The agent sends a description of the component tree
and its associated data model. Your client application is responsible for
mapping these abstract descriptions to its native widgets—be it web components,
Flutter widgets, React components, SwiftUI views, or something else entirely.
The same A2UI JSON payload from an agent can be rendered on multiple different
clients built on top of different frameworks.
* **Flexibility**: A2UI also features an open registry pattern that allows
developers to map server-side types to custom client implementations, from
native mobile widgets to React components. By registering a "Smart Wrapper,"
you can connect any existing UI component—including secure iframe containers
for legacy content—to A2UI's data binding and event system. Crucially, this
places security firmly in developers' hands, enabling them to enforce
strict sandboxing policies and "trust ladders" directly within their custom
component logic rather than relying solely on the core system.
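
To make the flat, ID-referenced structure described above concrete, here is a minimal
illustrative sketch of such a payload. The type and field names (`root`, `components`,
`id`, `type`, `props`, `children`) are assumptions for illustration only, not the
normative A2UI schema; consult the specification for the real format.

```typescript
// Illustrative only: these type and field names are assumptions,
// not the normative A2UI schema.
interface ComponentNode {
  id: string;                      // unique ID that other components can reference
  type: string;                    // abstract type from the client's trusted catalog
  props?: Record<string, unknown>; // declarative properties, no executable code
  children?: string[];             // references to other component IDs (keeps the list flat)
}

interface UiPayload {
  root: string;                    // ID of the top-level component
  components: ComponentNode[];     // flat, order-independent list
}

// A card with a title and a confirm button, expressed as a flat list.
const payload: UiPayload = {
  root: "card1",
  components: [
    { id: "card1", type: "card", children: ["title1", "confirm1"] },
    { id: "title1", type: "text", props: { text: "Confirm your reservation" } },
    { id: "confirm1", type: "button", props: { label: "Confirm" } },
  ],
};
```

Because components are keyed by ID, an agent can later stream a small update that
touches only `confirm1` (say, to relabel or disable the button) without regenerating
the rest of the tree.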
## Use Cases
Use cases include:
* **Dynamic Data Collection:** An agent generates a bespoke form (date pickers,
sliders, inputs) based on the specific context of a conversation (e.g.,
booking a specialized reservation); see the event sketch after this list.
* **Remote Sub-Agents:** An orchestrator agent delegates a task to a
remote specialized agent (e.g., a travel booking agent), which returns a
UI payload to be rendered inside the main chat window.
* **Adaptive Workflows:** Enterprise agents that generate approval
dashboards or data visualizations on the fly based on the user's query.
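
As a sketch of the first use case above: when the user submits an agent-generated
form, the client sends a structured event back to the agent rather than executing any
agent-supplied code. The event shape below is purely illustrative and is not the
actual A2UI event schema.

```typescript
// Purely illustrative event shape; the real A2UI event schema may differ.
interface UiEvent {
  componentId: string;                   // which component fired the event
  action: "click" | "change" | "submit"; // kind of interaction
  values: Record<string, unknown>;       // current values of the data-bound inputs
}

// Example: the user completes the generated reservation form and presses "Book".
const bookingEvent: UiEvent = {
  componentId: "bookButton",
  action: "click",
  values: { date: "2025-07-04", partySize: 6, seating: "outdoor" },
};

// A hypothetical client shell forwards the event to the agent over whichever
// transport is in use (A2A, AG UI, ...); only data crosses the trust boundary.
async function forwardToAgent(
  event: UiEvent,
  send: (e: UiEvent) => Promise<void>,
): Promise<void> {
  await send(event);
}
```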
## Architecture
The A2UI flow decouples the generation of the UI from its execution:
1. **Generation:** An Agent (using Gemini or another LLM) generates or uses
a pre-generated `A2UI Response`, a JSON payload describing the composition
of UI components and their properties.
2. **Transport:** This message is sent to the client application
(via A2A, AG UI, etc.).
3. **Resolution:** The Client's **A2UI Renderer** parses the JSON.
4. **Rendering:** The Renderer maps the abstract components
(e.g., `type: 'text-field'`) to the concrete implementations in the client's codebase, as sketched below.
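
The Resolution and Rendering steps reduce to a lookup from abstract type names to
concrete widget factories. The sketch below shows one way a web client might implement
that mapping; the registry API is hypothetical, not the actual A2UI renderer interface.

```typescript
// Hypothetical catalog: maps abstract A2UI type names to factories for
// concrete DOM widgets. The real renderer APIs may look different.
type ComponentFactory = (props: Record<string, unknown>) => HTMLElement;

const catalog = new Map<string, ComponentFactory>();

// Only types registered here can ever appear on screen: this is the
// "catalog" of trusted, pre-approved UI components.
catalog.set("text-field", (props) => {
  const input = document.createElement("input");
  input.type = "text";
  input.placeholder = String(props.label ?? "");
  return input;
});

catalog.set("button", (props) => {
  const button = document.createElement("button");
  button.textContent = String(props.label ?? "");
  return button;
});

// Resolution step: unknown types are rejected rather than executed.
function resolve(type: string): ComponentFactory {
  const factory = catalog.get(type);
  if (!factory) {
    throw new Error(`Component type "${type}" is not in the trusted catalog`);
  }
  return factory;
}
```

A Flutter or SwiftUI renderer would register widget builders instead of DOM factories,
but the payload it consumes stays the same.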
## Dependencies
A2UI is designed to be a lightweight format, but it fits into a larger ecosystem:
* **Transports:** Compatible with **A2A Protocol** and **AG UI**.
* **LLMs:** A2UI payloads can be generated by any model capable of producing structured JSON output.
* **Host Frameworks:** Requires a host application built in a supported framework
(currently: Web or Flutter).
## Getting Started
The best way to understand A2UI is to run the samples.
### Prerequisites
* Node.js (for web clients)
* Python (for agent samples)
* A valid [Gemini API Key](https://aistudio.google.com/) (required for the samples)
### Running the Restaurant Finder Demo
1. **Clone the repository:**

   ```bash
   git clone https://github.com/google/A2UI.git
   cd A2UI
   ```
2. **Set your API Key:**

   ```bash
   export GEMINI_API_KEY="your_gemini_api_key"
   ```
3. **Run the Agent (Backend):**

   ```bash
   cd samples/agent/adk/restaurant_finder
   uv run .
   ```
4. **Run the Client (Frontend):**

   Open a new terminal window:

   ```bash
   cd samples/client/lit/shell
   npm install
   npm run dev
   ```
For Flutter developers, check out the [GenUI SDK](https://github.com/flutter/genui),
which uses A2UI under the hood.
CopilotKit has a public [A2UI Widget Builder](https://go.copilotkit.ai/A2UI-widget-builder)
to try out as well.
## Roadmap
We hope to work with the community on the following:
* **Spec Stabilization:** Moving towards a v1.0 specification.
* **More Renderers:** Adding official support for React, Jetpack Compose, iOS (SwiftUI), and more.
* **Additional Transports:** Support for REST and more.
* **Additional Agent Frameworks:** Genkit, LangGraph, and more.
## Contribute
A2UI is an **Apache 2.0** licensed project. We believe the future of UI is agentic,
and we want to work with you to help build it.
See [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to get started.