
Designing for AI Agents: The UX Paradigm Nobody Is Ready For


March 2, 2026 · By Ricky Richards

For twenty years, every design decision I've made has started with the same question: what does the user need? The user was always a person — someone with goals, frustrations, habits, and emotions. Every wireframe, every flow, every micro-interaction was built to serve a human being sitting on the other side of a screen.

That assumption is about to break.

We are entering an era where the primary consumer of digital interfaces will increasingly be an AI agent — a piece of software that acts on behalf of a human, navigating products, making decisions, and completing tasks autonomously. The human is still in the loop. But they are no longer the one clicking buttons.

This is not a speculative future. It is happening now. And the design profession is almost entirely unprepared for it.


The Numbers That Should Wake You Up

60% of digital products will be architected primarily for AI agent consumption by 2029, with human-facing UX becoming a secondary consideration. — Gartner

Gartner's prediction is not an outlier. It sits within a broader constellation of forecasts that all point in the same direction:

  • 40% of enterprise applications will feature task-specific AI agents by the end of 2026 — up from less than 5% in 2025, an 8x increase in a single year
  • 90% of B2B buying will be AI agent intermediated by 2028, pushing over $15 trillion of B2B spend through agent exchanges
  • 60% of brands will use agentic AI for streamlined one-to-one interactions by 2028
  • The agentic AI market is projected to grow from $6.96 billion in 2025 to $47-57 billion by 2030
  • Morgan Stanley estimates agentic commerce could reach $190-385 billion in US e-commerce alone by 2030
  • 23% of American consumers have already made AI-assisted purchases

These numbers describe a structural shift, not a trend. The interfaces designers build today — navigation patterns, form fields, modal dialogs, notification systems — were designed for human perception and human cognition. They assume eyes that scan, fingers that tap, and brains that process visual hierarchy. AI agents have none of these. They parse structured data, execute API calls, and make decisions based on token probabilities.

The design paradigm built for humans does not transfer to agents. We need a new one.


From UX to AX: What Changes

John Maeda's 13th annual Design in Tech Report, to be presented at SXSW 2026 on March 18, is titled explicitly "From UX to AX." He calls this evolution "perhaps the most profound shift" he has observed in over a decade of tracking design and technology. The interface becomes, in his framing, "less a series of visible controls and more a conversation between intentions."

Matt Biilmann, CEO of Netlify, has been even more concrete. He coined the term Agent Experience (AX) and defined four pillars: Access (can the agent reach your product?), Context (does the LLM know about your product?), Tools (are you building for agents?), and Orchestration (can you trigger agent runs from your product?). The business case is already proven: since investing in AX, Netlify has seen a 5x increase in daily signups and a 7x jump in paid conversions. As Sequoia Capital's Sonya Huang put it: "We are going from the age of product-led growth to the age of agent-led growth."

The distinction between UX and AX is fundamental. In UX, you design for perception — how something looks, how it feels, how information is organized for a human to understand. In AX, you design for comprehension — how structured data is organized for a machine to act on reliably.

Here is what changes:

Information Architecture → Data Architecture

In traditional UX, information architecture is about organizing content so humans can find what they need. You think about navigation menus, breadcrumbs, search, and progressive disclosure. For agents, none of this matters. An agent doesn't browse. It queries.

What matters instead is data architecture — how your product's capabilities, state, and content are structured as machine-readable endpoints. This means APIs, structured schemas, and semantic metadata that an agent can parse without needing to "see" a page.

Visual Hierarchy → Semantic Hierarchy

Designers spend enormous energy on visual hierarchy — making sure the most important element on a page draws the eye first. Agents don't have eyes. They need semantic hierarchy — clear, unambiguous labeling of what each piece of data means, what actions are available, and what the consequences of those actions are.

This is where standards like Anthropic's Model Context Protocol (MCP) become critical. MCP creates a universal standard for how AI agents communicate with applications — a shared language between agent and tool. It is to agent-application communication what REST was to client-server communication. Designers who understand MCP will have a structural advantage in the coming years.

Feedback Loops → Confirmation Protocols

When a human clicks a button, they expect visual feedback — a loading spinner, a success toast, a state change. When an agent performs an action, it needs something different: a structured confirmation that the action succeeded, what changed, and what the new state is. If something fails, the agent needs a machine-readable error with enough context to retry or escalate.

This means designing error states not for human comprehension but for agent recovery. Instead of a friendly "Something went wrong" message, agents need structured error codes, failure reasons, and suggested remediation steps — all in parseable formats.
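A minimal sketch of the difference, with invented field names and error codes: the error carries a code, a reason, a retryability flag, and a remediation hint, so the agent can branch on structure instead of parsing prose.

```python
# Hypothetical agent-facing error payload. Field names and codes are
# illustrative — the point is that every branch decision is structured.
def agent_error(code: str, reason: str, retryable: bool, remediation: str) -> dict:
    return {
        "error": {
            "code": code,
            "reason": reason,
            "retryable": retryable,
            "remediation": remediation,
        }
    }

err = agent_error(
    code="RATE_LIMITED",
    reason="Request quota exceeded for this API key.",
    retryable=True,
    remediation="Retry after 30 seconds, or escalate to the human for a plan upgrade.",
)

# The agent decides what to do from the structure, not from friendly copy:
if err["error"]["retryable"]:
    action = "retry"
else:
    action = "escalate"
```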

User Personas → Agent Personas

Here is where it gets genuinely new. In traditional design, we create user personas to represent the goals, behaviors, and frustrations of our target users. In agent-first design, we need agent personas — models of the different AI systems that will interact with our product.

An agent persona might include: what model powers it, what its context window is, what tools it has access to, what its authorization level is, what its failure modes are, and what its human's preferences and constraints look like. A Claude-based agent with broad tool access behaves very differently from a GPT-based agent with narrow permissions.
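The attributes above can be captured as a structured record — here is one possible shape, with every field and value invented for illustration:

```python
# A sketch of an "agent persona" as a structured record, mirroring the
# attributes listed above. All fields and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentPersona:
    model: str
    context_window_tokens: int
    tools: list[str]
    authorization_level: str  # e.g. "read_only", "act_with_approval", "autonomous"
    known_failure_modes: list[str]
    human_constraints: dict = field(default_factory=dict)

# Two personas the same product must serve very differently:
broad_agent = AgentPersona(
    model="claude-family",
    context_window_tokens=200_000,
    tools=["search", "create_invoice", "send_invoice"],
    authorization_level="autonomous",
    known_failure_modes=["stale context", "over-eager tool use"],
    human_constraints={"max_spend_usd": 500},
)

narrow_agent = AgentPersona(
    model="gpt-family",
    context_window_tokens=128_000,
    tools=["search"],
    authorization_level="read_only",
    known_failure_modes=["hallucinated endpoints"],
)
```

Just as a user persona shapes which flows you build, an agent persona shapes which capabilities you expose and at what authorization level.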

The shift from UX to AX is not about removing humans from the equation. It is about designing for a new intermediary between the human and the product.


The MCP Revolution: A New Design Primitive

If you are a product designer and you have not yet studied Anthropic's Model Context Protocol, you should. MCP is rapidly becoming the standard interface between AI agents and the applications they interact with.

Think of it this way: in the pre-web era, every application had its own proprietary interface. The web standardized communication through HTTP and HTML. MCP is doing the same thing for the agent era — creating a standardized protocol through which agents can discover what a tool does, what inputs it needs, and what outputs it produces.

For designers, MCP introduces a new design primitive: the tool description. When you expose your product's capabilities through MCP, you are writing a description that an AI agent reads to understand what your product can do. This description is, in a very real sense, a design artifact. Its clarity, specificity, and structure directly determine how effectively agents can use your product.

This is prompt engineering meets information design. And it is a skill that barely exists in the design profession today.
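To make the "tool description as design artifact" idea concrete, here is a tool definition in the general shape MCP tools take — a name, a natural-language description the agent reads, and a JSON Schema for inputs (the `name`/`description`/`inputSchema` fields follow the published protocol; the tool itself, `update_inventory`, is invented for illustration):

```python
# Hypothetical tool definition in the shape MCP tools take.
# The description text is the design artifact: it tells the agent
# not just what the tool does, but when it should be used.
update_inventory_tool = {
    "name": "update_inventory",
    "description": (
        "Set the stock level for a product variant. "
        "Use only when the merchant has confirmed the new quantity."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "variant_id": {"type": "string", "description": "Variant to update."},
            "quantity": {"type": "integer", "minimum": 0},
        },
        "required": ["variant_id", "quantity"],
    },
}
```

Note that the usage constraint ("use only when the merchant has confirmed") lives in the description itself — that sentence is doing the job a confirmation dialog does in a visual UI.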

What MCP Means for Design Systems

Design systems have traditionally been built for two consumers: designers (in Figma) and developers (in code). MCP adds a third consumer: AI agents.

Brad Frost — the creator of Atomic Design — has been leading this conversation with his work on "Agentic Design Systems." His core thesis: design systems must evolve "from passive repositories into active systems of interaction that provide the semantic intelligence which allows AI agents to build entire features while staying on-brand and on-system." The new Storybook MCP tool already lets teams generate UI by having AI agents wield their component libraries directly.

Figma's work on expression tokens, MCP server integrations, and structured documentation is another step. The most forward-thinking design systems will soon include:

  • Machine-readable component manifests — structured descriptions of every component's purpose, props, states, and usage guidelines that agents can consume
  • Semantic action maps — declarations of what actions are possible within a product and what each action does, in a format agents can parse
  • Constraint definitions — explicit rules about what agents can and cannot do, encoded at the system level rather than enforced through visual UI
  • Validation layers — automated tests and human sign-off gates that ensure agent-generated UI meets system standards
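A sketch of the first two items combined — a machine-readable manifest for a single component, plus a validation check an agent (or CI) can act on. The component, its props, and the usage rules are all invented for illustration:

```python
# Hypothetical component manifest entry for a design system.
# Purpose, props, and usage rules are structured so an agent can
# configure the component without ever "seeing" it.
BUTTON_MANIFEST = {
    "component": "Button",
    "purpose": "Trigger a single, immediate action.",
    "props": {
        "variant": {"type": "enum", "values": ["primary", "secondary", "destructive"]},
        "disabled": {"type": "boolean", "default": False},
    },
    "usage_rules": [
        "At most one primary button per view.",
        "Use the destructive variant only for irreversible actions.",
    ],
}

def validate_props(manifest: dict, props: dict) -> list[str]:
    """Return structured violations an agent can correct automatically."""
    errors = []
    for name, value in props.items():
        spec = manifest["props"].get(name)
        if spec is None:
            errors.append(f"unknown prop: {name}")
        elif spec["type"] == "enum" and value not in spec["values"]:
            errors.append(f"invalid value for {name}: {value}")
    return errors
```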

Teams incorporating AI into their design systems thoughtfully are already seeing productivity improvements of 30-40% while maintaining the human creativity that distinguishes exceptional design.


Who Is Already Building for Agents

The shift is not theoretical. Several major companies are already building agent-first or agent-compatible experiences:

Salesforce Agentforce

Salesforce has made its largest strategic bet on agentic AI with Agentforce, a platform for building and deploying autonomous AI agents across sales, service, marketing, and commerce. These agents can handle multi-step tasks — qualifying leads, resolving support tickets, optimizing campaigns — with minimal human intervention. Gartner predicts that by 2028, 60% of brands will use this kind of agentic AI for customer interactions.

Shopify Sidekick

Shopify's AI assistant doesn't just answer questions — it takes actions. Merchants describe what they need in plain language, and Sidekick can build custom admin apps using Shopify's UI framework and GraphQL API — no coding required. It can update inventory, modify pricing, create automation workflows visually, and analyze multiple data sources simultaneously. Shopify Engineering published a detailed architecture guide on "Building production-ready agentic systems" that is worth studying. The UX challenge here is not designing the chat interface. It is designing the permission model — what can the agent do autonomously, and what requires human approval?


Netlify: The AX Success Story

Netlify's bet on Agent Experience is the clearest proof that designing for agents drives business results. CEO Matt Biilmann defined AX as "the holistic experience AI agents have as users of a product or platform" and invested heavily in making Netlify agent-accessible. The result: 5x daily signups, 7x paid conversions. AX has, in Biilmann's words, "empowered millions of new users with little or no coding background to become active builders." BVP named AX Law #1 in its laws for AI developers.

OpenAI's Operator and Deep Research

OpenAI's Operator is a general-purpose agent that navigates the web on behalf of users, filling out forms, making purchases, and completing multi-step workflows. Deep Research autonomously conducts research across multiple sources and synthesizes findings. Both products force a question designers have never had to answer at scale: what does a product look like when the user never sees it directly?

Anthropic's Claude with Tool Use

Anthropic has built tool use directly into Claude, allowing it to interact with external APIs, databases, and applications through structured function calls. Combined with MCP, this creates an ecosystem where Claude can act as an autonomous agent across an entire toolchain — reading from one system, making decisions, and writing to another.


The Trust Problem: Designing for Delegation

Perhaps the deepest design challenge of the agent era is trust. As the Nielsen Norman Group puts it: "In 2026, trust will be a major design problem for AI experiences. This challenge will only grow as more AI agents are rolled out, often before they're ready."

The World Economic Forum published a "trust stack" framework in February 2026 that I think every product designer should study. It identifies five layers: legible reasoning paths (the agent explains how it reached a decision), bounded agency (clear limits with no silent escalation of autonomy), goal transparency (users know what the agent optimizes for), contestability and override (frictionless exit is a trust requirement), and governance by design (logging and auditability embedded from day one, not bolted on later).

This creates a new design problem: delegation UX — interfaces that help humans understand what agents are doing, approve high-stakes actions, and intervene when necessary.

The best frameworks I've seen for this draw on the concept of levels of autonomy, borrowed from self-driving cars:

  • Level 1: Agent suggests, human acts. The agent recommends an action but the human executes it. This is where most AI assistants sit today.
  • Level 2: Agent acts, human approves. The agent performs the action in a sandbox and presents the result for human approval before it takes effect.
  • Level 3: Agent acts, human monitors. The agent operates autonomously but surfaces a feed of actions for the human to review after the fact.
  • Level 4: Agent acts autonomously. The agent operates independently within defined constraints. The human is notified only of exceptions.
  • Level 5: Full delegation. The agent manages entire workflows end-to-end with no human oversight.

Most enterprise applications will need to support multiple levels simultaneously — Level 4 for routine tasks, Level 2 for financial transactions, Level 1 for novel situations. Designing this graduated autonomy is one of the hardest and most important design challenges of the next decade.
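That graduated model can be sketched as a simple gating layer: each class of action is assigned an autonomy level, and the runtime decides whether the agent executes, executes pending approval, or may only suggest. The action names and level assignments here are illustrative:

```python
# Hypothetical autonomy gate. Routine actions run at Level 4,
# financial actions at Level 2, novel/high-stakes actions at Level 1.
AUTONOMY_LEVELS = {
    "archive_old_tickets": 4,   # routine: act autonomously
    "issue_refund": 2,          # financial: agent acts, human approves
    "change_legal_terms": 1,    # novel: agent suggests, human acts
}

def gate(action: str, default_level: int = 1) -> str:
    """Unknown actions fall back to Level 1: suggest only."""
    level = AUTONOMY_LEVELS.get(action, default_level)
    if level >= 4:
        return "execute"
    if level >= 2:
        return "execute_pending_approval"
    return "suggest_only"
```

The design decision is not the lookup table itself but what goes in it — which is exactly the kind of rule-setting work designers are positioned to own.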


The Invisible Interface

There is a provocative idea gaining traction: in the agent era, the best interface may be no interface at all.

If an AI agent can book your flights, manage your calendar, process your expenses, and order your groceries — all through API calls and structured data — do you need a visual UI for any of it? The answer is: not always. But "no interface" is not the same as "no design."

The design of an invisible interface is the design of:

  • Contracts — what promises does the system make to the agent and, through the agent, to the human?
  • Boundaries — what can the agent do, and what is explicitly off-limits?
  • Escalation paths — when should the agent stop and ask the human?
  • Audit trails — how can the human review what the agent did?

This is systems design at its purest. No pixels, no typography, no color palettes. Just structure, logic, and trust.
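Two of those four artifacts — boundaries and audit trails — can be sketched in a few lines, assuming a deliberately tiny runtime (the action names and log shape are invented):

```python
# Sketch of an "invisible interface": an explicit boundary check plus
# an append-only audit trail. No pixels involved — the allowlist and
# the log are the design.
import time

AUDIT_LOG: list[dict] = []
ALLOWED_ACTIONS = {"read_report", "draft_email"}  # explicit boundary

def perform(agent_id: str, action: str) -> bool:
    allowed = action in ALLOWED_ACTIONS
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "allowed": allowed,
    })
    return allowed  # off-limits actions are refused, but still recorded
```

Even the refusal is recorded: the audit trail is how the human reviews not only what the agent did, but what it tried to do.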

The best design in the agent era might be the design that no human ever sees — but that every agent relies on.


What I'm Watching as an Investor

As an angel investor, the agent infrastructure space is where I'm seeing the most interesting pitches right now. A few categories worth watching:

Agent orchestration platforms — tools like LangChain, CrewAI, and AutoGen that help developers build multi-agent systems. These are the Rails and Django of the agent era — the frameworks that will underpin most agent-powered applications.

Agent-native commerce — products being built from scratch for agent-to-agent transactions. Imagine a procurement agent that negotiates with vendor agents, comparing prices and terms without a single human reviewing a spreadsheet.

Trust and governance layers — startups building the audit, compliance, and oversight infrastructure for autonomous agents. This is the unsexy but essential plumbing of the agent economy. As Gartner notes, by 2028, AI governance will overtake raw capability as the primary selection criterion for enterprise buyers.

Design tooling for AX — the Figma of agent experience design doesn't exist yet. Whoever builds the tool that lets designers prototype, test, and iterate on agent interactions will own a significant category.


A Framework for Designers

If you are a product designer reading this and wondering where to start, here is my framework:

1. Audit Your Product for Agent Readiness

Look at your product's core workflows. For each one, ask: could an AI agent complete this workflow today through your API? If the answer is no, identify what's missing — is it structured data, clear action definitions, or machine-readable documentation?

2. Design the Agent Layer Alongside the Human Layer

Every new feature should have two design specs: one for the human user and one for the agent consumer. The human layer is your Figma mockup. The agent layer is your tool description, API schema, and permission model.

3. Think in Capabilities, Not Screens

Traditional product design thinks in screens and flows. Agent-first design thinks in capabilities — discrete actions that a product can perform, independent of any visual interface. Map your product's capabilities, define clear inputs and outputs for each, and document the constraints.

4. Design for Graduated Trust

Don't assume full autonomy from day one. Design your agent interactions to start at Level 1 (suggest and confirm) and progressively unlock higher autonomy as the human builds trust. Make the escalation path from agent to human seamless.

5. Learn MCP

Seriously. Understanding Model Context Protocol is not optional for product designers who want to remain relevant in the agent era. It is the new design primitive, and fluency with it will separate the designers who shape this era from the ones who are shaped by it.


The Opportunity

Here is why I am optimistic, despite the scale of the shift: the agent era needs designers more than ever. Not fewer.

Agents are powerful but dumb. They can execute complex workflows but they cannot decide which workflows matter. They can optimize processes but they cannot define what "good" means for a particular user in a particular context. They can follow rules but they cannot set them.

The designer's role in the agent era is to be the person who sets the rules — who defines the contracts, draws the boundaries, designs the trust model, and ensures that the system serves human needs even when no human is directly interacting with it.

This is design at the highest level of abstraction. It is design without pixels. And it is the most important design work of the next decade.

52% of AI builders already say design is more important for AI-powered products than traditional ones. As the products become agents, that importance will only grow.

The question is not whether this shift is coming. The question is whether you will be ready to design for it.


Sources and Further Reading

  • Design in Tech Report 2026: UX to AX — John Maeda at SXSW
  • Introducing AX: Why Agent Experience Matters — Matt Biilmann
  • How to Design for Trust in the Age of AI Agents — World Economic Forum
  • Agentic Design Systems in 2026 — Brad Frost
  • Gartner: 40% of Enterprise Apps Will Feature AI Agents by 2026
  • Gartner: AI Agents Will Command $15 Trillion in B2B Purchases by 2028
  • Model Context Protocol — Anthropic
  • Building Production-Ready Agentic Systems — Shopify Engineering
  • How Service Design Will Evolve with AI Agents — NN/g
  • Agentic UX: Designing Interfaces for AI Agents — Standard Beagle
  • Morgan Stanley: Agentic Commerce Market Impact
  • State of the Designer 2026 — Figma