
Secrets of Agentic UX: Emerging Design Patterns for Human Interaction with AI Agents

“We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” said Sam Altman (1). Here's an excellent example of Supervisor/Worker Agentic UX from AWS re:Invent that showcases emerging fundamental design patterns for Human-Agent interaction.


By many accounts, AI Agents are already here – they are just not evenly distributed. However, few examples exist yet of what a good user experience of interacting with this near-futuristic incarnation of AI might look like. Fortunately, at the recent AWS re:Invent conference, I came upon an excellent example of what the UX of interacting with AI Agents might look like, and I am eager to share that vision with you in this article. But first, what exactly are AI Agents?

What are AI Agents?

Imagine an ant colony. A typical colony contains different specialties of ants: workers, soldiers, drones, a queen, etc. Every ant in the colony has a different job – they operate independently yet as part of a cohesive whole. You can “hire” an individual ant (agent) to do some simple semi-autonomous job for you, which in itself is pretty cool. But try to imagine that you could hire the entire anthill to do something much more complex and interesting: figure out what’s wrong with your system, book your trip, or do pretty much anything a human can do in front of a computer.

Each ant on its own is not very smart – it is instead highly specialized to do a particular job. Put together, however, the different specialties of ants exhibit a kind of “collective intelligence” that we associate with higher-order animals. The most significant difference between “AI,” as we’ve been using the term in this blog, and AI Agents is autonomy. You don’t need to give an AI Agent precise instructions or wait for synchronized output – the entire interaction with a set of AI Agents is much more fluid and flexible, much like the way an anthill approaches solving a problem.

How do AI Agents Work?

There are many different ways that Agentic AI might work – it’s an extensive topic worthy of its own book (perhaps in a year or two). In this article, we will use troubleshooting a problem on a system as an example of a complex flow involving a Supervisor Agent (also called a “Reasoning Agent”) and several Worker Agents. The flow starts when a human operator receives an alert about a problem. They launch an investigation, and a team of semi-autonomous AI Agents led by the Supervisor Agent helps them find the root cause and make recommendations about how to fix the problem. Let’s break down the process of interacting with AI Agents in a step diagram:

Multi-stage Agentic AI Flow. Image Source: Greg Nudelman

The multi-stage agentic workflow pictured above has the following steps:

  1. A human operator issues a general request to a Supervisor AI Agent.

  2. The Supervisor AI Agent spins up several specialized, semi-autonomous Worker AI Agents and issues them general requests; these workers start investigating various parts of the system (e.g., the Database), looking for the root cause.

  3. The Worker Agents bring their findings back to the Supervisor Agent, which collates them into Suggestions for the human operator.

  4. The human operator Accepts or Rejects the various Suggestions, which causes the Supervisor Agent to spin up additional Workers to investigate other areas (e.g., the Cloud).

  5. After some back and forth, the Supervisor Agent produces a Hypothesis about the Root Cause and delivers it to the human operator.

Just like contracting with a typical human organization, a Supervisor AI Agent has a team of specialized AI agents at its disposal. The supervisor can route a message to any of the Worker Agents under its supervision, which will perform the task and report back to the supervisor. The supervisor may choose to assign the task to a specific agent and send additional instructions later, when more information becomes available. Finally, when the task is complete, the output is communicated back to the user. The human operator then has the option to give feedback or additional tasks to the Supervisor Agent, in which case the entire process begins again (3).
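
To make the routing concrete, here is a minimal Python sketch of how a supervisor might dispatch a request to specialized workers and collate their findings. Every class and method name here is an illustrative assumption, not the API of AWS or any real agent framework.

```python
# Minimal sketch of supervisor/worker routing. All names are
# hypothetical -- this is not the API of any real framework.

class WorkerAgent:
    def __init__(self, specialty):
        self.specialty = specialty  # e.g., "metrics", "tracing", "logs"

    def investigate(self, request):
        # A real worker would query its data source; here we stub it out.
        return f"{self.specialty} finding for: {request}"

class SupervisorAgent:
    def __init__(self, workers):
        self.workers = workers

    def handle(self, request):
        # Route the general request to every specialized worker,
        # then collate the findings into suggestions for the human.
        findings = [w.investigate(request) for w in self.workers]
        return {"suggestions": findings}

supervisor = SupervisorAgent([WorkerAgent("metrics"), WorkerAgent("tracing")])
print(supervisor.handle("fault spike in bot-service"))
```

Note that the human only ever talks to `SupervisorAgent.handle` – the worker fan-out is invisible, which is exactly the point of the pattern.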

The human does not need to worry about any of this internal orchestration – all of it is handled semi-autonomously by the supervisor. All the human does is state a general request, then review and react to the output of this agentic “organization.” This is exactly how you would communicate with an ant colony, if you could do such a thing: you would assign the job to the queen and let her manage the workers, soldiers, drones, and the rest. And much like in the ant colony, an individual specialized agent does not need to be particularly smart or to communicate with the human operator directly – it only needs to semi-autonomously solve the specialized task it is designed to perform and pass precise output back to the supervisor agent, nothing more. It is the supervisor agent’s job to do all of the reasoning and communication. This AI model is more efficient, cheaper, and highly practical for many tasks. Let’s take a look at the interaction flow to get a better feel for what this experience is like in the real world.

Use Case: CloudWatch Investigation with AI Agents

For simplicity, we will follow the workflow diagram earlier in the article, with each step in the flow matching the diagram. This example comes from “AWS re:Invent 2024 – Don't get stuck: How connected telemetry keeps you moving forward (COP322),” by AWS Events on YouTube, starting at the 53-minute mark (2).

Step 1

The process starts when the user notices a sharp increase in faults in a service called “bot-service” (top left in the screenshot) and launches a new investigation. The user then passes all of the pertinent information, and perhaps some additional instructions, to the Supervisor Agent.

Step 1: Human Operator launches a new investigation. Image Source: AWS via YouTube (2).

Step 2

In Step 2, the Supervisor Agent receives the request and spawns several Worker AI Agents that semi-autonomously examine different parts of the system. The process is asynchronous, meaning the initial state of the Suggestions panel on the right is empty: findings do not come back immediately after the investigation is launched.
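
This asynchronicity is what the UI has to accommodate: suggestions trickle in over time rather than arriving in one response. Here is a rough server-side sketch using Python's asyncio; all names and timings are hypothetical, chosen only to illustrate out-of-order arrival.

```python
import asyncio
import random

async def worker(specialty, queue):
    # Each worker takes an unpredictable amount of time, so findings
    # arrive out of order -- the UI must render them as they come in.
    await asyncio.sleep(random.uniform(0.1, 1.0))
    await queue.put(f"suggested observation from {specialty} agent")

async def investigate(request):
    queue = asyncio.Queue()
    specialties = ["metrics", "tracing", "logs"]
    tasks = [asyncio.create_task(worker(s, queue)) for s in specialties]
    # The Suggestions panel starts empty and fills as findings arrive.
    for _ in specialties:
        print(await queue.get())
    await asyncio.gather(*tasks)

asyncio.run(investigate("fault spike in bot-service"))
```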

Step 2: Supervisor Agent launches worker agents that take some time to report back. Image Source: AWS via YouTube (2).

Step 3

Now the worker agents come back with some “suggested observations,” which are processed by the Supervisor and added to the Suggestions panel on the right side of the screen. Note that the right side of the screen is now wider to allow for easier reading of the agentic suggestions. In the screen below, two very different observations are suggested by different agents: the first specializes in service metrics and the second in tracing.

Step 3: Worker Agents come back with suggested observations that may pertain to the problem experienced by the system. Image Source: AWS via YouTube (2).

These “suggested observations” form the “evidence” in an investigation aimed at finding the root cause of the problem. To narrow in on the root cause, the human operator helps out: they respond to the Supervisor Agent, indicating which of these observations are most relevant. Thus the Supervisor Agent and the human work side by side to collaboratively figure out the root cause of the problem.

Step 4

The human operator responds by clicking “Accept” on the observations they find relevant, and those are added to the investigation “case file” on the left side of the screen. Now that the human has provided feedback indicating which information is relevant, the agentic process kicks off the next phase of the investigation. Having received the user’s feedback, the Supervisor Agent stops sending “more of the same” and instead digs deeper, perhaps investigating a different aspect of the system in its search for the root cause. Note in the image below that the new suggestions coming in on the right are of a different type – these now look at logs for the root cause.
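
One way to model this accept/reject loop – a sketch under assumed names, not AWS's actual implementation – is that accepted observations join the case file, and the supervisor uses them to choose the next line of investigation:

```python
# Hypothetical sketch of the accept/reject feedback loop.
case_file = []           # accepted evidence, shown on the left panel
suggestions = [
    {"source": "metrics", "text": "fault rate spiked at 14:02"},
    {"source": "tracing", "text": "upstream latency is normal"},
]

def next_phase_for(source):
    # Accepted evidence steers the next phase, e.g. accepted metrics
    # evidence prompts a log-analysis phase. Mapping is illustrative.
    return {"metrics": "logs", "tracing": "dependencies"}.get(source)

def on_user_feedback(suggestion, accepted):
    if accepted:
        case_file.append(suggestion)
        # Feedback steers the supervisor: dig deeper into the accepted
        # area instead of sending "more of the same."
        return next_phase_for(suggestion["source"])
    return None

print(on_user_feedback(suggestions[0], accepted=True))  # -> "logs"
```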

Step 4: After user feedback, the agents look deeper and come back with different suggestions. Image Source: AWS via YouTube (2).

Step 5

Finally, the Supervisor Agent has enough information to take a stab at identifying the root cause of the problem, so it switches from evidence gathering to reasoning about the root cause. In Steps 3 and 4, the Supervisor Agent was providing “suggested observations.” Now, in Step 5, it is ready for the big reveal (the “denouement scene,” if you will), so, like a literary detective, the Supervisor Agent delivers its “Hypothesis suggestion.” (This is reminiscent of the game Clue, where players take turns making “suggestions” and then, when they are ready to pounce, make an “accusation.” The Supervisor Agent is doing the same thing here!)
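
The distinction between the two suggestion types could even be made explicit in the data model. A tiny hypothetical sketch (names assumed, not AWS's schema):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    kind: str    # "observation" (evidence) or "hypothesis" (root cause)
    text: str

# Observations accumulate as evidence; a hypothesis is emitted only
# once the supervisor has enough accepted evidence to reason from.
evidence = Suggestion("observation", "deployment at 13:58 preceded fault spike")
verdict = Suggestion("hypothesis", "root cause: unreviewed config change")
print(verdict)
```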

Step 5: The Supervisor agent is now ready to point out the culprit of the “crime.” Image Source: AWS via YouTube (2).

The suggested hypothesis is correct, and when the user clicks “Accept,” the Supervisor Agent helpfully provides the next steps to fix the problem and prevent future issues of a similar nature. The Agent almost seems to wag a finger at the human by suggesting that they “implement proper change management procedures” – the foundation of any good system hygiene!

The Supervisor agent also provides the next steps to fix the problem and prevent it in the future. Image Source: AWS via YouTube (2).

Final Thoughts

There are many reasons why agentic flows are the focus of so much AI development work today. Agents are capable, economical, and allow for a much more natural and flexible human-machine interface, where the agents fill the gaps left by the human and vice versa – literally a mind-meld of human and machine, a super-human “Augmented Intelligence” that is much more than the sum of its parts. However, getting the most value from interacting with agents also requires drastic changes in how we think about AI and how we design user interfaces to support agentic interactions:

Flexible, adjustable UI: Agents work alongside humans. To do that, they require a flexible workflow that supports continuous interaction between humans and machines across multiple stages – starting an investigation, accepting evidence, forming a hypothesis, providing next steps, and so on. It is a flexible, looping flow spanning multiple iterations.
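
One way to think about this looping flow is as a state machine that the UI must render at every stage. A minimal sketch with assumed state names (not taken from the AWS demo):

```python
# Hypothetical investigation state machine; the UI needs a distinct
# view (and controls) for each state, and the loop can repeat.
TRANSITIONS = {
    "start_investigation":   ["gathering_evidence"],
    "gathering_evidence":    ["reviewing_suggestions"],
    "reviewing_suggestions": ["gathering_evidence", "hypothesis_ready"],
    "hypothesis_ready":      ["remediation_steps", "gathering_evidence"],
    "remediation_steps":     [],
}

def advance(state, choice):
    assert choice in TRANSITIONS[state], f"illegal transition from {state}"
    return choice

state = "start_investigation"
state = advance(state, "gathering_evidence")
state = advance(state, "reviewing_suggestions")
state = advance(state, "gathering_evidence")  # the loop: back for more evidence
```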

Autonomy: While human-in-the-loop seems, for now, to be the norm for agentic workflows, agents show remarkable abilities to come up with hypotheses, gather evidence, and iterate on a hypothesis until they solve the problem. They do not get tired, run out of options, or give up. AI agents also show the ability to effectively “write code… a tool building its own tool” (4) to explore novel ways to solve problems – this is new. This kind of interaction by nature requires an “aggressive” AI, i.e., agents trained for maximum Recall, open to trying every possibility to ensure the most true-positive outcomes (see our Value Matrix discussion here). This means that sometimes the agents will take an action “just to try it,” without “thinking” about the cost of false-positive or false-negative outcomes. For example, an aggressive AI agent “doctor” might prescribe an invasive brain cancer biopsy without considering lower-risk alternatives first, or even stopping to get the patient’s consent! All this requires a deeper level of human and machine analysis, and multiple new approval flows for aggressive AI “exploration ideas” that might lead to human harm or simply balloon costs beyond budget.
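
A simple pattern for containing this aggressive exploration is an approval gate that blocks high-risk or high-cost actions until a human signs off. A hedged sketch, with the threshold and risk labels being assumptions for illustration:

```python
# Hypothetical approval gate for agent-proposed actions.
APPROVAL_THRESHOLD = 100.0  # assumed cost ceiling, in dollars

def execute(action, estimated_cost, risk, approve_fn):
    # Low-risk, low-cost actions run autonomously; anything else
    # requires explicit human approval before proceeding.
    if risk == "high" or estimated_cost > APPROVAL_THRESHOLD:
        if not approve_fn(action):
            return f"blocked: {action}"
    return f"executed: {action}"

print(execute("restart pod", 1.0, "low", approve_fn=lambda a: False))
print(execute("drop table", 0.0, "high", approve_fn=lambda a: False))
```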

New controls are required: While much of the interaction can be accomplished with existing screens, the majority of agent actions are asynchronous, which means that most web pages, with their traditional transactional, synchronous request/response models, are a poor match for this new kind of interaction. We are going to need to introduce some new design paradigms. For example, start, stop, and pause buttons are a good starting point for controlling the agentic flow; otherwise you run a very real risk of ending up in the “Sorcerer's Apprentice” situation from Fantasia (with self-replicating brooms fetching water without stopping, creating a huge, expensive mess).
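
Start/stop/pause maps naturally onto a shared control object that every agent checks between steps. A minimal sketch (class and method names assumed):

```python
import threading

class RunControl:
    """Hypothetical start/stop/pause control shared by all agents."""
    def __init__(self):
        self._running = threading.Event()
        self._stopped = threading.Event()
        self._running.set()  # set = running

    def pause(self):  self._running.clear()
    def resume(self): self._running.set()
    def stop(self):   self._stopped.set()

    def checkpoint(self):
        # Agents call this between steps; it blocks while paused
        # and raises when stopped -- no runaway "brooms."
        self._running.wait()
        if self._stopped.is_set():
            raise RuntimeError("investigation stopped by operator")

control = RunControl()
control.checkpoint()  # passes while running
control.stop()
# control.checkpoint() would now raise, halting the agent loop
```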

You “hire” AI to perform a task: This is a radical departure from traditional tool use. These are no longer tools; they are reasoning entities, intelligent in their own ways. An AI service already consists of multiple specialized agents monitored by a Supervisor. Very soon, we will introduce multiple levels of management, with sub-supervisors and “team leads” reporting to a final “account executive agent” that deals with humans… just as human organizations do today. Up to now, organizations needed to track Products, People, and Processes. Now we are adding a new kind of “people” – AI Agents. That means developing workable UIs for safeguarding confidential information, Role-Based Access Control (RBAC), and agent versioning. Safeguarding agentic data is going to be even more important than signing NDAs with your human staff.
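
Treating agents as “people” means giving them identities and roles like any employee. A hypothetical RBAC check for agent actions might look like this (the role table and action names are illustrative assumptions):

```python
# Hypothetical RBAC table: each agent identity gets an explicit
# set of permitted actions, just as a human role would.
AGENT_ROLES = {
    "metrics-worker": {"read_metrics"},
    "logs-worker":    {"read_logs"},
    "supervisor":     {"read_metrics", "read_logs", "write_report"},
}

def authorize(agent_id, action):
    allowed = AGENT_ROLES.get(agent_id, set())
    if action not in allowed:
        raise PermissionError(f"{agent_id} may not {action}")

authorize("supervisor", "write_report")      # ok
# authorize("logs-worker", "read_metrics")   # would raise PermissionError
```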

Continuously Learning Systems: To get full value out of agents, they need continuous learning. Agents learn quickly, becoming experts in whatever systems they work with. The initial agent, just like a new intern, will know very little, but it will quickly become the “adult in the room,” with more access and more experience than most humans. This will create a massive power shift in the workplace. We need to be ready. (Part 4 of our new book deals entirely with AI Ethics – you can pre-order the book here.)

Regardless of how you feel about AI Agents, it is clear that they are here to stay and evolve alongside their human counterparts. It is, therefore, essential that we understand how Agentic AIs work and how to design systems that allow us to work with them safely and productively, emphasizing the best of what humans and machines can bring to the table.

Want to practice designing your own Agentic UX flows?

  1. We have a new UX for AI book coming from Wiley in April. It is full of practical UX skills and frameworks for making AI work for humans. Pre-order now!

  2. I will be teaching a UX for AI workshop at SXSW on March 9th.

  3. I will be teaching at the AI Bootcamp for UX Teams (organized by Strat), May 13-15 in San Francisco (early-bird pricing is available now).

References

1. Altman, Sam. “Reflections.” blog.samaltman.com, January 5, 2025. https://blog.samaltman.com/reflections Collected Jan 21, 2025.

2. “AWS re:Invent 2024 – Don't get stuck: How connected telemetry keeps you moving forward (COP322).” AWS Events on YouTube, Dec 7, 2024. https://www.youtube.com/watch?v=ad42UTjP7ds Collected Jan 21, 2025.

3. Kartha, Vijaykumar. “Hierarchical AI Agents: Create a Supervisor AI Agent Using LangChain.” Medium, May 2, 2024.

4. Mollick, Ethan. “When you give a Claude a mouse.” oneusefulthing.org, Oct 22, 2024. https://www.oneusefulthing.org/p/when-you-give-a-claude-a-mouse Collected Jan 21, 2025.

 
