LLM Design Patterns: Part 1 - Re-Stating

Re-stating is a way the AI tells you what it understood as input to avoid any confusion about what is being researched, presented, or constructed.

The discussion of Copilot Best Practices (https://www.uxforai.com/p/ux-best-practices-copilot-design) and Copilot Reporting (https://www.uxforai.com/p/reporting-important-copilot-use-case) would not be complete without mentioning the key LLM “Design Patterns*” that make Copilots so useful and popular. Although these patterns are most often used in a Copilot context, they are worth keeping in mind any time we put an LLM (Large Language Model) or SLM (Small Language Model) AI in front of users, in any UX context.

*NOTE: I put “Design Patterns” in quotes because I believe these are more in the realm of “features” than actual design patterns. Much of what we think of as LLM UI is still rapidly evolving, and no particular patterns have settled yet. That’s what makes this area of UX Design and Research so incredibly exciting!

One of the most impressive and unique features of modern language models is how much more they appear to “understand” by tying together information from all sorts of disparate data sources and contexts. For example, imagine you are driving and tell the model a single word: “Park.” With access to your calendar, the latest-generation AI can ascertain that you are nearing your scheduled destination (a popular nightclub in downtown San Francisco), that it’s 9 pm and you are therefore likely running late, that it’s dark outside, and so on. A modern LLM like ChatGPT can thus determine that you are looking for a parking spot near your venue, and not, for example, general information on national parks, a sunny frolic in a botanical garden, or a biking excursion in nearby Golden Gate Park:

“Park” Source: ChatGPT o1-preview

In comparison, a previous-generation assistant like Siri is nowhere near that smart.

In fact,

Siri now looks downright Silly. 

And Silly, instead of finding you parking near the venue, will come back with: “Which one?”

“Park” Source: Siri

I bring up this example not to hate on Silly and her cousins Cortana and Alexa, but to demonstrate the incredible scope and capability of modern LLMs, and to show why the patterns we review here are necessary to ensure that an LLM actually does exactly what the user intended.

As Christian Lange (winner of the 1921 Nobel Peace Prize) so famously quipped:

“Technology is a useful servant but a dangerous master.”

Christian Lange

To make sure LLMs remain our servants, we need to talk about Re-stating, Auto-Complete, Talk-Back, Suggestions, Next Steps, Regen Tweaks, and Guardrails. We begin this week with our first installment: Re-stating.

Re-stating

Re-stating is simply a way the AI tells you what it understood as input.

By using a re-stating design pattern in your human-AI interface, you avoid any confusion about what is being researched, presented, or constructed. One of the earliest widely used examples of this pattern was the NLP “Ask” feature in Microsoft Power BI, the latest version of which is shown below:

Image Source: “How To Use Natural Language Query (Q&A) In Power BI - Detailed Review [2022 Update]” by Enterprise DNA, YouTube: https://youtu.be/L7phhEmxERs?si=Wo4NkuHxUudS9vkb

Notice that in the example above, the user typed “…where is 2017” into the “Ask…” box, and the system correctly interpreted the query, re-stating it just below the box as “2017 (order date).” Using re-statement to fill in the gaps and auto-correct is extremely useful: it lets us stress less about our human fallibility of typing sloppily or too fast, while leveraging what LLMs are actually really good at, namely filling in the next word in a sentence based on what makes sense.
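
To make the mechanics concrete, here is a minimal sketch of the pattern in Python. The interpret_query() helper is hypothetical, a stand-in for whatever language-model call your product makes; it is faked with a lookup table here so the example runs on its own.

```python
def interpret_query(raw_query: str) -> str:
    """Return the system's interpretation of an ad-hoc user query.
    (Hypothetical stand-in for an LLM call, faked with a lookup table.)"""
    canned = {"where is 2017": "2017 (order date)"}
    return canned.get(raw_query.strip(". ").lower(), raw_query)

def ask(raw_query: str) -> None:
    restated = interpret_query(raw_query)
    # Echo the interpretation back *before* rendering results, so the
    # user can spot a misreading at a glance.
    print(f'You asked:      "{raw_query}"')
    print(f'Interpreted as: "{restated}"')
    # ...execute the query and render the results here...

ask("where is 2017")
```

The essential design choice is that the interpretation is shown before (or alongside) the results, so a misreading is caught at a glance rather than after the user has acted on the wrong chart.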

Should you restate before you take action? That depends.

Recall our discussion of the Value Matrix in the previous article, AI Accuracy is Bullsh*t. Here's what UX must do about it (https://www.uxforai.com/p/ai-accuracy-bullsht-heres-ux-must-part-1). To answer the question “should you take this action immediately based on the AI’s best guess?”, we need to know two things:

  1. How often the AI will be wrong, and

  2. The impact of a false positive.

Now we can weigh the options by multiplying the impact of each hallucination by the number of queries that will be answered incorrectly, which gives us an expected cost for acting immediately versus confirming first.
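
Here is a back-of-the-envelope version of that calculation; every number below is made up purely for illustration.

```python
# Rough expected-cost comparison (all figures are assumptions).
error_rate = 0.02             # AI misinterprets 2% of queries
queries_per_month = 50_000

# Option A: execute immediately. A wrong answer only wastes a
# fraction of a cent of compute.
cost_per_miss = 0.001
cost_immediate = error_rate * cost_per_miss * queries_per_month

# Option B: confirm first. Every query now costs the user a few
# seconds of attention, priced here at 2 cents.
cost_per_confirmation = 0.02
cost_confirm = cost_per_confirmation * queries_per_month

print(f"Act immediately: ${cost_immediate:,.2f}/month")  # $1.00
print(f"Confirm first:   ${cost_confirm:,.2f}/month")    # $1,000.00
```

With numbers like these, confirming every query would cost a thousand times more than simply eating the occasional wrong chart, which is exactly why Power BI runs the query right away.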

In the case of Power BI, the “Ask” query can be executed immediately because there is almost no penalty for a false positive (other than a tiny Azure compute charge to run the query). In contrast, imagine you are doing NLP interpretation for an SMS function. A false-positive query interpretation might:

  1. Send incorrectly parsed text content or

  2. Send an SMS to the incorrect person.

In either case, the consequences can be absolutely disastrous! So here, you should design the system to confirm before sending the text, as in: “You asked me to text your boss ‘go duck yourself.’ Is that right?”
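
Here is a sketch of that gating logic; the action names and the confirm callback are illustrative, not a real API.

```python
# Hypothetical gate: actions that are costly if wrong require an
# explicit confirmation of the re-stated intent before executing.
HIGH_IMPACT_ACTIONS = {"send_sms", "send_email", "delete_record"}

def execute(action: str, restatement: str, confirm) -> None:
    if action in HIGH_IMPACT_ACTIONS:
        if not confirm(f"You asked me to {restatement}. Is that right?"):
            print("Cancelled.")
            return
    print(f"Executing: {restatement}")

# A read-only BI query runs immediately...
execute("run_query", "chart total sales for 2017 (order date)",
        confirm=lambda prompt: True)

# ...but a text message re-states itself and asks first.
execute("send_sms", 'text your boss "go duck yourself"',
        confirm=lambda prompt: input(prompt + " [y/N] ").lower() == "y")
```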

Enjoyed this installment of UXforAI? Stay tuned for Part 2, Auto-Complete, coming next week.

Greg
