Search UX Revolution: LLM AI in Search UIs
Another use case for LLMs that promises to irrevocably change the UX Design of Search UIs.
In the past, we’ve covered multiple interesting and unique uses of LLMs in previous installments dedicated to Copilot Best Practices and Reporting. Today, we discuss another use case for LLMs that promises to irrevocably change the UX Design of Search UIs.
The Current State of Search
The current state of search UI should be quite familiar to most of today’s internet users. There are basically two approaches. Let’s call them “Google” and “Amazon.”
“Google”
As a very simplified explanation, “Google-type search” is what you get when you give users a large, friendly search box and allow them to type in whatever they like. The search engine magic then performs a fuzzy match on the query, matching it and its synonyms against the metadata and keywords of each piece of content. The resulting matches are sorted by relevance and “authority” – the number of links from other authoritative sites to that piece of content.
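For the technically curious, here is a minimal Python sketch of what that kind of ranking boils down to. The synonym table, scoring weights, and example documents are all invented simplifications, not a description of how Google actually ranks anything:

```python
# A toy "Google-type" ranker: fuzzy keyword/synonym matching weighted by
# link-based authority. Field names, weights, and the synonym table are
# simplified illustrations, not how any real search engine works.

SYNONYMS = {"movie": {"film"}, "scary": {"frightening", "dark"}}

def expand(terms):
    """Add known synonyms to the query terms (the 'fuzzy' part of the match)."""
    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

def score(doc, query_terms, authority_weight=0.5):
    """Blend keyword relevance with the document's link-based authority."""
    terms = expand(query_terms)
    relevance = len(terms & doc["keywords"]) / len(terms)
    return relevance + authority_weight * doc["authority"]

docs = [
    {"title": "Top 10 cozy mystery films", "keywords": {"mystery", "film", "cozy"}, "authority": 0.9},
    {"title": "Random movie blog post", "keywords": {"movie"}, "authority": 0.1},
]

query = {"mystery", "movie"}
for doc in sorted(docs, key=lambda d: score(d, query), reverse=True):
    print(f'{doc["title"]}: {score(doc, query):.2f}')
```

The point to notice is that the match is driven entirely by the words and tags attached to the content, which is exactly what breaks down on the fuzzy, “negative” queries we get to below.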
Google-type search also includes “answers” – direct responses to specific questions, such as “What is the capital of Australia?”, sourced from authoritative sites.
This type of search UI also includes disambiguation for common use cases such as the query “tiger.” It leans heavily on auto-complete, auto-correct, and other tricks to make sure the best answer is returned with a minimum of fuss. The primary application for this type of service is to quickly find a few pieces of reliable, authoritative content of a specific type.
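At its simplest, auto-complete is just prefix matching against a log of popular queries. Here is a toy sketch, with an invented query log:

```python
# A toy autocomplete: prefix-match the partial query against a log of popular
# queries. The query log here is invented for illustration.
POPULAR_QUERIES = ["tiger woods", "tiger shark", "tigers baseball", "tiger king"]

def autocomplete(prefix, limit=3):
    """Suggest popular queries that start with what the user has typed so far."""
    prefix = prefix.lower()
    return [q for q in POPULAR_QUERIES if q.startswith(prefix)][:limit]

print(autocomplete("tig"))  # ['tiger woods', 'tiger shark', 'tigers baseball']
```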
“Amazon”
In contrast to “Google,” “Amazon-type search” is the backbone of e-commerce. Amazon search is what you get when you perfect your search in service of finding something to buy, visit, consume, or watch. This type of search UI is characterized first and foremost by facets, a feature conspicuously absent from the “Google-type search.” Just like facets on a diamond, search facets are various angles on the search query and serve as convenient filters by which users can narrow it down. For example, running a query such as “Nike” would surface facets such as Department, Review Stars, Delivery Type, Price Range, etc.
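Under the hood, faceted search boils down to filtering and counting on structured attributes. Here is a toy sketch; the catalog, fields, and facet names are invented for illustration:

```python
from collections import Counter

# A toy faceted search: filter the result set by structured attributes and
# count how many items fall under each facet value. The catalog and facet
# names are invented for illustration.

catalog = [
    {"title": "Nike Air Zoom",    "department": "Shoes",   "stars": 4, "price": 120},
    {"title": "Nike Dri-FIT Tee", "department": "Apparel", "stars": 5, "price": 30},
    {"title": "Nike Gym Bag",     "department": "Bags",    "stars": 4, "price": 45},
]

def facet_counts(items, facet):
    """How many matching items fall under each value of a given facet."""
    return Counter(item[facet] for item in items)

def apply_filters(items, **selected):
    """Narrow the result set: keep items that match every selected facet value."""
    return [i for i in items if all(i[k] == v for k, v in selected.items())]

results = [item for item in catalog if "nike" in item["title"].lower()]
print(facet_counts(results, "department"))        # Counter({'Shoes': 1, 'Apparel': 1, 'Bags': 1})
print(apply_filters(results, department="Shoes"))
```

Notice that a facet has to exist as a structured field before anyone can filter on it, a limitation that will come back to haunt us shortly.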
The “Mysteries That Are Not Scary” Problem
Unfortunately, neither UI does particularly well with poorly defined or “negative” queries. The quintessential example of such a query, introduced by none other than Jared Spool, is “Mysteries That Are Not Scary.” For many reasons, finding answers to queries of this type is quite easy for humans but particularly difficult for conventional search UIs.
One challenge is that typical “Google-type” search engines look for matches rather than mismatches. In principle, a piece of content could be tagged as “Scary,” and anything carrying that tag could then be excluded from the result set. Of course, this means that someone has to have the preemptive initiative to tag all of the content in this way, which is usually impractical.
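Here is that workaround in toy form; the catalog and the “scary” tag are hypothetical, and the sketch only works because a human has already done the tagging:

```python
# A toy illustration of the tagging workaround: excluding "scary" content only
# works if a human has already tagged everything scary. The catalog and tags
# below are hypothetical.

books = [
    {"title": "The Thursday Murder Club",     "tags": {"mystery", "cozy"}},
    {"title": "IT",                           "tags": {"horror", "scary"}},
    {"title": "Murder on the Orient Express", "tags": {"mystery"}},
]

def not_scary_mysteries(items):
    """Match on 'mystery', then exclude anything pre-tagged as 'scary'."""
    return [b for b in items if "mystery" in b["tags"] and "scary" not in b["tags"]]

print([b["title"] for b in not_scary_mysteries(books)])
# ['The Thursday Murder Club', 'Murder on the Orient Express']
```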
This is frequently solved by enterprising humans who create guides for everything, including, ahem:
While it may look as though Google just magically came up with the answer, it’s actually just quoting a single “authoritative” source: https://modernmrsdarcy.com/page-turning-mysteries-hopeful-not-dark-gloomy/
Google did a decent job of translating “Not Scary” into “Not Dark and Gloomy,” although it’s not an exact match. The typical human search strategy from this point might be something like “Pearl Growing,” where the searcher would look at the article and peruse the comments and references to find similar material. (This is one of the common search strategies described in Peter Morville’s famous book “Search Patterns: Design for Discovery”: https://a.co/d/5U4VJZo.) As Peter wrote, “What we find changes what we seek” — one of my personal favorite quotes.
“Amazon-type” search engines traditionally do even worse than “Google-type” search on these “fuzzy match” queries. Part of the problem is a constrained content inventory: Amazon carries the books and movies themselves, not guides to how scary that content is. The other is a constrained vocabulary. For these search UIs to work as intended, the level of “scariness” would ideally need to be set up as a search facet. That is even less practical, as one cannot predict all of the facets searchers will ask for, and facets are much harder to set up than metadata tags. Thus, we get this hodgepodge (And yes, “hodgepodge” is an absolutely scientific term. Definitely. Absolutely.)
NOTE: The screenshot below is condensed and edited for demonstration purposes and to skip the sponsored content.
The result set starts out by randomly referencing the completely unknown “O'Malleys,” then proceeds to Scooby-Doo, which definitely fits the bill. Then comes Scary Movie, which is probably a decent pick.
Then things begin to unravel.
We have a random non-mystery title, a Bernie Mac biography, coming in at number three. From here, things take a decidedly darker turn with Bates Motel (the spin-off of one of the most legitimately terrifying movies ever made), Smile (VERY scary), Children of the Corn (ditto), before finally arriving at IT (following the viewing of which I myself had trouble sleeping. For a few weeks. And I still refuse to open fortune cookies.) I can just imagine someone looking for a nice cozy Hercule Poirot PBS mystery cuddling up with IT instead... WHAAAAAA!!
So, the bottom line is that things that might be easy for humans are historically difficult for computers.
No mystery there.
But the plot is about to thicken.
Enter LLMs
Years ago, I had a fantastic opportunity to work on designing a new UI for the Associated Press (AP) Images site: https://newsroom.ap.org/ in New York. It was one of the most satisfying projects I had a chance to work on as a consultant. The idea was to put AP on the technological and UX Design cutting edge, and boy, did we deliver!
Today, AP is once again on the cutting edge of search technology: they are one of the first specialty sites to utilize LLMs for search.
The first search for our favorite query, “mysteries that are not scary,” yields little more than an empty result set. However, AP also presents a preview of AI-powered results:
Clicking on “AI-powered search” does loads better, showing us images of an Indian Tibetan dance, costume shops in Madrid, Halloween decorations in Poland, and the Sherlock Holmes Museum:
While this may seem like a tiny improvement, it is nothing short of a revolution in search.
Because LLMs like ChatGPT have no problem figuring out the riddle:
Even coming up with specific movies that fit the bill perfectly:
And this is only the beginning.
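To make the mechanics a bit more concrete, here is a minimal, hypothetical sketch of one way an LLM can sit in front of a conventional search engine: it rewrites the fuzzy, “negative” query into structured include/exclude parameters that an ordinary tagged catalog can execute. The prompt, the JSON schema, and the call_llm() stub are my own illustrative assumptions, not AP’s (or anyone else’s) actual implementation:

```python
import json

# A minimal sketch of LLM-assisted search: the LLM rewrites a fuzzy,
# "negative" query into structured parameters that a conventional keyword
# or faceted engine can execute. The prompt, JSON schema, and call_llm()
# stub are illustrative assumptions, not any vendor's actual implementation.

PROMPT = """Rewrite the user's search query as JSON with two fields:
  "include": keywords the results must match,
  "exclude": keywords or moods the results must avoid.
Query: {query}
JSON:"""

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g., a chat-completion request)."""
    # A plausible response for "mysteries that are not scary":
    return '{"include": ["mystery", "cozy", "whodunit"], "exclude": ["horror", "gore", "scary"]}'

def llm_rewrite(query: str) -> dict:
    """Ask the LLM to turn the fuzzy query into a structured search plan."""
    return json.loads(call_llm(PROMPT.format(query=query)))

def search(items, plan):
    """Run the structured plan against an ordinary tagged catalog."""
    return [
        item for item in items
        if item["tags"] & set(plan["include"]) and not item["tags"] & set(plan["exclude"])
    ]

catalog = [
    {"title": "Knives Out", "tags": {"mystery", "whodunit", "comedy"}},
    {"title": "IT",         "tags": {"horror", "scary"}},
]

plan = llm_rewrite("mysteries that are not scary")
print([m["title"] for m in search(catalog, plan)])  # ['Knives Out']
```

The same pattern extends naturally to facets: the LLM can map “not scary” onto whatever structured vocabulary your catalog actually has.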
Very soon, your customers will demand well-formed, custom e-commerce and content results that are fine-tuned to your specific content, along with detailed, accurate answers to the fuzzy queries that matter to them.
This is exactly why I have set up a special full-day UX for AI workshop together with the AI search experts at Open Source Connections, where together we will explore the mysteries and possibilities of LLMs in Search UIs.
The AI in Search UX: A Framework for Product Design with Greg Nudelman workshop will take place Thursday, April 25th, at the Haystack Conference: https://www.eventbee.com/v/haystack-us-2024/event?eid=225311531#/tickets
There are just a few tickets remaining, so if you are on the East Coast, I highly recommend signing up soon. This full-day workshop is only $495, and the first 5 people to sign up will get $50 off with the code "greg4ux".
Looking forward to exploring the mysteries of LLM-assisted search with you!
Greg
P.S. Did I mention that my 1st book was on Search UX? It was! You know what else is cool? You can get your hands on a free copy of my 5th book, $1 Prototype, by sharing the article using the link below. Please consider doing that if you know someone on the East Coast who might benefit from this AI Search UX workshop at $50 off. And I will send you my newest book for free as my way of saying “thank you:”