The Rise of AI-First Products
If Mobile-First thinking has revolutionized the UX Design industry, AI-First is promising to be an even more spectacular kick in the pants.
Updated Feb 14, 2025. Original version published January 12, 2024.
For likely the first time in history, we can build products and services truly centered around functional AI. The next wave of AI products promises to combine LLMs with mobile, audio, video, vision, movement, and much more. This is giving rise to a functional set of products that can be called “AI-First.”
And many of the design “rules” are going out the window.
As a concept, AI-First Design was introduced to me by Itai Kranz, the Head of UX at Sumo Logic, who wrote this nice article: "AI-First Product Design." One of the earliest mentions of the concept in online literature seems to point to Masha Krol’s Medium.com article “AI-First: Exploring a new era in design,” published Jul 13, 2017.
However, AI-First is not exclusively the domain of designers. As Neeraj Kumar helpfully explains in his LinkedIn article “The AI First Approach: A Blueprint for Entrepreneurs”:
In an AI-first company, AI is not an afterthought or a tool to be tacked on later for incremental efficiency gains. Instead, it is an integral part of the company's DNA, influencing every decision, from the problem the company chooses to solve, the product it builds, to the way it interacts with its customers.
Well said.
Co-pilot is not an AI-First Design
The first wave of LLM-enabled products has largely been add-ons, now commonly called “co-pilots.” We explored various co-pilot design patterns at length and even sketched a few that have not yet been made into products on our blog, UXforAI.com, in "How to Design UX for AI: The Bookending Method in Action." Essentially, the idea behind a co-pilot is to retrofit an existing product with a side panel that combines an LLM engine with the information on the main screen in order to produce some acceleration or insight. A nice recent example of this is the Amazon Q integration with QuickSight:
Amazon Q co-pilot panel answers natural language questions, explains dashboards, creates themed reports, and more. While this is pretty impressive and useful, it is not an AI-First approach. It is a way to retrofit an existing product (QuickSight) with some natural language processing accelerators.
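To make the co-pilot pattern concrete, here is a minimal sketch in TypeScript. To be clear, every name here (DashboardContext, askCopilot, the LLM endpoint) is a hypothetical illustration and not the actual Amazon Q or QuickSight API; the essential move is “bookending” the user’s question with whatever the main screen is currently showing:

```typescript
// Hypothetical sketch of the co-pilot pattern: a side panel pairs the
// user's natural-language question with the state of the main screen.
interface DashboardContext {
  title: string;
  visibleCharts: string[];                // chart titles currently on screen
  appliedFilters: Record<string, string>; // e.g. { region: "Northeast" }
}

async function askCopilot(
  question: string,
  ctx: DashboardContext
): Promise<string> {
  // Wrap the question with the main-screen context before sending it
  // to the LLM, so the answer is grounded in what the user is viewing.
  const prompt = [
    `You are a BI co-pilot. The user is viewing: ${ctx.title}.`,
    `Visible charts: ${ctx.visibleCharts.join(", ")}.`,
    `Active filters: ${JSON.stringify(ctx.appliedFilters)}.`,
    `Question: ${question}`,
  ].join("\n");

  const res = await fetch("https://llm.example.com/v1/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return (await res.json()).text;
}
```

The point is that the co-pilot never leaves the host product: it reads the screen, asks the model, and writes the answer back into a side panel.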
We tried AI-First with Alexa
We’ve seen a few attempts at AI-First products in the past: Amazon Echo with Alexa, for example. However, Alexa suffered and continues to suffer from a lack of context, as I wrote about in my 5th book, Smashing Book 6.
Echo with Alexa also lacks access to essential secure services that would allow the product to actually “do stuff” outside of Amazon’s own ecosystem. If you ask Alexa to add your dog food to your Amazon shopping cart, it will do it quite well. However, don’t expect Alexa to work when ordering a pizza. Much less to execute a complex multi-step flow like booking a trip. In fact, any multi-step experience with Alexa is borderline excruciating.
The Alexa “Skills” (Amazon’s name for voice-activated apps) are the worst failure of the platform, in my opinion. Greg wrote extensively about this previously (https://www.smashingmagazine.com/2018/09/smashing-book-6-release/), but it comes down to lengthy invocation utterances, the inability to pass context, clunky entry and exit strategies, and the inability to show system state (are you inside a Skill, or talking to Alexa itself?). And the worst part is that you have to say everything very, very quickly and concisely, or else Alexa’s minuscule patience will time out, and you’ll have to start all over again.
I once did a pilot project spike for GE where I created an Alexa skill called Corrosion Manager to report on the factory assets that were about to rust out and thus were posing an increased risk. (See our UXforAI.com article, "Essential UX for AI Techniques: Vision Prototype") The easiest Alexa Skill invocation command we could come up with was something like: “Alexa, ask Corrosion Manager if I have any at-risk assets in the Condensing Processor section in the Northeastern ACME plant.” (Try to say that five times fast. Before Alexa times out. Before your morning coffee. I can tell you my CPO at the time was not impressed when he tried it.)
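For the curious, here is roughly what that looks like under the hood. The sketch below follows the general shape of an Alexa interaction model, but the intent, slot, and type names are my own hypothetical reconstruction, not the actual Corrosion Manager code:

```typescript
// Hypothetical, abridged Alexa interaction model for a skill like
// Corrosion Manager. Users must prefix every request with
// "Alexa, ask corrosion manager ..." before any sample utterance.
const interactionModel = {
  languageModel: {
    invocationName: "corrosion manager",
    intents: [
      {
        name: "GetAtRiskAssetsIntent", // hypothetical intent name
        slots: [
          { name: "section", type: "PLANT_SECTION" }, // hypothetical custom slot types
          { name: "plant", type: "PLANT_NAME" },
        ],
        samples: [
          "if I have any at risk assets in the {section} section in the {plant} plant",
          "for at risk assets in {section} at {plant}",
        ],
      },
    ],
  },
};
```

Every slot you add makes the utterance longer, and there was no practical way to shorten the invocation preamble, which is exactly why the command above defeated my CPO.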
Alexa skills don’t just fail the smell test for serious SaaS applications. One memorable experience came from trying to introduce a nice middle-aged religious couple, friends of mine, to the Bible Skill on Alexa. Let’s just say they did not have the prerequisite patience of a saint and, therefore, failed to invoke even a single Bible Skill task successfully. (They eventually forgave me for introducing a satanic device into their home. Yes, we are still friends. Barely.)
Humane AI Pin
Humane AI Pin (https://humane.com/) was arguably the first commercially available AI-First product of the new generation. We already discussed the issues with the AI Pin at length in the UXforAI.com article "Apps are Dead." Among the problems were awkward I/O and controls. While it seemed to be able to mimic Alexa’s functions on the go, it was hard to see people doing real work on this device, even something relatively simple like ordering a pizza. Booking a trip was definitely out of the question. However, this device helped show that the new paradigm seems to be the unabashed and uncompromising death of the app store.

Source: https://humane.com/
We wrote about that extensively in a past issue of our column, here: https://www.uxforai.com/p/apps-are-dead (It’s a quick read, and I highly recommend a refresher, as it will help put this next product in the proper perspective.)
rabbit r1
Another AI-First product, the r1 from rabbit, launched 13 months ago, on January 9th, 2024. The r1 is part of the next wave of AI products promising to combine LLMs with a mobile form factor, voice, and vision capabilities. The r1 looks like a smaller version of a cell phone, with a touch screen and a spinner wheel somewhat reminiscent of late Crackberry designs. (Have you seen the movie BlackBerry? It’s excellent, a must-watch for all the mobile design nerds.)
The most prominent feature of the r1 device is what it does NOT have: apps.
All of the usual apps are instead available as permanent integrations embedded behind the scenes into the ChatGPT voice-assistant interaction. Here’s a full transcript of the r1 demo ordering a pizza:
The key strategy seems to be a simple end-to-end experience that works reliably and consistently, together with simple pricing.
Sadly, this appears to be much harder to build than it sounds.
Down the Rabbit Hole: The Slippery Slope of AI Product Ethics
Unfortunately, 13 months after the release, all is not well in rabbit land. Based on multiple early product reviews from YouTube tech influencers, including Marques Brownlee (who calls the r1 “barely reviewable”) (Brownlee, Marques. Rabbit R1: Barely Reviewable. https://www.youtube.com/watch?v=ddTV12hErTc) and Coffeezilla (who just straight up calls the r1 “a scam”) (Coffeezilla. $30,000,000 AI Is Hiding a Scam. https://www.youtube.com/watch?v=NPOHf20slZg), the r1 might have gone a bit too far down the marketing rabbit hole and failed to live up to the hype.
According to these and many other reviewers, the device is plagued by multiple issues, including broken interfaces to key services such as Uber, DoorDash, and Spotify, buggy visual recognition, terrible battery life, an unreliable GPS, and a general inability to connect various experiences together.
The main problem is that the device does not actually appear to do much of what the marketing promised. Reviewers like The Verge and Coffeezilla have pointed out that all of the connectivity with various services that was supposed to make the r1 work appears to be done by hand, through hard-coded scripts written with Playwright, an open-source browser-automation framework, and not, as rabbit alleged, through the “Large Action Model” AI. (Coffeezilla. Rabbit Gaslit Me, So I Dug Deeper. May 24, 2024. https://www.youtube.com/watch?v=zLvFc_24vSM Retrieved Nov 18, 2024; Pierce, David. Rabbit R1 review: nothing to see here. The Verge. May 2, 2024. https://www.theverge.com/2024/5/2/24147159/rabbit-r1-review-ai-gadget Retrieved Nov 18, 2024.)
It also appears that the entire product is basically ChatGPT that has been specially instructed not to reveal that truth to the user:
Image Source: YouTube https://www.youtube.com/watch?v=zLvFc_24vSM
As Emily Sheppard explains in the video: “The way LAM was observed to work is not actually how it works. It’s meant to be an AI live controlling website and understanding that website… But what they have is a bunch of static commands…. And the problem with that is that if the user interface changes… If the website changes… If there is a CAPTCHA… The hard coded script cannot cope with that.” (Coffeezilla. Rabbit Gaslit Me, So I Dug Deeper. May 24, 2024. https://www.youtube.com/watch?v=zLvFc_24vSM) In other words, the interface breaks.
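To see why hard-coded browser automation is so fragile, consider this illustrative Playwright script. To be clear, this is my own sketch of the kind of script the reviewers describe, not rabbit’s actual code; the site, selectors, and flow are all hypothetical:

```typescript
import { chromium } from "playwright";

// Hypothetical hard-coded ordering flow: every step below is pinned to
// today's exact page markup.
async function orderPizza(address: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://pizza.example.com");

  // Rename a button, redesign the page, or insert a CAPTCHA,
  // and the whole flow falls over at the first unmatched selector.
  await page.click("text=Order Now");
  await page.click("#pepperoni");
  await page.fill("#delivery-address", address);
  await page.click("button.checkout");

  await browser.close();
}
```

A real “Large Action Model” would look at the live page and reason about what to click; a script like this just replays yesterday’s clicks.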
According to Coffeezilla and others, many signs point to LAM being more of a marketing term than a working technology, one that does not actually exist as promised.
AI-First is Hard
Regardless of your experience with r1, I think we can all agree: AI-First is hard.
While a few early failures and some hype are to be expected, AI ethics quickly becomes a crucial consideration for AI-First products because they naturally aggregate massive amounts of data from various apps across the entire spectrum of use cases. However, today there is no strict code governing the design and development of these potentially powerful products.
What we have instead is closer to the pirate code from Jack Sparrow’s famous adventures:
“The code is more what you'd call 'guidelines' than actual rules.”
Recall that with “great power comes great responsibility.” Although it is tempting, we simply cannot think like pirates in our approach to ethical AI-First system design, especially if those devices are going to handle the sum total of our lives the way our mobile devices currently do. In the next section, I will attempt to put down some of the principles and key considerations for AI-First designers. May your seas be smooth and the wind always at your back!
Rules for Rule-Breakers
While there is clearly much to learn, we can already deduce a few rules for this new AI-First design paradigm. Here’s what we’ve got to go on so far:
Smooth, simple, seamless. The AI-First experience must feel much simpler and smoother than the current app paradigm. This is where the r1 takes a hit by requiring the use of another device (a computer with a keyboard and a large screen) to set up all the app integrations. We already do everything on mobile. Not being able to do everything on the AI-First device is a step backward and just will not work. The sub-second LLM response speed is nice, though.
Personalization: The AI assistant must learn my preferences quickly. It must know whether I like pepperoni, want vegan cheese, or need a gluten-free crust. It should know where I live and what I prefer at what hour of the day, above and beyond the app preferences. For example, the Amazon app keeps trying to make me return my packages at a Kohl’s two towns over when I have a UPS store next door. This nonsense simply must cease.
Data privacy: With this intimate knowledge of my life across all of the apps, I must know that data about my personal habits will not be used to enslave me and sell me down the river. AI is powerful enough that I would pay extra to have my interests served first, not to be turned into another piece of rabid rabbit robot food.
Use the existing phone, watch, earbuds, glasses, tablet, headphones, etc.: Please, please, please – I mean it! Use the same device if possible. I already have too many devices. There is no new interaction in the r1 to warrant me owning yet another device. None. I don’t need a smaller screen; that’s a bad idea. I already have two cameras on my phone, and I’m used to that, so there is no need to reduce it back to one camera. That’s another bad idea.
Security of transactions: We are going to be doing everything with our AI-First device, so use established high-security methods like facial recognition and fingerprint. I like what the r1 is doing with the transaction confirmation dialog, but this needs to be more secure, like the double-click + facial recognition confirmation the Apple iPhone provides.
Non-voice is more important than voice: Both the r1 and the AI Pin are missing the most important lesson of Mobile-First. Voice is not going to be the primary UI. Voice control is just too public. Imagine saying your password out loud, like in Star Trek! (That’s “O-U-C-H,” Capt’n.) Mobile use is popular in both quiet (doctor’s offices, meetings) and noisy (metro, bus, cafe) environments. Text input via keyboard is a primary, not secondary, use case.
Avoid cutesy form factors: Be friendly without being cloying. You don’t need to invoke the Adventures of Edward Tulane – that story is creepy enough to be left alone! Avoid bright colors, especially orange, even if the CEO really seems to like it. (Designers, please try to talk your executives out of making crazy color choices. Orange is a warning. Or a rescue craft. Or a child’s toy. This thing is none of those.)
Again, AI-First is hard. These products are still baby steps. Remember that the first iPhone did not have cut and paste. And the first Facebook “app” was actually just a website that only allowed reading and liking messages. It took over a year for the first true mobile Facebook app to be ready.
Baby steps.
Time will, of course, be as unkind as it can possibly be to any new product named “rabbit” designed in partnership with a company called “Teenage Engineering” (if the influencer backlash and the disabled comments on the launch video on YouTube are any guide…). However, this author is of the opinion that the r1 is a very clever ChatGPT wrapper built on top of the usual phone OS+apps play, which has remained basically unchanged since the first release of the iPhone in 2007, almost 17 years ago!
Apps Must Die
Recall that we recently discussed how InVision failed to implement the key strategy for the age of AI: “simple end-to-end experience that worked reliably and consistently, together with simple pricing.” (See "InVision Shutdown: Lessons from a (Still) Angry UX Designer" on UXforAI.com.) AI-First products like the r1 from rabbit are early attempts at this 3S experience: Smooth, Simple, Seamless.
One thing that rabbit r1 emphatically demonstrates is that under the pressure of LLMs, apps must die.
Think of your phone now not as a collection of Mobile-First UI designs but as a platform for AI-First experiences.
The APIs and services apps deliver will, of course, remain alive and well. What must, however, be allowed to pass away is the need for the customer to go in and out of a specific UI silo (or a voice silo if we are talking about Alexa Skills).
With AI-First design, we simply ask the assistant for what we want, and the assistant, armed with a deep knowledge of our preferences and inner desires, goes into whatever specific services it needs to accomplish the task, as simply and as frictionlessly as possible. LLMs like ChatGPT are making this shift away from apps not just possible but imperative.
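Here is a minimal sketch of what that shift could look like in code. Everything below (ToolCall, orderFood, bookRide) is a hypothetical illustration; in a real system, the ToolCall would come from the LLM’s structured output rather than being constructed by hand:

```typescript
// Hypothetical AI-First dispatch: the assistant, not the user, crosses
// service boundaries. No app UI silos, just capabilities.
type ToolCall =
  | { tool: "orderFood"; args: { dish: string; dietary: string[] } }
  | { tool: "bookRide"; args: { destination: string; when: string } };

const services = {
  // Each handler wraps one service API; the UI silo around it is gone.
  orderFood: async (args: { dish: string; dietary: string[] }) =>
    `Ordered ${args.dish} (${args.dietary.join(", ")})`,
  bookRide: async (args: { destination: string; when: string }) =>
    `Ride booked to ${args.destination}, ${args.when}`,
};

async function runAssistant(call: ToolCall): Promise<string> {
  // Stored preferences (vegan cheese, home address, usual hours) would
  // be merged into the args here, so the user never restates them.
  switch (call.tool) {
    case "orderFood":
      return services.orderFood(call.args);
    case "bookRide":
      return services.bookRide(call.args);
  }
}

// Example: "get me a pizza" becomes one structured call, not an app session.
runAssistant({
  tool: "orderFood",
  args: { dish: "pizza", dietary: ["vegan cheese", "gluten-free crust"] },
}).then(console.log);
```

The services (and their APIs) survive; only the per-app UI trip dies.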
We see the AI-First Design movement quickly becoming the avalanche that will sweep away the outdated siloed app environments in favor of 3S: Smooth, Simple, Seamless experiences that bring together various app capabilities and content under the umbrella of an AI-First approach.
So, enough talk! Go forth and design some cool AI-First sh*t.
We can’t wait to see it!
Greg & Daria