
Storyboarding for AI-driven Products - Part 1: Why You Need a Storyboard for Your AI Project and Why You Should Draw It Yourself

In AI projects, we use storyboarding to tell our story effectively and to communicate the project’s UX vision to users and stakeholders. Here's why you should draw yours with a pencil, not with ChatGPT or Midjourney.

NOTE: This is Storyboarding for AI-Driven Products, Part 1. This article continues in Part 2.

From ancient Egypt to modern advertising and comics, the same formula has held:

Words + Pictures = Greater Impact

Papyrus of Ani. (Source: photographed by the British Museum, original artist unknown; from Eternal Egypt: Masterworks of Ancient Art from the British Museum by Edna R. Russmann. Public domain: https://commons.wikimedia.org/w/index.php?curid=10971230)

In AI projects, we use storyboarding to tell our story effectively and to communicate the nascent project’s UX vision to users and stakeholders.

Why bother with a storyboard?

You might be tempted to ask: why do I need a storyboard in the first place? Can’t I just write a list of requirements?

Let me tell you a short story about “Mood Ring” that will help demonstrate the storyboard’s immediate utility for your next AI-driven project.

This story comes from an exercise done by a team of students in one of my UX design classes at San Francisco State University. Each student team got to pick any use case they wanted to work on. As part of the initial assignment, they needed to tell a compelling story to a team of “stakeholders” (the rest of the students in my class) about what their product would do, by drawing a storyboard explaining how it would work. One of the teams (names are hidden on purpose) envisioned an AI-driven product called “Mood Ring” that would detect when the owner was feeling down and try to improve their mood.

One of the team members created the following storyboard: 

Storyboard A. Source: SFSU UX Design Class

I can see that some explanation is required.

In storyboard A:

  • The story starts at home. 

  • The protagonist is drinking her coffee. 

  • She feels sad. 

  • She opens a mood tracker. 

  • Answers a few questions. 

  • Then she suddenly, magically… ahhhh, just feels better!

Do you think that looks like a viable product? The rest of the class sure did not think so. Even though there may be a germ of an idea for a viable AI product (more on this below), this storyboard fails to connect the dots on how the benefit would be delivered.

Here is another example of a storyboard from the same team:

Storyboard B. Source: SFSU UX Design Class

In storyboard B:

  • The story starts in traffic (those boxes are all little cars stuck on a 10-lane highway).

  • The protagonist (NOT Howard Stern) is stuck driving in traffic.

  • They open their phone (because, of course, everyone does that while driving in heavy traffic… NO, I MOST CERTAINLY DO NOT RECOMMEND DOING THIS!)

  • There is some vague exchange of intense messages between parties unknown (again, absolutely NOT recommended while driving).

  • Something happens…

  • There is an opportunity to upload a photo or make a note.

I guess the desired outcome was to make the protagonist feel better in some way, but since the designer never finished their story, we can assume that it just kind of petered out from there.

Is that a viable product? What do you think?

Many AI projects fail because the team never frames the problem correctly, which results in a lack of demand for the product or service. If you cannot tell a compelling story, you don’t have a chance at a successful product.

Storyboards can help you solve this framing problem before you invest time and effort in your design.

As I mentioned previously, the “Mood Ring” use case actually contains the germ of a controversial idea: the use of AI tools to enhance psychiatric care. One such use case, using AI to improve diagnostic accuracy and personalize treatment plans, is described by Michael Asbach in “AI in Psychiatry: Things Are Moving Fast,” published in Psychiatric Times: https://www.psychiatrictimes.com/view/ai-in-psychiatry-things-are-moving-fast

Taking this use case as an example, I asked ChatGPT to come up with its own storyboard for “Mood Ring”:

Storyboard C. Source: ChatGPT-4o

Hmmm… Perhaps my students should not despair of securing their UX jobs just yet.

Perhaps the use case was just too hard?

Let’s try a different twist on the Mood Ring use case:

“A nerdy, awkward young man is sitting alone in a coffee shop. He’s too shy and socially awkward to approach a pretty girl, so he launches “Mood Ring,” an AI-based app that encourages him to be less afraid of rejection and more open with his feelings. The app features a checklist to help him get the conversation started and to remind him to smile and at least appear human. The pretty girl responds well to his advances, and they strike up a lively conversation. The awkward young man and the pretty girl leave the coffee shop together. Romance blossoms.”

When we plug this into ChatGPT, we get the following written storyboard:

Scene 1: Coffee Shop Interior

  • Panel 1: Wide shot

    • The camera captures a cozy coffee shop filled with warm lighting and a few patrons. The nerdy young man is sitting alone at a small table in the corner, looking nervous and out of place. He’s hunched over, fidgeting with his coffee cup.

  • Panel 2: Close-up of the young man

    • The focus is on his face, showing his anxious expression as he glances towards a pretty girl sitting a few tables away. She’s absorbed in a book, unaware of his gaze.

Scene 2: The App Introduction

  • Panel 3: Over-the-shoulder shot

    • The young man pulls out his smartphone and opens an AI-based app. The screen displays a friendly, welcoming interface. The app’s main screen reads: "Overcome Fear, Start a Conversation!"

  • Panel 4: Close-up of the App

    • The app presents a checklist: “1. Take a deep breath. 2. Smile. 3. Compliment her book. 4. Ask about her interests.” The young man’s finger hovers over the first item, hesitating.

Scene 3: The Checklist

  • Panel 5: Split-screen

    • On one side, the app’s screen shows a cartoon avatar giving a thumbs-up with the text, "You’ve got this!" On the other side, the young man takes a deep breath, a small smile forming on his face.

  • Panel 6: Medium shot

    • The young man stands up, gathering his courage. The app screen is still visible, reminding him to stay calm and smile. He begins to walk towards the girl.

Scene 4: The Approach

  • Panel 7: Two-shot

    • The young man approaches the girl’s table, standing awkwardly. She looks up, slightly surprised. He stammers out a compliment about her book, just as the app suggested.

  • Panel 8: Close-up of the girl

    • The girl smiles warmly, clearly pleased by the compliment. She invites him to sit down.

Scene 5: Conversation Flow

  • Panel 9: Side-view shot

    • The young man and the girl are now deep in conversation, laughing and chatting comfortably. The young man’s nervousness has faded, and he’s clearly more relaxed and confident. The app screen peeks out of his pocket with a notification: "Great job!"

Scene 6: Leaving Together

  • Panel 10: Wide shot

    • The young man and the girl stand up, grabbing their things. They exit the coffee shop together, walking side by side, engaged in conversation.

  • Panel 11: Final shot

    • The camera lingers on the coffee shop’s entrance as the couple disappears into the street. The app screen, now dim, displays a congratulatory message: "Mission Accomplished: Romance in Bloom!"

Scene 7: Epilogue (Optional)

  • Panel 12: Close-up of the App (Optional)

    • The app’s home screen now displays a new prompt: "Plan your next date!" The young man smiles as he sees the notification, ready to continue his romantic journey.

This write-up is actually fantastic. 

Unfortunately (or rather fortunately for us humans), so far ChatGPT does an exceedingly poor job of creating a visual storyboard:

Storyboard D. Source: ChatGPT-4o

Somehow, Midjourney /imagine actually does even worse:

Storyboard E. Source: Midjourney

However, luckily for generative AI nerds, we can still use AI to generate a visual storyboard; we just have to do it one panel at a time.

Generating Mood Ring One Panel at a Time

Using Midjourney /imagine:

Pencil sketch style storyboard panel featuring a crowded coffee shop with people milling about --aspect 1:1

Pencil sketch style storyboard panel featuring an awkward nerdy person at the party sitting to the side by himself. he is browsing his phone --aspect 1:1 

Pencil sketch style storyboard panel featuring a close-up of a phone in a person's hand. the phone has a vague outline of a checklist --aspect 1:1

Pencil sketch style storyboard panel featuring an awkward nerdy person at a coffee shop talking to a pretty girl. He is smiling --aspect 1:1

Pencil sketch style storyboard panel featuring an awkward nerdy person walking down the street arm in arm with a pretty girl. She is holding a heart balloon floating above her head. The view is of their backs retreating. --aspect 1:1 
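As an aside: if you want to script this panel-by-panel workflow rather than pasting prompts into Discord by hand, a few lines of Python will do it. Midjourney’s /imagine runs through Discord and (as of this writing) has no official public API, so the sketch below is a hypothetical example that calls OpenAI’s image-generation endpoint as a stand-in. The model name, the output file names, and the lightly adapted prompts are my own assumptions, not part of the workflow above.

# Generate the "Mood Ring" storyboard panels one at a time via an image API.
# Midjourney has no official public API, so this sketch uses OpenAI's images
# endpoint as a stand-in; model, size, and file names are illustrative only.
import urllib.request

from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY to be set

client = OpenAI()

STYLE = "Pencil sketch style storyboard panel featuring "
PANELS = [  # lightly adapted from the Midjourney prompts above
    "a crowded coffee shop with people milling about",
    "an awkward nerdy person sitting off to the side by himself, browsing his phone",
    "a close-up of a phone in a person's hand, showing a vague outline of a checklist",
    "an awkward nerdy person at a coffee shop talking to a pretty girl, smiling",
    "an awkward nerdy person walking down the street arm in arm with a pretty girl, "
    "a heart balloon floating above her head, seen from behind",
]

for i, panel in enumerate(PANELS, start=1):
    result = client.images.generate(
        model="dall-e-3",   # assumed model; substitute whatever your account offers
        prompt=STYLE + panel,
        size="1024x1024",   # square output, the equivalent of --aspect 1:1
        n=1,
    )
    # Download the generated panel so it can be assembled into a storyboard later.
    urllib.request.urlretrieve(result.data[0].url, f"mood_ring_panel_{i}.png")
    print(f"Saved panel {i}")

Even scripted, you end up with five separate images of five slightly different protagonists, so the real time sink, re-rolling the panels until they look consistent, does not go away (more on that in reason 3 below).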

Here’s the complete storyboard, produced one panel at a time using Midjourney /imagine:

Storyboard F. Source: Midjourney (one panel at a time)

And here is the same storyboard I sketched myself using sticky notes:

Storyboard G. Source: Greg Nudelman

As you can see, storyboards F and G tell pretty much the same story. So why should you follow my titular advice and draw one yourself?

Reasons to Draw the Storyboard Yourself

  1. Time and effort. It took over 1.5 hours to make the storyboard using the one-panel-at-a-time method. I had a certain picture in mind, but Midjourney, of course, had other ideas, so it took quite a bit of time to get the two somewhat close. In contrast, the hand-drawn sketch storyboard took about 1.5 minutes. Drawing it yourself is about 60 times faster.

  2. Focus. During the time I was “playing” with Midjourney, guess what I wasn’t doing: I was not focusing my empathy on the user. I was not feeling the anxiety and fear that come with approaching a stranger. I was not thinking about whether this app was going to do anything to help those feelings. I was focused on the tool and not on the story.

  3. Quality. Drawing quality is a double-edged sword: with generative AI, it takes a great deal of time and effort to create consistency from one request to the next. The AI-generated panels all look a little different, and because the quality of the drawing is very high, the eye is naturally drawn toward these differences. The mind is yanked out of the story and into cataloging the differences and critiquing the quality of the drawing. It’s as though you started telling me a story about a tall prince with luxurious wavy hair, and midway through the story, the prince became short and bald. Not good. My own drawings are quite basic, but therein lies their redeeming feature: the lack of polish marks them as a work in progress, minute differences are noticed less, and the reader stays focused on the story itself. To put it another way, the quality of the drawing must match the level of certainty of the project. At this point, we are far from certain this app is worth doing at all, so lower quality is actually preferred: it focuses the mind on the story the drawing represents rather than on the drawing itself.

  4. Mind-Drawing Connection. This is hard to describe in scientific terms, but when a human being puts their pencil to paper, something magical occurs: they are now thinking in multiple dimensions at the same time. They are transported into the story, seeing the scene, hearing the muted buzz of the conversation, feeling the feelings, smelling the fresh coffee. This direct connection with your drawing is what frees your imagination and is perhaps the most important part of the whole exercise. This “mind-drawing connection” absolutely does not happen when you are using a computer to draw the picture for you.

TL;DR

So, in conclusion, what should you do?

  1. Until ChatGPT or Midjourney can create a complete storyboard in one shot, I’d say the juice is definitely not worth the squeeze: get used to drawing your own storyboards with pencil and sticky notes. (Next week we’ll publish Part 2, explaining in detail various aspects of the UX for AI storyboard and pitfalls to avoid.)

  2. Even after generative AI comes up with a reliable way to draw a complete storyboard based on a prompt, you should continue investing 2-3 minutes to draw one yourself. Why? Drawing is an exercise by humans and for humans. Drawing is needed in order to free our imagination and to feel our connection to the rest of the universe: I call it the “Mind-Drawing Connection.” The transformative power of first-person drawing is a real thing and should never be discarded – that is perhaps the most important part of the whole exercise.

  3. Please vote for my UX for AI workshop at the next SXSW: https://panelpicker.sxsw.com/vote/151357 

  4. Attend my next UX for AI workshop in Boulder, CO, on September 9th to practice drawing storyboards and doing other exercises, such as Digital Twin, which is essential for success on your next AI project. Register here: https://strat.events/usa/tickets

  5. Above all, have fun with your storyboards – it’s one of the few times in your adult life you get paid and add value by pretending you are 8 years old all over again! And that is just one of the things that makes UX Design such a cool profession.

Peace,
Greg

P.S. Our fabulous full-day workshop coming up on September 9th, 2024, at UXStrat in Boulder WILL sell out, just like our previous workshops at UXStrat 2023, UX Copenhagen, UXLx in Lisbon, and our Rosenfeld Media workshop online. If you found this article useful, don’t miss out on the key learnings you will need for your next project and get your ticket now: https://strat.events/usa/tickets

P.P.S. And please remember to vote for my session at SXSW: https://panelpicker.sxsw.com/vote/151357

P.P.P.S. This was Storyboarding for AI-Driven Products, Part 1. This article continues in Part 2.
