Decisions by Design: An AI Story
At a time when AI tools are rapidly becoming everyday collaborators, how might we help young adults make informed, ethical, and empowered decisions about their use?
This project explores that question through an interactive, scenario-based game designed to improve AI literacy among students and recent grads. The goal was to create a judgment-free, narrative-driven experience where users make choices in branching storylines that lead to different outcomes—each revealing insights about the impact of AI on their creative workflows, responsibilities, and decision-making.
View Figma Prototype here.
Role
UX Designer, Writer
Industry
HCI+AI, Education
Duration
6 weeks
Context
AI tools are deeply embedded in our daily lives, yet many young adults (18-30) still struggle to understand how these technologies function and how to use them effectively. As this demographic forms lifelong habits, AI itself is evolving rapidly in capability and accessibility. Young adults are especially prone to adopting AI uncritically, leading to patterns of over-reliance, deskilling, and even addictive behaviors.
By targeting this population early, we have the opportunity to shape how a generation relates to AI. If we can help them recognize both the benefits and the risks of integrating AI into everyday life, they can become more intentional users—and potentially influence broader communities through their awareness and actions.
I initiated the project discussion with my team of 4 to align on these core goals:
Equip young adults with the tools to improve AI literacy and adapt their workflows to work with AI—not around it.
Educate on the early signs of addictive behavior and over-dependence on AI systems.
Perspective Searching: Expert Interviews
With the project scope and deliverables still undefined, we knew we needed guidance from people with professional experience in AI. To ground our direction, my team conducted 4 semi-structured interviews with experts in HCI and AI across academia and industry.
Questions were tailored to each expert based on their area of expertise and active projects, organized below:
Post-Interview Affinity Mapping
Many pages of notes later, my team came together to discuss our findings from each interviewee.
We identified several core themes that informed how we approached AI literacy for young adults:
1. Co-Collaboration, Not Automation
AI works best as a partner, not a replacement—humans should stay in control while AI supports their goals. Our design emphasizes shared decision-making and mutual contribution.
2. The Cost of Convenience
AI overuse can lead to deskilling, where key cognitive abilities fade. We highlight the importance of maintaining human connection and long-term awareness.
3. AI as a Teammate, Not a Teacher
AI is most effective as a learning partner—handling repetitive tasks while users retain creative and critical control. We frame AI as a peer, not an authority.
4. Preserve the Process, Not Just the Output
True learning happens through reflection and iteration, not instant answers. Our experience design encourages users to document, explain, and own their thinking.
Ideation
This next phase marked a pivotal moment in our project—when my team and I aligned on a design direction centered on a storytelling-based prototype. As passionate admirers of narrative-driven experiences, we were drawn to the idea of using interactive stories to engage users. We researched existing games and websites that featured branching narratives, gathered inspiration, and compiled our findings on the FigJam board.
As a warm-up exercise for story-building, we put a twist on the classic Crazy 8’s and ran a Crazy 3’s instead—each teammate quickly sketched 3 characters and 3 scenarios to explore potential narrative flows. It was a fast, creative exercise that helped us generate ideas and align on tone, and honestly, it was the most fun part of the process so far.
Whiteboarding
Next, we dove into an even more exciting phase—hours of brainstorming story logistics, layouts, scenarios, and characters. This collaborative sprint helped us shape the narrative structure, define key decision points, and breathe life into the world we were building.
Back to Ideation: Fleshing Out Story Development
We returned to ideation with a sharper focus—this time, to deepen the narrative logic and embed meaningful consequences tied to AI usage. We revisited each core theme and analyzed how over-reliance or avoidance of AI could affect specific skills like critical thinking, creativity, emotional resilience, and learning aptitude.
With new grads as our primary audience, we prioritized scenarios that highlighted these trade-offs in relatable ways. This allowed us to evolve each story path to reflect cascading skill effects—for example, consistently outsourcing creative decisions to AI might later result in weaker design abilities. These decisions shape the player’s journey toward different endings, each paired with personalized takeaways based on their AI habits throughout the game.
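Though the prototype itself lived entirely in Figma, the branching logic we designed can be modeled as a small data structure. Below is a hypothetical TypeScript sketch of how choices, cascading skill effects, and personalized endings could fit together; the skill dimensions mirror the ones we identified above, but the field names, deltas, and takeaway rule are illustrative stand-ins, not our actual game data:

```typescript
// Hypothetical model of the branching story; names are illustrative, not actual game data.
type Skill = "criticalThinking" | "creativity" | "emotionalResilience" | "learningAptitude";

interface Choice {
  label: string;                           // e.g. "Let AI draft the design brief"
  effects: Partial<Record<Skill, number>>; // cascading skill deltas for this decision
  next: string;                            // id of the next story node
}

interface StoryNode {
  id: string;
  scene: string;     // narrative text shown on this screen
  choices: Choice[]; // an empty array marks an ending
}

// Player state accumulates the consequences of every decision.
interface PlayerState {
  skills: Record<Skill, number>;
  history: string[]; // chosen labels, used for personalized takeaways
}

function applyChoice(state: PlayerState, choice: Choice): PlayerState {
  const skills = { ...state.skills };
  for (const [skill, delta] of Object.entries(choice.effects)) {
    skills[skill as Skill] += delta ?? 0;
  }
  return { skills, history: [...state.history, choice.label] };
}

// At an ending, pair the outcome with a takeaway based on accumulated habits.
function pickTakeaway(state: PlayerState): string {
  return state.skills.creativity < 0
    ? "Consistently outsourcing creative decisions to AI weakened your design instincts."
    : "You kept creative control while letting AI handle the repetitive work.";
}
```

Modeling endings as nodes with no outgoing choices keeps the story graph uniform, and tracking the decision history makes end-of-game takeaways straightforward to personalize.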


Prototyping: Low-fidelity
We began low-fidelity story-building by mapping out how the digital screens might function—focusing on the structure, flow of content, and key decision points without getting caught up in visual design. The goal was to ensure each screen supported the core themes from our research while avoiding information overload and maintaining narrative clarity.
Iteration development v1:

Iteration development v2:

Prototyping: High-Fidelity
In our final rounds of iteration, we narrowed our narrative focus from four storylines to three—eliminating the "high school" use case. We ultimately decided that individuals under 18 fell outside our target audience of young adults, and designing for minors would introduce legal complexities.
View Figma Prototype here.

Usability Testing
We recruited usability participants via our personal networks. The following questions were asked in our usability tests:

Testing feedback was then organized in a spreadsheet.

Some key findings from our usability testing are listed below. Interestingly, two of these insights contradict each other—highlighting an opportunity for future testing to explore these tensions further, if time and resources permit.
Scores alone are not a sufficient indicator of educational value.
Users wanted more practical examples drawn from daily life.
A score shown at each plot point is valuable as immediate feedback (sketched below).
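As context for that last finding, here is a minimal sketch of how per-plot-point score feedback could work, building on the hypothetical model from the ideation section; the flat-sum aggregation and the messages are assumptions, not the prototype's actual formula:

```typescript
// Hypothetical immediate-feedback calculation; the flat sum is an assumed weighting.
function scoreDelta(choice: Choice): number {
  return Object.values(choice.effects).reduce((sum, d) => sum + (d ?? 0), 0);
}

// Surface the delta right after each decision, as testers found valuable.
function feedbackMessage(choice: Choice): string {
  const delta = scoreDelta(choice);
  return delta >= 0
    ? `+${delta}: this habit builds your skills over time.`
    : `${delta}: watch for early signs of over-reliance.`;
}
```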

In Retrospect
Most challenging parts of the project:
A lack of quantitative data on human–AI co-collaboration meant we had to rely on qualitative feedback to evaluate the tool’s impact on AI literacy.
Designing a resource broad enough to apply to diverse workflows, trust levels in AI, and varying levels of AI knowledge.
Maintaining user engagement was difficult due to the abstract nature of the topic.
On key outcomes and lessons learned:
After multiple rounds of ideation, we pivoted to an interactive storytelling format, which increased user engagement in testing sessions by 75% compared to static formats.
Through 4 moderated user tests, we received consistent qualitative feedback that users preferred learning through narrative immersion and appreciated seeing the consequences of their choices.
Gamification of an idea increases user engagement and interest—turning abstract concepts into active, decision-based experiences that feel more approachable and enjoyable.