The year of AI through a User Researcher’s eyes

Sofya Bourne
8 min read · Dec 30, 2023

It was early March, and I was sitting by a rooftop pool in Santiago, Chile, showing my husband ChatGPT for the first time. I’d been hearing about it for a few months, but it was around that time that my social feeds suddenly became overrun with thought leaders peddling the latest prompt hacks and telling me I was already using ChatGPT “wrong”. I created an account and was instantly hooked.

Fast forward nine months, and every tech meetup I go to in New York nowadays is chock-full of AI VCs and AI entrepreneurs. There’s been an explosion of AI startups, and even my least tech-savvy relatives have played with ChatGPT or Bard by now.

2023 has truly been the year of AI. Or, perhaps, the first year of AI is a more appropriate way to describe it — the cat’s out of the bag, there’s no going back.

I’ve been fascinated by this awesome tech both as a user and as a UX professional, so I thought it would be fun to pull together some thoughts and observations on generative AI as a year-end post.

It’s the end of December, I’m halfway through my two-week holiday, so I’m keeping this very light and loose with two highlights, two lowlights and two questions for 2024. Oh, and there’s a fun bonus right at the end: the most delightful AI user experience I came across this year. I hope it delights you, too!

Let’s start with some of the things I’ve loved about generative AI this year.

Highlight 1: It helped me conquer quantitative data

Earlier this year, I had to learn Looker and work with large volumes of quantitative data in Google Sheets for an analytics-heavy project I was working on. It was the first research project I led while having access to ChatGPT, and I quickly found that having an AI companion by my side as I mounted those steep learning curves significantly accelerated my progress.

With the help of ChatGPT, I was able to bypass hours of digging through Google and YouTube to understand how certain features in Looker work, brush up on statistics (grad school flashbacks! FUN!), and easily decipher some new business and finance jargon I was encountering in my research.

I also began to leverage ChatGPT to write Google Sheets formulas and custom Looker queries, something that’s now become habitual whenever I’m working with quantitative data.
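To give a flavor of what that looks like in practice, here's the kind of formula a quick back-and-forth with ChatGPT might produce (the column layout and criteria here are hypothetical, just for illustration): summing the values in column C for rows where column A matches a segment and column B falls on or after a given date.

```
=SUMIFS(C2:C100, A2:A100, "Enterprise", B2:B100, ">="&DATE(2023,1,1))
```

The point isn't that the formula is hard to write, it's that describing what I need in plain English and getting working syntax back removes the friction that used to send me down documentation rabbit holes.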

Previously, I’d always struggled to reconcile my strong belief in the power of mixed methods with my limited ability to use mixed methods in my work, largely due to my qualitative research skills being more developed than my quantitative skills. ChatGPT helped me narrow this skills gap significantly this year and did so much quicker than I expected.

Highlight 2: It helped me sharpen and refine my thinking and communication

“Does this make sense?” is something I now ask AI on a regular basis. I also frequently use it to help me think through possible blind spots, assumptions and biases in my thinking.

As a researcher on a distributed team, I often miss having team members close by to sense-check my reasoning or understanding of a new topic and poke holes in my ideas on the fly.

This is also true for my personal life. I’ve spent most of 2023 travelling, which meant being away from my friends and community. Being able to brainstorm approaches to difficult conversations or sense-check my read on some interpersonal situations with a neutral third party has been extremely useful in this context.

I now regularly use AI tools like ChatGPT and Pi.ai as a cognitive checks and balances system of sorts. They don’t always get things right, and, of course, hallucinations and the fact that they’re not, you know, human, limit how much they can help in certain situations. But the fact remains: these tools have helped me think and communicate better this year.

There are also ways in which I wanted to use AI tools this year but couldn’t because the technology is just not quite there yet.

Lowlight 1: AI for user research is still pretty useless

The promise of AI as part of the user research practice is enormous and so exciting, but I've yet to see it materialize.

The research platform I use for my job rolled out AI features as early as June. I even joined their user research efforts as a participant ahead of the release because I really wanted their AI tooling to be good.

Sadly, despite much fanfare, the performance of these features has been disappointing. Even basic tasks like generating interview summaries or grouping coded data by meaningful themes are a tall order at the moment, with results often lacking any utility to my research process.

Not to mention that what I really want to use AI for as a user researcher isn’t interview summaries at all. The future I get excited about is when AI tools can be leveraged to augment user research in ways only a machine can do, like quickly trawling through vast amounts of customer data held across multiple siloed departments and tools to identify patterns and trends in customer needs that any one person would be unable to find.

I’m sure this will be a reality soon enough, but we’re not there yet, and that’s disappointing.

Lowlight 2: AI tool usability still has a long way to go

Every AI tool I’ve tried this year has had a laundry list of usability quirks and pain points that need resolving.

To give just one example, I find it incredibly annoying that ChatGPT can't remember what we talked about between conversations. And while it does save every chat in a side panel in chronological order unless told otherwise, it lacks a system for keeping track of these chats. Sure, I can name them, but am I really going to scroll through months of saved conversations to find the one I'm looking for? Is anyone?

Pi.ai, on the other hand, won't let me create separate conversations at all. So it can remember what we talked about days and weeks ago, even when I definitely don't. Asking it to remind me of a vague idea or a random thought I mentioned some weeks ago is still, sadly, hit-and-miss: sometimes it remembers that specific part of our ongoing discussion; other times, it doesn't.

These are just two personal pet peeves that are top of mind for me as I write this post, but the reality is that the user experience of all AI tools is still in its infancy, especially if we consider the challenge of mass adoption. Sure, 100 million people use ChatGPT every week, but if AI tools are to live up to their potential, companies will need to figure out how to make them usable not just by early adopters and tech enthusiasts but by everyone, everywhere. How does the user experience of AI tools need to evolve for adoption to reach its first billion users? And the billion after that?

This brings me to two broader questions I’m looking forward to exploring more in the new year.

Question 1: How can we better align user expectations with AI capabilities?

I suspect that the frustration I feel when I want an AI tool to help me 10x my capabilities, but all it can produce is a half-coherent summary of an interview transcript, is a manifestation of some fundamental AI UX design principle we've yet to define.

I've actually seen this in my own work before. When I was leading user research for an AI-powered biometric experience, one sentiment I kept encountering in user interviews was the persistent expectation that our tech was doing something far more sci-fi and futuristic than what it was actually doing.

What generative AI tools can do is incredible and is the closest we’ve come to the realm of sci-fi in real life. But their amazing abilities are also precisely why it’s so easy for users to imagine AI capabilities to be way more advanced and broad than they are at present. This mismatch between users’ expectations and the reality of AI capabilities can lead to user frustration, disengagement and mistrust of AI tools.

The concept of AI explainability is nothing new, but as more UX professionals get pulled in to work on AI-powered experiences, I expect it to become increasingly central to our work. After all, it's critical to our ability to design experiences that feel safe, reliable and trustworthy as well as easy, quick and delightful.

Question 2: How will user-centered design evolve in the age of AI?

Since the early days of UX as a distinct area of software development, our purpose as designers and researchers has been to figure out how to enable humans to interact better with machines. But now the machines themselves are changing. Suddenly, they can speak our language — all of our languages, in fact — in an increasingly naturalistic way. We can now talk to computers much like we talk to other humans, without resorting to predetermined commands or buttons. What’s more, each human-AI interaction is hyper-personalized and unpredictable, resulting in a multitude of possible user journeys and experiences. How do we design for that?

This fundamental transformation of the very nature of the relationship between humans and machines is bound to lead to major shifts in user-centered design and research practices.

But this is no cause for panic. Instead, let’s get curious.

How can we expand our focus from guiding users’ interactions with machines to guiding machines to better interact with humans? How can we better understand and inform the design of AI models that will power the experiences we build? How can we help ensure that the new AI experiences are not just easy and delightful but also safe and trustworthy? How can we learn to work with new stakeholders in our organisations? How can we adapt our methods and practices to an entirely new set of possibilities and constraints inherent to AI-powered experiences? There’s so much to figure out for us as UX professionals!

I hope 2024 will be the year we start engaging with these questions in order to collectively set the right precedents for what user-centred design looks like in the age of AI.

This post is already long enough, but before I wrap up, I wanted to share one utterly delightful AI experience I stumbled upon this year. I won't describe it, because I think everyone needs to experience it for themselves. All I'll say is that no other AI experience I've tried this year has done more to inspire me to envision the amazing possibilities of the future we're all hurtling towards.

Sound intriguing? Here's what you need to do to try it out for yourself:

  1. Download the Pi.ai app on your phone.
  2. Start a chat — you’ll see a little phone button in the text input bar (an interesting choice for a mental model to leverage in the design of this particular feature, but that’s a topic for another time).
  3. Press the phone button, and have fun!

What have been your highs and lows of exploring generative AI tools this year, whether you’re in the UX field or not? Let me know in the comments below, or message me on LinkedIn to chat more.
