The Complexity Paradox of ChatGPT, AI and UX
How do you teach people to use something that goes through step-change improvements every 6 months?
Put it like this: there's a clue that ChatGPT isn't as intuitive as a traditional search engine.
It's that companies are hiring for a skill known as "prompt engineering".
In short: the change in complexity of AI is outpacing behavioural evolution—by a lot.
It's harder to build predictable and transparent experiences when even literal computer scientists can't predict the outcome of a prompt, or explain it to someone on the train.
This is a case study about ChatGPT.
But more so, it's about core principles of design that are being stretched and tested by AI.
And I think it explains the strategy for Apple Intelligence.
Sources and black boxes
Imagine searching on Google to find out when the Oasis tour starts.
I know you're probably skim reading this. But for 30 seconds, slow down and try to recognise some of the subconscious behaviour going on.
You've seen this results page:
Subtle pieces of information, combined with your own context, help you build an answer that you're somewhat confident in.
Confidence and predictability create comfort.
For example:
- I've heard of The Guardian, that's likely accurate
- But it looks like there's an inconsistency
- Ticketmaster is a very reliable source, because they actually sell the tickets
Still with me? Okay.
This same task on ChatGPT outsources that subconscious process and spits out a single answer.
In other words, it attempts to do that evaluation for you.
There's a faulty assumption that when people use the internet, the only relevant piece of information is the result.
But let's give this example to ChatGPT.
It draws from Wikipedia and a celebrity blog that I've never heard of.
Even if the answer were the same, and despite it showing its sources, you wouldn't feel as confident in it.
We're biased to place greater trust in a human process than in a more sophisticated automated or hidden one.
It's a big part of the resistance to fully autonomous driving, for example.
Now consider that with these newer models, the "chain of thought" is intentionally masked.
A second layer of abstraction.
Using the Oasis example, o1 Preview will "confirm the tour status", and then get the answer wrong.
This isn't just a short-term accuracy problem; it cuts to the core of how we use products.
As we outsource this "thinking", we lose the subconscious processes that make experiences feel comfortable.
And that might be a big problem for these wrappers that utilise OpenAI's API.
Designers need to embrace the technology with open arms, and then immediately bear hug the complexity to death with more context. Not less.
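To make the abstraction concrete, here's a minimal sketch of what most thin wrappers do: pass the question through OpenAI's API and return a single string. It assumes the official `openai` Python SDK, and the `ask` helper is a hypothetical illustration rather than any real product's code. Everything a Google results page would have surfaced (familiar brands, conflicting dates, the actual ticket seller) is discarded before it reaches the user.

```python
# A minimal sketch of a typical "thin wrapper" over OpenAI's API.
# Assumes the official `openai` Python SDK (v1.x); `ask` is a hypothetical helper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """Return a single answer string, with no sources or confidence signals."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    # The user only ever sees this one string. The subconscious checks from the
    # Google example (recognising The Guardian, spotting an inconsistency,
    # trusting Ticketmaster) have nowhere to happen.
    return response.choices[0].message.content

print(ask("When does the Oasis tour start?"))
```

The design argument above isn't to remove the model, but to pass more of that surrounding context through the wrapper alongside the answer.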
This is why Apple embedding AI into very specific features makes a lot of sense.
And over time, having a single generic chat interface that completes your maths homework, books you an Uber and writes poetry for your spouse, probably doesn't.
P.S. If you want to dive deeper into the UX of chatbots, I've got you covered 🤖.
UX Exercise
This is ChatGPT's "chain of thought".
Which psychological bias is most at play here?