By Peter Ramsey

19 Sep 24

ChatGPT · 11 min read

The Complexity Paradox of ChatGPT, AI and UX


How do you teach people to use something that goes through step-change improvements every 6 months?

Put it like this: there's a clue that ChatGPT isn't as intuitive as a traditional search engine.

It's that companies are hiring for a skill known as "prompt engineering".

In short: the change in complexity of AI is outpacing behavioural evolution—by a lot.

It's harder to build predictable and transparent experiences when even literal computer scientists can't predict the outcome of a prompt, or explain it to someone on the train.

This is a case study about ChatGPT. 

But more so, it's about core principles of design that are being stretched and tested by AI.

And I think it explains the strategy behind Apple Intelligence.

Case study

[Slideshow: a 105-slide case study, not reproduced here]

That’s all for the slideshow, but there’s more content and key takeaways below.

A deeper dive

Sources and black boxes

Imagine searching on Google to find out when the Oasis tour starts.

I know you're probably skim reading this. But for 30 seconds, slow down and try to recognise some of the subconscious behaviour going on.

You've seen this results page:

[Image: Google's results page for an Oasis tour search]

Subtle pieces of information, combined with your own context, help you build an answer that you're somewhat confident in.

Confidence and predictability create comfort.

For example:

  • I've heard of The Guardian, so that's likely accurate
  • But it looks like there's an inconsistency
  • Ticketmaster is a very reliable source, because they actually sell the tickets

Still with me? Okay.

This same task on ChatGPT outsources that subconscious process, and spits out a single answer.

In other words, it'll attempt to complete that for you.

There's a faulty assumption that when people use the internet, the only relevant piece of information is the result.

But let's give this example to ChatGPT.

It'll draw from Wikipedia and a celebrity blog that I've never heard of.

null image

Even if the answer were the same, and despite showing its sources, you wouldn't feel as confident about it.

We're biased to place greater trust in a human process than in a more sophisticated automated or hidden one.

For example, it's a big source of resistance to fully autonomous driving.

Now consider that with these newer models, the "chain of thought" is intentionally masked.

A second layer of abstraction.

Using the Oasis example, o1 Preview will "confirm the tour status", and then get the answer wrong.

[Image: o1 Preview "confirming the tour status" before giving the wrong answer]

This isn't just a short-term accuracy problem; it's at the core of how we use products.

As we outsource this "thinking", we lose the subconscious processes that make experiences feel comfortable.

And that might be a big problem for these wrappers that utilise OpenAI's API.

Designers need to embrace the technology with open arms, and then immediately bear-hug the complexity to death with more context. Not less.
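As a sketch of what "more context, not less" could look like inside one of those wrappers, here's a minimal, hypothetical design in Python. Almost everything in it is an assumption for illustration: search_web() is a stubbed stand-in for a real retrieval step, the model name is a placeholder, and the fake Ticketmaster result is exactly that, fake. The only real API used is the OpenAI SDK's chat completions call.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_web(query: str) -> list[dict]:
    """Hypothetical retrieval step, stubbed for illustration.
    In a real product this would hit your own search index or a search API."""
    return [
        {
            "title": "Ticketmaster",
            "url": "https://example.com/oasis-tour",  # fake URL
            "snippet": "Oasis tour dates and tickets...",
        },
    ]


def answer_with_sources(question: str) -> dict:
    sources = search_web(question)
    numbered = "\n".join(
        f"[{i}] {s['title']} ({s['url']}): {s['snippet']}"
        for i, s in enumerate(sources, start=1)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using only the numbered sources below, "
                    "cite them inline, and flag any inconsistencies "
                    "between them.\n\n" + numbered
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    # The design decision: return the sources *with* the answer, so the
    # UI can render them next to the result instead of hiding them.
    return {
        "answer": response.choices[0].message.content,
        "sources": sources,
    }
```

A UI built on that return shape can restore some of the subconscious checks from the Google example: familiar source names, visible disagreements, and a reason to feel confident (or not).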

This is why Apple embedding AI into very specific features makes a lot of sense.

And over time, having a single generic chat interface that completes your maths homework, books you an Uber and writes poetry for your spouse, probably doesn't.

P.S., If you want to dive more into the UX of chatbots, I've got you covered 🤖.

UX Exercise


This is ChatGPT's "chain of thought".

Which psychological bias is most at play here?

[Image: ChatGPT's chain-of-thought summary]
  • Cognitive Dissonance
  • The Labour Illusion
  • Cognitive Drift
  • Knowledge Gap



All of the UX analysis on Built for Mars is original, and was researched and written by me, Peter Ramsey.
