June 24, 2019

Learned carelessness

Here’s a scary thought. When their satnav says there is a road but all they can see is water, drivers will believe the satnav rather than their own eyes, drive in, and end up having to be rescued.

Luckily, the results of such satnav errors are usually not drastic; they simply start the conversation after a late arrival: “You won’t believe where the satnav took us!”

But the phenomenon behind these stories – automation bias, or “learned carelessness” – is a serious problem. Confronted with a ‘black box’ whose workings we don’t understand, and which seems on the whole to be reliable, we humans switch off, stop monitoring, and stop thinking. “Computer says no.”

There are ways to prevent this.

You can de-mystify the ‘black box’, so people understand it is part of a system designed and built by humans to achieve particular ends; you can frame the information the system provides as support or advice rather than instruction; and you can engage the human brain by making the human do some of the work – especially where other humans are involved.

Automation is great, but I want the best of both worlds: the machine’s help, and a human who is still paying attention.

Thanks to James Bridle for sparking this one.