Mar 28, 2018

Read time: 3 mins

Pound of Sense: Fear and Loathing of AI

In Fear and Loathing in Las Vegas, Hunter S. Thompson created an indelible image of a drug-fueled voyage to America’s most sinful of sin cities. Enshrined as the character Uncle Duke in Garry Trudeau’s Doonesbury, Thompson’s vision plays out as an ode to the individual’s endless capacity for self-reinvention. Fun, perhaps, but what’s interesting is where Thompson originated the phrase “fear and loathing”: he first used it in a letter to a friend, describing his feelings towards the assassin of John F. Kennedy.

I recently spoke to a group of senior financial services executives about artificial intelligence and its uses, particularly where it intersects with personal data. Their reactions to AI fell into three camps, with fear and/or loathing a surprisingly strong undercurrent:

The Superusers:

A small minority were advanced privacy experts. They had migrated their personal technology to secured “black” phones without location services, DuckDuckGo for search, and similar privacy-preserving, AI-shielding measures. Fear, yes, but with a measured response that sounds less and less paranoid the more I understand about how companies use personal data and AI today.

The Utilitarians:

A respectable proportion made regular use of AI for financial applications and had otherwise resigned themselves to personal data leakage: their information would be out in the datasphere, and various types of AI would process and act on it. (In daily practice, I tend to fall into this camp.)

The Unaware:

I was a bit surprised at how many had an irrational fear of using artificial intelligence to become more competitive in the marketplace, even as they use it every day in their personal lives.

“Oh, no, that’s creepy,” say the Unaware, while happily giving Google their most sensitive personal information to get a faster commute, allowing Amazon to put a listening device in their bedrooms so they can order new razor blades by voice instead of typing on a computer, and empowering Facebook to programmatically deliver censored, dopamine-enhancing propaganda to their brains because it’s more comfortable than, say, The Economist.

In a different discussion this year, a major bank executive decided not to improve his bottom line by $1 billion because he thought it would be “bad to secretly track personal mobile phones” (news flash: that’s not what our solutions do, but in his mind “artificial intelligence” and “consumer data” were instantly conflated with George Orwell). He happily uses and misuses personal information in a variety of ways, and his organisation reportedly contributes to the financing of terrorists (owing to its inability to properly distinguish bad guys from good guys) – but suggest that machines help people in the process and suddenly it’s “Big Brother”, “Skynet”, etc.

Awareness is a curious thing. Somehow “friendly” AI like that used by Google and Facebook (even with its corrosive impact on Western societies) is OK, but begin to explain a little more about what AI can do, so that decision makers can elect to employ it in other ways, and suddenly it becomes the weapon of a police state, a tool of oppression, an invasion of privacy, and the trigger for other strong emotional reactions. It doesn’t help that some governments are using it for exactly the kind of society-scale control these individuals fear.

How can we move people from fear and loathing to understanding and application?

These fearful executives seem not to connect the dots between their own daily use of AI and the moral equivalence of using other (less invasive) applications in their work. The answer is not to ban AI, and it’s not to stick their heads in the sand either, because the challenger institutions – the fintechs and Big Techs (FANGs, BATs) coming for their core books of business – are very actively using AI to deliver better service at lower cost.

What is to be done?

Education, certainly, but no one likes to be lectured to.

How can we move them to a more enlightened response?

Or will they fade into irrelevance and obsolescence, like the “innovation” executive at a major automaker a couple of years ago who declared people would “never” use autonomous cars because they like driving his company’s vehicles so much? (A year later, his employer announced they’d have full autonomy by 2021.)

 

The views expressed in this column are those of David Shrier, and may not reflect those of Saïd Business School, University of Oxford or its faculty.

Filed under: Career advice