Back in May, at Google’s I/O Developer Conference, the company demonstrated its new Duplex system, an AI-powered virtual assistant that makes phone calls to organize your schedule for you. The audience watched a recording of Duplex making bookings at a restaurant and hair salon. They laughed in surprise when it ‘mmm-hmmm’d its way through the conversation, apparently convincing the person on the other end of the phone that they were, in fact, talking to a fellow human being.
This unexpectedly convincing demonstration set social media buzzing – and in the process, it raised a question. Do Duplex’s capabilities for reading and sending conversational signals show that a machine is capable of empathy? This is one of the most critical questions in the developing debate around AI, its role in society, and the extent to which it will disrupt creative industries such as marketing.
Can a machine have empathy? The three responses
I wanted to get a sense of the different opinions out there – and so I decided to ask the question in my LinkedIn feed: Can a machine have empathy? It was the start of a fascinating discussion stream with some intriguing responses. Broadly speaking, these answers fell into three categories. I think these are a pretty good representation of the views that professionals have of AI’s capabilities – and their views about how those capabilities could be used.
The first type of response is that yes, it can – or yes, it will – because AI is ultimately capable of anything the human brain is capable of. As one of the commenters on my question put it:
“Empathy can be programmed like us. We are machines… the brain is a very good computer but still any ordinary analogically programmed Quantum computer.”
The second response is that no, it can’t, because empathy is a uniquely human characteristic and not something a machine is capable of experiencing:
“Empathy entails not only a sense of self, but also experiencing the emotions of someone else (more or less)—to feel another’s pain… We do not understand consciousness in humans, let alone possess the ability to create it—with verification—artificially.”
The third type of response is particularly intriguing. It’s a question of its own: if a machine appears to have empathy, does it really matter if that empathy is real or not? It amounts to functionally the same thing, whether that machine is feeling the same emotions as us or merely deducing those emotions from the signals we send, and coming up with the most appropriate response:
“let's imagine we can't tell the difference if it is genuine or not, because a robot has learned the mimic and structure of empathic behaviour, are we still able to look at the robot as a machine?”
I’m writing this post to share my own view, but also to answer the question raised in the third type of response that I received. Does the distinction between real and ‘artificial’ empathy matter? I believe that it does. Especially in marketing.
Why machines are incapable of true empathy
First though, let’s go back to the original question: can a machine have empathy? I’ll put my cards on the table here. I don’t think this is a matter of opinion – and I don’t think it’s one of those questions where the answer may change in the future. A machine cannot have empathy by definition. It comes down to what empathy is – and what a machine is.
The full definition of empathy in the Oxford English Dictionary is this: “the power of mentally identifying oneself with (and so fully comprehending) a person or object of contemplation.” Machines cannot mentally identify themselves with human beings because what goes on in the mind of a human being involves things that a machine can never experience for itself, no matter how advanced and deep-learning-driven its own processes might be. For the same reason, a machine will never fully comprehend a human being. As we discuss the role of AI in society in general, and in marketing in particular, it’s important to be clear about why this is.
Feeling machines that think
The neuroscientist Antonio Damasio describes it like this:
“we are not thinking machines that feel; rather, we are feeling machines that think.”
Human consciousness involves a lot, lot more than rational cognition. In fact, our capacity for rational thought is a byproduct of the other aspects of our consciousness – not our brain’s driving force.
Our conscious life is driven by the way that we experience the world through our senses: a combination of sight, sound, touch, taste and smell that no machine will ever experience in the same way. It’s also driven by powerful biological impulses and needs. No machine will ever feel what it means to be hungry or thirsty; no machine is moved and motivated by the drive to have sex and all the attendant emotions that spin around it; no machine fears homelessness or feels the intense vulnerability that comes from fear for your physical safety.
Finally, and no less significantly, our consciousness is shaped by the collective intelligence and cultural memory that comes from being part of the human race. We have collectively channelled our shared emotions and sensory experiences into stories, conversations, shared jokes, sarcasm, symbolism and incredibly subtle psychological signals for many thousands of years. That same collective intelligence develops ethics and values that we can all instinctively agree with; it makes sense of money and systems of fair trade; it agrees on concepts that aren’t logically concrete but are perfectly solid in our minds.
Nothing else communicates like human beings – and human beings communicate with nothing else the way they do with one another. This is significant, because the only way to acquire a share in our collective intelligence is to be interacted with as a human being yourself. Unless we engage with machines in the same, full way that we do with other human beings, this collective experience and intelligence is simply not available to them. They are not part of the empathy club.
Artificial Intelligence doesn’t replicate human intelligence
When people talk about the human brain operating like a computer or about AI learning in the same way that a human being does, they are guessing. In fact, they are part of a long tradition of guessing at how the brain works – and what really makes our consciousness tick.
Whenever we invent a new technology, there’s a powerful temptation to start using that technology as an analogy for how the brain functions. When we harnessed electricity, we started talking about electric currents in the brain; when we invented the telegraph, we decided the brain worked by sending signals. Every time you talk about the cogs whirring away trying to figure something out, you’re harking back to the era when we invented clockwork and became convinced that something similar was going on inside our heads. The conviction many people now have that the human brain works like a computer (and is therefore primarily a logic machine) is just our latest guess. We really don’t know how the brain works, or how that working produces our consciousness. It’s therefore highly unlikely that we replicated the human brain when we invented computers – or developed AI.
These are the reasons why I agree passionately with the second of the responses to my question in the LinkedIn feed. When we claim that a machine can feel empathy, we’re guilty of reducing the immense, mysterious workings of the human brain and human consciousness down to something that can be understood, replicated and mimicked through a machine driven by logic. It’s not so much that we’re overestimating the capabilities of AI – it’s that we’re severely underestimating how complex our own capabilities are.
What’s the difference between Artificial Empathy and the real thing?
That brings me to the second question – does it matter that Artificial Empathy isn’t true empathy if it still interacts with us in the same way? I believe it matters a lot. If we proceed with AI on the basis that it doesn’t, the implications will be huge.
Artificial Empathy works by observing, learning, responding to and replicating the signals that people send. As deep-learning AIs evolve, and as they are able to work on larger and larger data sets, they’ll get better and better at doing this – at producing the appearance of empathy. However, true empathy involves a lot more than merely observing and responding to emotional signals, no matter how many of those signals you have to work with. Why? Because the signals that people send are a tiny fraction of the internal narrative that they experience. You and I are both far more than the sum of what other people can infer by watching what we do and say. We contain capabilities, emotions, memories and experiences that influence our behaviour without ever coming to the surface. They have to be intuited without ever actually being observed.
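To make the distinction concrete, here’s a deliberately crude sketch of what signal-mapping ‘empathy’ looks like in principle. Everything in it – the cue lists, the canned responses, the function names – is hypothetical and invented for illustration; it is not how Duplex or any real system is built. The point is structural: the machine matches observable cues to pre-learned responses, and anything that never surfaces as a cue simply does not exist for it.

```python
# A toy illustration of 'artificial empathy' as pure signal-mapping.
# All cues, responses and names here are hypothetical examples for
# illustration only - not how Duplex or any production system works.

from typing import Optional

# Observable surface cues the system has 'learned' to recognise
EMOTION_CUES = {
    "stressed": ["overwhelmed", "so busy", "stressed"],
    "happy": ["great", "wonderful", "thanks so much"],
    "frustrated": ["ridiculous", "fed up", "annoyed"],
}

# Pre-learned response templates, one per recognised signal
RESPONSES = {
    "stressed": "No problem at all - take your time.",
    "happy": "Glad to hear it!",
    "frustrated": "I'm sorry about that. Let me see what I can do.",
    None: "Okay.",  # nothing recognisable observed: neutral fallback
}

def detect_signal(utterance: str) -> Optional[str]:
    """Return the first emotion whose surface cues appear in the text."""
    text = utterance.lower()
    for emotion, cues in EMOTION_CUES.items():
        if any(cue in text for cue in cues):
            return emotion
    return None  # whatever the speaker feels but doesn't signal is invisible

def respond(utterance: str) -> str:
    """Map the observed signal straight onto a canned response."""
    return RESPONSES[detect_signal(utterance)]

print(respond("Sorry, we're completely overwhelmed tonight"))
# -> "No problem at all - take your time."
```

However sophisticated the real systems become, the shape stays the same: internal states that never produce an observable cue fall straight through to the neutral fallback. That gap – between reading signals and feeling anything – is the gap this post is about.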
Beyond rationality: what human empathy is capable of
Human intelligence is so powerful because it is not limited to rational thinking. The other elements of our consciousness enable us to deal with the inherent unpredictability and ambiguity of the world around us. They enable us to make decisions on the basis of shared values and motivations that resonate collectively, and to know what is right without having to figure out what is right. Empathetic human intelligence can feel what it is like to be sad and what it is like to be happy – and it allows those feelings to sway its judgments and its behaviour towards others. A machine couldn’t do that, even if it wanted to.
Things become complicated when machines start taking decisions that have profound consequences, without the emotional context and shared values that all humans use when making such decisions. This was one of the key themes in the piece that Henry Kissinger recently wrote for The Atlantic on the implications of AI. Take the self-driving car that must choose between killing a parent and killing a child. Will such a machine ever be able to explain to human beings why it makes the choice that it does? And if it’s not required to explain actions with human consequences in human terms, what becomes of our system of ethics and justice? It will need to be rewritten, simplified and stripped of emotion in order to accommodate such machines. As a result, it will feel less representative of us as human beings.
Beware a Narrow AI definition of marketing
A similar process would occur if we substituted artificial empathy for human empathy when it comes to marketing. AI can impersonate human interactions, but with a far narrower understanding of what’s going on than a human being would have. We have to bear this in mind when we choose the role that AI should play in engaging with audiences or directing marketing strategies.
Google’s Duplex may have the appearance of empathy, but that empathy is strictly limited to what’s relevant to the task at hand: completing a restaurant booking, for example. It’s not trained to detect any emotion outside of this – or to readjust its behaviour on that basis. If the person on the other end of the phone sounded disorganised and stressed, could Duplex respond? Could it make them feel better? Could it thereby charm them into somehow finding a slot at a busy time? And from the restaurant’s point of view, will the person making the booking be as likely to actually turn up – or will they feel less obligation to do so, since they never actually spoke to the restaurant themselves? There’s a lot more to human conversation than exchanging information efficiently – and that’s where the implications of real and artificial empathy start to become particularly significant.
It’s not just one-to-one conversations that are affected by the difference between real and artificial empathy. It’s also the conversations that you hold with the market and your audience in general. Marketing is the process of creating a proposition that has value for people, and for which they will exchange value. Up to now, marketers and their audiences have been able to feel that value in broad and varied terms that reflect what it means to be alive. Brands and their products and solutions provide functions and services, but also reassurance, confidence and certainty; a sense of support and potentially even belonging. And don’t think I’m just talking about consumer brands here. B2B marketing addresses some of the most powerful motivations and emotions that a human being can feel: around security, hopes for the future, the ability to provide for others, personal value and worth.
If we start to hand fundamental strategic decisions about marketing to AI, then the definition of value will narrow with startling speed. It will be based around what can easily be observed, measured and communicated – the kinds of things that machines can feel artificial empathy for. It will offer efficient optimisation of particular aspects of a marketing proposition – but the risk is that it ignores the other elements that engage human consciousness in different ways. Smart B2B marketers know the dangers of talking about price when their buyers really want reassurance on value. They know the importance of instilling confidence over and above simply describing product features. Perhaps most importantly, they know that what a buyer describes as being the basis for their decision is often not the only basis for their decision. It’s not just what’s observable that matters.
Does AI make better judgments – or just more efficient ones?
Much of the fear that people express about AI involves being replaced by a superior form of intelligence that can think in ways that we can’t conceive of and outcompete us in almost every role we can imagine. I believe that the real danger is subtly different: that we downgrade our own intelligence and unique capacity for empathy because a far narrower artificial version is capable of doing some things in a more efficient way. Unless they are fully aware of these risks, organisations that plan on unlocking vast new forms of competitive advantage through AI could end up narrowing the scope of what they are capable of instead.
I work for LinkedIn, which is itself owned by Microsoft: two businesses that are developing exciting applications of AI but which also spend a lot of time thinking about how that technology can be ethically used, and what impact it has on society. Microsoft thought leaders talk about building self-limiting considerations into AI systems, for example, describing AIs that know “when they need to get out of the way.” That’s hugely important at all levels of marketing and business.
There are exciting times ahead for applying AI in marketing, including applications that can detect emotional signals at scale and provide us with new depths of audience understanding. AI tools can make us, as marketers, more responsive to our audiences on an emotional level – but only if we see them as an input for human empathy rather than a substitute for it. The secret to making best use of artificial empathy will be recognising its limitations compared to the real thing. Effective leadership in an age of AI involves recognising that a world of sensory, emotive, complex and conscious beings cannot be navigated by logic and observation alone.