Picture the scene. You’re critically ill. The choices you face are all time consuming, physically demanding or worryingly expensive. You’re understandably nervous, but ready to discuss your options with an experienced consultant. Then someone hands you over to a computer.
In this scenario, how much trust would you place in a machine?
Independent studies (from the Regenstrief Institute and researchers at MIT’s Computer Science and Artificial Intelligence Laboratory) confirm that machine-learning tools improve rates of diagnosis in certain cancers by 42%. You may not object to finding out that an open source algorithm was responsible for your diagnosis, but what about taking the next decision, or the next?
Robear is a collaboration between the Riken Brain Science Institute and rubber manufacturer Sumitomo Riko [notice the patient maintaining eye contact with the artificial face of the robot assistant]
With fewer doctors than ever, rapidly ageing populations, and health systems struggling to cope, for many societies this is now less science fiction and more scientific reality.
Across the world, health providers now invest in robotic systems for surgery, in telemedicine that extends geographic reach and patient awareness, and even in automated monitoring of care regimes and medication for the elderly.
Let’s put healthcare to one side for a moment.
Now imagine your mortgage repayments have suddenly gone up. What would you do? Where would you turn to for advice? Familiar with online banking, you might seek help from an online advisor.
How would you feel if, instead of a customer service representative, you were confronted with a computer, largely intent on establishing your ability to meet the repayments and avoid defaulting in the medium to long term?
Such automated interactions are no longer the stuff of dreams. European banks, aided by digital consultants such as Accenture, already provide interactive banking that allows customers to request, self-select and switch services according to their needs. So, any discomfort we might feel today is less and less likely to be the result of technical unfamiliarity alone. And, in the case of certain subjects, the response is more surprising than you might think.
When making decisions that directly impact our personal lives – in moments of pleasure, or extreme stress – we prefer to interact with humans. Biological creatures, as much like ourselves as possible. This is known as the mere-exposure effect (or familiarity principle), by which people develop a subconscious preference for things simply because they are more familiar.
In itself this may not surprise you, but learning that we can already place near equal value on the relationships we form with machines might.
Recently, researchers at Kyoto University uncovered the first neurophysiological evidence of humans’ ability to empathise with robots. The results suggest that we do indeed connect with humanoid robots in a similar emotional way to how we bond with our fellow humans – and that we even have the capacity to fall in love with them, despite knowing they have no feelings and cannot reciprocate.
Our relationships are framed, not by naked assessment of artificial intelligence alone, but by the nature of our needs.
When choice is ours to make, even if that choice has the potential to be unpleasant, we are more likely to trust artificial intelligence that has a human-like form or appearance. Bad news, tough choices and complex decision-making can all benefit from being framed by a human face.
Where choice is beyond our control, and more likely to produce a detrimental or unwelcome outcome, we may process information and make decisions faster, and accept them more readily, if the technology driving us to engage lacks human form or personality.
Think about this for a moment. See yourself, struggling to pay for a simple car park ticket. Are you more likely to set aside your frustrations and complete the process faced with a screen containing only basic instructions and numbers, or when guided by an interactive avatar that cheerfully regurgitates a demand for money, irrespective of your objections? Which are you more likely to feel is ignoring your feelings?
Perhaps most surprising of all, findings from the Texas-based Organisational Wellness and Learning Systems reveal that when we interact with machine-like intelligence, rates of honesty ‘…are especially strong when the information is illegal, unethical, or culturally stigmatised.’
Debt, for example, sits at the centre of these findings. Feelings of shame and failure are typically magnified when discussing our actions with strangers – a bias that also occurs if asked to interact with artificial intelligence in human form. However, remove the human facade and automated tools provoke extreme honesty. Why? In short, because we don’t view machines as judgmental, but as more ethical.
For banks, healthcare providers, even law enforcement agencies, this suggests that accurate and honest conversations can be encouraged by machine interfaces, while trust is more easily gained, and maintained, through real (or highly realistic) human interaction.
So, underlying emotions influence these relationships, but so does cultural framing.
As virtual reality and immersive technologies become more commonplace, and are shown to improve on our human experiences, it is entirely possible that we will consciously choose artificial interactions of many kinds over those involving our less than perfect human brothers and sisters. And place greater trust in them too.
Studies suggest this cultural framing is already shifting. Reportedly, half of Japanese adults no longer seek sexual encounters with their fellow humans. In August 2015, it was widely reported that ‘Xiaoice’, a Chinese girlfriend app, was keeping lonely hearts company on major Chinese social networking services including Weibo, used by over 700 million people.
Nor is blind faith in technology restricted to the Far East. Researchers at the Georgia Tech Research Institute in Atlanta, USA, simulated a fire and asked students to follow a machine to safety.
The robot was pre-programmed to lead them astray, rather than find the emergency exit.
Here, students exhibited ‘automation bias’: a tendency to believe in the powers of an automated process, even when no evidence of its capabilities exists. As a result, GTRI students maintained their faith in their guide, even when it broke down at critical moments, as it was designed to.
‘People think the system knows better because robots have been presented as all-knowing. So, we assume that every system will do the right thing. Since robots don’t react or judge what we say, our own biases get projected onto these automated beings and we assume they’re rooting for us no matter what.’ – Professor Alan Wagner, GTRI
In the coming years, as digital connections envelop us and deeply embedded technologies become the new norm, we will see and connect with more ‘bot’ technologies and interfaces than we can currently imagine.
For each of us, the bot will be the browser of the future. Quite what these all-pervasive interfaces will look like is as yet unclear. While this will undoubtedly grey the lines between predictable human emotion and our subconscious responses to machines, businesses that understand the subtle responses triggered by humanoid and non-human interactions will be better placed to leverage the strategic advantage of tomorrow.
With nothing to be taken at face value, designers of technology, machine learning and human experiences must recognise the fissures, and exploit the opportunities, that exist in the space between human interaction and the veneer of humanness, if their businesses are to thrive.