Technology and Trust

By TMA World

I’m not sure about the navigation system in my car. The next right turn sometimes turns out to be someone’s driveway, or that right turn is now a one-way street, and frantic screams from my wife let me know that I’m going the wrong way! Sometimes the map doesn’t show I’m on a road at all, or the road configuration on the screen bears no resemblance to what I’m actually seeing through the windshield.

I’m sure human error on my part plays into the farcical situations I find myself in, but surely the machine is supposed to know better and give me the right advice! To be fair, the navigation system has also saved me a few times when my usually reliable sense of direction has let me down.

This raises a question in my mind about the role of machines in the workplace, and whether we will ever be able to trust them. It’s a complicated issue, because trust in everyday life – without machines – is complicated.

There is no doubt in my mind that AI (artificial intelligence) will help us make better decisions. César Hidalgo, a physicist at MIT, coined the term ‘personbyte’ to describe the amount of knowledge any one person can reasonably be expected to hold. Given the complexity of the problems we face, and the huge amounts of data we generate, the ‘personbyte’ is looking more and more inadequate. Do we have any choice but to learn to trust our machines?

I recently read an interesting article in InformationWeek by David Wagner, a specialist in business and technology issues. Apparently, we don’t trust machines because of “algorithm aversion” or “algorithm avoidance”. In a study at Wharton, participants were rewarded for making good predictions. They could rely on their own predictions or on those made by an algorithm. The algorithm repeatedly made better predictions, yet participants who saw it make a single error would distrust the technology, even though they had made multiple errors of their own. Part of this distrust, it seems, is that the machines are reacting to information we can’t see ourselves, or applying rules we had no part in creating.

Ironically, if we are involved in helping robots, we put more trust in them. A study at the University of Massachusetts Lowell asked participants to guide robots through a slalom course. They could steer the robots with a joystick, let the robots navigate on their own, or use some combination of the two. The robots were much faster in automated mode, but what the participants didn’t know was that the robots were programmed to make mistakes. If a robot made a mistake, the participants would give up on the flawed machine. Interestingly, some robots were also programmed to express doubt: when a robot wasn’t sure which way to go, its face would change from happy to sad. If the robot showed doubt, the participants were more likely to trust it to figure things out. These small human touches seem to help us trust the machine.

In studies in the Netherlands and the U.S., researchers have looked at trust in relation to self-driving cars. In the U.S. experiments, the talking ‘driver’ was named Iris and given a friendly female voice. People were more likely to trust Iris than an unnamed machine.

My navigation system has a friendly female voice, except when she sounds exasperated that I won’t make a U-turn when she wants me to. I’ve tried giving the system a name, but that hasn’t seemed to help our relationship. I’ll make sure my next navigation system has more human characteristics and less of an all-knowing, “I’m perfect” attitude. Maybe then we’ll get along!


Interested in how introducing a cultural intelligence tool into your business could help create a more borderless workforce? We’d love to show you our groundbreaking platform.