– Published June 20, 2019
by Andy Shora

Increasing Trust in AI

“Where is my boarding pass?”

Twenty-four hours ago, my phone suggested ‘archiving away the clutter’ from my photos. In a rush, I allowed it, as the first few thumbnails shown were receipts I wouldn’t be needing again. Fast-forward to today, and I’m at the gate at the airport, desperately trying to find a screenshot I took of my boarding pass after I checked in. This was not clutter. Lots of British mumblings and apologies followed, as I held up the line.

AI has come a long way. No longer do we imagine shiny white droids with human-like faces. Well, we might, but we’re at a stage now where suggestions to streamline various parts of our lives are constant, and frequently appear in obtrusive and confusing forms.

If a new human acquaintance offered to help organise some old receipts, only to throw away the most important travel documents I needed for my imminent trip, I would be fairly annoyed. But as a human, I possess the emotional capacity to repair our relationship and to ensure that mistake won’t happen again.

Were a machine to do this – let’s say, Microsoft Clippy – I certainly wouldn’t be so forgiving.

Why the difference? I would assume my phone would make exactly the same mistake again. I saw no feedback loop allowing the machine to learn that it had made a mistake. The fallout next time may be even worse. The trust has gone.

If a human made this error, a simple communication afterwards would allow the learning required to prevent the situation happening again. It’s this intelligence without learning which can prove to be the biggest eraser of trust.
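
What might such a feedback loop look like? Below is a minimal sketch, with entirely hypothetical names and thresholds: when I dig a wrongly-archived screenshot back out, that correction is recorded and respected on the next pass.

```typescript
// Hypothetical sketch of a correction feedback loop. None of these names
// come from a real API; they only illustrate the shape of the idea.

type Label = 'clutter' | 'keep';

interface ArchiveDecision {
  itemId: string;
  predictedLabel: Label;
  confidence: number; // the model's own estimate, 0..1
}

// Corrections recorded when the user restores an item we archived.
const corrections = new Map<string, Label>();

// Called when the user digs a wrongly-archived item back out.
function recordCorrection(itemId: string, correctedLabel: Label): void {
  corrections.set(itemId, correctedLabel);
}

// The next archiving pass respects earlier corrections, and only acts
// when the model is confident. Corrected items can also be queued as
// training examples for the next model update.
function shouldArchive(decision: ArchiveDecision): boolean {
  if (corrections.get(decision.itemId) === 'keep') return false;
  return decision.predictedLabel === 'clutter' && decision.confidence > 0.9;
}
```

Even before any retraining happens, simply honouring the correction on the next pass signals that the machine has learned – and that is what rebuilds trust.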

Trust in humans vs. trust in machines.

Trust towards humans involves me analysing qualities like intent, integrity, competence, and sincerity. These traits can change during the course of my relationship with another person. We can get ‘in sync’, ‘establish a bond’, whatever you like to call it. We subconsciously adapt our behaviour to work with other personality types, and most of us possess a level of empathy which allows us to be lenient, forgive, and ultimately trust another human.

Trust towards machines involves an entirely different set of characteristics. Empathy with a machine is out of the question – there are no feelings to consider. I’m more concerned with accuracy, reliability and consistency of output, and I definitely don’t want anything unexpected or inexplicable surfacing, which could negatively affect my life.

Misplaced trust in machines can even prove to be dangerous.

A recent accident involving a ‘self-driving’ Tesla Model X proved fatal for the driver, who assumed the car would steer itself away from an imminent collision, in spite of the manufacturer’s instructions to always keep hands on the wheel, and multiple dashboard warnings to take manual control in the six seconds before the crash.

When your life is on the line…

Imagine you’re in a hospital bed, cannula-in-vein ready to have some life-saving drugs injected.

Scenario 1: Human.

You see a nurse walk over confidently with a small kit of equipment, carrying a reassuring smile and light-heartedly asking, “How are you holding up, soldier?”. After brief small talk, they inform you of the drugs they are about to administer, explaining how you’ll be “up and about in no time”.

Scenario 2: Machine.

A white droid rolls over and informs you that it has ingested your health information from a centralised data store, analysed your metrics against those of thousands of other humans who have been in your current condition, and will shortly administer the correct amount of Penicillin to fight off your infection.

Scenario 3: Human + Machine.

The human nurse from Scenario 1 walks over, holding a tablet containing the intelligence from Scenario 2. They explain how the amount of Penicillin to be administered has been calculated using AI, based on thousands of similar cases.

Which scenario would you rather be in? Your answer will be entirely based on how you place your trust.

Reasons humans don’t trust AI.

1. AI is still largely in a mystical, fascinating phase.

It’s viewed as a wonder, as something perhaps not quite real yet. Most keynote AI presentations involve concepts for the future, which aren’t yet production-ready. We love ‘magic’ when it results in success. When it goes wrong, we feel stupid for ever trusting something from the world of fantasy.

2. Media coverage.

Catastrophic mistakes made by machines are often highlighted by the media, even when a system’s error rates are significantly lower, and its overall safety significantly higher, than human operation.

Developers of AI are desperate (and often pressured) to showcase new experimental functionality, without fully weighing the ethical consequences of their newly-programmed behaviour. There is an argument that increased media scrutiny will keep the AI industry in check, and I’m sure a lot of people would agree with that.

3. AI is portrayed as evil in Hollywood.

Hollywood films too often highlight the fine line between wonder and fear when it comes to machines taking control of important aspects of our lives.

We’re being conditioned to believe that small unexpected changes in operation (commonly called ‘glitches’), or rapid autonomous learning and evolution in machines, can easily lead to the domination, or even the painful end, of our species. Often the character responsible for death and destruction by machines is a single evil human with temporary malicious intent – something I assume we all experience at some point in our lives.

4. Users’ needs change in response to advancement in AI.

I’ve just bought the perfect holiday thanks to some suggestions which popped up in my news feed. Why are the same ads appearing over and over? Right now I need to save money, not spend more. My requirements have changed, and the machine has not adapted.

The limits of the system are very apparent to me, and I suddenly find it annoying and inaccurate. The suggestion feature is now a victim of its own success.

5. AI-generated info might not appear credible.

Significant moments in your life, such as receiving medical exam results, hurricane alerts, or even verbal warnings of an imminent collision in a vehicle, are presented with the authority and importance they deserve, inspiring us to take appropriate action. It’s the unique way this information is presented which allows us to place a heightened level of significance upon it.

The moment we remove humans from the production of health reports, tsunami predictions, or collision warnings, we have a huge responsibility to present the information to humans in a way which can elicit the same, if not an improved, behavioural response.

We need to maintain the credibility of the information, in spite of the ease and frequency of its production. Often this relates to the method in which information is presented to the user.

6. Humans’ refusal to accept redundancy.

I prefer driving cars with a manual gearbox. (That’s stick-shift, to you folk across the pond). Part of my auto-dislike comes from my love for slamming back into 2nd to overtake, or entering a cruise by slipping back into 4th gear, but I’m sure a lot of it comes from my refusal to accept that an automatic gearbox is changing gear more efficiently than I can.

This is Artificial Intelligence.

It’s mechanical, and it’s designed to change gear safely and optimally for fuel conservation. However, I’ve spent time and money learning how to yank a manual gear stick around every few seconds, and I don’t plan on giving that up anytime soon.

Similarly, I’ve spent years learning how to spell and form grammar correctly – at least, I think – so using services to replace my consciousness while I type can take a hike.

Change must happen gradually when AI is proposed to replace a manual skill that has taken humans years to learn and master, or it will be naturally resisted as a self-preservation tactic.

How to build trust in AI.

1. Remove the black box.

Add interpretability. Provide explanations. Make analysis digestible.

Netflix recommends Reservoir Dogs because you watched Goodfellas. That’s great. I’m not freaked out by the intelligence, and I don’t feel any invasion of privacy has occurred. If there is a weird recommendation, along with an explanation which reveals my wife watched a show which I didn’t, I won’t lose any trust in the system.

As models become more complex, and training data becomes larger and more diverse, it becomes more difficult to add a few words which put the user at ease when they experience AI at work.
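
To make that concrete, here’s a minimal sketch (the data shape is hypothetical) of carrying a recommendation’s input signals through to a human-readable explanation:

```typescript
// Hypothetical sketch: each recommendation carries the signals that
// produced it, so the UI can always say why it appeared.

interface Recommendation {
  title: string;
  sources: { kind: 'watched' | 'household-watched'; title: string }[];
}

function explain(rec: Recommendation): string {
  const [source] = rec.sources;
  if (!source) return `Recommended: ${rec.title}`;
  const who = source.kind === 'household-watched' ? 'someone on your account' : 'you';
  return `Recommended: ${rec.title}, because ${who} watched ${source.title}`;
}

console.log(explain({
  title: 'Reservoir Dogs',
  sources: [{ kind: 'watched', title: 'Goodfellas' }],
}));
// "Recommended: Reservoir Dogs, because you watched Goodfellas"
```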

A few examples of increasing interpretability and explaining AI in real-time:

  • Using simple text explanations for machine-curated content. “Generated”, “based-on”, “because you did this” will all help build trust by revealing the source of input data – which is often not as evil as you might first imagine.
  • A car steering wheel and dashboard switching color scheme to yellow when entering ‘autonomous mode’ helps the driver switch to a ‘monitoring mode’. (Green might otherwise suggest the driver kicks back and catches a few winks across the back seat).
  • Explaining how investment fund selections were generated based on a high risk appetite, coupled with the user’s current age and hence assumed ability to recover from a significant loss. It may also be necessary to explain how only training data from the last two major stock market crashes was used to assess the risk of ultimate ruin. This kind of explanation is wildly different to the regulators’ obligatory “Your capital is at risk” warning.

2. Manage expectations.

Don’t allow humans to slip into an unconscious haze, then be left disappointed. (Or worse, dead). Explain up front what the AI will bring to the experience.

Labelled zoning of areas which contain AI-generated content can help direct users around a UI, whether they want to confront or avoid AI. Users with more resistance to AI may prefer to spend a period of time verifying auto-generated input alongside their own before they choose to really embrace it.

Artificial personas created to output content in a more human-friendly form often introduce themselves not as all-knowing geniuses, but as helpful assistants, highly competent in mundane cognitive tasks. These voice-activated assistants start by explaining a few tasks they can perform. If they’re unsure of a result, they tell you they ‘attempted’ or ‘Googled’ something, by design, just to ensure you don’t expect too much and ask them to turn the stove off before you walk out the door.
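
A minimal sketch of that kind of expectation-setting, assuming the assistant exposes a confidence score with each answer (the thresholds and wording are purely illustrative):

```typescript
// Hypothetical sketch: hedge the assistant's wording according to how
// confident the underlying model actually is.

function phraseAnswer(answer: string, confidence: number): string {
  if (confidence > 0.9) return answer;
  if (confidence > 0.6) {
    return `I think ${answer.charAt(0).toLowerCase() + answer.slice(1)} You may want to double-check.`;
  }
  return `I'm not sure, but here's what I found when I searched: ${answer}`;
}

phraseAnswer('The stove is off.', 0.95); // stated plainly
phraseAnswer('The stove is off.', 0.75); // hedged
phraseAnswer('The stove is off.', 0.4);  // framed as a search result
```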

3. Don’t deceive.

We’ve all experienced chatbots masquerading as humans.

But we human users deserve better. We’re entitled to know when Artificial Intelligence is toying with our emotions; otherwise we can be left feeling frustrated, or manipulated.

Plus, they never seem to have the answer we’re looking for.
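
The fix is mostly design, not technology. A minimal sketch (the names and copy are hypothetical) of a bot that discloses what it is before doing anything else:

```typescript
// Hypothetical sketch: the bot's first messages disclose that it is a
// machine, set expectations, and offer a route to a human.

function openConversation(botName: string, capabilities: string[]): string[] {
  return [
    `Hi, I'm ${botName}, an automated assistant.`,
    `I can help with: ${capabilities.join(', ')}.`,
    `Type "agent" at any time to reach a human.`,
  ];
}

openConversation('Billing Bot', ['invoices', 'refunds', 'payment dates']);
```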

4. Don’t be obtrusive.

Augment/assist humans. It should be there if I need it.

Spotify’s ‘Discover Weekly’ playlist is created, then put to one side in case I fancy a listen to something new. There’s no popup in my face, announcing my manually created playlist is now sub-par.
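
That ‘put it to one side’ pattern is simple to express. A minimal sketch (the shape is hypothetical): suggestions are queued quietly, and only surfaced when the user goes looking for them.

```typescript
// Hypothetical sketch: a pull-based shelf for AI suggestions. Nothing is
// pushed into the user's face; the shelf waits to be browsed.

interface Suggestion {
  id: string;
  title: string;
  createdAt: number;
}

class SuggestionShelf {
  private shelf: Suggestion[] = [];

  // The AI adds suggestions silently, with no popup or interruption.
  add(suggestion: Suggestion): void {
    this.shelf.push(suggestion);
  }

  // Suggestions appear only when the user navigates here, newest first.
  browse(): Suggestion[] {
    return [...this.shelf].sort((a, b) => b.createdAt - a.createdAt);
  }
}
```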

Recent aviation disasters involving the difficulty of disabling an automated flight-control system (MCAS) on Boeing 737 MAX planes form a tragic example of software automation imposing itself too far. Although I am ignorant of the finer details, I am at least aware that if the risk of a stall had been suggested to the pilots – assisting their decades of experience by highlighting recovery procedures, as opposed to overriding their efforts – we may not have this example to reference today.

5. Support emotions.

Anticipate moments of frustration.

There may be moments while a car is auto-parking into a bay when you’re feeling especially tense. As your hands hover nervously above the steering wheel, your heart rate may increase as your bumper approaches the metal of the car behind.

In reality, this part is no more or less difficult for the algorithm parking the car. However, it’s a good idea to let the driver know, with dashboard visuals and sounds, that everything is going OK.

Confident language, calming colors (blue, green, white) and reassuring sounds can go a long way to relaxing humans during heightened periods of stress.
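
As a sketch of the idea (the states, colours and copy here are illustrative, not from any real system), the parking assistant can map each phase of the manoeuvre to calm, confident feedback:

```typescript
// Hypothetical sketch: map the manoeuvre's state to reassuring dashboard
// feedback, with the calmest signals reserved for the tensest moments.

type ManoeuvreState = 'aligning' | 'reversing' | 'close-to-obstacle' | 'complete';

function dashboardFeedback(state: ManoeuvreState): { colour: string; message: string } {
  switch (state) {
    case 'aligning':
      return { colour: 'blue', message: 'Lining up with the bay.' };
    case 'reversing':
      return { colour: 'blue', message: 'Reversing in. All clear.' };
    case 'close-to-obstacle':
      // The moment the driver tenses up; the algorithm is still well
      // within tolerance, so say so explicitly.
      return { colour: 'green', message: '30cm to the car behind. Everything is OK.' };
    case 'complete':
      return { colour: 'white', message: 'Parked. You can take over now.' };
  }
}
```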

Let’s take responsibility.

There are many facets to successfully developing and implementing AI in production. Truly groundbreaking innovation is happening right now, but the industry is extremely sensitive to failure.

No matter what role you play, we should all take collective responsibility for the way our products are presented to humans. Let’s all make sure AI is optimised for trust, then together we can ensure maximum advancement as a society.