Can Artificial Intelligence Systems be Trusted?
Panel Organiser: Andrew Lea
This is one of those interesting questions in which theoretical considerations have strong practical and ethical implications.
To trust computer software ordinarily means to be confident that its output is correct. But in Artificial Intelligence, it is often hard to define, or even agree, what “correct” might be, and to measure how incorrect an output is. Is this AI summary of a document correct? If not, how erroneous is it? Perhaps we need to abandon notions of “correctness” for AI and replace them with measures of usefulness.
So if we cannot judge whether AI is correct because we cannot define “correct”, is it possible, or even meaningful, to “trust” AI at all? Perhaps we should measure utility instead and insist on “explainability”?
From a practical perspective, what should we do about self-driving cars if explanation-generating techniques are outperformed, in terms of accident rates, by black-box neural nets?
And ethically, can we rely on deep AI even to tell the truth? Pursuing its own agenda - or even ours - it might decide that the best strategy is to manipulate us with a deliberate lie! Should we be happy to interact with a technology that might lie to us, or should we insist on interacting with people who, after all, never lie?
Given these complexities, perhaps our trust in an AI is contingent on whose hands it is placed in. Perhaps such vicarious trust is the best, or even the only, way ahead.
The panel and conference delegates will consider this question from these, and no doubt other, perspectives.
Mr. Richard Ellis, RKE Consulting
Prof. Udo Kruschwitz, University of Essex
Mr. Andrew Lea, Amplify Life
Prof. John McCall, Robert Gordon University Aberdeen
Dr. Simon Thompson, BT