
Why Specialized AI Should Be Certified by Expert Communities

Should the explanations an Artificial Intelligence system provides for its recommendations or decisions meet a higher standard than the explanations a human expert could provide for the same?

I wrote separately, here, about the conditions that good explanations need to satisfy. These conditions are very hard to meet, in particular the requirements that an explanation describe causal relationships, refer to related explanations, and be understandable to people.

If we imposed such requirements on explanations of outputs from artificial intelligence systems with billions or trillions of parameters estimated through statistical learning, it is likely that few, if any, such systems could satisfy them. See the note here.

Should we be imposing the same difficult requirements on AI? 

One line of argument is to ask this: do the people you take advice from produce good explanations for the advice they give you, or the decisions they make on your behalf?

That’s unlikely, as only explanations developed through science will meet many, though not always all, of the requirements for good explanations. Think about medical advice you may have received in the past: was it backed by elaborate explanations? My personal experience is of receiving relatively poor explanations from the physicians who advised me. Then there is another problem: I can certainly ask questions about the advice I get, but however many questions I ask, I will reach a barrier, namely that I do not have the training, skills, and experience of the individual advising me (assuming they are a professional). In other words, there is a limit to how well I can understand the explanation, and to how well I can judge its quality.

Winslow Homer, Breaking Storm, Coast of Maine, 1894, https://www.artic.edu/artworks/16779/breaking-storm-coast-of-maine

Perhaps the principal reason one takes advice is the trust one has in the individual giving it. This could come from past experience with that person, or from the authority conferred by a social structure one chooses to trust, e.g., the medical profession, a position as a doctor at a hospital, and so on.

If the degree to which I’m willing to take advice depends on the trust I have in the advisor, this raises the question of what factors affect trust in an AI that provides advice. In turn, we have a chicken-and-egg problem: if an AI cannot explain well, it is hard to trust, and if it is hard to trust, its advice will not be sought; and yet the people who provide the same advice may be equally unable to produce trustworthy explanations.

This gets resolved through use over time and the buildup of experience with the outcomes of the AI’s advice. Which brings us back to the same question: should we be imposing such difficult requirements for good explanations on AI?

Good explanations can be aspirational, but they are not practical, just as it is not practical to expect every doctor to explain a condition with the same level of sophistication and to ensure their explanations meet every criterion for a good explanation.

A practical solution is certification by expert communities: lawyers would certify AI that delivers legal advice, medical professionals would certify AI that delivers medical advice, and so on. This creates other problems, namely that certification adds friction to innovation and raises barriers to entry. There is a paradox here, of wanting to move fast to create something that cannot be trusted, when the rationale for moving fast in the first place is to capture a market. If the market is made of people, as it usually is, then delivering faster advice they won’t use isn’t very clever.

At the same time, the impact of one incompetent doctor or lawyer is, perhaps not infinitely, but certainly significantly lower than the negative impact an incompetent AI can have: scale is the critical difference between one incompetent human advisor and an incompetent AI. The damage an AI can do, then, may well justify expecting it to explain its advice or decisions far more rigorously than we would expect of a human.
