
Opaque, Complex, Biased, and Unpredictable AI

Opacity, complexity, bias, and unpredictability are key negative nonfunctional requirements to address when designing AI systems. Negative means that less is better: if one design reduces opacity, for example, relative to another, the former is preferred, all else being equal.

The first step is to understand what each term refers to in general, that is, irrespective of what the AI is for. In this context, it is useful to substitute “decision process” for AI, a decision process being one that yields a recommendation on a decision to make (see my note on why AI does not decide).

A decision process is:

  • opaque, if it is impossible or costly to produce explanations for why an option was recommended over the alternatives;
  • complex, if, to come up with a recommendation, the process requires many inputs and many transformations of, and computations from, those inputs;
  • biased, if the data used to develop (learn) the relationship between inputs and outputs is not representative of the population of inputs and/or outputs;
  • unpredictable, if the same inputs can yield different recommendations (see the sketch after this list).
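
To make the last point concrete, here is a minimal, hypothetical sketch in Python (all option names and scores are illustrative): a decision process that scores options deterministically but samples its recommendation from a softmax over those scores, so repeated calls with the same input can return different options.

    import random
    from math import exp

    # Hypothetical toy recommender: the scores are fixed, but the
    # recommendation is sampled from a softmax over them, so the same
    # input can yield different recommendations on repeat calls.
    def recommend(scores: dict[str, float], temperature: float = 1.0) -> str:
        weights = {option: exp(s / temperature) for option, s in scores.items()}
        total = sum(weights.values())
        r = random.uniform(0, total)
        cumulative = 0.0
        for option, weight in weights.items():
            cumulative += weight
            if r <= cumulative:
                return option
        return option  # fallback for floating-point edge cases

    scores = {"approve": 1.2, "review": 1.0, "reject": 0.3}
    print([recommend(scores) for _ in range(5)])  # e.g. ['approve', 'review', 'approve', ...]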

Nonfunctional requirements are by definition impossible to satisfy in some ideal way; instead, they generate preferences over alternative designs.

Opacity is reduced by designs that improve:

  • Data governance, leading to a better understanding of what data is used, why, how, and of various data quality dimensions;
  • Transparency of steps taken to produce the decision, or transparency into the algorithm(s) that process data to produce recommendations (a minimal sketch of recording such steps follows this list);
  • Transparency into the mechanism that ensures incentive alignment between various stakeholders of the decision process that AI supports.
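
As a sketch of what the first two items could look like in code (all names and fields here are assumptions, not a prescribed schema), a system can attach an audit record to every recommendation, capturing which data was used, which processing steps ran, and which model version produced the output; such records are the raw material for later explanations.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class RecommendationRecord:
        # Hypothetical audit record, one per recommendation: what data
        # was used, which steps ran, which model version produced it.
        recommendation: str
        inputs: dict
        data_sources: list[str]
        steps: list[str] = field(default_factory=list)
        model_version: str = "unversioned"
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    record = RecommendationRecord(
        recommendation="review",
        inputs={"credit_score": 640, "income": 52000},
        data_sources=["loans_2023.csv"],
        steps=["impute_missing", "normalize", "score", "threshold"],
        model_version="risk-model-v2",
    )
    print(record)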

Complexity is reduced by designs that enable description and inspection of dependencies between the variables used in decision making. As with opacity, transparency into the steps taken to make recommendations also helps.
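
A minimal sketch, assuming the dependencies between variables can be written down explicitly (all variable names are illustrative): represent them as a graph, so anyone can inspect what a recommendation ultimately depends on, and count those dependencies as a rough complexity measure.

    # Hypothetical dependency map: which variables feed which derived
    # variables in the decision process.
    DEPENDS_ON = {
        "risk_score": ["debt_to_income", "payment_history"],
        "debt_to_income": ["monthly_debt", "monthly_income"],
        "recommendation": ["risk_score", "requested_amount"],
    }

    def upstream(variable: str) -> set[str]:
        # All variables that directly or indirectly feed `variable`.
        seen: set[str] = set()
        stack = list(DEPENDS_ON.get(variable, []))
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(DEPENDS_ON.get(v, []))
        return seen

    # One rough complexity measure: how many variables the final
    # recommendation ultimately depends on.
    print(sorted(upstream("recommendation")))
    print(len(upstream("recommendation")))  # 6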

"Pear Tree (Gustav Klimt) , BR66.4,” Harvard Art Museums collections online, Mar 28, 2024, https://hvrd.art/o/299849.
“Pear Tree (Gustav Klimt) , BR66.4,” Harvard Art Museums collections online, https://hvrd.art/o/299849.
Why have Klimt’s Pear Tree in this note? It is beautiful, but complex; any logic for why it looks the way it does is opaque; and where each leaf sits relative to another follows no pattern, i.e., it is unpredictable.

Bias is reduced by ensuring that the population described by the input and output training data is well understood, both in terms of the properties that the sample (the training data) suggests for that population, and the properties expected of the population independently of the sample. This in turn depends on a combination of knowledge of, or expertise with respect to, the relevant properties of the population, and the ability, perhaps creativity, to identify bias (see, for example, [1]).
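
A minimal sketch of that sample-versus-expectation comparison, assuming group membership is observable in the training data and population shares are known from an independent source such as census figures (all numbers are illustrative):

    # Compare group proportions in the training sample against the
    # proportions expected of the population from independent knowledge.
    sample_counts = {"group_a": 820, "group_b": 130, "group_c": 50}
    expected_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

    total = sum(sample_counts.values())
    for group, count in sample_counts.items():
        observed = count / total
        expected = expected_share[group]
        # Crude flag: a group represented at less than half its
        # expected share is a candidate source of bias.
        flag = "UNDERREPRESENTED" if observed < 0.5 * expected else "ok"
        print(f"{group}: observed {observed:.2f} vs expected {expected:.2f} -> {flag}")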

Predictability is addressed by providing evidence of the consistency of output over time and for the same inputs. Depending on the design of the system that supports the decision process, this can involve anything from formal proofs that certain properties of the input/output relationship are preserved, to probability estimates of consistent output given repeated identical inputs and/or variations in inputs. If that sounds feasible, it is, but it becomes much more difficult if the decision process is sensitive to context and gets personalized over time.
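
A minimal sketch of the probability-estimate end of that spectrum (the decision process here is a hypothetical stand-in): run the process repeatedly on identical inputs and report the share of runs returning the modal recommendation, where 1.0 means fully predictable.

    import random
    from collections import Counter

    def consistency_estimate(process, inputs, runs: int = 1000) -> float:
        # Fraction of runs that return the modal output for fixed inputs:
        # 1.0 for a deterministic process, lower for a noisier one.
        outputs = Counter(process(inputs) for _ in range(runs))
        modal_count = outputs.most_common(1)[0][1]
        return modal_count / runs

    # Illustrative stochastic decision process: same input, noisy output.
    def noisy_process(inputs: dict) -> str:
        return "approve" if inputs["score"] + random.gauss(0, 0.5) > 1.0 else "review"

    print(consistency_estimate(noisy_process, {"score": 1.1}))  # e.g. 0.58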

References

  1. Hofmann, Valentin, et al. “Dialect prejudice predicts AI decisions about people’s character, employability, and criminality.” arXiv preprint arXiv:2403.00742 (2024). https://arxiv.org/abs/2403.00742
