
Critical Decision Concept in the Algorithmic Accountability Act

The Algorithmic Accountability Act of 2022 applies to systems that help make, or that themselves make or recommend, “critical decisions”.

Whether something counts as a “critical decision” determines whether a system is subject to the Act. Hence the interest, below, in the definition of “critical decision”.

The Act defines a critical decision as follows.

CRITICAL DECISION.—The term “critical decision” means a decision or judgment that has any legal, material, or similarly significant effect on a consumer’s life relating to access to or the cost, terms, or availability of—
(A) education and vocational training, including assessment, accreditation, or certification;
(B) employment, workers management, or self-employment;
(C) essential utilities, such as electricity, heat, water, internet or telecommunications access, or transportation;
(D) family planning, including adoption services or reproductive services;
(E) financial services, including any financial service provided by a mortgage company, mortgage broker, or creditor;
(F) healthcare, including mental healthcare, dental, or vision;
(G) housing or lodging, including any rental or short-term housing or lodging;
(H) legal services, including private arbitration or mediation; or
(I) any other service, program, or opportunity decisions about which have a comparably legal, material, or similarly significant effect on a consumer’s life as determined by the Commission through rulemaking.

In general, a decision is a commitment to a course of action. Committing to one course of action means the absence of commitment to the other, alternative courses of action that were available. In decision theory and decision analysis, a decision is the result of a procedure that involves collecting and structuring information about the objectives of the decision, defining criteria for comparing alternatives, comparing the alternatives, and committing to one among those the decision maker considered.
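As a minimal sketch of this decision-theoretic view (the alternatives, criteria, and weights below are hypothetical, not taken from the Act), a decision procedure can be reduced to scoring alternatives against weighted criteria and committing to the best-scoring one:

```python
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    scores: dict[str, float]  # score per criterion, higher is better

def decide(alternatives: list[Alternative], weights: dict[str, float]) -> Alternative:
    """Commit to the alternative with the highest weighted score.

    Committing to one alternative is, by construction, the absence of
    commitment to every other alternative that was considered.
    """
    def weighted_score(alt: Alternative) -> float:
        return sum(weights[c] * alt.scores.get(c, 0.0) for c in weights)
    return max(alternatives, key=weighted_score)

# Hypothetical example: choosing among loan offers on two criteria.
offers = [
    Alternative("offer_a", {"low_rate": 0.9, "flexibility": 0.3}),
    Alternative("offer_b", {"low_rate": 0.6, "flexibility": 0.8}),
]
print(decide(offers, {"low_rate": 0.7, "flexibility": 0.3}).name)  # offer_a
```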

[Image: Fragment of The Massacre of the Innocents by Peter Paul Rubens, between 1609 and 1611, via Wikipedia]

The list of decision categories in the definition is straightforward. While it may be interesting to look for an important category that was left out, the catch-all in item (I) addresses this: through rulemaking, virtually any decision can come to be designated a critical decision.

Rulemaking will also have to establish how to assess the significance of a decision’s effect on a consumer.
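To make the two-part structure of the definition concrete, here is a minimal sketch in Python. The enum mirrors categories (A) through (I); the boolean significance test is my own placeholder for what rulemaking still has to define, not anything specified in the Act:

```python
from enum import Enum, auto

class CriticalCategory(Enum):
    """Categories (A)-(I) from the Act's definition of "critical decision"."""
    EDUCATION = auto()            # (A) education and vocational training
    EMPLOYMENT = auto()           # (B) employment, workers management, self-employment
    ESSENTIAL_UTILITIES = auto()  # (C) electricity, heat, water, internet, transportation
    FAMILY_PLANNING = auto()      # (D) adoption or reproductive services
    FINANCIAL_SERVICES = auto()   # (E) mortgage companies, brokers, creditors
    HEALTHCARE = auto()           # (F) including mental, dental, vision
    HOUSING = auto()              # (G) housing or lodging, including rentals
    LEGAL_SERVICES = auto()       # (H) including private arbitration or mediation
    OTHER_BY_RULEMAKING = auto()  # (I) catch-all, as determined by the Commission

def is_critical_decision(category: CriticalCategory | None,
                         has_significant_effect: bool) -> bool:
    """A decision is critical only if BOTH conditions hold: it falls in a
    listed category AND it has a legal, material, or similarly significant
    effect on a consumer's life."""
    return category is not None and has_significant_effect
```

The sketch makes visible that the harder of the two tests, significance, is exactly the one the statute leaves to rulemaking.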

There are somewhat less obvious points to raise in relation to the definition.

A decision can have critical consequences without itself meeting the critical decision criteria. Consider a system that continuously provides biased information to a consumer, where each individual piece of information is insufficient to lead to a decision, but the accumulation of such information over time triggers one. Wouldn’t an Instagram feed qualify, if the recommendations in the feed gradually reshape the viewer’s perception of what it means to be beautiful, leading to a decision to undergo plastic surgery? Having the system decide to show one image rather than another, simply on the basis of community recommendations, seems innocuous; yet many such choices shape taste, and can eventually lead the consumer to decisions that are damaging to, say, their mental health, which falls under item (F) of the definition.
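To see how many individually negligible nudges can add up to a consequential shift, here is a toy simulation; the per-item bias, the noise, and the action threshold are arbitrary illustrative numbers, not empirical claims about any real feed:

```python
import random

def perception_after_feed(n_items: int, bias: float, noise: float = 0.01,
                          seed: int = 0) -> float:
    """Toy model: each recommended item nudges a consumer's perception
    by a tiny biased amount. No single nudge is significant, but the
    drift accumulates over many items."""
    rng = random.Random(seed)
    perception = 0.0
    for _ in range(n_items):
        perception += bias + rng.gauss(0.0, noise)
    return perception

THRESHOLD = 1.0  # hypothetical point at which the consumer acts on the shift
for n in (10, 100, 1000, 10000):
    p = perception_after_feed(n, bias=0.001)
    print(f"{n:>5} items: drift={p:+.3f}  acts={p >= THRESHOLD}")
```

With these made-up numbers, no single item moves perception by more than a tiny fraction of the threshold, yet a few thousand items reliably cross it.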

When designing a system that provides these seemingly low-impact recommendations, it is not trivial to argue either way. The claim that the system does not make or influence critical decisions is easy to attack, since firm evidence for it is absent at design time. The opposite claim is also difficult to accept, as it would require hypothesizing how many small actions, over time, may influence human behavior.

Another less obvious challenge with the critical decision concept is the need to agree on what constitutes an acceptable explanation of cause and effect between the information the system provides and the effects, positive or negative, that someone claims a decision has. In other words, there could be a system that performs a function for an intended customer, while someone else, who does not qualify as an intended customer, is impacted in some way and argues that this impact is caused by the system. Two claims then need to be argued: first, that there is an unintended impact on the party that is not the intended audience; second, that this unintended impact is in fact caused by the system. I discussed the quality of explanations in another note. For the designer of a system, the issue is the investment required to identify unintended consequences, and the uncertainty about how much of that effort is needed, given the variability of the criteria that determine what qualifies as a sufficient explanation of cause and effect.
