
Can an Artificial Intelligence System Decide Autonomously?

To say that something is able to decide requires that it can conceive of more than a single course of action in a situation where it is triggered to act, that it can compare these alternative courses of action before choosing one, and that it prefers one over all others as a result of that comparison.

To identify alternatives, it needs to be able to plan. To plan, it needs to be able to identify the actions it can take given the information it has about the situation, and to predict the outcomes of potential actions and combinations of actions.
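To make the planning step concrete, here is a minimal Python sketch under toy assumptions: states and actions are plain strings, and the action list and transition model are hypothetical stand-ins. It enumerates alternative action sequences and predicts an outcome for each.

```python
from itertools import product

def available_actions(state):
    # Actions the system knows it can take in this (toy) situation.
    return ["wait", "move_left", "move_right"]

def predict(state, action):
    # Stand-in transition model: the predicted outcome of taking
    # `action` in `state`.
    return f"{state}->{action}"

def enumerate_plans(state, horizon=2):
    # Conceive of more than one course of action: every action
    # sequence up to `horizon` steps, with its predicted outcome.
    plans = []
    for seq in product(available_actions(state), repeat=horizon):
        outcome = state
        for action in seq:
            outcome = predict(outcome, action)
        plans.append((seq, outcome))
    return plans

print(enumerate_plans("start"))  # 9 alternative two-step plans
```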

Assuming it has alternative plans, it will need to evaluate predicted outcomes. 

To be able to evaluate, it needs to be able to prefer, or in simpler terms, to like some outcomes more than others.

Having preferences requires one or both of: (i) experience with similar outcomes, or (ii) instructions from someone who has preferences over such outcomes.

Here, by preference, I mean a statement as simple as “Given a car, I prefer it to be black rather than white.”
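For illustration, such a preference can be represented in code as a binary relation over outcomes, stored as a set of ordered pairs; the car example above is the only content of the relation here.

```python
# A preference as a binary relation: the pair (a, b) means
# "a is preferred to b". Only the car example is encoded.
PREFERS = {("black car", "white car")}

def prefers(a, b):
    return (a, b) in PREFERS

print(prefers("black car", "white car"))  # True
print(prefers("white car", "black car"))  # False
```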

Note that I did not mention autonomy so far. All of the above can be programmed.
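As a rough sketch of that claim, the following fragment wires the pieces together: given a list of alternatives and a designer-supplied preference function (the route names and times are hypothetical), it selects the most preferred alternative. Note that the preference is hard-coded by the "designer", which is exactly the point of what follows.

```python
def choose(alternatives, prefers):
    # Walk the list, replacing the current pick whenever a later
    # alternative is preferred to it.
    best = alternatives[0]
    for candidate in alternatives[1:]:
        if prefers(candidate, best):
            best = candidate
    return best

# Hypothetical designer-set preference: shorter predicted time wins.
def prefers_faster(a, b):
    return a["predicted_minutes"] < b["predicted_minutes"]

plans = [
    {"route": "A", "predicted_minutes": 12},
    {"route": "B", "predicted_minutes": 9},
]
print(choose(plans, prefers_faster))  # route B is chosen
```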

To decide autonomously, that is, without coercion, requires that whatever is deciding formed its preferences without coercion.

Coercion, in the context of an artificial intelligence system, means that its preferences were set by something else: the people who made it, or another artificial intelligence system.

On Kawara, “One Million Years”, Photo by Janet Lindenmuth, https://en.wikipedia.org/wiki/On_Kawara

This leads to a few cases:

  • An artificial intelligence system, call it AI1, had its preferences set by the people who designed it, in which case it reflects their preferences and consequently cannot decide autonomously: every decision AI1 makes will reflect its designers’ preferences. (Or it can reflect the user’s preferences, if it is designed to do so.)
  • AI1 developed preferences through a procedure or algorithm designed by people. By definition, a preference is a relation, so to develop preferences means to have an algorithm that produces relations over some items (situations, outcomes, etc.). For example, the algorithm may observe what a user of AI1 does and generate preferences from that, i.e., it takes the user’s revealed preferences (a minimal sketch of such a procedure follows this list). But the procedure that generates preferences is itself designed by people, and therefore reflects the values its designers held. Generating preferences from observed choices may look neutral, but it is not. Suppose AI1 is able to harm people in some way; this raises the question of whether it should be designed so that it cannot generate preferences that would lead it to do harm (e.g., AI1 controls the elevators of a building and asks its human operator whether they prefer the elevator doors to be open while people are in the elevator and it is moving fast). The point is that there are no value-free design choices.
  • AI1 could be made so that it picks random actions when it needs to decide. This, too, is a value-laden design choice: it implies that all outcomes are equally preferred, or it simply reflects careless design.
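To make the second and third cases concrete, here is a minimal sketch (the elevator-style option names and observations are illustrative): a procedure that builds a preference relation from observed user choices, and a random chooser. Both procedures are themselves designed, which is the point of the argument.

```python
import random

def revealed_preferences(observations):
    # Each observation is (chosen, rejected); record the user's
    # revealed preference as an ordered pair in a relation.
    return {(chosen, rejected) for chosen, rejected in observations}

observations = [("doors closed while moving", "doors open while moving")]
prefs = revealed_preferences(observations)
print(("doors closed while moving", "doors open while moving") in prefs)  # True

# Third case: deciding at random, which implicitly treats every
# outcome as equally preferred. That is itself a value-laden choice.
print(random.choice(["doors open while moving", "doors closed while moving"]))
```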

Above, you can replace AI1’s designers with another artificial intelligence system, AI2, but the same observations apply to AI2, and it does not matter in this argument how many steps removed AI1 is from the people whose values are reflected throughout AI1, AI2, and so on.

Consequently, an artificial intelligence system cannot decide autonomously. All its choices are determined by its designers, even if the designers can only assign probabilities to the potential choices such a system might make.

The important issue for designers of artificial intelligence systems, then, is predicting the possible actions the system may take, and influencing the probability that some actions will be performed and others will not. Part of this concerns how to do risk management and quality control well. This is where draft regulations such as the Algorithmic Accountability Act are going: they force the creation of quality management systems for products that include artificial intelligence systems.
