Expected Uncertainty to Unexpected Utility

If a decision process is designed according to the expected utility (maximization) model, then the choice of an option is explained by that option having the highest expected utility among the considered options. Consequently, decision governance over such a decision process needs to help the decision maker predict future states, assess their probabilities, and prepare for unexpected events, in order to maximize utility.

This text is part of the series on the design of decision governance. Decision governance consists of guidelines, rules, and processes designed to improve how people make decisions. It can help ensure that the right information is used, that that information is correctly analyzed, that participants in decision making understand it, and that they use it before they make a decision. Find all texts on decision governance here.

If a decision process is designed according to the expected utility model, three questions arise: one, how does the model explain decisions? Two, how can that process be influenced through governance? And three, how does this differ from governing a decision process modeled after classical utility maximization?

The expected utility maximization model, or simply the expected utility model (see Briggs, 2023 for an overview of key ideas), is the simplest decision-making model in which options have uncertain outcomes. It is therefore also the simplest model in which to examine how decision governance can address the uncertainty of options.

In the expected utility model, we have the following:

  • One decision maker
  • Options that the decision maker chooses from
  • Alternative future states that influence the outcome of decisions
  • The probability that each future state occurs, with the sum of the probabilities of all considered future states being equal to 1
  • For each option, its utility in each future state
  • The assumption that the decision maker will choose the option with the highest expected utility over all considered future states

We assume that the decision is made as follows:

  1. A decision maker identifies options.
  2. The decision maker predicts future states for each option.
  3. The decision maker assigns a probability to each future state of an option, such that the sum of probabilities over all future states for an option is equal to 1.
  4. The decision maker assigns a utility to each option in each of its considered future states.
  5. The decision maker calculates the expected utility of each option by summing, over all of that option’s future states, the product of the option’s utility in a state and the probability of that state.
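The five steps above can be sketched in a few lines of code. The function names, data layout, and the tolerance in the probability check are illustrative choices, not part of the model's definition:

```python
def expected_utility(probabilities, utilities):
    """Steps 3-5 for one option: sum of probability-weighted utilities."""
    if abs(sum(probabilities) - 1.0) > 1e-9:
        raise ValueError("probabilities of an option's future states must sum to 1")
    return sum(p * u for p, u in zip(probabilities, utilities))

def choose(options):
    """The assumed choice rule: pick the option with the highest expected utility."""
    return max(options, key=lambda name: expected_utility(*options[name]))

# Two hypothetical options, each with two future states:
options = {
    "safe":  ([0.9, 0.1], [12, 12]),   # same utility in both states
    "risky": ([0.5, 0.5], [10, 20]),
}
print(choose(options))  # "risky": EU = 15 beats EU = 12
```

The choice rule in `choose` is exactly the assumption listed earlier: among the considered options, the one with the highest expected utility is selected.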

Consider the following example.

In The Iliad by Homer, Achilles, the Greek hero, decides to withdraw from battle after a dispute with Agamemnon, the leader of the Greek forces during the Trojan War. Agamemnon, having been forced to return his war prize, Briseis, to appease the gods, demands Achilles’ prize as compensation. In response, Achilles, feeling dishonored and enraged by Agamemnon’s actions, chooses to withdraw himself and his troops from the fighting, despite the consequences for the Greek army.

Let’s make up some hypothetical probability and utility values for Achilles, and see how the model would explain his decision.

  1. Identifying Options: Achilles has two options:
    • Option 1: Remain in battle.
    • Option 2: Withdraw from battle.
  2. Predicting Future States: For each option, Achilles considers two possible future states:
    • Future States for Option 1 (Remain in battle):
      • State 1.1: The Greek forces win the war, but Achilles feels dishonored. Probability: p_1.1 = 0.6.
      • State 1.2: The Greek forces lose the battle despite his efforts, and Achilles’ honor remains tarnished. Probability: p_1.2 = 0.4.
    • Future States for Option 2 (Withdraw from battle):
      • State 2.1: The Greek forces struggle and suffer significant losses without Achilles’ support, but Agamemnon’s leadership is questioned. Probability: p_2.1 = 0.7.
      • State 2.2: The Greek forces adapt and eventually find a way to win without Achilles, and his honor remains intact. Probability: p_2.2 = 0.3.
  3. Assigning Utilities to Each Option in Each Future State: Achilles assigns utility values to each outcome based on how much he values each future scenario.
    • Utilities for Option 1 (Remain in battle):
      • Utility of State 1.1: U(1.1) = 30 (value of Greek victory with dishonor).
      • Utility of State 1.2: U(1.2) = 10 (value of Greek loss with dishonor).
    • Utilities for Option 2 (Withdraw from battle):
      • Utility of State 2.1: U(2.1) = 50 (value of Greek struggle with Agamemnon’s leadership questioned).
      • Utility of State 2.2: U(2.2) = 40 (value of Greek victory without dishonor).
  4. Calculating the Expected Utility of Each Option:
    • Expected Utility of Option 1 (Remain in battle):
      • EU(1) = (p_1.1 * U(1.1)) + (p_1.2 * U(1.2))
      • EU(1) = (0.6 * 30) + (0.4 * 10)
      • EU(1) = 18 + 4 = 22
    • Expected Utility of Option 2 (Withdraw from battle):
      • EU(2) = (p_2.1 * U(2.1)) + (p_2.2 * U(2.2))
      • EU(2) = (0.7 * 50) + (0.3 * 40)
      • EU(2) = 35 + 12 = 47
  5. Decision: Based on the above, the expected utility of remaining in battle is 22, while the expected utility of withdrawing from battle is 47. According to the expected utility model, Achilles would choose to withdraw from the battle because it maximizes his expected utility.
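The arithmetic in steps 4 and 5 can be reproduced directly. The snippet below is a plain transcription of the example's hypothetical probability and utility values:

```python
def expected_utility(probs, utils):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in zip(probs, utils))

# Option 1: Remain in battle (states 1.1 and 1.2)
eu_remain = expected_utility([0.6, 0.4], [30, 10])    # 0.6*30 + 0.4*10 = 22
# Option 2: Withdraw from battle (states 2.1 and 2.2)
eu_withdraw = expected_utility([0.7, 0.3], [50, 40])  # 0.7*50 + 0.3*40 = 47

best = "Withdraw" if eu_withdraw > eu_remain else "Remain"
print(best, eu_remain, eu_withdraw)  # Withdraw 22.0 47.0
```

Any change to the assumed probabilities or utilities would change these numbers, which is exactly why governance targets how they are produced.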

Explaining a decision in the expected utility model is more complicated than in the classical utility maximization model (see the text here). Criteria, preferences, and criteria importance are still present, as there would otherwise be no utility, but uncertainty makes the information about options more complicated.

The choice of an option depends on the future states of options, their utility, and the uncertainty associated with them; probability is simply a measure of that uncertainty. The figure below shows this complication. The reason why, say, Option 1 is chosen is that it has the highest expected utility. That answer immediately raises further questions: not only which preferences, criteria, and criteria importance were considered, but also which future states of the option were predicted, and what uncertainty they were assumed to have.

The next figure, below, shows the same elements in the same diagram I used in previous texts, for other decision models.

In the text on the classical utility maximization model, here, I discussed some common ways to influence which options are considered, as well as preferences and criteria. To influence the prediction of future states and the evaluation of their uncertainty, typical strategies include the following.

  1. Data and Analytics: We can require that data be collected and analyzed in specific ways ahead of decision making, both before options are defined and once they are known.
  2. Training and Experience: Training in decision analysis methods such as scenario planning and decision trees can be helpful when thinking through potential future states and their probability. Domain expertise plays a critical role.
  3. Bias Reduction: Debiasing strategies and nudges help mitigate cognitive distortions such as overconfidence and anchoring, improving probability assessments.
  4. Expert and Group Input: Methods such as the Delphi technique and structured group discussions help gather and aggregate judgments from multiple informed participants.
  5. Scenario Planning and Stress Testing: Scenario planning encourages a broader view of possible outcomes, while stress testing evaluates decision performance under less likely conditions.
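As a small illustration of stress testing (strategy 5), one can sweep the assumed probabilities in the Achilles example and check whether the conclusion flips. This sweep is a hypothetical sketch, not part of the original example:

```python
def expected_utility(probs, utils):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in zip(probs, utils))

EU_REMAIN = 22  # expected utility of remaining in battle, from the worked example

# Sweep the assumed probability of state 2.1 (Greek forces struggle)
# from 0 to 1 and check whether withdrawing still beats remaining.
robust = True
for tenths in range(0, 11):
    p = tenths / 10
    eu_withdraw = expected_utility([p, 1 - p], [50, 40])
    if eu_withdraw <= EU_REMAIN:
        robust = False
print(robust)  # True: withdrawing wins for every probability in the sweep
```

Here the conclusion is robust for a simple reason: both of the withdrawal option's state utilities (50 and 40) already exceed the expected utility of remaining (22), so no reassignment of probabilities alone can flip the choice. A fuller stress test would therefore also vary the assumed utilities.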
References:

  • Briggs, R. A. (2023). “Normative Theories of Rational Choice: Expected Utility.” In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Winter 2023 Edition). URL = <https://plato.stanford.edu/archives/win2023/entries/rationality-normative-utility/>.
  • Gelman, A., et al. (2013). Bayesian Data Analysis. CRC Press.
  • Linstone, H. A., & Turoff, M. (1975). The Delphi Method: Techniques and Applications. Addison-Wesley.
  • Tversky, A., & Kahneman, D. (1974). “Judgment under Uncertainty: Heuristics and Biases.” Science, 185(4157), 1124–1131.