Drawing Attention to Known vs Unknown Goals

If we need to design governance that influences attention, it matters whether we know the decision maker’s goal. This text presents a simple simulation that illustrates the difference in the time it takes the decision maker to reach the goal in the two cases, all else being equal.

This text is part of the series on the design of decision governance. Decision governance is the set of guidelines, rules, and processes designed to improve how people make decisions. It can help ensure that the right information is used, that this information is correctly analyzed, that participants in decision making understand it, and that they use it before making a decision. Find all texts on decision governance here.

If the designer knows the goal that the decision maker is pursuing, the problem consists of designing rules that steer the decision maker towards the known goal. This was already illustrated in the Governance of attention case, in another text, here.

What if the designer does not know the decision maker’s goal? The general idea is that governance should involve a feedback loop that helps the decision maker in their search. This can work as follows. 

  1. Governance includes a hypothesis about what the goal may be, or about its characteristics, in case the designer does not want to guess the exact goal. 
  2. Governance includes a procedure that steers the decision maker’s attention towards the hypothesized goal, in the cheapest and/or fastest way to determine whether that is in fact the goal.
  3. If step 2 ends with the goal being reached, the search stops; if not, governance needs a procedure that sets a new hypothesis about the goal and initiates a new search.
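The loop above can be sketched in code. This is a minimal sketch, not the simulation from this text; the function names (`propose_hypothesis`, `steer_search`, `goal_reached`) are hypothetical placeholders for whatever the governance framework supplies.

```python
# Sketch of the governance feedback loop (steps 1-3 above).
# All helper names are hypothetical placeholders.

def governed_search(propose_hypothesis, steer_search, goal_reached):
    """Repeat: hypothesize a goal, steer attention towards it, check."""
    hypothesis = propose_hypothesis(None)             # step 1: initial hypothesis
    while True:
        observation = steer_search(hypothesis)        # step 2: cheapest/fastest probe
        if goal_reached(observation):                 # step 3: stop if goal reached
            return observation
        hypothesis = propose_hypothesis(observation)  # step 3: otherwise, rehypothesize
```

The point of the sketch is the division of labor: governance supplies the hypothesis generator and the steering procedure, while the stopping condition closes the loop.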

As in other texts with simulations, let’s start with a simple known search space represented as a grid, with the decision maker starting from the upper left corner.

If we are designing governance and don’t know the goal cell, we can assume something about it and steer the agent’s search accordingly. For example, we might make one of the following assumptions.

  1. The goal is far away from the start, and the rule is to search from the outside in, across the search space.
  2. We don’t know anything about where the goal may be, but we know it will be found if every position in the search space is visited; to minimize the time required to find it, the decision maker should not revisit positions (this constrains the path onto which the agent is directed).
  3. We split the search space into some number of parts, select a part, fully explore that, then move to the next, and so on, until all positions in all parts are explored, or the goal is found.
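The second assumption can be implemented as a boustrophedon (“snake”) traversal, which visits every cell exactly once, with no revisits. A sketch, with illustrative function names not taken from the simulation code:

```python
# Sketch of assumption 2: visit every cell exactly once, never revisiting.

def snake_path(rows, cols):
    """Boustrophedon traversal: sweep rows alternately left-to-right and
    right-to-left, starting from the upper left corner."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

def steps_to_goal(path, goal):
    """Number of cells visited until, and including, the goal cell."""
    return path.index(goal) + 1
```

Alternating the sweep direction keeps consecutive cells adjacent, so the path is also a valid sequence of single-cell moves for the agent.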

Let’s take the third assumption, that the search space has parts. The governance framework is as follows:

  1. The grid (search space) is divided into 4 parts, obtained by dividing the grid in half horizontally and vertically. 
  2. Label each part of the grid with a number from 1 to 4. 
  3. Randomly order the labels of grid parts.
  4. In the given order of grid parts, draw the agent’s attention to move to the first cell of the grid part, i.e., its upper left corner cell. 
  5. Once the agent is in the first cell of the grid part, direct the agent’s attention so it traverses all cells in the grid part, without revisiting a cell. 
  6. If the agent reached the goal cell, stop. If the agent traversed all cells in the grid part and did not reach the goal cell, then direct the agent to move to the first cell of the next grid part.
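The six steps above can be sketched as follows. This is a simplified reading of the framework, not the repository code: it assumes an even-sided square grid, traverses each quadrant in a snake pattern, and counts only cells visited inside quadrants, ignoring the moves made while traveling to a quadrant’s first cell.

```python
import random

def quadrant_search(n, goal, rng=random):
    """Sketch of the six-step quadrant rule on an n x n grid (n even).
    Returns the number of cells visited until the goal is found.
    Simplification: travel between quadrants is not counted."""
    h = n // 2
    # Steps 1-2: four quadrants, each identified by its upper left cell.
    quadrants = [(0, 0), (0, h), (h, 0), (h, h)]
    rng.shuffle(quadrants)                  # step 3: random order of parts
    steps = 0
    for r0, c0 in quadrants:                # step 4: enter at upper left cell
        for r in range(r0, r0 + h):         # step 5: visit every cell once
            if (r - r0) % 2 == 0:
                cols = range(c0, c0 + h)
            else:
                cols = range(c0 + h - 1, c0 - 1, -1)
            for c in cols:
                steps += 1
                if (r, c) == goal:          # step 6: stop at the goal
                    return steps
    return steps
```

Because the quadrant order is randomized (step 3), the step count for a fixed goal varies between runs, which is what produces a distribution rather than a single number.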

Below are histograms of the number of steps across 1,000 and 10,000 runs, for the case when attention is drawn to the known goal and for the case when the goal is not known. The difference is obvious.
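How such step counts can be generated is sketched below; this is not the repository code, and the movement model is an assumption: with a known goal, the agent walks a shortest 4-directional path from the upper left corner (the Manhattan distance), and with an unknown goal it scans the grid row by row, used here as a simpler stand-in for the quadrant rule.

```python
import random

def simulate(n, runs, seed=0):
    """Step counts over many runs, with a uniformly random goal cell.
    Known goal: Manhattan distance from the upper left corner.
    Unknown goal: cells visited in a row-by-row scan of the grid."""
    rng = random.Random(seed)
    known, unknown = [], []
    for _ in range(runs):
        gr, gc = rng.randrange(n), rng.randrange(n)
        known.append(gr + gc)            # direct walk to the known goal
        unknown.append(gr * n + gc + 1)  # full scan until the goal cell
    return known, unknown
```

Plotting histograms of the two returned lists reproduces the kind of comparison shown here: the unknown-goal counts are, on average, several times larger than the known-goal counts.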

The image below shows sample runs for each of the two cases. In each image, a circle is drawn in every cell the agent visited; the more times the agent visited a cell, the less transparent the circle’s color. For the case when the goal is not known, the path is also shown.

Code for the simulations is available on GitHub, here.