Promises: Using Promises To Evaluate Agent Reputation – Part 2
What if fulfilling a promise increases an agent’s reputation? The framework in this text captures this idea.
The text presents a framework in which reputation is a function of the promises agents make to one another. The framework can be used to build a reputation mechanism for use in decision processes, multi-agent systems, and so on.
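As a minimal sketch of the idea, and not the framework's actual definitions: reputation can be modeled as a running score over promise outcomes. The `PromiseLedger` class, the kept/broken counts, and the kept-fraction rule below are my illustrative assumptions.

```python
import random
from collections import defaultdict

class PromiseLedger:
    """Hypothetical ledger: derives an agent's reputation from its promises."""

    def __init__(self):
        self.kept = defaultdict(int)    # agent -> number of fulfilled promises
        self.broken = defaultdict(int)  # agent -> number of broken promises

    def resolve(self, promiser: str, fulfilled: bool) -> None:
        """Record the outcome of one promise made by `promiser`."""
        if fulfilled:
            self.kept[promiser] += 1
        else:
            self.broken[promiser] += 1

    def reputation(self, agent: str) -> float:
        """Fraction of promises kept; 0.5 (neutral) when there is no history."""
        total = self.kept[agent] + self.broken[agent]
        return self.kept[agent] / total if total else 0.5

ledger = PromiseLedger()
rng = random.Random(0)
for _ in range(100):
    ledger.resolve("a", rng.random() < 0.8)  # "a" keeps ~80% of its promises
    ledger.resolve("b", rng.random() < 0.4)  # "b" keeps ~40%
print(ledger.reputation("a"), ledger.reputation("b"))
```

A decision process could then weight an agent's input, or the options it offers, by such a score.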
How do alternative decision governance designs affect attention, memory, competence, expectations, and reputation, and how do these in turn shape the steps a decision maker takes to reach a goal?
If the decision maker is distracted, they should pay a price: reaching their goal should take more effort. A simple simulation can show just how much distraction may cost relative to the case in which the agent's attention is directed at the goal.
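A minimal version of such a simulation, under assumptions of my own: a one-dimensional walk toward a goal, with distraction modeled as a random step.

```python
import random

def steps_to_goal(attention: float, goal: int = 20, seed: int = 0) -> int:
    """Steps for a walker starting at 0 to reach `goal`.

    With probability `attention` the walker steps toward the goal;
    otherwise it is distracted and steps +1 or -1 at random.
    """
    rng = random.Random(seed)
    pos = steps = 0
    while pos != goal:
        if rng.random() < attention:
            pos += 1 if pos < goal else -1
        else:
            pos += rng.choice((-1, 1))
        steps += 1
    return steps

def mean_steps(attention: float, trials: int = 500) -> float:
    return sum(steps_to_goal(attention, seed=i) for i in range(trials)) / trials

print("attentive: ", mean_steps(1.0))  # exactly 20 steps
print("distracted:", mean_steps(0.5))  # roughly twice that, on average
```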
If we need to design governance that influences attention, then it matters whether or not we know the decision maker's goal. This text provides a simple simulation that illustrates the difference in the time it takes the decision maker to reach the goal in the two cases, all else being equal.
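A sketch of what that comparison could look like, again under my own assumptions: governance nudges the agent on a fraction of steps, and when the goal is unknown the nudge points in a direction guessed up front.

```python
import random

def time_to_goal(knows_goal: bool, goal: int = 20, nudge: float = 0.3,
                 seed: int = 0, max_steps: int = 10_000) -> int:
    """Steps to reach `goal`; runs that fail are truncated at `max_steps`.

    The agent's base behavior is a random +1/-1 step. With probability
    `nudge`, governance intervenes: toward the goal if the designer knows
    it, otherwise in a fixed direction guessed before the run.
    """
    rng = random.Random(seed)
    blind_direction = rng.choice((-1, 1))  # the guess made without goal knowledge
    pos = steps = 0
    while pos != goal and steps < max_steps:
        if rng.random() < nudge:
            pos += (1 if pos < goal else -1) if knows_goal else blind_direction
        else:
            pos += rng.choice((-1, 1))
        steps += 1
    return steps

def mean_time(knows_goal: bool, trials: int = 500) -> float:
    return sum(time_to_goal(knows_goal, seed=i) for i in range(trials)) / trials

print("goal known:  ", mean_time(True))   # drift is always toward the goal
print("goal unknown:", mean_time(False))  # half the guesses point the wrong way
```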
Three decision governance strategies are compared in terms of how they influence an agent's ability to reach their goal in a simple problem: the first involves no governance, the second complements the agent's memory, and the third directs their attention.
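One way such a comparison might be set up (the search-through-options framing and the specific strategies below are my assumptions, not the text's exact model):

```python
import random

N = 100  # candidate options; the goal is hidden in one of them

def search(strategy: str, seed: int = 0) -> int:
    """Steps until the agent finds the goal under one governance strategy.

    'none'      - the agent samples options at random and may recheck them;
    'memory'    - governance records checked options, so none is rechecked;
    'attention' - governance points the agent at a small set of options
                  (here, 10) that contains the goal.
    """
    rng = random.Random(seed)
    goal = rng.randrange(N)
    if strategy == "none":
        steps = 1
        while rng.randrange(N) != goal:
            steps += 1
        return steps
    if strategy == "memory":
        order = list(range(N))
        rng.shuffle(order)
        return order.index(goal) + 1
    if strategy == "attention":
        shortlist = [goal] + rng.sample([i for i in range(N) if i != goal], 9)
        rng.shuffle(shortlist)
        return shortlist.index(goal) + 1
    raise ValueError(f"unknown strategy: {strategy}")

def mean_search(strategy: str, trials: int = 1000) -> float:
    return sum(search(strategy, seed=i) for i in range(trials)) / trials

for s in ("none", "memory", "attention"):
    print(f"{s:9s} mean steps to goal: {mean_search(s):.1f}")
```

Under this setup, random rechecking averages about N tries, the memory aid roughly halves that, and directed attention does far better still.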
It seems obvious that it makes no sense to choose at random between the options we are presented with. In this text, I'll set up and run a simple simulation that illustrates this. The simulation is another way to think about the impact of decision governance, albeit in a very simple setting.
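A minimal sketch of that simulation, under assumptions of my own: options carry hidden payoffs, and even a noisy estimate of those payoffs beats picking at random.

```python
import random

def average_payoff(informed: bool, options: int = 10, trials: int = 10_000) -> float:
    """Average payoff when each option's payoff is uniform on [0, 1].

    An 'informed' agent sees a noisy estimate of each payoff and picks the
    option with the best estimate; an uninformed agent picks at random.
    """
    rng = random.Random(42)
    total = 0.0
    for _ in range(trials):
        payoffs = [rng.random() for _ in range(options)]
        if informed:
            estimates = [p + rng.gauss(0, 0.5) for p in payoffs]  # weak signal
            choice = max(range(options), key=estimates.__getitem__)
        else:
            choice = rng.randrange(options)
        total += payoffs[choice]
    return total / trials

print("random choice:  ", round(average_payoff(False), 3))  # about 0.5
print("informed choice:", round(average_payoff(True), 3))   # clearly above 0.5
```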