Promises: Using Promises To Evaluate Agent Reputation – Part 1

What if we wanted to make reputation a function of the promises an agent makes and the outcomes of those promises? This is the first text in a series about how to do so. It presents a framework in which reputation is a function of promises between agents. The framework can be used to create a reputation mechanism for decision processes, multi-agent systems, and, in general, any setting with repeated interactions in which agents depend on each other to realize their goals.
This text is part of the series on decision governance. Decision Governance is concerned with how to improve the quality of decisions by changing the context, process, data, and tools (including AI) used to make decisions. Understanding decision governance empowers decision makers and decision stakeholders to improve how they make decisions with others. Start with “What is Decision Governance?” and find all texts on decision governance here.
This text presents the simplest framework in which a measure of reputation is defined as depending on promises being fulfilled or broken; call it Reputation Framework 1. The rest of the text describes this framework, then formalizes it in a simple way so that it can be extended in subsequent texts.
Framework Description
- Assume there are agents.
- Assume time is discrete and each time unit is called a step.
- At any step, any agent \(a_1\) can make a promise to any other agent \(a_2\).
- A promise is a tuple \((a_1, a_2, p)\) where \(a_1\) is the agent making the promise \(p\) to \(a_2\). \(p\) could be a proposition stating some desirable future state of affairs, an action, or something else; this makes no difference in the framework in this text, which simply ignores the specifics of \(p\), that is, the content of the promise.
- When a promise is made, the promise is in state Open.
- At a step, if \(a_2\) says that \(p\) is fulfilled, then the promise is in state Fulfilled.
- At a step, if \(a_2\) says that \(p\) is broken, then the promise is in state Broken.
- At any step, the Reputation of an agent is a function of all promises that agent made and the states of these promises.
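To make the description concrete, here is a minimal sketch of these objects in Python; the names `Promise` and `PromiseState`, and the use of strings for agent identifiers, are illustrative choices, not part of the framework.

```python
from dataclasses import dataclass
from enum import Enum

class PromiseState(Enum):
    OPEN = "Open"
    FULFILLED = "Fulfilled"
    BROKEN = "Broken"

@dataclass
class Promise:
    promisor: str                 # a1, the agent making the promise
    promisee: str                 # a2, the agent receiving the promise
    content: str                  # p, opaque to the framework
    made_at: int                  # step at which the promise was made
    state: PromiseState = PromiseState.OPEN   # every promise starts Open
    closed_at: int | None = None  # step at which a2 assessed the promise
```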
Basic Concepts
- Agent: An entity capable of making promises and being the recipient of promises made by other agents.
- Promise: A tuple \((a_1, a_2, p)\) representing a promise \(p\) made by agent \(a_1\) to agent \(a_2\). What exactly \(p\) is – action, commitment, outcome, etc. – does not matter in this framework.
- Reputation: A measure of the reliability of an agent in fulfilling promises, derived from historical fulfillment performance.
1. Basic Definitions
- Agents: Let \(A = \{a_1, a_2, \dots, a_n\}\) be a finite set of agents.
- Time Steps: Time is discrete and represented by steps \(t \in T\), where \(T = \{0, 1, 2, \dots\}\).
2. Promises and States
- Promise: A promise made at a given step \(t\) is a tuple \( \pi = (a_1, a_2, p) \) where:
- \(a_1 \in A\) is the promisor (agent making the promise).
- \(a_2 \in A\), \(a_2 \neq a_1\), is the promisee (agent receiving the promise).
- \(p\) is the content or description of what is promised.
- Promise State: Every promise \(\pi\) at any step \(t\) has exactly one state from the set \( S = \{\text{Open},\,\text{Fulfilled},\,\text{Broken}\} \).
- When a promise is made at step \(t\), it is in the Open state: \( \text{state}_t(\pi) = \text{Open} \).
3. Promise State Transitions
For any promise \(\pi = (a_1, a_2, p)\):
- At a step \(t' > t\), agent \(a_2\) may assess the promise and transition it from Open as follows:
- If agent \(a_2\) evaluates the promise as fulfilled, the state transitions to Fulfilled: \( \text{state}_{t'}(\pi) = \text{Fulfilled} \).
- If agent \(a_2\) evaluates the promise as not fulfilled, the state transitions to Broken: \( \text{state}_{t'}(\pi) = \text{Broken} \).
- Once transitioned to Fulfilled or Broken, a promise remains permanently in that state.
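A minimal sketch of this transition rule, continuing the Python types above; the function name `assess` is a hypothetical helper, and mutating the promise in place is an implementation choice.

```python
def assess(promise: Promise, assessor: str, fulfilled: bool, step: int) -> None:
    """Let the promisee close an Open promise as Fulfilled or Broken."""
    if assessor != promise.promisee:
        raise ValueError("only the promisee a2 may assess the promise")
    if promise.state is not PromiseState.OPEN:
        return  # Fulfilled and Broken are permanent; ignore re-assessment
    promise.state = PromiseState.FULFILLED if fulfilled else PromiseState.BROKEN
    promise.closed_at = step
```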
4. Reputation
Reputation of an agent \(a_1\) at step \(t\), denoted \(R_{a_1}(t)\), is defined as a function of:
- The set of promises made by agent \(a_1\) up to step \(t\), denoted as \(P_{a_1}(t)\).
- The current state of each promise \(\pi \in P_{a_1}(t)\).
Formally: \( R_{a_1}(t) = f_R\left(\{(\pi,\ \text{state}_t(\pi)) : \pi \in P_{a_1}(t)\}\right) \)
A simple reputation function is the ratio of fulfilled promises over all assessed promises: \( R_{a_1}(t) = \frac{|\{\pi \in P_{a_1}(t) : \text{state}_t(\pi) = \text{Fulfilled}\}|}{|\{\pi \in P_{a_1}(t) : \text{state}_t(\pi) \in \{\text{Fulfilled},\,\text{Broken}\}\}|} \)
If no promises have yet been assessed, the reputation can be initialized at a neutral baseline (e.g., 0 or undefined until at least one promise is assessed).
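The simple ratio above translates directly into code; continuing the sketch, this version returns `None` as the "undefined" neutral baseline when no promise has yet been assessed (one of the two baselines the text allows):

```python
def reputation(promises: list[Promise], agent: str) -> float | None:
    """Fulfilled / (Fulfilled + Broken), over promises made by the agent."""
    made = [pr for pr in promises if pr.promisor == agent]
    assessed = [pr for pr in made if pr.state is not PromiseState.OPEN]
    if not assessed:
        return None  # undefined until at least one promise is assessed
    fulfilled = sum(pr.state is PromiseState.FULFILLED for pr in assessed)
    return fulfilled / len(assessed)
```

For example, an agent with three Fulfilled, one Broken, and one Open promise has reputation \(3/4 = 0.75\); the Open promise counts in neither the numerator nor the denominator.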
A Simple Simulation
Let’s say we have 4 agents and the simulation runs for 3 steps. At each step, every agent makes one or two promises, each to a different agent; a random subset of open promises is then closed as Fulfilled, and another random subset is closed as Broken. The resulting network is shown below, where nodes are agents and links are promises, each labeled with the step at which the promise was made, the step at which it was closed, and its final state. Reputation is calculated, as defined above, as the proportion of fulfilled promises over all assessed promises made by the agent. A sketch of this simulation in code follows.
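The sketch below reuses the snippets above; the closure and fulfillment probabilities (0.5 and 0.6) and the fixed seed are illustrative assumptions, since the text only requires random subsets:

```python
import random

def simulate(agents: list[str], steps: int, seed: int = 0) -> list[Promise]:
    rng = random.Random(seed)
    promises: list[Promise] = []
    for t in range(steps):
        # Each agent makes one or two promises, each to a different other agent.
        for a1 in agents:
            for a2 in rng.sample([a for a in agents if a != a1], rng.randint(1, 2)):
                promises.append(Promise(a1, a2, content=f"p@{t}", made_at=t))
        # Random subsets of open promises are closed as Fulfilled or Broken.
        for pr in [p for p in promises if p.state is PromiseState.OPEN]:
            if rng.random() < 0.5:
                assess(pr, pr.promisee, fulfilled=rng.random() < 0.6, step=t)
    return promises

agents = ["a1", "a2", "a3", "a4"]
history = simulate(agents, steps=3)
for a in agents:
    print(a, reputation(history, a))
```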

What’s Next?
An agent needs an incentive to make a promise to another agent. This is the topic of the next framework, described in Part 2.
Further Reading
- Axelrod, R. (1984). The Evolution of Cooperation. Basic Books.
- Castelfranchi, C., & Falcone, R. (2010). Trust Theory: A Socio-Cognitive and Computational Model. John Wiley & Sons.
- Ostrom, E. (2003). “Toward a Behavioral Theory Linking Trust, Reciprocity, and Reputation.” In Ostrom, E., & Walker, J. (Eds.), Trust and Reciprocity: Interdisciplinary Lessons from Experimental Research (pp. 19–79). Russell Sage Foundation.
Decision Governance
This text is part of the series on the design of decision governance. Other texts on the same topic are linked below. This list expands as I add more texts on decision governance.
- Introduction to Decision Governance
- Stakeholders of Decision Governance
- Foundations of Decision Governance
- How to Spot Decisions in the Wild?
- When Is It Useful to Reify Decisions?
- Decision Governance Is Interdisciplinary
- Individual Decision-Making: Common Models in Economics
- Group Decision-Making: Common Models in Economics
- Individual Decision-Making: Common Models in Psychology
- Group Decision-Making: Common Models in Organizational Theory
- Role of Explanations in the Design of Decision Governance
- Design of Decision Governance
- Design Parameters of Decision Governance
- Factors influencing how an individual selects and processes information in a decision situation, including which information the individual seeks and selects to use:
- Psychological factors, which are determined by the individual, including their reaction to other factors:
- Attention:
- Memory:
- Mood:
- Emotions:
- Commitment:
- Temporal Distance:
- Social Distance:
- Expectations
- Uncertainty
- Attitude:
- Values:
- Goals:
- Preferences:
- Competence
- Social factors, which are determined by relationships with others:
- Impressions of Others:
- Reputation:
- Promises:
- Social Hierarchies:
- Social Hierarchies: Why They Matter for Decision Governance
- Social Hierarchies: Benefits and Limitations in Decision Processes
- Social Hierarchies: How They Form and Change
- Power: Influence on Decision Making and Its Risks
- Power: Relationship to Psychological Factors in Decision Making
- Power: Sources of Legitimacy and Implications for Decision Authority
- Power: Stability and Destabilization of Legitimacy
- Power: What If High Decision Authority Is Combined With Low Power
- Power: How Can Low Power Decision Makers Be Credible?
- Social Learning:
- Factors influencing information the individual can gain access to in a decision situation, and the perception of possible actions the individual can take, and how they can perform these actions:
- Governance factors, which are rules applicable in the given decision situation:
- Incentives:
- Incentives: Components of Incentive Mechanisms
- Incentives: Example of a Common Incentive Mechanism
- Incentives: Building Out An Incentive Mechanism From Scratch
- Incentives: Negative Consequences of Incentive Mechanisms
- Crowding-Out Effect: The Wrong Incentives Erode the Right Motives
- Crowding-In Effect: The Right Incentives Amplify the Right Motives
- Rules
- Rules-in-use
- Rules-in-form
- Institutions
- Technological factors, or tools which influence how information is represented and accessed, among others, and how communication can be done
- Environmental factors, or the physical environment, humans and other organisms that the individual must and can interact with
- Change of Decision Governance
- Public Policy and Decision Governance:
- Compliance to Policies:
- Transformation of Decision Governance
- Mechanisms for the Change of Decision Governance