
Promises: Using Promises To Evaluate Agent Reputation – Part 2

This is the second text in a series on making an agent's reputation a function of the promises the agent makes and the outcomes of those promises. The text presents a framework in which reputation is a function of promises between agents; relative to Part 1, agents now make promises that lead their reputation to increase. As with the framework in Part 1, the framework described here can be used to create a reputation mechanism for decision processes, for multi-agent systems, and, in general, for any setting with repeated interactions in which agents depend on each other to realize their goals.

This text is part of the series on decision governance. Decision Governance is concerned with how to improve the quality of decisions by changing the context, process, data, and tools (including AI) used to make decisions. Understanding decision governance empowers decision makers and decision stakeholders to improve how they make decisions with others. Start with “What is Decision Governance?” and find all texts on decision governance here.

Framework Description

The starting set of rules is from Reputation Framework 1. A change is made to clarify that in a promise, there is an expectation being promised, and that an expectation is a proposition (proposition is defined as in the SEP, here).

  1. There are agents. 
  2. Time is discrete and each time unit is called a step.
  3. At any step, any agent \(a_1\) can make a promise to any other agent \(a_2\). 
  4. A promise is a tuple \((a_1, a_2, e)\) where \(a_1\) is the agent that promises to agent \(a_2\)  that expectation \(e\) will be satisfied; in a promise, an expectation \(e\) is a proposition.
  5. When a promise is made, the promise is in state Open.
  6. At a step, if \(a_2\) says that \(e\) is satisfied, then the promise is in state Fulfilled.
  7. At a step, if \(a_2\) says that \(e\) is not satisfied, then the promise is in state Broken.
  8. At any step, the Reputation of an agent is a function of all promises that agent made and the states of these promises.

The following rules are added to Reputation Framework 1 to produce Reputation Framework 2.

  1. Every agent starts at the first step with zero utility points and zero reputation points.
  2. Every agent \(a_i\) has a random number \(n\) of expectations given by the set \(E(a_i) = \{ e(a_i)_1, \dots, e(a_i)_n \}\). 
  3. Every expectation \(e(a_i)_k\) of agent \(a_i\) has a utility of \(u(e(a_i)_k)\) utility points for that agent.
  4. For any promise \((a_i, a_j, e)\), if the promise is Fulfilled, then the utility of \(a_j\) is increased by \(u(e(a_j)_k)\) utility points and the reputation of \(a_i\) is increased by \(u(e(a_j)_k)\) reputation points.
  5. The probability that a promise can be Fulfilled is inversely proportional to the utility associated with the expectation in that promise.
  6. Expectations and utilities of expectations are public: all agents know all agents’ expectations, and the utility associated with each expectation.
  7. At every step, an agent chooses to make at most one promise.
  8. At every step, an expectation appears in at most one promise.
  9. At every step, a promise is made by at most one agent.
  10. If a promise is Fulfilled at a step, then no promise made at a subsequent step can be made for the same expectation.
  11. Every agent aims to maximize their reputation.
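The entities these rules describe can be sketched as simple data structures. The following Python sketch is illustrative; the class and field names are my own, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class Promise:
    """A promise (a_i, a_j, e): a_i promises a_j that expectation e is satisfied."""
    promisor: str           # a_i, the agent making the promise
    promisee: str           # a_j, the agent whose expectation is promised
    expectation: str        # identifier of the expectation e
    state: str = "Open"     # a promise starts Open when made

@dataclass
class Agent:
    """An agent with utility and reputation, both starting at zero."""
    name: str
    utility: int = 0
    reputation: int = 0
    # The agent's expectations mapped to their utility points,
    # e.g. {"e1": 3, "e2": 5}; all of this is public knowledge.
    expectations: dict = field(default_factory=dict)
```
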
1. Basic Entities
1.1. Agents and Time
  • Let \( A = \{a_1, a_2, \dots, a_n\} \) be a finite, non-empty set of agents.
  • Time is discrete: \( T = \{0, 1, 2, \dots\} \). Each unit \( t \in T \) is a step.
2. Expectations
  • Each agent \( a_i \in A \) has a finite, publicly known set of expectations: \( E(a_i) = \{e(a_i)_1, e(a_i)_2, \dots, e(a_i)_{n_i}\} \) where:
    • Each \( e(a_i)_k \) is a proposition (i.e., a claim that can be evaluated as true or false).
    • Each \( e(a_i)_k \) is associated with a utility value: \( u(e(a_i)_k) \in \mathbb{Z}^+, \quad 1 \leq u(e(a_i)_k) \leq U_{\text{max}} \) (e.g., \( U_{\text{max}} = 5 \)).
    • All \( E(a_i) \) and all \( u(e(a_i)_k) \) are common knowledge.
3. Promises
  • A promise at step \( t \in T \) is a tuple: \( p = (a_1, a_2, e) \) where:
    • \( a_1 \in A \): promisor (agent making the promise)
    • \( a_2 \in A,\, a_2 \neq a_1 \): promisee (agent to whom the promise is made)
    • \( e \in E(a_2) \): the expectation that the promisor commits to fulfilling.
  • At creation, a promise is in the state: \( \text{state}_t(p) = \text{Open} \)
  • The state of a promise \( p \) can change at any subsequent step \( t' > t \) as follows:
  • If \( a_2 \) confirms that \( e \) is satisfied: \( \text{state}_{t'}(p) = \text{Fulfilled} \)
  • If \( a_2 \) states that \( e \) is not satisfied: \( \text{state}_{t'}(p) = \text{Broken} \)

Once a promise becomes Fulfilled or Broken, its state is fixed permanently.
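This lifecycle amounts to a small state machine: Open transitions at most once, to Fulfilled or Broken, based on the promisee's report, and is then fixed. A minimal sketch, with an illustrative function name:

```python
def resolve(state: str, promisee_says_satisfied: bool) -> str:
    """Return the next state of a promise given the promisee's report."""
    if state != "Open":
        return state  # Fulfilled and Broken are permanent
    return "Fulfilled" if promisee_says_satisfied else "Broken"
```
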

4. Constraints on Promises

Let \( \mathcal{P}_t \) denote the set of promises made at step \( t \). Then the following constraints hold:

  • Single promise per agent per step: \( \forall a_i \in A:\ \big|\{\, p \in \mathcal{P}_t : \text{promisor}(p) = a_i \,\}\big| \leq 1 \)
  • No duplicate expectations: \( \forall p, p' \in \mathcal{P}_t,\, p \neq p' \Rightarrow e(p) \neq e(p') \)
  • No duplicate assignments: \( \forall p, p' \in \mathcal{P}_t,\, p \neq p' \Rightarrow \text{promisor}(p) \neq \text{promisor}(p') \)
  • Persistence of fulfilled expectations: \( \forall t,\ t' > t,\ \forall e:\ \text{if } \exists p \in \mathcal{P}_t \text{ with } e(p) = e \text{ and } \text{state}_{t'}(p) = \text{Fulfilled}, \text{ then } \not\exists p' \in \mathcal{P}_{t''},\ t'' > t' : e(p') = e \)
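These constraints can be checked mechanically over a step's promises. A hypothetical validator, with each promise represented as a (promisor, promisee, expectation) tuple:

```python
def step_is_valid(promises, fulfilled_expectations=frozenset()):
    """Check the per-step constraints for a set of proposed promises."""
    promisors = [p[0] for p in promises]
    expectations = [p[2] for p in promises]
    # Single promise per promisor per step (this also makes promisors distinct).
    if len(promisors) != len(set(promisors)):
        return False
    # Each expectation appears in at most one promise this step.
    if len(expectations) != len(set(expectations)):
        return False
    # Persistence: no new promise over an already-fulfilled expectation.
    if any(e in fulfilled_expectations for e in expectations):
        return False
    return True
```
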
5. Utilities and Reputation

At each step \( t \), each agent \( a_i \) has:

  • Utility \( U_{a_i}(t) \in \mathbb{Z} \), with initial value: \( U_{a_i}(0) = 0 \)
  • Reputation \( R_{a_i}(t) \in \mathbb{Z} \), with initial value: \( R_{a_i}(0) = 0 \)

For any promise \( p = (a_i, a_j, e) \): if \( \text{state}_t(p) = \text{Fulfilled} \), then \( U_{a_j}(t) = U_{a_j}(t-1) + u(e) \) and \( R_{a_i}(t) = R_{a_i}(t-1) + u(e) \).
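This update is plain bookkeeping over two tallies: the promisee gains \( u(e) \) utility points and the promisor gains \( u(e) \) reputation points. The dict-based representation below is illustrative:

```python
def apply_fulfillment(utility, reputation, promisor, promisee, u_e):
    """Apply the effects of a Fulfilled promise (promisor, promisee, e) with u(e) = u_e."""
    utility[promisee] = utility.get(promisee, 0) + u_e       # promisee gains utility
    reputation[promisor] = reputation.get(promisor, 0) + u_e  # promisor gains reputation
```
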

6. Probabilistic Fulfillment

The probability that a promise \(p = (a_i, a_j, e)\) is fulfilled is inversely proportional to the utility of expectation \( e \): \(\Pr(\text{Fulfilled} \mid p) = \frac{1}{u(e)} \). Since \( u(e) \geq 1 \), this ratio always lies in \( (0, 1] \) and is therefore a valid probability; no further normalization is needed.
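The fulfillment rule can be sampled directly. The function below is an illustrative sketch:

```python
import random

def is_fulfilled(u_e: int, rng: random.Random) -> bool:
    """Sample whether a promise over an expectation of utility u_e is fulfilled."""
    # Pr(Fulfilled) = 1 / u(e); rng.random() is uniform on [0, 1)
    return rng.random() < 1.0 / u_e
```
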

7. Agent Objective

Each agent \( a_i \in A \) aims to maximize its reputation \( R_{a_i}(t) \) over time, subject to the constraints above and the common knowledge of expectations and utilities.
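One consequence of the fulfillment probability is worth making explicit: assuming \( \Pr(\text{Fulfilled} \mid p) \) is exactly \( 1/u(e) \), the expected reputation gain from any single promise is the same regardless of which expectation is chosen:

```latex
\mathbb{E}[\Delta R_{a_i} \mid p = (a_i, a_j, e)]
  = \Pr(\text{Fulfilled} \mid p) \cdot u(e)
  = \frac{1}{u(e)} \cdot u(e)
  = 1
```

In expectation, agents are therefore indifferent among expectations, and maximizing reputation reduces to making a promise, where one is still available, at as many steps as possible.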

Definitions
  • Agent: Autonomous decision-making entity in the system.
  • Expectation: A proposition associated with utility for a given agent.
  • Promise: A commitment from one agent to another to satisfy a particular expectation.
  • Utility: Numerical measure of benefit received when an agent’s expectation is fulfilled.
  • Reputation: Accumulated measure of successful fulfillments weighted by the value of expectations.
Further Reading
  • Castelfranchi, C., & Falcone, R. (2010). Trust Theory: A Socio-Cognitive and Computational Model. Wiley.
  • Conte, R., & Paolucci, M. (2002). Reputation in Artificial Societies: Social Beliefs for Social Order. Kluwer.
  • Camerer, C. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press.
Decision Governance

This text is part of the series on the design of decision governance. Other texts on the same topic are linked below. This list expands as I add more texts on decision governance.

  1. Introduction to Decision Governance
  2. Stakeholders of Decision Governance 
  3. Foundations of Decision Governance
  4. Role of Explanations in the Design of Decision Governance
  5. Design of Decision Governance
  6. Design Parameters of Decision Governance
  7. Change of Decision Governance