Choice in the Absence of Utility and Probability Estimates
In expected utility models, utility quantifies preferences and probability quantifies uncertainty. Sounds simple, elegant, but tends to be expensive. What if options can be identified, but there is no information about preferences or uncertainty in a format that can be translated into, respectively, utility and probability? What alternative decision process, still structured and carrying some specific notion of rigour, can be applied to select one option over the others?
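For concreteness, here is the textbook form being referred to, in standard notation (this is general background, not something the post itself spells out): an option \(a\) is scored by its expected utility over uncertain states \(s\),

$$
EU(a) = \sum_{s \in S} p(s)\, u(a, s), \qquad a^{*} = \arg\max_a EU(a),
$$

so populating the model means supplying \(u\) for every option-outcome pair and \(p\) for every state, which is exactly where the expense discussed below comes from.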
Here’s what I mean by utility and probability being expensive. Take utility. It takes skill and time to elicit preferences, even when they do exist. And if they do exist, and you go through the pain of eliciting them, the number of preferences to elicit grows very fast with the number of options and criteria. Let’s say you did elicit them, that you had the luxury of making acceptable assumptions to expand the elicited set far enough to translate it into utility values, and that, magically, it all works: transitivity satisfied, and all the rest. Then take probability. I have always thought it perverse to ask for a probability function over uncertain outcomes when the set of outcomes isn’t clear at all, and when you don’t really understand the mechanisms connecting the current state to outcomes in the future state. Perhaps you make assumptions about the mechanisms generating outcomes; make too many of them, with too little evidence, and the whole thing ends up as solid and as tangible as a cloud in a storm. Remember, it takes quite a lot to claim that you can explain something credibly (see conditions for explanation here, and for evidence here).
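To put a number on “grows very fast”: under one common elicitation protocol, pairwise comparisons on each criterion (an assumed protocol, since the post doesn’t fix one), \(n\) options judged against \(m\) criteria require

$$
m \cdot \binom{n}{2} = \frac{m\,n(n-1)}{2}
$$

comparisons. For instance, 10 options and 5 criteria already mean \(5 \cdot 45 = 225\) judgments, before any consistency checking.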
Then again, perhaps you assume the decision problem sits in a relatively well-controlled environment, so that sources of uncertainty are identifiable, mechanisms are amenable to description, and assumptions about them to feasible validation. You assume a closed environment. If so, it makes sense to invest in explaining uncertainty, translating that into verifiable assumptions and probability distribution functions, and tying probability to outcomes. Similarly, if assumptions about preferences seem admissibly clear and reasonably well behaved, and you ignore the basic problem that preferences are about unobservable intentional states predicted for a future time (that’s a mouthful, more on it here), then, sure, let’s convert preferences into utility values. Note, though, that all these concerns fade away if we are analyzing the decision problems of users on e-commerce, social, or other online platforms, which we decide to perceive as closed decision environments, and where we assume that preferences are revealed by users’ past actions.
Anyway, let’s say there are no ethical, moral, or philosophical concerns with advancing utility values and probability values for a decision problem. Even then, there can be many reasons for the absence of information on preferences and uncertainty. Maybe there is no time to elicit. Maybe there is no expertise to do so. Perhaps the cost of doing it is not acceptable.
Is there a rigorous decision analysis approach for such cases? The obvious one is argumentation, and it involves adopting the following rules:
- the optimal option is the one which remains acceptable when no further arguments are given by the stakeholders with authority to provide arguments,
- an argument is acceptable if and only if there are no acceptable arguments attacking it (as in [1]).
That’s it. This too looks simple, elegant. And it is cheaper to implement than an expected utility model, since it requires less information to populate. But there are problems with argumentation too (here’s one, which is not solved in decision analysis either, though). There are no optimal choice methods, only acceptable ones 🙂
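To make the two rules concrete, here is a minimal sketch in Python. It reads the second rule as the grounded semantics from [1]: iteratively accept unattacked arguments and defeat whatever they attack. The argument names, the attack relation, and the option-to-attacker mapping in the example are hypothetical, purely for illustration.

```python
def acceptable_arguments(arguments, attacks):
    """Grounded-semantics labelling, per rule 2: an argument becomes
    acceptable ("in") once all its attackers are defeated ("out"), and
    defeated once some acceptable argument attacks it. Arguments caught
    in unresolved attack cycles stay unlabelled (undecided)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    status = {}  # argument -> "in" | "out"
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in status:
                continue
            if all(status.get(b) == "out" for b in attackers[a]):
                status[a] = "in"   # no surviving attackers: acceptable
                changed = True
            elif any(status.get(b) == "in" for b in attackers[a]):
                status[a] = "out"  # attacked by an acceptable argument
                changed = True
    return {a for a, s in status.items() if s == "in"}


# Hypothetical example: four arguments, two attacks, two options.
arguments = {"a1", "a2", "a3", "a4"}
attacks = {("a1", "a2"), ("a3", "a4")}  # a1 attacks a2; a3 attacks a4
accepted = acceptable_arguments(arguments, attacks)
# accepted == {"a1", "a3"}: unattacked, hence acceptable; a2, a4 defeated.

# Rule 1: an option remains acceptable if no acceptable argument attacks it.
option_attackers = {"build": {"a2"}, "buy": {"a3"}}
viable = [o for o, atk in option_attackers.items() if not (atk & accepted)]
# viable == ["build"]: its only attacker a2 is defeated; "buy" falls to a3.
```

Note how rule 1’s “when no further arguments are given” is captured simply by running this only after the stakeholders with authority have stopped adding arguments: the computation itself is cheap, and the cost has moved into producing the arguments.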
References
- [1] Dung, Phan Minh. “On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games.” Artificial Intelligence 77.2 (1995): 321-357.