Why Do AI Owners and AI Users Have Conflicting Interests?

If you make and commercialize high quality #AI, you are also likely to have conflicting interests with users. Here’s why. #MachineLearning #AIeconomics #incentives #economics pic.twitter.com/J8oorXo1dh
— ivanjureta (@ivanjureta) February 22, 2018