Why Do AI Owners and AI Users Have Conflicting Interests?

If you make and commercialize high quality #AI, you are also likely to have conflicting interests with users. Here’s why. #MachineLearning #AIeconomics #incentives #economics pic.twitter.com/J8oorXo1dh
— ivanjureta (@ivanjureta) February 22, 2018
This text follows my notes on Sections 1 and 2 of the Algorithmic Accountability Act (2022 and 2023). When (if?) the Act becomes law, it will apply across all kinds of software products, or more generally, to products and services which rely in any way on algorithms to support decision making. This makes it necessary…
The short answer is “No”, and the reasons for it are interesting. An AI system is opaque if it is impossible or costly for the system itself (or for people auditing it) to explain why it gave some specific outputs. Opacity is undesirable in general – see my note here. So this question applies to both those outputs…
An argumentation framework [1] is a graph whose nodes are called arguments and whose edges are called attacks. If arguments are propositions, and “p1 attacks p2” reads “if you believe p1 then you shouldn’t believe p2”, then an argumentation framework looks like something you can use to represent the relationships between arguments and counterarguments in, say, a debate…
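To make the structure concrete, here is a minimal sketch in Python, assuming Dung-style abstract argumentation: arguments and attacks form a directed graph, and a simple fixpoint computation returns the arguments that survive (the grounded extension). The argument names p1–p3, the attack pairs, and the function grounded_extension are illustrative, not taken from any particular debate.

```python
# A minimal sketch, assuming Dung-style abstract argumentation:
# arguments are nodes, attacks are directed edges, and we compute the
# grounded extension, i.e. the arguments that survive once every attacker
# that can be defeated has been defeated. Names p1, p2, p3 are illustrative.

def grounded_extension(arguments, attacks):
    """Return the set of arguments accepted under grounded semantics."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            # Accept an argument once all of its attackers are defeated
            # (unattacked arguments are accepted immediately).
            if attackers <= defeated:
                accepted.add(a)
                changed = True
        for (x, y) in attacks:
            # Anything attacked by an accepted argument is defeated.
            if x in accepted and y not in defeated:
                defeated.add(y)
                changed = True
    return accepted

# "p2 attacks p1" and "p3 attacks p2": p3 is unattacked, so it is accepted;
# p2 is defeated by p3; p1 is then defended, so it is accepted too.
arguments = {"p1", "p2", "p3"}
attacks = {("p2", "p1"), ("p3", "p2")}
print(grounded_extension(arguments, attacks))  # {'p1', 'p3'} (set order may vary)
```

Richer semantics (preferred, stable, and so on) exist, but the same graph of arguments and attacks is the starting point for all of them.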
As currently drafted (2024), the Algorithmic Accountability Act does not require the algorithms and training data used in an AI System to be available for audit. (See my notes on the Act, starting with the one here.) Instead, an auditor learns about the AI System from documented impact assessments, which involve descriptions…
There is no high quality AI without high quality training data. A large language model (LLM) AI system, for example, may seem to deliver accurate and relevant information, but verifying that may be very hard – hence, among other things, the effort invested in explainable AI. If I wanted accurate and relevant legal advice, how much risk…
Motivated reasoning consists of processing information in a way that aligns with one’s desires, beliefs, or goals, rather than neutrally evaluating evidence. How to mitigate it through decision governance?