Why Do AI Owners and AI Users Have Conflicting Interests?

If you make and commercialize high-quality #AI, you are also likely to have conflicting interests with users. Here’s why. #MachineLearning #AIeconomics #incentives #economics
— ivanjureta (@ivanjureta) February 22, 2018
The Algorithmic Accountability Act (2022 and 2023) applies to many more settings than what, in early 2024, is considered Artificial Intelligence. It applies across all kinds of software products, and more generally to products and services that rely in any way on algorithms to support decision making. This makes it necessary for any product manager…
Section 5 specifies the content of the summary report to be submitted about an automated decision system. This text follows my notes on Sections 1 and 2, Section 3, and Section 4 of the Algorithmic Accountability Act (2022 and 2023). This is the fourth in a series of texts in which I provide a critical reading…
Many people have spent a lot of time, across centuries, trying to build good explanations and trying to distinguish good ones from bad ones. While there is no consensus on what “explanation” is (always and everywhere), it is worth knowing what good explanations may have in common. It helps you develop a taste in explanations, which is certainly helpful given how frequently you may need to explain something, and how often others offer explanations to you.
The Algorithmic Accountability Act of 2022, here, is a very interesting text if you need to design or govern a process for designing software that involves some form of AI. The Act has no concept of AI, but of an Automated Decision System, defined as follows. Section 2 (2): “The term “automated decision system”…
We should reduce the cost of authorship and create an incentive mechanism that generates and assigns credibility to authors in a community.