
Perplexing Secrecy of AI Designs

If an AI is built for profit, should its design be confidential? This choice is part of AI product strategy, and the decision depends on at least the following factors.

  • Correctness: How likely is the AI to make errors when advising users?
  • Verifiability: How likely are users to detect that its advice is erroneous?
  • Impact: How important to users is the decision problem on which it advises them?
  • Scarcity: How likely is it that the knowledge needed to improve the quality of its recommendations lies outside the organization that designs it?
  • Specificity: To what extent is the AI design based on publicly available models?

What is the relationship of each of these to AI confidentiality?

Correctness: The more likely the AI (or the algorithm behind it) is to make errors, the more likely users are to abandon it in favor of human advisors, and the less value there is in keeping its design confidential.

Correctness is related to verifiability: the more likely most users are to distinguish correct from incorrect advice from the AI, the more correctness will matter to them.

Why is this so? Because of algorithm aversion; the following finding comes from a slightly different context, but is likely applicable here.

“The results of five studies show that seeing algorithms err makes people less confident in them and less likely to choose them over an inferior human forecaster. This effect was evident in two distinct domains of judgment, including one in which the human forecasters produced nearly twice as much error as the algorithm. It arose regardless of whether the participant was choosing between the algorithm and her own forecasts or between the algorithm and the forecasts of a different participant. And it even arose among the (vast majority of) participants who saw the algorithm outperform the human forecaster.” [1]

The role of impact is simpler: the more the advice matters to users, that is, the more important the underlying decision is to them, the more valuable correct advice becomes, and the more attractive it is to minimize the risk of anyone else reproducing the AI.

Scarcity matters if the talent needed to improve the AI design cannot be engaged other than by having it contribute in some noncommercial manner. Many current AI advances come from collaborations between researchers in academia and companies, and in those cases confidentiality is less of a concern.

Specificity: If the design of the AI product or system is based on adapted public models, possibly also on widely available data (e.g., Common Crawl), and any customization is neither hard nor expensive to reproduce, then confidentiality is worthwhile only if the other factors above suggest it.
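To make the interplay concrete, here is a minimal sketch in Python of how these five factors might be combined into a rough lean for or against confidentiality. The 0-to-1 ratings, the weights, and the interaction term are illustrative assumptions of mine, not a model proposed in this post.

```python
from dataclasses import dataclass

@dataclass
class DesignFactors:
    """Toy 0..1 ratings for the five factors; names and scales are assumptions."""
    error_rate: float     # Correctness: likelihood of erroneous advice
    verifiability: float  # likelihood that users detect erroneous advice
    impact: float         # importance of the decision problem to users
    scarcity: float       # improvement knowledge sits outside the organization
    specificity: float    # reliance on public models/data, cheap to reproduce

def confidentiality_lean(f: DesignFactors) -> float:
    """Crude heuristic, not a validated model: positive values lean toward
    keeping the design confidential, negative values against it."""
    # Algorithm aversion suggests errors erode value mainly when users
    # can see them, so correctness and verifiability interact.
    retained_value = f.impact * (1.0 - f.error_rate * f.verifiability)
    # Needing outside (noncommercial) expertise and building on easily
    # reproduced public components both cut against secrecy.
    return retained_value - 0.5 * f.scarcity - 0.5 * f.specificity

# Example: high-impact, hard-to-verify advice from a largely bespoke design.
print(confidentiality_lean(DesignFactors(
    error_rate=0.1, verifiability=0.2, impact=0.9,
    scarcity=0.2, specificity=0.3)))  # ~0.63: leans toward confidentiality
```

The sign conventions follow the discussion above: value worth protecting raises the lean, while dependence on open collaboration and on public components lowers it.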

References

  1. Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. “Algorithm aversion: People erroneously avoid algorithms after seeing them err.” Journal of Experimental Psychology: General 144.1 (2015): 114–126.
