
Black Box Approach to AI Governance

As currently drafted (2024), the Algorithmic Accountability Act does not require that the algorithms and training data used in an AI System be available for audit. (See my notes on the Act, starting with the one here.) Instead, an auditor learns about the AI System through documented impact assessments, which describe its inputs and outputs and their relationship to the decisions the system is designed to support.
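To make this concrete, here is a minimal sketch of what a black box audit probe could look like: the model is treated as an opaque callable, and the auditor documents only input/output pairs and the decisions they support, never the internals. Everything here (the function names, features, and decision threshold) is a hypothetical illustration, not something specified by the Act.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class AuditRecord:
    inputs: dict    # what went into the system
    output: float   # what came out
    decision: str   # the decision the output supports


def audit_black_box(model: Callable[[dict], float],
                    probes: list[dict],
                    threshold: float = 0.5) -> list[AuditRecord]:
    """Probe an opaque model and document input/output/decision triples.

    The auditor never inspects the model's internals; the impact
    assessment is built entirely from observed behavior.
    """
    records = []
    for inputs in probes:
        output = model(inputs)
        decision = "approve" if output >= threshold else "deny"
        records.append(AuditRecord(inputs, output, decision))
    return records


if __name__ == "__main__":
    # Stand-in for a proprietary system: only its interface is visible.
    def score_applicant(inputs: dict) -> float:
        return 0.9 if inputs["income"] > 50_000 else 0.3

    probes = [{"income": 40_000}, {"income": 60_000}]
    for record in audit_black_box(score_applicant, probes):
        print(record)
```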

This black box approach to governance is consistent with the idea that AI Systems are the intellectual property of those who own them. It allows secrecy to be preserved where, for example, the intellectual property is protected as a trade secret, and it thereby reduces the owner's risk that others will use that intellectual property without authorization.

The black box approach is one solution, among others, to the tradeoff between protecting intellectual property and the speed of innovation. Secrecy means fewer eyes on the design of the system, which in the case of AI means fewer people evaluating and improving the algorithms and training data.

Faster innovation requires that many people be able to review the system and contribute improvements, and that they have incentives to do so. Open source software and platforms with app stores are two complementary ways to create such incentives for people outside the organization that owns the AI System.

Enforcing open sourcing by regulation would be questionable: if making an AI System open source were mandatory before going to market, the incentive to invest in the AI System in the first place would shrink, since competitors could appear faster and drive margins down.