
Algorithmic Accountability Act of 2022 and AI Design

The Algorithmic Accountability Act of 2022, here, is a very interesting text for anyone who needs to design, or govern the design of, software that involves some form of AI. The Act has no concept of AI; instead, it defines an automated decision system as follows.

Section 2(2): “The term ‘automated decision system’ means any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.”

Leaving aside the Act’s actual applicability, and the fact that there are many systems it does not necessarily apply to, it points to design principles and practices that are likely to become common, and implementing them should make it easier to assure compliance with comparable legislation in the EU (the EU AI Act, see here) and China (here).

In particular, when designing and operating an automated decision system, it is necessary to perform an impact assessment and report on it regularly. If your system is in fact subject to the Algorithmic Accountability Act, reporting is an obligation. If it is not, it still makes sense to build the maintenance of an impact assessment into the AI design process, to ensure readiness for compliance in the future.

It follows that the AI design process needs to produce, as one of its outputs, an up-to-date impact assessment, or in other words, controlled updates to a live impact assessment. The impact assessment is better thought of as a database than as documentation, the term the Act uses. It is a database that needs to hold quite rich content, some of it requiring significant expertise to develop and maintain.
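To make the idea concrete, here is a minimal sketch in Python of what treating the impact assessment as a live, versioned database, rather than a static document, could look like. All class and field names here are illustrative assumptions on my part, not terms taken from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Illustrative only: one way to keep the impact assessment as a live,
# versioned store rather than a one-off document. Field names are
# assumptions, not terms defined by the Act.

@dataclass
class ImpactRecord:
    impact_id: str                 # stable identifier for the impact
    description: str               # what the impact is
    affected_group: str            # who is affected
    risk_level: str                # e.g. "low" / "medium" / "high"
    evidence: list[str] = field(default_factory=list)  # links to supporting analyses

@dataclass
class AssessmentVersion:
    version: int
    created_at: datetime
    author: str
    records: list[ImpactRecord]
    change_note: str               # what changed since the previous version

class ImpactAssessment:
    """A live impact assessment: every design change appends a new version."""

    def __init__(self) -> None:
        self._versions: list[AssessmentVersion] = []

    def publish(self, author: str, records: list[ImpactRecord], change_note: str) -> AssessmentVersion:
        version = AssessmentVersion(
            version=len(self._versions) + 1,
            created_at=datetime.utcnow(),
            author=author,
            records=records,
            change_note=change_note,
        )
        self._versions.append(version)
        return version

    def current(self) -> Optional[AssessmentVersion]:
        return self._versions[-1] if self._versions else None
```

The point of the sketch is the shape, not the code: each change to the system yields a new, attributable version of the assessment, so there is always a current answer to “what do we believe the impacts are, and why”.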

I highlight a few of the implications below, and will deal with more specific items in other notes.

  1. Ontology of impacts: Possible impacts need to be defined and described, which likely means that a classification of impacts is needed, together with some form of well-defined risk assessment (see the first sketch after this list). Note that the impact assessment consequently requires a structured approach to risk, something that may be foreign to many teams just starting to prototype AI systems they hope to bring to market.
  2. Ontology of the AI system’s domain: Definitions of impacts imply that there is, at the very least, a set of definitions for the key concepts in the AI system, those the system collects, manages, uses, and produces data about. Invariably, then, as the AI design matures, its ontology of impacts will grow alongside, and stay closely tied to, an ontology of the domain the AI system is about. Better to start developing that ontology as early as possible.
  3. Bridges between your ontology and the (future) ontology that regulatory bodies will use to structure information about impact assessments across many “covered entities”: The Act notes that, over time, the regulatory structures auditing impact assessments will aim to standardize and structure the information required from such assessments, to enable, quite expectedly, comparative analysis (see Section 3.b.2.iv).
  4. Stakeholder consultations, and consequently structured identification of stakeholders: The Act requires that impact assessments involve stakeholder consultations, which in turn requires well-defined processes for surveying the environment to identify stakeholders. The obvious challenge here is to ensure there is a rationale for why some parties are considered stakeholders and others are not (see the second sketch after this list). The more input and output data an AI system has, and the more varied that data is, the more complicated this becomes.
  5. Survey of related AI systems: The Act requires the evaluation of “any previously existing critical decision-making process used for the same critical decision prior to the deployment of the new augmented critical decision process”. This is a very complicated requirement in practice, as there are no clear incentives to protect the IP of AI systems publicly, through patents for example; it seems far more reasonable to treat them as trade secrets, partly because publishing their details raises risks to competitive position, and partly because there will likely be many iterations of the AI system, precisely for competitive reasons, implying an administrative overhead to keep public IP protection up to date. Consequently, it is unlikely that a team will find much information with which to compare their AI system to those of others. A potential solution is for regulatory bodies to develop registered and structured databases of AI systems subject to regulation, but whether that happens remains to be seen.
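To illustrate items 1 through 3, here is a minimal Python sketch, under the assumption that the ontology is kept as plain records: domain concepts the system holds data about, impact categories defined against those concepts, each carrying a risk rating and a placeholder for a bridge to whatever taxonomy a regulator eventually publishes. Every name in it is an assumption made for illustration, not something the Act prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

# Illustrative sketch of items 1-3 above: a small domain ontology whose
# concepts are linked to classified impacts, each with a risk rating and an
# optional mapping to an external (e.g. regulator-defined) taxonomy code.
# All names are assumptions made for illustration.

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class DomainConcept:
    name: str                      # e.g. "loan applicant", "credit score"
    definition: str
    data_sources: list[str] = field(default_factory=list)

@dataclass
class ImpactCategory:
    name: str                      # e.g. "denial of credit"
    definition: str
    affected_concepts: list[str]   # names of DomainConcept entries
    risk: RiskLevel
    external_code: Optional[str] = None  # bridge to a future regulator taxonomy

# Example entries for a hypothetical credit-scoring system
applicant = DomainConcept(
    name="loan applicant",
    definition="A person whose application the system scores.",
    data_sources=["application form", "credit bureau feed"],
)
denial = ImpactCategory(
    name="denial of credit",
    definition="The applicant is refused a loan on the basis of the score.",
    affected_concepts=[applicant.name],
    risk=RiskLevel.HIGH,
    external_code=None,  # to be filled once a regulator taxonomy exists
)
```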
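And a similar sketch for item 4: a stakeholder register entry that records not only who the stakeholder is, but why they are, or are not, considered one, traced back to the system’s input and output data. Again, the field names are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch of item 4 above: each entry carries an explicit
# rationale for inclusion or exclusion, tied to the data that connects the
# party to the system. Field names are assumptions for illustration.

@dataclass
class StakeholderEntry:
    name: str                       # person, group, or organization
    role: str                       # e.g. "data subject", "operator", "auditor"
    related_data: list[str]         # inputs/outputs that connect them to the system
    included: bool                  # considered a stakeholder or explicitly excluded
    rationale: str                  # why included or excluded
    consultation_record: list[str]  # references to consultation notes, if any

entry = StakeholderEntry(
    name="Loan applicants",
    role="data subject",
    related_data=["application form", "credit score output"],
    included=True,
    rationale="Decisions made by the system directly affect their access to credit.",
    consultation_record=[],
)
```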

Operating AI systems requires clear explanations of what the systems do, how they work, why they are intended to do what they do, and why they are designed the way they are. It requires investment in understanding impacts, filing evidence of that understanding, and publishing some of this information to the general public. These are rather new notions for most of the software industry, and especially for smaller organizations, where arguably the most interesting commercial innovation happens. It also means that some core concepts in software design, such as “minimum viable product”, need to be revisited in light of the significant additional requirements that the Act, and comparable legislation in other regions, impose.