
Algorithmic Accountability Act for AI Product Managers: Section 4

Section 4 provides requirements that shape how the impact assessment of an automated decision system's effects on consumers/users is to be done. This text follows my notes on Sections 1 and 2, and on Section 3, of the Algorithmic Accountability Act (2022 and 2023).

When (if?) the Act becomes law, it will apply across all kinds of software products, or more generally, products and services which rely in any way on algorithms to support decision making. This makes it necessary for any product manager whose products rely on any kind of algorithm, however implemented, to understand the details of the Act.

This is the third in a series of texts in which I provide a critical reading of the Algorithmic Accountability Act (2022 and 2023). I have held various product management positions in the past, for products and services in which software was a significant component and which, in almost all cases, supported non-trivial decisions; my notes are inevitably biased by that experience. If you have any questions, suggestions, or want to connect, email me at ivan@ivanjureta.com.

Part 3 of Peter Paul Rubens – The Fall of Phaeton (National Gallery of Art) https://en.wikipedia.org/wiki/The_Fall_of_Phaeton_(Rubens)
Algorithmic Accountability Act (2022 and 2023): implications for product management
SEC. 4. REQUIREMENTS FOR COVERED ENTITY IMPACT ASSESSMENT.
(a) Requirements For Impact Assessment.—In performing any impact assessment required under section 3(b)(1) for an automated decision system or augmented critical decision process, a covered entity shall do the following, to the extent possible, as applicable to such covered entity as determined by the Commission:
Section 4 provides requirements that your impact assessment process needs to satisfy.
It is important to think of impact assessment not as a one-time or infrequent activity; it is much easier to integrate it into the product development process. It becomes part of the job for many involved in product development, helps raise awareness, and thereby contributes to mitigating risks of undesirable impacts. In short, consider these as requirements on the product development process, when the product is or includes an automated decision system (as defined in the Act – see my first note on the Act’s implications for product managers).
(1) In the case of a new augmented critical decision process, evaluate any previously existing critical decision-making process used for the same critical decision prior to the deployment of the new augmented critical decision process, along with any related documentation or information, such as—
(A) a description of the baseline process being enhanced or replaced by the augmented critical decision process;
(B) any known harm, shortcoming, failure case, or material negative impact on consumers of the previously existing process used to make the critical decision;
(C) the intended benefits of and need for the augmented critical decision process; and
(D) the intended purpose of the automated decision system or augmented critical decision process.
This requirement introduces the need to document differences from similar augmented decision processes. There are two possible readings. One is that the differences are only those relative to the augmented decision processes your product(s) had in place, not those of others. Another is that differences relative to other companies’ augmented decision processes also need to be documented. Evidently, the level of detail will differ between the two cases, since access to information differs in each.
The rest of the requirement is normally met through documentation of the business case for the new / changed augmented decision process.
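One practical way to keep this documentation consistent is a structured record per critical decision, mirroring (1)(A) through (D). The sketch below is illustrative only; the field names and example content are my own assumptions, not terms from the Act.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BaselineProcessRecord:
    """Documents the process being replaced or enhanced, per Sec. 4(a)(1)(A)-(D)."""
    critical_decision: str                                   # the critical decision being made
    baseline_description: str                                # (A) description of the baseline process
    known_harms: List[str] = field(default_factory=list)     # (B) known harms, shortcomings, failure cases
    intended_benefits: str = ""                              # (C) intended benefits of and need for the new process
    intended_purpose: str = ""                               # (D) intended purpose of the system or process

# Example entry (illustrative content only)
record = BaselineProcessRecord(
    critical_decision="loan pre-approval",
    baseline_description="manual review by credit officers using a fixed checklist",
    known_harms=["inconsistent outcomes across branches", "multi-day turnaround"],
    intended_benefits="consistent scoring and same-day decisions",
    intended_purpose="rank applications and recommend approve/decline to a human reviewer",
)
```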
(2) Identify and describe any consultation with relevant stakeholders as required by section 3(b)(1)(G), including by documenting—
(A) the points of contact for the stakeholders who were consulted;
(B) the date of any such consultation; and
(C) information about the terms and process of the consultation, such as—
(i) the existence and nature of any legal or financial agreement between the stakeholders and the covered entity;
(ii) any data, system, design, scenario, or other document or material the stakeholder interacted with; and
(iii) any recommendations made by the stakeholders that were used to modify the development or deployment of the automated decision system or augmented critical decision process, as well as any recommendations not used and the rationale for such nonuse.
Keep a log of all data collected from stakeholders, be they users or otherwise impacted parties, and document the methodology followed to collect, store, and use that data. Note the emphasis on legal and financial agreements, implying that you should clarify any concrete incentives in place.

An interesting question is whether data generated by users while using the automated decision system counts as a stakeholder consultation; users are a key stakeholder type, and their interactions with the system are indicative of their behavior and preferences.
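A simple way to satisfy (2) is one record per consultation, covering (A) through (C). The following is a minimal sketch; the field names and example content are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class StakeholderConsultation:
    """One consultation record, mirroring Sec. 4(a)(2)(A)-(C)."""
    stakeholder_contact: str                                            # (A) point of contact
    consultation_date: date                                             # (B) date of the consultation
    agreements: List[str] = field(default_factory=list)                 # (C)(i) legal/financial agreements
    materials_shared: List[str] = field(default_factory=list)           # (C)(ii) data, designs, scenarios shown
    recommendations_adopted: List[str] = field(default_factory=list)    # (C)(iii) recommendations used
    recommendations_rejected: List[str] = field(default_factory=list)   # (C)(iii) recommendations not used
    rejection_rationale: Optional[str] = None                           # (C)(iii) rationale for nonuse

consultation_log: List[StakeholderConsultation] = [
    StakeholderConsultation(
        stakeholder_contact="consumer advocacy group, policy@example.org",
        consultation_date=date(2024, 3, 12),
        agreements=["non-disclosure agreement, unpaid"],
        materials_shared=["UI mock-ups", "synthetic sample of model inputs"],
        recommendations_adopted=["add a plain-language notice before the decision screen"],
        recommendations_rejected=["remove income as an input"],
        rejection_rationale="income is required by the underwriting policy; mitigated via fairness testing",
    )
]
```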
(3) In accordance with any relevant National Institute of Standards and Technology or other Federal Government best practices and standards, perform ongoing testing and evaluation of the privacy risks and privacy-enhancing measures of the automated decision system or augmented critical decision process, such as—
(A) assessing and documenting the data minimization practices of such system or process and the duration for which the relevant identifying information and any resulting critical decision is stored;
(B) assessing the information security measures in place with respect to such system or process, including any use of privacy-enhancing technology such as federated learning, differential privacy, secure multi-party computation, de-identification, or secure data enclaves based on the level of risk; and
(C) assessing and documenting the current and potential future or downstream positive and negative impacts of such system or process on the privacy, safety, or security of consumers and their identifying information.
See the NIST Privacy Framework in particular, which introduces many additional requirements. I don’t cover these here, as they require much more space.
(4) Perform ongoing testing and evaluation of the current and historical performance of the automated decision system or augmented critical decision process using measures such as benchmarking datasets, representative examples from the covered entity’s historical data, and other standards, including by documenting—
(A) a description of what is deemed successful performance and the methods and technical and business metrics used by the covered entity to assess performance;
It is useful to break this requirement into components.

(4)(A) 1: “a description of what is deemed successful performance” requires a clear and complete definition of what qualifies as desirable output of the augmented decision process. Where outputs are not fully predictable (as with mainstream Large Language Models), probability distributions over possible outputs would ideally be provided; even that may not be feasible, in which case it is necessary to explain why only a partial definition of desirable outputs can be provided.

(4)(A) 2: “business metrics used by the covered entity to assess performance” requires a documented inventory of metrics, as well as explanations of how metrics correlate with the financial performance of the company operating the product.
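A minimal sketch of such a metrics inventory follows; the metric names, thresholds, and business links are invented for illustration and are not prescribed by the Act.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PerformanceMetric:
    """One entry in the metrics inventory for Sec. 4(a)(4)(A)."""
    name: str
    definition: str          # how the metric is computed
    success_threshold: str   # what counts as successful performance
    business_link: str       # how the metric relates to financial performance

metrics_inventory: List[PerformanceMetric] = [
    PerformanceMetric(
        name="precision_at_approval",
        definition="share of system-recommended approvals later confirmed by human review",
        success_threshold=">= 0.95 on the quarterly benchmark dataset",
        business_link="low precision increases manual rework cost and charge-off risk",
    ),
    PerformanceMetric(
        name="decision_turnaround_hours",
        definition="median time from application submission to final decision",
        success_threshold="<= 24 hours in deployed conditions",
        business_link="faster decisions correlate with higher application completion rates",
    ),
]
```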
(B) a review of the performance of such system or process under test conditions or an explanation of why such performance testing was not conducted;
(C) a review of the performance of such system or process under deployed conditions or an explanation of why performance was not reviewed under deployed conditions;
It is necessary to have meetings, reports, internal decisions, roles and responsibilities, and processes in place to ensure that product performance reviews are done. (B) requires these at design and development time, (C) for deployed versions.
(D) a comparison of the performance of such system or process under deployed conditions to test conditions or an explanation of why such a comparison was not possible;
(D) is about tying design intent to actual performance, that is, about comparing the performance of the deployed system to its documented performance during testing of that version, prior to deployment.
(E) an evaluation of any differential performance associated with consumers’ race, color, sex, gender, age, disability, religion, family status, socioeconomic status, or veteran status, and any other characteristics the Commission deems appropriate (including any combination of such characteristics) for which the covered entity has information, including a description of the methodology for such evaluation and information about and documentation of the methods used to identify such characteristics in the data (such as through the use of proxy data, including ZIP Codes); and
The assumption in Section 4 is that the automated decision system is tested using customer data. Consequently, (E) requires that parameters describing customers, such as those listed in that paragraph of the Act, be used to show how the system’s outputs differ when the values of these parameters change. In practice, test cases need to be defined to cover different sub-populations; a minimal sketch of such an evaluation follows after (F) below. The part on “methods used to identify such characteristics in the data” covers cases where customers’ characteristics were not directly available in the data, but were computed in some way from data that does not explicitly capture them (as mentioned in the Act, inferring socioeconomic status from a postal code).
(F) if any subpopulations were used for testing and evaluation, a description of which subpopulations were used and how and why such subpopulations were determined to be of relevance for the testing and evaluation.
If the test cases used for evaluation covered specific subpopulations, then it is necessary to document why those subpopulations were chosen, for example in terms of cost, and why testing on them is considered sufficient.
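As a concrete illustration of the differential performance evaluation in (E) and the subpopulation documentation in (F), the sketch below computes one metric (approval rate among eligible consumers) per subpopulation on synthetic test cases; the group labels, the metric, and the threshold logic are assumptions for illustration only.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def differential_performance(records: List[Tuple[str, bool, bool]]) -> Dict[str, float]:
    """Approval rate per subpopulation.

    Each record is (group_label, system_approved, ground_truth_eligible).
    The metric here is the approval rate among eligible consumers per group;
    other metrics (false positive rate, precision, ...) follow the same pattern.
    """
    eligible = defaultdict(int)
    approved = defaultdict(int)
    for group, system_approved, is_eligible in records:
        if is_eligible:
            eligible[group] += 1
            if system_approved:
                approved[group] += 1
    return {g: approved[g] / eligible[g] for g in eligible if eligible[g] > 0}

# Illustrative, synthetic test cases covering two subpopulations
test_cases = [
    ("group_a", True, True), ("group_a", False, True), ("group_a", True, True),
    ("group_b", True, True), ("group_b", False, True), ("group_b", False, True),
]
rates = differential_performance(test_cases)
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", round(gap, 2))  # flag for review if the gap exceeds a documented threshold
```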
(5) Support and perform ongoing training and education for all relevant employees, contractors, or other agents regarding any documented material negative impacts on consumers from similar automated decision systems or augmented critical decision processes and any improved methods of developing or performing an impact assessment for such system or process based on industry best practices and relevant proposals and publications from experts, such as advocates, journalists, and academics.
The impact assessment should include a risk assessment of the impacts, as well as mitigation strategies. Both risks and mitigation strategies need to be communicated to anyone working on the design and operation of the automated decision system, so that they understand both, and understand any responsibilities they may have in risk mitigation. Practically, this means that every impact assessment, and any subsequent change in the product’s risk assessment, should lead to standardized communications to staff and other parties as deemed necessary, with a mechanism that captures their sign-off after they have read or reviewed that material.
Instances when risks are realized should be documented, as well as any decisions on remediating actions. All of these will serve as evidence during audits.
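A risk register is one common way to keep these records together: risks, mitigations, owners, staff sign-offs, and realized incidents. The sketch below is illustrative; the field names and content are my own assumptions, not terms from the Act.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskEntry:
    """One entry in the product risk register; illustrative field names."""
    description: str                                          # the likely material negative impact
    affected_group: str                                       # who is impacted
    mitigation: str                                           # mitigation strategy, or rationale for no mitigation
    owner: str                                                # role responsible for the mitigation
    staff_signoffs: List[str] = field(default_factory=list)   # who acknowledged the communication, and when
    incidents: List[str] = field(default_factory=list)        # dated notes when the risk was realized

risk_register: List[RiskEntry] = [
    RiskEntry(
        description="model under-recommends approval for thin-credit-file applicants",
        affected_group="applicants with short credit histories",
        mitigation="route all thin-file cases to human review",
        owner="product manager, decisioning",
        staff_signoffs=["design lead, 2024-05-02", "ops lead, 2024-05-03"],
        incidents=["2024-06-10: elevated decline rate detected; routing rule updated"],
    )
]
```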
(6) Assess the need for and possible development of any guard rail for or limitation on certain uses or applications of the automated decision system or augmented critical decision process, including whether such uses or applications ought to be prohibited or otherwise limited through any terms of use, licensing agreement, or other legal agreement between entities.
Terms and Conditions should state what the automated decision system should not be used for. Exclusions of this kind make sense only for cases where the system could be used, but doing so would lead to negative impacts. In other words, if you did the risk assessment, and there are risks whose impact could be damaging enough (based on thresholds you set and have a documented rationale for), then Terms and Conditions could limit use in the situations that can lead to those severe risks being realized.

It is worth documenting whether and how you can verify, and perhaps already are verifying, that the system has been used in such situations; that is, which violations of the Terms and Conditions you can detect and whether you are detecting them, as well as which violations cannot be detected and why.
(7) Maintain and keep updated documentation of any data or other input information used to develop, test, maintain, or update the automated decision system or augmented critical decision process, including—
(7) introduces the need for clear and documented data governance for the product that incorporates the automated decision system.
(A) how and when such data or other input information was sourced and, if applicable, licensed, including information such as—
(i) metadata and information about the structure and type of data or other input information, such as the file type, the date of the file creation or modification, and a description of data fields;
(ii) an explanation of the methodology by which the covered entity collected, inferred, or obtained the data or other input information and, if applicable, labeled, categorized, sorted, or clustered such data or other input information, including whether such data or other input information was labeled, categorized, sorted, or clustered prior to being collected, inferred, or obtained by the covered entity; and
(iii) whether and how consumers provided informed consent for the inclusion and further use of data or other input information about themselves and any limitations stipulated on such inclusion or further use;
Although requirement (i) is self-explanatory, the interesting point is that having an ontology of the domain the data is about should help relate the required descriptions of the data to the use of that data in the decision process.

Requirement (ii) makes it necessary to keep a log of all data sources used and the rationale for using them, and requirement (iii) adds the customer’s consent to that log.
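One way to keep this log is a record per data source that mirrors (A)(i) through (iii); the sketch below uses illustrative field names and content of my own choosing.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class DataSourceRecord:
    """One data source used by the system, mirroring Sec. 4(a)(7)(A)(i)-(iii)."""
    name: str
    file_type: str                      # (i) structure and type of the data
    created_or_modified: date           # (i) file creation or modification date
    fields_description: str             # (i) description of data fields
    collection_method: str              # (ii) how it was collected, inferred, or obtained
    labeling_method: Optional[str]      # (ii) how it was labeled/categorized, if applicable
    consent_basis: str                  # (iii) how consumers gave informed consent
    usage_limitations: List[str] = field(default_factory=list)  # (iii) stipulated limitations

data_sources: List[DataSourceRecord] = [
    DataSourceRecord(
        name="application_form_responses",
        file_type="parquet",
        created_or_modified=date(2024, 4, 1),
        fields_description="applicant-entered fields: income, employment, requested amount",
        collection_method="collected directly from the application web form",
        labeling_method=None,
        consent_basis="checkbox consent at submission, versioned consent text stored",
        usage_limitations=["not to be used for marketing"],
    )
]
```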
(B) why such data or other input information was used and what alternatives were explored; and
This is normally covered by the documented design of the system and by its business case.
(C) other information about the data or other input information, such as—
(i) the representativeness of the dataset and how this factor was measured, including any assumption about the distribution of the population on which the augmented critical decision process is deployed; and
(ii) the quality of the data, how the quality was evaluated, and any measure taken to normalize, correct, or clean the data.
While requirement (ii) is addressed through data governance, (i) requires a clear definition of the applicability of the system both to decisions (i.e., which types of decisions it applies to, which will be determined by the allowed range of parameters used by the system to make a recommendation and/or a decision) and to customers (i.e., the ranges of values of parameters describing the customers who are using the system).
(8) Evaluate the rights of consumers, such as—
(A) by assessing the extent to which the covered entity provides consumers with—
(i) clear notice that such system or process will be used; and
(ii) a mechanism for opting out of such use;
(i) and (ii) are satisfied if it is clearly communicated to the customer that they can decide whether the system will be used, and, if it is used, that the recommendation or decision was made by the system. The issue here is, of course, the level of transparency the customer is given into what the system did to produce the output they see; this leads into the topic of explainability of AI, or more specifically of the automated decision system.
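A minimal sketch of enforcing notice and opt-out at the point of decision follows; the `consumer_prefs` store and the `decide_automatically` and `route_to_manual_review` functions are hypothetical placeholders, not APIs from the Act or any particular system.

```python
# Illustrative names only: consumer_prefs, decide_automatically, and
# route_to_manual_review are assumptions for this sketch.
consumer_prefs = {"user-123": {"opted_out": False, "notified": True}}

def decide_automatically(application: dict) -> dict:
    return {"decision": "approve", "made_by": "automated decision system"}

def route_to_manual_review(application: dict) -> dict:
    return {"decision": "pending", "made_by": "human reviewer"}

def handle_application(user_id: str, application: dict) -> dict:
    prefs = consumer_prefs.get(user_id, {})
    if not prefs.get("notified"):
        # Sec. 4(a)(8)(A)(i): clear notice that the system will be used
        raise RuntimeError("consumer has not been shown the required notice")
    if prefs.get("opted_out"):
        # Sec. 4(a)(8)(A)(ii): mechanism for opting out of such use
        return route_to_manual_review(application)
    result = decide_automatically(application)
    result["notice"] = "this decision was produced by an automated decision system"
    return result

print(handle_application("user-123", {"amount": 1000}))
```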
(B) by assessing the transparency and explainability of such system or process and the degree to which a consumer may contest, correct, or appeal a decision or opt out of such system or process, including—
(i) the information available to consumers or representatives or agents of consumers about the system or process, such as any relevant factors that contribute to a particular decision, including an explanation of which contributing factors, if changed, would cause the system or process to reach a different decision, and how such consumer, representative, or agent can access such information;
(ii) documentation of any complaint, dispute, correction, appeal, or opt-out request submitted to the covered entity by a consumer with respect to such system or process; and
(iii) the process and outcome of any remediation measure taken by the covered entity to address the concerns of or harms to consumers; and
A process and metrics are needed to evaluate and report on explainability, which in turn implies a working definition of “explainability”. Items under (B) introduce constraints on how explainability needs to be defined, namely:
(i) requires that there is information available to customers on parameters of decisions, and about the relationships between parameters and decisions/recommendations;
(ii) requires records of customer feedback, focusing on the negative; from a product management perspective, both positive and negative feedback matter, as does the absence of any feedback; for records of, say, complaints to exist, there needs to be not only a mechanism to collect or receive feedback, but also one to assess it;
(iii) requires that there is a process to define corrective actions based on negative feedback.
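As one possible way to produce the factor-level information in (B)(i), the sketch below probes alternative input values and reports which factors, if changed, would flip the decision. The decision rule and probe values are invented for illustration; a real system would call the deployed model instead.

```python
from typing import Callable, Dict, List

def contributing_factors(
    decide: Callable[[Dict[str, float]], str],
    inputs: Dict[str, float],
    alternatives: Dict[str, List[float]],
) -> List[str]:
    """Report which factors, if changed, would change the decision (Sec. 4(a)(8)(B)(i)).

    `decide` is the decision function, `inputs` the consumer's actual values,
    and `alternatives` the candidate values to probe per factor.
    """
    baseline = decide(inputs)
    findings = []
    for factor, candidates in alternatives.items():
        for value in candidates:
            probe = dict(inputs, **{factor: value})
            if decide(probe) != baseline:
                findings.append(f"changing {factor} to {value} would change the decision from {baseline}")
                break
    return findings

# Illustrative decision rule and probe values
decide = lambda x: "approve" if x["income"] >= 3000 and x["debt_ratio"] <= 0.4 else "decline"
print(contributing_factors(
    decide,
    inputs={"income": 2500, "debt_ratio": 0.3},
    alternatives={"income": [3000, 3500], "debt_ratio": [0.2]},
))
```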
(C) by describing the extent to which any third-party decision recipient receives a copy of or has access to the results of such system or process and the category of such third-party decision recipient, as defined by the Commission in section 3(b)(1)(I)(iii).
For all third parties having access to the automated decision system’s outputs, or any kind of logs of operation, it is necessary to document what specifically they have access to.
(9) Identify any likely material negative impact of the automated decision system or augmented critical decision process on consumers and assess any applicable mitigation strategy, such as by—
(A) identifying and measuring any likely material negative impact of the system or process on consumers, including documentation of the steps taken to identify and measure such impact;
(B) documenting any steps taken to eliminate or reasonably mitigate any likely material negative impact identified, including steps such as removing the system or process from the market or terminating its development;
(C) with respect to the likely material negative impacts identified, documenting which such impacts were left unmitigated and the rationale for the inaction, including details about the justifying non-discriminatory, compelling interest and why such interest cannot be satisfied by other means (such as where there is an equal, zero-sum trade-off between impacts on 2 or more consumers or where the required mitigating action would violate civil rights or other laws); and
(D) documenting standard protocols or practices used to identify, measure, mitigate, or eliminate any likely material negative impact on consumers and how relevant teams or staff are informed of and trained about such protocols or practices.
(9) requires that a risk assessment of the impacts on customers is part of the impact assessment; the points under (9) are straightforward as far as establishing a risk assessment framework is concerned. Mitigation strategies are likely to have an impact on the product roadmap; for example, they may lead to restricting the automated decision system to a subset of its prior parameters, due to unanticipated negative impacts observed for a subpopulation of customers.
(10) Describe any ongoing documentation of the development and deployment process with respect to the automated decision system or augmented critical decision process, including information such as—
(A) the date of any testing, deployment, licensure, or other significant milestones; and
(B) points of contact for any team, business unit, or similar internal stakeholder that was involved.
The product development, release, and operations processes need to be documented.
(11) Identify any capabilities, tools, standards, datasets, security protocols, improvements to stakeholder engagement, or other resources that may be necessary or beneficial to improving the automated decision system, augmented critical decision process, or the impact assessment of such system or process, in areas such as—
(A) performance, including accuracy, robustness, and reliability;
(B) fairness, including bias and nondiscrimination;
(C) transparency, explainability, contestability, and opportunity for recourse;
(D) privacy and security;
(E) personal and public safety;
(F) efficiency and timeliness;
(G) cost; or
(H) any other area determined appropriate by the Commission.
A documented product roadmap is needed, where improvements are also described in terms of their role in mitigating risks to customers, improving system efficiency, and improving system effectiveness (i.e., the quality of outputs to customers), among others.
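One way to make that link explicit is to tag each roadmap item with the improvement areas from (11)(A) through (H) and with the risk it mitigates; the sketch below uses illustrative item titles and an assumed link to a risk register.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class ImprovementArea(Enum):
    """Areas listed in Sec. 4(a)(11)(A)-(H)."""
    PERFORMANCE = "performance"
    FAIRNESS = "fairness"
    TRANSPARENCY = "transparency, explainability, contestability, recourse"
    PRIVACY_SECURITY = "privacy and security"
    SAFETY = "personal and public safety"
    EFFICIENCY = "efficiency and timeliness"
    COST = "cost"
    OTHER = "other"

@dataclass
class RoadmapItem:
    title: str
    areas: List[ImprovementArea]    # which Sec. 4(a)(11) areas the item improves
    risk_mitigated: str             # link to the risk register entry it addresses, if any

roadmap: List[RoadmapItem] = [
    RoadmapItem(
        title="add counterfactual explanations to the decision screen",
        areas=[ImprovementArea.TRANSPARENCY],
        risk_mitigated="consumers cannot contest decisions effectively",
    ),
    RoadmapItem(
        title="quarterly re-benchmarking on refreshed, representative data",
        areas=[ImprovementArea.PERFORMANCE, ImprovementArea.FAIRNESS],
        risk_mitigated="performance drift across subpopulations",
    ),
]
```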
(12) Document any of the impact assessment requirements described in paragraphs (1) through (11) that were attempted but were not possible to comply with because they were infeasible, as well as the corresponding rationale for not being able to comply with such requirements, which may include—
(A) the absence of certain information about an automated decision system developed by other persons, partnerships, and corporations;
(B) the absence of certain information about how clients, customers, licensees, partners, and other persons, partnerships, or corporations are deploying an automated decision system in their augmented critical decision processes;
(C) a lack of demographic or other data required to assess differential performance because such data is too sensitive to collect, infer, or store; or
(D) a lack of certain capabilities, including technological innovations, that would be necessary to conduct such requirements.
This is a curious requirement, as it allows the covered entity to postpone mitigation strategies, as long as there is an explanation of why this is done. It is likely that over time, the Commission will define criteria for the acceptability of deferring specific kinds of improvements (e.g., some negative impacts will be such that they must be resolved and the product will be taken off the market until this is done).
(13) Perform and document any other ongoing study or evaluation determined appropriate by the Commission.
This is a catch-all for any requirement the Commission may introduce: reporting that may be specific to a covered entity, or to covered entities meeting some criterion that not all covered entities meet, where the Commission considers that criterion sufficient grounds for additional reporting requirements.
(b) Rule Of Construction.—Nothing in this Act should be construed to limit any covered entity from adding other criteria, procedures, or technologies to improve the performance of an impact assessment of their automated decision system or augmented critical decision process.
This makes the Act a minimal set of requirements, clarifying that covered entities can add their own AI governance requirements on top of it.
(c) Nondisclosure Of Impact Assessment.—Nothing in this Act should be construed to require a covered entity to share with or otherwise disclose to the Commission or the public any information contained in an impact assessment performed in accordance with this Act, except for any information contained in the summary report required under subparagraph (D) or (E) of section 3(b)(1).
The covered entity is not required to disclose the information produced through impact assessments, except what is included in the summary report. This does not remove the need to keep records of impact assessments, in case of audits.

The above covers only Section 4 of the Act. Sections 1 and 2 are covered here, and Section 3 is covered here. Texts covering other Sections are coming soon.
