This three-part series outlines the challenges and actions that Boards of Directors must address as they guide their organizations' responsible and ethical deployment of Artificial Intelligence (AI). Part one covered mitigating the impacts of AI Confirmation Bias. Part two explained unintended consequences and how to identify them. Part three concludes by introducing the “Unintended Consequences Assessment” design template and the Board of Directors Responsible AI checklist.
In the previous two blogs, we explored the ramifications and risks associated with poorly constructed and vetted AI deployments, including:
- Confirmation Bias from poorly constructed AI models that deliver irrelevant, biased, or unethical outcomes and undermine longer-term operational viability.
- Unintended Consequences from decision-makers acting without consideration of the second- and third-order (and fourth- and fifth-order…) ramifications of what “might” go wrong.
Let’s introduce a design canvas that can help us identify, cognize, and codify the risks (as variables and metrics to include in the AI Utility Function) associated with the potential unintended consequences of poorly contemplated AI initiatives.
Updated Unintended Consequences Design Template
In my blog post titled “Economics of Ethics: Is Ethics Ultimately an Economics Conversation? Part III,” I introduced the “Economics of Ethics” design template. However, after testing the design canvas in classrooms and workshops, I realized it was clumsy for identifying and assessing the potential unintended consequences of AI initiatives.
Consequently, I modified the design template to focus on assessing one “unintended consequences” scenario per template in support of an organization’s broader AI initiative. This approach makes it easier to identify and evaluate a wider variety of potential scenarios, which is vital for surfacing and exploring the “unknown unknown” scenarios that might result from the application of AI.
Figure 1 shows a completed template for the grade school “Head Start” initiative and explores the scenario where the targeted constituents are offended by the initiative, which could sow community distrust and invite political grandstanding.
Figure 1: Unintended Consequences Assessment Design Template
Note: The Unintended Consequences Assessment design template is relevant for any organization or government agency in identifying and contemplating the potential unintended consequences of any vital initiative or legislation. For example, the article “11 years into Oklahoma’s affirmative action ban, the state has seen some unintended consequences” describes government legislation that was intended to accomplish one set of objectives but unfortunately delivered unintended negative results.
For organizations looking to employ AI, however, the Unintended Consequences Assessment design template is particularly useful in guiding the organization to identify and comprehend the possible unforeseen outcomes of implementing AI. The design template forces the organization to understand, ideate, and assess the following (a simple sketch of the template’s structure follows below):
- Initiative: A detailed description of the business or operational initiative against which one plans to apply AI, the initiative’s objectives and desired outcomes, the impacted stakeholders, and the KPIs and metrics against which initiative progress and success will be measured.
- Scenario: Identify or brainstorm a scenario or outcome that might arise from the execution of the initiative. It is critical to engage a diverse set of internal and external stakeholders – those who either impact or are impacted by the initiative – in brainstorming the different ways the initiative or legislation could go wrong. You will create a separate template per scenario, so the more scenarios, the better. Besides, we have plenty of paper!
- Potential Outcomes: For each scenario, identify the factors or outcomes that might impact that scenario as viewed across financial, operational, stakeholder (customer, employee, partner), societal, environmental, and ethical perspectives.
- Variables and Metrics: For each outcome, identify the short-term and long-term variables and metrics we might want to employ to monitor, manage, and optimize those outcomes.
The goal of this exercise is to identify and explore the “unknown unknown” scenarios that might result from the application of AI. And remember, during this process, all ideas are worthy of consideration (one of the Schmarzo fundamental tenets).
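To make the template’s structure concrete, here is a minimal sketch (in Python, with hypothetical class and field names that are not part of the original template) of how one completed assessment per scenario might be captured. The values echo the Figure 1 example and are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure: one instance captures one "unintended consequences" scenario.

@dataclass
class OutcomeMetric:
    name: str         # e.g., "Community sentiment score"
    horizon: str      # "short-term" or "long-term"
    perspective: str  # financial, operational, stakeholder, societal, environmental, ethical

@dataclass
class PotentialOutcome:
    description: str  # what could happen within this scenario
    perspective: str  # the lens through which the outcome is viewed
    metrics: List[OutcomeMetric] = field(default_factory=list)

@dataclass
class UnintendedConsequencesAssessment:
    initiative: str   # initiative description, objectives, stakeholders, KPIs
    scenario: str     # one way the initiative could go wrong
    outcomes: List[PotentialOutcome] = field(default_factory=list)

# Illustrative values loosely based on the Figure 1 "Head Start" example
head_start = UnintendedConsequencesAssessment(
    initiative="Grade school 'Head Start' readiness initiative",
    scenario="Targeted constituents are offended by the initiative",
    outcomes=[
        PotentialOutcome(
            description="Community distrust and political grandstanding",
            perspective="societal",
            metrics=[
                OutcomeMetric("Community sentiment score", "short-term", "societal"),
                OutcomeMetric("Program enrollment rate", "long-term", "stakeholder"),
            ],
        )
    ],
)
```

One design note: keeping one scenario per assessment object mirrors the one-scenario-per-template rule above, so exploring more scenarios simply means creating more instances.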
Board of Directors AI Challenge Checklist
Board of Directors Challenge: What Role Does the Board of Directors Play in Guiding Senior Management in Balancing AI-based Business Opportunities vs. Societal Risks?
Solution: Ensure the Board asks senior management the right questions about the company’s AI and data strategy from the broader economic frame, considering the costs and risks associated with AI model confirmation bias and potential unintended consequences.
Here is a starter checklist that a Board of Directors should contemplate to ensure AI’s ethical and responsible deployment. No doubt that this list will grow with experience, so please share what you are learning about the questions that the Board of Directors should be asking senior management.
AI Operational Effectiveness Checklist
- Have you clearly defined, triaged, vetted, and socialized the targeted business initiative you are addressing, including the desired outcomes and the KPIs and metrics against which the desired outcomes will be measured?
- Have you identified and gained input from a diverse set of internal and external stakeholders – those stakeholders who either impact or are impacted by the initiative – concerning their desired outcomes and the KPIs and metrics against which they will measure outcomes’ effectiveness?
- Have you identified a comprehensive and diverse range of variables and metrics for measuring the outcomes’ relevance and effectiveness? These should include a diverse set of financial, operational, customer, employee, partner, societal, community, environmental, and ethical metrics.
- Have you identified conflicting variables and metrics to force the AI models to make the needed trade-off decisions as they seek to deliver relevant, meaningful, responsible, and ethical outcomes within a dynamic operating environment (see the utility-function sketch after this checklist)?
- Have you focused on identifying, measuring, and analyzing leading indicators instead of lagging indicators? Have you triaged the key lagging indicators to determine their causal leading indicators?
- Have you integrated the organization’s extensive economic charter and associated variables and metrics into your AI Utility Function? Such variables might include social growth and prosperity, educational equality, affordable healthcare, affordable housing, employment growth, economic growth, and environmental sustainability.
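To illustrate what an “AI Utility Function” with conflicting variables can look like in practice, here is a minimal sketch. The metric names, weights, and normalization assumptions are hypothetical, not the author’s implementation; the point is that pricing ethical, environmental, and stakeholder costs into the function forces explicit trade-off decisions.

```python
from typing import Dict

# Hypothetical weights expressing the trade-offs leadership is willing to make.
# Negative weights penalize outcomes we want less of (costs); all metric values
# are assumed to be normalized to the range [0, 1].
WEIGHTS: Dict[str, float] = {
    "revenue_growth": 0.30,          # financial
    "operational_efficiency": 0.20,  # operational
    "customer_satisfaction": 0.20,   # stakeholder
    "employee_wellbeing": 0.10,      # stakeholder
    "environmental_impact": -0.10,   # environmental cost (higher is worse)
    "bias_disparity": -0.10,         # ethical cost (higher is worse)
}

def utility(metrics: Dict[str, float]) -> float:
    """Score a candidate decision against the weighted, conflicting metrics."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

# Two candidate actions: the higher-revenue option scores lower once the
# ethical and environmental costs are included in the utility function.
aggressive = {"revenue_growth": 0.9, "operational_efficiency": 0.7,
              "customer_satisfaction": 0.6, "employee_wellbeing": 0.4,
              "environmental_impact": 0.8, "bias_disparity": 0.7}
balanced = {"revenue_growth": 0.7, "operational_efficiency": 0.7,
            "customer_satisfaction": 0.8, "employee_wellbeing": 0.7,
            "environmental_impact": 0.3, "bias_disparity": 0.2}

print(round(utility(aggressive), 3))  # 0.42
print(round(utility(balanced), 3))    # 0.53
```

The weights are where the Board-level conversation happens: changing them changes which outcomes the AI model will favor.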
AI Risk Mitigation Checklist
- Have you asked and explored the second- and third-order questions and potential consequences to understand the possible economic ramifications of AI-based initiatives?
- Have you empowered the broader organization, especially the front lines of the organization, to voice any concerns they might have with the second- and third-order ramifications of the AI-based initiative?
- Have you empowered the broader organization to share any potential variables and metrics that might be appropriate in monitoring and measuring the associated economic impact of the AI-based initiative?
- Have you explored and clarified the potential ramifications of initiative failure from the perspectives of financial, operational, customer, employee, partner, societal, community, environmental, and ethical factors?
- Have you identified and integrated the KPIs and metrics to monitor and mitigate the impact of business initiative failure into your management and operational systems?
- Have you formalized the identification and quantification of the costs and risks associated with AI model confirmation bias?
- Have you instrumented and audited the AI models to continuously learn from their False Positives and False Negatives (see the monitoring sketch after this checklist)?
- Have you identified and accounted for proxy measures for protected classes to minimize model biases in designing and developing AI models?
- Have you operationalized the identification and exploration of the potential unintended consequences of the AI model’s decisions from the financial, operational, customer, employee, partner, community, societal, environmental, and ethical perspectives?
- Have you integrated measures to flag and monitor these potential unintended consequences into your management and operational systems?
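As a concrete (and deliberately simplified) illustration of the instrumentation items above, here is a sketch of two monitoring helpers: one that tracks False Positive and False Negative rates for a binary-decision AI model, and one that flags numeric features that may act as proxies for a protected class. The column names and the 0.4 correlation threshold are hypothetical assumptions, not a complete audit framework.

```python
import numpy as np
import pandas as pd

def confusion_rates(y_true: pd.Series, y_pred: pd.Series) -> dict:
    """Return False Positive and False Negative rates for a binary classifier."""
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    negatives = int((y_true == 0).sum())
    positives = int((y_true == 1).sum())
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }

def flag_proxy_features(df: pd.DataFrame, protected_col: str, threshold: float = 0.4) -> list:
    """Flag numeric features whose correlation with a protected attribute exceeds a threshold."""
    flags = []
    for col in df.select_dtypes(include=[np.number]).columns:
        if col == protected_col:
            continue
        corr = df[col].corr(df[protected_col])
        if pd.notna(corr) and abs(corr) >= threshold:
            flags.append((col, round(float(corr), 2)))
    return flags

# Example usage with hypothetical column names: run these on every scoring batch and
# feed the results into the management and operational dashboards noted above.
# rates = confusion_rates(batch["actual_outcome"], batch["model_decision"])
# proxies = flag_proxy_features(batch.drop(columns=["model_decision"]), "protected_class_flag")
```

In practice, trends in these rates over time (rather than single snapshots) are what surface drift, bias, and the early signs of unintended consequences.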
And while I called this a checklist, it is more than just checking boxes. The Board of Directors and senior management will be held accountable for ensuring that these questions are being asked, voices are being heard, and plans and actions are being put into place to mitigate the dangers associated with AI’s careless and unethical application. So think of this list as an engagement framework rather than a simple checklist.
Also, I am confident that this checklist will evolve as we learn more about the good, the bad, and the ugly of AI.
AI Board of Directors Mandate Summary
“You are What You Measure, And You Measure What You Reward”
We know that ensuring the responsible and ethical deployment of AI will require organizations to more carefully and holistically define the variables and metrics against which they want the AI model to deliver relevant, meaningful, responsible, and ethical outcomes. Yep, we need to hold the AI models to an extreme level of responsibility.
But what would happen if we held our corporate, social, and government leaders to that same level of responsibility? Maybe that alone would help to minimize the unintended consequences of short-sighted decisions and legislation. Just saying…