The White House “Blueprint for an AI Bill of Rights” is an important step toward AI policy. Here's what to consider.
Last Updated: October 6, 2022
Nearly all technology companies leverage AI or automated large data systems in their products and services. The White House “Blueprint for an AI Bill of Rights” is an important first step toward future policymaking in the US and is likely to help shape future thinking about this topic. It also creates an opportunity for companies to ask themselves some essential questions:
1. Does this or will this apply to my company?
2. What are my company’s principles for responsible AI?
3. How do we operationalize our principles?
What’s happening:
On October 4, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights (“Blueprint”) as a call to action for governments and companies to protect civil rights in an increasingly AI-infused world. The Blueprint and its associated documents provide an overview of the issues surrounding the use of automated large data systems and AI, along with guidelines for mitigating harm. It does not provide a legislative framework or guidelines for enforcement; instead, it is intended to be a “guide for society.”
The Blueprint outlines five high-level principles for responsible AI. The overview is clear and similar to other recent white papers and guidance published on AI and automated large data systems.
The Blueprint’s more detailed companion document, “From Principles to Practice,” provides examples of the business and technical scenarios the White House would like to see addressed and could signal where future legislative and enforcement action is likely to occur. The document is worth reading, but here is a quick summary of the major points:
- Safe and Effective Systems - Automated systems should be developed in consultation with diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Independent evaluation and reporting confirming that the system is safe and effective, including the steps taken to mitigate potential harms, should be performed and the results made public whenever possible.
- Algorithmic Discrimination Protections - Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.
- Data Privacy - Designers, developers, and deployers of automated systems should seek user permission and respect user decisions regarding the collection, use, access, transfer, and deletion of personal data. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with privacy-invasive defaults. Consent should only be used to justify data collection in cases where it can be appropriately and meaningfully given. Any consent requests should be brief and understandable in plain language.
- Notice and Explanation - Designers, developers, and deployers of automated systems should provide generally accessible, plain-language documentation, including clear descriptions of overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up to date, and people impacted by the system should be notified of significant changes to use cases or key functionality.
- Human Alternatives, Consideration, and Fallback - Users should be able to opt out of automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law.
Questions to consider:
1. Does this or will this apply to my company?
Read through the examples of real-world issues and think hard about how analogous situations may apply to your company now or in the future.
2. What are my company’s principles for responsible AI?
Create a set of principles or adapt an existing one. It’s important to formally document these principles for your company, discuss and debate them as a leadership team, and share them with your employees and customers.
3. How do we operationalize our principles?
Form an ethics review board that includes external experts in responsible AI, legal counsel, and engineering and business leaders from your company. Ideally, the board is majority independent/external. When ethical questions arise, the board responds with research, reflection, and clear recommendations. Start small, but start: this could be three external experts and two employees.
RIL can help you get started. Please reach out to us.