Ethical AI has been a concern of AI leaders and practitioners for many years, but global jurisdictions are finally starting to move from policy formulation and stakeholder engagement to drafting legal bills and acts with real teeth.
Expect many new laws to pass in 2023, tightening citizen privacy protections and creating risk frameworks and audit requirements covering data bias, privacy, and security risks.
At the same time, regulators will have to evolve an entire global ecosystem to ensure AI audits are conducted effectively. Many questions loom: who will validate certifications for AI audit practices, and will we overburden AI innovation, as we have in so many other regulated operating practices, until the risk and cost of non-conformance inhibit innovation and capital funding?
Finding a balance will be key.
The risks that AI poses are well documented, from high-risk applications in health care that score who will receive treatment first, to systems that make health recommendations from insufficient data sets, creating bias and risk in critical access to resources or services. We have seen major AI risks in recruiting and hiring practices, and even in loan credit decisions, reproduce existing unwanted inequities or embed new harmful bias and discrimination. Rampant, unchecked use of social media has also threatened citizen privacy and often tracked personal activities without consent.
Yet AI has so many positive benefits in predicting futures – helping farmers predict storms or insect outbreaks, enabling early disease detection and prevention, and helping us advance vaccine production innovations in the fight against COVID-19. As Malay Upadhyay and I discussed in our latest book, The AI Dilemma, AI can be a Perfect World or a Perfect Storm.
These outcomes are deeply harmful—but they are not inevitable.
Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths – to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, AI tools hold the potential to redefine every part of our society and make life better for everyone – or possibly worse, if we don’t get ethical AI right.
Let’s take a look at what’s happening in Europe, the USA, and Canada, and then discuss some of the risks and implications of implementing stronger regulatory controls.
The European Union (EU)
The Artificial Intelligence Act (AI Act) is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The EU Commission’s draft regulates the use of AI by dividing it into four categories of risk to the rights of citizens:
- Unacceptable risks, such as the use of AI in social scoring by governments, as is done in China.
- High-risk uses, such as AI in educational or vocational training, employment and management of workers, and remote biometric identification systems – for example, AI scanning tools that rank job applicants.
- Limited-risk applications with specific transparency obligations (e.g., a requirement to inform users when interacting with AI such as chatbots).
- Minimal-risk AI, such as spam filters.
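The four tiers above form a simple taxonomy. A minimal illustrative sketch of how an organization might map its AI use cases onto the draft Act’s tiers is shown below; the tier names follow the proposal, but the specific use-case labels and the lookup helper are assumptions for illustration, not anything defined by the Act itself.

```python
# Illustrative only: a toy mapping of hypothetical AI use cases onto the
# EU AI Act's four proposed risk tiers. The tier names follow the draft Act;
# the use-case labels are invented examples.

RISK_TIERS = {
    "unacceptable": {"government social scoring"},
    "high": {"cv ranking", "remote biometric identification", "exam scoring"},
    "limited": {"chatbot"},
    "minimal": {"spam filter"},
}

def classify_use_case(use_case: str) -> str:
    """Return the draft-Act risk tier for a known use case, else 'unclassified'."""
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    return "unclassified"

print(classify_use_case("cv ranking"))   # high
print(classify_use_case("spam filter"))  # minimal
```

In practice, classification under the Act depends on detailed legal criteria, not a keyword lookup; the point of the sketch is that every deployed system must land in exactly one tier, and obligations follow from that tier.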
A detailed summary of the key EU milestones in the AI journey is available here. The EU Commission has proposed three inter-related legal initiatives that will contribute to building trustworthy AI:
- a European legal framework for AI to address fundamental rights and safety risks specific to the AI systems;
- a civil liability framework – adapting liability rules to the digital age and AI;
- a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive).
There is no question that the rigour and thinking of the EU approach to ethical AI has been a role model, and other global jurisdictions, such as the USA and Canada, are benefiting from the EU’s tremendous leadership in this area.
The United States (USA)
In October, the Biden administration White House published a draft AI Bill of Rights intended to guide the design, use, and deployment of automated systems. Brazil, Canada, and the U.K. are working on similar laws and frameworks, as are other global jurisdictions. The White House Office of Science and Technology Policy has identified five principles to guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Big Five AI ethical principles are:
- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation, and
- Human Alternatives, Consideration, and Fallback
The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce democracy.
1. Safe and Effective Systems
This principle is primarily a governance requirement for organizations to be responsible and ensure accountability for effective AI practices. Many examples of poor practice are highlighted in the AI Bill of Rights. A few caught my attention, such as:
- A proprietary model predicted the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals across the USA. An independent study showed that the model’s predictions underperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting clinicians to the likelihood of sepsis.
- An AI algorithm used to deploy police was found to repeatedly send police to neighbourhoods they regularly visit, even if those neighbourhoods were not the ones with the highest crime rates. These incorrect crime predictions were the result of a feedback loop generated from the reuse of data from previous arrests and erroneous AI algorithmic predictions.
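The feedback loop described in that second example can be sketched in a few lines: if patrols are dispatched wherever past arrests are highest, and patrolling is what generates new arrest records, deployment concentrates on the initially most-visited neighbourhood regardless of true crime rates. A minimal illustrative simulation (all numbers are invented):

```python
import random

random.seed(0)

# True underlying crime rates per neighbourhood (hypothetical, all equal).
true_crime_rate = {"A": 0.3, "B": 0.3, "C": 0.3}

# Historical arrest counts: neighbourhood A starts higher only because it
# happened to be patrolled more often in the past.
arrests = {"A": 10, "B": 1, "C": 1}

for _ in range(200):
    # The "predictive" model sends patrols where past arrests are highest.
    patrolled = max(arrests, key=arrests.get)
    # Patrolling is the only way arrests get recorded, so the data fed back
    # into the model reflects past deployment, not underlying crime.
    if random.random() < true_crime_rate[patrolled]:
        arrests[patrolled] += 1

print(arrests)  # all new arrests accrue to A despite equal true crime rates
```

Because the model never explores the other neighbourhoods, the initial imbalance is self-reinforcing: B and C never get a chance to generate data that would correct the prediction.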
Like most practices, governance is all about effective leadership. Leadership must start with knowledge of what AI is, and digital literacy is key – an area I have been writing about consistently over the past ten years.
2. Algorithmic Discrimination Protections
Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavouring people based on their race, colour, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. For example, on social media, Black people who quoted and criticized racist messages have had their own speech silenced when a platform’s automated moderation system failed to distinguish this “counter speech” from the original hateful messages to which it responded.
3. Data Privacy
This third principle focuses on the need to protect citizens from abusive data practices and reinforces that citizens need more agency over how data about them is used. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth, should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.
4. Notice and Explanation
This fourth principle establishes that, as a citizen, you have the right to an explanation of how any automated AI system is being used and to understand how and why it contributes to outcomes that impact you. You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome. Automated systems should provide explanations that are technically valid, meaningful, and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context.
5. Human Alternatives, Consideration, and Fallback
This fifth principle states that citizens should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.
New York City, USA
New York City’s Local Law 144 (LL 144) is effective January 1, 2023, and will require employers using automated employment decision tools in recruiting and promotions to satisfy a bias audit requirement and provide notices and disclosures regarding the audit results and the use of any automated AI tools. Other states and countries, within the U.S. and globally, are also putting forward legislation to address the employment-related use of AI and the risks and biases it poses to fair recruitment practices. States such as California, Illinois, and Maryland are also considering legislation that could impact the use of AI in hiring and other employment decisions. LL 144 prohibits the use of an automated AI recruiting tool unless:
- A bias audit is completed within one year of its use;
- The results are made publicly available;
- The notice is provided to job candidates regarding the use of AI recruiting tools; and
- Candidates or employees are allowed to request an alternative evaluation process as an accommodation to the proposed AI methods.
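At the heart of the bias audit that the list above requires is a comparison of selection rates across demographic groups: each group’s rate is divided by the most-selected group’s rate to produce an impact ratio. The sketch below is a simplified illustration of that calculation; the candidate counts and group labels are invented, and a real audit must follow the methodology in the law’s implementing rules.

```python
# Simplified sketch of the impact-ratio calculation a bias audit under
# NYC Local Law 144 examines. All counts are hypothetical.

applicants = {  # group -> (selected by the tool, total assessed)
    "group_1": (45, 100),
    "group_2": (27, 100),
}

# Selection rate: fraction of each group the tool selected.
selection_rates = {g: sel / total for g, (sel, total) in applicants.items()}
best_rate = max(selection_rates.values())

# Impact ratio: each group's selection rate relative to the most-selected group.
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {ratio:.2f}")
```

Here group_2’s impact ratio is 0.27 / 0.45 = 0.60, the kind of disparity an audit would flag for scrutiny and that the employer would have to disclose publicly.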
Canada
Canada has been a leader in the advancement of AI; however, its conservative culture could well set it back, given the pending privacy legislation of Bill C-27, which includes Canada’s Artificial Intelligence and Data Act. Bill C-27 – entitled the Digital Charter Implementation Act – is designed to strengthen Canada’s private sector privacy law, create new rules for the responsible development and deployment of artificial intelligence (AI), and further advance the implementation of Canada’s Digital Charter. The Digital Charter Implementation Act, 2022 introduced three proposed legal acts: the Consumer Privacy Protection Act, the Artificial Intelligence and Data Act, and the Personal Information and Data Protection Tribunal Act. The proposed Artificial Intelligence and Data Act will introduce new rules to strengthen Canadians’ trust in the development and deployment of AI systems, including:
- protecting Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias;
- establishing an AI and Data Commissioner to support the Minister of Innovation, Science and Industry in fulfilling ministerial responsibilities under the Act, including by monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate; and
- outlining clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.
Implications of Executing New AI Laws and Bills
Board directors and C-level executives need to start thinking ahead to plan effectively. For example, in the USA it is estimated that over 80% of publicly traded companies are using AI recruiting toolkits. We have many examples of AI algorithms creating cultural bias, surveillance and privacy risks, and more, so it is positive to see the tremendous progress made in 2022, with many global jurisdictions developing clearer legal frameworks. In spite of this solid progress, however, many governance and operational practices must still be put in place to guide AI’s successful evolution into all business processes and practices and into all societal infrastructures.
Here are some key questions that need to be carefully thought through to operationalize effective AI governance practices:
1.) Who will be able to perform the bias audit and how will they be certified?
2.) How often will these audits need to be performed – quarterly, annually, or on some other cadence?
3.) How will disclosure be handled, and to whom will disclosures be issued?
4.) What are the new legal risks of disclosure and impact to liability insurance?
5.) Will board directors require new board insurance covering AI practices? If so, what are the right parameters for legal protection?
6.) How will we evolve operational software development practices so that the designers, developers, and deployers of automated systems take proactive, continuous measures to protect individuals and communities from algorithmic discrimination, design and use systems equitably, and seek your permission and respect your decisions regarding the collection, use, access, transfer, and deletion of your data?
7.) Will we need to certify AI practitioners, just like we did with our medical doctors and engineers?
8.) What is your level of digital and AI literacy before even attempting to guide your organizations to answer (1-7)?
2023 will finally be the year that stronger AI governance and legal frameworks advance. The big challenges lie in how these new laws will be regulated as they are finalized, and in what infrastructures and new governing bodies will need to be developed to support them.
In the meantime, we know changes are rapidly coming in 2023 through 2024 to help governments and businesses take more thoughtful approaches to designing and developing AI systems. It all starts with strong leadership that advances ethical AI underpinnings as a core operating practice, and in many companies AI ethics will become a core corporate value. At my company, SalesChoice, we developed our AI ethics policy and have been advancing ethical and explainable AI in all our business practices. Having served on the advisory board of the Forbes Business and Technology School, now integrated into Arizona State University, taught AI Ethics and Law at George Brown College in Toronto, and researched this area for the past ten years, I consistently see gaps in board directors’ governance of AI and a lack of digital literacy skills.
As much as I am passionate about AI and modernizing our organizations, getting an organization’s fundamentals right on data literacy and advanced analytics is a key foundation before advancing AI innovations and applying ethical AI practices.