In a 2022 PwC survey, 93% of businesses had started their journey into AI and were somewhere between testing and widespread adoption; the other 7% were considering it. Everyone is in.
Try to find the number or percentage of companies implementing AI, in the US and around the globe, and you’ll find credible studies and solid numbers. Try to find statistics on the frequency with which AI discriminates against consumers, and numbers will be hard to come by.
Left unsupervised, AI is likely to pick up a few bad habits and make detrimental generalizations. AI is special, in part, because it learns. It takes the data to which developers expose it and makes assumptions. Children who never see women in certain roles may grow up to assume that there are tasks that are less suited to women and careers where women do not belong.
AI tools that grow up with similarly limited gender or racial datasets enter the world and behave in analogous ways.
Unfortunately, these tools get jobs making consequential decisions in healthcare, finance, banking and lending, housing, employment, law enforcement, and beyond. The same tools that promised fair, neutral processes are now reintroducing racial, gender, ethnic, religious, and socioeconomic discrimination that no one wants to relive.
Fortunately, there are best practices that can reduce instances of biased AI. Letting algorithms autonomously reach for data and make unscrutinized decisions invites them to wander into bias. Best practices can help organizations and decision makers get out ahead of the problem.
Thinking that AI bias implicates only the creators of the misbehaving tools misses the big picture. AI is everywhere. When it delivers lawsuit-level bias, there is no way to predict who will be touched by the litigation. Leaders across all sectors get dragged in and have to pay attention to algorithm bias. Among the actions and policies that can help mitigate and prevent this new digital discrimination, there are points at which most individuals, businesses, and leaders can become involved.
Some of the major actions center on identifying and reporting AI bias; others focus on making sure it does not occur at all. Still others promote algorithmic diversity and transparency.
Before implementing any other practices, leaders and businesses have to come to terms with the possibility that the AI tools they crafted or adopted for their own or their customers' use may be biased. They have to understand that algorithm bias is a shortcoming, not premeditated malice; even so, it damages individuals and enterprises, and they should act quickly and thoroughly whenever they find it.
Fast action limits adverse impacts and may help prevent or reduce legal consequences. A business that finds bias in its AI toolbox should become its own whistleblower. Quick, decisive action requires clearly articulated, standardized policies and procedures.
Developing a set of indicators that identify instances of AI bias and responding proactively with a stipulated set of protocols may be the first step to preventing damage and disruption. Engaging and equipping diverse groups to test and teach AI tools using established sets of measures and criteria is another effective practice.
In addition to exposing algorithms to fair, inclusive datasets, create a safe setting where employees are able to offer feedback. Educate customers and rely on accounts of their experiences with the AI to make changes. Create a job description and set of behavioral standards for the AI, and as with any high-profile customer-facing employees, periodically monitor and evaluate the AI.
In addition to systems for responding to AI bias once it is discovered, leaders also need to craft detection criteria. AI bias assumes many forms. What does it look like in your setting?
Businesses need to draft and disseminate their methods of identification. Variations of the AI bias identification criteria and process can be made available to employees and to customers. Written procedures should direct individuals to report what they believe to be qualifying instances of AI bias to designated parties within the organization.
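What might a written, standardized detection criterion look like in practice? One possible sketch is the "four-fifths rule," a screen long used in US employment contexts for disparate impact: if any group's rate of favorable decisions falls below 80% of the best-served group's rate, the tool warrants review. The group labels, data, and the choice of this particular threshold here are illustrative assumptions, not a universal standard.

```python
# Hypothetical bias indicator: the four-fifths (80%) disparate-impact screen.
# Group names, decision data, and the 0.8 threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratios(group_outcomes):
    """Return each group's selection rate divided by the highest group's rate.

    group_outcomes: dict mapping group label -> list of 0/1 decisions.
    """
    rates = {g: selection_rate(o) for g, o in group_outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable
}
ratios = disparate_impact_ratios(decisions)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b falls below the four-fifths threshold
```

A check this simple is not proof of bias, only a trigger: written procedures can direct anyone who sees a flagged result to report it to the designated parties within the organization.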
Detecting or preventing AI bias can be the task of highly diverse teams of users, within or outside of a given company. Different people experience the same AI tool in different ways. Feedback and comments from highly inclusive groups of users can catch issues before release or while the AI is in use. Search terms and criteria should be customized for a variety of discriminated groups.
Catching and admitting bias early on is the responsible thing to do. It is more readily achieved if employees and external testers feel safe reporting adverse or biased experiences. Maintaining psychological safety for these inclusive teams of testers enables a business to maintain vigilance and head off legal woes.
Best practices help ensure that algorithms create and sustain equity for all groups. Algorithms that "grow up" with diverse, inclusive datasets learn to assume equality and to make it happen when they go to work in the worlds of banking, education, healthcare, and law enforcement. It may be an unfair challenge that algorithms have to be fairer and more equitable than people, but isn't that one of the reasons we created AI in the first place?
Since AI tools function as digital employees, treating them as such is another useful best practice.
Establish a job description for the AI. Establish necessary competencies and standardize acceptable behaviors. Demand bias-free performance. Provide periodic corrective feedback and subject your AI tools to an annual performance review. Hear from customers and others who interact with the AI. Create a file for these wholly digital staff and collect performance information. Tweak the algorithms in light of feedback. Trust the data. If an AI tool is making decisions that parallel existing or historical bias, look carefully.
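The "annual performance review" above can be made concrete as a drift audit: log the AI's decisions over time, compute per-group approval rates, and flag any group whose rate has slipped meaningfully below its baseline. The record format, group labels, and the 5-point tolerance below are assumptions made for illustration.

```python
# Hypothetical "performance review" audit for an AI tool's decisions.
# Record format, group labels, and the 0.05 tolerance are illustrative assumptions.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def review(baseline, current, tolerance=0.05):
    """Flag groups whose current rate drifted below baseline minus tolerance."""
    return [g for g, rate in current.items()
            if rate < baseline.get(g, rate) - tolerance]

baseline = approval_rates([("a", 1), ("a", 1), ("b", 1), ("b", 1)])
current = approval_rates([("a", 1), ("a", 1), ("b", 0), ("b", 1)])
print(review(baseline, current))  # group "b" dropped from 1.0 to 0.5
```

Flagged groups become the agenda for the review: tweak the algorithm, retrain on more inclusive data, and keep the audit results in the tool's "personnel file."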
Finally, as a best practice, remain transparent about how your algorithms are intended to work. Doing so can help users meaningfully assess them for neutrality or bias. Build channels of communication within and beyond your organization and listen to what is being said. The application of AI and other “new” technologies is rapidly expanding. Now is the time for business leaders to take part in shaping the tools and minimizing their capacity to discriminate and do harm.
1. Don’t just unleash your own or another company’s AI on the world; “hire,” review, and adjust your AI tools based on feedback and “performance assessments.”
2. Establish and share written best practices for the behavior of your AI and make sure processes for reporting bias are clear, easy, and effective.
3. Test AI on an ongoing basis using diverse users to continuously check for the emergence of discriminatory “behaviors” in the algorithms.