AI bias, auditing and reporting are key elements of the Algorithmic Accountability Act. Find out what your business can do today about artificial intelligence and bias.
What is the Algorithmic Accountability Act?
“The law requires all companies using AI to conduct critical impact assessments of the automated systems they use and sell, in accordance with Federal Trade Commission regulations,” said Siobhan Hanna, general manager of global AI systems for TELUS International. “Forcing tech companies to self-monitor and report is a first step, but implementing strategies and processes to more proactively reduce bias will also be critical to addressing discrimination earlier in the AI value chain.”
If the Algorithmic Accountability Act is passed, it will likely lead to audits of artificial intelligence systems at the vendor level, as well as within the companies that use AI in their own decision-making.
SEE: Artificial intelligence ethics policy (TechRepublic Premium)
The Algorithmic Accountability Act was reintroduced in both the House and Senate in April 2022 after undergoing amendments.
“Homes you never know are for sale, job opportunities that never present themselves, and financing you never become aware of, all because of biased algorithms,” said Senator Cory Booker, a sponsor of the bill. “This bill requires companies to regularly evaluate their tools for accuracy, fairness, bias and discrimination. It is an important step toward greater accountability for the entities that use software to make life-changing decisions.”
Are companies ready for the challenge?
As many as 188 different human biases that can influence AI have been identified. Many of these biases are deeply embedded in our culture and our data, and if AI training models are built on that data, bias can set in. While it is possible for companies and their AI developers to intentionally build bias into their algorithms, bias is more likely to arise from data that is incomplete, skewed, or not drawn from a sufficiently diverse set of sources.
“The Algorithmic Accountability Act would pose the biggest challenges for companies that have not yet established systems or processes to detect and reduce algorithmic bias,” said Hanna. “Entities developing, acquiring and using AI need to be aware of the potential for biased decision-making and the outcomes resulting from its use.”
If the bill becomes law, the FTC would have the power to conduct AI bias assessments within two years of its approval. Healthcare, banking, housing, employment and education are likely to be prime targets for scrutiny.
“Specifically, any person, partnership or corporation subject to federal jurisdiction that earns more than $50 million a year, owns or controls personal information on at least one million people or devices, or acts primarily as a data broker that buys and sells consumer data will be assessed,” said Hanna.
What can companies do now?
Bias is inherent in society, and there is really no way to achieve a completely bias-free environment. But this is no excuse for companies not to go out of their way to ensure that their data, and the AI algorithms that operate on it, are as objective as possible.
Measures that companies can take to facilitate this are:
- Use diverse AI teams that bring many different views and perspectives on AI and data.
- Develop internal methodologies for monitoring AI for bias.
- Require bias assessment results from the third-party AI system and data providers they purchase services from.
- Place strong emphasis on data quality and preparation in their daily AI work.
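One internal methodology for monitoring AI for bias is simply tracking favorable-outcome rates per demographic group and flagging large gaps. Below is a minimal sketch of such a check; the group labels, sample data and the 0.8 threshold (the informal "four-fifths rule" used in some fairness analyses) are illustrative assumptions, not requirements from the bill.

```python
# Hypothetical bias-monitoring sketch: compare positive-outcome rates
# across groups and flag a large gap. Data and threshold are illustrative.

def selection_rates(outcomes):
    """Compute the favorable-outcome rate for each group.

    `outcomes` maps a group label to a list of 0/1 model decisions
    (1 = favorable outcome, e.g. loan approved).
    """
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group's selection rate to the highest's.

    A ratio below 0.8 is a common informal flag for potential bias.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Illustrative decisions logged from a model, keyed by group.
    decisions_by_group = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
        "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% favorable
    }
    ratio = disparate_impact_ratio(decisions_by_group)
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("WARNING: outcomes fall below the four-fifths threshold; review for bias")
```

A check like this is deliberately crude: it says nothing about why rates differ, only that they do, which is exactly the kind of signal an assessment process would investigate further.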