
Regulatory Update: Federal Agencies Warn About AI and Bias

June 6, 2023
The EEOC is first out of the blocks with employer guidance.

Four major federal agencies recently announced a joint effort seeking to block potential bias and discrimination that could arise from the use of artificial intelligence (AI) by employers. One of the agencies—the Equal Employment Opportunity Commission (EEOC)—was the first out of the blocks in publishing an AI guidance document for employers.

In addition to the EEOC, the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division (DOJ) and the Federal Trade Commission (FTC) announced they will work jointly to ensure that AI does not violate individual rights or undermine regulatory compliance regarding civil rights, equal employment opportunity, fair competition and consumer protection.

AI has become a hot topic this year, with news stories proliferating about the technology’s dangers to privacy and democracy. Some enthusiastically promote AI as a tool that can solve many problems and make different kinds of work easier and quicker to do. Others, like Tesla CEO Elon Musk, see AI as a threat that must be reined in before it grows beyond anyone’s control.

Issued on April 25, the agencies’ statement pointed out that the use of automated systems, including those sometimes marketed as AI, is becoming increasingly common in daily life. The agencies defined the term “automated systems” broadly to mean “software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.”

“Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes,” the federal officials declared.

All sorts of private and public organizations use these systems to make critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services, they explained. “These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” they added. “We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”

The officials pressed the point that many automated systems rely on enormous amounts of data to find patterns or correlations, and then apply those patterns to new data to perform tasks or make recommendations and predictions. “While these tools can be useful, they also have the potential to produce outcomes that result in unlawful discrimination.”

The agencies’ statement also offered examples of how potential discrimination in automated systems can arise from different sources:

Data and Datasets: Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. “Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes,” they said.

Model Opacity and Access: Many automated systems are “black boxes” whose internal workings are not clear to most people and, in some cases, even to the developer of the tool. They observed that this lack of transparency often makes it all the more difficult for developers, businesses and individuals to know whether an automated system is fair.

Design and Use: Developers do not always understand or account for the contexts in which private or public entities will use their automated systems, according to the agency officials, who said developers may design a system on the basis of flawed assumptions about its users, relevant context, or the underlying practices or procedures it may replace.

“The joint statement confirms the federal government’s increased scrutiny of automated systems and AI-enabled technologies,” wrote attorneys from the law firm of McGuireWoods. As seen in the employment context, state legislators and regulators also are well along in introducing laws and guidance aimed at placing responsible controls on innovation.

They and other attorneys recommend that employers, healthcare providers, technology developers and others monitor updates to federal, state and foreign regulation of automated systems. These attorneys also urged those who are vulnerable to government enforcement actions to assess their organizations’ intentional and inadvertent uses of AI, both to comply with regulations and to ensure best practices.

EEOC Forges Ahead

On May 22, the EEOC issued a public warning to employers on this issue, intended as a follow-up to the April joint statement. It reminds employers that Title VII of the Civil Rights Act of 1964 prohibits them from using tests or selection procedures that have an adverse impact (a disproportionately large negative effect) on the basis of any protected characteristic.

“In the employment context, using AI has typically meant that the developer relies partly on the computer’s own analysis of data to determine which criteria to use when making decisions,” observed Cara Yates Crotty, an attorney with the law firm of Constangy, Brooks, Smith & Prophete. “AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.”

Although the technology may be new, the commission issued guidelines on employee selection procedures as far back as 1978. “What is new is the extensive and expanding use of AI by employers, especially in making hiring decisions,” Crotty said.

Examples provided by the EEOC include:

• Resume scanners that prioritize applications using certain keywords.

• Employee monitoring software that rates employees on the basis of their keystrokes or other factors.

• “Virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.

• Video interviewing software that evaluates candidates based on their facial expressions and speech patterns.

• Testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.

The EEOC says employers should assess whether any selection procedure has an adverse impact on the basis of a characteristic protected by Title VII—race, sex, color, national origin or religion—by comparing the selection rates of the different groups. If the selection rate for one group is “substantially” less than the selection rate for another group, then the process may have adverse impact.
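The guidance does not put a precise number on “substantially,” but the 1978 Uniform Guidelines offer the long-standing “four-fifths” rule of thumb: one group’s selection rate is generally regarded as substantially different if it is less than 80% of the highest group’s rate. Below is a minimal sketch of that comparison in Python; the group labels and applicant counts are hypothetical, chosen only to illustrate the arithmetic.

```python
def selection_rates(applicants: dict[str, int], selected: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: number selected divided by number of applicants."""
    return {group: selected[group] / applicants[group] for group in applicants}

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose rate falls below four-fifths (80%) of the highest group's rate."""
    highest = max(rates.values())
    return {group: rate < 0.8 * highest for group, rate in rates.items()}

# Hypothetical figures: 48 of 80 Group A applicants selected vs. 12 of 40 for Group B.
rates = selection_rates({"Group A": 80, "Group B": 40}, {"Group A": 48, "Group B": 12})
print(rates)                     # {'Group A': 0.6, 'Group B': 0.3}
print(four_fifths_flags(rates))  # {'Group A': False, 'Group B': True}, since 0.3 < 0.8 * 0.6
```

The four-fifths comparison is only a screening device, however; the EEOC cautions that smaller differences can still amount to adverse impact in some circumstances.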

If a selection procedure has adverse impact on the basis of a protected characteristic, the employer must show that the procedure is job-related and consistent with business necessity. Even if an employer can show that a selection procedure is job-related and consistent with business necessity, it may not use a procedure that has adverse impact if there is a less discriminatory alternative available.

The EEOC’s guidance also addresses whether employers can be held liable for adverse impact caused by AI that was developed by a vendor. The short answer is “yes,” Crotty said. Employers who are deciding whether to rely on a software vendor to develop or administer an algorithmic decision-making tool may want to ask the vendor, at a minimum, whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII, the commission suggests.

However, even if the software vendor provides assurances that its system is non-discriminatory, the employer could still be liable under the commission’s regulatory scheme if those assurances turn out to be untrue and the tool results in either disparate impact discrimination or disparate treatment discrimination, she pointed out.

“Although the EEOC’s guidance does not break any new ground, it provides a timely refresher on the Uniform Guidelines on Employee Selection Procedures and reminds us that the same rules apply to any selection method,” Crotty noted. “Thus, employers should continue to monitor all of the tools and steps in their selection procedures for potential adverse impact, including tools that use AI.”

About the Author

David Sparkman

David Sparkman is founding editor of ACWI Advance (www.acwi.org), the newsletter of the American Chain of Warehouses Inc. He also heads David Sparkman Consulting, a Washington, D.C.-area public relations and communications firm. Before that, he was director of industry relations for the International Warehouse Logistics Association. Sparkman has also been a freelance writer specializing in logistics and freight transportation. He has served as vice president of communications for the American Moving and Storage Association, as director of communications for the National Private Truck Council, and spent two decades with American Trucking Associations on its weekly newspaper, Transport Topics.
