In this article, Stephen Dwyer¹, Chief Operating and Legal Officer of the American Staffing Association, discusses the state of play of AI regulations currently in place and provides practical suggestions for operating compliantly as a hiring organization looking to leverage AI-based tools.
As human resource professionals continue to widely embrace new technologies to find, screen, interview, and hire qualified candidates, two things are becoming abundantly clear. First, the term “artificial intelligence (AI)” has become almost ubiquitous in the human resources context, broadly referring to resume screening algorithms, facial recognition and video interviewing software, machine learning, chatbots, and more.
Second, AI in HR recruitment has drawn the attention of policymakers and regulators seeking to ensure such tools do not run afoul of antidiscrimination laws by “baking in” biases that may unlawfully discriminate against protected classes of workers, including minorities and people with disabilities.
The following summarizes the current status of regulatory guidance, laws, and recent legislative proposals of which recruiters should be aware.
AI Draws the Attention of Congress and the EEOC
In December 2020, amid alleged hiring bias facilitated by social media platforms, several U.S. senators expressed their concerns by writing to the chair of the Equal Employment Opportunity Commission. They noted that, as Covid began to subside, “some companies [would] seek to hire staff more quickly” and “turn to technology to manage and screen large numbers of applicants to support a physically distant hiring process.”
They urged the EEOC to ensure that AI in HR recruitment would not create “built-in headwinds for minority groups” by investigating and auditing AI’s effects on protected classes, prosecuting discriminatory hiring assessments or processes, and providing guidance for employers on designing and auditing equitable hiring processes.
The following year, the EEOC announced its Artificial Intelligence and Algorithmic Fairness Initiative. The chair of the agency remarked, “the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.” The EEOC pledged to:
- Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in HR recruitment decisions;
- Identify promising practices;
- Hold listening sessions with key stakeholders about algorithmic tools and their employment ramifications; and
- Gather information about the adoption, design, and impact of hiring and other employment-related technologies.
True to its word, in May 2022, the EEOC issued technical guidance titled The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. The guidance illustrates how AI may unlawfully discriminate against those with disabilities and suggests steps employers can take to avoid liability.
For example, AI in HR recruitment might not afford those with disabilities reasonable accommodations during the hiring process, might screen out those with disabilities who could do the essential functions of a job with or without reasonable accommodation, or might make unlawful disability-related inquiries during the hiring process.
To address these potential liabilities, the guidance suggests that employers make clear to applicants how they can request reasonable accommodations; provide notice to applicants before using AI to assess their suitability for the role; and confirm with vendors that their AI tools do not run afoul of the Americans with Disabilities Act (ADA) or other laws.
Importantly, even if employers confirm compliance with their AI vendors, the EEOC takes the position that employers could still be liable if their use of third-party AI discriminates against those with disabilities. The guidance states that “employers may be held responsible for the actions of their agents, which may include entities such as software vendors if the employer has given them authority to act on the employer’s behalf.”
Some employer groups, including the American Staffing Association, which represents staffing agencies, have urged the EEOC to afford employers a good-faith reliance defense to discrimination charges when employers use AI products that have been audited or tested for bias and the results have been made publicly available. The rationale behind such a defense is that it is unreasonable to hold employers liable for processes they did not create and do not control, i.e., how the vendor’s AI was developed. Therefore, to the extent they exercise due diligence by using AI software that has been audited or tested for bias, employers should not be held liable.
Unfortunately, to date, the EEOC has not recognized a due diligence exception based on the use of validated AI products. Regardless, as a best practice, recruiters should verify with their vendors that their AI products comply with the law and should consider seeking indemnification in the event the AI used in HR recruitment is found to be biased or discriminatory.
States and Localities Enter the Fray
In addition to the EEOC, several states and localities have enacted legislation; these include Illinois, Maryland, and New York City. New York State, New Jersey, the District of Columbia, and California are contemplating legislation or regulations.
In 2019, Illinois passed the Artificial Intelligence Video Interview Act, which went into effect in January 2020. The law applies to employers that use AI to analyze video interviews of applicants for positions based in Illinois. Employers must notify candidates that AI may be used to analyze their interview, explain how it will be used, obtain the candidate’s consent before use, and delete the video upon request.
In October 2020, Maryland’s law pertaining to facial recognition technology went into effect. The law requires employers to obtain candidates’ written consent and waiver before using such technology in interviews.
New York City’s law takes effect in January 2023. It prohibits employers from using “automated employment decision tools” unless the tools have been the subject of a bias audit, defined as an evaluation by an independent auditor conducted no more than one year before use of the tool. It is unclear which party, the employer or the AI vendor, is responsible for conducting the audit, and the City is expected to clarify this obligation through regulations.
New York City’s law also requires employers to provide candidates with at least ten days’ notice regarding the use of AI to evaluate candidates’ qualifications. For employers that need to fill openings on short notice, this requirement could present logistical challenges and hinder the employment process. It remains to be seen how the City will address this requirement in regulations.
Finally, Washington, DC, New York State, and New Jersey have proposed legislation that would similarly impose audit or advance notice requirements with respect to employers’ use of AI.
Takeaways for HR Professionals
Given rapid developments concerning the use of AI in HR recruitment, recruiters should consider:
- Working closely with outside legal counsel to:
  - Keep abreast of any new federal guidance and state or local laws
  - Obtain the requisite candidate consent and waivers for AI use
  - Comply with requirements regarding notice to candidates
- Working closely with AI vendors to:
  - Determine the extent to which their products have been audited or tested for bias
  - Assess what role vendors will play if their products are alleged to be biased or discriminatory
  - Potentially obtain indemnification regarding any claims of AI bias or discrimination
About Stephen C. Dwyer:
Stephen C. Dwyer is Senior Vice President and Chief Legal and Operating Officer of the American Staffing Association. Dwyer is a leading authority on the legal and public policy aspects of staffing. He engages in and coordinates the association’s legal and public affairs activities and advises the staffing industry on labor and employment law and policy issues. He has testified before legislatures and regulatory bodies and has written extensively and spoken widely on the staffing industry. Before joining ASA, he was associated with the New York multinational law firm Chadbourne & Parke and with De Forest & Duer, a 100-year-old Wall Street firm. Dwyer is a member of the New York, New Jersey, Massachusetts, Virginia, and District of Columbia bar associations.
About the ASA:
The American Staffing Association is the voice of the U.S. staffing, recruiting, and workforce solutions industry. ASA and its state affiliates advance the interests of the industry across all sectors through advocacy, research, education, and the promotion of high standards of legal, ethical, and professional practices.
¹Stephen Dwyer is Senior Vice President, Chief Legal and Operating Officer of the American Staffing Association, a national trade association that represents staffing firms. The information in this article is not intended and should not be construed as legal advice. Readers should consult with their legal counsel regarding the issues discussed herein.
Get even more insights into the current legal landscape regarding the use of artificial intelligence in HR recruitment by downloading our newest Trends Report: The Impact Of AI Regulations On HR Technology And Employers. Our trends report explores AI’s ethical and explainable use across a wide range of HR technology vendors, helping you thrive in an AI-regulated world.
Download the latest Trends Report here!