Thriving in an AI-Regulated World

When it comes to AI in the hiring process and how organizations can stay compliant and plan for the future, there is arguably no one better situated to advise than Craig Leen, former Director of the Office of Federal Contract Compliance Programs (OFCCP), who now advises multiple technology builders.

In this in-depth interview, we discuss the history of employment regulation, how current AI regulations are similar to and different from existing employment law, the relative risk of different kinds of AI applications, and how organizations can make sense of it all and thrive in an AI-regulated world.

Historical Context of Regulations in Hiring

David Francis, Vice President of Research and Product, Talent Tech Labs: How companies hire has been regulated for a long time. Can you give us some historical context and perspective on how we got to where we are today? What’s the history of regulating work?

Craig Leen, Former OFCCP Director: There have been workplace regulations in place for a very long time, going all the way back to Roosevelt and the passage of the Fair Labor Standards Act (FLSA), which established minimum wage and overtime requirements and prohibited oppressive child labor. You have Title VII of the Civil Rights Act of 1964, which prohibited discrimination in employment decisions based on protected classes and established the Equal Employment Opportunity Commission (EEOC). You also have the Americans with Disabilities Act and the Rehabilitation Act, which established guidelines for equal opportunity and protections for disabled individuals in the employment sphere.


The Uniform Guidelines on Employee Selection Procedures came out of Title VII, and subsequent amendments and guidelines established the theories of disparate impact and disparate treatment. Disparate treatment involves intentional discrimination in a hiring process based on a protected class; disparate impact involves non-intentional discrimination, where a neutral policy or practice causes an adverse impact on a protected class and that policy can’t be validated – that is, the policy may not be job-related or consistent with business necessity, or there may be an alternative that would address the disparate impact while still achieving the same goal.
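The Uniform Guidelines also give disparate impact a working arithmetic: the four-fifths (80%) rule, under which a group’s selection rate below four-fifths of the highest group’s rate is generally treated as evidence of adverse impact. Here is a minimal sketch of that check; the counts and function names are invented for illustration.

```python
# Minimal sketch of the Uniform Guidelines' four-fifths (80%) rule,
# the customary first screen for disparate impact. All names and
# counts here are illustrative, not from any compliance tool.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.

    A ratio below 0.8 is generally treated as evidence of adverse
    impact that the employer must then validate or remediate.
    """
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Example: 50 of 200 applicants selected in group A, 15 of 120 in group B.
rates = {
    "group_a": selection_rate(50, 200),  # 0.250
    "group_b": selection_rate(15, 120),  # 0.125
}
print(four_fifths_ratios(rates))  # group_b ratio 0.5 < 0.8: needs scrutiny
```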

That’s the general array of laws. Regarding Artificial Intelligence (AI), the focus seems to be on equal employment opportunity and making sure AI is used to make merit-based decisions and not to discriminate through disparate treatment or disparate impact, which are the two theories of liability under Title VII and related laws. Until recently, there hadn’t been many laws specific to AI. Instead, the EEOC and OFCCP have said they are applying existing laws, such as Title VII and Executive Order 11246, to the use of AI in the hiring process, essentially applying already established principles to a new AI-based “test” or hiring method. These laws apply whether you use AI in the hiring process or not.

Very recently, we’ve started to see some discussion of more specific laws at the federal level related to AI. We’re seeing some states and localities adopt regulations addressing AI specifically, such as the New York City ordinance. There’s a lot of interest around AI right now, and there seem to be a lot of ideas and misconceptions, but in my view — and I think most EEO practitioners’ view — there are actually a lot of positive aspects to using AI in the hiring process, like being able to identify the top skills and qualifications of candidates without considering protected classes, and helping organizations identify qualified candidates who otherwise wouldn’t have made it through a traditional process.

There’s also some concern that AI can cause an adverse impact, that it might inadvertently be biased against certain groups such as minorities, people with disabilities, or immigrants. Some of that may be overgeneralization, but some of it could be true depending on the type of AI being used, and it does, of course, need to be looked at.

From the regulators’ perspective, there’s an acknowledgment that there’s a lot of upside to using AI in the hiring process. Still, there are concerns as well, and we’re seeing regulators really trying to address that.

Get a full, unfiltered view of these topics by previewing our Trends Report here.

The Goal of Regulating AI In Hiring

David Francis: AI is used in so many different domains, from movie recommendations to image tagging to healthcare, but all these uses seem to be flying under the radar, while employment in particular is where legislators are coming down on the use of AI. The big question is: for the laws being proposed and under consideration, what is the end goal?

Craig Leen: I think any AI that can impact people’s rights or access will be looked at. For example, you have AI right now that adjudicates disputes, and if it turns out a particular person in a protected class is having a hard time getting adjudicated, that’s going to get looked at. Even benign use cases, like choosing what movie to see or which restaurant to go to, can draw scrutiny: if the AI is systematically omitting certain movies or restaurants, that can have a financial impact, and if it turns out to be doing so on the basis of a protected class, that would be an issue. Things like mapping, where an AI is directing traffic through certain neighborhoods, might have an impact on those neighborhoods and could potentially get looked at.

I think there’s a broad recognition that AI is here to stay and is good for society, but there’s also a concern about what happens when there is a disparate impact on a certain group: how are you going to correct that? You can’t talk to a human to course-correct; you have to adjust the algorithm, which is new territory for regulators. Employment just happens to be an area where, historically, there can be a disparate impact from the tools and methods employers use to hire people. I think regulators are trying to get their hands around it.

I look at self-driving cars as where all this is headed. In maybe ten or twenty years, AI may become the “standard of care,” and people will need a special driver’s license and carry additional liability for accidents caused while driving manually, because they’re not using the established best practice. There will probably be a similar trajectory with AI as it’s applied to hiring. As people become more accustomed to AI, computing power gets stronger, and systems get better, I think we will see a movement toward AI.


In the meantime, the goal of regulators is to make sure that these tools aren’t creating disparate impacts or affecting equal employment opportunities.

A recent report by Accenture found that AI-based HR platforms can increase worker productivity by up to 40%.

Tips For Employers

David Francis: Given the state of play in the regulatory landscape, what do you think are “acceptable” or “safe” uses of AI in the hiring process, and what might be some areas you’d caution employers against?

Craig Leen: In general, AI that identifies particular skills or qualifications from thousands or even millions of CVs and uses them to determine fit against a job is really helpful and broadly an excellent use of AI, because you’re using the tool to identify a specific skill that is related to a specific job. An AI that bases its assessment on those identified skills is usually positive because the assessment is directly tied to the job, and these are probably the same skills a human would be looking for in those same CVs; the AI can just do it better, quicker, and with far more data. Now, you still want a human involved in the process to make sure the outputs make sense, but that use case can be really powerful.
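To make that concrete, here is a toy sketch of that kind of skills-first screening: matching the skills evidenced in a CV against a job’s required skills and producing a fit score for a human reviewer. The skill lexicon, extraction method, and scoring are illustrative assumptions, not a description of any vendor’s system.

```python
# Toy sketch of skills-first CV screening. The lexicon, extraction,
# and scoring are illustrative assumptions; production systems use
# trained models rather than keyword matching. Note that nothing
# about a protected class ever enters the score -- only job skills.

JOB_SKILLS = {"python", "sql", "data analysis", "statistics"}

def extract_skills(cv_text: str, lexicon: set[str]) -> set[str]:
    """Naive keyword extraction from free-text CV content."""
    text = cv_text.lower()
    return {skill for skill in lexicon if skill in text}

def fit_score(cv_text: str, job_skills: set[str]) -> float:
    """Share of the job's required skills evidenced in the CV."""
    return len(extract_skills(cv_text, job_skills)) / len(job_skills)

cv = "Built SQL pipelines and Python dashboards; strong statistics background."
print(f"fit: {fit_score(cv, JOB_SKILLS):.0%}")  # 75% -- route to human review
```

The design point Leen stresses holds here: the score is tied directly to job-related skills, and the output feeds a human review rather than an automatic decision.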

A recent Deloitte study found that organizations that have implemented AI in their HR processes have reduced the time to fill open positions by as much as 50%.

If an AI is looking at how someone looks, speaks, or sounds, that will be more of a concern to regulators; even these use cases could potentially be addressed, but regulators have specifically expressed concerns about these kinds of applications. For example, my daughter has autism, so she doesn’t always make eye contact, but she could still be an excellent employee. Is an AI disqualifying her from a job because of that? Anything that measures how someone looks or sounds, I think, is a red flag for regulators.

Behavioral assessments can be useful, but you must be careful about what you measure and how it’s implemented. For example, if you’re building a success profile, a lot of underrepresented groups don’t have all the volunteer activity normally associated with “growth potential,” so an AI that weighs that type of activity could have a disparate impact. I think it’s the same with socio-economic status (even though that’s not a federally protected class); you just have to be careful about what you’re testing for.

Regardless of what application you’re using AI for, you should be measuring and looking at potential disparate impact, just as you would with any hiring process. For instance, if an AI inadvertently picks up a heuristic of selecting candidates from a certain school or background that historically has not been diverse, it can propagate those results forward in its selections. And finally, whatever you’re looking at to qualify a candidate should be job-related.
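In practice, that monitoring usually pairs the four-fifths ratio with a statistical significance screen: agencies and courts commonly treat a disparity of roughly two standard deviations as unlikely to be chance, which for simple pass/fail selection counts reduces to a two-proportion z-test. A hedged sketch with invented counts:

```python
# Hedged sketch of ongoing adverse-impact monitoring via a
# two-proportion z-test. Counts are invented; real audits also
# consider sample sizes, job groups, and practical significance.
import math

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two groups' selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Example: 120 of 400 selected in group A vs. 40 of 300 in group B.
z = two_proportion_z(120, 400, 40, 300)
print(f"z = {z:.2f}")  # ~5.2, well past two standard deviations
```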

See the full interview with David Francis and Craig Leen in TTL’s latest Trends Report!

About Craig Leen:

Craig is a Partner at K&L Gates and serves on the boards of Circa and Eightfold. Previously, Craig served as Director of the Office of Federal Contract Compliance Programs (OFCCP), a federal civil rights enforcement agency at the U.S. Department of Labor. In this role, Craig reported directly to the Secretary and Deputy Secretary of Labor, overseeing approximately 450 employees and a budget of over $105 million, with a mission to ensure federal contractors’ compliance with equal employment opportunity and non-discrimination obligations.

Craig’s experiences prior to OFCCP as a government attorney at the municipal and county levels in Florida provide him with a unique vantage point related to all aspects of regulatory compliance at multiple levels of government.

Get even more insights into the current legal landscape for artificial intelligence in HR recruitment by downloading our newest Trends Report: The Impact Of AI Regulations On HR Technology And Employers. The report explores AI’s ethical and explainable use across a wide range of HR technology vendors, helping you thrive in an AI-regulated world.

Download the latest Trends Report here!