HireVue, featured as a video interviewing solution in TTL’s ecosystem, has removed the facial analysis component of its software. The company announced the change at the beginning of 2021, reporting that its data indicated nonverbal behaviors were not essential to its predictive algorithm because they often aligned with verbal behaviors.
“HireVue’s internal research demonstrated that recent advances in natural language processing had significantly increased the predictive power of language,” according to the company. “With these advances, visual analysis no longer significantly added value to assessments.”
One of the main reasons for HireVue’s removal of facial analysis may be its highly controversial nature. In 2019, the nonprofit Electronic Privacy Information Center filed a complaint against HireVue with the Federal Trade Commission, citing “unfair and deceptive” practices. HireVue stated that an “algorithmic audit” of its software conducted in 2019 showed that it did not harbor bias.
New laws are being enacted to enhance transparency in A.I. hiring practices. The Illinois Artificial Intelligence Video Interview Act, which took effect January 1, 2020, mandates that companies notify applicants when A.I. is being used to screen them. The law also seeks to protect applicants’ privacy through two provisions: it limits who can view an applicant’s recorded video interview to those “whose expertise or technology is necessary,” and it requires companies to delete an applicant’s submitted video within a month of the applicant’s request.
Jonathan Covey, TTL Analyst, comments:
“Regulation at the government level continues to press forward. In fact, NYC is the first US city to consider a recent bill that would mandate audits on automated employment decision tools. Like the other statutes mentioned, the proposition is well-intentioned. People have a right to know what technology and information about their profile are being used to make a hiring decision.
Where it gets more tricky is assessing bias in the hiring algorithm itself. Inherently, these “prediction machines” are going to be biased (hopefully towards higher quality, better fit talent). But you don’t want the algorithms to take into account factors in the output such as age, gender, race, or religion. This is where much of the fear stems from companies – when it happens, it’s wrong and not good for anyone.
The question becomes, how do you define bias? What is the audit? Who does it? You’ll need multi-party feedback not just from the HireVues and hundreds of talent technologies out there, but equally from candidates, buyers, public opinion etc. and a regimented structure of accountability. It will be a rigorous, but essential undertaking in a world where so much data is captured on the job applicant and worker.”
HireVue will continue working with ORCAA to audit the use of its algorithms and identify any issues of fairness, bias, and discrimination.
Kevin Parker, Chairman and CEO of HireVue, comments on the announcement of the removal of HireVue’s facial analysis component, “Democratizing the hiring process for candidates and employers is the foundation on which HireVue stands. Our news today reaffirms HireVue’s leadership role in defining the best, most appropriate use of AI in hiring, and our commitment to rigorous, ongoing measurement of the impact our software has on individual candidates, our customers, and society as a whole.”