HireVue drops facial monitoring amid A.I. algorithm audit

HireVue's software has been criticized for being opaque and potentially biased. But the audit is a step towards greater transparency—one other companies ought to emulate.

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

The journalist Malcolm Gladwell, on his podcast “Revisionist History,” devoted a recent episode to his theory of “hiring nihilism.” It is Gladwell’s belief that people are so bad at predicting who will perform well in a given role—especially based on traditional screening criteria such as CVs and candidate interviews—that one should simply concede that all hiring is essentially arbitrary. Gladwell explained that when it came time to find a new assistant or hire an accountant, he did so in explicitly arbitrary ways—picking whoever an acquaintance recommended or someone he met on the street, after only the most cursory face-to-face conversation. Why waste time on a process that would ultimately produce a result no better than throwing darts?

For decades, a segment of the tech industry has grown based on an acceptance of Gladwell’s premise—that humans are terrible at forecasting job performance—but an emphatic rejection of his resort to nihilism. Instead, these tech companies argue, the problem can be fixed with better screening tools (which, not coincidentally, these same companies happen to sell). Increasingly, artificial intelligence has been a part of what these firms are selling.

A.I. offers the promise that there exists some hidden constellation of data, too complex or subtle for H.R. executives or hiring managers to ever discern, that can predict which candidates will excel at a given role. In theory, the technology offers businesses the prospect of radically expanding the diversity of their candidate pool. In practice, though, critics warn, such software runs a high risk of reinforcing existing biases, making it harder for women, Black people, and others from non-traditional backgrounds to get hired. Worse, it may give a process that remains as fundamentally arbitrary and biased as Gladwell argues an ever more pseudoscientific veneer.

HireVue is one of the leading companies in “hiretech”—its software allows companies to record videos of candidates answering a standard set of interview questions and then sort candidates based on those responses—and it has been the target of such criticism. In 2019, the nonprofit Electronic Privacy Information Center filed a complaint against the company with the Federal Trade Commission alleging that HireVue’s use of A.I. to assess job candidates’ video interviews constituted “unfair and deceptive trade practices.” The company says it has done nothing illegal. But, partly in response to the criticism, HireVue announced last year that it had stopped using candidates’ facial expressions in the video interviews as a factor its algorithms considered.

This past week, the company also revealed the results of a third-party audit of its algorithms. The audit mostly gave HireVue good marks for its efforts to eliminate potential bias in its A.I. systems. But it also recommended several areas where the company could do more. For instance, it suggested the company investigate potential bias in the way the system assesses candidates with different accents. It also turns out that minority candidates are more likely to give very short answers to questions—one-word responses or saying things such as “I don’t know”—which the system had difficulty scoring, resulting in these candidates’ interviews being disproportionately flagged for human reviewers.
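
The audit describes that flagging mechanism only in prose. As a purely hypothetical illustration, the sketch below routes answers that are too short to score to human reviewers and then compares flag rates across groups; the word-count threshold and sample data are invented, and none of this is HireVue’s actual logic.

```python
from collections import defaultdict

MIN_WORDS = 3  # hypothetical floor: at or below this, an answer is unscorable

def flag_rates_by_group(interviews):
    """interviews: list of (group, answer_text) -> {group: share flagged for human review}."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, answer in interviews:
        total[group] += 1
        if len(answer.split()) <= MIN_WORDS:  # e.g. "I don't know" or one-word replies
            flagged[group] += 1               # route to a human reviewer instead of the model
    return {g: flagged[g] / total[g] for g in total}

sample = [
    ("A", "I led a project that shipped on time despite scope changes"),
    ("A", "I don't know"),
    ("B", "No"),
    ("B", "I don't know"),
]
print(flag_rates_by_group(sample))  # {'A': 0.5, 'B': 1.0}
```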

Lindsey Zuloaga, the company’s chief data scientist, told me that the most important factor in predicting whether a job candidate would succeed was the content of their answers to the interview questions. Nonverbal data added little by comparison—in most cases, she says, it contributed about 0.25% of a model’s predictive power. Even when assessing candidates for a role with a lot of customer interaction, nonverbal attributes contributed just 4% to the model’s predictive accuracy. “When you put that in the context of the concerns people were having [about potential bias], it wasn’t worth the incremental value we might have been getting from it,” Kevin Parker, HireVue’s CEO, says.
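
Contribution figures like that 0.25% are typically estimated by ablation: train the same model with and without a feature group and compare performance on held-out data. Below is a minimal, self-contained sketch of that idea using scikit-learn on synthetic data; the features and numbers are invented, and HireVue has not published its actual modeling pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: 10 columns encoding answer content,
# 3 columns encoding nonverbal signals (e.g., tone of voice).
content = rng.normal(size=(n, 10))
nonverbal = rng.normal(size=(n, 3))
# Synthetic outcome: driven almost entirely by the content features.
y = (content.sum(axis=1) + 0.05 * nonverbal.sum(axis=1)
     + rng.normal(size=n) > 0).astype(int)

full = np.hstack([content, nonverbal])
model = LogisticRegression(max_iter=1000)

score_full = cross_val_score(model, full, y, cv=5).mean()
score_content = cross_val_score(model, content, y, cv=5).mean()

# The gap approximates the nonverbal features' incremental contribution,
# the kind of number behind the 0.25% figure quoted above.
print(f"full model accuracy:      {score_full:.3f}")
print(f"content-only accuracy:    {score_content:.3f}")
print(f"incremental contribution: {score_full - score_content:+.3%}")
```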

Parker says the company is “always looking for bias in the data that goes into the model” and that it had a policy of discarding datasets if using them led to a disparity in outcomes between groups based on things such as race, gender or age. He also notes that only about 20% of HireVue’s customers currently opt to use the predictive analytics feature of the software—the rest use humans to review the candidates’ videos—but that it’s becoming increasingly popular.
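
Parker doesn’t spell out how that disparity test works. It resembles the “four-fifths rule” from U.S. EEOC guidance, under which a selection procedure is suspect if any group’s selection rate falls below 80% of the highest group’s rate. The sketch below applies that rule to made-up data purely as an illustration; HireVue’s actual criteria are not public.

```python
from collections import defaultdict

def adverse_impact_ratios(outcomes):
    """outcomes: list of (group, passed) tuples -> {group: rate relative to best group}."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in outcomes:
        total[group] += 1
        passed[group] += int(ok)
    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def dataset_acceptable(outcomes, threshold=0.8):
    """Apply the four-fifths rule: every group must reach 80% of the best group's rate."""
    return all(r >= threshold for r in adverse_impact_ratios(outcomes).values())

# Hypothetical scored outcomes: group A passes 50% of the time, group B 30%.
sample = ([("A", True)] * 50 + [("A", False)] * 50
          + [("B", True)] * 30 + [("B", False)] * 70)
print(adverse_impact_ratios(sample))  # {'A': 1.0, 'B': 0.6}
print(dataset_acceptable(sample))     # False -> discard or rebalance the dataset
```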

HireVue’s audit was conducted by O’Neil Risk Consulting and Algorithmic Auditing (ORCAA), a firm founded by Cathy O’Neil, the mathematician best known for her 2016 book about algorithmic decision-making, Weapons of Math Destruction. ORCAA is one of a growing handful of companies specializing in these kinds of assessments.

Zuloaga says she was struck by the extent to which the ORCAA auditors sought out the different types of people HireVue’s algorithms touched—from the job seekers themselves to the customers using the software to the data scientists helping to build the predictive models. One of the things that came across in the audit, she says, is that certain groups of job candidates may be more comfortable than others with the entire idea of being interviewed by a piece of software and having a machine assess that interview—so there may be some hidden selection bias built into all of HireVue’s current data.

ORCAA recommended HireVue do more to communicate to candidates exactly what the interview process will involve and how their answers will be screened. Zuloaga says that HireVue is also learning that minority candidates may need more explicit encouragement from the software in order to keep going through the interview process. She and Parker say the company is looking at ways to provide that.

HireVue is among the first companies to engage a third party to conduct an algorithmic bias audit. And while PR damage control may have been part of the motivation—“this is a continuation of our focus on transparency,” Parker insists—it does make the company a pioneer. As A.I. is adopted by more and more businesses, such audits are likely to become commonplace. At the least, the audit reveals that HireVue is thinking hard about issues around A.I. ethics and bias and seems sincere in seeking to address them. It’s an example other businesses should follow. It is also worth remembering that the alternative to using technology such as HireVue’s is not some utopian vision of rationality, empiricism, and fairness—it is Gladwell’s hiring nihilism.

And with that, here is the rest of this week’s A.I. news.

Jeremy Kahn 
@jeremyakahn
jeremy.kahn@fortune.com

***

The societal reckoning over systemic racism continues to underscore the importance businesses must place on responsible A.I. Leaders are wrestling with thorny questions around liability and bias, exploring best practices for their companies, and learning how to set effective industry guidelines on how to use the technology. Join us for our second interactive Fortune Brainstorm A.I. community conversation, presented by Accenture, on Tuesday, January 26, 2021, from 1:00 to 2:00 p.m. ET.
