
Biased Algorithms are Deciding Who Gets Hired. We’re Not Doing Enough to Stop Them

A NYC law is supposed to root out bias, but it’s too weak to make much of a difference.

By Simon McCormack, Senior Writer, Communications, and Daniel Schwarz, Senior Privacy & Technology Strategist, Policy

A resume scanner that penalizes applications indicating participation in women’s sports. An automated system that identifies traits associated with “success” on the job and finds that being named “Jared” and having played high school lacrosse are the strongest indicators. An ad for a trucking job posted on Facebook that is disproportionately shown to men.

While these examples may sound unlikely, all of them are real instances of bias enabled by automated or algorithmic tools, which employers now use widely at every stage of the hiring process.

Algorithms, which are essentially a series of steps or instructions designed to perform a specific task or solve a particular problem, increasingly shape the world we live in. They are used to make critical decisions in our education, health care, and criminal legal systems, and much more.

Many employers advertise job opportunities on social media platforms and use chatbots to collect candidate materials and filter out applications. Employers also use automated tools to analyze those materials and to conduct interviews with software that analyzes video or audio of candidates’ responses to specific questions.

But the algorithms used to make these automated employment decisions are often cloaked in secrecy. There’s very little publicly available information on what goes into the algorithms to inform the decisions they make.

In 2021, the New York City Council passed a bill designed to bring some much needed transparency to these automated tools, and to try to determine if the decisions they make are biased. Unfortunately, as we get a closer look under the hood at how it works, our initial predictions about the law being too weak to root out bias are coming true.

What are “Bias Audits”?

Under the recently enacted law, Local Law 144, employers in New York City using these technologies are required to work with independent auditors to conduct “bias audits” of some of the tools they use and share certain results publicly. These “bias audits” are required to include statistics about how often candidates of different races and sexes advance in an employer’s hiring or promotion process when a particular automated tool is used. As the law has taken effect, news coverage has characterized the law as requiring companies to “prove their AI tool is free from racial and gender bias before it can be used.”
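
To make those statistics concrete, the sketch below computes the kind of selection rates and group-to-group comparisons (impact ratios) that these audits report. The group names and counts are hypothetical, invented purely for illustration; real audits draw on an employer’s or vendor’s actual applicant data.

```python
# A minimal sketch of the kind of statistics a Local Law 144 bias audit
# reports: how often candidates in each group advance (selection rate) and
# how each group's rate compares to the highest rate (impact ratio).
# All group names and counts below are hypothetical.

applicants = {"Group A": 1000, "Group B": 800}  # candidates assessed by the tool
advanced = {"Group A": 300, "Group B": 180}     # candidates the tool advanced

# Selection rate: the share of each group's candidates who advanced.
selection_rates = {g: advanced[g] / applicants[g] for g in applicants}

# Impact ratio: each group's selection rate divided by the highest rate.
best_rate = max(selection_rates.values())
impact_ratios = {g: rate / best_rate for g, rate in selection_rates.items()}

for g in applicants:
    print(f"{g}: selection rate {selection_rates[g]:.2f}, "
          f"impact ratio {impact_ratios[g]:.2f}")
```

A low impact ratio flags a disparity, but as the rest of this piece explains, a clean-looking ratio does not mean a tool is free of bias.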

But what do these “bias audits” actually tell us? Do they definitively root out discrimination? A closer look indicates the answer is no. There are many ways an automated tool declared bias-free by an audit could nevertheless be discriminatory.

How Can “Bias Audits” Miss Signs of Bias?

For starters, as we and others have noted, the law only applies to a limited set of the types of automated tools used in hiring processes today. The law doesn’t require any sort of audit for potential bias based on disability, age, pregnancy, and other characteristics where discriminatory decision-making in hiring and employment continues to raise serious concerns, including concerns that automated tools may perpetuate or exacerbate this historical and ongoing discrimination.

Why Don’t “Bias Audits” Tell Us Whether Automated Tools Work?

Vendors of automated tools used in the hiring process often make broad claims about what their tools can do.

For example, some vendors may claim that their tools can accurately measure subjective traits or emotions like “grit” or “openness” based on job applicants’ personality tests or video interviews, but research consistently indicates that such broadly constructed traits and emotions cannot be meaningfully measured or predicted. “Bias audits” under Local Law 144 don’t evaluate whether the tools work as advertised or whether they accurately assess the abilities relevant to a specific job. But these questions are critical to any audit of the impacts of these tools.

Why Do “Bias Audits” Suffer from the Problem of Missing Data?

Employers can’t measure disparities they can’t see. They may have access to demographic information if candidates choose to self-report this data, but for a variety of reasons, employers and auditors may not always have information about the race and/or sex makeup of their applicant pools.

As a result, bias audits produced in connection with Local Law 144 may be missing a lot of data. For example, one recent “bias audit” computed the required statistics for a sample of 10 million job applications – but noted that there were almost 40 million other applications for which the race of the candidate was unknown – meaning less than 25 percent of the applications from the relevant period were included in that part of the audit.

The scale of missing data here is especially important in light of research suggesting that people may be hesitant to self-report their demographic information without a clear understanding of how their information will be used.

Employers Use Automated Tools for Hiring in Many Different Ways. Why Don’t “Bias Audits” Account for This?

Though two employers may be using the same automated tool – say, a resume screener that scores applicants on a scale of one to 10 – they may be using it in completely different ways, and Local Law 144’s bias audits may obscure important impacts of these differences.

One employer could be using the resume screener to evaluate candidates for roles in very different industries. Even if the industry is the same (say, health care), the roles for which the tool is used to hire may differ drastically (say, a nursing role versus a health insurance claims processor). One employer may interview any candidate who gets a score over five, while another may only consider those who score nine or above. One employer may refine the vendor’s tool using their own in-house data, while another may use the tool as-is.
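
To see why these differences matter, here is a minimal sketch showing how the same tool can produce very different outcomes depending on the cutoff an employer chooses. The groups, scores, and cutoffs are hypothetical and do not come from any real audit.

```python
# A minimal sketch of how two employers using the same resume screener can
# see very different group outcomes simply by choosing different cutoffs.
# All group labels and scores below are hypothetical.

candidates = [
    # (group, screener score from 1 to 10)
    ("Group A", 9), ("Group A", 7), ("Group A", 6), ("Group A", 4),
    ("Group B", 8), ("Group B", 7), ("Group B", 6), ("Group B", 3),
]

def selection_rates(passes):
    """Share of each group's candidates whose score satisfies `passes`."""
    rates = {}
    for group in sorted({g for g, _ in candidates}):
        scores = [s for g, s in candidates if g == group]
        rates[group] = sum(passes(s) for s in scores) / len(scores)
    return rates

# Employer 1 interviews anyone scoring over five; Employer 2 requires nine or above.
print("over five:    ", selection_rates(lambda s: s > 5))
print("nine or above:", selection_rates(lambda s: s >= 9))
```

With the “over five” cutoff the two groups advance at identical rates; with the “nine or above” cutoff one group is shut out entirely, even though the tool and the scores are exactly the same.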

Local Law 144 lets auditors pool together data from multiple employers that use the same tool, and employers are allowed to rely on results from these kinds of pooled audits. But because employers often use automated tools for hiring in highly customized ways, evaluating whether an automated system creates a disparate impact requires being specific about how the tool is actually used. When audits rely on pooled data from many different employers, those specifics may not be captured.
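
The sketch below illustrates how pooling can hide exactly that. It assumes two hypothetical employers using the same tool: each one’s own data shows a stark disparity, but the pooled statistics look perfectly balanced.

```python
# A minimal sketch of how pooled audit data can mask employer-specific
# disparities. Both employers and all counts below are hypothetical.

# For each employer: group -> (applicants, advanced)
employers = {
    "Employer 1": {"Group A": (100, 60), "Group B": (100, 20)},  # favors Group A
    "Employer 2": {"Group A": (100, 20), "Group B": (100, 60)},  # favors Group B
}

def selection_rates(data):
    """Share of each group's applicants who advanced."""
    return {g: advanced / applicants for g, (applicants, advanced) in data.items()}

# Each employer's own data reveals a large disparity.
for name, data in employers.items():
    print(name, selection_rates(data))

# Pooling the two employers' data hides both disparities.
pooled = {}
for data in employers.values():
    for g, (applicants, advanced) in data.items():
        a, s = pooled.get(g, (0, 0))
        pooled[g] = (a + applicants, s + advanced)
print("Pooled    ", selection_rates(pooled))
```

An audit computed only on the pooled data would show both groups advancing at the same rate and give no sign of either employer’s disparity.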

Why is Independent Auditing Necessary but Not Sufficient for Making Sure Algorithms Don’t Discriminate?

Independent audits of automated tools are critical. But these audits should be holistic, ongoing, and transparent, and should examine issues beyond bias and discrimination alone. They should be conducted both before an automated system is built or deployed and again once it is in use.

Audits should be designed to get a clear understanding of the context in which an automated system is used and should center the communities who are most impacted by the system. And most importantly, tools that show disparate impact on protected classes should not be allowed to be used. Many researchers and practitioners have proposed various frameworks and approaches for these kinds of meaningful independent audits, including specifically in the context of hiring.

For algorithmic audits to provide meaningful transparency, they should also be publicly available and easily accessible. So far, we’ve seen that the Local Law 144 bias audits can be very hard to find — which is why we created a public repository of them here.

In this repository, you can find examples of reports employers have posted publicly, as well as copies of reports that have been posted online by employers and later removed. If you know of Local Law 144 bias audits that employers have posted that we missed or if you were notified during your own application process about the use of an automated decision system, let us know or contribute directly to the repository.

How Has the NYCLU Fought Local Law 144?

Both before it was passed and prior to its implementation, the NYCLU warned lawmakers and other public officials that Local Law 144 was deeply flawed.

We made clear that the law falls far short of providing comprehensive protections for candidates and workers. During the subsequent rulemaking process, the NYCLU urged the Department of Consumer and Worker Protection to amend and strengthen the law’s rules to deliver on its promise to mitigate bias and provide people with the information they need. Unfortunately, our calls for much needed changes were not heeded.

Still, the fight is not over. A new bill could provide much needed oversight.

How Do We Stamp Out Biased Algorithms?

A bill in the state legislature, the Digital Fairness Act, would create comprehensive privacy protections, requiring companies to obtain people’s informed, affirmative consent before they collect or use their personal information. It would also ensure that companies can no longer circumvent our civil and human rights laws through the use of automated tools like the ones used in hiring.

The bill makes clear that using personal information to discriminate against someone is both unlawful and an unfair trade practice. The legislation also provides guardrails for government use of automated decision systems: it bans discriminatory digital tools and gives communities a say over whether and how state and local governments use automated tools.

The Digital Fairness Act is a much stronger, more comprehensive way to stamp out biased algorithms than the weak, piecemeal approach offered by Local Law 144.

Resources

Know Your Digital Rights: Digital Discrimination in Hiring →
