A resume scanner that penalizes applications indicating participation in women’s sports. An automated system that identifies traits associated with “success” on the job and finds that being named “Jared” and having played high school lacrosse are the strongest indicators. An ad for a trucking job posted on Facebook that is disproportionately shown to men.
While these examples may sound unlikely, each is a real instance of bias enabled by automated or algorithmic hiring tools, which employers now widely use at every stage of the hiring process.
Algorithms, essentially sets of steps or instructions designed to perform a specific task or solve a particular problem, increasingly shape the world we live in. They are used to make critical decisions in our education, health care, and criminal legal systems, and much more.
Many employers advertise job opportunities on social media platforms and use chatbots to collect candidate materials and filter out applications. Employers also rely on automated tools to screen candidate materials and to conduct interviews, using software that analyzes video or audio of candidates' responses to specific questions.
But the algorithms used to make these automated employment decisions are often cloaked in secrecy. There is very little publicly available information about what data goes into these algorithms or how they arrive at the decisions they make.
In 2021, the New York City Council passed a bill designed to bring some much-needed transparency to these automated tools and to try to determine whether the decisions they make are biased. Unfortunately, as we get a closer look under the hood at how the law works in practice, our initial prediction that it would be too weak to root out bias is coming true.