Despite efforts to remove bias from hiring and promotions, it persists, and not always at the hands of managers operating under conscious or unconscious bias. Algorithms and datasets are increasingly used in interviewing, hiring, and promotion decisions, and while they can help reduce human bias in decision-making, they aren't perfect.

According to Pauline Kim, Daniel Noyes Kirby Professor of Law at Washington University in St. Louis, these algorithms can rely on inaccurate, biased, or unrepresentative data.

“When this happens, the result is classification bias—a term that highlights the risk that data algorithms may sort or score workers in ways that worsen inequality or disadvantage along the lines of race, sex, or other protected characteristics,” Kim said.
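One way to make the sorting-and-scoring concern concrete is to compare selection rates across groups in an algorithm's output. The sketch below is illustrative only (the group names and numbers are hypothetical, not from Kim's research); it applies the selection-rate comparison behind the EEOC's four-fifths rule of thumb, a common screen for possible disparate impact.

```python
# Illustrative only: checking an algorithm's output for disparate
# selection rates. Group labels and outcomes are hypothetical.
def selection_rate(decisions):
    """Fraction of candidates selected (1 = advanced, 0 = rejected)."""
    return sum(decisions) / len(decisions)

# Hypothetical screening results from an automated resume filter.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 8 of 10 advanced
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 3 of 10 advanced
}

rates = {group: selection_rate(d) for group, d in outcomes.items()}

# Adverse-impact ratio: lowest selection rate divided by the highest.
# Under the EEOC's four-fifths guideline, a ratio below 0.8 flags
# possible disparate impact worth investigating.
ratio = min(rates.values()) / max(rates.values())
```

Here the ratio is 0.3 / 0.8 = 0.375, well under the 0.8 threshold, so this hypothetical screen would warrant scrutiny even though no human made the individual decisions.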

Because data mining can obscure the basis on which employment decisions are made, errors in the process are hard to detect. Worse, when the outcomes of algorithmic selections feed back into the data the algorithm learns from, an "echo chamber" effect can compound the bias.
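The echo-chamber dynamic can be sketched with a toy model. This is an assumption-laden illustration, not anything from Kim's work: the "model" here simply scores each candidate by their group's historical hire rate, hires the top-ranked half, and then adds its own decisions back into the history, so a small initial disparity compounds over successive rounds.

```python
import random

random.seed(0)

def hire_rate(history, group):
    """Historical hire rate for a group -- our stand-in for a model
    trained on past outcomes."""
    decisions = [hired for g, hired in history if g == group]
    return sum(decisions) / len(decisions) if decisions else 0.5

# Seed the history with a small human-introduced disparity:
# group "A" was hired 60% of the time, group "B" 50%.
history = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 50 + [("B", False)] * 50

gaps = []  # track the A-vs-B hire-rate gap after each round
for _ in range(5):
    applicants = [random.choice("AB") for _ in range(200)]
    # Rank applicants by the model's score and hire the top 100.
    ranked = sorted(range(len(applicants)),
                    key=lambda i: hire_rate(history, applicants[i]),
                    reverse=True)
    for position, i in enumerate(ranked):
        # The algorithm's own decisions become tomorrow's training data.
        history.append((applicants[i], position < 100))
    gaps.append(hire_rate(history, "A") - hire_rate(history, "B"))
```

Because every group-A applicant outscores every group-B applicant from round one, the feedback loop widens the initial 10-point gap round after round, even though no step in the loop is itself "biased" in intent.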

These biases stem from what is essentially faulty data. Bias can enter such data from a variety of sources, and an unbiased computer working from biased data will produce biased results.

“Rote application of our existing laws will fail to address the real sources of bias when discrimination is data-driven,” said Kim.

According to Kim, "we must fundamentally rethink how anti-discrimination laws apply in the workplace in order to address classification bias and avoid further entrenching workplace inequality."

Current anti-discrimination laws and practices were not put into place with classification bias in mind, and they must be adapted to this rapidly growing problem.

Classification bias is essentially a new kind of discrimination, or at least a new delivery mechanism for existing forms of discrimination, and it demands a new legal response.

“Focusing on classification bias,” according to Kim, “suggests that anti-discrimination law should be adjusted in a number of ways when it is applied to data algorithms,” which can have the same deleterious effect as human bias if left unchecked.