We might not realize it, but computer algorithms are everywhere.

These algorithms, which are basically a series of steps or instructions designed to perform a specific task or solve a particular problem, are used by governments and private industry in a growing number of fields.

They are used in the criminal legal system to decide things like who is let out on bail and who is forced to stay locked up. They are also used to help assign students to schools, offer people loans, decide where health care spending should be allocated, and perform many other tasks.

The problem is that these algorithms, no matter how objective, neutral, and harmless they may appear on the surface, are often steeped in bias. That’s because the decisions about what factors an algorithm considers and how much weight each factor is given are made by people, who bring their own biases into these calculations.

Algorithms are also often based on data that has bias baked in. So, for instance, an algorithm based on arrest data will inevitably be biased by the decades-long over-policing of communities of color.

In one recent example of an algorithm gone awry, researchers discovered that an algorithm used to identify patients’ health risks failed to identify many Black patients who would benefit from more medical treatment.

In response to the study, last month New York State officials asked UnitedHealth Group, which sells the algorithm, to “investigate these reports and demonstrate that this algorithm is not racially discriminatory.”

As the study – published in the journal Science last month – explains, the Impact Pro algorithm looks at patients’ past health care spending to predict which people will likely require the most care in the future. The patients Impact Pro identifies are then targeted for “high-risk care management” programs.

These programs, the study explains, “seek to improve the care of patients with complex health needs by providing additional resources, including greater attention from trained providers, to help ensure that care is well coordinated.”

The problem is that Impact Pro uses past spending on health care to predict future spending. But that metric doesn’t account for various reasons why Black people in poor health spend less than their white counterparts. The result is that the people targeted for high-risk care management do not include many Black patients who could benefit from it.
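
To make the proxy problem concrete, here is a minimal sketch in Python with made-up synthetic numbers – it is not Impact Pro’s actual model or the study’s data. It assumes two groups of patients who are equally sick, one of which spends less per unit of illness; flagging patients by cost then reaches far fewer members of the lower-spending group than flagging them by illness would.

```python
# Illustrative simulation only: synthetic data and assumed numbers,
# not Impact Pro or the Science study.
import random

random.seed(0)

def simulate_patient(group):
    illness = random.gauss(5, 2)               # true health need (higher = sicker)
    spend_rate = 1.0 if group == "A" else 0.7  # group B spends less for the same illness
    cost = max(0.0, illness * spend_rate + random.gauss(0, 0.5))
    return {"group": group, "illness": illness, "cost": cost}

patients = [simulate_patient("A") for _ in range(5000)] + \
           [simulate_patient("B") for _ in range(5000)]

def flag_top(patients, key, fraction=0.10):
    """Flag the top `fraction` of patients for 'high-risk care management'."""
    cutoff = sorted((p[key] for p in patients), reverse=True)[int(len(patients) * fraction)]
    return [p for p in patients if p[key] > cutoff]

for label, key in [("ranked by past cost", "cost"), ("ranked by actual illness", "illness")]:
    flagged = flag_top(patients, key)
    share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
    print(f"{label}: group B is {share_b:.0%} of the flagged patients")
```

Both groups are equally sick by construction, so any gap in who gets flagged comes entirely from the choice of what the score predicts, not from any race variable in the data.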

The study lays out several reasons why past health care spending is not an effective measure of who will need the most care in the future. Because socioeconomic status is correlated with race, Black patients tend to have less wealth than white patients. So some Black patients likely forgo needed care more often than white patients simply because they lack the resources to pay for it.

Black patients also face discrimination from doctors: white doctors, for example, are less likely to recommend preventative care for their Black patients.

Impact Pro does not account for race when it selects patients to target for more care. Yet despite that race-blindness, its results are heavily biased against Black people.

Algorithms like Impact Pro are extremely common in the health care industry. And it’s far from the only problematic one. Nearly every day, there is worrying news on this front.

Apple’s new credit card made headlines after it reportedly offered women much lower credit limits than their husbands, even when the couples held joint accounts and shared finances and the women had higher credit scores than their male partners. New York’s Department of Financial Services is investigating both Impact Pro and the Apple Card.

And the Guardian reported this week that Google has acquired the unredacted health records of millions of patients as part of its efforts to build health algorithms.

The Impact Pro example highlights two important lessons about algorithms. One is that even algorithms that appear neutral and don’t explicitly use race in their calculations can still be biased.

Another lesson is that when algorithms are made transparent and available to public scrutiny, their biases can sometimes be uncovered and, at least partially, corrected. Such is the case with Impact Pro: using a combination of past health care spending and other metrics, the researchers were able to eliminate 84 percent of the bias in the algorithm.
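
That fix can be sketched in the same hedged spirit: score patients on a blend of past cost and a direct health measure, such as a count of active chronic conditions, rather than on cost alone. The field names, the normalization, and the blend weight below are illustrative assumptions, not the study’s actual index.

```python
# Illustrative sketch of a blended risk score; the fields and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Patient:
    past_cost: float          # last year's health care spending, in dollars
    chronic_conditions: int   # active chronic conditions, a direct health signal

def risk_score(p: Patient, cost_weight: float = 0.3) -> float:
    """Blend normalized past cost with a normalized health measure.

    cost_weight=1.0 reproduces a pure cost proxy; lower weights lean on the
    direct health signal instead.
    """
    cost_part = min(p.past_cost / 50_000, 1.0)        # crude normalization (assumed scale)
    health_part = min(p.chronic_conditions / 10, 1.0)
    return cost_weight * cost_part + (1 - cost_weight) * health_part

# Two patients who are equally sick, but one spent far less last year.
high_spender = Patient(past_cost=30_000, chronic_conditions=4)
low_spender = Patient(past_cost=8_000, chronic_conditions=4)

for w in (1.0, 0.3):  # pure cost proxy vs. health-leaning blend
    print(f"weight on cost = {w}: high spender scores {risk_score(high_spender, w):.2f}, "
          f"low spender scores {risk_score(low_spender, w):.2f}")
```

Down-weighting cost narrows the gap between two equally sick patients, which is the direction of the correction the researchers describe.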

This is far from a perfect solution, and there are plenty of cases where algorithms should not be used at all. But transparency is critical, especially since we’re unlikely to rid the world of algorithms any time soon.