Over the last few years, pushes for diversity in the workplace have become increasingly prevalent as wider conversations about race and gender have entered mainstream political and cultural discussion. The tech industry has faced particular scrutiny for its longstanding reputation as a field dominated by white men. That reputation is earned: according to the Equal Employment Opportunity Commission, the industry is disproportionately white and male, and many companies have made significant efforts to address the underrepresentation of women and of black and Hispanic people.

Even titans of the industry such as Google have made a public show of their commitment to this issue. While companies like Google and Microsoft tackle it by investing in education for underrepresented groups and by reworking hiring practices to avoid bias, some have suggested a very “tech” solution: algorithms.

This idea has some intuitive appeal. Most conversations about racism and how to address it focus on the bigotry of individual people: for instance, a hiring manager not weighing two resumes equally because one has a black-sounding name. It makes sense to believe that if we can’t trust fallible humans to behave in unbiased ways, perhaps an objective, non-emotional algorithm can do it instead.

At this point, it’s no secret that algorithms can be biased. Numerous scandals, such as Google’s Vision AI producing racist results, have drawn significant public attention to the problem, and it pervades algorithms of all kinds. Algorithms don’t just fail to address issues of bias; they can exacerbate them.

One of the core issues is data: if data is collected based on parameters set by humans, with all the bias that entails, the data itself can end up biased, and an algorithm trained on it will produce biased results. An example directly related to employment comes from AdFisher, a research tool built to probe Google’s ad system. Researchers found that setting a simulated user’s gender to female resulted in fewer ads for high-paying jobs than setting it to male. In other words, because women are currently less likely to hold high-paying jobs, the algorithm replicated that pattern in its output.
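To see how faithfully a model can reproduce the disparities in its training data, here is a minimal sketch. It is not the AdFisher study’s code, and every number in it is invented for illustration; the point is only that a model fit to a skewed history serves skewed results.

```python
# A toy "ad model" trained on a skewed history. All figures are synthetic.
import random

random.seed(0)

# Hypothetical historical records: (gender, held_high_paying_job).
# The imbalance mirrors a labor market in which women held fewer such jobs.
history = ([("male", True)] * 70 + [("male", False)] * 30
           + [("female", True)] * 30 + [("female", False)] * 70)

def train(records):
    """'Train' by memorizing the historical rate of high-paying jobs per group."""
    rates = {}
    for group in ("male", "female"):
        outcomes = [held for g, held in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def serve_ad(model, gender):
    """Show the high-paying-job ad with probability equal to the learned rate."""
    return random.random() < model[gender]

model = train(history)
print(model)  # {'male': 0.7, 'female': 0.3} -- the historical bias survives training intact

shown = sum(serve_ad(model, "female") for _ in range(1000))
print(f"high-paying ads served to 1,000 female profiles: ~{shown}")
```

Nothing in the training step is malicious; the model simply learned the world as the data described it, which is exactly the failure mode described above.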

This does not make algorithms completely useless. With enough data and enough awareness on the part of an algorithm’s designers, we could significantly reduce this bias, and there are intentional design choices that can help. Broadly, these interventions can occur at three levels: the decision algorithm, the formula itself, and the formula’s inputs. An example at the first level is “boxing”, where the output is segmented by some relevant demographic consideration and each segment is evaluated separately to produce a more equal outcome (see the sketch below). An example at the second level is adjusting the formula itself to mathematically weigh factors differently across segments of the data, producing a comparable average between groups. An example at the third level is adjusting the input data to “remove” the effect of a certain factor, by correcting for the probability of an outcome given that factor. These interventions can be tricky because they require you to declare what the outcome ought to be, and that declaration can itself be a biased decision.
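As a concrete illustration of the first, decision-level intervention, here is a minimal sketch of boxing. The names, groups, and scores are synthetic, and the per-group quota rule is just one possible way to evaluate segments separately:

```python
# Boxing: leave the model's scores untouched, but rank and cut off each
# demographic segment within its own "box" instead of against one global bar.
from typing import NamedTuple

class Candidate(NamedTuple):
    name: str
    group: str    # demographic segment used for boxing
    score: float  # output of some upstream screening model

def box_select(candidates, per_group_quota):
    """Select the top `per_group_quota` candidates within each group."""
    selected = []
    for group in {c.group for c in candidates}:
        box = sorted((c for c in candidates if c.group == group),
                     key=lambda c: c.score, reverse=True)
        selected.extend(box[:per_group_quota])
    return selected

pool = [
    Candidate("A", "group_1", 0.91),
    Candidate("B", "group_1", 0.84),
    Candidate("C", "group_2", 0.78),
    Candidate("D", "group_2", 0.74),
    Candidate("E", "group_1", 0.88),
]

# A single global top-3 cutoff would select A, E, and B: all from group_1.
# Boxing with a quota of 1 per group selects A and C, equalizing the outcome.
print(box_select(pool, per_group_quota=1))
```

Note what the quota does: it is an explicit declaration of what the outcome ought to be (equal selections per group), which is exactly the kind of judgment call the paragraph above warns about.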

There are tradeoffs to addressing bias technologically rather than through human judgment. Humans can be more susceptible to bias in many cases than an algorithm, but when an algorithm is biased, that bias tends to be far more entrenched and influential than the bias of any single person. And for an algorithm not to produce unequal results, we must program it toward a specific outcome. That is complicated because, despite what the framing around hiring practices and the like suggests, these issues start long before a person hands in a resume.

Ultimately, addressing these issues will require a diversity of tactics. With awareness of their drawbacks and a willingness to toss out a process when it isn’t working, I believe algorithms can be a useful tool. But they are not a magic bullet.

Written by:
Allison Kiteley
Software Developer at Definitive Logic
