Is Bias in NLP Models an Ethical Problem?

Can we responsibly disregard the negative sides of AI?
By Boris Delovski • Updated on Nov 30, 2022

It’s easy to get engrossed in all that artificial intelligence, or AI, can offer us. Every machine learning engineer remembers the first time they created a model that helped someone. Even people who aren’t specialized in the field can hardly help getting excited when they find an application that solves their problems efficiently (whether they know that AI powers that solution or not).

While AI is used in a multitude of fields, deep learning research has flourished in some fields in particular. One of those fields is natural language processing, commonly referred to as NLP. Advancements in NLP have made many positive improvements possible within the field of AI.

In practice, most people focus on the benefits brought about by AI technology. But can we responsibly disregard the negative sides of AI? In this article, I’ll explain how the same biases that exist in our society have crept into deep learning models, and whether we should do something about it.

 

What is Bias?

These days, bias is mostly mentioned with a negative connotation. In a society that strives for equality, holding bias toward a person (or group of people) is exactly what you try to avoid. While avoiding personal bias in your life makes perfect sense for humans, should you apply the same standards to the machine learning and deep learning models you use?

Not so long ago this wouldn't have even been a valid question, but today the situation is different. With the rise in popularity of AI, machine learning and deep learning models have been introduced into many aspects of our lives. You use AI daily, whether you're aware of it or not, and it regularly shapes your own behavior.

For example, each time you use Google's search engine you’re using some form of AI, and you often make decisions based on Google search results (ever choose a movie based on Google movie recommendations?). If the behavior of AI in turn influences your own behavior, are you warranted in applying some human standards to machines? At which point does your AI-influenced “decision” stop being your own decision at all?

 

If you go to a new city and want to visit the best restaurant, you’ll probably look at reviews that are available online. If the reviews are biased—or if the model that displays the reviews is biased—and you decide to base your decision on them, who’s really making the decision: you, or the machine? To add one more question to this list of ideas to explore: do you, as a designer of AI, introduce bias into your models, or do the models only seem biased because they prefer one solution over another?

 

How is Bias Introduced to NLP Models?

The deep learning models you use are mostly “black-box” models. In other words, you define a model, compile it, train it on some data, and then use it to solve a particular problem. As a designer, you'll typically stop there; explaining why a model makes a given decision is rarely tackled. That doesn't mean the underlying concepts that drive a model are unknown, however.

After all, every model can be described with mathematical equations. What it does mean is that it’s hard to say what a model bases a decision on, so designers often don't bother to find out. That's standard practice for supervised learning models, and even more so for unsupervised learning models, where understanding the reasoning behind an exact decision matters even less, since the model learns without labels.

 

Unfortunately, unsupervised learning is the crux of NLP. Natural language processing models are typically supervised learning models, but the input representations they rely on are created using models trained in an unsupervised way. This is necessary because you can't feed raw text directly into your models. Instead, you need to convert the text into representations of language that your models can digest. Those representations are called word embeddings.

 

What Are Word Embeddings in Unsupervised Models? 

Word embeddings are numerical representations of text data created by unsupervised models. A model trained in an unsupervised way scours large quantities of text and creates vectors to represent the words in the text.
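To make this concrete, here is a minimal sketch of training word embeddings in an unsupervised way using gensim's Word2Vec. The tiny corpus and the hyperparameter values are placeholder assumptions for illustration, not a recommended setup.

```python
# A minimal sketch of unsupervised word embedding training with gensim.
# The tiny "corpus" below is a stand-in for the large quantities of text
# a real model would scour; no labels are involved at any point.
from gensim.models import Word2Vec

corpus = [
    ["the", "doctor", "examined", "the", "patient"],
    ["the", "nurse", "prepared", "the", "medication"],
    ["the", "engineer", "designed", "the", "bridge"],
]

# Word2Vec learns a dense vector for each word purely from the
# co-occurrence patterns it finds in the text.
model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, epochs=20)

print(model.wv["doctor"].shape)  # a 50-dimensional vector representing "doctor"
```

Each word ends up as a point in a continuous vector space, and words that appear in similar contexts end up close to one another, which is exactly what downstream supervised models exploit.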

Unfortunately, by searching for hidden patterns and using them to create embeddings (which automatically group data), your models are exposed to more than just semantic information. While processing the text, models are also exposed to biases that are not unlike the ones present in human society. The biases then get propagated into your supervised learning models—the same models that are supposed to take in unbiased data to ensure they don't produce biased results.

 

If you’re skeptical about whether something like this can happen, let’s explore a few of the many examples of racial bias and gender bias that occur in deep learning.

 

What is Gender Bias?

A 2021 article published by The Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative, part of a series entitled “AI and Bias,” discusses research into bias and NLP. The article notes a study performed by the author at Princeton University's Center for Information Technology Policy in 2017.

The study revealed many instances of gender bias in machine learning language applications. While studying machines as they process word embeddings, the researchers found that the machines learned human-like biases from the word associations in their training material.

For instance, the machine learning tools were learning that sentences containing the word "woman" more often also included words such as "kitchen" and "art," while sentences containing "man" more often also contained words such as "science" and "technology." As machine learning picks up on these patterns, these types of gender biases are propagated down the line.

If a supervised learning model uses biased embeddings, it will also produce biased results.
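To make the kind of association described above concrete, here is a hedged sketch of how you might probe embeddings for gendered associations using cosine similarity. It assumes `model` is a gensim Word2Vec model trained on a corpus large enough to contain these words; the word pairs are illustrative and not taken from the Princeton study.

```python
# A sketch of probing word embeddings for gendered associations.
# `model` is assumed to be a trained gensim Word2Vec model whose
# vocabulary contains the words below; the word choices are illustrative.
def association_gap(model, target_a, target_b, attribute):
    """Positive values mean `attribute` sits closer to `target_a` than to
    `target_b` in the embedding space, measured by cosine similarity."""
    return model.wv.similarity(target_a, attribute) - model.wv.similarity(target_b, attribute)

for attribute in ["kitchen", "art", "science", "technology"]:
    gap = association_gap(model, "woman", "man", attribute)
    print(f"{attribute:>10}: woman-vs-man association gap = {gap:+.3f}")
```

Systematic gaps of this kind are what more formal measures, such as the Word Embedding Association Test used in that line of research, quantify.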


Image: Observation and evaluation of gender bias in NLP. Source: Sun, Tony, et al., “Mitigating Gender Bias in Natural Language Processing: Literature Review” (2019).

In 2015, instances of gender bias were reported in Google Search results. Adrienne Yapo and Joshua Weiss explore the revelation in their 2018 conference paper “Ethical Implications of Bias in Machine Learning.” In the backlash, Google users discussed their experiences with bias while searching. For instance, searches for a word like "CEO" returned mostly images of white males. The paper also discusses a study that found that the Google Search engine showed more ads for high-paying, executive jobs to male users than to female users.

One of the best-known examples of gender bias in NLP was found more recently. In May 2020, OpenAI introduced the third-generation Generative Pre-trained Transformer, or GPT-3, NLP model. The model was praised as the best NLP model available yet, able to generate language nearly indistinguishable from human writing. However, even GPT-3 was not immune to gender bias. When prompted to state the gender of certain occupations, GPT-3 reported that doctors are male and nurses are female.
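GPT-3 itself is accessible only through OpenAI's API, but you can run an analogous probe locally on a masked language model. The sketch below uses Hugging Face's transformers pipeline with bert-base-uncased; the model choice and prompts are my own illustrative assumptions, not the setup used in the GPT-3 reports.

```python
# A sketch of probing a masked language model for gendered completions.
# Assumes the transformers library is installed; bert-base-uncased and
# the prompts are illustrative choices, not the GPT-3 experiments above.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "The doctor said that [MASK] would be late.",
    "The nurse said that [MASK] would be late.",
]:
    predictions = unmasker(sentence, top_k=3)
    print(sentence, "->", [p["token_str"] for p in predictions])
```

If the top completions for the two prompts skew toward different pronouns, the model has picked up the same occupational stereotypes described above.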

 

What is Racial Bias?

In the 2017 Princeton study, racial bias was observed alongside gender bias. The study found that the negative biases toward African Americans and the Black community present on the internet remained present in the model embeddings. When examining their model, the researchers found that traditionally Black names were significantly more strongly associated with words carrying a negative connotation than traditionally White names were.

This type of racial bias was also demonstrated well before 2017. The Brookings article also cites a 2004 study which found that, when resumes were screened by machine learning, those with African American names received 50% fewer callbacks than those with names the model characterized as White.

 

Does Bias Exist in Other Fields of Deep Learning?

Researchers have recognized cases of bias not only in NLP models but also in other deep learning models (although they’re most prominent in NLP models). Computer vision is another field of AI with no shortage of highly controversial cases of racial bias.

 

Photo or image recognition is an example of computer vision. In 2015, Flickr’s image recognition tool brought the company into the headlines when it reportedly tagged a photo of a Black man as an “ape." In a similar instance, Google Photos was also in the public spotlight when it labeled a Black software developer and his friend as “gorillas” in personal photos.

Image: Flickr's problematic image (a photo of a Black man).

 

Is Bias an Ethical Issue?

Knowing that bias exists, the question becomes how to deal with it in your models. What can you do to make your models as unbiased as possible?

Whether you like it or not, word embeddings represent—to a certain degree—our culture, and using them to represent words leads to accurate models. This means that the biases you notice in your models exist because they exist in our society.

If you choose to create unbiased models, their accuracy will undoubtedly suffer. Trying to represent a biased society with an unbiased model renders the model itself useless. On the other hand, biased models and the growing dependence on NLP create a feedback loop that only accentuates the biases that harm society today.
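To make the trade-off concrete: one widely cited technique from the research literature, the neutralizing step of hard debiasing (Bolukbasi et al., 2016), removes a gender direction from word vectors by projection. The sketch below is a simplified version; `model` is again an assumed pre-trained Word2Vec model, and real implementations derive the direction from many word pairs rather than one.

```python
# A simplified sketch of the "neutralize" step of hard debiasing:
# removing the component of a word vector that lies along a gender
# direction. `model` is an assumed pre-trained gensim Word2Vec model;
# real systems estimate the direction from many word pairs, not one.
import numpy as np

def neutralize(vector, direction):
    """Remove the component of `vector` that lies along `direction`."""
    unit = direction / np.linalg.norm(direction)
    return vector - np.dot(vector, unit) * unit

gender_direction = model.wv["man"] - model.wv["woman"]  # toy single-pair estimate
debiased_doctor = neutralize(model.wv["doctor"], gender_direction)
```

Note what this does: it deliberately deletes information from the embedding, which is exactly why the accuracy of models built on top of it can suffer.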

On paper, it’s an unsolvable dilemma. But maybe you should try to approach it from a different angle: instead of trying to create unbiased models, you should consider starting to apply the same standards to your models that we apply to ourselves as humans.

Unfortunately, humans are biased, if not consciously then unconsciously. Even when you think you're making unbiased decisions, you probably aren't. As a human, you try to understand and challenge your biased behavior using standards: an individual's behavior is compared against what is considered acceptable behavior.

So if you can't prevent making biased decisions, you can create standards in advance to audit them. In doing so, you don’t stop models from making biased decisions, but you can create a method to deal with those decisions before they harm others. Creating a type of audit mechanism is a possible solution that would allow you to better understand how the behavior of your models might shape public opinion.
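As one deliberately simplified example of what such an audit could look like, the sketch below scores the same sentence with only a demographic term swapped and flags large gaps for human review. The `sentiment_score` argument is a hypothetical stand-in for whatever model you are auditing, and the template, word pairs, and threshold are arbitrary illustrations.

```python
# A sketch of a simple counterfactual audit: score the same sentence with
# only a demographic term swapped and flag large gaps for human review.
# `sentiment_score` is a hypothetical stand-in for the model under audit.
TEMPLATE = "{} is a talented programmer."
SWAPS = [("He", "She"), ("John", "Jamal")]
THRESHOLD = 0.1  # an arbitrary example tolerance, not a recommendation

def audit(sentiment_score):
    for term_a, term_b in SWAPS:
        gap = sentiment_score(TEMPLATE.format(term_a)) - sentiment_score(TEMPLATE.format(term_b))
        if abs(gap) > THRESHOLD:
            print(f"Flag for review: '{term_a}' vs '{term_b}', gap = {gap:+.3f}")
```

An audit like this doesn't prevent biased decisions, but it surfaces them against a standard you agreed on in advance, before they reach users.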

Looking ahead, you might even create models that specifically work on trying to solve these bias problems. Today, there's the opportunity to start investing more time and money into developing explainable models. Explainable models could lead to big breakthroughs not only in terms of dealing with biased models but also in deep learning as a whole.

 

There are likely many solutions to the bias problem in AI—and not all of them are completely unique and revolutionary ideas. So why haven't we already implemented measures to combat bias in deep learning models, and especially in bias-sensitive NLP models? This brings you to the most controversial question of the article: If you have the solution to bias in models, should you implement it?

 

Ethics vs. Business

In an ideal world, the answer is easy. But in reality, many factors are at play between the practical answer and the ethical one. The most important factor is motivation. Are companies today motivated to reduce bias in their models?

The answer is, unfortunately, a resounding no. The competitive market today doesn't allow for financial “wiggle room," and as smaller companies fight to stay relevant, they choose to use the models that produce the best results—even if they’re biased. While bigger companies could afford to take some losses, they’re often governed by the idea that financial gain trumps everything else. For them, taking a loss is often not an option.

Let’s look at this from another angle and say motivation to reduce bias isn’t a significant factor. Let’s assume that everyone is motivated to de-bias their models as much as possible. This scenario still has problems that need to be addressed.

Sometimes you'll want your models to be biased. For example, if a company sells boots for men, it’s perfectly logical to create a model biased towards males. Similarly, if you’re a company selling shoes with high heels, you’ll probably get better results if you bias your model toward females.

The overarching theme here? Everything is relative. In some cases, bias is good. In others, it isn’t. Overall, the harder you try, the more convoluted the ethical dilemmas you run into. And I haven’t even touched on the ideas around rules of law and morality applying to AI, an object without basic human rights!

The short answer is that there’s no readily available solution today. The best we can do individually is to try to be less biased ourselves. Because our models mirror society, a collective change in behavior will also change our datasets, and one day it might produce truly unbiased embeddings from which our models can learn.

AI is already a big part of our lives, and that won’t change in the future. On the contrary, its role in our lives is becoming steadily more significant. AI models choose the news you're exposed to each day and the commercials you'll see first each morning. To disregard the ethical implications of leaving something as influential as AI models unchecked is a mistake, and leads to questionable results like the examples of bias in this article.

At times, censoring models seems like the only logical thing to do. However, I find that using AI as a scapegoat only serves to hide the truth that our society is a biased one. The models you train learn simply and efficiently through examples. So if you see your models making biased decisions, most likely we, as humans, are the example to blame. The models only know how to simulate what they've been taught, so with models as our mirror, we see the flaws in our own reflection that we don't want to see.

The real solution to dealing with bias and ethical problems in our models is to solve those ethical problems within ourselves. Fixing ourselves keeps the accuracy of our models at a high level while correcting for bias. Now is the time to begin leading by example for a better future with AI, instead of covering up our past mistakes.

 

Boris Delovski

Data Science Trainer

Boris is a data science trainer and consultant who is passionate about sharing his knowledge with others.

Before Edlitera, Boris applied his skills in several industries, including neuroimaging and metallurgy, using data science and deep learning to analyze images.