
Google’s Gmail Removes Gendered Pronouns to Avert AI Bias

Gmail’s Smart Compose is one of Google’s most interesting artificial intelligence features in years, predicting not only what users will write in their emails but also offering to complete sentences on their behalf.

Even so, Google has blocked the Smart Compose feature from suggesting gender-based pronouns such as “her” and “him” in emails, out of concern that it will guess the wrong gender.

According to Reuters, the limitation was introduced after a company research scientist discovered the issue in January 2018.

It happened when the researcher typed “I am meeting an investor next week” into a message and Gmail suggested the follow-up question “Do you want to meet him,” misgendering the investor.

Paul Lambert, a Gmail product manager, told Reuters that his team tried to solve the issue in several ways, but none of them proved reliable enough.

Lambert said that removing these suggestions was the simplest solution, a change that Google says affects less than one percent of Smart Compose’s predictions.

What’s more, Lambert told Reuters that it is better to be cautious in such cases, since gender is a “big, big thing” to get wrong.

The gender bias demonstrated by Gmail is a small instance of a much larger issue.

This small bug is a telling example of how software built with machine learning can reflect, and even reinforce, societal biases.

Like most AI systems, Smart Compose is trained on historical data, combing through past emails to learn which words and phrases it should suggest.

Smart Reply, its sister feature, does something similar to recommend bite-size responses to emails.

In Lambert’s case, it appears that Smart Compose had learned from that historical data that investors were more likely to be men than women.
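To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of pattern learning described above: a toy next-word suggester that simply counts which words follow which in a small, made-up corpus of past emails. The corpus and the suggest_next helper are invented for illustration and have nothing to do with Google’s actual model, but they show how a skew in historical data becomes a skew in the suggestions.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for years of past emails.
# In this made-up data, "investor" emails mostly end with "meet him",
# so a frequency-based suggester will reproduce that skew.
corpus = [
    "great talking to the investor do you want to meet him",
    "spoke with an investor yesterday we should meet him soon",
    "the investor replied can we meet him on friday",
    "our new investor is in town let us meet her next week",
]

# Count which word most often follows each word in the corpus.
next_word_counts = defaultdict(Counter)
for email in corpus:
    words = email.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def suggest_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = next_word_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# The suggestion mirrors the imbalance in the training data.
print(suggest_next("meet"))  # -> "him"
```

The toy model isn’t “wrong” about its data; it faithfully reproduces the imbalance it was given, which is exactly the problem the article describes.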

Although this particular error is minor, it illustrates the broader risk.

If we trust predictions made by algorithms trained on historical data, we are liable to repeat the mistakes of the past.

Guessing the wrong gender in an email may not have serious consequences, but what happens when AI systems make decisions in courts, employment, and healthcare?

For Google, this issue is potentially massive.

The company is not only building algorithmic judgments into more of its products but also selling machine learning tools around the world.

The question is, if one of Google’s most visible AI features is making mistakes like this, why should customers trust the rest of its services?

Google has, at least, acknowledged that these problems exist.

On Smart Compose’s help page, the company warns users that the artificial intelligence models it uses “can also reflect human cognitive biases. Being aware of this is a good start, and the conversation around how to handle it is ongoing.”

In this case, however, Google hasn’t fixed much; it has simply removed the opportunity for the system to make the mistake.


KC Cheung
KC Cheung has over 18 years of experience in the technology industry, including media, payments, and software, and has a keen interest in artificial intelligence, machine learning, deep learning, neural networks and their applications in business. Over the years he has worked with some of the leading technology companies, building and growing dynamic teams in a fast-moving international environment.