There’s a lot of talk about artificial intelligence (AI) at the moment, particularly facial recognition technology. This arm of AI has come a long way since it was first developed, with some commercial software now able to determine a person’s gender simply by looking at a photograph.
And when the person in the photo is white, the AI gets it right 99% of the time. The darker the person’s skin, however, the less likely the AI is to guess correctly. This is just one example of how biases seep into the world of AI: when the teams building these systems are predominantly white and male, the resulting software can find it much harder to identify Black women.
Facial recognition is used in many different ways, but at the moment it’s far too biased. Joy Buolamwini, a Ghanaian-American computer scientist, experienced the biases of AI herself.
When she was a student at the Georgia Institute of Technology, facial recognition systems never seemed to work on her, yet they worked fine on her white friends. Assuming the issue would get fixed as the technology evolved, she thought nothing more of it. But when it happened again several years later at the MIT Media Lab, Ms Buolamwini decided to do something about it.
As part of her research, Buolamwini took a closer look at three of the top facial recognition systems, offered by IBM, Microsoft, and Megvii of China, to see how well they classified people’s gender across different skin tones.
The results showed that each one needed vast improvements to overcome its biases. Microsoft’s accuracy for darker-skinned women was 81%, whereas IBM’s and Megvii’s were both only around 65%. All three identified white males correctly with an accuracy of at least 99%.
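The gaps above come from measuring accuracy separately for each demographic group rather than over the whole test set. A minimal sketch of that kind of audit (illustrative only, not the study's actual code; the data below is a hypothetical toy example) might look like this:

```python
# Illustrative subgroup-accuracy audit: score a classifier's predictions
# separately for each demographic group, as the study described above did.

def subgroup_accuracy(records):
    """records: iterable of (subgroup, true_label, predicted_label).
    Returns {subgroup: fraction of that subgroup's records predicted correctly}."""
    totals, correct = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        if truth == pred:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical toy data: (subgroup, ground truth, model prediction)
data = [
    ("lighter_male", "M", "M"),
    ("lighter_male", "M", "M"),
    ("darker_female", "F", "F"),
    ("darker_female", "F", "M"),  # one misclassification
]

print(subgroup_accuracy(data))
# {'lighter_male': 1.0, 'darker_female': 0.5}
```

An overall accuracy on this toy data would read 75% and hide the disparity entirely; breaking the score out per subgroup is what makes the bias visible.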
Following the research, IBM said it is committed to removing bias and plans to release new and improved facial recognition software later this month, with accuracy for darker-skinned people improved by around 10%.
Microsoft said it had “already taken steps to improve the accuracy of our facial recognition technology” and had invested more in research into removing such biases. Megvii declined to comment when approached by Ms Buolamwini after the research.
According to Ms Buolamwini, technology should be built around those who use it, not just those who make it. “You can’t have ethical AI that’s not inclusive,” she said.