A few months ago, Professor Michal Kosinski of Stanford used artificial intelligence (AI) to predict people’s sexual orientation from photographs of their faces alone. His next challenge is to use similar software to predict people’s political beliefs.
Facial recognition software has been on the rise for the past couple of years. Last summer, Chinese firms began deploying the technology alongside police to try to stop crimes before they took place.
Along with co-researcher Yilun Wang, Kosinski used facial-analysis software called VGG-Face to translate around 35,000 headshot photos into sequences of numbers. A computer model then looked for patterns linking those facial features to sexuality.
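The pipeline described above has two stages: a network turns each photo into a vector of numbers (an embedding), and a separate, simpler model learns which regions of that number space correspond to which label. The sketch below is purely illustrative: it substitutes random synthetic vectors for real VGG-Face embeddings and uses a nearest-centroid rule as the stand-in classifier, since the study's actual model and data are not reproduced here.

```python
import random

random.seed(0)
DIM = 8  # real face embeddings are much larger; 8 dimensions keeps the demo small


def fake_embedding(label):
    """Stand-in for VGG-Face: map a 'photo' to a sequence of numbers.
    Here we simply sample around a per-class centre (synthetic data)."""
    centre = 1.0 if label == 1 else -1.0
    return [centre + random.gauss(0, 0.5) for _ in range(DIM)]


# Training 'photos' for two classes, already converted to number sequences
train = [(fake_embedding(y), y) for y in [0, 1] * 50]


def centroid(vectors):
    """Average embedding of a class: the simplest possible 'pattern'."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(DIM)]


c0 = centroid([v for v, y in train if y == 0])
c1 = centroid([v for v, y in train if y == 1])


def dist2(a, b):
    """Squared Euclidean distance between two embeddings."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def predict(embedding):
    """Assign the label of the nearer class centroid."""
    return 0 if dist2(embedding, c0) < dist2(embedding, c1) else 1


test = [(fake_embedding(y), y) for y in [0, 1] * 25]
accuracy = sum(predict(v) == y for v, y in test) / len(test)
print(f"accuracy: {accuracy:.2f}")
```

On this cleanly separated synthetic data the classifier is near-perfect; the point is only the shape of the pipeline, not the difficulty of the real task.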
The model was given five photos of each subject and asked to determine their sexuality. It distinguished correctly between a gay and a straight man 91% of the time, and between a lesbian and a straight woman 83% of the time.
When only one photo was used, accuracy declined to 81% for men and 74% for women. That still beats the 61% and 54% respectively that human judges achieved, but it is far from perfect.
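The jump from one photo to five is roughly what you would expect from pooling several better-than-chance guesses. As a hypothetical back-of-the-envelope check (not the study's actual aggregation method), suppose each photo independently yields the right call with probability 0.81 and the five per-photo calls are combined by majority vote:

```python
from math import comb


def majority_accuracy(p, n):
    """Probability that a majority vote of n independent guesses,
    each correct with probability p, gives the right answer (n odd)."""
    return sum(
        comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(n // 2 + 1, n + 1)
    )


print(round(majority_accuracy(0.81, 5), 3))  # about 0.949 under independence
```

The independence assumption predicts roughly 95%, a little above the 91% the study reported, which makes sense: five photos of the same person are correlated, so they carry less fresh information than five independent guesses would.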
While neural networks are great at detecting patterns, they are far less good at explaining the correlations they find. According to Prof Kosinski, that is the biggest challenge: it is unclear how the AI determined a subject’s sexuality.
“It would be nice to understand if the network can tell you why it thinks the answer is one way or another,” commented Prof Andrea Vedaldi of the University of Oxford, whose Visual Geometry Group developed VGG-Face. “The machine itself is not fully understood.”
Understanding how neural networks analyse images is what Prof Vedaldi is currently working on: it is much easier to improve the accuracy of an AI system if it can be understood thoroughly.
But even then, there will be limits to their development. Training neural networks efficiently requires high-quality data sources, and these are not always easy to come by.
“I can see ways to improve the performance [of visual imaging and facial recognition neural networks],” said Wang.
“But I don’t see ways to get 100 per cent.” For that reason, it may be a while yet before neural networks can serve as a fail-safe method for security-critical applications. That is fine, though: they remain extremely useful in many other ways.