Amid the growing hype around, and adoption of, artificial intelligence and image recognition systems, the issue of bias in these technologies is drawing considerable attention from both society and companies.
To make sure that facial recognition technology is developed and trained responsibly, IBM is taking the following measures:
- The lack of adequate, diverse data to train systems is one of the main causes of bias in the facial analysis field. To address this, IBM intends to publicly release a dataset as a tool for both the research community and the technology industry: a facial attribute and identity training dataset of more than one million images, intended to improve the training of facial analysis systems. The dataset is currently under development by IBM Research scientists.
The dataset will be annotated with identity and attribute labels, using geo-tags drawn from Flickr images to balance data gathered from different countries, along with active learning tools to minimize sample selection bias.
Presently, the largest facial attribute dataset available comprises 200,000 images, so a new dataset of one million images would represent a major advance.
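The geo-tag balancing described above can be sketched as a simple downsampling step. This is a minimal illustration, not IBM's actual method: the record format, country codes, and cap value are hypothetical, and it assumes each image already carries a country tag derived from its geo-tag.

```python
import random
from collections import defaultdict

def balance_by_country(images, per_country, seed=0):
    """Downsample (image_id, country) records so that no single
    country contributes more than `per_country` samples."""
    rng = random.Random(seed)
    by_country = defaultdict(list)
    for image_id, country in images:
        by_country[country].append(image_id)
    balanced = []
    for country, ids in sorted(by_country.items()):
        rng.shuffle(ids)  # random subset, not just the first N uploads
        balanced.extend((i, country) for i in ids[:per_country])
    return balanced

# Hypothetical, skewed sample: one country dominates the raw data.
images = ([(f"img{i}", "US") for i in range(100)]
          + [(f"img{i}", "KE") for i in range(100, 120)]
          + [(f"img{i}", "IN") for i in range(120, 150)])
balanced = balance_by_country(images, per_country=20)
```

In a real pipeline the cap would be chosen from the distribution of geo-tags, and active learning would then prioritize labeling images from under-represented regions rather than simply discarding the surplus.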
IBM Research also announced a dataset of 36,000 facial images, equally distributed across ages, genders and ethnicities, aimed at giving individuals a more diverse dataset for evaluating their technologies.
According to the company, this dataset will assist algorithm designers in identifying and addressing bias, particularly in their facial analysis systems.
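One way such a balanced evaluation set helps designers surface bias is by comparing error rates across demographic groups. The sketch below is a generic illustration, not IBM's tooling; the group names, labels, and records are all hypothetical.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Given (group, true_label, predicted_label) triples, return
    the fraction of misclassified samples per group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical results from a system that errs more on one group.
records = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"), ("group_a", "match", "no_match"),
    ("group_b", "match", "no_match"), ("group_b", "match", "no_match"),
    ("group_b", "no_match", "no_match"), ("group_b", "match", "match"),
]
rates = error_rate_by_group(records)
# A large gap between groups (here 0.25 vs 0.5) signals a bias problem.
```

Because the evaluation set contains equal numbers of samples per group, a gap in these per-group error rates reflects the model's behavior rather than an imbalance in the test data itself.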
- Early this year, IBM considerably increased the accuracy of its Watson Visual Recognition service for facial analysis, an effort that produced a nearly ten-fold drop in the facial analysis error rate. In addition, IBM Research and the University of Maryland are organizing a technical workshop on recognizing and minimizing bias in facial analysis, to be held on September 14 this year in conjunction with ECCV 2018.
IBM Research added that its researchers are working with a wide array of stakeholders, experts and users to understand other vulnerabilities and biases that can interfere with AI decision-making, with the aim of making the company's systems better.
With the increased adoption of artificial intelligence (AI), preventing bias from creeping into AI systems is becoming a key issue.
Even so, IBM is convinced that no technology, regardless of how accurate it is, should or can supplant human expertise and judgment. The company maintains that the value of advanced technologies like artificial intelligence lies in their power to augment human decision-making, not to replace it.
Hence, it is essential that any organization using AI, including video analysis or visual recognition technologies, train its teams to understand bias, including unconscious and implicit bias, and to monitor and address it accordingly.
As a company that champions inclusion and diversity in the corporate world, IBM holds that discrimination of any kind goes against its values, and it is firmly committed to ensuring that all its AI technologies are developed without bias.