IBM is set to release a tool, dubbed the Fairness 360 Kit, that analyzes how and why algorithms make decisions in real time. The tool is also expected to scan for signs of bias and recommend adjustments.
The Fairness 360 Kit comes amid growing concern that algorithms used by technology giants and other firms are at times biased in their decision-making. For instance, image recognition systems have in the past failed to recognize non-white faces.
With algorithms increasingly making automated decisions on a broad range of matters, including what people view online, insurance, and policing, the effects of their recommendations grow ever wider. Moreover, in most cases they operate inside what is referred to as a “black box”, meaning that even their owners cannot see how decisions are being made.
The IBM cloud-based software is expected to be open-source and to work with a variety of commonly used frameworks for building algorithms.
What’s more, customers will be able to see, via a visual dashboard, how their algorithms make decisions and which factors are used in producing the final recommendations.
Furthermore, the Fairness 360 Kit will be able to track a model’s record for fairness, performance, and accuracy over time.
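To make the idea of “scanning for signs of bias” concrete, here is a minimal sketch in plain Python of one widely used fairness check, the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group. This is an illustration only, not IBM’s actual API; the function name and example data are invented for this sketch.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates between two groups.

    outcomes: list of 0/1 model decisions (1 = favorable)
    groups:   parallel list of group labels for each decision
    A ratio well below 1.0 suggests the unprivileged group is
    being disadvantaged by the model.
    """
    def favorable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return favorable_rate(unprivileged) / favorable_rate(privileged)


# Hypothetical example: the model approves 2 of 4 applicants in
# group "a" (rate 0.5) but 3 of 4 in group "b" (rate 0.75).
ratio = disparate_impact(
    [1, 0, 1, 0, 1, 1, 1, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
    unprivileged="a", privileged="b",
)
print(round(ratio, 3))  # 0.667
```

A dashboard like the one IBM describes would presumably track metrics of this kind (alongside accuracy and performance) for each model over time.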
“We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making,” said Senior Vice President of Cognitive Solutions David Kenny.
Aside from IBM, other technology companies are working on their own solutions. Google, for instance, released a “what-if” tool designed to let users see how their machine learning models work. However, Google’s tool does not operate in real time; instead, its data can be used to build up a picture over time.
Algorithmic and machine-learning bias is proving to be a major problem in the artificial intelligence (AI) community. In May, Microsoft said that it was working on developing a bias detection toolkit while Facebook has also asserted that it is currently testing a tool to aid in determining whether an algorithm is biased.
Part of the issue is that the massive amounts of data on which algorithms are trained are not always adequately diverse. Joy Buolamwini founded the Algorithmic Justice League (AJL) in 2016 after, as a Massachusetts Institute of Technology postgraduate student, she discovered that a facial recognition system only identified her face when she wore a white mask.
And Google said it was “appalled and genuinely sorry” back in 2015 when its photo algorithm incorrectly labeled African-Americans as gorillas.
Kay Firth-Butterfield of the World Economic Forum (WEF) said in a recent interview with CNBC that there is a growing debate around ethics and artificial intelligence (AI). “As a lawyer, some of the accountability questions of how do we find out what made [an] algorithm go wrong are going to be really interesting,” she said.
“When we’re talking about bias we are worrying first of all about the focus of the people who are creating the algorithms and so that’s where we get the young white people, white men mainly, so we need to make the industry much more diverse in the West.”