As the world shifts rapidly toward machine learning technologies, there is a growing need to ensure that such technologies provide unbiased and accurate solutions and recommendations.
For this reason, Accenture recently launched a new tool aimed at helping enterprises not only identify but also eliminate racial, ethnic and gender bias in artificial intelligence software.
Both governments and companies are increasingly turning to machine-learning algorithms to make fundamental decisions: who receives government benefits, who gets a mortgage or insurance, whether to grant a prisoner parole, and whom to hire.
The main argument for such software is that, if designed and trained appropriately, it can make decisions more free of prejudice than humans can.
In several well-publicized instances, however, algorithms have been shown to discriminate against women and minorities. For example, a 2016 ProPublica investigation found that an algorithm used in numerous US states and cities to assist bail decisions was twice as likely to falsely flag black prisoners as high-risk re-offenders. Cases like these have raised awareness of the dangers of biased algorithms, yet the companies behind such technologies still struggle to respond.
Rumman Chowdhury, the data scientist who heads Accenture's Responsible AI practice, said that the company's customers report being ill-equipped to think through the political, social and economic consequences of their algorithms.
Because such customers have been approaching Accenture for help in establishing checks and balances, the company built software to address these problems.
The tool does three things. First, it lets users define the data fields they consider sensitive, such as age, gender or race, and see how strongly those fields correlate with other data fields. For instance, race may be highly correlated with an individual's postcode, so simply removing race from the model's inputs is not enough to eliminate bias: the postcode would also have to be de-biased.
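The proxy problem described above can be sketched in a few lines. This is a minimal illustration, not Accenture's actual tool: the data, field names and use of simple Pearson correlation are all invented for the example.

```python
# Sketch: measure how strongly a sensitive field correlates with a
# potential proxy field. All data below is invented toy data.

def pearson(xs, ys):
    """Pearson correlation coefficient between two numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Toy records: sensitive attribute (group, coded 0/1) and a postcode band.
group    = [0, 0, 0, 1, 1, 1, 0, 1]
postcode = [1, 1, 2, 5, 5, 4, 2, 5]

r = pearson(group, postcode)
print(f"correlation(group, postcode) = {r:.2f}")
```

A correlation this strong means postcode acts as a stand-in for the sensitive attribute: dropping the sensitive column alone would leave the bias largely intact.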
Second, Chowdhury said at an AI conference in London that Accenture uses a method based on mutual information to reduce bias in algorithms. The tool also provides a visualization that lets developers see how decoupling dependencies between variables affects the overall accuracy of their model.
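Mutual information measures how much knowing one variable tells you about another, which makes it a natural way to quantify residual dependence between a sensitive attribute and a feature. The following is a plug-in estimate from scratch; Accenture's implementation is not public, so treat this purely as an illustration of the concept, with invented data.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits for two discrete sequences."""
    n = len(xs)
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy data: a sensitive attribute and a categorical feature.
group   = [0, 0, 0, 0, 1, 1, 1, 1]
feature = ["a", "a", "b", "a", "b", "b", "b", "a"]

mi = mutual_information(group, feature)
print(f"I(group; feature) = {mi:.3f} bits")
```

A value of zero would mean the feature reveals nothing about group membership; the larger the value, the more the feature leaks the sensitive attribute, and the more "de-biasing" it needs.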
Lastly, Accenture's tool evaluates an algorithm's fairness in terms of predictive parity, and it lets developers see what happens to their model's accuracy when predictive parity is equalized across sub-groups.
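Predictive parity holds when the precision of a classifier, the share of predicted positives that are actually positive, is the same for each sub-group. A minimal check might look like this; the predictions and labels are invented, and this is only a sketch of the metric, not Accenture's evaluation code.

```python
# Sketch of a predictive-parity check: compare precision
# (P(actually positive | predicted positive)) across two sub-groups.
# All predictions and labels below are invented toy data.

def precision(preds, labels):
    """Fraction of predicted positives that are true positives."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    predicted_pos = sum(preds)
    return tp / predicted_pos if predicted_pos else 0.0

preds_a, labels_a = [1, 1, 0, 1, 0], [1, 1, 0, 0, 0]
preds_b, labels_b = [1, 1, 1, 0, 0], [1, 0, 0, 0, 1]

ppv_a = precision(preds_a, labels_a)  # 2 of 3 predicted positives correct
ppv_b = precision(preds_b, labels_b)  # 1 of 3 predicted positives correct
gap = abs(ppv_a - ppv_b)
print(f"PPV group A = {ppv_a:.2f}, group B = {ppv_b:.2f}, gap = {gap:.2f}")
```

A large gap like this one signals a predictive-parity violation; closing it (for example, by adjusting per-group decision thresholds) typically costs some overall accuracy, which is exactly the tradeoff the tool visualizes.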
According to Chowdhury, people seem to want a push-button solution that will magically fix fairness. She stressed that such expectations are highly unrealistic.
The real value of Accenture's tool, Chowdhury asserted, is its power to demonstrate the tradeoff between an algorithm's overall accuracy and its fairness.