As artificial intelligence (AI) services and products become mainstream, a huge gap remains between how the technology can be used and how it should be used.
Furthermore, until regulations or laws catch up with technology, company leaders are responsible for implementing ethical decisions regarding their use of artificial intelligence products and applications.
Ethical issues with AI can have a major impact on a company’s reputation and brand, as well as on the lives of customers, employees, and other stakeholders.
Although some argue that it is too early to start addressing AI ethical problems, studies suggest that nearly 30% of large US-based companies have carried out multiple AI projects (with smaller percentages outside the United States), and there are currently over 2,000 AI startups.
Currently, all these companies are creating and rolling out AI apps that could have ethical impacts.
Most executives are starting to realize the ethical dimension of artificial intelligence.
A 2018 Deloitte survey of 1,400 US executives knowledgeable about AI revealed that 32 percent of them ranked ethical issues among the top three risks of artificial intelligence.
Nevertheless, many organizations still lack the ideal approaches to address AI ethics.
Here are seven actions that executives and leaders of AI-based companies, irrespective of industry, ought to consider while navigating the fine line between “can” and “should”.
1. Involve the Board in Dealing with AI Ethics
An AI ethical issue can affect a company’s value and reputation significantly.
It is advisable to involve the company’s board as far as AI ethics is concerned.
Some companies operate advisory and governance groups consisting of senior cross-functional executives who not only create but also monitor the governance of AI applications and AI-powered products.
Farmers Insurance, for instance, created two such boards, one for IT-based issues and the other one for business concerns.
Together with the boards, such governance groups ought to be involved in the discussions about AI ethics.
2. Avoid Bias in AI Apps to Promote Fairness
Leaders ought to ask themselves whether the AI applications they use treat all groups the same way.
Unfortunately, some AI applications, including machine learning models, are biased against certain groups.
This algorithmic bias has been recognized in various contexts, including hiring decisions, education curriculum design, credit scoring, and judicial sentencing.
Even when algorithm developers intend no discrimination or bias, they and their companies should work to identify and prevent such issues, and rectify them when discovered.
Bias is not a new issue.
In fact, companies that use conventional decision-making processes have also made such judgment errors.
What’s more, algorithms developed by human beings are at times biased.
However, because AI applications can develop and deploy models far faster than traditional analytics, they are likely to intensify this problem.
Even though full transparency of AI models can come in handy, leaders who see their algorithms as a competitive resource are likely to avoid sharing them.
Most companies ought to come up with risk management guidelines to assist the management teams in reducing algorithmic bias, especially within their machine learning and AI applications.
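One simple, commonly used check that such risk management guidelines can include is demographic parity: comparing the rate of positive model outcomes across groups. The sketch below is illustrative only; the group labels, the loan-approval scenario, and the 0.10 disparity threshold are assumptions for the example, not recommendations from this article.

```python
# Minimal sketch of a demographic-parity check for a binary classifier.
# The data, group names, and tolerance below are illustrative assumptions.

def positive_rate(predictions, groups, group):
    """Fraction of positive (1) predictions for one group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50 for this data
if gap > 0.10:  # illustrative tolerance, to be set by the governance group
    print("Warning: possible disparate impact; review the model.")
```

A check like this does not prove an algorithm is fair, but it gives management a concrete, monitorable metric around which governance and escalation rules can be written.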
3. Disclose Your AI Use to Customers
Several technology companies have experienced heavy criticism for not disclosing their AI use to their clients.
Non-technical companies, on the other hand, can not only learn from those experiences but also take preventive measures to reassure customers and other external stakeholders.
The recommended ethical approach to AI usage is to disclose to customers and other concerned parties that the technology is being used, and to offer some information about how it operates.
Also, leaders should reveal the sources and types of data used by the artificial intelligence application.
4. Be Careful with Privacy Matters
Artificial intelligence (AI) technologies are continuing to penetrate security and marketing systems, potentially triggering privacy concerns.
For example, some governments are leveraging AI-driven video surveillance technology to identify faces at social events and in crowds.
Meanwhile, several technology companies have been criticized by external observers and their own employees for contributing to these capabilities.
In time, non-tech companies may also face pushback from their clientele and other vital stakeholders over privacy as they continue to use AI to personalize ads, sites, and marketing offerings.
As with other AI concerns, full disclosure of how data is obtained and used could prove the most useful antidote to privacy concerns.
5. Help Relieve Employee Anxiety
With time, artificial intelligence (AI) use could impact employee jobs and skill sets.
In the 2018 Deloitte survey of AI-informed executives, 36% of respondents felt that job cuts resulting from AI-powered automation rise to the level of an ethical risk.
Initial worries about massive unemployment due to AI-powered automation have subsided, and most observers are now convinced that AI-driven unemployment is likely to be marginal in the coming decades.
Because AI typically supports individual tasks rather than entire jobs, humans working alongside machines are more likely to replace humans working alone.
However, employees who fear losing their jobs may hesitate to explore or embrace AI.
The ethical way to address this fear is to inform workers about how AI might affect their jobs in the future, allowing them to seek other employment opportunities or acquire new skills.
6. Acknowledge that AI often Works Best with Humans
Humans working together with machines are more powerful as opposed to machines or humans working alone.
Most AI-based issues stem from machines operating without sufficient human collaboration or supervision.
Facebook, for instance, recently revealed that it intends to add 10,000 more people to its privacy, content review, and security teams to supplement its AI functions.
Currently, AI technologies cannot effectively execute some activities without the intervention of human beings.
Hence, avoid eliminating existing human channels for resolving employee or customer issues.
7. Focus on the Big Picture
The most vital AI ethical issue may be ensuring that AI systems respect human autonomy and dignity and reflect societal values.
Given the uncertainties and rapidly changing technology, anticipating all the ways AI may impinge on people and society before implementation can be difficult.
However, small-scale experiments can uncover negative outcomes before they occur at large scale.
When indications of harm do appear, it is important to acknowledge and act on new threats quickly.
Most companies are still in the early stages of their AI journeys, and only a few have addressed the ethics of AI use in their enterprises.
As security, privacy, and bias issues become more important to the public, AI ethical risks are expected to grow into a vital business matter requiring board-level governance structures and processes.