What Are the Risks and Rewards of Using AI to Detect Crime?


Many companies now use AI to detect and prevent malicious activities such as insider trading and routine employee theft. In fact, many large corporations and banks already leverage the technology to spot and prevent money laundering and fraud.

What's more, social media platforms use machine learning to block illegal content such as child pornography. Meanwhile, businesses are experimenting with new ways to apply AI not only to improve risk management but also to make fraud detection faster and more responsive, and even to predict and prevent crimes.

Although the underlying technology is not revolutionary, the algorithms and the results they generate deserve that description. Banks, for example, have used transaction monitoring systems for decades, built on pre-set binary rules whose output must be checked manually.

For that reason, their success rate is generally low. By contrast, most machine learning solutions today use predictive rules that automatically identify anomalies in data sets. These algorithms can considerably reduce the number of false alerts by filtering out cases that conventional rules flagged wrongly, while uncovering others those rules missed.
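To make the contrast concrete, here is a minimal sketch with synthetic data. The "learned" detector is a simple statistical stand-in for a machine learning model (a per-customer deviation baseline, not any bank's actual system): a fixed binary rule floods analysts with false alerts on a high-spending customer, while a baseline fitted to each customer's own history flags only the genuine outliers.

```python
import random
import statistics

random.seed(0)

# Synthetic transactions for two customers with very different routine spending
retail = [random.gauss(100, 15) for _ in range(200)] + [950.0]       # one true outlier
corporate = [random.gauss(800, 60) for _ in range(200)] + [3500.0]   # one true outlier

RULE_THRESHOLD = 500.0  # one-size-fits-all binary rule, set by hand

def rule_flags(txns):
    """Pre-set rule: flag everything above a fixed threshold."""
    return [x for x in txns if x > RULE_THRESHOLD]

def learned_flags(txns):
    """Stand-in for a learned baseline: flag only what deviates
    strongly from this customer's own transaction pattern."""
    mu, sigma = statistics.mean(txns), statistics.stdev(txns)
    return [x for x in txns if abs(x - mu) > 3 * sigma]

# The fixed rule flags nearly every routine corporate payment (false alerts);
# the adaptive baseline flags only the genuine anomaly for each customer.
print(len(rule_flags(retail)), len(learned_flags(retail)))
print(len(rule_flags(corporate)), len(learned_flags(corporate)))
```

The design point is that the second detector derives its notion of "normal" from the data itself, which is what lets it both suppress false alerts and catch anomalies a fixed threshold would miss.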

Today, many companies have adopted this technique as the only practical way to keep up with a growing number of sophisticated criminals. Social media companies, for example, are now expected to uncover and pull down terrorist recruitment messages and videos almost immediately.

Over time, AI-driven crime-fighting tools could become a necessity for large enterprises, as there is no other way to rapidly detect and interpret patterns across billions of data points.

Whether AI crime-combating solutions are right for a company depends on whether the advantages outweigh the risks involved. One such risk is that AI can draw biased conclusions based on attributes like age, gender and ethnicity. Here is how leading-edge organizations are weighing the risks and benefits of rapidly evolving AI crime-fighting and risk management tools:

Assessing the Strategic Fit

Before embarking on an artificial intelligence (AI) risk management undertaking, managers must first understand where machine learning is already making a big difference. Banks, for instance, are preventing financial crimes more quickly and cheaply than they used to by applying AI to automate processes and conduct multilayered deep learning assessments.

AI tools also enable companies to uncover suspicious relationships and trends, even those that are invisible to professionals. For example, artificial neural networks (ANNs) can help employees forecast the next moves of unknown criminals, particularly those who have found ways to get around the warning triggers in security systems.

Evaluating and Mitigating Internal Risks

As managers assess how AI can help them recognize criminal activity, they should also consider how it complements their overall AI plan; AI crime detection and risk management should not be pursued in isolation.

In addition, back-testing against relatively simple models can help banks limit the impact of inexplicable conclusions generated by AI, especially when the model encounters an event it was never trained on. Banks already leverage AI to monitor transactions and to reduce the false alerts they get on potentially illegal transactions, such as money being laundered for criminal purposes.
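One way such back-testing can work is to route every case where the opaque model and a transparent baseline rule disagree to human analysts. The sketch below is purely illustrative: `simple_rule`, `ai_model`, and the transaction fields are hypothetical stand-ins, not any bank's actual system.

```python
def simple_rule(txn):
    """Transparent baseline: large transfers tied to high-risk jurisdictions.
    (Hypothetical rule for illustration only.)"""
    return txn["amount"] > 10_000 and txn["country"] in {"IR", "KP"}

def ai_model(txn):
    """Stand-in for an opaque model's decision, reduced to a score threshold."""
    return txn["score"] > 0.8

def backtest(transactions):
    """Escalate every disagreement between model and baseline for manual review."""
    return [t["id"] for t in transactions if ai_model(t) != simple_rule(t)]

txns = [
    {"id": 1, "amount": 12_000, "country": "IR", "score": 0.95},  # both agree: flag
    {"id": 2, "amount": 50, "country": "US", "score": 0.91},      # model flags alone
    {"id": 3, "amount": 15_000, "country": "KP", "score": 0.10},  # rule flags alone
]
print(backtest(txns))  # → [2, 3]
```

Cases 2 and 3 are exactly the ones worth a second look: the first may be an inexplicable model conclusion, the second a potential miss on an event the model was never trained on.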

Understanding and Getting Ready for External Risks

The increased use of artificial intelligence tools to prevent crime could cause external risks to cascade in unexpected ways. A company could lose credibility with regulators, the public and other important stakeholders.

That could happen, for instance, if false alerts incorrectly identify individuals as criminals because of racial bias unintentionally incorporated into the system. It could also happen if the system fails to detect criminal activity, such as the channeling of funds from sanctioned nations like Iran.
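A basic audit for that first kind of failure can be sketched by comparing false-positive rates across demographic groups, a standard fairness check. The records, group labels, and helper name below are synthetic and hypothetical, assuming the company retains ground truth on which alerts turned out to be wrong.

```python
from collections import defaultdict

def fpr_by_group(records):
    """False-positive rate per group: share of innocent individuals
    in each group who were wrongly flagged by the system."""
    fp = defaultdict(int)        # innocent and flagged
    innocent = defaultdict(int)  # all innocent individuals
    for r in records:
        if not r["criminal"]:
            innocent[r["group"]] += 1
            if r["flagged"]:
                fp[r["group"]] += 1
    return {g: fp[g] / innocent[g] for g in innocent}

records = [
    {"group": "A", "criminal": False, "flagged": True},
    {"group": "A", "criminal": False, "flagged": False},
    {"group": "A", "criminal": False, "flagged": False},
    {"group": "A", "criminal": False, "flagged": False},
    {"group": "B", "criminal": False, "flagged": True},
    {"group": "B", "criminal": False, "flagged": True},
    {"group": "B", "criminal": False, "flagged": True},
    {"group": "B", "criminal": False, "flagged": False},
]
rates = fpr_by_group(records)
print(rates)  # → {'A': 0.25, 'B': 0.75}: group B wrongly flagged far more often
```

A large gap between groups, as in this toy example, is the kind of signal that should trigger a review of the model and its training data before regulators or the public find it first.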

In such cases, customers could move to less-monitored entities outside the regulated industries. In the worst case, a moral hazard could emerge, with employees becoming too dependent on AI crime-fighting tools to catch criminals on their behalf.

To prevent these situations, companies need to develop and test a variety of scenarios, particularly cascades of events triggered by the AI-powered tools used to track criminal acts.

With the results of that scenario analysis, managers can help board members and top executives decide how comfortable they are using AI crime-fighting tools. They can also draw up crisis management guides outlining both internal and external communication plans, so they can react swiftly when things do not work out as expected.

Source: HBR
