Australia's financial regulator recently came under harsh criticism for failing to prevent banks from exploiting their customers.
In response, it is turning to artificial intelligence (AI) in an effort to improve compliance across the finance industry.
Following a public inquiry into the misconduct that rocked the financial sector, the Australian Securities and Investments Commission (ASIC) is funding studies into how natural language processing (NLP), a branch of AI that enables computers to sift through massive volumes of text and speech to recognize patterns, could improve regulation and detect misconduct.
ASIC's early push forms part of a broader global shift towards artificial intelligence and machine learning in the finance, compliance, and regulatory sectors.
According to IDC, global expenditure on artificial intelligence and cognitive technologies rose to $25 billion in 2018, an increase of nearly 45% from 2017.
The firm also predicts that banks will spend approximately $5.6 billion on such technologies in 2019.
One of ASIC's pilot programs involves deploying software to scrutinize advertisements, online promotions, and financial planning documents to ensure they do not breach rules or contain problematic advice.
A separate trial is expected to use artificial intelligence to monitor and screen conversations between consumers and insurance sales agents, in an attempt to spot failures to comply with disclosure rules or “hard-sell tactics.”
“It could be the words used by sales agents or the way particular sentences are constructed that are problematic,” said John Price, an ASIC commissioner.
“This is not about automating the entire supervision process, rather it’s about risk targeting — identifying areas of concern that can then be handed on to experienced investigators for further examination.”
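As an illustration of the kind of risk targeting Price describes, a minimal phrase-matching screen can score documents against a rule list and surface only the worst offenders for human review. The phrases and threshold below are invented for illustration; the actual rule sets used in ASIC's pilots are not public.

```python
# Hypothetical phrases a compliance screen might flag; real systems
# would use far richer NLP features than plain substring matching.
RISK_PHRASES = [
    "guaranteed returns",
    "risk free",
    "act now",
    "limited time offer",
]

def risk_score(text: str) -> int:
    """Count how many risk phrases appear in a document (case-insensitive)."""
    lowered = text.lower()
    return sum(1 for phrase in RISK_PHRASES if phrase in lowered)

def triage(documents: dict[str, str], threshold: int = 2) -> list[str]:
    """Return IDs of documents risky enough to hand to a human investigator."""
    return [doc_id for doc_id, text in documents.items()
            if risk_score(text) >= threshold]

docs = {
    "ad-001": "Guaranteed returns! Act now, this is a limited time offer.",
    "ad-002": "Our fund targets long-term growth; past performance varies.",
}
print(triage(docs))  # only ad-001 crosses the threshold
```

Note that the output is a shortlist, not a verdict: consistent with Price's point, the software narrows the field and experienced investigators make the call.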
Governments and regulators around the world are increasingly turning to algorithms to carry out tasks previously performed by humans.
Regulators in Australia are convinced that AI can help address the issues raised by the recent public inquiry into the financial industry's misconduct, which criticized regulators for weak enforcement.
The inquiry found that banks had charged customers fees without providing any service (in some cases, even dead customers), lied to regulators, and caused customers heavy losses through poor advice.
According to Mr Price, ASIC already uses artificial intelligence in its supervisory functions for the futures and equity markets; extending the technology to financial planning, insurance, and advice was therefore a natural evolution.
Daisee, an Australian startup developing AI-driven speech analytics software, is conducting trials with call center employees at Westpac, the nation's second-biggest bank by market capitalization.
Richard Kimber, the startup's CEO and founder, said that by using phrase-matching technology, Daisee could identify calls with a higher risk of generating a fraudulent claim, detect “hard-sell” tactics, and verify compliance with disclosure requirements.
“In the past, you could sample maybe 2 percent of calls, whereas now the AI technology enables you to sample 100 percent of calls,” he said.
“This was a laborious and painful process for humans, and it used to be too expensive to review all calls in an organization.”
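The shift Kimber describes, from sampling a fraction of calls to screening all of them, can be sketched as a pass over every transcript that flags hard-sell language or a missing disclosure. This is a toy sketch assuming transcripts already exist as text; the phrase lists are hypothetical, not Daisee's actual models.

```python
# Hypothetical phrase lists for illustration only.
HARD_SELL = ["you need to sign today", "last chance", "everyone is buying this"]
REQUIRED_DISCLOSURE = "product disclosure statement"

def review_call(transcript: str) -> dict:
    """Flag one transcript for hard-sell phrases and a missing disclosure."""
    lowered = transcript.lower()
    return {
        "hard_sell": [p for p in HARD_SELL if p in lowered],
        "disclosure_given": REQUIRED_DISCLOSURE in lowered,
    }

def screen_all(transcripts: list[str]) -> list[int]:
    """Score every call (not a 2% sample); return indices needing human review."""
    flagged = []
    for i, transcript in enumerate(transcripts):
        result = review_call(transcript)
        if result["hard_sell"] or not result["disclosure_given"]:
            flagged.append(i)
    return flagged

calls = [
    "I'll send you the product disclosure statement before we proceed.",
    "This is your last chance, you need to sign today.",
]
print(screen_all(calls))  # [1]
```

Because the screen is cheap to run, coverage jumps from a sample to the full call volume, which is the economic point Kimber makes: the expensive human review is reserved for the flagged minority.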
Research published in 2018 by Morgan Stanley projected that Australia's four largest banks (National Australia Bank, ANZ, Westpac, and Commonwealth Bank of Australia) would spend an additional A$2.4 billion combined on compliance over the following three years.
Deborah Young, CEO of the RegTech Association, a regulatory technology industry body, said companies in the field were increasingly partnering with regulators, governments, and large regulated firms.
However, there are several impediments.
The Australian Prudential Regulation Authority (APRA) has warned of the risks of removing human oversight from decision-making processes.
“These risks increase with the use of machine self-learning techniques, which impart greater predictive power to algorithms but make them significantly more complex,” Geoff Summerhayes, an APRA board member, told the Insurance Council of Australia’s annual forum last year.
“It’s critical to appropriately safeguard the privacy of our customers when partnering with new and exciting companies,” said Supun King-Jayawardana, executive manager of CBA’s Innovation Lab in London.