
Cyber Experts Warn Not to Perceive AI as a ‘Silver Bullet’

Artificial intelligence (AI) has undeniably proven itself to be a valuable cybersecurity tool. However, experts have warned companies against viewing this revolutionary technology as a silver bullet for defending against cyber attacks.

It is no surprise that most aspects of cyber defense, specifically those involving the monitoring of huge amounts of data, are best handled by machines rather than humans.

“It’s not so much that AI does it better, but that it works unendingly and consistently without getting tired,” explained Daniel Miessler, the director of advisory services at IOActive, a cybersecurity consultancy.

He also said that currently, many security teams are focused on only “a tiny fraction – less than 1 percent” of the data produced by their organizations.

Evaluating the rest, particularly in large organizations, is a mammoth task that can rapidly overwhelm human teams. Worse still, cybersecurity experts are not only in scarce supply but also expensive.

“Data scientists who can do this work . . . command salaries of up to $600,000. There is a constant battle for talent. We need technology that gives us coverage with fewer people,” said Chris Moyer, Vice-President and General Manager of Security at DXC Technology.

Nearly half of the companies surveyed in a 2018 Ponemon Institute and IBM study were found to be deploying some form of security automation.

The study also found that a further 38% of companies were planning to deploy such systems within the coming year.

In the past few years, a number of startups including Versive, Respond Software, PerimeterX, Obsidian Security and Endgame have cropped up to provide AI security tools.

Despite all that AI is capable of, there is a danger that the technology could be overhyped, particularly as a cybersecurity solution.

In fact, Zulfikar Ramzan, the chief technology officer of network security company RSA, is convinced that AI has its limitations. He said: “Where I’m skeptical is whether we can treat AI as a panacea.”

According to Mr. Ramzan, AI methods were not developed to operate in adversarial settings. He asserted: “They have been successful in games such as chess and Go and on problems where the rules are very well defined and deterministic.

But in cybersecurity, the rules no longer apply. ‘Threat actors’ constantly adapt their techniques in the hunt for ways to circumvent systems. In those environments, AI is unsuitable because it cannot adapt quickly enough.”

Ramzan also believes AI could introduce risks of its own in organizations, since evaluating data at scale requires storing it in a single place. “That could be a point of failure,” he said.

Instead of stealing or destroying data, an attacker may make subtle, nearly undetectable alterations to the system that would considerably undermine how the AI algorithm operates.
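One way such subtle tampering can play out is data poisoning, where the attacker quietly nudges the data a model learns from rather than attacking the model directly. The following is a minimal, entirely hypothetical sketch: a toy anomaly detector that alerts when a transfer exceeds the mean plus three standard deviations of its training data. All names and numbers are invented for illustration and do not describe any vendor's product.

```python
# Toy illustration of data poisoning: the attacker never steals or
# deletes anything, but slips a few plausible-looking inflated records
# into the training set so the learned alert threshold drifts upward.
import statistics

def train_threshold(samples):
    """Return an alert threshold of mean + 3 sample standard deviations."""
    return statistics.mean(samples) + 3 * statistics.stdev(samples)

# Clean baseline: normal transfer volumes (hypothetical MB values).
clean = [95, 100, 102, 98, 101, 99, 103, 97, 100, 105]
clean_threshold = train_threshold(clean)

# Poisoned baseline: a handful of mildly inflated but believable records
# widen the distribution and raise the threshold.
poisoned = clean + [140, 150, 160]
poisoned_threshold = train_threshold(poisoned)

exfiltration = 130  # the event the defender hoped to catch

print(exfiltration > clean_threshold)     # flagged by the clean model
print(exfiltration > poisoned_threshold)  # slips past the poisoned model
```

With the clean data the threshold sits near 109, so the 130 MB transfer is flagged; after poisoning, the threshold rises past 170 and the same transfer goes unnoticed, which is exactly the kind of quiet degradation the analysts quoted here warn about.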

For this reason, security analysts are convinced that it is important to understand how AI systems operate as opposed to treating them like magic or black boxes.

According to the Chief Technology Officer of Data Protection at digital security company Gemalto, AI is a remarkable technology for alerting users that they have been compromised. “What we want it to do is identify when something suspicious happens, apply the appropriate security controls to mitigate the risk, then report back that it has noticed a potential attack, stopped it and protected the data,” he said.

In the meantime, hackers will also be utilizing artificial intelligence (AI) to improve their attack abilities. For instance, AI could be utilized in gathering data, specifically for “spear phishing” campaigns.

However, according to Ramzan, there is little evidence so far that online criminals are using AI extensively. “Hackers currently go for the minimum they need to do to compromise a system. There are many easier ways that don’t involve fancy AI techniques,” he said.

However, in the corporate world “there is not a culture of sharing information on data security,” warned Mr. Moyer.

Online fraudsters are often quicker at disseminating new techniques and tools among themselves. Hence, once AI hacking tools are developed, they could spread rapidly.

This situation raises the likelihood of cybersecurity becoming a battle of the machines. For now, Mr. Miessler believes humans will have to remain involved in the process, even though their role will diminish as artificial intelligence (AI) becomes more powerful.

Source: FT


KC Cheung
KC Cheung has over 18 years' experience in the technology industry, including media, payments, and software, and has a keen interest in artificial intelligence, machine learning, deep learning, neural networks and their applications in business. Over the years he has worked with some of the leading technology companies, building and growing dynamic teams in a fast-moving international environment.

