Artificial intelligence can be used to automatically detect and fight malware. That does not mean, however, that hackers cannot turn the same technology to their own ends.
In a world defined by mobility, the Internet of Things, mass data collection and networked systems, cybersecurity has become a race between threat actors and white hats. Traditional solutions, such as bolt-on antivirus software, are no longer as effective as they once were.
As such, cyber attackers are exploiting every avenue they can find to hold businesses to ransom, drain bank accounts, disrupt critical systems, infiltrate networks and steal data. To make matters worse, state-backed attacks are on the rise.
Response teams and security researchers are also hard-pressed to keep up with the constant stream of attack attempts. Vulnerability and patch management, especially as computing grows more complex, is not helping matters.
However, many view artificial intelligence as a potential solution: a technology that could learn to identify suspicious behavior, take some of the workload off human teams and stop cyber attackers.
There is no doubt that artificial intelligence can help curb cyberattacks. However, the risk of threat actors using the same technology to enhance their attack techniques is very real. In fact, IBM is convinced that the AI era could lead to weaponized AI.
For this reason, IBM Research set out to develop an AI-driven attack tool to assess how artificial intelligence could one day become part of the techniques used by threat actors.
According to the research team at IBM Research, the AI-driven malware, DeepLocker, is highly evasive and targeted. It travels concealed inside benign carrier applications such as video-conferencing software.
However, it stays dormant until it reaches a particular victim, who is identified by factors such as voice recognition, geolocation, facial recognition and, possibly, the analysis of data collected from sources like social media and online trackers. DeepLocker then unleashes its attack the moment it acquires its target.
IBM likened DeepLocker's ability to a sniper attack, as opposed to the "spray and pray" technique of traditional malware. The company stressed that the AI-powered malware is designed to fly under the radar, remaining stealthy and undetected until it identifies its specific target.
According to IBM, DeepLocker's Deep Neural Network (DNN) models require specific trigger conditions to be met before the payload can execute. When the conditions are not met, the malware stays locked up. The DNN model is far more convoluted and difficult to decipher than traditional trigger logic.
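The core of this "locking" idea is that the payload's decryption key is derived from the expected output of the recognition model, so neither the key nor the target's identity is stored anywhere in the binary. The following is a minimal illustrative sketch of that concept only, not IBM's actual code; the function names are hypothetical, and a plain byte string stands in for what would really be a DNN's output (e.g., a face embedding):

```python
import hashlib

def derive_key(trigger_attribute: bytes) -> bytes:
    # The key exists only as a hash of the expected recognition output;
    # an analyst inspecting the binary sees neither the key nor the target.
    return hashlib.sha256(trigger_attribute).digest()

def lock(payload: bytes, trigger_attribute: bytes) -> bytes:
    # XOR the payload with the derived key (toy cipher for illustration).
    key = derive_key(trigger_attribute)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def try_unlock(blob: bytes, observed_attribute: bytes) -> bytes:
    # A wrong input yields garbage rather than an error, so there is
    # no oracle telling an analyst how close a guess was.
    key = derive_key(observed_attribute)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

blob = lock(b"payload-bytes", b"target-face-embedding")
assert try_unlock(blob, b"target-face-embedding") == b"payload-bytes"
assert try_unlock(blob, b"someone-else") != b"payload-bytes"
```

Because the trigger condition is buried inside the model's learned weights rather than written as explicit if-then logic, static analysis of the code reveals neither whom the malware targets nor what it will do once triggered.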
The security researchers at IBM demonstrated DeepLocker's capability with a proof of concept that hid the WannaCry ransomware inside a video-conferencing application.
Notably, it was not detected by antivirus engines or sandboxing. The AI model was trained to recognize the face of an individual selected for the exercise; once that face was recognized, the trigger condition was met and the ransomware executed.