
How Big Tech Companies are Utilizing Machine Learning to Stop Hackers

Back in 2018, the Azure security team at Microsoft Corp. detected suspicious activity in a large retailer’s cloud computing usage: one of the company’s administrators, who usually logged onto the system from New York, was trying to gain access from Romania.

The admin, it turned out, was not on vacation at the time, which meant a hacker had penetrated the system.

Microsoft alerted its customer, and the attack was thwarted before the intruder could get any deeper into the system.

Microsoft was able to protect its customer thanks to a new breed of artificially intelligent software that adapts to hackers’ constantly evolving tactics.

Amazon.com Inc., Alphabet Inc.’s Google, Microsoft, and several startups are moving away from relying solely on older “rules-based” technology, which responds to specific known types of intrusion, and are deploying machine-learning algorithms that crunch massive piles of data on logins, behavior, and previous attacks to ferret out and stop hackers.

“Machine learning is a very powerful technique for security—it’s dynamic, while rules-based systems are very rigid,” says Dawn Song, a professor at the University of California at Berkeley’s Artificial Intelligence Research Lab. “It’s a very manually intensive process to change them, whereas machine learning is automated, dynamic and you can retrain it easily.”

Hackers are famously adaptable themselves, of course, and some are harnessing machine learning of their own, both to create fresh mischief and to overwhelm the new defenses.

For instance, they could figure out how companies train their systems and use that knowledge to evade or corrupt the algorithms.

The big cloud-services companies are well aware that the enemy is a moving target, but they argue that the new technology will help tilt the balance in favor of the good guys.

“We will see an improved ability to identify threats earlier in the attack cycle and thereby reduce the total amount of damage and more quickly restore systems to a desirable state,” says Amazon Chief Information Security Officer Stephen Schmidt.

He acknowledges that it’s impossible to stop all intrusions but says his industry will “get incrementally better at protecting systems and make it incrementally harder for attackers.”

Before the emergence of machine learning, security teams utilized blunter instruments.

For instance, if someone tried to log in from an unfamiliar location, they were blocked.

Such systems often succeed, but they also flag many legitimate users.
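As a toy illustration (the rule, account name, and allow-list below are invented, not any vendor’s actual logic), a rigid rules-based check might look like this:

```python
# A rigid, rules-based login check: block any attempt from a country
# the account has never used before. Simple, but it treats every
# traveling employee as an intruder (a false positive).
KNOWN_COUNTRIES = {"admin@retailer.example": {"US"}}  # hypothetical allow-list

def allow_login(user: str, country: str) -> bool:
    # Hard yes/no decision with no notion of probability or context.
    return country in KNOWN_COUNTRIES.get(user, set())

print(allow_login("admin@retailer.example", "RO"))  # False, even on a real business trip
```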

Mark Russinovich, Azure’s chief technology officer, said a Microsoft system built to safeguard customers from unauthorized logins had a false-positive rate of 2.8%.

That may not sound like a lot, but it was deemed unacceptable, since Microsoft’s bigger customers can generate billions of logins.

To discern who is legitimate and who is not, Microsoft’s technology learns from the data of each company using it, tailoring security to that customer’s typical online behavior and history.

Since the service’s release, Microsoft has managed to cut the false-positive rate to about 0.001%.
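Microsoft hasn’t published the details of its models, but the general idea, learning each customer’s normal login patterns and scoring new attempts against them, can be sketched with off-the-shelf tools. The features and numbers below are invented for illustration:

```python
# Illustrative only: learn a customer's "normal" login behavior and score
# new attempts against it, instead of applying one rigid rule.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login: [hour of day, distance (km) from the
# user's usual location, logins in the past hour].
history = np.array([
    [9, 5, 1], [10, 2, 1], [14, 8, 2], [9, 3, 1], [17, 6, 1],
    [11, 4, 1], [13, 7, 2], [10, 1, 1], [15, 5, 1], [9, 2, 1],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

typical = np.array([[10, 4, 1]])        # resembles the account's history
suspicious = np.array([[3, 9200, 40]])  # 3 a.m., ~9,200 km away, login burst

# decision_function: positive means "looks normal", negative means "anomalous"
print(model.decision_function(typical))     # positive: consistent with history
print(model.decision_function(suspicious))  # negative: flagged for review
```

In production, a model like this would be retrained continuously per customer, which is what makes the approach adaptive where fixed rules are not.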

Training these security algorithms falls to people like Ram Shankar Siva Kumar, a Microsoft manager also known as Data Cowboy.

Siva Kumar leads a team of about 18 engineers who build the machine-learning algorithms and ensure they are smart and fast enough to thwart hackers while working seamlessly with the software systems of the companies paying big money for Microsoft’s cloud services.

Siva Kumar is among those who get the call when the algorithms detect an attack.

“The amount of data we need to look at to make sure whether this is you or an impostor keeps growing at a rate that is too large for humans to write rules one by one,” said Mark Risher, a product management director who assists in preventing attacks on Google’s clients.

Google now looks for security breaches even after a user has logged in, which helps it nab intruders who initially look like real users.

Because machine learning can weigh many different pieces of data, detecting an unauthorized login is no longer a simple yes-or-no question.

Instead, Google monitors different aspects of behavior across a user’s session.

Someone who seems legitimate at first may later show signs of not being who they claim to be, letting Google’s software boot them out in time to prevent further damage.
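Google hasn’t described its internals, but the shift from a one-time gate to continuous, session-long scoring can be sketched as follows; the signals, weights, and threshold here are all made up for illustration:

```python
# Illustrative sketch of session-long risk scoring: rather than a single
# yes/no decision at login, keep accumulating behavioral signals and act
# once the combined risk crosses a threshold.
RISK_WEIGHTS = {                      # hypothetical signals and weights
    "new_device": 0.2,
    "unusual_download_volume": 0.4,
    "dormant_admin_action": 0.5,
}
THRESHOLD = 0.7

def session_risk(events: list[str]) -> float:
    return sum(RISK_WEIGHTS.get(event, 0.0) for event in events)

session = ["new_device"]                  # looks fine at login time...
print(session_risk(session) > THRESHOLD)  # False: user is allowed in

session += ["unusual_download_volume", "dormant_admin_action"]
print(session_risk(session) > THRESHOLD)  # True: terminate the session
```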

Besides using machine learning to secure their own networks and cloud services, Microsoft and Amazon are offering the technology to clients.

Amazon’s GuardDuty checks clients’ systems for unauthorized or malicious activity.

Often, the service catches employees doing things they shouldn’t be doing, such as installing Bitcoin mining software on their work computers.
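GuardDuty is enabled per AWS account and region, and it surfaces detections, including cryptocurrency-mining activity, as “findings”. A minimal sketch with the boto3 SDK (assuming credentials and permissions are already configured; the region is just an example) might look like this:

```python
# Minimal sketch: enable Amazon GuardDuty and pull its recent findings.
# Assumes boto3 is installed and AWS credentials/permissions are set up.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# One detector per account/region; create_detector errors if one already
# exists, in which case list_detectors() returns the existing ID instead.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Fetch findings, e.g. type "CryptoCurrency:EC2/BitcoinTool.B" for
# instances communicating with Bitcoin-related hosts.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids)
    for finding in findings["Findings"]:
        print(finding["Type"], finding["Severity"])
```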

NN Group NV, a Dutch insurance firm, uses Microsoft’s Advanced Threat Protection to manage access for its 27,000 employees and close partners while locking everyone else out.

At the start of the year, the company’s manager of workplace services, Wilco Jansen, demonstrated a new feature in Microsoft’s Office cloud software that keeps so-called CxO spamming, in which fraudsters impersonate senior executives, at bay.

Ninety minutes after the demonstration, the security operations team reported that someone had attempted that very attack on NN Group’s CEO. “We were like ‘oh, this feature could already have prevented this from happening,’” Jansen says. “We need to be on constant alert, and these tools help us see things that we cannot manually follow.”

Machine learning-based security systems don’t work in every case, particularly when there is inadequate data to train them.

Hackers can also turn the tables on the companies and researchers studying them.

For instance, they can imitate the activity of legitimate users to thwart algorithms that screen for typical behavior.

As such, Battista Biggio, a professor at the University of Cagliari’s Pattern Recognition and Applications Laboratory, says it is imperative for companies to keep their algorithmic criteria secret and to change their formulas regularly.
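Biggio’s warning can be made concrete with a toy mimicry attack: once an attacker learns what a behavioral detector considers “normal”, they can shape malicious activity to fit inside it. Everything below, the detector, the data, and the rule, is invented for illustration:

```python
# Toy mimicry attack against a naive behavioral detector. If an attacker
# learns the detector's criteria (here, the mean and spread of a user's
# activity), they can shape malicious activity to pass as normal.
import statistics

user_activity = [12, 15, 11, 14, 13, 12, 16, 14]  # e.g. files accessed/hour
mean = statistics.mean(user_activity)
stdev = statistics.stdev(user_activity)

def looks_normal(value: float) -> bool:
    # Naive criterion: within two standard deviations of the user's mean.
    return abs(value - mean) <= 2 * stdev

print(looks_normal(500))  # False: a bulk data grab is flagged immediately
# Knowing the criterion, the attacker exfiltrates slowly, ~14 files/hour,
# staying inside the detector's notion of "normal".
print(looks_normal(14))   # True: the mimicry evades detection
```

This is exactly why Biggio advises keeping the criteria secret and changing them often: a threshold the attacker cannot learn is much harder to mimic.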


KC Cheung
KC Cheung has over 18 years’ experience in the technology industry, including media, payments, and software, and has a keen interest in artificial intelligence, machine learning, deep learning, neural networks and their applications in business. Over the years he has worked with some of the leading technology companies, building and growing dynamic teams in a fast-moving international environment.
