
Petuum, an AI Startup, Plans to ‘Industrialize’ Machine Learning

The last thirty years of machine learning innovation are closely entangled with a major computing idea: parallel distributed processing, in which sections of a program run simultaneously on several processors to speed up computation.

Eric Xing, a machine learning professor at Carnegie Mellon, founded Petuum three years ago. He believes the field needs to hide much of the complexity of parallelism in order to make AI simpler to build and deploy.

The Pittsburgh-based company recently raised $108 million in a financing round led by SoftBank, the Japanese conglomerate.

Other companies that financed the exercise include Oriza Ventures, Northern Light Venture Capital, Advantech Capital, and Tencent.

Petuum intends to ship the first iteration of its artificial intelligence (AI) platform software in the coming summer.

Xing expects that the offering will help “industrialize” machine learning, hence making it more widely available and reliable.

“When you deploy algorithms, you need to maintain it, you need to update it, change it,” Xing said.

“That is the very bottleneck of getting AI accessible,” he says, “for companies that aren’t Google or Microsoft, that don’t have armies of engineers, for traditional IT teams.”

“There is a shortage of talent, and there is little to no history of building AI teams within most companies.”

“Other companies want Lego pieces, they want building blocks of machine learning solutions. AI needs to be industrialized, and there need to be standards – we want to be the front-runners of such a culture.”

The platform, described comprehensively in a 2015 paper by Xing and colleagues, automatically breaks programs apart in two distinct ways.

One of the two is “data parallelism,” a widely used technique in which AI training, and in certain instances inference, is sped up by sending separate pieces of the data to different processors, whether GPUs or CPUs.


Every processor uses its slice of the entire data set to train the neural network.

The network’s parameters and weights are then updated across all the data slices.
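
To make the idea concrete, here is a minimal sketch of data parallelism in plain Python/NumPy. It is illustrative only, not Petuum’s implementation: each simulated worker computes a gradient on its own slice of the data, and the shared parameters are updated from the averaged gradients.

```python
# Illustrative sketch of data parallelism (not Petuum's implementation):
# each "worker" computes gradients on its own slice of the data, and the
# shared parameters are updated from the averaged gradients.
import numpy as np

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 / len(X) * X.T @ (X @ w - y)

def data_parallel_step(w, X, y, n_workers=4, lr=0.1):
    # Split the dataset into one shard per worker.
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    # Each worker computes a gradient on its shard (done serially here;
    # in a real system these would run on separate CPUs or GPUs).
    grads = [gradient(w, Xs, ys) for Xs, ys in shards]
    # Average the workers' gradients and apply one update to the shared weights.
    return w - lr * np.mean(grads, axis=0)

# Toy usage: fit y = 3x with four simulated workers.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0]
w = np.zeros(1)
for _ in range(200):
    w = data_parallel_step(w, X, y)
print(w)  # converges to roughly [3.0]
```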

The other approach, “model parallelism,” is not only more challenging to engineer but also less common.
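
For contrast, the following hypothetical sketch shows what model parallelism means in its simplest form: the model itself is split across devices, so activations flow from one device to the next rather than each device holding a full copy. The `device` labels are stand-ins, not real hardware assignments.

```python
# Illustrative sketch of model parallelism (hypothetical, not Petuum's code):
# the model is split across devices, so activations flow between devices
# instead of every device holding a full copy of the network.
import numpy as np

class Layer:
    def __init__(self, in_dim, out_dim, device):
        rng = np.random.default_rng(42)
        self.W = rng.normal(scale=0.1, size=(in_dim, out_dim))
        self.device = device  # stand-in for a GPU/CPU assignment

    def forward(self, x):
        # In a real system this computation would run on `self.device`,
        # with `x` transferred between devices as needed.
        return np.maximum(x @ self.W, 0.0)  # ReLU activation

# A two-layer network with each layer assigned to a different device.
layer1 = Layer(8, 16, device="gpu:0")
layer2 = Layer(16, 4, device="gpu:1")

x = np.ones((1, 8))
h = layer1.forward(x)    # would run on gpu:0
out = layer2.forward(h)  # activations move to gpu:1, then run there
print(out.shape)         # (1, 4)
```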

For decades, these challenges of parallelism have been a focal point of computer science.

Nevertheless, Petuum’s software can automatically achieve data parallelism, model parallelism, or both, for machine learning programs written in the Caffe framework or Google’s TensorFlow.

The key idea behind the work is that machine learning programs are probabilistic rather than “deterministic,” unlike most other programs.

For this reason, such programs have three advantages for parallelism that other kinds of software do not.

Their separate parts “converge” on a solution to the problem at different rates; the dependencies between the program’s parts change as it runs; and the program can tolerate errors in its different parts to a greater extent.

Petuum’s software employs several tricks to exploit these properties.

For instance, a “parameter server” runs a scheduling protocol that determines which of the neural network’s parameters can be worked on in parallel, based on which parameters are only “weakly” coupled and can therefore be updated independently.
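
The following toy sketch illustrates that scheduling idea in spirit only; it is not Petuum’s protocol. The coupling matrix, threshold, and helper names are all made up for illustration: parameters whose pairwise coupling is weak are grouped into batches that can be updated concurrently.

```python
# Toy sketch of the scheduling idea described above (not Petuum's protocol):
# group parameters whose coordinates are only "weakly" coupled, so each
# group can be updated in parallel without workers stepping on one another.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def weakly_coupled_groups(coupling, threshold=0.1):
    """Greedily group parameter indices so that any two indices in the
    same parallel batch have coupling below `threshold`."""
    n = coupling.shape[0]
    groups, remaining = [], set(range(n))
    while remaining:
        batch = []
        for i in sorted(remaining):
            if all(abs(coupling[i, j]) < threshold for j in batch):
                batch.append(i)
        groups.append(batch)
        remaining -= set(batch)
    return groups

# Hypothetical coupling matrix: parameters 0 and 1 interact strongly,
# everything else is nearly independent.
coupling = np.eye(4)
coupling[0, 1] = coupling[1, 0] = 0.9

def update(idx):
    # Placeholder for the real work a parameter-server worker would do.
    return f"updated parameter {idx}"

for batch in weakly_coupled_groups(coupling):
    # Parameters in the same batch are updated concurrently.
    with ThreadPoolExecutor() as pool:
        print(list(pool.map(update, batch)))
# Batches: [0, 2, 3] run in parallel, then [1].
```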

Although the results are somewhat similar to those of the big-data framework MapReduce, Petuum is convinced that its system has many advantages over MapReduce, as well as over several other parallelizing infrastructures such as GraphLab and Spark.

“I was embarrassed at my own inability to deliver my models rapidly,” Xing recalls. “I went back to CMU, and we started a research project on how to take a piece of existing machine learning code, and automatically make a parallel version for the data center.”

Currently, Petuum is working out how to monetize the platform.

Xing says it could involve a licensing model that charges according to the number of users or machines a customer has running a given AI system.

In the meantime, Petuum is on track to ship several packaged software products aimed mainly at vertical industries.

According to Xing, the idea behind this effort is to prove that “we are able to address non-trivial AI problems.”

Hospitals are one area of interest for Xing: most do not have a dedicated artificial intelligence (AI) team, and even where they do, the IT staff may struggle with tasks such as deploying AI models across a wide array of hardware.

“Where they have an IT team, they may sit in front of a UI and update the algorithms, but running on Petuum, they don’t need to worry about how the data is distributed or run on different machines.”

The first product targeting the healthcare space is a system that automatically generates human-readable reports for physicians from data such as radiology scans, processed using reinforcement learning.

“This is not about classification,” says Xing. “It is about summarizing knowledge into a one-pager, with a deeper understanding of medical information.”

“You can increase diagnostic outcomes, you can speed up a doctor’s work.”

Petuum’s collaboration with the Cleveland Clinic represents one outcome.

The partnership, disclosed back in September, is intended to build an Artificial Intelligence Diagnosis Engine that can “apply advanced machine learning algorithms to medical record data.” The partnership is also competing for the IBM Watson AI XPRIZE.

Certainly, industrializing artificial intelligence (AI) raises the question of whether Petuum’s work brings us any closer to the much-sought-after “artificial general intelligence.”

According to Xing, the term “generalizability” has “been abused or overloaded.”

“I don’t think there is a single algorithm that can solve general AI,” he says, “to process speech and also read pictures, that’s impossible – that’s not even a scientifically viable statement.”

“But if you talk about from an engineering sense, about nuts and bolts that can be used in different places, then we can make these different building blocks that can be reused.”

“The bigger problem,” he says, “is a disconnect between scientists and engineers: the two are not giving insights to one another.”

“A lot more needs to happen to bridge the gap. I don’t believe that just inventing fancier and fancier models is the way to go. You still need engineers to translate the models into product.”

Source: ZDNet

