Facebook is racing Google and Amazon to create its own artificial intelligence (AI) chips, having realized that it needs faster computing power to deliver the next innovations in AI.
The company’s goals include a digital assistant with enough “common sense” to converse with a person on any topic.
According to Yann LeCun, one of the pioneers of modern AI and Facebook’s chief AI scientist, the digital assistant would be a major upgrade over today’s voice-driven devices.
Facebook also wants to make artificial intelligence a more practical tool for policing its platform. That task involves not only monitoring video in real time but also helping its army of human moderators decide what content should be allowed on the company’s service.
Mr LeCun said that Facebook looks forward to working with numerous chip companies to come up with new designs – the company recently rolled out a project in collaboration with Intel – but he also hinted that Facebook was creating its custom “ASIC” chips in a bid to support all its AI programs.
“Facebook has been known to build its hardware when required — build its own ASIC, for instance. If there’s any stone unturned, we’re going to work on it,” he claimed, in Facebook’s first official comments confirming the scope of its chip ambitions.
Referring to the opportunity for Facebook to make its own advances in the chips at the foundation of computing systems, LeCun said: “There’s certainly a lot of room at the bottom.”
The company’s decision to develop its own chips marks another long-term challenge to Nvidia, the leading maker of the graphics processors used for artificial intelligence in data centers.
The need for more specialized AI chips, designed to perform single operations at lightning speed and with lower power consumption than the general-purpose processors of the past, has drawn a wave of investment from large companies such as Apple, Amazon, and Google, as well as numerous startups across the globe.
The focus on new hardware architectures and silicon designs reflects the need for breakthroughs in basic computing to keep modern artificial intelligence (AI) from hitting a dead end.
LeCun noted that throughout the history of AI, massive hardware improvements were often needed before researchers hit upon the insights that produced breakthroughs in the field. “For a fairly long time, people didn’t think about fairly obvious ideas,” he said, and that oversight held back the development of AI.
Those ideas include backpropagation, a key method in today’s deep learning systems, which LeCun said was a clear extension of earlier research yet only became widely used in the 1990s, once computing hardware had advanced.
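Backpropagation can be sketched in a few lines of plain NumPy: errors at the output are pushed backwards through the chain rule, layer by layer, to update every weight. The network, task, and hyperparameters below are illustrative choices, not Facebook’s code.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a tiny task a single linear layer cannot solve
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)          # hidden layer activations
    return h, sigmoid(h @ W2 + b2)    # network output

_, out0 = forward()
initial_loss = float(np.mean((out0 - y) ** 2))

lr = 0.5
for _ in range(5000):
    h, out = forward()
    # backward pass: apply the chain rule from the output back to the input
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient propagated back one layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward()
final_loss = float(np.mean((out - y) ** 2))
print(final_loss < initial_loss)
```

The same backwards pass, repeated over many layers and billions of examples, is what makes modern deep learning so hungry for specialized hardware.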
In the past, Facebook has developed other kinds of hardware, pioneering new designs, mainly for data center equipment, and then open-sourcing them for others to build on.
Mr LeCun said a similar approach would be taken with chip designs, adding: “the objective is to give it away.”
Facebook is also devoting research to new designs for neural networks, the systems at the core of deep learning that underpin recent advances in areas such as language and image recognition.
While working on artificial intelligence (AI) chips at AT&T’s Bell Labs thirty years ago, LeCun developed the first “convolutional” neural network – a design based on the way the visual cortex functions in animals, and one that is now ubiquitous in deep learning systems.
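The core operation of a convolutional network is simple to sketch: a small filter slides over an image, producing a feature map that responds wherever the pattern it encodes appears. The toy edge-detecting filter below is an illustration of the idea, not any production design.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D sliding-window filter, as used in convnet layers."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge detector applied to an image with one sharp vertical edge
image = np.zeros((5, 5))
image[:, 2:] = 1.0                      # right half bright, left half dark
edge_kernel = np.array([[1.0, -1.0]])   # responds where brightness changes

feature_map = conv2d(image, edge_kernel)
print(feature_map.shape)  # (5, 4)
```

Because the same small filter is reused at every position, the operation maps naturally onto highly parallel hardware – one reason convolutional networks and specialized AI chips have grown up together.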
Modern neural networks, leveraging a method referred to as supervised learning, need a lot of data for training purposes while drawing massive amounts of power, especially when working at the scale of an entity like Facebook.
Currently, Facebook conducts numerous instant analyses on each of the 2 to 3 billion images per day that are uploaded by users to its primary service.
The company also uses facial recognition to identify people in pictures, generates captions describing each scene, and spots content, such as nudity, that violates the service’s terms and conditions.
According to Mr LeCun, the social media giant was working on “anything we can do to lower the power consumption [and] improve the latency” in a bid to expedite processing.
However, he added that the massive demands that arise as a result of monitoring video flowing across the site in real-time would call for new neural network designs.
Facebook is also seeking new neural network architectures that imitate more elements of human intelligence and make its systems more natural for people to interact with.
According to Mr LeCun, Facebook was heavily betting on “self-supervised” systems that can make extensive forecasts, especially about their surrounding world as opposed to only reaching conclusions that are directly linked to the data that was used to train them.
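The essence of the self-supervised idea is that the training target is hidden inside the data itself, rather than supplied by human labelers. The toy below makes that concrete: a linear model learns to forecast the next value of an unlabeled signal, with future samples of the same signal serving as the “labels”. This is an illustrative sketch, not Facebook’s method.

```python
import numpy as np

t = np.arange(200, dtype=float)
series = np.sin(0.1 * t)              # unlabeled signal: no human annotations

window = 5
# inputs: sliding windows of past samples; targets: the very next sample
X = np.array([series[i:i+window] for i in range(len(series) - window)])
y = series[window:]

# closed-form least squares: learn to predict the future from the past
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
mse = float(np.mean((pred - y) ** 2))
print(mse < 1e-3)
```

A model trained this way has to capture the regularities of the signal itself, which is the sense in which self-supervised systems “forecast their surrounding world” rather than merely echoing labels.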
Such self-supervised systems could develop the kind of understanding of the world that helps human beings handle new situations.
“In terms of new uses, one thing Facebook would be interested in is offering smart digital assistants — something that has a level of common sense,” he said. “They have background knowledge and you can have a discussion with them on any topic.”
The concept behind incorporating common sense in computers is still in its early stages, and LeCun asserted that this kind of intelligence “won’t happen tomorrow.”
“You want machines like human beings or animals to understand what will happen when the world responds to your interactions with it. There’s a lot of work we’re doing in causality,” he said. “Being able to predict under uncertainty is one of the main challenges today.”
Facebook is part of a broader research effort to improve today’s neural networks.
Mr LeCun was scheduled to give a speech regarding the work during a chip conference that was recently held in San Francisco.
These include networks that can adapt their structure in response to the data passing through them, making them more flexible in the face of the changes that characterize the real world.
Another avenue of research is into networks that “fire” only the neurons needed to solve a given problem. This approach mirrors the way the human brain works and could significantly reduce power consumption.
The research activity also involves incorporating computer memory into neural networks in an effort to allow them to retain additional information and create a more robust sense of context, especially when holding a “conversation” with a person.
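A memory of this kind is often sketched as a key-value store that the network reads with soft attention: keys similar to the current query contribute more of their stored values, letting the model carry context from earlier in a conversation. The example below is a toy illustration, not any specific published architecture.

```python
import numpy as np

def read_memory(query, keys, values):
    """Attention-weighted read: similar keys contribute more of their values."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())   # softmax over match scores
    weights /= weights.sum()
    return weights @ values

keys = np.eye(3)                      # three stored "facts", orthogonal keys
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.5, 0.5]])
query = np.array([5.0, 0.0, 0.0])     # strongly matches the first fact

out = read_memory(query, keys, values)
print(out)   # dominated by the first stored value
```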
Breakthroughs in the way neural networks operate are expected to have a considerable impact on the design of the chips that power them. That, in turn, could expose the companies that currently make the world’s leading AI chips to fresh competition.
Mr LeCun claimed that Google’s Tensor Processing Units (TPUs) – which have risen as the most dominant data center chips designed for machine learning – are “still fairly generic”.
“They make assumptions that are not necessarily true for future neural network architectures.”
On the other hand, flexibility in silicon design can carry its own disadvantages.
Microsoft, for instance, plans to install a new type of chip, the field-programmable gate array, across its data center servers. Though more flexible in how they can be used, such chips are less efficient at crunching massive amounts of data, putting Microsoft at a disadvantage against chips already optimized for specialized functions.