ARM, the British chip designer, appears to have had enough of its wait-and-see approach to machine learning technology.
It is the latest firm to enter the AI hardware race, launching two new processor designs.
According to the company, the designs will deliver a significant leap in computing capability for firms building machine learning-driven devices.
The first is the ARM Machine Learning (ML) Processor, which accelerates general artificial intelligence applications, from machine translation to facial recognition.
The other is the ARM Object Detection (OD) Processor, a second-generation design optimized not only for processing visual data but also for detecting objects and people. The OD processor is expected to be available to industry clients at the end of May 2018, while the ML processor design will be ready around mid-year.
The announcement follows up on ARM's unveiling of Project Trillium in February, adding new details about its machine learning processor, a neural-network processing unit, or NPU.
The project represents ARM's effort to accelerate neural-network and machine-learning workloads across its range of processors, regardless of whether a task runs on a CPU, a GPU, or the company's new NPU.
ARM's main agenda is making machine learning possible at the edge. Here, the edge refers to standalone devices such as smartphones and other mobile gadgets, as opposed to a centralized data-center approach.
Shifting machine learning workloads to the edge delivers tangible advantages: lower power demands, reduced bandwidth requirements, and correspondingly lower costs. It also eliminates much of the latency, since data no longer needs to leave the device. Keeping data on the device can also improve security and reliability.
As with all its other chips, ARM will not manufacture the processors itself. Instead, it intends to license the designs to other manufacturers. The company's customers have previously included chipmakers such as Broadcom as well as hardware firms such as Apple, which uses ARM's designs in its devices.
The ML processor will primarily interest smartphone and tablet manufacturers, whereas the OD processor could be put to various uses, including drones and smartphone cameras. Both processors will be incorporated into system-on-chip (SoC) designs in much the same way as GPUs.
Beyond smartphones, the chip designs will help power the next generation of IoT devices. Like other AI chipmakers, ARM is enthusiastic about edge computing, which entails on-device processing rather than relaying data back to the cloud.
This capability has been a key factor in phone makers' adoption of artificial intelligence chips, since on-device computation offers clear advantages over cloud computing.
ARM is not alone in riding the artificial intelligence (AI) wave, particularly with optimized silicon.
Intel recently launched a new selection of AI-focused chips, while Qualcomm is developing its own AI platform. Google, meanwhile, is building machine learning chips mainly for its servers.