Cisco Develops a New Deep Learning Server Driven by 8 GPUs


Cisco Systems Inc., a leading networking company, recently made news by becoming the latest data center equipment maker to introduce a system optimized for AI. The company unveiled the UCS C480 ML M5, a new four-rack-unit server designed primarily to power processor-intensive deep learning tasks.

The machine is the latest addition to Cisco’s Unified Computing System family. UCS servers integrate compute resources with storage and networking features, coupled with management automation software.

The C480 ML M5 packs a bigger punch than many other systems in the series. It combines two latest-generation Intel Corp. Xeon Scalable central processing units with eight of Nvidia Corp.’s Tesla V100 graphics cards.

Cisco opted to use the top-end version of the chip, dubbed the SXM2, which boasts a whopping 32 gigabytes of onboard memory.

According to official figures from Nvidia, a single V100 offers 47 times the performance of a traditional CPU for deep learning workloads. Moreover, the chip packs 21.1 billion transistors into a die the size of an Apple Watch face.

The transistors are arranged into more than 5,700 processing cores, including 640 Tensor Cores designed specifically with artificial intelligence in mind.

Inside the new appliance, the V100 chips communicate with one another through a technology called NVLink, which Nvidia designed specifically for such systems. As for storage, organizations can fit the system with up to 24 flash drives or direct-attached disks.

In addition, six of the C480 ML M5’s drive slots support flash devices based on the high-speed NVMe interconnect technology.

Cisco intends to ensure that the appliance works harmoniously with popular artificial intelligence tools from other companies. To that end, it is partnering with Hortonworks Inc. to certify the machine for version 3.1 of the Hadoop analytics platform.

That version provides support for well-known deep learning frameworks such as TensorFlow.

Cisco is also expected to support Kubeflow, an open-source tool that enables TensorFlow to run on the Kubernetes software container orchestration engine. Software containers allow companies to move workloads easily across multiple environments. As a result, customers will gain the ability to move on-premises artificial intelligence models to the cloud, or the other way around.

The release of the C480 ML M5 comes just a month after Dell EMC unveiled a server similarly geared toward artificial intelligence workloads, which can be fitted with up to four V100 chips. Earlier, Pure Storage Inc. introduced AIRI, a system that integrates four Nvidia DGX-1 servers, each boasting eight V100s. Cisco is expected to make the C480 ML M5 available later this year.

Source: SiliconAngle
