Nvidia & NetApp Unveil an AI-Based Data Platform for Enterprise


Contrary to popular belief, artificial intelligence (AI) is not all about algorithms. Many practitioners argue that the data used to train models matters at least as much as the models themselves.

For this reason, IDC projects that more than 44 zettabytes of digital data will have been created by the start of 2020. Fortunately, the growth of big data has coincided with a continued drop in cloud storage pricing, driven partly by better management tools, innovations in object storage and cheaper media costs.

Cloud storage providers are not created equal, however. Some lack the fine-grained management tools needed to collate, process and transfer AI model data quickly and efficiently. What’s more, not all businesses have storage stacks optimized for data science workflows.

Recently, data storage company NetApp and Nvidia jointly announced Ontap AI, which they are convinced is an ideal solution. According to NetApp senior vice president Octavian Tanase, the platform is powered by NetApp’s AFF A800 cloud-connected flash storage and Nvidia’s DGX supercomputers.

It is designed to help organizations gain control over all their data, from the edge to the core to the cloud, by delivering unmatched performance and access.

In a phone interview with VentureBeat, Tanase said that NetApp’s unique vision of a data pipeline allows easy deployment. He stressed that customers are seeking scale: they want to start small and then grow.

According to Tanase, NetApp’s goal is to help customers correlate and manage datasets, create large data lakes, and make better, faster decisions about their data.

NetApp’s Data Fabric is the connective tissue that binds these solutions together. An on-premises and SaaS offering, it unites data sources across environments, including hybrid private/public clouds, public cloud offerings from service providers, and clouds in datacenters. What’s more, it offers fast access to data regardless of its physical location or format.

Nvidia’s DGX-1, an AI supercomputer designed for deep learning, sits at the core of Ontap AI. The DGX-1 boasts eight Tesla V100 Tensor Core GPUs and up to 256GB of GPU memory, configured in a hybrid cube-mesh topology via Nvidia NVLink.

According to Nvidia, a single DGX-1 delivers 1 petaflop of computing power, equivalent to more than 800 CPUs. NetApp’s AFF A800, for its part, offers equally remarkable performance: throughput of up to 300GB/s in a 24-node cluster and sub-200-microsecond latency.

Jim McHugh, Nvidia’s vice president and general manager, said that in the AI world, data integration is fundamental. He added that training AI on GPUs requires something different from what traditional applications need.

Cambridge Consultants, an engineering consulting firm based in the UK, is among the first adopters of Ontap AI.

The firm has employed the technology in the healthcare sector, where it is building systems that assess not only drug treatments but also their effects on patient outcomes. It has also leveraged Ontap AI to build Vincent, a deep learning tool created to learn to paint like a human.

Source: VentureBeat
