
Providing the Computational Power for Machine Learning


Machine learning eats tons of CPU. What are some ways to make sure it has enough food?



Machine learning has largely been enabled by the coming together of large datasets, algorithms capable of making sense of the data, and affordable computing to underpin everything.

It’s interesting to see, therefore, that supercomputing giant Cray Inc. has recently undertaken a deep learning collaboration with Microsoft and the Swiss National Supercomputing Centre (CSCS). The project aims to improve companies' ability to run deep learning algorithms at scale.

The partners combined their computing expertise to scale the Microsoft Cognitive Toolkit up onto a Cray XC50 supercomputer.

Speeding Up the Learning Process

The aim is to speed up the training process, and thus obtain results in hours that would typically take weeks or even months. This increased speed opens up a raft of new possibilities for customers: not only solving existing problems more efficiently, but also tackling challenges that were previously too computationally demanding to attempt.
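To see why adding machines shortens training without changing the result, here is a minimal sketch in plain Python of synchronous data-parallel gradient descent, the standard pattern behind this kind of scaling. It is illustrative only, not the Cognitive Toolkit or Cray API; the model, batch, and helper names are invented for the example. Each worker computes gradients on its shard of the batch, the gradients are averaged (an "all-reduce" in a real cluster), and the resulting update is identical to the single-machine one, so per-step compute shrinks roughly in proportion to the number of workers.

```python
def gradient(w, x, y):
    # Gradient of squared error 0.5 * (w*x - y)**2 for a 1-D linear model.
    return (w * x - y) * x

def single_machine_step(w, batch, lr):
    # One gradient-descent step over the whole batch on one machine.
    grads = [gradient(w, x, y) for x, y in batch]
    return w - lr * sum(grads) / len(grads)

def data_parallel_step(w, batch, lr, num_workers):
    # Split the batch into shards, one per worker.
    shards = [batch[i::num_workers] for i in range(num_workers)]
    # Each worker averages gradients over its own shard
    # (these loops would run in parallel on a real cluster).
    worker_grads = [
        sum(gradient(w, x, y) for x, y in shard) / len(shard)
        for shard in shards
    ]
    # All-reduce: combine per-worker averages, weighted by shard size,
    # then apply a single update -- identical to the one-machine step.
    avg = sum(g * len(s) for g, s in zip(worker_grads, shards)) / len(batch)
    return w - lr * avg

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
print(single_machine_step(0.0, batch, 0.1))    # -> 1.5
print(data_parallel_step(0.0, batch, 0.1, 2))  # -> 1.5 (same update)
```

The payoff is wall-clock time: with the update unchanged, doubling the workers roughly halves the gradient computation per step, which is how weeks of training compress into hours at supercomputer scale (communication cost for the all-reduce is what systems like the XC50's interconnect are built to minimize).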

“Cray’s proficiency in performance analysis and profiling, combined with the unique architecture of the XC systems, allowed us to bring deep learning problems to our Piz Daint system and scale them in a way that nobody else has,” says the CSCS team. “What is most exciting is that our researchers and scientists will now be able to use our existing Cray XC supercomputer to take on a new class of deep learning problems that were previously infeasible.”

“Applying a supercomputing approach to optimize deep learning workloads represents a powerful breakthrough for training and evaluating deep learning algorithms at scale,” they continue. “Our collaboration with Cray and CSCS has demonstrated how the Microsoft Cognitive Toolkit can be used to push the boundaries of deep learning.”

The team believes the new setup will allow researchers to perform much larger and more complex deep learning experiments, at a scale previously unheard of.

Cray plans to offer customers access to a range of deep learning toolkits, including the Microsoft Cognitive Toolkit, to make it as easy as possible to begin using this computational muscle for AI experiments. The company sees it as another step toward the convergence of big data and supercomputing.

“Only Cray can bring the combination of supercomputing technologies, supercomputing best practices, and expertise in performance optimization to scale deep learning problems,” says Cray. “We are working to unlock possibilities around new approaches and model sizes, turning the dreams and theories of scientists into something real that they can explore. Our collaboration with Microsoft and CSCS is a game changer for what can be accomplished using deep learning.”


machine learning, big data, data processing

Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
