Thanks to Dinesh Nirmal, Vice President of Analytics Development at IBM Cognitive Systems, for sharing how IBM is integrating the PowerAI deep learning enterprise software distribution into the Data Science Experience, so that data scientists can develop AI models with open-source deep learning frameworks such as TensorFlow, Caffe, and Torch to unlock new insights from analytics.
New AI technologies like machine learning and deep learning are seeing growing adoption across the shifting enterprise landscape. Deep learning in particular is being adopted by an increasing number of enterprises seeking richer insights that help them better serve their clients. Thanks to more powerful systems and graphics processing units (GPUs), we can now train the complex AI models that make these insights possible.
The Data Science Experience is a collaborative workspace that enables data scientists to develop machine learning models and manage their data and trained models. PowerAI adds deep learning libraries, algorithms, and capabilities from popular open-source frameworks. These deep learning frameworks can sort through all types of data, whether sound, text, or visual, to create and improve learning models on the Data Science Experience.
The data is available regardless of its state (cold, warm, or hot) or where it resides.
As an example, banks today can leverage deep learning to:
Make better predictions about which clients might default on credit.
Better detect credit card fraud.
Offer clients other products they are likely to value.
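To make the first of these use cases concrete, here is a minimal sketch of a credit-default classifier: a logistic regression trained with gradient descent in plain NumPy. The features and data are entirely hypothetical, and a real model would be built with a framework like TensorFlow on far richer data.

```python
import numpy as np

# Hypothetical client features: [debt-to-income ratio, late payments last year]
X = np.array([[0.1, 0], [0.2, 1], [0.8, 4], [0.9, 5],
              [0.3, 0], [0.7, 3], [0.2, 0], [0.6, 4]], dtype=float)
y = np.array([0, 0, 1, 1, 0, 1, 0, 1], dtype=float)  # 1 = defaulted

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression with batch gradient descent on the log loss.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)           # predicted default probability
    grad_w = X.T @ (p - y) / len(y)  # gradient of log loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

def default_risk(features):
    """Predicted probability of default for one client."""
    return sigmoid(np.array(features) @ w + b)

print(round(default_risk([0.85, 4]), 2))  # high-risk profile
print(round(default_risk([0.15, 0]), 2))  # low-risk profile
```

The same shape of model, scaled up to deep networks and real transaction histories, underlies the fraud-detection and product-recommendation cases as well.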
In manufacturing, deep learning models are being trained to identify failures before they happen by analyzing historical data from IoT devices that report on the functioning of equipment. These learning models continuously evolve and get smarter over time, becoming more sophisticated at identifying anomalies and impending failures. This also provides manufacturers with a new revenue stream: selling preventive or predictive maintenance as a service, reducing, or even eliminating, unscheduled downtime.
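A simple way to see the idea behind such anomaly detection is a threshold on deviation from a healthy baseline. The sensor readings below are hypothetical; a production system would learn the baseline with a trained model rather than a fixed statistic.

```python
import numpy as np

# Hypothetical vibration readings from an IoT sensor on a pump;
# the final values drift upward as a bearing begins to fail.
readings = np.array([1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1,
                     0.9, 1.0, 1.6, 1.8, 2.1])

# Establish a baseline from the first 10 (known-healthy) samples,
# then flag readings more than 3 standard deviations above it.
baseline = readings[:10]
mean, std = baseline.mean(), baseline.std()
anomalies = np.where(readings > mean + 3 * std)[0]

print(anomalies.tolist())  # indices of the anomalous readings
```

A deep learning version replaces the hand-set threshold with a model that learns normal behavior from historical sensor data, which is what lets it keep improving as more data arrives.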
The growth of deep learning and machine learning is fueled by a rapid rise in computing capability through the use of accelerators like GPUs. Deep learning frameworks like TensorFlow are optimized in PowerAI for IBM Power Systems to take advantage of the NVIDIA NVLink high-speed interconnect, a communication superhighway between the CPU and GPU that is 2.5 times faster than the conventional PCI-Express 3.0 links connecting CPUs and GPUs in Intel-based systems today.
IBM recently introduced the Distributed Deep Learning (DDL) library in PowerAI, which reduces deep learning training times from weeks to hours. Enabling such capabilities through the Data Science Experience brings unmatched accelerated deep learning to DSX's collaborative workspace environment.
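The source does not describe DDL's internals, but the core idea of data-parallel distributed training can be sketched: each worker computes gradients on its own shard of a batch, and an all-reduce step averages them so the update matches what a single machine would have computed on the full batch. The toy linear model, data, and learning rate below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = x @ w, trained with mean-squared error.
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(64, 2))
y = X @ true_w

def gradient(w, X_shard, y_shard):
    """MSE gradient computed by one worker on its shard of the batch."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

w = np.zeros(2)
n_workers = 4
for step in range(200):
    # Each worker computes a gradient on its shard (in parallel, in reality)...
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [gradient(w, Xs, ys) for Xs, ys in shards]
    # ...then an all-reduce averages them, giving the same update a
    # single machine would produce from the whole batch.
    w -= 0.1 * np.mean(grads, axis=0)

print(np.round(w, 2))  # recovers weights close to true_w
```

Splitting the gradient computation this way is what lets training scale across many GPUs and nodes, which is how times drop from weeks to hours.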
This puts better machine and deep learning tools in the hands of the best and brightest analytical minds, and these tools will improve rapidly over time as they become smarter with more data.