
Facebook Likes Machine Learning, Announces Open Source GPU Server Big Sur

Tech giant Facebook has set its sights on deep neural networks with the groundbreaking open source GPU server Big Sur.



Facebook is one of those companies that dabbles in all sorts of tech awesomeness. While most known for their heavily-used social media platform, Facebook recently announced something else you might like: an open source GPU server.

Image credit: Facebook

Dubbed Big Sur, Facebook’s forthcoming open source GPU server boasts a whopping eight (yes, you heard right, eight) Nvidia M40 GPUs. For reference, a single M40 totes 12GB of GDDR5 memory, 3072 CUDA cores, and peak single-precision performance of 7 teraflops. Chances are that could run The Witcher 3 in 4K with ease (minus the obvious detail of the M40’s lack of video outputs, like this AMD behemoth). Yet though Nvidia is more commonly known for delivering powerful gaming hardware, these GPUs serve a different purpose: machine learning, and more specifically deep neural networks.

Much like Facebook’s users, Big Sur will spend its time analyzing data. Examples include running pictures and videos through neural networks so that they can be tagged with information; those artificial neural networks can then make deductions about new data. In a way, it’s a similar concept to uploading pictures to your own Facebook page and tagging photos. Looking back through those images, one might notice that a certain friend has a fondness for tequila and is always wearing a flannel shirt. Hopefully, however, Facebook’s research is a bit more insightful. Of course, a few Google researchers once used artificial neural networks to identify cat pictures (and reportedly it required 16,000 computers).
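To make the tagging idea concrete, here is a minimal sketch (not Facebook’s actual code, and wildly simpler than a real deep network): a toy classifier that scores an image’s feature vector against a set of labels and picks the most probable tag. The labels, feature values, and weights are all hypothetical.

```python
import math

def softmax(xs):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def tag_image(features, weights, labels):
    """Score each label as a dot product of the feature vector with
    that label's weight row, then return the most probable tag."""
    scores = [sum(f * w for f, w in zip(features, row)) for row in weights]
    probs = softmax(scores)
    best = max(range(len(labels)), key=lambda i: probs[i])
    return labels[best], probs[best]

# Hypothetical 3-element feature vector and hand-picked weights;
# a real network would learn millions of these from training data.
labels = ["cat", "flannel shirt", "tequila"]
weights = [
    [2.0, -1.0, 0.5],   # responds strongly to "cat-like" features
    [-0.5, 1.5, 0.0],
    [0.0, 0.2, 1.0],
]
tag, confidence = tag_image([1.0, 0.2, 0.1], weights, labels)
print(tag)  # -> cat
```

Training a real deep network means learning those weights from labeled examples, which is exactly the number-crunching Big Sur’s eight GPUs are built to accelerate.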

Big Sur kicks off Facebook’s open source hardware expedition into the realm of machine learning and deep neural networks (topics they’ve been working on for a while). It’s no ordinary server, either, packing quite a performance hike: Facebook researchers Kevin Lee and Serkan Piantino claimed in a Dec. 10, 2015 blog post that Big Sur is twice as fast as their previous generation of servers. And per a news article on the Nvidia website, Tesla GPUs can cut the training time of deep neural networks by as much as 10-20x. That’s a substantial performance boost.

Facebook’s foray into machine learning and deep neural networks specifically focuses on advancements in artificial intelligence (AI). AI has become more prevalent, as evidenced by IBM’s Watson, Microsoft’s Cortana, and Apple’s Siri. Now let’s just hope Facebook, armed with its new Big Sur, doesn’t unwittingly unleash Skynet or HAL.

Cover image credit: Facebook


