Facebook Open Sources "Big Sur" Machine Learning Framework

by Ostatic Staff - Dec. 11, 2015

Artificial intelligence and machine learning are going through a mini-renaissance right now, and some of the biggest tech companies are helping to drive the trend. Recently, I covered Google's decision to open source a program called TensorFlow. It's based on the same internal toolset that Google has spent years developing to support its AI software and other predictive analytics programs. And we reported on how H2O.ai, formerly known as 0xdata, has announced a new $20 million funding round. The money will go toward advancing its machine learning toolset.

Now, Facebook is open sourcing its machine learning hardware, a system designed for artificial intelligence (AI) computing at large scale and built on Nvidia GPUs.

Facebook's Kevin Lee and Serkan Piantino wrote in a blog post that the open-sourced AI hardware is more efficient than off-the-shelf options because the servers can be operated within data centers based on Open Compute Project standards.

They noted the following:

"At Facebook, we've made great progress thus far with off-the-shelf infrastructure components and design. We've developed software that can read stories, answer questions about scenes, play games and even learn unspecified tasks through observing some examples. But we realized that truly tackling these problems at scale would require us to design our own systems. Today, we're unveiling our next-generation GPU-based systems for training neural networks, which we've code-named 'Big Sur.”

"Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs of up to 300 watts each, with the flexibility to configure between multiple PCI-e topologies. Leveraging NVIDIA's Tesla Accelerated Computing Platform, Big Sur is twice as fast as our previous generation, which means we can train twice as fast and explore networks twice as large. And distributing training across eight GPUs allows us to scale the size and speed of our networks by another factor of two."

"We want to make it a lot easier for AI researchers to share techniques and technologies. As with all hardware systems that are released into the open, it's our hope that others will be able to work with us to improve it."

You can find out more about Big Sur here.