Facebook's New Open Source Tool Optimizes Deep Learning and AI

by Ostatic Staff - Jun. 27, 2016

Artificial intelligence and machine learning are much in the news right now, and some of the biggest tech companies are helping to drive the trend. Recently, I covered Google's decision to open source a program called TensorFlow. It's based on the same internal toolset that Google has spent years developing to support its AI software and other predictive and analytics programs. We also reported on how H2O.ai, formerly known as 0xdata, has announced new open source AI tools.

Now, Facebook has published a whitepaper and a blog post explaining Torchnet, a new open source software tool designed to streamline deep learning, a branch of artificial intelligence.

Facebook built the tool on Torch, an open source library it has steadily contributed to, and Torchnet is already in use at the company. According to the Facebook researchers:

"Torchnet provides a collection of boilerplate code, key abstractions, and reference implementations that can be snapped together or taken apart and then later reused, substantially speeding development. It encourages a modular programming approach, reducing the chance of bugs while making it easy to use asynchronous, parallel data loading and efficient multi-GPU computations.

The new toolkit builds on the success of Torch, a framework for building deep learning models by providing fast implementations of common algebraic operations on both CPU (via OpenMP/SSE) and GPU (via CUDA).

Although Torch has become one of the main frameworks for research in deep machine learning, it doesn't provide abstractions and boilerplate code for machine-learning experiments. So researchers repeatedly code their experiments from scratch and march over the same ground — making the same mistakes and possibly drawing incorrect conclusions — which slows development overall. We created Torchnet to give researchers clear guidelines on how to set up their code, and boilerplate code that helps them develop more quickly."

Notably, Torchnet is also modular, adding to its flexibility. "The modular Torchnet design makes it easy to test a series of coding variants focused around the data set, the data loading process, and the model, as well as optimization and performance measures," developers report. "This makes rapid experimentation possible. Running the same experiments on a different data set, for instance, is as simple as plugging in a different (bare-bones) data loader, and changing the evaluation criterion amounts to a one-line code change that plugs in a different performance meter."
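To make the plug-and-play idea concrete, here is a minimal, hypothetical sketch in Python. Torchnet itself is a Lua/Torch library, and none of the names below come from its actual API; the sketch only illustrates the design principle the developers describe, in which the data loader, the model, and the performance meter are interchangeable components, so swapping the data set or the evaluation criterion is a one-line change.

```python
# Illustrative sketch only -- not Torchnet's real API. It shows a generic
# experiment loop whose data loader, model, and meter are pluggable parts.
from typing import Callable, Iterable, List, Tuple

def run_experiment(
    data_loader: Callable[[], Iterable[Tuple[List[int], int]]],  # yields (features, label)
    model: Callable[[List[int]], int],                            # maps features to a prediction
    meter: Callable[[int, int, float], float],                    # accumulates a running score
) -> float:
    """Generic loop: only the plugged-in components change between experiments."""
    score, count = 0.0, 0
    for features, label in data_loader():
        prediction = model(features)
        score = meter(prediction, label, score)
        count += 1
    return score / max(count, 1)

# Two interchangeable "performance meters".
def class_error(pred: int, label: int, running: float) -> float:
    return running + (0.0 if pred == label else 1.0)

def abs_error(pred: int, label: int, running: float) -> float:
    return running + abs(pred - label)

# Two interchangeable "data loaders" standing in for different data sets.
def toy_dataset_a():
    return [([1, 2], 1), ([3, 4], 0)]

def toy_dataset_b():
    return [([0, 0], 0), ([5, 5], 1), ([2, 2], 1)]

majority_model = lambda features: 1 if sum(features) > 4 else 0

# Switching the data set or the evaluation criterion is a one-argument change:
print(run_experiment(toy_dataset_a, majority_model, class_error))
print(run_experiment(toy_dataset_b, majority_model, abs_error))
```

In the same spirit, the real library keeps the surrounding training and evaluation boilerplate fixed while researchers swap in different data sets, loaders, and meters.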

For more detail on Torchnet, check out the dedicated academic paper.