Google's Popular TensorFlow Project Gets a Major Upgrade
Artificial intelligence and machine learning are creating a lot of buzz right now, and open source tools are part of that buzz. Google has made a potentially hugely influential contribution to the field: it has open sourced a program called TensorFlow, based on the same internal toolset Google spent years developing to support its AI software and other predictive and analytics programs.
There was just one problem with the initial release of TensorFlow: it could not operate across multiple devices. Now Google has corrected that with an upgrade that can run on hundreds of machines concurrently.
According to Google researchers:
"Google uses machine learning across a wide range of its products. In order to continually improve our models, it's crucial that the training process be as fast as possible. One way to do this is to run TensorFlow across hundreds of machines, which shortens the training process for some models from weeks to hours, and allows us to experiment with models of increasing size and sophistication. Ever since we released TensorFlow as an open-source project, distributed training support has been one of the most requested features. Now the wait is over."
"Today, we're excited to release TensorFlow 0.8 with distributed computing support, including everything you need to train distributed models on your own infrastructure. Distributed TensorFlow is powered by the high-performance gRPC library, which supports training on hundreds of machines in parallel. It complements our recent announcement of Google Cloud Machine Learning, which enables you to train and serve your TensorFlow models using the power of the Google Cloud Platform."
The distributed trainer also enables you to scale out training using a cluster management system such as Kubernetes. And once you have trained your model, you can deploy it to production and speed up inference using TensorFlow Serving on Kubernetes. Beyond distributed Inception, the 0.8 release also includes new libraries for defining your own distributed models.
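To make the scale-out model concrete, here is a minimal sketch of the cluster description that distributed TensorFlow's tf.train.ClusterSpec accepts: a mapping from job names to task addresses. The host names and ports are illustrative placeholders, not real infrastructure, and the snippet deliberately sticks to plain Python so it stands on its own.

```python
# Sketch of a distributed TensorFlow cluster description (hypothetical
# hosts): job names mapped to the network addresses of their tasks.
cluster = {
    # Parameter servers hold the shared model variables.
    "ps": ["ps0.example.com:2222"],
    # Workers compute gradients in parallel and push updates to the ps job.
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],
}

# Each machine would then start a gRPC server for its own task, e.g.:
#   server = tf.train.Server(tf.train.ClusterSpec(cluster),
#                            job_name="worker", task_index=0)
# and sessions would connect to it via server.target.
print(len(cluster["worker"]))  # → 2
```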
This upgrade is significant. After TensorFlow arrived on GitHub late last year, it became the "most forked" project on GitHub in 2015, according to a website that tracks GitHub activity. According to Google, TensorFlow could help speed up processes ranging from drug discovery to processing astronomy data sets.
As the original announcement of the open source version noted:
"TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well."
The basic goal of most machine learning tools is to take a vast quantity of data and reduce it to manageable, actionable insights. TensorFlow will likely take on many such influential tasks across industries, especially now that it can run across many machines at once.