TensorFlow 2.0 — What You Need to Know
What's coming in TensorFlow 2.0 and why should we be thinking about it now?
What's Coming in TensorFlow 2.0?
Like anyone involved in deep learning when TensorFlow 2.0 was announced, we had a lot of questions. Would our graphs work properly with the newer version? Would the jump from TensorFlow 1.0 to TensorFlow 2.0 be as large as the one from AngularJS v1 to AngularJS v2? Over the last few months, our team has gone through the repo/RFCs, and we’re excited about what’s in store with TensorFlow 2.0.
Why Should We Be Thinking About This Now?
Since the RFCs for TensorFlow 2.0 are already out, now is an ideal time to start thinking about what will change and to make sure any new development will require little to no upgrade effort.
While code migration will not be entirely straightforward, we have some tips to help you avoid refactoring code for the upgrade. Here are two areas to consider for any new development.
1. API Cleanup
A few libraries will be removed (tf.app, tf.logging, tf.flags, etc.) and others will be consolidated, so check the libraries you depend on when starting a new project to make sure they are still valid. The biggest change is the removal of tf.contrib, which had cluttered the tf.* namespace; TensorFlow 2.0 cleans that namespace up. In addition, modules are consolidated, making it feel like you are working in a single framework rather than several different projects. We recommend going through the list of APIs that are being removed or restructured and avoiding them wherever possible.
2. Data Pipeline
The data pipeline will also get some well-deserved attention, and new best practices have already been released. The biggest change concerns queues vs. datasets: use tf.data.Dataset and avoid any other data structure for the pipeline. tf.data.Dataset combined with tf.function makes for a much more seamless experience.
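The recommended pipeline can be sketched as follows, a minimal example using made-up in-memory data (in TensorFlow 2.0 the autotuning constant lives under tf.data.experimental.AUTOTUNE):

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Hypothetical in-memory features/labels standing in for real training data.
features = tf.random.normal([100, 4])
labels = tf.random.uniform([100], maxval=2, dtype=tf.int32)

dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=100)                  # randomize example order
    .batch(32)                                 # group examples into mini-batches
    .prefetch(tf.data.experimental.AUTOTUNE)   # overlap input prep with training
)

for batch_features, batch_labels in dataset:
    print(batch_features.shape)  # (32, 4) for full batches, (4, 4) for the last
```

Because the dataset is a plain Python iterable under eager execution, you can loop over it directly with no queue runners or session plumbing.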
Keeping these reasons in mind, here are some of the most impactful changes in the TensorFlow 2.0 release.
1. Eager Execution
Anyone who comes from a traditional development background hates debugging computational graphs. Eager execution provides a way of working with graphs as ordinary, imperative code that you can debug line by line, and it gives you better control. Although eager execution was introduced during the TensorFlow 1.x line, it was limited. TensorFlow 2.0 makes eager execution the default, with a number of improvements that will reduce TensorFlow development pains.
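A minimal sketch of what this looks like in practice: operations return concrete values immediately, so plain print() and ordinary debuggers work with no session or graph run.

```python
import tensorflow as tf  # TensorFlow 2.x: eager execution is on by default

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)

# y already holds concrete values, so you can inspect it line by line
# exactly as you would any other Python object.
print(y.numpy())  # [[ 7. 10.]
                  #  [15. 22.]]
```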
2. @tf.function and AutoGraph
One aspect of TensorFlow that may have caused frustration was its inability to export parse functions or py_functions. Although you could use custom Python functions with TensorFlow 1.0, the functionality was limited and the functions could not be compiled or exported/reimported.
In TensorFlow 2.0, you can decorate a Python function with @tf.function to mark it for JIT compilation, so that TensorFlow runs it as a single graph.
This mechanism gives TensorFlow 2.0 all of the benefits of graph mode:
- Performance — You can optimize the function (node pruning, kernel fusion, etc.).
- Portability — You can export/re-import the function (SavedModel 2.0 RFC), allowing users to reuse and share modular TensorFlow functions.
The TensorFlow graph works across multiple languages, so the Python function export/reimport will work with mobile, C++, and JS. You can use the new AutoGraph feature of tf.function to write graph code using natural Python syntax.
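As a minimal sketch (the function name is our own), AutoGraph lets you write ordinary Python control flow inside a @tf.function and have it compiled into graph ops such as tf.cond:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

@tf.function  # marks the function for graph compilation
def abs_value(x):
    # AutoGraph rewrites this Python `if` into a tf.cond node when x is a
    # tensor, so the whole function still runs as a single graph.
    if x > 0:
        return x
    else:
        return -x

print(abs_value(tf.constant(-3.0)).numpy())  # 3.0
```

The same decorated function can then be exported via SavedModel and reused from the other runtimes mentioned above.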
3. Direct Path to Production
Another feature that we love: the ability to use the same saved model with TensorFlow Lite, TensorFlow Serving, TensorFlow.js, and other runtimes. No more converters are needed for TensorFlow Lite or Serving; you can use the same model across different deployments.
4. No More Globals
Worrying about a global variable vs a local variable, initializing it, and combining it with a new graph dynamically can be a pain. With the new release of TensorFlow 2.0, there are no more global variables. This freedom comes with a bit of caution: you must track and manage all your own variables. If you lose track, they will be garbage collected. You can use Keras objects to manage variables’ lifecycles.
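A short sketch of the Keras-managed approach: a layer object owns its variables, so nothing lives in a hidden global collection, and the variables' lifetime is tied to the object that holds them.

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# The Dense layer creates and tracks its own kernel and bias; if the layer
# object goes out of scope, its variables are garbage collected with it.
layer = tf.keras.layers.Dense(3)
layer(tf.zeros([1, 4]))  # the first call builds the variables

print([v.name for v in layer.variables])  # kernel and bias
print(tuple(layer.kernel.shape))          # (4, 3)
```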
5. Deprecation of Collections
TensorFlow 1.0 leaned heavily on collections to track operation dependencies: global variables used during training, local variables used for metrics, update_ops for batch norm, cond_context for conditionals, and so on. TensorFlow 2.0 removes these collections; the same bookkeeping is handled by regular Keras model objects.
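As a sketch, things you would previously have fetched from global collections (e.g. tf.GraphKeys.TRAINABLE_VARIABLES or REGULARIZATION_LOSSES in TensorFlow 1.x) are exposed directly as attributes of the model object:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,),
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dense(1),
])

# No global collections: the model tracks its own variables and losses.
print(len(model.trainable_variables))  # 4: two kernels and two biases
print(model.losses)                    # regularization losses, tracked per model
```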
How You Can Prepare
The biggest question on our minds is, “How can your organization prepare for this transition?” Here are a few things to think about for any new project you are starting before moving to TensorFlow 2.0.
1. GraphDef Compatibility
Fortunately, graphs in TensorFlow 2.0 are backward compatible. You can load any model created with TensorFlow 1.0 into TensorFlow 2.0. However, each TensorFlow release contains the minimum supported GraphDef version, so future minor releases of TensorFlow 2.0 may drop support for GraphDefs created with TensorFlow 1.0. We recommend upgrading these to TensorFlow 2.0 when you have an opportunity to do so.
2. Using the Upgrade Checker
Since a number of APIs have been removed or refactored, the TensorFlow team provides an upgrade script, tf_upgrade_v2, to identify what needs to be updated and what will keep working; it ships with the TensorFlow 2.0 package. Don’t forget to run it over your code before making any new changes to avoid refactoring later.
3. Using tf.data.Dataset
We suggest you stop using queue runners and instead use tf.data.Dataset. With tf.function in TensorFlow 2.0, you can fully utilize the dataset’s async prefetching and streaming features.
4. Using Smaller Functions
We recommend refactoring your code into smaller functions that can easily be decorated with tf.function in TensorFlow 2.0.
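For example, keeping the training step as one small, self-contained function makes it trivial to compile; this is a minimal sketch with a made-up model and random data, not a template from the TensorFlow docs:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # the small function compiles cleanly as a single graph
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal([8, 4])
y = tf.random.normal([8, 1])
print(train_step(x, y).numpy())  # scalar loss for this batch
```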
Let us know your thoughts in the comments.
If you enjoyed this article and want to learn more about TensorFlow, check out this collection of tutorials and articles on all things TensorFlow.
Opinions expressed by DZone contributors are their own.