PyTorch 1.1 improves JIT compilation and offers TensorBoard support


Several months after the release of PyTorch 1.0, the next feature update has arrived. PyTorch 1.1 brings new developer tools, official TensorBoard support, a few breaking changes, improvements, new features, and new APIs.

See what’s new in the deep learning platform’s latest release.

Experimental TensorBoard support

Version 1.1 supports TensorBoard for visualization and debugging. TensorBoard is a visualization toolkit made up of a suite of web applications.

This new implementation is currently experimental, so report any issues that you catch and watch for future news and potential changes. To get started, import the writer with from torch.utils.tensorboard import SummaryWriter.
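To give a feel for the API, here is a minimal sketch of logging a scalar and a histogram; the tag names and dummy values are illustrative only:

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter()  # writes event files to ./runs/ by default

    for step in range(100):
        loss = torch.rand(1).item()  # stand-in for a real training loss
        writer.add_scalar("train/loss", loss, step)
        writer.add_histogram("weights", torch.randn(1000), step)

    writer.close()

Launch the dashboard with tensorboard --logdir=runs and open it in a browser to inspect the runs.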


The release notes on GitHub list just some of its use cases: “Histograms, embeddings, scalars, images, text, graphs, and more can be visualized across training runs.”

Full TensorBoard documentation is available in the PyTorch docs.

Just-in-time (JIT) compilation

Version 1.1 introduces several improvements to just-in-time (JIT) compilation. According to the release notes on GitHub, here's what has changed:

  • Attributes in ScriptModules: Assign attributes on a ScriptModule by wrapping them with torch.jit.Attribute. This update supports all types available in TorchScript. After assignment, PyTorch saves the attribute in a separate archive in the serialized model binary.
  • Dictionary and list support in TorchScript: List and dictionary types now behave like Python lists and dictionaries.
  • User-defined classes in TorchScript: Annotating a class with @torch.jit.script is supported, but currently experimental, so as usual, be aware of potential future changes. (A sketch combining these features follows this list.)
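As a minimal sketch along the lines of the examples in the release notes (the names VocabModule and Pair are illustrative), here is how typed attributes, dictionaries, lists, and an experimental scripted class fit together:

    import torch
    from typing import Dict, List

    class VocabModule(torch.jit.ScriptModule):
        def __init__(self):
            super(VocabModule, self).__init__()
            # Wrapped attributes are saved in a separate archive
            # in the serialized model binary.
            self.words = torch.jit.Attribute([], List[str])
            self.counts = torch.jit.Attribute({"unk": 0}, Dict[str, int])

        @torch.jit.script_method
        def forward(self, word):
            # type: (str) -> int
            self.words.append(word)    # list append, as in Python
            return self.counts["unk"]  # dict lookup, as in Python

    # Experimental: annotate a class with @torch.jit.script.
    # Untyped arguments default to Tensor in TorchScript.
    @torch.jit.script
    class Pair(object):
        def __init__(self, first, second):
            self.first = first
            self.second = second

        def sum(self):
            return self.first + self.second

Both pieces are experimental territory, so expect details to shift in future releases.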

Recurrent neural networks

The PyTorch team wrote a tutorial on one of the new features in v1.1: support for custom recurrent neural networks (RNNs).

According to the PyTorch team:

Our goal is for users to be able to write fast, custom RNNs in TorchScript without writing specialized CUDA kernels to achieve similar performance. In this post, we’ll provide a tutorial for how to write your own fast RNNs with TorchScript. To better understand the optimizations TorchScript applies, we’ll examine how those work on a standard LSTM implementation but most of the optimizations can be applied to general RNNs.

Optimizing CUDA Recurrent Neural Networks with TorchScript

Follow the tutorial to begin writing custom RNNs.
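For a flavor of what the tutorial covers, here is a condensed sketch of an LSTM cell written as a ScriptModule, adapted from the approach described there (the random parameter initialization is a simplification for brevity):

    import torch
    import torch.jit as jit
    from torch import Tensor
    from torch.nn import Parameter
    from typing import Tuple

    class LSTMCell(jit.ScriptModule):
        def __init__(self, input_size, hidden_size):
            super(LSTMCell, self).__init__()
            # Random init keeps the sketch short; real code would
            # initialize these properly.
            self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size))
            self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size))
            self.bias_ih = Parameter(torch.randn(4 * hidden_size))
            self.bias_hh = Parameter(torch.randn(4 * hidden_size))

        @jit.script_method
        def forward(self, input, state):
            # type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
            hx, cx = state
            # One matrix multiply per input produces all four gates at once.
            gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih +
                     torch.mm(hx, self.weight_hh.t()) + self.bias_hh)
            ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)

            ingate = torch.sigmoid(ingate)
            forgetgate = torch.sigmoid(forgetgate)
            cellgate = torch.tanh(cellgate)
            outgate = torch.sigmoid(outgate)

            cy = (forgetgate * cx) + (ingate * cellgate)
            hy = outgate * torch.tanh(cy)
            return hy, (hy, cy)

Because the forward pass is TorchScript, the JIT can fuse the pointwise operations (the sigmoids and tanhs above) into fewer kernels, which is where much of the speedup discussed in the tutorial comes from.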

New tools & machine learning offerings

Alongside the 1.1 release, the PyTorch team also announced new projects and tools for machine learning engineers.

View the full list of these new offerings on the Facebook for Developers blog.


These include the newly open-sourced PyTorch BigGraph, which enables faster embedding of graphs where the model is too large to fit in memory. As a demonstration, PyTorch released a public embedding of the full Wikidata graph, with 50 million Wikipedia concepts, for the AI research community.

The blog also highlights noteworthy open source projects from the PyTorch community, as well as new resources for the machine learning community. PyTorch continues to grow in academia, too, and now finds a home in universities across the United States. A new Udacity course has also been added for learning outside the classroom.

These are just some of the highlights of what's new in version 1.1. View the full release notes on GitHub and take note of the latest deprecations, bug fixes, and more.
