It’s no secret that PyTorch is one of the most popular deep learning frameworks out there. You don’t have to venture too far to see how it is being used to shape future technology. However, you may not know the influence that Intel has on its continued development.
Join us and learn how to accelerate your deep learning workloads using the Intel® Optimization for PyTorch*.
About this event
Working closely with the PyTorch open-source developer community, Intel engineers continuously contribute upstream performance optimizations that accelerate training and inference workloads on Intel hardware. In addition, the Intel® Extension for PyTorch* is an open-source extension that features the most up-to-date optimizations for Intel’s latest hardware, most of which will eventually be upstreamed into the main PyTorch branch. Together, the upstreamed performance optimizations and the Intel® Extension for PyTorch* are known as the Intel® Optimization for PyTorch*.
In this talk, we will introduce the Intel® Optimization for PyTorch* and the high-performance libraries it is built on. We will demonstrate how it can be used to achieve significant performance accelerations with just a few lines of code changes. We will also discuss best practices for getting optimal performance. By the end of the talk, you will have a better understanding of how to use the Intel® Optimization for PyTorch* to get the most out of your deep learning workloads.
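To give a sense of what "a few lines of code changes" typically means, here is a minimal sketch of applying the Intel® Extension for PyTorch* to an inference workload. The package name (`intel_extension_for_pytorch`) and the `ipex.optimize()` call are the extension's documented entry point; the toy model and the `try`/`except` fallback to stock PyTorch are illustrative additions, not part of the talk material.

```python
import torch

# Sketch only: the extension adds one import and one optimize() call.
# The fallback lets the script run on stock PyTorch when the extension
# is not installed.
try:
    import intel_extension_for_pytorch as ipex
except ImportError:
    ipex = None

# Hypothetical toy model standing in for a real workload
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval()
data = torch.rand(8, 64)

if ipex is not None:
    # The key change: ipex.optimize() applies operator fusion and
    # memory-layout optimizations tuned for Intel hardware
    model = ipex.optimize(model)

with torch.no_grad():
    output = model(data)
print(output.shape)
```

On Intel CPUs the optimized model runs as a drop-in replacement; the rest of the training or inference script stays unchanged.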