Working closely with the PyTorch open-source developer community, Intel engineers continuously contribute performance optimizations upstream to accelerate training and inference workloads on Intel hardware. In addition, the Intel® Extension for PyTorch* is an open-source extension that delivers the most up-to-date optimizations for Intel's latest hardware, most of which will eventually be upstreamed into the main PyTorch branch. Together, the upstreamed performance optimizations and the Intel® Extension for PyTorch* are known as the Intel® Optimization for PyTorch*.

In this talk, we will introduce the Intel® Optimization for PyTorch* and the high-performance libraries it is built on. We will demonstrate how it can deliver significant performance gains with just a few lines of code changes, and we will discuss best practices for getting the best performance. By the end of the talk, you will have a better understanding of how to use the Intel® Optimization for PyTorch* to get the most out of your deep learning workloads.
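As a rough sketch of the "few lines of code changes" the abstract refers to, the Intel® Extension for PyTorch* exposes an `ipex.optimize()` entry point that wraps an existing model. The model, input shapes, and fallback below are illustrative assumptions, not material from the talk; the extension import is guarded so the snippet also runs on stock PyTorch:

```python
import torch
import torch.nn as nn

# Illustrative model; any eval-mode nn.Module would do.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3),          # 32x32 input -> 16 x 30 x 30 feature map
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 30 * 30, 10),
)
model.eval()

# The extension applies its optimizations via ipex.optimize(); guarded
# here so the sketch still runs where the extension is not installed.
try:
    import intel_extension_for_pytorch as ipex
    model = ipex.optimize(model)  # optionally: dtype=torch.bfloat16
except ImportError:
    pass  # fall back to stock PyTorch

x = torch.randn(1, 3, 32, 32)
with torch.inference_mode():
    out = model(x)
print(out.shape)  # torch.Size([1, 10])
```

The key point is that the training or inference loop itself is unchanged; the optimization is applied once, after the model is constructed and set to eval mode.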