Join us for Module 1: Introduction to Delta Lake - Thursday, May 19
-Bringing Reliability to Data Lakes (Concepts)
-Convert existing tables to Delta Lake [SQL]
-Unified Batch and Streaming [Python, SQL]
This 3-part workshop teaches you what Delta Lake is and how to use Apache Spark™ and Delta Lake to build reliable, large-scale distributed data pipelines. It covers the features of Delta Lake that, alongside Spark SQL and Spark Structured Streaming, bring ACID transactions and time travel (data versioning) to your batch and streaming ETL workloads. Slides, demos, exercises, and Q&A sessions will together help you understand the concepts behind the modern data lakehouse architecture.
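As a preview of the conversion and time-travel topics above, the sketch below shows what these look like in Delta Lake SQL; the table name and storage path are illustrative placeholders, not part of the workshop materials:

```sql
-- Convert an existing Parquet table in place to Delta Lake
-- (the path is hypothetical; point it at your own Parquet data)
CONVERT TO DELTA parquet.`/mnt/data/events`;

-- Time travel: query earlier versions of a Delta table
-- by version number or by timestamp
SELECT * FROM events VERSION AS OF 1;
SELECT * FROM events TIMESTAMP AS OF '2022-05-01';
```

These statements run in a Spark SQL session with Delta Lake enabled, such as a Databricks Community Edition notebook.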
-Sign up for Databricks Community Edition
-Participants should have prior experience with Apache Spark SQL and Python (PySpark)
Register and Attend the Full Series!
Module 2: Tuesday, May 31: DML and Schema
Module 3: Tuesday, June 14: SQL and the Transaction Log