A workshop covering how to apply DevOps and CI/CD practices to Apache Spark applications running on containerized platforms. The session demonstrates packaging Spark jobs as immutable artifacts, implementing automated quality gates including code and data tests, and promoting jobs through environments using pipeline-as-code.
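The "data tests" part of those quality gates can be sketched in plain Python. This is an illustrative stand-in, not material from the workshop: row dicts take the place of Spark DataFrame rows so the example runs without a Spark dependency, and all names (`EXPECTED_COLUMNS`, `validate_batch`) are hypothetical.

```python
# Minimal sketch of an automated data-quality gate, as a CI stage might run it.
# Plain dicts stand in for Spark DataFrame rows; all names are illustrative.

EXPECTED_COLUMNS = {"user_id", "event_time", "amount"}

def validate_batch(rows):
    """Return a list of human-readable violations; an empty list means the gate passes."""
    violations = []
    for i, row in enumerate(rows):
        missing = EXPECTED_COLUMNS - row.keys()
        if missing:
            violations.append(f"row {i}: missing columns {sorted(missing)}")
            continue  # skip value checks when the schema itself is wrong
        if row["user_id"] is None:
            violations.append(f"row {i}: user_id must not be null")
        if row["amount"] is not None and row["amount"] < 0:
            violations.append(f"row {i}: negative amount {row['amount']}")
    return violations

if __name__ == "__main__":
    good = [{"user_id": 1, "event_time": "2024-01-01T00:00:00Z", "amount": 9.5}]
    bad = [{"user_id": None, "event_time": "2024-01-01T00:00:00Z", "amount": -3.0}]
    print(validate_batch(good))
    print(validate_batch(bad))
```

In a real pipeline the same kind of check would run against the Spark job's output (for example, on a collected sample of a DataFrame) as one gate before the job is promoted to the next environment.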

From cd.foundation
DevOps for Data: Delivering and Orchestrating Apache Spark on Containers