How a GitLab engineer DESTROYED their main database…


In 2017, a GitLab engineer accidentally deleted the production database (about 300 GB) while trying to fix a replication issue during a spam attack. All five backup and replication mechanisms failed: Azure disk snapshots had never been enabled for the database servers, regular pg_dump backups were failing silently due to a PostgreSQL version mismatch, the S3 backup bucket was empty, LVM snapshots were taken only once every 24 hours, and the staging synchronization stripped out data such as webhooks. Recovery was only possible from a staging snapshot taken six hours earlier, resulting in the permanent loss of roughly six hours of user data, including issues, merge requests, and comments. GitLab live-streamed the recovery to over 5,000 viewers and later published a comprehensive postmortem. The key lesson: a backup only counts if you have actually tested restoring from it.
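That restore-testing lesson is straightforward to automate. Below is a minimal sketch of a scheduled restore drill for a PostgreSQL pg_dump archive; the dump path, scratch database name, and the users-table sanity query are hypothetical placeholders, not GitLab's actual setup.

```python
"""Minimal restore-drill sketch, assuming PostgreSQL client tools
(dropdb, createdb, pg_restore, psql) and a pg_dump custom-format
archive. All names below are illustrative placeholders."""
import subprocess
import sys

DUMP_PATH = "/backups/latest.dump"  # hypothetical backup location
SCRATCH_DB = "restore_verify"       # throwaway database for the test

def run(cmd: list[str]) -> str:
    """Run a command and fail loudly on error; a silent failure is
    exactly what hid GitLab's broken pg_dump jobs."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"FAILED: {' '.join(cmd)}\n{result.stderr}")
    return result.stdout

# Recreate the scratch database and restore the latest dump into it.
run(["dropdb", "--if-exists", SCRATCH_DB])
run(["createdb", SCRATCH_DB])
run(["pg_restore", "--no-owner", "--dbname", SCRATCH_DB, DUMP_PATH])

# Sanity check: the restored database must actually contain rows.
out = run(["psql", "-d", SCRATCH_DB, "-tAc", "SELECT count(*) FROM users"])
count = int(out.strip())
if count == 0:
    sys.exit("Restore produced an empty users table: backup is not real.")
print(f"Restore OK: {count} rows in users")
```

Failing loudly on an empty restore is the whole point of the drill: GitLab's pg_dump jobs produced no output and no alert, so the gap went unnoticed until the backups were actually needed.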
