Would it have been a catastrophe if GitLab hadn’t had that backup?

GitLab Fan · Published in GitLab Fan Club · Feb 6, 2017 · 2 min read


Some websites have written that GitLab was on the brink of catastrophe. But is that true?

I don’t think so. Of course, the reputational losses would have been much more serious than the data loss itself, but it wouldn’t have been the end of GitLab. If the same thing happened to GitHub, it would be the end for them, but I am sure they understand that and check their backups thousands of times per month.

GitLab makes money by selling Enterprise Edition licenses. GitLab.com is completely free of charge at the moment; as far as I know, there are not even any CI quotas. Obviously, GitLab’s customers were not affected by this incident, because when we say “customer”, we mean a user of self-hosted GitLab.

I think this is one reason why GitLab was so careless about GitLab.com’s backups. I guess that, in the beginning, GitLab.com was more of a free demo of GitLab EE than a real asset for the company. But people loved it and started to consider it an alternative to GitHub.

It looks like GitLab’s popularity has outgrown its infrastructure team’s capabilities, but that should be a temporary issue.

If you missed the story

For the full story, read GitLab’s blog post. Below is a summary of what happened to GitLab.com recently.

Incident chronology:

  • While trying to fix the effects of a spam attack, Yorick Peterse accidentally deleted the database files on a production server
  • None of the five backup strategies worked (a sketch of the missing restore check follows this list)
  • Luckily, Yorick had made a manual DB backup before he started working on the problem
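
A backup you have never restored is not really a backup, and that is the gap this chronology exposes. Here is a minimal sketch of what a periodic restore test could look like for a PostgreSQL database (GitLab.com runs on PostgreSQL); the database names and the projects table below are my own placeholders, not GitLab’s actual setup:

```python
import subprocess

DB_NAME = "production_db"     # hypothetical source database name
SCRATCH_DB = "restore_check"  # hypothetical scratch database for test restores
DUMP_FILE = "/tmp/backup.dump"

def run(cmd):
    # check=True makes failures loud; backup jobs failing silently was
    # a big part of this incident
    subprocess.run(cmd, check=True)

# 1. Take the backup in PostgreSQL's custom format
run(["pg_dump", "--format=custom", "--file", DUMP_FILE, DB_NAME])

# 2. Restore it into a scratch database: the step that proves the backup is usable
run(["dropdb", "--if-exists", SCRATCH_DB])
run(["createdb", SCRATCH_DB])
run(["pg_restore", "--dbname", SCRATCH_DB, DUMP_FILE])

# 3. Sanity-check the restored copy instead of trusting exit codes alone
result = subprocess.run(
    ["psql", "--dbname", SCRATCH_DB, "--tuples-only", "--no-align",
     "--command", "SELECT count(*) FROM projects;"],  # 'projects' is a placeholder
    check=True, capture_output=True, text=True,
)
assert int(result.stdout.strip()) > 0, "restored database looks empty"
print("Backup verified: restore succeeded and data is present.")
```

Run on a schedule and alerted on any non-zero exit, a check like this notices a broken backup before you actually need it.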

GitLab’s actions:

  • A document with the chronology of the incident and the recovery plan was created, shared publicly, and updated frequently
  • A live stream of the recovery process was launched, where GitLab engineers answered people’s questions and took advice from the chat

Results

  • GitLab.com was down for about a day
  • Six hours of data were lost (issues, projects, comments, etc.)
  • Repositories were not affected
  • Most users have shown understanding

Summary

  1. Having no working recovery plan is a shame
  2. No, even if GitLab had lost all the data, it would hardly have been a catastrophe for them
  3. The root cause of the problem is GitLab’s tremendous growth
  4. Transparency gives us hope that this will never happen again. When you are that transparent, people are ready to forgive a lot

UPD: GitLab has published a postmortem on the incident. It looks like the worst that could have happened was 24 hours of production data loss, since regular backups were apparently taken only once every 24 hours.
