How I destroyed the staging database

by Marco Aurélio Deleu, October 13th, 2017

And how I will prevent it from ever happening again

No customer data was harmed in this accident. The staging database was only used internally and was recovered within 20 minutes from an Amazon RDS snapshot.

Today I made a mistake. It resulted in dropping every table from the staging database. On the bright side, it wasn’t production data.

Feature 1: config:cache

If you run php artisan config:cache in your Laravel application, it will generate a bootstrap/cache/config.php file that contains every configuration value from your config/*.php files. The goal is to speed up Laravel’s bootstrapping process by caching the settings in a ready-to-go state.
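
For reference, the cached file is just a plain PHP script returning one big array. The sketch below is illustrative (the keys and values are made up), but it shows why Laravel no longer needs your config/*.php files or your .env once it exists:

```php
<?php

// bootstrap/cache/config.php -- roughly what config:cache produces.
// While this file exists, Laravel loads it directly and skips both your
// config/*.php files and your .env values.
return [
    'app' => [
        'name' => 'my-app',   // illustrative values only
        'env'  => 'local',
    ],
    'database' => [
        'default' => 'mysql',
        'connections' => [
            'mysql' => [
                'host'     => 'staging-db.example.com', // hypothetical staging host
                'database' => 'staging',
            ],
        ],
    ],
];
```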

Feature 2: RefreshDatabase

Laravel 5.5 ships with a new trait called RefreshDatabase and an excellent migrate command called migrate:fresh. The trait will migrate only once and then use transactions to speed up the test suite, but it will drop all existing tables before starting. Of course, it is meant to be used with SQLite :memory: or a local database.
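
For context, a test using the trait looks roughly like this (the class name and assertion are just placeholders):

```php
<?php

namespace Tests\Feature;

use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class ExampleTest extends TestCase
{
    // The trait runs a migrate:fresh-style reset against whatever connection
    // the resolved configuration points to, then wraps each test in a
    // transaction. If the config points at staging, staging is what gets wiped.
    use RefreshDatabase;

    public function testSomething()
    {
        $this->assertTrue(true);
    }
}
```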

The conflict

Your local environment is set to use a staging database that is shared with every developer. You do this because you want to test the performance of a feature, and the staging database has a pretty good data set for that. But since you want to test performance, you obviously want to cache the routes and the config.

You’re happy with the result. It’s time to move on to the next feature. As a TDD lover, you write a new test and run it to see where it breaks. It fails. Great, let’s implement it now. Somebody in the room yells: “Guys, what happened to the staging database?”

Your test suite didn’t load the :memory: value from your phpunit.xml. It didn’t need to, because bootstrap/cache/config.php already had every setting necessary. Sadly, not the correct settings.

You go into the AWS console and restore a snapshot from 5 minutes in the past. Within 15-20 minutes the staging environment is working again with no permanent damage.

The aftermath

After learning how to blow up a database unintentionally, let’s learn how to protect it.

1- 101: DROP permission

Don’t be lazy. Set up a new database user for your projects that doesn’t have the DROP permission on the staging environment.
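
As a sketch, the idea is simply that the credentials your application uses against staging belong to a database user without the DROP grant. The connection and env variable names below are hypothetical, and the actual GRANT (without DROP) is done on the database server itself:

```php
<?php

// config/database.php (excerpt) -- hypothetical names and env variables.
// The point is that this MySQL user was granted SELECT/INSERT/UPDATE/DELETE
// but not DROP, so even a misdirected migrate:fresh cannot drop the tables.
return [
    'connections' => [
        'staging' => [
            'driver'   => 'mysql',
            'host'     => env('STAGING_DB_HOST'),
            'database' => env('STAGING_DB_DATABASE'),
            'username' => env('STAGING_DB_USERNAME'), // restricted user
            'password' => env('STAGING_DB_PASSWORD'),
        ],
    ],
];
```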

2- Test Settings Whitelist

The CreatesApplication trait is the perfect place to check whether a specific setting is expected or not. It’s shared across test suites and it runs just before the database is set up.
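
A minimal sketch of that check, assuming a simple whitelist of connections and database names (the allowed values below are illustrative), could look like this:

```php
<?php

namespace Tests;

use Illuminate\Contracts\Console\Kernel;

trait CreatesApplication
{
    public function createApplication()
    {
        $app = require __DIR__.'/../bootstrap/app.php';

        $app->make(Kernel::class)->bootstrap();

        // Abort the whole test run unless the database settings match what we
        // expect during testing. Adjust the whitelists to your phpunit.xml.
        $allowedConnections = ['sqlite'];
        $allowedDatabases   = [':memory:'];

        if (! in_array(config('database.default'), $allowedConnections, true)
            || ! in_array(config('database.connections.sqlite.database'), $allowedDatabases, true)) {
            throw new \RuntimeException(
                'Tests are pointing at an unexpected database. Refusing to run.'
            );
        }

        return $app;
    }
}
```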

This will make sure that if you do screw up, unexpected environments will not be affected.

3- [Bonus] Making debugging easier

Your environments are protected. Everything is set. This is not gonna happen again. But the next time you cache your settings and forget about it, your tests will tell you that you have the wrong settings. You’ll probably stare at the phpunit.xml file, puzzled at how that is even possible. To make it clear and easy for future-you, three more lines of code can save some debugging time.
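
One way to do it, assuming the application’s getCachedConfigPath() helper, is to fail with an explicit message whenever a config cache is detected, right inside the same createApplication() method:

```php
<?php

// Extra check inside createApplication(), before the whitelist above:
// if a cached config exists, say so explicitly instead of letting the
// whitelist error send you hunting through phpunit.xml.
if (file_exists($app->getCachedConfigPath())) {
    throw new \RuntimeException(
        'Configuration is cached. Run "php artisan config:clear" before running tests.'
    );
}
```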

4- Conclusion

The next day I reported at my stand-up meeting that I had tested our database backup system. It is working marvelously. On the bright side, I also reported that I’m not going to be testing it again unexpectedly.

Anyway, be careful with production-state commands. Caching the config and the routes during development usually leads to unexpected behavior. It’s easy to lose time trying to understand why that 404 Page Not Found keeps happening when you forgot to clear the route cache. The config cache was an interesting discovery.