This happened to me (well, to someone on my team) a while ago, but with Mongo. The production database was SSH-tunneled to the default port on the guy's computer, and he ran tests that cleaned the database first.
Now... our scenario was such that we could NOT lose those 7 hours, because each lost customer record meant a $5,000 USD penalty.
What saved us is that I knew about the oplog (MongoDB's equivalent of MySQL's binlog), so after restoring the backup I isolated the lost hours of operations from the log and replayed them on the database.
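For anyone who hasn't had to do this, here is a minimal sketch of what that replay step can look like with pymongo. Every concrete value in it (the connection URIs, the timestamp window, the "myapp" namespace filter, the batch size) is a made-up placeholder, and a real recovery also has to worry about ordering, permissions, and collection UUIDs; it is only meant to show the shape of the technique: read the relevant slice of local.oplog.rs and re-apply it with applyOps.

```python
from pymongo import MongoClient
from bson.timestamp import Timestamp

# All of these values are placeholders for illustration.
SOURCE_URI = "mongodb://surviving-replica:27017"    # member whose oplog still covers the gap
TARGET_URI = "mongodb://restored-from-backup:27017" # freshly restored backup
START_TS = Timestamp(1690000000, 1)  # roughly the backup's snapshot time
END_TS = Timestamp(1690025200, 1)    # just before the destructive cleanup ran

source = MongoClient(SOURCE_URI)
target = MongoClient(TARGET_URI)

# On replica-set members the oplog lives in the capped collection local.oplog.rs.
oplog = source.local["oplog.rs"]

cursor = oplog.find(
    {
        "ts": {"$gt": START_TS, "$lt": END_TS},
        "op": {"$in": ["i", "u", "d"]},   # inserts, updates, deletes only
        "ns": {"$regex": r"^myapp\."},    # hypothetical application namespace
    },
    sort=[("ts", 1)],                     # replay in the original order
)

# applyOps re-applies raw oplog entries; batch them so each command stays small.
batch = []
for entry in cursor:
    batch.append(entry)
    if len(batch) == 500:
        target.admin.command("applyOps", batch)
        batch = []
if batch:
    target.admin.command("applyOps", batch)
```

If the dump was taken with mongodump --oplog, then mongorestore --oplogReplay (optionally with --oplogLimit) does most of this for you; the manual version is mainly useful when the backup didn't capture the oplog and you have to pull it off a surviving member yourself.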
The same happened to me many years ago: QA dropped the prod DB. It's been a while, but if I recall correctly, in the dropdown menu of the MongoDB browser, "exit" and "drop database" were right next to each other... Spent a whole night replaying the oplog.
No one owned up to it, but I had a pretty good idea who it was.
> No one owned up to it, but I had a pretty good idea who it was.
That sounds like you're putting (some of) the blame on whoever misclicked, as opposed to everyone who allowed this insanely dangerous situation to exist.
This. The person who erased the database in my case came forward to me as soon as we realized what had happened. At that moment I was very happy it was an "inside job"; it meant I could rule out a hack.
As was said before: he made a mistake. The real error was allowing the prod database to be port-forwarded from a non-prod environment. As head of engineering, that was MY error. So I owned up to it, and we changed policies.
Lesson learned and a lucky save.