If you can fuck up a database in prod, you have a systems problem caused by your boss. Getting fired for that shit would be a blessing, because that company sucks ass.
What if you’re the one who was in charge of adding the safeguards?
Never fire someone who fucked up (again, it isn’t their fault anyway). They know more about the system than anyone, and they can help fix it.
This is usually the way, but some people just don’t learn from their mistakes…
If you are adding guardrails to production… It’s the same story.
The boss should purchase enough equipment for a staging environment. Don’t touch prod: redeploy everything on a secondary with the new guardrails in place, load it from a read-only export of prod, and cut over services to the secondary when it’s ready.
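A rough sketch of what that clone-and-cutover could look like, assuming PostgreSQL and the stock pg_dump/pg_restore tools; the connection strings and dump file name are placeholders, not anything from this thread:

```python
#!/usr/bin/env python3
"""Clone prod onto a secondary via a read-only export, then cut over.

Sketch only: assumes PostgreSQL with pg_dump/pg_restore on PATH, and the
connection strings below are hypothetical placeholders.
"""
import subprocess

PROD_DSN = "postgresql://readonly_user@prod-db/app"   # read-only prod credentials (placeholder)
STAGING_DSN = "postgresql://admin@staging-db/app"     # the secondary box (placeholder)
DUMP_FILE = "prod_export.dump"

# 1. Read-only export from prod: pg_dump only needs SELECT and never writes.
subprocess.run(
    ["pg_dump", "--format=custom", f"--dbname={PROD_DSN}", f"--file={DUMP_FILE}"],
    check=True,
)

# 2. Restore onto the secondary, where the new guardrails are already deployed.
subprocess.run(
    ["pg_restore", "--clean", "--if-exists", f"--dbname={STAGING_DSN}", DUMP_FILE],
    check=True,
)

# 3. The actual cutover happens outside this script: repoint the app's DB
#    endpoint (DNS, config, or load balancer) at the secondary once it checks out.
print("Secondary loaded; flip the service's DB endpoint after verification.")
```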
Small companies often allow devs access to prod DBs. It doesn’t change the fact that it’s a catastrophically stupid decision, but you often can’t do anything about it.
And of course, when they inevitably fuck up, the blame will be on the IT team for not implementing the necessary restrictions.
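If you do get to put restrictions in, the bare minimum is a read-only role for day-to-day access. A minimal sketch, assuming PostgreSQL and psycopg2; the role name, password, and connection string are made up:

```python
import psycopg2

# Hypothetical admin connection to the prod database.
conn = psycopg2.connect("postgresql://admin@prod-db/app")
conn.autocommit = True  # so each GRANT takes effect immediately

with conn.cursor() as cur:
    # A login role that can read everything but write nothing.
    cur.execute("CREATE ROLE dev_readonly LOGIN PASSWORD 'change-me'")
    cur.execute("GRANT CONNECT ON DATABASE app TO dev_readonly")
    cur.execute("GRANT USAGE ON SCHEMA public TO dev_readonly")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO dev_readonly")
    # Cover tables created after this point, too.
    cur.execute(
        "ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO dev_readonly"
    )

conn.close()
```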
Frequent snapshots ftmfw.
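For what it’s worth, even a dumb timestamped dump on a cron schedule covers a lot of those moments. A sketch under the same PostgreSQL assumption, with placeholder paths and credentials:

```python
#!/usr/bin/env python3
"""Timestamped pg_dump snapshot -- run it from cron every hour or so.

Sketch only: the DSN and backup directory are placeholders.
"""
import subprocess
from datetime import datetime, timezone
from pathlib import Path

BACKUP_DIR = Path("/var/backups/app-db")            # placeholder path
PROD_DSN = "postgresql://backup_user@prod-db/app"   # placeholder read-only credentials

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
target = BACKUP_DIR / f"app-{stamp}.dump"

subprocess.run(
    ["pg_dump", "--format=custom", f"--dbname={PROD_DSN}", f"--file={target}"],
    check=True,
)

# Keep only the newest 48 snapshots so the disk doesn't fill up.
snapshots = sorted(BACKUP_DIR.glob("app-*.dump"))
for old in snapshots[:-48]:
    old.unlink()
```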