aimsmeee
(aimsmeee)
1
Pretty much that: I just put through some user account changes that fixed an SSO problem we were having, but managed to thoroughly mess up several users’ permissions (and, bizarrely, calendars??), including the CEO’s and COO’s. Today’s been a lot of fun.
So, what’s been your biggest ‘oh shit’ moment?
31 Spice ups
Clicking yes on a non-reversible, low-level NVRAM reset on a 6K piece of equipment.
4 Spice ups
cag16
(CAG16)
3
It’s a toss-up between rebooting the wrong 48-port network switch during a busy day and signing a 36-month agreement with a Managed Print service that had the absolute worst customer service. Yep, now that I think about it, the second one was worse, as it took 3 years to overcome that mistake vs. 5 minutes of downtime.
3 Spice ups
Long ago, helping the owner here organize his email. Now I am like an email butler.
5 Spice ups
eric7615
(Eric7615)
5
Accidentally arcing across the battery terminals in a UPS. That thing was toast, and $1,000 later it was replaced.
4 Spice ups
robhall
(robhall)
6
I knocked a brand-new Cisco ASR1001-X router off a cart once…
1 Spice up
I left my favorite job to work for family for a decade. Luckily I’m back, but I missed out on a decade of advancement and seniority.
2 Spice ups
Back at my first tech job, I thought my PC’s network cable had somehow come unplugged from the wall, so I plugged it back in without tracing it. My shift ended about 30 minutes later, so I went home without another thought. Came in the next day to find that I had created a network loop and taken down my campus’s network. They were pretty understanding, as I was just a student worker at the time. But apparently things got pretty hectic as they tried to track down what had happened.
6 Spice ups
Changed permissions at a high level on a file server and selected “Replace all permissions…” (I was really green, but I never made that mistake again)
Tested a Backup Exec restore and forgot to change the destination. Took the customer service manager 3 days to re-key the orders. Bought her a nice massage package.
Accidentally stepped on a power strip switch that turned out to be powering 5 customer service computers. (New job and they were converting from terminals to PCs, so they had stuff jerry-rigged for the short term)
I’m sure I could come up with some more…
2 Spice ups
lilmo037
(lilmo037)
11
Overwriting one client’s data with another’s was about the worst. I also managed to put a block rule in one of our firewalls that prevented anyone from accessing the server; we had to restore from a backup.
2 Spice ups
Typed 176 instead of 76 in the IP address and then needed to bring in outside security consultants, which ran $40k.
4 Spice ups
shnool
(SHNOOL)
14
Ah man, I feel you. It’d be a toss-up for me.
Instance 1:
WAY back in the day (1999?), I rewrote an entire software system we had written in Visual FoxPro using FoxPro databases. I designed the “upgraded” system to use a SQL backend. I even had department heads test their pieces of the software. At the time I think we had roughly 25-30 users on the system (so not a lot, but still a few).
Anyway, never once did I stress-test the system.
I rolled it out, and within a couple of hours we started to have weird update conflicts. It took me hours to realize that I wasn’t doing a record update, but a table update to the SQL database… we lost an entire 24 hours’ worth of scheduling thanks to that.
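For the curious, the difference boils down to something like the sketch below: a missing WHERE clause turns a one-record update into a whole-table update. This is a made-up illustration in Python/SQLite with hypothetical table and column names, not the actual FoxPro-to-SQL code.

```python
# Rough illustration only - hypothetical schema, SQLite standing in for the real backend.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE schedule (id INTEGER PRIMARY KEY, room TEXT, slot TEXT)")
cur.executemany("INSERT INTO schedule (room, slot) VALUES (?, ?)",
                [("101", "9:00"), ("102", "10:00"), ("103", "11:00")])

# What the app should have done: update just the record being edited.
cur.execute("UPDATE schedule SET slot = ? WHERE id = ?", ("9:30", 1))

# What it was effectively doing: no WHERE clause, so every row in the table
# gets overwritten with whatever the last user saved.
cur.execute("UPDATE schedule SET slot = ?", ("9:30",))

print(cur.execute("SELECT * FROM schedule").fetchall())
# every row now shows slot 9:30 - hence the "weird update conflicts"
```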
Instance 2:
Also quite a long time ago (2000?), I had a backup system (Backup Exec) that backed up to an IDE tape drive called OnStream. Backup Exec said every backup was good, but yours truly hadn’t test-restored with it yet (it was a 6-hour drive back then, and remote tools were spotty over the 56k dedicated network connection). Anyway, I went onsite to test/upgrade the server and backups, and when I shut down the server, which was running (ugh) software RAID 1, it would not boot back up. Worse yet, we said no problem, we’ll just restore from backup… We went through 2 weeks’ worth of tapes (yes, we only had 2 weeks) to find out that none of our backups were mountable (despite confirmed successful backup notifications).
This was early in my career, and my boss (the CFO) and I worked through the night using NTFS data restore tools on the server, attempting to get back the lost data from the failed hard drive. This was the last time I relied on software RAID 1 from Winders, and also the last time I ever used an OnStream drive. After that we tested ALL our backups and primarily relied on DDS or LTO tape backups, and we rarely had an issue again. Oh, and we restored everything but one guy’s documents folder, and he happened to be the one guy who gave IT the most grief he could every time we saw him. The CFO was great about it and told everyone we intentionally didn’t restore his stuff because he was so “cranky” to us.
4 Spice ups
I can’t smash the like button enough, @hulksmash72
2 Spice ups
It’s funny because it was just so natural when I got into IT that I thought this was it. 20+ years later… eh
@michael9595
1 Spice up
jbaker3
(JFEB)
18
Been over a decade and I can’t really think of anything worthy of an anecdote.
Blew breakers, unplugged remote sites, made some typos. Typical stuff that generally got fixed right after it happened.
Well, right now I’m dealing with an update I pushed out for our instant-issue debit card printing software without testing it enough. Apparently there was a bug in the software that’s causing all cards to print out with “pref” and “suff” around the customers’ names. I’m a little peeved that we weren’t notified of the bug (I’ve had the installer for about a week and a half) and had to get an updated installer from our contacts, but I should have done more thorough testing before pushing it out to everyone in our eBanking department.
My only other big goof happened a few years back when we had an update to deploy for our teller capture software (basically, it’s what makes our check scanners work for deposits). The installation process is fairly convoluted, and I made a mistake when copying the license/registration/config files, placing them in the wrong folder. Luckily it was only at the home office branch that I work out of, so I didn’t have to drive to one of our remote branches, but it still sucked having to redo 10 installations at warp speed first thing in the morning to make sure we had the tellers operational before customers arrived.
2 Spice ups