Hacker News

When I had my first IT job in ~1991 I caused millions of pounds' worth of loss to my employer, a well known retailer in the UK, due to a bug I wrote.

My boss covered my arse. I love that man, and I've never made a serious mistake since, as it's made me risk averse.

Gitlab did the right thing here by owning the situation and making it public.




Now you'll just have to share that story or else I won't get any sleep tonight. Please.


I only posted this a few weeks ago (check my comment history) but here goes again:

I was a programmer in my first IT job in 1992 for a large retailer in the UK. I was working on some stock-related code for the branches, of which they had thousands. They sold a lot of local goods like books which were only sold in a couple of stores each - think autobiographies of local politicians, local charity calendars, that sort of thing. The problem with a lot of these items was that they were not on the central database. This caused a problem with books especially, as you don't pay VAT on books in the UK - but if you can't identify the item as a book, the company had to pay it anyway. Identification mattered because on some books or magazines you DID pay VAT, because they came with other stuff - think computer magazines with a CD on the front. So my code looked at different databases and historical info to work out the actual VAT portion payable, which was usually nil.

I wrote the code (COBOL, kill me now), the testers tested it, and all went OK until they deployed, on a Friday night. The first I knew of it was coming in on Monday morning. The ops had been working throughout the weekend, as the entire stock status for each branch had been wiped. They had to pull a previous week's backup from storage; this didn't work, as they didn't have the space for both copies to merge, so IBM had to motorcycle-courier some hardware from Amsterdam, etc etc. As this was an IBM mainframe with batch jobs, we also had to stop subsequent jobs in case they made the fuckup worse, so none of the stock/finance stuff could run at all.

The branches were royally fucked on Monday as, without any stock status to know what to order, they got nothing - no newspapers, books, anything. We even made it into the Daily Mail; I think it took at least 3 weeks before ordering was automatic again. It cost the company literally millions in overtime, lost sales, consultants and reputational damage - it was big news in the national newspapers.

The root cause? I processed the data in one run per branch. I'd copy the branch data to a separate area, delete the main data, then stream it back. My SQL, however, deleted the main data for ALL branches. It didn't get picked up in QA because, like me, they only tested with a single branch's dataset at a time.
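The class of bug described - a per-branch copy/delete/restore where the DELETE is missing its branch filter - can be sketched like this. This is a hypothetical reconstruction in Python/SQLite, not the original COBOL or its SQL; all table and function names are made up:

```python
import sqlite3

# Toy stock table with rows for several branches.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (branch INTEGER, item TEXT)")
conn.executemany("INSERT INTO stock VALUES (?, ?)",
                 [(1, "book"), (2, "calendar"), (3, "magazine")])

def archive_and_clear(conn, branch, buggy=False):
    # Step 1: copy the branch's rows to a separate working area.
    conn.execute("CREATE TABLE IF NOT EXISTS work (branch INTEGER, item TEXT)")
    conn.execute("INSERT INTO work SELECT * FROM stock WHERE branch = ?",
                 (branch,))
    # Step 2: delete the branch's rows from the main table.
    # The buggy version omits the WHERE clause, so it wipes EVERY
    # branch, not just the one being processed.
    if buggy:
        conn.execute("DELETE FROM stock")
    else:
        conn.execute("DELETE FROM stock WHERE branch = ?", (branch,))

archive_and_clear(conn, 1, buggy=True)
remaining = conn.execute("SELECT COUNT(*) FROM stock").fetchone()[0]
print(remaining)  # 0 - every branch wiped, though only branch 1 was processed
```

Note why QA missed it: with only a single branch's data loaded, "delete this branch" and "delete everything" are indistinguishable - both versions leave the table empty.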


Wow.... Very interesting, thanks for sharing.


I'm curious to hear the story too. Thank you!


I replied to the other child post.





