
That script I wrote three years ago is now doing what? How many times?

Whipping up a RAID repair nearly produced data doom

Who, Me? Welcome once again to the wafer-thin buffer between the weekend and the workday that we call Who, Me?, in which Reg readers share tales of tech tricks that didn't quite work out as hoped.

This week, our hero is someone we've chosen to Regomize as "Indiana" who has reached deep into the archive of a top secret storage facility to bring us this tale of a lost RAID.

Indy was doing a bit of sneaky sub-contracting on the side for a small local business that couldn't afford a full-time tech. One day, Indy's client asked him to have a look at their server, which was behaving strangely.

It didn't take Indy long to work out that the RAID controller had died, and therefore the bespoke data retrieval system they had developed was useless.

Tragically, the business could afford neither a full-time tech nor, at that particular moment, a new RAID controller. It was basically going to fall over unless a quick and dirty – and cheap – fix could be found.

Indy therefore whipped up a clunky but effective solution. He reconfigured each of the disks in the RAID as a standalone drive, and wrote a script that essentially created a system of five rotating backup copies of the database. It was well short of RAID, but it allowed some measure of redundancy.
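For the record, a poor man's rotation of that kind usually boils down to copying the live database file into numbered generations on each of the spare disks. What follows is purely a sketch – Indy's actual script, paths, drive letters and database filename are all unknown, so every name below is a hypothetical stand-in – but it shows the general shape of such a scheme, run from a scheduled task:

# Hypothetical sketch of a five-generation rotating copy script.
# None of these paths or filenames come from the story itself.
import shutil
from pathlib import Path

LIVE_DB = Path(r"D:\data\live\business.db")              # the copy the desktop shortcut opens
BACKUP_DRIVES = [Path(fr"{d}:\backup") for d in "EFGHI"]  # the ex-RAID disks, now standalone
KEEP = 5  # generations kept per drive

def rotate(dest_dir: Path) -> None:
    """Shift older generations up one slot and copy the live database in as generation 1."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    # Drop the oldest generation, then bump the rest: .4 -> .5, .3 -> .4, and so on.
    oldest = dest_dir / f"business.db.{KEEP}"
    if oldest.exists():
        oldest.unlink()
    for n in range(KEEP - 1, 0, -1):
        src = dest_dir / f"business.db.{n}"
        if src.exists():
            src.rename(dest_dir / f"business.db.{n + 1}")
    shutil.copy2(LIVE_DB, dest_dir / "business.db.1")

if __name__ == "__main__":
    for drive in BACKUP_DRIVES:
        rotate(drive)

The whole thing only works, of course, if everyone keeps opening the one designated live copy – which brings us to the next bit.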

The danger, of course, was that each of the five copies of the database looked to the system like it was the production database. To avoid the obvious pitfalls that would create, Indy set up a shortcut on the desktop for all staff to use – so that only one copy of the database would get opened and edited.

That went just fine. For a while.

Fast forward three years, and our hero got a call from the tech who by then worked full time for said company. Something had gone wrong, and no-one understood why.

Indy got a sinking feeling in his gut like he'd eaten bad dates. From what the tech was describing, there were multiple copies of the database all over the system – many more than just five. No-one was using the desktop icon to start the database, so new copies were spawning all the time and filling the drives to capacity.

And no-one knew exactly how long this had been going on – only that the entire system had become gradually more corrupt.

Naturally, hero that he is, Indiana immediately owned up to … absolutely nothing. He still has no idea what went wrong with the sync script, because the last thing he wanted to do was actually go and look at the thing for himself. That's how you get your face melted off.

He agreed with the tech on the phone that this was very strange and mysterious. Probably the best thing to do was to figure out which was the most recent version of the data (if possible), then wipe the system and restore.

Oh, and maybe spring for a new RAID controller.

Here at Vulture Central, we doff our Fedoras to those who, like Indy here, somehow escape from their messes in the nick of time (and hey, three years ain't bad for a script that was doing the work of a busted RAID controller). If you've ever barely scraped out of a disaster that was maybe a little bit of your own making, let us know in an email to Who, Me? and we'll salute your exploits – anonymously, of course. ®
