selmslie wrote:
The problem is not the chosen backup method. The way he described it, it could have happened no matter how he was backing it up.
The corruption happened on his own computer and that got backed up!
The solution is to keep only your current work on your computer where you are likely to spot any problems before they get permanently backed up. Remove stuff from your local system that you are no longer working on.
Back up only what you have changed recently (incremental backup) on two different schedules.
I back up changes to my data to two local removable hard drives within 4 hours of working on it. And I keep a third backup in a bank safety deposit box and rotate the three drives once a month. If a corrupted file gets backed up (it's happened a couple of times), there is a good chance that I have an uncorrupted copy at the bank.
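The "copy only what changed since last time" idea can be sketched in a few lines. This is a minimal illustration, not the poster's actual setup: the function and state-file names are mine, and real tools such as rsync or dedicated backup software handle deletions, hard links and verification far better.

```python
import os
import shutil
import time

def incremental_backup(source_dir, backup_dir, state_file):
    """Copy only files modified since the last recorded backup run."""
    # Read the timestamp of the previous run (0 = back up everything).
    last_run = 0.0
    if os.path.exists(state_file):
        with open(state_file) as f:
            last_run = float(f.read().strip())

    copied = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_run:
                rel = os.path.relpath(src, source_dir)
                dst = os.path.join(backup_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied.append(rel)

    # Record this run's time for the next incremental pass.
    with open(state_file, "w") as f:
        f.write(str(time.time()))
    return copied
```

Running the same function against two different backup drives on two different schedules gives you the staggered copies described above, so a corrupted file has to survive both cycles before every good copy is gone.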
selmslie wrote: The problem is not the chosen backup method. The ...
If you watched the video, he attributed the issue to “bit rot”. By definition, it typically occurs (if at all) when a HD is exposed to external magnetic fields that cause bits to flip state, or on an SSD due to electron migration. The fact is that bits in DRAM are flipped all the time, primarily by bombardment from particles created by cosmic rays. Usually it isn’t noticeable, but sometimes it is. Here’s a classic example. The scientists at Lawrence Livermore National Lab running an IBM Blue Gene HPC cluster with IBM storage noticed that some of their models, which could run for days, were just plain wrong. But when the data was reread from disk, it was correct. It was determined that the corruption occurred in the disk cache FIFOs, likely caused by cosmic rays. The answer was to change storage vendors to a company that employed parity checking and ECC on both reads and writes. It’s exactly the same reason that more expensive ECC DRAM is used in commercial servers but rarely in client machines.
Now let’s think about Tony’s issue. He’s contending that a huge number of images were completely ruined by this magical “bit rot”, but if you look at the images displayed, there’s not one or two or a few bits flipped. For images to be that corrupted, thousands of bits or more in MByte-sized images would have to be flipped, and the same would have to happen on virtually all his images. That just doesn’t happen. Next time you have a chance, look at the published spec for total errors on a typical HD (there is such a spec) - it’s vanishingly low, maybe 1 in 10^15 bits. The real problem with his data is most likely the file system on his storage. The problem with consumer grade NAS is that unlike an individual disk, a SAN or a RAID, the NAS “owns” the file system, and unlike professional grade file systems such as ZFS (or NTFS, or XFS, or GPFS, or AFS or…), it’s usually a cobbled-up file system created by a small consumer NAS company that has the resources neither to test nor to support it properly.
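The back-of-the-envelope arithmetic behind that claim is easy to run. The 1-in-10^15 error rate is the drive spec figure from the post; the ~5 MB image size is my own assumption for illustration.

```python
# Rough odds check: how many flipped bits would a spec-level
# HD error rate actually produce in one image read?
BER = 1e-15           # ~1 unrecoverable bit error per 10^15 bits read
image_bits = 5 * 8e6  # assume a ~5 MB JPEG, expressed in bits

expected_errors_per_image = image_bits * BER
print(expected_errors_per_image)       # ~4e-8 flipped bits per read

images_per_single_error = 1 / expected_errors_per_image
print(f"{images_per_single_error:,.0f}")  # ~25,000,000 image reads per ONE bit error
```

In other words, at the spec rate you would expect a single flipped bit only once every ~25 million image reads; visibly mangling thousands of bits in nearly every image would require an error rate billions of times worse than the spec, which points away from the drives themselves.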
Tony’s first problem is that he doesn’t really understand data storage, and his second is that he isn’t smart enough to go to a professional for advice or to use professional grade products. His third problem is that he isn’t using versioning or a file system with error checking and correction. And his fourth is not appropriately archiving his most valuable data.

His reason for not having used cloud storage is that he has too much data because lots of it is 4K video, but what he lost that’s so irreplaceable are family photos, not the video, which is already in the cloud if it’s on YouTube. You can store tens of thousands of JPEGs in a TB of cloud storage. And when he does list some cloud providers, they’re all second tier rather than major providers. Why are Backblaze and Carbonite, for example, “second tier”? Because they don’t have enough geographically separated data centers to provide good geographic redundancy. Who he should have listed was Amazon, Google, Microsoft, Apple and maybe IBM. Those data centers do have versioning. They use professional grade servers with professional grade storage, running professional grade file systems, and that would have saved his bacon. Or he could have put 10,000 JPEGs on 10 MDisks in a bank vault, and that would have saved him also.

But what would have really saved him (and you) is for amateurs to quit spouting their own silly (and often mistaken) ideas about data storage architecture, concentrate on photography, and trust their most valuable asset, their data, to professionals: professional grade storage, file systems, methods and archiving media.
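The “tens of thousands of JPEGs in a TB” claim checks out with simple division; the ~10 MB per-photo figure below is my own (generous) assumption, not from the post.

```python
# Sanity check: how many photos fit in a terabyte of cloud storage?
TB = 1e12          # 1 TB in bytes (decimal, as storage is sold)
jpeg_bytes = 10e6  # assume a generous ~10 MB per high-resolution JPEG

photos_per_tb = int(TB / jpeg_bytes)
print(photos_per_tb)  # 100000
```

Even at 10 MB apiece, a single terabyte holds 100,000 photos, comfortably past “tens of thousands”, so the 4K video library is no excuse for leaving the irreplaceable family photos out of the cloud.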
Finally, he seems to think that consumer grade UPSs condition your power. They don’t. The only protection against spikes on the line is a set of simple MOVs, which wear with each spike they absorb until, after a while, they’re useless (and your electronic devices typically have MOVs on the incoming line anyway). Unlike professional grade Lieberts, which actually do condition the power, a consumer UPS’s internal inverter only comes on when the power fails. The only surge suppressors that work long term are inductive suppressors, and a decent one will cost you $100 or so. Relying on a UPS to hold your computer up long enough to shut down gracefully is good practice, but don’t expect it to condition your power any better than a cheap MOV-protected outlet strip. It won’t save you from a nearby lightning strike either.
Btw, I do think he’s correct that redundant card slots are a good idea if you’re photographing critical, non-repeatable events such as weddings.
End of rant…