Developing a Backup Strategy
If you have ever lost even one file from your computer - whatever the reason - you know better than anyone the value of keeping your working files backed up.
It is easy to agree that backing up is a good idea. It is a little harder, though not much, to go out and buy the necessary equipment and software. But, strangely, it seems to defy human nature to use a good backup system consistently. That means day in and day out. That means redundant disk arrays, virtualisation, scheduled jobs, differential processing with daily or weekly full backups. The list goes on.
Making that final, consistent click on the 'Backup Now' button is one of the hardest things to achieve in modern business life - even though the action itself is one of the simplest, one we all perform hundreds, if not thousands, of times per day: the click of the mouse. Save!
1. What do you back up?
Data files
The most frequent changes are likely to occur in data you have created yourself. Whether this means .doc or .xls files, local email .pst files, or even .mp3 and .avi files, you will almost always face an impossible task recreating them from scratch in case of loss. Your in-situ databases (.sql etc.) count as data files too. Ditto for your backup files themselves, by the way: think about copying them to a secondary medium.
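Before deciding on a backup solution, it helps to know how much user-created data you actually have. A minimal sketch, assuming the file types named above (the function name and extension list are illustrative, not part of any particular backup product):

```python
from pathlib import Path

# Extensions taken from the examples above; adjust the set to match the
# file types your organisation actually creates.
USER_DATA_EXTENSIONS = {".doc", ".xls", ".pst", ".mp3", ".avi", ".sql"}

def find_user_data(root):
    """Return the user-created files under `root` and their total size in bytes."""
    files = [p for p in Path(root).rglob("*")
             if p.is_file() and p.suffix.lower() in USER_DATA_EXTENSIONS]
    total = sum(p.stat().st_size for p in files)
    return files, total
```

Running this over a home directory gives a first estimate of the backup volume you need to plan for.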
Application software
Many companies have only an office productivity suite installed (word processor, spreadsheet, presentation software, email), so they may not benefit greatly from backing these programs up. The more specialised the applications used in your organisation, the more you will gain from paying attention to how you store or back up your application software (including serial numbers, licence keys, passwords and hardware dongles, as relevant). Normally a single copy of each application will suffice for the entire organisation.
Operating system (OS)
This is not normally an obvious candidate for backing up, as most users and organisations already hold the original install disks (which are themselves the backups). There are two exceptions, however:
- Back up the OS when it has been customised over a long period of time, both via official patches and by the user. In this case restoring from scratch would take too long.
- Back up the OS when you have tailor-made applications that may 'break' if the OS changes in any way - although it would be a clear sign of a poorly written application if that happened.
The easy way to deal with this is to create an organisation-wide standard system-install image, which can be used to set up new machines or to restore existing machines to a standard starting point. The downside is that every member of the organisation must have broadly similar hardware, so that no one suffers performance issues.
2. Think about the recovery process from the start
You back up your data for a reason: if the worst happens, you can get it back quickly. Nevertheless, backup solutions tend to be advertised using metrics that focus on the backup process alone (speed, convenience, volume etc.), with little emphasis on the second half of the job: restoring lost data. Restoring can be arduous or easy depending on the solution, but it is rarely demonstrated by providers. Worse, users often omit to test whether their chosen solution can actually restore what has been backed up - a dangerous practice, yet one that is mostly ignored. Data corruption in the backup file can have monstrous consequences if everything is backed up to a single .tar or .zip file.
Testing the restore process is just as important as testing the back-up process.
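One simple way to test a restore is to recover the backup to a scratch directory and compare checksums against the originals. A sketch of that check (the function names here are illustrative; any checksum-based comparison achieves the same thing):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_dir, restored_dir):
    """Compare every file under original_dir against its restored copy.
    Returns the relative paths that are missing or whose contents differ."""
    problems = []
    for src in Path(original_dir).rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(original_dir)
        dst = Path(restored_dir) / rel
        if not dst.is_file() or sha256_of(src) != sha256_of(dst):
            problems.append(str(rel))
    return problems
```

An empty result means every backed-up file came back byte-for-byte intact; anything else tells you exactly which files your solution failed to restore.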
3. Which medium to store data on?
Tape
If you already have a tape-based backup system in use, think about replacing it. Tape technologies change and the medium is slow, so it is on the way out.
Hard disks
The most likely backup medium. Allows fast backup/restore cycles and is inexpensive.
Optical disks (CD/DVD)
This medium was originally promoted as perfect for data backups, but there have been reports questioning the stated longevity of the disks. Even expensive Blu-ray disks hold only up to 50GB.
Floppy disks
Uhm, never go there again!
Solid state storage
With the rapidly falling price of memory chips, this is said to be the future of hard disks. Silent in operation and with no moving parts, solid state drives are likely to last longer than traditional hard disks. Read/write access is faster, and defragmentation is not required as with spinning disks, reducing the likelihood of data loss.
Remote/Online storage services
There is an increasing array of remote data backup services available to individuals (Carbonite, Mozy, Backblaze etc.) as well as to corporations (Amazon S3, Rackspace, your own data centre etc.). All of these services are constrained by your Internet access speed. Standard broadband is asymmetric - as in ADSL - meaning uploads (your backups) are much, much slower than downloads. A typical 500GB laptop hard disk would take several weeks to back up this way. The only way to change that is to switch to symmetric broadband - as in SDSL - but such lines are ordinarily prohibitively expensive and often capped at no more than 2Mbit/s, compared to consumer broadband download speeds of 8Mbit/s to 20Mbit/s.
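The "several weeks" claim is easy to verify with back-of-envelope arithmetic. A minimal sketch (the function is illustrative and ignores protocol overhead and throttling, which only make things worse):

```python
def upload_days(data_gb, upload_mbit_per_s):
    """Days needed to push `data_gb` gigabytes through an uplink of
    `upload_mbit_per_s` megabits per second, ignoring overhead."""
    megabits = data_gb * 8 * 1000          # 1 GB ~ 8,000 megabits
    seconds = megabits / upload_mbit_per_s
    return seconds / 86_400                # seconds per day

# A 500GB disk over a typical 1Mbit/s ADSL uplink:
# upload_days(500, 1) -> roughly 46 days
```

So even under ideal conditions, the initial full upload of a 500GB disk takes well over a month on a 1Mbit/s uplink; online services are really only practical for the much smaller incremental changes that follow.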
4. Onsite vs. offsite
On-site backup provides immediate redundancy, but is prone to disaster risk (fire, theft etc.).
Off-site backups insure against disasters and come in two formats:
- Create the backup on-site and move the physical medium off-site at regular intervals. Some sort of rotation scheme for the backup media is needed to keep this process manageable.
- Move the data off-site during the backup process using an online service. This is generally the most convenient method, but it can be slow for large data volumes, and you must keep your payments up to date or risk deletion of your data. Users have reported issues with several services.
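A common rotation scheme for the first option is "grandfather-father-son": daily media reused each week, weekly media reused each month, monthly media kept longest. A minimal sketch of picking the slot for a given day (the slot names and the exact promotion rules are illustrative assumptions, not a standard):

```python
from datetime import date

def rotation_slot(day):
    """Grandfather-father-son rotation: which backup medium a given
    day's backup overwrites. Monthly sets are kept longest; daily
    media are reused every week."""
    if day.day == 1:                                 # first of the month
        return f"monthly-{day.month:02d}"            # 12 media, kept a year
    if day.weekday() == 6:                           # Sunday
        return f"weekly-{(day.day - 1) // 7 + 1}"    # up to 5 media, kept a month
    names = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
    return "daily-" + names[day.weekday()]           # 6 media, reused weekly
```

With a scheme like this you always know which medium travels off-site and which one comes back to be overwritten.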
5. Data repository models
Unstructured
Generally a collection of floppy disks, CD-Rs, DVD-Rs, or even hard disks, used ad hoc for local backup. Does not provide the same level of recoverability as the other methods.
Full and incrementals
One full, time-consuming backup followed by a series of incremental backups, until the next full backup. Restoring data means taking the most recent full backup and applying each incremental backup taken since then, in order.
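The restore logic just described can be sketched in a few lines. This is an illustrative helper, not any particular backup tool's API:

```python
def restore_sequence(backups):
    """Given backups as (timestamp, kind) tuples with kind 'full' or
    'incremental', return the ones to replay, in order: the most
    recent full backup plus every incremental taken after it."""
    ordered = sorted(backups)
    last_full = max(i for i, (_, kind) in enumerate(ordered) if kind == "full")
    return ordered[last_full:]
```

Note that losing or corrupting any single incremental in that chain breaks every restore point after it, which is one more argument for testing restores regularly.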
Full and differential
Similar to incremental backups, but using a single differential file (in addition to the full backup file) instead of multiple incremental files.
Mirror and reverse incrementals
One full, time-consuming backup followed by periodic synchronisations. The backup retains the data required to restore previous versions of changed files.
Continuous data protection
There is no backup batch window with this method, since backing up is a continuous process. It is only practical with byte-level backups, as file-level backups would require a potentially unlimited amount of storage space (think of a 2GB .pst file: for every email sent or received, the whole file would have to be backed up again). Not the same as RAID level 1 (disk mirroring), which cannot restore previous versions of a file.
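The storage arithmetic behind that .pst example can be made concrete. A sketch, assuming illustrative numbers (100 changes of about 50KB each to a 2GB file):

```python
def cdp_storage_gb(file_size_gb, changes, change_size_gb):
    """Storage consumed by continuously protecting one file after
    `changes` modifications: file-level copies vs byte-level deltas."""
    file_level = changes * file_size_gb    # the whole file stored each time
    byte_level = changes * change_size_gb  # only the changed bytes stored
    return file_level, byte_level

# 100 emails of ~50KB landing in a 2GB .pst file:
# file-level protection consumes 200GB; byte-level, around 5MB.
```

A factor of tens of thousands between the two approaches is why continuous data protection is only viable at the byte level.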
Full system backup
Requires specialist software, as it backs up from and restores to the 'bare metal' machine. Useful in some situations, but both backing up and recovering a full system are time-consuming.
This Wikipedia article is a good starting point for further exploration of Data Backup.
So what are you waiting for?
- Look at what they are doing at 'The Company': Surviving Backup Hell
- Look at our Backup Proposition: Backup
Contact us right away: