Talk about data deduplication (in the backup and archiving domain) has been gaining a fair amount of momentum over the last few years! Most enterprise backup software vendors, like Symantec (Veritas) and EMC (Avamar), support deduplication in some form or other – some do deduplication at the source system (the one being backed up) and others at the target (the backup/storage server). There are also pure deduplication-based storage hardware vendors, like Data Domain, who have gained considerable traction in the enterprise.
I am actually quite surprised by the hype around deduplication and the adoption it seems to have gained in the enterprise. The reason I am surprised is similar to the one I articulated in my previous blog post: “Synthetic Full Backup in the online backup world – Are we inviting trouble?“. The crux of my argument is that backup and archiving is about adding redundancy to the data, not about eliminating redundancy in the name of storage or network bandwidth efficiency. So it is my contention that, wherever feasible, we should keep as much redundancy as possible for the data that needs backing up, and only under unavoidable circumstances should we resort to synthetic full backups or deduplication. Actually, let me state this more strongly: “Avoid falling for the synthetic full backup or deduplication hype if you can!”
But who am I to say this? I am neither an “industry expert” nor am I Steve Jobs, able to say “this is what is good for you; take it or leave it”. Given that we are a niche company trying to grow (and growing) in the face of industry giants, we are actually contemplating building deduplication support into our data backup software, StoreGrid. While not many of our customers/partners are asking for it, we do get the occasional prospect saying that deduplication (rather, the lack of it) is a show-stopper for them!
As we started thinking about and designing the best way to support deduplication in StoreGrid, we encountered many options to consider and many complexities to handle. But in the end, we were left with a fundamental question – whether full-fledged deduplication is even possible in the online backup world! Before I explain some of the options and complexities, and why we think full-fledged deduplication may not be feasible in a pure online backup scenario, let me first give a broad overview of the two deduplication approaches.
Deduplication at the source (client) vs. at the target (backup server):
There are vendors who claim they do the deduplication at the source (i.e. the client system that is being backed up), as opposed to others who claim they do deduplication at the target (i.e. at the backup server). If deduplication is done at the source, it is easy to deduplicate data at a block level across all files within that source system. If deduplication is done at the target, it is equally easy to deduplicate data at a block level across all files across all the client systems backing up to that backup server. Quite obviously, deduplicating across all files across all clients will be much more effective than deduplicating only within a single client system. It is theoretically possible to do deduplication at the source system and still deduplicate across all systems backing up to the backup server. In this case, each client (source) has to continuously keep itself updated with the meta-data of the blocks already stored on the backup server. The meta-data here is simply the checksums of the blocks, which are looked up to identify identical blocks of data. I have not personally tested such a product – i.e. one that does deduplication at the source system and still deduplicates across all systems backing up to the backup server. But this may not be as efficient, in terms of performance, as doing the deduplication at the backup server end, especially if the backup/storage server resides in a remote data center (and the meta-data needs to be fetched each time from the remote server).
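To make the block-level idea concrete, here is a minimal sketch of checksum-based deduplication (hypothetical Python, not StoreGrid’s actual implementation): each file is split into fixed-size blocks, and a block is physically stored only if its checksum has not been seen before.

```python
import hashlib

BLOCK_SIZE = 4 * 1024  # 4 KB blocks; real products tune this or use variable-size blocks

block_store = {}   # checksum -> block data (stands in for the backup server's storage)
file_index = {}    # file path -> ordered list of checksums needed to rebuild the file

def backup_file(path):
    checksums = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            # Store the block only if an identical block has not been seen before.
            if digest not in block_store:
                block_store[digest] = block
            checksums.append(digest)
    file_index[path] = checksums

def restore_file(path, out_path):
    # A file is reconstructed by stitching its blocks back together in order.
    with open(out_path, "wb") as out:
        for digest in file_index[path]:
            out.write(block_store[digest])
```

Whether the `block_store` index lives on the client or on the backup server is exactly the source-vs-target question discussed above.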
Armed with this background, let’s dive deeper into the implications of these approaches in the online backup context…
Option 1: Deduplication at target:
One of the most important requirements in the online backup domain is that data is encrypted before it leaves the source system and is sent over the internet to the remote data center (where it is stored). Deduplication works by finding identical blocks across all the files and physically storing only one copy of each block in the storage system. Encryption, on the other hand, works by destroying all patterns in the data and making it look random. Because encryption eliminates those patterns, running deduplication over a set of encrypted files has essentially no effect – identical blocks of data, once encrypted, no longer look identical. That means doing deduplication at the remote storage end, where all the data from the different client systems arrives encrypted, is technically not possible. And not encrypting the data that is backed up to the remote data center is not really an option in the online backup world. Another point to note is that deduplication at the target doesn’t really help much in an online backup scenario anyway – clients still send all the data across, and hence save nothing on bandwidth! Of course, you save on server-side storage, but optimizing that, I’d assume, comes a distant second to optimizing bandwidth utilization for online backups.
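A small illustration of why this is so – a sketch using the third-party `cryptography` package purely as an example: the very same 4 KB plaintext block, encrypted twice with fresh random IVs, produces ciphertexts that share no common pattern, so a checksum comparison at the target finds nothing to deduplicate.

```python
import os, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
block = b"A" * 4096  # the same plaintext block, backed up twice

def encrypt(data):
    iv = os.urandom(16)  # a fresh random IV for every backup run
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(data) + enc.finalize()

# The plaintext blocks are trivially identical...
print(hashlib.sha256(block).digest() == hashlib.sha256(block).digest())                    # True
# ...but their ciphertexts differ, so the target sees two "unique" blocks.
print(hashlib.sha256(encrypt(block)).digest() == hashlib.sha256(encrypt(block)).digest())  # False
```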
Option 2: Deduplication at source – with a common encryption key:
As I said before, it is theoretically possible to do deduplication at the source and still deduplicate across all client systems in an organization. For that to work, either the data must not be encrypted during backup, or all the client systems have to use a common encryption key. Not encrypting the data is, again, not an option with online backups. Using a common encryption key would mean that for each block of data that is backed up, the checksum signature of the unencrypted block is also sent to the backup server and stored there. Every client being backed up must look up this database of checksums on the backup server before sending a block of data. Though this can be done efficiently, I am not really fond of this option because of the performance penalty, considering that the backup server is at a remote location in the case of online backups.
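Roughly, the protocol would look like the sketch below (hypothetical names, simplified to one lookup per block): the client hashes the unencrypted block, asks the server whether that checksum is already known, and uploads the encrypted block only if it is not. The per-block round trip to a remote server is exactly the performance penalty mentioned above.

```python
import hashlib

class BackupServer:
    """Stands in for the remote backup server and its shared checksum index."""
    def __init__(self):
        self.known_checksums = set()
        self.encrypted_blocks = {}   # checksum -> encrypted block

    def has_block(self, checksum):   # the lookup every client performs before uploading
        return checksum in self.known_checksums

    def store_block(self, checksum, encrypted_block):
        self.known_checksums.add(checksum)
        self.encrypted_blocks[checksum] = encrypted_block

def backup_block(server, block, encrypt_with_common_key):
    checksum = hashlib.sha256(block).hexdigest()   # checksum of the *unencrypted* block
    if server.has_block(checksum):
        return 0                                   # duplicate: nothing sent over the wire
    server.store_block(checksum, encrypt_with_common_key(block))
    return len(block)                              # bytes actually uploaded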
Option 3: Deduplication at local target backup server – with offsite replication:
The only practical option I can think of is a deployment model where all clients in an organization back up to a local backup server – without encryption. The backed-up data is deduplicated at the local backup server and then encrypted and sent to a remote backup or replication server. This deployment model ensures that deduplication happens across data from all the clients backing up to the local backup server. Depending upon a customer’s preference, the local backup server can either keep a copy of the deduplicated backup data (for quicker restores) or purge it (not recommended) once the data has been moved to the remote backup/replication server.
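The data flow would look something like this sketch (an illustration of the deployment model, not actual StoreGrid code): clients send unencrypted blocks to the local server over the LAN, the local server keeps one copy per unique checksum, and only new unique blocks are encrypted and replicated offsite over the WAN.

```python
import hashlib

class LocalBackupServer:
    def __init__(self, encrypt, replicate_offsite):
        self.blocks = {}                   # checksum -> plaintext block (local, unencrypted copy)
        self.encrypt = encrypt             # the encryption key lives here, not on the clients
        self.replicate_offsite = replicate_offsite

    def receive_block(self, block):
        """Called for every block received from any client on the LAN."""
        checksum = hashlib.sha256(block).hexdigest()
        if checksum in self.blocks:
            return                                  # deduplicated across all local clients
        self.blocks[checksum] = block               # optional local copy for fast restores
        # Only unique blocks are encrypted and sent over the WAN.
        self.replicate_offsite(checksum, self.encrypt(block))
```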
In summary, we prefer the last approach, viz. doing the deduplication at a target backup server deployed locally at the site where the client systems are. This allows the clients to back up to the local backup server without encrypting the data – thus enabling deduplication at the target. For offsite storage, the data from the local backup server is deduplicated, encrypted and sent to the remote backup or replication server, which also ensures that the bandwidth savings associated with deduplication are achieved.
I look forward to feedback & suggestions on other ‘better’ ways of implementing deduplication in the online backup domain!
Hi Vembu,
I liked the third approach. But it depends on the end user – if the backup software is sold through a VAR to a single-host client, it will be difficult to implement the third approach. I feel that, depending on the situation, we would need the third approach or a combination of it with the second.
Regards,
Jagan
Hi Jagan,
Thanks for the comment. I agree that for very small companies the idea of having a local backup server may not be attractive compared to backing up directly to the remote data center. In those cases we may need to do source-based deduplication across clients using the same encryption key. As we get into the implementation we will give this more thought and choose the most flexible and efficient approach.
Sekar.
Sekar,
I agree, data deduplication is not a one-size-fits-all technology. Depending on the type of data your customers deal with, it may or may not be a good fit for deduplication. You would have to check whether the practical results are better than a simple zip.
But source-based deduplication that catches duplicates across clients is not just theory. It has been put into practice by some products – Avamar, which I checked recently, also does it. But PureDisk is pure crap; you won’t find a single happy customer of PureDisk.
Deduplication now is much more than block checksum matching.
Various technology changes and optimizations have actually helped (in some cases) deliver 99% better bandwidth and storage utilization than traditional backups.
But first you would have to check how many duplicates your data actually contains.
Second, ensure that the deduplicated data can be 100% recovered without any chance of failure, otherwise the point of redundancy is gone.
Jaspreet
Jaspreet,
I am talking about deduping across multiple systems in the same organization, so it may have some effect in minimizing the amount of data stored.
I never said source-based dedup is only theory, or that it is only about checksum matching. My focus is at a much higher level. All methods are based on some kind of matching, whether you call it a checksum, a signature or something else. Again, my focus is not to get into the details of fixed-block-length versus variable-block-length comparison etc. Those are details that make it more optimized and efficient.
What I said was that I am not fond of the source-based dedup implementation because, instinctively, I felt that what is good in theory may not be as good in practice. I may be wrong about that. When we implement dedup in StoreGrid we will consider all the options and do what we think is practical and works with the least amount of headache.
The blog post is just meant to discuss some options at a high level rather than get into the details of the implementation.
Sekar
Where is this technology today in the roadmap?
It is still early days and hence no time frame has been set. It might be at least another 6 months before we support full-fledged de-duplication.
Sekar.