Paul Clement (Microsoft MVP) wrote a fairly significant and exhaustively documented step-by-step post on the Microsoft Press blog about Windows Server 2012’s new Data Deduplication feature that I think is worth reading for anyone using Windows Server as their file server front end.
Much like the Windows Server 2012 Data Classification Toolkit, which automatically identifies and classifies sensitive information, Data Deduplication adds value for customers using Windows Server specifically to provide file services to their end users. This is in contrast to non-Windows NAS solutions that are front-ended by client-accessed, Samba-based Linux variants, which can leverage neither the Data Classification Toolkit nor Data Deduplication.
Paul here! For as long as there have been file servers in our organizations, there has been a need to control data sprawl to conserve expensive storage space. As disks have grown larger in capacity and cheaper in cost, this issue has moved from critical to more of an annoyance for IT staff to manage. Larger disks meant more space to save data and less urgency to deal with duplicate files.
Solutions for what is known as “deduplication” have existed for many years, both in software and in hardware; however, they were expensive and not always as simple as they claimed to be.
With the newly minted Windows Server 2012, one feature among the extensive list of under-the-hood improvements and additions is a service called Data Deduplication. Finally, a free tool built into the operating system lets us realize some pretty significant storage savings without the need to make it a capital project.
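To make the idea concrete: deduplication works by splitting data into chunks, hashing each chunk, and storing only one copy of every unique chunk, so files that share identical regions share the same on-disk blocks. Here is a minimal, hypothetical Python sketch of that principle using fixed-size chunks (note this is a conceptual illustration only; the actual Windows Server 2012 feature uses variable-size chunking and runs as a background optimization job):

```python
import hashlib

CHUNK_SIZE = 4096  # hypothetical fixed chunk size for illustration


def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, keep one copy of each unique chunk,
    and return the "recipe" of chunk hashes needed to rebuild the data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store the chunk only if unseen
        recipe.append(digest)
    return recipe


def dedup_read(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its chunk-hash recipe."""
    return b"".join(store[d] for d in recipe)


store = {}
# Two "files" that share 8 KB of identical content
file_a = b"A" * 8192 + b"B" * 4096
file_b = b"A" * 8192 + b"C" * 4096

recipe_a = dedup_store(file_a, store)
recipe_b = dedup_store(file_b, store)

assert dedup_read(recipe_a, store) == file_a
assert dedup_read(recipe_b, store) == file_b

# 24 KB of logical data, but only 3 unique 4 KB chunks actually stored
print(len(store))  # → 3
```

The savings come from exactly the duplicate-file problem Paul describes: the shared chunks are stored once, and each file keeps only a lightweight list of references to them.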
…
Read more at:
- BLOG: From the MVPs: Windows Server 2012’s Data Deduplication feature
http://blogs.msdn.com/b/microsoft_press/archive/2012/10/22/from-the-mvps-windows-server-2012-s-data-deduplication-feature.aspx
