How can we improve SCDPM?

Fix for the 16TB Volume Limit in DPM

Because DPM formats its replica and recovery point volumes with a default allocation unit size of 4 KB, the volumes cannot grow beyond 16 TB due to NTFS limits.
This is a show stopper in enterprise environments, and for big data sources in general (e.g. file servers or VMs with volumes that are multiple TBs in size).
Please fix this!
If DPM checked the allocation unit size of the data source (source volume) and formatted its replica and recovery point volumes with the same setting, the problem would be fixed in most situations (e.g. when the source volume has an allocation unit size of 16 KB, DPM should also format its volumes with 16 KB).
Another, although not optimal, solution would be to let the user specify the allocation unit size in the Protection Group creation wizard.
Ideally, both options would be implemented.
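To illustrate why the allocation unit size sets the ceiling: NTFS addresses clusters with 32-bit numbers, so a volume can hold at most 2^32 − 1 clusters, and the maximum volume size is the cluster (allocation unit) size times that count. A minimal sketch of this arithmetic (the helper name `max_ntfs_volume_bytes` is illustrative, not a DPM or Windows API):

```python
# NTFS cluster numbers are 32-bit, so a volume holds at most
# 2**32 - 1 clusters; the allocation unit size therefore caps
# the maximum volume size.
MAX_CLUSTERS = 2**32 - 1

def max_ntfs_volume_bytes(allocation_unit_bytes):
    """Maximum NTFS volume size for a given allocation unit size."""
    return allocation_unit_bytes * MAX_CLUSTERS

for aus_kb in (4, 16, 64):
    limit_tb = max_ntfs_volume_bytes(aus_kb * 1024) / 2**40
    print(f"{aus_kb:>2} KB allocation unit -> max volume ~{limit_tb:.0f} TB")
```

With the default 4 KB allocation unit this works out to roughly 16 TB; matching a 16 KB source volume would raise the ceiling to roughly 64 TB.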

14 votes
Jonas Feller shared this idea

2 comments

  • Erik Paulsen Skålerud commented

    The 16 TB limit is one issue; for us, the 4 kB allocation unit size also breaks the block-level dedup we run on the array that hosts DPM. Every single volume we have (both inside VMs and on hypervisor CSV disks) is formatted with a 16 kB block size in order to align with the SAN dedup (3PAR uses 16 kB slices).

    Because DPM uses a 4 kB block size, we are getting less dedup than we should.

    Either check the source volume or give us an option to specify block sizes when creating a protection group.
