Challenge
Storage space is a concern for backup copy jobs in Veeam. If the repository is insufficiently provisioned, retention processing can fail, the repository can fill up, and the job will fail. If this happens, the job must often be restarted from scratch.
Cause
First and most important to note: backup copy job retention works differently from local backup retention when GFS is enabled.
For technical specifics, review the appropriate section of the User Guide.
The important fact is that while a GFS point is being created, an "extra" VBK-sized file may exist on disk until the operation completes and retention deletes the oldest GFS backup.
The Synthetic GFS method uses a temporary file; the Active Full GFS method simply keeps an extra VBK on disk until the end of the job run.
As a result, always plan ahead for one extra VBK-sized file when sizing storage.
With GFS retention, every Weekly, Monthly, Quarterly, and Yearly backup is itself a full VBK, each of similar size to the regular full.
Solution
To estimate the space requirements for a backup copy job:
- Add the sizes of a full backup of all local jobs that will be included in the copy job. You may end up with a slightly smaller copy job due to compression and deduplication of similar blocks, but it's preferable to plan for more.
- Determine how many full backups you'll have:
  - One full for the regular retention
  - One full for every Weekly, Monthly, Quarterly, and Yearly GFS archival point
- Add one more full for the overhead needed while a GFS backup is being created.
- Multiply that total by the size estimated for a full backup. Ensure your repository has space for at least this much data, plus a bit more for incrementals and variance in data (a sizing sketch follows this list).
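The steps above amount to a simple calculation. The following is a minimal sketch, assuming flat full-backup sizes per job and a steady average daily rate of change; the estimate_copy_repo_tb function and its parameters are illustrative only and not part of any Veeam tooling.

```python
def estimate_copy_repo_tb(full_sizes_tb, gfs_points, retention_points, avg_daily_change_tb):
    """Rough backup copy repository sizing per the steps above (illustrative only).

    full_sizes_tb       -- full backup size (TB) of each local job in the copy job
    gfs_points          -- total Weekly + Monthly + Quarterly + Yearly GFS points kept
    retention_points    -- number of restore points in the copy job's regular retention
    avg_daily_change_tb -- combined average daily rate of change (TB) of the source jobs
    """
    combined_full_tb = sum(full_sizes_tb)

    # One full for regular retention, one per GFS archival point, plus one
    # extra full-sized file of overhead while a GFS point is being created.
    full_count = 1 + gfs_points + 1

    fulls_tb = full_count * combined_full_tb
    incrementals_tb = retention_points * avg_daily_change_tb
    return fulls_tb + incrementals_tb
```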
Example:
Consider two local backup jobs being written to a single backup copy job. A full backup of the first job is around 750GB, and a full backup of the second is around 500GB. Adding the two together, 1.25TB should be allowed for the combined full of the copy job, and a minimum of another 1.25TB should be allowed for the merge.
The same two local backup jobs have a combined daily rate of change of 170GB (Local Backup 1, on a given week, has increments sized 100, 120, 100, 150, 125, and 175GB for an average of ~128.33GB, and Local Backup 2 has increments of 50, 40, 55, 30, 45, and 30GB for an average of ~41.67GB). If the backup copy job will retain 14 points, allow at minimum 14 * 170GB, or 2.38TB, for increments. Given that rates of change cannot be predicted consistently, it's best to oversize here rather than be left wanting.
Assuming no GFS retention on this example job, the backup copy repository should have:
[1.25TB (full)] + [1.25TB (merge overhead)] + [2.38TB (incremental points)] = 4.88TB
If we add GFS retention of 4 monthly points, those four full restore points add 4 * 1.25TB = 5.0TB. In this case we would need:
[1.25TB (full)] + [5.0TB (GFS points)] + [1.25TB (merge overhead)] + [2.38TB (incremental points)] = 9.88TB
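Plugging the example's figures into the hypothetical estimate_copy_repo_tb sketch from the Solution section reproduces both totals:

```python
# Uses the estimate_copy_repo_tb sketch from the Solution section above.

# Average daily change derived from the sample weekly increments (GB -> TB).
job1_increments_gb = [100, 120, 100, 150, 125, 175]  # averages ~128.33GB
job2_increments_gb = [50, 40, 55, 30, 45, 30]        # averages ~41.67GB
avg_daily_change_tb = (sum(job1_increments_gb) / len(job1_increments_gb)
                       + sum(job2_increments_gb) / len(job2_increments_gb)) / 1000  # 0.17TB

no_gfs = estimate_copy_repo_tb([0.75, 0.50], gfs_points=0,
                               retention_points=14,
                               avg_daily_change_tb=avg_daily_change_tb)
four_monthly_gfs = estimate_copy_repo_tb([0.75, 0.50], gfs_points=4,
                                         retention_points=14,
                                         avg_daily_change_tb=avg_daily_change_tb)

print(f"No GFS:           {no_gfs:.2f}TB")            # ~4.88TB
print(f"Four monthly GFS: {four_monthly_gfs:.2f}TB")  # ~9.88TB
```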
As a general rule, a bit of additional space is recommended for unforeseen issues. Insufficient sizing for a merge can lead to running out of space, and if a copy job repository no longer has enough free space to perform a merge, and is subsequently filled with more increments than retention specifies, the only recourse in almost every case is either to clear the repository and let the copy job start with a new full, or to provision more space for the repository if it is scalable.