Channel: Veeam Support Knowledge Base

HCL - Fujitsu ETERNUS LT260


Challenge

Product Information:

Product Family: Fujitsu ETERNUS LT
Status: Veeam Ready - Tape
Classification Description: Tape solution where available hardware features have been tested to work with Veeam.

Solution

Product Details:

Model number: LT260
Library firmware version: 7.60 / 3.20e
Drive firmware version: HB83
Driver for tape drive: IBM Ultrium 8HH 6.2.6.6
Driver for media changer: Fujitsu ETERNUS media changer 8.2.0.6
Media type tested: LTO8
General product family overview:
The ETERNUS LT260 LTO-based tape library combines flexible scalability and exceptional storage density with excellent automated and remote management capabilities for highly efficient handling of fast-growing backup and archive volumes.
It scales from 960 TB of non-compressed LTO-8 capacity in a 6U chassis up to 6720 TB in the maximum configuration of one base module and six expansion modules. Each module has 80 slots and can be equipped with up to six drives for connectivity, performance and redundancy.
ETERNUS LT260 can be split into a maximum of six partitions per unit serving different application environments in parallel. A graphical web interface and a high level of automation enable simple and remote operation without on-site experts. The ETERNUS LT260 systems are enabled for encryption offering enhanced security and compliance.

Veeam Details:

Veeam Build Number: 9.5.0.1536
 

More Information

Company Information:

Company name: Fujitsu Limited
Company overview: Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions and services. Approximately 156,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers.


EMC Data Domain Storage with Veeam Backup & Replication: Configuration Best Practices and Performance Expectations


Challenge

This article documents general performance expectations, best practices, and configuration advice when using an EMC Data Domain deduplication appliance as a repository for Veeam Backup & Replication.

Solution

For further information regarding how Veeam Backup & Replication works with EMC Data Domain Boost (DDBoost), please review: https://helpcenter.veeam.com/docs/backup/vsphere/emc_dd.html


Performance Expectations

EMC Data Domain Deduplication Storage Systems provide both high compression and deduplication ratios so that data can be kept for extended periods. When a Data Domain is configured as a repository for Veeam Backup & Replication, write performance may vary depending upon the particular EMC Data Domain Deduplication System model, protocol, and backup infrastructure architecture.

When reading data from a Data Domain, each block must be rehydrated and decompressed. For this reason, operations which read from the Data Domain will perform slower than they would against non-deduplicated storage, and this is most noticeable with operations that use random I/O. All restores will occur as fast as the environment can accept new information and as fast as the Data Domain can decompress and rehydrate the blocks.

For quick recovery, you may consider using fast primary storage and keeping several restore points (3-7) on it for restore operations such as Instant Recovery, SureBackup, and Windows or other-OS file restores, since these generate the highest amount of random reads, and then use the Data Domain as secondary storage for long-term retention. If an EMC Data Domain Deduplication System will be used as primary storage, it is strongly suggested to leverage alternative restore capabilities within Veeam Backup & Replication, such as Entire VM restore and VM files restore. These may provide faster recovery with EMC Data Domain Deduplication Systems than Instant Recovery and File Level Restore operations.

Instant Recovery

  • This type of restore can be affected adversely by the aforementioned limitations of a Data Domain appliance, as well as by the type of VM being restored. Highly transactional VMs will require more IOPS from the Data Domain during Instant Recovery than others. With this in mind, expect to be capable of running only a few Instant Recoveries simultaneously. Instant-Recovered VMs started from a backup file stored on a Data Domain may react or start slowly, as the majority of their read operations will be hindered by the Data Domain.
  • For VMware users it is highly advised, when performing an Instant Recovery, to select the option to redirect virtual disk updates to a high-performance datastore. This improves performance by caching written blocks on low-latency storage.
  • If the VM is intended to be made permanent, it is advised to migrate it to production storage as soon as possible after the Instant Recovery has begun.

File Level Restores

Slow recovery times may be experienced when using Veeam Backup & Replication File Level Restore (FLR). During FLR, a significant amount of read activity occurs when accessing the Veeam “service data” metadata for each individual file, because the data in the backup files is not arranged sequentially. This read activity must be performed to determine the location of the data block(s) associated with each file during granular restore sessions. Such a level of random access is not recommended for archive-tier storage devices, because they are designed for optimal performance with sequential read operations. Veeam recommends implementing EMC Data Domain Deduplication Storage Systems as a secondary target for these use cases: the more random the read operations, the slower the restore will be with EMC Data Domain Deduplication Systems.

  • The backup browser may take longer than usual to open if an increment is selected, and the delay increases with that increment’s distance from the full restore point.
  • Navigating between folders within the Backup Browser may take additional time, as each folder’s contents must be retrieved from the backup file before they can be displayed.

Backup

  • Reverse Incremental performance will be very poor due to its highly random I/O.
    Note: When the Backup Job is configured with a DDBoost repository, Veeam Backup & Replication will prevent Reversed Incremental from being selected by the user.
  • Synthetic Full creation will be very slow to a Data Domain, unless using DDBoost.
  • Synthetic Full with Transforms are not advised.

Backup Copy

  • A retention setting longer than 30 restore points is not advisable, as restore operations will diminish in performance.
  • The Health Check option may take a very long time, as it performs read operations against the backup files.

Replication

  • Using the Data Domain to store replica metadata is not advisable.
    Note: Veeam Backup & Replication will prevent the user from selecting a DDBoost repository.


Veeam Backup & Replication Configuration

Parallel processing (global option):

This option significantly accelerates the backup process and decreases the backup window, since virtual disk data is gathered simultaneously. However, it also dramatically increases fragmentation in the backup files, causing highly random reads for any restore operation. The more VMs or disks that are processed simultaneously, the greater the fragmentation and the slower the restore. You may consider disabling parallel processing; this will decrease backup performance but increase restore performance.

Storage optimization (job option):

Setting the storage optimization to Local target (16 TB+ backup files) has been shown to improve the effectiveness of Data Domain’s deduplication. The larger this value, the shorter the preparation phase of a backup task and the less memory is used to keep storage metadata in memory.

Inline-deduplication (job option):

Since EMC Data Domain Deduplication Systems have excellent hardware deduplication and compression capabilities, it is highly advised that Veeam built-in deduplication be disabled to decrease load on the backup proxy.

Decompress backup data blocks before storing (repository option):

Veeam strongly recommends enabling this option so that raw data is sent to the EMC Data Domain Deduplication System, leveraging its global deduplication and compression capabilities. Leaving Veeam compression enabled may significantly impact EMC Data Domain deduplication capabilities resulting in high load and slow backup jobs.

Use Per-VM Backup Files (repository option):

Veeam recommends enabling this option to improve performance when writing data to and reading data from the EMC Data Domain Deduplication System. This option is enabled by default when adding the repository as a Deduplicating Storage Appliance.


Backup Mode

  • For CIFS/NFS-presented repositories, forward incremental mode with periodic active full backups is recommended to avoid the rehydration penalty during synthetic operations.
  • For DDBoost-enabled repositories, forward incremental mode with either active full or synthetic full backups is recommended. Synthetically produced full backups will generally have the best restore performance and reduce the time a VM runs from a snapshot during the backup job run. However, in some environments an active full job may run faster.
  • Transforming previous backup chains into rollbacks is not advisable for either repository type.
  • For forever forward incremental backup and backup copy jobs on DDBoost-enabled repositories, the option “Defragment and compact full backup file” should be enabled if available. In most cases a weekly schedule is appropriate. This helps to avoid excessive growth of the pre-compression data size of the full backup file.

Repository Performance Expectations and Configuration

If DDBoost is not licensed on the Data Domain system, it must be added as a CIFS-type or Linux-type repository. It is advised to use a Linux server with the volume mounted via NFS as a relay server to help improve performance. Under some circumstances, CIFS or NFS communication may perform better than DDBoost with Veeam Backup & Replication v8 because of the limitation of a single thread per backup job when using DDBoost. DDBoost has, however, been shown to improve performance when performing synthetic fulls.

 

With Veeam Backup & Replication v9, support for EMC Data Domain Boost is enhanced with the introduction of the following capabilities:

  • Support for EMC Data Domain Boost 3.0.
  • Reduced impact of storage fragmentation during restore operations, even with parallel processing enabled. This feature allows Veeam to store each VM backup in a dedicated backup chain so that the fragmentation ratio is kept to a minimum.
  • Reduced impact of the block size, so you may define any block size without impacting the restore process. Veeam is able to read data granularly, so the amount of redundant data read is kept to a minimum.

With DDBoost

If the Data Domain System is licensed for DDBoost please proceed to configure it using the following steps.

  1. Launch the creation of a new repository; on the Type tab, select Deduplicating storage appliance.
  2. Select EMC Data Domain as the deduplicating storage type.
  3. On the next tab, configure the information for connecting to the Data Domain appliance.
  4. On the Repository tab click Browse and select the necessary location from the list of available paths.
  5. The default settings can be taken for the last steps in repository configuration, unless your environment requires you to specify a different vPower NFS Server.

Without DDBoost

CIFS

  1. Launch the creation of a new repository; on the Type tab, select CIFS.
  2. On the next tab, configure the path to which the repository will write, and set the credentials used to access that share.
  3. On the Repository tab, within the advanced section, enable “Decompress backup data blocks before storing”.
  4. The default settings can be taken for the last steps in repository configuration, unless your environment requires you to specify a different vPower NFS Server.

NFS

The Data Domain must be configured for NFS access, and a Linux server must be configured to mount the volumes from the Data Domain via NFS. Please refer to the following links for further information regarding connecting Linux to the Data Domain via NFS (a minimal mount example is shown after the links):

http://forums.veeam.com/veeam-backup-replication-f2/veeam-datadomain-and-linux-nfs-share-t8916.html

http://tsmith.co/2014/veeam-and-datadomain/
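As an illustration only, a minimal mount sequence on the Linux relay server might look like the following; the Data Domain host name (datadomain.example.local) and export path (/data/col1/backup01) are placeholders and must match the export actually configured on the Data Domain:

# Create a mount point and mount the Data Domain NFS export (placeholder host and path).
mkdir -p /mnt/dd_backup
mount -t nfs datadomain.example.local:/data/col1/backup01 /mnt/dd_backup
# Add a matching /etc/fstab entry if the mount should persist across reboots.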

  1. Launch the creation of a new repository; on the Type tab, select Linux.
  2. On the next tab, select the Linux server that will be connected. If it is not present in the list, select “Add New…”.
  3. On the Repository tab, specify the path on the Linux server where you mounted the Data Domain via NFS. On this tab, in the advanced section, enable “Decompress backup data blocks before storing”.
  4. The default settings can be taken for the last steps in repository configuration, unless your environment requires you to specify a different vPower NFS Server.

 

EMC PowerPath limitations


Challenge

If a Linux server has EMC PowerPath devices attached, all the underlying block devices representing the individual network paths will be included into the backup. Therefore, an extra amount of data will be created inside the backup file.

Cause

In the current versions of Veeam Agent for Linux, only DM-Multipath based devices are detected as aggregated devices. All block devices underlying EMC PowerPath aggregated devices are therefore also included into the backup file if the ‘Entire machine’ backup mode is selected.

EMC PowerPath support will be introduced in future versions of Veeam Agent for Linux.

Solution

Configure a volume-level backup job in Veeam Agent for Linux and include only the aggregated EMC PowerPath devices and regular block devices into the backup; do not include the block devices representing the network paths of EMC PowerPath. In this case, no extra data will be stored inside the backup file, only the exact amount of real data. For example, in the following output you can see an EMC PowerPath aggregated device name and its underlying devices:
[root@localhost ~]# powermt display dev=all
Pseudo name=emcpowerea
Symmetrix ID=000343607604
Logical device ID=07BA
Device WWN=60000980000262604497023030364553
state=alive; policy=SymmOpt; queued-IOs=0
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
   6 lpfc                   sdef       FA 10f:00 active   alive      0      0
   5 lpfc                   sdcm       FA  6f:00 active   alive      0      0
   4 lpfc                   sdau       FA  7f:00 active   alive      0      0
   3 lpfc                   sdb        FA 11f:00 active   alive      0      0
In this example there are the following devices:
 
/dev/emcpowerea – aggregated EMC PowerPath device
/dev/sdef, /dev/sdcm, /dev/sdau, /dev/sdb – underlying block devices representing the network paths
 
According to the mount table, /dev/emcpowerea is partitioned and mounted under the /data folder:
root@localhost:~# mount | grep emcpowerea
/dev/emcpowereap1 on /data type ext4 (rw,relatime,errors=remount-ro,stripe=256,data=ordered)
In this case, the block device /dev/emcpowereap1 must be included into the backup, as shown in the image below.
User-added image
sdef, sdcm, sdau and sdb should be left unchecked.

Veeam Availability Console – Compile and Upload Management Agent Logs


Challenge

This article covers how to properly compile your Veeam Availability Console Management Agent logs (for both client agents and the Cloud Connect agent), as well as uploading logs when submitting a technical support case. Logs may be required for either client management agents or for the Cloud Connect server management agent.

Solution

To export Veeam Availability Console Management Agent logs for one or more client agents, please refer to the animation and instructions below:

User-added image

 
  1. Log into the Veeam Availability Console UI and navigate to Discovery > Discovered Computers
  2. From the list of managed computers, select the client machines from which you would like to export logs.
  3. Select the Management Agent drop-down list in the top bar and select Download Logs
  4. Select the time period for which you would like logs collected, then click OK

To export Veeam Availability Console Management Agent logs for the Cloud Connect agent, please refer to the animation and instructions below:
 
User-added image
  1. Log into the Veeam Availability Console UI and navigate to Configuration > Cloud Connect Server
  2. Select your Cloud Connect server and click Download Logs
  3. Select the time period for which you would like logs collected, then click OK
Once the relevant logs have been downloaded, upload the exported archive to your case via the customer portal or via the FTP listed on your case. In the case of uploading via FTP, please notify the engineer assigned to your case once the upload is complete.

More Information

Alternatively, you may manually collect client or Cloud connect management agent logs from the following location on the managed machine:
 
C:\ProgramData\Veeam\Veeam Availability Console\Log\Agent
 
This directory is the same for both the client agent and the Cloud Connect agent, however, the log files themselves differ. The engineer assigned to your case will inform you of which agent logs are needed for troubleshooting.
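If you collect the agent logs manually, the folder can be packed for upload with PowerShell, for example (the destination path below is only an example):

# Compress the management agent log folder into a single archive for upload.
Compress-Archive -Path "C:\ProgramData\Veeam\Veeam Availability Console\Log\Agent" -DestinationPath "C:\Temp\VAC_AgentLogs.zip"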

Veeam Availability Console – Compile and Upload ConnectWise Manage Plugin Logs


Challenge

This article covers how to properly compile your Veeam Availability Console ConnectWise Manage Plugin logs, as well as uploading logs when submitting a technical support case.

Solution

To export ConnectWise Manage Plugin for Veeam Availability Console log files, please refer to the animation and instructions below:
 
User-added image

 
  1. Log into the Veeam Availability Console UI and navigate to Configuration > Plugin Library and click on the ConnectWise Manage plugin to access the plugin UI. 
  2. In the plugin UI, navigate to Support Information, then click Download logs
  3. Select the time period for which you would like logs collected, then click OK 
  4. Once downloaded, upload the exported archive to your case via the customer portal or via the FTP listed on your case. In the case of uploading via FTP, please notify the engineer assigned to your case once the upload is complete.

More Information

Alternatively, you may manually collect the ConnectWise Manage plugin log files from the following location on the Veeam Availability Console server:
 
C:\ProgramData\Veeam\Veeam Availability Console\Plugins\ConnectWiseManage\Log

 

Veeam Availability Console – Compile and Upload Server Logs


Challenge

This article covers how to properly compile your Veeam Availability Console Server logs, as well as uploading logs when submitting a technical support case.

Solution

To export Veeam Availability Console Server logs, please refer to the animation and instructions below:
 
User-added image

 
  1. Log into the Veeam Availability Console UI and navigate to Configuration > Support Information.  
  2. Click Download Logs and select the time period for which you would like logs collected, then click OK.  
  3. Once downloaded, upload the exported archive to your case via the customer portal or via the FTP listed on your case. In the case of uploading via FTP, please notify the engineer assigned to your case once the upload is complete.

More Information

Alternatively, you may manually collect the server log files from the following location on the Veeam Availability Console server:
 
C:\ProgramData\Veeam\Veeam Availability Console\Log
 
In the case of a distributed deployment, logs must be collected from this directory on both the application server and the web UI server.

SQL Express Maximum Database Size Limitation


Challenge

Once the Veeam ONE database reaches the maximum allowed size, Veeam ONE will not be able to continue data collection, which affects data accuracy and alarm generation.
 

Cause

If you choose to host the Veeam ONE database on Microsoft SQL Server Express, be aware that there is a 10 GB database size limitation for this edition. For details, see Editions and Supported Features for SQL Server.
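To see how close the database is to the 10 GB limit, you can check its current size directly. A minimal T-SQL example, assuming the default database name VeeamOne (adjust the name if your deployment differs):

-- Report the current size and unallocated space of the Veeam ONE database.
USE VeeamOne
GO
EXEC sp_spaceused
GO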

Solution

Database migration (permanent solution)

The best way to resolve the issue is to move the Veeam ONE database to a Standard or Enterprise edition of Microsoft SQL Server. The procedure for configuring Veeam ONE to use a new SQL Server connection is described in the following KB article: http://www.veeam.com/kb1599

Purging old data (temporary workaround)

You can also address the issue by purging old performance data as described below.

You can delete past performance data via a custom SQL script that is run against the Veeam ONE database. Follow these steps to reduce the database size:

1. Before you begin:
  • make sure to properly back up the Veeam ONE database;
  • be aware that this change will affect all Veeam ONE components: for example, in Veeam ONE Reporter you will not be able to build reports that rely on the data deleted by the script presented below;
  • keep in mind that all graphs in Veeam ONE Monitor that rely on the deleted data will be unavailable;
  • unless specifically instructed by Veeam Support, do not modify the SQL statement and do not execute it against other database tables. If the workaround below does not help to reduce the database size, please contact Veeam Support for further assistance.
2. Stop Veeam ONE Monitor and Reporter services on the Veeam ONE server.
3. Specify the time and the database name in the following statement: 
 
CHECKPOINT
DECLARE @dt DATETIME
SET @dt = CONVERT(DATETIME, '2018-01-25 00:00:00.001' ,120)
 
WHILE EXISTS (SELECT * FROM [monitor].[perfsamplelow] WITH(NOLOCK) WHERE [timestamp] < @dt)
BEGIN
    BEGIN TRAN
    DELETE TOP (50000) FROM [monitor].[perfsamplelow] WHERE [timestamp] < @dt
    COMMIT TRAN
    CHECKPOINT
END
DBCC shrinkfile (N'VeeamOne', 1)

   2018-01-25 00:00:00.001 - the point in time before which the historical performance data will be purged. Change the date accordingly; the format is (year)-(month)-(day).

4. Execute the statement against the Veeam ONE database (for example, in SQL Server Management Studio or with sqlcmd; a sketch is shown after these steps).
NOTE: This operation can cause a significant workload on the database and growth of the database transaction log. Make sure you do not have mission-critical databases on this server.
5. Start Veeam ONE Monitor and Reporter services on the Veeam ONE server.
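As referenced in step 4, the statement can also be run non-interactively with sqlcmd from PowerShell. This is only a sketch: the instance name (.\SQLEXPRESS), the database name (VeeamOne) and the script path are assumptions that must be adjusted to your deployment.

# Save the statement above to a .sql file, then run it against the Veeam ONE database.
sqlcmd -S .\SQLEXPRESS -d VeeamOne -i C:\Temp\purge_perf_data.sql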


Reducing further database growth rate

Before applying any of the approaches described below, you will need to migrate the database to a Standard or Enterprise edition of Microsoft SQL Server or purge the old data.

Changing the scalability mode from Typical to Advanced
In the Advanced mode, the data collections are less frequent and include fewer performance metrics, which helps to slow down the database growth. You can learn more about the Advanced scalability mode from our deployment guide.

Modifying the Retention policy
You can modify the historical data retention period by adjusting the settings in the Retention Policy section of the Veeam ONE Settings utility.
 

More Information

In case the aforementioned cleanup procedure does not help, please contact Veeam Support.

Veeam Backup Catalog Service fails to start with Error Attribute "LastSyncTimeUtc" was not found under the tree


Challenge

When attempting to start the Veeam Backup Catalog service, an error is generated in the Svc.VeeamCatalog.log file stating: Attribute "LastSyncTimeUtc" was not found under the tree “”.

Cause

This is caused when the unpacked_data.txt file is corrupted in some manner, either not containing the proper information or being empty. The state of the file can be confirmed by going to the Veeam catalog location, which by default is C:\VBRCatalog\Index\Machines\ (your path to the VBRCatalog folder may differ). Open the unpacked_data.txt file and confirm it looks like the example below. The actual sync time will vary.
 
LastSyncTimeUtc=08/22/2017 09:37:28.516

#  Unpacked guest index data of unregistered OIBs options
KeepUnpackedInHours=48

#  Temporary unpacked guest index data

Solution

If your unpacked_data.txt does not look like the above, rename the file to unpacked_data_old.txt and start the Veeam Backup Catalog service; a new file will be created. This will allow the catalog service to start while keeping your catalog information intact. A sketch of the rename step is shown below.
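A minimal PowerShell sketch of the rename step, assuming the default catalog location shown above (adjust the path if your VBRCatalog folder differs):

# Rename the corrupted file; a fresh unpacked_data.txt is created when the catalog service starts.
Rename-Item "C:\VBRCatalog\Index\Machines\unpacked_data.txt" "unpacked_data_old.txt"
# Then start the Veeam Backup Catalog service (for example, via services.msc).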

‘The revision level is unknown’ upon Guest OS File restore


Challenge

A backup contains a VM with Windows Server 2016 as the guest OS, and deduplication is enabled for some (or all) of its drives. The backup completes without warnings or errors, but when performing a Windows file-level restore, you receive the following message:
“Failed to transfer C:\VeeamFLR\VMName_index\Volume#\filename. The revision level is unknown”. Clicking ‘Continue’ skips the file.
User-added image
If you navigate to C:\VeeamFLR and locate the same volume, the file will be visible, but it won’t be restorable, producing the error: Error 0x80070519: The revision level is unknown.
User-added image

Solution

To resolve this issue, please do the following:
  1. Install the remote Veeam console on a Windows Server 2016 machine that has the deduplication feature enabled (a PowerShell sketch for enabling the feature is shown after these steps): https://helpcenter.veeam.com/docs/backup/vsphere/install_console.html?ver=95
  2. Change the mount server for these backups to that Windows Server 2016 machine with deduplication enabled: https://helpcenter.veeam.com/docs/backup/vsphere/repository_mount_server.html?ver=95
After making these changes, restores should be successful.
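If the deduplication feature is not yet enabled on the Windows Server 2016 machine used for the console and mount server role (step 1 above), it can be enabled with PowerShell, for example (run in an elevated session):

# Enable the Data Deduplication feature on Windows Server 2016.
Install-WindowsFeature -Name FS-Data-Deduplication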
 

More Information

As part of the File Level Restore workflow, disks of the VM are first mounted from the backup file to the machine on which the Veeam Console was used to start the restore process. When the files are chosen, Veeam creates an additional mount point on the Mount Server associated with the Backup Repository where the backup files are located.

Both machines (the one with the console and the mount server) need to understand the deduplicated volume format in order to perform the restore successfully. Unfortunately, the Windows Server 2012 R2 deduplication engine is different from that in Windows Server 2016, which is why the contents of the disk are visible (unlike with a server that has no deduplication support at all), but the error occurs upon restore.
 

Veeam Backup for Microsoft Office 365 v2 cumulative patch KB2765


Challenge

Veeam Backup for Microsoft Office 365 v2 cumulative patch KB2765

Cause

Please confirm you are running Veeam Backup for Microsoft Office 365 version 2.0.0.567 prior to installing this cumulative patch KB2765. You can check this under Help > About in Veeam Backup for Microsoft Office 365 console. After upgrading, your build version will change to 2.0.0.594.
As a result of an on-going R&D effort and in response to customer feedback, cumulative patch KB2765 brings optimizations to SharePoint Online and OneDrive for Business interactions and processing, making backup jobs less affected by throttling, and includes a set of bug fixes, the most significant of which are listed below:

 

Licensing

 

•    Discovery Search Mailbox may consume a license when included into a backup job.
•    Under certain conditions resource mailboxes may consume a license.

 

SharePoint Online and OneDrive for Business processing

 

•    SharePoint Online and OneDrive for Business backup jobs may fail with the “Specified argument was out of the range of valid values” error. More details in KB 2713.
•    SharePoint Online personal sites may have duplicate names displayed in a Backup Job wizard.
•    Backup of SharePoint Online and OneDrive for Business items with an incorrect last modified date may fail with the “Item not found in the database” error.
•    SharePoint Online and OneDrive for Business backup jobs may fail with the “Failed to backup item: , The remote server returned an error: (404) Not Found” warning.

 

Group and shared mailboxes backup

 

•    Group mailbox backup may fail with the “Failed to find group owner account” warning.
•    Private Office 365 group backup may fail, if a group owner has more than one record in Office 365.
•    In hybrid organizations, shared mailbox backup job may fail with the “Unspecified error”.

 

Server

 

•    Collecting Veeam Backup for Microsoft Office 365 logs with remote console may fail with the “Object reference not set to an instance of an object” error. 
•    Connecting to a remote management server with Connect-VBOServer PowerShell cmdlet fails.
•    Under certain conditions backup job retry behavior is incorrect.
 

Solution

To install the cumulative patch KB2765:
1.    Download VBO2.0-KB2765.msp. 
2.    Execute VBO2.0-KB2765.msp as administrator on the Veeam Backup for Microsoft Office 365 server.
3.    If there are any remote proxies in your environment, please update them as described here.
4.    If you use a remote VBO365 console and/or remote VBO365 PowerShell module installation, please contact technical support to assist you in upgrading those components.

More Information

[[DOWNLOAD|DOWNLOAD CUMULATIVE PATCH|https://www.veeam.com/download_add_packs/backup-microsoft-office-365/kb2765/]]

MD5 checksum for Veeam.Backup365.msp is e201483a28d2aaf871a2bbdb64f278e3
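To verify the download before installing, you can compute the checksum locally with the built-in certutil tool and compare it with the value above (substitute the actual name of the downloaded file if it differs):

# Compute the MD5 hash of the patch file.
certutil -hashfile Veeam.Backup365.msp MD5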

Should you have any questions, contact Veeam Support.

3PAR Integration fails after firmware update


Challenge

After upgrading the 3PAR firmware, you may encounter all jobs failing and/or 3PAR rescans failing with an inability to communicate with the 3PAR device.

Connectivity to the 3PAR WSAPI can also be verified manually via PowerShell; a sketch is shown below.
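This is only a sketch of a manual WSAPI session request, not a Veeam tool; the array address, port and credentials are placeholders, and the WSAPI service must be enabled on the array:

# Request a WSAPI session key from the 3PAR array (placeholder address and credentials).
$body = '{"user":"3paradm","password":"password"}'
Invoke-RestMethod -Method Post -Uri "https://3par.example.local:8080/api/v1/credentials" -Body $body -ContentType "application/json"
# A key in the response indicates the WSAPI accepts the credentials; an HTTP error points to a WSAPI or credential problem.
# On PowerShell 7+, -SkipCertificateCheck can be appended if the array uses a self-signed certificate.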

To review the logs for this issue, the log location, file name, and an example entry are provided below for comparison.

Log Location:
C:\ProgramData\Veeam\Backup and Replication\Backup\SanRescan

Log Name:
Util.SanRescan.Name.log

Example:
[29.08.2018 11:50:46] <01> Error    [Hpe3PAR] Generate Hp3PAR exception. ErrorCode: '1', Description: 'internal server error', Reference: '<null>'.
[29.08.2018 11:50:46] <01> Error    Failed to call web service methods. RetryCount: '1'. MaxRetryCount: '3'.
[29.08.2018 11:50:46] <01> Error    Exception has been thrown by the target of an invocation.


 

Cause

The special character ‘$’ in the password of the account used to connect to the 3PAR array.

Solution

Remove the special character from the account password; newer 3PAR firmware versions do not appear to allow the use of ‘$’ as a special character in passwords.

How to configure Veeam for rotated media


Challenge

To fulfill the 3-2-1 rule’s requirement for an offsite backup, a repository has been pointed to removable storage media (such as USB hard drives or RDX disks). The action of swapping to a new medium causes Veeam Backup & Replication jobs to fail because the job cannot find a file.

Cause

By default, Veeam Backup & Replication assumes that if recent files are missing, the job should fail. This way, the job provides a notification that something is wrong instead of taking any action.

Solution

For an introduction to Veeam Backup & Replication’s options for rotated media, consult the user guide. The following notes assume the option for rotated media is enabled in the repository advanced settings.
 

Retention Options on Windows-type Repositories
 
There are three options:
  1. Default backup job behavior;
  2. Default backup copy job behavior;
  3. Deleting files when disks are rotated.
 
Backup jobs will always create active full backups immediately after disks are rotated. This option requires that each storage medium has sufficient disk space for at least two full backup files.
 
With backup copy jobs, when a new disk is detected, the job will only create a full backup if there are no valid backups on the disk for that job. If an existing backup set is detected, the backup copy job will create an incremental backup file that contains the difference between the current restore point and the most recent previous restore point on that disk. As a result, if disks are re-used frequently, the incremental backup files will be similar in size to increments on non-rotated media; if a disk contains very old restore points, the first new incremental backup copy may be almost as large as a full backup.
 
With both job types, Veeam Backup & Replication tracks restore points stored on all disks that have been used with the job. If outdated restore points are stored on the current disk, they are only deleted at the end of the current job session.
 
For example, consider a job that creates two restore points per day, with disks swapped once per day, a total of three disks, and retention policy set to 6 restore points.
  • A forward incremental backup job creates a full backup file and an increment on each of the three disks over the first three days. On the fourth day, the first disk is re-used. A new full backup file is created. There are now 7 points across all disks, so retention policy is met, but the initial full backup cannot be deleted because an incremental file is dependent on it. After another restore point is created, the older two files on disk 1 are deleted by retention policy.
  • A backup copy job also creates a full backup file and an incremental file on each of the three disks over the first three days. On the fourth day, the first disk is re-used. A new incremental backup file is created, and the oldest incremental file is merged into the full backup file.
 
If a disk does not contain enough space for a new backup file, the job will fail instead of deleting old files. This can be avoided by deleting old files as soon as the disks are swapped. This can be done manually, via pre-job script, or with the registry setting described below.
 
 

Retention on All Other Repositories
 
By default, repositories configured for rotated media do not delete any backup files when disks are swapped. If a disk containing a previous backup is to be re-used, but lacks sufficient available space for new backup files, the old files must be deleted manually, or by a pre-job script.
 
Retention policy is enforced, but only on the current backup chain. For example, consider a backup copy job that creates restore points every hour, with disks swapped once per day, and retention policy set to 6 restore points. Once there are 7 restore points on the current disk (a full backup file and 6 incremental backup files), the oldest increment is merged with the full backup file so that there are 6 restore points on disk. The disk is swapped out for a new one, and the process repeats. When the first disk is re-used, the 6 backup files still on the disk are ignored. A new full backup file is created, and a new chain of incremental files. At the end of the day, there are 12 restore points on disk, with only the most recent 6 visible in the Veeam Backup & Replication console.
 
An alternative behavior is available as a registry setting (below).
 

Deleting Files When Disks are Rotated
 
When this registry setting is enabled (set to 3 or 5), any repository, regardless of its rotated media setting, will maintain retention normally until the job detects that previously available files are missing.
 
Create this value on the Veeam Backup and Replication server:
 
HKLM\Software\Veeam\Veeam Backup and Replication
ForceDeleteBackupFiles (DWORD)

 
Set to 3 to make the job delete the entire contents of the backup job’s folder only.
Set to 5 to make the job delete the entire contents of the root backup repository folder, potentially deleting any files belonging to other jobs. Note that if the backup repository points at a volume's root folder, the entire volume's contents are erased.
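For example, the value can be created from an elevated PowerShell or command prompt session (this sketch uses 3; change the data to 5 for the root-folder behavior described above):

# Create the ForceDeleteBackupFiles value on the Veeam Backup & Replication server.
reg add "HKLM\Software\Veeam\Veeam Backup and Replication" /v ForceDeleteBackupFiles /t REG_DWORD /d 3 /f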
 
The Veeam Backup Service must be restarted after creating this registry value. Make sure no jobs or restores are running before restarting this service.

HCL - RedHat Ceph Storage 3.1


Challenge

Product Information:

Product Family: RedHat Ceph Storage 3.1
Status: Veeam Ready – Archive
Classification Description: Verified disk archive storage that can be used as a Backup Copy target. Synthetic full backups, granular restores, and vPower features may not provide sufficient performance or be supported.

Solution

Product Details:

Model number: RedHat Ceph Storage 3.1
Storage Category: Software Defined Storage Platform
Drive quantity, size, type: 18 - SAS HDDs | 3 – SATA SSDs
Storage configuration: 3 node Ceph cluster, each node consisting of 6 SAS HDDs and 1 SATA SSD.
Firmware version: N/A
Connection protocol and speed: 10 GbE iSCSI
Additional support: All models and configurations of Ceph Storage with specifications equivalent or greater than the above.

General product family overview: Red Hat Ceph Storage delivers software-defined storage on your choice of industry-standard hardware. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data. It also supports backward compatibility with existing block storage resources using the iSCSI storage networking standard.

Veeam testing configuration:

Veeam Build Number: 9.5.0.1922

Job Settings:

  • Deduplication: Enabled (Default)
  • Compression: Optimal (Default)
  • Storage Optimization: Local Target (Default)
Repository Settings:
  • Repository Type: Windows
  • Per-VM Backup Files: Enabled
  • Decompress before storing: Disabled (Default)
  • Align backup file blocks: Enabled

More Information

Company Information:

Company name: RedHat, Inc.
Company overview: Red Hat is the world's leading provider of open source software solutions, using a community-powered approach to provide reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. As a connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT.

Volume groups created by the backup proxy remain in the Prism UI


Challenge

When a Veeam Availability for Nutanix backup job fails unexpectedly or the backup proxy appliance is powered off for some reason (manual power-off, hypervisor host crash, or any other unexpected failure), volume groups created for backup purposes may be left in Prism Element and not removed from your cluster.

Cause

Volume groups created by the backup proxy appliance are temporary and are used only during the backup job run. In case of an unexpected job or proxy failure, they are not removed from the cluster properly.

Solution

It is safe to manually remove these temporary volume groups left over from previous job runs. The next job run will create a new volume group.

Follow these steps to remove the groups when no backup jobs are running:

1. Locate the volume groups whose names begin with “Veeam-” in Prism Element:
User-added image

2. Update the volume group options and untick ‘Client IQN/Address’ checkbox:

User-added image

Otherwise, you will get the following error:

User-added image

3. Save the volume group configuration and run the remove operation.

More Information

If there are any difficulties with removing the volume groups, please contact Veeam Support.

Veeam Availability for Nutanix backup fails after upgrade to AOS 5.8.0.1 and higher


Challenge

Veeam Availability for Nutanix backup job fails with:

Backup VM [ID: 4a89ffc5-68b3-4be0-b117-0591dcec184e' Name: 'MYVM’] failed. Error: Backup was unsuccessful. Preparing VM for backup failed, 1."

Prism Element dashboard shows the following failed tasks:
User-added image

The following conditions are met:

  1. Nutanix AOS has been upgraded to 5.8.0.1 or later within the STS (Short Term Support) release branch;
  2. Affinity rules are configured for the affected machines:
​​ User-added image

Cause

This is a known Nutanix issue, tracked by Nutanix as internal bug ENG-163615, which was discovered after the release of Nutanix AOS 5.6.1.
 

Solution

The current workaround is to disable the host affinity rules for the affected VMs. The issue is under investigation by Nutanix support. The LTS (Long Term Support) release branch is not affected by this issue at the moment.

Restoring GPT Disk to Incompatible Legacy BIOS System


Challenge

After selecting a restore point during the bare metal restore configuration, the following message pops up:
 
OS disk in backup uses GPT disk.  This may cause boot issues on BIOS systems.

User-added image

If this is ignored and the restore process completes, the following may happen when the restored machine boots up:
 
No boot disk found
or
An operating system wasn’t found

Cause

Due to compatibility issues between GPT-formatted partitions and legacy BIOS systems, systems that can only run in legacy BIOS mode are unable to recognize the GPT-formatted boot partition. This prevents the system from successfully booting into the underlying operating system.

Solution

To resolve the compatibility issue, it is necessary to re-create and format the System partition correctly on an MBR disk, as outlined in the following steps.

1. Boot the Recovery Media.
2. Go to Tools and start the Command Prompt utility.
3. Run the following commands one by one:
 
diskpart

list disk    

At this step, find the disk you are going to use as a restore destination. In this example, we'll use disk 0.     
select disk 0

NOTE: The 'clean' command erases the selected disk completely.
clean
convert mbr

create partition primary size=100

select part 1

format fs=ntfs label="System reserved" quick

active

assign letter=S

The 'exit' command will take you out of the diskpart utility.
exit
  

Close the command prompt.

4. Start the restore. While allocating the volumes do not delete the system reserved partition.
5. Do not reboot once restore is completed.
6. Go back to Tools > Command Prompt utility.
7. Run the following commands one by one.
 
diskpart

Find the volume with the restored operating system. In this example it is volume C.
list vol

Exit the diskpart utility:
exit

8. Once you exit the diskpart context, run the following command in the command prompt (in the example below, S: would be the volume letter assigned during step 3 above): 
 
bcdboot C:\Windows /s S: /f ALL

NOTE: The /f switch is only available under a recovery media that was created from Windows 8/Server 2012 and above. 


9. Reboot and start the restored OS.

More Information

If the system does allow for both MBR and GPT partitions, check with your vendor whether this can be changed in the BIOS settings of the system.

General Security Guidelines for Veeam Self-Service Backup Portal


Challenge

The Self-Service Backup Portal is usually publicly available, which leads to increased security risks. While it is generally safe to use this site, in order to reach maximum security it is recommended to apply certain Microsoft IIS and Windows settings.


More information:


The desired security settings can be divided in three categories:
  • Allowed protocols for secure communications
  • Allowed ciphers
  • Recommended Microsoft IIS settings
All of these settings should be applied on the Veeam Enterprise Manager server.

 

Solution

Step 1. Disable old security protocols in Windows registry


When the IIS server receives an HTTPS connection, the client and the server negotiate a common protocol to secure the channel. By default, Windows has a set of enabled protocols, and if the client negotiates an old weak protocol (PCT 1.0, for example), it will be used for communication. In modern networks it is recommended to use one of the latest protocols (TLS 1.1 or 1.2).
Windows Server stores information about channel protocols under the following Windows registry key:

[HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols]

This key may contain the following subkeys:

  • PCT 1.0
  • SSL 2.0
  • SSL 3.0
  • TLS 1.0

Each key holds information about the protocol and any of these protocols can be disabled on the server. The following registry values (and subkeys, if necessary) should be created to disable the old protocols:

  • [HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0]
    "Enabled" = DWORD:0x00000000
  • [HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0]
    "Enabled" = DWORD:0x00000000
  • ​[HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0]
    "Enabled" = DWORD:0x00000000
  • [HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0]
    "Enabled" = DWORD:0x00000000

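As an example, the SSL 3.0 value above can be created from an elevated PowerShell or command prompt session; repeat analogously for the other protocols, and back up the registry before making changes:

# Disable SSL 3.0 for SCHANNEL as described above.
reg add "HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0" /v Enabled /t REG_DWORD /d 0 /f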

​Step 2: Disable weak ciphers


By default, the list of ciphers negotiated for a given security protocol includes RC4 and 3DES. These ciphers are considered vulnerable, and it is recommended to disable them completely.

Note: If you use any Windows version except Windows Server 2012R2, Server 2016 and Windows 10, the KB2868725 security update must be installed before applying the settings below.

In order to disable RC4 and 3DES, the following registry values should be created:

  • [HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]
    "Enabled" = DWORD:0x00000000

  • [HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
    "Enabled" = DWORD:0x00000000

  • [HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]
    "Enabled" = DWORD:0x00000000

  • [HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168]
    "Enabled" = DWORD:0x00000000


Step 3: Tweak website settings


The settings for the Self-Service Backup Portal site are stored in the Web.config file. The default Web.config file created during installation does not contain the recommended security settings. To change this file according to the recommendations, please follow the steps below:
  1. Browse to the web application installation folder (the default is C:\Program Files\Veeam\Backup and Replication\Enterprise Manager\WebApp).
  2. Copy the original Web.config file somewhere else so that you can revert the changes if something goes wrong.
  3. Open the Web.config file.
  4. Find the <system.web> section and add the following option to the httpRuntime settings: enableVersionHeader="false". The httpRuntime settings should look like this afterwards:
<system.web>
  <httpRuntime requestValidationMode="2.0" targetFramework="4.5" executionTimeout="300" enableVersionHeader="false"/>
</system.web>
  5. Find the <system.webServer> ---> <httpProtocol> ---> <customHeaders> section and add the following options:
<add name="Strict-Transport-Security" value="max-age=31536000"/>
<add name="X-XSS-Protection" value="1; mode=block"/>
<add name="X-Content-Type-Options" value="nosniff"/>
<remove name="X-Powered-By" />

The httpProtocol section should look like this afterwards:
 
<system.webServer>
...
<httpProtocol>
<customHeaders>
<add name="Strict-Transport-Security" value="max-age=31536000"/>
<add name="X-XSS-Protection" value="1; mode=block"/>
<add name="X-Content-Type-Options" value="nosniff"/>
<remove name="X-Powered-By" />
</customHeaders>
</httpProtocol>
</system.webServer>
  6. In the same folder, locate the Error.aspx file. Copy it somewhere else as well, then open it and delete its contents.
Please restart the server after all of the changes listed above. If any issues appear after this configuration, please feel free to contact Veeam support.

How to add Veeam Agent for Microsoft Windows to antivirus exclusions list


Challenge

Veeam Agent for Microsoft Windows functionality fails on a machine where an antivirus product is installed.

Cause

Veeam Agent for Microsoft Windows operations are affected by virus protection tools.

Solution

Add the following folders to the antivirus exclusions list:
  • installation folder, the default path is: C:\Program Files\Veeam\Endpoint Backup
  • localDB folder: C:\Windows\System32\config\systemprofile
  • logs folder: C:\ProgramData\Veeam
  • the folders containing your backup files.
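If Microsoft Defender (Windows Defender) is the antivirus in use, the folders above can be excluded with PowerShell, for example (run elevated; the backup folder path below is a placeholder for your own repository location):

# Exclude the Veeam Agent folders listed above from Microsoft Defender scanning.
Add-MpPreference -ExclusionPath "C:\Program Files\Veeam\Endpoint Backup"
Add-MpPreference -ExclusionPath "C:\Windows\System32\config\systemprofile"
Add-MpPreference -ExclusionPath "C:\ProgramData\Veeam"
Add-MpPreference -ExclusionPath "D:\VeeamBackups"   # placeholder: the folder containing your backup files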

More Information

If the issue persists after following the steps listed in this KB, open a support case as follows:
  1. Right-click the Veeam Agent for Microsoft Windows icon in the system tray and select Control Panel.
  2. Click the Support link at the top of the window.
  3. Click Technical Support to submit a support case to the Veeam Support Team directly from the product.

HCL - Scality RING


Challenge

Product Information:

Product Family: Scality Ring
Status: Veeam Ready – Archive
Classification Description: Verified disk archive storage that can be used as a Backup Copy target. Synthetic full backups, granular restores, and vPower features may not provide sufficient performance or be supported.

Solution

Product Details:

Model number: Scality Ring
Storage Category: Scale-out File and Object Storage
Drive quantity, size, type: 72 – 1TB 7200 RPM HDD, 6 – 500GB SSD
Storage configuration: ARC 9+3
Firmware version: 7.4.0.0
Connection protocol and speed: 10GbE
Additional support: All models and configurations of Scality Ring with specifications equivalent or greater than the above.

General product family overview: The Scality RING is software that turns any standard x86 servers into web-scale storage. With the RING, you can store any amount of data, of any type, with incredible efficiency and 100% reliability, guaranteed—all while reducing costs by as much as 90% over legacy systems.

Veeam testing configuration:

Veeam Build Number: 9.5.0.1536
Veeam Settings:

  • Repository Type: Shared Folder
  • Deduplication: Yes
  • Compression: Optimal
  • Storage Optimization: Local Target
  • Per-VM Backup Files: Yes
  • Decompress before storing: No
  • Align backup file blocks: No

Veeam Recommended Backup Copy Job Setting:

  • Only full backup copy jobs, with the “Read the entire restore point from source backup instead of synthesizing it from increments” option enabled, are recommended to archive data to Scality.

Vendor recommended configuration:

Hardware configuration used

  • Storage Servers : 6x HPE Apollo 4530 Chassis, Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 128GB RAM
  • Connector Server : 1x VM, 8x vCPU Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 32GB RAM

More Information

Company Information:

Company name: Scality
Company overview: Scality, the world leader in object and cloud storage, bridges the gap between application vendors and industry standard hardware providers to meet your cost-effective storage scale, durability, cloud and performance requirements.

Veeam Availability Console U1 Cumulative Patch 1824


Challenge

Veeam Availability Console U1 Cumulative Patch 1824

Cause

Please confirm you are running version 2.0.2.1750 or later prior to installing cumulative patch 1824. You can check this under Windows > Programs and Features. After upgrading, your build version will be 2.0.2.1824.

As a result of on-going R&D effort and in response to customer feedback, cumulative patch 1824 includes a set of bug fixes, the most significant of which are listed below:

Server
  • Under certain conditions password in the SMTP server settings is reset
  • Veeam Management Portal Service cannot be started after installing Update 1
  • After deleting WAN accelerator component from the Veeam Cloud Connect inventory, records about backup resources available to the tenant might be duplicated
Management Agent
  • Management agent RAM consumption might significantly increase when running computer discovery process
Users and Roles 
  • Sub-tenant’s password is reset when changing cloud quota in the assigned backup policy 
UI
  • After upgrading to Update 1, backup resources of the managed company cannot be adjusted when backup portal is running under heavy load. 
Monitoring & Alarms
  • Job name filtering is ignored in the job session state alarm.
  • Storage snapshots are treated as a backup repository resulting in triggered alarms for the repository free space. 
Reporting & Billing
  • Cloud repository quota usage may report incorrect values when Veeam Agent backups are sent to the cloud repository via backup copy jobs
  • Under certain conditions Cloud replication quota usage is not displayed on the dashboard when logged in using managed company user account
  • Management agent re-installation results in losing collected data about protected VMs from the managed backup server. This affects Overview dashboard and Protected VMs report that display zero protected VMs
  • Overview dashboard and Protected VMs report can display zero protected VMs when using Veeam Cloud Connect 9.5 Update 3a
ConnectWise Manage Plugin
  • Under certain conditions ConnectWise Manage configurations cannot be created.
  • Under certain conditions not all products created in ConnectWise Manage are accessible for mapping when using the billing function.
  • Company data cannot be collected from ConnectWise Manage due to the following error: "An item with the same key has already been added".
  • Companies with multiple types cannot be collected from ConnectWise Manage.

Solution

To install the cumulative patch 1824:

1. Back up the VAC database.
2. Log off VAC Web UI.
3. Execute VAC.ApplicationServer.x64_2.0.2.1824.msp as administrator on the VAC server, or run this command as administrator: 
msiexec /update c:\VAC.ApplicationServer.x64_2.0.2.1824.msp /l*v C:\ProgramData\Veeam\Setup\Temp\VACApplicationServerSetup.txt
4. Execute VAC.ConnectorService.x64_1.0.0.274.msp as administrator on the VAC server, or run this command as administrator: 
msiexec /update c:\VAC.ConnectorService.x64_1.0.0.274.msp /l*v C:\ProgramData\Veeam\Setup\Temp\VACConnectorServiceSetup.txt
5. Execute VAC.ConnectorWebUI.x64_1.0.0.274.msp as administrator on the VAC server, or run this command as administrator: 
msiexec /update c:\VAC.ConnectorWebUI.x64_1.0.0.274.msp /l*v C:\ProgramData\Veeam\Setup\Temp\VACConnectorWebUI.txt
6. Log in to VAC Web UI.

More Information

[[DOWNLOAD|DOWNLOAD CUMULATIVE PATCH|https://www.veeam.com/download_add_packs/availability-console/kb2781/]]

MD5 checksum for KB2781.zip is 41626c64c4f0d5bc47ab67ead105d1b3

Should you have any questions, contact Veeam Support.