Setup-Specific Settings for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud
Use this option to send your backup to two destinations at once (e.g., on-site and off-site) and avoid repeated processing of data.
With this option selected, the backup service first processes your data and moves it to network-attached storage (NAS) or similar local storage, from where the already processed backup makes its way to the cloud storage.
This allows you to avoid repeated processing of your data when it needs to be compressed and/or encrypted in both target storages: the backup service performs these operations only once, before saving the backup to the local storage.
For this reason, when you only need to compress and/or encrypt your backup in a cloud, and not in a local storage, consider creating two separate one-way backup tasks instead of selecting the hybrid backup option.
At present, it is not possible to switch existing backup plans to Hybrid mode; you need to create a hybrid backup plan from scratch.
You can only apply a single set of retention policies to both the local and the cloud storage at a time.
In general, creating any backup starts with uploading a full copy of your data to the storage; this copy serves as a reference point for subsequent incremental backups. However, it is unreasonable to create and upload a full copy of every file each time you need to propagate changes made to your locally stored data to the backup stored in the cloud. This is why you should consider the other kinds of backup described below.
The way in which a full backup processes your data depends on the kind of data that you back up:
When creating an image-based backup, a full backup uploads a complete copy of your data to the target storage (and creates a new backup version) each time the backup plan is executed.
When creating a file-level backup, the backup service re-uploads an entire file only if it has changed since the last backup date (as with an incremental backup, a file is not re-uploaded if its modification date is earlier than the last backup date).
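The file-level rule above can be sketched as a simple modification-time check. This is a minimal illustration, not the vendor's actual implementation; the helper name and parameters are hypothetical.

```python
import os


def needs_reupload(path: str, last_backup_time: float) -> bool:
    """File-level full backup rule (illustrative): re-upload the entire
    file only if it was modified after the last backup run."""
    return os.path.getmtime(path) > last_backup_time
```

A file whose modification timestamp is older than the last backup run is simply skipped, which is why unchanged files cost nothing on subsequent runs.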
Synthetic Full Backup
When choosing Amazon S3 as your target storage and enabling block-level backup, you can enable a synthetic full backup as well.
This feature is only supported for image-based backups.
A synthetic full backup is a combination of full and block-level backups. Unlike a full backup, a synthetic full backup does not upload a complete version to the target storage on each run. Instead, it assembles a new revision directly in the target storage by combining already existing blocks from previous revisions with newly uploaded blocks. This lets you upload less data to the target storage and speeds up the upload process.
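Conceptually, the assembly step reuses references to unchanged blocks and overlays only the newly uploaded ones. The sketch below models a revision as a mapping from block index to block identifier; the function name and data shapes are hypothetical, not the product's API.

```python
def assemble_synthetic_full(previous_revision: dict, changed_blocks: dict) -> dict:
    """Assemble a new full revision in the target storage (illustrative):
    start from references to the blocks of the previous revision and
    overlay only the blocks that were uploaded in this run."""
    new_revision = dict(previous_revision)  # reuse existing blocks by reference
    new_revision.update(changed_blocks)     # overlay newly uploaded blocks
    return new_revision
```

Only the entries in `changed_blocks` travel over the network; everything else is combined server-side, which is where the bandwidth saving comes from.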
With this option enabled, the first run of your backup plan will automatically force a full backup (synthetic backups will be executed during subsequent runs of this backup plan).
As opposed to a full backup, which uploads a complete copy of each file to the storage, a block-level backup uploads a full copy of your data only during the first execution of the backup plan and when explicitly forced to do so. In all other cases, the backup service uploads only the blocks (parts of a file) that were modified since the last backup date, which can dramatically decrease the processing time required to complete your backup routine and reduce the required storage space. It also differs from an incremental backup, which re-uploads a full copy of each file that has been modified since the last backup, whereas a block-level backup uploads only the modified portions of those files.
The backup service decides whether a file needs to be updated based on its modification date. If a file was modified after it was uploaded to the backup storage, the backup service breaks the file into blocks and checks whether the hash of each block differs from that of the corresponding block in the latest full version of this file stored in the backup. On finding a mismatch in block hashes, the corresponding portion of the file is uploaded to the storage.
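The block-comparison step can be sketched as follows. This is a simplified illustration assuming fixed-size blocks and SHA-256 hashes; the real service's block format and hash algorithm may differ, and the function name is hypothetical.

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # example block size; see the size rules below


def changed_block_offsets(path: str, stored_hashes: dict) -> list:
    """Split the file into fixed-size blocks, hash each one, and return
    the byte offsets of blocks whose hash differs from the hash recorded
    for the latest full version (illustrative sketch)."""
    changed = []
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if stored_hashes.get(index) != digest:
                changed.append(index * BLOCK_SIZE)
            index += 1
    return changed
```

Only the offsets returned here would be uploaded; blocks whose hashes match the stored full version are skipped entirely.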
The block size varies for different kinds of backup data. When using a block-level file backup, the block size depends on the file size:
The block size equals 128 kB if the file size is less than 512 GB.
The block size equals 256 kB if the file size is less than 1024 GB.
The block size equals 512 kB if the file size exceeds 1024 GB.
The block size equals 1 MB for image-based, Microsoft Exchange, and VMware backups.
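The size rules above can be summarized in one small helper. This is an assumption-laden sketch (the function name is made up, and the behavior at exactly 1024 GB is not specified by the text, so the sketch places it in the largest bucket).

```python
KB = 1024
GB = 1024 ** 3


def block_size_for(kind: str, file_size: int) -> int:
    """Return the block size in bytes per the rules above (illustrative).
    Image-based, Microsoft Exchange, and VMware backups use 1 MB blocks."""
    if kind != "file":
        return 1024 * KB        # 1 MB for image-based/Exchange/VMware
    if file_size < 512 * GB:
        return 128 * KB
    if file_size < 1024 * GB:
        return 256 * KB
    return 512 * KB             # files exceeding 1024 GB
```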
When performing a block-level backup, you can also schedule periodic full backups so that outdated file versions can be purged according to the current retention policy settings. The backup service cannot delete a previous file version until a newer full backup is uploaded to the storage, because that version is needed to restore the file.
You can manually force a full backup at any time by clicking the corresponding command under the backup plan entry on the Backup Plans tab of your application.
Cloud Backup uses different approaches for processing various kinds of backups. For this reason, block-level backup may become unavailable when using certain storage providers as a destination for your backups. See Feature Comparison by Storage Providers for more information.