BACKUP

CockroachDB's BACKUP statement allows you to create full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp.

You can back up a full cluster, which includes all of its databases, tables, and views, along with relevant system metadata such as license keys and zone configurations.

You can also back up:

  • An individual database, which includes all of its tables and views.
  • An individual table, which includes its indexes and views.

    BACKUP only backs up entire tables; it does not support backing up subsets of a table.

Because CockroachDB is designed with high fault tolerance, these backups are intended primarily for disaster recovery (i.e., if your cluster loses a majority of its nodes) through RESTORE. Isolated issues (such as small-scale node outages) do not require any intervention. You can check that backups in external storage are valid by using a backup validation command.

To view the contents of a backup created with the BACKUP statement, use SHOW BACKUP.

Tip:

We recommend using scheduled backups to automate daily backups of your cluster.
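
For example, a minimal daily schedule might look like the following (a sketch; the schedule label daily_backup is a placeholder, and the external connection backup_s3 matches the one used in the examples later on this page):

CREATE SCHEDULE daily_backup FOR BACKUP INTO 'external://backup_s3' RECURRING '@daily' FULL BACKUP ALWAYS;

See CREATE SCHEDULE FOR BACKUP for the full syntax and scheduling options.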

Warning:

The BACKUP ... TO and RESTORE ... FROM syntax is deprecated as of v22.1 and will be removed in a future release.

We recommend using the BACKUP ... INTO {collectionURI} syntax, which creates or adds to a backup collection in your storage location. For restoring backups, we recommend using RESTORE FROM {backup} IN {collectionURI} with {backup} being LATEST or a specific subdirectory.
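
For example, with the recommended syntax, a backup and a restore of the latest backup in its collection look like the following (assuming an external connection named backup_s3, as used in the examples later on this page):

BACKUP INTO 'external://backup_s3';
RESTORE FROM LATEST IN 'external://backup_s3';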

For guidance on the syntax for backups and restores, see the BACKUP and RESTORE examples.

Considerations

  • Full cluster backups include license keys. When you restore a full cluster backup that includes a license, the license is also restored.
  • Zone configurations present on the destination cluster prior to a restore will be overwritten during a cluster restore with the zone configurations from the backed up cluster. If there were no customized zone configurations on the cluster when the backup was taken, then after the restore the destination cluster will use the zone configuration from the RANGE DEFAULT configuration.
  • You cannot restore a backup of a multi-region database into a single-region database.
  • Exclude a table's row data from a backup using the exclude_data_from_backup parameter.
  • BACKUP is a blocking statement. To run a backup job asynchronously, use the DETACHED option. See the options below.

Storage considerations

  • HTTP storage is not supported for BACKUP and RESTORE.
  • Modifying backup files in the storage location could invalidate a backup, and therefore, prevent a restore. In v22.1 and later, we recommend enabling object locking in your cloud storage bucket.
  • While Cockroach Labs actively tests Amazon S3, Google Cloud Storage, and Azure Storage, we do not test S3-compatible services (e.g., MinIO, Red Hat Ceph).

Required privileges

Note:

Starting in v22.2, CockroachDB introduces a new system-level privilege model that provides finer control over a user's privilege to work with the database, including taking backups.

There is continued support for the legacy privilege model for backups in v22.2; however, it will be removed in a future release of CockroachDB. We recommend implementing the new privilege model described in this section for all new and existing backups.

You can grant the BACKUP privilege to a user or role depending on the type of backup:

Backup Privilege
Cluster Grant a user the BACKUP system-level privilege. For example, GRANT SYSTEM BACKUP TO user;.
Database Grant a user the BACKUP privilege on the target database. For example, GRANT BACKUP ON DATABASE test_db TO user;.
Table Grant a user the BACKUP privilege at the table level. This gives the user the privilege to back up the schema and all user-defined types that are associated with the table. For example, GRANT BACKUP ON TABLE test_db.table TO user;.

The listed privileges do not cascade to objects lower in the schema tree. For example, if you are granted database-level BACKUP privileges, this does not give you the privilege to back up a table. If you need the BACKUP privilege on a database to apply to all newly created tables in that database, use DEFAULT PRIVILEGES. You can add BACKUP to the user or role's default privileges with ALTER DEFAULT PRIVILEGES.
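
For example, the following sketch grants BACKUP on all future tables created by the current role in the current database (user is a placeholder name):

ALTER DEFAULT PRIVILEGES GRANT BACKUP ON TABLES TO user;  -- applies to tables created by the current role in the current database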

Note:

You can grant the BACKUP privilege to a user or role without the SELECT privilege on a table. As a result, these users will be able to take backups, but they will not be able to run a SELECT query on that data directly. However, these users could still read this data indirectly, by restoring it from any backups they produce.

Members of the admin role can run all three types of backups (cluster, database, and table) without the need to grant a specific BACKUP privilege. However, we recommend using the BACKUP privilege model to create users or roles and grant them BACKUP privileges as necessary for stronger access control.

Privileges for managing a backup job

To manage a backup job with PAUSE JOB, RESUME JOB, or CANCEL JOB, users must have at least one of the following:

To view a backup job with SHOW JOB, users must have at least one of the following:

See GRANT for detail on granting privileges to a role or user.

Required privileges using the legacy privilege model

The following details the legacy privilege model that CockroachDB supports in v22.2 and earlier. Support for this privilege model will be removed in a future release of CockroachDB:

  • Full cluster backups can only be run by members of the admin role. By default, the root user belongs to the admin role.
  • For all other backups, the user must have read access on all objects being backed up. Database backups require CONNECT privileges, and table backups require SELECT privileges. Backups of user-defined schemas, or backups containing user-defined types, require USAGE privileges. See the example grants below.
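
For example, under the legacy model, the following grants would allow a user to take database and table backups, respectively (test_db, table, and user are placeholder names):

GRANT CONNECT ON DATABASE test_db TO user;
GRANT SELECT ON TABLE test_db.table TO user;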

See the Required privileges section for the updated privilege model.

Destination privileges

You can grant a user the EXTERNALIOIMPLICITACCESS system-level privilege.

Either the EXTERNALIOIMPLICITACCESS system-level privilege or the admin role is required for the following scenarios:

No special privilege is required for:

  • Interacting with an Amazon S3 or Google Cloud Storage resource using SPECIFIED credentials. Azure Storage is always SPECIFIED by default.
  • Using Userfile storage.

We recommend using cloud storage. You also need to ensure that the permissions at your storage destination are configured for the operation. See Storage Permissions for a list of the necessary permissions that each bulk operation requires.

Warning:

While Cockroach Labs actively tests Amazon S3, Google Cloud Storage, and Azure Storage, we do not test S3-compatible services (e.g., MinIO, Red Hat Ceph).

Synopsis

BACKUP [ TABLE table_pattern [ , ... ] | DATABASE database_name [ , ... ] ]
    INTO [ { subdirectory | LATEST } IN ] collectionURI [ ( localityURI [ , ... ] ) ]
    [ AS OF SYSTEM TIME timestamp ]
    [ WITH backup_options [ , ... ] | WITH OPTIONS ( backup_options [ , ... ] ) ]

Parameters

CockroachDB stores full backups in a backup collection. Each full backup in a collection may also have incremental backups. For more detail on this, see Backup collections.

Parameter Description
targets Back up the listed targets.
subdirectory The name of the specific backup (e.g., 2021/03/23-213101.37) in the collection to which you want to add an incremental backup. To view available backup subdirectories, use SHOW BACKUPS IN collectionURI. If the backup subdirectory is not provided, incremental backups will be stored in the default /incrementals directory at the root of the collection URI. See the Create incremental backups example.

Warning: If you use an arbitrary STRING as the subdirectory, a new full backup will be created, but it will never be shown in SHOW BACKUPS IN. We do not recommend using arbitrary strings as subdirectory names.
LATEST Append an incremental backup to the latest completed full backup's subdirectory.
collectionURI The URI where you want to store the backup. (Or, the default locality for a locality-aware backup.) The storage URI for each backup collection must be unique. You will encounter an error if you run multiple backup collections to the same storage URI.

For information about this URL structure, see Backup File URLs.
localityURI The URI containing the COCKROACH_LOCALITY parameter for a non-default locality that is part of a single locality-aware backup.
timestamp Back up data as it existed as of timestamp. The timestamp must be more recent than your data's garbage collection TTL (which is controlled by the gc.ttlseconds replication zone variable).
backup_options Control the backup behavior with a comma-separated list of these options.

Targets

Target Description
N/A Back up the cluster. For an example of a full cluster backup, refer to Back up a cluster.
DATABASE {database_name} [, ...] The names of the databases to back up. A database backup includes all tables and views in the database. Refer to Back Up a Database.
TABLE {table_name} [, ...] The names of the tables and views to back up. Refer to Back Up a Table or View.

Query parameters

Query parameter Value Description
ASSUME_ROLE STRING Pass the ARN of the role to assume. Use in combination with AUTH=implicit or specified.
New in v23.2: external_id: Use as a value to ASSUME_ROLE to specify the external ID for third-party access to your S3 bucket. Refer to Cloud Storage Authentication for setup details.
AUTH STRING The authentication parameter can define either specified (default) or implicit authentication. To use specified authentication, pass your Service Account credentials with the URI. To use implicit authentication, configure these credentials via an environment variable. Refer to the Cloud Storage Authentication page for examples of each of these.
COCKROACH_LOCALITY Key-value pairs Define a locality-aware backup with a list of URIs using COCKROACH_LOCALITY. The value is either default or a single locality key-value pair, such as region=us-east. At least one COCKROACH_LOCALITY must be the default per locality-aware backup. Refer to Take and Restore Locality-aware Backups for more detail and examples.
S3_STORAGE_CLASS STRING Specify the Amazon S3 storage class for files created by the backup job. Refer to Back up with an S3 storage class for the available classes and an example.

If you are creating an external connection with BACKUP query parameters or authentication parameters, you must pass them in uppercase; otherwise, you will receive an unknown query parameters error.
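
For example, a sketch of creating an external connection with uppercase authentication parameters (the bucket and credential values are placeholders):

CREATE EXTERNAL CONNECTION backup_s3 AS 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}';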

Options

Option Value Description
revision_history BOOL / None Create a backup with full revision history, which records every change made to the cluster within the garbage collection period leading up to and including the given timestamp.

You can specify a backup with revision history without any value, e.g., WITH revision_history. Or, you can explicitly define WITH revision_history = 'true' / 'false'. revision_history defaults to true when used with BACKUP or CREATE SCHEDULE FOR BACKUP. A value is required when using ALTER BACKUP SCHEDULE.
encryption_passphrase STRING The passphrase used to encrypt the files (BACKUP manifest and data files) that the BACKUP statement generates. This same passphrase is needed to decrypt the file when it is used to restore and to list the contents of the backup when using SHOW BACKUP. There is no practical limit on the length of the passphrase.
detached BOOL / None When a backup runs in detached mode, it will execute asynchronously. The job ID will be returned after the backup job creation completes. Note that with detached specified, further job information and the job completion status will not be returned. For more on the differences between the returned job data, see the example. To check on the job status, use the SHOW JOBS statement. Backups running on a schedule have the detached option applied implicitly.

To run a backup within a transaction, use the detached option.
EXECUTION LOCALITY Key-value pairs Restricts the execution of the backup to nodes that match the defined locality filter requirements. For example, WITH EXECUTION LOCALITY = 'region=us-west-1a,cloud=aws'.

Refer to Take Locality-restricted backups for usage and reference detail.
kms STRING The URI of the cryptographic key stored in a key management service (KMS), or a comma-separated list of key URIs, used to take and restore encrypted backups. Refer to URI Formats. The key or keys are used to encrypt the manifest and data files that the BACKUP statement generates and to decrypt them during a restore operation, and to list the contents of the backup when using SHOW BACKUP.

AWS KMS, Google Cloud KMS, and Azure Key Vault are supported.
incremental_location STRING Create an incremental backup in a different location than the default incremental backup location.

WITH incremental_location = 'explicit_incrementals_URI'

See Incremental backups with explicitly specified destinations for usage.
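
Several of these options can be combined in a single statement. For example, the following sketch takes a backup with full revision history and runs the job asynchronously (assuming an external connection named backup_s3, as used in the examples later on this page):

BACKUP INTO 'external://backup_s3' AS OF SYSTEM TIME '-10s' WITH revision_history, detached;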

Backup file URLs

CockroachDB uses the URL provided to construct a secure API call to the service you specify. The URL structure depends on the type of file storage you are using. For more information, see the following:

You can create an external connection to represent an external storage or sink URI. This allows you to specify the external connection's name in statements rather than the provider-specific URI. For detail on using external connections, see the CREATE EXTERNAL CONNECTION page.

Tip:

Backups support cloud object locking and Amazon S3 storage classes. For more detail, see Additional cloud storage feature support.

Functional details

Object dependencies

Dependent objects must be backed up at the same time as the objects they depend on. When you back up a table, it will not include any dependent tables, views, or sequences.

For example, if you back up view v that depends on table t, it will only back up v, not t. When you try to restore v, the restore will fail because the referenced table is not present in the backup.

Alternatively, you can pass a skip option with RESTORE to skip the dependency instead:

Dependent object Depends on Skip option
Table with foreign key constraints The table it REFERENCES. skip_missing_foreign_keys
Table with a sequence The sequence. skip_missing_sequences
Views The tables used in the view's SELECT statement. skip_missing_views

We recommend treating tables that have foreign keys, contribute to views, or use sequences or user-defined types as a single unit together with their dependencies. While you can restore individual tables, you may find that backing up and restoring at the database level is more convenient.

Note:

To exclude a table's row data from a backup, use the exclude_data_from_backup parameter with CREATE TABLE or ALTER TABLE.

For more detail, see the Exclude a table's data from backups example.
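
For example, a sketch that marks a table's row data for exclusion from future backups (movr.user_promo_codes is a placeholder table name):

ALTER TABLE movr.user_promo_codes SET (exclude_data_from_backup = true);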

Users and privileges

The system.users table stores your users and their passwords. To restore your users and privilege grants, take a cluster backup and restore the cluster to a fresh cluster with no user data. You can also back up the system.users table, and then use this procedure.

Performance

The backup job process minimizes its impact on the cluster's performance with:

  • New in v23.2: Even distribution of work to a node that has a replica of the range to back up. If a locality filter is specified, work is distributed to the node, among those matching the filter, that has the most locality tiers in common with a node holding a replica. Refer to the Backup Architecture page for a detailed explanation of how a backup job works.
  • Integration with the elastic CPU limiter by default, which helps to minimize the impact backups have on foreground traffic. This integration limits the amount of CPU time used by a backup, thereby allowing foreground SQL traffic to continue largely unaffected.

A backup job, like any read, cannot export a range if the range contains an unresolved intent. While it is important to minimize the impact of bulk, background jobs like BACKUP on your foreground traffic, it is still crucial for backups to finish (in order to maintain your recovery point objective (RPO)).

Unlike a normal read transaction that will block until any uncommitted writes it encounters are resolved, a backup job's read request is allotted a fixed amount of CPU time to read the required keys and values. Once the backup's read request has exhausted this time, the backup resumes when it is allocated more CPU time. This process allows other requests, such as foreground SQL traffic, to continue almost unaffected, because there is a cap on how much CPU a backup job will take.

You can monitor your cluster's admission control system on the Overload dashboard. To monitor your backup jobs, refer to the Backup and Restore Monitoring page.

For a more technical explanation of elastic CPU, refer to the Rubbing control theory on the Go scheduler blog post.

We recommend always starting backups with a specific timestamp at least 10 seconds in the past. For example:

BACKUP ... AS OF SYSTEM TIME '-10s';

This improves performance by decreasing the likelihood that the BACKUP will be retried because it contends with other statements/transactions. However, because AS OF SYSTEM TIME returns historical data, your reads might be stale. Taking backups with AS OF SYSTEM TIME '-10s' is a best practice to reduce the number of still-running transactions you may encounter, because a backup will eventually push contending transactions to a higher timestamp, which causes those transactions to retry.

A backup job will initially ask individual ranges to back up but to skip any range on which it encounters an intent. Any skipped range is placed at the end of the queue. When a backup job has completed its initial pass and is revisiting ranges, it will ask any range that did not resolve within the given time limit (default 1 minute) to attempt to resolve any intents that it encounters and not to skip. Additionally, the backup's read transaction priority is eventually set to high, which results in contending transactions being pushed and retried at a higher timestamp.

If a backup job encounters too many retryable errors, it will enter a failed state with the most recent error, which allows subsequent backups the chance to succeed. Refer to the Backup and Restore Monitoring page for metrics to track backup failures.

Backup performance configuration

Cluster settings provide a means to tune a CockroachDB cluster. The following cluster settings are helpful for configuring backup files and performance:

bulkio.backup.file_size

Set a target for the amount of backup data written to each backup file. This is the maximum target size the backup will reach, but it is possible files of a smaller size are created during the backup job.

Note that if you lower bulkio.backup.file_size below the default, it will cause the backup job to create many small SST files, which could impact a restore job’s performance because it will need to keep track of so many small files.

Default: 128 MiB
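
For example, to raise the target file size (the value shown is illustrative, not a recommendation):

SET CLUSTER SETTING bulkio.backup.file_size = '256MiB';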

cloudstorage.azure.concurrent_upload_buffers

Improve the speed of backups to Azure Storage by increasing cloudstorage.azure.concurrent_upload_buffers to 3. This setting configures the number of concurrent buffers used during file uploads to Azure Storage. Note that the higher this setting, the more data is held in memory, which can increase the risk of OOMs if there is not sufficient memory on each node.

Default: 1
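
For example:

SET CLUSTER SETTING cloudstorage.azure.concurrent_upload_buffers = 3;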

Cluster settings for cloud storage

The following cluster settings limit the read and write rates to cloud storage. You may choose to use these settings if your backups overwhelm the network. These settings limit throughput, and as a result, backups and changefeeds will take longer. The designated <provider>s include s3, gs, and azure.

cloudstorage.<provider>.write.node_rate_limit

Limit the number of bytes per second per node across operations writing to the designated cloud storage provider if non-zero.

Default: unlimited, 0 B

cloudstorage.<provider>.write.node_burst_limit

Limit the number of bytes per second per node handled concurrently across operations writing to the designated cloud storage provider if non-zero.

Default: unlimited, 0 B

cloudstorage.<provider>.read.node_rate_limit

Limit the number of bytes per second per node across operations reading from the designated cloud storage provider if non-zero.

Default: unlimited, 0 B

cloudstorage.<provider>.read.node_burst_limit

Limit the number of bytes per second per node handled concurrently across operations reading from the designated cloud storage provider if non-zero.

Default: unlimited, 0 B
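
For example, a sketch that caps per-node write throughput to Amazon S3 (the value is illustrative):

SET CLUSTER SETTING cloudstorage.s3.write.node_rate_limit = '100MiB';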

For a complete list, including all cluster settings related to backups, see the Cluster Settings page.

Viewing and controlling backup jobs

After CockroachDB successfully initiates a backup, it registers the backup as a job, and you can do the following:

Action SQL Statement
View the backup status SHOW JOBS
Pause the backup PAUSE JOB
Resume the backup RESUME JOB
Cancel the backup CANCEL JOB

You can also visit the Jobs page of the DB Console to view job details. The BACKUP statement will return when the backup is finished or if it encounters an error.
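
For example, to check on and control a running backup job (the job ID shown is a placeholder, taken from the asynchronous backup example later on this page):

SHOW JOBS;                        -- view the status of all jobs, including backups
PAUSE JOB 592786066399264769;     -- placeholder job ID
RESUME JOB 592786066399264769;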

Note:

The presence of the BACKUP MANIFEST file in the backup subdirectory is an indicator that the backup job completed successfully.

Examples

Per our guidance in the Performance section, we recommend starting backups from a time at least 10 seconds in the past using AS OF SYSTEM TIME.

The examples in this section use one of the following storage URIs:

  • External connections, which allow you to represent an external storage or sink URI. You can then specify the external connection's name in statements rather than the provider-specific URI. For detail on using external connections, see the CREATE EXTERNAL CONNECTION page.
  • Amazon S3 connection strings with the default AUTH=specified parameter. For guidance on using AUTH=implicit authentication with Amazon S3 buckets instead, read Cloud Storage Authentication.

For guidance on connecting to other storage options or using other authentication parameters instead, read Use Cloud Storage.

If you need to limit the control specific users have over your storage buckets, see Assume role authentication for setup instructions.

Note:

The BACKUP ... TO syntax is deprecated as of v22.1 and will be removed in a future release.

Cockroach Labs recommends using the BACKUP ... INTO {collectionURI} syntax shown in the following examples.

Back up a cluster

To take a full backup of a cluster:

BACKUP INTO 'external://backup_s3' AS OF SYSTEM TIME '-10s';

You will encounter an error if you run multiple backup collections to the same storage URI. Each collection's URI must be unique.

Back up a database

To take a full backup of a single database:

BACKUP DATABASE bank INTO 'external://backup_s3' AS OF SYSTEM TIME '-10s';

To take a full backup of multiple databases:

BACKUP DATABASE bank, employees INTO 'external://backup_s3' AS OF SYSTEM TIME '-10s';

Back up a table or view

To take a full backup of a single table or view:

BACKUP bank.customers INTO 'external://backup_s3' AS OF SYSTEM TIME '-10s';

To take a full backup of multiple tables:

BACKUP bank.customers, bank.accounts INTO 'external://backup_s3' AS OF SYSTEM TIME '-10s';

Back up all tables in a schema

To back up all tables in a schema, use a wildcard (*) with the schema name:

BACKUP test_schema.* INTO 'external://backup_s3' AS OF SYSTEM TIME '-10s';

Alternatively, use a fully qualified name: database.schema.*.

With this syntax, schemas are resolved before databases: test_object.* will first resolve to a schema named test_object within the current database before matching a database named test_object.

If a database and schema have the same name, such as bank.bank, running BACKUP bank.* will resolve the schema first, and all tables within that schema will be backed up. However, if you run this from a different database that does not contain a bank schema, all tables in the bank database will be backed up.

See Name Resolution for more details on how naming hierarchy and name resolution work in CockroachDB.

Create incremental backups

When a BACKUP statement specifies an existing subdirectory in the collection, explicitly or via the LATEST keyword, an incremental backup will be added to the default /incrementals directory at the root of the collection storage location.

To take an incremental backup using the LATEST keyword:

BACKUP INTO LATEST IN 'external://backup_s3' AS OF SYSTEM TIME '-10s';

To store the backup in an existing subdirectory in the collection:

BACKUP INTO {'subdirectory'} IN 'external://backup_s3' AS OF SYSTEM TIME '-10s';
Note:

If you intend to take a full backup, we recommend running BACKUP INTO {collectionURI} without specifying a subdirectory.

To explicitly control where you store your incremental backups, use the incremental_location option. For more detail, see this example demonstrating the incremental_location option.
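
For example, a sketch that appends an incremental backup while writing it to a separate location (backup_s3_incrementals is a placeholder external connection name):

BACKUP INTO LATEST IN 'external://backup_s3' AS OF SYSTEM TIME '-10s' WITH incremental_location = 'external://backup_s3_incrementals';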

Run a backup asynchronously

Use the DETACHED option to execute the backup job asynchronously:

BACKUP INTO 'external://backup_s3' AS OF SYSTEM TIME '-10s' WITH DETACHED;

The job ID is returned after the backup job creation completes:

        job_id
----------------------
  592786066399264769
(1 row)

Without the DETACHED option, BACKUP will block the SQL connection until the job completes. Once finished, the job status and more detailed job data is returned:

job_id             |  status   | fraction_completed | rows | index_entries | bytes
-------------------+-----------+--------------------+------+---------------+--------
652471804772712449 | succeeded |                  1 |   50 |             0 |  4911
(1 row)

Back up with an S3 storage class

To associate your backup objects with a specific storage class in your Amazon S3 bucket, use the S3_STORAGE_CLASS parameter with the class. For example, the following S3 connection URI specifies the INTELLIGENT_TIERING storage class:

BACKUP DATABASE movr INTO 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&S3_STORAGE_CLASS=INTELLIGENT_TIERING' AS OF SYSTEM TIME '-10s';

To use an external connection URI to back up to cloud storage with an associated S3 storage class, you need to include the S3_STORAGE_CLASS parameter when you create the external connection.
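
For example, a sketch of an external connection whose URI includes the storage class (backup_s3_tiered is a placeholder name; bucket and credential values are placeholders):

CREATE EXTERNAL CONNECTION backup_s3_tiered AS 's3://{BUCKET NAME}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&S3_STORAGE_CLASS=INTELLIGENT_TIERING';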

Use the parameter to set one of the storage classes listed in Amazon's documentation. For more general usage information, see Amazon's Using Amazon S3 storage classes documentation.

Warning:

Incremental backups are not compatible with the S3 Glacier Flexible Retrieval or Glacier Deep Archive storage classes. Incremental backups require reading previous backups on an ad-hoc basis, which is not possible with backup files already in Glacier Flexible Retrieval or Glacier Deep Archive. These storage classes do not allow immediate access to an S3 object without first restoring the archived object to its S3 bucket.

Refer to Incremental backups and storage classes for more detail.

Advanced examples

For examples of advanced BACKUP and RESTORE use cases, see:

See also

