Because of CockroachDB's multi-active availability design, you can perform a "rolling upgrade" of your CockroachDB cluster. This means that you can upgrade nodes one at a time without interrupting the cluster's overall health and operations.
This page describes how to upgrade to the latest v24.1 release, v24.1.8. To upgrade CockroachDB on Kubernetes, refer to single-cluster or multi-cluster instead.
Terminology
Before upgrading, review the CockroachDB release terminology:
- A new major release is performed multiple times per year. The major version number indicates the year of release followed by the release number, starting with 1. For example, the latest major release is v24.3.
- Each supported major release is maintained across patch releases that contain improvements including performance or security enhancements and bug fixes. Each patch release increments the major version number with its corresponding patch number. For example, patch releases of v24.3 use the format v24.3.x.
- All major and patch releases are suitable for production environments, and are therefore considered "production releases". For example, the latest production release is v24.3.1.
- Prior to an upcoming major release, alpha, beta, and release candidate (RC) binaries are made available for users who need early access to a feature before it is available in a production release. These releases append the terms `alpha`, `beta`, or `rc` to the version number. These "testing releases" are not suitable for production environments and are not eligible for support or uptime SLA commitments. For more information, refer to the Release Support Policy.
There are no "minor releases" of CockroachDB.
Step 1. Verify that you can upgrade
In CockroachDB v22.2.x and above, a cluster that is upgraded to an alpha binary of CockroachDB, or to a binary that was manually built from the `master` branch, cannot subsequently be upgraded to a production release.
Run `cockroach sql` against any node in the cluster to open the SQL shell, then check your current cluster version:
> SHOW CLUSTER SETTING version;
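For example, on a cluster that has not yet been upgraded from v23.2, the output is similar to the following:

```
  version
-----------
  23.2
```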
To upgrade to v24.1.8, you must be running either:
- Any earlier v24.1 release: v24.1.0-alpha.1 to v24.1.7.
- A v23.2 production release: v23.2.0 to v23.2.17.
If you are running any other version, take the following steps before continuing on this page:
| Version | Action(s) before upgrading to any v24.1 release |
|---|---|
| Pre-v24.1 testing release | Upgrade to a corresponding production release, then upgrade through each subsequent major release, ending with a v23.2 production release. |
| Pre-v23.2 production release | Upgrade through each subsequent major release, ending with a v23.2 production release. |
| v23.2 testing release | Upgrade to a v23.2 production release. |
When you are ready to upgrade to v24.1.8, continue to step 2.
Step 2. Prepare to upgrade
Before starting the upgrade, complete the following steps.
Ensure you have a valid license key
To perform major version upgrades, you must have a valid license key.
Patch version upgrades can be performed without a valid license key, with the following limitations:
- The cluster will run without limitations for a specified grace period. During that time, alerts are displayed that the cluster needs a valid license key. For more information, refer to the Licensing FAQs.
- The cluster is throttled at the end of the grace period if no valid license key is added to the cluster before then.
If you have an Enterprise Free or Enterprise Trial license, you must enable telemetry using the `diagnostics.reporting.enabled` cluster setting, as shown below, in order to finalize a major version upgrade:
SET CLUSTER SETTING diagnostics.reporting.enabled = true;
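To confirm the setting took effect, you can read it back:

```
SHOW CLUSTER SETTING diagnostics.reporting.enabled;
```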
If a cluster with an Enterprise Free or Enterprise Trial license is upgraded across patch versions and does not meet telemetry requirements:
- The cluster will run without limitations for a 7-day grace period. During that time, alerts are displayed that the cluster needs to send telemetry.
- The cluster is throttled if telemetry is not received before the end of the grace period.
For more information, refer to the Licensing FAQs.
If you want to stay on the previous version, you can roll back the upgrade before finalization.
Review breaking changes
Review the backward-incompatible changes, deprecated features, and key cluster setting changes in v24.1. If any affect your deployment, make the necessary changes before starting the rolling upgrade to v24.1.
Check load balancing
Make sure your cluster is behind a load balancer, or your clients are configured to talk to multiple nodes. If your application communicates with a single node, stopping that node to upgrade its CockroachDB binary will cause your application to fail.
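For example, if you use HAProxy, CockroachDB can generate a starter load balancer configuration with the built-in `cockroach gen haproxy` command. This is a sketch for a secure cluster, with `{address of any node}` as a placeholder for your own deployment:

```
cockroach gen haproxy \
  --certs-dir=certs \
  --host={address of any node}
```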
Check cluster health
Verify the overall health of your cluster using the DB Console:
- Under Node Status, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as `SUSPECT` or `DEAD`, identify why the nodes are offline and either restart them or decommission them before beginning your upgrade. If there are `DEAD` and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually).
- Under Replication Status, make sure there are `0` under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. Therefore, it's important to identify and resolve the cause of range under-replication and/or unavailability before beginning your upgrade. (A command-line alternative is sketched after this list.)
- In the Node List, make sure all nodes are on the same version. If any nodes are behind, upgrade them to the cluster's current version first, and then start this process over.
- In the Metrics dashboards, make sure CPU, memory, and storage capacity are within acceptable values for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. If any of these metrics is above healthy limits, consider adding nodes to your cluster before beginning your upgrade.
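If you prefer the command line, a quick sketch of the same replication check uses `cockroach node status --ranges`, which reports per-node range health (the flags below assume a secure cluster):

```
cockroach node status --ranges \
  --certs-dir=certs \
  --host={address of any node}
```

The `ranges_underreplicated` and `ranges_unavailable` columns should all be `0` before you proceed.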
Check decommissioned nodes
If your cluster contains partially-decommissioned nodes, they will block an upgrade attempt.
To check the status of decommissioned nodes, run the `cockroach node status --decommission` command:

```
cockroach node status --decommission
```

In the output, verify that the value of the `membership` field of each node is `decommissioned`. If any node's `membership` value is `decommissioning`, that node is not fully decommissioned. If any node is not fully decommissioned, try the following:

- First, reissue the decommission command. The second command typically succeeds within a few minutes.
- If the second decommission command does not succeed, recommission and then decommission it again, as sketched below. Before continuing the upgrade, the node must be marked as `decommissioned`.
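As a sketch, assuming the stalled node has ID 4 and the cluster is secure, the sequence looks like this:

```
# Reissue the decommission command for the stalled node (node ID 4 is hypothetical).
cockroach node decommission 4 --certs-dir=certs --host={address of any live node}

# If decommissioning still stalls, recommission and then decommission again.
cockroach node recommission 4 --certs-dir=certs --host={address of any live node}
cockroach node decommission 4 --certs-dir=certs --host={address of any live node}
```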
Back up cluster
Because CockroachDB is designed with high fault tolerance, backups are primarily needed for disaster recovery. However, taking regular backups of your data is an operational best practice. When upgrading to a major release, we recommend taking a backup of your cluster. See our support policy for restoring backups across versions.
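For example, a full cluster backup can be taken from the SQL shell; the storage URI below is a placeholder for your own bucket and credentials:

```
BACKUP INTO 's3://{bucket}/{path}?AUTH=implicit';
```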
Step 3. Decide how the upgrade will be finalized
When upgrading from one major version to another, certain features and performance improvements will be enabled only after finalizing the upgrade. Refer to the release notes for details. Even if no features require finalization, finalization itself is required.
Finalization does not occur for patch upgrades within the same major version.
After a major-version upgrade is finalized, it is no longer possible to roll back to the cluster's previous major version. By default, clusters on CockroachDB Cloud are set to auto-finalize a major-version upgrade as soon as all nodes have been upgraded. For production clusters, Cockroach Labs recommends that you disable auto-finalization so that you can roll back the upgrade if necessary. Otherwise, in the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the previous binary and then restore from one of the backups created prior to performing the upgrade.
Before finalizing the upgrade, monitor the stability and performance of the upgraded cluster. If auto-finalization is disabled, the upgrade is not complete until you have followed all of the instructions in step 4 and step 6.
To disable auto-finalization:
- Start the `cockroach sql` shell against any node in the cluster.
- Find the cluster's current version:

  ```
  SHOW CLUSTER SETTING version;
  ```

- Set the `cluster.preserve_downgrade_option` cluster setting:

  ```
  SET CLUSTER SETTING cluster.preserve_downgrade_option = '{CURRENT_VERSION}';
  ```

  Replace `{CURRENT_VERSION}` with the cluster's current version. It is an error to set it to any other value.
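For example, if you are upgrading from v23.2 (so `SHOW CLUSTER SETTING version` reports `23.2`):

```
SET CLUSTER SETTING cluster.preserve_downgrade_option = '23.2';
```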
Step 4. Perform the rolling upgrade
Cockroach Labs recommends creating scripts to perform these steps instead of performing them manually.
Follow these steps to perform the rolling upgrade. To upgrade CockroachDB on Kubernetes, refer to single-cluster or multi-cluster instead.
For each node in your cluster, complete the following steps. Be sure to upgrade only one node at a time, and wait at least one minute after a node rejoins the cluster to upgrade the next node. Simultaneously upgrading more than one node increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability.
After beginning a major-version upgrade, Cockroach Labs recommends upgrading all nodes as quickly as possible. In a cluster with nodes running different major versions of CockroachDB, a query that is sent to an upgraded node can be distributed only among other upgraded nodes. Data accesses that would otherwise be local may become remote, and the performance of these queries can suffer.
These steps perform an upgrade to the latest v24.1 release, v24.1.8.
Visit What's New in v24.1? and download the CockroachDB v24.1.8 full binary for your architecture.
Extract the archive. In the following instructions, replace `{COCKROACHDB_DIR}` with the path to the extracted archive directory.

If you have a previous version of the `cockroach` binary in your `$PATH`, rename the outdated `cockroach` binary, and then move the new one into its place. If you get a permission error because the `cockroach` binary is located in a system directory, add `sudo` before each command. The binary will be owned by the effective user, which is `root` if you use `sudo`.

```
i="$(which cockroach)"; mv "$i" "$i"_old
cp -i {COCKROACHDB_DIR}/cockroach /usr/local/bin/cockroach
```
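To confirm that the node will run the new binary after its restart, check the version of the `cockroach` binary now in your `$PATH`; the output should report v24.1.8:

```
cockroach version
```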
If a cluster has corrupt descriptors, a major-version upgrade cannot be finalized. Automatic descriptor repair is enabled by default in v24.1. After restarting each cluster node on v24.1, monitor the cluster logs for errors. If a descriptor cannot be repaired automatically, contact support for assistance completing the upgrade. To disable automatic descriptor repair (not generally recommended), set the environment variable `COCKROACH_RUN_FIRST_UPGRADE_PRECONDITION` to `false`.

Start the node so that it can rejoin the cluster.
Without a process manager like `systemd`, re-run the `cockroach start` command that you used to start the node initially, for example:

```
cockroach start \
  --certs-dir=certs \
  --advertise-addr={node address} \
  --join={node1 address},{node2 address},{node3 address}
```

If you are using `systemd` as the process manager, run this command to start the node:

```
systemctl start {systemd config filename}
```
Verify the node has rejoined the cluster through its output to `stdout` or through the DB Console.

If you use `cockroach` in your `$PATH`, you can remove the previous binary:

```
rm /usr/local/bin/cockroach_old
```
If you leave versioned binaries on your servers, you do not need to do anything.
After the node has rejoined the cluster, ensure that the node is ready to accept a SQL connection.
Unless there are tens of thousands of ranges on the node, it's usually sufficient to wait one minute. To be certain that the node is ready, run the following command:
cockroach sql -e 'select 1'
The command will automatically wait to complete until the node is ready.
Repeat these steps for the next node.
Step 5. Roll back the upgrade (optional)
If you decide to roll back to v23.2, you must do so before the upgrade has been finalized, as described in the next section. It is always possible to roll back to a previous v24.1 version.
To roll back an upgrade, do the following on each cluster node:
- Perform a rolling upgrade, as described in the previous section, but replace the upgraded `cockroach` binary on each node with the binary for the previous version.
- Restart the `cockroach` process on the node and verify that it has rejoined the cluster before rolling back the upgrade on the next node.
- After all nodes have been rolled back and have rejoined the cluster, finalize the rollback in the same way as you would finalize an upgrade, as described in the next section.
Step 6. Finish the upgrade
Because a finalized major-version upgrade cannot be rolled back, Cockroach Labs recommends that you monitor the stability and performance of your cluster with the upgraded binary for at least a day before deciding to finalize the upgrade.
Finalization is required only when upgrading from v23.2.x to v24.1. For upgrades within the v24.1.x series, skip this step.
If you disabled auto-finalization in step 3, monitor the stability and performance of your cluster for at least a day. If you decide to roll back the upgrade, repeat the rolling restart procedure with the previous binary. Otherwise, perform the following steps to re-enable upgrade finalization and complete the upgrade to v24.1. Cockroach Labs recommends that you either finalize or roll back a major-version upgrade within a relatively short period of time; running in a partially-upgraded state is not recommended.
Warning: A cluster that is not finalized on v23.2 cannot be upgraded to v24.1 until the v23.2 upgrade is finalized.
Once you are satisfied with the new version, run `cockroach sql` against any node in the cluster to open the SQL shell, then re-enable auto-finalization:
> RESET CLUSTER SETTING cluster.preserve_downgrade_option;
A series of migration jobs runs to enable certain types of features and changes in the new major version that cannot be rolled back. These include changes to system schemas, indexes, and descriptors, and enabling certain types of improvements and new features. Until the upgrade is finalized, these features and functions will not be available, and the command `SHOW CLUSTER SETTING version` will return `23.2`.

You can monitor the progress of the migration in the DB Console Jobs page. Migration jobs have names in the format `24.1-{migration-id}`. If a migration job fails or stalls, Cockroach Labs can use the migration ID to help diagnose and troubleshoot the problem. Each major version has different migration jobs with different IDs.

Note: All schema change jobs must reach a terminal state before finalization can complete. Finalization can therefore take as long as the longest-running schema change. Otherwise, the amount of time required for finalization depends on the amount of data in the cluster, as the process runs various internal maintenance and migration tasks. During this time, the cluster will experience a small amount of additional load.
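You can also watch finalization from the SQL shell. As a sketch, assuming migration jobs are reported with job type `MIGRATION`, the following query lists them and their status:

```
SELECT job_id, description, status
FROM [SHOW JOBS]
WHERE job_type = 'MIGRATION';
```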
When all migration jobs have completed, the upgrade is complete.
To confirm that finalization has completed, check the cluster version:
> SHOW CLUSTER SETTING version;
If the cluster continues to report that it is on the previous version, finalization has not completed. If auto-finalization is enabled but finalization has not completed, check for the existence of decommissioning nodes where decommission has stalled. In most cases, issuing the `decommission` command again resolves the issue. If you have trouble upgrading, contact Support.
After the upgrade to v24.1 is finalized, you may notice an increase in compaction activity due to a background migration job within the storage engine. To observe the migration's progress, check the Compactions section of the Storage Dashboard in the DB Console, or monitor the `storage.marked-for-compaction-files` time-series metric. When the metric's value nears or reaches `0`, the migration is complete and compaction activity will return to normal levels.
By default, the storage engine uses a compaction concurrency of 3. If you have sufficient IOPS and CPU headroom, consider increasing this setting via the `COCKROACH_COMPACTION_CONCURRENCY` environment variable. This may help to reshape the LSM more quickly in inverted LSM scenarios, and it can lead to increased overall performance for some workloads. Cockroach Labs strongly recommends testing your workload against non-default values of this setting.
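How you set the variable depends on how you manage the `cockroach` process. As a minimal sketch for a manually started node, with a hypothetical concurrency of 6:

```
# Raise compaction concurrency from the default of 3 to 6 for this node,
# then start it with your usual flags.
export COCKROACH_COMPACTION_CONCURRENCY=6
cockroach start \
  --certs-dir=certs \
  --advertise-addr={node address} \
  --join={node1 address},{node2 address},{node3 address}
```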
Troubleshooting
After the upgrade has finalized (whether manually or automatically), it is no longer possible to downgrade to the previous release. If you are experiencing problems, we therefore recommend that you run the `cockroach debug zip` command on any cluster node to capture your cluster's state, then open a support request and share your debug zip.
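For example, on a secure cluster, the following writes a debug zip to the current directory:

```
cockroach debug zip ./cockroach-debug.zip \
  --certs-dir=certs \
  --host={address of any node}
```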
In the event of catastrophic failure or corruption, it may be necessary to restore from a backup to a new cluster running v23.2.