Logical Data Replication

Note:

This feature is in preview and subject to change. To share feedback and/or issues, contact Support.

Logical data replication is only supported in CockroachDB self-hosted clusters.

New in v24.3: Logical data replication (LDR) continuously replicates tables from an active source CockroachDB cluster to an active destination CockroachDB cluster. Both source and destination can receive application reads and writes, and can participate in bidirectional LDR for eventual consistency in the replicating tables. This active-active setup between clusters can provide protection against cluster, datacenter, or region failure while still achieving low-latency, single-region reads and writes in the individual CockroachDB clusters. Each cluster in an LDR job still benefits individually from multi-active availability, with CockroachDB's built-in Raft replication providing data consistency across nodes, zones, and regions.
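
A minimal sketch of starting a unidirectional LDR stream, assuming the v24.3 SQL syntax, placeholder cluster addresses and user, and a table db.public.orders that already exists with the same definition on both clusters:

    -- Run on the destination cluster: define a connection to the source
    -- cluster (the URI below is a placeholder for your source cluster's
    -- SQL address and user).
    CREATE EXTERNAL CONNECTION source_a
      AS 'postgresql://ldr_user@source-host:26257/defaultdb?sslmode=verify-full';

    -- Run on the destination cluster: start replicating the source table
    -- into the existing destination table.
    CREATE LOGICAL REPLICATION STREAM
      FROM TABLE db.public.orders ON 'external://source_a'
      INTO TABLE db.public.orders;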

Tip:

Cockroach Labs also has a physical cluster replication tool that continuously replicates data for transactional consistency from a primary cluster to an independent standby cluster.

Use cases

You can run LDR in a unidirectional or bidirectional setup to support different use cases:

Note:

For a comparison of CockroachDB high availability and resilience features and tooling, refer to the Data Resilience page.

Achieve high availability and single-region write latency in two-datacenter (2DC) deployments

Maintain high availability and resilience to region failures with a two-datacenter topology. You can run bidirectional LDR to ensure data resilience in your deployment, particularly during datacenter or region failures. If you set up two single-region clusters in LDR, both clusters can receive application reads and writes with low, single-region write latency. During a datacenter, region, or cluster outage, you can then redirect application traffic to the surviving cluster with minimal downtime. In the following diagram, the two single-region clusters are deployed in US East and US West to provide low latency in each region. The two LDR jobs ensure that the tables on both clusters reach eventual consistency.

Diagram showing bidirectional LDR from cluster A to B and back again from cluster B to A.
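
A bidirectional setup is two LDR jobs, one started from each cluster. A minimal sketch, assuming external connections named cluster_a and cluster_b already exist on the opposite cluster and db.public.orders is defined identically on both:

    -- Run on cluster B: replicate cluster A's table into cluster B.
    CREATE LOGICAL REPLICATION STREAM
      FROM TABLE db.public.orders ON 'external://cluster_a'
      INTO TABLE db.public.orders;

    -- Run on cluster A: replicate cluster B's table into cluster A.
    CREATE LOGICAL REPLICATION STREAM
      FROM TABLE db.public.orders ON 'external://cluster_b'
      INTO TABLE db.public.orders;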

Achieve workload isolation between clusters

Isolate critical application workloads from non-critical application workloads. For example, you may want to run jobs like changefeeds or backups from one cluster to isolate these jobs from the cluster receiving the principal application traffic.

Diagram showing unidirectional LDR from a source cluster to a destination cluster with the destination cluster supporting secondary workloads plus jobs and the source cluster accepting the main application traffic.
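
For example, with a unidirectional LDR stream in place, you can point jobs at the destination cluster so they never compete with the principal application traffic on the source. The statements below are a sketch only; the table name, Kafka sink, and storage URI are placeholders:

    -- Run on the destination cluster: emit a changefeed without adding
    -- load to the source cluster.
    CREATE CHANGEFEED FOR TABLE db.public.orders
      INTO 'kafka://kafka-broker:9092';

    -- Run on the destination cluster: take backups off the primary path.
    BACKUP TABLE db.public.orders
      INTO 's3://backup-bucket/orders?AUTH=implicit';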

Features

  • Table-level replication: When you initiate LDR, it replicates all of the source table's existing data to the destination table. From then on, LDR continuously replicates changes to the source table so that the destination table reaches eventual consistency.
  • Last write wins conflict resolution: LDR uses last write wins (LWW) conflict resolution, which uses the latest MVCC timestamp to resolve conflicting writes to the same row.
  • Dead letter queue (DLQ): When LDR starts, the job creates a DLQ table for each replicating table in order to track unresolved conflicts. You can interact with and manage this table like any other SQL table (refer to the sketch after this list).
  • Replication modes: LDR offers different modes that apply data differently during replication, which allows you to optimize for either throughput or constraint enforcement during replication.
  • Monitoring: To monitor LDR's initial progress, current status, and performance, you can view metrics available in the DB Console, Prometheus, and Metrics Export.
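
A sketch of how these features surface in SQL, assuming the v24.3 statement and option names; the table, connection, mode value, and job-generated DLQ table name below are placeholders:

    -- Choose a replication mode when creating the stream (for example,
    -- a validated mode that applies rows through the SQL layer).
    CREATE LOGICAL REPLICATION STREAM
      FROM TABLE db.public.orders ON 'external://source_a'
      INTO TABLE db.public.orders
      WITH mode = validated;

    -- Check the status and progress of LDR jobs.
    SHOW LOGICAL REPLICATION JOBS;

    -- Inspect unresolved conflicts in the generated DLQ table
    -- (placeholder name; the actual name is generated per job and table).
    SELECT * FROM db.crdb_replication.dlq_123456_public_orders LIMIT 10;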

Get started

Known limitations

