MOLT Fetch


MOLT Fetch moves data from a source database into CockroachDB as part of a database migration.

MOLT Fetch uses IMPORT INTO or COPY FROM to move the source data to cloud storage (Google Cloud Storage or Amazon S3), a local file server, or local memory. Once the data is exported, MOLT Fetch can load the data into a target CockroachDB database and replicate changes from the source database. For details, see Usage.

Supported databases

The following source databases are currently supported:

  • PostgreSQL
  • MySQL
  • CockroachDB

Installation

To install MOLT Fetch, download the binary that matches your system. To download the latest binary:

Operating System AMD 64-bit ARM 64-bit
Windows Download Download
Linux Download Download
Mac Download Download

For previous binaries, refer to the MOLT version manifest. For release details, see the MOLT changelog.

Note:

MOLT Fetch is supported on Red Hat Enterprise Linux (RHEL) 9 and above.

Docker image

Docker multi-platform images containing both the AMD and ARM binaries are available. To pull the latest image:

docker pull cockroachdb/molt

To pull a specific version (e.g., 1.1.3):

docker pull cockroachdb/molt:1.1.3

For details on running in Docker, see Docker usage.

Setup

Complete the following items before using MOLT Fetch:

  • Follow the recommendations in Best practices and Security recommendations.

  • Ensure that the source and target schemas are identical, unless you enable automatic schema creation with the drop-on-target-and-recreate option. If you are creating the target schema manually, review the behaviors in Mismatch handling.

  • Ensure that the SQL user running MOLT Fetch has SELECT privileges on the source database and the target CockroachDB database, along with the required privileges to run IMPORT INTO or COPY FROM (depending on the command used for data movement) on CockroachDB, as described on their respective pages.

  • If you plan to use continuous replication, using either the MOLT Fetch replication feature or an external change data capture (CDC) tool:

    • For PostgreSQL sources, enable logical replication. In postgresql.conf or in the SQL shell, set wal_level to logical.
    • For MySQL sources, enable GTID consistency. In mysql.cnf, in the SQL shell, or as flags in the mysql start command, set gtid-mode and enforce-gtid-consistency to ON and set binlog_row_metadata to full.
  • Percent-encode the connection strings for the source database and CockroachDB. This ensures that the MOLT tools can parse special characters in your password.

    • Given a password a$52&, pass it to the molt escape-password command with single quotes:

      molt escape-password 'a$52&'
      
      The command returns the percent-encoded password:
      a%2452%26
      
    • Use the encoded password in your connection string. For example:

      postgres://postgres:a%2452%26@localhost:5432/replicationload
      
  • If you plan to use cloud storage for the data migration, follow the steps in Secure cloud storage.

Best practices

  • To prevent connections from terminating prematurely during data export, set the following to high values on the source database:

    • Maximum allowed number of connections: MOLT Fetch can export data across multiple connections. The number of connections it will create is the number of shards (--export-concurrency) multiplied by the number of tables (--table-concurrency) being exported concurrently.
    • Maximum lifetime of a connection: This is particularly important for MySQL sources, which can only use a single connection to move data. See the following note.
  • If a MySQL database is set as a source, the --table-concurrency and --export-concurrency flags cannot be set above 1. If these values are changed, MOLT Fetch returns an error. This guarantees consistency when moving data from MySQL, due to MySQL limitations. MySQL data is migrated to CockroachDB one table and shard at a time, using WITH CONSISTENT SNAPSHOT transactions.

  • To prevent memory outages during data export of tables with large rows, estimate the amount of memory used to export a table:

    --row-batch-size * --export-concurrency * average size of the table rows
    

    If you are exporting more than one table at a time (i.e., --table-concurrency is set higher than 1), add the estimated memory usage for the tables with the largest row sizes. Ensure that you have sufficient memory to run molt fetch, and adjust --row-batch-size accordingly.

  • If a table in the source database is much larger than the other tables, filter and export the largest table in its own molt fetch task. Repeat this for each of the largest tables. Then export the remaining tables in another task.

  • When using IMPORT INTO to load tables into CockroachDB, if the fetch process terminates before the import job completes, the hanging import job on the target database will keep the table offline. To make this table accessible again, manually resume or cancel the job. Then resume molt fetch using continuation, or restart the process from the beginning.
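The memory estimate above is simple arithmetic, sketched below with hypothetical values (the default 100,000-row batch size, the default PostgreSQL export concurrency of 4, and an assumed 512-byte average row size — substitute your own measurements):

```shell
# Hypothetical sizing values; all three are assumptions for this example.
row_batch_size=100000      # --row-batch-size (default)
export_concurrency=4       # --export-concurrency (PostgreSQL default)
avg_row_bytes=512          # measured average row size, in bytes

# Estimated peak memory for exporting one table, in bytes:
echo $((row_batch_size * export_concurrency * avg_row_bytes))
```

With these numbers the estimate comes to roughly 205 MB for a single table. If --table-concurrency is greater than 1, repeat the calculation for each concurrently exported table and sum the results.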

Security recommendations

Cockroach Labs strongly recommends the following:

Secure connections

  • Use secure connections to the source and target CockroachDB database whenever possible.
  • By default, insecure connections (i.e., sslmode=disable on PostgreSQL; sslmode not set on MySQL) are disallowed. When using an insecure connection, molt fetch returns an error. To override this check, you can enable the --allow-tls-mode-disable flag. Do this only for testing, or if a secure SSL/TLS connection to the source or target database is not possible.
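As an illustration, a testing-only invocation against an insecure local source might look like the following sketch. The connection details, table filter, and bucket path are illustrative placeholders, not a recommended configuration:

```shell
# Testing only: --allow-tls-mode-disable permits sslmode=disable on the source.
molt fetch \
--source 'postgres://postgres:postgres@localhost:5432/molt?sslmode=disable' \
--target 'postgres://root@localhost:26257/molt?sslmode=verify-full' \
--table-filter 'employees' \
--bucket-path 's3://molt-test' \
--table-handling truncate-if-exists \
--allow-tls-mode-disable
```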

Connection strings

  • Avoid plaintext connection strings.
  • Provide your connection strings as environment variables.
  • If possible within your security infrastructure, use an external secrets manager to load the environment variables from stored secrets.

    For example, to export connection strings as environment variables:

    export SOURCE="postgres://postgres:postgres@localhost:5432/molt?sslmode=verify-full"
    export TARGET="postgres://root@localhost:26257/molt?sslmode=verify-full"
    

    Afterward, to pass the environment variables in molt fetch commands:

    molt fetch \
    --source $SOURCE \
    --target $TARGET \
    --table-filter 'employees' \
    --bucket-path 's3://molt-test' \
    --table-handling truncate-if-exists
    

Secure cloud storage

  • When using cloud storage for your intermediate store, ensure that access control is properly configured.

    • If you are using Amazon S3 for cloud storage:

      • Ensure that the environment variable and access tokens are set appropriately in the terminal running molt fetch. For example:

        export AWS_REGION='us-east-1'
        export AWS_SECRET_ACCESS_KEY='key'
        export AWS_ACCESS_KEY_ID='id'
        
      • Alternatively, set the --use-implicit-auth flag to use implicit authentication.

      • Ensure the S3 bucket is created and accessible by CockroachDB.

    • If you are using Google Cloud Storage for cloud storage:

      • Ensure that your local environment is authenticated using Application Default Credentials:

        Using gcloud:

        gcloud init
        gcloud auth application-default login
        

        Using the environment variable:

        export GOOGLE_APPLICATION_CREDENTIALS={path_to_cred_json}
        
      • Alternatively, set the --use-implicit-auth flag to use implicit authentication.

      • Ensure the Google Cloud Storage bucket is created and accessible by CockroachDB.

  • Do not use public cloud storage in production.

Perform a dry run

To verify that your connections and configuration work properly, run MOLT Fetch in a staging environment before moving any data in production. Use a test or development environment that is as similar as possible to production.

Commands

Command Usage
fetch Start the fetch process. This loads data from a source database to a target CockroachDB database.

Subcommands

Command Usage
tokens list List active continuation tokens.

Flags

Global flags

Flag Description
--source (Required) Connection string for the source database. For details, see Source and target databases.
--target (Required) Connection string for the target database. For details, see Source and target databases.
--allow-tls-mode-disable Allow insecure connections to databases. Secure SSL/TLS connections should be used by default. This should be enabled only if secure SSL/TLS connections to the source or target database are not possible.
--bucket-path The path within the cloud storage bucket where intermediate files are written (e.g., 's3://bucket/path' or 'gs://bucket/path'). Only the path is used; query parameters (e.g., credentials) are ignored.
--cleanup Whether to delete intermediate files after moving data using cloud or local storage. Note: Cleanup does not occur on continuation.
--compression Compression method for data when using IMPORT INTO (gzip/none).

Default: gzip
--continuation-file-name Restart fetch at the specified filename if the process encounters an error. --fetch-id must be specified. For details, see Fetch continuation.
--continuation-token Restart fetch at a specific table, using the specified continuation token, if the process encounters an error. --fetch-id must be specified. For details, see Fetch continuation.
--crdb-pts-duration The duration for which each timestamp used in data export from a CockroachDB source is protected from garbage collection. This ensures that the data snapshot remains consistent. For example, if set to 24h, each timestamp is protected for 24 hours from the initiation of the export job. This duration is extended at regular intervals specified in --crdb-pts-refresh-interval.

Default: 24h0m0s
--crdb-pts-refresh-interval The frequency at which the protected timestamp's validity is extended. This interval maintains protection of the data snapshot until data export from a CockroachDB source is completed. For example, if set to 10m, the protected timestamp's expiration will be extended by the duration specified in --crdb-pts-duration (e.g., 24h) every 10 minutes while export is not complete.

Default: 10m0s
--direct-copy Enables direct copy, which copies data directly from source to target without using an intermediate store.
--export-concurrency Number of shards to export at a time, each on a dedicated thread. This only applies when exporting data from the source database, not when loading data into the target database. The number of concurrent threads is the product of --export-concurrency and --table-concurrency.

This value cannot be set higher than 1 when moving data from MySQL. Refer to Best practices.

Default: 4 with a PostgreSQL source; 1 with a MySQL source
--fetch-id Restart fetch process corresponding to the specified ID. If --continuation-file-name or --continuation-token are not specified, fetch restarts for all failed tables.
--flush-rows Number of rows before the source data is flushed to intermediate files. Note: If --flush-size is also specified, the fetch behavior is based on the flag whose criterion is met first.
--flush-size Size (in bytes) before the source data is flushed to intermediate files. Note: If --flush-rows is also specified, the fetch behavior is based on the flag whose criterion is met first.
--import-batch-size The number of files to be imported at a time to the target database. This applies only when using IMPORT INTO to load data into the target. Note: Increasing this value can improve the performance of full-scan queries on the target database shortly after fetch completes, but very high values are not recommended. If any individual file in the import batch fails, you must retry the entire batch.

Default: 1000
--local-path The path within the local file server where intermediate files are written (e.g., data/migration/cockroach). --local-path-listen-addr must be specified.
--local-path-crdb-access-addr Address of a local file server that is publicly accessible. This flag is only necessary if CockroachDB cannot reach the local address specified with --local-path-listen-addr (e.g., when moving data to a CockroachDB Cloud deployment). --local-path and --local-path-listen-addr must be specified.

Default: Value of --local-path-listen-addr.
--local-path-listen-addr Write intermediate files to a local file server at the specified address (e.g., 'localhost:3000'). --local-path must be specified.
--log-file Write messages to the specified log filename. If no filename is provided, messages write to fetch-{datetime}.log. If "stdout" is provided, messages write to stdout.
--logging Level at which to log messages (trace/debug/info/warn/error/fatal/panic).

Default: info
--metrics-listen-addr Address of the metrics endpoint, which has the path {address}/metrics.

Default: '127.0.0.1:3030'
--mode Configure the MOLT Fetch behavior: data-load, data-load-and-replication, replication-only, export-only, or import-only. For details, refer to Fetch mode.

Default: data-load
--non-interactive Run the fetch process without interactive prompts. This is recommended only when running molt fetch in an automated process (i.e., a job or continuous integration).
--pglogical-replication-slot-drop-if-exists Drop the replication slot, if specified with --pglogical-replication-slot-name. Otherwise, the default replication slot is not dropped.
--pglogical-replication-slot-name The name of a replication slot to create before taking a snapshot of data (e.g., 'fetch'). Required in order to perform continuous replication from a source PostgreSQL database.
--pglogical-replication-slot-plugin The output plugin used for logical replication under --pglogical-replication-slot-name.

Default: pgoutput
--pprof-listen-addr Address of the pprof endpoint.

Default: '127.0.0.1:3031'
--replicator-flags If continuous replication is enabled with --mode data-load-and-replication or --mode replication-only, specify replication flags (PostgreSQL or MySQL) to override.
--row-batch-size Number of rows per shard to export at a time. See Best practices.

Default: 100000
--schema-filter Move schemas that match a specified regular expression.

Default: '.*'
--table-concurrency Number of tables to export at a time. The number of concurrent threads is the product of --export-concurrency and --table-concurrency.

This value cannot be set higher than 1 when moving data from MySQL. Refer to Best practices.

Default: 4 with a PostgreSQL source; 1 with a MySQL source
--table-exclusion-filter Exclude tables that match a specified POSIX regular expression.

This value cannot be set to '.*', which would cause every table to be excluded.

Default: Empty string
--table-filter Move tables that match a specified POSIX regular expression.

Default: '.*'
--table-handling How tables are initialized on the target database (none/drop-on-target-and-recreate/truncate-if-exists). For details, see Target table handling.

Default: none
--transformations-file Path to a JSON file that defines transformations to be performed on the target schema during the fetch process. Refer to Transformations.
--type-map-file Path to a JSON file that contains explicit type mappings for automatic schema creation, when enabled with --table-handling drop-on-target-and-recreate. For details on the JSON format and valid type mappings, see type mapping.
--use-console-writer Use the console writer, which has cleaner log output but introduces more latency.

Default: false (log as structured JSON)
--use-copy Use COPY FROM to move data. This makes tables queryable during data load, but is slower than using IMPORT INTO. For details, refer to Data movement.
--use-implicit-auth Use implicit authentication for cloud storage URIs.

tokens list flags

Flag Description
--conn-string (Required) Connection string for the target database. For details, see List active continuation tokens.
-n, --num-results Number of results to return.

Usage

The following sections describe how to use the molt fetch flags.

Source and target databases

Tip:

Follow the recommendations in Connection strings.

--source specifies the connection string of the source database.

PostgreSQL or CockroachDB:

--source 'postgresql://{username}:{password}@{host}:{port}/{database}'

MySQL:

--source 'mysql://{username}:{password}@{protocol}({host}:{port})/{database}'

--target specifies the CockroachDB connection string:

--target 'postgresql://{username}:{password}@{host}:{port}/{database}'

Fetch mode

--mode specifies the MOLT Fetch behavior:

Load data

data-load (default) instructs MOLT Fetch to load the source data into CockroachDB. It does not replicate any subsequent changes on the source.

--mode data-load

Load data and replicate changes

Note:

Before using this option, the source PostgreSQL or MySQL database must be configured for continuous replication, as described in Setup.

data-load-and-replication instructs MOLT Fetch to load the source data into CockroachDB, and replicate any subsequent changes on the source.

--mode data-load-and-replication

If the source is a PostgreSQL database, you must also specify a replication slot name. For example, the following snippet instructs MOLT Fetch to create a slot named replication_slot to use for replication:

--mode data-load-and-replication
--pglogical-replication-slot-name 'replication_slot'

Continuous replication begins once the initial load is complete, as indicated by a fetch complete message in the output.

To cancel replication, enter ctrl-c (or send a SIGTERM signal to the molt process). This returns an exit code 0. If replication fails, a non-zero exit code is returned.

To customize the replication behavior (an advanced use case), use --replicator-flags to specify one or more replication-specific flags (PostgreSQL or MySQL) to override.

--mode data-load-and-replication
--replicator-flags "--applyTimeout '1h' --parallelism 64"

Replicate changes

Note:

Before using this option, the source PostgreSQL or MySQL database must be configured for continuous replication, as described in Setup.

replication-only instructs MOLT Fetch to replicate ongoing changes on the source to CockroachDB, using the specified replication marker.

  • For a PostgreSQL source, first create a logical replication slot. For example, to create a replication slot named replication_slot:

    SELECT * FROM pg_create_logical_replication_slot('replication_slot', 'pgoutput');
    

    In the molt fetch command, specify the replication slot name using --pglogical-replication-slot-name. For example:

    --mode replication-only 
    --pglogical-replication-slot-name 'replication_slot'
    
  • For a MySQL source, first get your GTID record:

    SELECT source_uuid, min(interval_start), max(interval_end)
      FROM mysql.gtid_executed
      GROUP BY source_uuid;
    

    In the molt fetch command, specify a GTID set using the format source_uuid:min(interval_start)-max(interval_end). For example:

    --mode replication-only 
    --replicator-flags "--defaultGTIDSet 'b7f9e0fa-2753-1e1f-5d9b-2402ac810003:3-21'"
    

To cancel replication, enter ctrl-c (or send a SIGTERM signal to the molt process). This returns an exit code 0. If replication fails, a non-zero exit code is returned.

Export data to storage

export-only instructs MOLT Fetch to export the source data to the specified cloud storage or local file server. It does not load the data into CockroachDB.

--mode export-only

Import data from storage

import-only instructs MOLT Fetch to load the source data in the specified cloud storage or local file server into the CockroachDB target.

--mode import-only

Data movement

MOLT Fetch can use either IMPORT INTO or COPY FROM to load data into CockroachDB.

By default, MOLT Fetch uses IMPORT INTO.

--use-copy configures MOLT Fetch to use COPY FROM:

  • COPY FROM enables your tables to remain online and accessible. However, it is slower than using IMPORT INTO.
  • COPY FROM does not support compression.
Note:

COPY FROM is also used for direct copy.
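Putting this together, a fetch that loads via COPY FROM instead of IMPORT INTO might look like the following sketch, reusing the environment variables and placeholder bucket path from Security recommendations:

```shell
# Load with COPY FROM: tables stay queryable during the load, at the cost of speed.
molt fetch \
--source "$SOURCE" \
--target "$TARGET" \
--table-filter 'employees' \
--bucket-path 's3://molt-test' \
--table-handling truncate-if-exists \
--use-copy
```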

Data path

MOLT Fetch can move the source data to CockroachDB via cloud storage, a local file server, or directly without an intermediate store.

Cloud storage

Tip:

Only the path specified in --bucket-path is used. Query parameters, such as credentials, are ignored. To authenticate cloud storage, follow the steps in Secure cloud storage.

--bucket-path instructs MOLT Fetch to write intermediate files to a path within a Google Cloud Storage or Amazon S3 bucket to which you have the necessary permissions. For example:

Google Cloud Storage:

--bucket-path 'gs://migration/data/cockroach'

Amazon S3:

--bucket-path 's3://migration/data/cockroach'

Cloud storage can be used to move data with either IMPORT INTO or COPY FROM.

Local file server

--local-path instructs MOLT Fetch to write intermediate files to a path within a local file server. --local-path-listen-addr specifies the address of the local file server. For example:

--local-path /migration/data/cockroach
--local-path-listen-addr 'localhost:3000'

In some cases, CockroachDB will not be able to use the local address specified by --local-path-listen-addr. This will depend on where CockroachDB is deployed, the runtime OS, and the source dialect.

For example, if you are migrating to CockroachDB Cloud, such that the Cloud cluster is in a different physical location than the machine running molt fetch, then CockroachDB cannot reach an address such as localhost:3000. In these situations, use --local-path-crdb-access-addr to specify an address for the local file server that is publicly accessible. For example:

--local-path /migration/data/cockroach
--local-path-listen-addr 'localhost:3000'
--local-path-crdb-access-addr '44.55.66.77:3000'

A local file server can be used to move data with either IMPORT INTO or COPY FROM.

Tip:

Cloud storage is often preferable to a local file server, which can require considerable disk space.

Direct copy

--direct-copy specifies that MOLT Fetch should use COPY FROM to move the source data directly to CockroachDB without an intermediate store:

  • Because the data is held in memory, the machine must have sufficient RAM for the data currently in flight:

    average size of each row * --row-batch-size * --export-concurrency * --table-concurrency
    
  • Direct copy does not support compression or continuation.

  • The --use-copy flag is redundant with --direct-copy.
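A direct-copy invocation omits --bucket-path and --local-path entirely, since no intermediate store is used. A sketch, reusing the environment variables from Security recommendations:

```shell
# Copy directly from source to target; data in flight is held in memory.
molt fetch \
--source "$SOURCE" \
--target "$TARGET" \
--table-filter 'employees' \
--table-handling truncate-if-exists \
--direct-copy
```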

Schema and table selection

By default, MOLT Fetch moves all data from the --source database to CockroachDB. Use the following flags to move a subset of data.

--schema-filter specifies a range of schema objects to move to CockroachDB, formatted as a POSIX regex string. For example, to move every table in the source database's public schema:

--schema-filter 'public'

--table-filter and --table-exclusion-filter specify tables to include and exclude from the migration, respectively, formatted as POSIX regex strings. For example, to move every source table that has "user" in the table name and exclude every source table that has "temp" in the table name:

--table-filter '.*user.*' --table-exclusion-filter '.*temp.*'

Target table handling

--table-handling defines how MOLT Fetch loads data on the CockroachDB tables that match the selection.

To load the data without changing the existing data in the tables, use none:

--table-handling none

To truncate tables before loading the data, use truncate-if-exists:

--table-handling truncate-if-exists

To drop existing tables and create new tables before loading the data, use drop-on-target-and-recreate:

--table-handling drop-on-target-and-recreate

When using the drop-on-target-and-recreate option, MOLT Fetch creates a new CockroachDB table to load the source data if one does not already exist. To guide the automatic schema creation, you can explicitly map source types to CockroachDB types.

Mismatch handling

If either none or truncate-if-exists is set, molt fetch loads data into the existing tables on the target CockroachDB database. If the target schema mismatches the source schema, molt fetch will exit early in certain cases, and will need to be re-run from the beginning.

Note:

This does not apply when drop-on-target-and-recreate is specified, since this option automatically creates a compatible CockroachDB schema.

molt fetch exits early in the following cases, and will output a log with a corresponding mismatch_tag and failable_mismatch set to true:

  • A source table is missing a primary key.
  • A source and target primary key have mismatching types.
  • A STRING primary key has a different collation on the source and target.
  • A source and target column have mismatching types that are not allowable mappings.
  • A target table is missing a column that is in the corresponding source table.
  • A source column is nullable, but the corresponding target column is not nullable (i.e., the constraint is more strict on the target).

molt fetch can continue in the following cases, and will output a log with a corresponding mismatch_tag and failable_mismatch set to false:

  • A target table has a column that is not in the corresponding source table.
  • A source column has a NOT NULL constraint, and the corresponding target column is nullable (i.e., the constraint is less strict on the target).
  • A DEFAULT, CHECK, FOREIGN KEY, or UNIQUE constraint is specified on a target column and not on the source column.

Type mapping

If drop-on-target-and-recreate is set, MOLT Fetch automatically creates a CockroachDB schema that is compatible with the source data, mapping each source column type to a compatible CockroachDB type. To override a default mapping, provide an explicit type mapping file.

--type-map-file specifies the path to the JSON file containing the explicit type mappings. For example:

--type-map-file 'type-mappings.json'

The following JSON example defines two type mappings:

[
  {
    "table": "public.t1",
    "column-type-map": [
      {
        "column": "*",
        "type-kv": {
          "source-type": "int",
          "crdb-type": "INT2"
        }
      },
      {
        "column": "name",
        "type-kv": {
          "source-type": "varbit",
          "crdb-type": "string"
        }
      }
    ]
  }
]
  • table specifies the table that will use the custom type mappings in column-type-map, written as {schema}.{table}.
  • column specifies the column that will use the custom type mapping in type-kv. If * is specified, then all columns in the table with the matching source-type are converted.
  • type-kv specifies the source-type that maps to the target crdb-type.

Transformations

You can define transformation rules to be performed on the target schema during the fetch process. These can be used to exclude source columns from the migration, map source columns to computed columns on the target, and rename tables on the target (including mapping multiple partitioned source tables to a single target table).

Transformation rules are defined in the JSON file indicated by the --transformations-file flag. For example:

--transformations-file 'transformation-rules.json'

The following JSON example defines two transformation rules:

{
  "transforms": [
    {
      "id": 1,
      "resource_specifier": {
        "schema": ".*",
        "table": ".*"
      },
      "column_exclusion_opts": {
        "add_computed_def": true,
        "column": "^age$"
      }
    },
    {
      "id": 2,
      "resource_specifier": {
        "schema": "public",
        "table": "charges_part.*"
      },
      "table_rename_opts": {
        "value": "charges"
      }
    }
  ]
}
  • resource_specifier configures the following options for transformation rules:
    • schema specifies the schemas to be affected by the transformation rule, formatted as a POSIX regex string.
    • table specifies the tables to be affected by the transformation rule, formatted as a POSIX regex string.
  • column_exclusion_opts configures the following options for column exclusions and computed columns:
    • column specifies source columns to exclude from being mapped to regular columns on the target schema. It is formatted as a POSIX regex string.
    • add_computed_def, when set to true, specifies that each matching column should be mapped to a computed column on the target schema. Instead of being moved from the source, the column data is generated on the target using ALTER TABLE ... ADD COLUMN and the computed column definition from the source schema. This assumes that all matching columns are computed columns on the source.
      Warning:
      Columns that match the column regex will not be moved to CockroachDB if add_computed_def is omitted or set to false (default), or if a matching column is a non-computed column.
  • table_rename_opts configures the following option for table renaming:

    • value specifies the table name to which the matching resource_specifier is mapped. If only one source table matches resource_specifier, it is renamed to table_rename_opts.value on the target. If more than one table matches resource_specifier (i.e., an n-to-1 mapping), the fetch process assumes that all matching tables are partitioned tables with the same schema, and moves their data to a table named table_rename_opts.value on the target. Otherwise, the process will error.


The preceding JSON example therefore defines two rules:

  • Rule 1 maps all source age columns on the source database to computed columns on CockroachDB. This assumes that all matching age columns are defined as computed columns on the source.
  • Rule 2 maps all table names with prefix charges_part from the source database to a single charges table on CockroachDB (i.e., an n-to-1 mapping). This assumes that all matching charges_part.* tables have the same schema.

Each rule is applied in the order it is defined. If two rules overlap, the later rule will override the earlier rule.

To verify that the computed columns are being created, set --logging debug when running molt fetch and look for ALTER TABLE ... ADD COLUMN statements with the STORED or VIRTUAL keyword in the log output:

{"level":"debug","time":"2024-07-22T12:01:51-04:00","message":"running: ALTER TABLE IF EXISTS public.computed ADD COLUMN computed_col INT8 NOT NULL AS ((col1 + col2)) STORED"}

After running molt fetch, issue a SHOW CREATE TABLE statement on CockroachDB:

SHOW CREATE TABLE computed;
  table_name |                         create_statement
-------------+-------------------------------------------------------------------
  computed   | CREATE TABLE public.computed (
  ...
             |     computed_col INT8 NOT NULL AS (col1 + col2) STORED
             | )

Fetch continuation

If MOLT Fetch fails while loading data into CockroachDB from intermediate files, it exits with an error message, fetch ID, and continuation token for each table that failed to load on the target database. You can use this information to continue the process from the continuation point where it was interrupted. For an example, see Continue fetch after encountering an error.

Continuation is only possible under the following conditions:

  • All data has been exported from the source database into intermediate files on cloud or local storage.
  • The initial load of source data into the target CockroachDB database is incomplete. This means that ongoing replication of source data has not begun.
Note:

Only one fetch ID and set of continuation tokens, each token corresponding to a table, are active at any time. See List active continuation tokens.

To retry all data starting from the continuation point, reissue the molt fetch command and include the --fetch-id.

--fetch-id d44762e5-6f70-43f8-8e15-58b4de10a007

To retry a specific table that failed, include both --fetch-id and --continuation-token. The latter flag specifies a token string that corresponds to a specific table on the source database. A continuation token is written in the molt fetch output for each failed table. If the fetch process encounters a subsequent error, it generates a new token for each failed table. See List active continuation tokens.

Note:

This will retry only the table that corresponds to the continuation token. If the fetch process succeeds, there may still be source data that is not yet loaded into CockroachDB.

--fetch-id d44762e5-6f70-43f8-8e15-58b4de10a007
--continuation-token 011762e5-6f70-43f8-8e15-58b4de10a007

To retry all data starting from a specific file, include both --fetch-id and --continuation-file-name. The latter flag specifies the filename of an intermediate file in cloud or local storage. All filenames are prepended with part_ and have the .csv.gz or .csv extension, depending on compression type (gzip by default). For example:

--fetch-id d44762e5-6f70-43f8-8e15-58b4de10a007
--continuation-file-name part_00000003.csv.gz
Note:

Continuation is not possible when using direct copy.

List active continuation tokens

To view all active continuation tokens, issue a molt fetch tokens list command along with --conn-string, which specifies the connection string for the target CockroachDB database. For example:

molt fetch tokens list \
--conn-string 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full'
+--------------------------------------+--------------------------------------+------------------+----------------------+
|                  ID                  |               FETCH ID               |    TABLE NAME    |      FILE NAME       |
+--------------------------------------+--------------------------------------+------------------+----------------------+
| f6f0284c-d9c1-43c9-8fde-af609d0dbd82 | 66443597-5689-4df3-a7b9-9fc5e27180eb | public.employees | part_00000001.csv.gz |
+--------------------------------------+--------------------------------------+------------------+----------------------+

CDC cursor

A change data capture (CDC) cursor is written to the output as cdc_cursor at the beginning and end of the fetch process. For example:

{"level":"info","type":"summary","fetch_id":"735a4fe0-c478-4de7-a342-cfa9738783dc","num_tables":1,"tables":["public.employees"],"cdc_cursor":"0/3F41E40","net_duration_ms":4879.890041,"net_duration":"000h 00m 04s","time":"2024-03-18T12:37:02-04:00","message":"fetch complete"}

You can use the cdc_cursor value with an external change data capture (CDC) tool to continuously replicate subsequent changes on the source data to CockroachDB.
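In a scripted migration, one way to pull the cursor value out of the summary line is with standard text tools. This is a sketch; it assumes the fetch output was saved to a hypothetical file named `fetch.log`:

```shell
# Extract the cdc_cursor value from the JSON summary line in the saved
# fetch output (fetch.log is a hypothetical file name).
cursor=$(grep -o '"cdc_cursor":"[^"]*"' fetch.log | head -n 1 | cut -d '"' -f 4)
echo "$cursor"
```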

Docker usage

For details on pulling Docker images, see Docker image.

Performance

MOLT Fetch and Verify are likely to run more slowly in a Docker container than on a local machine. To improve performance, increase the memory or compute resources, or both, on your Docker container.

Authentication

When using MOLT Fetch with cloud storage, you must map volumes and set environment variables in the container, as described in the following sections for Google Cloud Storage and Amazon S3.

No additional configuration is needed when running MOLT Fetch with a local file server or in direct copy mode:

docker run -it cockroachdb/molt fetch ...

For more information on docker run, see the Docker documentation.

Google Cloud Storage

If you are using Google Cloud Storage for cloud storage:

Volume map the application_default_credentials.json file into the container, and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to this file.

docker run \
  -v ~/.config/gcloud/application_default_credentials.json:/gcp/creds.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/gcp/creds.json \
  -it \
  cockroachdb/molt fetch ...

If the preceding authentication method fails, you can volume map the entire Google Cloud configuration directory into the container. In addition to setting the GOOGLE_APPLICATION_CREDENTIALS environment variable, set CLOUDSDK_CONFIG to point to the configuration directory:

docker run \
  -v ~/.config/gcloud:/gcp/config:ro \
  -e CLOUDSDK_CONFIG=/gcp/config \
  -e GOOGLE_APPLICATION_CREDENTIALS=/gcp/config/application_default_credentials.json \
  -it \
  cockroachdb/molt fetch ...

For details on Google Cloud Storage authentication, see Application Default Credentials.

Amazon S3

If you are using Amazon S3 for cloud storage:

Volume map the host's ~/.aws directory to the /root/.aws directory inside the container, and set the required AWS_REGION, AWS_SECRET_ACCESS_KEY, and AWS_ACCESS_KEY_ID environment variables:

docker run \
  -v ~/.aws:/root/.aws \
  -e AWS_REGION=your-region \
  -e AWS_SECRET_ACCESS_KEY=your-secret-access-key \
  -e AWS_ACCESS_KEY_ID=your-access-key-id \
  -it \
  cockroachdb/molt fetch ...

Local connection strings

When testing locally, specify the host as follows:

  • For macOS, use host.docker.internal. For example:
--source 'postgres://postgres:postgres@host.docker.internal:5432/molt?sslmode=disable'
--target "postgres://root@host.docker.internal:26257/molt?sslmode=disable"
  • For Linux and Windows, use 172.17.0.1. For example:
--source 'postgres://postgres:postgres@172.17.0.1:5432/molt?sslmode=disable'
--target "postgres://root@172.17.0.1:26257/molt?sslmode=disable"
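When scripting these commands, it can help to keep the connection strings in shell variables so that quoting stays consistent. A minimal sketch, using the macOS host alias; the credentials and database names are placeholders:

```shell
# Placeholder connection strings for a local macOS test; substitute your
# own credentials and database names.
SOURCE='postgres://postgres:postgres@host.docker.internal:5432/molt?sslmode=disable'
TARGET='postgres://root@host.docker.internal:26257/molt?sslmode=disable'

# Pass them to the container, for example:
#   docker run -it cockroachdb/molt fetch --source "$SOURCE" --target "$TARGET"
echo "$SOURCE"
echo "$TARGET"
```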

Examples

The following examples demonstrate how to issue molt fetch commands to load data into CockroachDB. These examples assume that secure connections to the source and target databases are used.

Tip:

After successfully running MOLT Fetch, you can run molt verify to confirm that the data on the source and target matches, with no missing or mismatched rows.

Load PostgreSQL data via S3 with continuous replication

The following molt fetch command uses IMPORT INTO to load a subset of tables from a PostgreSQL database to CockroachDB.

molt fetch \
--source 'postgres://postgres:postgres@localhost/molt' \
--target 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
--table-handling 'truncate-if-exists' \
--table-filter 'employees' \
--bucket-path 's3://migration/data/cockroach' \
--cleanup \
--pglogical-replication-slot-name 'replication_slot' \
--mode data-load-and-replication
  • --table-handling specifies that existing tables on CockroachDB should be truncated before the source data is loaded.
  • --table-filter filters for tables with the employees string in the name.
  • --bucket-path specifies a directory on an Amazon S3 bucket where intermediate files will be written.
  • --cleanup specifies that the intermediate files should be removed after the source data is loaded.
  • --pglogical-replication-slot-name specifies a replication slot name to be created on the source PostgreSQL database. This is used in continuous replication.
  • --mode data-load-and-replication starts continuous replication of data from the source database to CockroachDB after the fetch process succeeds.

If the fetch process succeeds, the output displays a fetch complete message like the following:

{"level":"info","type":"summary","fetch_id":"f5cb422f-4bb4-4bbd-b2ae-08c4d00d1e7c","num_tables":1,"tables":["public.employees"],"cdc_cursor":"0/3F41E40","net_duration_ms":6752.847625,"net_duration":"000h 00m 06s","time":"2024-03-18T12:30:37-04:00","message":"fetch complete"}
Note:

If the fetch process encounters an error, it exits and can be continued from where it was interrupted.

Continuous replication begins immediately afterward:

{"level":"info","time":"2024-05-13T14:33:07-04:00","message":"starting replicator"}
{"level":"info","time":"2024-05-13T14:36:22-04:00","message":"creating publication"}

To cancel replication, enter ctrl-c to send an interrupt signal (SIGINT).

Load MySQL data via GCP with continuous replication

The following molt fetch command uses COPY FROM to load a subset of tables from a MySQL database to CockroachDB.

molt fetch \
--source 'mysql://root:password@localhost/molt?sslcert=.%2fsource_certs%2fclient.root.crt&sslkey=.%2fsource_certs%2fclient.root.key&sslmode=verify-full&sslrootcert=.%2fsource_certs%2fca.crt' \
--target 'postgres://root@localhost:26257/defaultdb?sslmode=verify-full' \
--table-handling 'truncate-if-exists' \
--table-filter 'employees' \
--bucket-path 'gs://migration/data/cockroach' \
--use-copy \
--cleanup \
--mode data-load-and-replication
  • --source specifies the MySQL connection string and the certificate paths in URL-encoded format. Use secure connections whenever possible. Refer to Best practices.
  • --table-handling specifies that existing tables on CockroachDB should be truncated before the source data is loaded.
  • --table-filter filters for tables with the employees string in the name.
  • --bucket-path specifies a directory on a Google Cloud Storage bucket where intermediate files will be written.
  • --use-copy specifies that COPY FROM is used to load the tables, keeping the source tables online and queryable but loading the data more slowly than IMPORT INTO.
  • --cleanup specifies that the intermediate files should be removed after the source data is loaded.
  • --mode data-load-and-replication starts continuous replication of data from the source database to CockroachDB after the fetch process succeeds.

If the fetch process succeeds, the output displays a fetch complete message like the following:

{"level":"info","type":"summary","fetch_id":"f5cb422f-4bb4-4bbd-b2ae-08c4d00d1e7c","num_tables":1,"tables":["public.employees"],"cdc_cursor":"0/3F41E40","net_duration_ms":6752.847625,"net_duration":"000h 00m 06s","time":"2024-03-18T12:30:37-04:00","message":"fetch complete"}
Note:

If the fetch process encounters an error, it exits and can be continued from where it was interrupted.

Continuous replication begins immediately afterward:

{"level":"info","time":"2024-05-13T14:33:07-04:00","message":"starting replicator"}

To cancel replication, enter ctrl-c to send an interrupt signal (SIGINT).

Load CockroachDB data via direct copy

The following molt fetch command uses COPY FROM to load all tables directly from one CockroachDB database to another.

molt fetch \
--source 'postgres://root@localhost:26257/defaultdb?sslmode=disable' \
--target 'postgres://root@localhost:26258/defaultdb?sslmode=disable' \
--table-handling 'none' \
--direct-copy \
--allow-tls-mode-disable
  • --source specifies sslmode=disable to establish an insecure connection. By default, insecure connections are disallowed and should be used only for testing or if a secure SSL/TLS connection to the source or target database is not possible.
  • --table-handling specifies that existing tables on the target CockroachDB database should not be modified before the source data is loaded.
  • --direct-copy specifies that COPY FROM is used to load the tables directly, without creating intermediate files.
  • --allow-tls-mode-disable enables insecure connections to the source and target databases. Refer to Secure connections.

Continue fetch after encountering an error

If the fetch process encounters an error, it exits with an error message, fetch ID, and continuation token for each table that failed to load on the target database. You can use these values to continue the fetch process from where it was interrupted.

{"level":"info","table":"public.tbl1","file_name":"shard_01_part_00000001.csv","message":"creating or updating token for duplicate key value violates unique constraint \"tbl1_pkey\"; Key (id)=(22) already exists."}
{"level":"info","table":"public.tbl1","continuation_token":"5e7c7173-101c-4539-9b8d-28fad37d0240","message":"created continuation token"}
{"level":"info","fetch_id":"87bf8dc0-803c-4e26-89d5-3352576f92a7","message":"continue from this fetch ID"}

To retry a specific table, reissue the initial molt fetch command and include the fetch ID and a continuation token:

Tip:

You can use molt fetch tokens list to list all active continuation tokens. Refer to List active continuation tokens.

molt fetch \
... \
--fetch-id '87bf8dc0-803c-4e26-89d5-3352576f92a7' \
--continuation-token '5e7c7173-101c-4539-9b8d-28fad37d0240'

To retry all tables that failed, exclude --continuation-token from the command. When prompted, type y to clear all active continuation tokens. To avoid the prompt (e.g., when running molt fetch in a job), include the --non-interactive flag:

molt fetch \
... \
--fetch-id '87bf8dc0-803c-4e26-89d5-3352576f92a7' \
--non-interactive

See also

