Start a Cluster in Docker (Insecure)

Warning:
CockroachDB v19.2 is no longer supported as of May 12, 2021. For more details, refer to the Release Support Policy.

Once you've installed the official CockroachDB Docker image, it's simple to run an insecure multi-node cluster across multiple Docker containers on a single host, using Docker volumes to persist node data.

Warning:

Running multiple nodes on a single host is useful for testing CockroachDB, but it's not suitable for production. To run a physically distributed cluster in containers, use an orchestration tool like Kubernetes or Docker Swarm. See Orchestration for more details, and review the Production Checklist.

Before you begin

Make sure you have already installed the official CockroachDB Docker image.

Step 1. Create a bridge network

Since you'll be running multiple Docker containers on a single host, with one CockroachDB node per container, you need to create what Docker refers to as a bridge network. The bridge network will enable the containers to communicate as a single cluster while keeping them isolated from external networks.

$ docker network create -d bridge roachnet

We've used roachnet as the network name here and in subsequent steps, but feel free to give your network any name you like.
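
If you want to confirm that the network was created, you can list your Docker networks; roachnet should appear in the output:

$ docker network ls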

Step 2. Start the cluster

  1. Create a Docker volume for each container:

    $ docker volume create roach1

    $ docker volume create roach2

    $ docker volume create roach3
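
    To verify that the volumes were created, you can list them:

    $ docker volume ls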
    
  2. Start the first node:

    $ docker run -d \
    --name=roach1 \
    --hostname=roach1 \
    --net=roachnet \
    -p 26257:26257 -p 8080:8080  \
    -v "roach1:/cockroach/cockroach-data"  \
    cockroachdb/cockroach:v19.2.12 start \
    --insecure \
    --join=roach1,roach2,roach3
    
  3. This command creates a container and starts the first CockroachDB node inside it. Take a moment to understand each part:

    • docker run: The Docker command to start a new container.
    • -d: This flag runs the container in the background so you can continue the next steps in the same shell.
    • --name: The name for the container. This is optional, but a custom name makes it significantly easier to reference the container in other commands, for example, when opening a Bash session in the container or stopping the container.
    • --hostname: The hostname for the container. You will use this to join other containers/nodes to the cluster.
    • --net: The bridge network for the container to join. See step 1 for more details.
    • -p 26257:26257 -p 8080:8080: These flags map the default port for inter-node and client-node communication (26257) and the default port for HTTP requests to the Admin UI (8080) from the container to the host. This makes it possible to connect to the node and open the Admin UI from outside the container; the containers themselves communicate over the bridge network.
    • -v "roach1:/cockroach/cockroach-data": This flag mounts the roach1 volume you created earlier into the container. Data and logs for this node will be stored in the volume and will persist after the container is stopped or deleted. For more details, see Docker's volumes topic.
    • cockroachdb/cockroach:v19.2.12 start --insecure --join: The CockroachDB command to start a node in the container in insecure mode. The --join flag specifies the hostname of each node that will initially comprise your cluster. Otherwise, all cockroach start defaults are accepted. Note that since each node is in a unique container, using identical default ports won’t cause conflicts.
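
    For example, the custom name given with --name makes it easy to open a Bash session inside the running container (assuming the image includes a shell, as the official image does):

    $ docker exec -it roach1 bash
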
  4. Start two more nodes:

    $ docker run -d \
    --name=roach2 \
    --hostname=roach2 \
    --net=roachnet \
    -v "roach2:/cockroach/cockroach-data" \
    cockroachdb/cockroach:v19.2.12 start \
    --insecure \
    --join=roach1,roach2,roach3
    
    $ docker run -d \
    --name=roach3 \
    --hostname=roach3 \
    --net=roachnet \
    -v "roach3:/cockroach/cockroach-data" \
    cockroachdb/cockroach:v19.2.12 start \
    --insecure \
    --join=roach1,roach2,roach3
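
    Check that all three containers are up before initializing the cluster:

    $ docker ps

    The output should list roach1, roach2, and roach3, each with a status of Up.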
    
  5. Perform a one-time initialization of the cluster:

    $ docker exec -it roach1 ./cockroach init --insecure
    

    You'll see the following message:

    Cluster successfully initialized
    

    At this point, each node also prints helpful startup details to its log. For example, the following command retrieves node 1's startup details:

    $ docker exec -it roach1 grep 'node starting' cockroach-data/logs/cockroach.log -A 11
    

    The output will look something like this:

    CockroachDB node starting at 
    build:               CCL v19.2.12 @ 2021-01-19 00:00:00 (go1.12.6)
    webui:               http://roach1:8080
    sql:                 postgresql://root@roach1:26257?sslmode=disable
    client flags:        /cockroach/cockroach <client cmd> --host=roach1:26257 --insecure
    logs:                /cockroach/cockroach-data/logs
    temp dir:            /cockroach/cockroach-data/cockroach-temp273641911
    external I/O path:   /cockroach/cockroach-data/extern
    store[0]:            path=/cockroach/cockroach-data
    status:              initialized new cluster
    clusterID:           1a705c26-e337-4b09-95a6-6e5a819f9eec
    nodeID:              1
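
    You can also confirm that all three nodes have joined the cluster:

    $ docker exec -it roach1 ./cockroach node status --insecure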
    

Step 3. Use the built-in SQL client

Now that your cluster is live, you can use any node as a SQL gateway. To test this out, let's use the docker exec command to start the built-in SQL shell in the first container.

  1. Start the SQL shell in the first container:

    $ docker exec -it roach1 ./cockroach sql --insecure
    
  2. Run some basic CockroachDB SQL statements:

    > CREATE DATABASE bank;
    
    > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
    
    > INSERT INTO bank.accounts VALUES (1, 1000.50);
    
    > SELECT * FROM bank.accounts;
    
      id | balance
    +----+---------+
       1 | 1000.50
    (1 row)
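
    You can also list the tables you just created:

    > SHOW TABLES FROM bank;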
    
  3. Now exit the SQL shell on node 1 and open a new shell on node 2:

    > \q
    
    $ docker exec -it roach2 ./cockroach sql --insecure
    
  4. Run the same SELECT query as before:

    > SELECT * FROM bank.accounts;
    
      id | balance
    +----+---------+
       1 | 1000.50
    (1 row)
    

    As you can see, node 1 and node 2 behaved identically as SQL gateways.

  5. Exit the SQL shell on node 2:

    > \q
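
    If you just want to run a single statement without opening the interactive shell, you can pass it with the --execute flag:

    $ docker exec -it roach1 ./cockroach sql --insecure \
    --execute="SELECT * FROM bank.accounts;"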
    

Step 4. Run a sample workload

CockroachDB also comes with a number of built-in workloads for simulating client traffic. Let's run the workload based on CockroachDB's sample vehicle-sharing application, MovR.

  1. Load the initial dataset:

    $ docker exec -it roach1 ./cockroach workload init movr \
    'postgresql://root@roach1:26257?sslmode=disable'
    
  2. Run the workload for 5 minutes:

    $ docker exec -it roach1 ./cockroach workload run movr \
    --duration=5m \
    'postgresql://root@roach1:26257?sslmode=disable'
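
    MovR is just one of several built-in workloads. To see the others and the flags they accept, you can ask the binary for help:

    $ docker exec -it roach1 ./cockroach workload run --help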
    

Step 5. Access the Admin UI

The CockroachDB Admin UI gives you insight into the overall health of your cluster as well as the performance of the client workload.

  1. When you started the first container/node, you mapped the node's default HTTP port 8080 to port 8080 on the host, so go to http://localhost:8080.
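
    Because port 8080 is mapped to the host, you can also check a node's health from the command line; an HTTP 200 response means the node is up and ready to serve:

    $ curl 'http://localhost:8080/health?ready=1'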

  2. On the Cluster Overview, notice that three nodes are live, with an identical replica count on each node:

    [Screenshot: CockroachDB Admin UI, Cluster Overview page]

    This demonstrates CockroachDB's automated replication of data via the Raft consensus protocol.

    Note:

    Capacity metrics can be incorrect when running multiple nodes on a single machine. For more details, see this limitation.

  3. Click Metrics to access a variety of time series dashboards, including graphs of SQL queries and service latency over time:

    [Screenshot: CockroachDB Admin UI, Metrics dashboards]

  4. Use the Databases, Statements, and Jobs pages to view details about your databases and tables, to assess the performance of specific queries, and to monitor the status of long-running operations like schema changes, respectively.

Step 6. Stop the cluster

Use the docker stop and docker rm commands to stop and remove the containers (and therefore the cluster):

$ docker stop roach1 roach2 roach3

$ docker rm roach1 roach2 roach3

Because each node's data persists in its Docker volume, you can recreate the cluster with the same data later by re-running the docker run commands from Step 2; the one-time cluster initialization does not need to be repeated. If you do not plan to restart the cluster, you may also want to remove the Docker volumes:

$ docker volume rm roach1 roach2 roach3
