
Feature of the week: Core changefeeds in CockroachDB 19.1

Last edited on July 30, 2019

1 minute read

Our latest feature, core changefeeds, is an exciting new way for event-driven apps to consume data from CockroachDB. With the CHANGEFEED FOR statement, a SQL client can ask the database to watch a table and report every write to it. CockroachDB then holds the connection open until the client closes it, streaming a record to the client for each write. You can use changefeeds to build event-driven applications, perform data integration, and handle all kinds of other tasks that depend on watching the state of a table.

Here's what a core changefeed looks like in practice:

SQL connection 1:

    root@:26257/defaultdb> CREATE TABLE office_dogs (
        id SERIAL PRIMARY KEY,
        name STRING
    );
    CREATE TABLE

    Time: 11.598ms

    root@:26257/defaultdb> SET CLUSTER SETTING kv.rangefeed.enabled = true;
    SET CLUSTER SETTING

    Time: 7.139ms

    root@:26257/defaultdb> \set display_format csv
    root@:26257/defaultdb> EXPERIMENTAL CHANGEFEED FOR TABLE office_dogs;

SQL connection 2:

    root@:26257/defaultdb> INSERT INTO office_dogs (name) VALUES ('Carl'), ('Petee');
    INSERT 2

    Time: 8.87ms

SQL connection 1:

    table,key,value
    office_dogs,[472990757278121985],"{""after"": {""id"": 472990757278121985, ""name"": ""Carl""}}"
    office_dogs,[472990757278187521],"{""after"": {""id"": 472990757278187521, ""name"": ""Petee""}}"

SQL connection 2:

    root@:26257/defaultdb> INSERT INTO office_dogs (name) VALUES ('Frankie');
    INSERT 1

    Time: 3.081ms

SQL connection 1:

    office_dogs,[472991019430084609],"{""after"": {""id"": 472991019430084609, ""name"": ""Frankie""}}"

In the output, the `table` column is the name of the table being watched, the `key` column contains each record's primary key columns (represented as a JSON array), and the `value` column is a JSON object whose `"after"` key contains the state of all of the record's columns after the table is written to. Inserts and updates are represented as shown above; deletes are represented with `null` in the `"after"` field. Changefeeds continue to function even while online schema changes take place:
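To make the format concrete, here is a minimal sketch in Python of how a consumer might decode one of these CSV-formatted rows (the function name is our own, for illustration, and not part of any CockroachDB client library):

```python
import csv
import io
import json

def parse_changefeed_row(csv_line):
    """Decode one CSV-formatted changefeed row into (table, key, after).

    `key` becomes the list of primary key values; `after` is the row's
    state after the write, or None when the row was deleted.
    """
    table, key, value = next(csv.reader(io.StringIO(csv_line)))
    return table, json.loads(key), json.loads(value)["after"]

row = 'office_dogs,[472990757278121985],"{""after"": {""id"": 472990757278121985, ""name"": ""Carl""}}"'
table, key, after = parse_changefeed_row(row)
print(table, key, after["name"])  # office_dogs [472990757278121985] Carl
```

Note that the doubled quotes inside the `value` field are ordinary CSV escaping, which the standard `csv` module unwraps before the payload is parsed as JSON.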

SQL connection 2:

    root@:26257/defaultdb> ALTER TABLE office_dogs ADD COLUMN good_boy BOOL DEFAULT true;
    ALTER TABLE

    Time: 117.968ms

SQL connection 1:

    office_dogs,[472990757278121985],"{""after"": {""id"": 472990757278121985, ""name"": ""Carl""}}"
    office_dogs,[472990757278187521],"{""after"": {""id"": 472990757278187521, ""name"": ""Petee""}}"
    office_dogs,[472991019430084609],"{""after"": {""id"": 472991019430084609, ""name"": ""Frankie""}}"
    office_dogs,[472990757278121985],"{""after"": {""good_boy"": true, ""id"": 472990757278121985, ""name"": ""Carl""}}"
    office_dogs,[472990757278187521],"{""after"": {""good_boy"": true, ""id"": 472990757278187521, ""name"": ""Petee""}}"
    office_dogs,[472991019430084609],"{""after"": {""good_boy"": true, ""id"": 472991019430084609, ""name"": ""Frankie""}}"

SQL connection 2:

    root@:26257/defaultdb> DELETE FROM office_dogs WHERE name = 'Frankie';
    DELETE 1

    Time: 4.464ms

SQL connection 1:

    office_dogs,[472991019430084609],"{""after"": null}"
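A consumer reacting to this stream can branch on that `null` to tell deletes apart from inserts and updates. A small Python sketch (the function name is ours, for illustration):

```python
import json

def classify_event(value_json):
    """Classify one changefeed `value` payload: a null "after" means a delete."""
    after = json.loads(value_json)["after"]
    if after is None:
        return ("delete", None)
    return ("upsert", after)

print(classify_event('{"after": null}'))                          # ('delete', None)
print(classify_event('{"after": {"id": 1, "name": "Carl"}}')[0])  # upsert
```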

Core changefeeds provide the same ordering and correctness guarantees as our enterprise changefeed functionality, but enterprise changefeeds can write their data to Kafka or cloud storage from multiple nodes at once, instead of delivering it over a single SQL connection.
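For comparison, an enterprise changefeed is created with `CREATE CHANGEFEED ... INTO` and a sink URI instead of `CHANGEFEED FOR`; a sketch (the Kafka broker address here is a placeholder):

```sql
CREATE CHANGEFEED FOR TABLE office_dogs
    INTO 'kafka://broker-host:9092';
```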

Take a look at our change data capture documentation to try out core changefeeds. And follow Cockroach Labs to keep up with all our Features of the Week.
