PostgreSQL Change Data Capture (CDC) Overview

What is Change Data Capture Extraction?

Rivery's Change Data Capture (CDC) extraction mode provides a real-time stream of changes made to the configured databases and tables, eliminating the need to implement and maintain incremental fields or retrieve data via select queries. It also allows you to retrieve deleted rows and schema changes from the database.

How Does Change Data Capture Extraction Work?

The Change Data Capture (CDC) mechanism is based on the Write-Ahead Log (WAL) maintained by the PostgreSQL database. The logical decoding feature of PostgreSQL is available in versions 10 and higher.

PostgreSQL's logical decoding mechanism converts the data written to the WAL into a consistent, easy-to-consume format by means of an output plugin that translates WAL transaction rows into a dedicated format. To consume data from the database, this output plugin must be installed on the PostgreSQL server and logical decoding must be enabled. See the PostgreSQL configuration docs for more information on setting up a PostgreSQL database for Rivery CDC extraction.
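As a rough illustration only (the PostgreSQL configuration docs remain the authoritative steps; on managed services such as Amazon RDS / Aurora or Google Cloud SQL this setting is exposed as a parameter or database flag rather than ALTER SYSTEM), enabling logical decoding on a self-managed server looks like this:

SHOW wal_level;                          -- logical decoding requires 'logical'
ALTER SYSTEM SET wal_level = logical;    -- takes effect after a server restart
-- On Amazon RDS / Aurora, set the rds.logical_replication parameter instead;
-- on Google Cloud SQL, enable the cloudsql.logical_decoding flag.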

Please Note:

Rivery offers CDC extraction mode for PostgreSQL on both Google Cloud SQL and Amazon RDS / Aurora.

Rivery uses the output plugin to continuously pull new rows from the WAL as part of the CDC architecture. To obtain historical data from the database, Rivery cannot rely on the entire log history existing prior to setting up a river, because PostgreSQL retains the WAL for only a limited time before purging it.
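For illustration only (the slot name below is a placeholder, and Rivery creates and manages its own replication slot), PostgreSQL's built-in functions show how a logical replication slot exposes WAL changes:

-- Create a slot with the built-in test_decoding plugin, peek at up to
-- 10 pending changes without consuming them, then drop the slot.
SELECT * FROM pg_create_logical_replication_slot('inspect_slot', 'test_decoding');
SELECT * FROM pg_logical_slot_peek_changes('inspect_slot', NULL, 10);
SELECT pg_drop_replication_slot('inspect_slot');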

On the first run, Rivery uses the Overwrite loading mode to create a full snapshot (or migration) of the chosen table(s), aligning the data and metadata. After the migration is complete, Rivery takes the existing WAL records and performs an Upsert-merge into the target table(s), while continuing to fetch new records from the log as they are created.
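Conceptually, the Upsert-merge step behaves like an upsert keyed on the table's key columns. In PostgreSQL syntax (the table and column names here are hypothetical, and this is not Rivery's actual loading code) that resembles:

-- Merge staged change records into the target table, keyed on id.
INSERT INTO target.orders AS t (id, status, updated_at)
SELECT id, status, updated_at
FROM staging.orders_changes
ON CONFLICT (id) DO UPDATE
SET status     = EXCLUDED.status,
    updated_at = EXCLUDED.updated_at;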

Rivery's PostgreSQL connector reads WAL records and generates change events in FileZone files for row-level INSERT, UPDATE, and DELETE commands. Each file represents a set of database operations performed over a period of time. The data from the log is continuously streamed into the FileZone path defined in the river (in timeframes of no more than 15 minutes) and pushed into the target at the river's scheduled frequency. This method saves the data in the FileZone first, from where it can be pushed into the target DWH at any time.

FileZone is covered in further detail in the Target documentation.

CDC Point in Time Position Feature


The CDC "Point in Time" Position feature allows users to gain deeper insights into the operational details of a River's streaming process. This functionality is essential for data recovery and synchronization, enabling users to locate and retrieve data from a specific point in history using the exact information stored in the CDC log position. For additional information, refer to our documentation.

A Sequence Change Data Capture Deployment

Discrepancies in transaction records can arise when two users execute identical transactions at the same moment, producing conflicts in the timestamp field. To address this, Rivery implements a "sequence" Change Data Capture (CDC) mechanism.

Rivery enriches each record emitted from the database with two extra metadata fields: '__transaction_id' and '__transaction_order'.

The '__transaction_id' field serves as a unique identifier for each transaction, ensuring that no two transactions share the same identifier. This uniqueness allows for precise identification and differentiation between transactions, mitigating conflicts arising from identical timestamps.

Furthermore, the '__transaction_order' field denotes the order in which the transactions were emitted from the database. By incorporating this field, the sequencing of transactions can be accurately maintained, enabling downstream systems such as Apache Kafka or AWS Kinesis to process and order transactions correctly.

The inclusion of these metadata fields guarantees that the ordering of transactions is preserved throughout the River. As a result, smooth and accurate transaction flows can be achieved, resolving the discrepancies that previously arose from transactions with identical timestamps.
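As a sketch of how these fields can be used downstream (the table and column names below are hypothetical), a consumer can break timestamp ties on the metadata fields to recover the original commit order:

-- Hypothetical target table; __transaction_id and __transaction_order are
-- the metadata fields Rivery adds to each emitted record.
SELECT *
FROM analytics.orders_cdc
ORDER BY updated_at, __transaction_id, __transaction_order;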

The additional fields are depicted in this table:

For further information about Change Data Capture (CDC) Metadata Fields, please refer to our Database Overview document.

How to Enable Change Data Capture Extraction?

After you've established a connection, go to the Source tab and do the following:

  1. Choose Multi-Tables as the River mode.
  2. Select Log Based as the extraction mode.
  3. A brief reminder appears, encouraging you to check your connection and set up your Source and Target, which will happen next. Select 'Got It' to proceed.
  4. Choose Log Based Version from the Advanced Options drop-down menu.
  5. Set the name of the custom replication slot (optional).
  6. Set a custom publication name (optional; a sketch of creating these source-database objects appears after this list).
  7. Turn the 'Enable Log' toggle to true at the bottom of the page.
  8. A new window pops up, guiding you through the next steps. Select 'Target'.
  9. You'll be taken directly to the Target tab, where you can choose your data target.
  10. Define your Target Connection, then choose a Database and a Schema to load into.
  11. To make sure the operation is complete, click the Enable Log toggle once more.
  12. To navigate to the Schema tab, click Schema.
  13. Click 'Show Me My Schemas'.
  14. A minimum of one Schema must be chosen.
  15. To use CDC, the table must contain a key, which is defined as the Source Column - id.
  16. Navigate to the 'Enable Log' toggle and select the number of hours you want to run the River, then click 'I'm Sure'.
  17. Wait for the Initiation Log to complete its processing.
  18. You're all set to run your log-based River now.
  19. Following the completion of the River Run, return to the Schema tab and check that the Table you selected is Streaming.
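As referenced in steps 5, 6, and 15 above, the source database needs a replication slot, a publication, and a key on each replicated table. A minimal sketch (the object, table, and plugin names are placeholders; follow the PostgreSQL configuration docs for the names Rivery expects):

-- Custom replication slot (step 5); the output plugin shown is PostgreSQL's
-- built-in pgoutput and is only an example.
SELECT pg_create_logical_replication_slot('rivery_custom_slot', 'pgoutput');

-- Custom publication (step 6) covering the tables to be streamed.
CREATE PUBLICATION rivery_custom_pub FOR TABLE public.orders, public.customers;

-- Key requirement (step 15): a primary key also serves as the table's
-- replica identity for UPDATE and DELETE events.
ALTER TABLE public.orders ADD PRIMARY KEY (id);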




Type Mapping in CDC

While extracting your data, we match PostgreSQL data types to Rivery data types.

The mapping of PostgreSQL data types to Rivery-compatible types is shown in the table below:

PostgreSQL Type    Rivery Type
BOOLEAN            BOOLEAN
BYTEA              STRING
TOAST              Not Supported
CHAR               STRING
VARCHAR            STRING
TEXT               STRING
JSON               JSON
JSONB              JSON
DATE               STRING
FLOAT4             STRING
FLOAT8             STRING
INT2               INTEGER
INT4               INTEGER
INT8               INTEGER
INTERVAL           STRING
NUMERIC            FLOAT
TIME               STRING
TIMESTAMP          STRING
TIMESTAMPTZ        STRING
UUID               STRING
XML                STRING


Limitations

  • The 'TOAST' storage format is not supported.
  • Any special character will be replaced with an underscore. As a result, if you wish to edit the table name, go to:
         1. The 'Schema' tab
         2. Select a Table
         3. Click 'Table Settings'
         4. Choose 'Edit' to change the table name manually.


Current Stream Position in Your Database

To confirm the Stream position, run the following command on the server: 

SELECT pg_current_wal_lsn() - '0/0' AS lsn_position;
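
To see which replication slots exist on the server and how far each one has been confirmed, you can also query the built-in pg_replication_slots view:

SELECT slot_name, plugin, active, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots;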


