Filter read access events in Debezium


We are using Debezium + PostgreSQL.

We get four types of events, for create, read, update and delete: 'c', 'r', 'u' and 'd'.

The read events are unused by our application. In fact, I cannot think of a use case for the 'r' events unless we are auditing or mirroring the activities of a transaction.

We are having difficulty scaling, and I suspect the network is being hogged by these read events.

How do we filter out those events in PostgreSQL itself?

I got a clue from one of the contributors to use snapshot.mode. I gather it is something that has to be configured when Debezium creates its snapshot, but I am unable to figure out how.


1 Answer

Best answer, by Randall Hauch

It is likely that your database has existed for some time and contains data and changes that have already been purged from the logical decoding logs. If you then start using the Debezium PostgreSQL connector to capture changes into Kafka, the question becomes what a consumer of the events in Kafka should be able to see.

One scenario is that a consumer should be able to see events for all rows in the database, even those that existed prior to the start of CDC. For example, this allows a consumer to completely reproduce/replicate all of the existing data and keep that data in sync over time. To accomplish this, the Debezium PostgreSQL connector can begin by creating a snapshot of the database contents before it starts capturing the changes. This is done atomically, so even if the snapshot takes a while to run, the connector will still see all of the events that occurred since the snapshot was started. These events are represented as "read" events, since in effect the connector is simply reading the existing rows. However, they are identical in shape to "insert" events, so any application could treat reads and inserts in the same way.
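For illustration, here is roughly what the value of such a snapshot event looks like. This is a sketch of the standard Debezium change-event envelope; the field values and table are invented, and the "source" block is trimmed to a few fields:

```json
{
  "before": null,
  "after": { "id": 42, "name": "example row" },
  "source": {
    "connector": "postgresql",
    "db": "inventory",
    "table": "orders",
    "snapshot": "true"
  },
  "op": "r",
  "ts_ms": 1500000000000
}
```

Apart from "op" being "r" rather than "c" (and the "snapshot" flag in "source"), this is the same shape a consumer sees for an insert.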

On the other hand, if consumers of the events in Kafka do not need to see events for all existing rows, then the connector can be configured to avoid the snapshot and to instead begin by capturing the changes. This may be useful in some scenarios where the entire database state need not be found in Kafka, but instead the goal is to simply capture the changes that are occurring.
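The setting that controls this is the connector's snapshot.mode property. Below is a minimal registration sketch for a 1.x-era connector; the connector name, connection details and server name are placeholders, and newer Debezium releases rename some of these properties (for example, database.server.name became topic.prefix) and some snapshot.mode values, so check the documentation for your version:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.dbname": "inventory",
    "database.server.name": "dbserver1",
    "snapshot.mode": "never"
  }
}
```

With the default snapshot.mode of "initial" you get the snapshot-then-stream behavior described above; with "never" the connector skips the snapshot and begins streaming changes immediately, so no 'r' events are produced.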

The Debezium PostgreSQL connector will work either way, so you should use the approach that works for how you're consuming the events.
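If a snapshot has already been taken and its 'r' events are already in Kafka, another option is to drop them on the consumer side by inspecting the "op" field of the envelope. Here is a minimal sketch using the kafka-python client; the topic name and JSON-converter assumptions are mine, not from the answer above:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Topic names follow the Debezium convention <server.name>.<schema>.<table>;
# "dbserver1.public.orders" is a placeholder.
consumer = KafkaConsumer(
    "dbserver1.public.orders",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
)

for message in consumer:
    event = message.value
    if event is None:
        continue  # tombstone record with no payload
    # With the JSON converter the envelope may be wrapped as {"schema": ..., "payload": ...}.
    payload = event.get("payload", event)
    if payload.get("op") == "r":
        continue  # skip snapshot "read" events
    # Handle 'c', 'u' and 'd' events normally.
    print(payload["op"], payload.get("after"))
```

Note that this only saves downstream processing; the events are still produced into Kafka and still cross the network, so if that cost is the problem, skipping the snapshot as described above is the better fix.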