Reputation: 61
I am running Debezium Server (without Kafka) on a Linux server and it is having performance issues. There is a lag of around 12-14 hours between reading data from the Oracle source and sending it to Azure Event Hubs.
Property File:
#source properties
debezium.source.connector.class=io.debezium.connector.oracle.OracleConnector
debezium.source.database.hostname={HOST Connection String}
debezium.source.database.port={PORT}
debezium.source.database.user={USER}
debezium.source.database.password={PASS}
debezium.source.database.dbname={DB Name from TNS}
debezium.source.database.pdb.name={PDB Name from TNS}
debezium.source.schema.include.list={Schema}
debezium.source.table.include.list={Schema.Tablename}
debezium.source.snapshot.mode=schema_only
debezium.source.topic.prefix=DBZ
debezium.source.schema.history.internal.store.only.captured.tables.ddl=true
debezium.source.log.mining.strategy=online_catalog
debezium.source.decimal.handling.mode=string
debezium.source.max.queue.size=8192
debezium.source.max.batch.size=2048
debezium.source.snapshot.fetch.size=1000
debezium.source.query.fetch.size=1000
debezium.source.poll.interval.ms=1000
debezium.source.schema.history.internal=io.debezium.storage.file.history.FileSchemaHistory
debezium.source.schema.history.internal.file.filename=data/FileDatabaseHistory.dat
debezium.source.offset.storage.file.filename=data/offsets.dat
debezium.source.offset.flush.interval.ms=0
debezium.source.log.mining.query.filter.mode=in
debezium.transforms=unwrap
debezium.transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
debezium.transforms.unwrap.add.fields=op:operation_type,table,lsn,source.ts_ms:src_event_timestamp,ts_ms:dbz_process_timestamp
debezium.transforms.unwrap.delete.tombstone.handling.mode=rewrite
debezium.source.converters=isbn
debezium.source.isbn.type=dbz.connect.util.TimestampConverter
debezium.source.isbn.format.time=HH:mm:ss
debezium.source.isbn.format.date=yyyyMMddHHmmss
debezium.source.isbn.format.datetime=yyyy-MM-dd'T'HH:mm:ss'Z'
debezium.source.isbn.format.timeZone=UTC
debezium.source.isbn.debug=false
debezium.format.key=json
debezium.format.key.schemas.enable=false
debezium.format.value=json
debezium.format.value.schemas.enable=false
#sink properties
debezium.sink.type=eventhubs
debezium.sink.eventhubs.connectionstring=Endpoint={EVENTHUB}
debezium.sink.eventhubs.hubname=debezium
debezium.sink.eventhubs.maxbatchsize=1048576
debezium.sink.eventhubs.partitionid={PartitionID}
#Quarkus parameters
quarkus.http.port=DBZ_PORT_QUARKUS
quarkus.log.level=INFO
quarkus.log.console.json=false
quarkus.log.file.enable=true
# The full path to the Debezium Server log files.
quarkus.log.file.path=logs/logfile
# The maximum file size of the log file after which a rotation is executed.
quarkus.log.file.rotation.max-file-size=1M
# Indicates whether to rotate log files on server initialization.
quarkus.log.file.rotation.rotate-on-boot=true
# File handler rotation file suffix. When used, the file will be rotated based on its suffix.
quarkus.log.file.rotation.file-suffix=.yyyy-MM-dd.gz
# The maximum number of backups to keep.
quarkus.log.file.rotation.max-backup-index=10
The lag time is calculated from the metric scraped via the JMX Java agent property: debezium_metrics_MilliSecondsBehindSource
How can the performance be improved to reduce the lag of the CDC job?
Upvotes: 0
Views: 98
Reputation: 351
I would start by tweaking the settings relating to LogMiner and watching the JMX metrics for those, mainly the log mining batch size and the setting that controls the log mining query filter mode.
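As a sketch, the batch size is controlled by the `log.mining.batch.size.*` connector properties. The values below are the documented defaults and only a starting point for tuning, not a recommendation; verify the property names and defaults against your Debezium version's Oracle connector documentation:

```
# LogMiner batch sizing (defaults shown; consider raising default/max
# if MilliSecondsBehindSource keeps growing)
debezium.source.log.mining.batch.size.min=1000
debezium.source.log.mining.batch.size.default=20000
debezium.source.log.mining.batch.size.max=100000

# Already present in your config; restricts the LogMiner query to the
# captured tables so less redo is read
debezium.source.log.mining.query.filter.mode=in
```

The connector adjusts the batch size between min and max at runtime, so widening that range gives it more room to catch up when it falls behind.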
Upvotes: 0