Abdelrahman Emara

Regionserver.ReplicationSinkManager: No sinks available at peer. Will not be able to replicate

I am trying to stream all HBase table edits to a Kafka topic using the Apache HBase™ Kafka Proxy, following the steps mentioned in this repo.

I have standalone HBase running on the same CentOS server as Kafka. HBASE_CLASSPATH in the hbase-env.sh file is configured to point at the connectors downloaded from the HBase mirror link.

Replication is enabled in hbase-site.xml, and table replication was enabled in the hbase shell:

<property>
  <name>hbase.replication</name>
  <value>true</value>
</property>
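For completeness, this is how I enabled replication on the table in the hbase shell (the column family name `random_cf` is taken from the flush logs below; `REPLICATION_SCOPE => 1` marks the family for replication):

```shell
# In the hbase shell: mark the column family for replication
alter 'test_kafka_table', {NAME => 'random_cf', REPLICATION_SCOPE => 1}
```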

kafka-route-rules.xml is configured to route all mutations to the kafka topic:

<rules>
  <rule action="route" table="default:test_kafka_table" topic="hbase-logs"/>
</rules>

After starting the kafkaproxy with:

bin/hbase-connectors-daemon.sh start kafkaproxy -a -e -p kafka_endpoint -b localhost:9092
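To check whether anything reaches the Kafka side, I also watched the topic with a console consumer (nothing arrives; `localhost:9092` is the same broker passed to the proxy above):

```shell
# Watch the target topic for incoming HBase edits
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic hbase-logs --from-beginning
```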

A peer named "kafka_endpoint" shows up in the Replications tab with this note underneath:

"If the replication delay is UNKNOWN, that means this walGroup doesn't start replicate yet and it may get disabled."

After inserting data into the table, the logs show the warning below:

2024-11-02T19:44:07,703 INFO  [MemStoreFlusher.0] regionserver.HRegion: Flushing 3a1a23e87b200f50e2d5ba9ae48b30ba 1/1 column families, dataSize=88 B heapSize=496 B
2024-11-02T19:44:07,710 INFO  [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed memstore data size=88 B at sequenceid=185 (bloomFilter=true), to=file:/root/hbase/tmp/hbase/data/default/test_kafka_table/3a1a23e87b200f50e2d5ba9ae48b30ba/.tmp/random_cf/1f10c4cfd5f34d1a9f836df9020b867b
2024-11-02T19:44:07,713 INFO  [MemStoreFlusher.0] regionserver.HStore: Added file:/root/hbase/tmp/hbase/data/default/test_kafka_table/3a1a23e87b200f50e2d5ba9ae48b30ba/random_cf/1f10c4cfd5f34d1a9f836df9020b867b, entries=2, sequenceid=185, filesize=4.9 K
2024-11-02T19:44:07,714 INFO  [MemStoreFlusher.0] regionserver.HRegion: Finished flush of dataSize ~88 B/88, heapSize ~480 B/480, currentSize=0 B/0 for 3a1a23e87b200f50e2d5ba9ae48b30ba in 11ms, sequenceid=185, compaction requested=false
2024-11-02T19:44:22,192 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.14 GB, usedSize=2.37 MB, freeSize=3.14 GB, max=3.14 GB, blockCount=5, accesses=60, hits=43, hitRatio=71.67%, , cachingAccesses=48, cachingHits=41, cachingHitsRatio=85.42%, evictions=389, evicted=0, evictedPerRun=0.0
2024-11-02T19:44:23,459 INFO  [emaradevenv:16020Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=458B, Limit=268435456B
Normal source for cluster kafka_endpoint: Total replicated edits: 0, current progress: 
walGroup [emaradevenv%2C16020%2C1730565562173]: currently replicating from: file:/root/hbase/tmp/hbase/WALs/emaradevenv,16020,1730565562173/emaradevenv%2C16020%2C1730565562173.1730565566468 at position: 1210

2024-11-02T19:44:27,653 WARN  [RS:0;emaradevenv:16020.replicationSource.shipperemaradevenv%2C16020%2C1730565562173,kafka_endpoint] regionserver.ReplicationSinkManager: No sinks available at peer. Will not be able to replicate
2024-11-02T19:45:55,653 WARN  [RS:0;emaradevenv:16020.replicationSource.shipperemaradevenv%2C16020%2C1730565562173,kafka_endpoint] regionserver.ReplicationSinkManager: No sinks available at peer. Will not be able to replicate
2024-11-02T19:47:24,655 WARN  [RS:0;emaradevenv:16020.replicationSource.shipperemaradevenv%2C16020%2C1730565562173,kafka_endpoint] regionserver.ReplicationSinkManager: No sinks available at peer. Will not be able to replicate
2024-11-02T19:48:54,656 WARN  [RS:0;emaradevenv:16020.replicationSource.shipperemaradevenv%2C16020%2C1730565562173,kafka_endpoint] regionserver.ReplicationSinkManager: No sinks available at peer. Will not be able to replicate
2024-11-02T19:49:22,192 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=3.14 GB, usedSize=2.37 MB, freeSize=3.14 GB, max=3.14 GB, blockCount=5, accesses=62, hits=45, hitRatio=72.58%, , cachingAccesses=50, cachingHits=43, cachingHitsRatio=86.00%, evictions=419, evicted=0, evictedPerRun=0.0
2024-11-02T19:49:23,459 INFO  [emaradevenv:16020Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=458B, Limit=268435456B
Normal source for cluster kafka_endpoint: Total replicated edits: 0, current progress: 
walGroup [emaradevenv%2C16020%2C1730565562173]: currently replicating from: file:/root/hbase/tmp/hbase/WALs/emaradevenv,16020,1730565562173/emaradevenv%2C16020%2C1730565562173.1730565566468 at position: 1210

Any lead on what I might be missing here? Is there any additional WAL configuration needed, or does HBase have to be fully distributed?

I tried putting HBase in pseudo-distributed mode, but that didn't solve the issue.
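For reference, the peer itself looks registered and enabled when I inspect it from the hbase shell (standard replication commands):

```shell
# In the hbase shell: inspect replication peers and per-source status
list_peers
status 'replication'
```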

Upvotes: 0

Views: 23

Answers (0)
