Reputation: 559
Our endpoint devices push data over MQTT to an IoT system based on the ThingsBoard IoT platform. All devices publish to a single MQTT topic, /telemetry,
and the server identifies which device the data belongs to from the device's token, which is used as the MQTT username.
Peaks in the data load are not rare, and they cause outages.
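For reference, the device-side publish described above might look roughly like this (a sketch assuming the Python paho-mqtt client; the broker host is a placeholder, and the timestamped JSON shape follows ThingsBoard's telemetry format):

```python
import json
import time

def build_telemetry(temperature, humidity):
    """Serialize one reading in ThingsBoard's timestamped telemetry format."""
    return json.dumps({"ts": int(time.time() * 1000),
                       "values": {"temperature": temperature,
                                  "humidity": humidity}})

def publish_telemetry(token, payload, host="broker.example.com"):
    """Publish one reading to /telemetry, authenticating with the
    device token as the MQTT username (no password)."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    try:  # paho-mqtt >= 2.0 requires an explicit callback API version
        client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
    except AttributeError:  # paho-mqtt < 2.0
        client = mqtt.Client()
    client.username_pw_set(token)  # the token identifies the device server-side
    client.connect(host, 1883)
    client.publish("/telemetry", payload, qos=1)
    client.disconnect()
```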
My question is:
Is it possible, and if so how, to put HiveMQ (RabbitMQ, or a similar product) between the devices and our IoT system to avoid data loss and smooth out the peaks?
Upvotes: 1
Views: 417
Reputation: 712
This post explains how to use Quality of Service levels, offline buffering, throttling, automatic reconnect, and more to avoid data loss and maintain uptime.
The tl;dr is that MQTT and HiveMQ have built-in features to avoid data loss, guarantee delivery, absorb traffic spikes, and handle back-pressure.
It may be worth considering what you can do with your existing tools before expanding your deployment footprint, which only adds complexity if the extra components are unwarranted.
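Concretely, those features map onto client-side settings: QoS 1 for at-least-once delivery, a persistent session so the broker queues messages while a client is offline, automatic reconnect with backoff, and a local buffer for readings taken during an outage. A rough sketch with the Python paho-mqtt client (broker host and client id are placeholders):

```python
import collections

class OfflineBuffer:
    """Bounded FIFO that drops the oldest reading when full, so a long
    outage cannot exhaust device memory (simple offline buffering)."""
    def __init__(self, maxlen=1000):
        self._q = collections.deque(maxlen=maxlen)

    def push(self, payload):
        self._q.append(payload)

    def drain(self):
        while self._q:
            yield self._q.popleft()

def make_resilient_client(token, host="broker.example.com"):
    """MQTT client set up for at-least-once delivery and automatic
    reconnection; unacknowledged QoS 1 messages are retried after a reconnect."""
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    try:  # paho-mqtt >= 2.0 requires an explicit callback API version
        client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1,
                             client_id="device-1", clean_session=False)
    except AttributeError:  # paho-mqtt < 2.0
        client = mqtt.Client(client_id="device-1", clean_session=False)
    client.username_pw_set(token)
    client.reconnect_delay_set(min_delay=1, max_delay=30)  # backoff on outages
    client.connect(host, 1883)
    client.loop_start()  # background thread handles reconnects and retries
    return client
```

With `qos=1` on each publish, the client retransmits anything the broker has not acknowledged, and `clean_session=False` lets the broker hold queued messages for a subscriber that drops offline.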
Upvotes: 2
Reputation: 356
I would recommend putting Apache Kafka or Confluent between the MQTT broker and ThingsBoard. Kafka persists all data to disk (rather than holding it primarily in memory, as RabbitMQ does) and scales across multiple cluster nodes. You can also replay data to ThingsBoard by resetting consumer offsets; this is useful if, for example, a rule chain was misconfigured and you want ThingsBoard to reprocess the data. To connect Kafka/Confluent to ThingsBoard you can use the ThingsBoard Integration.
Find more details here:
https://medium.com/python-point/mqtt-and-kafka-8e470eff606b
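The bridge itself is essentially an MQTT subscriber that re-publishes each message to a Kafka topic, where it sits on disk until ThingsBoard consumes it. A rough sketch assuming paho-mqtt and confluent-kafka (host, port, and topic names are placeholders):

```python
import json

def to_kafka_value(raw):
    """Check that an MQTT payload (bytes) is valid JSON before forwarding;
    malformed payloads are wrapped rather than silently dropped."""
    try:
        json.loads(raw)
        return raw
    except ValueError:
        return json.dumps({"malformed": raw.decode("utf-8", "replace")}).encode()

def run_bridge(mqtt_host="broker.example.com",
               kafka_bootstrap="kafka.example.com:9092",
               kafka_topic="telemetry"):
    """Subscribe to the /telemetry MQTT topic and produce every message to Kafka."""
    import paho.mqtt.client as mqtt       # pip install paho-mqtt
    from confluent_kafka import Producer  # pip install confluent-kafka

    producer = Producer({"bootstrap.servers": kafka_bootstrap})

    def on_message(client, userdata, msg):
        # Kafka persists the record to disk; ThingsBoard (via its Kafka
        # integration) consumes at its own pace, which smooths out peaks.
        producer.produce(kafka_topic, value=to_kafka_value(msg.payload))
        producer.poll(0)  # serve delivery callbacks

    try:  # paho-mqtt >= 2.0 requires an explicit callback API version
        client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
    except AttributeError:  # paho-mqtt < 2.0
        client = mqtt.Client()
    client.on_message = on_message
    client.connect(mqtt_host, 1883)
    client.subscribe("/telemetry", qos=1)
    client.loop_forever()
```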
Upvotes: 1