Reputation: 518
I'm trying to write an application that controls a swarm of robots via WiFi and the MQTT protocol. I have performed some tests to measure whether it will be fast enough for my application. I would like to have a control loop (a message going from a PC to a robot and back) that takes no more than 25-30 ms on average.
I have written an application using the Paho Java client that runs on two machines. When one receives a message on topic1, it publishes to topic2. Topic2 is subscribed to by the second machine, which in turn publishes to topic1.
       topic1           topic1
M1 ---------> broker ---------> M2
       topic2           topic2
M1 <--------- broker <--------- M2
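In code, the relay on M2 does roughly the following (a stripped-down sketch using the Paho v3 synchronous client; the broker URL is a placeholder for my setup):

import org.eclipse.paho.client.mqttv3.*;

public class Relay {
    public static void main(String[] args) throws MqttException {
        final MqttClient client = new MqttClient("tcp://broker:1883", "M2");
        client.setCallback(new MqttCallback() {
            public void connectionLost(Throwable cause) {}
            public void messageArrived(String topic, MqttMessage message) throws Exception {
                // Bounce every message received on topic1 back out on topic2
                client.publish("topic2", message.getPayload(), 0, false);
            }
            public void deliveryComplete(IMqttDeliveryToken token) {}
        });
        client.connect();
        client.subscribe("topic1", 0);
    }
}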
When all publishing and subscribing was done with QoS 0, the loop time averaged around 12 ms. However, I would like to use QoS 1 to guarantee that the commands sent to the robots always reach their destination. When I tested the loop time, it averaged around 250 ms.
What causes such a large increase in time? From my understanding, if there are no transmission errors, the number of exchanged packets simply doubles with QoS 1 (a PUBACK is exchanged for every PUBLISH, both between the publishing client and the broker and between the broker and the subscriber, see http://www.hivemq.com/mqtt-essentials-part-6-mqtt-quality-of-service-levels/).
Can I somehow reduce this time? I have tried the Mosquitto and Apache Apollo brokers, and both produced the same results.
Edit:
I have changed the testing procedure a bit. Now I have two instances of MQTT clients running on the same machine, one as a publisher and the second as a subscriber. The publisher sends 1000 messages at 10 ms intervals like this:
Client publisher = new Client(url, clientId + "pub", cleanSession, quietMode, userName, password);
Client subscriber = new Client(url, clientId + "sub", cleanSession, quietMode, userName, password);
subscriber.subscribe(pubTopic, qos);

int counter = 0;
while (counter < 1000) {
    Thread.sleep(10, 0);
    // Publish the send time so the subscriber can compute the latency
    String time = new Timestamp(System.currentTimeMillis()).toString();
    publisher.publish(pubTopic, qos, time.getBytes());
    counter++;
}
The subscriber just waits for messages and measures the time:
public void messageArrived(String topic, MqttMessage message) throws MqttException {
    // Called when a message arrives from the server that matches any
    // subscription made by the client; sum and counter are instance fields
    Timestamp tRec = new Timestamp(System.currentTimeMillis());
    String timeSent = new String(message.getPayload());
    Timestamp tSent = Timestamp.valueOf(timeSent);
    long diff = tRec.getTime() - tSent.getTime();
    sum += diff;
    counter++;
    if (counter == 1000) {
        double avg = sum / 1000.0;
        System.out.println("avg time: " + avg);
    }
}
The broker (Mosquitto with the default config) runs on a separate machine in the same local network. The results I have achieved are even more bizarre than before. Now it takes approximately 8-9 ms for a message with QoS 1 to get to the subscriber. With QoS 2 it's around 20 ms. However, with QoS 0, I get average times from 100 ms to even 250 ms! I guess the error is somewhere in my test method, but I can't see where.
Upvotes: 2
Views: 5845
Reputation: 10117
QoS 0 messages are not required to be persisted - they can be maintained entirely in memory.
To assure QoS 1 (and QoS 2) delivery, the messages need to be persisted in some form. This adds processing time on top of the simple network transfer time.
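If the client-side persistence is part of that overhead, the Paho Java client lets you swap its default file-based persistence for an in-memory store. A minimal sketch, assuming the Paho v3 API (broker URL and client id are placeholders):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class FastClient {
    public static void main(String[] args) throws MqttException {
        // In-memory persistence keeps QoS 1/2 state in RAM instead of on disk.
        // Faster, but in-flight messages are lost if the client process dies.
        MqttClient client = new MqttClient("tcp://broker:1883", "clientId", new MemoryPersistence());
        client.connect();
    }
}

The trade-off is deliberate: QoS 1/2 guarantees survive a client restart only if the in-flight state is on disk, so in-memory persistence trades that durability for speed.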
Upvotes: 3
Reputation: 59866
The implication of two totally different broker implementations showing the same results could be that it's the client side of the code that is taking the time to respond with the ack packet.
Are you doing all the processing of an incoming message in the onMessage callback? If so, this work will be done on the same thread as all the MQTT protocol handling, which could delay a response. For high-volume/high-speed message processing, the usual pattern is to use the onMessage callback only to queue the incoming message for another thread to process, as in the sketch below.
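A minimal sketch of that handoff (class and method names here are illustrative, not part of Paho):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class QueueingHandler {
    // Handoff queue between the Paho callback thread and a worker thread
    private final BlockingQueue<MqttMessage> inbox = new LinkedBlockingQueue<MqttMessage>();

    public QueueingHandler() {
        // The worker drains the queue and does the real processing,
        // so the MQTT thread can send its acks without waiting on us
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        MqttMessage message = inbox.take();
                        process(message);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the MQTT callback: just enqueue and return immediately
    public void messageArrived(String topic, MqttMessage message) {
        inbox.offer(message);
    }

    private void process(MqttMessage message) {
        // ... actual application work here ...
    }
}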
Upvotes: 0