PlikPlok

Reputation: 110

How to time a BLE GATT exchange with Android Bluetooth APIs?

I am writing an Android application that communicates with a custom-made BLE device via GATT services. The device provides a service with two characteristics, one for reading and one for writing data. When data is written to the WRITE characteristic, the BLE device forwards it over a wired UART interface to another device, which responds over the same UART interface. Upon receiving the response, the BLE device sends a notification that new data is available on the READ characteristic of its service, so my Android application can retrieve it.

What I would like to do is measure the time elapsed between when I send a request from my Android application to when I receive the notification that new data is available.

I have implemented a "stopwatch" as a long. I set it to System.currentTimeMillis() when I write data and compare its value with another call to System.currentTimeMillis() upon notification reception, giving something like:

long stopwatch = System.currentTimeMillis(); // reset when the request is written
// ... write the characteristic, wait ...

// ... upon notification reception:
long elapsed = System.currentTimeMillis() - stopwatch;
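
As a side note, System.currentTimeMillis() follows the wall clock and can jump if the system time is adjusted; for interval timing, the monotonic SystemClock.elapsedRealtime() is the safer choice:

import android.os.SystemClock;

long stopwatch = SystemClock.elapsedRealtime(); // monotonic, immune to wall-clock changes
// ...
long elapsed = SystemClock.elapsedRealtime() - stopwatch;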

I have set up two stopwatches to compare two measured times.

The first stopwatch is reset when I call gatt.writeCharacteristic(myCharacteristic), and the second when BluetoothGattCallback.onCharacteristicWrite() is called.

I have registered my application for notifications from the READ characteristic, so I stop both stopwatches when BluetoothGattCallback.onCharacteristicChanged() is called.
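
In outline, the setup looks roughly like this (the callback signatures are the standard Android API; the class, field, and method names are placeholders of my own):

import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;

class GattRoundTripTimer {

    private long stopwatchFromWriteCall; // reset when writeCharacteristic() is called
    private long stopwatchFromWriteAck;  // reset when onCharacteristicWrite() fires

    void sendRequest(BluetoothGatt gatt, BluetoothGattCharacteristic myCharacteristic) {
        stopwatchFromWriteCall = System.currentTimeMillis();
        gatt.writeCharacteristic(myCharacteristic);
    }

    final BluetoothGattCallback callback = new BluetoothGattCallback() {
        @Override
        public void onCharacteristicWrite(BluetoothGatt gatt,
                                          BluetoothGattCharacteristic characteristic,
                                          int status) {
            stopwatchFromWriteAck = System.currentTimeMillis();
        }

        @Override
        public void onCharacteristicChanged(BluetoothGatt gatt,
                                            BluetoothGattCharacteristic characteristic) {
            long now = System.currentTimeMillis();
            long elapsedFromWriteCall = now - stopwatchFromWriteCall; // ~140 ms on average
            long elapsedFromWriteAck = now - stopwatchFromWriteAck;   // ~40 ms on average
        }
    };
}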

The thing is, on average I get 100 ms between those measurements! That seems like quite a lot. The average time starting from the call to gatt.writeCharacteristic(myCharacteristic) is 140 ms, and the average time starting from BluetoothGattCallback.onCharacteristicWrite() is 40 ms.

So I was wondering: what is the proper way to time such an exchange, and when should I reset my stopwatch to get the most accurate measurement?

Upvotes: 0

Views: 743

Answers (1)

Emil

Reputation: 18452

Well, if you use the default connection interval of 50 ms, this is not strange at all. Assuming the write is issued at the next connection event (which might happen up to 50 ms in the future), and the result is available and notified at the next connection event 50 ms later, you get 100 ms. You can issue a Connection Parameter Update Request to get a shorter connection interval.
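
From the Android side, the corresponding API is BluetoothGatt.requestConnectionPriority(), available since API 21; the resulting interval is still subject to negotiation with the peripheral. With gatt being the connected BluetoothGatt instance:

// Ask the stack for a low-latency connection interval (API 21+).
// The final interval is still negotiated between both sides.
boolean requested = gatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH);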

If you want low overhead, why notify that data is available and then read it, rather than embedding the data directly in the notification payload?
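
On the Android side, the notified bytes already arrive in onCharacteristicChanged(), so if the firmware embeds the data in the notification there is no extra read round trip. Roughly:

@Override
public void onCharacteristicChanged(BluetoothGatt gatt,
                                    BluetoothGattCharacteristic characteristic) {
    // The notified bytes are already here; no follow-up readCharacteristic() needed.
    byte[] payload = characteristic.getValue();
    // ... handle payload and stop the stopwatch ...
}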

Upvotes: 1
