Reputation:
I'm trying to create a web application based on Node.js which queries an Azure Table Storage table with 1000 entries. (I'm using an Azure trial subscription.)
The implemented logic retrieves a single entity using partition key and row key, so it should be very fast. Inspired by a blog post by Troy Hunt, I expected latencies lower than 7 ms.
var express = require('express');
var azure = require('azure-storage');

var app = express();
// retryOperations was referenced but never defined; use the SDK's built-in filter
var retryOperations = new azure.ExponentialRetryPolicyFilter();

app.get('/pick', function (req, res, next) {
    // req.param() is deprecated in Express 4; read the query string directly
    var parKey = req.query.parKey;
    var rowKey = req.query.rowKey;
    var getRes, startTime, endTime, tableSvc;
    tableSvc = azure.createTableService().withFilter(retryOperations);
    // Timestamp before accessing table storage
    startTime = new Date().getTime();
    tableSvc.retrieveEntity('lookupTable01', parKey, rowKey,
        function (error, result, response) {
            if (!error) {
                // Timestamp after returning the result
                endTime = new Date().getTime();
                getRes = JSON.stringify(result);
                res.send("found in " + (endTime - startTime) + " ms " + getRes);
            } else {
                // Timestamp after returning an error
                endTime = new Date().getTime();
                getRes = JSON.stringify(error);
                res.send("Error in " + (endTime - startTime) + " ms " + getRes);
            }
        });
});
I've set up the web application server and the storage account in the same West Europe data center for optimal results. However, the lowest latency I measured was 47 ms (excluding network latency).
I also tried the same setup in the East US data center and got about the same results.
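As a side note on the measurement itself: `Date.getTime()` only has millisecond resolution, and the very first request typically pays for DNS, TCP, and TLS setup, which inflates the reading. A minimal sketch of a higher-resolution timer, using Node's `process.hrtime.bigint()` (the helper name `timeCall` is mine, not part of any SDK):

```javascript
// Measure per-call latency of any callback-style operation with
// nanosecond-resolution timestamps. `timeCall` is a hypothetical helper;
// in the question's code you would pass it something like
// tableSvc.retrieveEntity.bind(tableSvc, 'lookupTable01', parKey, rowKey).
function timeCall(operation, done) {
    var start = process.hrtime.bigint();            // monotonic, ns resolution
    operation(function (error, result) {
        var elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
        done(error, result, elapsedMs);
    });
}

// Usage with a stand-in async operation (a 20 ms delay):
timeCall(function (cb) {
    setTimeout(function () { cb(null, { ok: true }); }, 20);
}, function (error, result, elapsedMs) {
    console.log('completed in ' + elapsedMs.toFixed(1) + ' ms');
});
```

Running several consecutive calls and looking at the minimum (rather than a single cold call) usually gives a fairer picture of steady-state latency.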
Upvotes: 0
Views: 1306
Reputation: 24549
Point queries [specifying both the partition key and the row key], as you did, are usually the fastest way to query a table. Another factor is query density: a query that targets a single partition is probably quite small. Many other factors also affect Azure Table Storage performance, such as location, the partition naming convention, limiting the data returned, and how the table itself is designed. For more details about improving Azure Storage performance, please refer to the documentation.
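To make the point-query distinction concrete: a point query pins down exactly one entity, while a filter on the partition key alone forces a scan of every row in that partition. A small illustrative sketch of the OData `$filter` expressions involved (`pointQueryFilter` and `partitionScanFilter` are hypothetical helpers, not part of the azure-storage SDK; `retrieveEntity` in the question already issues a point query for you):

```javascript
// Build the OData $filter expression for a point query: both keys fixed,
// so the service can do a direct lookup of a single entity.
function pointQueryFilter(partitionKey, rowKey) {
    // OData escapes a literal single quote by doubling it
    function esc(s) { return s.replace(/'/g, "''"); }
    return "PartitionKey eq '" + esc(partitionKey) +
           "' and RowKey eq '" + esc(rowKey) + "'";
}

// Filter on the partition key only: the service must enumerate the
// whole partition, which gets slower as the partition grows.
function partitionScanFilter(partitionKey) {
    return "PartitionKey eq '" + partitionKey.replace(/'/g, "''") + "'";
}

console.log(pointQueryFilter('part1', 'row42'));
// → PartitionKey eq 'part1' and RowKey eq 'row42'
```

This is also why "limit the data returned" matters: the narrower the filter (and the fewer columns selected), the less work the service and the wire have to do.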
Upvotes: 1