hguser

Reputation: 36068

Response time for the same logic growing fast at different concurrent requests

I am using tomcat to serve my application, and I use ab to test the response time.

This is the core logic in the servlet:

    long st = System.currentTimeMillis();
    data = server.getData(req);
    System.out.println("get data cost:" + (System.currentTimeMillis() - st));

    st = System.currentTimeMillis();
    if (data != null) {
        try {
            OutputStream out = resp.getOutputStream();
            data.save(out);
            out.flush();
            System.out.println("save data cost:" + (System.currentTimeMillis() - st));
        } catch (IOException e) {
            status = 500;
        }
    }

    @Override
    public IData getData(HttpServletRequest req) {
        String layers = req.getParameter("layers");
        String id = req.getParameter("id");
        ArrayList<BufferedImage> result = new ArrayList<BufferedImage>();
        for (String layer : layers.split(",")) {
            String lkey = layer + "-" + id;
            BufferedImage cachedImage = null;
            if (cacher.contains(lkey)) {
                cachedImage = (BufferedImage) cacher.get(lkey);
            } else {
                cachedImage = ...;  // heavy operation which costs ~1s for a single request
            }

            if (cachedImage != null) {
                result.add(cachedImage);
            }
        }
        return new Data(result);
    }
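One thing worth noting about the cache lookup above: under concurrency, two threads can both miss the cache for the same key and both run the heavy ~1s operation. A minimal sketch of a compute-once cache, assuming a `ConcurrentHashMap`-backed store (`renderLayer` here is a hypothetical stand-in for the heavy rendering step, not the questioner's actual code):

```java
import java.awt.image.BufferedImage;
import java.util.concurrent.ConcurrentHashMap;

public class LayerCache {
    private final ConcurrentHashMap<String, BufferedImage> cache = new ConcurrentHashMap<>();

    // computeIfAbsent runs the mapping function at most once per key,
    // so concurrent requests for the same layer don't duplicate the heavy work
    public BufferedImage get(String lkey) {
        return cache.computeIfAbsent(lkey, key -> renderLayer(key));
    }

    // hypothetical stand-in for the heavy ~1s rendering operation
    private BufferedImage renderLayer(String key) {
        return new BufferedImage(500, 500, BufferedImage.TYPE_INT_ARGB);
    }
}
```

Note that `computeIfAbsent` holds a lock on the map bin while the function runs, so for very slow computations a `Future`-based cache (store a `FutureTask`, have other threads wait on it) is the usual refinement.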

    class Data implements IData {
        private BufferedImage imageResult;

        public Data(ArrayList<BufferedImage> data) {
            imageResult = new BufferedImage(500, 500, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g = imageResult.createGraphics();
            for (BufferedImage img : data) {
                g.drawImage(img, 0, 0, null);
            }
            g.dispose();
        }

        @Override
        public void save(OutputStream out) throws IOException {
            PngEncoder encoder = new PngEncoder(PngEncoder.COLOR_TRUECOLOR_ALPHA);
            encoder.encode(imageResult, out);
            imageResult = null;
        }
    }

As shown above, the handler logs the time spent in each phase.

The test cases (the requested data is cached: I ran the URL once before each test, so I am sure all the data is retrieved from the cache during the runs):

With: ab -n 1 -c 1 ...

Response time: 80ms

And log:

get data cost:11
save data cost:68

With: ab -n 10 -c 10 ...

  50%    206
  66%    212
  75%    225
  80%    228
  90%    244
  95%    244
  98%    244
  99%    244
 100%    244 (longest request)

And logs:

http://paste.ubuntu.com/7758426/

With: ab -n 100 -c 100 ...

  50%   2412
  66%   2414
  75%   2415
  80%   2415
  90%   2418
  95%   2418
  98%   2426
  99%   2433
 100%   2433 (longest request)

And logs:

http://paste.ubuntu.com/7758421/

IMO, the time required to save the data should depend only on the work being done, and in this example the saving operation takes only about 70 ms. Why does the response time grow so fast with 100 concurrent users?

Is this normal?

If yes, is there any way to reduce the response time?

If no, I may be missing something, how do I fix it?


Update (the whole logic):

I am serving map tiles like Google Maps: a tile is located by a coordinate (x, y, zoom). A map may contain more than one layer, for example road, water, landuse, etc. We serve 50+ layers in our application.

For example, user can get tiles from our server through a simple request:

http://localhost/tile/z/x/y.png?layer=road,water 

This will return a tile(image) which contains just the road and water layer.

or

http://localhost/tile/z/x/y.png?layer=road,water,landuse,poi 

This will return a tile(image) which contains the road, water, landuse and poi layer.

As shown, we have 50+ layers in total, so there are far too many permutations and combinations to pre-render; we have to generate the tiles on the fly.

But creating a tile is a time-consuming job, so we use a cache. Take the request http://localhost/tile/z/x/y.png?layer=road,water for example: for each layer, we check whether the tile (BufferedImage) at the given (x, y, zoom) exists in the cache; if not, we create it and put it in the cache.

    ArrayList<BufferedImage> combined = new ArrayList<BufferedImage>();
    for (String layer : request.getParameter("layer").split(",")) {
        String cacheKey = x + "-" + y + "-" + zoom + "-" + layer;
        BufferedImage image = cache.get(cacheKey);
        if (image == null) {
            // generate it dynamically
            ...

            // put it in the cache
            cache.add(cacheKey, image);
        }
        combined.add(image);
    }

Then we write the combined image to the client (which is what Data.save does).
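Since the same layer combination can be requested many times, one way to take the per-request PNG encoding off the hot path is to cache the encoded bytes rather than the BufferedImage, so repeated saves become a plain byte copy. A sketch, assuming the standard `javax.imageio.ImageIO` encoder in place of the PngEncoder used above (`EncodedTile` is a hypothetical class, not part of the original code):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.imageio.ImageIO;

public class EncodedTile {
    private final byte[] png;

    // encode once, up front; repeated saves are then just a byte copy
    public EncodedTile(BufferedImage image) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ImageIO.write(image, "png", buf);
        png = buf.toByteArray();
    }

    public void save(OutputStream out) throws IOException {
        out.write(png);
    }
}
```

The trade-off is memory: the cache holds compressed bytes per layer combination, so an eviction policy matters more than it does for per-layer images.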

This is the whole logic; please let me know if anything is unclear.

Upvotes: 1

Views: 88

Answers (2)

merlin2011

Reputation: 75639

The answer depends on whether your thread work is blocking or non-blocking, as well as the degree of parallelism you can achieve on your machine.

However, the times that you report in your question seem to indicate that your requests are simply being queued one behind the other, rather than being served completely in parallel.

This could happen if, say, you have 10 cores: you can only really handle 10 requests concurrently. There is not much you can do in this case unless you distribute the work over more machines or get a beefier CPU.

It could also happen because your data.save() is an operation that needs to lock a shared resource, so that lock is causing your work to be serialized. If this is the case you can try to reduce the synchronization by having per-thread state which only synchronizes occasionally (say every 10000 requests).

In either case, your response time (latency) is completely determined by your throughput, rather than the time it takes to handle one request, as you would expect under no load.
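The percentiles reported in the question fit this serialized model almost exactly: ~240 ms at 10 concurrent and ~2400 ms at 100 concurrent both imply an effective service time of roughly 24 ms per request, with the last request in the batch waiting for all the others. A back-of-the-envelope sketch (the 24 ms figure is inferred from the question's numbers, not measured independently):

```java
public class LatencyUnderQueueing {
    public static void main(String[] args) {
        // assumed effective serialized service time per request:
        // ~240 ms / 10 concurrent, and also ~2400 ms / 100 concurrent
        int serviceMs = 24;
        for (int concurrent : new int[] {1, 10, 100}) {
            // if requests are served one behind the other, the last request
            // in the batch waits for all of them: latency ~ concurrency * serviceMs
            System.out.println(concurrent + " concurrent -> ~" + (concurrent * serviceMs) + " ms");
        }
    }
}
```

If the measured tail latency stopped growing linearly with concurrency, that would indicate real parallelism; here it does not, which is the signature of a serialized bottleneck.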

Upvotes: 1

David Ehrmann

Reputation: 7576

Attach VisualVM to Tomcat. You'll be able to see whether the issue is CPU usage or thread exhaustion. Right now, looking at the PNG code, my guess is that the graphics operations are taking the time, but that's just a guess. Profiling will give you a real answer.

Upvotes: 0
