Dipin Narayanan

Reputation: 1105

App Engine app performance test

I have used JMeter to test my App Engine app's performance.

I have created a thread group of

[screenshot of the thread group settings]

and ran the test.

It created 4 instances in App Engine. But the interesting thing is that more than 450 requests were processed by a single instance.

I ran the test again with these instances still up, and most of the requests (> 90%) were still going to the same instance.

I'm getting much higher latency.
What's going wrong here? Is generating the load from a single IP a problem?
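For anyone trying to reproduce something similar, a minimal non-GUI JMeter run might look like the sketch below. The file names and property values are only placeholders, not my exact thread group settings:

    # Non-GUI run; the -J properties feed the Thread Group fields
    # via ${__P(threads)} and ${__P(rampup)} inside the .jmx plan.
    jmeter -n -t appengine-load.jmx \
           -Jthreads=500 -Jrampup=60 \
           -l results.jtl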

Upvotes: 8

Views: 1128

Answers (4)

Chris Halcrow

Reputation: 31950

Spread your requests across different thread groups and the instances will be utilised. I'm not sure why this happens; I wasn't able to find any definitive information that explains it.

(I wonder if App Engine sees the requests from a single thread group as originating from a common source, and therefore keeps all of the work on the same instance so that the output can be passed back to the originator as efficiently as possible.)

Upvotes: 0

Dipin Narayanan

Reputation: 1105

It was totally App Engine's issue...

See this issue reported at App Engine's issue tracker.

Oh, this is really annoying...

Upvotes: 0

Oliver Lloyd

Reputation: 5004

Your problem is that you are not using a realistic ramp-up value. App Engine, like most auto-scaling solutions, requires a reasonable amount of time to spin up new instances. While it is creating those new instances, latency can increase if there has been a large and sudden increase in traffic.

Choose a ramp-up value that is representative of the sort of spikes/surges you realistically expect to see in production and then run the test. Use the results of this test to decide how many App Engine instances you would like to be 'always on': the higher this value, the lower the impact of any surge, but obviously the higher your costs.
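As a sketch of the 'always on' part, assuming you deploy with an app.yaml that uses automatic scaling (on older runtimes this was the 'Always On' / idle instances setting in the admin console), the value here is purely illustrative:

    automatic_scaling:
      # Instances kept warm ("always on"): they absorb a sudden surge while
      # new instances are still spinning up, but you pay for their idle time.
      min_idle_instances: 3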

Upvotes: 3

matt burns

Reputation: 25380

When you say "I'm getting much higher latency", what exactly are you getting? Do you consider it to be too slow?

If latency is an issue, you can reduce the max pending latency in the application settings. If you try this, I imagine you will see your requests spread across the instances more.
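As a rough sketch, if you are on a runtime where this is configured in app.yaml rather than in the admin console, it would look something like the following (the value is only illustrative):

    automatic_scaling:
      # A lower pending latency makes the scheduler start new instances
      # (and spread requests) sooner instead of queuing on a busy instance.
      max_pending_latency: 30ms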

My guess is simply that the 2-3 idle instances have spun up in anticipation of increased load but are actually not needed for your test.

Upvotes: 0
