Reputation: 2260
I am trying to model a server-to-server REST API interaction in Gatling 2.2.0. There are several interactions of the type "request a list and then request all items on the list in parallel", but I can't seem to model this in Gatling. Code so far:
def groupBy(dimensions: Seq[String], metric: String) = {
  http("group by")
    .post(endpoint)
    .body(...).asJSON
    .check(
      ...
        .saveAs("events")
    )
}

scenario("Dashboard scenario")
  .exec(
    groupBy(dimensions, metric)
      .resources(
        // a http() for each item in session("events"), plz
      )
  )
I have gotten as far as figuring out that parallel requests are performed by .resources(), but I don't understand how to generate a list of requests to feed it. Any input is appreciated.
Upvotes: 3
Views: 4543
Reputation: 1172
I'm not entirely sure what you're asking, but it sounds like you need to send parallel requests, which can be done with setUp(scenario.inject(atOnceUsers(NO_OF_USERS))).
See the setUp documentation: https://docs.gatling.io/reference/script/core/simulation/#setup
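For illustration, a minimal simulation sketch along those lines, assuming a single POST scenario and 10 users injected at once (the class name, base URL, endpoint, body, and user count are placeholders, not taken from the question):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class DashboardSimulation extends Simulation {

  // assumed base URL; replace with the real server under test
  val httpProtocol = http.baseURL("http://localhost:8080")

  val scn = scenario("Dashboard scenario")
    .exec(
      http("group by")
        .post("/groupBy")                        // assumed endpoint
        .body(StringBody("""{"metric":"x"}""")).asJSON
        .check(status.is(200))
    )

  // each injected user runs the scenario concurrently
  setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
}

Note that this parallelizes whole users, not the requests within one user's flow, so it may or may not match the "list then fetch items in parallel" pattern from the question.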
Upvotes: 0
Reputation: 754
The approach below works for me: a Seq of HttpRequestBuilders passed to .resources() is executed concurrently alongside the first request.
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.http.request.builder.HttpRequestBuilder

val numberOfParallelReq = 1000

val scn = scenario("Some scenario")
  .exec(
    http("first request")
      .post(url)
      .resources(parallelRequests: _*)
      .body(StringBody(firstReqBody))
      .check(status.is(200))
  )

// build one HttpRequestBuilder per parallel request
def parallelRequests: Seq[HttpRequestBuilder] =
  (0 until numberOfParallelReq).map(i => generatePageRequest(i))

def generatePageRequest(id: Int): HttpRequestBuilder = {
  val body = "Your request body here...."
  http("page")
    .post(url)
    .body(StringBody(body))
    .check(status.is(200))
}
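To connect this back to the question: .resources() receives its request builders when the scenario is built, so the number of parallel requests has to be known up front rather than derived at runtime from the "events" list saved in the session. A sketch of that adaptation, assuming the event ids are available before the run (the ids and the /events/{id} endpoint are made up for illustration; endpoint is the one from the question):

val eventIds = Seq("e1", "e2", "e3") // assumed ids, known ahead of time

// one GET per event id, all fired in parallel as resources of the "group by" request
def eventRequests: Seq[HttpRequestBuilder] =
  eventIds.map { id =>
    http(s"event $id")
      .get(s"/events/$id") // assumed endpoint layout
      .check(status.is(200))
  }

val dashboard = scenario("Dashboard scenario")
  .exec(
    http("group by")
      .post(endpoint)
      .resources(eventRequests: _*)
      .check(status.is(200))
  )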
Upvotes: 2