Reputation: 9237
I have a simulation with a step that allows me to publish to different endpoints.
class MySimulation extends Simulation {
  // some init code
  val testTitle = this.getClass.getSimpleName

  val myscenario = scenario("Scn Description")
    .exec(PublishMessageRandom(pConfigTest, testTitle + "-" + numProducers, numProducers))

  if (testMode == "debug") {
    setUp(
      myscenario.inject(
        atOnceUsers(1)
      )
    ).protocols(httpConf)
  } else if (testMode == "open") {
    setUp(
      myscenario.inject(
        rampConcurrentUsers(concurrentUserMin) to (concurrentUserMax) during (durationInMinutes minutes)
      )
    ).protocols(httpConf)
  }
}
Now here is my PublishMessageRandom definition:
def PublishMessageRandom(producerConfig: ProducerConfig, testTitle: String, numberOfProducers: Int) = {
  val jsonBody = producerConfig.asJson
  val valuedJsonBody = Printer.noSpaces.copy(dropNullValues = true).print(jsonBody)
  println(valuedJsonBody)

  val nodes: Array[String] = endpoints.split(endpointDelimiter)
  val rnd = scala.util.Random
  val rndIndex = rnd.nextInt(numberOfProducers)
  val endpoint = "http://" + nodes(rndIndex) + perfEndpoint
  println("endpoint:" + endpoint)

  exec(http(testTitle)
    .post(endpoint)
    .header(HttpHeaderNames.ContentType, HttpHeaderValues.ApplicationJson)
    .body(StringBody(valuedJsonBody))
    .check(status.is(200))
    .check(bodyString.saveAs("serverResponse"))
  )
  // the block below is only useful in debug mode; comment it out for longer tests
  /*.exec { session =>
    println("server_response: " + session("serverResponse").as[String])
    println("endpoint:" + endpoint)
    session
  }*/
}
As you can see, it is simply meant to pick an endpoint at random for each request. Unfortunately I see the println("endpoint:" + endpoint) above only once, and it looks like it picks one endpoint randomly and then keeps hitting that one, instead of hitting the endpoints randomly as intended.
Can someone explain that behavior? Is Gatling caching the step, and how do I get around that?
Upvotes: 0
Views: 532
Reputation: 9237
I had to use a feeder to solve the problem; the feeder supplies the random endpoint.
// feeder is a random endpoint as per the number of producers
val endpointFeeder = GetEndpoints(numProducers).random

val myscenario = scenario("Vary number of producers hitting Kafka cluster")
  .feed(endpointFeeder)
  .exec(PublishMessageRandom(pConfigTest, testTitle + "-" + numProducers))
and PublishMessageRandom now looks like this:
def PublishMessageRandom(producerConfig: ProducerConfig, testTitle: String) = {
  val jsonBody = producerConfig.asJson
  val valuedJsonBody = Printer.noSpaces.copy(dropNullValues = true).print(jsonBody)
  println(valuedJsonBody)

  exec(http(testTitle)
    .post("${endpoint}")
    .header(HttpHeaderNames.ContentType, HttpHeaderValues.ApplicationJson)
    .body(StringBody(valuedJsonBody))
    .check(status.is(200))
    .check(bodyString.saveAs("serverResponse"))
  )
}
As you can see, the line .post("${endpoint}") above ends up hitting the endpoint supplied by the feeder.
The feeder function GetEndpoints is defined as follows; it builds an array of single-entry maps whose key is "endpoint".
def GetEndpoints(numberOfProducers: Int): Array[Map[String, String]] = {
  val nodes: Array[String] = endpoints.split(endpointDelimiter)
  var result: Array[Map[String, String]] = Array()
  for (elt <- 1 to numberOfProducers) {
    val endpoint = "http://" + nodes(elt - 1) + perfEndpoint
    val m: Map[String, String] = Map("endpoint" -> endpoint)
    result = result :+ m
    println("map:" + m)
  }
  result
}
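For reference, the same feeder records can be built more compactly. This is only a minimal sketch, assuming the same endpoints, endpointDelimiter and perfEndpoint values used above; the name GetEndpointsCompact is just illustrative:
// Equivalent, more compact construction of the same feeder records
def GetEndpointsCompact(numberOfProducers: Int): Array[Map[String, String]] =
  endpoints
    .split(endpointDelimiter)
    .take(numberOfProducers)
    .map(node => Map("endpoint" -> ("http://" + node + perfEndpoint)))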
Upvotes: 0
Reputation: 6608
Quoting the official documentation:
Warning
Gatling DSL components are immutable ActionBuilder(s) that have to be chained altogether and are only built once on startup. The results is a workflow chain of Action(s). These builders don’t do anything by themselves, they don’t trigger any side effect, they are just definitions. As a result, creating such DSL components at runtime in functions is completely meaningless.
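In other words, the scala.util.Random call in the original PublishMessageRandom runs once, when the chain is built, not once per request. A minimal sketch of a runtime alternative (a feeder, as in the other answer, works just as well); it assumes the nodes, perfEndpoint, valuedJsonBody and testTitle values from the question and moves the random pick into a session function, which Gatling evaluates for every virtual user:
// Sketch only: choose the endpoint inside a session function so the choice is
// made at runtime for each virtual user, then reference it via Gatling EL.
exec { session =>
  val endpoint = "http://" + nodes(scala.util.Random.nextInt(nodes.length)) + perfEndpoint
  session.set("endpoint", endpoint)
}
.exec(
  http(testTitle)
    .post("${endpoint}")
    .header(HttpHeaderNames.ContentType, HttpHeaderValues.ApplicationJson)
    .body(StringBody(valuedJsonBody))
    .check(status.is(200))
)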
Upvotes: 1