user1459742

Reputation: 11

Gatling tool throws GC Overhead limit exceeded

I am trying to run a load test that uses the feed method in the Gatling tool. When we use a file of around 3.5 GB containing 600,000 records, Gatling fails with the output below:

    Simulation LoadTestSimulation started...

    Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOf(Arrays.java:2367)
        at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
        at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:535)
        at java.lang.StringBuffer.append(StringBuffer.java:322)
        at java.io.BufferedReader.readLine(BufferedReader.java:351)
        at java.io.BufferedReader.readLine(BufferedReader.java:382)
        at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:72)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
        at scala.collection.Iterator$class.foreach(Iterator.scala:742)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
        at scala.collection.immutable.VectorBuilder.$plus$plus$eq(Vector.scala:732)
        at scala.collection.immutable.VectorBuilder.$plus$plus$eq(Vector.scala:708)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:308)
        at scala.collection.AbstractIterator.to(Iterator.scala:1194)
        at scala.collection.TraversableOnce$class.toVector(TraversableOnce.scala:304)
        at scala.collection.AbstractIterator.toVector(Iterator.scala:1194)
        at io.gatling.core.feeder.SeparatedValuesParser$$anonfun$parse$1.apply(SeparatedValuesParser.scala:34)
        at io.gatling.core.feeder.SeparatedValuesParser$$anonfun$parse$1.apply(SeparatedValuesParser.scala:33)
        at io.gatling.core.util.IO$.withSource(IO.scala:152)
        at io.gatling.core.feeder.SeparatedValuesParser$.parse(SeparatedValuesParser.scala:33)
        at io.gatling.core.feeder.FeederSupport$$anonfun$separatedValues$1.apply(FeederSupport.scala:38)
        at io.gatling.core.feeder.FeederSupport$$anonfun$separatedValues$1.apply(FeederSupport.scala:38)
        at io.gatling.core.feeder.FeederSupport$class.feederBuilder(FeederSupport.scala:46)
        at io.gatling.core.Predef$.feederBuilder(Predef.scala:32)
        at io.gatling.core.feeder.FeederSupport$class.separatedValues(FeederSupport.scala:38)
        at io.gatling.core.Predef$.separatedValues(Predef.scala:32)
        at io.gatling.core.feeder.FeederSupport$class.separatedValues(FeederSupport.scala:35)
        at io.gatling.core.Predef$.separatedValues(Predef.scala:32)
        at io.gatling.core.feeder.FeederSupport$class.tsv(FeederSupport.scala:32)

    :gatling FAILED

We are running the Gradle gatling task with these parameters:

    -PjvmArgs=-Dbroker=brokerhost:9092 -Dtopic= -Dusers=100 -Dduration_in_mins=2 -Dinput_file_name= -Psim="LoadTestSimulation"

    val scn = scenario("Demo")
      .feed(tsv(inputFileName, true).circular)
      .exec(kafka("request").send[String, String](...))

    setUp(
      scn.inject(constantUsersPerSec(users.toDouble) during (duration.toInt minutes))
        // scn.inject(rampUsers(500) over (200 seconds))
        .protocols(kafkaConf)
    )
    }

Any suggestions or tips? Should we split the file into several smaller files and run with those instead of passing one big file? Will the whole file be loaded into memory at once?

Upvotes: 0

Views: 1717

Answers (1)

Teliatko

Reputation: 1541

You are using tsv, i.e. the tab-separated-values file feeder. This is what the official documentation says:

Those built-ins returns RecordSeqFeederBuilder instances, meaning that the whole file is loaded in memory and parsed, so the resulting feeders doesn’t read on disk during the simulation run.

or better:

Loading feeder files in memory uses a lot of heap, expect a 5-to-10-times ratio with the file size. This is due to JVM’s internal UTF-16 char encoding and object headers overhead. If memory is an issue for you, you might want to read from the filesystem on the fly and build your own Feeder.

For more info see CSV Feeders.

What you "can" do is to try to increase memory enough to allow JVM and GC operate on such "huge" file in memory, which I think will not work due to reason of your exception (see more here)

So I guess your only option is to write your own feeder which reads the data from the file on the fly.
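A minimal sketch of such a feeder, assuming a tab-separated file whose first line is a header row (inputFileName is the variable from your simulation; the record keys come from that header). Gatling accepts any Iterator[Map[String, T]] as a feeder, so this streams the file line by line instead of loading it whole:

    import scala.io.Source

    // Lazily stream the TSV: only the current line is held in memory.
    // Assumes the first line is a header row, like the built-in tsv() feeder.
    val feeder: Iterator[Map[String, String]] = {
      val lines = Source.fromFile(inputFileName).getLines()
      val headers = lines.next().split("\t")
      lines.map(line => headers.zip(line.split("\t", -1)).toMap)
    }

    // in the scenario, use it instead of the built-in feeder:
    // .feed(feeder)

Note that this sketch does not replicate .circular: once the iterator is exhausted, virtual users that still need records will fail, so you would have to wrap the iterator or reopen the file yourself if the data has to loop.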

Upvotes: 1
