Leibnitz

Reputation: 355

java.lang.StackOverflowError on IntelliJ

I'm new to Scala/Spark. I'm loading a 2 GB CSV file, and it works fine on my virtual machine with the HEAP_SIZE below.

HEAP_SIZE="-Xms8g -Xmx8g"

But when running the same code and loading the same file in IntelliJ, it throws a java.lang.StackOverflowError. I know I'm not setting the memory options correctly in IntelliJ. Could someone please tell me how and where exactly I need to set this? I have enough memory on my Windows machine (32 GB).

Tracing the error, it comes from exactly the code below, specifically at collect.

// Collect the distinct objType values; reduceByKey is used only to
// deduplicate the keys, the Int values are discarded
val lst: Array[String] = expRDD
  .map((c: tmpClass) => (c.objType, 0))
  .reduceByKey((x: Int, y: Int) => 0)
  .map(_._1)
  .collect()
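
For reference, the same distinct keys can be computed more directly; a minimal sketch, assuming expRDD is an RDD[tmpClass] with a String field objType:

val lst2: Array[String] = expRDD
  .map(_.objType)  // keep only the key
  .distinct()      // deduplicate across partitions
  .collect()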

Upvotes: 3

Views: 4847

Answers (3)

Vijay Anand Pandian

Reputation: 1165

This solved my issue.

Sometimes it is better to increase the heap size, e.g. to 4096 MB, and add the JVM options -server -Xss256m.
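
For instance, these could go into the VM Options field of the Run/Debug configuration; the exact values below are only illustrative:

-server -Xmx4096m -Xss256m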

Upvotes: 0

Somatik

Reputation: 4743

If you are using the Scala compile server, the JVM options are here:

Build, Execution, Deployment > Compiler > Scala Compiler > Scala Compile Server

(you might have to restart IntelliJ to apply this)
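
That settings page has its own JVM options field; as an illustrative sketch (the values are assumptions, not recommendations), you could raise the compile server's stack and heap there with something like:

-server -Xss2m -Xmx2g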

(screenshot: IntelliJ JVM Options)

Credit goes to @CrazyCoder.

Upvotes: 1

CrazyCoder

Reputation: 402195

Increasing the stack size may help. You can specify -Xss4m in the VM Options field of the corresponding Run/Debug configuration. This sets the stack size to 4 MB (the default stack size depends on the OS and JVM version and is usually less than 1 MB). Note that this will not help if your problem is caused by infinite recursion.
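
To see why the stack size matters independently of the heap, here is a minimal sketch (the function and the recursion depth are hypothetical, purely for illustration): deep non-tail recursion overflows the stack no matter how much heap is available.

object StackDemo {
  // Non-tail recursion: every call adds a stack frame, so a deep call chain
  // throws java.lang.StackOverflowError regardless of -Xmx.
  def depth(n: Int): Int = if (n == 0) 0 else 1 + depth(n - 1)

  def main(args: Array[String]): Unit = {
    // Typically overflows a default-sized stack; a larger -Xss (e.g. -Xss4m)
    // raises the limit, but removing the recursion is the real fix.
    println(depth(100000))
  }
}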

Upvotes: 3
