How to solve java.lang.OutOfMemoryError: Java heap space when training a word2vec model in Spark? - apache-spark

Solu: I put the param --driver-memory 40G in the spark-submit command.
Ques: My Spark cluster consists of 5 Ubuntu servers, each with 80 GB of memory and 24 cores.
The word2vec input is about 10 GB of news data,
and I submit the job in standalone mode like this:
spark-submit --name trainNewsdata --class Word2Vec.trainNewsData --master spark://master:7077 --executor-memory 70G --total-executor-cores 96 sogou.jar hdfs://master:9000/user/bd/newsdata/* hdfs://master:9000/user/bd/word2vecModel_newsdata
When I train the word2vec model in Spark, I get:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space,
and I don't know how to solve it. Please help me :)

I put the param --driver-memory 40G in the spark-submit command, and that solved it.
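For reference, this is the submit command from the question with only the extra flag added; nothing else is changed:

```shell
spark-submit --name trainNewsdata --class Word2Vec.trainNewsData \
  --master spark://master:7077 \
  --driver-memory 40G \
  --executor-memory 70G --total-executor-cores 96 \
  sogou.jar \
  hdfs://master:9000/user/bd/newsdata/* \
  hdfs://master:9000/user/bd/word2vecModel_newsdata
```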

Related

Spark: use of driver-memory parameter

When I submit this command, my job fails with the error "Container is running beyond physical memory limits":
spark-submit --master yarn --deploy-mode cluster --executor-memory 5G --total-executor-cores 30 --num-executors 15 --conf spark.yarn.executor.memoryOverhead=1000
But after adding the parameter --driver-memory set to 5G (or higher), the job ends without error:
spark-submit --master yarn --deploy-mode cluster --executor-memory 5G --total-executor-cores 30 --num-executors 15 --driver-memory 5G --conf spark.yarn.executor.memoryOverhead=1000
Cluster info: 6 nodes with 120 GB of memory. YARN minimum container memory: 1 GB.
The question is: what difference does using this parameter make?
If increasing the driver memory helps the job complete successfully, it means the driver is receiving a lot of data from the executors. Typically, the driver program is responsible for collecting results back from each executor after the tasks are executed. So in your case, increasing the driver memory allowed more results to be stored back in the driver.
If you read up on executor memory, driver memory, and the way the driver interacts with executors, you will get better clarity on the situation you are in.
Hope this helps to some extent.
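As a sketch, the same setting can also be made permanent in spark-defaults.conf instead of being passed on each command line (the values here are the ones from the question):

```shell
# spark-defaults.conf equivalents of the flags above
spark.driver.memory                   5g
spark.yarn.executor.memoryOverhead    1000
```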

Yarn Spark HBase - ExecutorLostFailure Container killed by YARN for exceeding memory limits

I am trying to read a big HBase table in Spark (~100 GB in size).
Spark Version : 1.6
Spark submit parameters:
spark-submit --master yarn-client --num-executors 10 --executor-memory 4G
--executor-cores 4
--conf spark.yarn.executor.memoryOverhead=2048
Error:
ExecutorLostFailure Reason: Container killed by YARN for exceeding memory limits. 4.5 GB of 3 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
I have tried setting spark.yarn.executor.memoryOverhead to 100000, but I still get a similar error.
I don't understand why Spark doesn't spill to disk when memory is insufficient, or whether YARN is causing the problem here.
Please share the code showing how you read the table in,
and also your cluster architecture.
Container killed by YARN for exceeding memory limits. 4.5 GB of 3 GB physical memory used
Try
spark-submit
--master yarn-client
--num-executors 4
--executor-memory 100G
--executor-cores 4
--conf spark.yarn.executor.memoryOverhead=20480
if you have 128 GB of RAM.
The situation is clear: you are running out of RAM. Try to rewrite your code in a disk-friendly way.
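The container-size arithmetic behind that error can be sketched as follows, assuming Spark 1.6 on YARN, where the overhead defaults to max(384 MB, 10% of executor memory):

```shell
# YARN kills a container when executor heap + overhead exceed the
# container's physical memory limit, so the overhead matters.
EXEC_MEM_MB=4096                              # --executor-memory 4G
DEFAULT_OVERHEAD_MB=$(( EXEC_MEM_MB / 10 ))   # 10% of executor memory
if [ "$DEFAULT_OVERHEAD_MB" -lt 384 ]; then DEFAULT_OVERHEAD_MB=384; fi
echo "default overhead: ${DEFAULT_OVERHEAD_MB} MB"
# With --conf spark.yarn.executor.memoryOverhead=2048 the request becomes:
echo "requested container: $(( EXEC_MEM_MB + 2048 )) MB"
```

So raising the overhead only enlarges the container request; it does not stop the executor from needing that much off-heap memory in the first place.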

Spark heap size error even though RAM is 32 GB and JAVA_OPTIONS=-Xmx8g

I have 32 GB of physical memory and my input file is about 30 MB. I try to submit my Spark job in yarn-client mode using the command below:
spark-submit --master yarn --packages com.databricks:spark-xml_2.10:0.4.1 --driver-memory 8g ericsson_xml_parsing_version_6_stage1.py
My executor space is 8g, but I get the error below. Can anyone please help me configure the Java heap memory? I read about the --driver-java-options command-line option, but I don't know how to set the Java heap space using it.
java.lang.OutOfMemoryError: Java heap space
Did you try configuring executor memory as well?
Like this: "--executor-memory 8g"
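Putting that together with the command from the question, the submit line would look roughly like this (8g for the executor is only a starting point, not a recommendation):

```shell
spark-submit --master yarn \
  --packages com.databricks:spark-xml_2.10:0.4.1 \
  --driver-memory 8g \
  --executor-memory 8g \
  ericsson_xml_parsing_version_6_stage1.py
```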

Spark heapspace error while running Python program

When I run Python code in Spark using
spark-submit --master local --packages com.databricks:spark-xml_2.10:0.4.1 \
--driver-memory 8G --executor-memory 7G
I get this error
17/02/28 18:59:25 ERROR util.Utils: Uncaught exception in thread stdout writer for /usr/local/bin/python2.7 java.lang.OutOfMemoryError: Java heap space
I get the same error when using
spark.yarn.executor.memoryOverhead=1024M
I have 32 GB of RAM and Java options are 4 GB.
How can I fix this?
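One thing worth checking: with --master local, everything runs inside the single driver JVM, so --executor-memory and the YARN overhead setting have no effect there and only the driver memory matters. A sketch of raising it (the script name here is a placeholder, and 16G is an illustrative value):

```shell
spark-submit --master local \
  --packages com.databricks:spark-xml_2.10:0.4.1 \
  --driver-memory 16G \
  your_script.py
```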

Spark executor GC taking long

I am running a Spark job on a standalone cluster, and I noticed that after some time GC starts taking long and the scary red color begins to show.
Here are the resources available:
Cores in use: 80 Total, 76 Used
Memory in use: 312.8 GB Total, 292.0 GB Used
Job details:
spark-submit --class com.mavencode.spark.MonthlyReports
--master spark://192.168.12.14:7077
--deploy-mode cluster --supervise
--executor-memory 16G --executor-cores 4
--num-executors 18 --driver-cores 8
--driver-memory 20G montly-reports-assembly-1.0.jar
How do I fix the GC time taking so long?
I had the same problem and resolved it by using the Parallel GC instead of G1GC. You may add the following options to the executors' additional Java options in the submit request:
-XX:+UseParallelGC -XX:+UseParallelOldGC
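With spark-submit, those executor JVM flags can be passed through spark.executor.extraJavaOptions; applied to the command from the question, that would look like:

```shell
spark-submit --class com.mavencode.spark.MonthlyReports \
  --master spark://192.168.12.14:7077 \
  --deploy-mode cluster --supervise \
  --executor-memory 16G --executor-cores 4 \
  --num-executors 18 --driver-cores 8 \
  --conf "spark.executor.extraJavaOptions=-XX:+UseParallelGC -XX:+UseParallelOldGC" \
  --driver-memory 20G montly-reports-assembly-1.0.jar
```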
