Pyspark: Container exited with a non-zero exit code 143 - apache-spark

I have seen various threads on this issue but the solutions given are not working in my case.
The environment is PySpark 2.1.0 with Java 7, and it has enough memory and cores.
I am running a spark-submit job that processes JSON files. The job runs perfectly with files smaller than 200 MB, but beyond that it fails with Container exited with a non-zero exit code 143. Checking the YARN logs, the underlying error is java.lang.OutOfMemoryError: Requested array size exceeds VM limit.
Since the JSON files are not in a format that can be read directly with spark.read.json(), the first step in the application is to read the JSON as a text file into an RDD, apply map and flatMap to convert it into the required format, and then use spark.read.json(rdd) to create the DataFrame for further processing. The code is below:
def read_json(self, spark_session, filepath):
    raw_rdd = spark_session.sparkContext.textFile(filepath)
    raw_rdd_add_sep = raw_rdd.map(lambda x: x.replace('}{', '}}{{'))
    raw_rdd_split = raw_rdd_add_sep.flatMap(lambda x: x.split('}{'))
    required_df = spark_session.read.json(raw_rdd_split)
    return required_df
I have tried increasing the memory overhead for the executor and the driver using the options spark.driver.memoryOverhead and spark.executor.memoryOverhead, which didn't help.
I have also enabled off-heap memory with spark.memory.offHeap.enabled and set spark.memory.offHeap.size.
I have tried setting the JVM heap directly with spark.driver.extraJavaOptions=-Xms10g.
None of the above options worked in this scenario. Some of the JSON files are nearly 1 GB, and we need to process ~200 files a day.
Can someone help resolve this issue, please?

Regarding "Container exited with a non-zero exit code 143", it is probably because of the memory problem.
You need to check out on Spark UI if the settings you set is taking effect.
BTW, the proportion for executor.memory:overhead.memory should be about 4:1
I don't know why you change the JVM setting directly spark.driver.extraJavaOptions=-Xms10g, I recommend using --driver-memory 10g instread. e.g.: spark-submit --driver-memory 10G (I remember driver-memory only works with spark-submit sometimes)
from my perspective, you just need to update the four arguments to feed your machine resources:
spark.driver.memoryOverhead
spark.executor.memoryOverhead
spark.driver.memory
spark.executor.memory
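Below is a minimal PySpark sketch of where such settings could go; the values are placeholders, not recommendations. Note that driver memory generally has to be supplied at spark-submit time (e.g. --driver-memory), because the driver JVM is already running by the time application code executes.

from pyspark.sql import SparkSession

# Sketch only: builder-level configs for the executor side.
# On YARN before Spark 2.3 the overhead property is spark.yarn.executor.memoryOverhead.
spark = (SparkSession.builder
         .appName("memory-tuning-sketch")                # hypothetical app name
         .config("spark.executor.memory", "8g")           # example value
         .config("spark.executor.memoryOverhead", "2g")   # roughly the 4:1 ratio suggested above
         .getOrCreate())

Driver-side equivalents would be --driver-memory and --conf spark.driver.memoryOverhead=... on the spark-submit command line.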

Related

A SPARK CLUSTER ISSUE

I know that the Spark cluster in our production environment runs jobs in standalone mode.
While I was running a job, memory overflow on a few workers caused the worker node processes to die.
I would like to ask how to analyze the error shown in the screenshot below:
[Screenshot: Spark Worker Fatal Error]
EDIT: This is a relatively common problem; if the steps below don't help, please also see Spark java.lang.OutOfMemoryError: Java heap space.
Without seeing your code, here is the process you should follow:
(1) If the issue is caused primarily by the Java heap running out of space within the container allocation, I would advise adjusting your memory overhead settings (below). The current values are a little high and will cause excess spin-up of vcores. Add the two settings below to your spark-submit and re-run.
--conf "spark.yarn.executor.memoryOverhead=4000m"
--conf "spark.yarn.driver.memoryOverhead=2000m"
(2) Adjust Executor and Driver Memory Levels. Start low and climb. Add these values to the spark-submit statement.
--driver-memory 10g
--executor-memory 5g
(3) Adjust the number of executors in the spark-submit.
--num-executors ##
(4) Look at the YARN stages of the job to figure out where inefficiencies in the code are present and where persistence can be added or replaced (a sketch follows below). I would strongly advise looking into Spark tuning.
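For step (4), here is a minimal PySpark sketch of adding persistence where a DataFrame is reused by more than one action; the paths and the filter are hypothetical, not taken from the question.

from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("persist-sketch").getOrCreate()

events = spark.read.json("hdfs:///path/to/events")   # hypothetical input
cleaned = events.filter("status = 'OK'")              # hypothetical transformation

# Persist once so the lineage is not recomputed for every action below.
cleaned.persist(StorageLevel.MEMORY_AND_DISK)
print(cleaned.count())
cleaned.write.mode("overwrite").parquet("hdfs:///path/to/cleaned")
cleaned.unpersist()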

EMR 5.x | Spark on Yarn | Exit code 137 and Java heap space Error

I have been getting the error Container exited with a non-zero exit code 137 while running Spark on YARN. I have tried a couple of techniques after reading around, but they didn't help. The Spark configuration looks like this:
spark.driver.memory 10G
spark.driver.maxResultSize 2G
spark.memory.fraction 0.8
I am using YARN in client mode: spark-submit --packages com.databricks:spark-redshift_2.10:0.5.0 --jars RedshiftJDBC4-1.2.1.1001.jar elevatedailyjob.py > log5.out 2>&1 &
The sample code:
# Load the file (its a single file of 3.2GB)
my_df = spark.read.csv('s3://bucket-name/path/file_additional.txt.gz', schema=MySchema, sep=';', header=True)
# write the de_pulse_ip data into parquet format
my_df = my_df.select("ip_start","ip_end","country_code","region_code","city_code","ip_start_int","ip_end_int","postal_code").repartition(50)
my_df.write.parquet("s3://analyst-adhoc/elevate/tempData/de_pulse_ip1.parquet", mode = "overwrite")
# read my_df data into a dataframe from the parquet files
my_df1 = spark.read.parquet("s3://bucket-name/path/my_df.parquet").repartition("ip_start_int","ip_end_int")
#join with another dataset 200 MB
my_df2 = my_df.join(my_df1, [my_df.ip_int_cast > my_df1.ip_start_int,my_df.ip_int_cast <= my_df1.ip_end_int], how='right')
Note: the input file is a single gzip file; its unzipped size is 3.2 GB.
Here is the solution for the above issues.
Exit code 137 and the Java heap space error are mainly related to memory, with respect to both the executors and the driver. Here is what I did:
increase the driver memory: spark.driver.memory 16G
increase the storage memory fraction: spark.storage.memoryFraction 0.8
increase the executor memory: spark.executor.memory 3G
One very important thing I would like to share, which actually made a huge impact on performance, is the following:
As I mentioned above, I have a single file (.csv, gzipped, 3.2 GB) which after unzipping becomes 11.6 GB.
To load gzip files, Spark always uses a single executor per .gz file, because gzip files are not splittable and the read cannot be parallelized (even if you increase the number of partitions). This hampers the whole performance, as Spark first reads the whole file using one executor into the master (I am running spark-submit in client mode), then uncompresses it, and then repartitions (if a repartition is requested).
To address this, I used the s3-dist-cp command to move the file from S3 to HDFS, and also reduced the block size to increase parallelism. Something like below:
/usr/bin/s3-dist-cp --src=s3://bucket-name/path/ --dest=/dest_path/ --groupBy='.*(additional).*' --targetSize=64 --outputCodec=none
Although it takes a little time to move the data from S3 to HDFS, the overall performance of the process improves significantly.
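If moving the data off gzip is not an option, a minimal PySpark sketch of the repartition-after-read pattern described above looks like this (the partition count is a placeholder); the initial read is still a single task, but everything downstream runs in parallel.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gzip-repartition-sketch").getOrCreate()

# A .gz file is not splittable, so this read is one task no matter what.
df = spark.read.csv("s3://bucket-name/path/file_additional.txt.gz",
                    sep=";", header=True)

# Spreading the rows across partitions lets the later transformations
# and the write use the whole cluster instead of a single task.
df = df.repartition(50)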

spark on yarn, Container exited with a non-zero exit code 143

I am using HDP 2.5, running spark-submit in yarn cluster mode.
I have tried to generate data using a DataFrame cross join, i.e.:
val generatedData = df1.join(df2).join(df3).join(df4)
generatedData.saveAsTable(...)....
df1's storage level is MEMORY_AND_DISK.
df2, df3 and df4's storage level is MEMORY_ONLY.
df1 has far more records, i.e. 5 million, while df2 to df4 have at most 100 records each.
Done this way, my explain plan shows a BroadcastNestedLoopJoin, which should give better performance.
For some reason it always fails. I don't know how to debug it or where the memory explodes.
Error log output:
16/12/06 19:44:08 WARN YarnAllocator: Container marked as failed: container_e33_1480922439133_0845_02_000002 on host: hdp4. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
16/12/06 19:44:08 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_e33_1480922439133_0845_02_000002 on host: hdp4. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
16/12/06 19:44:08 ERROR YarnClusterScheduler: Lost executor 1 on hdp4: Container marked as failed: container_e33_1480922439133_0845_02_000002 on host: hdp4. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
16/12/06 19:44:08 WARN TaskSetManager: Lost task 1.0 in stage 12.0 (TID 19, hdp4): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container marked as failed: container_e33_1480922439133_0845_02_000002 on host: hdp4. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
I didn't see any WARN or ERROR logs before this error.
What is the problem? Where should I look for the memory consumption?
I cannot see anything on the Storage tab of the Spark UI.
The log was taken from the YARN Resource Manager UI on HDP 2.5.
EDIT
Looking at the container log, it seems to be a java.lang.OutOfMemoryError: GC overhead limit exceeded.
I know how to increase the memory, but I don't have any memory left.
How can I do a cartesian product join of 4 DataFrames without getting this error?
I also met this problem and tried to solve it by referring to some blog posts.
1. Run Spark with the conf below:
--conf 'spark.driver.extraJavaOptions=-XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps' \
--conf 'spark.executor.extraJavaOptions=-XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC ' \
When the JVM runs a GC, you will see a message like the following:
Heap after GC invocations=157 (full 98):
PSYoungGen total 940544K, used 853456K [0x0000000781800000, 0x00000007c0000000, 0x00000007c0000000)
eden space 860160K, 99% used [0x0000000781800000,0x00000007b5974118,0x00000007b6000000)
from space 80384K, 0% used [0x00000007b6000000,0x00000007b6000000,0x00000007bae80000)
to space 77824K, 0% used [0x00000007bb400000,0x00000007bb400000,0x00000007c0000000)
ParOldGen total 2048000K, used 2047964K [0x0000000704800000, 0x0000000781800000, 0x0000000781800000)
object space 2048000K, 99% used [0x0000000704800000,0x00000007817f7148,0x0000000781800000)
Metaspace used 43044K, capacity 43310K, committed 44288K, reserved 1087488K
class space used 6618K, capacity 6701K, committed 6912K, reserved 1048576K
}
When both PSYoungGen and ParOldGen are 99% full, you will get java.lang.OutOfMemoryError: GC overhead limit exceeded as soon as more objects are created.
Try to add more memory for your executor or your driver when more memory resources are available:
--executor-memory 10000m \
--driver-memory 10000m \
In my case, the memory for PSYoungGen was smaller than for ParOldGen, which caused many young objects to be promoted into the ParOldGen memory area until ParOldGen was exhausted, and so the java.lang.OutOfMemoryError: Java heap space error appeared.
Add this conf for the executor:
'spark.executor.extraJavaOptions=-XX:NewRatio=1 -XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps'
-XX:NewRatio=rate
rate = ParOldGen/PSYoungGen
It depends. You can try a GC strategy such as one of the following (a sketch of passing these flags from PySpark comes after the list):
-XX:+UseSerialGC :Serial Collector
-XX:+UseParallelGC :Parallel Collector
-XX:+UseParallelOldGC :Parallel Old collector
-XX:+UseConcMarkSweepGC :Concurrent Mark Sweep
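A minimal PySpark sketch of passing the NewRatio and collector flags above through spark.executor.extraJavaOptions (the values are illustrative only, not a recommendation):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("gc-tuning-sketch")
         # NewRatio=1 gives the young and old generations equal space;
         # swap in -XX:+UseParallelOldGC (or another collector) to experiment.
         .config("spark.executor.extraJavaOptions",
                 "-XX:NewRatio=1 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps")
         .getOrCreate())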
Java Concurrent and Parallel GC
If you have done all the steps above but still get the error, you should consider changing your code, for example by reducing the number of iterations in an ML model.
Log files for all containers and the AM are available via:
yarn logs -applicationId application_1480922439133_0845_02
If you just want AM logs,
yarn logs -am -applicationId application_1480922439133_0845_02
If you want to find the containers that ran for this job,
yarn logs -applicationId application_1480922439133_0845_02|grep container_e33_1480922439133_0845_02
If you want just a single container log,
yarn logs -containerId container_e33_1480922439133_0845_02_000002
For these commands to work, log aggregation must be turned on; otherwise you will have to get the logs from the individual server directories.
Update
There is nothing you can do apart from trying swap, but that will degrade performance a lot.
The GC overhead limit means the GC has been running non-stop in quick succession but was not able to recover much memory. The only reasons for that are either that the code is poorly written and has a lot of back references (which is doubtful, as you are doing a simple join), or that memory capacity has been reached.
REASON 1
By default the shuffle partition count is 200. Having too many shuffle partitions increases the complexity and the chances of the program crashing. Try controlling the number of shuffle partitions in the Spark session. I changed the count to 5 using the code below:
implicit val sparkSession = org.apache.spark.sql.SparkSession.builder().enableHiveSupport().getOrCreate()
sparkSession.sql("set spark.sql.shuffle.partitions=5")
Additionally, if you are using DataFrames and you are not repartitioning the DataFrame, the execution will be done in a single executor. If only one executor is running for some time, YARN will shut the other executors down. Later, if more memory is required, YARN tries to recall the other executors, but sometimes they won't come up, and the process can fail with a memory overflow. To overcome this, try repartitioning the DataFrame before an action is called:
val df = df_temp.repartition(5)
Note that you might need to change the shuffle and partition counts according to your requirements. In my case the above combination worked (a PySpark equivalent is sketched below).
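For reference, a minimal PySpark equivalent of the two knobs above (the value 5 and the stand-in DataFrame are just illustrations):

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
spark.conf.set("spark.sql.shuffle.partitions", "5")   # same setting as the Scala snippet above

df = spark.range(1000)   # stand-in DataFrame; replace with your own
df = df.repartition(5)   # repartition before an action is called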
REASON 2
It can also occur because memory is not being cleared in time. For example, suppose you are running a Spark job in Scala that executes a bunch of SQL statements and exports the results to CSV. The data in some Hive tables can be very large, and you have to manage the memory in your code.
For example, consider the code below, where lst_Sqls is a list containing a set of SQL commands:
lst_Sqls.foreach(sqlCmd => spark.sql(sqlCmd).coalesce(1).write.format("com.databricks.spark.csv").option("delimiter","|").save("s3 path..."))
When you run this, you will sometimes end up seeing the same error. This is because, although Spark clears the memory, it does so lazily, i.e. your loop keeps going while Spark may only clear the memory at some later point.
In such cases, you need to manage the memory in your code, i.e. clear the memory after each command is executed. For this, let us change the code a little. I have commented on what each line does in the code below.
lst_Sqls.foreach(sqlCmd =>
{
    val df = spark.sql(sqlCmd)
    // Store the result in memory; if memory is full, it spills to disk
    df.persist(StorageLevel.MEMORY_AND_DISK)
    // Export the DataFrame to csv
    df.coalesce(1).write.format("com.databricks.spark.csv").save("s3 path")
    // Clear the memory. Only after clearing the memory does it move to the next iteration
    df.unpersist(blocking = true)
})

Spark SQL > Join (Shuffle) > Join query always failed because of "executor failed"

I am using Spark SQL (1.5.1) to run a JOIN query in the Spark shell. The data contains an extremely large number of rows, and the JOIN query never succeeds. However, if I process the same data set with Hive SQL on Hive, everything goes fine, so there is probably something wrong with my configuration.
From the console output, I found:
"[Stage 2:=========================> (92 + 54) / 200]15/10/29 14:26:23 ERROR YarnScheduler: Lost executor 1 on cn233.local: remote Rpc client disassociated"
Based on this, Spark started 200 executors by default because of the spark.shuffle.partitions configuration, and this definitely consumed all the memory, as I have a small cluster.
So how do I solve this problem?
The "client disassociated" error occurs mostly when a Spark executor runs out of memory. You can try the following options:
Increase the executor memory:
--executor-memory 20g
You may also try to tune your memory overhead if your application is using a lot of JVM memory:
--conf spark.yarn.executor.memoryOverhead=5000
Try adjusting the Akka frame size (default 100 MB):
--conf spark.akka.frameSize=1000
You may also want to try a smaller block size for the input data. This will increase the number of tasks, and each task will have less data to work with, which may prevent executors from running into OutOfMemory.
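A minimal PySpark sketch of asking for more, smaller input splits (the path and partition count are placeholders):

from pyspark import SparkContext

sc = SparkContext(appName="smaller-splits-sketch")

# More partitions means more tasks, each holding less data in memory at once.
rdd = sc.textFile("hdfs:///path/to/input", minPartitions=400)

# Data that is already loaded can be split further with rdd.repartition(400)
# or df.repartition(400) before the heavy stages.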

spark scalability: what am I doing wrong?

I am processing data with Spark. It works with a day's worth of data (40G) but fails with OOM on a week's worth of data:
import pyspark
import datetime
import operator
sc = pyspark.SparkContext()
sqc = pyspark.sql.SQLContext(sc)
sc.union([sqc.parquetFile(hour.strftime('.....'))
             .map(lambda row: (row.id, row.foo))
          for hour in myrange(beg, end, datetime.timedelta(0, 3600))]) \
  .reduceByKey(operator.add).saveAsTextFile("myoutput")
The number of different IDs is less than 10k.
Each ID is a smallish int.
The job fails because too many executors fail with OOM.
When the job succeeds (on small inputs), "myoutput" is about 100k.
what am I doing wrong?
I tried replacing saveAsTextFile with collect (because I actually want to do some slicing and dicing in Python before saving); there was no difference in behavior, the same failure. Is this to be expected?
I used to have reduce(lambda x,y: x.union(y), [sqc.parquetFile(...)...]) instead of sc.union - which is better? Does it make any difference?
The cluster has 25 nodes with 825GB RAM and 224 cores among them.
Invocation is spark-submit --master yarn --num-executors 50 --executor-memory 5G.
A single RDD has ~140 columns and covers one hour of data, so a week is a union of 168(=7*24) RDDs.
Spark very often suffers from out-of-memory errors when scaling up. In these cases, fine tuning should be done by the programmer. Also recheck your code to make sure you don't do anything excessive, such as collecting all the big data in the driver, which is very likely to exceed the memoryOverhead limit no matter how big you set it.
To understand what is happening, you should realize when YARN decides to kill a container for exceeding memory limits: that happens when the container goes beyond the memoryOverhead limit.
In the Scheduler you can check the Event Timeline to see what happened with the containers. If YARN has killed a container, it will appear red, and when you hover over or click it, you will see a message like:
Container killed by YARN for exceeding memory limits. 16.9 GB of 16 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
So in that case, the configuration properties you want to focus on are these (the values are examples from my cluster):
# More executor memory overhead
spark.yarn.executor.memoryOverhead 4096
# More driver memory overhead
spark.yarn.driver.memoryOverhead 8192
# Max on my nodes
#spark.executor.cores 8
#spark.executor.memory 12G
# For the executors
spark.executor.cores 6
spark.executor.memory 8G
# For the driver
spark.driver.cores 6
spark.driver.memory 8G
The first thing to do is to increase the memoryOverhead.
In the driver or in the executors?
When looking at your cluster in the UI, you can click on the Attempt ID and check the Diagnostics Info, which should mention the ID of the container that was killed. If it is the same as your AM container, then it's the driver; otherwise it's the executor(s).
That didn't resolve the issue, now what?
You have to fine tune the number of cores and the heap memory you are providing. PySpark does most of its work in off-heap memory, so you don't want to give too much space to the heap, since that would be wasted. You don't want to give too little either, because the garbage collector will then have issues. Recall that these are JVMs.
As described here, a worker can host multiple executors, so the number of cores used affects how much memory every executor gets; decreasing the number of cores might help (a back-of-the-envelope sketch follows below).
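A back-of-the-envelope sketch of that trade-off; all numbers are made up for illustration and only reuse the example values listed above.

# How many executors fit on one worker node, given settings like the ones above.
node_memory_gb, node_cores = 33, 16          # hypothetical worker node
executor_memory_gb, executor_cores = 8, 6    # like spark.executor.memory / spark.executor.cores
overhead_gb = 4                              # like spark.yarn.executor.memoryOverhead (4096 MB)

executors_per_node = min(node_cores // executor_cores,
                         node_memory_gb // (executor_memory_gb + overhead_gb))
print(executors_per_node)  # 2 here; changing cores or memory per executor shifts this balance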
I have written about this in more detail in "memoryOverhead issue in Spark" and "Spark – Container exited with a non-zero exit code 143", mostly so that I won't forget! Another option, which I haven't tried here, would be spark.default.parallelism and/or spark.storage.memoryFraction, which based on my experience didn't help.
You can pass configuration flags as sds mentioned, or like this:
spark-submit --properties-file my_properties
where "my_properties" is something like the attributes I list above.
For non-numerical values, you could do this:
spark-submit --conf spark.executor.memory='4G'
It turned out that the problem was not with Spark, but with YARN.
The solution is to run spark with
spark-submit --conf spark.yarn.executor.memoryOverhead=1000
(or modify yarn config).
