Spark on YARN: Container exited with a non-zero exit code 143

I am using HDP 2.5, running spark-submit in yarn cluster mode.
I am trying to generate data using a DataFrame cross join, i.e.:
val generatedData = df1.join(df2).join(df3).join(df4)
generatedData.saveAsTable(...)....
df1's storage level is MEMORY_AND_DISK.
df2, df3 and df4's storage level is MEMORY_ONLY.
df1 has many more records, i.e. 5 million, while df2 to df4 have at most 100 records each.
Given those sizes, the explain plan shows a BroadcastNestedLoopJoin, which should give better performance.
For some reason it always fails. I don't know how to debug it or where the memory explodes.
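To make the intent concrete, the join I am describing looks roughly like this (an illustrative sketch only: here the small DataFrames are broadcast explicitly, whereas my real job relies on the optimizer choosing the BroadcastNestedLoopJoin on its own, and the table name is just a placeholder):
import org.apache.spark.sql.functions.broadcast

// Cross join the large df1 against the three small DataFrames,
// hinting that df2..df4 should be shipped to every executor.
val generatedData = df1
  .join(broadcast(df2))
  .join(broadcast(df3))
  .join(broadcast(df4))

generatedData.write.saveAsTable("generated_data") // placeholder table name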
Error log output:
16/12/06 19:44:08 WARN YarnAllocator: Container marked as failed: container_e33_1480922439133_0845_02_000002 on host: hdp4. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
16/12/06 19:44:08 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_e33_1480922439133_0845_02_000002 on host: hdp4. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
16/12/06 19:44:08 ERROR YarnClusterScheduler: Lost executor 1 on hdp4: Container marked as failed: container_e33_1480922439133_0845_02_000002 on host: hdp4. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
16/12/06 19:44:08 WARN TaskSetManager: Lost task 1.0 in stage 12.0 (TID 19, hdp4): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container marked as failed: container_e33_1480922439133_0845_02_000002 on host: hdp4. Exit status: 143. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal
I didn't see any WARN or ERROR logs before this error.
What is the problem? Where should I look for the memory consumption?
I cannot see anything on the Storage tab of the Spark UI.
The log was taken from the YARN Resource Manager UI on HDP 2.5.
EDIT
Looking at the container log, it seems like it's a java.lang.OutOfMemoryError: GC overhead limit exceeded.
I know how to increase the memory, but I don't have any memory left.
How can I do a cartesian / cross join of 4 DataFrames without getting this error?

I also met this problem and tried to solve it by referring to some blog posts.
1. Run Spark with the conf below:
--conf 'spark.driver.extraJavaOptions=-XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps' \
--conf 'spark.executor.extraJavaOptions=-XX:+UseCompressedOops -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC ' \
When the JVM runs a GC, you will get a message like the following:
Heap after GC invocations=157 (full 98):
PSYoungGen total 940544K, used 853456K [0x0000000781800000, 0x00000007c0000000, 0x00000007c0000000)
eden space 860160K, 99% used [0x0000000781800000,0x00000007b5974118,0x00000007b6000000)
from space 80384K, 0% used [0x00000007b6000000,0x00000007b6000000,0x00000007bae80000)
to space 77824K, 0% used [0x00000007bb400000,0x00000007bb400000,0x00000007c0000000)
ParOldGen total 2048000K, used 2047964K [0x0000000704800000, 0x0000000781800000, 0x0000000781800000)
object space 2048000K, 99% used [0x0000000704800000,0x00000007817f7148,0x0000000781800000)
Metaspace used 43044K, capacity 43310K, committed 44288K, reserved 1087488K
class space used 6618K, capacity 6701K, committed 6912K, reserved 1048576K
}
Both PSYoungGen and ParOldGen are 99% used, so you will get java.lang.OutOfMemoryError: GC overhead limit exceeded as soon as more objects are created.
Try to add more memory for your executor or your driver when more memory resources are available:
--executor-memory 10000m \
--driver-memory 10000m \
In my case the memory for PSYoungGen was smaller than ParOldGen, which caused many young objects to be promoted into the ParOldGen area until ParOldGen had no space left, so a java.lang.OutOfMemoryError: Java heap space error appeared.
Add this conf for the executor:
'spark.executor.extraJavaOptions=-XX:NewRatio=1 -XX:+UseCompressedOops
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps'
Here -XX:NewRatio=rate, where rate = ParOldGen / PSYoungGen. For example, -XX:NewRatio=1 splits the heap evenly between the young and old generations, while the JVM default of 2 gives the old generation twice the space of the young one.
The right ratio depends on your workload. You can also try a different GC strategy, such as one of the following (see the sketch after this list):
-XX:+UseSerialGC :Serial Collector
-XX:+UseParallelGC :Parallel Collector
-XX:+UseParallelOldGC :Parallel Old collector
-XX:+UseConcMarkSweepGC :Concurrent Mark Sweep
Java Concurrent and Parallel GC
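As a small sketch of trying one of these collectors programmatically (my assumption, not part of the original answer: executor JVM options set in the SparkConf take effect because executors are launched after the SparkContext is created; in yarn-cluster mode the driver's own GC flags still have to be passed with --conf or --driver-java-options on spark-submit, as shown in step 1):
import org.apache.spark.sql.SparkSession

// Illustrative only: try the Concurrent Mark Sweep collector on the executors,
// keeping the GC logging flags from earlier so the effect can be compared.
val spark = SparkSession.builder()
  .appName("gc-strategy-test") // hypothetical app name
  .config("spark.executor.extraJavaOptions",
    "-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps")
  .enableHiveSupport()
  .getOrCreate()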
If you have done both the memory increase and the GC tuning above but still get the error, you should consider changing your code, for example by reducing the number of iterations in an ML model.

Log files for all containers and the AM are available via:
yarn logs -applicationId application_1480922439133_0845_02
If you just want AM logs,
yarn logs -am -applicationId application_1480922439133_0845_02
If you want to find the containers that ran for this job,
yarn logs -applicationId application_1480922439133_0845_02|grep container_e33_1480922439133_0845_02
If you want just a single container log,
yarn logs -containerId container_e33_1480922439133_0845_02_000002
For these commands to work, log aggregation must be enabled (yarn.log-aggregation-enable set to true); otherwise you will have to get the logs from the individual NodeManager log directories.
Update
There is nothing you can do apart from trying swapping, but that will degrade performance a lot.
The GC overhead limit error means the GC has been running non-stop in quick succession without being able to recover much memory. The only reasons for that are either that the code is poorly written and holds a lot of back references (which is doubtful, as you are doing a simple join), or that the memory capacity has been reached.

REASON 1
By default the shuffle partition count is 200. Having too many shuffle partitions increases the complexity and the chance of the program crashing. Try controlling the number of shuffle partitions in the Spark session. I changed the count to 5 using the code below.
implicit val sparkSession = org.apache.spark.sql.SparkSession.builder().enableHiveSupport().getOrCreate()
sparkSession.sql("set spark.sql.shuffle.partitions=5")
Additionally, if you are using DataFrames and are not re-partitioning them, the execution may be done in a single executor. If only one executor is running for some time, YARN will shut the other executors down. Later, if more memory is required, YARN tries to re-acquire the other executors, but sometimes they won't come back up, so the process can fail with a memory overflow. To overcome this, try re-partitioning the DataFrame before an action is called.
val df = df_temp.repartition(5)
Note that you might need to tune the shuffle and partition counts according to your requirements. In my case the above combination worked.
REASON 2
It can also occur because memory is not cleared in time. For example, suppose you are running a Spark job in Scala that executes a bunch of SQL statements and exports the results to CSV. The data in some Hive tables can be very large, and you have to manage the memory in your code.
As an example, consider the code below, where lst_Sqls is a list that contains a set of SQL commands:
lst_Sqls.foreach(sqlCmd => spark.sql(sqlCmd).coalesce(1).write.format("com.databricks.spark.csv").option("delimiter","|").save("s3 path..."))
When you run this, you will sometimes end up seeing the same error. This is because, although Spark clears the memory, it does so lazily, i.e. the loop keeps going while Spark may only clear the memory at some later point.
In such cases you need to manage the memory in your code, i.e. clear the memory after each command is executed. For this, let us change our code a little. I have commented what each line does in the code below.
import org.apache.spark.storage.StorageLevel

lst_Sqls.foreach(sqlCmd => {
  val df = spark.sql(sqlCmd)
  // Cache the result in memory; if memory is full, it spills to disk
  df.persist(StorageLevel.MEMORY_AND_DISK)
  // Export the DataFrame to CSV
  df.coalesce(1).write.format("com.databricks.spark.csv").save("s3 path")
  // Clear the cached memory. Only after the memory is cleared does the loop move to the next command
  df.unpersist(blocking = true)
})

Related

Pyspark: Container exited with a non-zero exit code 143

I have seen various threads on this issue but the solutions given do not work in my case.
The environment is PySpark 2.1.0 with Java 7, and it has enough memory and cores.
I am running a spark-submit job which deals with JSON files. The job runs perfectly fine when the file size is < 200 MB, but if it is more than that it fails with Container exited with a non-zero exit code 143. I then checked the YARN logs and the error there is java.lang.OutOfMemoryError: Requested array size exceeds VM limit.
Since the JSON files are not in a format that can be read directly with spark.read.json(), the first step in the application is reading the JSON as a text file into an RDD, applying map and flatMap to convert it into the required format, and then using spark.read.json(rdd) to create the DataFrame for further processing. The code is below:
def read_json(self, spark_session, filepath):
    raw_rdd = spark_session.sparkContext.textFile(filepath)
    raw_rdd_add_sep = raw_rdd.map(lambda x: x.replace('}{', '}}{{'))
    raw_rdd_split = raw_rdd_add_sep.flatMap(lambda x: x.split('}{'))
    required_df = spark_session.read.json(raw_rdd_split)
    return required_df
I have tried increasing the memory overhead for the executor and the driver using the options spark.driver.memoryOverhead and spark.executor.memoryOverhead, which didn't help.
I have also enabled the off-heap option spark.memory.offHeap.enabled and set spark.memory.offHeap.size.
I have tried setting the JVM memory option with spark.driver.extraJavaOptions=-Xms10g.
None of the above options worked in this scenario. Some of the JSON files are nearly 1 GB and we have to process ~200 files a day.
Can someone please help resolve this issue?
Regarding "Container exited with a non-zero exit code 143", it is probably because of the memory problem.
You need to check out on Spark UI if the settings you set is taking effect.
BTW, the proportion for executor.memory:overhead.memory should be about 4:1
I don't know why you change the JVM setting directly spark.driver.extraJavaOptions=-Xms10g, I recommend using --driver-memory 10g instread. e.g.: spark-submit --driver-memory 10G (I remember driver-memory only works with spark-submit sometimes)
from my perspective, you just need to update the four arguments to feed your machine resources:
spark.driver.memoryOverhead,
spark.executor.memoryOverhead,
spark.driver.memory,
spark.executor.memory
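Here is a minimal sketch of the executor-side pair with made-up values (the driver-side pair has to go to spark-submit or spark-defaults.conf, since the driver JVM is already running when user code executes; the 8g/2g split simply follows the ~4:1 ratio mentioned above):
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Illustrative values only -- size these to your cluster.
// On older builds the overhead key is spark.yarn.executor.memoryOverhead.
val conf = new SparkConf()
  .set("spark.executor.memory", "8g")           // heap for each executor
  .set("spark.executor.memoryOverhead", "2g")   // off-heap overhead, roughly 4:1 vs executor memory

val spark = SparkSession.builder().config(conf).getOrCreate()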

OOM error - unable to acquire 261244 bytes of memory, got 0

I am trying to run a Spark job that is both data and processing intensive on Dataproc, and I am getting an OOM with the error below:
'OOM error - unable to acquire 261244 bytes of memory, got 0'
To give an overview: on the collect action, the job shuffles TBs of data, roughly ~6 TB.
What I know is that this error appears when an executor runs out of memory, but when I increase the executor memory, the number of executors per node decreases, resulting in fewer vcores and causing the job to run slowly.
Can anyone please help me with the above error? I have tried everything that has been suggested on Stack Overflow.
Dataproc configuration
I am using highmem-16 Dataproc machines.
The code can't be shared, as it is a massive codebase with a lot of transformations, and this is the first action on the data.

Databricks Spark: java.lang.OutOfMemoryError: GC overhead limit exceeded

I am executing a Spark job on a Databricks cluster. I trigger the job via an Azure Data Factory pipeline that runs it at a 15 minute interval, so after three or four successful executions it fails, throwing the exception "java.lang.OutOfMemoryError: GC overhead limit exceeded".
Though there are many answers to this question, in most of those cases the jobs never run at all, whereas in my case the job fails after some previous runs have succeeded.
My data size is less than 20 MB.
My cluster configuration is:
So my question is: what changes should I make to the server configuration? If the issue comes from my code, then why does it succeed most of the time? Please advise and suggest a solution.
This is most probably related to the executor memory being a bit low. I am not sure what the current setting is, or, if it is the default, what the default value is in this particular Databricks distribution. Even though the job passes most of the time, there would be a lot of GC happening because of the low memory, so it keeps failing once in a while. Under the Spark configuration, please provide spark.executor.memory and also some other parameters related to the number of executors and cores per executor. With spark-submit the config would be provided as spark-submit --conf spark.executor.memory=1g.
You may try increasing the memory of the driver node.
Sometimes the garbage collector does not release all the loaded objects in the driver's memory.
What you can try is to force the GC to do that, by executing the following:
spark.catalog.clearCache()
for (id, rdd) in spark.sparkContext._jsc.getPersistentRDDs().items():
    rdd.unpersist()
    print("Unpersisted {} rdd".format(id))

Memory issue with spark structured streaming

I'm facing memory issues running a structured stream with aggregation and partitioning in Spark 2.2.0:
session
.readStream()
.schema(inputSchema)
.option(OPTION_KEY_DELIMITER, OPTION_VALUE_DELIMITER_TAB)
.option(OPTION_KEY_QUOTE, OPTION_VALUE_QUOTATION_OFF)
.csv("s3://test-bucket/input")
.as(Encoders.bean(TestRecord.class))
.flatMap(mf, Encoders.bean(TestRecord.class))
.dropDuplicates("testId", "testName")
.withColumn("year", functions.date_format(dataset.col("testTimestamp").cast(DataTypes.DateType), "YYYY"))
.writeStream()
.option("path", "s3://test-bucket/output")
.option("checkpointLocation", "s3://test-bucket/checkpoint")
.trigger(Trigger.ProcessingTime(60, TimeUnit.SECONDS))
.partitionBy("year")
.format("parquet")
.outputMode(OutputMode.Append())
.queryName("test-stream")
.start();
During testing I noticed that the amount of used memory increases each time new data comes in, and eventually the executors exit with code 137:
ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Container marked as failed: container_1520214726510_0001_01_000003 on host: ip-10-0-1-153.us-west-2.compute.internal. Exit status: 137. Diagnostics: Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
I created a heap dump and found that most of the memory is used by org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider, which is referenced from StateStore.
At first glance this looks normal, since that is how Spark keeps aggregation keys in memory. However, I did my testing by renaming files in the source folder so that they would be picked up by Spark again. Since the input records are the same, all further rows should be rejected as duplicates and memory consumption shouldn't increase, but it does.
Moreover, GC time took more than 30% of the total processing time.
Here is a heap dump taken from an executor running with a smaller amount of memory than those described above, since when I was creating a dump from that one the Java process terminated in the middle of the process.
Migrating my comment from SPARK-23682, which the asker of this question also filed.
In the HDFS state store provider, Spark excessively caches multiple versions of the state in memory, 100 versions by default. The issue is addressed by SPARK-24717, which will keep only two versions of the state in memory (the current one for replay and the new one for update). The patch will be available in Spark 2.4.0.
I think the root cause is that you do not use a watermark along with dropDuplicates, so all the state is kept and never dropped.
Have a look at: https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#streaming-deduplication
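For reference, a minimal sketch of what the guide's approach would look like on the question's pipeline (assuming testTimestamp is the event-time column; note that the guide's watermarked example also includes the event-time column in the deduplication keys, which slightly changes what counts as a duplicate but allows old state to be dropped):
import org.apache.spark.sql.DataFrame

// Sketch: bound the deduplication state with a 24-hour watermark so that
// state for old keys can be evicted instead of accumulating forever.
def dedupWithWatermark(input: DataFrame): DataFrame =
  input
    .withWatermark("testTimestamp", "24 hours")
    .dropDuplicates("testId", "testName", "testTimestamp")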

SPARK: YARN kills containers for exceeding memory limits

We're currently encountering an issue where Spark jobs are seeing a number of containers being killed for exceeding memory limits when running on YARN.
16/11/18 17:58:52 WARN TaskSetManager: Lost task 53.0 in stage 49.0 (TID 32715, XXXXXXXXXX):
ExecutorLostFailure (executor 23 exited caused by one of the running tasks)
Reason: Container killed by YARN for exceeding memory limits. 12.4 GB of 12 GB physical memory used.
Consider boosting spark.yarn.executor.memoryOverhead.
The following arguments are being passed via spark-submit:
--executor-memory=6G
--driver-memory=4G
--conf "spark.yarn.executor.memoryOverhead=6G"`
I am using Spark 2.0.1.
We have increased the memoryOverhead to this value after reading several posts about YARN killing containers (e.g. How to avoid Spark executor from getting lost and yarn container killing it due to memory limit?).
Given my parameters and the log message, it does seem that "YARN kills executors when their memory usage is larger than (executor-memory + executor.memoryOverhead)"; indeed, 6 GB of executor memory plus 6 GB of overhead matches the 12 GB physical limit in the log.
It is not practical to continue increasing this overhead in the hope that we eventually find a value at which these errors do not occur. We are seeing this issue on several different jobs. I would appreciate any suggestions as to parameters I should change, things I should check, where I should start looking to debug this, etc. I am able to provide further config options if needed.
You can reduce the memory usage with the following configurations in spark-defaults.conf:
spark.default.parallelism
spark.sql.shuffle.partitions
And there is a difference when you use more than 2000 partitions for spark.sql.shuffle.partitions. You can see it in the Spark code on GitHub:
private[spark] object MapStatus {
  def apply(loc: BlockManagerId, uncompressedSizes: Array[Long]): MapStatus = {
    if (uncompressedSizes.length > 2000) {
      HighlyCompressedMapStatus(loc, uncompressedSizes)
    } else {
      new CompressedMapStatus(loc, uncompressedSizes)
    }
  }
}
I recommend trying more than 2000 partitions as a test. It can sometimes be faster when you use very large datasets, and according to this your tasks can be as short as 200 ms. The correct configuration is not easy to find, but depending on your workload it can make a difference of hours.
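As a sketch, both settings can also be tried per job instead of in spark-defaults.conf (the values below are arbitrary starting points, not recommendations):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("partition-tuning-test") // hypothetical app name
  .config("spark.default.parallelism", "400") // applies to RDD shuffles, read at context creation
  .getOrCreate()

// spark.sql.shuffle.partitions can be changed at runtime, per query.
// Anything above 2000 uses HighlyCompressedMapStatus, as shown in the snippet above.
spark.conf.set("spark.sql.shuffle.partitions", "2048")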
