Spark 2.1.1 Memory Leak Detected [duplicate]

I've tried upgrading to Apache Spark 1.6.0 RC3. My application now spams these errors for nearly every task:
Managed memory leak detected; size = 15735058 bytes, TID = 830
I've set logging level for org.apache.spark.memory.TaskMemoryManager to DEBUG and see in the logs:
I2015-12-18 16:54:41,125 TaskSetManager: Starting task 0.0 in stage 7.0 (TID 6, localhost, partition 0,NODE_LOCAL, 3026 bytes)
I2015-12-18 16:54:41,125 Executor: Running task 0.0 in stage 7.0 (TID 6)
I2015-12-18 16:54:41,130 ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
I2015-12-18 16:54:41,130 ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
D2015-12-18 16:54:41,188 TaskMemoryManager: Task 6 acquire 5.0 MB for null
I2015-12-18 16:54:41,199 ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
I2015-12-18 16:54:41,199 ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
D2015-12-18 16:54:41,262 TaskMemoryManager: Task 6 acquire 5.0 MB for null
D2015-12-18 16:54:41,397 TaskMemoryManager: Task 6 release 5.0 MB from null
E2015-12-18 16:54:41,398 Executor: Managed memory leak detected; size = 5245464 bytes, TID = 6
How do you debug these errors? Is there a way to log stack traces for allocations and deallocations, so I can find what leaks?
I don't know much about the new unified memory manager (SPARK-10000). Is the leak likely my fault or is it likely a Spark bug?

The short answer is that users are not supposed to see this message. Users are not supposed to be able to create memory leaks in the unified memory manager.
That such leaks happen is a Spark bug: SPARK-11293
But if you want to understand the cause of a memory leak, this is how I did it.
Download the Spark source code and make sure you can build it and that your build works.
In TaskMemoryManager.java, add extra logging to acquireExecutionMemory and releaseExecutionMemory: logger.error("stack trace:", new Exception());
Change all the other debug logs to error in TaskMemoryManager.java. (This is easier than figuring out the logging configuration...)
Now you will see the full stack trace for all allocations and deallocations. Try to match them up and find the allocations without deallocations. You now have the stack trace for the source of the leak.

I ran into this warning too, but in my case it was caused by df.repartition(rePartNum, df("id")).
My DataFrame was empty, and the number of warning lines equaled rePartNum (see the sketch below).
Version: Spark 2.4 on Windows 10.
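A minimal sketch of that pattern, for illustration only (the DataFrame contents, the partition count, and the SparkSession named spark are assumptions, not the original code):
// Assumes an existing SparkSession named spark.
import spark.implicits._

val rePartNum = 200                                       // illustrative partition count
val df = Seq.empty[(Long, String)].toDF("id", "value")    // an empty DataFrame, like the one described
val repartitioned = df.repartition(rePartNum, df("id"))   // hash-partition by "id" into rePartNum partitions
repartitioned.count()                                     // each of the rePartNum empty shuffle tasks can log the warning once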

Related

Spark Structured streaming - java.lang.OutOfMemoryError: Java heap space

I am getting the below exception when processing input streams using Spark structured streaming.
org.apache.spark.SparkException: Job aborted due to stage failure:
Task 22 in stage 5.0 failed 1 times, most recent failure: Lost task
22.0 in stage 5.0 (TID 403, localhost, executor driver): java.lang.OutOfMemoryError: Java heap space
I have handled the watermark as given below:
.withWatermark("timestamp", "5 seconds")
.groupBy(window($"timestamp", "1 second"), $"column")
What could be the issue? I have tried changing the trigger from the default to a fixed interval, but I am still facing the problem.
I don't believe this issue is related to watermarks or triggers. OutOfMemory errors occur for one of two reasons:
Memory leaks. This programming error leads your application to consume more and more memory. Every time the leaking functionality of the application is used, it leaves some objects behind in the Java heap space. Over time the leaked objects consume all of the available Java heap space and trigger the error.
Too much data for the resources designated to it. Your cluster has a designated threshold and can only hold a certain amount of data. When the volume of data exceeds that threshold, the job, which functioned normally before the spike, ceases to operate and triggers the java.lang.OutOfMemoryError: Java heap space error.
Your error says task 22.0 in stage 5.0, which means that it completed stages 1-4 successfully. To me, that signifies there was too much data for the resources designated to it, since it did not die over multiple runs as it would with a memory leak. Try limiting the amount of data being read in with something like spark.readStream.option("maxFilesPerTrigger", "6") (sketched below), or increase the memory assigned to that cluster.
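A minimal sketch of that throttling, assuming a file-based JSON source; the format, schema, and path are illustrative assumptions based on the columns used above:
// Assumes an existing SparkSession named spark.
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

val inputSchema = new StructType()                 // streaming file sources need an explicit schema
  .add("timestamp", TimestampType)
  .add("column", StringType)

val throttled = spark.readStream
  .format("json")                                  // assumed source format
  .schema(inputSchema)
  .option("maxFilesPerTrigger", "6")               // cap the files read per micro-batch to bound memory use
  .load("/path/to/input")                          // placeholder path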

Spark Streaming - Block replication policy issue in case of multiple executor on the same worker

I am running a spark streaming application on a cluster composed by three nodes, each node with a worker and three executors (so a total of 9 executors). I am using Spark version 2.3.2, and the spark standalone cluster manager.
The problem
Investigating a recent incident in which a worker machine went down completely, I could see that the Spark streaming job was stopped for the following reason:
18/10/08 11:53:03 ERROR TaskSetManager: Task 122 in stage 413804.1 failed 8 times; aborting job
The job was aborted due to a task in the same stage failing 8 times.
This is the expected behaviour.
The mentioned task failed for the following reason:
18/10/08 11:53:03 INFO DAGScheduler: ShuffleMapStage 413804 (flatMapToPair at MessageReducer.java:30) failed in 3.817 s due to Job aborted due to stage failure: Task 122 in stage 413804.1 failed 8 times, most recent failure: Lost task 122.7 in stage 413804.1 (TID 223071001, 10.12.101.60, executor 1): java.lang.Exception: Could not compute split, block input-39-1539013586600 of RDD 1793044 not found
org.apache.spark.SparkException: Job aborted due to stage failure: Task 122 in stage 413804.1 failed 8 times, most recent failure: Lost task 122.7 in stage 413804.1 (TID 223071001, 10.12.101.60, executor 1): java.lang.Exception: Could not compute split, block input-39-1539013586600 of RDD 1793044 not found
I then tried to track the missing block input-39-1539013586600 and found this:
18/10/08 11:46:26 INFO BlockManagerInfo: Added input-39-1539013586600 in memory on 10.10.101.66:32825 (size: 1398.0 B, free: 5.2 GB)
18/10/08 11:46:26 INFO BlockManagerInfo: Added input-39-1539013586600 in memory on 10.10.101.66:35258 (size: 1398.0 B, free: 5.2 GB)
18/10/08 11:47:35 WARN BlockManagerMasterEndpoint: No more replicas available for input-39-1539013586600 !
18/10/08 11:53:03 WARN TaskSetManager: Lost task 122.0 in stage 413804.1 (TID 223070944, 10.10.101.60, executor 5): java.lang.Exception: Could not compute split, block input-39-1539013586600 of RDD 1793044 not found
18/10/08 11:53:03 INFO TaskSetManager: Lost task 122.1 in stage 413804.1 (TID 223070956) on 10.12.101.66, executor 9: java.lang.Exception (Could not compute split, block input-39-1539013586600 of RDD 1793044 not found) [duplicate 1]
As you can notice, the block was replicated on two different executors on the same worker (10.10.101.66 in this case).
Spark code
We then checked the Spark code to see whether this behaviour is normal, and it seems it is.
The default policy used in the BlockManager is RandomBlockReplicationPolicy (https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/storage/BlockManager.scala#L240).
Despite the Javadoc saying "...basic implementation, that just makes sure we put blocks on different hosts, if possible", this policy appears to be completely random: it does not use the host property of the BlockManagerId object to try to put the replica on a different host (https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala#L120).
If our analysis is correct, then in a configuration like ours (multiple executors on one worker machine) Spark Streaming can easily go down if the whole host is lost.
Forcing the job to use BasicBlockReplicationPolicy does not seem to be a solution either, as this policy falls back to the random mechanism if no topology is specified (https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/storage/BlockReplicationPolicy.scala#L169), and I was not able to find in the code where the topology can be set (this value does not seem to be used at the moment). For reference, we forced the policy roughly as shown below.
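This is the setting we used; the configuration key is what we inferred from BlockManager.scala, so treat it as an assumption rather than documented behaviour:
--conf "spark.storage.replication.policy=org.apache.spark.storage.BasicBlockReplicationPolicy"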
Final questions
Can we consider this as a bug in Spark?
Has anyone faced this issue in the past? Is there any workaround available (other than reducing the number of executors per worker to 1)?

Spark - Writing large dataframe problems

In Spark 2.2 (via YARN), I am trying to write a pretty large dataframe to HDFS via an overnight batch job. We first have two source tables, which we join, and then we write the joined result. The output is compressed parquet, but the write is failing due to an out of memory error.
We're providing 12 executors each with 20g of memory and 4 cores, plus the driver with 32g.
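For reference, those resources correspond roughly to a submit command like the following (the class and jar names are placeholders):
spark-submit --master yarn \
  --num-executors 12 \
  --executor-memory 20g \
  --executor-cores 4 \
  --driver-memory 32g \
  --class com.example.NightlyBatchJoin \
  nightly-batch-join.jar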
In a write operation like this, what runs out of memory? The executors? Short of blindly throwing more memory at it, what steps can we take to resolve this?
The code for writing the joined result is simple:
joined.write.option("header", "true").parquet(destPath)
Here are the final logs before a bunch of "heap dump" spam:
18/06/15 14:25:27 INFO BlockManagerInfo: Added broadcast_1385_piece0 in memory on company02.host.comp.com:43201 (size: 36.8 KB, free: 10.5 GB)
18/06/15 14:25:38 INFO TaskSetManager: Finished task 5982.0 in stage 1008.0 (TID 41953) in 63870 ms on company02.host.comp.com (executor 7) (5979/12136)
18/06/15 14:25:39 INFO TaskSetManager: Finished task 5984.0 in stage 1008.0 (TID 41955) in 65189 ms on company02.host.comp.com (executor 7) (5980/12136)
JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2018/06/15 14:25:48 - please wait.

Where can one find exceptions thrown by Spark's Executors

We have been getting ExecutorLostExceptions, but have been unable to determine the root cause.
Here is a simplified script that can reproduce the error:
filenames = "hdfs://myfile1,hdfs://myfile2"
sc.textFile(filenames).first()
As an experiment, when I intentionally run a Spark job on 1 GB of data with spark.executor.memory set to only 1 MB, the driver prints the following error message:
16/04/28 17:28:54 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0,
host.addr, partition 0,ANY, 2257 bytes)
16/04/28 17:29:28 INFO MesosSchedulerBackend: Executor lost: 4e199be7-a0bc-407d-ba70-4147e08d6c39-S5, marking slave 4e199be7-a0bc-407d-ba70-4147e08d6c39-S5 as lost
16/04/28 17:29:28 INFO MesosSchedulerBackend: Mesos slave lost: 4e199be7-a0bc-407d-ba70-4147e08d6c39-S5
16/04/28 17:29:28 ERROR TaskSchedulerImpl: Lost executor 4e199be7-a0bc-407d-ba70-4147e08d6c39-S5 on host.addr: Unknown executor exit code (256) (died from signal 128?)
16/04/28 17:29:28 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, host.addr): ExecutorLostFailure (executor 4e199be7-a0bc-407d-ba70-4147e08d6c39-S5 exited caused by one of the running tasks) Reason: Unknown executor exit code (256) (died from signal 128?)
and after a few automated retries, the entire job fails. This happens in both PySpark and Scala Spark.
What are the appropriate logs I can look at to determine exactly why this executor failed?
For this controlled case I know that running out of memory was the cause. However, these and other failures with different exit codes occur regularly, and then I don't know where to look or what to fix.
The places I have looked so far include
The spark UI running on port 4040
/tmp/mesos/slaves/[slaveid]/frameworks/[frameworkid]/executors/[executorid]/runs/latest/{stderr,stdout} on the node whose executor was "lost"
/var/logs/mesos/mesos-slaves.{INFO,WARN,ERROR,FATAL} on the failed node
/tmp/spark-events/[executorid] on the driver node
Those places have helped address some issues, but not e.g. OOM errors, and now I'm not sure where else to look.
You can check under
$HADOOP_HOME/logs/userlogs
There you can find your logs by application ID. The application ID can be found in the Hadoop cluster web UI:
<your_cluster_master_ip>:8088
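If log aggregation is enabled, it is usually easier to fetch the aggregated logs by application ID instead of browsing those directories (the application ID below is a placeholder):
yarn logs -applicationId application_1461864742837_0001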

Error ExecutorLostFailure when running a task in Spark

Hi, I am a beginner in Spark. I am trying to run a job on Spark 1.4.1 with 8 slave nodes, each with 11.7 GB of memory and 3.2 GB of disk. I am running the Spark task from one of the 8 slave nodes (so with a 0.7 storage fraction, only approximately 4.8 GB is available on each node) and using Mesos as the cluster manager. I am using this configuration:
spark.master mesos://uc1f-bioinfocloud-vamp-m-1:5050
spark.eventLog.enabled true
spark.driver.memory 6g
spark.storage.memoryFraction 0.7
spark.core.connection.ack.wait.timeout 800
spark.akka.frameSize 50
spark.rdd.compress true
I am trying to run the Spark MLlib Naive Bayes algorithm on a folder of around 14 GB of data. (There is no issue when I run the task on a 6 GB folder.) I am reading this folder from Google Storage as an RDD, passing 32 as the partition parameter (I have tried increasing the partitions as well). Then I use TF to create feature vectors and predict on the basis of those.
But when I try to run it on this folder, it throws an ExecutorLostFailure every time. I have tried different configurations but nothing helps. Maybe I am missing something very basic, but I am not able to figure it out. Any help or suggestion will be highly valuable.
Log is:
15/07/21 01:18:20 ERROR TaskSetManager: Task 3 in stage 2.0 failed 4 times; aborting job
15/07/21 01:18:20 INFO TaskSchedulerImpl: Cancelling stage 2
15/07/21 01:18:20 INFO TaskSchedulerImpl: Stage 2 was cancelled
15/07/21 01:18:20 INFO DAGScheduler: ResultStage 2 (collect at /opt/work/V2ProcessRecords.py:213) failed in 28.966 s
15/07/21 01:18:20 INFO DAGScheduler: Executor lost: 20150526-135628-3255597322-5050-1304-S8 (epoch 3)
15/07/21 01:18:20 INFO BlockManagerMasterEndpoint: Trying to remove executor 20150526-135628-3255597322-5050-1304-S8 from BlockManagerMaster.
15/07/21 01:18:20 INFO DAGScheduler: Job 2 failed: collect at /opt/work/V2ProcessRecords.py:213, took 29.013646 s
Traceback (most recent call last):
File "/opt/work/V2ProcessRecords.py", line 213, in <module>
secondPassRDD = firstPassRDD.map(lambda ( name, title, idval, pmcId, pubDate, article, tags , author, ifSigmaCust, wclass): ( str(name), title, idval, pmcId, pubDate, article, tags , author, ifSigmaCust , "Yes" if ("PMC" + pmcId) in rddNIHGrant else ("No") , wclass)).collect()
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 745, in collect
File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 2.0 failed 4 times, most recent failure: Lost task 3.3 in stage 2.0 (TID 12, vamp-m-2.c.quantum-854.internal): ExecutorLostFailure (executor 20150526-135628-3255597322-5050-1304-S8 lost)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
15/07/21 01:18:20 INFO BlockManagerMaster: Removed 20150526-135628-3255597322-5050-1304-S8 successfully in removeExecutor
15/07/21 01:18:20 INFO DAGScheduler: Host added was in lost list earlier:vamp-m-2.c.quantum-854.internal
Jul 21, 2015 1:01:15 AM INFO: parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
15/07/21 01:18:20 INFO SparkContext: Invoking stop() from shutdown hook
{"Event":"SparkListenerTaskStart","Stage ID":2,"Stage Attempt ID":0,"Task Info":{"Task ID":11,"Index":6,"Attempt":2,"Launch Time":1437616381852,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":0,"Failed":false,"Accumulables":[]}}
{"Event":"SparkListenerExecutorRemoved","Timestamp":1437616389696,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Removed Reason":"Lost executor"}
{"Event":"SparkListenerTaskEnd","Stage ID":2,"Stage Attempt ID":0,"Task Type":"ResultTask","Task End Reason":{"Reason":"ExecutorLostFailure","Executor ID":"20150526-135628-3255597322-5050-1304-S8"},"Task Info":{"Task ID":11,"Index":6,"Attempt":2,"Launch Time":1437616381852,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":1437616389697,"Failed":true,"Accumulables":[]}}
{"Event":"SparkListenerExecutorAdded","Timestamp":1437616389707,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Executor Info":{"Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Total Cores":1,"Log Urls":{}}}
{"Event":"SparkListenerTaskStart","Stage ID":2,"Stage Attempt ID":0,"Task Info":{"Task ID":12,"Index":6,"Attempt":3,"Launch Time":1437616389702,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":0,"Failed":false,"Accumulables":[]}}
{"Event":"SparkListenerExecutorRemoved","Timestamp":1437616397743,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Removed Reason":"Lost executor"}
{"Event":"SparkListenerTaskEnd","Stage ID":2,"Stage Attempt ID":0,"Task Type":"ResultTask","Task End Reason":{"Reason":"ExecutorLostFailure","Executor ID":"20150526-135628-3255597322-5050-1304-S8"},"Task Info":{"Task ID":12,"Index":6,"Attempt":3,"Launch Time":1437616389702,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":1437616397743,"Failed":true,"Accumulables":[]}}
{"Event":"SparkListenerStageCompleted","Stage Info":{"Stage ID":2,"Stage Attempt ID":0,"Stage Name":"collect at /opt/work/V2ProcessRecords.py:215","Number of Tasks":72,"RDD Info":[{"RDD ID":6,"Name":"PythonRDD","Parent IDs":[0],"Storage Level":{"Use Disk":false,"Use Memory":false,"Use ExternalBlockStore":false,"Deserialized":false,"Replication":1},"Number of Partitions":72,"Number of Cached Partitions":0,"Memory Size":0,"ExternalBlockStore Size":0,"Disk Size":0},{"RDD ID":0,"Name":"gs://uc1f-bioinfocloud-vamp-m/literature/xml/P*/*.nxml","Scope":"{\"id\":\"0\",\"name\":\"wholeTextFiles\"}","Parent IDs":[],"Storage Level":{"Use Disk":false,"Use Memory":false,"Use ExternalBlockStore":false,"Deserialized":false,"Replication":1},"Number of Partitions":72,"Number of Cached Partitions":0,"Memory Size":0,"ExternalBlockStore Size":0,"Disk Size":0}],"Parent IDs":[],"Details":"","Submission Time":1437616365566,"Completion Time":1437616397753,"Failure Reason":"Job aborted due to stage failure: Task 6 in stage 2.0 failed 4 times, most recent failure: Lost task 6.3 in stage 2.0 (TID 12, uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal): ExecutorLostFailure (executor 20150526-135628-3255597322-5050-1304-S8 lost)\nDriver stacktrace:","Accumulables":[]}}
{"Event":"SparkListenerJobEnd","Job ID":2,"Completion Time":1437616397755,"Job Result":{"Result":"JobFailed","Exception":{"Message":"Job aborted due to stage failure: Task 6 in stage 2.0 failed 4 times, most recent failure: Lost task 6.3 in stage 2.0 (TID 12, uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal): ExecutorLostFailure (executor 20150526-135628-3255597322-5050-1304-S8 lost)\nDriver stacktrace:","Stack Trace":[{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler","Method Name":"org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages","File Name":"DAGScheduler.scala","Line Number":1266},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1","Method Name":"apply","File Name":"DAGScheduler.scala","Line Number":1257},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1","Method Name":"apply","File Name":"DAGScheduler.scala","Line Number":1256},{"Declaring Class":"scala.collection.mutable.ResizableArray$class","Method Name":"foreach","File Name":"ResizableArray.scala","Line Number":59},{"Declaring Class":"scala.collection.mutable.ArrayBuffer","Method Name":"foreach","File Name":"ArrayBuffer.scala","Line Number":47},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler","Method Name":"abortStage","File Name":"DAGScheduler.scala","Line Number":1256},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1","Method Name":"apply","File Name":"DAGScheduler.scala","Line Number":730},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1","Method Name":"apply","File Name":"DAGScheduler.scala","Line Number":730},{"Declaring Class":"scala.Option","Method Name":"foreach","File Name":"Option.scala","Line Number":236},{"Declaring Class":"org.apache.spark.scheduler.DAGScheduler","Method Name":"handleTaskSetFailed","File Name":"DAGScheduler.scala","Line Number":730},{"Declaring Class":"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop","Method Name":"onReceive","File Name":"DAGScheduler.scala","Line Number":1450},{"Declaring Class":"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop","Method Name":"onReceive","File Name":"DAGScheduler.scala","Line Number":1411},{"Declaring Class":"org.apache.spark.util.EventLoop$$anon$1","Method Name":"run","File Name":"EventLoop.scala","Line Number":48}]}}}
The most common cause of ExecutorLostFailure, as per my understanding, is an OOM in the executor.
In order to resolve the OOM issue, one needs to figure out what exactly is causing it. Simply increasing the default parallelism or increasing the executor memory is not a strategic solution.
Increasing the parallelism creates more tasks, so that each task works on less and less data. But if your data is skewed such that the key on which the data is partitioned (for parallelism) carries far more data than the others, simply increasing the parallelism will have no effect.
Similarly, just increasing the executor memory is a very inefficient way of handling such a scenario: if only one executor is failing with ExecutorLostFailure, requesting increased memory for all the executors will make your application require much more memory than it actually needs.
This error is occurring because a task failed more than four times.
Try increasing the parallelism in your cluster using the following parameter:
--conf "spark.default.parallelism=100"
Set the parallelism value to 2 to 3 times the number of cores available in your cluster. If that doesn't work, try increasing the parallelism exponentially, i.e. if your current parallelism doesn't work, multiply it by two, and so on. I have also observed that it helps if your level of parallelism is a prime number, especially if you are using groupByKey.
It is hard to say what the problem is without the log of the failed executor (rather than the driver's), but most likely it is a memory problem. Try increasing the partition count significantly (if your current value is 32, try 200), for example as sketched below.
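A minimal sketch of what that could look like, assuming the Scala API (the question's code is PySpark, where the equivalent calls exist as well):
val records = sc.wholeTextFiles("gs://uc1f-bioinfocloud-vamp-m/literature/xml/P*/*.nxml", 200)  // request ~200 partitions at read time
val repartitioned = records.repartition(200)                                                    // or explicitly repartition an existing RDD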
I was having this issue, and the problem for me was a very high incidence of one key in a reduceByKey task. This was (I think) causing a massive list to collect on one of the executors, which would then throw OOM errors.
The solution for me was to filter out keys with a high population before doing the reduceByKey (something like the sketch below), but I appreciate that this may or may not be possible depending on your application. I didn't need all my data anyway.
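A minimal sketch of that approach, assuming a pair RDD and an arbitrary cutoff (both the sample data and the threshold are illustrative, not the poster's actual code):
val pairs = sc.parallelize(Seq(("a", 1L), ("a", 1L), ("b", 1L)))                       // stand-in for the real pair RDD
val keyCounts = pairs.mapValues(_ => 1L).reduceByKey(_ + _)                            // count records per key
val heavyKeys = keyCounts.filter { case (_, n) => n > 1000000L }.keys.collect().toSet  // choose a cutoff that fits your data
val trimmed = pairs.filter { case (k, _) => !heavyKeys.contains(k) }                   // drop the dominant keys up front
val reduced = trimmed.reduceByKey(_ + _)                                               // no single key can now overwhelm one executor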
