I am using an m4.2xlarge master + 12 r5.12xlarge core instances to run my Spark job (Spark 2.4, EMR 5.21).
I used the following cluster config:
[
  {
    "classification": "spark-defaults",
    "properties": {
      "spark.executor.memory": "39219M",
      "spark.driver.memory": "39219M",
      "spark.driver.cores": "5",
      "spark.executor.cores": "5",
      "spark.memory.storageFraction": "0.27",
      "spark.memory.fraction": "0.80",
      "spark.executor.instances": "107",
      "spark.yarn.executor.memoryOverhead": "4357M",
      "spark.dynamicAllocation.enabled": "false",
      "spark.yarn.driver.memoryOverhead": "4357M"
    },
    "configurations": []
  }
]
According to the EC2 instance type listing, an r5.12xlarge has 384 GB of memory. I calculated the values above as follows:
# of cores per executor = 5
# of executors per r5.12xlarge instance = floor(48 / 5) = 9
spark.executor.instances = 9 * 12 - 1 = 107 (minus 1 for the driver)
spark.executor.memory = floor(((383 * 1024) / 9) * 0.9) = 39219 MB
spark.executor.memoryOverhead = floor(((383 * 1024) / 9) * 0.1) = 4357 MB
(and same for driver)
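For reference, here is the same sizing arithmetic as a small Scala sketch; every input is taken from the figures above, not from any EMR default.

// Re-derivation of the sizing above; all inputs come from the question itself.
val coresPerInstance = 48                      // r5.12xlarge vCPUs
val memPerInstanceMB = 383 * 1024.0            // memory budget used above
val coresPerExecutor = 5
val coreInstances    = 12
val executorsPerNode = coresPerInstance / coresPerExecutor        // 9
val executorCount    = executorsPerNode * coreInstances - 1       // 107 (one slot kept for the driver)
val memPerExecutorMB = memPerInstanceMB / executorsPerNode        // ≈ 43576.9
val executorMemoryMB = math.floor(memPerExecutorMB * 0.9).toInt   // 39219
val overheadMB       = math.floor(memPerExecutorMB * 0.1).toInt   // 4357
println(s"executors=$executorCount heap=${executorMemoryMB}M overhead=${overheadMB}M")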
Yet when the cluster launches, 95 executors + 1 driver are created (instead of 107 executors + 1 driver), each with a storage memory (as per the Spark UI) of 29 GB, which should have been spark.memory.fraction * 39219 MB ≈ 30.63 GB. Why are 12 fewer executors created (including the driver)? And why is the storage memory in the UI less? Am I correct about the formula used to derive the storage memory shown in the UI?
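As a rough check of that formula: to my understanding, in Spark 2.x the "Storage Memory" total on the executors page is the whole unified (storage + execution) pool, and it is computed from Runtime.getRuntime.maxMemory, which reports less than -Xmx because of survivor-space overhead. A minimal sketch, where the 0.93 heap factor is an illustrative assumption rather than a Spark constant:

// Sketch of how Spark 2.x's UnifiedMemoryManager sizes the pool the UI calls "Storage Memory".
val executorHeapMB  = 39219.0
val usableHeapMB    = executorHeapMB * 0.93   // Runtime.getRuntime.maxMemory < -Xmx (assumed factor)
val reservedMB      = 300.0                   // Spark's reserved system memory
val memoryFraction  = 0.80                    // spark.memory.fraction
val unifiedPoolMB   = (usableHeapMB - reservedMB) * memoryFraction
println(f"unified (storage + execution) pool ≈ ${unifiedPoolMB / 1024}%.1f GB")  // ≈ 28-29 GB, near the 29 GB shown
// spark.memory.storageFraction (0.27) only marks the slice protected from eviction;
// it does not cap what the UI reports as "Storage Memory".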
Related
I am new to Spark applications. I am using an r5a.4xlarge AWS cluster with a minimum of 1 worker and a maximum of 16 workers. This instance type has 128 GB of memory and 16 cores.
I have set spark.executor.cores to 5.
Per the usual memory-management calculation, memory per executor comes to around 42 GB; after subtracting the 10% overhead, the net memory available is around 37 GB.
I have enabled off-heap memory. Whenever I try to use the Spark configuration below, I get the following error.
Error updating cluster for job S2C_ER_ENROLMENT_PREM_DTL_HISTORY_TEST: Specified heap memory (37888 MB) and off heap memory (73404 MB) is above the maximum executor memory (97871 MB) allowed for node type r5a.4xlarge.
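For what it's worth, the rejection follows directly from the numbers in the message itself; a small Scala sketch of that arithmetic, with every value copied from the error rather than derived from Spark defaults:

// Arithmetic straight from the error message; nothing here comes from Spark defaults.
val heapMB       = 37888   // spark.executor.memory 37G expressed in MB
val offHeapMB    = 73404   // off-heap size reported in the error
val maxPerNodeMB = 97871   // "maximum executor memory ... for node type r5a.4xlarge"
val requestedMB  = heapMB + offHeapMB   // 111292
println(s"requested=$requestedMB MB, allowed=$maxPerNodeMB MB, over by ${requestedMB - maxPerNodeMB} MB")  // over by 13421 MB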
I have three questions about that error message.
How can the maximum executor memory be 97871 MB? That is nowhere near my calculated result.
How did the off-heap memory become 73404 MB when I did not set it explicitly?
How is the off-heap memory calculated?
Below is the configuration I was trying to use.
spark.broadcast.compress true
spark.dynamicAllocation.enabled true
yarn.fail-fast true
spark.shuffle.reduceLocality.enabled true
spark.task.cpus 5
spark.dynamicAllocation.shuffleTracking.enabled true
mapreduce.shuffle.listen.queue.size 2048
spark.rpc.message.maxSize 1024
spark.storage.memoryFraction 0.8
spark.files.useFetchCache true
spark.scheduler.mode FAIR
spark.shuffle.compress true
spark.sql.adaptive.coalescePartitions.initialPartitionNum 3
spark.sql.adaptive.coalescePartitions.enabled true
mapreduce.shuffle.max.connections 100
spark.executor.cores 5
spark.executor.memory 37G
spark.driver.memory 37G
spark.driver.cores 5
spark.executor.instances 29
spark.sql.adaptive.coalescePartitions.parallelismFirst false
spark.storage.replication.proactive true
spark.sql.adaptive.skewJoin.enabled true
spark.network.timeout 300000
spark.broadcast.blockSize 128m
spark.sql.adaptive.coalescePartitions.minPartitionSize 256M
spark.akka.frameSize 1024
spark.speculation true
spark.cleaner.periodicGC.interval 12000
mapreduce.reduce.shuffle.memory.limit.percent 85
spark.logConf false
spark.executor.heartbeatInterval 200000
spark.worker.cleanup.enabled true
spark.sql.adaptive.enabled true
spark.sql.adaptive.advisoryPartitionSizeInBytes 1024M
spark.shuffle.io.preferDirectBufs true
mapreduce.map.log.level ERROR
spark.sql.adaptive.skewJoin.skewedPartitionFactor 6
spark.default.parallelism 40
spark.hadoop.databricks.fs.perfMetrics.enable false
mapreduce.shuffle.max.threads 99
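One thing worth noting: the list above never sets the off-heap properties themselves. In open-source Spark, off-heap memory is opted into explicitly with the two properties sketched below; the 8g size is purely illustrative, chosen only so that heap + off-heap stays under the per-node limit from the error.

import org.apache.spark.sql.SparkSession

// Minimal sketch of sizing off-heap memory explicitly; the 8g value is an
// illustrative assumption, not a recommendation for r5a.4xlarge.
val spark = SparkSession.builder()
  .appName("offheap-sizing-sketch")                // hypothetical app name
  .config("spark.executor.memory", "37g")
  .config("spark.memory.offHeap.enabled", "true")
  .config("spark.memory.offHeap.size", "8g")
  .getOrCreate()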
We are processing a roughly 500 MB data file in EMR.
I am performing the following operations on the file.
Read the CSV:
val df = spark.read.format("csv").load(s3)
Aggregate by key and create the list:
val data = filteredDf.groupBy($"<key>")
.agg(collect_list(struct(cols.head, cols.tail: _*)) as "finalData")
.toJSON
Iterate through each partition, store the per-key aggregation to S3, and send the key to SQS:
data.foreachPartition(partition => {
  partition.foreach(json => ......)
})
The data is skewed, with one account having almost 10M records (~400 MB). I am experiencing an out-of-memory issue during foreachPartition for that account.
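As a quick sanity check of that skew (a sketch, assuming the aggregated dataset data from the snippet above), counting rows per partition should show a single partition carrying the ~10M-record account:

import org.apache.spark.sql.functions.{spark_partition_id, count, lit, col}

// Hypothetical diagnostic: rows per partition of the aggregated dataset,
// largest first; one huge partition confirms the skew described above.
data.groupBy(spark_partition_id().alias("partition_id"))
  .agg(count(lit(1)).alias("rows"))
  .orderBy(col("rows").desc)
  .show(20, false)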
Configuration:
1 driver: m4.4xlarge (16 CPU cores, 64 GB memory)
1 executor: m4.2xlarge (8 CPU cores, 32 GB memory)
driver-memory: 20G
executor-memory: 10G
Partitions: default 200 [most of them don't do anything]
Any help is much appreciated! thanks a lot in advance :)
I am working with Spark GraphX. I am building a graph from a file (around 620 MB, 50K vertices and almost 50 million edges). I am using a Spark cluster with 4 workers (8 cores and 13.4 GB of RAM each) and 1 driver with the same specs. When I submit my .jar to the cluster, one of the workers (seemingly at random) loads all the data onto itself, and all the tasks needed for the computation are sent to that worker, while the remaining three sit idle during the computation. I have tried everything and have found nothing that forces the computation onto all of the workers.
When Spark builds the graph, the RDD of vertices has, say, 5 partitions; but if I repartition that RDD to, for example, 32 (the total number of cores), Spark loads the data onto every worker, yet the computation gets slower.
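One thing I would check here (a sketch, not a verified fix): GraphLoader.edgeListFile accepts a numEdgePartitions argument, so the edges can be split across the cluster when the graph is loaded instead of repartitioning the vertex RDD afterwards. The snippet below reuses sc and netPath from the code further down.

import org.apache.spark.graphx.{GraphLoader, PartitionStrategy}

// Sketch: request 32 edge partitions up front (32 = total cores mentioned above)
// and then apply the same RandomVertexCut strategy the code below already uses.
val partitionedGraph = GraphLoader
  .edgeListFile(sc, netPath, canonicalOrientation = true, numEdgePartitions = 32)
  .partitionBy(PartitionStrategy.RandomVertexCut)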
I'm launching spark-submit this way:
spark-submit --master spark://172.30.200.20:7077 --driver-memory 12g --executor-memory 12g --class interscore.InterScore /root/interscore/interscore.jar hdfs://172.30.200.20:9000/user/hadoop/interscore/network.dat hdfs://172.30.200.20:9000/user/hadoop/interscore/community.dat 111
The code is here:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.{GraphLoader, PartitionStrategy}

object InterScore extends App {
  val sparkConf = new SparkConf().setAppName("Big-InterScore")
  val sc = new SparkContext(sparkConf)

  val t0 = System.currentTimeMillis
  runInterScore(args(0), args(1), args(2))
  println("Running time " + (System.currentTimeMillis - t0).toDouble / 1000)
  sc.stop()

  def runInterScore(netPath: String, communitiesPath: String, outputPath: String) = {
    val communities = sc.textFile(communitiesPath).map(x => {
      val a = x.split('\t')
      (a(0).toLong, a(1).toInt)
    }).cache

    val graph = GraphLoader.edgeListFile(sc, netPath, true)
      .partitionBy(PartitionStrategy.RandomVertexCut)
      .groupEdges(_ + _)
      .joinVertices(communities)((_, _, c) => c)
      .cache

    // send 1 along every edge whose endpoints belong to different communities, summed per vertex
    val lvalues = graph.aggregateMessages[Double](
      m => {
        m.sendToDst(if (m.srcAttr != m.dstAttr) 1 else 0)
        m.sendToSrc(if (m.srcAttr != m.dstAttr) 1 else 0)
      }, _ + _)

    val communitiesIndices = communities.map(x => x._2).distinct.collect
    val verticesWithLValue = graph.vertices.repartition(32).join(lvalues).cache
    println("K = " + communitiesIndices.size)

    graph.unpersist()
    graph.vertices.unpersist()

    communitiesIndices.foreach(c => {
      // COMPUTE c
    })
  }
}
I am using SparkR (Spark 2.0.0, YARN) on a cluster with the following configuration: 5 machines (24 cores + 200 GB RAM each). I wanted to run sparkR.session() with additional arguments to assign only a percentage of the total resources to my job:
if(Sys.getenv("SPARK_HOME") == "") Sys.setenv(SPARK_HOME = "/...")
library(SparkR, lib.loc = file.path(Sys.getenv('SPARK_HOME'), "R", "lib"))

sparkR.session(master = "spark://host:7077",
               appName = "SparkR",
               sparkHome = Sys.getenv("SPARK_HOME"),
               sparkConfig = list(spark.driver.memory = "2g",
                                  spark.executor.memory = "20g",
                                  spark.executor.cores = "4",
                                  spark.executor.instances = "10"),
               enableHiveSupport = TRUE)
The weird thing is that the parameters seemed to be passed to the sparkContext, but at the same time I end up with a number of x-core executors that use 100% of the resources (in this example 5 * 24 = 120 cores available; 120 / 4 = 30 executors).
I tried creating another spark-defaults.conf with no default parameters assigned (so the only defaults are those in the Spark documentation, which should be easily overridden) by:
if(Sys.getenv("SPARK_CONF_DIR") == "") Sys.setenv(SPARK_CONF_DIR = "/...")
Again, when I looked at the Spark UI on http://driver-node:4040, the total number of executors wasn't correct (the "Executors" tab), but at the same time all the config parameters in the "Environment" tab were exactly those I provided in the R script.
Does anyone know what the reason might be? Is the problem with the R API, or some infrastructure/cluster-specific issue (like YARN settings)?
I found you have to use spark.driver.extraJavaOptions, e.g.
spark <- sparkR.session(master = "yarn",
sparkConfig = list(
spark.driver.memory = "2g",
spark.driver.extraJavaOptions =
paste("-Dhive.metastore.uris=",
Sys.getenv("HIVE_METASTORE_URIS"),
" -Dspark.executor.instances=",
Sys.getenv("SPARK_EXECUTORS"),
" -Dspark.executor.cores=",
Sys.getenv("SPARK_CORES"),
sep = "")
))
Alternatively, you can change the spark-submit args, e.g.
Sys.setenv("SPARKR_SUBMIT_ARGS"="--master yarn --driver-memory 10g sparkr-shell")
I have 1 Spark master and 2 slave nodes set up on AWS, with 8 GB of memory each. I have set it up so that a Spark job runs every hour; it reads records from a Cassandra database and processes them in Spark. There are around 5000 records every hour. My Spark master crashed in one of the runs, saying:
"15/12/20 11:04:45 ERROR ActorSystemImpl: Uncaught fatal error from thread [sparkMaster-akka.actor.default-dispatcher-4436] shutting down ActorSystem [sparkMaster]
java.lang.OutOfMemoryError: GC overhead limit exceeded
at scala.math.BigInt$.apply(BigInt.scala:82)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:16)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:42)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:35)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:42)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:35)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3066)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2161)
at org.json4s.jackson.JsonMethods$class.parse(JsonMethods.scala:19)
at org.json4s.jackson.JsonMethods$.parse(JsonMethods.scala:44)
at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:58)
at org.apache.spark.deploy.master.Master.rebuildSparkUI(Master.scala:793)
at org.apache.spark.deploy.master.Master.removeApplication(Master.scala:734)
at org.apache.spark.deploy.master.Master.org$apache$spark$deploy$master$Master$$finishApplication(Master.scala:712)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$28.apply(Master.scala:445)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$28.apply(Master.scala:445)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1.applyOrElse(Master.scala:445)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at org.apache.spark.deploy.master.Master.aroundReceive(Master.scala:52)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
"
Can you please let me know why the Spark master crashed with an out-of-memory error? This is my Spark setup:
_executorMemory=6G
_driverMemory=6G
I create 8 partitions in my code.
Why does the master go down with an out-of-memory error?
Here is the code:
// create the Spark context
_sparkContext = new SparkContext(_conf)

// load the Cassandra table
val tabledf = _sqlContext.read.format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "events", "keyspace" -> "sams")).load

val whereQuery = "addedtime >= '" + _from + "' AND addedtime < '" + _to + "'"
helpers.printnextLine("Where query to run on Cassandra : " + whereQuery)
val rdd = tabledf.filter(whereQuery)
rdd.registerTempTable("rdd")

val selectQuery = "lower(brandname) as brandname, lower(appname) as appname, lower(packname) as packname, lower(assetname) as assetname, eventtime, lower(eventname) as eventname, lower(client.OSName) as platform, lower(eventorigin) as eventorigin, meta.price as price"
val modefiedDF = _sqlContext.sql("select " + selectQuery + " from rdd")

// cache the DataFrame
modefiedDF.cache

// perform the groupBy operation
grprdd = filterrdd.groupBy("brandname", "appname", "packname", "eventname", "platform", "eventorigin", "price").count()

grprdd.foreachPartition { iter =>
  iter.foreach { element =>
    // write to the SQL Server table
    val statement = con.createStatement()
    try {
      statement.executeUpdate(insertQuery)
    } finally {
      if (con != null)
        con.close
    }
  }
}

// clear the cache
_sqlContext.clearCache()
The problem may be that you are asking the Spark driver (on the master node) to use 6 GB and the Spark executor to use another 6 GB, i.e. 12 GB in total, while the system only has 8 GB of RAM available.
Of these 8 GB you should also allow some memory for OS processes (say 1 GB). Thus the total RAM available to Spark (master and worker combined) is only about 7 GB.
Set executorMemory and driverMemory accordingly.
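A minimal sketch of what "accordingly" could look like on these 8 GB nodes; the 2g/5g split and the 1 GB OS allowance are illustrative assumptions, not tested values:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: keep Spark's memory plus roughly 1 GB for the OS within 8 GB per node.
// Note that spark.driver.memory normally has to be supplied at launch time
// (e.g. spark-submit --driver-memory 2g); it is set here only to document the intended split.
val conf = new SparkConf()
  .setAppName("hourly-cassandra-job")   // hypothetical app name
  .set("spark.driver.memory", "2g")
  .set("spark.executor.memory", "5g")
val sc = new SparkContext(conf)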