I have a Kafka topic that started at about 100 GB, which I tried to read into an IMap with Hazelcast Jet. The machine has plenty of memory and I gave the JVM 300 GB of heap. The topic has 147 partitions, but when I run the code, telling the pipeline to read from the topic at "earliest" with local parallelism set to 84, the process doesn't seem to use many cores, and after running for a while the map has nowhere near the number of entries it should (compared to the same data ingested into Elasticsearch at the same time). Now that the topic has grown beyond 500 GB I would expect the process to eventually run out of memory, but it still doesn't use many cores and loads only a fraction of the data.
Does anyone have any ideas why this might be?
Related
I'm relatively new to Spark and I have a few questions about tuning optimizations with respect to the spark-submit command.
I have followed: How to tune spark executor number, cores and executor memory?
and I understand how to utilise the maximum resources of my Spark cluster.
However, I was recently asked how to choose the number of executors, memory and cores when I have a relatively small job to run, since giving it maximum resources would leave them underutilised.
For instance,
if I have to do just a merge job (read files from HDFS and write one single huge file back to HDFS using coalesce) for about 60-70 GB of data in Avro format without compression (assume each file is 128 MB in size, which is the HDFS block size), what would be the ideal memory, number of executors and cores required for this?
Assume my nodes have the same configuration as the one mentioned in the link above.
I can't work out how much memory will be used by the whole job, given that there are no joins, aggregations, etc.
The amount of memory you will need depends on what you run before the write operation. If all you're doing is reading data, combining it, and writing it out, then you will need very little memory per CPU, because the dataset is never fully materialized before being written out. If you're doing joins/group-bys/other aggregate operations, all of those will require much more memory. The exception to this rule is that Spark isn't really tuned for large files and is generally much more performant when dealing with sets of reasonably sized files. Ultimately the best way to get your answer is to run your job with the default parameters and see what blows up.
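For reference, a minimal sketch of the kind of merge job being discussed, assuming Spark 2.4+ with the spark-avro package on the classpath and using placeholder HDFS paths:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hdfs-merge").getOrCreate()

// Read the ~60-70 GB of uncompressed Avro files from HDFS (placeholder path).
val df = spark.read.format("avro").load("hdfs:///data/input/")

// coalesce(1) funnels everything into a single output file without a shuffle,
// so each task only streams data through; nothing is fully materialized in memory.
df.coalesce(1)
  .write
  .format("avro")
  .save("hdfs:///data/merged/")

Note that coalesce(1) forces a single task to write the whole output, which is usually the bottleneck for a file of this size.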
If I have a cluster of 5 nodes, each node having 1 GB of RAM, and my data file is 10 GB distributed across all 5 nodes, say 2 GB on each node, now if I trigger
val rdd = sc.textFile("filepath")
rdd.collect
will Spark load the data into RAM, and how will Spark deal with this scenario?
Will it refuse straight away, or will it process it?
Let's understand the question first, @intellect_dp. You have a cluster of 5 nodes (here by "node" I mean a machine, which generally includes a hard disk, RAM, a 4-core CPU, etc.), each node having 1 GB of RAM, and you have a 10 GB data file distributed so that 2 GB of data resides on the hard disk of each node. Let's also assume that you are using HDFS and that your block size at each node is 2 GB.
Now let's break this down:
each block size = 2GB
RAM size of each node = 1GB
Due to lazy evaluation in Spark, only when an action is triggered will it load your data into RAM and execute it.
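A quick sketch of that laziness, using a placeholder path:

val rdd = sc.textFile("filepath")     // transformation: nothing is read yet
val upper = rdd.map(_.toUpperCase)    // still lazy, still nothing loaded
upper.count()                         // action: only now does Spark read and process the data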
Here you are saying that you are using collect as your action. Now the problem is that the RAM size is smaller than your block size, and if you process it with Spark's default configuration (1 block = 1 partition), and considering that no further nodes will be added, then it will give you an out-of-memory exception.
Now the question: is there any way Spark can handle this kind of large data with the given hardware provisioning?
Answer: yes. First you need to set the minimum number of partitions:
val rdd = sc.textFile("filepath",n)
Here n is the minimum number of partitions. Since we only have 1 GB of RAM per node, we need each partition to stay well under 1 GB, so let's say we take n = 4.
Now, as your block size is 2 GB and each block is split into 4 partitions:
each partition size will be 2 GB / 4 = 500 MB;
Spark will process this first 500 MB partition and convert it into an RDD partition, and when the next 500 MB chunk comes in, the earlier partition will be spilled to the hard disk (given that you have set the storage level to MEMORY_AND_DISK).
In this way it will process your whole 10 GB data file with the given cluster hardware configuration.
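A minimal sketch of the approach described above (the path is a placeholder; note that textFile's second argument is a minimum partition count for the whole input, so for the full 10 GB you would pass a correspondingly larger value than 4):

import org.apache.spark.storage.StorageLevel

// Raising the minimum number of partitions makes each partition smaller,
// so that it fits within the 1 GB of RAM available per node.
val rdd = sc.textFile("filepath", 4)

// MEMORY_AND_DISK lets partitions that no longer fit in memory spill to
// local disk instead of causing an out-of-memory error.
rdd.persist(StorageLevel.MEMORY_AND_DISK)

rdd.count()   // the action that actually triggers reading and caching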
Now, I personally would not recommend the given hardware provisioning for such a case:
it will certainly process the data, but there are a few disadvantages:
firstly, it involves multiple I/O operations, making the whole process very slow;
secondly, if any lag occurs in reading from or writing to the hard disk, your whole job may fail, and you will get frustrated with such a hardware configuration. On top of that, you can never be sure that Spark will be able to process your data and produce a result as the data grows.
So try to keep I/O operations to a minimum, and
utilise Spark's in-memory computation power, with the addition of a few more resources, for faster performance.
When you use collect, all the data is gathered as an array on the driver node only.
From that point on, Spark's distribution across the other nodes plays no part. You can think of it as a pure Java application running on a single machine.
You can set the driver's memory with spark.driver.memory and ask for 10 GB.
From that moment, if you don't have enough memory for the array, you will probably get an OutOfMemory exception.
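A minimal sketch of that, with the caveat that in client mode spark.driver.memory must be set before the driver JVM starts (e.g. via spark-submit's --driver-memory flag or spark-defaults.conf), not from inside an already-running application:

import org.apache.spark.{SparkConf, SparkContext}

// Only effective if the launcher picks this up before the driver JVM starts;
// otherwise pass --driver-memory 10g to spark-submit instead.
val conf = new SparkConf()
  .setAppName("collect-example")
  .set("spark.driver.memory", "10g")

val sc = new SparkContext(conf)

val rdd = sc.textFile("filepath")
val data = rdd.collect()   // the entire dataset must now fit in the driver's 10 GB heap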
On the other hand, if we do that, performance will be impacted and we will not get the speed we want.
Also, Spark only keeps results in RDDs, so the result would not be the complete data; in the worst case, if we do a select * from tablename, it will return the data in chunks, as much as it can afford...
So, in one of our Kafka topics, there's close to 100 GB of data.
We are running Spark Structured Streaming to get the data into S3.
When the data is up to 10 GB, streaming runs fine and we are able to get the data into S3.
But with 100 GB, it is taking forever to stream the data from Kafka.
Question: How does Spark Streaming read data from Kafka?
Does it take the entire data from current offset?
Or does it take in batch of some size?
Spark will work off consumer groups, just as any other Kafka consumer, but in batches. Therefore it takes as much data as possible (based on various Kafka consumer settings) from the last consumed offsets. In theory, if you have the same number of partitions, with the same commit interval as 10 GB, it should only take 10x longer to do 100 GB. You've not stated how long that currently takes, but to some people 1 minute vs 10 minutes might seem like "forever", sure.
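To make the batching concrete, here is a minimal sketch of a Structured Streaming job reading from Kafka and writing to S3; maxOffsetsPerTrigger is the option that caps how many records each micro-batch pulls, and the broker address, topic and paths are placeholders:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kafka-to-s3").getOrCreate()

val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")   // placeholder broker
  .option("subscribe", "mytopic")                     // placeholder topic
  .option("startingOffsets", "earliest")
  .option("maxOffsetsPerTrigger", "1000000")          // cap records per micro-batch
  .load()

stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("parquet")
  .option("path", "s3a://my-bucket/data/")                      // placeholder bucket
  .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
  .start()
  .awaitTermination()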
I would recommend you plot the consumer lag over time using the kafka-consumer-groups command line tool combined with something like Burrow or Remora... If you notice an upward trend in the lag, then Spark is not consuming records fast enough.
To overcome this, the first option would be to ensure that the number of Spark executors is evenly consuming all Kafka partitions.
You'll also want to make sure you're not doing any major data transforms other than simple filters and maps between consuming and writing the records, as those also introduce lag.
For non-Spark approaches, I would like to point out that the Confluent S3 connector is also batch-y in that it'll only periodically flush to S3, but the consumption itself is still closer to real-time than Spark. I can verify that it's able to write very large S3 files (several GB in size), though, if the heap is large enough and the flush configurations are set to large values.
Secor by Pinterest is another option that requires no manual coding.
I'd like to know if my understanding of partitioning in Spark is correct.
I always thought about the number of partitions and their size and never about the worker they were processed by.
Yesterday, as I was playing a bit with partitioning, I found out that I was able to track the cached partitions' location using the WEB UI (Storage -> Cached RDD -> Data Distribution) and it surprised me.
I have a cluster of 30 cores (3 cores * 10 executors) and I had an RDD of about 10 partitions. I tried to expand it to 100 partitions to increase the parallelism, only to find out that almost 90% of the partitions ended up on the same worker node, so my parallelism was limited not by the total number of CPUs in my cluster but by the number of CPUs of the node holding 90% of the partitions.
I tried to find answers on Stack Overflow, and the only answer I could find was about data locality: Spark detected that most of my files were on that node, so it decided to keep most of the partitions there.
Is my understanding correct?
And if it is, is there a way to tell Spark to really shuffle the data?
So far this "data locality" lead to heavy underutilization of my cluster....
I have read from multiple sources on Stack Overflow that using multiple consumer groups will enable me to read from the same partition of the same topic with multiple consumers concurrently.
For example,
Can multiple Kafka consumers read from the same partition of same topic by default?
How Kafka broadcast to many Consumer Groups
Parallel Producing and Consuming in Kafka
So this is a follow-up question to my previous question, but in a slightly different context. Given that we can only read from and write to the partition leader, and that Kafka logs are stored on hard disk, each partition represents a log.
Now, if I have 100 consumer groups reading from the same topic and the same partition, that basically means reading from the same machine, because we are only allowed to read from the partition leader and cannot read from partition replicas. So how does Kafka even scale this kind of read load?
How does it achieve parallelism? Does it just spawn many threads and processes on the same machine to handle all the consumption concurrently? How can this approach scale horizontally?
Thank you
If you have 100 consumers all reading from the same partition, then the data for that partition will be cached in the Linux OS page cache (memory), and so 99, or perhaps even all 100, of the consumers will be fetching data from RAM instead of from a spinning hard disk. This is a distinctive feature of Kafka: despite being written in a JVM language, it is designed to leverage off-heap memory (the OS page cache) for extra performance when many consumers read the same data in parallel.
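To illustrate the setup being described, here is a minimal sketch of two consumers in different consumer groups subscribing to the same topic; each group tracks its own offsets, so both independently read every record of every partition (the broker address, topic and group names are placeholders):

import java.time.Duration
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

def consumerFor(groupId: String): KafkaConsumer[String, String] = {
  val props = new Properties()
  props.put("bootstrap.servers", "broker:9092")        // placeholder broker
  props.put("group.id", groupId)                       // distinct group => independent offsets
  props.put("auto.offset.reset", "earliest")
  props.put("key.deserializer", classOf[StringDeserializer].getName)
  props.put("value.deserializer", classOf[StringDeserializer].getName)
  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(Collections.singletonList("mytopic"))   // placeholder topic
  consumer
}

// Both groups are assigned the same partitions; the broker serves their
// fetches from the OS page cache once the data is hot.
val groupA = consumerFor("group-a")
val groupB = consumerFor("group-b")

val recordsA = groupA.poll(Duration.ofSeconds(1))
val recordsB = groupB.poll(Duration.ofSeconds(1))
println(s"group-a got ${recordsA.count()} records, group-b got ${recordsB.count()} records")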