I have a job running whose Event Timeline looks as follows. I am trying to understand the gaps between these single lines; they seem to be parallel but not immediately sequential with the other stages...
Any other insight from this, and what is the cluster doing during these gaps?
Without any code to look at, a blind guess is that during those gaps the driver is busy doing some work. If you are doing a .collect(), or a broadcast(), or any type of local processing in the driver program, then the executors will sit idle, waiting to have work assigned to them.
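For example, here is a minimal sketch of a driver program that would produce such a gap; the file path and the logic are made up purely for illustration (sc is the existing SparkContext):

// Action: the executors run tasks, visible as a stage in the timeline.
val words = sc.textFile("hdfs:///data/sample.txt").flatMap(_.split("\\s+"))
val counts = words.countByValue()
// Driver-only work: the executors sit idle here, which shows up as a gap
// between the stage above and the stage below in the event timeline.
val frequent = counts.filter(_._2 > 100).keys.toSet
val bcFrequent = sc.broadcast(frequent) // broadcast is also driven from the driver
// Next action: the executors are busy again.
val kept = words.filter(w => bcFrequent.value.contains(w)).count()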
Note that the visualization only shows the tasks listed in the table below it. If you change the page size or the sorting of that table, you can see the actual pattern.
I often use the Spark UI to monitor my jobs as it is quite convenient. I like the timeline as it gives me hints about what takes time and where I can improve things.
However, when I run a job with a lot of executors, the beginning of the timeline is completely flooded by the addition of each individual executor:
Here I zoomed out as far as I could, and even then I could not get the whole timeline displayed in a single window. I know I can just scroll down until I get to the bottom, but it becomes quite tedious since I have to do it each time I reload the page.
So I wonder if there is a setting somewhere to disable those tags about executors being added. Unfortunately, I have not been able to find anything relevant on the internet (maybe I am searching with the wrong keywords?).
I have a Spark application written in Scala. The application code is more or less ready and the job runs for around 10-15 minutes.
There is an additional requirement to provide the status of the application while the Spark job is executing. I know that Spark evaluates lazily and that it is not nice to pull data back to the driver program during execution. Typically, I would be interested in providing status at regular intervals.
E.g., if there are 20 functional points configured in the Spark application, then I would like to report the status of each of these functional points as and when they are executed, or as each step completes, during execution.
These incoming status updates for the functional points will then be fed to a custom user interface to display the status of the job.
Can someone give me some pointers on how this can be achieved?
There are a few things you can do on this front that I can think of.
If your job contains multiple actions, you can write a script to poll for the expected output of those actions. For example, imagine your job has 4 different DataFrame save calls. You could have your status script poll HDFS/S3 to see whether the data has shown up in the expected output locations yet. As another example, I have used Spark to index into ElasticSearch, and I have written status logging that polls for how many records are in the index in order to print periodic progress.
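As an illustration of this first approach, a small polling script could look like the sketch below; the output paths and the polling interval are assumptions, and it relies on the _SUCCESS marker files that Hadoop-style output committers write:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// Hypothetical output locations of the job's 4 save calls.
val expectedOutputs = Seq(
  "hdfs:///pipeline/step1/_SUCCESS",
  "hdfs:///pipeline/step2/_SUCCESS",
  "hdfs:///pipeline/step3/_SUCCESS",
  "hdfs:///pipeline/step4/_SUCCESS"
)
val hadoopConf = new Configuration()
var done = 0
while (done < expectedOutputs.size) {
  done = expectedOutputs.count { p =>
    val path = new Path(p)
    path.getFileSystem(hadoopConf).exists(path)
  }
  println(s"$done of ${expectedOutputs.size} outputs written")
  if (done < expectedOutputs.size) Thread.sleep(30000) // poll every 30 seconds
}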
Another thing I have tried is using accumulators to keep rough track of progress and of how much data has been written. This works OK, but it is a little arbitrary when Spark updates the visible totals with information from the executors, so I haven't found it to be too helpful for this purpose in general.
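A rough sketch of the accumulator idea follows; inputRdd and the output path are hypothetical, the longAccumulator API shown is the Spark 2.x one, and the driver-side value only becomes fully reliable once the action completes:

// Count records as the executors process them.
val recordsProcessed = sc.longAccumulator("recordsProcessed")
val processed = inputRdd.map { line =>
  recordsProcessed.add(1)
  line.toUpperCase
}
// A separate driver thread can log recordsProcessed.value periodically
// while the action below runs, but the number may lag behind the executors.
processed.saveAsTextFile("hdfs:///pipeline/output")
println(s"Processed ${recordsProcessed.value} records")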
The other approach is to poll Spark's status and metrics APIs directly. You will be able to pull all of the information backing the Spark UI into your code and do whatever you want with it. It won't necessarily tell you exactly where you are in your driver code, but if you manually work out how your driver maps to stages, you can figure that out. For reference, here is the documentation on polling the status API:
https://spark.apache.org/docs/latest/monitoring.html#rest-api
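A minimal sketch of polling the REST API from outside the job is shown below; the host, port and application id are placeholders (the driver serves the API on port 4040 by default, and the history server exposes the same endpoints for finished applications):

// List the running application(s) known to this driver UI.
val base = "http://driver-host:4040/api/v1"
val appsJson = scala.io.Source.fromURL(s"$base/applications").mkString
println(appsJson)

// With an application id taken from the JSON above (placeholder here),
// pull the stage-level information that backs the Spark UI.
val appId = "app-00000000000000-0000"
val stagesJson = scala.io.Source.fromURL(s"$base/applications/$appId/stages").mkString
println(stagesJson)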
I am currently using Spark to process documents. I have two servers at my disposal (innov1 and innov2) and I am using YARN as the resource manager.
The first step is to gather the paths of the files from a database, filter them, repartition them and persist them in an RDD[String]. However, I can't manage to get the persisted data shared fairly among all the executors:
[Screenshot: persisted RDD memory taken among executors]
and this leads to the executors not doing the same amount of work afterwards:
[Screenshot: work done by each executor (do not care about the 'dead' ones here, it's another problem)]
And this happens randomly: sometimes it's innov1 that takes all the persisted data, and then only the executors on innov1 work (but in general it tends to be innov2). Right now, each time two executors land on innov1, I just kill the job, relaunch it, and pray for them to end up on innov2 (which is utterly stupid and defeats the point of using Spark).
What I have tried so far (and that didn't work):
make the driver sleep 60 seconds before loading from the database (maybe innov1 takes more time to wake up?)
add spark.scheduler.minRegisteredResourcesRatio=1.0 when I submit the job (same idea as above)
persist with replication x2 (idea from this link), hoping that some of the blocks would be replicated on innov1 (see the sketch after this list)
Note on point 3: sometimes a replica was persisted on the same executor (which is a bit counter-intuitive), or, even weirder, a block was not replicated at all (is innov2 unable to communicate with innov1?).
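The sketch of points 2 and 3 mentioned above (the values and the RDD name are placeholders; the actual simplified code is further down in the edit):

// Point 2: passed at submit time
//   spark-submit --conf spark.scheduler.minRegisteredResourcesRatio=1.0 ...
// Point 3: keep one extra replica of each cached block
import org.apache.spark.storage.StorageLevel
someRdd.persist(StorageLevel.MEMORY_ONLY_2)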
I am open to any suggestion, or link to similar problems I would have missed.
Edit:
I can't really put code here, as it's part of my company's product. I can give a simplified version however:
val rawHBaseRDD: RDD[(ImmutableBytesWritable, Result)] = sc
  .newAPIHadoopRDD(...)
  .map(x => (x._1, x._2)) // from the doc of newAPIHadoopRDD
  .repartition(200)
  .persist(MEMORY_ONLY)

val pathsRDD: RDD[(String, String)] = rawHBaseRDD
  .mapPartitions { iter =>
    // ...
    // extract the key and the path from ImmutableBytesWritable and
    // Result.rawCells()
    // ...
  }
  .filter(/* some cond */)
  .repartition(200)
  .persist(MEMORY_ONLY)
For both persists, everything ends up on innov2. Is it possible that this is because the data are only on innov2? Even if that is the case, I would assume that repartition helps to share the rows between innov1 and innov2, but that doesn't happen here.
Your persisted data set is not very big - some ~100MB according to your screenshot. You have allocated 10 cores with 20GB of memory, so the 100MB fits easily into the memory of a single executor and that is basically what is happening.
In other words, you have allocated many more resources than are actually needed, so Spark just randomly picks the subset of resources that it needs to complete the job. Sometimes those resources happen to be on one worker, sometimes on another and sometimes it uses resources from both workers.
You have to remember that to Spark, it makes no difference if all resources are placed on a single machine or on 100 different machines - as long as you are not trying to use more resources than are available (in which case you would get an OOM).
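For illustration, if you want the allocation to match the job more closely, you can request fewer and smaller executors; the numbers below are arbitrary examples, not a recommendation:

// On the command line (YARN):
//   spark-submit --num-executors 2 --executor-cores 2 --executor-memory 2g ...
// Or equivalently before the SparkContext is created:
import org.apache.spark.SparkConf
val conf = new SparkConf()
  .set("spark.executor.instances", "2")
  .set("spark.executor.cores", "2")
  .set("spark.executor.memory", "2g")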
Unfortunately (fortunately?) the problem resolved itself today. I assume it is not Spark-related, as I had not modified the code before the resolution.
It's probably due to the complete reboot of all services with Ambari (even if I am not 100% sure, because I had already tried this before), as it's the only "major" change that happened today.
I'm playing with the idea of having long-running aggregations (possibly a one-day window). I realize other answers on this site say that you should use batch processing for this.
I'm specifically interested in understanding this function, though. It sounds like it would use constant space to do an aggregation over the window, one interval at a time. If that is true, it sounds like a day-long aggregation would be viable (especially since it uses checkpointing in case of failure).
Does anyone know if this is the case?
This function is documented in the streaming programming guide (https://spark.apache.org/docs/2.1.0/streaming-programming-guide.html) as follows:
A more efficient version of the above reduceByKeyAndWindow() where the reduce value of each window is calculated incrementally using the reduce values of the previous window. This is done by reducing the new data that enters the sliding window, and “inverse reducing” the old data that leaves the window. An example would be that of “adding” and “subtracting” counts of keys as the window slides. However, it is applicable only to “invertible reduce functions”, that is, those reduce functions which have a corresponding “inverse reduce” function (taken as parameter invFunc). Like in reduceByKeyAndWindow, the number of reduce tasks is configurable through an optional argument. Note that checkpointing must be enabled for using this operation.
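For reference, a usage sketch of this incremental variant over a one-day window; the batch interval, input source, slide interval and checkpoint path below are assumptions:

import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(60))    // 1-minute batches (assumed)
ssc.checkpoint("hdfs:///checkpoints/daily-counts") // checkpointing is required for this variant
val counts = ssc.socketTextStream("some-host", 9999)
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1L))
  .reduceByKeyAndWindow(
    (a: Long, b: Long) => a + b, // reduce the new data entering the window
    (a: Long, b: Long) => a - b, // "inverse reduce" the old data leaving the window
    Minutes(24 * 60),            // window: one day
    Minutes(10)                  // slide: every 10 minutes (assumed)
  )
counts.print()
ssc.start()
ssc.awaitTermination()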
After researching this on the MapR forums, it seems that it would definitely use a constant level of memory, making a daily window possible assuming you can fit one day of data in your allocated resources.
The two downsides are that:
Doing a daily aggregation may only take 20 minutes. Doing a window over a day means that you're using all those cluster resources permanently rather than just for 20 minutes a day. So, stand-alone batch aggregations are far more resource efficient.
It's hard to deal with late data when you're streaming exactly over a day. If your data is tagged with dates, then you need to wait until all of your data has arrived. A one-day window in streaming would only be good if you were literally just doing an analysis of the last 24 hours of data regardless of its content.
At 10 PM each Tuesday, Oracle suddenly starts generating huge REDO logs until the disk runs out of space. According to the logs, my application is not running any huge queries or anything else unusual during this time.
The only thing I can find is that, according to dba_scheduler_job_run_details, an Oracle job started right at that time. I can't find any info on Google about this job, so I am desperate for any ideas.
Info from dba_scheduler_job_run_details:
JOB_NAME: ORA$AT_SA_SPC_SY_254
STATUS: STOPPED
ACTUAL_START_DATE: 11-03-22 22:00:02.125060000 CST6CDT
RUN_DURATION: 9:4:19.0
10 PM is usually the time that automatic statistics gathering starts, although it normally runs every day. In 11g, stats gathering uses autotasks instead of the scheduler; try looking for the stats job with a query like this:
select * from dba_autotask_job_history order by window_start_time desc;
But even if the problem is caused by statistics, it seems odd that it would generate so much REDO. Usually gathering statistics is a lot of reading and a very small amount of writing, unless you've got many small tables that change all the time; in that case the amount of statistics information could be much larger than the actual data. If so, you may need to gather the stats more often, or maybe lock the stats.
Or possibly the statistics process is blowing up on a specific table. The query below shows which tables were most recently analyzed; maybe it will give you a clue:
select last_analyzed, dba_tables.* from dba_tables order by 1 desc nulls last;
If something generates huge REDO logs, then you must have huge DML activity. For example, a cleanup script which tries to purge some data, fails, rolls back, and then tries to do the same task again and again and again...
The best way to prove or disprove your suspicion is the LogMiner tool. It's not trivial to use, but it will tell you which statements (and against which tables) generated most of the redo at that time.