dask processes tasks twice - python-3.x

I noticed that tasks of a Dask graph can be executed several times by different workers.
I also see this log in the scheduler console (I don't know if it is related to resilience):
"WARNING - Lost connection to ... while sending result: Stream is closed"
Is there a way to prevent Dask from executing the same task twice on different workers?
Note that I'm using:
dask 0.15.0
distributed 1.15.1
Thx
Bertrand

The short answer is "no".
Dask reserves the right to call your function many times. This can happen if a worker goes down, or if Dask load-balances and moves tasks around the cluster just after they have started running.
However, you can significantly reduce the likelihood of a task running multiple times by turning off work stealing:
def turn_off_stealing(dask_scheduler):
    # Stop the periodic callback that drives the work-stealing extension
    dask_scheduler.extensions['stealing']._pc.stop()

client.run_on_scheduler(turn_off_stealing)
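If you later want to turn work stealing back on, presumably the same periodic callback can simply be restarted; a minimal sketch under that assumption:

def turn_on_stealing(dask_scheduler):
    # Assumption: restarting the same PeriodicCallback re-enables stealing
    dask_scheduler.extensions['stealing']._pc.start()

client.run_on_scheduler(turn_on_stealing)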

Related

Parallel Python with joblibspark: how to evenly distribute jobs?

I have a project in which joblib works well on one computer: it sends functions to different cores effectively.
Now I have an assignment to do the same thing on a Databricks cluster. I've tried this many ways today, but the problem is that the jobs do not spread out one per compute node. I've got 4 executors and I set n_jobs=6, but when I send 4 jobs through, some of them pile up on the same node, leaving other nodes unused. Here's a picture of the Databricks Spark UI:
Sometimes when I try this, I get 1 job running on a node by itself and all of the rest piled up on one node.
In the joblib and joblibspark docs, I see the parameter batch_size, which specifies how many tasks are sent to a given node. Even when I set it to 1, I get the same problem: nodes unused.
from joblib import Parallel, delayed
from joblibspark import register_spark

register_spark()

output = Parallel(backend="spark", n_jobs=6,
                  verbose=config.JOBLIB_VERBOSE, batch_size=1)(
    delayed(fit_one)(x, model_data=model_data, dlmodel=dlmodel,
                     outdir=outdir, frac=sample_p,
                     score_type=score_type, save=save,
                     verbose=verbose)
    for x in ZZ)
I've hacked at this all day, trying various backends and combinations of settings. What am I missing?

Dask DF operation takes a long time after 100% progress in Dask dashboard

I am working with a large CSV (~60GB; ~250M rows) with Dask in Jupyter.
The first thing I want to do with the DF after loading it is to concatenate two string columns. I can do so successfully, but I noticed that cell execution time does not seem to decrease with higher worker counts (I tried 5, 10, and 20 on a machine with 64 logical cores). If anything, every five or so additional workers seem to add an extra minute to the execution time.
Meanwhile, the progress bar of Dask's dashboard suggests that the task scales well with worker count. At 5 workers the task finishes (according to the dashboard) in about 10-15 min. At 20 workers the task-stream visualisation suggests completion in roughly 3-5 min. But cell execution time remains around 25 min; i.e. in the 5-worker case the cell appears to hang for an extra 10-15 min. after the stream has finished, and in the 20-worker case for 20-22 more min., with no evidence of worker activity as far as I can see.
This is the code that I'm running:
import dask
import dask.dataframe as dd
from dask.diagnostics import ProgressBar
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=20)
client = Client(cluster)

df = dd.read_csv('df_name.csv', dtype={'col1': 'object', 'col2': 'object'})

with ProgressBar():
    df["col_merged"] = df["col3"] + df["col4"]
    df = df.compute()
Python version: 3.9.1
Dask version: 2021.06.2
What am I missing? Could this simply be overhead from having Dask coordinate several workers?
To add to @SultanOrazbayev's answer: the specific thing that takes time after all the tasks are done is copying data from the workers into your client process to assemble the single in-memory dataframe you asked for. This is not a "task", since all the computing has already happened, and it does not parallelise well, because the client is a single thread pulling data from the workers.
As with the comment above: if you want to achieve parallelism, you need to load the data in the workers (which dd.read_csv does) and act on it in the workers to get your result, and only .compute() relatively small things. Conversely, if your data fits comfortably into memory, there was probably nothing to be gained by involving Dask at all; just use pandas.
Running
df = df.compute()
will attempt to load all 250M rows into memory. Even if this is feasible on your machine, you will still spend a lot of time, because each worker has to send its chunk to the client, so there is a lot of data transfer...
The core idea is to bring into memory only the results of the reduced calculations, and distribute the workload among the workers until then.
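A minimal sketch of that idea, assuming the end goal is to keep the merged column rather than to inspect all 250M rows in the notebook ('df_merged.parquet' is a hypothetical output path): write the result from the workers in parallel and only .compute() small, reduced values on the client.

import dask.dataframe as dd
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=20)
client = Client(cluster)

df = dd.read_csv('df_name.csv', dtype={'col1': 'object', 'col2': 'object'})
df["col_merged"] = df["col3"] + df["col4"]

# The heavy work stays on the workers: each partition is written in parallel.
df.to_parquet('df_merged.parquet')

# Only small, reduced results are pulled back into the client process.
n_rows = df.shape[0].compute()
print(n_rows)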

Invisible Delays between Spark Jobs

There are 4 major actions (JDBC writes) in the application, plus a few counts, which in total take around 4-5 minutes to complete.
But the total uptime of the application is around 12-13 minutes.
I see there are certain jobs named "run at ThreadPoolExecutor.java:1149". Just before such a job shows up on the Spark UI, the invisible long delays occur.
I want to know what the possible causes for these delays are.
My application reads 8-10 CSV files and 5-6 views from tables. There are around 59 joins, a few groupBy with agg(sum), and 3 unions.
I am not able to reproduce the issue in the DEV/UAT environments since the data there is not that large.
It's in production, where my manager runs the application for me, that the issue appears.
If anyone has come across such delays in their job, please share your experience of what the potential cause could be. Currently I am working around the unions, i.e. caching the associated dataframes and calling count so as to get the benefit of the cache in the following union (yet to test whether the unions are the reason for the delays).
Similarly, I tried to break the long chain of transformations with cache and count in between, to cut the long lineage.
The time reduced from the initial 18 minutes to 12 minutes, but the issue with invisible delays still persists.
Thanks in advance
I assume you don't have CPU- or IO-heavy code running between your Spark jobs.
So it really is Spark; 99% of the time it is query-planning delay.
You can use
spark.listenerManager.register(QueryExecutionListener) to check different metrics of query-planning performance.
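As a rough, hedged check (this assumes a PySpark driver and a DataFrame df that holds the long chain of joins and unions described above; the question doesn't say which API the application uses), you can time explain(), since it forces analysis, optimisation and physical planning without actually running the job:

import time

start = time.time()
df.explain(True)  # prints the extended (logical + physical) plans, which forces planning
print(f"plan generation took ~{time.time() - start:.1f}s")

If this alone takes minutes, the invisible gap between jobs is being spent in query planning rather than in execution.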

Spark tasks stuck at RUNNING

I'm trying to run a Spark ML pipeline (load some data from JDBC, run some transformers, train a model) on my Yarn cluster but each time I run it, a couple - sometimes one, sometimes 3 or 4 - of my executors get stuck running their first task set (that'd be 3 tasks for each of their 3 cores), while the rest run normally, checking off 3 at a time.
In the UI, you'd see something like this:
Some things I have observed so far:
When I set up my executors to use 1 core each with spark.executor.cores (i.e. run 1 task at a time), the issue does not occur;
The stuck executors always seem to be the ones that had some partitions shuffled to them in order to run the task;
The stuck tasks would ultimately get successfully speculatively executed by another instance;
Occasionally, a single task would get stuck in an executor that is otherwise normal, the other 2 cores would keep working fine, however;
The stuck executor instances look like everything is normal: CPU is at ~100%, there is plenty of memory to spare, the JVM processes are alive, neither Spark nor Yarn logs anything out of the ordinary, and they can still receive instructions from the driver, such as "drop this task, someone else speculatively executed it already" -- though, for some reason, they don't drop it;
Those executors never get killed off by the driver, so I imagine they keep sending their heartbeats just fine;
Any ideas as to what may be causing this or what I should try?
TLDR: Make sure your code is threadsafe and race condition-free before you blame Spark.
Figured it out. For posterity: I was using a thread-unsafe data structure (a mutable HashMap). Since tasks running in the same executor share a JVM, this was resulting in data races that were locking up the separate threads/tasks.
The upshot: when you have spark.executor.cores > 1 (and you probably should), make sure your code is threadsafe.
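As a toy illustration of what "thread-unsafe" means here (plain Python threading, not Spark; in the question's case it was a Scala mutable HashMap shared by tasks in one executor JVM):

import threading
from collections import defaultdict

# Several threads mutating shared state, analogous to multiple tasks
# in one executor touching a shared mutable map.
counts = defaultdict(int)
lock = threading.Lock()

def add(n):
    for _ in range(n):
        with lock:  # remove this lock and increments start getting lost
            counts["key"] += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counts["key"])  # 400000 with the lock; typically less without it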

Why does web UI show different durations in Jobs and Stages pages?

I am running a dummy Spark job that does exactly the same set of operations in every iteration. The following figure shows 30 iterations, where each job corresponds to one iteration. It can be seen that the duration is always around 70 ms except for jobs 0, 4, 16, and 28. The behaviour of job 0 is expected, as that is when the data is first loaded.
But when I click on job 16 to enter its detailed view, the duration is only 64 ms, which is similar to the other jobs; the screenshot of this duration is as follows:
I am wondering: where does Spark spend the (2000 - 64) ms on job 16?
Gotcha! That's exactly the same question I asked myself a few days ago. I'm glad to share my findings with you (hoping that where I'm lacking understanding, others will chime in and fill the gaps).
The difference between what you can see in Jobs and Stages pages is the time required to schedule the stage for execution.
In Spark, a single job can have one or many stages with one or many tasks. That creates an execution plan.
By default, a Spark application runs in FIFO scheduling mode which is to execute one Spark job at a time regardless of how many cores are in use (you can check it in the web UI's Jobs page).
Quoting Scheduling Within an Application:
By default, Spark’s scheduler runs jobs in FIFO fashion. Each job is divided into "stages" (e.g. map and reduce phases), and the first job gets priority on all available resources while its stages have tasks to launch, then the second job gets priority, etc. If the jobs at the head of the queue don’t need to use the whole cluster, later jobs can start to run right away, but if the jobs at the head of the queue are large, then later jobs may be delayed significantly.
You should then see how many tasks a single job will execute and divide that by the number of cores the Spark application has been assigned (you can check it in the web UI's Executors page).
That will give you the estimate on how many "cycles" you may need to wait before all tasks (and hence the jobs) complete.
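For example (hypothetical numbers, just to make the arithmetic concrete):

import math

num_tasks = 200   # total tasks of the job, from the Stages page
num_cores = 8     # total cores assigned to the application, from the Executors page

waves = math.ceil(num_tasks / num_cores)
print(waves)      # 25 "cycles" of tasks before the job can complete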
NB: That's where dynamic allocation comes into play, as you may sometimes want more cores later on while starting with only a few up front. That's the conclusion I offered to my client when we noticed a similar behaviour.
I can see that all the jobs in your example have 1 stage with 1 task (which makes them very simple and highly unrealistic for a production environment). That tells me that your machine could have been busier at different intervals, and so the time Spark took to schedule a job was longer, but once scheduled, the corresponding stage finished just like the stages of the other jobs. I'd say it's a beauty of profiling that it may sometimes (often?) be very unpredictable and hard to reason about.
Just to shed more light on the internals of how the web UI works: the web UI uses a bunch of Spark listeners that collect the current status of the running Spark application. There is at least one Spark listener per page in the web UI. They intercept different execution times depending on their role.
Read about the org.apache.spark.scheduler.SparkListener interface and review the different callbacks to learn about the variety of events they can intercept.
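If you drive Spark from Python (an assumption, since the question doesn't say), a small sketch using PySpark's StatusTracker shows the same kind of job/stage status that these listeners feed into the UI, which can be handy for ad-hoc checks from the driver:

from pyspark import SparkContext

sc = SparkContext.getOrCreate()
tracker = sc.statusTracker()

# Poll the same job/stage status that the web UI pages render
for job_id in tracker.getActiveJobsIds():
    job = tracker.getJobInfo(job_id)
    for stage_id in job.stageIds:
        stage = tracker.getStageInfo(stage_id)
        if stage is not None:
            print(stage.stageId, stage.name,
                  f"{stage.numCompletedTasks}/{stage.numTasks} tasks done")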
