ADF Tumbling Window Trigger Dependency - azure

I went over the link, but I could not understand what the diagram is trying to convey. The diagram is not conclusive enough, and there is no explanation.
Consider the following diagram from the doc:
The Dependency Offset part seems clear. The second diagram in Dependency Offset tells me that Trigger A's 10 o'clock run should depend on Trigger B's 9 o'clock run.
The Dependency Size part does not quite make sense. How should I interpret it? E.g. the second example in the Dependency Size section says the offset is -1 hr and the size is 2 hrs. What is the size referring to here? The size of 2 hrs defined there causes an overlap of 1 hr (i.e. between 10 and 11) with the complete window of Trigger A, which effectively leaves Trigger A with no time to execute. I am sure I am interpreting it wrong. Could someone please help?

Imagine you have a job that runs every hour to produce "usage metrics" for the past hour. Imagine "Trigger B" runs this job hourly.
Now you want a job (Trigger A) that creates a "rolling average" of usage metrics over the past two hours. To reduce computational requirements, you can produce this by taking the average of the usage metrics produced for the last two hours (calculations output by Trigger B).
You still run this "rolling average" ("Trigger A") job every hour, but it relies not only on the most recent "Trigger B" run that ran on this hour, but the previous one as well. The "Dependency Size" of 2 hours (with offset of -1 hour) means this trigger A will not run unless the 2 "Trigger B" jobs (from this hour and last hour) have both completed successfully.
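If it helps to see the arithmetic, here is a minimal Python sketch of how offset and size combine into the dependency window (this only illustrates the semantics described above, not ADF's actual implementation):

from datetime import datetime, timedelta

def dependency_window(trigger_a_window_start, offset, size):
    # The dependency window starts at Trigger A's window start shifted by the
    # offset, and its length is the dependency size.
    start = trigger_a_window_start + offset
    return start, start + size

# Trigger A's 10:00-11:00 window with offset = -1 h and size = 2 h depends on
# the interval 09:00-11:00, i.e. Trigger B's 09:00-10:00 and 10:00-11:00 runs.
start, end = dependency_window(datetime(2022, 1, 1, 10),
                               timedelta(hours=-1),
                               timedelta(hours=2))
print(start, end)  # 2022-01-01 09:00:00  2022-01-01 11:00:00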
Agreed that the diagram conveys the concept poorly, for a number of reasons. For example, in my opinion, Trigger B should be named A and vice versa to make the dependency order more intuitively obvious.
Also, the individual/multiple pipeline runs should be visualized within the windows. Something like this might have been better:


PySpark groupby strange behaviour

I am querying a large (2 trillion records) parquet file using PySpark, partitioned by two columns, month and day.
If I run a simple query as:
SELECT month, day, count(*) FROM mytable
WHERE month >= 201801 AND month < 202301 -- two years data
GROUP BY month, day
ORDER BY month, day
the query is executed in 5 min or less. Super good performance!
If I remove the WHERE condition, it will bring in the whole data lake (4 years). This query takes 1.5 hours to execute.
This behaviour is far from normal. I guess it might be related to the large amount of data being queried on the worker nodes, leading to GC or shuffle, but that is just a guess.
How can I debug the above situation?
My understanding is that Spark should be clever enough to calculate per partition (since it is a distributed environment) and take around 5 min * 2 (double the years), not such a big difference.
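One thing worth checking first is whether the filtered query is actually benefiting from partition pruning. A quick way to see this (a sketch, assuming the table and column names from the query above) is to inspect the physical plan:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Look for "PartitionFilters: [...]" in the FileScan node of the output;
# if the month predicates appear there, only those partitions are read.
spark.sql("""
    SELECT month, day, count(*) AS cnt
    FROM mytable
    WHERE month >= 201801 AND month < 202301
    GROUP BY month, day
""").explain()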
Edit 1: Adding information from the Spark UI
I will post screenshots of the two runs: 4 years of data (1.7 hours) and 3 years of data (7.5 min). The 4-year run is always shown first.
(Screenshots: general overview, job page, Stage 1 (the heavy stage), Stage 2, SQL.)
Edit 2 - New findings - Scheduler delay
In the heavy task, I have found a scheduler delay.
If this is the case, what is the approach?
Thanks a lot!
I have found what the problem was. By increasing the memory (and, less importantly, the cores) of the driver, the problem was solved.
How did I reach this conclusion?
First, I knew my data was not very skewed (as pointed out by @samkart and @Leonid Vasilev), but I checked again.
Second, all metrics were very similar to each other, without big differences, so it had to be something else.
Third and lastly, I opened the stage event timeline and found a very interesting issue; see Edit 2.
After further investigating why my scheduler was so delayed, I didn't find the real reason, but this sentence gave me the hint that the problem was in the driver:
Scheduler delay (blue) is the time spent waiting. There is something that the executors are waiting for - often this is waiting for the driver that controls and coordinates the jobs.
source: linked blog post
In that post, the author also mentions something very important that I want to add:
See all that red and blue? This is a sure sign that something is up. What we really want to see is lots of green - the proportion of time spent doing work - I mean real work - the part where Spark does the number crunching.
TL;DR:
The biggest problem came from scheduler delay, which is closely related to the driver. Increasing the driver's memory (and vCPUs) solved the issue.
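For reference, a minimal sketch of what that change looks like (values are illustrative; note that in client mode spark.driver.memory must be set before the driver JVM starts, e.g. via spark-submit, rather than from an already-running session):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("large-parquet-aggregation")
    .config("spark.driver.memory", "16g")  # the main change: more driver memory
    .config("spark.driver.cores", "4")     # secondary, per the answer above
    .getOrCreate()
)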

How to check if the DAG is complete within Given time or not?

I have a DAG A. It runs at, let's say, 10 AM and typically completes within 15-20 minutes, but sometimes it takes longer and, due to some tables in the database, it goes into an endless running state. How can I know whether my DAG completed within a given time frame and, if not, send an email alert saying it has not completed in time and needs to be checked?
My thought process:
Build a parallel DAG, or a process within the same DAG, containing a Python function that checks the start time against the current time and keeps subtracting until it reaches some fixed value, let's say 10 minutes, and then sends an email saying the DAG has not completed.
Please correct me if I am wrong, or suggest other ways to check this.
It sounds like you just need to define an SLA. You can find an example here.
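For what it's worth, a minimal sketch of what that could look like (Airflow 2.x syntax; the DAG id, schedule, task and email address are placeholders):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# If "main_task" has not succeeded within 20 minutes of the scheduled run
# start, Airflow records an SLA miss and emails the address below
# (this requires SMTP to be configured for the Airflow deployment).
with DAG(
    dag_id="dag_a",
    start_date=datetime(2022, 1, 1),
    schedule_interval="0 10 * * *",  # 10 AM daily, as in the question
    default_args={
        "email": ["alerts@example.com"],  # placeholder address
        "sla": timedelta(minutes=20),
    },
    catchup=False,
):
    BashOperator(task_id="main_task", bash_command="echo run the job")

A related option is dagrun_timeout on the DAG (or execution_timeout on a task), which actively fails a run that exceeds the limit instead of only alerting; that may be closer to what you want for the endless-running case.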

Stream Analytics: How can I start and stop a TUMBLINGWINDOW aggregation job in order to reduce costs while still getting the same aggregation results?

Context
I have created a streaming job using the Azure portal which aggregates data using a day-wise TUMBLINGWINDOW. I have attached a code snippet below, modified from the docs, which shows similar logic.
SELECT
DATEADD(day, -1, System.Timestamp()) AS WindowStart,
System.Timestamp() AS WindowEnd,
TollId,
COUNT(*)
FROM Input TIMESTAMP BY EntryTime
GROUP BY TumblingWindow(day, 1), TollId
Now that the job has been running and I can see it producing output, I want to reduce the costs, ideally by setting up some sort of time scheduling so that the job can still produce the same output without being on all the time.
The only real constraint is that the aggregated output at the end of each TUMBLINGWINDOW has to remain the same as if the job were running all the time (no impact of stop-starting on the output).
This then brings me to my question.
Update: 2021-02-28
Before going into the question: another thing that drove me was that through the Azure portal you can manually start and stop a job, and when you start/restart a job you can set a custom start time for the job/query. With this level of control, say I start a job (or have a job running) and then decide to stop it for the majority of the day, and then turn it on at, say, 11:30 PM each day with a custom start time of midnight of the current day. It would then be on for approximately 30 minutes before it outputs the results (yet still, to my understanding, produce the same aggregation results as if it had been on the whole day up until that point). The job could then be paused again at 00:30 AM the next day, stay paused for the majority of the day (1,380 minutes total, until 11:30 PM again), at which point the same logic applies.
This way it remains off for the majority of the day yet can still produce the same output for each day-wise window (correct me if I am wrong in my thinking). The only issue with this seems to be that someone would have to perform it manually, which is why I went to the docs looking for a way to automate it.
Question
How can I start and stop a job in an automated fashion such that the required output would still remain intact but so that the job doesn't have to remain on all the time (like it currently is)?
Does the documentation linked above suffice given the context? If so, what are some possible arrangements of the N minutes (on) and M minutes (off) time variables for this to work?
Is this possible given that I want to aggregate on a one-day TUMBLINGWINDOW (where I want each window to start and end at midnight of each day, as per its default behaviour)?
Eg
Window start: 2022-02-20 00:00:00 Window end: 2022-02-21 00:00:00 (aggregation performed),
Window start: 2022-02-21 00:00:00 Window end: 2022-02-22 00:00:00 (aggregation performed),
Window start: 2022-02-22 00:00:00 Window end: 2022-02-23 00:00:00 (aggregation performed),
....so on
Thoughts
I found this documentation from Microsoft regarding auto-pausing jobs using a few methods.
However, I came across a paragraph (quoted below) which made me doubt whether it is reasonable in my particular use case (a 1-day TUMBLINGWINDOW, as described in the Question section).
Note
There are downsides to auto-pausing a job. The main ones being the loss of the low latency /real time capabilities, and the potential risks from allowing the input event backlog to grow unsupervised while a job is paused. Auto-pausing should not be considered for most production scenarios running at scale.
Could this method still work in my case?
There are 3 ways to lower costs:
downscale your job, you will have higher latency but for a lower cost, up to a point where your job crashes because it runs out of memory over time and/or can't catch up with its backlog. Here you need to keep an eye on your metrics to make sure you can react before it's too late
going further, you can regroup multiple queries into a single job. This job most likely won't be aligned in partitions, so it won't be able to scale linearly (adding SUs is not guaranteed to give you better performance). Same comment as above, plus you need to remember that when you need to scale back up, you probably will have to break down that job into multiple jobs to again be able to scale in a linear fashion
finally you can auto-pause a job, one way to implement that being explained in the doc you linked. I wrote that doc, and what I meant by that comment is that here again you are taking the risk of overloading the job if it can't run long enough to process the backlog of events. This is a risky proposition for most production scenarios
But if you know what you are doing, and are monitoring closely the appropriate metrics (as explained in the doc), this is definitely something you should explore.
Finally, all of these approaches, including the auto-pause one, will deal with tumbling windows transparently for you.
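To make the schedule from the question concrete, here is a rough sketch of the reconcile logic such an automation could run on a timer (Logic Apps, Functions, or anything else). The start/stop helpers are hypothetical placeholders for whatever management API or CLI call you actually use:

from datetime import datetime, time

RESUME_AT = time(23, 30)  # switch the job on at 11:30 PM, per the question's schedule
PAUSE_AT = time(0, 30)    # pause it again at 00:30 AM the next day

def start_asa_job(output_start_time):
    # Placeholder: start the job with OutputStartMode=CustomTime.
    print(f"start job, producing output from {output_start_time.isoformat()}")

def stop_asa_job():
    # Placeholder: stop the job (the definition is kept; streaming state is
    # rebuilt from the input on restart, so input retention must cover it).
    print("stop job")

def reconcile(now, currently_running):
    should_run = now.time() >= RESUME_AT or now.time() < PAUSE_AT
    if should_run and not currently_running:
        # Output start time = midnight of the current day, so the 1-day
        # tumbling window covering "today" is still produced in full.
        start_asa_job(datetime.combine(now.date(), time(0, 0)))
    elif not should_run and currently_running:
        stop_asa_job()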
Update: 2022-03-03 following comments here
Update: 2022-03-04 following comments there
There are 3 time dimensions here:
When the job is running or not: the wall clock
When the time window is expected to output results: Tumbling(day,1) -> 00:00AM every day, this is absolute (on the day, on the hour, on the minute...) and independent of the job start time below
What output you want produced from the job, via the job start time
Let's say you have the job running 24/7 for multiple months, and decide to stop it at noon (12:00PM) on the 1st day of March.
It already has generated an output for the last day of February, at 00:00AM Mar1.
You won't see a difference in output until the following day, 00:00AM Mar2, when you expect to see the daily window of Mar1, but it's not output because the job is stopped.
Let's start the job at 01:00AM Mar2 wall clock time. If you want the missing time window, you should either pick a start time at 'when last stopped' (noon the day before), or a custom time any time before 23:59PM Mar1. What you are driving is the output window you want. Here you are telling ASA you want all the windows from that point onward.
ASA will then reload all the data it needs to generate that window (make sure the event hub has enough retention for that; we don't cache data between restarts in the job): Azure Stream Analytics will automatically look back at the data in the input source. For instance, if you start a job “Now” and if your query uses a 5-minutes Tumbling Window, Azure Stream Analytics will seek data from 5 minutes ago in the input. The first possible output event would have a timestamp equal to or greater than the current time, and ASA guarantees that all input events that may logically contribute to the output have been accounted for.

Spark optimize "DataFrame.explain" / Catalyst

I've got complex software which performs really complex SQL queries (well, not queries, Spark plans, you know). <-- The plans are dynamic, they change based on user input, so I can't "cache" them.
I've got a phase in which Spark takes 1.5-2 min building the plan. Just to make sure, I added "logXXX", then explain(true), then "logYYY", and the explain takes 1 minute 20 seconds to execute.
I've tried breaking the lineage, but this seems to cause worse performance because the actual execution time becomes longer.
I can't parallelize driver work (already did, but this task can't be overlapped with anything else).
Any ideas/guide on how to improve the plan builder in Spark? (like for example, flags to try enabling/disabling and such...)
Is there a way to cache plans in Spark? (so I can run that in parallel and then execute it)
I've tried disabling all possible optimizer rules, setting min iterations to 30... but nothing seems to affect that concrete point :S
I tried disabling wholeStageCodegen and it helped a little, but then the execution is longer, so :).
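For reference, the knobs mentioned above are ordinary SQL confs; a sketch of setting them (the excluded rule shown is only an example, and rules are referenced by their fully qualified class names):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Turn off whole-stage code generation:
spark.conf.set("spark.sql.codegen.wholeStage", "false")

# Exclude specific Catalyst optimizer rules from the optimizer:
spark.conf.set(
    "spark.sql.optimizer.excludedRules",
    "org.apache.spark.sql.catalyst.optimizer.PushDownPredicates",
)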
Thanks!
PS: The plan does contain multiple unions (<20, but quite complex plans inside each union) which are the cause for the time, but splitting them apart also affects execution time.
Just in case it helps someone (and if no-one provides more insights).
I couldn't manage to reduce the optimizer time (and, well, I'm not sure reducing it would even be good, as I might lose execution time), so here is what I did instead.
One of the latest parts of my plan was scanning two big tables and getting one column from each one of them (using windows, aggregations etc...).
So I split my code into two parts:
1- The big plan (cached)
2- The small plan which scans and aggregates two big tables (cached)
And added one more part:
3- Left join/enrich the big plan with the output of "2" (this takes like 10 seconds, the dataset is not that big) and finish the remaining computation.
Now I launch both actions (1, 2) in parallel (using driver-level parallelism/threads), cache the resulting DataFrames, wait for both, and afterwards perform 3.
With this, while the Spark driver (thread 1) is calculating the big plan (~2 minutes), the executors are already executing part "2" (which has a small plan, but big scans/shuffles), and then both get "mixed" in about 10-15 seconds, which is a good improvement in execution time on top of the 1:30 I save while calculating the plan.
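A minimal sketch of that driver-level parallelism (the DataFrame and column names are hypothetical; parts 1 and 2 are whatever builds the big and small plans in your code):

from concurrent.futures import ThreadPoolExecutor

def materialize(df):
    # Cache and force evaluation so the data is materialized on the executors.
    df.cache()
    df.count()
    return df

# big_plan_df is part 1 (the expensive plan the driver spends ~2 min optimizing),
# small_agg_df is part 2 (small plan, big scans/shuffles). Both names are hypothetical.
with ThreadPoolExecutor(max_workers=2) as pool:
    big_future = pool.submit(materialize, big_plan_df)
    small_future = pool.submit(materialize, small_agg_df)
    big_df, small_df = big_future.result(), small_future.result()

# Part 3: enrich the big plan with the small aggregate and finish the computation.
final_df = big_df.join(small_df, on="key_column", how="left")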
Comparing times:
Before I would have
1:30 Spark optimizing time + 6 minutes execution time
Now I have
max
(
1:30 Spark Optimizing time + 4 minutes execution time,
0:02 Spark Optimizing time + 2 minutes execution time
)
+ 15 seconds joining both parts
It's not a huge gain, but quite a few "expensive" people will be waiting for it to finish :)

Predefined (and large) windows? Any stream processing frameworks support this?

All the examples I see of windowing involve defining the windows. E.g., tumbling 1-minute windows, or sliding 1-minute windows, etc. In my situation, all my data has timestamped events, but that's not the primary interest.
All my data also has an associated period that I do not have control over. That is the desired window in my case. The periods are time-based, but they vary from 2-3 weeks, roughly.
So, if I look at just the period, a stream of values might look like this (almost everything from the current period, with a few stragglers from the previous period early on in the current period):
... PERIOD 6, PERIOD 5, PERIOD 6, PERIOD 6, PERIOD 6, PERIOD 6, ...
It's not clear to me how to handle this situation in terms of watermarks/triggers/etc. If I'm understanding the terminology correctly, I've thought of something like this: the watermark for PERIOD N occurs when the first event with PERIOD (N+1) is processed. The lateness horizon (for garbage-collecting state) for the PERIOD N window can be 1-2 days after the timestamp of the first event with PERIOD (N+1). I'd like triggers to be accumulating and fire every 5 minutes (ideally, I'd like this trigger interval to increase: more frequent at the beginning of the window, less frequent as time passes).
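To pin down that logic, here is a small framework-agnostic Python sketch of what I mean (purely illustrative; a real implementation would also wait out the lateness horizon before finalizing):

from collections import defaultdict

class PeriodWindows:
    def __init__(self):
        self.acc = defaultdict(list)   # period -> buffered values
        self.max_seen_period = None

    def on_event(self, period, value, emit):
        self.acc[period].append(value)
        if self.max_seen_period is None or period > self.max_seen_period:
            self.max_seen_period = period
            # The first event of PERIOD N+1 acts as the watermark for PERIOD N:
            # everything strictly before the new period can now be finalized.
            for p in sorted(k for k in self.acc if k < period):
                emit(p, sum(self.acc.pop(p)))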
I'm trying to use terminology from this article, https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102 sorry if it's incorrect
I'm particularly confused about how watermarks seem to be continuous and based on event-time. In my case, I have both event-time (timestamp) and event-time (period). If I'm understanding this correctly, the curve of my situation (as in the above article) would look like a step-function?
I haven't yet picked a stream processing framework to use. Does my situation make sense for any of them? Does this require a lot of custom logic? Does any framework make this easier? Is this a known problem with a name?
Any help is appreciated.
In Flink, one way to achieve this is to use a processing-time window for the aggregation. Then you use a rich map function to maintain the accumulated counts before the window. In the end, you sink the aggregates to long-term data storage.
You can take a look at my blog post where we did something similar to this. Take a look at the section "A peek into Milestone Two".
