Azure Batch Preempted state

I have a TVM/pool running under Azure Batch and suddenly it went into the Preempted state. The problem is that it is not taking any requests now.
I have also set up a scale formula that gives me a VM whenever I have more than 0 jobs pending to be executed in Azure Batch. But apparently that is not working either; it was working before the TVM went into the preempted state.
How do I deal with this situation?

AFAIK, low-priority nodes can go into the "preempted" state depending on available capacity. For this reason, low-priority VMs are most suitable for certain types of workloads: use low-priority VMs for batch and asynchronous processing workloads where the job completion time is flexible and the work is distributed across many VMs. That is the behavior described here: https://learn.microsoft.com/en-us/azure/batch/batch-low-pri-vms
Very likely the latter part of your question is also related to the fact that your VMs were preempted.
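One way to handle this in the scale formula is to account for preemption explicitly. Here is a minimal sketch along the lines of the preempted-nodes example in the Batch autoscale docs: it caps the pool size, replaces preempted low-priority capacity with dedicated nodes via the service-defined $PreemptedNodeCount variable, and fills the remainder with low-priority nodes. The 25-node cap and the 180-second sampling window are illustrative; combine this with your existing pending-task logic as needed.

maxNumberofVMs = 25;
$TargetDedicatedNodes = min(maxNumberofVMs, $PreemptedNodeCount.GetSample(180 * TimeInterval_Second));
$TargetLowPriorityNodes = min(maxNumberofVMs, maxNumberofVMs - $TargetDedicatedNodes);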
Given the characteristics of low-priority VMs, what workloads can and cannot use them? In general, batch processing workloads are a good fit, as jobs are broken into many parallel tasks or there are many jobs that are scaled out and distributed across many VMs.
To maximize use of surplus capacity in Azure, suitable jobs can scale out.
Occasionally VMs may not be available or are preempted, which results in reduced capacity for jobs and may lead to task interruption and reruns. Jobs must therefore be flexible in the time they can take to run.
Jobs with longer tasks may be impacted more if interrupted. If long-running tasks implement checkpointing to save progress as they execute, then the impact of interruption is reduced. Tasks with shorter execution times tend to work best with low-priority VMs, because the impact of interruption is far less.
Long-running MPI jobs that utilize multiple VMs are not well suited to use low-priority VMs, because one preempted VM can lead to the whole job having to run again.
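To illustrate the checkpointing point above, here is a minimal, untested sketch in Java. The checkpoint.txt path and the 10,000-iteration interval are hypothetical; a real Batch task would persist progress to durable storage such as blob storage rather than a local file.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckpointedTask {
    // Hypothetical checkpoint location; use durable storage in practice.
    private static final Path CHECKPOINT = Paths.get("checkpoint.txt");

    public static void main(String[] args) throws IOException {
        long start = 0;
        // Resume from the last saved position if a previous run was preempted.
        if (Files.exists(CHECKPOINT)) {
            start = Long.parseLong(Files.readString(CHECKPOINT).trim());
        }
        for (long i = start; i < 1_000_000; i++) {
            process(i);
            // Persist progress periodically so a preempted run loses little work.
            if (i % 10_000 == 0) {
                Files.writeString(CHECKPOINT, Long.toString(i));
            }
        }
    }

    private static void process(long i) {
        // ... one unit of work ...
    }
}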
Hope it helps.

Related

Azure functions are slow

I’m testing the CPU performance of functions, so I made a function that finds the prime numbers up to a given number. It is triggered by Azure Service Bus.
On my local machine it runs in 350ms.
The function, when running in a consumption plan, takes around 1000ms.
When I batch send 100 messages to the function, it does scale up to around 16 instances, but the performance of each function decreases considerably to between 3000-7000ms.
When trying a standard service plan with 4 cores, the performance is better, but not that much. It is still considerably slower than my laptop.
This guy here has a similar issue.
Is this the performance/scaling to be expected from Functions? E.g., are they not a great fit for batch processing of CPU-intensive methods?
Would Azure batch be a better fit?
I don't know the exact specification of the hardware that Functions are running on, but you can assume that each instance of Consumption plan is a low-profile single-core VM. If you need to run CPU-intensive latency-critical workload, that's probably not a good match.
Your local machine is probably faster than those instances, so that's where the 350ms vs. 1000ms difference comes from.
The decrease to 3000-7000ms is related to the fact that multiple executions of functions are running at the same time on the same instance. They are competing for CPU, slowing each other down. For pure CPU-bound workloads, you might be better off setting "maxConcurrentCalls": 1 in host.json.
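For reference, assuming the v1 Functions runtime (where the Service Bus settings live at the top level of host.json), that would look something like the snippet below; on v2 and later the equivalent setting moves under extensions.serviceBus.messageHandlerOptions.

{
  "serviceBus": {
    "maxConcurrentCalls": 1
  }
}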

Run time-consuming .exe from Microsoft Azure

I have an executable that performs long calculations and I want to run those calculations on Azure. What would be the optimal service: Batch, or perhaps a VM?
Azure Batch or VM scale sets. Azure Batch is built on top of scale sets and is designed specifically for tasks/jobs, while VM scale sets help with scaling generic VMs.
Use cases for Batch:
Batch is a managed Azure service that is used for batch processing or batch computing--running a large volume of similar tasks to get some desired result. Batch computing is most commonly used by organizations that regularly process, transform, and analyze large volumes of data.
Batch works well with intrinsically parallel (also known as "embarrassingly parallel") applications and workloads. Intrinsically parallel workloads are easily split into multiple tasks that perform work simultaneously on many computers.
More info here for batch: https://azure.microsoft.com/en-us/documentation/articles/batch-technical-overview/
If you can change the doctype to multi-part and you're able to suspend your long job every minute or so to update progress, that will make it more interactive for the user and stop the HTTP connection from timing out. You could also add a cancel-job button. Or is the question about something else?

Is it possible to avoid or minimize the use of scheduling policies in operating system design?

I recently stumbled upon the question above but I am not sure if I understand what it is asking.
How would one avoid the use of scheduling policies?
I would think that there isn't any other way...
Scheduling policy has nothing to do with resource allocation! Processes are scheduled, and they are allocated resources as a result.
From the "Resource allocation (computer)" description on Wikipedia:
When the user opens any program this will be counted as a process, and therefore requires the computer to allocate certain resources for it to be able to run. Such resources could include access to a section of the computer's memory, data in a device interface buffer, one or more files, or the required amount of processing power.
I don't know how you got confused between them. Every process will, at one time or another, get scheduled; unless the CPU scheduler is an unfair one.
EDIT :
How would one avoid the use of scheduling policies?
If there are more than one user-process to be executed, then one has to apply the scheduling policy so that the processes get executed in some order. There has to be a queue to hold all the processes. See a different case in BareMetal OS below.
Then there is BareMetal OS, which is a single-address-space OS.
Multitasking on BareMetal is unusual for operating systems in this day and age. BareMetal uses an internal work queue that all CPU cores poll. A task added to the work queue will be processed by any available CPU core in the system and will execute until completion, which results in no context switch overhead.
So BareMetal OS doesn't use any scheduling policy; it is based on the cores polling the work queue.
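To make that concrete, here is a minimal, untested Java sketch of the same run-to-completion model. Plain threads stand in for CPU cores; this is an illustration of the idea, not BareMetal's actual code.

import java.util.concurrent.ConcurrentLinkedQueue;

public class WorkQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Shared work queue that every "core" polls.
        ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < 8; i++) {
            final int id = i;
            queue.add(() -> System.out.println("task " + id + " ran to completion"));
        }
        // Threads stand in for CPU cores.
        Thread[] cores = new Thread[2];
        for (int c = 0; c < cores.length; c++) {
            cores[c] = new Thread(() -> {
                Runnable task;
                // Each claimed task runs to completion: no preemption,
                // no context-switch overhead.
                while ((task = queue.poll()) != null) {
                    task.run();
                }
            });
            cores[c].start();
        }
        for (Thread core : cores) {
            core.join();
        }
    }
}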

What does it mean that Spark’s internal Fair Scheduler lets long-lived applications define queues?

I am trying to understand Spark's job scheduling and came across this point in Learning Spark:
"Spark provides a mechanism through configurable intra-application
scheduling policies. Spark’s internal Fair Scheduler lets long-lived
applications define queues for prioritizing scheduling of tasks"
Could you please give me a bit more detail on this?
As described in the Fair Scheduler documentation, you can maintain multiple pools, each with its own scheduling policy, minimum share (minShare), and relative weight (weight) for resource allocation. The last one is described as follows:
If you give a specific pool a weight of 2, for example, it will get 2x more resources as other active pools. Setting a high weight such as 1000 also makes it possible to implement priority between pools—in essence, the weight-1000 pool will always get to launch tasks first whenever it has jobs active.
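Concretely, the pools ("queues") are declared in an allocation file and selected per submitting thread. A minimal sketch, with illustrative pool names, following the fairscheduler.xml template from the Spark docs:

<?xml version="1.0"?>
<allocations>
  <pool name="production">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>3</minShare>
  </pool>
  <pool name="adhoc">
    <schedulingMode>FIFO</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>

A long-lived application then routes jobs to a queue by setting the spark.scheduler.pool local property on the submitting thread, e.g. sc.setLocalProperty("spark.scheduler.pool", "production"), with spark.scheduler.mode set to FAIR and spark.scheduler.allocation.file pointing at the file above.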

Eclipse RCP: Only one Job runs at a time?

The Jobs API in Eclipse RCP apparently works much differently than I expected. I thought that creating and scheduling multiple Jobs would actually cause multiple worker threads to be created, executing the Jobs in parallel unless there was an ISchedulingRule conflict.
I went back and read the documentation more closely, and also discovered this comment in the JobManager class:
/**
* Returns a running or blocked job whose scheduling rule conflicts with the
* scheduling rule of the given waiting job. Returns null if there are no
* conflicting jobs. A job can only run if there are no running jobs and no blocked
* jobs whose scheduling rule conflicts with its rule.
*/
Now it looks to me like the Job manager will only ever attempt to use one background worker thread. Am I completely wrong about this? If I'm right,
what is the point of scheduling rules and locks? If there is only one worker thread, Jobs can never preempt each other. Wouldn't these only ever be used in case a Job's sleep() method is called (e.g. sleeping while holding a Lock)?
does any part of the platform allow two Jobs to actually run concurrently, on multiple worker threads, thus making the above features useful somehow?
What am I missing here?
Take a look at the run method in the documentation, specifically this part:
Jobs can optionally finish their execution asynchronously (in another thread) by returning a result status of ASYNC_FINISH. Jobs that finish asynchronously must specify the execution thread by calling setThread, and must indicate when they are finished by calling the method done.
ASYNC_FINISH there looks interesting.
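A minimal, untested sketch of what that looks like (the worker's body is a placeholder):

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

public class AsyncJob extends Job {
    public AsyncJob() {
        super("Async job");
    }

    @Override
    protected IStatus run(IProgressMonitor monitor) {
        Thread worker = new Thread(() -> {
            // ... long-running work ...
            done(Status.OK_STATUS); // report the real result when finished
        });
        setThread(worker); // tell the job manager which thread executes the job
        worker.start();
        return ASYNC_FINISH; // the job stays "running" until done() is called
    }
}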
AFAIK, creating and scheduling multiple Jobs DOES actually cause multiple worker threads to be created and the Jobs to be executed in parallel.
However, if you specify an optional scheduling rule for your job (using the setRule() method) and that rule conflicts with another job's scheduling rule, then those two jobs can't run simultaneously.
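For example (untested sketch; the mutex-style rule is the simplest possible ISchedulingRule):

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.ISchedulingRule;
import org.eclipse.core.runtime.jobs.Job;

public class RuleDemo {
    // A simple mutex rule: two jobs holding the same rule instance conflict.
    static final ISchedulingRule MUTEX = new ISchedulingRule() {
        public boolean contains(ISchedulingRule rule) { return rule == this; }
        public boolean isConflicting(ISchedulingRule rule) { return rule == this; }
    };

    public static void main(String[] args) {
        for (int i = 0; i < 2; i++) {
            Job job = new Job("job " + i) {
                @Override
                protected IStatus run(IProgressMonitor monitor) {
                    // ... work ...
                    return Status.OK_STATUS;
                }
            };
            job.setRule(MUTEX); // with the rule, these two run one at a time;
            job.schedule();     // without it, they may run on two worker threads
        }
    }
}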
This Eclipse Corner article provides a good description as well as a few code samples for the Eclipse Job API.
The IJobManager API is only needed for advanced job manipulation, e.g. when you need to use locks, synchronize between several jobs, terminate jobs, etc.
Note: Eclipse 4.5M4 (Q4 2014) now includes support for Job Groups with throttling.
See bug 432049:
Eclipse provides a simple Jobs API to perform different tasks in parallel and in asynchronous fashion. One limitation of the Eclipse Jobs is that there is no easy way to limit the number of worker threads being used to execute jobs.
This may lead to a thread pool explosion when many jobs are scheduled in quick succession. Due to that it’s easy to use Jobs to perform different unrelated tasks in parallel, but hard to implement thousands of Jobs co-operating to complete a single large task.
Eclipse currently supports the concept of Job Families, which provides one way of grouping with support for join, cancel, sleep, and wakeup operations on the whole family.
To address all these issues we would like to propose a simple way to group a set of Eclipse Jobs that are responsible for pieces of the same large task.
The API would support throttling, join, cancel, combined progress and error reporting for all of the jobs in the group and the job grouping functionality can be used to rewrite performance critical algorithms to use parallel execution of cooperating jobs.
You can see the implementation in commit 26471fa.
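Usage ends up looking roughly like this (untested sketch; the group name, thread cap, and job count are illustrative):

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.NullProgressMonitor;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;
import org.eclipse.core.runtime.jobs.JobGroup;

public class GroupDemo {
    public static void main(String[] args) throws InterruptedException {
        // At most 4 of the 100 jobs run concurrently (throttling).
        JobGroup group = new JobGroup("my group", 4, 100);
        for (int i = 0; i < 100; i++) {
            Job job = new Job("task " + i) {
                @Override
                protected IStatus run(IProgressMonitor monitor) {
                    // ... one piece of the large task ...
                    return Status.OK_STATUS;
                }
            };
            job.setJobGroup(group);
            job.schedule();
        }
        // Wait (indefinitely) for the whole group to finish.
        group.join(0, new NullProgressMonitor());
    }
}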
