SysV init.d: what does the priority really mean? - linux

The docs for chkconfig are a bit loose on what the priority number actually means, and the docs for init don't even mention priority on my machine.
Say you have the following:
/etc/rc.d/rc3.d/S01foo
/etc/rc.d/rc3.d/S02bar
Which one is run first? The one with the higher priority (bar)? Or is the priority number more of a start-order number, so the lower numbers are started before the higher numbers?
What if it were K01foo and K02bar? Which one would be stopped first? The one with the greater priority, or is it more of a "stop order"?

After some experimentation I was able to figure it out.
It's more of an 'order from least to greatest' process.
In other words, the lower the priority number, the sooner the job will stop/start.
S01foo will start before S02bar, and K01foo will stop before K02bar.
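This matches how the rc script itself behaves: it globs the K* links and then the S* links, and the shell expands each glob in lexicographic order. A simplified sketch of what happens on entering runlevel 3 (the real /etc/rc.d/rc script varies by distribution and does more checking):
for f in /etc/rc.d/rc3.d/K*; do "$f" stop; done    # kill scripts run first, lowest number first
for f in /etc/rc.d/rc3.d/S*; do "$f" start; done   # then start scripts, lowest number first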
Hopefully this saves someone 15 minutes.

Related

How to set priority correctly in Arena simulation

I am working in the Arena Simulation program and I need help. I saw your videos on YouTube, but they did not give me a complete understanding of how to work properly with this program.
I do not understand how to correctly set the priority in the Queue module (Highest and Lowest Attribute Value). I have 120 combinations, from 1-2-3-4-5 to 5-4-3-2-1, and I need to somehow set the priority correctly in the Queue module. Could you please tell me how to do this?
It's been a long time, but for those still interested in the question:
Let's go through an emergency room example. Say you want to prioritize critically ill patients.
First, we can assign a value of "1" to critically ill patients as an attribute, using the Assign module.
Then set: Basic Process -> Queue -> Type -> Highest Attribute Value
Thus, critically ill patients can enter the queue first.
Apart from this, other sorting types can also be used (FIFO, LIFO, LVF, HVF).
As for Process priority, there are three options: High, Medium, and Low. If one resource will perform more than one operation, the processes can also be given priorities.

How to set the maximum priority to a Slurm job?

As an administrator, I need to give the maximum priority to a given job.
I have found that submission options like --priority=<value> or --nice[=adjustment] could be useful, but I do not know which values I should assign them in order to give the job the highest priority.
Another approach could be to set a low priority by default for all jobs and increase it for the special ones.
Any idea how I could carry this out?
EDIT: I am using the sched/backfill policy and the default job priority policy (FIFO).
Thank you.
I found a solution that works without the need to use PriorityType=priority/multifactor (as suggested by Bub Espinja):
$ scontrol update job=<job-id> Priority=<any-integer>
The above command will update the priority of the job and update the queue accordingly.
The minimum priority needed to become the next one in line can be found by checking the priority of the next pending job and adding one to it. You can find the priority of a job using the following:
$ scontrol show job=<job-id>
(scontrol update can be used to change many aspects of a job, such as time limit and others.)
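Putting the two together (a sketch; the job IDs are placeholders):
$ scontrol show job=<next-job-id> | grep -o 'Priority=[0-9]*'
$ scontrol update job=<my-job-id> Priority=<that value plus 1>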
EDIT:
I just learned one can do
$ scontrol top <job-id>
to put a job at the top of their queue.
What I have done is to use the multifactor priority plug-in with its default configuration, adding this line to slurm.conf:
PriorityType=priority/multifactor
Then, as all the jobs will have priority 0, I must update the target job's priority, in my case using the API:
#include <slurm/slurm.h>

job_desc_msg_t job_update;
slurm_init_job_desc_msg(&job_update);  /* initialize the update message */
job_update.job_id = target_job_id;     /* the job whose priority we change */
job_update.priority = 4294967295;      /* maximum priority (2^32 - 1) */
slurm_update_job(&job_update);
EDIT:
From the Slurm FAQ:
The job's priority is an integer that ranges between 0 and 4294967295. The larger the number, the higher the job will be positioned in the queue, and the sooner the job will be scheduled.
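With the multifactor plug-in active, you can also inspect how a pending job's priority is computed, factor by factor, using sprio (a sketch; the job ID is a placeholder):
$ sprio -j <job-id>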

What is the default / maximum value for WEBJOBS_RESTART_TIME?

I have a continuous WebJob and sometimes it can take a REALLY, REALLY long time to process (i.e. several days). I'm not interested in partitioning it into smaller chunks to get it done faster (by doing more of it in parallel). Having it run slow and steady is fine with me. I was looking at the WebJobs documentation here, where it lists all the settings, but it doesn't specify the defaults or maximums for these values. I was curious if anybody knew.
Since the docs say
"WEBJOBS_RESTART_TIME - Timeout in seconds between when a continuous job's process goes down (for any reason) and the time we re-launch it again (Only for continuous jobs)."
it doesn't matter how long your process runs.
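For completeness, WEBJOBS_RESTART_TIME is just an app setting on the Web App; one way to set it, sketched with the Azure CLI (the app and resource group names are placeholders):
$ az webapp config appsettings set --name <app-name> --resource-group <group> --settings WEBJOBS_RESTART_TIME=60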
Please clarify your question, as most of it is irrelevant to what you're actually asking at the end.
If you want to know the min, I'd say try 0. For the max, try MAX_INT (2147483647); that's about 68 years. That should do it ;).
There is no "max run time" for a continuous WebJob. Note that, in practice, there aren't any assurances on how long a given instance of your Web App hosting the WebJob is going to exist, so your WebJob may restart anyway. It's always good design to make your continuous job idempotent, meaning it can be restarted many times and pick back up where it left off.

Performance issue while using Parallel.ForEach() with MaxDegreeOfParallelism set to ProcessorCount

I wanted to process records from a database concurrently and within minimum time. So I thought of using a Parallel.ForEach() loop to process the records, with MaxDegreeOfParallelism set to ProcessorCount.
ParallelOptions po = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount
};
Parallel.ForEach(listUsers, po, (user) =>
{
    // Process each user in parallel
    ProcessEachUser(user);
});
But to my surprise, the CPU utilization was not even close to 20%. When I dug into the issue and read the MSDN article on MaxDegreeOfParallelism (http://msdn.microsoft.com/en-us/library/system.threading.tasks.paralleloptions.maxdegreeofparallelism(v=vs.110).aspx), I tried the specific value -1 for MaxDegreeOfParallelism. As the article says, this value removes the limit on the number of concurrently running operations, and indeed the performance of my program improved considerably.
But that still didn't meet my requirement for the maximum time taken to process all the records in the database. So I analyzed further and found that the thread pool has two settings, MinThreads and MaxThreads. By default MinThreads is 10 and MaxThreads is 1000: only 10 threads are created at the start, and the count grows toward the maximum of 1000 with every new user, unless a previous thread has finished its execution.
So I set the initial value of MinThreads to 900 in place of 10 using
System.Threading.ThreadPool.SetMinThreads(900, 900);
so that a minimum of 900 threads would be created right from the start, and I thought it would improve performance significantly. This did create 900 threads, but it also greatly increased the number of failures while processing each user, so I did not achieve much with this logic. I then changed MinThreads to just 100 and found that the performance was much better.
But I wanted to improve further, as my time constraint was still not met; processing all the records was still exceeding the time limit. You may think I was already using every option available to get maximum performance out of parallel processing, and I was thinking the same.
But to meet the time limit, I took a shot in the dark. I created two different executables (slaves) in place of one and assigned each of them half of the users from the DB. Both executables did the same thing and ran concurrently, and I created another master program to start the two slaves at the same time.
To my surprise, it reduced the time taken to process all the records to nearly half.
Now my question is simple: I do not understand why this master/slave arrangement performs better than a single EXE, when the logic is the same in both the slaves and the previous EXE. I would highly appreciate it if someone could explain this in detail.
But to my surprise, the CPU utilization was not even close to 20%.
…
It uses HTTP requests to some Web APIs hosted on other networks.
This means that CPU utilization is entirely the wrong thing to look at. When using the network, it's your network connection that's going to be the limiting factor, or possibly some network-related limit, certainly not CPU.
Now I created two different executable files … To my surprise, it reduced the time taken to process all the records nearly to the half.
This points to an artificial, per-process limit, most likely ServicePointManager.DefaultConnectionLimit. Try setting it to a value larger than the default at the start of your program and see if it helps.

Why are niceness values inversely related to process priority?

The niceness of a process decreases with increasing process priority.
Extract from Beginning Linux Programming 4th Edition, Pg 169 :
The default priority is 0. Positive priorities are used for background tasks that run when no other higher priority task is ready to run. Negative priorities cause a program to run more frequently, taking a larger share of the available CPU time. The range of valid priorities is -20 to +20. This is often confusing because the higher the numerical value, the lower the execution precedence.
Is there any special reason for negative values corresponding to higher process priority (as opposed to higher niceness values corresponding to higher priority)?
Ewald's answer is correct, and it is confirmed by Jerry Peek et al. in Unix Power Tools (O'Reilly, 2007, p. 507):
This is why the nice number is usually called niceness: a job with a high niceness is very kind to the users of your system (i.e., it runs at low priority), while a job with little niceness hogs the CPU. The term "niceness" is awkward, like the priority system itself. Unfortunately, it's the only term that is both accurate (nice numbers are used to compute the priorities but are not the priorities themselves) and avoids horrible circumlocutions ("increasing the priority means lowering the priority...").
Nice has had this meaning since at least V6 Unix, but the V6 manual never explains this explicitly. The range of allowed values was -220 through +20, with negative numbers reserved for the superuser. The range was changed to -20 through +20 in V7.
Hysterical reasons - I mean historical... I'm pretty sure it started with numbers going up from 0 to 20, with the lowest available value taken first. Then someone came to the conclusion: "Hmm, what if we need to make something MORE important?" Well, we have to go negative.
You want priority to be a sortable value, so if you start with "default is zero", higher priority has to be either a higher number or a lower one. But "priority 1" in everyday speech is higher than "priority 2": when your boss says "make this your number 1 priority", it means it's important, right? So, this being a computer, clearly priority 0 is higher than priority 1, and priority -1 is higher than priority 0.
In the end, it's an arbitrary choice. Maybe Ken Thompson, Dennis Ritchie, or one of those guys could say for sure why they chose that particular sequence, and not 0..255, for example.
First of all, the answer is a little long, but it is only for clarification.
In the Linux kernel, every conventional process has a static priority, ranging from 100 (highest) to 139 (lowest), so there are basically 40 priorities that can be assigned to a process.
When a process is created, it gets its parent's priority; if the user wants to change it, that can be done with the nice(nice_value) system call.
The reason behind your question is the base time quantum: how much CPU time, in milliseconds, the process will get for its execution. It is calculated as:
base time quantum (ms) =
    (140 - static_priority) * 20    if static_priority < 120
    (140 - static_priority) * 5     if static_priority >= 120
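For example, a process at the default static priority of 120 gets (140 - 120) * 5 = 100 ms per quantum, while a higher-priority process at static priority 100 gets (140 - 100) * 20 = 800 ms.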
The sys_nice() service routine handles the nice() system call. Although the nice_value may have any value, absolute values larger than 40 are trimmed down to 40. Traditionally, negative values correspond to requests for priority increases and require superuser privileges, while positive ones correspond to requests for priority decreases. For a negative nice_value, the function invokes capable() to verify whether the process has the CAP_SYS_NICE capability, and it also invokes the security_task_setnice() security hook. In the end, the nice_value is used to calculate the static priority, and that static priority is then used to calculate the base time quantum.
So it is clear that negative values increase the priority and therefore need superuser access, while positive values decrease the priority and need no special access.
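You can see the privilege asymmetry directly from the shell (a quick sketch; the PID is hypothetical):
$ nice -n 10 make        # lower the priority (raise niceness): no privileges needed
$ sudo nice -n -5 make   # raise the priority (negative niceness): needs the superuser
$ renice -n 5 -p 1234    # change the niceness of an already-running process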
Yes - it gets NICER as the number goes up and MEANER as the number goes down. So the process is seen as "friendlier" when it's not taking up all the resources and "nasty" as it gets greedier with resources.
Think of it as "nice" points - the nicer you are to others, the more points you have.
