I was just given this example in class and it seemed as if my professor was not too sure about his answer on it. I have come here for a more detailed explanation.
Five batch jobs, A through E, all arrive at a computer center at the same time.
For the following scheduling algorithm, determine the mean process turnaround time. Ignore process-switching overhead. Assume that the system is multiprogrammed and each job gets its fair share of the CPU.
Round Robin
Process    Run time (min)
A          10
B           6
C           2
D           4
E           8
My professor's answer: 33 min turnaround time
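In case it helps to check the arithmetic, here is a small round-robin simulation sketch. It assumes a 1-minute quantum and the cyclic order A, B, C, D, E (neither of which is stated in the problem) and takes every arrival time to be 0:

```python
# Round-robin simulation sketch: 1-minute quantum (assumption), cyclic order
# A..E, all jobs arriving at time 0.  Prints per-job turnaround times and the
# mean turnaround time.
from collections import deque

run_time = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}
quantum = 1                       # assumed quantum in minutes

remaining = dict(run_time)
queue = deque(run_time)           # ready queue in order A, B, C, D, E
clock = 0
turnaround = {}

while queue:
    job = queue.popleft()
    used = min(quantum, remaining[job])
    clock += used
    remaining[job] -= used
    if remaining[job] == 0:
        turnaround[job] = clock   # arrival time is 0 for every job
    else:
        queue.append(job)

print(turnaround)
print("mean turnaround:", sum(turnaround.values()) / len(turnaround))
```

With a different quantum, or under a pure processor-sharing reading of "each job gets its fair share of the CPU", the individual turnaround times shift, so it is worth asking which interpretation the 33-minute answer assumes.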
I am looking for advice on how to reduce unpredictable, horrible latency/response times for API calls from VBA. I did some statistical analysis of Excel VBA API calls to QueryPerformanceCounter and GetSystemTimePreciseAsFileTime.
On my machine (8 cores, 5.2 GHz max frequency, Windows 10, Office 2019) both of these have a single-tick resolution of 100 nanoseconds. They both need a minimum of 6 ticks of elapsed time to get a response back, with a mode of 7 ticks and an average of 8+ ticks, which I can live with.
But there are serious outliers in the distribution: 0.2% of the time they need at least 100 ticks (10 microseconds), and on very rare occasions as much as 5 milliseconds, to get a response back to VBA. If I unplug the power supply from this laptop, the delays of course increase: they skyrocket to a >11-tick average, and ~0.2% of the time to >20 microseconds. I surmise this is some sort of queueing issue, but I have failed to find any discussion of it.
Is there a way to improve the priority of the API calls? Maybe something crazy like assigning two or three cores exclusively to Excel and the API, and everything else to the other 5-6 cores?
Excel/VBA uses at most ~30% of CPU time according to Task Manager, so there would probably be no hit on the code's execution speed.
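For what it's worth, here is a rough Python analogue of the measurement described above (illustrative only; the original analysis was done from VBA against QueryPerformanceCounter and GetSystemTimePreciseAsFileTime). It times back-to-back calls to a high-resolution clock and looks at the tail of the distribution:

```python
# Measure the latency of repeated high-resolution clock calls and inspect the
# outliers in the distribution (Python analogue of the VBA measurement above).
import statistics
import time

N = 1_000_000
deltas_ns = []
prev = time.perf_counter_ns()
for _ in range(N):
    now = time.perf_counter_ns()
    deltas_ns.append(now - prev)
    prev = now

deltas_ns.sort()
print("median   :", statistics.median(deltas_ns), "ns")
print("mean     :", statistics.fmean(deltas_ns), "ns")
print("99.8th % :", deltas_ns[int(0.998 * N)], "ns")   # the ~0.2% tail
print("max      :", deltas_ns[-1], "ns")
```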
VBA does not support multi-threading; it can only use one core of your computer. So if you need multi-threading, switch to a language that supports it, e.g. Python (there are libraries for handling Excel data with Python). Or use one of the alternatives mentioned here: Multi-threading in VBA
In VBA, Excel always waits for one command to finish before it can start the next one (single-threading).
By the way, the time between VBA issuing the API call and the API returning the result cannot be influenced by VBA (or any other solution). This is the API's own calculation time. So it is not Excel's fault that it takes long; it is the API that needs that time to calculate the result (and how long that takes depends on the values you pass to the API).
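As an aside, a minimal illustration of handling Excel data from Python with openpyxl (one of the libraries alluded to above); the workbook name here is made up:

```python
# Read a few rows from an Excel file with openpyxl; "data.xlsx" is a
# hypothetical workbook used only for illustration.
from openpyxl import load_workbook

wb = load_workbook("data.xlsx")
ws = wb.active
for row in ws.iter_rows(min_row=1, max_row=5, values_only=True):
    print(row)
```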
I have continued to research my question. I think the bottom line is that you are at the total mercy of Windows' unpredictable prioritization of events. You cannot force prioritization, even if you set affinity to one CPU core and set priority to high. I have tried both, and I still see these timing outliers. See, for example, hints of this in this thread about getting the frequency for timing calculations.
So I have to accept that any one result has a 2+ percent probability of a timing error of 2+ microseconds, at least on my PC running in turbo mode.
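For reference, this is roughly the "pin to specific cores and raise the priority" experiment described above, sketched with the third-party psutil package rather than VBA; as noted, the timing outliers remained even with these settings:

```python
# Restrict the current process to two cores and raise its priority class
# (Windows only for the priority part).  Sketch of the experiment above.
import psutil

p = psutil.Process()                        # the current process
p.cpu_affinity([0, 1])                      # run only on cores 0 and 1
if hasattr(psutil, "HIGH_PRIORITY_CLASS"):  # priority classes exist on Windows
    p.nice(psutil.HIGH_PRIORITY_CLASS)

print(p.cpu_affinity(), p.nice())
```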
We're struggling with some aspects of the following problem:
a public transportation bus timetable consists of shifts (~ track sections) each with fixed start and end times
bus drivers need to be assigned to each of those shifts
[constraint in question] legal regulations demand that each bus driver has a 30 min break after 4 hours of driving (i.e. after driving shifts)
put differently, a driver accrues driving time while driving shifts; this accrued time must not exceed 4 hours unless the driver takes a 30-min break, in which case the accrued time is "reset to zero"
In summary, we need to track each driver's accrued driving time in order to suppress shift assignments that would violate the 30-min break rule.
The underlying problem seems to sit halfway between a job shop and an assignment problem:
Like job shop problems, it has shifts (or tasks, jobs) with many no-overlap and precedence constraints between them...
...BUT our shifts (~tasks/jobs) are not pre-assigned to drivers; in job shop problems, by contrast, the tasks (~shifts) must be executed on specific machines (~drivers) and are therefore pre-assigned, so assigning them is not part of the problem
Like assignment problems, we need to assign shifts to as few drivers as possible...
...BUT we also need to handle the aforementioned no-overlap and precedence constraints, which are not taken into account in assignment problems
So my question is: how do I best model the above constraint in a constraint problem with or-tools?
Thanks in advance!
One general technique for specifying patterns in constraint programming is the regular constraint (available in Gecode, Choco, and MiniZinc, among others; I am unsure of the status in or-tools), where patterns of variables are specified using finite automata (DFAs and NFAs) or regular expressions.
In your case, assuming that you have a sequence of variables representing what a certain driver does at each time point, it is fairly straightforward to specify an automaton that accepts any sequence of values that does not contain more than four consecutive hours of driving. A sketch of such an automaton:
States:
Driving states Dn representing n time units of driving (for some resolution of time units), up to n = 4 hours.
Break states DnBm for a break of length m after n time units of driving, up to m=30 minutes.
Start state is D0.
Transitions:
Driving: when driving for 1 unit of time, move from state Dn to D(n+1); from a break shorter than 30 minutes, move from DnBm to D(n+1).
Break: for 1 unit of break time, move from Dn to DnB1 (start of a break) or from DnBm to DnB(m+1); once the 30-minute break time has been reached, the transition instead goes back to D0.
Other actions handled mostly as self-loops, depending on desired semantics.
Of course, details will vary for your specific use-case.
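For or-tools specifically, the CP-SAT solver exposes this kind of constraint as AddAutomaton. Below is a rough sketch of the automaton idea on a simplified 30-minute grid (one 0/1 variable per slot per driver, 1 = driving, 0 = break/idle); at this resolution a single 0-slot already is a full 30-minute break, so the separate DnBm break states collapse away. The horizon length and the extra "at least 12 driving slots" constraint are assumptions, only there to make the example runnable:

```python
# CP-SAT sketch: limit accrued driving to 4 hours (8 half-hour slots) between
# 30-minute breaks, using AddAutomaton.  State = half-hours of driving accrued
# since the last break.
from ortools.sat.python import cp_model

HORIZON = 20     # number of 30-minute slots to model (assumption)
MAX_DRIVE = 8    # 4 hours = 8 half-hour slots of accrued driving

model = cp_model.CpModel()
slot = [model.NewIntVar(0, 1, f"slot_{t}") for t in range(HORIZON)]

# Transitions: (current_state, slot_value, next_state)
transitions = []
for s in range(MAX_DRIVE):
    transitions.append((s, 1, s + 1))   # driving bumps the accrued time
for s in range(MAX_DRIVE + 1):
    transitions.append((s, 0, 0))       # a break slot resets it to zero
# There is no (MAX_DRIVE, 1, ...) transition, so a ninth consecutive driving
# slot without a break is infeasible.

model.AddAutomaton(slot, 0, list(range(MAX_DRIVE + 1)), transitions)

# Arbitrary extra constraint just to get a non-trivial solution.
model.Add(sum(slot) >= 12)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("".join(str(solver.Value(v)) for v in slot))
```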
In activity selection we sort on the finish time of the activities and then apply the constraint that no two activities can overlap. I want to know whether we can do it by sorting on start time and then checking that activities do not overlap.
I was going through http://www.geeksforgeeks.org/dynamic-programming-set-20-maximum-length-chain-of-pairs/
This link has a dynamic programming solution for finding the maximum-length chain of pairs of numbers. To me this is another formulation of the activity selection problem, but I have searched the net and also read Cormen, and everywhere they ask to sort on finish times.
I guess it shouldn't matter which times (start or finish) we sort on, but I just want to confirm that.
In a greedy algorithm we always try to maximize our result. Thus, in activity selection we try to accommodate as many processes as we can in a given time interval without any of them overlapping.
If you sort on start time, then your solution might not be optimal. Let's take an example:
Process    Start time    Finish time
A          1             9
B          3             5
C          6             8
Sorted on start time:
If you execute process A because it starts the earliest, no other process can be executed, because they would all overlap with it. Therefore, for the given time interval you can execute only one process.
Sorted on finish time:
If you execute process B because it finishes the earliest, you can execute process C after it. Therefore, for the given time interval you can execute two processes.
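A small sketch comparing the two greedy orderings on the example above; sorting on finish time finds {B, C}, while sorting on start time greedily picks A and then gets stuck:

```python
# Greedy activity selection: repeatedly take the first compatible activity in
# the given order, and compare sorting by finish time vs. by start time.
def select(activities, key):
    chosen = []
    last_finish = float("-inf")
    for name, start, finish in sorted(activities, key=key):
        if start >= last_finish:      # compatible with what we already chose
            chosen.append(name)
            last_finish = finish
    return chosen

acts = [("A", 1, 9), ("B", 3, 5), ("C", 6, 8)]

print(select(acts, key=lambda a: a[2]))  # by finish time -> ['B', 'C']
print(select(acts, key=lambda a: a[1]))  # by start time  -> ['A']
```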
(I apologize in advance for the poor formulation of my problem; please bear in mind that English is not my first language.)
I have several processes (crons) and I want to "optimize" the schedule of when to launch them.
For example, cron c1 starts every 3 minutes, cron c2 starts every 7 minutes, and cron c3 starts every 18 minutes. Assume they last only a few seconds before stopping.
The unit of time here is 1 minute.
Now, what I want is for these crons to be distributed so that we never have a moment where many of them start at once, followed by a long interval with no cron at all. For example, if c1 and c3 both start at time 0, then they will start together again every 18 minutes. It would be better to start cron c1 at time 0 and c3 at time 1, so that they are never launched together.
So the idea is, given a list of crons with their periods, to plan a schedule so that there is as much time as possible between crons and as few moments as possible when two crons start together.
Are there some well-known algorithms about such problems?
The real-life application of this problem involves ~200 crons. Some of them are launched every ~5, ~10, or ~30 minutes and run for a very short time (a few seconds); some (~20-25) are launched every 2 hours and run for a few minutes. So the idea is also that the big crons are not launched at the same time.
I am a mathematician myself and not a computer scientist, so I asked this question on https://math.stackexchange.com/, since I consider it a "nice" question for mathematicians too.
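Not a named algorithm, but one naive way to experiment with the offset idea from the question: for each cron, try every possible start offset and keep the one that collides least with the crons already placed, evaluated over one hyper-period (the lcm of the periods). Purely an illustrative sketch:

```python
# Greedy offset assignment: place crons one by one, picking for each the start
# offset that minimizes collisions with start times already scheduled.
from math import lcm

def plan_offsets(periods):
    horizon = lcm(*periods)            # one full hyper-period in minutes
    load = [0] * horizon               # how many crons start at each minute
    offsets = []
    for p in sorted(periods):          # place the most frequent crons first
        best_offset, best_cost = 0, None
        for off in range(p):
            cost = sum(load[t] for t in range(off, horizon, p))
            if best_cost is None or cost < best_cost:
                best_offset, best_cost = off, cost
        offsets.append((p, best_offset))
        for t in range(best_offset, horizon, p):
            load[t] += 1
    return offsets

print(plan_offsets([3, 7, 18]))        # one (period, offset) pair per cron
```

Depending on the actual periods, the hyper-period can get large, so a fixed evaluation window (say, a day) can be used instead.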
I think you should consider the resources used by each of your crons and then schedule your jobs based on that.
I don't think there is a particular algorithm for that.
I have an average response time, let's say it's 10 seconds, and I also have a maximum number of parallel connections my service can handle, let's say it's 10. Now, how do I calculate the calls-per-second (CPS) value my service has handled from these data?
My guess is it's either
1 / 10 (avg time) = 0.1 CPS, or
1 / 10 (avg time) * 10 (parallel flows) = 1 CPS.
If you are just measuring average throughput then yes, 10 calls in 10 seconds is 1 per second.
Your users/consumers may also be (more) concerned with latency (average response time) which is 10 seconds for all of them.
As noted in the comment, average is only part of the story. How does your service handle peak loads - does throughput drop off precipitously after a certain point, or is degradation more graceful as load goes up? Is 10 seconds the best possible response time, or is this better under low load conditions? Worse under high load?
There are some old but useful guidelines, targeting .NET but of general interest, here.
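To make the second guess concrete: at full concurrency the standard throughput relationship (Little's Law: throughput = concurrency / average response time) gives exactly the 1 CPS figure, using the numbers from the question:

```python
# Back-of-the-envelope throughput check using Little's Law with the numbers
# from the question.
avg_response_time_s = 10   # seconds per call
max_parallel_calls = 10    # calls in flight at the same time

throughput_cps = max_parallel_calls / avg_response_time_s
print(throughput_cps)      # 1.0 call per second at full concurrency
```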