Constraining the range of decision variables based on other decision variables - Excel

I have a regular classroom assignment problem with course sizes and room capacities. The decision variables are binary. The model allows assigning one course to more than one room as long as the total capacity assigned is at least the course size. The constraint I want to add is that the sizes of the rooms assigned to each course are within a reasonable range (say 20 seats) of each other. How can this be done in a linear way? How can I prevent the model from assigning a course of 60 students to two rooms with capacities of 10 and 50, and instead make sure their sizes are close together (preferably even equal)?
I'm using Excel with OpenSolver.
Here's some sample data:
Course/Room    324A   321D   124B   328   Course Size   Capacity Assigned   Wasted
Management        0      0      0     1            15                  25       10
Engineering       1      0      0     0            20                  20        0
Science           0      1      1     0            60                  75       15
Room Sizes       20     40     35    25
The objective is to minimize the total space wasted (which is 25 seats in this example).

Introduce variables minseat and maxseat and form the inequalities:
minseat(course) <= seats(room)+(1-assign(course,room))*M
maxseat(course) >= seats(room)-(1-assign(course,room))*M
maxseat(course)-minseat(course) <= 20
Alternatively, put maxseat(course)-minseat(course) in the objective with some cost factor. Choose M judiciously.
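For reference, a minimal sketch of the same big-M model in Python with PuLP instead of OpenSolver, using the sample data above; the rule that each room serves at most one course is my own assumption (implied by the sample table but not stated in the question), and all names are illustrative:

from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

rooms = {"324A": 20, "321D": 40, "124B": 35, "328": 25}          # room sizes
courses = {"Management": 15, "Engineering": 20, "Science": 60}   # course sizes
M = max(rooms.values())   # big-M; the largest room size is enough here
RANGE = 20                # allowed seat spread between rooms serving one course

prob = LpProblem("classroom_assignment", LpMinimize)
assign = {(c, r): LpVariable("assign_%s_%s" % (c, r), cat=LpBinary)
          for c in courses for r in rooms}
minseat = {c: LpVariable("minseat_%s" % c, lowBound=0) for c in courses}
maxseat = {c: LpVariable("maxseat_%s" % c, lowBound=0) for c in courses}

# Objective: total wasted seats = assigned capacity minus total course size.
prob += (lpSum(rooms[r] * assign[c, r] for c in courses for r in rooms)
         - sum(courses.values()))

for c in courses:
    # Enough total capacity for the course.
    prob += lpSum(rooms[r] * assign[c, r] for r in rooms) >= courses[c]
    for r in rooms:
        # minseat/maxseat track the smallest/largest room actually used by c.
        prob += minseat[c] <= rooms[r] + (1 - assign[c, r]) * M
        prob += maxseat[c] >= rooms[r] - (1 - assign[c, r]) * M
    # Rooms serving the same course must be within RANGE seats of each other.
    prob += maxseat[c] - minseat[c] <= RANGE

for r in rooms:
    # Assumption: each room serves at most one course.
    prob += lpSum(assign[c, r] for c in courses) <= 1

prob.solve()
for (c, r), var in assign.items():
    if var.value() > 0.5:
        print(c, "->", r)

With M set to the largest room size, the two big-M constraints only bind for rooms the course actually uses, so maxseat - minseat measures the spread of the rooms assigned to that course, exactly as in the inequalities above.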

Related

Giving weight to a delta

I have to analyze a delta impact for different categories, but we need to focus only on the ones with the highest priority. Example:
Col.A (dollar value)   Col.B (Forecast)   Col.C (Actual)   Col.D (delta)
                2000                 50               30              20
                  60                 40               10              50
Is SUMPRODUCT the only (or best) method? There are also different attributes by which I need to look at the data.

Altering values as a percentage to work with a graph

I have a gauge graph that goes from 0 to 100.
I have worked out where my justification points (-2 STDev, -1 STDev, Avg, +1 STDev, +2 STDev) would show on the 0 - 100 graph. How would I go about transferring incoming values to a 0 - 100 scale to match the graph?
On the graph of 0 - 100:
16 represents -2STDev
33 represents -1STDev
50 represents Average
66 represents +1 STDEV
83 represents +2 STDEV
My current values that I want to format to a scale of 100 to fit the graph are:
-2STDev = 63.9
-1STDev = 66.8
AVG = 69.6
+1STDev = 72.5
+2STDev = 75.4
How would I go about creating a formula to adjust these to a 0 - 100 scale? Of course, any incoming value will also have to go through this formula before it can be plotted on the graph.
You can use the following long formula:
=IF(E6<C1,E6/C1*B1,IF(E6<C2,(E6-C1)/(C2-C1)*(B2-B1)+B1,IF(E6<C3,(E6-C2)/(C3-C2)*(B3-B2)+B2,IF(E6<C4,(E6-C3)/(C4-C3)*(B4-B3)+B3,IF(E6<C5,(E6-C4)/(C5-C4)*(B5-B4)+B4,(E6-C5)/(100-C5)*(100-B5)+B5)))))
Note the cell locations the formula assumes (the gauge values in B1:B5, your breakpoint values in C1:C5, and the input value in E6), so you can adjust the references if your layout is different.
Keep in mind that this formula does a linear interpolation for values in between each of the breakpoints you have.
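If it helps to see the logic outside Excel, here is a rough Python sketch of the same piecewise-linear conversion, using the breakpoints from the question; the (0, 0) and (100, 100) end points mirror the formula's first and last IF branches:

raw = [0.0, 63.9, 66.8, 69.6, 72.5, 75.4, 100.0]     # incoming scale (column C)
gauge = [0.0, 16.0, 33.0, 50.0, 66.0, 83.0, 100.0]   # 0 - 100 gauge scale (column B)

def to_gauge(x):
    # Walk the segments; the last segment also covers values at or above 100.
    for i in range(1, len(raw)):
        if x < raw[i] or i == len(raw) - 1:
            x0, x1 = raw[i - 1], raw[i]
            y0, y1 = gauge[i - 1], gauge[i]
            return y0 + (x - x0) / (x1 - x0) * (y1 - y0)

print(to_gauge(69.6))   # 50.0 -- the average lands in the middle of the gauge
print(to_gauge(71.0))   # about 57.7 -- between Avg and +1 STDev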

Difference between counting packets and counting the total number of bytes in the packets

I'm reading perfbook. In chapter 5.2, the book gives some examples of statistical counters. These examples can solve the network-packet counting problem.
Quick Quiz 5.2: Network-packet counting problem. Suppose that you need
to collect statistics on the number of networking packets (or total
number of bytes) transmitted and/or received. Packets might be
transmitted or received by any CPU on the system. Suppose further that
this large machine is capable of handling a million packets per
second, and that there is a system-monitoring package that reads out
the count every five seconds. How would you implement this statistical
counter?
There is one Quick Quiz that asks about the difference between counting packets and counting the total number of bytes in the packets.
I can't understand the answer. After reading it, I still don't know the difference.
In the example in the "To see this" paragraph, what difference does it make if the numbers 3 and 5 are changed to 1?
Please help me to understand it.
Quick Quiz 5.26: What fundamental difference is there between counting
packets and counting the total number of bytes in the packets, given
that the packets vary in size?
Answer: When counting packets, the
counter is only incremented by the value one. On the other hand, when
counting bytes, the counter might be incremented by largish numbers.
Why does this matter? Because in the increment-by-one case, the value
returned will be exact in the sense that the counter must necessarily
have taken on that value at some point in time, even if it is
impossible to say precisely when that point occurred. In contrast,
when counting bytes, two different threads might return values that are
inconsistent with any global ordering of operations.
To see this, suppose that thread 0 adds the value three to its counter,
thread 1 adds the value five to its counter, and threads 2 and 3 sum the
counters. If the system is “weakly ordered” or if the compiler uses
aggressive optimizations, thread 2 might find the sum to be three and
thread 3 might find the sum to be five. The only possible global orders
of the sequence of values of the counter are 0,3,8 and 0,5,8, and
neither order is consistent with the results obtained.
If you missed this one, you are not alone. Michael Scott used this
question to stump Paul E. McKenney during Paul’s Ph.D. defense.
I may be wrong, but I presume the idea behind this is the following: suppose there are two separate processes, each keeping its own counter, and the counters are summed up for a total value. Now suppose some events occur simultaneously in both processes: for example, a packet of size 10 arrives at the first process at the same time as a packet of size 20 arrives at the second, and after some period of time a packet of size 30 arrives at the first process at the same time as a packet of size 60 arrives at the second. So here is the sequence of events:
            Time point#1   Time point#2
Process1:        10             30
Process2:        20             60
Now let's build the list of possible total counter states after time points #1 and #2 for a weakly ordered system, assuming the previous total value was 0:
Time point#1
0 + 10 (process 1 wins) = 10
0 + 20 (process 2 wins) = 20
0 + 10 + 20 = 30
Time point#2
10 + 30 = 40 (process 1 wins)
10 + 60 = 70 (process 2 wins)
20 + 30 = 50 (process 1 wins)
20 + 60 = 80 (process 2 wins)
30 + 30 = 60 (process 1 wins)
30 + 60 = 90 (process 2 wins)
30 + 30 + 60 = 120
Now, presuming that there can be some period of time between time point #1 and time point #2, let's assess which values reflect the real state of the system. Apparently all states after time point #1 can be treated as valid, as there was some precise moment in time when the total received size was 10, 20 or 30 (we ignore the fact that the final value may not be the current one; at least it contains a value that was actual at some moment of the system's operation). For the possible states after time point #2 the picture is slightly different. For example, the system has never been in the states 40, 70, 50 and 80, but we run the risk of getting these values after the second collection.
Now let's take a look at the situation from the number of packets perspective. Our matrix of events is:
            Time point#1   Time point#2
Process1:         1              1
Process2:         1              1
The possible total states:
Time point#1
0 + 1 (process 1 wins) = 1
0 + 1 (process 2 wins) = 1
0 + 1 + 1 = 2
Time point#2
1 + 1 (process 1 wins) = 2
1 + 1 (process 2 wins) = 2
2 + 1 (process 1 wins) = 3
2 + 1 (process 2 wins) = 3
2 + 2 = 4
In that case, every possible value (1, 2, 3, 4) reflects a state the system definitely was in at some point in time.
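To make the contrast concrete, here is a small Python sketch (my own illustration, not from perfbook) that enumerates the sums a reader can observe if it reads each per-process counter independently, either before or after that process's single increment:

from itertools import product

def observable_sums(increments):
    # For each process, the reader sees either 0 (not yet) or the full increment.
    per_process = [(0, inc) for inc in increments]
    return sorted({sum(combo) for combo in product(*per_process)})

# Byte counting: thread 0 adds 3 bytes, thread 1 adds 5 (the perfbook example).
print(observable_sums([3, 5]))   # [0, 3, 5, 8]

# Packet counting: every increment is 1.
print(observable_sums([1, 1]))   # [0, 1, 2]

For the byte counts, 3 and 5 cannot both belong to a single global sequence of counter values (0, 3, 8 or 0, 5, 8), while for the packet counts every observable value fits the sequence 0, 1, 2.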

Divide an Excel column into N equal groups

I have a column with ordinal values. I want another column that ranks them into equal groups (relative to their value).
Example: if I have a score and I want to divide it into 5 equal groups:
Score
100
90
80
70
60
50
40
30
20
10
What function do I use in the new column to get this eventually:
Score Group
100 5
90 5
80 4
70 4
60 3
50 3
40 2
30 2
20 1
10 1
Thanks! (I'm guessing the solution involves MOD, ROW and COUNT somewhere, but I couldn't find a good solution for this specific problem.)
If you don't care about exactly how the rows are split when the count isn't evenly divisible by the number of groups, you can use this formula and drag it down as far as necessary:
= FLOOR(5*(COUNTA(A:A)-COUNTA(INDEX(A:A,1):INDEX(A:A,ROW())))/COUNTA(A:A),1)+1
Possibly a more efficient solution exists, but this is the first way I thought to do it.
Obviously you'll have to change the references to the A column if you want it in a different column.
See below for a working example.
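As a cross-check, here is a Python sketch of the arithmetic the FLOOR formula performs, assuming the scores start in row 1 of column A (no header) and are sorted in descending order as in the question:

from math import floor

scores = [100, 90, 80, 70, 60, 50, 40, 30, 20, 10]
n, groups = len(scores), 5

for i, score in enumerate(scores, start=1):   # i plays the role of the data row
    group = floor(groups * (n - i) / n) + 1   # same arithmetic as the formula
    print(score, group)                       # 100 -> 5, 90 -> 5, ..., 10 -> 1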

Find the minimum number of tanks to hold the maximum quantity of wine, with each tank at maximum possible capacity

I'm in the wine reselling business, and we have a problem I've been trying to solve. We have 50 - 70 types of wine to be stored at any time, and around 500 tanks of various capacities. Each tank can only hold one type of wine. My job is to determine the minimum number of tanks to hold the maximum number of types of wine, each filled as close to its maximum capacity as possible, i.e. 100 l of wine should not be stored in a 200 l tank if two tanks of 60 l and 40 l also exist.
I've been doing the job by hand in Excel and want to try to automate the process, but macros and array formulas quickly get out of hand. I can write a simple program in C or Swift, but I'm stuck at finding a general algorithm. Any pointer on where I can start is much appreciated. A full solution and I will send you a bottle ;)
Edit: for clarification, I do know how many types of wine I have and their total quantities, e.g. Pinot at 700 l, Merlot at 2000 l, etc. These change every week. The tanks, however, have many different capacities (40, 60, 80, 100, 200 liters, etc.) and change at irregular intervals, since they have to be taken out for cleaning and replaced. Simply using 70 tanks to hold 70 types is not possible.
Also, the total quantity of wine never matches the total tank capacity, and I need to use the minimum number of tanks to hold the maximum amount of wine. In case of insufficient capacity, the amount of wine left over must be as small as possible (it spoils quickly). If there is leftover wine, the amount left over of each type must be proportional to that type's quantity.
A simplified example of the problem is this:
Wine:
----------
Merlot 100
Pinot 120
Tocai 230
Chardonay 400
Total: 850L
Tanks:
----------
T1 10
T2 20
T3 60
T4 150
T5 80
T6 80
T7 90
T8 80
T9 50
T10 110
T11 50
T12 50
Total: 830L
The following greedy-DP algorithm attempts to perform a proportional split: for example, if you have 700 l of Pinot, 2000 l of Merlot and tank capacities 40, 60, 80, 100 and 200, that means a total capacity of 480.
700 / (700 + 2000) = 0.26
2000 / (700 + 2000) = 0.74
0.26 * 480 = 125
0.74 * 480 = 355
So we will attempt to store 125l of the Pinot and 355l of the Merlot, to make the storage proportional to the amounts we have.
Obviously this isn't fully possible, because you cannot mix wines, but we should be able to get close enough.
To store the Pinot, the closest would be to use tanks 1 (40l) and 3 (80l), then use the rest for the Merlot.
This can be implemented as a subset sum problem:
d[s] = true if we can make sum s from some subset of the tanks, false otherwise
d[0] = true, everything else initially false
sum_of_tanks = 0
for each tank i:
    sum_of_tanks += tank_capacities[i]
    for s = sum_of_tanks down to tank_capacities[i]:
        d[s] = d[s] OR d[s - tank_capacities[i]]
Compute the proportions, then run this for each type of wine you have (removing the tanks already chosen, which you can find by using the d array; I can detail this if you want). Look around d[computed_proportion] to find the closest achievable sum for each wine type.
This should be fast enough for a few hundred tanks, which I'm guessing don't have capacities larger than a few thousands.
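Here is a rough Python sketch of that proportional-split plus subset-sum idea applied to the simplified example above. The walk-back over a choice table to recover which tanks make up a chosen sum (the part I offered to detail) is spelled out here, and processing the wines in the order listed is an arbitrary greedy choice; different orders may give slightly different packings.

def closest_subset(capacities, target):
    # Classic 0/1 subset-sum DP: reachable[s] is True if some subset of the
    # capacities sums to exactly s; choice[s] remembers one tank used to reach s.
    total = sum(capacities)
    reachable = [True] + [False] * total
    choice = [None] * (total + 1)
    for i, cap in enumerate(capacities):
        for s in range(total, cap - 1, -1):
            if not reachable[s] and reachable[s - cap]:
                reachable[s] = True
                choice[s] = i
    # Pick the reachable sum closest to the target (ties broken downward).
    best = min((s for s in range(total + 1) if reachable[s]),
               key=lambda s: (abs(s - target), s))
    # Walk the choice table back to recover which tanks were used.
    used, s = [], best
    while s > 0:
        used.append(choice[s])
        s -= capacities[choice[s]]
    return best, used

wines = {"Merlot": 100, "Pinot": 120, "Tocai": 230, "Chardonay": 400}
tanks = [10, 20, 60, 150, 80, 80, 90, 80, 50, 110, 50, 50]   # T1..T12

total_wine = sum(wines.values())     # 850
total_cap = sum(tanks)               # 830
remaining = list(range(len(tanks)))  # indices of tanks not yet assigned

for wine, qty in wines.items():
    # Proportional share of the total capacity, as in the 0.26 / 0.74 example.
    target = round(qty / total_wine * total_cap)
    caps = [tanks[i] for i in remaining]
    best, picked = closest_subset(caps, target)
    chosen = sorted(remaining[i] for i in picked)
    print(wine, qty, "l -> store about", best, "l in",
          ", ".join("T%d" % (i + 1) for i in chosen))
    remaining = [i for i in remaining if i not in chosen]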
