Considering that I have the following DF:
|------------|-----|
| Date       | Cod |
|------------|-----|
| 2022-08-01 | A   |
| 2022-08-02 | A   |
| 2022-08-03 | A   |
| 2022-08-04 | A   |
| 2022-08-05 | A   |
| 2022-08-01 | B   |
| 2022-08-02 | B   |
| 2022-08-03 | B   |
| 2022-08-04 | B   |
| 2022-08-05 | B   |
|------------|-----|
And considering that I have a backward observation window of 2 days, how can I generate the following output DF?
|------------|------------|-----|
| RefDate    | Date       | Cod |
|------------|------------|-----|
| 2022-08-03 | 2022-08-01 | A   |
| 2022-08-03 | 2022-08-02 | A   |
| 2022-08-03 | 2022-08-03 | A   |
| 2022-08-04 | 2022-08-02 | A   |
| 2022-08-04 | 2022-08-03 | A   |
| 2022-08-04 | 2022-08-04 | A   |
| 2022-08-05 | 2022-08-03 | A   |
| 2022-08-05 | 2022-08-04 | A   |
| 2022-08-05 | 2022-08-05 | A   |
| 2022-08-03 | 2022-08-01 | B   |
| 2022-08-03 | 2022-08-02 | B   |
| 2022-08-03 | 2022-08-03 | B   |
| 2022-08-04 | 2022-08-02 | B   |
| 2022-08-04 | 2022-08-03 | B   |
| 2022-08-04 | 2022-08-04 | B   |
| 2022-08-05 | 2022-08-03 | B   |
| 2022-08-05 | 2022-08-04 | B   |
| 2022-08-05 | 2022-08-05 | B   |
|------------|------------|-----|
I know that I can use loops to generate this output DF, but loops don't perform well here since I can't cache the DF in memory (my original DF has approx. 6 billion rows). So, what is the best way to get this output?
MVCE:
from pyspark.sql.types import StructType, StructField, StringType

data_1 = [
    ("2022-08-01", "A"),
    ("2022-08-02", "A"),
    ("2022-08-03", "A"),
    ("2022-08-04", "A"),
    ("2022-08-05", "A"),
    ("2022-08-01", "B"),
    ("2022-08-02", "B"),
    ("2022-08-03", "B"),
    ("2022-08-04", "B"),
    ("2022-08-05", "B")
]
schema_1 = StructType([
    StructField("Date", StringType(), True),
    StructField("Cod", StringType(), True)
])
df_1 = spark.createDataFrame(data=data_1, schema=schema_1)
You could try a self join. My thoughts: if your cluster and session are configured optimally, it should work with 6B rows.
from pyspark.sql import functions as func

data_sdf.alias('a'). \
    join(data_sdf.alias('b'),
         [func.col('a.cod') == func.col('b.cod'),
          func.datediff(func.col('a.date'), func.col('b.date')).between(0, 2)],
         'inner'
         ). \
    drop(func.col('a.cod')). \
    selectExpr('cod', 'a.date as ref_date', 'b.date as date'). \
    show()
# +---+----------+----------+
# |cod| ref_date| date|
# +---+----------+----------+
# | B|2022-08-01|2022-08-01|
# | B|2022-08-02|2022-08-01|
# | B|2022-08-02|2022-08-02|
# | B|2022-08-03|2022-08-01|
# | B|2022-08-03|2022-08-02|
# | B|2022-08-03|2022-08-03|
# | B|2022-08-04|2022-08-02|
# | B|2022-08-04|2022-08-03|
# | B|2022-08-04|2022-08-04|
# | B|2022-08-05|2022-08-03|
# | B|2022-08-05|2022-08-04|
# | B|2022-08-05|2022-08-05|
# | A|2022-08-01|2022-08-01|
# | A|2022-08-02|2022-08-01|
# | A|2022-08-02|2022-08-02|
# | A|2022-08-03|2022-08-01|
# | A|2022-08-03|2022-08-02|
# | A|2022-08-03|2022-08-03|
# | A|2022-08-04|2022-08-02|
# | A|2022-08-04|2022-08-03|
# +---+----------+----------+
# only showing top 20 rows
This will also generate records for the first 2 dates of each cod, which can be discarded.
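For example, a minimal sketch of that filtering step, assuming the joined result above is kept in a variable named result_sdf instead of being shown directly:

result_sdf_filtered_example = None  # placeholder name; adapt to your pipeline
min_dates = data_sdf.groupBy('cod').agg(func.min('date').alias('min_date'))

# Keep only reference dates that have a full 2-day lookback available,
# i.e. drop ref_dates earlier than (earliest date per cod) + 2 days.
result_sdf. \
    join(min_dates, 'cod'). \
    filter(func.datediff(func.col('ref_date'), func.col('min_date')) >= 2). \
    drop('min_date'). \
    show()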
I have a spark dataframe that looks like this:
+----+------+-------------+
|user| level|value_pair |
+----+------+-------------+
| A | 25 |(23.52,25.12)|
| A | 6 |(0,0) |
| A | 2 |(11,12.12) |
| A | 32 |(17,16.12) |
| B | 22 |(19,57.12) |
| B | 42 |(10,3.2) |
| B | 43 |(32,21.0) |
| C | 33 |(12,0) |
| D | 32 |(265.21,19.2)|
| D | 62 |(57.12,50.12)|
| D | 32 |(75.12,57.12)|
| E | 63 |(0,0) |
+----+------+-------------+
How do I extract the values in the value_pair column and add them to two new columns called value1 and value2, using the comma as the separator?
+----+------+-------------+-------+
|user| level|value1 |value2 |
+----+------+-------------+-------+
| A | 25 |23.52 |25.12 |
| A | 6 |0 |0 |
| A | 2 |11 |12.12 |
| A | 32 |17 |16.12 |
| B | 22 |19 |57.12 |
| B | 42 |10 |3.2 |
| B | 43 |32 |21.0 |
| C | 33 |12 |0 |
| D | 32 |265.21 |19.2 |
| D | 62 |57.12 |50.12 |
| D | 32 |75.12 |57.12 |
| E | 63 |0 |0 |
+----+------+-------------+-------+
I know I can separate the values like so:
df = df.withColumn('value1', pyspark.sql.functions.split(df['value_pair'], ',')[0])
df = df.withColumn('value2', pyspark.sql.functions.split(df['value_pair'], ',')[1])
But how do I also get rid of the parentheses?
For the parentheses, as shown in the comments you can use regexp_replace, but you also need to escape them with a backslash \, since ( and ) are special characters in regular expressions.
Also, I believe you need to first remove the parentheses and then split the column.
from pyspark.sql.functions import split
from pyspark.sql.functions import regexp_replace

df = df.withColumn('value_pair', regexp_replace(df.value_pair, r"\(", ""))
df = df.withColumn('value_pair', regexp_replace(df.value_pair, r"\)", ""))
df = df.withColumn('value1', split(df['value_pair'], ',').getItem(0)) \
       .withColumn('value2', split(df['value_pair'], ',').getItem(1))
>>> df.show(truncate=False)
+----+-----+-----------+------+---------+
|user|level|value_pair |value1|value2 |
+----+-----+-----------+------+---------+
| A |25 |23.52,25.12|23.52 |25.12 |
| A |6 |0,0 |0 |0 |
| A |2 |11,12.12 |11 |12.12 |
| A |32 |17,16.12 |17 |16.12 |
| B |22 |19,57.12 |19 |57.12 |
| B |42 |10,3.2 |10 |3.2 |
| B |43 |32,21.0 |32 |21.0 |
| C |33 |12,0 |12 |0 |
| D |32 |265.21,19.2|265.21|19.2 |
| D |62 |57.12,50.12|57.12 |50.12 |
| D |32 |75.12,57.12|75.12 |57.12 |
| E |63 |0,0 |0 |0 |
+----+-----+-----------+------+---------+
As you may have noticed, I slightly changed how your code grabs the two items.
More information can be found here
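As a side note, the two regexp_replace calls can likely be collapsed into one using a character class; a minimal sketch, assuming the same df with its value_pair string column:

from pyspark.sql.functions import col, split, regexp_replace

# Strip both parentheses in one pass, then split on the comma.
df = df.withColumn('value_pair', regexp_replace(col('value_pair'), r"[()]", "")) \
       .withColumn('value1', split(col('value_pair'), ',').getItem(0)) \
       .withColumn('value2', split(col('value_pair'), ',').getItem(1))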
I want to remove the duplicated customer IDs from the cells so the repeated rows appear merged. This dataframe is generated after concatenating multiple dataframes.
My dataframe as under:
| | Customer ID | Category | VALUE |
| -:|:----------- |:------------- | -------:|
| 0 | GETO90 | Baby Sets | 1090.0 |
| 1 | GETO90 | Girls Dresses | 5357.0 |
| 2 | GETO90 | Girls Jumpers | 2823.0 |
| 3 | SETO90 | Girls Top | 3398.0 |
| 4 | SETO90 | Shorts | 7590.0 |
| 5 | SETO90 | Shorts | 7590.0 |
| 6 | RETO90 | Pants | 6590.0 |
| 7 | RETO90 | Pants | 6590.0 |
| 8 | RETO90 | Jeans | 8590.0 |
| 9 | YETO90 | Jeans | 9590.0 |
| 10| YETO90 | Jeans | 2590.0 |
I want to blank the repeated values in the first column; the expected dataframe is shown below:
| | Customer ID | Category | VALUE |
| -:|:----------- |:------------- | -------:|
| 0 | GETO90 | Baby Sets | 1090.0 |
| 1 | | Girls Dresses | 5357.0 |
| 2 | | Girls Jumpers | 2823.0 |
| 3 | SETO90 | Girls Top | 3398.0 |
| 4 | | Shorts | 7590.0 |
| 5 | | Shorts | 7590.0 |
| 6 | RETO90 | Pants | 6590.0 |
| 7 | | Pants | 6590.0 |
| 8 | | Jeans | 8590.0 |
| 9 | YETO90 | Jeans | 9590.0 |
| 10| | Jeans | 2590.0 |
Use duplicated with loc:
df.loc[df.duplicated('Customer ID'), 'Customer ID'] = ''
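A minimal, self-contained sketch of that approach, using the column names from the question (note that duplicated marks every repeat of a Customer ID after its first occurrence, which matches the expected output here because the IDs are grouped together):

import pandas as pd

df = pd.DataFrame({
    'Customer ID': ['GETO90', 'GETO90', 'GETO90', 'SETO90', 'SETO90', 'SETO90',
                    'RETO90', 'RETO90', 'RETO90', 'YETO90', 'YETO90'],
    'Category': ['Baby Sets', 'Girls Dresses', 'Girls Jumpers', 'Girls Top',
                 'Shorts', 'Shorts', 'Pants', 'Pants', 'Jeans', 'Jeans', 'Jeans'],
    'VALUE': [1090.0, 5357.0, 2823.0, 3398.0, 7590.0, 7590.0,
              6590.0, 6590.0, 8590.0, 9590.0, 2590.0],
})

# Blank out every repeated Customer ID, keeping only its first occurrence.
df.loc[df.duplicated('Customer ID'), 'Customer ID'] = ''
print(df)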
I have dataframe like this:
+---+--------------------------------------+-----------+
| | envelopeid | message |
+---+--------------------------------------+-----------+
| 1 | d55edb65-dc77-41d0-bb53-43cf01376a04 | CMN.00002 |
| 2 | d55edb65-dc77-41d0-bb53-43cf01376a04 | CMN.00004 |
| 3 | d55edb65-dc77-41d0-bb53-43cf01376a04 | CMN.11001 |
| 4 | 5cb72b9c-adb8-4e1c-9296-db2080cb3b6d | CMN.00002 |
| 5 | 5cb72b9c-adb8-4e1c-9296-db2080cb3b6d | CMN.00001 |
| 6 | f4260b99-6579-4607-bfae-f601cc13ff0c | CMN.00202 |
| 7 | 8f673ae3-0293-4aca-ad6b-572f138515e6 | CMN.00002 |
| 8 | fee98470-aa8f-4ec5-8bcd-1683f85727c2 | TKP.00001 |
| 9 | 88926399-3697-4e15-8d25-6cb37a1d250e | CMN.00002 |
| 10| 88926399-3697-4e15-8d25-6cb37a1d250e | CMN.00004 |
+---+--------------------------------------+-----------+
I've grouped it with grouped = df.groupby('envelopeid')
I need to remove groups from the dataframe and keep only the groups whose messages are exactly (CMN.00002) or (CMN.00002 and CMN.00004).
Desired dataframe:
+---+--------------------------------------+-----------+
| | envelopeid | message |
+---+--------------------------------------+-----------+
| 7 | 8f673ae3-0293-4aca-ad6b-572f138515e6 | CMN.00002 |
| 9 | 88926399-3697-4e15-8d25-6cb37a1d250e | CMN.00002 |
| 10| 88926399-3697-4e15-8d25-6cb37a1d250e | CMN.00004 |
+---+--------------------------------------+-----------+
I tried:
(grouped.message.transform(lambda x: x.eq('CMN.00001').any() or (x.eq('CMN.00002').any() and x.ne('CMN.00002' or 'CMN.00004').any()) or x.ne('CMN.00002').all()))
but it does not work properly.
Try:
grouped = df.loc[df['message'].isin(['CMN.00002', 'CMN.00002', 'CMN.00004'])].groupby('envelopeid')
Try this: df[df.message== 'CMN.00002']
outdf = df.groupby('envelopeid').filter(lambda x: tuple(x.message)== ('CMN.00002',) or tuple(x.message)== ('CMN.00002','CMN.00004'))
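If the order of messages inside a group is not guaranteed, a set-based variant of the same filter idea may be safer (a sketch along the same lines, not taken from the original answer):

# Keep a group only when its exact set of messages is one of the allowed combinations.
allowed = [{'CMN.00002'}, {'CMN.00002', 'CMN.00004'}]
outdf = df.groupby('envelopeid').filter(lambda g: set(g.message) in allowed)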
So I figured it out.
The resulting dataframe contains only the groups that have only the CMN.00002 message, or CMN.00002 and CMN.00004, which is what I need.
I used filter instead of transform.
I am new to Spark (with Python) and have searched all over for solutions to what I'm trying to do, but haven't found anything that relates to this.
I have two data frames, one called quantity and another called price.
Quantity
+----+------------+------+----------+
| ID | Price_perf | Size | Sourceid |
+----+------------+------+----------+
| 1  | NULL       | 3    | 223      |
| 1  | NULL       | 3    | 223      |
| 1  | NULL       | 3    | 220      |
| 2  | NULL       | 6    | 290      |
| 2  | NULL       | 6    | 270      |
+----+------------+------+----------+
Price
+----+-------+------+------------+----------+
| ID | Price | Size | Date       | Sourceid |
+----+-------+------+------------+----------+
| 1  | 7.5   | 3    | 2017-01-03 | 223      |
| 1  | 39    | 3    | 2012-01-06 | 223      |
| 1  | 12    | 3    | 2009-04-01 | 223      |
| 1  | 28    | 3    | 2011-11-08 | 223      |
| 1  | 9     | 3    | 2012-09-12 | 223      |
| 1  | 15    | 3    | 2017-07-03 | 220      |
| 1  | 10    | 3    | 2017-05-03 | 220      |
| 1  | 33    | 3    | 2012-03-08 | 220      |
+----+-------+------+------------+----------+
First, I am trying to join the above two dataframes and return a data frame that contains only rows that have the same ID and Sourceid.
I have tried to do that by doing the following:
c= quantity.join(price,price.id==quantity.id, price.souceid==quantity.sourceid "left")
c.show()
This is the result I want to get but I'm not getting:
+----+------------+-------+------------+------+----------+
| ID | Price_perf | Price | Date       | Size | Sourceid |
+----+------------+-------+------------+------+----------+
| 1  | NULL       | 7.5   | 2017-01-03 | 3    | 223      |
| 1  | NULL       | 9     | 2012-01-06 | 3    | 223      |
| 1  | NULL       | 12    | 2009-04-01 | 3    | 223      |
| 1  | NULL       | 28    | 2011-11-08 | 3    | 223      |
| 1  | NULL       | 9     | 2012-09-12 | 3    | 223      |
| 1  | NULL       | 15    | 2017-07-03 | 3    | 220      |
| 1  | NULL       | 10    | 2017-05-03 | 3    | 220      |
| 1  | NULL       | 33    | 2012-03-08 | 3    | 220      |
+----+------------+-------+------------+------+----------+
Second, after doing the join, I'm trying to get the difference in price between the min and max dates in the joined data frame and store it as Price_perf.
This is what I've tried:
def modify_values(c):
for x in c:
if quantity.sourceid == price.sourceid:
return price.price(min(Date)) - price.price(max(Date))
else:
return "Not found"
ol_val = udf(modify_values, StringType())
ol_val.show()
So the final output should look something like this:
+----+------------+-------+------------+------+----------+
| ID | Price_perf | Price | Date       | Size | Sourceid |
+----+------------+-------+------------+------+----------+
| 1  | 4.5        | 7.5   | 2017-01-03 | 3    | 223      |
| 1  | 4.5        | 9     | 2012-01-06 | 3    | 223      |
| 1  | 4.5        | 12    | 2009-04-01 | 3    | 223      |
| 1  | 4.5        | 28    | 2011-11-08 | 3    | 223      |
| 1  | 4.5        | 9     | 2012-09-12 | 3    | 223      |
| 1  | 18         | 15    | 2017-07-03 | 3    | 220      |
| 1  | 18         | 10    | 2017-05-03 | 3    | 220      |
| 1  | 18         | 33    | 2012-03-08 | 3    | 220      |
+----+------------+-------+------------+------+----------+
If you only want matches then you actually want an inner join, which is the default type. And since your join column names are the same in both dataframes, you can just list them, so the resulting join has only one column for each instead of two. (When you do pass multiple predicates explicitly, combine them with & or put them in a list rather than separating them with a comma.)
c = quantity.join(price,['id','sourceid'])
c.show()
As far as your Price_perf goes, I'm not sure what you really want. The min and max are going to be constant within the same group, so your example doesn't make a lot of sense currently.
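If the intent is (price at the earliest date) minus (price at the latest date) per (ID, Sourceid), which would match the 4.5 and 18 in your example, a window-based sketch might look like this (this is an assumption about the semantics, not tested on your data):

from pyspark.sql import Window, functions as F

# Inner join on the shared key columns, as above.
c = quantity.join(price, ['ID', 'Sourceid'])

# For every row of an (ID, Sourceid) group, take the price at the earliest
# date minus the price at the latest date and store it as Price_perf.
w = Window.partitionBy('ID', 'Sourceid')
c = c.withColumn(
    'Price_perf',
    F.first('Price').over(w.orderBy('Date'))
    - F.first('Price').over(w.orderBy(F.desc('Date')))
)
c.show()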
Oil Blending
An oil company produces three brands of oil: Regular, Multigrade, and
Supreme. Each brand of oil is composed of one or more of four crude stocks, each having a different lubrication index. The relevant data concerning the crude stocks are as follows.
+-------------+-------------------+------------------+--------------------------+
| Crude Stock | Lubrication Index | Cost (€/barrel)  | Supply per day (barrels) |
+-------------+-------------------+------------------+--------------------------+
| 1 | 20 | 7,10 | 1000 |
+-------------+-------------------+------------------+--------------------------+
| 2 | 40 | 8,50 | 1100 |
+-------------+-------------------+------------------+--------------------------+
| 3 | 30 | 7,70 | 1200 |
+-------------+-------------------+------------------+--------------------------+
| 4 | 55 | 9,00 | 1100 |
+-------------+-------------------+------------------+--------------------------+
Each brand of oil must meet a minimum standard for a lubrication index, and each brand
thus sells at a different price. The relevant data concerning the three brands of oil are as
follows.
+------------+---------------------------+---------------+--------------+
| Brand | Minimum Lubrication index | Selling price | Daily demand |
+------------+---------------------------+---------------+--------------+
| Regular | 25 | 8,50 | 2000 |
+------------+---------------------------+---------------+--------------+
| Multigrade | 35 | 9,00 | 1500 |
+------------+---------------------------+---------------+--------------+
| Supreme | 50 | 10,00 | 750 |
+------------+---------------------------+---------------+--------------+
Determine an optimal output plan for a single day, assuming that production can be either
sold or else stored at negligible cost.
The daily demand figures are subject to alternative interpretations. Investigate the
following:
(a) The daily demands represent potential sales. In other words, the model should contain demand ceilings (upper limits). What is the optimal profit?
(b) The daily demands are strict obligations. In other words, the model should contain demand constraints that are met precisely. What is the optimal profit?
(c) The daily demands represent minimum sales commitments, but all output can be sold. In other words, the model should permit production to exceed the daily commitments. What is the optimal profit?
QUESTION
I've been able to construct the following model in Excel and solve it via OpenSolver, but I'm only able to integrate the mix for the Regular Oil.
I'm trying to work my way through the book Optimization Modeling with Spreadsheets by Kenneth R. Baker, but I'm stuck with this exercise. While I could transfer the logic from another blending problem, I'm not sure how to construct the model for multiple blends at once.
I modeled the problem as a minimization problem on the cost of the different crude stocks. Using the Lubrication Index data, I built the constraint for the R-Lub Index as a linear constraint. So far the answer seems to be right for the Regular oil. However, using this approach I have no idea how to include even the second (Multigrade) oil.
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| Decision Variables | | | | | | | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| | C1 | C2 | C3 | C4 | | | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| Inputs | 1000 | 0 | 1000 | 0 | | | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| | | | | | | | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| Objective Function | | | | | | Total | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| Cost | 7,10 € | 8,50 € | 7,70 € | 9,00 € | | 14.800,00 € | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| | | | | | | | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| Constraints | | | | | | LHS | | RHS |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| C1 supply | 1 | | | | | 1000 | <= | 1000 |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| C2 supply | | 1 | | | | 0 | <= | 1100 |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| C3 supply | | | 1 | | | 1000 | <= | 1200 |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| C4 supply | | | | 1 | | 0 | <= | 1100 |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| R- Lub Index | -5 | 15 | 5 | 30 | | 0 | >= | 0 |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| R- Output | 1 | 1 | 1 | 1 | | 2000 | = | 2000 |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| | | | | | | | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| Blending Data | | | | | | | | |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
| R- Lub | 20 | 40 | 30 | 55 | | 25 | >= | 25 |
+--------------------+--------+--------+--------+--------+--+-------------+----+------+
Here is the model with Excel formulas:
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| Decision Variables | | | | | | | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| | C1 | C2 | C3 | C4 | | | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| Inputs | 1000 | 0 | 1000 | 0 | | | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| | | | | | | | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| Objective Function | | | | | | Total | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| Cost | 7,1 | 8,5 | 7,7 | 9 | | =SUMMENPRODUKT(B5:E5;B8:E8) | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| | | | | | | | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| Constraints | | | | | | LHS | | RHS |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| C1 supply | 1 | | | | | =SUMMENPRODUKT($B$5:$E$5;B11:E11) | <= | 1000 |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| C2 supply | | 1 | | | | =SUMMENPRODUKT($B$5:$E$5;B12:E12) | <= | 1100 |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| C3 supply | | | 1 | | | =SUMMENPRODUKT($B$5:$E$5;B13:E13) | <= | 1200 |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| C4 supply | | | | 1 | | =SUMMENPRODUKT($B$5:$E$5;B14:E14) | <= | 1100 |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| R- Lub Index | -5 | 15 | 5 | 30 | | =SUMMENPRODUKT($B$5:$E$5;B15:E15) | >= | 0 |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| R- Output | 1 | 1 | 1 | 1 | | =SUMMENPRODUKT($B$5:$E$5;B16:E16) | = | 2000 |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| | | | | | | | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| Blending Data | | | | | | | | |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
| R- Lub | 20 | 40 | 30 | 55 | | =SUMMENPRODUKT($B$5:$E$5;B19:E19)/SUMME($B$5:$E$5) | >= | 25 |
+--------------------+------+-----+------+----+--+----------------------------------------------------+----+------+
A nudge in the right direction would be a tremendous help.
I think you want your objective to be Profit, which I would define as the total sales value minus the total cost.
To include all blends, develop calculations for Volume produced, Lube Index, Cost, and Value for each blend. Apply constraints for volume of stock used, volume produced, and lube index, and optimize for Profit.
I put together the model as follows ...
Columns A through D is the information you provided.
The 10's in G2:J5 are seed values for the stock volumes used in each blend. Solver will manipulate these.
Column K contains the total product volume produced. These will be constrained in different ways, as per your investigation (a), (b), and (c). It is =SUM(G3:J3) filled down.
Column L is the Lube Index for the product. As you noted, it is a linear blend - this is typically not true for blending problems. These values will be constrained in Solver. It is {=SUMPRODUCT(G3:J3,TRANSPOSE($B$2:$B$5))/$K3} filled down. Note that it is a Control-Shift-Enter (CSE) formula, required because of the TRANSPOSE.
Column M is the cost of the stock used to create the product. This is used in the Profit calculation. It is {=SUMPRODUCT(G3:J3,TRANSPOSE($C$2:$C$5))}, filled down. This is also a CSE formula.
Column N is the value of the product produced. This is used in the Profit calculation. It is =K3*C8 filled down.
Row 7 is the total stock volume used to generate all blends. These values will be constrained in Solver. It is =SUM(G3:G5), filled to the right.
The profit calculation is =SUM(N3:N5)-SUM(M3:M5).
The Solver setup does the following ...
The objective is to maximize profit.
It will do this by manipulating the amount of stock that goes into each blend.
The first four constraints ($G$7 through $J$7) ensure the amount of stock available is not violated.
The next three constraints ($K$3 through $K$5) are for case (a): make no more product than there is demand.
The last three constraints ($L$3 through $L$5) make sure the lube index meets the minimum specification.
Not shown - I selected options for GRG Nonlinear and selected "Use Multistart" and deselected "Require Bounds on Variables".
Running Solver with these settings produces the result for case (a).
For case (b), change the constraints on Column K to be "=" instead of "<=".
For case (c), change the constraints on Column K to be ">=".
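For reference, here is a minimal sketch of case (a) formulated as a linear program in Python, using scipy.optimize.linprog purely for illustration (the original model lives in Excel/OpenSolver, and the variable and constraint layout below is my own assumption following the description above). Cases (b) and (c) would swap the demand ceilings for equality or lower-bound constraints.

import numpy as np
from scipy.optimize import linprog

cost = [7.10, 8.50, 7.70, 9.00]      # cost per barrel of crude stocks 1-4 (EUR)
supply = [1000, 1100, 1200, 1100]    # daily supply per crude stock (barrels)
lub = [20, 40, 30, 55]               # lubrication index per crude stock
price = [8.50, 9.00, 10.00]          # selling price per brand: Regular, Multigrade, Supreme
min_lub = [25, 35, 50]               # minimum lubrication index per brand
demand = [2000, 1500, 750]           # daily demand per brand

n_stocks, n_brands = 4, 3

def idx(i, j):
    # flatten (crude stock i, brand j) into a single variable index
    return i * n_brands + j

# Objective: maximize sum((price[j] - cost[i]) * x[i, j]); linprog minimizes,
# so use the negated coefficients.
c = np.array([cost[i] - price[j] for i in range(n_stocks) for j in range(n_brands)])

A_ub, b_ub = [], []

# Supply limits: total use of crude stock i across all brands <= supply[i]
for i in range(n_stocks):
    row = np.zeros(n_stocks * n_brands)
    for j in range(n_brands):
        row[idx(i, j)] = 1
    A_ub.append(row)
    b_ub.append(supply[i])

# Case (a) demand ceilings: total volume of brand j <= demand[j]
for j in range(n_brands):
    row = np.zeros(n_stocks * n_brands)
    for i in range(n_stocks):
        row[idx(i, j)] = 1
    A_ub.append(row)
    b_ub.append(demand[j])

# Lubrication index (linear blend): sum_i (lub[i] - min_lub[j]) * x[i, j] >= 0
for j in range(n_brands):
    row = np.zeros(n_stocks * n_brands)
    for i in range(n_stocks):
        row[idx(i, j)] = -(lub[i] - min_lub[j])
    A_ub.append(row)
    b_ub.append(0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print("Optimal daily profit, case (a):", round(-res.fun, 2))
print("Barrels of stock i blended into brand j:")
print(res.x.reshape(n_stocks, n_brands).round(1))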
I think I came up with a solution, but I'm unsure if this is correct.
| Decision Variables | | | | | | | | | | | | | | | | |
|--------------------|---------|--------|--------|--------|-------------|--------|--------|--------|--------|--------|--------|--------|---|--------------------------------|----|------|
| | C1R | C1M | C1S | C2R | C2M | C2S | C3R | C3M | C3S | C4R | C4M | C4S | | | | |
| Inputs | 1000 | 0 | 0 | 800 | 0 | 300 | 0 | 1200 | 0 | 200 | 300 | 600 | | | | |
| | | | | | | | | | | | | | | | | |
| Objective Function | | | | | | | | | | | | | | Total Profit (Selling - Cost) | | |
| Cost | 7,10 € | 7,10 € | 7,10 € | 8,50 € | 8,50 € | 8,50 € | 7,70 € | 7,70 € | 7,70 € | 9,00 € | 9,00 € | 9,00 € | | 3.910,00 € | | |
| | | | | | | | | | | | | | | | | |
| Constraints | | | | | | | | | | | | | | LHS | | RHS |
| Regular | -5 | | | 15 | | | 5 | | | 30 | | | | 13000 | >= | 0 |
| Multi | | -15 | | | 5 | | | -5 | | | 20 | | | 0 | >= | 0 |
| Supreme | | | -30 | | | -10 | | | -20 | | | 5 | | 0 | >= | 0 |
| C1 Supply | 1 | 1 | 1 | | | | | | | | | | | 1000 | <= | 1000 |
| C2 Supply | | | | 1 | 1 | 1 | | | | | | | | 1100 | <= | 1100 |
| C3 Supply | | | | | | | 1 | 1 | 1 | | | | | 1200 | <= | 1200 |
| C4 Supply | | | | | | | | | | 1 | 1 | 1 | | 1100 | <= | 1100 |
| Regular Demand | 1 | | | 1 | | | 1 | | | 1 | | | | 2000 | >= | 2000 |
| Multi Demand | | 1 | | | 1 | | | 1 | | | 1 | | | 1500 | >= | 1500 |
| Supreme Demand | | | 1 | | | 1 | | | 1 | | | 1 | | 900 | >= | 750 |
| | | | | | | | | | | | | | | | | |
| | | | | | | | | | | | | | | | | |
| Selling | | | | | | | | | | | | | | | | |
| Regular | 8,50 € | x | 2000 | = | 17.000,00 € | | | | | | | | | | | |
| Multi | 9,00 € | x | 1500 | = | 13.500,00 € | | | | | | | | | | | |
| Supreme | 10,00 € | x | 900 | = | 9.000,00 € | | | | | | | | | | | |
| | | | | | 39.500,00 € | | | | | | | | | | | |