I am trying to solve a maximization problem in Pyomo that contains a recursive relationship: I am maximizing the revenue from a battery, which requires updating its state of charge every hour (this update is the recursive relationship). I am using the following code:
import pyomo
import numpy as np
from pyomo.environ import *
import pandas as pd
model = ConcreteModel()
N = 24 #number of hours
lmpdata = np.random.randint(1,10,24) #LMP Data (to be imported from MISO/PJM)
R = 0 #discount
eta_s = 0.99 #self-discharge efficiency
eta_c = 0.95 #round-trip efficiency
gammas_min = 0.1 #fraction of energy capacity to reserve for discharging
gammas_max = 0.05 #fraction of energy capacity to reserve for charging
S_bar = 50 #energy capacity
Q_bar = 50 #energy charge/discharge rating
model.qd = Var(range(N), within = NonNegativeReals) #variables for energy sold at time t
model.qr = Var(range(N), within = NonNegativeReals) #variables for energy purchased at time t
model.obj = Objective(expr = sum((model.qd[i]-model.qr[i])*lmpdata[i]*np.exp(-R*(i+1)) for i in range(N)), sense = maximize) #objective function
model.SOC = np.zeros(N) #state of charge (s(t) in Sandia's Model)
model.SOC[0] = 25 #SOC at hour 0
#recursion relation describing the SOC
def con_rule1(model,i):
    model.SOC[i] = eta_s*model.SOC[i-1] + eta_c*model.qr[i-1] - model.qd[i-1]
    return (eta_s*model.SOC[i-1] + eta_c*model.qr[i-1] - model.qd[i-1] == model.SOC[i])
#def con_rule1(model,i):
model.con1 = Constraint(range(1,N), rule = con_rule1)
#model.con2 = Constraint(expr = eta_s*SOC[N-1] + eta_c*model.qr[N-1] - model.qd[N-1] == SOC[0]) #SOC relation for the last hour
#SOC boundaries
def con_rule2(model,i):
    return (gammas_min*S_bar <= eta_s*model.SOC[i] + eta_c*model.qr[i] - model.qd[i] <= (1-gammas_max)*S_bar)
model.con3 = Constraint(range(N), rule = con_rule2)
#limits the total energy charged over each time step to the energy
#charge limit (derived from the power limit)
#It restricts the throughput based on the power rating
def con_rule3(model,i):
    return (0 <= model.qr[i]+model.qd[i] <= Q_bar)
model.con4 = Constraint(range(N),rule = con_rule3)
def pyomo_postprocess(options=None, instance=None, results=None):
    model.qd.display()
    model.qr.display()

model.pprint()
However, when I try to run the code, I am getting the following error:
Implicit conversion of Pyomo NumericValue type `<class 'pyomo.core.kernel.expr_coopr3._SumExpression'>' to a float is
disabled. This error is often the result of using Pyomo components as
arguments to one of the Python built-in math module functions when
defining expressions. Avoid this error by using Pyomo-provided math
functions.
I could not find any reference to Pyomo's math functions in its documentation. It would be great if anyone could help me solve this problem!
Pyomo defines its own set of math functions for operations like exp, log, sin, etc. If you want to use any of these functions in your Pyomo expressions, you should make sure they are the ones provided by Pyomo and not the ones from some other Python package. I think the issue with your model is that you are using np.exp in your Objective function. The Pyomo math functions are automatically imported when you import pyomo.environ, so you should be able to replace np.exp with exp to get the Pyomo-defined function, as in the sketch below.
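As a minimal sketch, the Objective line from the question would then become (same lmpdata, R, and N as above; exp is already in scope thanks to from pyomo.environ import *):
model.obj = Objective(expr = sum((model.qd[i]-model.qr[i])*lmpdata[i]*exp(-R*(i+1)) for i in range(N)), sense = maximize) #Pyomo's exp, not np.exp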
I am trying to do an optimization of an energy system consisting of 2 batteries that are supposed to supply energy when a signal (a request for energy) is received.
I have created an abstract model in Pyomo to represent my problem, and so far I have managed to make it work. However, my problem is that my data will continuously change depending on the results of my optimization. For example, if a signal is received and the batteries provide some energy, then the State of Charge (SoC) will decrease (as there is less charge). I want to be able to update this value so that at the next optimization (when a subsequent signal comes in) my problem is solved using the real SoC.
Another way to formulate this would be: is there a way to use dataframes as input parameters to my Pyomo optimization?
This is my code. My set is called ASSETS because technically I would have multiple assets of different sorts (i.e. a classic lithium battery and maybe hydrogen storage).
# iterative1.py
from pyomo.environ import *
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
## CREATING MODEL, SET AND PARAM
model = AbstractModel()
# Sets of PTU for which the model is being created for
# model.PTU = Set()
model.ASSETS = Set()
# Set up the param
model.MinPower = Param(model.ASSETS)
model.MaxPower = Param(model.ASSETS)
model.Capacity = Param(model.ASSETS)
model.SoC = Param(model.ASSETS)
model.P_rated = Param(model.ASSETS)
# DATA FROM the EMS csv
FR = 20 #request of power
# model.SoC = 0.9
P_rated = 1 #how much the asset is already in use during the request of power
# Decision variable
# model.Psh = Var(model.PTU, within=Reals)
model.Psh = Var(model.ASSETS, within=Reals)
# Objective Function
def objective_rule(model):
    return FR - sum(model.Psh[i] for i in model.ASSETS)
model.PowerProvided = Objective(rule=objective_rule, sense=minimize)
# Constraints
# defining the rules
def MinPowerRated_rule(model,i): # Min rated power limit
    return - model.MaxPower[i] <= model.Psh[i]
def MaxPowerRated_rule(model,i): # Max rated power limit
    return model.Psh[i] <= model.MaxPower[i]
# def PowerRated_rule(model,i):
#     return model.MinPower[i] <= model.Psh[i] <= model.MaxPower[i]
def MaxCapacityLimits_rule(model,i): # Checks that the power flex is within the limits of the storage (discharge limit)
    return model.Psh[i] <= model.Capacity[i]*model.SoC[i]/4
def MinCapacityLimits_rule(model,i): # Checks that the power flex is within the limits of the storage (charge limit)
    return model.Psh[i] >= - model.Capacity[i]*model.SoC[i]/4
def MaxPowerAvailable_rule(model,i):
    return model.Psh[i] <= model.MaxPower[i] - P_rated
    # return model.Psh[i] <= model.MaxPower[i] - model.P_rated[i]
def MinPowerAvailable_rule(model,i):
    return model.Psh[i] >= - (model.MaxPower[i] - P_rated)
    # return model.Psh[i] >= - (model.MaxPower[i] - model.P_rated[i])
# activating the constraints
model.MaxPowerRated = Constraint(model.ASSETS, rule=MaxPowerRated_rule)
model.MinPowerRated = Constraint(model.ASSETS, rule=MinPowerRated_rule)
model.MaxCapacityLimits = Constraint(model.ASSETS, rule=MaxCapacityLimits_rule)
model.MinCapacityLimits = Constraint(model.ASSETS, rule=MinCapacityLimits_rule)
model.MaxPowerAvailable = Constraint(model.ASSETS, rule=MaxPowerAvailable_rule)
model.MinPowerAvailable = Constraint(model.ASSETS, rule=MinPowerAvailable_rule)
#create model instance
data = DataPortal() #DataPortal handles the .dat file
data.load(filename="abstract.dat", model=model)
instance = model.create_instance(data)
opt = SolverFactory('glpk')
opt.solve(instance)
and I am using the following .dat file to get the parameters for the constraints and objective function.
set ASSETS := 1 2;
param MinPower :=
1 0
2 0;
param MaxPower :=
1 15
2 15;
param Capacity :=
1 30
2 30;
param SoC :=
1 0.9
2 0.9;
I have tried to replace SoC with a dataframe that I would update after every optimization, but unfortunately I get an error.
I'd advise a couple of things...
Switch over to a ConcreteModel in Pyomo. It is easier to deal with: you can use basic Python to read data of different types (if needed) or just encode the data into the program, and it will be more adaptable than trying to change something in an AbstractModel that you have loaded.
Your model is "instantaneous" or static right now. It does not represent the passage of time, which is fine. So if I understand you right, some part of the input data will change and you want to re-solve with no dependency on prior solutions. So, as a proof of concept, you could (a) make a concrete model, (b) get it running and QA it, and (c) put it inside a loop in which you change the input parameter, solve, and re-solve, as in the sketch below.
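A minimal sketch of that loop (the asset data and the SoC update rule here are made up for illustration, not taken from your model):
from pyomo.environ import *
assets = [1, 2]
soc = {1: 0.9, 2: 0.9}       # state of charge, updated between solves
capacity = {1: 30, 2: 30}
max_power = {1: 15, 2: 15}
FR = 20                      # power request
for signal in range(3):      # one solve per incoming signal
    model = ConcreteModel()
    model.ASSETS = Set(initialize=assets)
    model.Psh = Var(model.ASSETS, within=Reals)
    model.obj = Objective(expr=FR - sum(model.Psh[i] for i in assets), sense=minimize)
    model.discharge = Constraint(model.ASSETS, rule=lambda m, i: m.Psh[i] <= capacity[i]*soc[i]/4)
    model.charge = Constraint(model.ASSETS, rule=lambda m, i: m.Psh[i] >= -capacity[i]*soc[i]/4)
    model.max_p = Constraint(model.ASSETS, rule=lambda m, i: m.Psh[i] <= max_power[i])
    model.min_p = Constraint(model.ASSETS, rule=lambda m, i: m.Psh[i] >= -max_power[i])
    SolverFactory('glpk').solve(model)
    for i in assets:         # illustrative SoC update from the solution
        soc[i] = max(soc[i] - value(model.Psh[i])/(4*capacity[i]), 0.0)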
I have been looking around, and the best way to do it is to use a dictionary.
Pyomo has some examples that can be used as a template. Values from a dataframe can be inserted directly into the dictionary, which creates the dynamic aspect (in the sense that the parameters will change according to the dataframe); a sketch follows below.
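As a minimal sketch (assuming the abstract model above and a hypothetical dataframe df with one row per asset and a SoC column):
import pandas as pd
df = pd.DataFrame({"asset": [1, 2], "SoC": [0.9, 0.9]})
# Dict format expected by create_instance: {None: {component name: {index: value}}}
data = {None: {
    "ASSETS": {None: list(df["asset"])},
    "MinPower": {a: 0 for a in df["asset"]},
    "MaxPower": {a: 15 for a in df["asset"]},
    "Capacity": {a: 30 for a in df["asset"]},
    "SoC": dict(zip(df["asset"], df["SoC"])),
}}
instance = model.create_instance(data)
# After each solve, update df, rebuild the dict, and create a new instance.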
Someone else has advised using a ConcreteModel, but then you will have a very static (and very long to write) model, which is not ideal in the majority of cases.
Given a dataset consisting of money transactions, I am trying to use kernel density estimation to form clusters of transactions by their transaction amount. To do this, I identify the local minima of the density and use these as boundaries for the different clusters. I am able to do this on the whole dataset.
However, now I want to use KDE again, but on groups of data. That is, I want to estimate a separate kernel density for each group of transactions. The transactions are grouped by the counterparty bank account from which they are sent. Currently, I use a naïve approach where I just loop over all counterparties. However, this is very inefficient, and since I am using Spark I would like to be able to do this in parallel. I am not sure how to do this, as I am quite new to PySpark.
Any suggestions on how to do this?
Code that executes KDE over all data
import numpy as np
from pyspark.sql import functions as f
from pyspark.mllib.stat import KernelDensity
from scipy.signal import argrelextrema
from matplotlib.pyplot import plot
from bisect import bisect
dat_rdd = sdf_pos.select("amount").rdd
dat_rdd_amounts = dat_rdd.map(lambda x: float(x[0]))
kd = KernelDensity()
kd.setBandwidth(10.0)
kd.setSample(dat_rdd_amounts)
s = np.linspace(0, 3000, num=50)
e = kd.estimate(s)
mi = argrelextrema(e, np.less)[0]
print("Minima:", s[mi])
minima_array = f.array([f.lit(i) for i in s[mi]])
user_func = f.udf(bisect)
sdf_pos = sdf_pos.withColumn("amount_group",
user_func(minima_array, f.col("amount")).cast('integer'))
Code that executes KDE separately for each group
counter_parties = sdf_pos.select("CP").distinct().collect()
sdf_pos = sdf_pos.withColumn("minima_array", f.array(f.lit(-1)))
dat_rdd = sdf_pos.select(["amount", "CP"]).rdd
for cp in counter_parties:
    dat_rdd_amounts = dat_rdd.filter(lambda y: y[1] == cp[0]).map(lambda x: float(x[0]))
    kd = KernelDensity()
    kd.setBandwidth(10.0)
    kd.setSample(dat_rdd_amounts)
    s = np.linspace(0, 3000, num=50)
    e = kd.estimate(s)
    mi = argrelextrema(e, np.less)[0]
    minima_array = f.array([f.lit(i) for i in s[mi]])
    sdf_pos = sdf_pos.withColumn("minima_array",
                                 f.when(f.col("CP") == cp[0], minima_array).otherwise(f.col("minima_array")))
user_func = f.udf(bisect)
sdf_pos = sdf_pos.withColumn("amount_group", user_func(f.col("minima_array"), f.col("amount")))
I am trying to do a comparative Monte Carlo calculation with brightway2 using different impact assessment methods. I thought about using the switch_method method to be more efficient, since the technosphere matrix is the same for a given iteration. However, I am getting an assertion error. Code to reproduce it could be something like this:
import brightway2 as bw
bw.projects.set_current('ei35') # project with ecoinvent 3.5
db = bw.Database("ei_35cutoff")
# select two different transport activities to compare
activity_name = 'transport, freight, lorry >32 metric ton, EURO4'
for activity in bw.Database("ei_35cutoff"):
    if activity['name'] == activity_name:
        truckE4 = bw.Database("ei_35cutoff").get(activity['code'])
        print(truckE4['name'])
        break
activity_name = 'transport, freight, lorry >32 metric ton, EURO6'
for activity in bw.Database("ei_35cutoff"):
    if activity['name'] == activity_name:
        truckE6 = bw.Database("ei_35cutoff").get(activity['code'])
        print(truckE6['name'])
        break
demands = [{truckE4: 1}, {truckE6: 1}]
# impact assessment method:
recipe_midpoint = [method for method in bw.methods.keys()
                   if method[0] == "ReCiPe Midpoint (H)"]
mc_mm = bw.MonteCarloLCA(demands[0], recipe_midpoint[0])
next(mc_mm)
If I try switch_method I get the assertion error:
mc_mm.switch_method(recipe_midpoint[1])
assert mc_mm.method==recipe_midpoint[1]
mc_mm.redo_lcia()
next(mc_mm)
Am I doing something wrong here?
I usually store characterization factor matrices in a temporary dict and multiply these cfs with the LCI resulting from MonteCarloLCA directly.
import brightway2 as bw
import numpy as np
# Generate objects for analysis
bw.projects.set_current("my_mcs")
my_db = bw.Database('db')
my_act = my_db.random()
my_demand = {my_act:1}
my_methods = [bw.methods.random() for _ in range(2)]
I wrote this simple function to get characterization factor matrices for the product system I will generate in the MonteCarloLCA. It uses a temporary "sacrificial" LCA object that will have the same A and B matrices as the MonteCarloLCA.
This may seem like a waste of time, but it is only done once, and it will make the Monte Carlo iterations quicker and simpler.
def get_C_matrices(demand, list_of_methods):
    """Return a dict with {method tuple: cf_matrix} for a list of methods.

    Uses a "sacrificial LCA" with exactly the same demand as will be used
    in the MonteCarloLCA.
    """
    C_matrices = {}
    sacrificial_LCA = bw.LCA(demand)
    sacrificial_LCA.lci()
    for method in list_of_methods:
        sacrificial_LCA.switch_method(method)
        C_matrices[method] = sacrificial_LCA.characterization_matrix
    return C_matrices
Then:
# Create array that will store mc results.
# Shape is (number of methods, number of iterations)
my_iterations = 10
mc_scores = np.empty(shape=[len(my_methods), my_iterations])
# Instantiate MonteCarloLCA object
my_mc = bw.MonteCarloLCA(my_demand)
# Get characterization factor matrices
my_C_matrices = get_C_matrices(my_demand, my_methods)
# Generate results
for iteration in range(my_iterations):
    lci = next(my_mc)
    for i, m in enumerate(my_methods):
        mc_scores[i, iteration] = (my_C_matrices[m]*my_mc.inventory).sum()
All your results are in mc_scores. Each row corresponds to a method, each column to an MC iteration.
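For example, to summarize the scores per method afterwards (using the my_methods and mc_scores defined above):
for i, m in enumerate(my_methods):
    print(m, mc_scores[i].mean(), mc_scores[i].std())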
Not very elegant, but try this:
iterations = 10
simulations = []
for _ in range(iterations):
    mc_mm = MonteCarloLCA(demands[0], recipe_midpoint[0])
    next(mc_mm)
    mcresults = []
    for i in demands:
        print(i)
        for m in recipe_midpoint[0:3]:
            mc_mm.switch_method(m)
            print(mc_mm.method)
            mc_mm.redo_lcia(i)
            print(mc_mm.score)
            mcresults.append(mc_mm.score)
    simulations.append(mcresults)
CC_truckE4 = [i[1] for i in simulations] # Climate Change, truck E4
CC_truckE6 = [i[1+3] for i in simulations] # Climate Change, truck E6
from matplotlib import pyplot as plt
plt.plot(CC_truckE4, CC_truckE6, 'o')
If you then run a test doing the simulation twice for the same demand vector, by setting demands = [{truckE4: 1}, {truckE4: 1}], and plot the results, you should get a straight line. This means that you are doing dependent sampling, re-using the same technosphere matrix for each demand vector and for each LCIA method. I am not 100% sure of this, but I hope it answers your question.
I'm trying to figure out how to implement a weighted cumulative sum primitive for Featuretools. The weighting should depend on time_since_last, like
cum_sum(amount) = sum_i exp(-a_i) * amount_i
where i ranges over rolling 6-month periods...
Above you find the original question. After a while of trial and error I came up with this code for my purpose, using the data and initial setup for the entity and relationship from here:
import math
import numpy as np
import pandas as pd
import featuretools as ft
from featuretools.primitives import make_trans_primitive, MultiplyNumeric
from featuretools.variable_types import Datetime, Numeric

def weight_time_until(array, time):
    diff = pd.DatetimeIndex(array) - time
    s = np.floor(diff.days/365/0.5)
    aWidth = 9
    a = math.log(0.1) / (-(aWidth - 1))
    w = np.exp(-a*s)
    return w
WeightTimeUntil = make_trans_primitive(function=weight_time_until,
                                       input_types=[Datetime],
                                       return_type=Numeric,
                                       uses_calc_time=True,
                                       description="Calc weight using time until the cutoff time",
                                       name="weight_time_until")
features, feature_names = ft.dfs(entityset=es, target_entity='clients',
                                 agg_primitives=['sum'],
                                 trans_primitives=[WeightTimeUntil, MultiplyNumeric])
When I do the above, I come close to the feature I want, but in the end I do not get it right, which I do not understand. I got the feature
SUM(loans.WEIGHT_TIME_UNTIL(loan_start))
but not
SUM(loans.loan_amount * loans.WEIGHT_TIME_UNTIL(loan_start))
What did I miss here?
I tried further. My guess was a type mismatch, but the "types" are the same. Anyway, I tried the following:
1) es["loans"].convert_variable_type("loan_amount",ft.variable_types.Numeric)
2) loans["loan_amount_"] = loans["loan_amount"]*1.0
For (1) as well as for (2) I get the more promising resulting feature:
loan_amount_ * WEIGHT_TIME_UNTIL(loan_start)
and also
loan_amount * WEIGHT_TIME_UNTIL(loan_start)
but only when I use loans as the target entity instead of clients, which actually was not my intention.
This primitive doesn't currently exist. However, you can create your own custom primitive to accomplish this calculation.
Here is an example calculating the rolling sum, which can be updated to do a weighted sum using the appropriate pandas or Python method; a sketch of one such adaptation follows after the example.
from featuretools.primitives import TransformPrimitive
from featuretools.variable_types import Numeric
class RollingSum(TransformPrimitive):
    """Calculates the rolling sum.

    Description:
        Given a list of values, return the rolling sum.
    """
    name = "rolling_sum"
    input_types = [Numeric]
    return_type = Numeric
    uses_full_entity = True

    def __init__(self, window=1, min_periods=None):
        self.window = window
        self.min_periods = min_periods

    def get_function(self):
        def rolling_sum(values):
            """method is passed a pandas Series"""
            return values.rolling(window=self.window, min_periods=self.min_periods).sum()
        return rolling_sum
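For instance, a minimal sketch of an exponentially weighted variant (the decay rate a and the position-based age are illustrative assumptions, not a Featuretools built-in):
import numpy as np

class ExpWeightedRollingSum(TransformPrimitive):
    """Calculates a rolling sum that down-weights older values by exp(-a * age),
    where age is the position within the window (newest value has age 0).
    """
    name = "exp_weighted_rolling_sum"
    input_types = [Numeric]
    return_type = Numeric
    uses_full_entity = True

    def __init__(self, window=12, a=0.5):
        self.window = window
        self.a = a

    def get_function(self):
        def exp_weighted_rolling_sum(values):
            # values is a pandas Series; weight each window, then sum
            def weighted(win):
                ages = np.arange(len(win))[::-1]  # newest entry has age 0
                return float(np.sum(np.exp(-self.a * ages) * win))
            return values.rolling(window=self.window, min_periods=1).apply(weighted, raw=True)
        return exp_weighted_rolling_sum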
I am using "arch" package of python . I am fitting a GARCH(1,1) model with mean model ARX. After the fitting, we can call the conditional volatility directly. However, I don't know how to call the modeled conditional mean values
Any help?
If you use the attribute resid you can compute the fitted values. For example:
import datetime as dt
import pandas_datareader.data as web
st = dt.datetime(1990,1,1)
en = dt.datetime(2016,1,1)
data = web.get_data_yahoo('^GSPC', start=st, end=en)
returns = 100 * data['Adj Close'].pct_change().dropna()
from arch import arch_model
am = arch_model(returns, mean='AR',lags=1)
res = am.fit(update_freq=5)
fitted = returns - res.resid
fitted.plot()
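For comparison, the conditional volatility you mentioned is available directly on the same results object:
res.conditional_volatility.plot()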