Python GEKKO: Value of parameter changes while solving the model
I face the following problem with GEKKO: some parameters (.Param) change their value (others do not) when the model is solved, and I cannot determine why.
Background: I am currently translating code from EViews (see gennaro.zezza.it) to Python. I use GEKKO to simulate a system of 11 equations (for now). I want to use parameters (instead of constants, which seem to work perfectly fine) because I need to change their values 'exogenously' over time (and thus need an array).
Example: In the following example, an 'economic system' reacts to new government expenditures. I particularly have problems with m.alpha1 and m.alpha2: if they are introduced as .Param, their value changes to 1.0 (instead of 0.6 and 0.4) when the model is solved. How can I stop GEKKO from doing this? (Again, I want to be able to change, e.g., alpha1 to 0.7 after some time x, so lower and upper bounds won't help here; see the short sketch below for the kind of usage I have in mind.)
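To illustrate what I mean by changing a parameter over time, this is roughly the usage I am after (a sketch only: it reuses np, tdur and m from the full code below, and the switch period 30 and the new value 0.7 are arbitrary placeholders):
# Sketch of the intended time-varying parameter
alpha1_path = np.full(tdur, 0.6)   # 0.6 for the first 30 periods
alpha1_path[30:] = 0.7             # 0.7 from period 30 onwards
m.alpha1 = m.Param(value=alpha1_path)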
Thanks for your help!!
Code:
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
import plotly.graph_objects as go
# Initialize model
m = GEKKO(remote=False)
tstart = 1945
tend = 2000
tdur = tend-tstart+1
m.time = np.linspace(0, tend-tstart, tdur)
# Model parameters
m.t = m.Param(value=m.time)
# Exogenous parameters
alpha1_ex = 0.6
alpha2_ex = 0.4
theta_ex = 0.2
w_ex = 1
# -as .Const
m.alpha1 = m.Const(value=alpha1_ex, name='Propensity to consume out of income')
m.alpha2 = m.Const(value=alpha2_ex, name='Propensity to consume out of wealth')
#m.theta = m.Const(value=theta_ex, name='Tax rate')
#m.w = m.Const(value=w_ex, name='Wage rate')
# -as .Param: issues with alpha1 & alpha2
#m.alpha1 = m.Param(value=np.full(tdur,alpha1_ex), name='Propensity to consume out of income')
#m.alpha2 = m.Param(value=np.full(tdur,alpha2_ex), name='Propensity to consume out of wealth')
m.theta = m.Param(value=np.full(tdur,theta_ex), name='Tax rate')
m.w = m.Param(value=np.ones(tdur), name='Wage rate')
# no issues with g_d
m.g_d = m.Param(value=np.zeros(tdur), name='Government goods, demand')
m.g_d[1:] = 20
# Endogenous variables
m.c_d = m.Var(value=0, name='Consumption goods demand by households')
m.c_s = m.Var(value=0, name='Consumption goods supply')
m.g_s = m.Var(value=0, name='Government goods, supply')
m.h_h = m.Var(value=0, name='Cash money held by households')
m.h_s = m.Var(value=0, name='Cash money supplied by government')
m.n_d = m.Var(value=0, name='Demand for labor')
m.n_s = m.Var(value=0, name='Supply for labor')
m.t_d = m.Var(value=0, name='Taxes, "demand"')
m.t_s = m.Var(value=0, name='Taxes, "supply"')
m.y = m.Var(value=0, name='Income (=GDP)')
m.yd = m.Var(value=0, name='Disposable income of households')
# Lag variables
m.h_h_lag = m.Var(value=0, name='Cash money held by households (t-1)')
m.delay(m.h_h,m.h_h_lag,1) # m.h_h_lag = m.h_h(t-1)
m.h_s_lag = m.Var(value=0, name='Cash money supplied by government (t-1)')
m.delay(m.h_s,m.h_s_lag,1)
# Equations
m.Equation(m.c_s == m.c_d)
m.Equation(m.g_s == m.g_d)
m.Equation(m.t_s == m.t_d)
m.Equation(m.n_s == m.n_d)
m.Equation(m.yd == m.w*m.n_s - m.t_s)
m.Equation(m.t_d == m.theta*m.w*m.n_s)
m.Equation(m.c_d == m.alpha1*m.yd + m.alpha2*m.h_h_lag)
m.Equation(m.h_s == m.h_s_lag + m.g_d - m.t_d)
m.Equation(m.h_h == m.h_h_lag + m.yd - m.c_d)
m.Equation(m.y == m.c_s + m.g_s)
m.Equation(m.n_d == m.y/m.w)
# Solve
m.options.IMODE = 4
m.solve(disp=False)
print("Alpha1 = ", m.alpha1.value)
print("Alpha2 = ", m.alpha2.value)
print("Theta = ", m.theta.value)
print("w = ", m.w.value)
# Plot results
fig, axes = plt.subplots(2, 2, sharex=True, figsize=(8, 7))
fig.canvas.manager.set_window_title('Figures Chapter 3')
fig.suptitle('SIM Model - basic')
x_major_ticks = np.arange(0,tdur,5)
axes[0,0].plot(m.time, m.g_d.value, '-', color='black', linewidth=1)
axes[0,0].legend([m.g_d.name],loc=4,fontsize=7)
axes[0,0].grid()
axes[0,0].set_xticks(x_major_ticks)
axes[1,0].plot(m.time, m.y.value, '-', color='red', linewidth=1)
axes[1,0].legend([m.y.name],loc=4,fontsize=7)
axes[1,0].grid()
axes[1,0].set_xlabel('Time (years)')
axes[1,0].set_xticks(x_major_ticks)
axes[0,1].plot(m.time, m.c_d.value, '-', color='blue', linewidth=0.75)
axes[0,1].plot(m.time, m.yd.value, '-', color='green', linewidth=0.75)
axes[0,1].legend([m.c_d.name,m.yd.name],loc=4,fontsize=7)
axes[0,1].grid()
axes[0,1].set_xticks(x_major_ticks)
ln1 = axes[1,1].plot(m.time, m.h_h.value, '-', color='purple', linewidth=0.75)
axes[1,1].tick_params(axis='y', labelcolor='purple')
ax2 = axes[1,1].twinx()
ln2 = ax2.plot(m.time, [a_i - b_i for a_i, b_i in zip(m.h_h, m.h_h_lag)], '-', color='orange', linewidth=0.75)
ax2.tick_params(axis='y', labelcolor='orange')
lns = ln1+ln2
axes[1,1].legend(lns,[m.h_h.name,'Household savings'],loc=4,fontsize=7)
axes[1,1].grid()
axes[1,1].set_xticks(x_major_ticks)
axes[1,1].set_xlabel('Time (years)')
plt.show()
Output #1: with m.alpha1 and m.alpha2 as .Const
Alpha1 = 0.6
Alpha2 = 0.4
Theta = [0.2, 0.2, 0.2, ..., 0.2]  (56 values, all 0.2)
w = [1.0, 1.0, 1.0, ..., 1.0]  (56 values, all 1.0)
Output #2: with m.alpha1 as .Param
Alpha1 = [1.0, 1.0, 1.0, ..., 1.0]  (56 values, all 1.0)
Alpha2 = 0.4
Theta = [0.2, 0.2, 0.2, ..., 0.2]  (56 values, all 0.2)
w = [1.0, 1.0, 1.0, ..., 1.0]  (56 values, all 1.0)
The problem is that the variable name, name='Propensity to consume out of income', is more than 25 characters long.
m.alpha1 = m.Param(value=np.full(tdur,alpha1_ex), name='Propensity to consume out of income')
m.alpha2 = m.Param(value=np.full(tdur,alpha2_ex), name='Propensity to consume out of wealth')
The model file (gk_model0.apm) is produced correctly, but the header of the data file (gk_model0.csv) is truncated to 25 characters. The files are accessible with m.open_folder(). The bug is in this line of gk_write_files.py, where every field (including the header) is written as a string of at most 25 characters.
np.savetxt(os.path.join(self._path,file_name), csv_data.T, delimiter=",", fmt='%1.25s')
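To see why this matters, here is a standalone illustration (plain Python, independent of GEKKO) of what the %1.25s format does to a header that is longer than 25 characters:
# Effect of fmt='%1.25s' on a long CSV header field
header = 'Propensity to consume out of income'
print('%1.25s' % header)   # -> 'Propensity to consume out' (only the first 25 characters survive)
# Presumably the truncated header no longer matches the parameter name in the
# .apm model file, so the values in that CSV column are not picked up.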
I've added this as a bug report with tracking on GitHub. One workaround is to use shorter variable names or to omit the name argument entirely.
m.alpha1 = m.Param(value=np.full(tdur,alpha1_ex)) # Propensity to consume out of income
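For completeness, here is a minimal sketch of the workaround applied inside the question's model, combined with the time-varying change that was asked for (the switch at period 30 and the new value 0.7 are assumptions; the snippet reuses np, m, tdur, alpha1_ex and alpha2_ex from the question's code):
# Workaround: keep names short (well under 25 characters) so the CSV header is not truncated
alpha1_path = np.full(tdur, alpha1_ex)   # propensity to consume out of income
alpha1_path[30:] = 0.7                   # assumed exogenous switch after period 30
m.alpha1 = m.Param(value=alpha1_path, name='alpha1')
m.alpha2 = m.Param(value=np.full(tdur, alpha2_ex), name='alpha2')  # propensity to consume out of wealth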