I am having trouble simulating an EnergyPlus FMU with PyFMI. I created the FMU from the reference building model and am using PyFMI 2.5. How do I run the do_step() function?
from pyfmi import load_fmu
model = load_fmu("MyEnergyplus.fmu")
start_time = 0
final_time = 60.0 * 60 * 24 * 3 #seconds
step_size = 60 # seconds
opts = model.simulate_options()
idf_steps_per_hour = 60
ncp = (final_time - start_time)/(3600./idf_steps_per_hour)
opts['ncp'] = ncp
t = 0
status = model.do_step(current_t = t, step_size= step_size, new_step=True)
The error I got:
File "test_fmi2.py", line 15, in <module> status = model.do_step(current_t = t, step_size= step_size, new_step=True)
AttributeError: 'pyfmi.fmi.FMUModelME2' object has no attribute 'do_step'
I double-checked the PyFMI API and didn't find any problem.
How can I get the simulation to run? Thanks.
From the output we can see that the FMU you have loaded is a Model Exchange FMU, which does not have a do_step function (only Co-Simulation FMUs have that). For more information about the different FMU types, please see the FMI specification.
To simulate a Model Exchange FMU, please use the "simulate" method. The "simulate" method is also available for Co-Simulation FMUs and is the preferred way to perform a simulation.
Not knowing how you set up the FMU, I can at least say that you forgot model.initialize(start_time, final_time).
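For illustration, a minimal sketch of the "simulate" route, reusing the start time, final time and options from your script (the ncp option sets the number of communication/result points):
from pyfmi import load_fmu

model = load_fmu("MyEnergyplus.fmu")

start_time = 0.0
final_time = 60.0 * 60 * 24 * 3  # three days, in seconds

opts = model.simulate_options()
opts['ncp'] = int((final_time - start_time) / 60.0)  # one result point per minute

# simulate() initializes the model, integrates it and returns the results
res = model.simulate(start_time=start_time, final_time=final_time, options=opts)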
I have recently started exploring the QuantLib option pricing libraries for Python and have come across an error that I don't understand. I am trying to price an up-and-out barrier option using the Heston model. The code below was adapted from examples found online. The problem is that when I run it I get an error that I believe is triggered at the last line, i.e. the european_option.NPV() call:
*** RuntimeError: wrong argument type
Can someone please explain me what I am doing wrong?
import QuantLib as ql

# option inputs
maturity_date = ql.Date(30, 6, 2020)
spot_price = 969.74
strike_price = 1000
volatility = 0.20
dividend_rate = 0.0
option_type = ql.Option.Call
risk_free_rate = 0.0016
day_count = ql.Actual365Fixed()
calculation_date = ql.Date(26, 6, 2020)
ql.Settings.instance().evaluationDate = calculation_date
# construct the option payoff
european_option = ql.BarrierOption(ql.Barrier.UpOut, Barrier, Rebate,
ql.PlainVanillaPayoff(option_type, strike_price),
ql.EuropeanExercise(maturity_date))
# set the Heston parameters
v0 = volatility*volatility # spot variance
kappa = 0.1
theta = v0
hsigma = 0.1
rho = -0.75
spot_handle = ql.QuoteHandle(ql.SimpleQuote(spot_price))
# construct the Heston process
flat_ts = ql.YieldTermStructureHandle(ql.FlatForward(calculation_date,
risk_free_rate, day_count))
dividend_yield = ql.YieldTermStructureHandle(ql.FlatForward(calculation_date,
dividend_rate, day_count))
heston_process = ql.HestonProcess(flat_ts, dividend_yield,
spot_handle, v0, kappa,
theta, hsigma, rho)
# run the pricing engine
engine = ql.AnalyticHestonEngine(ql.HestonModel(heston_process),0.01, 1000)
european_option.setPricingEngine(engine)
h_price = european_option.NPV()
The problem is that the AnalyticHestonEngine is not able to price Barrier options.
Check here https://www.quantlib.org/reference/group__barrierengines.html for a list of Barrier Option pricing engines.
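As a sketch (not a definitive choice), one engine from that list which handles barrier payoffs under Heston dynamics is FdHestonBarrierEngine; the grid sizes below are only illustrative, and Barrier and Rebate are assumed to be defined as in your script:
# finite-difference Heston engine, which supports barrier options
heston_model = ql.HestonModel(heston_process)
barrier_engine = ql.FdHestonBarrierEngine(heston_model, 100, 100, 50)

european_option = ql.BarrierOption(ql.Barrier.UpOut, Barrier, Rebate,
                                   ql.PlainVanillaPayoff(option_type, strike_price),
                                   ql.EuropeanExercise(maturity_date))
european_option.setPricingEngine(barrier_engine)
h_price = european_option.NPV()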
I am using protobuf 3 (proto3) for serialization:
syntax = "proto3";
package marshalling;
import "google/protobuf/timestamp.proto";
message PrimitiveType {
  oneof primitive_value {
    bool boolean_value = 1;
    int64 int_value = 2;
    double double_value = 3;
    google.protobuf.Timestamp timestamp_value = 4;
  }
}
I generated an x_pb2.py file but do not know how to use it.
For example, if I would like to marshal a timestamp to bytes, how could I do it?
With reference to The Protocol Buffer API section:
Unlike when you generate Java and C++ protocol buffer code, the Python protocol buffer compiler doesn't generate your data access code for you directly. Instead, it generates special descriptors for all your messages, enums, and fields, and some mysteriously empty classes, one for each message type...
and,
At load time, the GeneratedProtocolMessageType metaclass uses the specified descriptors to create all the Python methods you need to work with each message type and adds them to the relevant classes. You can then use the fully-populated classes in your code.
So, you can use the generated class(es) to create objects and set their fields like this:
p1 = primitive_types_pb2.PrimitiveType()
p1.int_value = 1234
For your use-case, you can use timestamp_pb2.Timestamp.GetCurrentTime().
Alternatively, you can refer to Timestamp along with timestamp_pb2.Timestamp.CopyFrom() (this requires import time and from google.protobuf import timestamp_pb2):
now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
timestamp = timestamp_pb2.Timestamp(seconds=seconds, nanos=nanos)
p1 = primitive_types_pb2.PrimitiveType()
p1.timestamp_value.CopyFrom( timestamp )
There are other google.protobuf.timestamp_pb2 APIs that you might be interested in for your other use-cases.
Here's a complete working example (using the primitive_types.proto definition above):
import time # For Timestamp.CopyFrom(). See commented code below
import primitive_types_pb2
from google.protobuf import timestamp_pb2
# serialization
p1 = primitive_types_pb2.PrimitiveType()
# Alternative to GetCurrentTime()
# now = time.time()
# seconds = int( now )
# nanos = int( (now - seconds) * 10**9 )
# timestamp = timestamp_pb2.Timestamp( seconds=seconds, nanos=nanos )
# p1.timestamp_value.CopyFrom( timestamp )
p1.timestamp_value.GetCurrentTime()
serialized = p1.SerializeToString()
# deserialization
p2 = primitive_types_pb2.PrimitiveType()
p2.ParseFromString( serialized )
print( p2.timestamp_value )
Output:
seconds: 1590581054
nanos: 648958000
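Since primitive_value is a oneof, after parsing you can also check which member was actually set; a small sketch using the standard WhichOneof() accessor:
# determine which oneof member was set on the wire
field = p2.WhichOneof("primitive_value")   # e.g. "timestamp_value"
if field == "timestamp_value":
    # ToDatetime() converts the Timestamp message to a Python datetime
    print(p2.timestamp_value.ToDatetime())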
References:
https://developers.google.com/protocol-buffers/docs/proto3#oneof
https://developers.google.com/protocol-buffers/docs/pythontutorial
https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#timestamp
https://googleapis.dev/python/protobuf/latest/google/protobuf/timestamp_pb2.html
We can use a self-defined metric in LightGBM by passing it to the 'feval' parameter during training.
For built-in metrics, we can list several in the parameter dict, e.g. metric: (l1, l2).
My question is: how do I call several self-defined metrics at the same time? I cannot use feval=(my_metric1, my_metric2) to get the result.
params = {}
params['learning_rate'] = 0.003
params['boosting_type'] = 'goss'
params['objective'] = 'multiclassova'
params['metric'] = ['multi_error', 'multi_logloss']
params['sub_feature'] = 0.8
params['num_leaves'] = 15
params['min_data'] = 600
params['tree_learner'] = 'voting'
params['bagging_freq'] = 3
params['num_class'] = 3
params['max_depth'] = -1
params['max_bin'] = 512
params['verbose'] = -1
params['is_unbalance'] = True
evals_result = {}
aa = lgb.train(params,
d_train,
valid_sets=[d_train, d_dev],
evals_result=evals_result,
num_boost_round=4500,
feature_name=f_names,
verbose_eval=10,
categorical_feature = f_names,
learning_rates=lambda iter: (1 / (1 + decay_rate * iter)) * params['learning_rate'])
Let's discuss the code I share here. d_train is my training set. d_dev is my validation set (I have a separate test set). evals_result will record our multi_error and multi_logloss per iteration as a list. verbose_eval=10 will make LightGBM print the multi_error and multi_logloss of both the training set and the validation set every 10 iterations. If you want to plot multi_error and multi_logloss as a graph:
import matplotlib.pyplot as plt

lgb.plot_metric(evals_result, metric='multi_error')
plt.show()
lgb.plot_metric(evals_result, metric='multi_logloss')
plt.show()
You can find other useful functions in the LightGBM documentation. If you can't find what you need there, try the XGBoost documentation (the two libraries share many concepts), a simple trick. If something is still missing, please do not hesitate to ask.
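To address the multiple self-defined metrics directly: one approach, sketched below under the assumption that my_metric1 and my_metric2 follow LightGBM's custom-metric signature (preds, train_data) -> (name, value, is_higher_better), is to wrap them in a single feval that returns a list of such tuples; recent LightGBM versions also accept a list of callables, i.e. feval=[my_metric1, my_metric2].
def combined_feval(preds, train_data):
    # each entry is a (name, value, is_higher_better) tuple
    return [my_metric1(preds, train_data),
            my_metric2(preds, train_data)]

aa = lgb.train(params,
               d_train,
               valid_sets=[d_train, d_dev],
               feval=combined_feval,
               evals_result=evals_result,
               num_boost_round=4500)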
I am trying to use Brightway's ParallelMonteCarlo and MultiMonteCarlo classes but have run into a KeyError. I am in a Brightway project with an LCI database:
In [1] bw.databases
Out [1] Brightway2 databases metadata with 2 objects:
biosphere3
ecoinvent 3_2 CutOff
Selecting an activity and a method:
In [2] db = bw.Database('ecoinvent 3_2 CutOff')
act = db.random()
method = ('CML 2001', 'climate change', 'GWP 100a')
My code is as follows:
In [3] ParallelMC_LCA = bw.ParallelMonteCarlo({act:1},
method = myMethod,
iterations=1000,
cpus=mp.cpu_count())
results = np.array(ParallelMC_LCA.calculate())
and
In [4] act1 = db.random()
act2 = db.random()
multiMC_LCA = bw.MultiMonteCarlo(demands = [{act1:1}, {act2:1}],
method = myMethod,
iterations = 10)
results = np.array(ParallelMC_LCA.calculate())
Both give me a KeyError: 'ecoinvent 3_2 CutOff'.
My question is: why?
This is a known issue due to differences in how multiprocessing works on Windows and Unix. Specifically, on Windows the project is not set correctly in the worker processes, causing the KeyError. As such, it is a bug to report upstream rather than a Stack Overflow question.
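As a workaround sketch (not part of the original answer, and assuming the standard Brightway2 MonteCarloLCA API), you can fall back to single-process Monte Carlo, which sidesteps the Windows multiprocessing issue:
import numpy as np
import brightway2 as bw

# single-process Monte Carlo: iterate the generator yourself
mc = bw.MonteCarloLCA({act: 1}, method)
results = np.array([next(mc) for _ in range(1000)])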
from fipy import *
nx = 50
dx = 1.
mesh = Grid1D(nx=nx, dx=dx)
phi = CellVariable(name="solution variable",
mesh=mesh,
value=0.)
D = 1.
valueLeft = 1
valueRight = 0
phi.constrain(valueRight, mesh.facesRight)
phi.constrain(valueLeft, mesh.facesLeft)
eqX = TransientTerm() == ExplicitDiffusionTerm(coeff=D)
timeStepDuration = 0.9 * dx**2 / (2 * D)
steps = 100
phiAnalytical = CellVariable(name="analytical value",
mesh=mesh)
viewer = Viewer(vars=(phi, phiAnalytical),
datamin=0., datamax=1.)
viewer.plot()
x = mesh.cellCenters[0]
t = timeStepDuration * steps
try:
    from scipy.special import erf
    phiAnalytical.setValue(1 - erf(x / (2 * numerix.sqrt(D * t))))
except ImportError:
    print("The SciPy library is not available to test the solution to "
          "the transient diffusion equation")
for step in range(steps):
    eqX.solve(var=phi,
              dt=timeStepDuration)
    viewer.plot()
I am trying to implement an example from the FiPy examples list, the 1D diffusion problem, but I am not able to view the result as a plot.
I have defined the viewer as suggested in the code for the example, but it still doesn't help.
The solution vector runs fine, yet I am not able to plot using the viewer. Can anyone help? Thank you!
Your code works fine on my computer. It is probably a problem with the plotting library used by FiPy.
Check whether the original example works (just run this from the FiPy folder):
python examples/diffusion/mesh1D.py
If not, download FiPy from the GitHub page of the project, https://github.com/usnistgov/fipy, and try the example again. If it still fails, check whether the plotting libraries are correctly installed.
In any case, you should specify the platform you are using and the errors you are getting. The first time, I had some problems with the plotting libraries too.
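Two more things worth trying, sketched under the assumption that matplotlib is installed and that your FiPy version exposes Matplotlib1DViewer at the top level: force FiPy to use its matplotlib viewer instead of letting Viewer() pick a backend, and pause at the end of the script so the plot window is not closed as soon as the script exits:
from fipy import Matplotlib1DViewer

# explicitly request the matplotlib backend for a 1D plot
viewer = Matplotlib1DViewer(vars=(phi, phiAnalytical), datamin=0., datamax=1.)
viewer.plot()

# keep the window open until the user presses Enter
input("Press <Enter> to finish")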