I have recently started exploring the QuantLib option-pricing library for Python and have come across an error that I don't understand. I am trying to price an up-and-out barrier option using the Heston model. The code below was adapted from examples found online to my specific case. The problem is that when I run it I get an error that I believe is triggered at the last line, i.e. the european_option.NPV() call:
*** RuntimeError: wrong argument type
Can someone please explain what I am doing wrong?
import QuantLib as ql

# option inputs
maturity_date = ql.Date(30, 6, 2020)
spot_price = 969.74
strike_price = 1000
volatility = 0.20
dividend_rate = 0.0
option_type = ql.Option.Call
risk_free_rate = 0.0016
day_count = ql.Actual365Fixed()
calculation_date = ql.Date(26, 6, 2020)
ql.Settings.instance().evaluationDate = calculation_date
# construct the option payoff and exercise
Barrier = 1100.0  # barrier level; not given in the original snippet, assumed for completeness
Rebate = 0.0      # rebate; not given in the original snippet, assumed for completeness
european_option = ql.BarrierOption(ql.Barrier.UpOut, Barrier, Rebate,
                                   ql.PlainVanillaPayoff(option_type, strike_price),
                                   ql.EuropeanExercise(maturity_date))
# set the Heston parameters
v0 = volatility*volatility # spot variance
kappa = 0.1
theta = v0
hsigma = 0.1
rho = -0.75
spot_handle = ql.QuoteHandle(ql.SimpleQuote(spot_price))
# construct the Heston process
flat_ts = ql.YieldTermStructureHandle(ql.FlatForward(calculation_date,
                                                     risk_free_rate, day_count))
dividend_yield = ql.YieldTermStructureHandle(ql.FlatForward(calculation_date,
                                                            dividend_rate, day_count))
heston_process = ql.HestonProcess(flat_ts, dividend_yield,
                                  spot_handle, v0, kappa,
                                  theta, hsigma, rho)
# run the pricing engine
engine = ql.AnalyticHestonEngine(ql.HestonModel(heston_process), 0.01, 1000)
european_option.setPricingEngine(engine)
h_price = european_option.NPV()
The problem is that the AnalyticHestonEngine is not able to price Barrier options.
See https://www.quantlib.org/reference/group__barrierengines.html for a list of barrier-option pricing engines.
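For example, the finite-difference Heston engine from that list can handle this payoff. A minimal sketch, assuming FdHestonBarrierEngine with illustrative (untuned) grid sizes, would be to replace the last three lines of your code with:

# use a barrier-capable Heston engine instead of AnalyticHestonEngine
heston_model = ql.HestonModel(heston_process)
barrier_engine = ql.FdHestonBarrierEngine(heston_model, 100, 100, 50)  # tGrid, xGrid, vGrid
european_option.setPricingEngine(barrier_engine)
h_price = european_option.NPV()
print(h_price)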
I am a beginner in OpenModelica and am trying to model a flow system with the standard libraries. As shown in the picture below, I would like to simulate flow into the system from different sources, each with a different fluid (gas).
Example:
source_fluid_1: N2
source_fluid_2: Ar
mixed_fluid: fluid mixture of source_fluid_1 and source_fluid_2
sink
Example of the flow system
For source_fluid_1 and source_fluid_2 I have set the composition X = {1} each. In the volume mixed_fluid I have set X_start = {0.5, 0.5}.
According to the error messages
Base class Medium not found in scope Fluid_Mixture_Example
and
Class Medium.ExtraProperty not found in scope FluidPort
it seems that I am doing something wrong with the definition of the medium, but unfortunately I can't get any further here.
With the same fluid (gas) in both sources the model runs without problems.
In addition, I am not sure whether I am setting the composition correctly.
I have looked at the Trace Substances in Modelica's examples, but they don't seem to cover my case.
I would be very grateful if someone could help me.
model Fluid_Mixture_Example
  Modelica.Fluid.Sources.Boundary_pT source_fluid_1(redeclare package Medium = Modelica.Media.IdealGases.SingleGases.N2, T = system.T_ambient, X = {1}, nPorts = 1, p = 5e5);
  Modelica.Fluid.Sources.Boundary_pT source_fluid_2(redeclare package Medium = Modelica.Media.IdealGases.SingleGases.Ar, T = system.T_ambient, X = {1}, nPorts = 1, p = 10e5);
  Modelica.Fluid.Sources.FixedBoundary sink(redeclare package Medium = Medium, nPorts = 1);
  Modelica.Fluid.Fittings.SimpleGenericOrifice orifice1(redeclare package Medium = Medium, diameter = 0.0015, zeta = 1);
  Modelica.Fluid.Fittings.SimpleGenericOrifice orifice2(redeclare package Medium = Medium, diameter = 0.0015, zeta = 1);
  Modelica.Fluid.Vessels.ClosedVolume mixed_fluid(V = 1, X_start = {0.5, 0.5}, massDynamics = Modelica.Fluid.Types.Dynamics.FixedInitial, nPorts = 3, use_portsData = false);
  // connect equations and annotations omitted from this excerpt
end Fluid_Mixture_Example;
OpenModelica v1.18.1 (64-bit)
I am trying to learn what the various outputs of predict.coxph() mean. I am currently fitting a Cox model on a training set and then using the resulting coefficients to make predictions in a test set (a new set of data).
I see from the predict.coxph() help page that I could use type = "survival" to extract an individual's survival probability, which is equal to exp(-expected).
Here is a code block of what I have attempted so far, using the ISLR2 BrainCancer data.
library(survival)  # coxph(), Surv()
library(ISLR2)     # BrainCancer data
set.seed(123)
n.training = round(nrow(BrainCancer) * 0.70) # 70:30 split
idx = sample(1:nrow(BrainCancer), size = n.training)
d.training = BrainCancer[idx, ]
d.test = BrainCancer[-idx, ]
# fit a model using the training set
fit = coxph(Surv(time, status) ~ sex + diagnosis + loc + ki + gtv + stereo, data = d.training)
# get predicted survival probabilities for the test set
pred = predict(fit, type = "survival", newdata = d.test)
The predictions generated:
predict(fit, type = "survival", newdata = d.test)
[1] 0.9828659 0.8381164 0.9564982 0.2271862 0.2883800 0.9883625 0.9480138 0.9917512 1.0000000 0.9974775 0.7703657 0.9252100 0.9975044 0.9326234 0.8718161 0.9850815 0.9545622 0.4381646 0.8236644
[20] 0.2455676 0.7289031 0.9063336 0.9126897 0.9988625 0.4399697 0.9360874
Are these survival probabilities associated with a specific time point? From the help page, it sounds like these are survival probabilities at the follow-up times in the newdata argument. Is this correct?
Additional questions:
How is the baseline hazard estimated in predict.coxph? Is it using the Breslow estimator?
If type = "expected" is used, are these values the cumulative hazard? If yes, what are the relevant time points for these?
Thank you!
I'm trying to implement Double DQN (not to be confused with DQN with a slightly delayed Q-target network) in PyTorch to train an agent to play an Atari OpenAI Gym game. Here I discuss the implementation of the following formula:
Update of Q-network, formula taken from Sutton & Barto.
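The image itself is not reproduced here; the update it refers to is presumably the standard double Q-learning rule (my reconstruction in Sutton & Barto's notation, with Q_1 as the online network and Q_2 as the evaluator):

$$Q_1(S_t, A_t) \leftarrow Q_1(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma \, Q_2\big(S_{t+1}, \arg\max_a Q_1(S_{t+1}, a)\big) - Q_1(S_t, A_t) \right]$$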
My first implementation is:
# Q-values of the actions actually taken, from the online network Q_1
Q_pred = self.Q_1.forward(s_now)[T.arange(batch_size), actions.long()]
# Double DQN: select the next action with Q_1, evaluate it with Q_2
Q_next_all = self.Q_1.forward(s_next)
maxA_id = T.argmax(Q_next_all, dim=1)
Q_pred2 = self.Q_2.forward(s_next)[T.arange(batch_size), maxA_id]
Q_target = (rewards + (~dones) * self.GAMMA * Q_pred2).detach()
self.Q_1.optimizer.zero_grad()
self.Q_1.loss(Q_target, Q_pred).backward()
self.Q_1.optimizer.step()
(Q_1 and Q_2 are nn.Module classes, and all of the variables involved here are already torch tensors lying in the GPU.)
I noticed that my program ran much slower than a previous implementation that used plain DQN.
I realized that I can combine the two batches entering Q_1, so that a single combined batch is forwarded through the network instead of two batches in sequence. The code becomes:
s_combined = T.cat((s_now, s_next))
Q_combined = self.Q_1.forward(s_combined)
Q_pred = Q_combined[T.arange(batch_size), actions.long()]
Q_next_all = Q_combined[batch_size:]
Q_pred2_all = self.Q_2.forward(s_next)
maxA_id = T.argmax(Q_next_all, dim=1)
Q_pred2 = Q_pred2_all[T.arange(batch_size), maxA_id]
Q_target = (rewards + (~dones) * self.GAMMA * Q_pred2).detach()
self.Q_1.optimizer.zero_grad()
self.Q_1.loss(Q_target, Q_pred).backward()
self.Q_1.optimizer.step()
(This proves that I understand how to do batch training in PyTorch, so don't mark this as a duplicate of this question.)
Furthermore, I realized that Q_1 and Q_2 can process their batches in parallel, so I looked up how to do multiprocessing in PyTorch. Unfortunately, I couldn't find a good example. I tried to adapt code from an example that looked similar to my scenario, and my code became:
def spawned():
    s_combined = T.cat((s_now, s_next))
    Q_combined = self.Q_1.forward(s_combined)
    Q_pred = Q_combined[T.arange(batch_size), actions.long()]
    Q_next_all = Q_combined[batch_size:]
mp.set_start_method('spawn', force=True)
p = mp.Process(target=spawned)
p.start()
Q_pred2_all = self.Q_2.forward(s_next)
p.join()
maxA_id = T.argmax(Q_next_all, dim=1)
Q_pred2 = Q_pred2_all[T.arange(batch_size), maxA_id]
Q_target = (rewards + (~dones) * self.GAMMA * Q_pred2).detach()
self.Q_1.optimizer.zero_grad()
self.Q_1.loss(Q_target, Q_pred).backward()
self.Q_1.optimizer.step()
This crashes with the error message:
AttributeError: Can't pickle local object 'Agent.learn.<locals>.spawned'
So how do I make this work?
(Achieving this in CUDA programming is trivial: one simply launches two device kernels from sequential host code, and the two kernels are computed in parallel on the GPU automatically.)
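For what it's worth, the closest PyTorch analogue to launching two kernels from sequential host code is CUDA streams rather than multiprocessing. A minimal sketch, reusing the names from the snippets above (whether the two forward passes actually overlap depends on the GPU having spare capacity for both):

# issue the two forward passes on separate CUDA streams (sketch only)
stream_1 = T.cuda.Stream()
stream_2 = T.cuda.Stream()
T.cuda.synchronize()
with T.cuda.stream(stream_1):
    Q_combined = self.Q_1.forward(T.cat((s_now, s_next)))
with T.cuda.stream(stream_2):
    Q_pred2_all = self.Q_2.forward(s_next)
T.cuda.synchronize()  # wait for both streams before indexing into the results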
I am having trouble simulating an EnergyPlus FMU with PyFMI. I created the EnergyPlus FMU from the reference building model, and I am using PyFMI 2.5. How do I run the do_step() function?
from pyfmi import load_fmu
model = load_fmu("MyEnergyplus.fmu")
start_time = 0
final_time = 60.0 * 60 * 24 * 3 #seconds
step_size = 60 # seconds
opts = model.simulate_options()
idf_steps_per_hour = 60
ncp = (final_time - start_time)/(3600./idf_steps_per_hour)
opts['ncp'] = ncp
t = 0
status = model.do_step(current_t = t, step_size= step_size, new_step=True)
The error I got:
File "test_fmi2.py", line 15, in <module> status = model.do_step(current_t = t, step_size= step_size, new_step=True)
AttributeError: 'pyfmi.fmi.FMUModelME2' object has no attribute 'do_step'
I double checked the APIs of PyFMI, and didn't find any problem.
How do I get the simulation to run? Thanks.
From the output we can see that the FMU you have loaded is a Model Exchange FMU, which does not have a do_step function (only Co-Simulation FMUs have that). For more information about the different FMU types, please see the FMI specification.
To simulate a Model Exchange FMU, please use the "simulate" method. The "simulate" method is also available for Co-Simulation FMUs and is the preferred way to perform a simulation.
Not knowing how you set up the FMU, I can at least say that you forgot model.initialize(start_time, final_time).
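A minimal sketch of what that could look like for your snippet (ncp cast to an int; which other options you need depends on your FMU):

from pyfmi import load_fmu

model = load_fmu("MyEnergyplus.fmu")
start_time = 0
final_time = 60.0 * 60 * 24 * 3  # seconds
idf_steps_per_hour = 60

opts = model.simulate_options()
opts['ncp'] = int((final_time - start_time) / (3600. / idf_steps_per_hour))

# simulate() takes care of initialization, the stepping loop and result collection
res = model.simulate(start_time=start_time, final_time=final_time, options=opts)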
We can use a self-defined metric in LightGBM by passing it to the 'feval' parameter during training, and for built-in metrics we can list them in the parameter dict, e.g. metric: (l1, l2).
My question is: how do I use several self-defined metrics at the same time? I cannot use feval=(my_metric1, my_metric2) to get the result.
params = {}
params['learning_rate'] = 0.003
params['boosting_type'] = 'goss'
params['objective'] = 'multiclassova'
params['metric'] = ['multi_error', 'multi_logloss']
params['sub_feature'] = 0.8
params['num_leaves'] = 15
params['min_data'] = 600
params['tree_learner'] = 'voting'
params['bagging_freq'] = 3
params['num_class'] = 3
params['max_depth'] = -1
params['max_bin'] = 512
params['verbose'] = -1
params['is_unbalance'] = True
evals_result = {}
aa = lgb.train(params,
d_train,
valid_sets=[d_train, d_dev],
evals_result=evals_result,
num_boost_round=4500,
feature_name=f_names,
verbose_eval=10,
categorical_feature = f_names,
learning_rates=lambda iter: (1 / (1 + decay_rate * iter)) * params['learning_rate'])
Let's discuss the code I share here. d_train is my training set. d_dev is my validation set (I have a separate test set). evals_result will record our multi_error and multi_logloss per iteration as a list. verbose_eval = 10 makes LightGBM print the multi_error and multi_logloss of both the training set and the validation set every 10 iterations. If you want to plot multi_error and multi_logloss as a graph:
lgb.plot_metric(evals_result, metric='multi_error')
plt.show()
lgb.plot_metric(evals_result, metric='multi_logloss')
plt.show()
You can find other useful functions in the LightGBM documentation. If you can't find what you need, look at the XGBoost documentation, a simple trick. If there is something missing, please do not hesitate to ask more.
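Regarding the original question about several self-defined metrics: as far as I know, a single feval callable may return a list of (name, value, is_higher_better) tuples, so you can wrap your metrics in one function (my_metric1 and my_metric2 below are stand-ins for your own functions, and recent LightGBM versions also accept a list of callables as feval):

import numpy as np

def my_metric1(preds, eval_data):
    # stand-in: mean absolute error against the dataset labels
    return 'my_mae', np.mean(np.abs(preds - eval_data.get_label())), False

def my_metric2(preds, eval_data):
    # stand-in: root mean squared error against the dataset labels
    return 'my_rmse', np.sqrt(np.mean((preds - eval_data.get_label()) ** 2)), False

def combined_feval(preds, eval_data):
    # one feval returning a list of (name, value, is_higher_better) tuples
    return [my_metric1(preds, eval_data), my_metric2(preds, eval_data)]

bst = lgb.train(params, d_train,
                valid_sets=[d_train, d_dev],
                feval=combined_feval,
                num_boost_round=4500)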