Is it possible to have exponential terms in objective function in Gurobi?
I want to optimize the following objective function:
min_x ||x - y||_2 + log(b + exp(k + x))
Gurobi does provide support for log and exponential functions in constraints, but I couldn't find anything for the objective function.
I am using Gurobi's python API.
The way to formulate this is to introduce an auxiliary variable, constrain it to equal the nonlinear expression, and then use the auxiliary variable in the objective. This post in the Gurobi Community explains it pretty well.
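For example, here is a minimal sketch in gurobipy (not the poster's exact model; the constants y, b, k are hypothetical, and x is treated as a scalar, so the L2 norm reduces to an absolute value):

import gurobipy as gp
from gurobipy import GRB

y, b, k = 1.0, 2.0, 0.5          # hypothetical data

m = gp.Model()
x = m.addVar(lb=-GRB.INFINITY, name="x")

# t = |x - y|  (L2 norm of a scalar; for vector x, see addGenConstrNorm)
d = m.addVar(lb=-GRB.INFINITY, name="d")
t = m.addVar(name="t")
m.addConstr(d == x - y)
m.addGenConstrAbs(t, d)          # t = |d|

# v = exp(k + x) via an auxiliary variable u = k + x
u = m.addVar(lb=-GRB.INFINITY, name="u")
v = m.addVar(name="v")
m.addConstr(u == k + x)
m.addGenConstrExp(u, v)          # v = exp(u)

# w = log(b + v) via an auxiliary variable s = b + v
s = m.addVar(name="s")
w = m.addVar(lb=-GRB.INFINITY, name="w")
m.addConstr(s == b + v)
m.addGenConstrLog(s, w)          # w = log(s)

# The nonlinear pieces now live in constraints; the objective itself is linear.
m.setObjective(t + w, GRB.MINIMIZE)
m.optimize()
print(x.X, m.ObjVal)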
I just found Optuna, and it seems to be integrated with LightGBM, but I struggle to see where I can fix parameters, e.g. scoring="auc", and where I can define a grid space to search, e.g. num_leaves=[1,2,5,10].
Using https://github.com/optuna/optuna/blob/master/examples/lightgbm_tuner_simple.py as an example, they just define a params dict with some fixed parameters (are all parameters not specified in that dict tuned?), and the documentation states that
It tunes important hyperparameters (e.g., min_child_samples and feature_fraction) in a stepwise manner
How can I control which parameters are tuned and over what space, and how can I fix some parameters?
I have no knowledge of LightGBM, but since this is the first result for fixing parameters in optuna, I'll answer that part of the question:
In Optuna, the search space is defined within the code of the objective function. This function takes a trial object as input, and you create parameters by calling suggest_float(), suggest_int(), etc. on that trial object. For more information, see the documentation at 10_key_features/002_configurations.html
Generally, fixing a parameter is done by hardcoding its value instead of calling a suggest function, but it is also possible to fix specific parameters externally using the PartialFixedSampler.
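Here is a minimal sketch of both approaches (the parameter names and the toy return value are purely illustrative, not LightGBM-specific advice):

import optuna

def objective(trial):
    # Tuned parameters: each suggest_* call defines one search-space dimension.
    num_leaves = trial.suggest_int("num_leaves", 2, 256)
    learning_rate = trial.suggest_float("learning_rate", 1e-3, 0.3, log=True)
    # Fixed parameter: hardcode it instead of calling a suggest function.
    metric = "auc"  # illustrative; pass it to your real training routine
    # Stand-in for real model training and evaluation (toy score).
    return (num_leaves - 31) ** 2 + learning_rate

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)

# Alternatively, fix an otherwise-tuned parameter externally:
sampler = optuna.samplers.PartialFixedSampler(
    {"num_leaves": 31}, optuna.samplers.TPESampler())
study2 = optuna.create_study(direction="minimize", sampler=sampler)
study2.optimize(objective, n_trials=20)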
I am using Python's MIP module for optimization. I have set up a model with a few parameters. I want to limit the number of solutions and stop the search if no solution is found within a given time. I have added these parameters as shown below:
from mip import Model, MAXIMIZE, CBC

m = Model(name='opt', sense=MAXIMIZE, solver_name=CBC)
m.optimize(max_solutions=1, max_seconds=300)
Somehow neither of them seems to work: the solver does not stop looking for a solution after the given time, and it sometimes returns 2 solutions even though I want to limit it to 1. Is there something I am missing in the syntax?
One more thing: Gurobi has an addVars method for adding multiple variables at once. Is there anything similar in MIP?
Thanks.
Yeah, I have been doing some tests myself (with the Python-MIP solver) and have seen similar issues. Apparently the library is still quite new, and many improvements have been implemented recently or are yet to be developed. Here is what I've learned:
Regarding max_seconds: there has been at least one (closed) issue on the official repo related to using the max_seconds parameter with CBC.
Regarding max_solutions: if you are using version 1.6.2 or earlier, here's an explanation: until 1.6.1, m.optimize(max_solutions=1) wasn't passing the maximum-solutions parameter to CBC. In that case, try the following lines (or just update to the current version):
m.max_solutions = 1
m.optimize()
If the above doesn't help with the max_seconds and max_solutions parameters, I suggest posting your question as an issue on the library's repo to get answers and support from the project contributors.
Adding multiple variables, similar to gurobipy's Model.addVars() method: yes, you can do it as follows:
from mip import CONTINUOUS

p = {(i, j): m.add_var(var_type=CONTINUOUS, lb=0, name="p[%d,%d]" % (i, j))
     for i in Set_i for j in Set_j}
In this example, we are adding a variable p_ij and specifying that it is continuous (vs. binary or integer), has lower bound 0, and is indexed over the sets Set_i and Set_j, which are Python lists. See the documentation here for a more detailed explanation of how to use it. Similarly, you can create indexed constraints with the add_constr method, much like Gurobi's Model.addConstrs(); see the sketch below.
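For example, a hedged sketch of indexed constraints, continuing from the snippet above (the demand dict here is hypothetical data keyed by the elements of Set_j):

from mip import xsum

# One constraint per j, mirroring gurobipy's Model.addConstrs() pattern.
for j in Set_j:
    m.add_constr(xsum(p[i, j] for i in Set_i) >= demand[j],
                 name="demand[%d]" % j)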
For Python built-in functions such as:
sorted()
min()
max()
what are their time and space complexities, and which algorithms are used?
Is it always advisable to use Python's built-in functions?
As mentioned in the comments, sorted uses Timsort (see this post), which is O(n log n) and a stable sort. max and min each run in Θ(n). But if you want to find both of them at once, you can do it with about 3n/2 comparisons instead of 2n (still O(n) overall). To learn more about the method, see this post.
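For illustration, here is a sketch of that pairwise trick: order each pair of elements with one comparison, then compare only the smaller against the running minimum and the larger against the running maximum (~3 comparisons per 2 elements):

def min_max(seq):
    it = iter(seq)
    first = next(it)  # raises StopIteration on empty input
    lo = hi = first
    for a in it:
        b = next(it, None)
        if b is None:             # odd element count: compare the leftover alone
            if a < lo: lo = a
            elif a > hi: hi = a
            break
        if a > b:
            a, b = b, a           # 1 comparison: order the pair
        if a < lo: lo = a         # smaller element vs current min
        if b > hi: hi = b         # larger element vs current max
    return lo, hi

print(min_max([3, 1, 4, 1, 5, 9, 2, 6]))  # (1, 9)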
I am trying to find the global maxima of a function and have found this package in Julia. However, after reading through the examples in the demo I still do not know how to use it. Can anyone give an example of how to find the optimized parameters of a multi-parameter function, such as the one written below? Any help is appreciated.
Here, epsilon denotes the given data points, and omega and alpha each range between 0 and 1.
Is there a function in Julia similar to Excel's Solver, where I can provide an equation and it will solve for the unknown variable? If not, does anybody know the math behind Excel's Solver?
I am not expecting anybody to solve the equation, but if it helps:
Price = Earnings_1/(1+r)^1 + Earnings_2/(1+r)^2 + Earnings_3/(1+r)^3 + Earnings_4/(1+r)^4 + Earnings_5/(1+r)^5 + (Earnings_5 * RiskFreeRate) / ((1+r)^5 * (1 - RiskFreeRate))
The known variables are Price, all the Earnings values, and RiskFreeRate. I am just trying to figure out how to solve for r.
Write this instead as an expression f(r) = 0 by subtracting Price over to the other side. Now it's a rootfinding problem. If you only have one variable you're solving for (looks to be the case), then Roots.jl is a good choice.
fzero(f, a::Real, b::Real)
will search for a solution between a and b, for example, and the docs list more algorithm choices for when you don't know a bracketing interval and only have an initial guess.
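For instance, a sketch with made-up numbers (the earnings, price, and rate below are purely illustrative):

using Roots

earnings = [2.0, 2.2, 2.4, 2.6, 2.8]   # hypothetical Earnings_1 .. Earnings_5
price, rfr = 10.0, 0.03                # hypothetical Price and RiskFreeRate

# Move Price to the right-hand side so the root of f is the r we want.
f(r) = sum(earnings[t] / (1 + r)^t for t in 1:5) +
       earnings[5] * rfr / ((1 + r)^5 * (1 - rfr)) - price

r = fzero(f, 0.0, 1.0)   # search the bracket 0 < r < 1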
In addition, KINSOL from Sundials.jl is good when you know you're starting close to a multidimensional root. For multidimensional problems that need some robustness to the initial condition, I'd recommend NLsolve.jl.
There's nothing quite like that out of the box, no. Root finding is a science in itself.
Luckily for you, your function has an analytic first derivative with respect to r. That means you can use Newton-Raphson, which will be very stable for your function; see the sketch below.
I'm sure you're aware your function behaves badly around r = -1.
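For completeness, a minimal Newton-Raphson sketch in Julia (with a toy f; for the pricing equation you would differentiate each term with respect to r by hand):

function newton(f, fp, r0; tol=1e-10, maxiter=100)
    # Iterate r <- r - f(r)/f'(r) until the step is tiny.
    r = r0
    for _ in 1:maxiter
        step = f(r) / fp(r)
        r -= step
        abs(step) < tol && return r
    end
    error("Newton-Raphson did not converge")
end

f(r) = r^2 - 2      # toy example with root sqrt(2)
fp(r) = 2r          # its analytic derivative
newton(f, fp, 1.0)  # ≈ 1.4142135623730951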