I am looking at a two-step approach for an optimization problem. The first step uses a MILP formulation of the problem; the second step uses the solution from the first step as an initial solution, but now with a MIQP formulation. I have been able to apply this concept in MATLAB using CPLEX. However, I am now trying to do the same using CVXPY with CPLEX as the solver. I know about the warm_start option, but it does not work with the CPLEX solver. I am able to set CPLEX parameters, but I am not sure how to initialize my solution. I am thinking of setting the ADVANCE START SWITCH parameter for CPLEX to 1, but then I still need to supply the initial solution. According to this page: http://www-eio.upc.es/lceio/manuals/cplex-11/html/usrcplex/solveMIP17.html, I need to use the setVectors method in a Concert Technology application, or CPXcopymipstart in a Callable Library application, to set the initial solution. I am unsure how to do this from CVXPY.
The functionality you are looking for does not currently exist in CVXPY. CVXPY is a generic modeling layer that wraps several solvers, and it does not expose CPLEX-specific functionality such as CPXreadcopymipstarts or CPXaddmipstarts.
Setting the value property of the variables and using the warm_start option, as suggested in this answer, doesn't work here; that is a CVXPY limitation, and there is an open GitHub issue for it here. In the future, this will likely be the intended solution to your general question.
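For reference, the pattern in question looks roughly like this (a minimal sketch; per the issue above, warm_start is currently ignored by CVXPY's CPLEX interface, so the second solve does not actually receive the MIP start today):

import cvxpy as cp
import numpy as np

n = 5
x = cp.Variable(n, integer=True)
constraints = [x >= 0, x <= 10, cp.sum(x) >= 7]

# Step 1: MILP.
milp = cp.Problem(cp.Minimize(cp.sum(x)), constraints)
milp.solve(solver=cp.CPLEX)

# Step 2: MIQP. x.value is already populated from step 1, and
# warm_start=True is the hook that would pass it on as a starting point.
miqp = cp.Problem(cp.Minimize(cp.quad_form(x, np.eye(n)) + cp.sum(x)), constraints)
miqp.solve(solver=cp.CPLEX, warm_start=True)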
For now, you'll have to use one of the CPLEX APIs directly. As you mentioned in the comments of this related Stack Overflow question, you do not like the idea of using the lower-level CPLEX Python API. That leaves docplex as a viable option.
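For example, docplex lets you attach a MIP start built from the step-1 solution before solving the MIQP. A minimal sketch (the model and start values below are made up; in practice you would copy the values over from your MILP solution):

from docplex.mp.model import Model
from docplex.mp.solution import SolveSolution

mdl = Model(name="miqp_step")
x = mdl.binary_var_list(5, name="x")
mdl.add_constraint(mdl.sum(x) >= 2)
mdl.minimize(mdl.sum((i + 1) * x[i] * x[i] for i in range(5)))  # quadratic objective

# Build the starting point from the MILP solution of step 1.
start = SolveSolution(mdl)
start.add_var_value(x[0], 1)
start.add_var_value(x[1], 1)
mdl.add_mip_start(start)

sol = mdl.solve()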
Related
I searched the JAGS manual and found one post (here, from 2012) about ordinary differential equations (ODEs). I assumed JAGS could handle them because it is similar to WinBUGS (which does so through the WBdiff interface). However, when I have JAGS read my ODE code, it does not even recognize the D(y[...], t) expression.
Can JAGS deal with ODEs? Did I perhaps miss a JAGS plug-in similar to WBdiff?
While WinBUGS/OpenBUGS/JAGS have almost equivalent syntax/feature sets, there are a few differences between them: one of these is that there is no ODE solver included as part of a standard JAGS installation.
However, JAGS is extensible using user-specified modules (like the plug-ins you mentioned), which provide new functions/distributions via C++ code that can then be used within JAGS when that module is loaded. It would certainly be possible to implement an ODE solver this way, using e.g. the ODE solvers included in the Boost C++ library. To do so, you will need the following:
Familiarity with C++
Instructions for how to build a module for JAGS
I can't help you with the former, so this may be a dead end for you if you have never used C++ before. But there is a tutorial available on how to build a JAGS module: https://pubmed.ncbi.nlm.nih.gov/23959766/. That article shows how to build a standalone module, but if you are happy to accept the limitation of using JAGS from R (as most people do), then it is MUCH easier to build a JAGS module within an R package - you could follow the code in the runjags package as an example: https://cran.r-project.org/package=runjags
If you are thinking of trying to do this yourself then I could potentially help with a few pointers along the way. Of course, it is also possible that someone else has already done this, but if so then I am not aware of it.
I am using PyTorch to carry out vision tasks, but would like to use some of what fast.ai provides since it has a lot of useful functionality. I'd prefer to work mostly in PyTorch since it's easier for me to understand what's going on, it's easier for me to find information on it online, and I want to maintain flexibility.
In https://docs.fast.ai/migrating_pytorch it's written that after I use the following imports: from fastai.vision.all import * and from migrating_pytorch import *, I should be able to start "Incrementally adding fastai goodness to your PyTorch models", which sounds great.
But when I run the second import I get ModuleNotFoundError: No module named 'migrating_pytorch'. Searching in https://github.com/fastai/fastai, I also don't find any mention of migrating_pytorch.py in the code, nor did I manage to find anything online.
(I'm using fast.ai version 2.3.1)
I'd like to know if this is indeed the way to go, and if so, how to get it working. Or, if there's a better way, how I should use that approach instead.
As an example, it would be nice if I could use the EarlyStoppingCallback, SaveModelCallback, and add some metrics from fast.ai instead of writing them myself, while still having everything in mostly "native" PyTorch.
Preferably the solution isn't specific to vision only, but that's my current need.
migrating_pytorch is an example script. It's in the fast.ai repo at: https://github.com/fastai/fastai/blob/master/nbs/examples/migrating_pytorch.py
The notebook that shows how to use it is at: https://github.com/fastai/fastai/blob/827e7cc0fad2db06c40df393c9569309377efac0/nbs/examples/migrating_pytorch.ipynb
For the callback example, your training code would end up looking something like:
cbs = [EarlyStoppingCallback(), SaveModelCallback()]
learner = Learner(dls, simple_cnn(), loss_func=F.cross_entropy, cbs=cbs)
learner.fit(1)
Those two callbacks will probably need some arguments, e.g. a save path for the model or the metric to monitor.
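As a rough sketch of how the pieces might fit together while keeping the model and DataLoaders in plain PyTorch (the tiny synthetic dataset is only there to make the example self-contained):

from fastai.vision.all import *
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data; in practice these would be your existing PyTorch DataLoaders.
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
train_dl = DataLoader(TensorDataset(X[:200], y[:200]), batch_size=32, shuffle=True)
valid_dl = DataLoader(TensorDataset(X[200:], y[200:]), batch_size=32)

dls = DataLoaders(train_dl, valid_dl)  # wrap the plain loaders for fastai

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))

cbs = [EarlyStoppingCallback(monitor='valid_loss', patience=2),
       SaveModelCallback(monitor='valid_loss', fname='best_model')]

learner = Learner(dls, model, loss_func=F.cross_entropy, metrics=accuracy, cbs=cbs)
learner.fit(10)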
I know that for integer optimization problems we can use different solvers (CP-SAT or the original CP solver). How do I use a different solver for routing problems (TSP or others)? From the OR-Tools documentation I found the function SolveModelWithSat, which seems to allow using a different solver, but I have no idea how to use it.
There are search parameters that control which solver is used.
Please note that the CP-SAT implementation does not support all features.
See this section in the parameter file.
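For illustration, a hedged sketch of what that looks like in Python; the use_cp_sat and use_cp fields are the ones I believe routing_parameters.proto defines for this, so double-check them against your OR-Tools version:

from ortools.constraint_solver import pywrapcp
from ortools.util import optional_boolean_pb2

# Tiny routing model: 5 nodes, 1 vehicle, depot at node 0.
manager = pywrapcp.RoutingIndexManager(5, 1, 0)
routing = pywrapcp.RoutingModel(manager)

def distance_callback(from_index, to_index):
    # Toy integer distance between node ids.
    return abs(manager.IndexToNode(from_index) - manager.IndexToNode(to_index))

transit = routing.RegisterTransitCallback(distance_callback)
routing.SetArcCostEvaluatorOfAllVehicles(transit)

params = pywrapcp.DefaultRoutingSearchParameters()
# Switch the underlying solver to CP-SAT (the fields are OptionalBoolean values).
params.use_cp_sat = optional_boolean_pb2.BOOL_TRUE
params.use_cp = optional_boolean_pb2.BOOL_FALSE

solution = routing.SolveWithParameters(params)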
TorchScript provides torch.jit.trace and torch.jit.script to convert PyTorch code from eager mode to script mode. From the documentation, I understand that torch.jit.trace cannot handle control flow and other Python data structures, and that torch.jit.script was developed to overcome those limitations.
But it looks like torch.jit.script works in all cases, so why do we need torch.jit.trace?
Please help me understand the difference between these two methods.
If torch.jit.script works for your code, then that's all you should need. Code that uses dynamic behavior such as polymorphism isn't supported by the compiler torch.jit.script uses, so for cases like that, you would need to use torch.jit.trace.
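As a small illustration of the difference (a minimal sketch):

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        # Data-dependent control flow: scripting preserves the if/else,
        # while tracing only records the branch taken by the example input.
        if x.sum() > 0:
            return self.fc(x)
        return -self.fc(x)

m = MyModule()
scripted = torch.jit.script(m)                   # compiles the Python source
traced = torch.jit.trace(m, torch.randn(1, 4))   # records the ops run on this example input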
Is there a way I can use the deap library inside Grasshopper's Python node?
I want to run a genetic algorithm, but the fitness function is to be calculated by Grasshopper (only the fitness function; everything else is to be taken care of by deap inside the Python node).
Can it be done?
I am having problems with:
importing the deap library in Grasshopper's Python interface (I think I will be able to solve this by copying the files manually from the Python path)
(major problem) Grasshopper doesn't allow closed loops, so I can't seem to find a way to feed the fitness back into the Python node that holds the main code
I couldn't get it to work and had to make do with the Grasshopper plugins.
The problem was that you can only install IronPython libraries for Grasshopper.
These are two well-known issues with 'out-of-the-box' Grasshopper, but there are several plugins that can help overcome them.
Question One
The basic GHPython component uses IronPython, which limits which libraries are compatible and able to be used. To get around this constraint there is a plugin called 'GH_CPython'. It allows you to set a locally installed Python interpreter for your code, and then have access to any libraries available to that local interpreter. So if you install the deap library locally, it will be available within the Grasshopper GH_CPython editor. Here is a link to download and install GH_CPython: https://www.food4rhino.com/en/app/ghcpython
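Once deap imports correctly there, the GA setup itself is standard deap. Here is a minimal sketch, with evaluate_in_grasshopper as a placeholder for wherever your Grasshopper-computed fitness comes back in:

import random
from deap import base, creator, tools, algorithms

# Placeholder: in your setup this value would come back from Grasshopper.
def evaluate_in_grasshopper(individual):
    return (sum(individual),)  # deap expects a tuple of fitness values

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("attr_float", random.uniform, 0.0, 1.0)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_float, n=10)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate_in_grasshopper)
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0.0, sigma=0.2, indpb=0.1)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=30)
algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=20, verbose=False)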
Question Two
As you noted, Grasshopper is procedural and has limited support for recursive routines. To get around this there are several plugins that support recursion and may be able to help with your implementation. Which plugin would be best for your situation is difficult to say without a deeper description of your goals. Here are several options; each provides recursive functionality that allows for 'closed loops', where the results of a script can be fed back as input.
Hoopsnake - very basic and has been around the longest
Anemone - A little more flexible and uses multiple components for loop start and end for cleaner-looking scripts. It also has a 'record history' functionality.
Octopus - Has a 'Loop' component that is similar to Hoopsnake. It also has a 'record history' functionality.