I'm trying to restart an optimisation in pymoo.
I have a problem defined as:
class myOptProb(Problem):
    """my body goes here"""

algorithm = NSGA2(pop_size=24)
problem = myOptProb(opt_obj=dp_ptr,
                    nvars=7,
                    nobj=4,
                    nconstr=0,
                    lb=0.3 * np.ones(7),
                    ub=0.7 * np.ones(7),
                    parallelization=('threads', cpu_count(),))
res = minimize(problem,
               algorithm,
               ('n_gen', 100),
               seed=1,
               verbose=True)
During the optimisation I write the design vectors and results to .csv files. An example of design_vectors.csv is:
5.000000000000000000e+00, 4.079711567060104183e-01, 6.583544872784267143e-01, 4.712364759485179189e-01, 6.859360188593541796e-01, 5.653765991273791425e-01, 5.486782880836487131e-01, 5.275405748345924906e-01,
7.000000000000000000e+00, 5.211287914743063521e-01, 6.368123569438421949e-01, 3.496693260479644128e-01, 4.116734716044557763e-01, 5.343037085833151068e-01, 6.878382993278697732e-01, 5.244120877022839800e-01,
9.000000000000000000e+00, 5.425317846613321171e-01, 5.275405748345924906e-01, 4.269449637288642574e-01, 6.954464617649794844e-01, 5.318980876983187001e-01, 4.520564690494201510e-01, 5.203792876471586837e-01,
1.100000000000000000e+01, 4.579502451694219545e-01, 6.853050113762846340e-01, 3.695822666721857441e-01, 3.505318077758549089e-01, 3.540316632186925050e-01, 5.022648662707586142e-01, 3.086099221096791911e-01,
3.000000000000000000e+00, 4.121775968257620493e-01, 6.157117313805953174e-01, 3.412904026310568106e-01, 4.791574104703620329e-01, 6.634382012372381787e-01, 4.174456593494717538e-01, 4.151101354345394512e-01,
The results.csv is:
5.000000000000000000e+00, 1.000000000000000000e+05, 1.000000000000000000e+05, 1.000000000000000000e+05, 1.000000000000000000e+05,
7.000000000000000000e+00, 1.041682833582066703e+00, 3.481167125962069189e-03, -5.235115318709097909e-02, 4.634480813876099177e-03,
9.000000000000000000e+00, 1.067730307802263967e+00, 2.194702810002167534e-02, -3.195892023664552717e-01, 1.841232582360878426e-03,
1.100000000000000000e+01, 8.986880344052742275e-01, 2.969022150977750681e-03, -4.346692726475211849e-02, 4.995468429444801205e-03,
3.000000000000000000e+00, 9.638770499257821589e-01, 1.859596479928402393e-02, -2.723230073142696162e-01, 1.600910928983005632e-03,
The first column is the index of the design vector - because I thread asynchronously, I specify the indices.
I see that it should be possible to restart the optimisation via the sampling parameter of pymoo.algorithms.nsga2.NSGA2, but I couldn't find a working example. The documentation for both population and individuals is also not clear. So how can I restart a simulation with the previous results?
Yes, you can initialize the algorithm object with a population instead of doing it randomly.
I have written a small tutorial for a biased initialization:
https://pymoo.org/customization/initialization.html
Because in your case the data already exists, in a CSV or in-memory file, you might want to create a dummy problem (I have called it Constant in my example) to set the attributes in the Population object. (In the population, X, F, G, CV and feasible need to be set.) Another way would be setting the attributes directly...
The biased initialization with a dummy problem is shown below. If you already use pymoo to store the csv files, you can also just np.save the Population object directly and load it. Then none of the intermediate steps are necessary.
I am planning to improve the checkpoint implementation in the future. So if you have more feedback and use cases which are not possible yet, please let me know.
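For completeness, a minimal sketch of the np.save route mentioned above. This is a hedged example: it assumes the Population object pickles cleanly in your pymoo version, "checkpoint.npy" is a hypothetical file name, and res is the Result object returned by minimize in the question:

import numpy as np

# after a finished (or interrupted) run, store the final population
np.save("checkpoint.npy", res.pop)

# in the restart script, load it back and pass it as the initial sampling
# (depending on the pymoo version you may need .item() after loading)
pop = np.load("checkpoint.npy", allow_pickle=True)
algorithm = NSGA2(pop_size=24, sampling=pop)

The full biased-initialization example with the dummy problem follows: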
import numpy as np

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.factory import get_problem, G1, Problem
from pymoo.model.evaluator import Evaluator
from pymoo.model.population import Population
from pymoo.optimize import minimize


class YourProblem(Problem):

    def __init__(self, n_var=10):
        super().__init__(n_var=n_var, n_obj=1, n_constr=0, xl=0, xu=1, type_var=np.double)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum(np.square(x - 0.5), axis=1)


problem = YourProblem()

# create initial data and set to the population object - for you, this comes from your file
N = 300
X = np.random.random((N, problem.n_var))
F = np.random.random((N, problem.n_obj))
G = np.random.random((N, problem.n_constr))


class Constant(YourProblem):

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = F
        out["G"] = G


pop = Population().new("X", X)
Evaluator().eval(Constant(), pop)

algorithm = GA(pop_size=100, sampling=pop)

minimize(problem,
         algorithm,
         ('n_gen', 10),
         seed=1,
         verbose=True)
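To connect this to the question's files, here is a hedged sketch of building X and F from design_vectors.csv and results.csv. The slicing assumes the first column is the asynchronous-evaluation index, as in the sample rows, and usecols guards against the trailing comma on each line:

raw_X = np.loadtxt("design_vectors.csv", delimiter=",", usecols=range(8))
raw_F = np.loadtxt("results.csv", delimiter=",", usecols=range(5))

# sort both files by their index column so the rows line up, then drop the index
X = raw_X[np.argsort(raw_X[:, 0]), 1:]
F = raw_F[np.argsort(raw_F[:, 0]), 1:]

These X and F can then replace the random arrays in the example above.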
Related
I'm running a constrained optimisation with scipy.optimize.minimize(method='COBYLA').
In order to evaluate the cost function, I need to run a relatively expensive simulation to compute a dataset from the input variables, and the cost function is one (cheap to compute) property of that dataset. However, two of my constraints are also dependent on that expensive data.
So far, the only way I have found to constrain the optimisation is to have each of the constraint functions recompute the same dataset that the cost function has already calculated (simplified pseudocode):
def costfun(x):
    data = expensive_fun(x)
    return cheap_fun1(data)

def constr1(x):
    data = expensive_fun(x)
    return cheap_fun2(data)

def constr2(x):
    data = expensive_fun(x)
    return cheap_fun3(data)

constraints = [{'type': 'ineq', 'fun': constr1},
               {'type': 'ineq', 'fun': constr2}]

# initial guess
x0 = np.ones((6,))

opt_result = minimize(costfun, x0, method='COBYLA',
                      constraints=constraints)
This is clearly not efficient because expensive_fun(x) is called three times for every x.
I could change this slightly to include a universal "evaluate some cost" function which runs the expensive computation, and then evaluates whatever criterion it has been given. But while that saves me from having to write the "expensive" code several times, it still runs three times for every iteration of the optimizer:
# universal cost function evaluator
def criterion_from_x(x, cfun):
    data = expensive_fun(x)
    return cfun(data)

def costfun(data):
    return cheap_fun1(data)

def constr1(data):
    return cheap_fun2(data)

def constr2(data):
    return cheap_fun3(data)

constraints = [{'type': 'ineq', 'fun': criterion_from_x, 'args': (constr1,)},
               {'type': 'ineq', 'fun': criterion_from_x, 'args': (constr2,)}]

# initial guess
x0 = np.ones((6,))

opt_result = minimize(criterion_from_x, x0, method='COBYLA',
                      args=(costfun,), constraints=constraints)
I have not managed to find any way to set something up where x is used to generate data at each iteration, and data is then passed to both the objective function as well as the constraint functions.
Does something like this exist? I've noticed the callback argument to minimize(), but that is a function which is called after each step. I'd need some kind of preprocessor which is called on x before each step, whose results are then available to the cost function and constraint evaluation. Maybe there's a way to sneak it in somehow? I'd like to avoid writing my own optimizer.
One, more traditional, way to solve this would be to evaluate the constraints inside the cost function (which has all the data it needs for that), have it add a penalty for violated constraints to the main cost, and run the optimizer without the explicit constraints. But I've tried this before and found that the main cost function can become somewhat chaotic in cases where the constraints are violated, so an optimizer might get stuck in a place which violates the constraints and never find its way out again.
Another approach would be to produce some kind of global variable in the cost function and write the constraint evaluation to use that global variable, but that could be very dangerous if multithreading/-processing gets involved, or if the name I choose for the global variable collides with a name used anywhere else in the code:
def costfun(x):
    global data
    data = expensive_fun(x)
    return cheap_fun1(data)

def constr1(x):
    return cheap_fun2(data)

def constr2(x):
    return cheap_fun3(data)
I know that some people use file I/O for cases where the cost function involves running a large simulation which produces a bunch of output files. After that, the constraint functions can just access those files -- but my problem is not that big.
I'm currently using Python v3.9 and scipy 1.9.1.
You could write a decorator class in the same vein as scipy's MemoizeJac that caches the return values of the expensive function each time it is called:
import numpy as np

class MemoizeData:
    def __init__(self, obj_fun, exp_fun, constr_fun):
        self.obj_fun = obj_fun
        self.exp_fun = exp_fun
        self.constr_fun = constr_fun
        self._data = None
        self.x = None

    def _compute_if_needed(self, x, *args):
        if not np.all(x == self.x) or self._data is None:
            self.x = np.asarray(x).copy()
            self._data = self.exp_fun(x)

    def __call__(self, x, *args):
        self._compute_if_needed(x, *args)
        return self.obj_fun(self._data)

    def constraint(self, x, *args):
        self._compute_if_needed(x, *args)
        return self.constr_fun(self._data)
This way, the expensive function is only evaluated once for each iteration. Then, after combining all your constraints into one constraint function, you can use it like this:
from scipy.optimize import minimize

def all_constrs(data):
    return np.hstack((cheap_fun2(data), cheap_fun3(data)))

obj = MemoizeData(cheap_fun1, expensive_fun, all_constrs)
constr = {'type': 'ineq', 'fun': obj.constraint}
x0 = np.ones(6)

opt_result = minimize(obj, x0, method="COBYLA", constraints=constr)
While Joni was writing their answer, I found another one, which is admittedly more hacky. I prefer theirs, but for the sake of completeness, I wanted to post this one, too.
It's derived from the material at https://mdobook.github.io/ and the accompanying video tutorials from the BYU FLOW Lab, in particular this video:
The trick is to use module-level variables to keep a cache of the last evaluation of the expensive function:
import numpy as np

last_x = None
last_data = None

def compute_data(x):
    data = expensive_fun(x)
    return data

def get_last_data(x):
    global last_x, last_data
    if not np.array_equal(x, last_x):
        last_data = compute_data(x)
        last_x = np.copy(x)
    return last_data

def costfun(x):
    data = get_last_data(x)
    return cheap_fun1(data)

def constr1(x):
    data = get_last_data(x)
    return cheap_fun2(data)

def constr2(x):
    data = get_last_data(x)
    return cheap_fun3(data)
...and then everything can progress as in my original code in the question.
Reasons why I prefer Joni's class-based version:
variable scopes are clearer than with module-level globals
if some of the functions allow calculation of their Jacobian, or there are other things worth buffering, the added complexity is held in check better than with an ever-growing set of cache variables
Having a class instance do all the work also allows you to do other interesting things, like keeping a record of all past evaluations and the path taken by the optimizer, without having to use a separate callback function. That's very useful for debugging/tweaking convergence if the optimizer won't converge or takes too long, but also for visualizing or otherwise investigating the objective function.
The same ability might actually be really cool for things like constructing a response surface model from the results of previous function evaluations. That could be used to establish a starting guess in case the expensive function is some numerical method that benefits from a good starting point.
Both approaches allow the use of "cheap" constraints which don't require the expensive function to be evaluated, by simply providing them as separate functions. Not sure whether that would help much with compute times, though. I suppose that would depend on the algorithm used by the optimizer.
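For instance, with the MemoizeData object from Joni's answer, a hypothetical cheap constraint can sit alongside the memoized one without ever triggering expensive_fun:

def cheap_constr(x):
    # example constraint on x alone; no expensive data needed
    return x[0] - x[1]

constraints = [{'type': 'ineq', 'fun': obj.constraint},
               {'type': 'ineq', 'fun': cheap_constr}]

opt_result = minimize(obj, x0, method='COBYLA', constraints=constraints)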
I'd like to use multiprocessing in a resource-heavy computation in a code I write, as shown in this watered-down example:
import numpy as np
import multiprocessing as multiproc

def function(r, phi, z, params):
    """returns an array of the timepoints and the corresponding values
    (demanding computation in actual code, with iFFT and stuff)"""
    times = np.array([1., 2., 3.])
    tdependent_vals = r + z * times + phi
    return np.array([times, tdependent_vals])

def calculate_func(rmax, zmax, phi, param):
    rvals = np.linspace(0, rmax, 5)
    zvals = np.linspace(0, zmax, 5)
    for r in rvals:
        func_at_r = lambda z: function(r, phi, z, param)[1]
        with multiproc.Pool(2) as pool:
            fieldvals = np.array([*pool.map(func_at_r, zvals)])
        print(fieldvals)  # for test, it's actually saved in a numpy array

calculate_func(3., 4., 5., 6.)
If I run this, it fails with
AttributeError: Can't pickle local object 'calculate_func.<locals>.<lambda>'
What I think the reason is, according to the documentation, only top-level defined functions can be pickled, and my in-function defined lambda can't. But I don't see any way I could make it a standalone function, at least without polluting the module with a bunch of top-level variables: the parameters are unknown before calculate_func is called, and they're changing at each iteration over rvals. This whole multiprocessing thing is very new to me, and I couldn't come up with an alternative. What would be the simplest working way to parallelize the loop over rvals and zvals?
Note: I used this answer as a starting point.
This probably isn't the best answer for this, but it's an answer, so please no hate :)
You can just write a top-level wrapper function that can be serialized and have it execute other functions... It's a bit like function inception, but I solved a similar problem in my code like this.
Here is a brief example
import base64
import marshal
import types
import multiprocessing as mp

def wrapper(payload):
    # rebuild a callable from the marshalled code object and call it
    func_str, args = payload
    code = marshal.loads(base64.b64decode(func_str))
    func = types.FunctionType(code, globals(), "wrapped_func")
    return func(*args)

def run_func(func, arg_sets):
    # marshal serializes the code object even when the function itself
    # can't be pickled; note it does not carry closure variables
    func_str = base64.b64encode(marshal.dumps(func.__code__, 0))
    payloads = [(func_str, args) for args in arg_sets]
    with mp.Pool(2) as pool:
        results = pool.map(wrapper, payloads)
    return results
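A hypothetical usage with the question's function(): marshal only carries the code object, not closure cells, so every changing value has to travel through the argument tuples instead of being captured by a lambda (this also assumes wrapper lives in the question's module, so that globals() contains np when the rebuilt function runs):

arg_sets = [(r, phi, z, param) for z in zvals]
# each result is [times, values]; keep the values row, as the lambda's [1] did
fieldvals = np.array(run_func(function, arg_sets))[:, 1]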
So I am writing a GAN in tensorflow, and need the discriminator and generator to be objects. Now I am having problems with creating the training dataset for the discriminator.
Currently the relevant part of my code looks like this:
self.dataset=tf.data.Dataset.from_tensor_slices((self.y_,self.x_)) #creates dataset
self.fake_dataset=tf.data.Dataset.from_tensor_slices((self.x_fake_)) #creates dataset
self.dataset=self.dataset.shuffle(buffer_size=BUFFER_SIZE) #shuffles
self.fake_dataset=self.fake_dataset.shuffle(buffer_size=BUFFER_SIZE) #shuffles
self.dataset=self.dataset.repeat().batch(self.batch_size) #batches
self.fake_dataset=self.fake_dataset.repeat().batch(self.batch_size) #batches
self.iterator=tf.data.Iterator.from_structure(self.dataset.output_types,self.dataset.output_shapes) #creates iterators
self.fake_iterator=tf.data.Iterator.from_structure(self.fake_dataset.output_types,self.fake_dataset.output_shapes) #creates iterators
self.x=self.iterator.get_next()
self.x_fake=self.fake_iterator.get_next()
self.dataset_init_op = self.iterator.make_initializer(self.dataset,name=self.name+'_dataset_init')
self.fake_dataset_init_op=self.fake_iterator.make_initializer(self.fake_dataset,name=self.name+'_dataset_init')
What I need is for the function to alternately give one batch of self.x, followed by one batch of self.x_fake.
Is there an easy way to do this, or will I have to resort to a counter and an if statement?
Not sure if I'm understanding exactly what you need, but since the choice of iterator is made at graph construction time, you can use plain Python logic to pick the iterator you need on each call. For example:
def __init__(self):
    # Make graph and iterators...
    self._use_fake_batch = False

def next_batch(self):
    it = self.fake_iterator if self._use_fake_batch else self.iterator
    self._use_fake_batch = not self._use_fake_batch
    return it.get_next()
Or without an additional variable, using itertools:
from itertools import chain, repeat

def __init__(self):
    # Make graph and iterators...
    self._iterators = chain.from_iterable(repeat((self.iterator, self.fake_iterator)))

def next_batch(self):
    return next(self._iterators).get_next()
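A minimal usage sketch (model and num_steps are placeholder names; the init ops come from the question's code). Note that get_next() adds an op to the graph each time it is called, so in graph-mode TensorFlow you'd build these tensors once, outside the training loop:

import tensorflow as tf

# build the alternating batch tensors once, at graph construction time
real_batch = model.next_batch()  # first call -> self.iterator
fake_batch = model.next_batch()  # second call -> self.fake_iterator

with tf.Session() as sess:
    sess.run([model.dataset_init_op, model.fake_dataset_init_op])
    for _ in range(num_steps):
        real, fake = sess.run([real_batch, fake_batch])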
When running an analysis under MPI with distributed components in a ParallelGroup, I get an error when adding a DumpRecorder to the analysis. Below is a small example that demonstrates this (this was run with the latest master branch commit aaa67a4d51f4081e9e41b250b0a76b077f6f0c21 from 28/10/2015):
import numpy as np

from openmdao.core.mpi_wrap import MPI
from openmdao.api import Component, Group, DumpRecorder, Problem, ParallelGroup


class Sliced(Component):
    def __init__(self):
        super(Sliced, self).__init__()
        self.add_param('x', 0.)
        self.add_output('y', 0.)

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['y'] = params['x'] * 2.


class VectorComp(Component):
    def __init__(self, size):
        super(VectorComp, self).__init__()
        self.add_param('xin', np.zeros(size))
        self.add_output('x', np.zeros(size))

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['x'] = params['xin'] * 2.


class Analysis(Group):
    def __init__(self, size):
        super(Analysis, self).__init__()
        self.add('v', VectorComp(size), promotes=['*'])
        par = self.add('par', ParallelGroup())
        for i in range(size):
            par.add('sec%02d' % i, Sliced())
            self.connect('x', 'par.sec%02d.x' % i, src_indices=[i])


if __name__ == '__main__':
    if MPI:
        from openmdao.core.petsc_impl import PetscImpl as impl
    else:
        from openmdao.core.basic_impl import BasicImpl as impl

    p = Problem(impl=impl, root=Analysis(4))

    recorder = DumpRecorder('optimization.log')
    # adding specific includes works, but leaving it out results in a crash
    # recorder.options['includes'] = ['x']
    p.driver.add_recorder(recorder)

    p.setup()
    p.run()
The error which is raised is:
RuntimeError: Cannot access remote Variable 'par.sec00.x' in this process.
I see that the recorder dumps a file per processor, so shouldn't the BaseRecorder._filter_vectors method filter out params not present on a specific processor? I'm not yet familiar enough with the code to propose a fix, so I hope the OpenMDAO devs can easily figure out what goes wrong.
Manually specifying the includes works since the Sliced parameters are then excluded, but it would be nice if this were not necessary and it was dealt with under the hood.
I also want to let you guys know how excited we are about the new framework. It is so much faster than the 0.x version, and the parallel FD feature is much appreciated and works like a charm!
There were some recent changes that broke the dump recorder in parallel. We put a story up for someone to fix it, but in the meantime, you might want to try the SqliteRecorder instead. It's what I have been using for performance testing on CADRE. You set it up the same way, but then read the values back using an sqlitedict. There is a small example in the docs, but a more practical example is here in the CADRE code:
https://github.com/OpenMDAO/CADRE/blob/master/plot_progress.py
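A hedged sketch of that setup, patterned on the CADRE script linked above (the 'openmdao' table name and the 'Unknowns' key match the OpenMDAO 1.x recorder of that era; the file name is arbitrary):

from openmdao.api import SqliteRecorder
from sqlitedict import SqliteDict

recorder = SqliteRecorder('optimization.sqlite')
p.driver.add_recorder(recorder)
p.setup()
p.run()

# read the recorded cases back after the run
db = SqliteDict('optimization.sqlite', 'openmdao')
for iteration, record in db.items():
    print(iteration, record['Unknowns'])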
I am doing a Monte Carlo calculation and I'd like to save the intermediate results to disk. Below is a basic version of my code. In my original version, I had a data aggregator object that would collect the results from each trajectory and then at the end calculate some statistics and write to disk, but I began to run out of memory and the files were unwieldy. I am trying instead to tack on PyTables so that I can a) flush the data to disk and b) efficiently read it back in for further processing when it's done. I am working from this tutorial. My problem is that for each run, the data that would go into the layer column is a 1xn vector where n is set at the start of the script (it's actually passed through on the command line in real life).
Python won't let me define the table descriptor class inside the aggregator class, but the size n is outside the scope of the descriptor class. I'm coming from a MATLAB background, where all of the table creation and flushing to disk is hidden behind the single matfile command, so I'm really lost here.
How should I properly initialize my data table so that it can be seen within the aggregator object? If I should be doing this differently, how can I do the least amount of damage to my already working (except for the writing to disk) code?
import tables
import numpy

class Trajectory(tables.IsDescription):
    start = tables.Float32Col(shape=(1, 2))
    end = tables.Float32Col(shape=(1, 2))
    layer = tables.Float32Col(shape=(1, n))  # how do I pass n to here?

class AggregateResults(object):
    def __init__(self, n, filename):
        self.n = n
        self.h5 = tables.openFile(filename, mode="w")
        self.traj_group = self.h5.createGroup(self.h5.root, "Trajectories")
        self.traj_table = self.h5.createTable(self.traj_group, "trajectory", Trajectory, "Single Trajectory")

    def end_of_trajectory(self, results):
        trajectory = self.traj_table.row
        trajectory['start'] = results.start_position
        trajectory['end'] = results.end_position
        trajectory['layer'] = results.layer_path
        trajectory.append()
        self.traj_table.flush()

    def end_of_run(self):
        self.h5.close()

def do_code(aggregate):
    results = ...  # long calculation goes here
    aggregate.end_of_trajectory(results)

def main():
    filename = "filename.h5"
    n = 7
    aggregate = AggregateResults(n, filename)
    for x in range(100000):
        do_code(aggregate)
    aggregate.end_of_run()
This doesn't completely answer my own question, but in solving a different problem, I came upon a solution. Rather than saving in a table, as above, I am saving the variable length vector as a separate array as described here. Then I save each of the scalar values as an attribute of that vector.
class AggregateResults(object):
    def __init__(self, n, filename):
        self.n = n
        self.h5 = tables.openFile(filename, mode="w")
        self.traj_group = self.h5.createGroup(self.h5.root, "Trajectories")

    def end_of_trajectory(self, results):
        i = current_photon  # tracked elsewhere in my code
        current_vector_name = "vector%02d" % i
        current_vector = self.h5.createArray(self.traj_group, current_vector_name, results.layer)
        current_vector.attrs.start = results.start
        current_vector.attrs.end = results.end
        self.h5.flush()

    def end_of_run(self):
        self.h5.close()
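For reference, the original table-based layout can also keep n dynamic by building the description as a plain dictionary at runtime instead of a class, which PyTables accepts in place of an IsDescription subclass (a sketch, using the same old-style camelCase API as above):

def trajectory_description(n):
    # built at runtime, so n can come from the command line
    return {
        'start': tables.Float32Col(shape=(1, 2)),
        'end': tables.Float32Col(shape=(1, 2)),
        'layer': tables.Float32Col(shape=(1, n)),
    }

# inside AggregateResults.__init__:
# self.traj_table = self.h5.createTable(self.traj_group, "trajectory",
#                                       trajectory_description(n),
#                                       "Single Trajectory")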