How to enable parallel in scipy.optimize.differential_evolution? - python-3.x

I am trying to find the global minimum of a function using differential_evolution from scipy.optimize. As explained in the SciPy reference guide, I should set the following options:
updating='deferred', workers=<number of cores>
However, when I run the code, it freezes and does nothing. How can I solve this issue, or is there any better way for parallelizing the global optimizer?
This is the call in my code:
scipy.optimize.differential_evolution(objective, bnds, args=(),
                                      strategy='best1bin', maxiter=1e6,
                                      popsize=15, tol=0.01, mutation=(0.5, 1),
                                      recombination=0.7, seed=None,
                                      callback=None, disp=False, polish=True,
                                      init='latinhypercube', atol=0,
                                      updating='deferred', workers=2)

Came across the same problem myself. The support for parallelism in scipy.optimize.differential_evolution was added in version 1.2.0 and the version I had was too old. When looking for the documentation, the top result also referred to the old version. The newer documentation can instead be found at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html.
I use a virtual environment and pip for package management, and to upgrade to the latest version of SciPy I just had to run pip install --upgrade scipy. If you use Anaconda, you might need to run e.g. conda install scipy=1.4.1.
To activate the parallelism, set the workers flag to a value > 1 for a specific number of cores, or workers=-1 to use all available cores.
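For illustration, here is a minimal, self-contained sketch of a parallel run; the objective function and bounds are made up for the example:
import scipy.optimize

def objective(x):
    # Simple sphere function, used only as a stand-in objective
    return sum(xi ** 2 for xi in x)

if __name__ == "__main__":
    bounds = [(-5, 5), (-5, 5)]
    result = scipy.optimize.differential_evolution(
        objective, bounds,
        updating='deferred',  # needed for parallel evaluation
        workers=-1)           # use all available cores
    print(result.x, result.fun)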
One caveat: Don't make the same mistake I did and call differential_evolution directly at the top level of a Python script on Windows, because it won't run. This is due to how multiprocessing.Pool works. Specifically, instead of the following:
import scipy.optimize

def minimize_me(x, *args):
    ...  # Your code
    return result

# DO NOT DO IT LIKE THIS
...  # Prepare all the arguments
# This will give errors
result = scipy.optimize.differential_evolution(minimize_me, bounds=function_bounds, args=extraargs,
                                               disp=True, polish=False, updating='deferred', workers=-1)
print(result)
use the code below:
import scipy.optimize

def minimize_me(x, *args):
    ...  # Your code
    return result

# DO IT LIKE THIS
if __name__ == "__main__":
    ...  # Prepare all the arguments
    result = scipy.optimize.differential_evolution(minimize_me, bounds=function_bounds, args=extraargs,
                                                   disp=True, polish=False, updating='deferred', workers=-1)
    print(result)
See this post for more info about parallel execution on Windows: Compulsory usage of if __name__=="__main__" in windows while using multiprocessing
Note that even when not on Windows, it's good practice to use the if __name__ == "__main__": guard anyway.

Related

How do I disable %%cython within Python Jupyter Notebook Script?

I'm creating an NLP-related Jupyter notebook, hopefully to be released for public use. I'm using Cython magic commands in some cells to increase speed, putting "%%cython" at the beginning of those cells.
What I want to do is make the use of such Cython magic commands (and related Cython keywords, cdef etc.) a configurable parameter that users can specify in an earlier cell.
I've tried conditionals to allow users to "turn off" Cython, but these don't seem to work, as "%%cython" needs to be the first line of the cell.
Here's the code:
%%cython
import numpy as np  # access to NumPy from the Python layer
import math
cdef:
    ...  # function definition here
Here's my attempt at a solution:
Configuration cell:
turn_on_cython = True  # May be changed by the user to False
Subsequent cell:
if (turn_on_cython == True):
    %%cython
    # (doesn't work)
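One possible direction, sketched here under the assumption that IPython's programmatic magic API behaves like the interactive one (this is not from the original question): keep the cell body in a string and dispatch it conditionally via run_cell_magic. The body below is plain Python so that the fallback branch also runs; the function f is made up for the example.
from IPython import get_ipython

turn_on_cython = True  # user-set flag from the configuration cell

cell_body = """
import math

def f(x):
    return math.sin(x) + 1
"""

ip = get_ipython()
if turn_on_cython:
    ip.run_line_magic('load_ext', 'cython')     # equivalent to %load_ext cython
    ip.run_cell_magic('cython', '', cell_body)  # equivalent to a %%cython cell
else:
    ip.run_cell(cell_body)                      # execute as an ordinary cell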

Why Can't Jupyter Notebooks Handle Multiprocessing on Windows?

On my Windows 10 machine (and seemingly other people's as well), Jupyter Notebook can't seem to handle some basic multiprocessing functions like pool.map(). I can't figure out why this might be, even though a solution has been suggested: call the function to be mapped as a script from another file. My question, though, is why does this not work? Is there a better way to do this kind of thing beyond saving the function in another file?
Note that the solution was suggested in a similar question here. But I'm left wondering why this bug occurs, and whether there is another, easier fix. To show what goes wrong, I'm including below a very simple version that hangs on my computer, whereas the same function runs with no problems when the built-in map is used.
import multiprocessing as mp

# create a grid
iterable = [3, 5, 10]

def add_3(iterable):
    a = iterable + 3
    return a

# Below runs with no problem
results = list(map(add_3, iterable))
print(results)

# multiprocessing attempt (hangs)
def main():
    pool = mp.Pool(2)
    results = pool.map(add_3, iterable)
    return results

if __name__ == "__main__":  # Required not to spawn deviant children
    results = main()
Edit: I've just tried this in Spyder and managed to get it to work. Unsurprisingly, running the following (with print(results) outside the guard) didn't work:
if __name__ == "__main__":  # Required not to spawn deviant children
    results = main()
print(results)
But running it as the following, with print(results) inside the guard, does work, because map isn't evaluated until its result is used, which gives the typical problem:
if __name__ == "__main__":  # Required not to spawn deviant children
    results = main()
    print(results)
Edit 2:
From what I've read on the issue, it turns out that it's largely caused by the IPython shell that Jupyter uses. I think there might be an issue with how __name__ is set. Either way, using Spyder or a different IDE solved the problem, as long as you're not still running the multiprocessing function in an IPython shell.
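As a sketch of the workaround mentioned above (moving the function to be mapped into a separate file), with made-up file and function names:
worker.py, a plain module saved next to the notebook:
def add_3(x):
    return x + 3
Notebook cell:
import multiprocessing as mp
from worker import add_3

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        results = pool.map(add_3, [3, 5, 10])
    print(results)  # [6, 8, 13]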
I faced a similar problem: I couldn't use multiprocessing with a function defined in the same notebook. The solution that works is to put the function in a separate notebook file and import it using ipynb:
from ipynb.fs.full.script_name import function_name
from multiprocessing import Pool

pool = Pool()
result = pool.map(function_name, iterable_argument)

How to suppress ImportWarning in a python unittest script

I am currently running a unittest script which successfully passes the various specified tests, but with a nagging ImportWarning message in the console:
...../lib/python3.6/importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
return f(*args, **kwds)
....
----------------------------------------------------------------------
Ran 7 tests in 1.950s
OK
The script is run with this main function:
if __name__ == '__main__':
    unittest.main()
I have read that warnings can be suppressed when the script is called like this:
python -W ignore:ImportWarning -m unittest testscript.py
However, is there a way of specifying this ignore warning in the script itself so that I don't have to call -W ignore:ImportWarning every time that the testscript is run?
Thanks in advance.
To programmatically prevent such warnings from showing up, adjust your code so that:
import unittest
import warnings

if __name__ == '__main__':
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', category=ImportWarning)
        unittest.main()
Source: https://stackoverflow.com/a/40994600/328469
Update:
@billjoie is certainly correct. If the OP chooses to make answer 52463661 the accepted answer, I am OK with that. I can confirm that the following is effective at suppressing such warning messages at run-time using Python versions 2.7.11, 3.4.3, 3.5.4, 3.6.5, and 3.7.1:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import unittest
import warnings

class TestPandasImport(unittest.TestCase):
    def setUp(self):
        warnings.simplefilter('ignore', category=ImportWarning)

    def test_01(self):
        import pandas  # noqa: E402
        self.assertTrue(True)

    def test_02(self):
        import pandas  # noqa: E402
        self.assertFalse(False)

if __name__ == '__main__':
    unittest.main()
However, I think that the OP should consider doing some deeper investigation into the application code targets of the unit tests, and try to identify the specific package import or operation which is causing the actual warning, and then suppress the warning as closely as possible to the location in code where the violation takes place. This will obviate the suppression of warnings throughout the entirety of one's unit test class, which may be inadvertently obscuring warnings from other parts of the program.
Outside the unit test, somewhere in the application code:
with warnings.catch_warnings():
    warnings.simplefilter('ignore', category=ImportWarning)
    # import pandas
    # or_ideally_the_application_code_unit_that_imports_pandas()
It could take a bit of work to isolate the specific spot in the code that is either causing the warning or leveraging third-party software which causes the warning, but the developer will obtain a clearer understanding of the reason for the warning, and this will only improve the overall maintainability of the program.
I had the same problem, and starting my unittest script with a warnings.simplefilter() statement, as described by Nels, did not work for me. According to this source, this is because:
[...] as of Python 3.2, the unittest module was updated to use the warnings module default filter when running tests, and [...] resets to the default filter before each test, meaning that any change you may think you are making scriptwide by using warnings.simplefilter(“ignore”) at the beginning of your script gets overridden in between every test.
This same source recommends to renew the filter inside of each test function, either directly or with an elegant decorator. A simpler solution is to define the warnings filter inside unittest's setUp() method, which is run right before each test.
import unittest
import warnings

class TestSomething(unittest.TestCase):
    def setUp(self):
        warnings.simplefilter('ignore', category=ImportWarning)
        # Other initialization stuff here

    def test_a(self):
        pass  # Test assertion here.

if __name__ == '__main__':
    unittest.main()
I had the same warning in Pycharm for one test when using unittest. This warning disappeared when I stopped trying to import a library during the test (I moved the import to the top where it's supposed to be). I know the request was for suppression, but this would also make it disappear if it's only happening in a select number of tests.
Solutions based on setUp() suppress warnings for all test methods within the class. If you don't want to suppress them for all of the methods, you can use a decorator.
From Neural Dump:
import warnings

def ignore_warnings(test_func):
    def do_test(self, *args, **kwargs):
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            test_func(self, *args, **kwargs)
    return do_test
Then you can use it to decorate a single test method in your test class:
class TestClass(unittest.TestCase):
    @ignore_warnings
    def test_do_something_without_warning(self):
        self.assertEqual(whatever)

    def test_something_else_with_warning(self):
        self.assertEqual(whatever)

Python command to stop and start Windows services?

What is the Python command to stop and start Windows services? I can't use win32serviceutil because I am using the latest Python version, 3.6.
You could use the sc command line interface provided by Windows:
import subprocess
# start the service
args = ['sc', 'start', 'Service Name']
result = subprocess.run(args)
# stop the service
args[1] = 'stop'
result = subprocess.run(args)
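If you need to know whether the command succeeded, subprocess.run returns a CompletedProcess whose returncode you can inspect; a small sketch ('Service Name' is a placeholder):
import subprocess

result = subprocess.run(['sc', 'start', 'Service Name'])
if result.returncode != 0:
    print('sc failed with exit code', result.returncode)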
The existing answer is very unpythonic and not simple enough. I was searching for a Pythonic solution and stumbled upon this question; what I ended up with is much simpler, even though it isn't Pythonic either: just use net start servicename with os.system.
For example, if we want to start MySQL80:
import os
os.system('net start MySQL80')
Now using it as a function:
import os

def start_service(svc):
    os.system(f'net start {svc}')
And to stop the service, use net stop servicename:
import os

def stop_service(svc):
    os.system(f'net stop {svc}')
I know my solution isn't Pythonic but the existing answer also isn't Pythonic and so far in my Google searching I haven't found anything both relevant and Pythonic.
Install pywin32 from GitHub; there's no limitation regarding Python 3.6 in there (2017, I know), and it works directly with the Win32 API, so no os.system() or similar command calling is needed. Also, there's no need for compiling: the author supplies the binaries in an installer. And there doesn't seem to be any issue with PyPI either; the versions match.
Use the StartService(serviceName, args=None, machine=None) function.
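For illustration, a minimal sketch using win32serviceutil; the service name MySQL80 is just an example:
import win32serviceutil

win32serviceutil.StartService('MySQL80')  # start the service
win32serviceutil.StopService('MySQL80')   # stop it again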

restart python (or reload modules) in py.test tests

I have a (Python 3) package that has completely different behaviour depending on how it's init()ed (perhaps not the best design, but rewriting is not an option). The module can only be init()ed once; a second init() gives an error. I want to test this package (both behaviours) using py.test.
Note: the nature of the package makes the two behaviours mutually exclusive; there is no possible reason to ever want both in a single program.
I have several test_xxx.py modules in my test directory. Each module will init the package in the way it needs (using fixtures). Since py.test starts the Python interpreter once, running all test modules in one py.test run fails.
Monkey-patching the package to allow a second init() is not something I want to do, since there is internal caching etc. that might result in unexplained behaviour.
Is it possible to tell py.test to run each test module in a separate Python process (thereby not being influenced by init()s in another test module)?
Is there a way to reliably reload a package (including all sub-dependencies, etc.)?
Is there another solution? (I'm thinking of importing and then unimporting the package in a fixture, but this seems excessive.)
To reload a module, try using reload() from the importlib library.
Example:
from importlib import reload
import some_lib

# do something
reload(some_lib)
Also, launching each test in a new process is viable, but multiprocessed code is kind of painful to debug.
Example:
import some_test
from multiprocessing import Manager, Process

# create a new return-value holder, in this case a list
manager = Manager()
return_value = manager.list()

# create a new process
process = Process(target=some_test.some_function, args=(arg, return_value))

# execute the process
process.start()

# wait for the process to finish
process.join()

# you can now use your return value as if it were a normal list,
# as long as it was assigned in your subprocess
Delete all your module imports, and also the test imports that import your modules:
import sys

for key in list(sys.modules.keys()):
    if key.startswith("your_package_name") or key.startswith("test"):
        del sys.modules[key]
You can use this as a fixture by configuring a fixture in your conftest.py file with the @pytest.fixture decorator.
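A sketch of what that conftest.py fixture could look like; the "your_package_name" prefix is a placeholder:
# conftest.py
import sys

import pytest

@pytest.fixture()
def fresh_modules():
    yield  # run the test first
    # afterwards, drop the package and test modules from the module cache
    for key in list(sys.modules.keys()):
        if key.startswith("your_package_name") or key.startswith("test"):
            del sys.modules[key]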
I once had a similar problem; quite a bad design, though:
import importlib
import sys

import pytest

@pytest.fixture()
def module_type1():
    mod = importlib.import_module('example')
    mod._init(10)
    yield mod
    del sys.modules['example']

@pytest.fixture()
def module_type2():
    mod = importlib.import_module('example')
    mod._init(20)
    yield mod
    del sys.modules['example']

def test1(module_type1):
    pass

def test2(module_type2):
    pass
The example/__init__.py had something like this:
import logging

logger = logging.getLogger(__name__)

def _init(val):
    if 'sample' in globals():
        logger.info(f'example already imported, val: {sample}')
    else:
        globals()['sample'] = val
        logger.info(f'importing example with val : {val}')
output:
importing example with val : 10
importing example with val : 20
No clue as to how complex your package is, but if it's just global variables, then this probably helps.
I have the same problem, and found three solutions:
reload(some_lib), as in the answer above.
Patch the SUT: since the imported method is a key and value in the SUT, you can patch the SUT itself. For example, if you use f2 of m2 in m1, you can patch m1.f2 instead of m2.f2 (see the sketch below).
Import the module and use module.function instead of importing the function directly.
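A sketch of the patching option with unittest.mock, reusing the m1/m2/f2 names from the example above (m1.run is a hypothetical function in m1 that calls f2):
from unittest.mock import patch

import m1

def test_m1_with_patched_f2():
    # m1 does 'from m2 import f2', so the name to patch is m1.f2, not m2.f2
    with patch('m1.f2', return_value=42):
        assert m1.run() == 42  # m1.run is hypothetical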
