Python command to stop and start Windows services? - python-3.x

What is the Python command to stop and start Windows services? I can't use win32serviceutil because I'm using the latest Python version, 3.6.

You could use the sc command line interface provided by Windows:
import subprocess
# start the service
args = ['sc', 'start', 'Service Name']
result = subprocess.run(args)
# stop the service
args[1] = 'stop'
result = subprocess.run(args)
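If you want to know whether the command actually succeeded (wrong service name, missing admin rights, and so on), you can check the return code. A minimal sketch, using the hypothetical service name 'MyService':
import subprocess
# sc prints its diagnostics to stdout, so capture it for the error message
result = subprocess.run(['sc', 'stop', 'MyService'], stdout=subprocess.PIPE)
if result.returncode != 0:
    print('sc failed:', result.stdout.decode(errors='replace'))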

I was searching for a Pythonic solution and stumbled upon this question. The existing answer felt neither simple nor Pythonic, so here is a much simpler alternative (though, admittedly, it isn't really Pythonic either).
Just use net start servicename with os.system.
For example, if we want to start MySQL80:
import os
os.system('net start MySQL80')
Now using it as a function:
import os
def start_service(svc):
    os.system(f'net start {svc}')
And to stop the service, use net stop servicename:
import os
def stop_service(svc):
    os.system(f'net stop {svc}')
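Used together, restarting a service becomes (reusing the MySQL80 service from the example above):
stop_service('MySQL80')
start_service('MySQL80')
Note that both net and sc need an elevated (administrator) prompt to control most services.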
I know my solution isn't Pythonic, but the existing answer isn't either, and so far my searching hasn't turned up anything that is both relevant and Pythonic.

Install pywin32 from GitHub; there is no limitation regarding Python 3.6 there (yes, the release is from 2017), and it works directly with the Win32 API, so there's no os.system() or similar command calling. There's also no need to compile anything: the author supplies the binaries in an installer. And there doesn't seem to be any issue with PyPI either; the versions match.
Use the StartService(serviceName, args=None, machine=None) function from win32serviceutil.
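A minimal sketch of how that looks, using the MySQL80 service name from the other answer purely as an example:
import win32serviceutil

svc = 'MySQL80'
win32serviceutil.StartService(svc)
# ... later ...
win32serviceutil.StopService(svc)
# QueryServiceStatus returns a SERVICE_STATUS tuple; field 1 is the current state
state = win32serviceutil.QueryServiceStatus(svc)[1]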

Related

pywinauto: possible to keep application alive if I exit the Python script?

app = Application(backend="uia").start("program.exe")
I am using pywinauto to do some tasks indefinitely. However, occasionally, I need to restart the script for external reasons, and when that happens I would like to keep the created applications open. How can I do this? I noticed that if the Python script errors out, the applications stay open, but if I exit the script manually, the windows close. So there must be some way to accomplish this.
I don't think pywinauto's Application.start can do this. You can try:
pid = os.spawnl(os.P_NOWAIT, "program.exe")
app = Application().connect(process=pid)
os.spawnl is considered deprecated; use the subprocess module instead.
Combining the answer in "Run a program from python, and have it continue to run after the script is killed" with the official pywinauto docs, you can do this:
import subprocess
from pywinauto import Desktop

subprocess.Popen(
    ['your_program', 'with args'],
    # These flags keep the desktop program alive even when the shell session terminates.
    creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP,
    shell=True
)
desktop = Desktop(backend="uia")
main_win = desktop.window(title="program's window title", control_type="Window")
Why not use a system command?
# Use a separate thread so the blocking os.system call doesn't block this script
import _thread as qd
import os
qd.start_new_thread(os.system, ('notepad',))
from pywinauto import Application
# connect pywinauto to the application via a title regular expression
win = Application(backend='uia').connect(title_re='.*Notepad.*')
pywinauto then connects to the running application through its window title.
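A variation on the same idea that avoids the extra thread: launch the program with subprocess.Popen (which returns immediately) and then connect. A minimal sketch using Notepad; the timeout gives the window time to appear:
import subprocess
from pywinauto import Application

subprocess.Popen(['notepad.exe'])  # non-blocking; Notepad keeps running on its own
app = Application(backend='uia').connect(title_re='.*Notepad.*', timeout=10)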

Why Can't Jupyter Notebooks Handle Multiprocessing on Windows?

On my Windows 10 machine (and seemingly other people's as well), Jupyter Notebook can't seem to handle some basic multiprocessing functions like pool.map(). I can't figure out why this might be, even though a solution has been suggested to call the function to be mapped as a script from another file. My question, though, is why does this not work? And is there a better way to do this kind of thing beyond saving the function in another file?
Note that the solution was suggested in a similar question here. But I'm left wondering why this bug occurs, and whether there is an easier fix. To show what goes wrong, I'm including below a very simple version that hangs on my computer, even though the same function runs with no problems when the built-in map is used.
import multiprocessing as mp

# create a grid
iterable = [3, 5, 10]

def add_3(iterable):
    a = iterable + 3
    return a

# Below runs no problem
results = list(map(add_3, iterable))
print(results)

# multiprocessing attempt (hangs)
def main():
    pool = mp.Pool(2)
    results = pool.map(add_3, iterable)
    return results

if __name__ == "__main__":  # Required not to spawn deviant children
    results = main()
Edit: I've just tried this in Spyder and I've managed to get it to work. Unsurprisingly, running the following didn't work:
results = main()
print(results)
But running it as the following does work, because map uses yield and isn't evaluated until called, which gives the typical problem:
if __name__ == "__main__":  # Required not to spawn deviant children
    results = main()
    print(results)
Edit 2:
From what I've read on the issue, it turns out that it is largely caused by the IPython shell that Jupyter uses. The problem seems to be with how __name__ is set: on Windows, the spawn start method makes each child process re-import the __main__ module, and functions defined in a notebook cell aren't importable that way. Either way, using Spyder or a different IDE solved the problem, as long as you're not still running the multiprocessing function in an IPython shell.
I faced a similar problem: I couldn't use multiprocessing with a function defined in the same notebook. The solution that works is to put the function in a different notebook file and import it using the ipynb package:
from multiprocessing import Pool
from ipynb.fs.full.script_name import function_name

pool = Pool()
result = pool.map(function_name, iterable_argument)
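An alternative that avoids the extra dependency is to move the worker function into a plain .py file, since the spawned child processes can then import it normally. A sketch, assuming a hypothetical helper module worker.py saved next to the notebook:
# worker.py (hypothetical helper module)
def add_3(x):
    return x + 3

# notebook cell
import multiprocessing as mp
from worker import add_3

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        print(pool.map(add_3, [3, 5, 10]))  # [6, 8, 13]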

How to enable parallel in scipy.optimize.differential_evolution?

I am trying to find the global minimum of a function using differential_evolution from scipy.optimize. As explained in the scipy reference guide, I should set in the options:
updating='deferred', workers=<number of cores>
However, when I run the code, it freezes and does nothing. How can I solve this issue, or is there any better way for parallelizing the global optimizer?
The following is in my code:
scipy.optimize.differential_evolution(objective, bnds, args=(),
                                      strategy='best1bin', maxiter=1e6,
                                      popsize=15, tol=0.01, mutation=(0.5, 1),
                                      recombination=0.7, seed=None,
                                      callback=None, disp=False, polish=True,
                                      init='latinhypercube', atol=0,
                                      updating='deferred', workers=2)
I came across the same problem myself. Support for parallelism in scipy.optimize.differential_evolution was added in version 1.2.0, and the version I had was too old. When I looked for the documentation, the top search result also referred to the old version; the current documentation can be found at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html.
I use a virtual environment and pip for package management, and to upgrade to the latest version of scipy I just had to run pip install --upgrade scipy. If you use anaconda, you might need to do e.g. conda install scipy=1.4.1.
To activate the parallelism, set the workers flag to something greater than 1 for a specific number of worker processes, or to workers=-1 to use all available cores.
One caveat: don't make the same mistake I did and run the differential evolution directly at the top level of a Python script on Windows; it won't run, because of how multiprocessing.Pool works. Specifically, instead of the following:
import scipy.optimize

def minimize_me(x, *args):
    ...  # Your code
    return result

# DO NOT DO IT LIKE THIS
...  # Prepare all the arguments
# This will give errors
result = scipy.optimize.differential_evolution(minimize_me, bounds=function_bounds, args=extraargs,
                                               disp=True, polish=False, updating='deferred', workers=-1)
print(result)
use the code below:
import scipy.optimize

def minimize_me(x, *args):
    ...  # Your code
    return result

# DO IT LIKE THIS
if __name__ == "__main__":
    ...  # Prepare all the arguments
    result = scipy.optimize.differential_evolution(minimize_me, bounds=function_bounds, args=extraargs,
                                                   disp=True, polish=False, updating='deferred', workers=-1)
    print(result)
See this post for more info about parallel execution on Windows: Compulsory usage of if __name__=="__main__" in windows while using multiprocessing.
Note that even if you're not on Windows, it's good practice to use the if __name__ == "__main__": guard anyway.
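As a self-contained illustration that the parallel path works, here is a minimal sketch using scipy's built-in Rosenbrock function (scipy.optimize.rosen) rather than anything from the question:
import scipy.optimize

if __name__ == "__main__":
    bounds = [(-5, 5)] * 4
    # updating='deferred' is required whenever workers != 1; workers=-1 uses all cores
    result = scipy.optimize.differential_evolution(scipy.optimize.rosen, bounds,
                                                   updating='deferred', workers=-1, seed=1)
    print(result.x, result.fun)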

Is there a way to programmatically clear the terminal across platforms in Python 3?

Long-time lurker, first time asker.
Is there a way to automatically clear the terminal in Python 3 regardless of what platform the app is being used in?
I've come across the following (from this answer) which utilises ANSI escape codes:
import sys
sys.stderr.write("\x1b[2J\x1b[H")
But for it to work cross-platform it requires the colorama module, which appears to only work on Python 2.7.
For context I'm learning Python by building a game of battleships, but after each guess I want to be able to clear the screen and re-print the board.
Any help is appreciated!
Cheers
I use a single snippet for all the platforms:
import os
import subprocess

# cls is a cmd built-in rather than an executable, so shell=True is needed on Windows
clear = lambda: subprocess.call('cls' if os.name == 'nt' else 'clear', shell=True)
clear()
Same idea but with a spoon of syntactic sugar:
import subprocess
clear = lambda: subprocess.call('cls||clear', shell=True)
clear()
I know of this method:
import os
clear = lambda: os.system('cls')
clear()
I'm not sure whether it works on other platforms, but it works on Windows with Python 3.x.
import os
clear = lambda: os.system('clear')
clear()
That should work on Linux and macOS, but I can't test it.
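For completeness, the ANSI-escape approach from the question can also be made to work: the colorama package does support Python 3, and calling its init() lets Windows consoles interpret the escape codes. A minimal sketch, assuming colorama is installed:
import colorama

colorama.init()  # enables ANSI escape handling on Windows consoles

def clear():
    # \x1b[2J clears the screen; \x1b[H moves the cursor to the top-left
    print("\x1b[2J\x1b[H", end="")

clear()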

Cross-platform screenshot in Python 3

There are quite a few questions like this one, but none of them seem to be both cross-platform and specifically for Python 3, and I'm struggling to find a reliable solution.
How can I take a cross-platform screenshot in Python 3?
My current solution has been to use the ImageGrab function from the PIL library, like so:
from PIL import ImageGrab
image = ImageGrab.grab()
You can use platform.system() to find the current OS, and then use a different solution depending on the operating system:
import platform

if platform.system() == "Windows":
    ...
elif platform.system() == "Darwin":  # Mac
    ...
elif platform.system() == "Linux":
    ...
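If a third-party dependency is acceptable, the mss package offers a genuinely cross-platform path (Windows, macOS, Linux) without per-OS branching. A minimal sketch, assuming mss is installed (pip install mss):
import mss

with mss.mss() as sct:
    # shot() saves a PNG of the first monitor and returns the file name
    filename = sct.shot(output='screenshot.png')
print(filename)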
You can use the PrtSc library.
Command: pip3 install PrtSc
Code:
import PrtSc.PrtSc as Screen
screen = Screen.PrtSc(True, "file.png")
