Call one function while another function is running in python - python-3.x

I have a function that takes some time to fully complete its process, so I continuously update a status variable. I have another function that gets that status and prints it to the command line. The issue is that while the first function is running, the command line is taken up by its execution. How do I allow anyone using my program to check the status without halting execution of the first function? Here's an example of what I mean:
def functionA():
    while log:
        status = minutes_completed / total
        upload_logs(log)
        set_status(id, status)

def set_status(id, status):
    status_map[id] = status

def get_status(id):
    return status_map[id]
How would I be able to call get_status if functionA is still running on my terminal?
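A minimal sketch of one common approach (this is not from the original post): run functionA in a background thread with the standard threading module, so the main thread stays free to call get_status. The job id and the simulated work loop below are placeholders for the question's log/upload logic.

import threading
import time

status_map = {}

def set_status(id, status):
    status_map[id] = status

def get_status(id):
    return status_map.get(id)

def functionA(id):
    total = 10
    for minutes_completed in range(1, total + 1):
        time.sleep(1)  # stand-in for the long-running upload_logs() work
        set_status(id, minutes_completed / total)

worker = threading.Thread(target=functionA, args=('job-1',), daemon=True)
worker.start()

# the main thread is free to answer status queries while functionA runs
while worker.is_alive():
    print('progress:', get_status('job-1'))
    time.sleep(2)

For a command-line program, the same idea works with a small input loop that reads a command and prints get_status(id) on demand.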

Related

What is best practice to interact with subprocesses in python

I'm building an application intended to do bulk-job processing of data within another piece of software. To control the other software automatically I'm using pyautoit, and everything works fine except for application errors caused by the external software, which occur from time to time.
To handle those cases, I built a watchdog:
It starts the script with the bulk job within a subprocess
process = subprocess.Popen(['python', job_script, src_path],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           shell=True)
It listens to system events using the winevt.EventLog module
EventLog.Subscribe('System', 'Event/System[Level<=2]', handle_event)
If an error occurs, it shuts everything down and restarts the script.
OK, if a system error event occurs, it should be handled in such a way that the subprocess gets notified. This notification should then lead to the following action within the subprocess:
Within the subprocess there's an object controlling everything and continuously collecting the generated data. To avoid having to start the whole job from the beginning after restarting the script, this object has to be dumped using pickle (which isn't the problem here!).
Listening to the system events from inside the subprocess didn't work; it results in a continuous loop when calling subprocess.Popen().
So my question is: how can I either subscribe to system events from inside a child process, or communicate between the parent and child process, i.e. send a message like "hey, an error occurred", listen for it within the subprocess, and then create the dump?
I'm really sorry that I'm not allowed to post any code in this case, but I hope (and actually think) that my description is understandable. My question is just about which module to use to accomplish this in the best way.
I'd be really happy if somebody could point me in the right direction...
Br,
Mic
I believe the best answer may lie here: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.stdin
These attributes should allow for proper communication between the different processes fairly easily, and without any other dependencies.
Note that Popen.communicate() may suit better if other processes may cause issues.
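For a one-shot exchange, a minimal communicate() sketch might look like this (the child_job.py script name and the 'dump' message are hypothetical, not from the thread); communicate() writes the input to stdin, waits for the child to exit, and returns its complete output:

import subprocess
import sys

p = subprocess.Popen([sys.executable, 'child_job.py'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate(input=b'dump\r\n', timeout=30)  # blocks until the child exits
print(out.decode('utf8'))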
EDIT to add example scripts:
main.py
from subprocess import *
import sys

def check_output(p):
    # read one line of the child's response
    out = p.stdout.readline()
    return out

def send_data(p, data):
    p.stdin.write(bytes(f'{data}\r\n', 'utf8'))  # auto newline
    p.stdin.flush()

def initiate(p):
    #p.stdin.write(bytes('init\r\n', 'utf8'))  # function to send first communication
    #p.stdin.flush()
    send_data(p, 'init')
    return check_output(p)

def test(p, data):
    send_data(p, data)
    return check_output(p)

def main():
    exe_name = 'Doc2.py'
    p = Popen([sys.executable, exe_name], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
    print(initiate(p))
    print(test(p, 'test'))
    print(test(p, 'test2'))  # testing responses
    print(test(p, 'test3'))

if __name__ == '__main__':
    main()
Doc2.py
import sys, time, random

def recv_data():
    return sys.stdin.readline()

def send_data(data):
    # flush so the parent's readline() sees the reply immediately
    # (stdout is block-buffered when it is a pipe)
    print(data, flush=True)

while 1:
    d = recv_data()
    #print(f'd: {d}')
    if d.strip() == 'test':
        send_data('return')
    elif d.strip() == 'init':
        send_data('Acknowledge')
    else:
        send_data('Failed')
This is the best method I could come up with for cross-process communication. Also make sure all requests and responses don't contain newlines, or the code will break.

Launching a non-blocking async function call via HTTP requests to a route in Python 3.6 Flask app

I am currently writing a small Flask-based micro-service which launches other Python scripts via calls to a CLI using Python's subprocess module. My ultimate goal is to make a non-blocking async function call, triggered by HTTP requests to a route in the service, and have the service return a 200 response from the route while the async function runs in the background.
I have been perusing the docs (I am using Python 3.6.3 for this service) but cannot work out how to achieve this. Here is a small example of how my code is structured:
@app.route('/execute_job')
def execute_job():
    params = ...
    run_async_job(params)
    return 'Launched async job according to params, it is now running.'

async def run_async_job(params):
    command = 'run_python_cli_scripts args'
    proc = subprocess.Popen(command)
    # change some envs, do some file io, yada yada yada
    ...
    while True:
        if proc.poll() is not None:  # the cli script is finished
            return notify_external_api_job_complete()
I know that simply calling run_async_job(params) does not actually begin its execution, but instead returns an awaitable (or a Task once scheduled) which must be run in an event loop. My issue is that I cannot figure out how to run this task in an event loop such that the return in execute_job is reached before the job completes. Is this sort of thing possible? This is my first foray into async Python, and I am looking for behaviour similar to what you would see in async JavaScript. Is trying to use async def for the function I want to be non-blocking the wrong approach, or is there a way to launch the task in an event loop in a non-blocking fashion so that the aforementioned return 'Launched async job according to params, it is now running.' can be reached before run_async_job(params) completes?
Thanks in advance for your time and wisdom.
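For reference, a minimal sketch of one way to launch a coroutine on Python 3.6 without blocking the calling thread: run it to completion on its own event loop inside a daemon thread (this is not the route the asker ultimately took, see the follow-up below; run_async_job here is only a stand-in):

import asyncio
import threading
import time

def launch_in_background(coro):
    # run a coroutine to completion on a private event loop in a daemon thread
    loop = asyncio.new_event_loop()

    def runner():
        asyncio.set_event_loop(loop)
        try:
            loop.run_until_complete(coro)
        finally:
            loop.close()

    threading.Thread(target=runner, daemon=True).start()

async def run_async_job(params):
    await asyncio.sleep(2)  # stand-in for polling the subprocess
    print('job finished with', params)

# in the Flask route you would call launch_in_background(...) and then return the response
launch_in_background(run_async_job({'arg': 1}))
print('Launched async job according to params, it is now running.')
time.sleep(3)  # only to keep this standalone demo alive long enough to see the output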
FWIW, for posterity: I opted for using a child process launched via the subprocess module. This was achievable by converting the library file that I imported my async def'd function from into a script that parses command-line arguments with the argparse module. My route now looks like:
@app.route('/execute_job')
def execute_job():
    params = ...
    command = ('python', params)
    subprocess.Popen(command)
    return 'Launched async job according to params, it is now running.'
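A hypothetical sketch of what that converted script might look like (the run_job.py file name and its arguments are placeholders, not taken from the thread); the route then launches it with subprocess.Popen and returns immediately:

# run_job.py - hypothetical CLI wrapper around the formerly async job function
import argparse

def run_job(src, retries):
    print(f'processing {src} with {retries} retries')

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Run the background job from the command line.')
    parser.add_argument('--src', required=True, help='input to process')
    parser.add_argument('--retries', type=int, default=3)
    args = parser.parse_args()
    run_job(args.src, args.retries)

The route would then do something like subprocess.Popen([sys.executable, 'run_job.py', '--src', params]) and return its 200 response without waiting for the job to finish.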

Value passed to time.sleep() determines whether code fails or not

I'm not sure what's going on here, but my code fails with exit code 15 and no other error messages depending on what value I pass to time.sleep():
import docker, docker.utils, time
from functools import partial   # partial and Thread are used below but were missing from the snippet
from threading import Thread

Thread(target=partial(image_runner.run, create_biobox_args(app))).start()
time.sleep(1)

client = docker.Client(**docker.utils.kwargs_from_env(assert_hostname=False))
container = filter(lambda x: x['Image'] == name, client.containers())[0]['Id']
while client.inspect_container(container)["State"]["Status"] == "running":
    time.sleep(20)
I have a docker container started in another thread with Thread(...).start(). However, if I use time.sleep() with a value greater than 10, my code fails with exit code 15 but works otherwise. Any idea what's going on here? I've been trying to debug this but haven't a clue.
Turns out my feature tests time out after 10 seconds, and that is what causes the problem. Increasing the cucumber feature tests' default timeout fixes it.

Nodejs async.whilst() runs only one time

I'm writing a script to batch-process some text documents and insert them into a MySQL database. I'm trying to use the async library because a standard while loop blocks the event queue and prevents the insert queries from running until all of them are generated. Since that may take 10 minutes or more, I get a timeout. So I am trying to use async to avoid blocking the main thread. However, it's not working as expected. When I run the simplest form of the code below with node test.js on the command line, it only executes once instead of forever. It seems like the computer is terminating the node process early since it is non-blocking. This, of course, is not what I want. Why is this, and how can I get it to work correctly?
//this code should run forever, constantly printing "working". However it only runs once.
var async = require('async')
async.whilst(function(){return true},function(){console.log("working")})
The second parameter for whilst() is a function that takes in a callback that needs to be called when the current iteration is "done."
So if you modify the code this way, you'll get what you're expecting:
var async = require('async');

async.whilst(function() {
    return true
}, function(cb) {
    console.log("working");
    cb();
});

How to run a script after Pyramid's transaction manager has returned

How can I run myscript.py after the transaction manager has returned? Additionally, I would prefer that the script not block.
In my view, I am receiving a file from a POST. Since I'm creating the file with repoze.filesafe's create_file(), it keeps the file in a temporary location until the transaction manager returns. The file only exists on the hard disk at its correct path after the transaction manager has returned without an error.
Therefore, I need to run my script after the transaction manager has returned.
You can register a hook to be run after commit via the transaction package. Register one in your view:
import transaction

def your_after_commit(success, arg1, arg2, kwarg1=None, kwarg2=None):
    if success:
        print("Transaction commit succeeded")
    else:
        print("Transaction commit failed")

def someview(request):
    current_transaction = transaction.get()
    current_transaction.addAfterCommitHook(your_after_commit, args=(1, 2),
                                           kws={'kwarg1': 'foo', 'kwarg2': 'bar'})
This still runs your script in the context of the current request (i.e. the request does not complete until your script returns). If you need a fully asynchronous setup, you'll need to move to a proper asynchronous solution such as Celery. In that case you would not use a transaction hook; just register a task to be run with Celery instead.
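A minimal Celery sketch of that alternative (the broker URL and the run_my_script task name are hypothetical; only myscript.py comes from the question):

# tasks.py - hypothetical Celery task that runs the script in a worker process
from celery import Celery
import subprocess
import sys

celery_app = Celery('tasks', broker='redis://localhost:6379/0')

@celery_app.task
def run_my_script(path):
    # executed by a Celery worker, outside the web request
    subprocess.check_call([sys.executable, 'myscript.py', path])

In the view you would call run_my_script.delay(path), which returns immediately while a worker picks up the job.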
