Python asyncio wait() with cumulative timeout - python-3.x

I am writing a job scheduler where I schedule M jobs across N co-routines (N < M). As soon as one job finishes, I add a new job so that it can start immediately and run in parallel with the other jobs. Additionally, I would like to ensure that no single job takes more than a certain fixed amount of time. Any jobs that take too long should be cancelled. I have something pretty close, like this:
def update_run_set(waiting, running, max_concurrency):
    number_to_add = min(len(waiting), max_concurrency - len(running))
    for i in range(0, number_to_add):
        next_one = waiting.pop()
        running.add(next_one)
async def _run_test_invocations_asynchronously(jobs: List[MyJob], max_concurrency: int, timeout_seconds: int):
    running = set()  # These tasks are actively being run
    waiting = set()  # These tasks have not yet started
    waiting = {_run_job_coroutine(job) for job in jobs}
    update_run_set(waiting, running, max_concurrency)
    while len(running) > 0:
        done, running = await asyncio.wait(running, timeout=timeout_seconds,
                                           return_when=asyncio.FIRST_COMPLETED)
        if not done:
            timeout_count = len(running)
            [r.cancel() for r in running]  # Start cancelling the timed out jobs
            done, running = await asyncio.wait(running)  # Wait for cancellation to finish
            assert(len(done) == timeout_count)
            assert(len(running) == 0)
        else:
            for d in done:
                job_return_code = await d
            if len(waiting) > 0:
                update_run_set(waiting, running, max_concurrency)
                assert(len(running) > 0)
The problem here is that say my timeout is 5 seconds, and I'm scheduling 3 jobs across 4 cores. Job A takes 2 seconds, Job B takes 6 seconds and job C takes 7 seconds.
We have something like this:
t=0 t=1 t=2 t=3 t=4 t=5 t=6 t=7
-------|-------|-------|-------|-------|-------|-------|-------|
AAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
However, at t=2 the asyncio.wait() call returns because A completed. The loop then goes back to the top and waits again. At this point B has already been running for 2 seconds, but because the countdown starts over and B only needs 4 more seconds to complete, B will appear to be successful. So after another 4 seconds wait() returns again, B is reported successful, then we start the loop over and now C completes as well.
How do I make it so that B and C both fail? I somehow need the time to be preserved across calls to asyncio.wait().
One idea that I had is to do my own bookkeeping of how much time each job is allowed to continue running, and pass the minimum of these into asyncio.wait(). Then when something times out, I can cancel only those jobs whose time remaining was equal to the value I passed in for timeout_seconds.
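Roughly, the bookkeeping I have in mind would look something like this (untested sketch; deadlines is a dict I would maintain myself, keyed by task):
import asyncio

# Rough, untested sketch of the per-job deadline bookkeeping described above.
# `tasks` is assumed to already be a set of asyncio.Task objects.
async def _wait_with_deadlines(tasks, timeout_seconds):
    loop = asyncio.get_running_loop()
    deadlines = {t: loop.time() + timeout_seconds for t in tasks}
    done_tasks = []
    while tasks:
        # Never wait longer than the smallest remaining per-task budget.
        budget = max(min(deadlines[t] for t in tasks) - loop.time(), 0)
        done, tasks = await asyncio.wait(tasks, timeout=budget,
                                         return_when=asyncio.FIRST_COMPLETED)
        now = loop.time()
        for t in [t for t in tasks if deadlines[t] <= now]:
            t.cancel()  # only cancel the jobs whose own budget ran out
        done_tasks.extend(done)
    return done_tasks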
This requires a lot of manual bookkeeping on my part though, and I can't help but worry about floating point problems that could cause me to decide it's not yet time to cancel a job even though it really is. So I can't help but think that there's something easier. I would appreciate any ideas.

You can wrap each job into a coroutine that checks its timeout, e.g. using asyncio.wait_for. Limiting the number of parallel invocations could be done in the same coroutine using an asyncio.Semaphore. With those two combined, you only need one call to wait() or even just gather(). For example (untested):
# Run the job, limiting concurrency and time. This code could likely
# be part of _run_job_coroutine, omitted from the question.
async def _run_job_with_limits(job, sem, timeout):
    async with sem:
        try:
            await asyncio.wait_for(_run_job_coroutine(job), timeout)
        except asyncio.TimeoutError:
            # timed out and canceled, decide what you want to return
            pass

async def _run_test_invocations_async(jobs, max_concurrency, timeout):
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(
        *(_run_job_with_limits(job, sem, timeout) for job in jobs)
    )
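You would then drive the whole thing with a single asyncio.run call, e.g. (again untested; max_concurrency=4 and timeout=5 are just placeholder values, and _run_job_with_limits would need to return whatever per-job result you care about):
if __name__ == "__main__":
    # jobs comes from the question; 4 jobs run at a time, each capped at 5 seconds.
    asyncio.run(_run_test_invocations_async(jobs, max_concurrency=4, timeout=5))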

Related

Multiprocess : Persistent Pool?

I have code like the one below:
def expensive(self,c,v):
    .....

def inner_loop(self,c,collector):
    self.db.query('SELECT ...',(c,))
    for v in self.db.cursor.fetchall() :
        collector.append( self.expensive(c,v) )

def method(self):
    # create a Pool
    #join the Pool ??
    self.db.query('SELECT ...')
    for c in self.db.cursor.fetchall() :
        collector = []
        #RUN the whole cycle in parallel in separate processes
        self.inner_loop(c, collector)
        #do stuff with the collector
    #! close the pool ?
Both the outer and the inner loop are thousands of steps...
I think I understand how to run a Pool of a couple of processes; all the examples I found show more or less that.
But in my case I need to launch a persistent Pool and then feed it the data (the c values). Once an inner-loop process has finished, I have to supply the next available c value, keep the processes running, and collect the results.
How do I do that?
A clunky idea I have is:
def method(self):
    ws = 4
    with Pool(processes=ws) as pool:
        cs = []
        for i, c in enumerate(..):
            cs.append(c)
            if i % ws == 0:
                res = [pool.apply(self.inner_loop, (c,)) for i in range(ws)]
                cs = []
                collector.append(res)
Will this keep the same pool running, i.e. not launch a new process every time?
Do I need the 'if i % ws == 0' part, or can I use imap() or map_async() so that the Pool object blocks the loop when the available workers are exhausted and continues when some are freed?
Yes, the way that multiprocessing.Pool works is:
Worker processes within a Pool typically live for the complete duration of the Pool’s work queue.
So simply submitting all your work to the pool via imap should be sufficient:
with Pool(processes=4) as pool:
    initial_results = db.fetchall("SELECT c FROM outer")
    results = [pool.imap(self.inner_loop, (c,)) for c in initial_results]
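As a side note, the whole workload can also be handed over in a single call and consumed lazily as results arrive (untested sketch, assuming inner_loop takes one c value and returns its collector):
with Pool(processes=4) as pool:
    cs = db.fetchall("SELECT c FROM outer")
    # One submission; the pool feeds the next c to whichever worker frees up first.
    for collected in pool.imap_unordered(self.inner_loop, cs):
        pass  # do stuff with the collector for this c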
That said, if you really are doing this to fetch things from the DB, it may make more sense to move more processing down into that layer (bring the computation to the data rather than bringing the data to the computation).

Python3 threading on AWS Lambda

I am using Flask and have a route that sends emails to people. I am using threading to send them faster. When I run it on my local machine it takes about 12 seconds to send 300 emails. But when I run it on Lambda through API Gateway it times out.
Here's my code:
import threading
import time

def async_mail(app, msg):
    with app.app_context():
        mail.send(msg)

def mass_mail_sender(order, user, header):
    html = render_template('emails/pickup_mail.html', bruger_order=order.ordre, produkt=order.produkt)
    msg = Message(recipients=[user],
                  sender=('Sender', 'infor@example.com'),
                  html=html,
                  subject=header)
    thread = threading.Thread(target=async_mail, args=[create_app(), msg])
    thread.start()
    return thread

@admin.route('/lager/<url_id>/opdater', methods=['POST'])
def update_stock(url_id):
    start = time.time()
    if current_user.navn != 'Admin':
        abort(403)
    if request.method == 'POST':
        produkt = Produkt.query.filter_by(url_id=url_id)
        nyt_antal = int(request.form['bestilt_hjem'])
        produkt.balance = nyt_antal
        produkt.bestilt_hjem = nyt_antal
        db.session.commit()
        orders = OrdreBog.query.filter(OrdreBog.produkt.has(func.lower(Produkt.url_id == url_id))) \
            .filter(OrdreBog.produkt_status == 'Ikke klar').all()
        threads = []
        for order in orders:
            if order.antal <= nyt_antal:
                nyt_antal -= order.antal
                new_thread = mass_mail_sender(order, order.ordre.bruger.email, f'Din bog {order.produkt.titel} er klar til afhentning')
                threads.append(new_thread)
                order.produkt_status = 'Klar til afhentning'
        db.session.commit()
        for thread in threads:
            try:
                thread.join()
            except Exception:
                pass
        end = time.time()
        print(end - start)
        return 'Emails sendt'
    return ''
AWS Lambda is designed to run functions within these constraints:
Memory – The amount of memory available to the function during execution. Choose an amount between 128 MB and 3,008 MB in 64-MB increments.
Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent of one full vCPU (one vCPU-second of credits per second).
Timeout – The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds.
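For illustration, both of these settings can be adjusted programmatically with boto3 (a rough sketch; the function name my-mailer and the values are placeholders):
import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="my-mailer",  # placeholder function name
    MemorySize=1024,           # MB; more memory also means more CPU
    Timeout=300,               # seconds; the maximum allowed is 900
)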
To put this in the context of your multi-threaded mail-sending Python code: the function will terminate automatically either when your execution completes successfully or when it reaches the configured timeout.
I understand you want a single Python function to send n emails "concurrently". To achieve this with Lambda, try the "Concurrency" setting and trigger your Lambda function from a local script, S3-hosted HTML/JS triggered by CloudWatch, or an EC2 instance.
Concurrency – Reserve concurrency for a function to set the maximum number of simultaneous executions for a function. Provision concurrency to ensure that a function can scale without fluctuations in latency.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-console.html
Important: all of the above settings can affect your Lambda function's execution cost significantly, so plan and compare before applying them.
If you need any more help, let me know.
Thank you.

stop all async Tasks when they fail over a threshold?

I'm using Monix Task for async control.
scenario
tasks are executed in parallel
if failures occur more than X times
stop all tasks that are not yet complete (as quickly as possible)
my solution
I came up with the idea of racing between 1. the result and 2. an error counter, and cancelling the loser.
Via Task.race, if the error counter reaches the threshold first, then the tasks are cancelled by Task.race.
experiment
on Ammonite REPL
{
  import $ivy.`io.monix::monix:3.1.0`
  import monix.eval.Task
  import monix.execution.atomic.Atomic
  import scala.concurrent.duration._
  import monix.execution.Scheduler
  //import monix.execution.Scheduler.Implicits.global
  implicit val s = Scheduler.fixedPool("race", 2) // pool size

  val taskSize = 100
  val errCounter = Atomic(0)
  val threshold = 3

  val tasks = (1 to taskSize).map(_ => Task.sleep(100.millis).map(_ => errCounter.increment()))
  val guard = Task(f"stop because too many error: ${errCounter.get()}")
    .restartUntil(_ => errCounter.get() >= threshold)

  val race = Task
    .race(guard, Task.gather(tasks))
    .runToFuture
    .onComplete { case x => println(x); println(f"completed task: ${errCounter.get()}") }
}
issue
The outcome depends on the thread pool size!?
For pool size 1
the outcome is almost always a task success, i.e. no stop.
Success(Right(.........))
completed task: 100 // all task success !
For pool size 2
it is very non-deterministic between success and failure, and the cancelling is not accurate.
for example:
Success(Left(stop because too many error: 1))
completed task: 98
the cancelling comes as late as after 98 tasks have completed.
the error count is oddly small compared to the threshold.
The default global scheduler shows this same behaviour.
For pool size 200
it is more deterministic and the stopping happens earlier, thus more accurate in the sense that fewer tasks were completed.
Success(Left(stop because too many error: 2))
completed task: 8
The larger the pool size, the better.
If I change Task.gather to Task.sequence, all the issues disappear!
What is the cause of this dependency on the pool size?
How can I improve it, or is there a better alternative for stopping tasks once too many errors occur?
What you're seeing is likely an effect of the monix scheduler and how it aims for fairness. It's a fairly complex topic but the documentation and scaladocs are excellent (see: https://monix.io/docs/3x/execution/scheduler.html#execution-model)
When you have only one thread (or few) it takes a while until the "guard" Task gets another turn to check. With Task.gather you start 100 tasks at once, so the scheduler is very busy and the "guard" cannot check again until the other tasks are already done.
If you have one thread per task the scheduler cannot guarantee fairness and therefore the "guard" unfairly checks much more frequently and can finish sooner.
If you use Task.sequence those 100 tasks are executed sequentially, which is why the "guard" task gets much more opportunities to finish as soon as needed. If you want to keep your code the way it is, you could use Task.gatherN(parallelism = 4) which will limit the parallelism and therefore allow your "guard" to check more often (a middleground between Task.sequence and Task.gather).
It seems a bit like Go code to me (using Task.race like Go's select), and you're also using unconstrained side effects, which further complicates understanding what's going on. I've tried to rewrite your program in a way that's more idiomatic, and for complicated concurrency I usually reach for streams like Observable:
import cats.effect.concurrent.Ref
import monix.eval.Task
import monix.execution.Scheduler
import monix.reactive.Observable
import scala.concurrent.duration._

object ErrorThresholdDemo extends App {
  //import monix.execution.Scheduler.Implicits.global
  implicit val s: Scheduler = Scheduler.fixedPool("race", 2) // pool size

  val taskSize = 100
  val threshold = 30

  val program = for {
    errCounter <- Ref[Task].of(0)

    tasks = (1 to taskSize).map(n => Task.sleep(100.millis).flatMap(_ => errCounter.update(_ + (n % 2))))

    tasksFinishedCount <- Observable
      .fromIterable(tasks)
      .mapParallelUnordered(parallelism = 4) { task =>
        task
      }
      .takeUntilEval(errCounter.get.restartUntil(_ >= threshold))
      .map(_ => 1)
      .sumL

    errorCount <- errCounter.get
    _ <- Task(println(f"completed tasks: $tasksFinishedCount, errors: $errorCount"))
  } yield ()

  program.runSyncUnsafe()
}
As you can see I no longer use global mutable side effects but instead Ref, which internally also uses Atomic but provides a functional API which we can use with Task.
For demonstration purposes I also changed the threshold to 30 and only every other task will "error". So the expected output is always around completed tasks: 60, errors: 30 no matter the thread-pool size.
I'm still using polling with errCounter.get.restartUntil(_ >= threshold) which might burn a bit too much CPU for my taste but it's close to your original idea and works well.
Usually I don't create a list of tasks up front but instead throw the inputs into the Observable and create the tasks inside of .mapParallelUnordered. This code keeps your list which is why there is no real mapping involved (it already contains tasks).
You can choose your desired parallelism much like with Task.gatherN which is pretty nice imo.
Let me know if anything is still unclear :)

Pathos queuing tasks then getting results

I have a computing task which is effectively running the same piece of code against a large number of datasets. I want to utilise large multicore systems (for instance 72 cores).
I'm testing it on a 2-core system:
output_array = []
pool = ProcessPool(nodes=2)
i = 0
num_steps = len(glob.glob1(config_dir, "*.ini"))
worker_array = []

for root, dirs, files in os.walk(config_dir):
    for file_name in files:
        config_path = "%s/%s" % (config_dir, file_name)
        row_result = pool.apipe(individual_config, config_path, dates, test_section, test_type, prediction_service, result_service)
        worker_array.append(row_result)

with progressbar.ProgressBar(max_value=num_steps) as bar:
    bar.update(0)
    while len(worker_array) > 0:
        for worker in worker_array:
            if worker.ready():
                result = worker.get()
                output_array.append(result)
                worker_array.remove(worker)
                i = i + 1
                bar.update(i)
"individual_config" is my worker function.
My rationale for this code is to load the data to create all the tasks (2820 tasks), put the queue objects in a list, then poll this list to pick up completed tasks and put the results into an array. A progress bar monitors how the run is progressing.
Each individual_config task, running on its own, takes 0.3-0.5 seconds, but there are 2820 of them, so that would be about 20-odd minutes on my system.
When running in this new pathos multiprocessing ProcessPool configuration, only one or two tasks complete every 10-15 seconds, and the run is projected to take 20 hours. I'd expect some overhead from multiprocessing, so I wouldn't get double the speed with two cores, but this seems like something is wrong. Suggestions?

Python - queuing one function

I've just started learning Python, but I have a problem with my code:
import pifacecad

# listener initialization
cad = pifacecad.PiFaceCAD()
listener = pifacecad.SwitchEventListener(chip=cad)
listener.register(4, pifacecad.IODIR_ON, blowMyMind)
listener.activate()

def blowMyMind(event):
    print('some prints...')
    time.sleep(4)
    print('and the end.')
blowMyMind() will be fired as many times as the listener tells it to. That is okay.
My goal is to deactivate the listener UNTIL blowMyMind ends. Pifacecad suggests Barrier() to achieve that, or at least I think that's what it's there for (correct me if I'm wrong).
Right now it runs as many times as I trigger the listener event; it doesn't fire the function 99 times at once, but queues the calls and runs them one by one.
With Barriers I think it should look like this:
# Barrier
global end_barrier
end_barrier = Barrier(1)

# listener initialization
listener = pifacecad.SwitchEventListener(chip=cad)
listener.register(4, pifacecad.IODIR_ON, blowMyMind)
listener.activate()

def blowMyMind(event):
    global end_barrier
    test = end_barrier.wait()
    print(test)  # returns 0, which should not in about 5 seconds
    print('some prints...')
    time.sleep(4)
    print('and the end.')
The funny part is that when I change the number of parties in the Barrier initialization, it causes a BrokenBarrierError at the first listener event.
Actually, I think I completely misunderstood Barrier(). I think the problem is that all listener events run in one thread instead of their own threads.
It makes me even more confused when I read:
parties The number of threads required to pass the barrier.
from here: https://docs.python.org/3/library/threading.html
My conclusion: when initializing Barrier(X), it would be released when there are X ('or fewer', 'or more'?) threads waiting. That sounds VERY stupid :D
I tried to make it that way with no luck:
# listener initialization
global busy
busy = 0
cad = pifacecad.PiFaceCAD()
listener = pifacecad.SwitchEventListener(chip=cad)
listener.register(4, pifacecad.IODIR_ON, blowMyMind)
listener.activate()

def blowMyMind(event):
    global busy
    if busy == 0:
        busy = 1
        print('some prints...')
        time.sleep(4)
        print('and the end.')
        busy = 0
    else:
        return None
