Execute commands every x seconds in Python 3 [duplicate] - python-3.x
I'm looking for a library in Python that will provide at- and cron-like functionality.
I'd quite like to have a pure-Python solution, rather than relying on tools installed on the box; that way I can run on machines with no cron.
For those unfamiliar with cron: you can schedule tasks based upon an expression like:
0 2 * * 7 /usr/bin/run-backup # run the backups at 0200 on Every Sunday
0 9-17/2 * * 1-5 /usr/bin/purge-temps # run the purge temps command, every 2 hours between 9am and 5pm on Mondays to Fridays.
The cron time expression syntax is less important, but I would like to have something with this sort of flexibility.
If there isn't something that does this for me out of the box, any suggestions for the building blocks to make something like this would be gratefully received.
Edit
I'm not interested in launching processes, just "jobs" also written in Python - Python functions. By necessity I think this would run in a different thread, but not in a different process.
To this end, I'm looking for the expressivity of the cron time expression, but in Python.
Cron has been around for years, but I'm trying to be as portable as possible. I cannot rely on its presence.
If you're looking for something lightweight, check out schedule:
import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
Disclosure: I'm the author of that library.
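If the loop shouldn't block your main thread, a minimal sketch (reusing the schedule and time imports above) is to drive run_pending() from a daemon thread:

import threading

def run_schedule():
    while True:
        schedule.run_pending()
        time.sleep(1)

# The daemon flag lets the program exit even while the loop is running.
threading.Thread(target=run_schedule, daemon=True).start()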
You could just use normal Python argument passing syntax to specify your crontab. For example, suppose we define an Event class as below:
from datetime import datetime, timedelta
import time

# Some utility classes / functions first
class AllMatch(set):
    """Universal set - match everything"""
    def __contains__(self, item):
        return True

allMatch = AllMatch()

def conv_to_set(obj):  # Allow single integer to be provided
    if isinstance(obj, int):
        return set([obj])  # Single item
    if not isinstance(obj, set):
        obj = set(obj)
    return obj

# The actual Event class
class Event(object):
    def __init__(self, action, min=allMatch, hour=allMatch,
                 day=allMatch, month=allMatch, dow=allMatch,
                 args=(), kwargs={}):
        self.mins = conv_to_set(min)
        self.hours = conv_to_set(hour)
        self.days = conv_to_set(day)
        self.months = conv_to_set(month)
        self.dow = conv_to_set(dow)
        self.action = action
        self.args = args
        self.kwargs = kwargs

    def matchtime(self, t):
        """Return True if this event should trigger at the specified datetime"""
        return ((t.minute in self.mins) and
                (t.hour in self.hours) and
                (t.day in self.days) and
                (t.month in self.months) and
                (t.weekday() in self.dow))

    def check(self, t):
        if self.matchtime(t):
            self.action(*self.args, **self.kwargs)
(Note: Not thoroughly tested)
Then your CronTab can be specified in normal Python syntax as:

c = CronTab(
    Event(perform_backup, 0, 2, dow=6),
    Event(purge_temps, 0, range(9, 18, 2), dow=range(0, 5))
)
This way you get the full power of Python's argument mechanics (mixing positional and keyword args), and you can use symbolic names for the days of the week and the months.
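For instance, a sketch of such symbolic names, layered on the Event/CronTab classes above (the weekday constants are hypothetical helpers, following Python's Monday=0 convention):

MON, TUE, WED, THU, FRI, SAT, SUN = range(7)

c = CronTab(
    Event(perform_backup, 0, 2, dow=SUN),
    Event(purge_temps, 0, range(9, 18, 2), dow={MON, TUE, WED, THU, FRI})
)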
The CronTab class would be defined as simply sleeping in minute increments, and calling check() on each event. (There are probably some subtleties with daylight saving time / timezones to be wary of, though.) Here's a quick implementation:
class CronTab(object):
    def __init__(self, *events):
        self.events = events

    def run(self):
        t = datetime(*datetime.now().timetuple()[:5])
        while True:
            for e in self.events:
                e.check(t)
            t += timedelta(minutes=1)
            while datetime.now() < t:
                time.sleep((t - datetime.now()).seconds)
A few things to note: Python's weekday() is zero-indexed, with Monday as 0 (unlike cron, where Sunday is 0 or 7), and range() excludes its end point, hence cron syntax like "1-5" becomes range(0, 5), i.e. [0, 1, 2, 3, 4]. If you prefer cron syntax, parsing it shouldn't be too difficult, however.
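A minimal sketch of such a parser for one cron field, reusing AllMatch from above (handles "*", single values, ranges, and "/step"; comma-separated lists and name aliases are left out):

def parse_cron_field(field, lo, hi):
    """Parse one cron field, e.g. "*", "5" or "9-17/2", into a set of ints."""
    if field == "*":
        return AllMatch()
    spec, _, step = field.partition("/")
    step = int(step) if step else 1
    if spec == "*":
        start, end = lo, hi
    elif "-" in spec:
        start, end = (int(x) for x in spec.split("-"))
    else:
        start = end = int(spec)
    return set(range(start, end + 1, step))

For example, parse_cron_field("9-17/2", 0, 23) gives {9, 11, 13, 15, 17}.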
More or less the same as above, but concurrent, using gevent :)
"""Gevent based crontab implementation"""
from datetime import datetime, timedelta
import gevent
# Some utility classes / functions first
def conv_to_set(obj):
"""Converts to set allowing single integer to be provided"""
if isinstance(obj, (int, long)):
return set([obj]) # Single item
if not isinstance(obj, set):
obj = set(obj)
return obj
class AllMatch(set):
"""Universal set - match everything"""
def __contains__(self, item):
return True
allMatch = AllMatch()
class Event(object):
"""The Actual Event Class"""
def __init__(self, action, minute=allMatch, hour=allMatch,
day=allMatch, month=allMatch, daysofweek=allMatch,
args=(), kwargs={}):
self.mins = conv_to_set(minute)
self.hours = conv_to_set(hour)
self.days = conv_to_set(day)
self.months = conv_to_set(month)
self.daysofweek = conv_to_set(daysofweek)
self.action = action
self.args = args
self.kwargs = kwargs
def matchtime(self, t1):
"""Return True if this event should trigger at the specified datetime"""
return ((t1.minute in self.mins) and
(t1.hour in self.hours) and
(t1.day in self.days) and
(t1.month in self.months) and
(t1.weekday() in self.daysofweek))
def check(self, t):
"""Check and run action if needed"""
if self.matchtime(t):
self.action(*self.args, **self.kwargs)
class CronTab(object):
"""The crontab implementation"""
def __init__(self, *events):
self.events = events
def _check(self):
"""Check all events in separate greenlets"""
t1 = datetime(*datetime.now().timetuple()[:5])
for event in self.events:
gevent.spawn(event.check, t1)
t1 += timedelta(minutes=1)
s1 = (t1 - datetime.now()).seconds + 1
print "Checking again in %s seconds" % s1
job = gevent.spawn_later(s1, self._check)
def run(self):
"""Run the cron forever"""
self._check()
while True:
gevent.sleep(60)
import os
def test_task():
"""Just an example that sends a bell and asd to all terminals"""
os.system('echo asd | wall')
cron = CronTab(
Event(test_task, 22, 1 ),
Event(test_task, 0, range(9,18,2), daysofweek=range(0,5)),
)
cron.run()
None of the listed solutions even attempt to parse a complex cron schedule string. So, here is my version, using croniter. Basic gist:
schedule = "*/5 * * * *" # Run every five minutes
nextRunTime = getNextCronRunTime(schedule)
while True:
roundedDownTime = roundDownTime()
if (roundedDownTime == nextRunTime):
####################################
### Do your periodic thing here. ###
####################################
nextRunTime = getNextCronRunTime(schedule)
elif (roundedDownTime > nextRunTime):
# We missed an execution. Error. Re initialize.
nextRunTime = getNextCronRunTime(schedule)
sleepTillTopOfNextMinute()
Helper routines:
import time
from croniter import croniter
from datetime import datetime, timedelta

# Round time down to the top of the previous minute
def roundDownTime(dt=None, dateDelta=timedelta(minutes=1)):
    roundTo = dateDelta.total_seconds()
    if dt is None:
        dt = datetime.now()
    seconds = (dt - dt.min).seconds
    rounding = (seconds + roundTo / 2) // roundTo * roundTo
    return dt + timedelta(0, rounding - seconds, -dt.microsecond)

# Get next run time from now, based on schedule specified by cron string
def getNextCronRunTime(schedule):
    return croniter(schedule, datetime.now()).get_next(datetime)

# Sleep till the top of the next minute
def sleepTillTopOfNextMinute():
    t = datetime.utcnow()
    sleeptime = 60 - (t.second + t.microsecond / 1000000.0)
    time.sleep(sleeptime)
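To sanity-check a schedule, successive get_next() calls walk forward through matching times. For example, with the expression from the question (reusing the imports above; the start date is arbitrary, a Monday):

it = croniter("0 9-17/2 * * 1-5", datetime(2024, 1, 1, 8, 0))
print(it.get_next(datetime))  # 2024-01-01 09:00:00
print(it.get_next(datetime))  # 2024-01-01 11:00:00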
I know there are a lot of answers, but another solution could be to go with decorators. This is an example that repeats a function every day at a specific time. The cool thing about this approach is that you only need to add the syntactic sugar to the function you want to schedule:
@repeatEveryDay(hour=6, minutes=30)
def sayHello(name):
    print(f"Hello {name}")

sayHello("Bob")  # Now this function will be invoked every day at 6:30 a.m.
And the decorator will look like:
import datetime
import functools
import time

def repeatEveryDay(hour, minutes=0, seconds=0):
    """
    Decorator that will run the decorated function every day at the given hour, minutes and seconds.
    :param hour: 0-23
    :param minutes: 0-59 (Optional)
    :param seconds: 0-59 (Optional)
    """
    def decoratorRepeat(func):
        @functools.wraps(func)
        def wrapperRepeat(*args, **kwargs):
            def getLocalTime():
                return datetime.datetime.fromtimestamp(time.mktime(time.localtime()))

            # Get the datetime of the first function call
            td = datetime.timedelta(days=1)  # interval between runs: one day
            if wrapperRepeat.nextSent is None:
                now = getLocalTime()
                wrapperRepeat.nextSent = datetime.datetime(now.year, now.month, now.day, hour, minutes, seconds)
                if wrapperRepeat.nextSent < now:
                    wrapperRepeat.nextSent += td

            # Waiting till the next run
            while getLocalTime() < wrapperRepeat.nextSent:
                time.sleep(1)

            # Call the function
            func(*args, **kwargs)

            # Get the datetime of the next function call
            wrapperRepeat.nextSent += td
            wrapperRepeat(*args, **kwargs)

        wrapperRepeat.nextSent = None
        return wrapperRepeat
    return decoratorRepeat
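Note that calling the decorated function blocks forever, since it sleeps and then recurses. If that's not what you want, a minimal sketch is to fire it off in a daemon thread (reusing the sayHello example above):

import threading

t = threading.Thread(target=sayHello, args=("Bob",), daemon=True)
t.start()  # the main thread stays free; the daily job keeps running in the background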
I like how the pycron package solves this problem.
import pycron
import time

while True:
    if pycron.is_now('0 2 * * 0'):  # True every Sunday at 02:00
        print('running backup')
        time.sleep(60)  # The job should take at least 60 sec
                        # to avoid running twice in one minute
    else:
        time.sleep(15)  # Check again in 15 seconds
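If the job itself finishes quickly, a variant sketch that remembers the last minute it fired avoids the double-run problem without relying on the job taking 60 seconds:

import pycron
import time
from datetime import datetime

last_run = None
while True:
    minute = datetime.now().replace(second=0, microsecond=0)
    if pycron.is_now('0 2 * * 0') and minute != last_run:
        last_run = minute  # remember this minute so we fire at most once in it
        print('running backup')
    time.sleep(15)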
There isn't a "pure python" way to do this because some other process would have to launch Python in order to run your solution. Every platform will have one or twenty different ways to launch processes and monitor their progress. On Unix platforms, cron is the old standard. On Mac OS X there is also launchd, which combines cron-like launching with watchdog functionality that can keep your process alive if that's what you want. Once Python is running, you can use the sched module to schedule tasks.
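For reference, a minimal sketch of a self-rescheduling periodic job with the standard-library sched module:

import sched
import time

s = sched.scheduler(time.time, time.sleep)

def tick():
    print("tick at", time.strftime("%H:%M:%S"))
    s.enter(60, 1, tick)  # re-schedule ourselves 60 seconds from now

s.enter(60, 1, tick)
s.run()  # blocks, firing tick() once a minute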
Another trivial solution would be:
from aqcron import At
from time import sleep
from datetime import datetime

# Event scheduling
event_1 = At(second=5)
event_2 = At(second=[0, 20, 40])

while True:
    now = datetime.now()
    # Event check
    if now in event_1: print("event_1")
    if now in event_2: print("event_2")
    sleep(1)
And the class aqcron.At is:
# aqcron.py
class At(object):
    def __init__(self, year=None, month=None,
                 day=None, weekday=None,
                 hour=None, minute=None,
                 second=None):
        loc = locals()
        loc.pop("self")
        self.at = dict((k, v) for k, v in loc.items() if v is not None)

    def __contains__(self, now):
        for k in self.at:
            value = getattr(now, k)
            if callable(value):  # datetime.weekday is a method, not an attribute
                value = value()
            try:
                if value not in self.at[k]:
                    return False
            except TypeError:  # the constraint is a single value, not a collection
                if self.at[k] != value:
                    return False
        return True
I don't know if something like that already exists. It would be easy to write your own with time, datetime and/or calendar modules, see http://docs.python.org/library/time.html
The only concern with a pure-Python solution is that your job needs to be running at all times and possibly be automatically "resurrected" after a reboot, something for which you do need to rely on system-dependent solutions.
Related
Call the same subprocess Python function several times
I need to process-parallelize some computations that are done several times. So the subprocess Python function has to stay alive between two calls. In a perfect world I would need something like this:

class Computer:
    def __init__(self, x):
        self.x = x
        # Creation of quite heavy python objects that cannot be pickled !!

    def call(self, y):
        return self.x + y

process = Computer(4)  ## NEED MAGIC HERE to keep "call" alive in a subprocess !!
print(process.call(1))   # prints 5 (=4+1)
print(process.call(12))  # prints 16 (=4+12)

I can follow this answer and communicate via asyncio.subprocess.PIPE, but in my actual use case, the call argument is a list of lists of integers and the call answer is a list of strings. Thus it could be cool to avoid serializing/deserializing the arguments and return values by hand. Any ideas of how to keep the function call "alive" and ready to receive new calls?
Here is an answer, based on this one, but: several subprocesses are created, each subprocess has its own identifier, their calls are parallelized, and a small layer allows the exchange of JSON instead of plain byte strings.

hello.py

#!/usr/bin/python3
# This is the task to be done.
# A task consists of receiving a json assumed to be
# {"vector": [...]}
# and returning a json with the length of the vector and
# the worker id.
import sys
import time
import json

ident = sys.argv[1]
while True:
    str_data = input()
    data = json.loads(str_data)
    command = data.get("command", None)
    if command == "quit":
        answer = {"comment": "I'm leaving", "my id": ident}
        print(json.dumps(answer), end="\n")
        sys.exit(1)
    time.sleep(1)  # simulates 1s of heavy work
    answer = {"size": len(data['vector']), "my id": ident}
    print(json.dumps(answer), end="\n")

main.py

#!/usr/bin/python3
import json
from subprocess import Popen, PIPE
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

def create_proc(arg):
    cmd = ["./hello.py", arg]
    process = Popen(cmd, stdin=PIPE, stdout=PIPE)
    return process

def make_call(proc, arg):
    """Make the call in a thread."""
    str_arg = json.dumps(arg)
    txt = bytes(str_arg + '\n', encoding='utf8')
    proc.stdin.write(txt)
    proc.stdin.flush()
    b_ans = proc.stdout.readline()
    s_ans = b_ans.decode('utf8')
    j_ans = json.loads(s_ans)
    return j_ans

def search(executor, procs, data):
    jobs = [executor.submit(make_call, proc, data) for proc in procs]
    answer = []
    for job in concurrent.futures.as_completed(jobs):
        got_ans = job.result()
        answer.append(got_ans)
    return answer

def main():
    n_workers = 50
    idents = [f"{i}st" for i in range(0, n_workers)]
    executor = ThreadPoolExecutor(n_workers)
    # Create `n_workers` subprocesses waiting for data to work with.
    # The subprocesses are all different because they receive a different
    # "initialization" id.
    procs = [create_proc(ident) for ident in idents]
    data = {"vector": [1, 2, 23]}
    answers = search(executor, procs, data)  # takes 1s instead of 50!
    for answer in answers:
        print(answer)
    search(executor, procs, {"command": "quit"})

main()
Python Timer object lagging behind system time [duplicate]
I'm trying to schedule a repeating event to run every minute in Python 3. I've seen class sched.scheduler but I'm wondering if there's another way to do it. I've heard mentions that I could use multiple threads for this, which I wouldn't mind doing. I'm basically requesting some JSON and then parsing it; its value changes over time. To use sched.scheduler I have to create a loop to request it to schedule the event to run for one hour:

scheduler = sched.scheduler(time.time, time.sleep)
# Schedule the event. THIS IS UGLY!
for i in range(60):
    scheduler.enter(3600 * i, 1, query_rate_limit, ())
scheduler.run()

What other ways are there to do this?
You could use threading.Timer, but that also schedules a one-off event, similarly to the .enter method of scheduler objects. The normal pattern (in any language) to transform a one-off scheduler into a periodic scheduler is to have each event re-schedule itself at the specified interval. For example, with sched, I would not use a loop like you're doing, but rather something like:

def periodic(scheduler, interval, action, actionargs=()):
    scheduler.enter(interval, 1, periodic,
                    (scheduler, interval, action, actionargs))
    action(*actionargs)

and initiate the whole "forever periodic schedule" with a call

periodic(scheduler, 3600, query_rate_limit)

Or, I could use threading.Timer instead of scheduler.enter, but the pattern's quite similar. If you need a more refined variation (e.g., stop the periodic rescheduling at a given time or upon certain conditions), that's not too hard to accommodate with a few extra parameters.
You could use schedule. It works on Python 2.7 and 3.3 and is rather lightweight:

import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
My humble take on the subject:

from threading import Timer

class RepeatedTimer(object):
    def __init__(self, interval, function, *args, **kwargs):
        self._timer = None
        self.function = function
        self.interval = interval
        self.args = args
        self.kwargs = kwargs
        self.is_running = False
        self.start()

    def _run(self):
        self.is_running = False
        self.start()
        self.function(*self.args, **self.kwargs)

    def start(self):
        if not self.is_running:
            self._timer = Timer(self.interval, self._run)
            self._timer.start()
            self.is_running = True

    def stop(self):
        self._timer.cancel()
        self.is_running = False

Usage:

from time import sleep

def hello(name):
    print("Hello %s!" % name)

print("starting...")
rt = RepeatedTimer(1, hello, "World")  # it auto-starts, no need of rt.start()
try:
    sleep(5)  # your long-running job goes here...
finally:
    rt.stop()  # better in a try/finally block to make sure the program ends!

Features:
- Standard library only, no external dependencies
- Uses the pattern suggested by Alex Martelli
- start() and stop() are safe to call multiple times even if the timer has already started/stopped
- The function to be called can have positional and named arguments
- You can change interval anytime; it will be effective after the next run. Same for args, kwargs and even function!
Based on MestreLion's answer, this solves a little problem with multithreading:

from threading import Timer, Lock

class Periodic(object):
    """A periodic task running in threading.Timers"""

    def __init__(self, interval, function, *args, **kwargs):
        self._lock = Lock()
        self._timer = None
        self.function = function
        self.interval = interval
        self.args = args
        self.kwargs = kwargs
        self._stopped = True
        if kwargs.pop('autostart', True):
            self.start()

    def start(self, from_run=False):
        self._lock.acquire()
        if from_run or self._stopped:
            self._stopped = False
            self._timer = Timer(self.interval, self._run)
            self._timer.start()
        self._lock.release()

    def _run(self):
        self.start(from_run=True)
        self.function(*self.args, **self.kwargs)

    def stop(self):
        self._lock.acquire()
        self._stopped = True
        self._timer.cancel()
        self._lock.release()
You could use the Advanced Python Scheduler. It even has a cron-like interface.
Use Celery.

from celery.task import PeriodicTask
from datetime import timedelta

class ProcessClicksTask(PeriodicTask):
    run_every = timedelta(minutes=30)

    def run(self, **kwargs):
        # do something
        pass
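Note that class-based periodic tasks were deprecated in Celery 4.x in favour of the beat_schedule setting. A rough equivalent sketch (assuming a Celery app instance and a registered tasks.process_clicks task):

from celery.schedules import crontab

app.conf.beat_schedule = {
    'process-clicks-every-30-minutes': {
        'task': 'tasks.process_clicks',
        # fires at :00 and :30 of every hour
        'schedule': crontab(minute='*/30'),
    },
}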
Based on Alex Martelli's answer, I have implemented a decorator version which is easier to integrate.

import sched
import time
import datetime
from functools import wraps
from threading import Thread

def run_async(func):  # note: "async" is a reserved word in Python 3.7+
    @wraps(func)
    def async_func(*args, **kwargs):
        func_hl = Thread(target=func, args=args, kwargs=kwargs)
        func_hl.start()
        return func_hl
    return async_func

def schedule(interval):
    def decorator(func):
        def periodic(scheduler, interval, action, actionargs=()):
            scheduler.enter(interval, 1, periodic,
                            (scheduler, interval, action, actionargs))
            action(*actionargs)

        @wraps(func)
        def wrap(*args, **kwargs):
            scheduler = sched.scheduler(time.time, time.sleep)
            periodic(scheduler, interval, func)
            scheduler.run()
        return wrap
    return decorator

@run_async
@schedule(1)
def periodic_event():
    print(datetime.datetime.now())

if __name__ == '__main__':
    print('start')
    periodic_event()
    print('end')
Doc: Advanced Python Scheduler

@sched.cron_schedule(day='last sun')
def some_decorated_task():
    print("I am printed at 00:00:00 on the last Sunday of every month!")

Available fields:

| Field       | Description                                                    |
|-------------|----------------------------------------------------------------|
| year        | 4-digit year number                                            |
| month       | month number (1-12)                                            |
| day         | day of the month (1-31)                                        |
| week        | ISO week number (1-53)                                         |
| day_of_week | number or name of weekday (0-6 or mon,tue,wed,thu,fri,sat,sun) |
| hour        | hour (0-23)                                                    |
| minute      | minute (0-59)                                                  |
| second      | second (0-59)                                                  |
Here's a quick and dirty non-blocking loop with Thread:

#!/usr/bin/env python3
import threading, time

def worker():
    print(time.time())
    time.sleep(5)
    t = threading.Thread(target=worker)
    t.start()

threads = []
t = threading.Thread(target=worker)
threads.append(t)
t.start()
time.sleep(7)
print("Hello World")

There's nothing particularly special; the worker creates a new thread of itself with a delay. Might not be the most efficient, but it's simple enough. northtree's answer would be the way to go if you need a more sophisticated solution.

And based on this, we can do the same, just with Timer:

#!/usr/bin/env python3
import threading, time

def hello():
    t = threading.Timer(10.0, hello)
    t.start()
    print("hello, world", time.time())

t = threading.Timer(10.0, hello)
t.start()
time.sleep(12)
print("Oh, hai", time.time())
time.sleep(4)
print("How's it going?", time.time())
There is a new package, called ischedule. For this case, the solution could be as follows:

from ischedule import schedule, run_loop
from datetime import timedelta

def query_rate_limit():
    print("query_rate_limit")

schedule(query_rate_limit, interval=60)
run_loop(return_after=timedelta(hours=1))

Everything runs on the main thread and there is no busy waiting inside the run_loop. The startup time is very precise, usually within a fraction of a millisecond of the specified time.
Taking the original threading.Timer() class implementation and fixing the run() method, I get something like:

from threading import Thread, Event

class PeriodicTimer(Thread):
    """A periodic timer that runs indefinitely until cancel() is called."""

    def __init__(self, interval, function, args=None, kwargs=None):
        Thread.__init__(self)
        self.interval = interval
        self.function = function
        self.args = args if args is not None else []
        self.kwargs = kwargs if kwargs is not None else {}
        self.finished = Event()

    def cancel(self):
        """Stop the timer if it hasn't finished yet."""
        self.finished.set()

    def run(self):
        """Run until canceled."""
        while not self.finished.wait(self.interval):
            self.function(*self.args, **self.kwargs)

The wait() method uses a condition variable internally, so it should be rather efficient.
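A brief usage sketch (start() is inherited from Thread):

timer = PeriodicTimer(5.0, print, args=["still running"])
timer.start()   # fires every 5 seconds in a background thread
# ... later ...
timer.cancel()  # finished.wait() then returns True and the loop exits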
See my sample:

import sched, time

def myTask(m, n):
    print(n + ' ' + m)

def periodic_queue(interval, func, args=(), priority=1):
    s = sched.scheduler(time.time, time.sleep)
    periodic_task(s, interval, func, args, priority)
    s.run()

def periodic_task(scheduler, interval, func, args, priority):
    func(*args)
    scheduler.enter(interval, priority, periodic_task,
                    (scheduler, interval, func, args, priority))

periodic_queue(1, myTask, ('world', 'hello'))
I ran into a similar issue a while back, so I made a Python module, event-scheduler, to address this. It has a very similar API to the sched library, with a few differences:
- It utilizes a background thread and is always able to accept and run jobs in the background until the scheduler is stopped explicitly (no need for a while loop).
- It comes with an API to schedule recurring events at a user-specified interval until explicitly cancelled.

It can be installed via pip install event-scheduler.

from event_scheduler import EventScheduler

event_scheduler = EventScheduler()
event_scheduler.start()
# Schedule the recurring event to print "hello world" every 60 seconds with priority 1
# You can use the event_id to cancel the recurring event later
event_id = event_scheduler.enter_recurring(60, 1, print, ("hello world",))
Slow multiprocessing when parent object contains large data
Consider the following snippet:

import numpy as np
import multiprocessing as mp
import time

def work_standalone(args):
    return 2

class Worker:
    def __init__(self):
        self.data = np.random.random(size=(10000, 10000))
        # leave a trace whenever init is called
        with open('rnd-%d' % np.random.randint(100), 'a') as f:
            f.write('init called\n')

    def work_internal(self, args):
        return 2

    def _run(self, target):
        with mp.Pool() as pool:
            tasks = [[idx] for idx in range(16)]
            result = pool.imap(target, tasks)
            for res in result:
                pass

    def run_internal(self):
        self._run(self.work_internal)

    def run_standalone(self):
        self._run(work_standalone)

if __name__ == '__main__':
    t1 = time.time()
    Worker().run_standalone()
    t2 = time.time()
    print(f'Standalone took {t2 - t1:.3f} seconds')
    t3 = time.time()
    Worker().run_internal()
    t4 = time.time()
    print(f'Internal took {t3 - t4:.3f} seconds')

I.e. we have an object containing a large variable that uses multiprocessing to parallelize some work that has nothing to do with that large variable, i.e. does not read from or write to it. The location of the worker function has a huge impact on the runtime:

Standalone took 0.616 seconds
Internal took 19.917 seconds

Why is this happening? I am completely lost. Note that __init__ is only called twice, so the random data is not created for every new process in the pool. The only reason I can think of why this would be slow is that data is copied around, but that would not make sense since it is never used anywhere, and Python is supposed to use copy-on-write semantics. Also note that the difference disappears if you make run_internal a static method.
The issue you have is due to the target you are calling from the pool. That target is a function holding a reference to the Worker instance.

Now, you're right that __init__() is only called twice. But remember, when you send anything to and from the processes, Python needs to pickle the data first. So, because your target is self.work_internal(), Python has to pickle the Worker() instance every time imap is called. This leads to one issue: self.data being copied over again and again.

The following is the proof. I just added an input() statement and fixed the final time calculation.

import numpy as np
import multiprocessing as mp
import time

def work_standalone(args):
    return 2

class Worker:
    def __init__(self):
        self.data = np.random.random(size=(10000, 10000))
        # leave a trace whenever init is called
        with open('rnd-%d' % np.random.randint(100), 'a') as f:
            f.write('init called\n')

    def work_internal(self, args):
        return 2

    def _run(self, target):
        with mp.Pool() as pool:
            tasks = [[idx] for idx in range(16)]
            result = pool.imap(target, tasks)
            input("Wait for analysis")
            for res in result:
                pass

    def run_internal(self):
        self._run(self.work_internal)
        # self._run(work_standalone)

    def run_standalone(self):
        self._run(work_standalone)

if __name__ == '__main__':
    t1 = time.time()
    Worker().run_standalone()
    t2 = time.time()
    print(f'Standalone took {t2 - t1:.3f} seconds')
    t3 = time.time()
    Worker().run_internal()
    t4 = time.time()
    print(f'Internal took {t4 - t3:.3f} seconds')

You can run the code: when "Wait for analysis" shows up, go and check the memory usage. The second time you see the message, press enter and observe the memory usage increasing and then decreasing again.

On the other hand, if you change self._run(self.work_internal) to self._run(work_standalone), you will notice that it is very fast, the memory does not increase, and the time taken is a lot shorter than with self.work_internal.

Solution

One way to solve your issue is to make data a static class variable. In normal cases, this prevents instances from having to copy/re-initialize the variable again. This also prevented the issue from occurring.

class Worker:
    data = np.random.random(size=(10000, 10000))

    def __init__(self):
        pass
    ...
Writing an EventLoop without using asyncio
I'm getting very familiar with Python's asyncio, asynchronous programming in Python, co-routines, etc. I want to be able to execute several co-routines with my own custom-made event loop. I'm curious whether I can write my own event loop without importing asyncio at all.
"I want to be able to execute several co-routines with my own custom-made event loop."

The asyncio event loop is well-tested and can be easily extended to acknowledge non-asyncio events. If you describe the actual use case, it might be easier to help. But if your goal is to learn about async programming and coroutines, read on.

"I'm curious whether I can write my own event loop without importing asyncio at all."

It's definitely possible - asyncio itself is just a library, after all - but it will take some work for your event loop to be useful. See this excellent talk by David Beazley where he demonstrates writing an event loop in front of a live audience. (Don't be put off by David using the older yield from syntax - await works exactly the same way.)
Ok, so I found an example somewhere (sorry, don't remember where, no link) and changed it a little bit. An event loop and co-routines, without even importing asyncio:

import datetime
import heapq
import types
import time

class Task:
    def __init__(self, wait_until, coro):
        self.coro = coro
        self.waiting_until = wait_until

    def __eq__(self, other):
        return self.waiting_until == other.waiting_until

    def __lt__(self, other):
        return self.waiting_until < other.waiting_until

class SleepingLoop:
    def __init__(self, *coros):
        self._new = coros
        self._waiting = []

    def run_until_complete(self):
        # Start all the coroutines.
        for coro in self._new:
            wait_for = coro.send(None)
            heapq.heappush(self._waiting, Task(wait_for, coro))
        # Keep running until there is no more work to do.
        while self._waiting:
            now = datetime.datetime.now()
            # Get the coroutine with the soonest resumption time.
            task = heapq.heappop(self._waiting)
            if now < task.waiting_until:
                # We're ahead of schedule; wait until it's time to resume.
                delta = task.waiting_until - now
                time.sleep(delta.total_seconds())
                now = datetime.datetime.now()
            try:
                # It's time to resume the coroutine.
                wait_until = task.coro.send(now)
                heapq.heappush(self._waiting, Task(wait_until, task.coro))
            except StopIteration:
                # The coroutine is done.
                pass

@types.coroutine
def async_sleep(seconds):
    now = datetime.datetime.now()
    wait_until = now + datetime.timedelta(seconds=seconds)
    actual = yield wait_until
    return actual - now

async def countdown(label, total_seconds_wait, *, delay=0):
    print(label, 'waiting', delay, 'seconds before starting countdown')
    delta = await async_sleep(delay)
    print(label, 'starting after waiting', delta)
    while total_seconds_wait:
        print(label, 'T-minus', total_seconds_wait)
        waited = await async_sleep(1)
        total_seconds_wait -= 1
    print(label, 'lift-off!')

def main():
    loop = SleepingLoop(countdown('A', 5, delay=0),
                        countdown('B', 3, delay=2),
                        countdown('C', 4, delay=1))
    start = datetime.datetime.now()
    loop.run_until_complete()
    print('Total elapsed time is', datetime.datetime.now() - start)

if __name__ == '__main__':
    main()
Celery define workflows on the fly
I am using Python 3.6.6 and Celery 4.2.0. I am trying to manage dynamic task workflows which could change on the fly. Workflows may contain long- and short-duration steps. For example, initially I have the following task workflow:

[Initial workflow]

But at some point I have to add another task, D, which depends on A, so D has to wait until A finishes:

[Desired workflow]

from __future__ import absolute_import
from celery import subtask, signals
from pymemcache.client import base
from test_celery.celery import app
import time

def get_task_uuid(task):
    return str(hash(frozenset((task[0], task[1]))))

@app.task
def add(x, y):
    print('add({},{}) = {} | {}'.format(x, y, x + y, time.time()))
    return x + y

@app.task
def sub(x, y):
    print('sub({},{}) = {} | {}'.format(x, y, x - y, time.time()))
    return x - y

@app.task
def mul(x, y):
    time.sleep(10)
    print('mul({},{}) = {} | {}'.format(x, y, x * y, time.time()))
    return x * y

@signals.before_task_publish.connect
def before_task_publish(body, exchange, routing_key, headers, properties, retry_policy, **kw):
    task = (body, headers['task'])
    uuid = get_task_uuid(task)

I've been looking for any possible approach, trying to listen to task signals to make D run as soon as task A succeeds (signals.task_success). Any idea?
You could use chains and link the result of the first task to call whatever task you want, according to the result.
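A minimal sketch of both primitives, assuming the add and mul tasks defined in the question:

from celery import chain

# Static chain: mul receives add's result as its first argument.
chain(add.s(2, 2), mul.s(10)).apply_async()

# Or attach a callback dynamically with link: when add succeeds,
# its result is passed on to mul.
add.apply_async((2, 2), link=mul.s(10))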