Restricting concurrent processes with advisory file locks - Linux

While fragment X of code is executing, fragment Y should not be executed, and when Y is executing, X should not be executed; instead, each should pause while the other is currently running.
However, any number of processes or threads are allowed to concurrently execute code X.
What about executing more than one Y in parallel? It should either also pause until the other instance of Y finishes, or die with an error: "Cannot execute more than one Y in parallel".
Can this be implemented with Unix advisory file locks? I think I could write the number of processes currently executing X into the file (similar to reference counting). Or maybe I need two files: one storing the number of concurrent processes executing X, and another for locking the first one? Now let's try to construct the algorithm.
Note that I write in Python.

This can be solved with one lock because there are two kinds of advisory file locks: exclusive and shared. Y gets an exclusive lock and X a shared lock.
Here is the actual code:
import logging
from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN
from functools import wraps

class SequentialTask:
    def __init__(self, lock_file):
        self.lock_file = lock_file

    def sequential(self, lock_mode):
        def sequential_decorator(func):
            #@wraps(func)  # TODO: doesn't work
            def func_wrapper(*args, **kwargs):
                with open(self.lock_file, 'w') as file:
                    logger = logging.getLogger(__name__)
                    logger.debug("Locking a sequential request...")
                    flock(file, lock_mode)
                    try:
                        return func(*args, **kwargs)
                    finally:
                        logger.debug("Unlocking a sequential request...")
                        # unlink(self.lock_file)  # Don't do: http://stackoverflow.com/q/17708885/856090
                        flock(file, LOCK_UN)
                        # unlink(self.lock_file)  # Don't do: an open but unlocked file may be deleted.
            return func_wrapper
        return sequential_decorator
Then
task = SequentialTask(VAR_DIR+"/groups-modify.lock")
and just add @task.sequential(LOCK_SH) to the function implementing X and @task.sequential(LOCK_EX) to the function implementing Y.
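The question also asked for a variant where a second Y dies instead of waiting. flock supports this via a non-blocking mode; a minimal sketch, assuming the same lock file (run_y_or_die is a hypothetical helper name):

import sys
from fcntl import flock, LOCK_EX, LOCK_NB

def run_y_or_die(lock_path):
    with open(lock_path, 'w') as file:
        try:
            # LOCK_NB makes flock raise instead of blocking when the lock is held
            flock(file, LOCK_EX | LOCK_NB)
        except BlockingIOError:
            sys.exit("Cannot execute more than one Y in parallel")
        ...  # do Y's work while holding the exclusive lock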

Related

How does asyncio enable parallel IO waits? [duplicate]

This question is motivated by another question of mine: How to await in cdef?
There are tons of articles and blog posts on the web about asyncio, but they are all very superficial. I couldn't find any information about how asyncio is actually implemented, and what makes I/O asynchronous. I was trying to read the source code, but it's thousands of lines of not the highest grade C code, a lot of which deals with auxiliary objects, but most crucially, it is hard to connect Python syntax to the C code it translates into.
Asyncio's own documentation is even less helpful. There's no information there about how it works, only some guidelines about how to use it, which are also sometimes misleading / very poorly written.
I'm familiar with Go's implementation of coroutines, and was kind of hoping that Python did the same thing. If that was the case, the code I came up with in the post linked above would have worked. Since it didn't, I'm now trying to figure out why. My best guess so far is as follows, please correct me where I'm wrong:
Procedure definitions of the form async def foo(): ... are actually interpreted as methods of a class inheriting coroutine.
Perhaps, async def is actually split into multiple methods by await statements, where the object, on which these methods are called is able to keep track of the progress it made through the execution so far.
If the above is true, then, essentially, execution of a coroutine boils down to calling methods of coroutine object by some global manager (loop?).
The global manager is somehow (how?) aware of when I/O operations are performed by Python (only?) code and is able to choose one of the pending coroutine methods to execute after the current executing method relinquished control (hit on the await statement).
In other words, here's my attempt at "desugaring" of some asyncio syntax into something more understandable:
async def coro(name):
    print('before', name)
    await asyncio.sleep()
    print('after', name)

asyncio.gather(coro('first'), coro('second'))
# translated from async def coro(name)
class Coro(coroutine):
    def before(self, name):
        print('before', name)

    def after(self, name):
        print('after', name)

    def __init__(self, name):
        self.name = name
        self.parts = self.before, self.after
        self.pos = 0

    def __call__(self):
        self.parts[self.pos](self.name)
        self.pos += 1

    def done(self):
        return self.pos == len(self.parts)

# translated from asyncio.gather()
class AsyncIOManager:
    def gather(*coros):
        while not every(c.done() for c in coros):
            coro = random.choice(coros)
            coro()
Should my guess prove correct: then I have a problem. How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter? What exactly is meant by I/O? If my Python procedure called the C open() procedure, and it in turn sent an interrupt to the kernel, relinquishing control to it, how does the Python interpreter know about this and is able to continue running some other code, while the kernel code does the actual I/O, up until it wakes up the Python procedure which sent the interrupt originally? How can the Python interpreter, in principle, be aware of this happening?
How does asyncio work?
Before answering this question we need to understand a few base terms; skip ahead if you already know them.
Generators
Generators are objects that allow us to suspend the execution of a Python function. User-defined generators are implemented using the keyword yield. By creating a normal function containing the yield keyword, we turn that function into a generator:
>>> def test():
...     yield 1
...     yield 2
...
>>> gen = test()
>>> next(gen)
1
>>> next(gen)
2
>>> next(gen)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
As you can see, calling next() on the generator causes the interpreter to load test's frame and return the yielded value. Calling next() again causes the frame to be loaded onto the interpreter stack once more, and execution continues to the next yielded value.
By the third call to next(), our generator is finished, so StopIteration is raised.
Communicating with a generator
A less-known feature of generators is the fact that you can communicate with them using two methods: send() and throw().
>>> def test():
...     val = yield 1
...     print(val)
...     yield 2
...     yield 3
...
>>> gen = test()
>>> next(gen)
1
>>> gen.send("abc")
abc
2
>>> gen.throw(Exception())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in test
Exception
Upon calling gen.send(), the value is passed into the generator as the result of the paused yield expression.
gen.throw(), on the other hand, allows throwing exceptions inside generators, with the exception raised at the spot where the generator was paused at yield.
Returning values from generators
Returning a value from a generator, results in the value being put inside the StopIteration exception. We can later on recover the value from the exception and use it to our needs.
>>> def test():
...     yield 1
...     return "abc"
...
>>> gen = test()
>>> next(gen)
1
>>> try:
...     next(gen)
... except StopIteration as exc:
...     print(exc.value)
...
abc
Behold, a new keyword: yield from
Python 3.3 (PEP 380) came with the addition of a new keyword: yield from. What that keyword allows us to do is pass on any next(), send() and throw() into an inner-most nested generator. If the inner generator returns a value, it is also the return value of yield from:
>>> def inner():
...     inner_result = yield 2
...     print('inner', inner_result)
...     return 3
...
>>> def outer():
...     yield 1
...     val = yield from inner()
...     print('outer', val)
...     yield 4
...
>>> gen = outer()
>>> next(gen)
1
>>> next(gen)  # Goes inside inner() automatically
2
>>> gen.send("abc")
inner abc
outer 3
4
I've written an article to further elaborate on this topic.
Putting it all together
Upon the introduction of yield from in Python 3.3, we were able to create generators inside generators that, just like a tunnel, pass data back and forth from the inner-most to the outer-most generators. This spawned a new meaning for generators - coroutines.
Coroutines are functions that can be stopped and resumed while being run. In Python, they are defined using the async def keyword. Much like generators, they too use their own form of yield from which is await. Before async and await were introduced in Python 3.5, we created coroutines in the exact same way generators were created (with yield from instead of await).
async def inner():
    return 1

async def outer():
    await inner()
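For reference, a sketch of what the same pair looked like in the pre-3.5 generator-based style (@asyncio.coroutine has since been deprecated and removed from current Python versions):

import asyncio

@asyncio.coroutine  # marks a generator-based coroutine (legacy)
def inner():
    return 1

@asyncio.coroutine
def outer():
    yield from inner()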
Just like all iterators and generators implement the __iter__() method, all coroutines implement __await__(), which allows them to continue every time await coro is called.
There's a nice sequence diagram inside the Python docs that you should check out.
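Since coroutines reuse the generator protocol, you can drive one by hand to see this; a small sketch:

async def add(a, b):
    return a + b

coro = add(1, 2)
try:
    coro.send(None)   # start the coroutine, just like priming a generator
except StopIteration as exc:
    print(exc.value)  # 3 - the return value travels inside StopIteration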
In asyncio, apart from coroutine functions, we have 2 important objects: tasks and futures.
Futures
Futures are objects that have the __await__() method implemented, and their job is to hold a certain state and result. The state can be one of the following:
PENDING - future does not have any result or exception set.
CANCELLED - future was cancelled using fut.cancel()
FINISHED - future was finished, either by a result set using fut.set_result() or by an exception set using fut.set_exception()
The result, just like you have guessed, can either be a Python object, that will be returned, or an exception which may be raised.
Another important feature of future objects is that they contain a method called add_done_callback(). This method allows functions to be called as soon as the task is done - whether it raised an exception or finished.
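A minimal sketch of that life cycle (the names and timings here are illustrative, not from the original answer):

import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()                        # state: PENDING
    fut.add_done_callback(lambda f: print('done:', f.result()))
    loop.call_later(0.1, fut.set_result, 42)          # state: FINISHED shortly after
    print(await fut)                                  # prints 42 (after 'done: 42')

asyncio.run(main())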
Tasks
Task objects are special futures, which wrap around coroutines, and communicate with the inner-most and outer-most coroutines. Every time a coroutine awaits a future, the future is passed all the way back to the task (just like in yield from), and the task receives it.
Next, the task binds itself to the future. It does so by calling add_done_callback() on the future. From now on, if the future will ever be done, by either being cancelled, passed an exception or passed a Python object as a result, the task's callback will be called, and it will rise back up to existence.
Asyncio
The final burning question we must answer is - how is the IO implemented?
Deep inside asyncio, we have an event loop. An event loop of tasks. The event loop's job is to call tasks every time they are ready and coordinate all that effort into one single working machine.
The IO part of the event loop is built upon a single crucial function called select. Select is a blocking function, implemented by the operating system underneath, that allows waiting on sockets for incoming or outgoing data. Upon receiving data it wakes up, and returns the sockets which received data, or the sockets which are ready for writing.
When you try to receive or send data over a socket through asyncio, what actually happens below is that the socket is first checked if it has any data that can be immediately read or sent. If its .send() buffer is full, or the .recv() buffer is empty, the socket is registered to the select function (by simply adding it to one of the lists, rlist for recv and wlist for send) and the appropriate function awaits a newly created future object, tied to that socket.
When all available tasks are waiting for futures, the event loop calls select and waits. When one of the sockets has incoming data, or its send buffer has drained, asyncio checks for the future object tied to that socket, and sets it to done.
Now all the magic happens. The future is set to done, the task that added itself before with add_done_callback() rises back to life, and calls .send() on the coroutine, which resumes the inner-most coroutine (because of the await chain), and you read the newly received data from a nearby buffer into which it was spilled.
The chain of events again, in the case of recv():
select.select waits.
A ready socket, with data is returned.
Data from the socket is moved into a buffer.
future.set_result() is called.
Task that added itself with add_done_callback() is now woken up.
Task calls .send() on the coroutine which goes all the way into the inner-most coroutine and wakes it up.
Data is being read from the buffer and returned to our humble user.
In summary, asyncio uses generator capabilities, that allow pausing and resuming functions. It uses yield from capabilities that allow passing data back and forth from the inner-most generator to the outer-most. It uses all of those in order to halt function execution while it's waiting for IO to complete (by using the OS select function).
And the best of all? While one function is paused, another may run and interleave with the delicate fabric, which is asyncio.
Talking about async/await and asyncio is not talking about the same thing. The first is a fundamental, low-level construct (coroutines) while the latter is a library using these constructs. Conversely, there is no single ultimate answer.
The following is a general description of how async/await and asyncio-like libraries work. That is, there may be other tricks on top (there are...) but they are inconsequential unless you build them yourself. The difference should be negligible unless you already know enough to not have to ask such a question.
1. Coroutines versus subroutines in a nutshell
Just like subroutines (functions, procedures, ...), coroutines (generators, ...) are an abstraction of call stack and instruction pointer: there is a stack of executing code pieces, and each is at a specific instruction.
The distinction of def versus async def is merely for clarity. The actual difference is return versus yield. From this, await or yield from take the difference from individual calls to entire stacks.
1.1. Subroutines
A subroutine represents a new stack level to hold local variables, and a single traversal of its instructions to reach an end. Consider a subroutine like this:
def subfoo(bar):
    qux = 3
    return qux * bar
When you run it, that means
1. allocate stack space for bar and qux
2. recursively execute the first statement and jump to the next statement
3. once at a return, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)
Notably, 4. means that a subroutine always starts at the same state. Everything exclusive to the function itself is lost upon completion. A function cannot be resumed, even if there are instructions after return.
root -\
: \- subfoo --\
:/--<---return --/
|
V
1.2. Coroutines as persistent subroutines
A coroutine is like a subroutine, but can exit without destroying its state. Consider a coroutine like this:
def cofoo(bar):
    qux = yield bar  # yield marks a break point
    return qux
When you run it, that means
1. allocate stack space for bar and qux
2. recursively execute the first statement and jump to the next statement
2.1. once at a yield, push its value to the calling stack but store the stack and instruction pointer
2.2. once calling into yield, restore stack and instruction pointer and push arguments to qux
3. once at a return, push its value to the calling stack
4. clear the stack (1.) and instruction pointer (2.)
Note the addition of 2.1 and 2.2 - a coroutine can be suspended and resumed at predefined points. This is similar to how a subroutine is suspended during calling another subroutine. The difference is that the active coroutine is not strictly bound to its calling stack. Instead, a suspended coroutine is part of a separate, isolated stack.
root -\
: \- cofoo --\
:/--<+--yield --/
| :
V :
This means that suspended coroutines can be freely stored or moved between stacks. Any call stack that has access to a coroutine can decide to resume it.
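A small sketch of that freedom, using the cofoo generator from above: whoever holds the suspended object may resume it, from any call site:

gen = cofoo(2)
print(next(gen))       # run up to the yield: prints 2
# ``gen`` can now be stored or passed around; any holder may resume it:
try:
    gen.send(10)       # 10 is pushed into qux ...
except StopIteration as exc:
    print(exc.value)   # ... and returned: prints 10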
1.3. Traversing the call stack
So far, our coroutine only goes down the call stack with yield. A subroutine can go down and up the call stack with return and (). For completeness, coroutines also need a mechanism to go up the call stack. Consider a coroutine like this:
def wrap():
yield 'before'
yield from cofoo()
yield 'after'
When you run it, that means it still allocates the stack and instruction pointer like a subroutine. When it suspends, that still is like storing a subroutine.
However, yield from does both. It suspends stack and instruction pointer of wrap and runs cofoo. Note that wrap stays suspended until cofoo finishes completely. Whenever cofoo suspends or something is sent, cofoo is directly connected to the calling stack.
1.4. Coroutines all the way down
As established, yield from allows to connect two scopes across another intermediate one. When applied recursively, that means the top of the stack can be connected to the bottom of the stack.
root -\
: \-> coro_a -yield-from-> coro_b --\
:/ <-+------------------------yield ---/
| :
:\ --+-- coro_a.send----------yield ---\
: coro_b <-/
Note that root and coro_b do not know about each other. This makes coroutines much cleaner than callbacks: coroutines are still built on a 1:1 relation like subroutines. Coroutines suspend and resume their entire existing execution stack up until a regular call point.
Notably, root could have an arbitrary number of coroutines to resume. Yet, it can never resume more than one at the same time. Coroutines of the same root are concurrent but not parallel!
1.5. Python's async and await
The explanation has so far explicitly used the yield and yield from vocabulary of generators - the underlying functionality is the same. The new Python 3.5 syntax async and await exists mainly for clarity.
def foo():  # subroutine?
    return None

def foo():  # coroutine?
    yield from foofoo()  # generator? coroutine?

async def foo():  # coroutine!
    await foofoo()  # coroutine!
    return None
The async for and async with statements are needed because you would break the yield from/await chain with the bare for and with statements.
2. Anatomy of a simple event loop
By itself, a coroutine has no concept of yielding control to another coroutine. It can only yield control to the caller at the bottom of a coroutine stack. This caller can then switch to another coroutine and run it.
This root node of several coroutines is commonly an event loop: on suspension, a coroutine yields an event on which it wants to resume. In turn, the event loop is capable of efficiently waiting for these events to occur. This allows it to decide which coroutine to run next, or how to wait before resuming.
Such a design implies that there is a set of pre-defined events that the loop understands. Several coroutines await each other, until finally an event is awaited. This event can communicate directly with the event loop by yielding control.
loop -\
: \-> coroutine --await--> event --\
:/ <-+----------------------- yield --/
| :
| : # loop waits for event to happen
| :
:\ --+-- send(reply) -------- yield --\
: coroutine <--yield-- event <-/
The key is that coroutine suspension allows the event loop and events to directly communicate. The intermediate coroutine stack does not require any knowledge about which loop is running it, nor how events work.
2.1.1. Events in time
The simplest event to handle is reaching a point in time. This is a fundamental block of threaded code as well: a thread repeatedly sleeps until a condition is true.
However, a regular sleep blocks execution by itself - we want other coroutines to not be blocked. Instead, we want to tell the event loop when it should resume the current coroutine stack.
2.1.2. Defining an Event
An event is simply a value we can identify - be it via an enum, a type or some other identity. We can define this with a simple class that stores our target time. In addition to storing the event information, we can allow awaiting an instance of the class directly.
class AsyncSleep:
    """Event to sleep until a point in time"""
    def __init__(self, until: float):
        self.until = until

    # used whenever someone ``await``s an instance of this Event
    def __await__(self):
        # yield this Event to the loop
        yield self

    def __repr__(self):
        return '%s(until=%.1f)' % (self.__class__.__name__, self.until)
This class only stores the event - it does not say how to actually handle it.
The only special feature is __await__ - it is what the await keyword looks for. Practically, it is an iterator but not available for the regular iteration machinery.
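To see this, we can drive __await__ by hand (a sketch using the AsyncSleep class from above):

import time

event = AsyncSleep(time.time() + 1)
stream = event.__await__()  # the iterator that ``await event`` drives
print(next(stream))         # the event yields itself: AsyncSleep(until=...)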
2.2.1. Awaiting an event
Now that we have an event, how do coroutines react to it? We should be able to express the equivalent of sleep by awaiting our event. To better see what is going on, we wait twice for half the time:
import time

async def asleep(duration: float):
    """await that ``duration`` seconds pass"""
    await AsyncSleep(time.time() + duration / 2)
    await AsyncSleep(time.time() + duration / 2)
We can directly instantiate and run this coroutine. Similar to a generator, using coroutine.send runs the coroutine until it yields a result.
coroutine = asleep(100)
while True:
    print(coroutine.send(None))
    time.sleep(0.1)
This gives us two AsyncSleep events and then a StopIteration when the coroutine is done. Notice that the only delay is from time.sleep in the loop! Each AsyncSleep only stores an offset from the current time.
2.2.2. Event + Sleep
At this point, we have two separate mechanisms at our disposal:
AsyncSleep Events that can be yielded from inside a coroutine
time.sleep that can wait without impacting coroutines
Notably, these two are orthogonal: neither one affects or triggers the other. As a result, we can come up with our own strategy to sleep to meet the delay of an AsyncSleep.
2.3. A naive event loop
If we have several coroutines, each can tell us when it wants to be woken up. We can then wait until the first of them wants to be resumed, then for the one after, and so on. Notably, at each point we only care about which one is next.
This makes for a straightforward scheduling:
1. sort coroutines by their desired wake up time
2. pick the first that wants to wake up
3. wait until this point in time
4. run this coroutine
5. repeat from 1.
A trivial implementation does not need any advanced concepts. A list allows us to sort coroutines by wake-up time. Waiting is a regular time.sleep. Running coroutines works just like before with coroutine.send.
def run(*coroutines):
    """Cooperatively run all ``coroutines`` until completion"""
    # store wake-up-time and coroutines
    waiting = [(0, coroutine) for coroutine in coroutines]
    while waiting:
        # 2. pick the first coroutine that wants to wake up
        until, coroutine = waiting.pop(0)
        # 3. wait until this point in time
        time.sleep(max(0.0, until - time.time()))
        # 4. run this coroutine
        try:
            command = coroutine.send(None)
        except StopIteration:
            continue
        # 1. sort coroutines by their desired suspension
        if isinstance(command, AsyncSleep):
            waiting.append((command.until, coroutine))
            waiting.sort(key=lambda item: item[0])
Of course, this has ample room for improvement. We can use a heap for the wait queue or a dispatch table for events. We could also fetch return values from the StopIteration and assign them to the coroutine. However, the fundamental principle remains the same.
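For example, the sort-on-every-insert queue could become a heap; a sketch (the counter is an assumption, added only to break ties, since coroutines themselves are not comparable):

import heapq
from itertools import count

sequence = count()

# instead of waiting.append(...) followed by waiting.sort(...):
heapq.heappush(waiting, (command.until, next(sequence), coroutine))
# instead of waiting.pop(0):
until, _, coroutine = heapq.heappop(waiting)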
2.4. Cooperative Waiting
The AsyncSleep event and run event loop are a fully working implementation of timed events.
async def sleepy(identifier: str = "coroutine", count=5):
    for i in range(count):
        print(identifier, 'step', i + 1, 'at %.2f' % time.time())
        await asleep(0.1)

run(*(sleepy("coroutine %d" % j) for j in range(5)))
This cooperatively switches between each of the five coroutines, suspending each for 0.1 seconds. Even though the event loop is synchronous, it still executes the work in 0.5 seconds instead of 2.5 seconds. Each coroutine holds state and acts independently.
3. I/O event loop
An event loop that supports sleep is suitable for polling. However, waiting for I/O on a file handle can be done more efficiently: the operating system implements I/O and thus knows which handles are ready. Ideally, an event loop should support an explicit "ready for I/O" event.
3.1. The select call
Python already has an interface to query the OS for ready I/O handles. When called with handles to read or write, it returns the handles ready to read or write:
readable, writable, _ = select.select(rlist, wlist, xlist, timeout)
For example, we can open a file for writing and wait for it to be ready:
write_target = open('/tmp/foo', 'w')
readable, writable, _ = select.select([], [write_target], [])
Once select returns, writable contains our open file.
3.2. Basic I/O event
Similar to the AsyncSleep request, we need to define an event for I/O. With the underlying select logic, the event must refer to a readable object - say an open file. In addition, we store how much data to read.
class AsyncRead:
    def __init__(self, file, amount=1):
        self.file = file
        self.amount = amount
        self._buffer = b'' if 'b' in file.mode else ''

    def __await__(self):
        while len(self._buffer) < self.amount:
            yield self
            # we only get here if ``read`` should not block
            self._buffer += self.file.read(1)
        return self._buffer

    def __repr__(self):
        return '%s(file=%s, amount=%d, progress=%d)' % (
            self.__class__.__name__, self.file, self.amount, len(self._buffer)
        )
As with AsyncSleep we mostly just store the data required for the underlying system call. This time, __await__ is capable of being resumed multiple times - until our desired amount has been read. In addition, we return the I/O result instead of just resuming.
3.3. Augmenting an event loop with read I/O
The basis for our event loop is still the run defined previously. First, we need to track the read requests. This is no longer a sorted schedule; we only map read requests to coroutines.
# new
waiting_read = {} # type: Dict[file, coroutine]
Since select.select takes a timeout parameter, we can use it in place of time.sleep.
# old
time.sleep(max(0.0, until - time.time()))
# new
readable, _, _ = select.select(list(waiting_read), [], [])
This gives us all readable files - if there are any, we run the corresponding coroutine. If there are none, we have waited long enough for our current coroutine to run.
# new - reschedule waiting coroutine, run readable coroutine
if readable:
    waiting.append((until, coroutine))
    waiting.sort()
    coroutine = waiting_read[readable[0]]
Finally, we have to actually listen for read requests.
# new
if isinstance(command, AsyncSleep):
    ...
elif isinstance(command, AsyncRead):
    ...
3.4. Putting it together
The above was a bit of a simplification. We need to do some switching to not starve sleeping coroutines if we can always read. We need to handle having nothing to read or nothing to wait for. However, the end result still fits into 30 LOC.
def run(*coroutines):
    """Cooperatively run all ``coroutines`` until completion"""
    waiting_read = {}  # type: Dict[file, coroutine]
    waiting = [(0, coroutine) for coroutine in coroutines]
    while waiting or waiting_read:
        # 2. wait until the next coroutine may run or read ...
        try:
            until, coroutine = waiting.pop(0)
        except IndexError:
            until, coroutine = float('inf'), None
            readable, _, _ = select.select(list(waiting_read), [], [])
        else:
            readable, _, _ = select.select(list(waiting_read), [], [], max(0.0, until - time.time()))
        # ... and select the appropriate one
        if readable and time.time() < until:
            if until and coroutine:
                waiting.append((until, coroutine))
                waiting.sort()
            coroutine = waiting_read.pop(readable[0])
        # 3. run this coroutine
        try:
            command = coroutine.send(None)
        except StopIteration:
            continue
        # 1. sort coroutines by their desired suspension ...
        if isinstance(command, AsyncSleep):
            waiting.append((command.until, coroutine))
            waiting.sort(key=lambda item: item[0])
        # ... or register reads
        elif isinstance(command, AsyncRead):
            waiting_read[command.file] = coroutine
3.5. Cooperative I/O
The AsyncSleep, AsyncRead and run implementations are now fully functional to sleep and/or read.
Same as for sleepy, we can define a helper to test reading:
async def ready(path, amount=1024*32):
    print('read', path, 'at', '%d' % time.time())
    with open(path, 'rb') as file:
        result = await AsyncRead(file, amount)
    print('done', path, 'at', '%d' % time.time())
    print('got', len(result), 'B')

run(sleepy('background', 5), ready('/dev/urandom'))
Running this, we can see that our I/O is interleaved with the waiting task:
id background round 1
read /dev/urandom at 1530721148
id background round 2
id background round 3
id background round 4
id background round 5
done /dev/urandom at 1530721148
got 1024 B
4. Non-Blocking I/O
While I/O on files gets the concept across, it is not really suitable for a library like asyncio: the select call always returns for files, and both open and read may block indefinitely. This blocks all coroutines of an event loop - which is bad. Libraries like aiofiles use threads and synchronization to fake non-blocking I/O and events on files.
However, sockets do allow for non-blocking I/O - and their inherent latency makes it much more critical. When used in an event loop, waiting for data and retrying can be wrapped without blocking anything.
4.1. Non-Blocking I/O event
Similar to our AsyncRead, we can define a suspend-and-read event for sockets. Instead of taking a file, we take a socket - which must be non-blocking. Also, our __await__ uses socket.recv instead of file.read.
class AsyncRecv:
    def __init__(self, connection, amount=1, read_buffer=1024):
        assert not connection.getblocking(), 'connection must be non-blocking for async recv'
        self.connection = connection
        self.amount = amount
        self.read_buffer = read_buffer
        self._buffer = b''

    def __await__(self):
        while len(self._buffer) < self.amount:
            try:
                self._buffer += self.connection.recv(self.read_buffer)
            except BlockingIOError:
                yield self
        return self._buffer

    def __repr__(self):
        return '%s(file=%s, amount=%d, progress=%d)' % (
            self.__class__.__name__, self.connection, self.amount, len(self._buffer)
        )
In contrast to AsyncRead, __await__ performs truly non-blocking I/O. When data is available, it always reads. When no data is available, it always suspends. That means the event loop is only blocked while we perform useful work.
4.2. Un-Blocking the event loop
As far as the event loop is concerned, nothing changes much. The event to listen for is still the same as for files - a file descriptor marked ready by select.
# old
elif isinstance(command, AsyncRead):
    waiting_read[command.file] = coroutine

# new
elif isinstance(command, AsyncRead):
    waiting_read[command.file] = coroutine
elif isinstance(command, AsyncRecv):
    waiting_read[command.connection] = coroutine
At this point, it should be obvious that AsyncRead and AsyncRecv are the same kind of event. We could easily refactor them to be one event with an exchangeable I/O component. In effect, the event loop, coroutines and events cleanly separate a scheduler, arbitrary intermediate code and the actual I/O.
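One possible shape for that refactor (a hypothetical sketch, not from the original answer; do_read is assumed to raise BlockingIOError while data is not ready, the way a non-blocking recv does):

class AsyncIOEvent:
    """Unified suspend-and-read event with an exchangeable I/O component."""
    def __init__(self, fileobj, do_read, amount=1):
        self.fileobj = fileobj  # what the event loop registers with select
        self.do_read = do_read  # e.g. ``lambda: sock.recv(1024)``
        self.amount = amount
        self._buffer = b''

    def __await__(self):
        while len(self._buffer) < self.amount:
            try:
                self._buffer += self.do_read()
            except BlockingIOError:
                yield self  # suspend until select marks fileobj as ready
        return self._buffer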
4.3. The ugly side of non-blocking I/O
In principle, what you should do at this point is replicate the logic of read as a recv for AsyncRecv. However, this is much more ugly now - you have to handle early returns when functions block inside the kernel, but yield control to you. For example, opening a connection versus opening a file is much longer:
# file
file = open(path, 'rb')

# non-blocking socket
connection = socket.socket()
connection.setblocking(False)
# open without blocking - retry on failure
try:
    connection.connect((url, port))
except BlockingIOError:
    pass
Long story short, what remains is a few dozen lines of Exception handling. The events and event loop already work at this point.
id background round 1
read localhost:25000 at 1530783569
read /dev/urandom at 1530783569
done localhost:25000 at 1530783569 got 32768 B
id background round 2
id background round 3
id background round 4
done /dev/urandom at 1530783569 got 4096 B
id background round 5
Addendum
Example code at github
What is asyncio?
Asyncio stands for asynchronous input output and refers to a programming paradigm which achieves high concurrency using a single thread or event loop.
Asynchronous programming is a form of concurrent programming in which a unit of work is allowed to run separately from the primary application thread. When the work is complete, it notifies the main thread about its completion or failure.
Let's understand asyncio with an example:
To understand the concept behind asyncio, let’s consider a restaurant with a single waiter. Suddenly, three customers, A, B and C show up. The three of them take a varying amount of time to decide what to eat once they receive the menu from the waiter.
Let's assume A takes 5 minutes, B 10 minutes and C 1 minute to decide. If the single waiter starts with B first and takes B's order in 10 minutes, next serves A and spends 5 minutes noting down his order, and finally spends 1 minute finding out what C wants to eat.
So, in total, the waiter spends 10 + 5 + 1 = 16 minutes to take their orders. However, notice that in this sequence of events, C ends up waiting 15 minutes before the waiter gets to him, A waits 10 minutes and B waits 0 minutes.
Now consider if the waiter knew the time each customer would take to decide. He can start with C first, then go to A and finally to B. This way each customer would experience a 0 minute wait.
An illusion of three waiters, one dedicated to each customer is created even though there’s only one.
Lastly, the total time it takes for the waiter to take all three orders is 10 minutes, much less than the 16 minutes in the other scenario.
Let's go through another example:
Suppose chess master Magnus Carlsen hosts a chess exhibition in which he plays multiple amateur players. He has two ways of conducting the exhibition: synchronously and asynchronously.
Assumptions:
24 opponents
Magnus Carlsen makes each chess move in 5 seconds
Opponents each take 55 seconds to make a move
Games average 30 pair-moves (60 moves total)
Synchronously: Magnus Carlsen plays one game at a time, never two at the same time, until the game is complete. Each game takes (55 + 5) * 30 == 1800 seconds, or 30 minutes. The entire exhibition takes 24 * 30 == 720 minutes, or 12 hours.
Asynchronously: Magnus Carlsen moves from table to table, making one move at each table. He leaves the table and lets the opponent make their next move during the wait time. One move on all 24 games takes him 24 * 5 == 120 seconds, or 2 minutes. The entire exhibition is now cut down to 120 * 30 == 3600 seconds, or just 1 hour.
There is only one Magnus Carlsen, who has only two hands and makes only one move at a time by himself. But playing asynchronously cuts the exhibition time down from 12 hours to one.
Coding Example:
Let's demonstrate synchronous and asynchronous execution times with code snippets.
Asynchronous - async_count.py
import asyncio
import time

async def count():
    print("One", end=" ")
    await asyncio.sleep(1)
    print("Two", end=" ")
    await asyncio.sleep(2)
    print("Three", end=" ")

async def main():
    await asyncio.gather(count(), count(), count(), count(), count())

if __name__ == "__main__":
    start_time = time.perf_counter()
    asyncio.run(main())
    end_time = time.perf_counter()
    execution_time = end_time - start_time
    print(f"\nExecuting - {__file__}\nExecution Starts: {start_time}\nExecutions Ends: {end_time}\nTotals Execution Time:{execution_time:0.2f} seconds.")
Asynchronous - Output:
One One One One One Two Two Two Two Two Three Three Three Three Three
Executing - async_count.py
Execution Starts: 18453.442160108
Executions Ends: 18456.444719712
Totals Execution Time:3.00 seconds.
Synchronous - sync_count.py
import time

def count():
    print("One", end=" ")
    time.sleep(1)
    print("Two", end=" ")
    time.sleep(2)
    print("Three", end=" ")

def main():
    for _ in range(5):
        count()

if __name__ == "__main__":
    start_time = time.perf_counter()
    main()
    end_time = time.perf_counter()
    execution_time = end_time - start_time
    print(f"\nExecuting - {__file__}\nExecution Starts: {start_time}\nExecutions Ends: {end_time}\nTotals Execution Time:{execution_time:0.2f} seconds.")
Synchronous - Output:
One Two Three One Two Three One Two Three One Two Three One Two Three
Executing - sync_count.py
Execution Starts: 18875.175965998
Executions Ends: 18890.189930292
Totals Execution Time:15.01 seconds.
Why use asyncio instead of multithreading in Python?
It’s very difficult to write code that is thread safe. With asynchronous code, you know exactly where the code will shift from one task to the next and race conditions are much harder to come by.
Threads consume a fair amount of data since each thread needs to have its own stack. With async code, all the code shares the same stack and the stack is kept small due to continuously unwinding the stack between tasks.
Threads are OS structures and therefore require more memory for the platform to support. There is no such problem with asynchronous tasks.
How does asyncio work?
Before going deeper, let's recall Python generators.
Python Generator:
Functions containing a yield statement are compiled as generators. Using a yield expression in a function's body causes that function to be a generator. These functions return an object which supports the iteration protocol methods. The generator object automatically receives a __next__() method. We can invoke __next__() directly on the generator object instead of using next():
def asynchronous():
    yield "Educative"

if __name__ == "__main__":
    gen = asynchronous()
    text = gen.__next__()
    print(text)
Remember the following about generators:
Generator functions allow you to procrastinate computing expensive values. You only compute the next value when required. This makes generators memory and compute efficient; they refrain from saving long sequences in memory or doing all expensive computations upfront.
Generators, when suspended, retain the code location, which is the last yield statement executed, and their entire local scope. This allows them to resume execution from where they left off.
Generator objects are nothing more than iterators.
Remember to make a distinction between a generator function and the associated generator object which are often used interchangeably. A generator function when invoked returns a generator object and next() is invoked on the generator object to run the code within the generator function.
States of a generator:
A generator goes through the following states:
GEN_CREATED when a generator object has been returned for the first time from a generator function and iteration hasn’t started.
GEN_RUNNING when next() has been invoked on the generator object and it is being executed by the Python interpreter.
GEN_SUSPENDED when a generator is suspended at a yield
GEN_CLOSED when a generator has completed execution or has been closed.
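The standard library can report these states; a quick check with inspect.getgeneratorstate:

from inspect import getgeneratorstate

def gen_func():
    yield 1

g = gen_func()
print(getgeneratorstate(g))  # GEN_CREATED
next(g)
print(getgeneratorstate(g))  # GEN_SUSPENDED (paused at the yield)
g.close()
print(getgeneratorstate(g))  # GEN_CLOSED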
Methods on generator objects:
A generator object exposes different methods that can be invoked to manipulate the generator. These are:
throw()
send()
close()
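send() and throw() were demonstrated earlier; close() raises GeneratorExit inside the generator, running any pending finally blocks. A quick sketch:

def gen():
    try:
        yield 1
    finally:
        print("cleanup")

g = gen()
next(g)    # run to the yield
g.close()  # GeneratorExit is raised at the paused yield; prints "cleanup"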
Let's dive deeper into the details.
The rules of asyncio:
The syntax async def introduces either a native coroutine or an asynchronous generator. The expressions async with and async for are also valid.
The keyword await passes function control back to the event loop. (It suspends the execution of the surrounding coroutine.) If Python encounters an await f() expression in the scope of g(), this is how await tells the event loop, "Suspend execution of g() until whatever I’m waiting on—the result of f()—is returned. In the meantime, go let something else run."
In code, that second bullet point looks roughly like this:
async def g():
    # Pause here and come back to g() when f() is ready
    r = await f()
    return r
There's also a strict set of rules around when and how you can and cannot use async/await. These can be handy whether you are still picking up the syntax or already have exposure to using async/await:
A function that you introduce with async def is a coroutine. It may use await, return, or yield, but all of these are optional. Declaring async def noop(): pass is valid.
Using await and/or return creates a coroutine function. To call a coroutine function, you must await it to get its results.
It is less common to use yield in an async def block. This creates an asynchronous generator, which you iterate over with async for. Forget about async generators for the time being and focus on getting down the syntax for coroutine functions, which use await and/or return.
Anything defined with async def may not use yield from, which will raise a SyntaxError.
Just like it’s a SyntaxError to use yield outside of a def function, it is a SyntaxError to use await outside of an async def coroutine. You can only use await in the body of coroutines.
Here are some terse examples meant to summarize the above few rules:
async def f(x):
    y = await z(x)  # OK - `await` and `return` allowed in coroutines
    return y

async def g(x):
    yield x  # OK - this is an async generator

async def m(x):
    yield from gen(x)  # NO - SyntaxError

def m(x):
    y = await z(x)  # NO - SyntaxError (no `async def` here)
    return y
Generator Based Coroutine
Python created a distinction between Python generators and generators that were meant to be used as coroutines. These coroutines are called generator-based coroutines and require the decorator @asyncio.coroutine to be added to the function definition, though this isn't strictly enforced.
Generator based coroutines use yield from syntax instead of yield. A coroutine can:
yield from another coroutine
yield from a future
return an expression
raise exception
Coroutines in Python make cooperative multitasking possible.
Cooperative multitasking is the approach in which the running process voluntarily gives up the CPU to other processes. A process may do so when it is logically blocked, say while waiting for user input or when it has initiated a network request and will be idle for a while.
A coroutine can be defined as a special function that can give up control to its caller without losing its state.
So what’s the difference between coroutines and generators?
Generators are essentially iterators though they look like functions. The distinction between generators and coroutines, in general, is that:
Generators yield back a value to the invoker, whereas a coroutine yields control to another coroutine and can resume execution from the point it gave up control.
A plain generator driven with next() can't receive values once started, whereas a coroutine can accept data via send().
Generators are primarily used to simplify writing iterators. They are a type of coroutine and are sometimes also called semicoroutines.
Generator Based Coroutine Example
The simplest generator based coroutine we can write is as follows:
@asyncio.coroutine
def do_something_important():
    yield from asyncio.sleep(1)
The coroutine sleeps for one second. Note the decorator and the use of yield from.
Native Based Coroutine Example
By native it is meant that the language introduced syntax to specifically define coroutines, making them first class citizens in the language. Native coroutines can be defined using the async/await syntax.
The simplest native based coroutine we can write is as follows:
async def do_something_important():
    await asyncio.sleep(1)
AsyncIO Design Patterns
AsyncIO comes with its own set of possible script designs, which we will discuss in this section.
1. Event loops
The event loop is a programming construct that waits for events to happen and then dispatches them to an event handler. An event can be a user clicking on a UI button or a process initiating a file download. At the core of asynchronous programming, sits the event loop.
Example Code:
import asyncio
import random
import time
from threading import Thread
from threading import current_thread

# ANSI colors
colors = (
    "\033[0m",   # End of color
    "\033[31m",  # Red
    "\033[32m",  # Green
    "\033[34m",  # Blue
)

async def do_something_important(sleep_for):
    print(colors[1] + f"Is event loop running in thread {current_thread().getName()} = {asyncio.get_event_loop().is_running()}" + colors[0])
    await asyncio.sleep(sleep_for)

def launch_event_loops():
    # get a new event loop
    loop = asyncio.new_event_loop()
    # set the event loop for the current thread
    asyncio.set_event_loop(loop)
    # run a coroutine on the event loop
    loop.run_until_complete(do_something_important(random.randint(1, 5)))
    # remember to close the loop
    loop.close()

if __name__ == "__main__":
    thread_1 = Thread(target=launch_event_loops)
    thread_2 = Thread(target=launch_event_loops)
    start_time = time.perf_counter()
    thread_1.start()
    thread_2.start()
    print(colors[2] + f"Is event loop running in thread {current_thread().getName()} = {asyncio.get_event_loop().is_running()}" + colors[0])
    thread_1.join()
    thread_2.join()
    end_time = time.perf_counter()
    execution_time = end_time - start_time
    print(colors[3] + f"Event Loop Start Time: {start_time}\nEvent Loop End Time: {end_time}\nEvent Loop Execution Time: {execution_time:0.2f} seconds." + colors[0])
Execution Command: python async_event_loop.py
Output: try it out yourself and examine it, and you'll realize that each spawned thread is running its own event loop.
Types of event loops
There are two types of event loops:
SelectorEventLoop: based on the selectors module; the default loop on Unix (and the default on all platforms before Python 3.8).
ProactorEventLoop: based on Windows' I/O Completion Ports; only supported on Windows, where it has been the default since Python 3.8.
2. Futures
Future represents a computation that is either in progress or will get scheduled in the future. It is a special low-level awaitable object that represents an eventual result of an asynchronous operation. Don't confuse concurrent.futures.Future and asyncio.Future.
Example Code:
import time
import asyncio
from asyncio import Future

# ANSI colors
colors = (
    "\033[0m",   # End of color
    "\033[31m",  # Red
    "\033[32m",  # Green
    "\033[34m",  # Blue
)

async def bar(future):
    print(colors[1] + "bar will sleep for 3 seconds" + colors[0])
    await asyncio.sleep(3)
    print(colors[1] + "bar resolving the future" + colors[0])
    future.set_result("future is resolved")

async def foo(future):
    print(colors[2] + "foo will await the future" + colors[0])
    await future
    print(colors[2] + "foo finds the future resolved" + colors[0])

async def main():
    future = Future()
    await asyncio.gather(foo(future), bar(future))

if __name__ == "__main__":
    start_time = time.perf_counter()
    asyncio.run(main())
    end_time = time.perf_counter()
    execution_time = end_time - start_time
    print(colors[3] + f"Future Start Time: {start_time}\nFuture End Time: {end_time}\nFuture Execution Time: {execution_time:0.2f} seconds." + colors[0])
Execution Command: python async_futures.py
Output: both coroutines are passed a future. The foo() coroutine waits for the future to get resolved, while the bar() coroutine resolves the future after three seconds.
3. Tasks
Tasks are like futures, in fact, Task is a subclass of Future and can be created using the following methods:
asyncio.create_task() accepts coroutines and wraps them as tasks.
loop.create_task() only accepts coroutines.
asyncio.ensure_future() accepts futures, coroutines and any awaitable objects.
Tasks wrap coroutines and run them in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the Future to complete. When the Future is done, the execution of the wrapped coroutine resumes.
Example Code:
import time
import asyncio
from asyncio import Future

# ANSI colors
colors = (
    "\033[0m",   # End of color
    "\033[31m",  # Red
    "\033[32m",  # Green
    "\033[34m",  # Blue
)

async def bar(future):
    print(colors[1] + "bar will sleep for 3 seconds" + colors[0])
    await asyncio.sleep(3)
    print(colors[1] + "bar resolving the future" + colors[0])
    future.set_result("future is resolved")

async def foo(future):
    print(colors[2] + "foo will await the future" + colors[0])
    await future
    print(colors[2] + "foo finds the future resolved" + colors[0])

async def main():
    future = Future()
    loop = asyncio.get_event_loop()
    t1 = loop.create_task(bar(future))
    t2 = loop.create_task(foo(future))
    await t1
    await t2

if __name__ == "__main__":
    start_time = time.perf_counter()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    end_time = time.perf_counter()
    execution_time = end_time - start_time
    print(colors[3] + f"Future Start Time: {start_time}\nFuture End Time: {end_time}\nFuture Execution Time: {execution_time:0.2f} seconds." + colors[0])
Execution Command: python async_tasks.py
4. Chaining Coroutines:
A key feature of coroutines is that they can be chained together. A coroutine object is awaitable, so another coroutine can await it. This allows you to break programs into smaller, manageable, recyclable coroutines:
Example Code:
import sys
import asyncio
import random
import time

# ANSI colors
colors = (
    "\033[0m",   # End of color
    "\033[31m",  # Red
    "\033[32m",  # Green
    "\033[36m",  # Cyan
    "\033[34m",  # Blue
)

async def function1(n: int) -> str:
    i = random.randint(0, 10)
    print(colors[1] + f"function1({n}) is sleeping for {i} seconds." + colors[0])
    await asyncio.sleep(i)
    result = f"result{n}-1"
    print(colors[1] + f"Returning function1({n}) == {result}." + colors[0])
    return result

async def function2(n: int, arg: str) -> str:
    i = random.randint(0, 10)
    print(colors[2] + f"function2{n, arg} is sleeping for {i} seconds." + colors[0])
    await asyncio.sleep(i)
    result = f"result{n}-2 derived from {arg}"
    print(colors[2] + f"Returning function2{n, arg} == {result}." + colors[0])
    return result

async def chain(n: int) -> None:
    start = time.perf_counter()
    p1 = await function1(n)
    p2 = await function2(n, p1)
    end = time.perf_counter() - start
    print(colors[3] + f"--> Chained result{n} => {p2} (took {end:0.2f} seconds)." + colors[0])

async def main(*args):
    await asyncio.gather(*(chain(n) for n in args))

if __name__ == "__main__":
    random.seed(444)
    args = [1, 2, 3] if len(sys.argv) == 1 else map(int, sys.argv[1:])
    start_time = time.perf_counter()
    asyncio.run(main(*args))
    end_time = time.perf_counter()
    execution_time = end_time - start_time
    print(colors[4] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
Pay careful attention to the output, where function1() sleeps for a variable amount of time, and function2() begins working with the results as they become available:
Execution Command: python async_chained.py 11 8 5
5. Using a Queue:
In this design, there is no chaining of any individual consumer to a producer. The consumers don’t know the number of producers, or even the cumulative number of items that will be added to the queue, in advance.
It takes an individual producer or consumer a variable amount of time to put and extract items from the queue, respectively. The queue serves as a conduit between the producers and consumers without them talking to each other directly.
Example Code:
import asyncio
import argparse
import itertools as it
import os
import random
import time

# ANSI colors
colors = (
    "\033[0m",   # End of color
    "\033[31m",  # Red
    "\033[32m",  # Green
    "\033[36m",  # Cyan
    "\033[34m",  # Blue
)

async def generate_item(size: int = 5) -> str:
    return os.urandom(size).hex()

async def random_sleep(caller=None) -> None:
    i = random.randint(0, 10)
    if caller:
        print(colors[1] + f"{caller} sleeping for {i} seconds." + colors[0])
    await asyncio.sleep(i)

async def produce(name: int, producer_queue: asyncio.Queue) -> None:
    n = random.randint(0, 10)
    for _ in it.repeat(None, n):  # Synchronous loop for each single producer
        await random_sleep(caller=f"Producer {name}")
        i = await generate_item()
        t = time.perf_counter()
        await producer_queue.put((i, t))
        print(colors[2] + f"Producer {name} added <{i}> to queue." + colors[0])

async def consume(name: int, consumer_queue: asyncio.Queue) -> None:
    while True:
        await random_sleep(caller=f"Consumer {name}")
        i, t = await consumer_queue.get()
        now = time.perf_counter()
        print(colors[3] + f"Consumer {name} got element <{i}>" f" in {now - t:0.5f} seconds." + colors[0])
        consumer_queue.task_done()

async def main(no_producer: int, no_consumer: int):
    q = asyncio.Queue()
    producers = [asyncio.create_task(produce(n, q)) for n in range(no_producer)]
    consumers = [asyncio.create_task(consume(n, q)) for n in range(no_consumer)]
    await asyncio.gather(*producers)
    await q.join()  # Implicitly awaits consumers, too
    for consumer in consumers:
        consumer.cancel()

if __name__ == "__main__":
    random.seed(444)
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--no_producer", type=int, default=10)
    parser.add_argument("-c", "--no_consumer", type=int, default=15)
    ns = parser.parse_args()
    start_time = time.perf_counter()
    asyncio.run(main(**ns.__dict__))
    end_time = time.perf_counter()
    execution_time = end_time - start_time
    print(colors[4] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
Execution Command: python async_queue.py -p 2 -c 4
Lastly, let's look at an example of how asyncio cuts down on wait time: given a coroutine generate_random_int() that keeps producing random integers in the range [0, 10] until one of them exceeds a threshold, you want multiple calls of this coroutine to proceed without waiting for each other to complete in succession.
Example Code:
import time
import asyncio
import random

# ANSI colors
colors = (
    "\033[0m",   # End of color
    "\033[31m",  # Red
    "\033[32m",  # Green
    "\033[36m",  # Cyan
    "\033[35m",  # Magenta
    "\033[34m",  # Blue
)

async def generate_random_int(indx: int, threshold: int = 5) -> int:
    print(colors[indx + 1] + f"Initiated generate_random_int({indx}).")
    i = random.randint(0, 10)
    while i <= threshold:
        print(colors[indx + 1] + f"generate_random_int({indx}) == {i} too low; retrying.")
        await asyncio.sleep(indx + 1)
        i = random.randint(0, 10)
    print(colors[indx + 1] + f"---> Finished: generate_random_int({indx}) == {i}" + colors[0])
    return i

async def main():
    res = await asyncio.gather(*(generate_random_int(i, 10 - i - 1) for i in range(3)))
    return res

if __name__ == "__main__":
    random.seed(444)
    start_time = time.perf_counter()
    r1, r2, r3 = asyncio.run(main())
    print(colors[4] + f"\nRandom INT 1: {r1}, Random INT 2: {r2}, Random INT 3: {r3}\n" + colors[0])
    end_time = time.perf_counter()
    execution_time = end_time - start_time
    print(colors[5] + f"Program Start Time: {start_time}\nProgram End Time: {end_time}\nProgram Execution Time: {execution_time:0.2f} seconds." + colors[0])
Execution Command: python async_random.py
Note: If you're writing any code yourself, prefer native coroutines for the sake of being explicit rather than implicit. Generator-based coroutines will be removed in Python 3.10.
GitHub Repo: https://github.com/tssovi/asynchronous-in-python
Your coro desugaring is conceptually correct, but slightly incomplete.
await doesn't suspend unconditionally, but only if it encounters a blocking call. How does it know that a call is blocking? This is decided by the code being awaited. For example, an awaitable implementation of socket read could be desugared to:
def read(sock, n):
    # sock must be in non-blocking mode
    try:
        return sock.recv(n)
    except EWOULDBLOCK:
        event_loop.add_reader(sock.fileno(), current_task())
        return SUSPEND
In real asyncio the equivalent code modifies the state of a Future instead of returning magic values, but the concept is the same. When appropriately adapted to a generator-like object, the above code can be awaited.
On the caller side, when your coroutine contains:
data = await read(sock, 1024)
It desugars into something close to:
data = read(sock, 1024)
if data is SUSPEND:
    return SUSPEND
self.pos += 1
self.parts[self.pos](...)
People familiar with generators tend to describe the above in terms of yield from which does the suspension automatically.
The suspension chain continues all the way up to the event loop, which notices that the coroutine is suspended, removes it from the runnable set, and goes on to execute coroutines that are runnable, if any. If no coroutines are runnable, the loop waits in select() until either a file descriptor a coroutine is interested in becomes ready for IO or a timeout expires. (The event loop maintains a file-descriptor-to-coroutine mapping.)
In the above example, once select() tells the event loop that sock is readable, it will re-add coro to the runnable set, so it will be continued from the point of suspension.
In other words:
Everything happens in the same thread by default.
The event loop is responsible for scheduling the coroutines and waking them up when whatever they were waiting for (typically an IO call that would normally block, or a timeout) becomes ready.
For insight on coroutine-driving event loops, I recommend this talk by Dave Beazley, where he demonstrates coding an event loop from scratch in front of a live audience.
It all boils down to the two main challenges that asyncio is addressing:
How to perform multiple I/O in a single thread?
How to implement cooperative multitasking?
The answer to the first point has been around for a long while and is called a select loop. In python, it is implemented in the selectors module.
The second question is related to the concept of a coroutine, i.e. a function that can stop its execution and be restored later on. In Python, coroutines are implemented using generators and the yield from statement. That's what is hiding behind the async/await syntax.
More resources in this answer.
EDIT: Addressing your comment about goroutines:
The closest equivalent to a goroutine in asyncio is actually not a coroutine but a task (see the difference in the documentation). In Python, a coroutine (or a generator) knows nothing about the concepts of event loop or I/O. It is simply a function that can stop its execution using yield while keeping its current state, so it can be restored later on. The yield from syntax allows for chaining them in a transparent way.
Now, within an asyncio task, the coroutine at the very bottom of the chain always ends up yielding a future. This future then bubbles up to the event loop, and gets integrated into the inner machinery. When the future is set to done by some other inner callback, the event loop can restore the task by sending the future back into the coroutine chain.
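A hedged sketch of that bubbling, using a hypothetical MiniTask class that is drastically simplified compared to the real asyncio.Task:

# Conceptual only: drive a generator-based coroutine until it yields a
# future, then arrange to resume it when that future completes.
class MiniTask:
    def __init__(self, coro):
        self.coro = coro
        self._step()

    def _step(self, value=None):
        try:
            future = self.coro.send(value)   # run until the next suspension
        except StopIteration:
            return                           # the coroutine has finished
        # Wake this task up again, feeding the result back down the chain,
        # once some inner callback marks the yielded future as done.
        future.add_done_callback(lambda f: self._step(f.result()))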
EDIT: Addressing some of the questions in your post:
How does I/O actually happen in this scenario? In a separate thread? Is the whole interpreter suspended and I/O happens outside the interpreter?
No, nothing happens in a separate thread. I/O is always managed by the event loop, mostly through file descriptors. However, the registration of those file descriptors is usually hidden by high-level coroutines, which do the dirty work for you.
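For example, the low-level registration alluded to here is exposed as loop.add_reader. A hedged sketch of a hypothetical wait_readable helper that suspends a coroutine until a socket becomes readable:

import asyncio

async def wait_readable(sock):
    # Register interest in sock's fd; the callback completes the future,
    # which wakes this coroutine up again.
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    loop.add_reader(sock.fileno(), fut.set_result, None)
    try:
        await fut
    finally:
        loop.remove_reader(sock.fileno())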
What exactly is meant by I/O? If my Python procedure called the C open() procedure, and it in turn sent an interrupt to the kernel, relinquishing control to it, how does the Python interpreter know about this and how is it able to continue running some other code while the kernel code does the actual I/O, up until it wakes up the Python procedure which sent the interrupt originally? How can the Python interpreter, in principle, be aware of this happening?
An I/O is any blocking call. In asyncio, all the I/O operations should go through the event loop, because, as you said, the event loop has no way to be aware that a blocking call is being performed in some synchronous code. That means you're not supposed to use a synchronous open within the context of a coroutine. Instead, use a dedicated library such as aiofiles, which provides an asynchronous version of open.
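A hedged sketch of that aiofiles usage (assuming the third-party package is installed; read_config and the path are made up for illustration):

import asyncio
import aiofiles

async def read_config(path):
    # aiofiles performs the blocking open/read off the event loop,
    # behind an async-friendly interface.
    async with aiofiles.open(path) as f:
        return await f.read()

print(asyncio.run(read_config("config.txt")))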
Asyncio allows you to write single-threaded asynchronous code and implement concurrency in Python. Basically, asyncio provides an event loop for asynchronous programming. For example, if we need to make requests without blocking the main thread, we can use the asyncio library.
The asyncio module allows for the implementation of asynchronous programming
using a combination of the following elements:
Event loop: The asyncio module allows an event loop per process.
Coroutines: A coroutine is a generator that follows certain conventions. Its most interesting feature is that it can be suspended during execution to wait for external processing (e.g., some I/O routine) and return from the point it had stopped at when the external processing is done.
Futures: Futures represent a process that has still not finished. A future is an object that is supposed to have a result in the future and represents uncompleted tasks.
Tasks: This is a subclass of asyncio.Future that encapsulates and manages
coroutines. We can use the asyncio.Task object to encapsulate a coroutine.
The most important concept within asyncio is the event loop. An event loop
allows you to write asynchronous code using either callbacks or coroutines.
The keys to understanding asyncio are the terms coroutine and event loop. Coroutines are stateful functions whose execution can be stopped while another I/O operation is being executed. An event loop is used to orchestrate the execution of the coroutines.
To run any coroutine function, we need to get an event loop. We can do this
with
loop = asyncio.get_event_loop()
This gives us a BaseEventLoop object. This has a run_until_complete method that takes in a coroutine and runs it until completion; the coroutine then returns a result. At a low level, an event loop executes the BaseEventLoop.run_until_complete(future) method.
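For example, a minimal end-to-end usage of that API might look like this (greet is a made-up coroutine for illustration):

import asyncio

async def greet():
    await asyncio.sleep(1)   # simulate a non-blocking wait
    return "hello"

loop = asyncio.get_event_loop()
result = loop.run_until_complete(greet())
print(result)   # hello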
If you picture an airport control tower with many planes waiting to land on the same runway, the control tower can be seen as the event loop and the runway as the thread. Each plane is a separate function waiting to execute. In reality, only one plane can land on the runway at a time. What asyncio basically does is allow many planes to land "simultaneously" on the same runway by using the event loop to suspend functions and let other functions run. When you use the await syntax, it basically means that the plane (function) can be suspended so that other functions are allowed to proceed.

How can I parallelize a for loop in Python using the multiprocessing package?

Note: I don't need any communication between the processes/threads; I'm interested in a completion signal only (that's the reason I posted this as a new question, since all the other examples I've found communicated between each other).
How can I use the multiprocessing package in Python 3 to parallelize the following piece of code (the end goal is to make it run faster):
a = 123
b = 456
for id in ids:  # len(ids) = 10'000
    # executes a binary with CLI flags
    run_binary_with_id(id, a, b)
    # i.e. runs "./hello_world_exec --id id --a a --b b" which takes about 30 seconds on average
I tried the following:
import multiprocessing as mp

def run_binary_with_id(id, a, b):
    run_command('./hello_world_exec --id {} --a {} --b {}'.format(id, a, b))

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    a = 123
    b = 456
    ids = range(10000)
    for id in ids:
        p = ctx.Process(target=run_binary_with_id, args=(id, a, b))
        p.start()
        p.join()
    # The binary was executed len(ids) number of times, do other stuff assuming everything's completed at this point
or
for id in ids:
    pool.apply_async(run_binary_with_id, (id, a, b))
In a similar question the answer is the following:
from collections import deque

def consume(iterator):
    deque(iterator, maxlen=0)

x = pool.imap_unordered(f, ((i, j) for i in range(10000) for j in range(10000)))
consume(x)
which I don't really understand at all (why do I need this consume()).
Trying to spawn 10000 processes to run in parallel is almost certainly going to overload your system and make it run slower than running the processes sequentially due to the overhead involved in the OS having to constantly perform context switching between processes when the number of processes far exceeds the number of CPUs/cores your system has.
You can instead use multiprocessing.Pool to limit the number of worker processes spawned for the task. The Pool constructor limits the number of processes to the number of cores your system has by default, but you can fine tune it if you'd like with the processes parameter. You can then use its map method to easily map a sequence of arguments to apply to a given function to run in parallel. It can only map one argument to the function, however, so you would have to use functools.partial to supply default values for the other arguments, which in your case do not change between calls:
from functools import partial
import multiprocessing as mp

if __name__ == '__main__':
    _run_binary_with_id = partial(run_binary_with_id, a=123, b=456)
    with mp.Pool() as pool:
        pool.map(_run_binary_with_id, range(10000))
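As for the consume() snippet quoted in the question: imap_unordered returns a lazy iterator, so the work is only guaranteed to finish once that iterator is drained, and deque(iterator, maxlen=0) is simply a fast way to drain it while discarding the results. A hedged sketch of that alternative for this use case, reusing run_binary_with_id as defined in the question:

from collections import deque
from functools import partial
import multiprocessing as mp

def consume(iterator):
    deque(iterator, maxlen=0)  # exhaust the iterator, discarding items

if __name__ == '__main__':
    _run = partial(run_binary_with_id, a=123, b=456)
    with mp.Pool() as pool:
        # Results arrive (and are thrown away) as workers finish.
        consume(pool.imap_unordered(_run, range(10000)))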

How to multithread with getattr, one thread per property?

Suppose I have the following object with multiple expensive properties, as so:
class Object:
    def __init__(self, num):
        self.num = num

    @property
    def expensive_property(self):
        return expensive_calculation

    @property
    def expensive_property1(self):
        return expensive_calculation

    @property
    def expensive_property2(self):
        return expensive_calculation
Note: The number of expensive properties may increase over time.
Given a list of Objects, how could I compute each expensive property per thread, for all objects in the list? I am having a hard time figuring out how I should arrange my pool.
This is kinda what I am trying to achieve:
from multiprocessing.dummy import Pool
from multiprocessing.dummy import Queue

object_list = [Object(i) for i in range(20)]
properties = ["expensive_property2", "expensive_property5", "expensive_property9", "expensive_property3"]

def get(objects, prop_name):
    return [getattr(o, prop_name) for o in objects]

tasks = Queue()
for p in properties:
    tasks.put((get, object_list, p))
tasks.put(None)  # sentinel so the loop below can terminate

results = []
with Pool(len(properties)) as pool:
    while True:
        task = tasks.get()
        if task is None:
            break
        func, *args = task
        result = pool.apply_async(func, args)
        results.append(result)
This is a little crazy, because apply_async has an internal queue to distribute tasks over the pool. I can imagine reasons to have another queue around, for observability or backpressure. Is your example your full program, or are you enqueuing work from a different process/thread?
If your computation is CPU-bound, one option could be to remove the queue to make things a little simpler:
def wait_all(async_results, timeout_seconds_per_task=1):
    for r in async_results:
        r.get(timeout_seconds_per_task)

wait_all(
    [pool.apply_async(get, (object_list, p)) for p in properties],
    timeout_seconds_per_task=1,
)
Like your example above, this allows you to distribute computation across your available CPUs (Pool even defaults to the number of CPUs on your machine). If your work is IO-bound (suggested by your sleep), processes may have diminishing returns.
You'd have to benchmark, but for IO-bound work you could create a thread pool using the same pattern: https://stackoverflow.com/a/3034000/594589
Other options could be to use nonblocking IO with an event loop such as gevent or asyncio. Both would allow you to model the same pool-based pattern!
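A hedged sketch of that thread-pool variant, reusing the names from the question (multiprocessing.dummy.Pool shares the Pool API but is backed by threads; here each task evaluates a single (object, property) pair):

from multiprocessing.dummy import Pool

def get(obj, prop_name):
    # Evaluating the property attribute runs the expensive code.
    return getattr(obj, prop_name)

object_list = [Object(i) for i in range(20)]
prop_names = ["expensive_property", "expensive_property1", "expensive_property2"]

with Pool(len(prop_names)) as pool:
    async_results = [
        pool.apply_async(get, (o, p))
        for o in object_list
        for p in prop_names
    ]
    values = [r.get(timeout=60) for r in async_results]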

asyncio with map&reduce flavor and without flooding the event loop

I am trying to use asyncio in real applications and it doesn't go that easily; the help of asyncio gurus is needed badly.
Tasks that spawn other tasks without flooding the event loop (Success!)
Consider a task like crawling the web starting from some "seeding" web-pages. Each web-page leads to the generation of new downloading tasks in exponential(!) progression. However, we want neither to flood the event loop nor to overload our network. We'd like to control the task flow. This is what I achieve well with a modification of Maxime's nice solution proposed here:
https://mail.python.org/pipermail/python-list/2014-July/687823.html
map & reduce (Fail)
Well, but I'd also need a very natural thing, a kind of map() & reduce(), or functools.reduce() if we are on Python 3 already. That is, I'd need to call a "summarizing" function for all the downloading tasks completed on links from a page. This is where I fail :(
I'd propose an oversimplified but still nice test to model the use case: let's use the Fibonacci function implementation in its inefficient form. That is, let coro_sum() be applied in reduce() and coro_fib be what we apply with map(). Something like this:
@asyncio.coroutine
def coro_sum(x):
    return sum(x)

@asyncio.coroutine
def coro_fib(x):
    if x < 2:
        return 1
    res_coro = executor_pool.spawn_task_when_arg_list_of_coros_ready(
        coro=coro_sum,
        arg_coro_list=[coro_fib(x - 1), coro_fib(x - 2)])
    return res_coro
So that we could run the following tests.
Test #1 on one worker:
executor_pool = ExecutorPool(workers=1)
executor_pool.as_completed(coro_fib(x) for x in range(20))
Test #2 on two workers:
executor_pool = ExecutorPool(workers=2)
executor_pool.as_completed(coro_fib(x) for x in range(20))
It would be very important that each coro_fib() and coro_sum() invocation is done via a Task on some worker, not just spawned implicitly and left unmanaged!
It would be cool to find asyncio gurus interested in this very natural goal.
Your help and ideas would be very much appreciated.
Best regards,
Valery
There are multiple ways to compute a Fibonacci series asynchronously. First, check that the explosive variant fails in your case:
@asyncio.coroutine
def coro_sum(summands):
    return sum(summands)

@asyncio.coroutine
def coro_fib(n):
    if n == 0: s = 0
    elif n == 1: s = 1
    else:
        summands, _ = yield from asyncio.wait([coro_fib(n-2), coro_fib(n-1)])
        s = yield from coro_sum(f.result() for f in summands)
    return s
You could replace summands with:
a = yield from coro_fib(n-2) # don't return until its ready
b = yield from coro_fib(n-1)
s = yield from coro_sum([a, b])
In general, to prevent the exponential growth, you could use the asyncio.Queue (synchronization via communication) or asyncio.Semaphore (synchronization using a mutex) primitives.
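For instance, a hedged sketch of the Semaphore approach in modern async/await syntax (fetch is a hypothetical download coroutine standing in for the real work):

import asyncio

sem = asyncio.Semaphore(10)   # at most 10 downloads in flight at once

async def bounded_fetch(url):
    async with sem:           # waits here instead of flooding the loop
        return await fetch(url)

# results = await asyncio.gather(*(bounded_fetch(u) for u in urls))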

IronPython Multiprocessing Module

I have some DLLs written in .NET which I am using to access thousands of binary files. The data is compartmentalized by directories, so as a performance enhancement I thought of using multiple processes or threads to churn through the files.
I have a function, currently part of the main class (requiring self as an argument); this could easily be refactored into a private method.
My first inclination is to use the multiprocessing module, but that doesn't seem to be available for IronPython.
My next thought was to use Task:
def __createThreads(self):
    tasks = Array.CreateInstance(Task, 5)
    for idx in range(0, 5):
        tasks.append(Task.Factory.StartNew(self.__doWork, args=(idx,)))
    Task.WaitAll(tasks)

def __doWork(self, idx):
    for index in range(0, idx):
        print "Thread: %d | Index: %d" % (idx, index)
Or to use Thread:
def __createThreads(self):
    threads = list()
    for idx in range(0, 5):
        t = Thread(ThreadStart(self.__doWork))
        t.Start()
        threads.append(t)
    while len(threads) > 0:
        time.sleep(.05)
        for t in threads:
            if not t.IsAlive:
                threads.remove(t)
What I cannot find is an IronPython example of how to pass arguments.
Please be aware that your two examples are not exactly equivalent. The task version will only create/use actual concurrent threads when the run-time thinks it is a good idea (unless you specify TaskCreationOptions.LongRunning). You have to decide what works for your use-case.
The easiest way to pass the idx argument to the __doWork function would be to use a lambda to capture the value and the invocation. (Please be aware of the scoping issues discussed in this question, which also hints at alternative solutions for introducing an intermediate scope.)
tasks.append(Task.Factory.StartNew(lambda idx=idx: self.__doWork(idx)))
As a side-note: You will have to convert your task list to an array in order for Task.WaitAll to be happy.
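Putting both notes together, a hedged sketch of what the resulting method could look like (IronPython; assumes the System and System.Threading.Tasks namespaces are available via clr references, and that __doWork is defined as in the question):

from System import Array
from System.Threading.Tasks import Task

def __createThreads(self):
    # The default-argument lambda captures the current idx on each iteration.
    tasks = [Task.Factory.StartNew(lambda idx=idx: self.__doWork(idx))
             for idx in range(5)]
    # Task.WaitAll expects a typed array, not a Python list.
    Task.WaitAll(Array[Task](tasks))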

Resources