Python async io stream - python-3.x

I was going through the following code in the asyncio docs.
import asyncio

async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection(
        '127.0.0.1', 8888)

    print(f'Send: {message!r}')
    writer.write(message.encode())

    data = await reader.read(100)
    print(f'Received: {data.decode()!r}')

    print('Close the connection')
    writer.close()
    await writer.wait_closed()

asyncio.run(tcp_echo_client('Hello World!'))
However, I am not able to understand why reader.read is awaitable but writer.write is not. Since they are both I/O operations, shouldn't the write method also be awaitable?

However, I am not able to understand why reader.read is awaitable but writer.write is not. Since they are both I/O operations, shouldn't the write method also be awaitable?
Not necessarily. The fundamental asymmetry between read() and write() is that read() must return actual data, while write() operates purely by side effect. So read() must be awaitable because it needs to suspend the calling coroutine when the data isn't yet available. On the other hand, write() can be (and in asyncio is) implemented by stashing the data in some buffer and scheduling it to be written at an opportune time.
This design has important consequences: writing data faster than the other side reads it makes the buffer grow without bound, and exceptions during write() are effectively lost. Both issues are addressed by calling writer.drain(), which applies backpressure, i.e. flushes the buffer to the OS, suspending the coroutine if necessary until the buffer size drops beneath a threshold. The write() documentation advises that "calls to write() should be followed by drain()."
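For example, a minimal sketch of the write-then-drain pattern (the host, port and payload here are made up for illustration):

import asyncio

async def send_lines(lines):
    reader, writer = await asyncio.open_connection('127.0.0.1', 8888)
    for line in lines:
        writer.write(line.encode() + b'\n')  # buffered; returns immediately
        await writer.drain()                 # suspends here if the buffer is too full
    writer.close()
    await writer.wait_closed()

asyncio.run(send_lines(['hello', 'world']))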
The lack of backpressure in write() is a result of asyncio streams being implemented on top of a callback-based layer in which a non-async write() is much more convenient to use than a fully asynchronous alternative. See this article by Nathaniel J Smith, the author of trio, for a detailed treatment of the topic.

Related

asyncio only processes one file at a time

I'm working on a program to upload (large) files to a remote SFTP server, while also calculating each file's SHA256. The uploads are slow, and the program is supposed to open multiple SFTP connections.
Here is the main code:
async def bwrite(fd, buf):
    log.debug('Writing %d bytes to %s', len(buf), fd)
    fd.write(buf)

async def digester(digest, buf):
    log.debug('Updating digest %s with %d more bytes', digest, len(buf))
    digest.update(buf)

async def upload(fNames, SFTP, rename):
    for fName in fNames:
        inp = open(fName, "rb", 655360)
        log.info('Opened local %s', fName)
        digest = hashlib.sha256()
        rName = rename % os.path.splitext(fName)[0]
        out = SFTP.open(rName, "w", 655360)
        ...
        while True:
            buf = inp.read(bsize)
            if not buf:
                break
            await bwrite(out, buf)
            await digester(digest, buf)
        inp.close()
        out.close()

...
for i in range(0, len(clients)):
    fNames = args[(i * chunk):((i + 1) * chunk)]
    log.debug('Connection %s: %d files: %s',
              clients[i], len(fNames), fNames)
    uploads.append(upload(fNames, clients[i], Rename))

log.info('%d uploads initiated, awaiting completion', len(uploads))
results = asyncio.gather(*uploads)
loop = asyncio.get_event_loop()
loop.run_until_complete(results)
loop.close()
The idea is for multiple upload coroutines to run "in parallel" -- each using its own separate SFTP-connection -- pushing out one or more files to the server.
It even works -- but only a single upload is running at any time. I expected multiple ones to get control -- while their siblings await bwrite and/or digester. What am I doing wrong?
(Using Python-3.6 on FreeBSD-11, if that matters... The program is supposed to run on RHEL7 eventually...)
If no awaiting is involved, then no parallelism can occur. bwrite and digester, while declared async, perform no asynchronous operations (they don't launch or create anything that could be awaited); if you removed async from their definitions and removed the await where they're called, the code would behave identically.
The only time asyncio can get you benefits is when:
There is a blocking operation involved, and
Said blocking operation is designed for asyncio use (or involves a file descriptor that can be rewrapped for said purposes)
Your bwrite isn't doing that: it performs normal blocking I/O on SFTP objects, which don't appear to be async-friendly (and if the write were truly async, failing to either return the future it produced or await it yourself would usually mean it does nothing, barring the off chance that it scheduled a task internally). Your reads from the input file aren't async either, which is fine; converting local file access to asyncio normally isn't beneficial, since it's buffered at both the user and kernel level and you almost never actually block on it. Nor is your digester: hashing is CPU-bound, so making it async gains nothing unless it's interleaved with genuinely asynchronous work.
Since the two awaits in upload are effectively synchronous (they don't return anything that, when awaited, would actually block on real asynchronous work), upload itself is effectively synchronous (it will never, under any circumstances, return control to the event loop before it completes). So even though all the other tasks are in the event loop queue, raring to go, the event loop itself has to wait until the running task blocks in an async-friendly way (with an await on something that actually does background work while blocked), which never happens, so the tasks just get run sequentially, one after the other.
If an async-friendly version of your SFTP module exists, that might let you gain some benefit. But without it, you're probably better off using concurrent.futures.ThreadPoolExecutor or multiprocessing.pool.ThreadPool to do preemptive multitasking (which will swap out threads whenever they release the GIL, forcibly swapping between bytecodes if they don't release the GIL for a while). That will get you parallelism on any blocking I/O (async-friendly or not), and, if the data is large enough, on the hashing work as well (hashlib is one of the only Python built-ins I know of that releases the GIL for CPU-bound work, when the data to be hashed is large enough; extension modules releasing the GIL is the only way multithreaded CPython can do more than one core's worth of CPU-bound work in a single process, even on multicore systems).
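For example, a minimal thread-based sketch, reusing the question's clients, args, chunk and Rename variables and assuming a hypothetical upload_sync() that is the original upload() rewritten as plain blocking code:

from concurrent.futures import ThreadPoolExecutor

def upload_sync(fNames, sftp, rename):
    # hypothetical: same body as upload(), but as an ordinary blocking function
    # (open the local file, read chunks, sftp-write them, update the digest)
    ...

with ThreadPoolExecutor(max_workers=len(clients)) as pool:
    futures = [
        pool.submit(upload_sync, args[i * chunk:(i + 1) * chunk], clients[i], Rename)
        for i in range(len(clients))
    ]
    for fut in futures:
        fut.result()  # re-raises any exception from the worker thread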

What's better readSync or createReadStream (with Symbol.asyncIterator)?

createReadStream (with Symbol.asyncIterator)
async function* readChunkIter(chunksAsync) {
  for await (const chunk of chunksAsync) {
    // magic
    yield chunk;
  }
}

const fileStream = fs.createReadStream(filePath, { highWaterMark: 1024 * 64 });
const readChunk = readChunkIter(fileStream);
readSync
function* readChunkIter(fd) {
  // loop
  // magic
  fs.readSync(fd, buffer, 0, chunkSize, bytesRead);
  yield buffer;
}

const fd = fs.openSync(filePath, 'r');
const readChunk = readChunkIter(fd);
What's better to use with a generator function and why?
upd: I'm not looking for a better way, I want to know the difference between using these features
To start with, you're comparing a synchronous file operation, fs.readSync(), with an asynchronous one in the stream (which uses fs.read() internally), so that's a bit like comparing apples and oranges for server use.
If this is on a server, then NEVER use synchronous file I/O except at server startup time because when processing requests or any other server events, synchronous file I/O blocks the entire event loop during the file read operation which drastically reduces your server scalability. Only use asynchronous file I/O, which between your two cases would be the stream.
Otherwise, if this is not on a server or any process that cares about blocking the node.js event loop during a synchronous file operation, then it's entirely up to you on which interface you prefer.
Other comments:
It's also unclear why you wrap for await() in a generator. The caller can just use for await() themselves and avoid the wrapping in a generator.
Streams for reading files are usually used in an event driven manner by adding an event listener to the data event and responding to data as it arrives. If you're just going to asynchronously read chunks of data from the file, there's really no benefit to a stream. You may as well just use fs.read() or fs.promises.read().
We can't really comment on the best/better way to solve a problem without seeing the overall problem you're trying to code for. You've just shown one little snippet of reading data. The best way to structure that depends upon how the higher level code can most conveniently use/consume the data (which you don't show).
I really didn't ask the right question. I'm not looking for a better way, I want to know the difference between using these features.
Well, the main difference is that fs.readSync() is blocking and synchronous and thus blocks the event loop, ruining the scalability of a server and should never be used (except during startup code) in a server environment. Streams in node.js are asynchronous and do not block the event loop.
Other than that difference, streams are a higher level construct than just reading the file directly and should be used when you're actually using features of the streams and should probably not be used when you're just reading chunks from the file directly and aren't using any features of streams.
In particular, error handling is not always so clear with streams, particularly when trying to use await and promises with streams. This is probably because readstreams were originally designed to be an event driven object and that means communicating errors indirectly on an error event which complicates the error handling on straight read operations. If you're not using the event driven nature of readstreams or some transform feature or some other major feature of streams, I wouldn't use them - I'd use the more traditional fs.promises.readFile() to just read data.

Do AsyncIO stream writers/readers require manually ensuring that all data is sent/received?

When dealing with sockets, you need to make sure that all data is sent/received, since you may receive incomplete chunks of data when reading. From the docs:
In general, they return when the associated network buffers have been filled (send) or emptied (recv). They then tell you how many bytes they handled. It is your responsibility to call them again until your message has been completely dealt with.
Emphasis mine. It then shows sample implementations that ensure all data has been handled in each direction.
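For reference, the kind of loop the HOWTO has in mind looks roughly like this sketch (not its exact code):

def send_all(sock, data):
    # keep calling send() until every byte has been handed to the kernel
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise RuntimeError('socket connection broken')
        total += sent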
Is the same true though when dealing with AsyncIO wrappers over sockets?
For read, it seems to be required as the docs mention that it "[reads] up to n bytes.".
For write though, it seems like as long as you call drain afterwards, you know that it's all sent. The docs don't explicitly say that it must be called repeatedly, and write doesn't return anything.
Is this correct? Do I need to check how much was read using read, but can just drain the StreamWriter and know that everything was sent?
I thought my above assumptions were correct, but then I had a look at the example TCP client immediately below the method docs:
import asyncio

async def tcp_echo_client(message):
    reader, writer = await asyncio.open_connection(
        '127.0.0.1', 8888)

    print(f'Send: {message!r}')
    writer.write(message.encode())

    data = await reader.read(100)
    print(f'Received: {data.decode()!r}')

    print('Close the connection')
    writer.close()

asyncio.run(tcp_echo_client('Hello World!'))
And it doesn't do any kind of checking. It assumes everything is both read and written the first time.
For read, [checking for incomplete read] seems to be required as the docs mention that it "[reads] up to n bytes.".
Correct, and this is a useful feature for many kinds of processing, as it allows you to read new data as it arrives from the peer and process it incrementally, without having to know how much to expect at any point. If you do know exactly how much you expect and need to read that amount of bytes, you can use readexactly.
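For example, a minimal sketch of reading a length-prefixed message where the exact size is known up front (the 4-byte header format is an assumption for illustration):

import struct

async def read_message(reader):
    # read exactly 4 bytes of length prefix, then exactly that many payload bytes
    header = await reader.readexactly(4)
    (length,) = struct.unpack('!I', header)
    return await reader.readexactly(length)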
For write though, it seems like as long as you call drain afterwards, you know that it's all sent. The docs don't explicitly say that it must be called repeatedly, and write doesn't return anything.
This is partially correct. Yes, asyncio will automatically keep writing the data you give it in the background until all is written, so you don't need to (nor can you) ensure it by checking the return value of write.
However, a sequence of stream.write(data); await stream.drain() will not pause the coroutine until all data has been transmitted to the OS. This is because drain doesn't wait for all data to be written, it only waits until it hits a "low watermark", trying to ensure (misguidedly according to some) that the buffer never becomes empty as long as there are new writes. As far as I know, in current asyncio there is no way to wait until all data has been sent - except for manually tweaking the watermarks, which is inconvenient and which the documentation warns against. The same applies to awaiting the return value of write() introduced in Python 3.8.
This is not as bad as it sounds simply because a successful write itself doesn't guarantee that the data was actually transmitted to, let alone received by the peer - it could be languishing in the socket buffer, or in network equipment along the way. But as long as you can rely on the system to send out the data you gave it as fast as possible, you don't really care whether some of it is in an asyncio buffer or in a kernel buffer. (But you still need to await drain() to ensure backpressure.)
The one time you do care is when you are about to exit the program or the event loop; in that case, a portion of the data being stuck in an asyncio buffer means that the peer will never see it. This is why, starting with 3.7, asyncio provides a wait_closed() method which you can await after calling close() to ensure that all the data has been sent. One could imagine a flush() method that does the same, but without having to actually close the socket (analogous to the method of the same name on file objects, and with equivalent semantics), but currently there are no plans to add it.
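In code, the orderly shutdown is just the sequence from the first snippet above; a minimal sketch, assuming writer is an asyncio.StreamWriter and payload is bytes:

async def send_and_close(writer, payload):
    writer.write(payload)
    await writer.drain()        # backpressure while the connection is alive
    writer.close()
    await writer.wait_closed()  # don't exit before buffered data has been flushed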

Is there any linter that detects blocking calls in an async function?

https://www.aeracode.org/2018/02/19/python-async-simplified/
It's not going to ruin your day if you call a non-blocking synchronous
function, like this:
def get_chat_id(name):
    return "chat-%s" % name

async def main():
    result = get_chat_id("django")
However, if you call a blocking function, like the Django ORM, the
code inside the async function will look identical, but now it's
dangerous code that might block the entire event loop as it's not
awaiting:
def get_chat_id(name):
    return Chat.objects.get(name=name).id

async def main():
    result = get_chat_id("django")
You can see how it's easy to have a non-blocking function that
"accidentally" becomes blocking if a programmer is not super-aware of
everything that calls it. This is why I recommend you never call
anything synchronous from an async function without doing it safely,
or without knowing beforehand it's a non-blocking standard library
function, like os.path.join.
So I am looking for a way to automatically catch instances of this mistake. Are there any linters for Python which will report sync function calls from within an async function as a violation?
Can I configure Pylint or Flake8 to do this?
I don't necessarily mind if it catches the first case above too (which is harmless).
Update:
On one level I realise this is a stupid question, as pointed out in Mikhail's answer. What we need is a definition of a "dangerous synchronous function" that the linter should detect.
So for purpose of this question I give the following definition:
A "dangerous synchronous function" is one that performs IO operations. These are the same operations which have to be monkey-patched by gevent, for example, or which have to be wrapped in async functions so that the event loop can context switch.
(I would welcome any refinement of this definition)
So I am looking for a way to automatically catch instances of this
mistake.
Let's make a few things clear: the mistake discussed in the article is calling any long-running sync function inside an asyncio coroutine (it can be a blocking I/O call or just a pure-CPU function with a lot of calculations). It's a mistake because it blocks the whole event loop, which leads to a significant performance degradation (more about it here, including the comments below the answer).
Is there any way to catch this situation automatically? Before runtime, no: no one except you can predict whether a particular function will take 10 seconds or 0.01 seconds to execute. At runtime the check is already built into asyncio; all you have to do is enable debug mode.
If you're worried that some sync function may vary between being long-running (detectable at runtime in debug mode) and short-running (not detectable), just execute the function in a background thread using run_in_executor; that guarantees the event loop will not be blocked.
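For example, a minimal sketch of both techniques (slow_query here is a made-up stand-in for a blocking call):

import asyncio
import time

def slow_query():
    time.sleep(1)  # stands in for a blocking ORM/file/network call
    return 42

async def main():
    loop = asyncio.get_running_loop()
    # off-load the blocking call to the default ThreadPoolExecutor
    result = await loop.run_in_executor(None, slow_query)
    print(result)

# debug mode logs callbacks/coroutine steps that block the loop for too long
# (100 ms by default)
asyncio.run(main(), debug=True)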

python asyncio Transport asynchronous methods vs coroutines

I'm new to asyncio and I started working with Transports to create a simple server-client program.
On the asyncio page I see the following:
Transport.close() can be called immediately after WriteTransport.write() even if data are not sent yet on the socket: both methods are asynchronous. yield from is not needed because these transport methods are not coroutines.
I searched the web (including stackoverflow) but couldn't find a good answer to the following question: what are the major differences between an asynchronous method and a coroutine?
The only two differences I can identify are:
in coroutines you have more fine-grained control over the order in which the main loop executes methods, using the yield from expression.
coroutines are generators, and hence more memory efficient.
Is there anything else I'm missing?
Thank you.
In this context, asynchronous means that both .write() and .close() are regular methods, not coroutines.
If .write() cannot write the data immediately, it stores it in an internal buffer.
.close() never closes the connection immediately; it schedules the socket to be closed after the internal buffer has been flushed.
So
transp.write(b'data')
transp.write(b'another data')
transp.close()
is safe and perfectly correct code.
Also .write() and .close() are not coroutines, obviously.
You should call a coroutine via a yield from expression, e.g. yield from coro().
But these methods are regular functions, so call them without yield from, as shown in the example above.
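For example, a minimal sketch contrasting the two, in modern async/await syntax rather than the yield from style of the quote (the host and port are made up):

import asyncio

async def write_and_close(host, port):
    # create_connection is a coroutine, so it must be awaited
    transport, protocol = await asyncio.get_running_loop().create_connection(
        asyncio.Protocol, host, port)

    # transport methods are plain functions: they buffer/schedule and return at once
    transport.write(b'data')
    transport.write(b'another data')
    transport.close()  # actually closes only after the buffer has been flushed

asyncio.run(write_and_close('127.0.0.1', 8888))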
