Why is BeautifulSoup related to 'Task exception was never retrieved'? - python-3.x

I want to use coroutines to crawl and parse webpages. I wrote a sample to test it. The program runs well under Python 3.5 on Ubuntu 16.04 and quits when all the work is done. The source code is below.
import aiohttp
import asyncio
from bs4 import BeautifulSoup

async def coro():
    coro_loop = asyncio.get_event_loop()
    url = u'https://www.python.org/'
    for _ in range(4):
        async with aiohttp.ClientSession(loop=coro_loop) as coro_session:
            with aiohttp.Timeout(30, loop=coro_session.loop):
                async with coro_session.get(url) as resp:
                    print('get response from url: %s' % url)
                    source_code = await resp.read()
                    soup = BeautifulSoup(source_code, 'lxml')

def main():
    loop = asyncio.get_event_loop()
    worker = loop.create_task(coro())
    try:
        loop.run_until_complete(worker)
    except KeyboardInterrupt:
        print('keyboard interrupt')
        worker.cancel()
    finally:
        loop.stop()
        loop.run_forever()
        loop.close()

if __name__ == '__main__':
    main()
While testing, I found that when I shut the program down with 'Ctrl+C', there is an error 'Task exception was never retrieved'.
^Ckeyboard interrupt
Task exception was never retrieved
future: <Task finished coro=<coro() done, defined at ./test.py:8> exception=KeyboardInterrupt()>
Traceback (most recent call last):
File "./test.py", line 23, in main
loop.run_until_complete(worker)
File "/usr/lib/python3.5/asyncio/base_events.py", line 375, in run_until_complete
self.run_forever()
File "/usr/lib/python3.5/asyncio/base_events.py", line 345, in run_forever
self._run_once()
File "/usr/lib/python3.5/asyncio/base_events.py", line 1312, in _run_once
handle._run()
File "/usr/lib/python3.5/asyncio/events.py", line 125, in _run
self._callback(*self._args)
File "/usr/lib/python3.5/asyncio/tasks.py", line 307, in _wakeup
self._step()
File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "./test.py", line 17, in coro
soup = BeautifulSoup(source_code, 'lxml')
File "/usr/lib/python3/dist-packages/bs4/__init__.py", line 215, in __init__
self._feed()
File "/usr/lib/python3/dist-packages/bs4/__init__.py", line 239, in _feed
self.builder.feed(self.markup)
File "/usr/lib/python3/dist-packages/bs4/builder/_lxml.py", line 240, in feed
self.parser.feed(markup)
File "src/lxml/parser.pxi", line 1194, in lxml.etree._FeedParser.feed (src/lxml/lxml.etree.c:119773)
File "src/lxml/parser.pxi", line 1316, in lxml.etree._FeedParser.feed (src/lxml/lxml.etree.c:119644)
File "src/lxml/parsertarget.pxi", line 141, in lxml.etree._TargetParserContext._handleParseResult (src/lxml/lxml.etree.c:137264)
File "src/lxml/parsertarget.pxi", line 135, in lxml.etree._TargetParserContext._handleParseResult (src/lxml/lxml.etree.c:137128)
File "src/lxml/lxml.etree.pyx", line 324, in lxml.etree._ExceptionContext._raise_if_stored (src/lxml/lxml.etree.c:11090)
File "src/lxml/saxparser.pxi", line 499, in lxml.etree._handleSaxData (src/lxml/lxml.etree.c:131013)
File "src/lxml/parsertarget.pxi", line 88, in lxml.etree._PythonSaxParserTarget._handleSaxData (src/lxml/lxml.etree.c:136397)
File "/usr/lib/python3/dist-packages/bs4/builder/_lxml.py", line 206, in data
def data(self, content):
KeyboardInterrupt
I looked through the official Python docs but haven't got a clue. I tried to capture the KeyboardInterrupt in coro():
try:
    soup = BeautifulSoup(source_code, 'lxml')
except KeyboardInterrupt:
    print('capture exception')
    raise
Every time the try/except around BeautifulSoup() captures the KeyboardInterrupt, the error occurs. It seems that BeautifulSoup contributes to the error. But how can I tackle it?

When you call task.cancel(), the task isn't actually cancelled; it is only marked to be cancelled. The actual cancellation starts when the task resumes its execution: asyncio.CancelledError is raised inside the task, forcing it to be cancelled, and the task finishes its execution with this exception.
On the other hand, asyncio warns you if one of your tasks silently finished with an exception (i.e. you never checked the result of the task's execution).
To avoid the problem, you should await the task's cancellation, receiving asyncio.CancelledError (and probably suppressing it, since you don't need it there):
import asyncio
from contextlib import suppress

async def coro():
    # ...

def main():
    loop = asyncio.get_event_loop()
    worker = asyncio.ensure_future(coro())
    try:
        loop.run_until_complete(worker)
    except KeyboardInterrupt:
        print('keyboard interrupt')
        worker.cancel()
        with suppress(asyncio.CancelledError):
            loop.run_until_complete(worker)  # await task cancellation.
    finally:
        loop.close()

if __name__ == '__main__':
    main()
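For completeness, here is a minimal, self-contained sketch (no aiohttp or BeautifulSoup involved, the names are made up) of how the warning arises and how retrieving the task's result silences it:
import asyncio

async def fails():
    raise RuntimeError('boom')   # stands in for any exception inside a task

def main():
    loop = asyncio.get_event_loop()
    task = loop.create_task(fails())
    # Let the task run (and fail) without ever awaiting it.
    loop.run_until_complete(asyncio.sleep(0.1))
    # Without the next line, asyncio logs "Task exception was never retrieved"
    # when the task is garbage collected; reading the exception (or awaiting
    # the task) counts as retrieving it.
    print(task.exception())
    loop.close()

if __name__ == '__main__':
    main()
In the question's traceback this is exactly what happens: the KeyboardInterrupt delivered while BeautifulSoup is parsing ends the coro() task with that exception, and nothing ever retrieves it before the loop is closed.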

Related

how to gracefully close task loops in discord.py

I'm currently creating a discord bot that contains two task loops called check_members and check_music.
When a user enters the offline command, I'd like to gracefully stop these loops. I wrote this piece of code in my Cog class:
class MusicBot(commands.Cog):
    # function called when the bot is closing, see
    # https://discordpy.readthedocs.io/en/stable/ext/commands/api.html?highlight=cog_unload#discord.ext.commands.Cog.cog_unload
    def cog_unload(self):
        print("Debug")
        self.check_members.cancel()
        self.check_music.cancel()
        print(self.check_members.is_running())
        print(self.check_music.is_running())

    # example of a task loop I have:
    @tasks.loop(seconds=5)
    async def check_members(self):
        [code...]
In another script, I call the bot.close() function as follows:
await ctx.send("Going offline! See ya later.")
if self.voice is not None:
    await self.disconnect()
await self.bot.close()
sys.exit(0)
When a user calls the offline command, that's what the bot prints out:
Debug
True
Task exception was never retrieved
future: <Task finished name='discord.py: on_message' coro=<Client._run_event() done, defined at /home/liuk23/.local/lib/python3.10/site-packages/discord/client.py:401> exception=SystemExit(0)>
Traceback (most recent call last):
File "/home/liuk23/Desktop/coding/Discord-bot-3/main.py", line 67, in <module>
loop.run_forever()
File "/usr/lib/python3.10/asyncio/base_events.py", line 600, in run_forever
self._run_once()
File "/usr/lib/python3.10/asyncio/base_events.py", line 1896, in _run_once
handle._run()
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/client.py", line 409, in _run_event
await coro(*args, **kwargs)
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/bot.py", line 1392, in on_message
await self.process_commands(message)
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/bot.py", line 1389, in process_commands
await self.invoke(ctx) # type: ignore
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/bot.py", line 1347, in invoke
await ctx.command.invoke(ctx)
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/core.py", line 986, in invoke
await injected(*ctx.args, **ctx.kwargs) # type: ignore
File "/home/liuk23/.local/lib/python3.10/site-packages/discord/ext/commands/core.py", line 190, in wrapped
ret = await coro(*args, **kwargs)
File "/home/liuk23/Desktop/coding/Discord-bot-3/music.py", line 223, in offline
await self.functions.offline(ctx)
File "/home/liuk23/Desktop/coding/Discord-bot-3/funcitons.py", line 241, in offline
sys.exit(0)
SystemExit: 0
As you can notice, the Debug text gets printed out, so the cog_unload function gets called successfully.
Although I am cancelling the loops, I still get the 'Task exception was never retrieved' error. Am I misunderstanding the error?
From sys.exit documentation:
Raise a SystemExit exception, signaling an intention to exit the interpreter.
What is happening is that the offline task, by throwing SystemExit, stops, but since no other task is awaiting it, that exception is never retrieved.
The underlying problem is that, to quit the application, rather than raising SystemExit via sys.exit, it would be better to stop the running loop cleanly. For example, if the loop was started with loop.run_until_complete(some_future), you would set that future with some_future.set_result(some_result).
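A minimal, runnable sketch of that pattern (offline and stop_future are made-up names for illustration, not discord.py APIs):
import asyncio

async def offline(stop_future):
    # stands in for the bot's "offline" command handler
    await asyncio.sleep(1)          # ... clean-up would go here ...
    stop_future.set_result(None)    # lets run_until_complete() return cleanly

def main():
    loop = asyncio.get_event_loop()
    stop_future = loop.create_future()
    loop.create_task(offline(stop_future))  # stands in for the real bot task
    loop.run_until_complete(stop_future)    # returns once the future is set
    loop.close()

if __name__ == "__main__":
    main()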
I fixed it by stopping the main loop that runs the whole bot.
In my main script I instantiate the bot in the following manner:
try:
    # loop runs until stop is called
    loop.run_forever()
except KeyboardInterrupt:
    pass
# finally block is called when loop is being stopped
finally:
    # stop and close loop
    loop.stop()
    sys.exit(0)
By calling loop.stop() from another script, run_forever() returns, the finally block in main.py runs, and the loop and the whole script are shut down successfully.
Thanks to @Ulisse Bordignon for your answer.
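For reference, a sketch of what the offline handler might call instead of sys.exit(0), reusing the names self.bot and self.voice from the question and assuming self.bot.loop points at the running event loop (an illustration, not the poster's exact code):
async def offline(self, ctx):
    await ctx.send("Going offline! See ya later.")
    if self.voice is not None:
        await self.disconnect()
    await self.bot.close()
    # Stop the event loop instead of raising SystemExit inside a task;
    # this lets run_forever() in main.py return and its finally block run.
    self.bot.loop.stop()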

How to start websocket server in thread using python?

I want to start a websocket server in a separate thread. I have tried to implement it as below, but I am getting a RuntimeError saying the task is attached to a different loop.
Code:
#!/usr/bin/env python
# WS server example
import asyncio
import websockets
import threading
import time

async def hello(websocket, path):
    name = await websocket.recv()
    print(name)
    greeting = "Hello " + name + "!"
    await websocket.send(greeting)
    print(greeting)

start_server = websockets.serve(hello, "localhost", 8765)
eventLoop = asyncio.new_event_loop()
time.sleep(2)

def startWebSocket(loop, server):
    print("WS: thread started")
    asyncio.set_event_loop(eventLoop)
    asyncio.get_event_loop().run_until_complete(start_server)
    eventLoop.run_forever()

print("Run web socket in threaded env")
TH = threading.Thread(target=startWebSocket, args=[eventLoop, start_server])
TH.start()
# then do some other work after this
Output:
Run web socket in threaded env
WS: thread started
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "threadWithWs.py", line 26, in startWebSocket
asyncio.get_event_loop().run_until_complete(start_server)
File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
result = coro.throw(exc)
File "/usr/lib/python3.5/asyncio/tasks.py", line 564, in _wrap_awaitable
return (yield from awaitable.__await__())
File "/home/krunal/.local/lib/python3.5/site-packages/websockets/py35/server.py", line 13, in __await_impl__
server = await self._creating_server
File "/usr/lib/python3.5/asyncio/base_events.py", line 923, in create_server
infos = yield from tasks.gather(*fs, loop=self)
File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
yield self # This tells Task to wait for completion.
RuntimeError: Task <Task pending coro=<_wrap_awaitable() running at /usr/lib/python3.5/asyncio/tasks.py:564> cb=[_run_until_complete_cb() at /usr/lib/python3.5/asyncio/base_events.py:164]> got Future <_GatheringFuture pending> attached to a different loop
The code exits with this error.
How do I set up the loop so that the websocket server starts in a thread?
I have followed this answer but had no luck.
You don't have to use threads with asyncio; it's redundant.
Do something like this (I actually don't know the websockets library; I assume start_server is a coroutine / an awaitable).
Create another async function whose job is to keep the loop alive. Before going into the while loop, spawn your task.
You may want to add a health check, using a global variable or wrapping everything into a class.
async def run():
    eventLoop.create_task(start_server())
    while 1:
        # if not health_check():
        #     exit()
        await asyncio.sleep(1)

eventLoop.run_until_complete(run())
Do the other stuff by spawning more tasks.
To run blocking code inside an asyncio loop, use eventLoop.run_in_executor(None, blocking_code): it's basically a friendly interface to threads.
As with threads, you still live under the GIL.
Embrace the way asyncio does things.
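A minimal sketch of that run_in_executor pattern, with blocking_code as a made-up placeholder for whatever blocking call you need:
import asyncio
import time

def blocking_code():
    # placeholder for a blocking call (file I/O, a legacy client library, ...)
    time.sleep(2)
    return "finished blocking work"

async def run():
    loop = asyncio.get_event_loop()
    # the blocking call runs in the default thread pool; the loop stays responsive
    result = await loop.run_in_executor(None, blocking_code)
    print(result)

asyncio.get_event_loop().run_until_complete(run())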

Random connection errors with aio_pika after 2 days of running

I have an asyncio script which connects to RabbitMQ with the aio_pika library every 40 seconds, checks if there are any messages, and prints them out, repeating forever. However, usually after 2 or so days of running, I start receiving endless connection exception errors, which are only solved by restarting the script. Perhaps there are some obvious mistakes in the logic of my asyncio script which I am missing?
#!/usr/bin/python3
import time
import async_timeout
import asyncio
import aio_pika

async def got_message(message: aio_pika.IncomingMessage):
    with message.process():
        print(message.body.decode())

async def main(loop):
    try:
        with async_timeout.timeout(10):
            connection = await aio_pika.connect_robust(
                host='#',
                virtualhost='#',
                login='#',
                password='#',
                port=5671,
                loop=loop,
                ssl=True
            )
            channel = await connection.channel()
            await channel.set_qos(prefetch_count=100)
            queue_name = 'mm_message'
            queue = await channel.declare_queue(auto_delete=False, name=queue_name)
            routing_key = 'mm_msg'
            await queue.bind("amq.topic", routing_key)
            que_len = queue.declaration_result.message_count
            if que_len > 0:
                await queue.consume(got_message)
    except:
        print("connection problems..")

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    while True:
        time.sleep(40)
        loop.run_until_complete(main(loop))
This is the error I endlessly receive after some time:
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/events.py", line 125, in _run
self._callback(*self._args)
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/adapters/base_connection.py", line 364, in _handle_events
self._handle_read()
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/adapters/base_connection.py", line 415, in _handle_read
self._on_data_available(data)
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/connection.py", line 1347, in _on_data_available
self._process_frame(frame_value)
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/connection.py", line 1414, in _process_frame
if self._process_callbacks(frame_value):
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/connection.py", line 1384, in _process_callbacks
frame_value) # Args
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/callback.py", line 60, in wrapper
return function(*tuple(args), **kwargs)
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/callback.py", line 92, in wrapper
return function(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/callback.py", line 236, in process
callback(*args, **keywords)
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/connection.py", line 1332, in _on_connection_tune
self._send_connection_open()
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/connection.py", line 1517, in _send_connection_open
self._on_connection_open, [spec.Connection.OpenOk])
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/connection.py", line 1501, in _rpc
self._send_method(channel_number, method_frame)
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/connection.py", line 1569, in _send_method
self._send_frame(frame.Method(channel_number, method_frame))
File "/usr/local/lib/python3.5/dist-packages/aio_pika/pika/connection.py", line 1548, in _send_frame
raise exceptions.ConnectionClosed
aio_pika.pika.exceptions.ConnectionClosed
except:
    print("connection problems..")
This will catch service exceptions like KeyboardInterrupt and SystemExit. You should never do such a thing if you're not going to re-raise the exception. At the very least you should write:
except Exception:
    print("connection problems..")
However, in the context of asyncio the snippet above will break the cancellation mechanism. To avoid that, as explained here, you should write:
try:
    await operation
except asyncio.CancelledError:
    raise
except Exception:
    log.log('an error has occurred')
No less important is understanding that a connection should not only be opened but also closed (regardless of what happened between opening and closing). To achieve that, people usually use context managers (and, in asyncio, asynchronous context managers).
aio_pika doesn't seem to be an exception: as its examples show, you should use async with when dealing with a connection:
connection = await aio_pika.connect_robust(
    "amqp://guest:guest@127.0.0.1/", loop=loop
)

async with connection:
    # ...
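Putting those pieces together, a rough sketch of how the question's main() could look (host, credentials, queue name, and routing key are placeholders taken from the question, not a verified configuration):
import asyncio
import aio_pika

async def got_message(message: aio_pika.IncomingMessage):
    with message.process():
        print(message.body.decode())

async def main(loop):
    try:
        connection = await aio_pika.connect_robust(
            "amqp://guest:guest@127.0.0.1/", loop=loop
        )
        async with connection:  # the connection is closed when the block exits
            channel = await connection.channel()
            await channel.set_qos(prefetch_count=100)
            queue = await channel.declare_queue('mm_message', auto_delete=False)
            await queue.bind("amq.topic", 'mm_msg')
            await queue.consume(got_message)
            await asyncio.sleep(40)   # let the consumer run for a while
    except asyncio.CancelledError:
        raise                         # never swallow cancellation
    except Exception:
        print("connection problems..")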

python-trio: AttributeError: sendall

I'm just trying to run the echo-client-low-level.py from the python-trio docs:
# echo-client-low-level.py

import sys
import trio

# arbitrary, but:
# - must be in between 1024 and 65535
# - can't be in use by some other program on your computer
# - must match what we set in our echo server
PORT = 12345
# How much memory to spend (at most) on each call to recv. Pretty arbitrary,
# but shouldn't be too big or too small.
BUFSIZE = 16384

async def sender(client_sock):
    print("sender: started!")
    while True:
        data = b"async can sometimes be confusing, but I believe in you!"
        print("sender: sending {!r}".format(data))
        await client_sock.sendall(data)
        await trio.sleep(1)

async def receiver(client_sock):
    print("receiver: started!")
    while True:
        data = await client_sock.recv(BUFSIZE)
        print("receiver: got data {!r}".format(data))
        if not data:
            print("receiver: connection closed")
            sys.exit()

async def parent():
    print("parent: connecting to 127.0.0.1:{}".format(PORT))
    with trio.socket.socket() as client_sock:
        await client_sock.connect(("127.0.0.1", PORT))
        async with trio.open_nursery() as nursery:
            print("parent: spawning sender...")
            nursery.start_soon(sender, client_sock)
            print("parent: spawning receiver...")
            nursery.start_soon(receiver, client_sock)

trio.run(parent)
However, their example produces a nasty AttributeError: sendall error:
$ python echo-client-low-level.py
parent: connecting to 127.0.0.1:12345
parent: spawning sender...
parent: spawning receiver...
sender: started!
sender: sending b'async can sometimes be confusing, but I believe in you!'
receiver: started!
Traceback (most recent call last):
File "echo-client-low-level.py", line 43, in <module>
trio.run(parent)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 1225, in run
return result.unwrap()
File "/usr/lib/python3.6/site-packages/trio/_core/_result.py", line 119, in unwrap
raise self.error
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 1334, in run_impl
msg = task.coro.send(next_send)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 923, in init
self.entry_queue.spawn()
File "/usr/lib/python3.6/site-packages/trio/_util.py", line 109, in __aexit__
await self._agen.asend(None)
File "/usr/lib/python3.6/site-packages/async_generator/_impl.py", line 274, in asend
return await self._do_it(self._it.send, value)
File "/usr/lib/python3.6/site-packages/async_generator/_impl.py", line 290, in _do_it
return await ANextIter(self._it, start_fn, *args)
File "/usr/lib/python3.6/site-packages/async_generator/_impl.py", line 202, in send
return self._invoke(self._it.send, value)
File "/usr/lib/python3.6/site-packages/async_generator/_impl.py", line 209, in _invoke
result = fn(*args)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 318, in open_nursery
await nursery._nested_child_finished(nested_child_exc)
File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 203, in open_cancel_scope
yield scope
File "/usr/lib/python3.6/site-packages/trio/_core/_multierror.py", line 144, in __exit__
raise filtered_exc
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 203, in open_cancel_scope
yield scope
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 318, in open_nursery
await nursery._nested_child_finished(nested_child_exc)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 427, in _nested_child_finished
raise MultiError(self._pending_excs)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 1334, in run_impl
msg = task.coro.send(next_send)
File "echo-client-low-level.py", line 41, in parent
nursery.start_soon(receiver, client_sock)
File "/usr/lib/python3.6/site-packages/trio/_util.py", line 109, in __aexit__
await self._agen.asend(None)
File "/usr/lib/python3.6/site-packages/async_generator/_impl.py", line 274, in asend
return await self._do_it(self._it.send, value)
File "/usr/lib/python3.6/site-packages/async_generator/_impl.py", line 290, in _do_it
return await ANextIter(self._it, start_fn, *args)
File "/usr/lib/python3.6/site-packages/async_generator/_impl.py", line 202, in send
return self._invoke(self._it.send, value)
File "/usr/lib/python3.6/site-packages/async_generator/_impl.py", line 209, in _invoke
result = fn(*args)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 318, in open_nursery
await nursery._nested_child_finished(nested_child_exc)
File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 203, in open_cancel_scope
yield scope
File "/usr/lib/python3.6/site-packages/trio/_core/_multierror.py", line 144, in __exit__
raise filtered_exc
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 203, in open_cancel_scope
yield scope
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 318, in open_nursery
await nursery._nested_child_finished(nested_child_exc)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 427, in _nested_child_finished
raise MultiError(self._pending_excs)
File "/usr/lib/python3.6/site-packages/trio/_core/_run.py", line 1334, in run_impl
msg = task.coro.send(next_send)
File "echo-client-low-level.py", line 20, in sender
await client_sock.sendall(data)
File "/usr/lib/python3.6/site-packages/trio/_socket.py", line 426, in __getattr__
raise AttributeError(name)
AttributeError: sendall
After checking GitHub, it seems that sendall has been intentionally omitted.
I'm a little confused; did I miss something?
In [3]: trio.__version__
Out[3]: '0.3.0'
Whoops, that's a bug in the docs – thanks for the catch. There used to be a sendall method on sockets, but it had problems and was a weird feature half-way between the low-level and high-level layer, so it was removed in 0.3.0. But I missed updating the docs there.
I really need to rewrite that section to use the new high-level APIs! (Bug filed.) But for now here's a quick translation of the examples into the new (better, higher-level) API:
Client:
# echo-client.py

import sys
import trio

# arbitrary, but:
# - must be in between 1024 and 65535
# - can't be in use by some other program on your computer
# - must match what we set in our echo server
PORT = 12345
# How much memory to spend (at most) on each call to recv. Pretty arbitrary,
# but shouldn't be too big or too small.
BUFSIZE = 16384

async def sender(client_stream):
    print("sender: started!")
    while True:
        data = b"async can sometimes be confusing, but I believe in you!"
        print("sender: sending {!r}".format(data))
        await client_stream.send_all(data)
        await trio.sleep(1)

async def receiver(client_stream):
    print("receiver: started!")
    while True:
        data = await client_stream.receive_some(BUFSIZE)
        print("receiver: got data {!r}".format(data))
        if not data:
            print("receiver: connection closed")
            sys.exit()

async def parent():
    print("parent: connecting to 127.0.0.1:{}".format(PORT))
    client_stream = await trio.open_tcp_stream("127.0.0.1", PORT)
    async with client_stream:
        async with trio.open_nursery() as nursery:
            print("parent: spawning sender...")
            nursery.start_soon(sender, client_stream)
            print("parent: spawning receiver...")
            nursery.start_soon(receiver, client_stream)

trio.run(parent)
Server:
# echo-server.py

import trio
from itertools import count

# Port is arbitrary, but:
# - must be in between 1024 and 65535
# - can't be in use by some other program on your computer
# - must match what we set in our echo client
PORT = 12345
# How much memory to spend (at most) on each call to recv. Pretty arbitrary,
# but shouldn't be too big or too small.
BUFSIZE = 16384

CONNECTION_COUNTER = count()

async def echo_server(server_stream):
    # Assign each connection a unique number to make our logging easier to
    # understand
    ident = next(CONNECTION_COUNTER)
    print("echo_server {}: started".format(ident))
    try:
        while True:
            data = await server_stream.receive_some(BUFSIZE)
            print("echo_server {}: received data {!r}".format(ident, data))
            if not data:
                print("echo_server {}: connection closed".format(ident))
                return
            print("echo_server {}: sending data {!r}".format(ident, data))
            await server_stream.send_all(data)
    except Exception as exc:
        # Unhandled exceptions will propagate into our parent and take
        # down the whole program. If the exception is KeyboardInterrupt,
        # that's what we want, but otherwise maybe not...
        print("echo_server {}: crashed: {!r}".format(ident, exc))

async def main():
    await trio.serve_tcp(echo_server, PORT)

# We could also just write 'trio.run(serve_tcp, echo_server, PORT)', but real
# programs almost always end up doing other stuff too and then we'd have to go
# back and factor it out into a separate function anyway. So it's simplest to
# just make it a standalone function from the beginning.
trio.run(main)
They won't quite match the text, sorry! But hopefully they'll give you some chance at figuring out what's going on, until I can fix the docs for real.

sending chunks of data using webob

I tried to write a simple server-client program using webob:
1. The client sends data using 'Transfer-Encoding: chunked'.
2. The received data is then printed on the server side.
The server.py receives the data correctly. However, I get a bunch of error messages from webob.
Could anyone kindly tell me what happened, or give a simple guideline for writing such a simple program (sending chunks) with webob?
Thanks!
The code and errors are below:
server.py
from webob import Request
from webob import Response

class ChunkApp(object):
    def __init__(self):
        pass

    def __call__(self, environ, start_response):
        req = Request(environ)
        for buf in self.chunkreadable(req.body_file, 65535):
            print buf
        resp = Response('chunk received')
        return resp(environ, start_response)

    def chunkreadable(self, iter, chunk_size=65536):
        return self.chunkiter(iter, chunk_size) if \
            hasattr(iter, 'read') else iter

    def chunkiter(self, fp, chunk_size=65536):
        while True:
            chunk = fp.read(chunk_size)
            if chunk:
                yield chunk
            else:
                break

if __name__ == '__main__':
    app = ChunkApp()
    from wsgiref.simple_server import make_server
    httpd = make_server('localhost', 8080, app)
    print 'Serving on http://localhost:%s' % '8080'
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        print '^C'
client.py
from httplib import HTTPConnection

conn = HTTPConnection("localhost:8080")
conn.putrequest('PUT', "/")
body = "1234567890"
conn.putheader('Transfer-Encoding', 'chunked')
conn.endheaders()

for chunk in body:
    #print '%x\r\n%s\r\n' % (len(chunk), chunk)
    conn.send('%x\r\n%s\r\n' % (len(chunk), chunk))
conn.send('0\r\n\r\n')
Error
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 86, in run
self.finish_response()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 127, in finish_response
self.write(data)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 210, in write
self.send_headers()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 268, in send_headers
self.send_preamble()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 192, in send_preamble
'Date: %s\r\n' % format_date_time(time.time())
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 324, in write
self.flush()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 52575)
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 284, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 310, in process_request
self.finish_request(request, client_address)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 641, in __init__
self.finish()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/SocketServer.py", line 694, in finish
self.wfile.flush()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
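The traceback suggests the server is still writing its response after the client has gone away: client.py exits right after sending the final chunk, so the server's send hits a closed socket. One hedged guess, shown as a sketch only (conn.getresponse() is standard httplib; whether this removes the broken-pipe messages in your setup is an assumption, not something verified here): have the client read the response before closing.
from httplib import HTTPConnection

conn = HTTPConnection("localhost:8080")
conn.putrequest('PUT', "/")
conn.putheader('Transfer-Encoding', 'chunked')
conn.endheaders()

body = "1234567890"
for chunk in body:
    conn.send('%x\r\n%s\r\n' % (len(chunk), chunk))
conn.send('0\r\n\r\n')

# Read the server's response before closing, so the server's write
# has somewhere to go instead of a closed socket.
resp = conn.getresponse()
print resp.status, resp.read()
conn.close()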

Resources