I am playing with aiohttp + aiomysql. I want to share the same connection pool instance between request calls,
so I create a global variable and pre-initialize it once inside a coroutine.
My code:
import asyncio
from aiohttp import web
from aiohttp_session import get_session, session_middleware
from aiohttp_session.cookie_storage import EncryptedCookieStorage
from aiohttp_session import SimpleCookieStorage
#from mysql_pool import POOL
from aiomysql import create_pool

M_POOL = None

async def get_pool(loop):
    global M_POOL
    if M_POOL:
        return M_POOL
    M_POOL = await create_pool(host='127.0.0.1', port=3306, user='user', password='user', db='test', loop=loop)
    return M_POOL

async def query(request):
    loop = asyncio.get_event_loop()
    pool = await get_pool(loop)
    print(id(pool))
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            await cur.execute("SELECT 42;")
            value = await cur.fetchone()
    print(value)
    return web.Response(body=str.encode(str(value)))

app = web.Application(middlewares=[session_middleware(SimpleCookieStorage())])
app.router.add_route('GET', '/query', query)
web.run_app(app)
Is this a convenient way of doing this, or is there something better?
I highly discourage using global variables.
Please take a look at the aiohttp demo for the canonical approach:
SiteHandler is a class that implements the website views.
But you've given a demo for the case where a request object is available.
I have the same problem using aiohttp. In my application I've split the code into two modules:
one for the server functionality, and one for the client (a crawler).
The server part is fine, since there I can use request.app['dbpool'],
but in the crawler part I want to use db connections too,
and I see no reason to create a second connection pool.
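For reference, that canonical pattern can be sketched roughly like this: create the pool in a cleanup context, store it in the app, and hand the same app object to a background crawler task started from on_startup. This is only a sketch under stated assumptions: FakePool is a stand-in for the pool aiomysql.create_pool would return (real code would await that instead), and the crawler/handler names are hypothetical.

```python
import asyncio
from aiohttp import web

class FakePool:
    """Stand-in for an aiomysql pool; real code: await aiomysql.create_pool(...)."""
    closed = False
    def close(self):
        self.closed = True

async def db_pool_ctx(app):
    # runs at startup ...
    app['dbpool'] = FakePool()  # real code: await aiomysql.create_pool(...)
    yield
    # ... and again at cleanup
    app['dbpool'].close()       # real code: pool.close(); await pool.wait_closed()

async def crawler(app):
    pool = app['dbpool']        # the exact same pool the request handlers use
    while True:
        await asyncio.sleep(3600)  # acquire connections from `pool` here

async def start_crawler(app):
    app['crawler'] = asyncio.ensure_future(crawler(app))

async def stop_crawler(app):
    app['crawler'].cancel()
    try:
        await app['crawler']
    except asyncio.CancelledError:
        pass

def make_app():
    app = web.Application()
    app.cleanup_ctx.append(db_pool_ctx)
    app.on_startup.append(start_crawler)
    app.on_cleanup.append(stop_crawler)
    return app

# web.run_app(make_app())
```

Request handlers read the pool as request.app['dbpool'], and the crawler gets it from the app object it was started with, so only one pool ever exists.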
Related
I have a ros2 publisher script that sends custom messages from ros2 nodes. What I need to do is to have a subscriber (which is also my websocket server) listen to the message that the publisher sends, convert it to a dictionary, and send it as JSON from the websocket server to a connected websocket client. I have already checked the rosbridge repo, but I could not make it work: it doesn't have enough documentation and I am new to ROS.
I need something like this:
import rclpy
import sys
from rclpy.node import Node
import tornado.ioloop
import tornado.httpserver
import tornado.web
import tornado.websocket
import threading
from custom.msg import CustomMsg
from .convert import message_to_ordereddict

wss = []

class wsHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print('Online')
        if self not in wss:
            wss.append(self)
    def on_close(self):
        print('Offline')
        if self in wss:
            wss.remove(self)

def wsSend(message):
    for ws in wss:
        ws.write_message(message)

class MinimalSubscriber(Node):
    def __init__(self):
        super().__init__('minimal_subscriber')
        self.subscription = self.create_subscription(CustomMsg, 'topic', self.CustomMsg_callback, 10)
        self.subscription  # prevent unused variable warning
    def CustomMsg_callback(self, msg):
        ws_message = message_to_ordereddict(msg)
        wsSend(ws_message)

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(tornado.web.Application([(r'/', wsHandler)]))
    http_server.listen(8888)
    main_loop = tornado.ioloop.IOLoop.instance()
    # Start main loop
    main_loop.start()
so the callback function in the MinimalSubscriber class receives the ROS message, converts it to a dictionary and sends it to the websocket client. I am a bit confused about how to make these two threads (ROS and websocket) communicate with each other.
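For the thread-to-thread handoff specifically, the usual trick is loop.call_soon_threadsafe: the ROS spin thread never touches websocket objects directly, it only schedules a callback on the event loop. Here is a stdlib-only sketch with hypothetical names (ros_thread stands in for rclpy.spin, forward stands in for wsSend):

```python
import asyncio
import threading

received = []  # stands in for messages delivered to websocket clients

def forward(msg):
    # stands in for wsSend(); this runs on the event-loop thread, so it is
    # safe to call write_message() on websocket handlers here
    received.append(msg)

def ros_thread(loop, done):
    # stands in for rclpy.spin(node); each subscription callback hands its
    # message to the event loop instead of touching websockets directly
    for i in range(3):
        loop.call_soon_threadsafe(forward, {'seq': i})
    loop.call_soon_threadsafe(done.set)

async def main():
    loop = asyncio.get_running_loop()
    done = asyncio.Event()
    t = threading.Thread(target=ros_thread, args=(loop, done))
    t.start()
    await done.wait()  # the loop keeps serving websockets meanwhile
    t.join()

asyncio.run(main())
```

The same idea works with tornado, where IOLoop.add_callback plays the role of call_soon_threadsafe.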
So I think I got a bit confused myself going through the threading. I changed my approach and made it work using a tornado periodic callback, with rclpy's spin_once as the callback function. I'll post my solution, as it might help people who have the same issue.
import queue
import rclpy
from rclpy.node import Node
import tornado.ioloop
import tornado.httpserver
import tornado.web
import tornado.websocket
from custom.msg import CustomMsg
from .convert import message_to_ordereddict

wss = []

class wsHandler(tornado.websocket.WebSocketHandler):
    @classmethod
    def route_urls(cls):
        return [(r'/', cls, {}),]
    def open(self):
        print('Online')
        if self not in wss:
            wss.append(self)
    def on_close(self):
        print('Offline')
        if self in wss:
            wss.remove(self)

def make_app():
    myWebHandler = wsHandler.route_urls()
    return tornado.web.Application(myWebHandler)

msg_queue = queue.Queue()  # note the parentheses: queue.Queue is a class

class MinimalSubscriber(Node):
    def __init__(self):
        super().__init__('minimal_subscriber')
        self.subscription = self.create_subscription(CustomMsg, 'topic', self.CustomMsg_callback, 10)
        self.subscription  # prevent unused variable warning
    def CustomMsg_callback(self, msg):
        msg_dict = message_to_ordereddict(msg)
        msg_queue.put(msg_dict)

if __name__ == "__main__":
    rclpy.init()
    minimal_subscriber = MinimalSubscriber()
    def send_ros_to_clients():
        rclpy.spin_once(minimal_subscriber)
        while not msg_queue.empty():
            my_msg = msg_queue.get()
            for client in wss:
                client.write_message(my_msg)
    app = make_app()
    server = tornado.httpserver.HTTPServer(app)
    server.listen(8888)
    tornado.ioloop.PeriodicCallback(send_ros_to_clients, 1).start()
    tornado.ioloop.IOLoop.current().start()
    minimal_subscriber.destroy_node()
    rclpy.shutdown()
I also folded the wsSend function into send_ros_to_clients. Some might say that using a global queue is not best practice, but I could not come up with another solution. I would appreciate any suggestions or corrections on my solution.
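One detail worth calling out in the queue handling (stdlib only, hypothetical helper name): the periodic callback must drain the queue without blocking, because a bare queue.get() on an empty queue would stall the whole IOLoop whenever no ROS message arrived between ticks. The drain can be factored out like this:

```python
import queue

msg_queue = queue.Queue()

def drain(q):
    """Return all queued messages without ever blocking on an empty queue."""
    msgs = []
    while True:
        try:
            msgs.append(q.get_nowait())
        except queue.Empty:
            return msgs

# inside send_ros_to_clients() this becomes:
#     for my_msg in drain(msg_queue):
#         for client in wss:
#             client.write_message(my_msg)
msg_queue.put({'seq': 1})
msg_queue.put({'seq': 2})
print(drain(msg_queue))  # → [{'seq': 1}, {'seq': 2}]
```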
I am trying to make a discord bot using discord.py that uses the quarry library to interact with a Minecraft server.
Unfortunately, once I start the discord bot with bot.run(), I can't start quarry's reactor with reactor.run().
I've looked around, including at aiostream and asyncio, but I can't find a solution. I also looked into Twisted, since quarry uses it.
EDIT: Including current code.
import asyncio
from twisted.internet import asyncioreactor
asyncioreactor.install(asyncio.get_event_loop())

from quarry.net.client import SpawningClientProtocol, ClientFactory
from quarry.net.auth import Profile
from discord.ext import commands

profile = Profile.from_credentials("email", "password")
bot = commands.Bot(command_prefix=">")

async def start():
    loop = asyncio.get_event_loop()
    loop.create_task(bot.start("token"))
    client = MinecraftBotFactory(profile)
    client.connect("creative.starlegacy.net", 25565)

class MinecraftBotProtocol(SpawningClientProtocol):
    pass

class MinecraftBotFactory(ClientFactory):
    protocol = MinecraftBotProtocol

asyncio.get_event_loop().create_task(start())
asyncio.get_event_loop().run_forever()
quarry uses Twisted, and in order to use it with asyncio you need to install the asyncio reactor at the start of your application.
import asyncio
from twisted.internet import asyncioreactor
asyncioreactor.install(asyncio.get_event_loop())
After that, you should only need to execute bot.start() since quarry relies on the reactor running, and just start the event loop. You cannot await reactor.run() because it's not a coroutine, so just get rid of that function.
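The blocking behaviour here is plain asyncio, independent of quarry or discord.py. A stdlib-only sketch (the poller name is hypothetical, standing in for bot.start()) of awaiting an infinite coroutine versus wrapping it in a Task:

```python
import asyncio

order = []

async def poller():
    # stands in for bot.start(): an infinite loop that never returns
    while True:
        await asyncio.sleep(0.01)

async def main():
    # `await poller()` here would hang forever; a Task lets us proceed
    task = asyncio.get_running_loop().create_task(poller())
    order.append('reached')    # executes immediately
    await asyncio.sleep(0.05)  # other work runs alongside the poller
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass

asyncio.run(main())
print(order)  # → ['reached']
```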
Update: Added example and explanation
I had a chance to go over the discord.py code a bit and found that discord.Client._connect() is effectively blocking code if you await it while another task in the same async def is waiting for bot.start() to complete. In other words, it is an infinite poller loop that never returns anything, so execution cannot proceed past the await point. To avoid this you can create a Task. This example "should" work (I haven't tried it on a live server since I don't have a Minecraft client on hand). It starts a server, and when clients try to connect they get a message saying the Minecraft server is down and are then booted. When a user tries to connect, a Discord message is also sent to the channel.
import asyncio
from twisted.internet import asyncioreactor
asyncioreactor.install(asyncio.get_event_loop())

import discord
from quarry.net.server import ServerFactory, ServerProtocol
from twisted.internet import endpoints, reactor

async def start():
    loop = asyncio.get_event_loop()
    minecraft_server = DowntimeFactory()
    minecraft_server_host = "0.0.0.0"
    minecraft_server_port = 25565
    discord_bot_token = "APP_TOKEN"
    discord_channel_id = 0  # CHANNEL ID

    # Start a quarry server
    quarry_server = endpoints.TCP4ServerEndpoint(
        reactor,
        port=minecraft_server_port,
        interface=minecraft_server_host,
    )
    try:
        await quarry_server.listen(minecraft_server).asFuture(loop)
    except Exception as err:
        print(err)
        loop.stop()

    # Start Discord poller
    discord_client = discord.Client(loop=loop)
    loop.create_task(discord_client.start(discord_bot_token))

    @discord_client.event
    async def on_ready():
        """
        After the Discord client connects, set the discord_channel attribute on the
        quarry factory so that the server can access Discord functionality.
        """
        discord_channel = discord_client.get_channel(discord_channel_id)
        if discord_channel:
            minecraft_server.discord_channel = discord_channel
            await discord_channel.send("hello world")
        else:
            print("[!] channel not found")
            loop.stop()

class DowntimeProtocol(ServerProtocol):
    def packet_login_start(self, buff):
        buff.discard()
        self.close(self.factory.motd)
        # The Discord channel might not have been set up yet
        if self.factory.discord_channel:
            self.factory.loop.create_task(
                self.factory.discord_channel.send(
                    "Someone is trying to access the Minecraft server!"
                )
            )

class DowntimeFactory(ServerFactory):
    motd = "Down for maintenance"
    loop = asyncio.get_event_loop()
    discord_channel = None
    protocol = DowntimeProtocol

asyncio.get_event_loop().create_task(start())

try:
    asyncio.get_event_loop().run_forever()
except KeyboardInterrupt:
    pass
bot.run() is a blocking call, meaning that it stops execution of the program until the bot shuts down. Try the alternative await bot.start():
import asyncio
from discord.ext import tasks

async def start_bot():
    await bot.start('my_token_goes_here')

@tasks.loop(count=1)
async def login_quarry():
    # login to quarry here #
    pass

login_quarry.start()
# put any code you need BEFORE this:
asyncio.get_event_loop().run_until_complete(start_bot())
In Python 3.7 and up, there is a simpler alternative:
import asyncio
from discord.ext import tasks

async def start_bot():
    await bot.start('my_token_goes_here')

@tasks.loop(count=1)
async def login_quarry():
    # login to quarry here #
    pass

login_quarry.start()
# put any code you need BEFORE this:
asyncio.run(start_bot())
I'm trying to write a test that communicates with several instances of a web server (which also communicate between themselves). But the second server seems to override the first, however I try. Any suggestions on how to solve this?
So far I have this:
import os
from aiohttp.test_utils import TestClient, TestServer, loop_context
import pytest
from http import HTTPStatus as hs
from mycode import main

@pytest.fixture
def cli(loop):
    app = main(["-t", "--location", "global", "-g"])
    srv = TestServer(app, port=40080)
    client = TestClient(srv, loop=loop)
    loop.run_until_complete(client.start_server())
    return client

@pytest.fixture
def cli_edge(loop):
    app = main(["-t", "--location", "edge", "-j", "http://127.0.0.1:40080"])
    srv = TestServer(app, port=40081)
    client = TestClient(srv, loop=loop)
    loop.run_until_complete(client.start_server())
    return client

async def test_edge_namespace(cli, cli_edge):
    resp = await cli.put('/do/something', json={})
    assert resp.status in [hs.OK, hs.CREATED, hs.NO_CONTENT]
    resp = await cli_edge.get('/do/something')
    assert resp.status in [hs.OK, hs.CREATED, hs.NO_CONTENT]
The above call to cli.put goes to the server intended for cli_edge. I will have several more tests that need to communicate with both servers.
I'm using Python 3.7 and pytest with the asyncio and aiohttp extensions.
Update: the suggested code works; the error was elsewhere in my server implementation.
You can add a finalizer that closes the server and client cleanly; take the request param in the pytest fixtures and register it:
@pytest.fixture
def cli(loop, request):
    # ... build srv and client as above ...
    def fin():
        loop.run_until_complete(srv.close())
        loop.run_until_complete(client.close())
    request.addfinalizer(fin)
    return client
I'm trying to implement a REQ/REP pattern with Python 3 asyncio and ZeroMQ.
My client's async function:
import zmq
import os
from time import time
import asyncio
import zmq.asyncio

print('Client %i' % os.getpid())

context = zmq.asyncio.Context(1)
loop = zmq.asyncio.ZMQEventLoop()
asyncio.set_event_loop(loop)

async def client():
    socket = context.socket(zmq.REQ)
    socket.connect('tcp://11.111.11.245:5555')
    while True:
        data = zmq.Message(str(os.getpid()).encode('utf8'))
        start = time()
        print('send')
        await socket.send(data)
        print('wait...')
        data = await socket.recv()
        print('recv')
        print(time() - start, data)

loop.run_until_complete(client())
As I understand it, the socket.connect('tcp://11.111.11.245:5555') call is a blocking method.
How do I make a non-blocking connect call in my case?
As far as I understand the ZeroMQ API, the .connect() call is not synchronous with establishing the real connection (the underlying API is non-blocking unless the wrapper introduces blocking; ref. below).
The connection will not be performed immediately but as needed by ØMQ. Thus a successful invocation of zmq_connect() does not indicate that a physical connection was or can actually be established.
Ref.: ZeroMQ API - zmq_connect(3)
I am writing a websocket server in Python. I have tried the approach below with txws, autobahn, and tornado, all with similar results.
I see massive memory consumption with secure websockets and I cannot figure out where or why this is happening. Below is an example in tornado, but I can provide examples in autobahn or txws.
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
import json

class AuthHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print('new connection for auth')
    def on_message(self, message):
        message = json.loads(message)
        client_id = message['client_id']
        if client_id not in app.clients:
            app.clients[client_id] = self
        self.write_message('Agent Recorded')
    def on_close(self):
        print('auth connection closed')

class MsgHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print('new connection for msg')
    def on_message(self, message):
        message = json.loads(message)
        to_client = message['client_id']
        if to_client in app.clients:
            app.clients[to_client].write_message('You got a message')
    def on_close(self):
        print('msg connection closed')

app = tornado.web.Application([
    (r'/auth', AuthHandler),
    (r'/msg', MsgHandler)
])
app.clients = {}

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(app, ssl_options={
        'certfile': 'tests/keys/server.crt',
        'keyfile': 'tests/keys/server.key'
    })
    http_server.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
After making around 10,000 connections I find I'm using around 700 MB of memory with SSL, compared to 43 MB without, and I never get it back unless I kill the process. The problem seems closely tied to the number of connections made rather than the messages sent.
The consumption happens independent of the client (I wrote my own client and tried other clients too).
Are secure websockets really that much more memory-intensive than plain websockets? Or is my server code not implementing it correctly?
I think the best solution is to put a real web server (nginx, Apache) in front as a proxy and let it manage the SSL layer.
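For example, a TLS-terminating websocket proxy in nginx can be sketched roughly like this (the server name, certificate paths, and upstream port are placeholders; the Upgrade/Connection headers are what keep the websocket handshake working through the proxy):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                     # placeholder

    ssl_certificate     /etc/nginx/server.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/server.key;

    location / {
        proxy_pass http://127.0.0.1:8000;        # plain-ws tornado backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

The tornado process then listens on port 8000 without ssl_options, and nginx owns all the TLS buffers and session state.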