I'm using the Falcon framework in Python to build JSON responses for a web API.
For instance, I have a function called logic() that runs for 30-90 minutes. I want something like this:
When an HTTP client asks for /api/somepath.json we call somepath_handle()
somepath_handle() runs logic() in another thread/process
When logic() is finished, the thread is closed
somepath_handle() reads the return value of logic()
If somepath_handle() is killed before logic() has finished, the thread/process running logic() is not stopped until it finishes
The code:
def somepath_handle():
    run_async_logic()
    response = wait_for_async_logic_response()  # read response of logic()
    return_response(response)
If your process takes that long, I advise you to send the result to the user by email, or perhaps through a live notification system.
I am using a simple worker that processes commands from a queue. If you add simple response storage, you can process any request without losing it when the connection is lost.
Example:
This is the main module that uses falconframework.org to respond to requests.
main.py:
from flow import Flow
import falcon
import threading
import storage

__version__ = 0.1
__author__ = 'weldpua2008#gmail.com'

app = falcon.API(
    media_type='application/json')
app.add_route('/flow', Flow())

THREADS_COUNT = 1

# adding the workers to process the queue of commands
worker = storage.worker
for _ in xrange(THREADS_COUNT):
    thread = threading.Thread(target=worker)
    thread.daemon = True
    thread.start()
This is the simple storage module with the worker code.
storage.py:
from Queue import Queue
import subprocess
import logging

main_queque = Queue()

def worker():
    global main_queque
    while True:
        try:
            cmd = main_queque.get()
            # do_work(item)
            # time.sleep(5)
            handler = subprocess.Popen(
                cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
            stdout, stderr = handler.communicate()
            logging.critical("[queue_worker]: stdout:%s, stderr:%s, cmd:%s" % (stdout, stderr, cmd))
            main_queque.task_done()
        except Exception as error:
            logging.critical("[queue_worker:error] %s" % (error))
This is the class that will process any request [POST, GET].
flow.py:
import storage
import json
import falcon
import random

class Flow(object):

    def on_get(self, req, resp):
        storage_value = storage.main_queque.qsize()
        msg = {"qsize": storage_value}
        resp.body = json.dumps(msg, sort_keys=True, indent=4)
        resp.status = falcon.HTTP_200

    # curl -H "Content-Type: application/json" -d '{}' http://10.206.102.81:8888/flow
    def on_post(self, req, resp):
        r = random.randint(1, 10000000000000)
        cmd = 'sleep 1;echo "ss %s"' % str(r)
        storage.main_queque.put(cmd)
        storage_value = cmd
        msg = {"value": storage_value}
        resp.body = json.dumps(msg, sort_keys=True, indent=4)
        resp.status = falcon.HTTP_200
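To get the "simple response storage" mentioned above, a minimal sketch (my own addition, not part of the original example) is to put a job id on the queue together with the command and let the worker record each command's output under that id, so a client can poll for the result later; the results dict and the job_id naming are assumptions:

# storage.py (sketch): remember each command's output under a job id
results = {}  # job_id -> (stdout, stderr), filled in by the worker

def worker():
    while True:
        job_id, cmd = main_queque.get()
        handler = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        results[job_id] = handler.communicate()
        main_queque.task_done()

# flow.py (sketch): enqueue with a job id and let the client poll for it
def on_post(self, req, resp):
    job_id = str(random.randint(1, 10000000000000))
    storage.main_queque.put((job_id, 'sleep 1;echo "ss %s"' % job_id))
    resp.body = json.dumps({"job_id": job_id})

def on_get(self, req, resp):
    job_id = req.get_param('job_id')
    resp.body = json.dumps({"done": job_id in storage.results})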
I'm currently working with Python's ThreadPoolExecutor, and I wonder how I can re-execute a task once it is finished.
import random
import threading
import time
from concurrent.futures import as_completed
from concurrent.futures.thread import ThreadPoolExecutor

import requests

URLS = [
    'URL1',
    'URL2',
    'URL3',
    'URL4',
    'URL5',
    'URL6',
    'URL7',
    'URL8',
    'URL9',
    'URL10',
    'URL11',
]

def doRequest(url):
    response = requests.get(f'https://www.google.se/?{url}')
    time.sleep(random.randint(1, 3))
    return response

def ourLoop():
    with ThreadPoolExecutor(max_workers=2) as executor:
        future_tasks = [
            executor.submit(
                doRequest,
                url
            ) for url in URLS]

        for future in as_completed(future_tasks):
            response = future.result()
            print(f"Got result! -> {response}")

while True:
    t = threading.Thread(target=ourLoop, )
    t.start()

    print('Joining thread and waiting for it to finish...')
    t.join()
What I wonder is: once we get a response inside for future in as_completed(future_tasks):, I want that URL to go back into the "queue" so the requests keep being made constantly until I close the application. Is that possible using ThreadPoolExecutor?
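No answer is attached to this question here, but one common pattern (a sketch only, reusing doRequest and URLS from the snippet above) is to map each future back to its URL and resubmit the URL as soon as its future completes. wait() with FIRST_COMPLETED is used instead of as_completed(), because as_completed() only iterates over the futures it was originally given:

from concurrent.futures import FIRST_COMPLETED, wait
from concurrent.futures.thread import ThreadPoolExecutor

def endlessLoop():
    with ThreadPoolExecutor(max_workers=2) as executor:
        # map each pending future to the URL it was created from
        pending = {executor.submit(doRequest, url): url for url in URLS}
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for future in done:
                url = pending.pop(future)
                print(f"Got result! -> {future.result()}")
                # hand the same URL straight back to the pool
                pending[executor.submit(doRequest, url)] = url

With this loop there is no need to restart the thread in a module-level while True; a single thread running endlessLoop keeps the pool busy until the process exits.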
I'm trying to implement an async RPC client within a Flask server.
The idea is that each request spawns a thread with a UUID, and each request waits until there is a response with the correct UUID in the RpcClient's queue attribute.
The problem is that one request out of two fails. I think it might be a problem with multi-threading, but I don't see where it comes from.
The bug can be seen here.
Using debug prints, it seems that the message with the correct UUID is received in the _on_response callback and updates the queue attribute of that instance correctly, but the queue attribute within the /rpc_call/<payload> endpoint doesn't synchronize (so queue[uuid] has the response value in the RpcClient callback but is still None in the scope of the endpoint).
My code:
from flask import Flask, jsonify
from gevent.pywsgi import WSGIServer
import sys
import os
import pika
import uuid
import time
import threading

class RpcClient(object):
    """Asynchronous Rpc client."""
    internal_lock = threading.Lock()
    queue = {}

    def __init__(self):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='rabbitmq'))

        self.channel = self.connection.channel()
        self.channel.basic_qos(prefetch_count=1)
        self.channel.exchange_declare(exchange='kaldi_expe', exchange_type='topic')

        # Create all the queues and bind them to the corresponding routing key
        self.channel.queue_declare('request', durable=True)
        result = self.channel.queue_declare('answer', durable=True)
        self.channel.queue_bind(exchange='kaldi_expe', queue='request', routing_key='kaldi_expe.web.request')
        self.channel.queue_bind(exchange='kaldi_expe', queue='answer', routing_key='kaldi_expe.kaldi.answer')

        self.callback_queue = result.method.queue

        thread = threading.Thread(target=self._process_data_events)
        thread.setDaemon(True)
        thread.start()

    def _process_data_events(self):
        self.channel.basic_consume(self.callback_queue, self._on_response, auto_ack=True)

        while True:
            with self.internal_lock:
                self.connection.process_data_events()
                time.sleep(0.1)

    def _on_response(self, ch, method, props, body):
        """On response we simply store the result in a local dictionary."""
        self.queue[props.correlation_id] = body

    def send_request(self, payload):
        corr_id = str(uuid.uuid4())
        self.queue[corr_id] = None

        with self.internal_lock:
            self.channel.basic_publish(exchange='kaldi_expe',
                                       routing_key="kaldi_expe.web.request",
                                       properties=pika.BasicProperties(
                                           reply_to=self.callback_queue,
                                           correlation_id=corr_id,
                                       ),
                                       body=payload)
        return corr_id
def flask_app():
    app = Flask("kaldi")

    @app.route('/', methods=['GET'])
    def server_is_up():
        return 'server is up', 200

    @app.route('/rpc_call/<payload>')
    def rpc_call(payload):
        """Simple Flask implementation for making asynchronous Rpc calls. """
        corr_id = app.config['RPCclient'].send_request(payload)

        while app.config['RPCclient'].queue[corr_id] is None:
            # print("queue server: " + str(app.config['RPCclient'].queue))
            time.sleep(0.1)

        return app.config['RPCclient'].queue[corr_id]

    return app

if __name__ == '__main__':
    while True:
        try:
            rpcClient = RpcClient()
            app = flask_app()
            app.config['RPCclient'] = rpcClient
            print("Rabbit MQ is connected, starting server", file=sys.stderr)
            app.run(debug=True, threaded=True, host='0.0.0.0')
        except pika.exceptions.AMQPConnectionError as e:
            print("Waiting for RabbitMq startup" + str(e), file=sys.stderr)
            time.sleep(1)
        except Exception as e:
            worker.log.error(e)
            exit(e)
I found where the bug came from:
The debug=True in the line app.run(debug=True, threaded=True, host='0.0.0.0') restarts the server at startup.
The whole script is then re-executed from the beginning. Because of that, another rpcClient is initialized and consumes from the same queue, while the previous thread is still running. This causes two RpcClients to consume from the same queue, one of them effectively useless.
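A minimal way to avoid the double initialization (my suggestion, not part of the original answer) is to keep debug mode but disable the reloader:

# Flask's reloader re-executes the script, which creates a second RpcClient
# consuming from the same queue; disabling it keeps a single consumer thread.
app.run(debug=True, use_reloader=False, threaded=True, host='0.0.0.0')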
I have this piece of code which does a psubscribe to Redis. I want to run it in a thread, working in the background, while the other part of the code checks for notifications from it.
def psubscribe(context, param1, param2, param3):
    context.test_config = load_config()
    RedisConnector(context.test_config["redis_host"],
                   context.test_config["redis_db_index"])
    redis_notification_subscriber_connector = RedisConnector(
        context.test_config["notification__redis_host"],
        int(param3),
        int(context.test_config["notification_redis_port"]))
    context.redis_connectors = redis_notification_subscriber_connector.psubscribe_to_redis_event(
        param1,
        timeout_seconds=int(param2))
What I have done till now (but it's not running):
context.t = threading.Thread(target=psubscribe, args=['param1', 'param2', 'param3'])
context.t.start()
It is actually working. I think you just didn't need to pass the context variable to your psubscribe function.
Here is an example:
Start an HTTP server that listens on port 8000 in a background thread
Send HTTP requests to it and validate the responses
Feature scenario:
Scenario: Run background process and validate responses
    Given Start background process
    Then Validate outputs
background_steps.py file:
import threading
import logging
from behave import *
from features.steps.utils import run_server
import requests

@given("Start background process")
def step_impl(context):
    context.t = threading.Thread(target=run_server, args=[8000])
    context.t.daemon = True
    context.t.start()

@then("Validate outputs")
def step_impl(context):
    response = requests.get('http://127.0.0.1:8000')
    assert response.status_code == 501
utils.py file:
from http.server import HTTPServer, BaseHTTPRequestHandler

def run_server(port, server_class=HTTPServer, handler_class=BaseHTTPRequestHandler):
    server_address = ('', port)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()
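Applied back to the original psubscribe function, the wiring would look like the sketch below (hypothetical step name; if you keep the context parameter in psubscribe's signature, it has to be included in args, otherwise drop it from the signature as suggested above):

# hypothetical step: run psubscribe in the background, like run_server above
@given("Start redis subscriber in background")
def step_impl(context):
    context.t = threading.Thread(target=psubscribe,
                                 args=[context, 'param1', 'param2', 'param3'])
    context.t.daemon = True
    context.t.start()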
import tornado.web
import tornado.ioloop
from apiApplicationModel import userData
from cleanArray import Label_Correction
import json
import requests

colName = ['age', 'resting_blood_pressure', 'cholesterol', 'max_heart_rate_achieved', 'st_depression', 'num_major_vessels', 'st_slope_downsloping', 'st_slope_flat', 'st_slope_upsloping', 'sex_male', 'chest_pain_type_atypical angina', 'chest_pain_type_non-anginal pain', 'chest_pain_type_typical angina', 'fasting_blood_sugar_lower than 120mg/ml', 'rest_ecg_left ventricular hypertrophy', 'rest_ecg_normal', 'exercise_induced_angina_yes', 'thalassemia_fixed defect', 'thalassemia_normal',
           'thalassemia_reversable defect']
class processRequestHandler(tornado.web.RequestHandler):
    def post(self):
        data_input_array = []
        for name in colName:
            x = self.get_body_argument(name, default=0)
            data_input_array.append(int(x))
        label = Label_Correction(data_input_array)
        finalResult = int(userData(label))
        output = json.dumps({"Giveput": finalResult})
        self.write(output)

class basicRequestHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('report.html')

class staticRequestHandler(tornado.web.RequestHandler):
    def post(self):
        data_input_array = []
        for name in colName:
            x = self.get_body_argument(name, default=0)
            data_input_array.append(str(x))
        send_data = dict(zip(colName, data_input_array))
        print(send_data)
        print(type(send_data))

        url = "http://localhost:8887/output"
        headers = {}
        response = requests.request('POST', url, headers=headers, data=send_data)
        print(response.text.encode('utf8'))
        print("DONE")

if __name__ == '__main__':
    app = tornado.web.Application([(r"/", basicRequestHandler),
                                   (r"/result", staticRequestHandler),
                                   (r"/output", processRequestHandler)])
    print("API IS RUNNING.......")
    app.listen(8887)
    tornado.ioloop.IOLoop.current().start()
Actually I am trying to create an API whose result can then be used, but the page keeps loading and no response is shown.
The response should be a Python dictionary sent by the post method of the processRequestHandler class.
After using a debugger, the lines after response = requests.request('POST', url, headers=headers, data=send_data) are never executed.
The processRequestHandler class works fine when checked with Postman.
requests.request is a blocking method. This blocks the event loop and prevents any other handlers from running. In a Tornado handler, you need to use Tornado's AsyncHTTPClient (or another non-blocking HTTP client such as aiohttp) instead.
async def post(self):
    ...
    # fetch() expects a str/bytes body, hence the urlencode of the send_data dict
    body = urllib.parse.urlencode(send_data)
    response = await AsyncHTTPClient().fetch(url, method='POST', headers=headers, body=body)
See the Tornado user's guide for more information.
I want to send data through websockets as soon as a client is connected.
The data lives in a different place than the websocket handler. How can I get the data to the client?
The server should hold the loop and the handler. In the connector I connect to a TCP socket to get the data out of some hardware. I expect to have no more than 6 websockets open at a time. The data comes as a stream out of the TCP socket.
server.py
import os
from tornado import web, websocket
import asyncio
import connector

class StaticFileHandler(web.RequestHandler):
    def set_default_headers(self):
        self.set_header("Access-Control-Allow-Origin", "*")

    def get(self):
        self.render('index.html')

class WSHandler(websocket.WebSocketHandler):
    def open(self):
        print('new connection')
        self.write_message("connected")

    def on_message(self, message):
        print('message received %s' % message)
        self.write_message("pong")

    def on_close(self):
        print('connection closed')

public_root = 'web_src'

handlers = [
    (r'/', StaticFileHandler),
    (r'/ws', WSHandler),
]

settings = dict(
    template_path=os.path.join(os.path.dirname(__file__), public_root),
    static_path=os.path.join(os.path.dirname(__file__), public_root),
    debug=True
)

app = web.Application(handlers, **settings)

sensorIP = "xxx.xxx.xxx.xxx"

if __name__ == "__main__":
    app.listen(8888)
    asyncio.ensure_future(connector.main_task(sensorIP))
    asyncio.get_event_loop().run_forever()
connector.py
import yaml
import asyncio

class RAMReceiver:
    def __init__(self, reader):
        self.reader = reader
        self.remote_data = None
        self.initParams = None

    async def work(self):
        i = 0
        while True:
            data = await self.reader.readuntil(b"\0")
            self.remote_data = yaml.load(data[:-1].decode("utf-8",
                                                          "backslashreplace"))
            # here I want to emit some data
            # send self.remote_data to the websockets

            if i == 0:
                i += 1
                self.initParams = self.remote_data
                # here I want to emit some data after the open event is triggered
                # send self.initParams as soon as a client has connected

async def main_task(host):
    tasks = []
    (ram_reader,) = await asyncio.gather(asyncio.open_connection(host, 51000))
    receiver = RAMReceiver(ram_reader[0])
    tasks.append(receiver.work())

    while True:
        await asyncio.gather(*tasks)
You can use Tornado's add_callback function to call a method on your websocket handler to send the messages.
Here's an example:
1. Create an additional method on your websocket handler which will receive a message from connector.py and send it to the connected clients:
# server.py

class WSHandler(websocket.WebSocketHandler):

    # make it a classmethod so that
    # it can be accessed directly
    # from the class without `self`
    @classmethod
    async def send_data(cls, data):
        ...  # write your code for sending data to the clients
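One possible way to fill in send_data (a sketch, not from the original answer; the class-level clients set is my own addition) is to track the connected handlers in open/on_close and write to each of them:

# server.py (sketch)
class WSHandler(websocket.WebSocketHandler):
    clients = set()  # all currently connected handler instances

    def open(self):
        WSHandler.clients.add(self)
        self.write_message("connected")

    def on_close(self):
        WSHandler.clients.discard(self)

    @classmethod
    async def send_data(cls, data):
        # push the data to every connected client
        for client in cls.clients:
            client.write_message(str(data))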
2. Pass the currently running IOLoop and WSHandler.send_data to your connector.py:
# server.py

from tornado import ioloop

...

if __name__ == "__main__":
    ...
    io_loop = ioloop.IOLoop.current()  # current IOLoop
    callback = WSHandler.send_data

    # pass io_loop and callback to main_task
    asyncio.ensure_future(connector.main_task(sensorIP, io_loop, callback))
    ...
3. Then modify the main_task function in connector.py to receive io_loop and callback, and pass them on to RAMReceiver.
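The original answer leaves this step as prose; a sketch of what it could look like, mirroring the existing main_task:

# connector.py (sketch)
async def main_task(host, io_loop, callback):
    (ram_reader,) = await asyncio.gather(asyncio.open_connection(host, 51000))
    # hand the loop and the callback down to the receiver
    receiver = RAMReceiver(ram_reader[0], io_loop, callback)
    await receiver.work()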
4. Finally, use io_loop.add_callback to call WSHandler.send_data:
class RAMReceiver:
    def __init__(self, reader, io_loop, callback):
        ...
        self.io_loop = io_loop
        self.callback = callback

    async def work(self):
        ...
        data = "Some data"
        self.io_loop.add_callback(self.callback, data)
        ...