I have a Flask app which, upon a certain REST call, runs several modules using a ProcessPoolExecutor.
UPDATED: Added Redis as a message queue (running in Docker, with redis as the Redis host name).
socketio = SocketIO(app, message_queue='redis://redis')
(...)
def emit_event(evt, message):
    socketio.emit(evt, message, namespace='/test')

@app.route('/info', methods=['GET'])
def info():
    emit_event('update_reports', '')
(...)
if __name__ == "__main__":
    socketio.run(app, host='0.0.0.0', threaded=True)
Now that I've added Redis, emitting from the main app still works.
Here is some of the code where I run the sub-process:
def __init__(self):
    self.executor = futures.ProcessPoolExecutor(max_workers=4)
    self.socketio = SocketIO(async_mode='eventlet', message_queue='redis://redis')
(...)
future = self.executor.submit(process, params)
future.add_done_callback(functools.partial(self.finished_callback, pid))
Then, in that callback, I emit the event:
def finished_callback(self, pid, future):
    pid.status = Status.DONE.value
    pid.finished_at = datetime.datetime.utcnow()
    pid.save()
    self.socketio.emit('update_reports', 'done', namespace='/test')
Getting and emitting messages from/to the client from my controller works just fine, and if I call /info from curl or Postman my client gets the message. This is mostly for notifications, e.g. notifying the client when a long-running process has finished. But when I try to emit an event the same way from within this subprocess callback, I get this error:
INFO:socketio:emitting event "update_reports" to all [/test]
ERROR:socketio:Cannot publish to redis... retrying
ERROR:socketio:Cannot publish to redis... giving up
What am I doing wrong?
Thanks!
There are specific rules that you need to follow in setting up the Flask-SocketIO extension so that external processes can emit, which include the use of a message queue that the main and external processes use to coordinate efforts. See the Emitting from an External Process section of the documentation for instructions.
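In case it helps, the documented pattern boils down to something like the minimal sketch below (assuming Redis is reachable at redis://redis): the emitting process creates its own SocketIO instance bound only to the message queue, with no app and, normally, no async_mode, while the main server keeps its SocketIO(app, message_queue='redis://redis') instance; the two then coordinate through Redis.

from flask_socketio import SocketIO

def notify_done():
    # Emit-only instance: connected to the message queue, not to the Flask app.
    external_sio = SocketIO(message_queue='redis://redis')
    external_sio.emit('update_reports', 'done', namespace='/test')

If the emit still fails, it is worth checking that this instance is created in the process that actually does the emitting, rather than being constructed once and shared across the ProcessPoolExecutor fork.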
Related
I've created a script which sends messages using Telethon. The receivers are not always the same: the number of receivers and their IDs are taken from a MySQL table. The multiprocessing script runs fine in the expected loop when started from the command prompt, but as soon as it's started as a service the messages are not sent.
Please see the code below, which includes the function that sends out the messages. This function is called by another function which loops over the result of a MySQL query.
Can someone shed some light on why the function runs fine from the prompt but not as a service?
import configparser
from telethon import TelegramClient

# get configuration
config = configparser.ConfigParser()
config.read('/etc/p2000.cfg')
telegram_api_id = config.get('telegram', 'api_id')
telegram_api_hash = config.get('telegram', 'api_hash')
telegram_bot_name = config.get('telegram', 'bot_name')

client = TelegramClient(telegram_bot_name, telegram_api_id, telegram_api_hash)

def p2k_send_telegram(PeerID, Message):
    async def main():
        await client.send_message(int(PeerID), Message)
    with client:
        client.loop.run_until_complete(main())
Okay, the answer was easy and right in front of me! The issue could be isolated to the client variable: when running as a service under systemd, the session file has to be defined with its full path!
Something like this:
client = TelegramClient('/full/path/to/my.session', telegram_api_id, telegram_api_hash)
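If you'd rather not hard-code the absolute path, a small sketch like this (assuming the session file should simply live next to the script) builds it at runtime, so the service's working directory no longer matters:

import os
from telethon import TelegramClient

# Resolve the session file relative to this script instead of the process's cwd.
session_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), telegram_bot_name)
client = TelegramClient(session_path, telegram_api_id, telegram_api_hash)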
I have a client with an open connection to a server which accepts push notifications from the server. I would like to display the data from the push notifications in a plotly/dash page in near real time.
I've been considering my options as discussed in the documentation page.
If I have multiple push-notification clients running, one in each potential plotly/dash worker process, then I would have to de-duplicate events; doable, but bug-prone and quirky to code.
The ideal solution seems to be to run the push network client in only one process and push those notifications into a dcc.Store object. I assume I would do that by populating a queue in the push client's async callback, and then, on a dcc.Interval timer, gathering any new data from that queue and placing it in the dcc.Store object. All other callbacks would then be triggered by the dcc.Store object, possibly in separate Python processes.
From the documentation I don't see how I would guarantee that the callback interacting with the push network client runs on the main process and never on any worker process. Is this possible? The dcc.Interval documentation doesn't mention this detail.
Is there a way to force the dcc.Interval onto one process, or is that the normal operation under Dash with multiple worker processes? Or is there another recommended approach to handling data from a push notification network client?
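For reference, here is a minimal sketch of the Interval + Store pattern I have in mind (hypothetical ids, with a plain queue.Queue standing in for the push client's queue; note that this on its own does nothing to pin the polling callback to a single worker process, which is exactly my question):

import queue
import dash_core_components as dcc
import dash_html_components as html
from dash import Dash
from dash.dependencies import Input, Output, State

incoming = queue.Queue()  # filled by the push client's async callback (not shown)

app = Dash(__name__)
app.layout = html.Div([
    dcc.Interval(id="poll", interval=1000),  # tick every second
    dcc.Store(id="store", data=[]),          # shared state the other callbacks react to
    html.Div(id="out"),
])

# On every tick, drain the queue into the store.
@app.callback(Output("store", "data"), [Input("poll", "n_intervals")], [State("store", "data")])
def drain_queue(_, data):
    while not incoming.empty():
        data.append(incoming.get_nowait())
    return data

# Any other callback is then driven by the store.
@app.callback(Output("out", "children"), [Input("store", "data")])
def render(data):
    return str(data[-1]) if data else "waiting..."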
An alternative to the Interval component pulling updates at regular intervals could be to use a Websocket component to enable push notifications. Simply add the component to the layout and add a clientside callback that performs the appropriate updates based on the received message,
app.clientside_callback("function(msg){return \"Response from websocket: \" + msg.data;}",
                        Output("msg", "children"), [Input("ws", "message")])
Here is a complete example using a SocketPool to set up endpoints for sending messages,
import dash_html_components as html
from dash import Dash
from dash.dependencies import Input, Output
from dash_extensions.websockets import SocketPool, run_server
from dash_extensions import WebSocket

# Create example app.
app = Dash(prevent_initial_callbacks=True)
socket_pool = SocketPool(app)
app.layout = html.Div([html.Div(id="msg"), WebSocket(id="ws")])

# Update div using websocket.
app.clientside_callback("function(msg){return \"Response from websocket: \" + msg.data;}",
                        Output("msg", "children"), [Input("ws", "message")])

# End point to send message to current session.
@app.server.route("/send/<message>")
def send_message(message):
    socket_pool.send(message)
    return f"Message [{message}] sent."

# End point to broadcast message to ALL sessions.
@app.server.route("/broadcast/<message>")
def broadcast_message(message):
    socket_pool.broadcast(message)
    return f"Message [{message}] broadcast."

if __name__ == '__main__':
    run_server(app)
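To try it out, open the app in a browser so the WebSocket connects, then hit one of the endpoints, for example from a short script like this (assuming the app is served on the usual Dash port 8050; adjust the port if run_server binds elsewhere in your setup):

import requests

# Push a message to every connected session (hypothetical host/port).
print(requests.get("http://127.0.0.1:8050/broadcast/hello").text)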
I have a Pyramid application that sends SSE messages. It basically works like this:
def message_generator():
    for i in range(100):
        print("Sending message:" + str(i))
        yield "data: %s\n\n" % json.dumps({'message': str(i)})
        time.sleep(random.randint(1, 10))

@view_config(route_name='events')
def events(request):
    headers = [('Content-Type', 'text/event-stream'),
               ('Cache-Control', 'no-cache')]
    response = Response(headerlist=headers)
    response.app_iter = message_generator()
    return response
When I browse to /events I get the events. When I move to another page the events stop, and when I close the browser the events stop.
The problem happens, for example, if I am on /events and I switch off the computer. The server does not know that the client is gone, and message_generator keeps sending messages into the void.
This page, A Look at Server-Sent Events, mentions this:
...the server should detect this (when the client stops) and stop
sending further events as the client is no longer listening for them.
If the server does not do this, then it will essentially be sending
events out into a void.
Is there a way to detect this with Pyramid? I tried request.add_finished_callback(), but that callback seems to be called as soon as the view hits return response, not when the client disconnects.
I use Gunicorn with gevent to start the server.
Any idea is highly appreciated
From PEP 3333:
Applications returning a generator or other custom iterator should not assume the entire iterator will be consumed, as it may be closed early by the server.
Basically a WSGI server "should" invoke the close() method on the app_iter when a client disconnects (all generators, such as in your example, support this automatically). However, a server is not required to do it, and it seems many WSGI servers do not. For example, you mentioned gunicorn (which I haven't independently verified), but I did verify that waitress also does not. I opened [1] on waitress as a result, and have been working on a fix. Streaming responses in WSGI environments is shaky at best and usually depends on the server. For example, on waitress, you need to set send_bytes=0 to avoid it buffering the response data.
[1] https://github.com/Pylons/waitress/issues/236
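On a server that does call close(), the simplest way to react inside the view is to rely on the fact that closing a generator runs its finally block (by raising GeneratorExit at the current yield). A minimal sketch, reusing the generator from the question:

def message_generator():
    try:
        for i in range(100):
            yield "data: %s\n\n" % json.dumps({'message': str(i)})
            time.sleep(random.randint(1, 10))
    finally:
        # Runs when the WSGI server calls close() on the app_iter,
        # i.e. the client disconnected (or the loop simply finished).
        print("SSE stream closed, stopping message_generator")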
I made a connection app with RabbitMQ. It works fine, but when I stop the RabbitMQ process all of my requests get lost. I want my requests to be saved even if the RabbitMQ service is killed, so that after the service restarts they all return to their own places.
Here is my rabitmq.py:
import pika
import SimilarURLs

data = ''

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

def rabit_mq_start(Parameter):
    channel.queue_declare(queue='req')
    a = (take(datas=Parameter.decode()))
    channel.basic_publish(exchange='',
                          routing_key='req',
                          body=str(a))
    print(" [x] Sent {}".format(a))
    return a

channel.start_consuming()

def take(datas):
    returns = SimilarURLs.start(data=datas)
    return returns
Also, I'm sorry for any writing mistakes in my question.
You need to enable publisher confirms (via the confirm_delivery method on your channel object). Then your application must keep track of what messages have been confirmed as published, and what messages have not. You will have to implement this yourself. When RabbitMQ is stopped and started again, your application can re-publish the messages that weren't confirmed.
It would be best to use the asynchronous publisher example as a guide. If you use BlockingConnection you won't get the async notifications when a message is confirmed, defeating their purpose.
If you need further assistance after trying to implement this yourself I suggest following up on the pika-python mailing list.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
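For the "survive a broker restart" part specifically, the queue also needs to be declared durable and the messages published as persistent; otherwise even confirmed messages are gone when RabbitMQ stops. A minimal sketch with pika 1.x and a BlockingConnection (fine for showing the flags, though as noted above the async publisher is the better model for actually tracking confirms; also note that re-declaring an existing non-durable queue as durable will fail, so the old queue may need to be deleted first):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Durable queue: the queue itself survives a broker restart.
channel.queue_declare(queue='req', durable=True)

# Ask the broker to confirm publishes on this channel.
channel.confirm_delivery()

try:
    channel.basic_publish(
        exchange='',
        routing_key='req',
        body='some request',
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
        mandatory=True)
    print(" [x] Publish confirmed by the broker")
except pika.exceptions.UnroutableError:
    # Track this message in the application and re-publish it later.
    print(" [!] Message could not be routed; will retry")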
My app receives jobs to do from a web server through sockets. At the moment, when a job is running on the app, I can only send two more messages to the app before it stops receiving any more.
def handlemsg(self, data):
    self.sendmsg(cPickle.dumps('received'))  # send web server notification received
    data = cPickle.loads(data)
    print data
    # Terminate a Job
    if data[-1] == 'terminate':
        self.terminate(data[0])
    # Check if app is Available
    elif data[-1] == 'prod':
        pass
    # Run Job
    else:
        supply = supply_thread(data, self.app)
        self.supplies[data['job_name']] = supply
        supply.daemon = True
        supply.start()
I can send as many prods as I like to the server, but once I send a job that activates a thread, responses become limited. For some reason it will allow me to send another two prods while the job is running, but after that the print message no longer appears; the job just keeps on working.
Any ideas? Thanks
I was running my data through a datagram socket configuration. I switched to a stream socket and that seemed to resolve it.
http://turing.cs.camosun.bc.ca/COMP173/notes/PySox.html
Was helpful in the resolution.
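For anyone hitting the same thing, the change boils down to creating the sockets with SOCK_STREAM (TCP) instead of SOCK_DGRAM (UDP). A minimal sketch with Python's standard socket module (hypothetical host and port; the code above uses a higher-level wrapper around this):

import socket

HOST, PORT = "127.0.0.1", 9000  # hypothetical endpoint

# Stream socket: one reliable, ordered connection instead of independent
# datagrams that can be silently dropped while the receiver is busy.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))
sock.sendall(b"prod")
reply = sock.recv(4096)
sock.close()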