Server Sent Events with Pyramid - How to detect if the connection to the client has been lost

I have a Pyramid application that sends SSE messages. It works basically like this:
import json
import random
import time

from pyramid.response import Response
from pyramid.view import view_config

def message_generator():
    for i in range(100):
        print("Sending message:" + str(i))
        yield "data: %s\n\n" % json.dumps({'message': str(i)})
        time.sleep(random.randint(1, 10))

@view_config(route_name='events')
def events(request):
    headers = [('Content-Type', 'text/event-stream'),
               ('Cache-Control', 'no-cache')]
    response = Response(headerlist=headers)
    response.app_iter = message_generator()
    return response
When I browse to /events I get the events. When I move to another page or close the browser, the events stop.
The problem happens, for example, if I am on /events and I switch off the computer. The server does not know that the client is gone, and message_generator keeps sending messages into the void.
The page A Look at Server-Sent Events mentions this:
...the server should detect this (when the client stops) and stop
sending further events as the client is no longer listening for them.
If the server does not do this, then it will essentially be sending
events out into a void.
Is there a way to detect this with Pyramid? I tried
request.add_finished_callback()
but this callback seems to be invoked as soon as the view executes
return response
rather than when the client disconnects.
I use Gunicorn with gevent to start the server.
Any idea is highly appreciated.

From PEP 3333:
Applications returning a generator or other custom iterator should not assume the entire iterator will be consumed, as it may be closed early by the server.
Basically, a WSGI server "should" invoke the close() method on the app_iter when a client disconnects (all generators, such as the one in your example, support this automatically). However, a server is not required to do it, and it seems many WSGI servers do not. For example, you mentioned gunicorn (which I haven't independently verified), but I did verify that waitress does not either. I opened [1] on waitress as a result and have been working on a fix.
Streaming responses in WSGI environments are shaky at best and usually depend on the server. For example, on waitress you need to set send_bytes=0 to stop it from buffering the response data.
[1] https://github.com/Pylons/waitress/issues/236
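To make the disconnect visible inside the application (a minimal sketch, assuming a server that does invoke close() on the app_iter), you can catch GeneratorExit in the generator, since closing a generator raises that exception at the paused yield:

import json
import time

def message_generator():
    try:
        for i in range(100):
            yield "data: %s\n\n" % json.dumps({'message': str(i)})
            time.sleep(1)
    except GeneratorExit:
        # Raised at the paused yield when the WSGI server calls
        # close() on the app_iter after the client disconnects.
        print("Client disconnected; stopping the event stream")
        raise

On a server that never calls close(), this except block simply never runs, which is exactly the gap described above.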

Related

Pyzmq swallows error when connecting to blocked port (firewall)

I'm trying to connect to a server using Python's pyzmq package.
In my case I'm expecting an error, because the server the client connects to blocks the designated port with a firewall.
However, my code runs through until I terminate the context, and then blocks infinitely.
I tried several things to catch the error condition beforehand, but none of them succeeded.
My base example looks like this:
import zmq

endpoint = "tcp://{IP}:{PORT}"

zmq_ctx = zmq.Context()
sock = zmq_ctx.socket(zmq.PAIR)
sock.connect(endpoint)   # <--- I would expect an exception thrown here, but this is not the case
sock.disconnect(endpoint)
sock.close()
zmq_ctx.term()           # <--- This blocks infinitely
I extended the sample with sock.poll(1000, zmq.POLLOUT | zmq.POLLIN), hoping that the poll command would fail if the connection could not be established due to the firewall.
Then I tried to solve the issue by setting some socket options before the sock = zmq_ctx.socket(zmq.PAIR):
zmq_ctx.setsockopt(zmq.IMMEDIATE, 1)        # hoping this would lead to an error on the first `send`
zmq_ctx.setsockopt(zmq.HEARTBEAT_IVL, 100)  # hoping the heartbeat would fail if the connection could not be established
zmq_ctx.setsockopt(zmq.HEARTBEAT_TTL, 500)
zmq_ctx.setsockopt(zmq.LINGER, 500)         # hoping `zmq_ctx.term()` would throw an exception when the linger period is over
I also temporarily added a sock.send_string("bla"), but it just enqueued the message without returning an error and did not provide any new insights.
The only thing I can imagine to solve the problem would be using the telnet package and attempting a connection to the endpoint.
However, adding a dependency just for the purpose of testing a connection is not really satisfactory.
Do you have any idea how to detect a blocked port from the client side while still using pyzmq? I'm not happy that the code always runs into the blocking zmq_ctx.term() in that case.
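One avenue not tried above (a hedged suggestion on my part, not a confirmed answer) is pyzmq's socket monitor, which surfaces low-level connection events such as connect retries, so the client can infer that the endpoint is unreachable:

import zmq
from zmq.utils.monitor import recv_monitor_message

endpoint = "tcp://{IP}:{PORT}"  # placeholder endpoint from the question

zmq_ctx = zmq.Context()
sock = zmq_ctx.socket(zmq.PAIR)

# Ask libzmq to publish lifecycle events for this socket on an
# internal monitor socket that we can poll.
monitor = sock.get_monitor_socket()
sock.connect(endpoint)

# Wait up to 5 seconds for a connection-related event.
if monitor.poll(5000):
    evt = recv_monitor_message(monitor)
    if evt['event'] == zmq.EVENT_CONNECTED:
        print("connected")
    elif evt['event'] in (zmq.EVENT_CONNECT_DELAYED, zmq.EVENT_CONNECT_RETRIED):
        print("connection not coming up; the port may be blocked")

sock.disable_monitor()
sock.close(linger=0)  # linger=0 so zmq_ctx.term() cannot block on unsent messages
zmq_ctx.term()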

How to accept push notifications in a plotly/dash app?

I have a client with an open connection to a server, which accepts push notifications from the server. I would like to display the data from the push notifications on a plotly/dash page in near real time.
I've been considering my options as discussed in the documentation page.
If I have multiple push-notification clients running in each potential plotly/dash worker process, then I have to de-duplicate events: doable, but bug-prone and quirky to code.
The ideal solution seems to be to run the push network client in only one process and push those notifications into a dcc.Store object. I assume I would do that by populating a queue in the push client's async callback, and on a dcc.Interval timer gathering any new data in that queue and placing it in the dcc.Store object (see the sketch below). All other callbacks would then be triggered on the dcc.Store object, possibly in separate Python processes.
From the documentation I don't see how I could pin the callback that interacts with the push network client to the main process and ensure it doesn't run on any worker processes. Is this possible? The dcc.Interval documentation doesn't mention this detail.
Is there a way to force the dcc.Interval onto one process, or is that the normal operation under Dash with multiple worker processes? Or is there another recommended approach to handling data from a push-notification network client?
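For concreteness, the queue-draining pattern described above might look like the following sketch (my own illustration with hypothetical component ids; it deliberately ignores the multi-worker concern being asked about):

import queue

import dash
from dash import dcc, html
from dash.dependencies import Input, Output, State

# Filled by the push client's async callback in the (single) push process.
push_queue = queue.Queue()

app = dash.Dash(__name__)
app.layout = html.Div([
    dcc.Store(id="push-store", data=[]),
    dcc.Interval(id="poll", interval=1000),  # fires every second
    html.Div(id="out"),
])

@app.callback(Output("push-store", "data"),
              Input("poll", "n_intervals"),
              State("push-store", "data"))
def drain_queue(_, data):
    # Move anything the push client has queued into the dcc.Store.
    while True:
        try:
            data.append(push_queue.get_nowait())
        except queue.Empty:
            return data

@app.callback(Output("out", "children"), Input("push-store", "data"))
def render(data):
    return str(data[-1]) if data else "no messages yet"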
An alternative to the Interval component pulling updates at regular intervals could be to use a WebSocket component to enable push notifications. Simply add the component to the layout and add a clientside callback that performs the appropriate updates based on the received message:
app.clientside_callback("function(msg){return \"Response from websocket: \" + msg.data;}",
                        Output("msg", "children"), [Input("ws", "message")])
Here is a complete example using a SocketPool to set up endpoints for sending messages:
import dash_html_components as html
from dash import Dash
from dash.dependencies import Input, Output
from dash_extensions.websockets import SocketPool, run_server
from dash_extensions import WebSocket

# Create example app.
app = Dash(prevent_initial_callbacks=True)
socket_pool = SocketPool(app)
app.layout = html.Div([html.Div(id="msg"), WebSocket(id="ws")])

# Update div using websocket.
app.clientside_callback("function(msg){return \"Response from websocket: \" + msg.data;}",
                        Output("msg", "children"), [Input("ws", "message")])

# End point to send message to current session.
@app.server.route("/send/<message>")
def send_message(message):
    socket_pool.send(message)
    return f"Message [{message}] sent."

# End point to broadcast message to ALL sessions.
@app.server.route("/broadcast/<message>")
def broadcast_message(message):
    socket_pool.broadcast(message)
    return f"Message [{message}] broadcast."

if __name__ == '__main__':
    run_server(app)
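With this running, opening /send/hello in a browser should push "hello" to the current session, and /broadcast/hello to every connected session, per the two endpoints above (host and port depend on how run_server is configured).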

RabbitMQ: keep requests after stopping the RabbitMQ process and queue

I built an app that connects to RabbitMQ. It works fine, but when I stop the RabbitMQ process all of my requests are lost. I want my requests to be saved even after the RabbitMQ service is killed, and, after the service restarts, to have all of them return to their places.
Here is my rabitmq.py:
import pika
import SimilarURLs

data = ''
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

def rabit_mq_start(Parameter):
    channel.queue_declare(queue='req')
    a = (take(datas=Parameter.decode()))
    channel.basic_publish(exchange='',
                          routing_key='req',
                          body=str(a))
    print(" [x] Sent {}".format(a))
    return a

channel.start_consuming()

def take(datas):
    returns = SimilarURLs.start(data=datas)
    return returns
In addition, I'm sorry for any writing mistakes in my question.
You need to enable publisher confirms (via the confirm_delivery method on your channel object). Then your application must keep track of what messages have been confirmed as published, and what messages have not. You will have to implement this yourself. When RabbitMQ is stopped and started again, your application can re-publish the messages that weren't confirmed.
It would be best to use the asynchronous publisher example as a guide. If you use BlockingConnection you won't get the async notifications when a message is confirmed, defeating their purpose.
If you need further assistance after trying to implement this yourself I suggest following up on the pika-python mailing list.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
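As a minimal sketch of the durability side (my own illustration, assuming pika 1.x, not code from the answer): making the queue and the messages persistent and enabling confirms on the blocking channel lets the application notice failed publishes and retry them after a broker restart. Note the answer's caveat that a BlockingConnection loses the benefit of async confirm notifications:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# A durable queue survives a broker restart.
channel.queue_declare(queue='req', durable=True)

# Turn on publisher confirms for this channel.
channel.confirm_delivery()

unconfirmed = []  # messages to re-publish after the broker comes back

def publish(body):
    try:
        channel.basic_publish(
            exchange='',
            routing_key='req',
            body=body,
            # delivery_mode=2 marks the message itself as persistent.
            properties=pika.BasicProperties(delivery_mode=2),
            mandatory=True)
    except (pika.exceptions.UnroutableError, pika.exceptions.NackError):
        unconfirmed.append(body)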

Flask-SocketIO - How to emit an event from a sub-process

I have a Flask app which, upon a certain REST call, runs several modules using a ProcessPoolExecutor.
UPDATED: Added Redis as a message queue (using Docker, with redis as Redis's host):
socketio = SocketIO(app, message_queue='redis://redis')

(...)

def emit_event(evt, message):
    socketio.emit(evt, message, namespace='/test')

@app.route('/info', methods=['GET'])
def info():
    emit_event('update_reports', '')

(...)

if __name__ == "__main__":
    socketio.run(app, host='0.0.0.0', threaded=True)
Now that I added redis, it still works when emitting from the main app.
Here is some of the code with which I run the sub-process:
def __init__(self):
    self.executor = futures.ProcessPoolExecutor(max_workers=4)
    self.socketio = SocketIO(async_mode='eventlet', message_queue='redis://redis')

(...)

future = self.executor.submit(process, params)
future.add_done_callback(functools.partial(self.finished_callback, pid))
Then in that callback I'm calling the emit_event method:
def finished_callback(self, pid, future):
    pid.status = Status.DONE.value
    pid.finished_at = datetime.datetime.utcnow()
    pid.save()
    self.socketio.emit('update_reports', 'done', namespace='/test')
Getting and sending/emitting messages from/to the client from my controller works just fine; if I call /info from curl or Postman, my client also gets the message. But when I try to emit an event the same way from within this sub-process callback, it shows this error:
INFO:socketio:emitting event "update_reports" to all [/test]
ERROR:socketio:Cannot publish to redis... retrying
ERROR:socketio:Cannot publish to redis... giving up
This is mostly for notifications, like notifying when a long process has finished.
What am I doing wrong?
Thanks!
There are specific rules that you need to follow in setting up the Flask-SocketIO extension so that external processes can emit, which include the use of a message queue that the main and external processes use to coordinate efforts. See the Emitting from an External Process section of the documentation for instructions.
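As a minimal sketch of that pattern (using the Redis URL from the question; see the referenced documentation section for the authoritative rules): a process that only needs to emit should create its SocketIO instance bound solely to the message queue, with no app and no server, and should do so inside the worker process rather than before the fork:

from flask_socketio import SocketIO

# In the external (worker) process: connect only to the message queue.
external_sio = SocketIO(message_queue='redis://redis')
external_sio.emit('update_reports', 'done', namespace='/test')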

TypeScript: Large memory consumption while using ZeroMQ ROUTER/DEALER

We have recently started working with the TypeScript language for an application where queued communication is expected between a server and one or more clients.
To achieve the queued communication, we are trying to use the ZeroMQ library version 4.6.0 as an npm package: npm install -g zeromq and npm install -g @types/zeromq.
The exact scenario :
The client is going to send thousands of messages to the server over ZeroMQ. The server in turn will respond with an acknowledgement message for each incoming message from the client. Based on the acknowledgement, the client will send the next message.
ZeroMQ pattern used :
The ROUTER/DEALER pattern (we cannot use any other pattern).
Client side code :
import Zmq = require('zeromq');

let clientSocket: Zmq.Socket;
let messageQueue = [];

export class ZmqCommunicator
{
    constructor(connString: string)
    {
        clientSocket = Zmq.socket('dealer');
        clientSocket.connect(connString);
        clientSocket.on('message', this.ReceiveMessage);
    }

    public ReceiveMessage = (msg) => {
        var json = JSON.parse(msg.toString('utf8'));
        if (json.type != "error" && json.type == 'ack') {
            if (messageQueue.length > 0) {
                this.Dispatch(messageQueue.splice(0, 1)[0]);
            }
        }
    }

    public Dispatch(message) {
        clientSocket.send(JSON.stringify(message));
    }

    public SendMessage(msg: Message, isHandshakeMessage: boolean) {
        // The if branch is taken only once, for the first handshake message.
        // All other messages take the else branch.
        if (isHandshakeMessage == true) {
            clientSocket.send(JSON.stringify(msg));
        }
        else {
            messageQueue.push(msg);
        }
    }
}
On the server side, we already have a ROUTER socket configured.
The above code is pretty straightforward. The SendMessage() function is called for thousands of messages, and the code works, but with heavy memory consumption.
Problem :
Because the behavior of ZeroMQ is asynchronous, the client has to wait for the ReceiveMessage() callback before it can send a new message to the ZeroMQ ROUTER (which is evident from the flow into the Dispatch method).
Based on our limited knowledge of TypeScript and of using ZeroMQ from TypeScript, the problem is that the default thread running the TypeScript code (the one creating the 1000+ messages and passing them to SendMessage()) keeps executing after sending the first (handshake) message, creating and queuing more messages. The ReceiveMessage() callback is not invoked until all 1000+ messages have been created and passed to SendMessage(), which does not send the data but queues it, because we want to interpret the acknowledgement sent by the ROUTER socket and only then send the next message.
In other words, the call reaches ReceiveMessage() only after the default thread has finished creating and calling SendMessage() for all 1000+ messages and has nothing further to do.
Because ZeroMQ does not provide any synchronous mechanism for sending/receiving data with ROUTER/DEALER, we had to use a queue, as in the above code, via the messageQueue object.
This mechanism loads a huge messageQueue (1000+ messages) into memory and dequeues only after the default thread finally reaches the ReceiveMessage() callback. The situation only worsens if we have 10000+ or more messages to send.
Questions :
We have verified this behavior, so we are confident in the understanding explained above. Is there any gap in our understanding of TypeScript or of our ZeroMQ usage?
Is there any concept like a blocking queue / bounded array in TypeScript that would accept a limited number of entries and block new additions until existing ones are dequeued (which effectively means the default thread pauses until the ReceiveMessage() callback runs and dequeues entries)?
Is there any synchronous ZeroMQ methodology? (We have used one in a similar setup for C#, where we poll on ZeroMQ and receive the data synchronously.)
Any leads on using multi-threading for such a scenario? We are not sure whether TypeScript supports multi-threading to a good extent.
Note: We have searched many forums and have not found any leads. The above description may contain multiple questions in one (against the rules of the Stack Overflow forum), but for us all of these questions are interlinked with using ZeroMQ effectively in TypeScript.
Looking forward to getting some leads from the community.
Welcome to ZeroMQ
If this is your first read about ZeroMQ, feel free to first take a 5 seconds read - about the main conceptual differences in [ ZeroMQ hierarchy in less than a five seconds ] Section.
1 ) ... Is there any gap in our understanding of either TypeScript or ZeroMQ usage ?
Whereas I cannot serve for the TypeScript part, let me mention a few details that may help you move forwards. While ZeroMQ is principally a broker-less, asynchronous signalling/messaging framework, it has many flavours of use, and there are tools to enforce both a synchronous and an asynchronous cooperation between the application code and the ZeroMQ Context()-instance, which is the cornerstone of all the services design.
The native API provides means to define whether a respective call ought to block until message processing across the Context()-instance's boundary has completed, or, on the very contrary, whether a call ought to obey the ZMQ_DONTWAIT flag and asynchronously return control to the caller, irrespective of the operation's (in-)completion.
As additional tricks, one may opt to configure ZMQ_SNDHWM + ZMQ_RCVHWM and other related .setsockopt()-options, so as to meet specific blocking / silent-dropping behaviours.
Because ZeroMQ does not provide any synchronous mechanism of sending/receiving data
Well, ZeroMQ API does provide means for a synchronous call to .send()/.recv() methods, where the caller is blocked until any feasible message could get delivered into / from a Context()-engine's domain of control.
Obviously, the TypeScript language binding/wrapper is responsible for exposing these native API services to your hands.
3 ) Is there any synchronous ZeroMQ methodology (We have used it in a similar setup for C#, where we poll on ZeroMQ and receive the data synchronously) ?
Yes, there are several such :
- the native API, if not instructed by a ZMQ_DONTWAIT flag, blocks until a message can get served
- the native API provides a Poller()-object that can .poll(); if given -1 as the long duration specifier, it waits for the sought-for events, blocking the caller until any such event arrives at the Poller()-instance
Again, the TypeScript language binding/wrapper is responsible for exposing these native API services to your hands.
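To make these points concrete, here is a small pyzmq sketch (Python rather than TypeScript, matching the rest of this page; the TypeScript binding exposes equivalents of the same native calls) showing the blocking call, the ZMQ_DONTWAIT-style non-blocking call, and a blocking Poller():

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.DEALER)
sock.connect("tcp://localhost:5555")  # hypothetical endpoint

# 1) Synchronous flavour: recv() blocks until a message arrives.
msg = sock.recv()

# 2) Asynchronous flavour: NOBLOCK raises zmq.Again if nothing is queued.
try:
    msg = sock.recv(flags=zmq.NOBLOCK)
except zmq.Again:
    pass  # no message ready yet; control returns to the caller immediately

# 3) Poller: poll(-1) blocks until a registered event appears.
poller = zmq.Poller()
poller.register(sock, zmq.POLLIN)
events = dict(poller.poll(-1))
if events.get(sock) == zmq.POLLIN:
    msg = sock.recv()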
... Large memory consumption ...
Well, this may signal poor resource management. ZeroMQ messages, once allocated, ought also to get freed where appropriate. Check your TypeScript code and the TypeScript language binding/wrapper sources to see whether the resources systematically get disposed of and freed from memory.
