Keep tcp connection open using python3.4's xmlrpc.server - python-3.x

I have a server-client application using xmlrpc.server and xmlrpc.client in which the clients request data from the server. Since the server only returns this data once certain conditions are met, the clients make the same call over and over again, and currently the TCP connection is re-established on every call. This creates a noticeable delay.
I have a fixed number of clients that connect to the server at the beginning of the application and shutdown when the whole application is finished.
I tried googling how to keep the TCP connection open, but everything I could find either talked about xmlrpclib or did not apply to this Python version.
Client-side code:
import xmlrpc.client as xc

server = xc.ServerProxy("http://%s:8000" % host_IP)
var = False
while type(var) == bool:
    var = server.pull_update()
    # pull_update() returns False while the server determines that the
    # conditions for the client to receive the update aren't met, and
    # returns the update once the conditions are met
Server-side, I am extending xmlrpc.server.SimpleXMLRPCServer with the default xmlrpc.server.SimpleXMLRPCRequestHandler. The function in question is:
def export_pull_update(self):
    if condition:
        return self.var
    else:
        return False
Is there a way to get xmlrpc.server to keep the connection alive between calls for the server?
Or should I go the route of using ThreadingMixIn and not completing the client-request until the condition is met?
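For reference, one stdlib-only sketch of the keep-alive route (the port and the stub function are illustrative, not the asker's real code): BaseHTTPRequestHandler defaults to HTTP/1.0, which closes the TCP connection after every response, so overriding protocol_version on the request handler is the usual way to get persistent connections, which xmlrpc.client's default transport then reuses.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler

class KeepAliveHandler(SimpleXMLRPCRequestHandler):
    # BaseHTTPRequestHandler defaults to HTTP/1.0, which closes the TCP
    # connection after every response; HTTP/1.1 keeps it open, and the
    # xmlrpc.client Transport reuses its single connection accordingly.
    protocol_version = "HTTP/1.1"

server = SimpleXMLRPCServer(("localhost", 8000),
                            requestHandler=KeepAliveHandler,
                            logRequests=False)
server.register_function(lambda: False, "pull_update")  # placeholder stub

# Served in a background thread here so the sketch stays non-blocking;
# a real server would simply call server.serve_forever().
threading.Thread(target=server.serve_forever, daemon=True).start()
```

With this in place, the polling loop on the client side should reuse one TCP connection instead of reconnecting on every pull_update() call.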

Related

Pyzmq swallows error when connecting to blocked port (firewall)

I'm trying to connect to a server using Python's pyzmq package.
In my case I'm expecting an error, because the port the client connects to on the server is blocked by a firewall.
However, my code runs through until I terminate the context and then blocks infinitely.
I tried several things to catch the error condition beforehand, but none of them succeeded.
My base example looks like this:
import zmq

endpoint = "tcp://{IP}:{PORT}"
zmq_ctx = zmq.Context()
sock = zmq_ctx.socket(zmq.PAIR)
sock.connect(endpoint)  # <-- I would expect an exception to be thrown here, but it is not
sock.disconnect(endpoint)
sock.close()
zmq_ctx.term()  # <-- This blocks infinitely
I extended the sample with sock.poll(1000, zmq.POLLOUT | zmq.POLLIN), hoping that the poll command would fail if the connection could not be established due to the firewall.
Then I tried to solve the issue by setting some socket options before the sock = zmq_ctx.socket(zmq.PAIR) line:
zmq_ctx.setsockopt(zmq.IMMEDIATE, 1)        # hoping this would lead to an error on the first `send`
zmq_ctx.setsockopt(zmq.HEARTBEAT_IVL, 100)  # hoping the heartbeat would fail if the connection could not be established
zmq_ctx.setsockopt(zmq.HEARTBEAT_TTL, 500)
zmq_ctx.setsockopt(zmq.LINGER, 500)         # hoping zmq_ctx.term() would throw an exception when the linger period is over
I also temporarily added a sock.send_string("bla"), but it just enqueued the message without returning an error and did not provide any new insight.
The only thing I can imagine to solve the problem would be to use the telnet package and attempt a connection to the endpoint.
However, adding a dependency just to test a connection is not really satisfactory.
Do you have any idea how to detect a blocked port from the client side while still using pyzmq? I'm not happy that the code always runs into the blocking zmq_ctx.term() in that case.
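One way to observe the connection state from the client side, still within pyzmq, is a socket monitor: get_monitor_socket() returns an inproc socket that emits low-level transport events such as zmq.EVENT_CONNECTED and zmq.EVENT_CONNECT_RETRIED. A sketch under that assumption (the endpoint and timeout are hypothetical):

```python
import zmq
from zmq.utils.monitor import recv_monitor_message

# Hypothetical endpoint behind a firewall (adjust to your setup).
endpoint = "tcp://10.10.13.2:51824"

zmq_ctx = zmq.Context()
sock = zmq_ctx.socket(zmq.PAIR)
sock.linger = 0                      # let term() return despite queued messages
monitor = sock.get_monitor_socket()  # inproc PAIR socket emitting transport events
sock.connect(endpoint)

# Wait up to 2 s for EVENT_CONNECTED; a retry event or a poll timeout
# means no TCP session could be established.
connected = False
while monitor.poll(2000):
    evt = recv_monitor_message(monitor)
    if evt["event"] == zmq.EVENT_CONNECTED:
        connected = True
        break
    if evt["event"] == zmq.EVENT_CONNECT_RETRIED:
        break  # the first connect attempt already failed

sock.disable_monitor()
sock.close()
zmq_ctx.term()
print("connected:", connected)
```

For a silently dropping firewall you will typically see EVENT_CONNECT_DELAYED and then nothing, so the poll timeout is what ends the wait; for an actively refused port you get EVENT_CONNECT_RETRIED almost immediately.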

How to accept push notifications in a plotly/dash app?

I have a client with an open connection to a server which accepts push notifications from the server. I would like to display the data from the push notifications in a plotly/dash page in near real time.
I've been considering my options as discussed in the documentation page.
If I ran a push-notification client in each potential plotly/dash worker process, I would have to de-duplicate events; doable, but bug-prone and quirky to code.
The ideal solution seems to be to run the push network client in only one process and push those notifications into a dcc.Store object. I assume I would do that by populating a queue in the push client's async callback, then on a dcc.Interval timer gathering any new data from that queue and placing it in the dcc.Store object. All other callbacks would then be triggered on the dcc.Store object, possibly in separate Python processes.
From the documentation I don't see how I would pin the callback that interacts with the push network client to the main process and ensure it doesn't run on any worker process. Is this possible? The dcc.Interval documentation doesn't mention this detail.
Is there a way to force the dcc.Interval onto one process, or is that the normal operation under Dash with multiple worker processes? Or is there another recommended approach to handling data from a push notification network client?
An alternative to the Interval component pulling updates at regular intervals is to use a WebSocket component to enable push notifications. Simply add the component to the layout and add a clientside callback that performs the appropriate updates based on the received message:
app.clientside_callback("function(msg){return \"Response from websocket: \" + msg.data;}",
                        Output("msg", "children"), [Input("ws", "message")])
Here is a complete example using a SocketPool to setup endpoints for sending messages,
import dash_html_components as html
from dash import Dash
from dash.dependencies import Input, Output
from dash_extensions.websockets import SocketPool, run_server
from dash_extensions import WebSocket

# Create example app.
app = Dash(prevent_initial_callbacks=True)
socket_pool = SocketPool(app)
app.layout = html.Div([html.Div(id="msg"), WebSocket(id="ws")])

# Update div using websocket.
app.clientside_callback("function(msg){return \"Response from websocket: \" + msg.data;}",
                        Output("msg", "children"), [Input("ws", "message")])

# End point to send message to current session.
@app.server.route("/send/<message>")
def send_message(message):
    socket_pool.send(message)
    return f"Message [{message}] sent."

# End point to broadcast message to ALL sessions.
@app.server.route("/broadcast/<message>")
def broadcast_message(message):
    socket_pool.broadcast(message)
    return f"Message [{message}] broadcast."

if __name__ == '__main__':
    run_server(app)

Server Sent Events with Pyramid - How to detect if the connection to the client has been lost

I have a Pyramid application that sends SSE messages. It basically works like this:
def message_generator():
    for i in range(100):
        print("Sending message:" + str(i))
        yield "data: %s\n\n" % json.dumps({'message': str(i)})
        time.sleep(random.randint(1, 10))

@view_config(route_name='events')
def events(request):
    headers = [('Content-Type', 'text/event-stream'),
               ('Cache-Control', 'no-cache')]
    response = Response(headerlist=headers)
    response.app_iter = message_generator()
    return response
When I browse to /events I get the events. When I move to another page the events stop; when I close the browser the events stop.
The problem happens, for example, if I am on /events and I switch off the computer. The server does not know that the client is gone, and message_generator keeps sending messages into the void.
The page A Look at Server-Sent Events mentions this:
...the server should detect this (when the client stops) and stop
sending further events as the client is no longer listening for them.
If the server does not do this, then it will essentially be sending
events out into a void.
Is there a way to detect this with Pyramid? I tried
request.add_finished_callback()
but this callback seems to be called when the view executes return response, not when the client disconnects.
I use Gunicorn with gevent to start the server.
Any idea is highly appreciated.
From PEP 3333:
Applications returning a generator or other custom iterator should not assume the entire iterator will be consumed, as it may be closed early by the server.
Basically a WSGI server "should" invoke the close() method on the app_iter when a client disconnects (all generators, such as in your example, support this automatically). However, a server is not required to do it, and it seems many WSGI servers do not. For example, you mentioned gunicorn (which I haven't independently verified), but I did verify that waitress also does not. I opened [1] on waitress as a result, and have been working on a fix. Streaming responses in WSGI environments is shaky at best and usually depends on the server. For example, on waitress, you need to set send_bytes=0 to avoid it buffering the response data.
[1] https://github.com/Pylons/waitress/issues/236
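Under a server that does call close(), the generator itself can notice the disconnect: close() raises GeneratorExit at the paused yield, so a try/finally around the loop is enough for cleanup. A minimal sketch of the generator from the question, extended that way:

```python
import json
import random
import time

def message_generator():
    try:
        for i in range(100):
            yield "data: %s\n\n" % json.dumps({'message': str(i)})
            time.sleep(random.randint(1, 10))
    finally:
        # If the WSGI server notices the disconnect, it calls close() on
        # the iterator; that raises GeneratorExit at the paused yield and
        # runs this block, so per-client cleanup can happen here.
        print("client disconnected, stopping event stream")
```

The finally block also runs when the generator finishes normally, so keep the cleanup idempotent.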

Connection configuration loops - Prosys OPC UA Client

I'm using the sample code from the documentation and I'm trying to connect to the server using Prosys OPC UA Client. I have tried opcua-commander and Integration Objects' OPC UA client, and it looks like the server works just fine.
Here's what is happening:
After entering the endpoint URL, the client appends urn:NodeOPCUA-Server-default to the URL.
The client asks to specify security settings.
The client asks to choose a server - there is only one option, urn:NodeOPCUA-Server-default.
Then it goes back to steps 2 and 3 over and over.
If I just minimize the Prosys client without closing the configuration, after some time I get this info in the terminal:
Server: closing SESSION new ProsysOpcUaClient Session15 because of timeout = 300000 has expired without a keep alive
channel = ::ffff:10.10.13.2 port = 51824
I have tried the node-opcua-htmlpanel sample project and it works. What's missing in the sample code then?
After opening the debugger I noticed that each time I select security settings and hit OK, server_publish_engine reports:
server_publish_engine:179 Cancelling pending PublishRequest with statusCode BadSecureChannelClosed (0x80860000) length = 0
This is due to a specific interoperability issue that was introduced in node-opcua#0.2.2. This will be fixed in the next version of node-opcua. The resolution can be tracked here: https://github.com/node-opcua/node-opcua/issues/464
The issue has been handled at the Prosys OPC Forum:
The error happens because the server sends different
EndpointDescriptions in GetEndpointsResponse and
CreateSessionResponse.
In GetEndpoints, the returned EndpointDescriptions contain
TransportProfileUri=http://opcfoundation.org/UA-Profile/Transport/uatcp-uasc-uabinary.
In CreateSessionResponse, the corresponding TransportProfileUri is
empty.
In principle, the server application is not working according to
specification. The part 4 of the OPC UA specification states that “The
Server shall return a set of EndpointDescriptions available for the
serverUri specified in the request. … The Client shall verify this
list with the list from a DiscoveryEndpoint if it used a
DiscoveryEndpoint to fetch the EndpointDescriptions. It is recommended
that Servers only include the server.applicationUri, endpointUrl,
securityMode, securityPolicyUri, userIdentityTokens,
transportProfileUri and securityLevel with all other parameters set to
null. Only the recommended parameters shall be verified by the
client.”

Thread holding up socket

My app receives jobs to do from a web server through sockets. At the moment, while a job is running on the app, I can only send two more messages to the app before it stops receiving any more.
def handlemsg(self, data):
    self.sendmsg(cPickle.dumps('received'))  # notify web server of receipt
    data = cPickle.loads(data)
    print data
    # Terminate a job
    if data[-1] == 'terminate':
        self.terminate(data[0])
    # Check if app is available
    elif data[-1] == 'prod':
        pass
    # Run job
    else:
        supply = supply_thread(data, self.app)
        self.supplies[data['job_name']] = supply
        supply.daemon = True
        supply.start()
I can send as many prods as I like to the server. But once I send a job that activates a thread, responses become limited. For some reason it will allow me to send another two prods while the job is running... but after that the print message no longer appears; the job just keeps working.
Any ideas? Thanks
I was running my data through a datagram socket configuration. I switched to a stream socket and that seemed to resolve it.
http://turing.cs.camosun.bc.ca/COMP173/notes/PySox.html
was helpful in the resolution.
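A minimal sketch of the datagram-to-stream switch the answer describes (host, port, and message are illustrative): with SOCK_STREAM the kernel buffers and delivers bytes reliably even while the receiving app is busy, whereas datagrams arriving at a full SOCK_DGRAM buffer are silently dropped.

```python
import socket
import threading

HOST, PORT = "localhost", 9000  # illustrative address

ready = threading.Event()

def serve_once():
    # SOCK_STREAM (TCP) gives a reliable, connection-oriented byte stream.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    ready.set()                 # signal that the listener is up
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(b"received " + data)
    conn.close()
    srv.close()

threading.Thread(target=serve_once, daemon=True).start()
ready.wait()  # avoid connecting before the server is listening

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((HOST, PORT))
cli.sendall(b"job")
reply = cli.recv(1024)
cli.close()
```

The acknowledgement round-trip mirrors the handlemsg() pattern above: the receiver replies "received" before dispatching the work.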