Pyzmq swallows error when connecting to blocked port (firewall) - python-3.x

I'm trying to connect to a server using Python's pyzmq package.
In my case I'm expecting an error, because the server the client connects to blocks the designated port with a firewall.
However, my code runs through until I terminate the context and then blocks infinitely.
I tried several things to catch the error condition beforehand, but none of them succeeded.
My base example looks like this:
import zmq

endpoint = "tcp://{IP}:{PORT}"
zmq_ctx = zmq.Context()
sock = zmq_ctx.socket(zmq.PAIR)
sock.connect(endpoint)  # <--- I would expect an exception to be thrown here, but this is not the case
sock.disconnect(endpoint)
sock.close()
zmq_ctx.term()  # <--- This blocks infinitely
I extended the sample with sock.poll(1000, zmq.POLLOUT | zmq.POLLIN), hoping that the poll call would fail if the connection could not be established due to the firewall.
Then I tried to solve the issue by setting some socket options before the sock = zmq_ctx.socket(zmq.PAIR) line:
zmq_ctx.setsockopt(zmq.IMMEDIATE, 1)  # Hoping this would lead to an error on the first `send`
zmq_ctx.setsockopt(zmq.HEARTBEAT_IVL, 100)  # Hoping the heartbeat would fail if the connection could not be established
zmq_ctx.setsockopt(zmq.HEARTBEAT_TTL, 500)
zmq_ctx.setsockopt(zmq.LINGER, 500)  # Hoping `zmq_ctx.term()` would throw an exception once the linger period is over
I also temporarily added a sock.send_string("bla"), but it just enqueued the message without returning an error and did not provide any new insights.
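For completeness, this is roughly how I would expect a fail-fast send to look (a sketch; the DONTWAIT flag and the assumption that zmq.IMMEDIATE is already set are mine and not verified against the real endpoint). With zmq.IMMEDIATE set, messages are not queued on an incomplete connection, so a non-blocking send should raise zmq.Again:

sock.setsockopt(zmq.IMMEDIATE, 1)
try:
    # DONTWAIT makes the send non-blocking instead of queueing silently
    sock.send_string("bla", flags=zmq.DONTWAIT)
except zmq.Again:
    print("send would block -- no completed connection yet")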
The only workaround I can imagine would be to use the telnet package and attempt a connection to the endpoint.
However, adding a dependency just to test a connection is not really satisfactory.
Do you have any idea how to detect a blocked port from the client side while still using pyzmq? I'm not happy that the code always runs into the blocking zmq_ctx.term() in that case.
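Edit: a sketch of the direction I am currently exploring, using pyzmq's built-in socket monitoring (the endpoint and the 5-second budget below are placeholders). A monitor socket reports low-level connection events, so the absence of an EVENT_CONNECTED within a timeout would suggest the port is blocked:

import time
import zmq
from zmq.utils.monitor import recv_monitor_message

endpoint = "tcp://{IP}:{PORT}"
zmq_ctx = zmq.Context()
sock = zmq_ctx.socket(zmq.PAIR)
monitor = sock.get_monitor_socket()  # inproc PAIR socket receiving connection events
sock.connect(endpoint)

connected = False
deadline = time.monotonic() + 5.0  # overall budget for the check
while time.monotonic() < deadline:
    if not monitor.poll(1000):  # milliseconds
        continue
    event = recv_monitor_message(monitor)
    if event['event'] == zmq.EVENT_CONNECTED:
        connected = True
        break
    # EVENT_CONNECT_RETRIED events keep arriving while the port is blocked
print("connected" if connected else "no connection -- port possibly blocked")

sock.disable_monitor()
sock.close(linger=0)
zmq_ctx.term()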

Related

http: Accept error: accept tcp [::]:8080: accept4: too many open files;

I have written a REST API in Golang and I am doing a performance test of my API using JMeter.
When I run the test with 300 or more users, each sending 20 requests with a 500 ms gap between requests, I get the error below:
http: Accept error: accept tcp [::]:8080: accept4: too many open files;
I am running this Go application on an AWS EC2 instance with 8 GB of RAM.
Below is what I have tried already:
I have increased the ulimit to a sufficiently large number. Running ulimit -n now outputs 1048576.
In my code I made sure that the response body is closed.
But none of these solved the issue. Any help is appreciated.
Thanks in advance.
One problem could be failing to close opened files or release resources.
For example, the Body field of an http.Request is of type io.ReadCloser.
This read closer has a Close method, which you must call once you are finished with it to release the underlying resources.
func UserHandler(w http.ResponseWriter, r *http.Request) {
    defer r.Body.Close() // release the connection's resources once the handler returns

    var user User
    if err := json.NewDecoder(r.Body).Decode(&user); err != nil {
        // handle error
    }
    // More code
}
Deferring r.Body.Close() here releases the associated resources once the handler has returned its value.
Similarly, many other types implement this interface, such as os.File, sql.DB, and mgo.Session, so check that you are closing those resources properly as well.

Server Sent Events with Pyramid - How to detect if the connection to the client has been lost

I have a pyramid application that sends SSE messages. It works basically like this:
import json
import random
import time

from pyramid.response import Response
from pyramid.view import view_config

def message_generator():
    for i in range(100):
        print("Sending message: " + str(i))
        yield "data: %s\n\n" % json.dumps({'message': str(i)})
        time.sleep(random.randint(1, 10))

@view_config(route_name='events')
def events(request):
    headers = [('Content-Type', 'text/event-stream'),
               ('Cache-Control', 'no-cache')]
    response = Response(headerlist=headers)
    response.app_iter = message_generator()
    return response
When I browse to /events I get the events; when I move to another page or close the browser, the events stop.
The problem happens, for example, if I am on /events and switch off the computer. The server does not know that the client was lost, and message_generator keeps sending messages into the void.
The page A Look at Server-Sent Events mentions this:
...the server should detect this (when the client stops) and stop
sending further events as the client is no longer listening for them.
If the server does not do this, then it will essentially be sending
events out into a void.
Is there a way to detect this with Pyramid? I tried request.add_finished_callback(), but that callback seems to be invoked as soon as the view executes return response.
I use Gunicorn with gevent to start the server.
Any idea is highly appreciated.
From PEP 3333:
Applications returning a generator or other custom iterator should not assume the entire iterator will be consumed, as it may be closed early by the server.
Basically a WSGI server "should" invoke the close() method on the app_iter when a client disconnects (all generators, such as the one in your example, support this automatically). However, a server is not required to do so, and it seems many WSGI servers do not. For example, you mentioned gunicorn (which I haven't independently verified), but I did verify that waitress also does not; I opened [1] on waitress as a result and have been working on a fix.
Streaming responses in WSGI environments is shaky at best and usually depends on the server. For example, on waitress, you need to set send_bytes=0 to avoid it buffering the response data.
[1] https://github.com/Pylons/waitress/issues/236
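To illustrate (a sketch assuming the server does call close() on the app_iter): close() raises GeneratorExit inside the generator at the current yield, so the disconnect can be handled there:

def message_generator():
    try:
        for i in range(100):
            yield "data: %s\n\n" % json.dumps({'message': str(i)})
            time.sleep(random.randint(1, 10))
    except GeneratorExit:
        # raised when the server calls close() on the app_iter,
        # i.e. the client has disconnected
        print("Client disconnected, stopping the stream")
        raise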

First call to Microsoft.Azure.ServiceBus.Core.MessageSender.SendAsync times out, subsequent calls don't

I have some code written to communicate with an Azure Service Bus. It sends messages to a queue. It's in a project targeting .NET Standard 2.0.
When I run it from a .NET Core terminal app it runs fine. But when the same code is called from a .NET Framework 4.7.2 project, the first attempt to send a message results in the following exception after 30 to 90 seconds:
"The remote party closed the WebSocket connection without completing the close handshake."
But any further messages will be sent without problem.
// This is using Microsoft.Azure.ServiceBus, if that makes any difference...
var messageSender = new MessageSender(ConnectionString, SendQueueName);
try
{
    await messageSender.SendAsync(new Message(Encoding.UTF8.GetBytes("Test that won't work")));
}
catch (Exception e)
{
    // Error will be caught here:
    // "The remote party closed the WebSocket connection without completing the close handshake."
}
await messageSender.SendAsync(new Message(Encoding.UTF8.GetBytes("Test that will work")));
Does anybody know why the first call fails? And how to make it not fail, or at least fail faster? I've tried changing the OperationTimeout and RetryPolicy, but they don't seem to have any effect.
These first connections are via ports 5671/5672, which Trend antivirus intercepts. Once these have timed out, the framework falls back to using port 443, which works fine.
We tried turning Trend off and testing the connection again, and it was pretty much instantaneous.

Keep tcp connection open using python3.4's xmlrpc.server

I have a server-client application using xmlrpc.server and xmlrpc.client where the clients request data from the server. As the server only returns this data once certain conditions are met, the clients make the same call over and over again, and currently the TCP connection is re-initiated with each call. This creates a noticeable delay.
I have a fixed number of clients that connect to the server at the beginning of the application and shut down when the whole application is finished.
I tried to google how to keep the TCP connection open, but everything I could find either talked about xmlrpclib (the Python 2 version) or did not apply to my Python version.
Client-side code:
import xmlrpc.client as xc

# ServerProxy expects a URI, not separate host and port arguments
server = xc.ServerProxy("http://{}:8000".format(host_IP))
var = False
while type(var) == bool:
    var = server.pull_update()
    # this returns "False" while the server determines the conditions
    # for the client to receive the update aren't met; and the update
    # once the conditions are met
Server-side, I am extending xmlrpc.server.SimpleXMLRPCServer with the default xmlrpc.server.SimpleXMLRPCRequestHandler. The function in question is:
def export_pull_update(self):
    if condition:
        return self.var
    else:
        return False
Is there a way to get xmlrpc.server to keep the connection alive between calls for the server?
Or should I go the route of using ThreadingMixIn and not completing the client-request until the condition is met?
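One direction that may help (a sketch, untested against the original setup; the host, port, and placeholder pull_update below are mine): BaseHTTPRequestHandler closes the connection after every request when it speaks HTTP/1.0, which is the default, while setting protocol_version to HTTP/1.1 on the request handler enables persistent connections, so repeated calls can reuse the same TCP connection:

from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler

class KeepAliveRequestHandler(SimpleXMLRPCRequestHandler):
    # HTTP/1.1 keeps the TCP connection open between requests by default
    protocol_version = "HTTP/1.1"

server = SimpleXMLRPCServer(("0.0.0.0", 8000),
                            requestHandler=KeepAliveRequestHandler)

def pull_update():
    # placeholder for the real condition check
    return False

server.register_function(pull_update)
server.serve_forever()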

SCTP server has an abnormal behaviour when connecting with a client

I have this code on a small server that receives requests from a client over an SCTP connection, and I keep getting this error every now and then:
BlockingIOError: [Errno 11] Resource temporarily unavailable
I know I can avoid it with a try/except, but I want a deeper understanding of the issue. Any help?
My code is below; this is the server:
import socket
from sctp import sctpsocket_tcp

server = ('', 29168)
sk = sctpsocket_tcp(socket.AF_INET)
sk.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sk.bindx([server])
sk.listen(5)
connection, addr = sk.accept()
c = True  # loop until a zero-length message is received
while c:
    a, b, c, d = connection.sctp_recv(1024)
    print(c)
After going through the SCTP library again, I found a closed issue on GitHub with the solution.
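For context (a general sketch of the underlying mechanics, not necessarily the linked issue's exact fix): errno 11 is EAGAIN, which is raised when a socket in non-blocking mode has no data available at the moment of the call. Waiting for readability, for example with select, avoids hitting the exception:

import select

# wait until the connection is readable before calling sctp_recv,
# so a non-blocking socket does not raise EAGAIN (errno 11)
readable, _, _ = select.select([connection], [], [], 5.0)
if readable:
    a, b, c, d = connection.sctp_recv(1024)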
