aiohttp websocket EofStream handling - python-3.x

I'm connecting to a websocket endpoint using aiohttp's default WebsocketProtocol and out of the blue (after some time and multiple infinite loop iterations) I always get WSMsgType.ERROR with EofStream as data. To my understanding, this should not happen. I tried researching how to deal with this but have been relatively unsuccessful. Should I just close and reconnect to the endpoint? Is there a way to ensure this doesn't happen? Should I implement a specific handling algorithm?

It means the connection was closed by the peer.
The Internet is an unreliable transport; you should always be prepared for such situations.
Usually reconnecting helps in cases like this.
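A minimal reconnect loop, sketched below with a placeholder handle_message callback, would look something like this; aiohttp's heartbeat option also sends protocol-level pings to keep the connection alive:

import asyncio
import aiohttp

async def run(url):
    async with aiohttp.ClientSession() as session:
        while True:  # reconnect forever; add backoff limits as needed
            try:
                async with session.ws_connect(url, heartbeat=30) as ws:
                    async for msg in ws:
                        if msg.type == aiohttp.WSMsgType.TEXT:
                            handle_message(msg.data)  # hypothetical handler
                        elif msg.type in (aiohttp.WSMsgType.ERROR,
                                          aiohttp.WSMsgType.CLOSED):
                            break  # peer closed or errored; reconnect below
            except aiohttp.ClientError:
                pass  # transient network failure; reconnect below
            await asyncio.sleep(1)  # small delay before reconnecting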

Related

Rust HTTP2 request multiplexing in single thread

I am trying to write a piece of latency-sensitive Rust code that makes a few HTTP requests very quickly (ideally within a few microseconds of one another -- I know that seems ridiculous when considering the effects of network latency, but I'm co-located with the server I'm trying to work with here). The server I'm trying to work with supports HTTP/2 (I know it at least does ALPN -- not sure if it also does prior-knowledge connections).
I've tried using Reqwest's tokio-based client, but tokio's task spawning time seems to be anywhere from 10-100us (which is more than I'd like).
I've tried doing independent threads with their own HTTP clients that just make their own requests when they get an incoming message on their channels, but the time between sending the message and them receiving it can be anywhere from 1-20us (which is, again, more than I'd like).
I've also tried using libcurl's multi library, but that seems to add milliseconds of latency, which is far from ideal.
I've also tried using hyper's client, but any way that I try to enable HTTP/2 seems to run into issues.
I know that HTTP/2 can multiplex many requests into a single connection. And it seems to me that it's probably possible to do this in a single thread. Would anyone be able to point me in the right direction here?
Alright, here's some of the code I've got so far: https://github.com/agrimball/stack-overflow-examples
Not sure if this question is scoped appropriately for Stack Overflow. The basic idea is simple (I want to do HTTP/2 request multiplexing and haven't found a usable lib for it in Rust -- any help?), but the story of how I got to that question and the list of alternatives I've already tried does end up making it a bit long...
In any case, thanks in advance for any help!
ninja edit: And yes, being able to do so from a single thread would be ideal since it would remove a bit of latency as well as some of the code complexity. So that part of the title is indeed relevant as well... Still striving for simplicity / elegance when possible!

How to detect a failed connection in Node.js?

I use net.connect to make a socket connection, and I wonder how to detect when the connection has failed?
It seems this doesn't work
var net = require('net');
//this will return a net.Socket and automatically start connecting
var client = net.connect({port: 22000, host: '10.123.9.163'});
//doesn't seem to trigger an 'error' event even if the connection fails
client.on('error', (err) => { console.log('something wrong', err); });
//now an 'error' event is emitted, reasonably
client.write('hello');
When I run this piece of code, the connection should fail, and it indeed does, because an error occurs when I write some data. But I cannot detect the connection failure itself. How can I do that?
=====Ready to close======
God damn it, I think I have just made a mistake. In fact the connection succeeded, but due to some security policy the server closed the connection; I found this out by doing a telnet. After trying another port that should definitely fail, the 'error' event was emitted and everything went as expected. So I am going to close this question to avoid misleading other people, and thank you all for helping me :)
The easiest and most portable way is to simply implement a 'ping-pong' check where both sides send some kind of 'ping' request every so often. If n outstanding ping requests go unanswered after some period of time, then consider the connection dead. This kind of algorithm is basically what OpenSSH uses for example (although it's not enabled by default).
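Sketched in Python just to illustrate the bookkeeping (send_ping and last_pong_time are hypothetical callbacks for whatever transport you use), it is nothing more than a periodic timer plus a limit on unanswered pings:

import asyncio
import time

PING_INTERVAL = 5      # seconds between pings
MAX_UNANSWERED = 3     # unanswered pings before the connection is declared dead

async def keepalive(send_ping, last_pong_time):
    # send_ping() transmits a ping; last_pong_time() returns when the most
    # recent pong arrived -- both are hypothetical callbacks for your transport.
    while True:
        await asyncio.sleep(PING_INTERVAL)
        send_ping()
        if time.monotonic() - last_pong_time() > PING_INTERVAL * MAX_UNANSWERED:
            raise ConnectionError("peer unresponsive; treating connection as dead")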

Getting emails about hitting Pusher usage limits even though the stats in the backend say otherwise

I have been getting emails about my account having hit Pusher usage limits, even though I haven't really gotten anywhere close to the limits according to my account stats.
I have searched the internet for clarifications and possible solutions. I only found this.
http://pusher.tenderapp.com/kb/faq-common-requests/half-open-connections-lead-to-temporarily-incorrect-connection-counts-and-webhook-call-delays
I have tried to manually close connections on page unload, but it still seems to cause some problems.
Any alternative solutions? What is this "ping/pong mechanism for detecting half-open connections" solution?
I used to work on Pusher support, and from my time there I know that sometimes the stats don't show spikes in connections if those spikes are very short-lived. You may be able to see them if you zoom into the usage stats in the Pusher dashboard for your app.
The FAQ on half-open connections is the correct one to look at and is potentially the cause of some of your problems.
The ping/pong mechanism you mention is Pusher's solution to this problem. The WebSocket protocol defines this mechanism, see:
http://www.whatwg.org/specs/web-apps/current-work/multipage/network.html#ping-and-pong-frames
However, not all clients have implemented this so Pusher have added their own ping/pong solution to their protocol:
http://pusher.com/docs/pusher_protocol#ping-pong
I don't believe there is anything that you can do to stop these problems occurring; it's a networking issue where closed connections aren't being detected by the server.
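For reference, the exchange at the protocol level is just a pusher:ping event that the server should answer with pusher:pong. A rough sketch over an already-open websocket client (the exact payload shape should be checked against the protocol docs linked above):

import asyncio
import json

async def check_liveness(ws, timeout=30):
    # ws is an already-open websocket client (e.g. from the `websockets` package).
    # Send a pusher:ping and wait for a reply; no reply within the timeout
    # suggests a half-open connection that should be torn down and reopened.
    await ws.send(json.dumps({"event": "pusher:ping", "data": {}}))
    try:
        reply = json.loads(await asyncio.wait_for(ws.recv(), timeout))
    except asyncio.TimeoutError:
        return False
    # A real client would route other events; here we only look for the pong.
    return reply.get("event") == "pusher:pong"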

Choosing between TCP "long" connections and "short" connections for an internal service

I have an app where the web server redirects some requests to backend servers, and the backend servers (Linux) do complicated computations and respond to the web server.
For the TCP socket connection management between the web server and the backend servers, I think there are two basic strategies:
"short" connection: that is, one connection per request. This seems very easy for socket management and simplifies the whole program structure. After accept, we just get some thread to process the request and finally close the socket.
"long" connection: that is, for one TCP connection, there can be multiple requests one after another. It seems this strategy could make better use of socket resources and bring some performance improvement (I am not quite sure). BUT it seems this brings a lot more complexity than the "short" connection. For example, since the socket fd may now be used by multiple threads, synchronization must be involved. And there is more: socket failure handling, message sequencing...
Are there any suggestions for these two strategies?
UPDATE: #SargeATM's answer reminds me that I should tell more about the backend service.
Each request is kind of context-free. The backend service can do its calculation based on one single request message. It seems to be something stateless.
Without getting into the architecture of the backend, which I think heavily influences this decision, I prefer short connections for stateless "quick" request/response type traffic and long connections for stateful protocols like synchronization or file transfer.
I know there is some TCP overhead for establishing a new connection (if it isn't localhost), but that has never been anything I have had to optimize in my applications.
Ok, I will get a little into architecture since this is important. I always use threads not per request but by function. So I would have one thread that listens on the socket, another thread that reads packets off of all the active connections, another thread doing the backend calculations, and a last thread saving to a database if needed. This keeps things clean and simple, and makes it easy to measure slow spots, maintain the code, and optimize later if needed.
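A rough sketch of that thread-per-function layout in Python (compute() is a stand-in for the backend work, the database thread is omitted, and locking around the selector is skipped for brevity):

import queue
import selectors
import socket
import threading

requests = queue.Queue()          # (connection, raw request) pairs for the worker
sel = selectors.DefaultSelector()

def acceptor(server_sock):
    # Listener thread: accept connections and register them with the reader.
    while True:
        conn, _ = server_sock.accept()
        sel.register(conn, selectors.EVENT_READ)

def reader():
    # Reader thread: pull packets off all of the active connections.
    while True:
        for key, _ in sel.select(timeout=1):
            conn = key.fileobj
            data = conn.recv(4096)
            if data:
                requests.put((conn, data))
            else:                     # peer closed the connection
                sel.unregister(conn)
                conn.close()

def worker():
    # Calculation thread: compute and send the reply on the same connection.
    while True:
        conn, data = requests.get()
        conn.sendall(compute(data))  # compute() stands in for the backend work

def compute(data):
    return data                       # placeholder for the complicated computations

server = socket.create_server(("0.0.0.0", 9000))
for fn in (reader, worker):
    threading.Thread(target=fn, daemon=True).start()
acceptor(server)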
What about a third option... no connection!
If your job descriptions and job results are both small, UDP sockets may be a good idea. You have even fewer resources to manage, as there's no need to bind the request/response to a file descriptor, which gives you some flexibility for the future. Imagine you have more backend services and would like to do some load balancing: a busy service can send the job to another one along with the UDP address of the job submitter. The latter just waits for the result and doesn't care where the task was performed.
Obviously you'd have to deal with lost, duplicated and out-of-order packets, but as a reward you don't have to deal with broken connections. Out-of-order packets are probably not a big deal if you can fit the request and response in one UDP message, duplication can be taken care of by some job ids, and lost packets... well, they can simply be resent ;-)
Consider this!
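A small sketch of that idea (the address, port, and message format are made up for illustration): tag each job with an id, resend on timeout, and ignore replies whose id doesn't match:

import json
import socket
import uuid

def submit_job(payload, addr=("10.0.0.5", 9999), retries=3, timeout=1.0):
    # Tag the job with an id so duplicated or stray replies can be ignored,
    # and simply resend on timeout since lost packets are expected.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    job_id = str(uuid.uuid4())
    request = json.dumps({"id": job_id, "job": payload}).encode()
    for _ in range(retries):
        sock.sendto(request, addr)
        try:
            data, _ = sock.recvfrom(65535)
        except socket.timeout:
            continue                      # lost packet: just resend
        reply = json.loads(data)
        if reply.get("id") == job_id:     # ignore duplicates / stale replies
            return reply
    raise TimeoutError("backend did not answer after %d attempts" % retries)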
Well, you are right.
The biggest problem with persistent connections will be making sure that the app gets a "clean" connection from the pool, without any leftover data from another request.
There are a lot of ways to deal with that problem, but in the end it is better to close() a tainted connection and open a new one than to try to clean it...
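In code that rule is tiny; a hypothetical pool release function simply closes anything suspect instead of reusing it:

def release(pool, conn, tainted):
    # If the previous request left unknown state on the socket (unread bytes,
    # a half-written frame, ...), don't try to "clean" it: close it and let
    # the pool open a fresh connection later.
    if tainted:
        conn.close()
    else:
        pool.append(conn)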

Skype Conference Procedure

I've been looking into Skype's protocol, or what people have been able to make out of it, since it is a proprietary protocol. I've read "An analysis of the skype peer-to-peer internet telephony protocol"; though it is old, it discusses a certain property which I'm looking to recreate in my own architecture. What I'm interested in is that during a video conference, data is sent to one machine (the one most likely with the best bandwidth and processing power), which then redistributes it to the other machines.
What is not explained is what happens when the machine receiving and sending the data unexpectedly drops out. Of course, rather than dropping the conference, it would be best to find another machine to carry on receiving and distributing the data. Is there any documentation on how this is done in Skype or a similar peer-to-peer VoIP system?
Basically I'm looking for the fastest method to detect when a "super peer" unexpectedly drops out, and for quickly migrating operations to another machine.
You need to set a timeout (i.e., a limit) and declare that if you don't receive any communication within it, either the link is dead (no path between the peers, a reachability issue) or the remote peer is down. There is no other method.
If you have a direct TCP or other connection to the super peer, you can also catch events telling you the connection has died. If your communication is relayed, and your framework automatically attempts to find a new route to your target peer, it will either find one or never find out. Hence the need for a timeout.
If nobody hears from a peer for some time, it is finally considered/declared dead.
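Concretely, each peer only needs to remember when it last heard from the super peer and act once the silence exceeds the limit; a hypothetical sketch (elect_new_super_peer is a placeholder for your migration logic):

import time

TIMEOUT = 10.0              # seconds of silence before the super peer is declared dead
last_heard = time.monotonic()

def on_data_from_super_peer():
    # Any traffic (media packets or an explicit heartbeat) counts as "alive".
    global last_heard
    last_heard = time.monotonic()

def check_super_peer():
    # Run this periodically on every ordinary peer.
    if time.monotonic() - last_heard > TIMEOUT:
        elect_new_super_peer()

def elect_new_super_peer():
    # Hypothetical: promote e.g. the remaining peer with the best bandwidth
    # and have everyone reconnect their media streams to it.
    pass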
