Speeding up Socket.IO - node.js

When I listen for a client connection in Socket.IO, there seems to be a latency of 8-9 seconds as it falls back to XHR. This is too slow for most purposes, as I'm using Socket.IO to push data to users' news feeds, and a lot can happen in 8 or 9 seconds.
Is there any way to speed up this fallback?
EDIT:
After deploying to Nodejitsu's VPS I tried this again and the socket connection was nearly immediate (enough that a user wouldn't notice). I'm only experiencing this on my local machine. So the question may actually be: why is it so slow on my local machine?

This question is almost impossible to answer without more information about your local setup, but it's interesting that you're failing over to XHR. The following question might explain why the fallback happens, though not why the same browser connects successfully once the app is deployed.
Socket.io reverting to XHR / JSONP polling for no apparent reason
Another potential problem I've read about is that your browser has cached the incorrect transport method. You could try clearing your browser cache and reconnecting to see if that gets around the problem.
https://groups.google.com/group/socket_io/browse_thread/thread/e6397e89efcdbcb7/a3ce764803726804
Lastly, if you're unable to figure out why it isn't connecting over WebSockets or FlashSockets, you could try removing them as options from your socket.io configuration. That way, when you're developing locally, you can at least skip that delay and iterate faster.
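As a rough sketch, with the socket.io 0.9.x configuration API (adjust for your version; 1.x takes a transports array in the constructor instead), that would look something like:

    var io = require('socket.io').listen(server);

    // Development-only: drop websocket/flashsocket so the client goes
    // straight to polling instead of waiting out a failed upgrade.
    io.configure('development', function () {
      io.set('transports', ['xhr-polling', 'jsonp-polling']);
    });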

Related

Best way to detect a loss of connection to a server using Angular 4 and Nodejs

Essentially, I'm trying to work out the best way to ensure that a user is connected to the server / the internet and is thus able to make requests in my application without error.
I have come across various solutions, but I can't really decide which is the best performing or most useful.
WebSockets, using Socket.io to keep an open connection with the server for each client. This also opens up the possibility of real-time updates in my app, which could be nice in the future. However, keeping lots of sockets open is bound to be hard on performance.
Polling, i.e. having an endpoint in my API that the Angular app hits every 5 seconds or so to check that the user is connected. Again, it seems like a bad idea to hit the server that often.
Waiting for an error, then polling every couple of seconds until the connection is re-established. This is a slight variation on the above. However, you are still waiting for a request to fail, which isn't good for the user experience (a rough sketch of this option is below).
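Something like this is what I mean by the third option (plain browser JS; the /api/ping endpoint is made up):

    // Poll only after a request has failed, and stop as soon as the
    // server is reachable again.
    function checkConnection() {
      return fetch('/api/ping', { method: 'HEAD' })
        .then(function (res) { return res.ok; })
        .catch(function () { return false; });
    }

    function startReconnectPolling(onReconnect) {
      var timer = setInterval(function () {
        checkConnection().then(function (ok) {
          if (ok) {
            clearInterval(timer); // connection is back, stop polling
            onReconnect();
          }
        });
      }, 2000);                   // poll every 2s, only while disconnected
    }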
Does anybody have any informed input on this issue?
Thanks

NodeJS request pipe

I'm struggling with a technical issue, and because I'm pretty new to the NodeJS world I don't think I have the right practices and tools to solve it.
Using the well-known request module, I'm making a streaming proxy from a remote server to the client. Almost everything works properly until a certain point: if there are too many requests at the same time, the server no longer responds. It actually still receives the client request but is unable to go through the stream process and serve the content.
What I'm currently doing:
Creating a server with http module with http.createServer
Getting remote url from a php script using exec
Instantiating the stream
How I did it:
http://pastebin.com/a2ZX5nRr
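Roughly, the shape of it is this (my own paraphrase of the pastebin, with a placeholder PHP script name):

    var http = require('http');
    var exec = require('child_process').exec;
    var request = require('request');

    http.createServer(function (req, res) {
      // 1. ask the PHP script for the remote url
      exec('php get_remote_url.php', function (err, stdout) {
        if (err) {
          res.statusCode = 502;
          return res.end();
        }
        // 2. pipe the remote content straight through to the client
        request(stdout.trim()).pipe(res);
      });
    }).listen(8080);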
I tried to investigate the pooling stuff and did not understand everything; likewise, the pool maxSockets option that was recently added did not help me. I was also setting http.globalAgent.maxSockets to Infinity before, but I read that this has no longer been limited in NodeJS for a while, so it does not help.
See here: https://nodejs.org/api/http.html#http_http_globalagent
I also read this: Nodejs Max Socket Pooling Settings, but I'm wondering what the difference is between a custom agent and the global one.
I thought it could come from the server, but I tested it on both a very small one and a bigger one, and it was not coming from there. I think it is definitely my app that has to be better designed. Indeed, each time I restart the app instance it works again. Also, if I start a fork of the server on another port while the first one is no longer serving anything, the fork works. So it might not be about resources.
Do you have any clue, tool or anything that may help me understand and debug what is going on?
An NPM module that can help handle streams properly:
https://www.npmjs.com/package/pump
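For reference, pump's usage in this proxy case looks roughly like this (the upstream URL is a placeholder):

    var http = require('http');
    var pump = require('pump');
    var request = require('request');

    http.createServer(function (clientReq, clientRes) {
      var upstream = request('http://example.com/stream'); // placeholder
      // pump wires up the pipe and destroys BOTH streams if either side
      // errors or closes early, so nothing is left dangling.
      pump(upstream, clientRes, function (err) {
        if (err) console.error('proxy stream failed:', err.message);
      });
    }).listen(8080);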
I made a few tests, and I think I've found what I was looking for: the unpipe thing. More info here:
https://nodejs.org/api/stream.html#stream_readable_unpipe_destination
You can also read this; it led me to understand a few things about pipes remaining open when the target fails:
http://www.bennadel.com/blog/2679-how-error-events-affect-piped-streams-in-node-js.htm
So here is what I've done: I'm currently unpiping the pipes when the stream's end event is fired. I guess you can do this in different ways depending on how you want to handle it, but you may also want to unpipe on an error from the source/target, roughly as sketched below.
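Done by hand (without pump), the cleanup I mean looks something like this; treat it as a sketch (unpipe assumes a streams2+ readable, and abort() is request-specific):

    var http = require('http');
    var request = require('request');

    http.createServer(function (clientReq, clientRes) {
      var upstream = request('http://example.com/stream'); // placeholder

      upstream.pipe(clientRes);

      function cleanup() {
        upstream.unpipe(clientRes); // detach the pipe...
        upstream.abort();           // ...and drop the upstream request
      }

      upstream.on('end', cleanup);       // source finished normally
      upstream.on('error', function () { // source failed mid-stream
        cleanup();
        clientRes.end();
      });
      clientRes.on('close', cleanup);    // client went away early
    }).listen(8080);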
Edit: I still have issues; it seems that the stream is now unpiping when it does not have to. I'll have to double-check this.

User not disconnecting in Socket.IO Node.Js app ONLY on https connection - caching issue?

I'm having this issue with my Socket.IO / Node.js app: when I reload a page, Socket.IO still thinks that the "previous" me is connected to the chat room, so it counts two people in the room instead of one. After a while (maybe a minute) the problem disappears.
I tried to resolve it, but I think there's some caching issue: even though I reload the browser window, the previous session is still active, so it "thinks" two people are connected.
The issue doesn't occur every time, but always over an HTTPS connection, almost always on iOS, and sometimes in Safari / Chrome and other browsers (on all systems, over HTTPS).
Do you know what the issue might really be and what would be the best way to resolve it?
I use all the standard code for setting up the Socket.IO connection, and the app runs on Express / Node.js.
I can put the code here, but it's quite a lot; the open-source code is available at http://github.com/noduslabs/infranodus (the app.js file and public/entries.js are where the socket.io code lives).
Thank you!

socket.io disconnects clients when idle

I have a production app that uses socket.io (node.js back-end) to distribute messages to all the logged-in clients. Many of my users are experiencing disconnections from the socket.io server. The normal use case for a client is to keep the web app open for the entire working day. Most of that time the app sits idle, but it remains open, until the socket.io connection is lost and the app kicks them out.
Is there any way I can make the connection more reliable so my users are not constantly losing their connection to the socket.io server?
It appears that all we can do here is give you some debugging advice so that you might learn more about what is causing the problem. So, here's a list of things to look into.
Make sure that socket.io is configured for automatic reconnect. In the latest versions of socket.io, auto-reconnect defaults to on, but you may need to verify that no piece of code is turning it off (see the sketch after this list).
Make sure the client is not going to sleep in a way that makes all network connections become inactive and get disconnected.
In a working client (before it has disconnected), use the Chrome debugger's Network tab, webSockets sub-tab, to verify that you can see regular ping messages going between client and server. You will have to open the debug window, go to the Network tab, and then refresh your web page with that debug window open to start seeing the network activity. You should see a funky-looking URL that has ?EIO=3&transport=websocket&sid=xxxxxxxxxxxx in it. Click on that, then click on the "Frames" sub-tab. At that point, you can watch individual webSocket packets being sent. You should see tiny packets with length 1 every once in a while (these are the ping and pong keep-alive packets). There's a sample screenshot below that shows what you're looking for. If you aren't seeing these keep-alive packets, then you need to work out why they aren't there (likely some socket.io configuration or version issue).
Since you mentioned that you can reproduce the situation, one thing you want to know is how the socket is getting closed (client-end initiated or server-end initiated). One way to gather info on this is to install a network analyzer on your client so you can literally watch every packet that goes over the network to/from your client. There are many different analyzers, and many are free. I personally have used Fiddler, but I regularly hear people talking about WireShark. What you want to see is exactly what happens on the network when the client loses its connection. Does the client decide to send a close-socket packet? Does the client receive a close-socket packet from someone? What happens on the network at the moment the connection is lost?
[Screenshot: webSocket network view in the Chrome debugger]
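For the first point, a minimal client-side sketch that spells out the reconnect options (these are the socket.io 1.x+ defaults, written explicitly so nothing can silently disable them; the URL is a placeholder):

    var socket = io('https://example.com', {
      reconnection: true,             // default, but make it explicit
      reconnectionAttempts: Infinity, // keep trying forever
      reconnectionDelay: 1000,        // first retry after 1s
      reconnectionDelayMax: 5000      // back off to at most 5s between tries
    });

    socket.on('connect', function () {
      console.log('connected as', socket.id);
    });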
The most likely cause is one end closing the webSocket due to inactivity. This is commonly done by load balancers, but there may be other culprits. The fix is simply to send a small message every so often (I use 30 seconds, but depending on the issue you may be able to go higher) to every client. This prevents the connection from appearing idle and thus getting closed.
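A minimal sketch of that keep-alive, assuming io is your socket.io server instance:

    // Broadcast a tiny application-level message to every connected
    // client so intermediaries never see the sockets as idle.
    setInterval(function () {
      io.emit('keepalive', Date.now());
    }, 30 * 1000);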

How to detect and possibly ignore processing a bad/hung client browser request

I'm developing a node web application, and while testing, one of the client Chrome browsers went into a hung state. The browser entered an infinite loop where it continuously downloaded all the JavaScript files referenced by the HTML page. I rebooted the webserver (node.js), but once the webserver came back online, it continued receiving tons of requests per second from the same browser.
Obviously, I went ahead and terminated the client browser so that the issue went away.
But I'm concerned about how to handle such problem client connections from the server side once my web application goes live/public, since I will have no access to the clients.
Is there anything (an npm module / code?) that can make a best guess at detecting such bad client connections from within my webserver code and, once they are detected, ignore any future requests from that particular client? I understand that handling this within the Node server might not be the best approach, but at least I could save CPU/network by not responding to the bad requests.
P.S.
Btw, I'm planning to deploy my node web application onto Heroku with a small budget, so if you know of any firewall/configuration that could handle the above scenario, please do recommend it.
I think it's important to note that this is a pretty rare case. If your application has a very large user base or there is some other reason to be concerned with DoS/DDoS-related attacks, it looks like Heroku provides some DDoS protection for you. If you have your own server, I would suggest looking into Nginx or HAProxy as load balancers for your app, combined with fail2ban. See this tutorial.
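If you do want something in the Node process itself, as the question asks, a per-IP rate limiter is one option; here is a hedged sketch using the express-rate-limit middleware (assumes an Express app, and option names may differ between versions):

    var express = require('express');
    var rateLimit = require('express-rate-limit');

    var app = express();
    app.set('trust proxy', 1); // behind Heroku's router, so req.ip is the real client IP

    // Each IP gets at most 100 requests per minute; beyond that the
    // middleware answers 429 without touching your route handlers.
    app.use(rateLimit({
      windowMs: 60 * 1000,
      max: 100
    }));

    app.get('/', function (req, res) {
      res.send('ok');
    });

    app.listen(process.env.PORT || 3000);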
