So I set up socket.io with a NodeJS + ExpressJS server and everything is working well. The only problem is I just realized that my emit() calls are using the fallback XHR method to send the event to my server rather than the websocket connection it has open.
When I view the connection, all I see are some 2probe, 3probe, followed by a bunch of 2's and 3's being sent across the websocket. This connection appears to be open and working, so why is it falling back to long polling with XHR requests?
I am not providing any code right now because I am not sure which part would be relevant; the functional side of the code works great, I just want to use the websocket instead of XHR. Let me know if there is any code you would like to see.
UPDATE
So I was testing out the sockets a little more and I added a couple more emit() calls. It appears that the very first one or two emits use long polling, and then all of a sudden it switches over to using the websocket. Just curious what is happening here.
Since Socket.IO 1.x, the fallback algorithm changed from a downgrade approach to an upgrade approach.
Long polling pretty much works everywhere, so that is used at first so you can get a "connection" right away. Then in the background, an attempt is made to upgrade the long polling connection to a websocket connection. If the upgrade is successful, the long polling stops and the session switches to the websocket connection. If it's not successful, the long polling "connection" stays open and continues to be used.
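If you want to skip the polling stage entirely, the socket.io client accepts a transports option. Here is a minimal sketch of that, assuming socket.io-client 1.x and a server on localhost:3000 (the URL and event name are placeholders):

// Client side: only allow the websocket transport, so no XHR polling
// connection is opened and no upgrade step is needed.
var socket = io('http://localhost:3000', {
  transports: ['websocket']
});

socket.on('connect', function () {
  // This connection was established directly over a websocket,
  // so the emit below travels as a websocket frame, not an XHR request.
  socket.emit('hello', { msg: 'sent over websocket' });
});

The trade-off is that clients behind proxies or firewalls that block websockets will fail to connect instead of falling back to polling, which is exactly what the default upgrade approach is designed to avoid.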
Related
In my current project, my Node.js/Express server receives an HTTP request through a route.
Once received, Node uses NightmareJS to perform web scraping and then executes a Python script that further processes the data.
Lastly, it updates this data in MongoDB.
The whole thing takes about 5 minutes.
What I am trying to achieve is for my front end to receive an acknowledgement that the request went through, but also to receive an update when the above process is complete and the database has been updated.
I have looked into using long polling and socket.io, but I don't know which one I should use, or how. Or should I use RabbitMQ instead, putting a completion message into a queue that my front end constantly polls?
1. Long polling and socket.io are similar; socket.io falls back to long polling when WebSockets are not supported.
2. RabbitMQ is quite different: you cannot speak the RabbitMQ protocol from a browser, so you would need a client app, not a web page.
3. socket.io is excellent and goes well with Express, but there are other options too: SSE (server-sent events), Firebase. You need to try them before you choose one; they are not hard if you follow their official guides (a sketch of the socket.io flow follows below).
4. Some of my open-source projects might help:
https://github.com/postor/sse-notify-suite
https://github.com/postor/node-realtime-db
Benefits of each solution:
AJAX + server cache: simple
long polling: low latency
SSE: low latency, event based
socket.io: low latency, event based, high throughput, bidirectional, long-polling fallback
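To make the flow concrete, here is a minimal sketch of the acknowledge-then-notify pattern with Express and socket.io. The route, the event names, and runJob() are hypothetical stand-ins for the scraping/Python/MongoDB pipeline described in the question:

var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);

// Hypothetical stand-in for the ~5 minute scrape + Python + MongoDB work.
function runJob(id, done) {
  setTimeout(function () { done(null, { id: id, status: 'complete' }); }, 5000);
}

app.post('/jobs/:id', function (req, res) {
  var id = req.params.id;

  // 1. Acknowledge immediately so the front end knows the request went through.
  res.status(202).json({ accepted: true, id: id });

  // 2. When the long-running work finishes, push an event to interested clients.
  runJob(id, function (err, result) {
    io.to('job-' + id).emit('job:done', err ? { id: id, error: true } : result);
  });
});

io.on('connection', function (socket) {
  // The browser joins a room for the job it cares about.
  socket.on('watch', function (id) {
    socket.join('job-' + id);
  });
});

http.listen(3000);

On the client you would emit 'watch' with the job id after connecting and listen for 'job:done'. The same idea works with SSE by keeping one response stream open per client instead of a socket.io room.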
I'm working with socket.io (the 1.0 version) and something weird happens. The server is very basic, without any message handling (only the connection and disconnection signals are used). Yet it seems that the client sends multiple polling requests before trying to use websockets. For example, here is a screenshot of the requests.
As you can see, it's really messy. There are some requests to my nodejs server: first some polling requests, then the websocket (switching protocol, indicated by the blue dot on the left), and then more polling requests. I do know it uses websockets after that, because there are no further polling requests once the websocket is established. It makes my server send some messages twice on page load.
Has anyone ever experienced something like that? Maybe it will just work fine, but I don't want this kind of behaviour. If you need additional information, just ask in the comments and I'll edit the main post.
Take a look at the last paragraph of the New engine section. Socket.IO 1.0 first connects via XHR or JSONP and then, if possible, switches the transport to WebSocket on the fly. That explains the messy network activity you are seeing.
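If you want to confirm the upgrade from the server rather than from the Network tab, you can inspect the underlying engine.io connection. A minimal sketch, assuming Socket.IO 1.x (the log lines are only illustrative):

var io = require('socket.io')(3000);

io.on('connection', function (socket) {
  // socket.conn is the underlying engine.io socket.
  console.log('connected via', socket.conn.transport.name); // usually "polling"

  socket.conn.on('upgrade', function () {
    // Fired once the polling connection has been upgraded in the background.
    console.log('upgraded to', socket.conn.transport.name); // "websocket"
  });
});

If the duplicated messages on page load are a problem, forcing the websocket transport on the client (as described in the earlier answer) is one way to avoid the transport switch entirely.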
I have a Node.js app, and every client has an open socket connection. The reason I am using sockets is that I need to update the data on the client whenever the data in the database is changed by an external process.
However, every other operation in my app doesn't require a socket connection and is mostly initiated by the client (CRUD operations). But I am confused about one thing: since I always have an open socket connection, wouldn't it be better to use that socket connection for every operation and build the app with pure socket logic?
When using websockets it may be fine, but if socket.io falls back to the XHR (AJAX) transport it can be wasteful.
Take a look at the differences between these two approaches:
With simple AJAX (without socket.io), when you want to get some info from the server, or change something on the server, you send a GET or POST request and the server responds. Everything's fine.
But with socket.io's XHR transport there is one request to send the data and another to get the response.
(You can run your own experiment: write io.set('transports', ['xhr-polling']); then send data to the server and have the server respond. You will see two AJAX requests in the Network tab.)
So instead of one AJAX request, socket.io makes two requests.
This is not because socket.io is bad; it is a feature of the sockets approach. This approach is good when you want one side (client or server) to send messages independently of the other, and that is what socket.io does very well.
But if you want to do request-response style work, it's best to use simple AJAX for the sake of traffic economy (note that I am comparing simple AJAX to socket.io's AJAX transport; websockets are another story).
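To illustrate the request-response case, here is a minimal sketch of both styles on the server, assuming Express and Socket.IO; the route, event name, and payload are made up for the example:

var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);

// Plain AJAX style: one HTTP request, one HTTP response.
app.get('/items/:id', function (req, res) {
  res.json({ id: req.params.id, name: 'example item' });
});

// socket.io style: the client emits an event and passes a callback,
// which the server invokes as an acknowledgement.
io.on('connection', function (socket) {
  socket.on('get-item', function (id, ack) {
    ack({ id: id, name: 'example item' });
  });
});

http.listen(3000);

On the client the socket.io version looks like socket.emit('get-item', 42, function (item) { ... }). Over the XHR transport that emit and its acknowledgement travel in separate polling requests, which is the two-request pattern described above.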
But since this question is about approaches and can't have a 100% "yes" or "no" answer, there might be different opinions.
Sorry for my English. I tried to write as clearly as I could :)
I had a use case where I was planning to poll from the browser to the server to check for any updates for a given customer. Then I thought of exploring a push approach, where the web server (in my case Tomcat) does it automatically whenever the servlet running on the web server gets an update from a third party. The first question that came to my mind was how the Java class will know which browser client it has to send the update to. Then I came across this link: http://www.gianlucaguarini.com/blog/nodejs-and-a-simple-push-notification-server/.
It is an amazing link that demonstrates how the push approach can be supported, but it left me with some basic questions before going ahead with this approach:
1) Does the browser internally use websockets only to communicate with the web server, or does it just use TCP for that? As per my understanding the browser uses only the TCP protocol, though websockets are supported by some browsers like Chrome and Mozilla.
2) Is the websocket (provided by io.connect('url') in the example) supported by all browsers, especially IE7 and IE8?
3) To support the push approach in the browser, are websockets the only way to go? As per my understanding, websockets are mainly used to push data from the web server to the browser (only to those browsers that support them). For this, the browser first needs to make a websocket connection to the web server; the server then uses the created websocket to emit any data to the browser. Right?
4) Is there a possibility of the websocket being disconnected automatically, for example when a request times out or a response is awaited for a long time?
5) Do we need to disconnect the socket explicitly, or will it be closed automatically when the browser is closed?
It would be really helpful if the reply is point by point.
1) The WebSocket protocol runs over TCP. The connection starts as an HTTP handshake and is then upgraded to the WebSocket protocol on the same TCP connection.
2) Internet Explorer is supposed to support WebSockets in version 10. The other major browsers (Chrome, Firefox, Safari, Opera) fully support them.
3) No, there are many other possibilities: simple polling, long polling (where you make one AJAX request and the server responds only when it has new data), a hidden infinite iframe, use of Flash, etc.
4) Yes.
5) Once an application which is using a port (in this case a browser) is killed, all of its connections are terminated as well.
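On the original question of how the server knows which browser to push to: with socket.io the usual pattern is to have each client identify itself after connecting and join a room keyed by the customer id, then emit to that room when the third party delivers an update. A minimal sketch, assuming Socket.IO on Node; the event names and the register handshake are illustrative only:

var io = require('socket.io')(3000);

io.on('connection', function (socket) {
  // The browser announces which customer it belongs to right after connecting.
  socket.on('register', function (customerId) {
    socket.join('customer-' + customerId);
  });
});

// Call this whenever the third party delivers an update for one customer;
// only browsers registered for that customer receive the push.
function pushUpdate(customerId, update) {
  io.to('customer-' + customerId).emit('update', update);
}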
I'm trying to implement an http long polling server in Node.js, and have no idea how to close/shutdown pending requests if a timeout is reached.
Two possible solutions come to mind:
Store the pending request with a timestamp in a hash/object, then call setInterval so that every 1/2/x seconds the pending requests whose timestamps are too old are removed.
Set a timeout on the socket connection.
Neither solution seems very reasonable to me, so what would be the Node.js way to achieve something like this?
Why don't those sound reasonable? In particular, setting a timeout on the socket seems to make sense to me, as:
There is a built-in method for doing so
An event is fired when the connection times out, allowing you to do any necessary cleanup (e.g. calling end/destroy on the socket)
I would probably go this route so that Node handles the timeout behind the scenes; however, if it makes sense for your app, I don't see any harm in keeping a timestamp and expiring connections manually.
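Here is a minimal sketch of the socket-timeout approach with a plain Node http long-polling server; the 30-second limit and the 204 "no content" reply are arbitrary choices for the example:

var http = require('http');

var pending = []; // long-poll responses waiting for new data

var server = http.createServer(function (req, res) {
  // Hold the request open; it is answered either by new data or by the timeout.
  pending.push(res);

  // If nothing arrives within 30 seconds, answer with "no content" so the
  // client can re-poll. Registering this handler also tells Node not to
  // destroy the timed-out socket automatically.
  res.setTimeout(30000, function () {
    pending.splice(pending.indexOf(res), 1);
    res.writeHead(204);
    res.end();
  });
});

// Call this when new data shows up: flush every waiting long-poll request.
function broadcast(data) {
  pending.splice(0).forEach(function (res) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(data));
  });
}

server.listen(3000);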
You may be interested in these articles, each of which handles expiring connections differently:
Long polling in Node.js
How to write a Long Polling Event Push Server with node.js