I'm working with Socket.IO (version 1.0) and something weird happens. The server is very basic, with no message handling (meaning only the connection and disconnection events are used). Yet the client seems to send multiple polling requests before trying to use WebSockets. For example, here is a screenshot of the requests.
As you can see, it's really messy. There are several requests to my Node.js server: first some polling requests, then the WebSocket upgrade (the "switching protocols" request, indicated by the blue dot on the left), and then more polling requests. I know it uses WebSockets after that, because there are no further polling requests once the WebSocket is established. This makes my server send some messages twice on page load.
Has anyone ever experienced something like this? Maybe it will just work fine, but I don't want this kind of behaviour. If you need additional information, just ask in the comments and I'll edit the main post.
Take a look at the last paragraph of the New engine section. Socket.IO 1.0 first connects via XHR or JSONP and then, if possible, switches the transport to WebSocket on the fly. This explains the messy network activity you're seeing.
Related
So I set up Socket.IO with a NodeJS + ExpressJS server and everything is working well. The only problem is that I just realized my emit() calls are using the fallback XHR method to send events to my server, rather than the WebSocket connection it has open.
When I view the connection, all I see are some 2probe and 3probe packets, followed by a bunch of 2's and 3's being sent across the WebSocket. That connection appears to be open and working, so why is it falling back to long polling with XHR requests?
I am not providing any code right now because I am not sure what part would be relevant; the functional side of the code is working great, I just want it to use the WebSocket instead of XHR. Let me know if there is any code you would like to see.
UPDATE
So I was testing the sockets a little more and added a couple more emit() calls. It appears that the very first one or two emits use long polling, and then all of a sudden it switches over to using the WebSocket. Just curious what is happening here.
Since Socket.IO 1.x, the fallback algorithm has changed from a downgrade approach to an upgrade approach.
Long polling works pretty much everywhere, so it is used first so that you get a "connection" right away. Then, in the background, an attempt is made to upgrade the long-polling connection to a WebSocket connection. If the upgrade succeeds, the long polling stops and the session switches to the WebSocket connection. If it doesn't, the long-polling "connection" stays open and continues to be used.
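If you want to watch the upgrade happen from the client side, the 1.x client exposes the underlying engine. A minimal sketch, assuming a default Socket.IO 1.x setup served from the same host:

    var socket = io(); // starts on polling, then tries to upgrade

    socket.on('connect', function () {
      // usually prints "polling" right after connect
      console.log('transport:', socket.io.engine.transport.name);

      socket.io.engine.on('upgrade', function () {
        // prints "websocket" once the upgrade succeeds
        console.log('upgraded to:', socket.io.engine.transport.name);
      });
    });

If the duplicated messages on page load bother you, you can also skip the polling phase entirely with io({ transports: ['websocket'] }), at the cost of losing the fallback for clients where WebSockets don't work.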
I have a Node.js app, and every client has an open socket connection. The reason I am using sockets is that I need to update the data on the client whenever the data in the database is changed by an external process.
However, every other operation in my app doesn't require a socket connection and is mostly initiated by the client (CRUD operations). But I am confused about one thing: since I always have an open socket connection anyway, wouldn't it be better to use that socket connection for every operation and build the app with pure socket logic?
When using WebSockets it's probably fine. But if Socket.IO falls back to the XHR (AJAX) transport, it can be wasteful.
Take a look at the difference between the two approaches:
In the case of plain AJAX (without Socket.IO), when you want to get some info from the server, or change something on the server, you send a GET or POST request and the server responds. Everything's fine.
But in the case of Socket.IO's XHR transport, there is one request to send the data and another to fetch the response.
(You can run the experiment yourself: write io.set('transports', ['xhr-polling']); then send data to the server and have the server respond; you will see two AJAX requests in the Network tab.)
So instead of one AJAX request, Socket.IO makes two.
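Here is a rough sketch of that experiment (using the Socket.IO 0.9-style API to match the io.set() call above; the port and event names are made up):

    // Server: force the AJAX fallback transport
    var io = require('socket.io').listen(3000);
    io.set('transports', ['xhr-polling']);

    io.sockets.on('connection', function (socket) {
      socket.on('ask', function (data) {
        // this reply travels back on a *second* XHR, not on the
        // request that carried the 'ask' event
        socket.emit('answer', 'got: ' + data);
      });
    });

    // Client: watch the Network tab while this runs
    var socket = io.connect('http://localhost:3000');
    socket.emit('ask', 'hello');
    socket.on('answer', function (msg) { console.log(msg); });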
This is not because Socket.IO is bad; it's inherent to the sockets approach. That approach is great when you want one side (client or server) to send messages independently of the other, and that is what Socket.IO does very well.
But if you want to do request-response style communication, plain AJAX is best for traffic economy (note that I am comparing plain AJAX to Socket.IO's AJAX transport; WebSockets are another story).
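For contrast, the plain-AJAX version of the same round trip is a single request (the /api/echo URL is hypothetical):

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/echo');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      // the response arrives on the same HTTP exchange as the request
      console.log(xhr.responseText);
    };
    xhr.send(JSON.stringify({ data: 'hello' }));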
But since this question is about approaches and can't have a 100% "yes" or "no" answer, there may be different opinions.
Sorry for my English. I tried to write as clearly as I could :)
Background: I am building a web app using NodeJS + Express. Most of the communication between client and server consists of REST (GET and POST) calls. I would typically use AJAX XMLHttpRequest as described in https://developers.google.com/appengine/articles/rpc, and I don't see how to make my RESTful service work with Socket.IO as well.
My questions are:
1. In what scenarios should I use Socket.IO rather than AJAX RPC?
2. Is there a straightforward way to make them work together, at least for Express-style REST?
3. Are there real benefits to using Socket.IO (if WebSockets are used, at the TCP layer) in non-real-time web applications, like a tinyurl site (where users post queries and the server responds and forgets)?
Also, I was thinking of a tricky but probably nonsensical idea: what if I used REST for requests from clients, closed the connection from the server side, and then used socket.emit()?
Thanks in advance.
Your primary problem is that WebSockets are not request/response oriented the way HTTP is. You mention REST and HTTP interchangeably; keep in mind that REST is a methodology for designing and modeling your HTTP routes.
Your questions:
1. Socket.IO is a good fit when you don't require a request/response format. For instance, if you were building a multiplayer game in which whoever clicked more buttons won, you would send the server each click from each user without needing a response confirming that each click was registered (see the sketch after this list). As long as the WebSocket connection is open, you can assume the message is making it to the server. Another use case is when a server needs to contact a client sporadically; an analytics page would be a good fit for WebSockets, since there is no uniform pattern for when data needs to reach the client. It could happen at any time.
2. The WebSocket connection is an HTTP GET request with a special header asking the server to upgrade it to a WebSocket connection. Distinguishing the different events and messages on the WebSocket connection is up to your application logic, and likely won't match REST-style URIs and methods (otherwise you would be replicating HTTP request/reply, in a sense).
3. No.
Not sure what you mean by the last bit.
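To illustrate point 1, a minimal fire-and-forget sketch (the 'click' event name, element id, and port are made up):

    // Client: report every click, never wait for a reply
    var socket = io.connect('http://localhost:3000');
    document.getElementById('myButton').addEventListener('click', function () {
      socket.emit('click'); // no acknowledgement requested
    });

    // Server: just tally clicks per connection
    io.sockets.on('connection', function (socket) {
      var clicks = 0;
      socket.on('click', function () { clicks++; });
    });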
I'll just explain more about when you want to use Socket.IO and leave the in-depth explanation to Tj there.
Generally you will choose Socket.IO when performance and/or latency is a major concern and you have a site that involves users polling for data often. AJAX or long-polling is by far easier to implement; however, it can have serious performance problems in high-load situations. By high load, I mean something like Facebook: imagine millions of people loading their feed, with every user asking the server for new data every minute. That could require some serious hardware and software to work well. With Socket.IO, each user could instead connect and just wait indefinitely for new data from the server as it arrives, resulting in far less overall server traffic.
Also, if you have a real-time application, Socket.IO would allow for a much better user experience while maintaining a reasonable server load. A common example is a chat room. You really don't want to have to constantly poll the server for new messages. It would be much better for the server to broadcast new messages as they are received. Although you can do it with long-polling, it can be pretty expensive in terms of server resources.
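The server side of such a chat room boils down to a few lines. A sketch, where the 'chat message' event name is just a convention:

    io.sockets.on('connection', function (socket) {
      socket.on('chat message', function (msg) {
        // push the message to every connected client; nobody polls
        io.sockets.emit('chat message', msg);
      });
    });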
I had a use case where I was planning to poll from the browser to the server to check for any updates for a given customer. Then I thought of exploring a push approach, where the web server (in my case Tomcat) can push updates automatically whenever the servlet running on the web server gets an update from a third party. The first question that came to my mind was how the Java class would know which browser client it has to send the update to. Then I came across http://www.gianlucaguarini.com/blog/nodejs-and-a-simple-push-notification-server/. This is an amazing link that demonstrates how the push approach can be supported, but it left me with some basic questions before going ahead with this approach. These are:
1) Does the browser internally use only WebSockets to communicate with the web server, or does it just use TCP for that?
As per my understanding, the browser uses only the TCP protocol, though WebSockets are supported by some browsers like Chrome and Mozilla.
2) Is the WebSocket (provided by io.connect('url') in the example) supported by all browsers, especially IE7 and IE8?
As per my understanding
3) To support the push approach in the browser, are WebSockets the only way to go?
As per my understanding, WebSockets are mainly used to push data from the web server to the browser (only to those browsers that support them). For this, the browser first needs to make the WebSocket connection to the web server; the server will then use the created WebSocket to emit any data to the browser. Right?
4) Is there a possibility of the WebSocket getting disconnected automatically, for example when a request times out or a response is awaited for a long time?
5) Do we need to disconnect the socket explicitly, or will it be closed automatically when the browser is closed?
It would be really helpful if the reply is point-by-point.
1) The WebSocket protocol runs over TCP. The connection simply starts out as an HTTP request and is then upgraded to the WebSocket protocol, continuing over the same underlying TCP connection.
2) Internet Explorer is supposed to support WebSockets in version 10. The other major browsers (Chrome, Firefox, Safari, Opera) already fully support it.
3) No, there are many other possibilities: simple polling, long polling (where you make one AJAX request and the server responds only when it has new data; see the sketch after this list), a hidden infinite iframe, the use of Flash, etc.
4) Yes.
5) Once an application using a port (in this case the browser) is killed, all its connections are terminated as well.
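As a sketch of the long-polling option from answer 3 (the /updates endpoint and handleUpdate() are hypothetical):

    function poll() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/updates');
      xhr.onload = function () {
        handleUpdate(JSON.parse(xhr.responseText)); // hypothetical handler
        poll(); // immediately re-open the held request
      };
      xhr.onerror = function () {
        setTimeout(poll, 5000); // back off briefly on failure
      };
      xhr.send();
    }
    poll();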
We have a browser application (SaaS) where we would like to notify the user in case of internet connection or server connection loss. Gmail does this very nicely: the moment I unplug the internet cable or disable network traffic, it immediately says it is unable to reach the server and gives me a countdown for retry.
What is the best way to implement something like this? Should the client browser issue AJAX requests to the application server every second, or should a separate server just report back "alive"? Scalability will become an issue down the road.
Because Gmail already checks for new e-mails every few seconds, and for chat information even more frequently, it can tell without a separate request whether the connection is down. If you're not using AJAX for some other sort of constant update, then yes, you would just have your server reply with some sort of "alive" signal. Note, however, that you couldn't use a separate server for this because of AJAX cross-domain restrictions.
With the server reporting to the client (push via Comet), you have to maintain an open connection for each client. This can be pretty expensive if you have a large number of clients, so scalability can be an issue, as you mentioned. The other option is to poll: instead of doing it every second, you can have the client poll every 5-10 seconds or so.
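A minimal sketch of that poll (the /alive endpoint, the intervals, and the banner helpers are all made up):

    function checkAlive() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/alive');
      xhr.timeout = 5000; // treat a slow server as unreachable
      xhr.onload = function () {
        hideOfflineBanner();           // hypothetical UI helper
        setTimeout(checkAlive, 10000); // poll again in 10 seconds
      };
      xhr.onerror = xhr.ontimeout = function () {
        showOfflineBanner();           // hypothetical UI helper
        setTimeout(checkAlive, 10000); // keep retrying, Gmail-style
      };
      xhr.send();
    }
    checkAlive();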
Something else you can look at is WebSockets (developed as part of HTML5), but I am not sure it is widely supported yet (AFAIK only Chrome supports it).