I am creating a Java ME application that frequently gets data from a server, with a 5 minute gap between requests. Can I use a persistent HTTP connection for each request?
Can I use the same connection for each request?
Yeah, sure you can.
However, if you are going to have a server that will be handling requests from a large number of devices, you would probably NOT want persistent connections.
Related
I have a Node.js project in which the front end needs to request some data from the back end. Since it only has to do so once per session, I was thinking of using web requests instead of socket.io because I don't need a continuous connection between the front end and the back end all the time.
So my question is: How much, if at all, will the efficiency of my project increase if I use web requests instead of socket.io?
If you're making a one-time request from client to server, there is no reason to use a continuous socket.io connection or to create a socket.io connection, use it for one message and then disconnect it.
It will certainly be simpler to just use a single http request to get your data.
How much, if at all, will the efficiency of my project increase?
The main difference is that, at high scale, your server wouldn't need to handle lots of simultaneous socket.io connections. At small scale, you probably won't notice much of a difference either way. The main reason for choosing an HTTP request is that it's the simpler and more appropriate architecture for making a single request from client to server. A socket.io connection has its uses in other circumstances.
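For a once-per-session request, a plain HTTP call from the front end is about as simple as it gets. A minimal sketch, assuming a hypothetical /api/session-data endpoint on the back end:

```js
// One-off request from the front end; nothing stays open afterwards.
fetch('/api/session-data')
  .then((res) => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  })
  .then((data) => {
    console.log('session data:', data);
  })
  .catch((err) => {
    console.error('request failed:', err);
  });
```

Compare that with creating a socket.io connection, waiting for it to connect, exchanging one message, and tearing it down again; for a single request the socket buys you nothing.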
Normally I use AJAX HTTP requests to get/post data. Now I'm wondering: why shouldn't I replace all the AJAX GET requests with Socket.IO? Is there any disadvantage to that approach?
I understand that session cookies are sent via HTTP headers between client and server with every HTTP request. During client<=>server interactions using sockets, will the session cookies in the browser automatically be sent to the server via socket headers (if those exist)?
In which use cases should I prefer Socket.IO over HTTP? (If you consider this a question that demands a broad answer, you can link me to some relevant articles.)
WebSockets are useful when the server needs to push some real time information to the client about some events that happened on the server. This avoids the client making multiple polling AJAX calls to verify if some event has occurred on the server.
Think of a simple chat application. If the client needs to know whether the other participant in a chat session has written something in order to display it, it would need to make AJAX calls at regular intervals to check this on the server. WebSockets, on the other hand, allow the server to notify the client when this event occurs, so it is much more efficient in terms of network traffic. The WebSockets protocol also allows the server to push real-time information to multiple subscribed clients at the same time: for example, you could have a web browser and a mobile application subscribed to a WebSocket and talking to each other directly through the server. With AJAX, those kinds of scenarios would be harder to achieve and would require many more stateless HTTP calls.
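As a rough illustration of that chat example (ports and event names are made up, using a Socket.IO 2.x-style API):

```js
// Server side: push each incoming chat message to every subscribed client
// the moment it arrives, instead of waiting for the clients to poll.
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    // Broadcast to all connected clients (browser, mobile app, ...).
    io.emit('chat message', msg);
  });
});
```

On the client, a single `socket.on('chat message', handler)` registration replaces the polling loop.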
I understand that session cookies will be sent between client and server during every HTTP request; is this also the case during client<=>server interactions using sockets?
The WebSockets protocol is different from the HTTP protocol. So after the initial handshake occurs (which happens over HTTP), there is no longer any notion of HTTP-specific things such as cookies.
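That said, because the handshake itself is an HTTP request, any cookies the browser has for that origin are visible at connection time, just not on the individual socket messages. A rough Socket.IO sketch (session parsing is omitted, and exact handshake fields can differ between versions):

```js
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  // Cookie header from the initial HTTP handshake, if the browser sent one.
  const cookieHeader = socket.handshake.headers.cookie;
  console.log('handshake cookies:', cookieHeader);

  socket.on('some event', (payload) => {
    // Subsequent socket messages carry no HTTP headers or cookies;
    // identify the user once at handshake time and keep it on the socket.
  });
});
```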
There's one important thing that you should be aware when using WebSockets: it requires a persistent connection to be established between the client and the server. This could make it tricky when you need to load balance your servers. Of course the different implementations of the WebSockets protocol might offer solutions to this problem. For example Socket.IO has a Redis implementation allowing the servers to keep track of connected clients through a cluster of nodes.
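A minimal sketch of that Socket.IO/Redis setup (the socket.io-redis package is assumed; the exact adapter API varies between Socket.IO versions):

```js
// Every Node.js instance attaches the same Redis-backed adapter, so
// broadcasts are relayed across the whole cluster via Redis pub/sub.
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('broadcast', (msg) => {
    // Reaches clients connected to any node in the cluster, not just this one.
    io.emit('broadcast', msg);
  });
});
```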
I want to add sockets to an existing project with Node.js and Socket.io.
I already have 2 servers:
A RESTful API web service, to store and manage my data.
A Public web service to return HTML and assets (js, css, images, ...).
On my first try, I created the socket server on the Public one. But I think it would be better to create another one to handle only socket queries.
What do you think? Is it a good idea, or a useless one that will add more problems than it solves (maybe duplicated internal libs, ...)?
Also, I'm using a token to communicate between Public and API; do I have to create another one for communication between Socket and API, or can I use the same one?
------[EDIT]------
As nobody understood me well, I have created a schema of the infrastructure I was thinking about.
Is it a good way to proceed?
Do the Public server and the Socket server have to be the same, or can they be separate?
Do I have to create a socket connection between the API and the Socket server for each connected client?
Thank you!
Thanks for explaining better.
First of all, while this seems reasonable, this way of using Socket.io is not the most common one. The biggest advantage of using Socket.io is that it keeps a channel open for 2-way communication. The main advantage of this is that the server itself can send messages to the client without the latter having to poll periodically.
Think, for example, of a mail client. Without sockets, the browser would have to poll periodically to check for new mail. With an open socket connection, instead, as soon as a new mail comes the server notifies the client immediately.
In your case, the benefits could be limited, and I'm not sure the additional complexity of a Socket.io server (and cost!) would really be worth the modest speed improvement on REST requests. However, at the end it's up to you.
In answer to your points:
1. See above.
2. If the "public server" is not written in Node.js, they can't be the same application. Whether they reside on the same server is up to you and your budget. Ideally they should be separate, for bigger workloads.
3. If you just want the socket server to act as a real-time proxy, then yes, you'll have to create a socket connection for each request. How that will work is:
- The client requests a resource from the Socket.io server.
- The Socket.io server makes a normal HTTP request to the API server (e.g. using request).
- The response is returned to the client over the socket connection.
The workflow represented in #3 is the reason why you should expect only a moderate performance improvement. Indeed, you'll get somewhat better latency, but most of the overhead of making an HTTP request is still there!
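A rough sketch of that proxy workflow (the event name, API host, and acknowledgement callback are invented for illustration):

```js
// Socket.io server acting as a real-time proxy in front of the REST API.
const io = require('socket.io')(3000);
const https = require('https');

io.on('connection', (socket) => {
  // 1. The client asks for a resource over its socket connection.
  socket.on('api:get', (path, ack) => {
    // 2. The socket server performs an ordinary HTTP request against the API.
    https.get(`https://api.example.com${path}`, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      // 3. The API response travels back to the client over the socket.
      res.on('end', () => ack(null, body));
    }).on('error', (err) => ack(err.message));
  });
});
```

On the client this would look like `socket.emit('api:get', '/users', (err, data) => { ... })`; the HTTP round trip to the API is still paid on every request, which is exactly the overhead mentioned above.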
I'm trying to set up a server that can handle a high sustained amount of simultaneous requests. I found that at a certain point, the server won't be able to recycle "old" TCP connections quickly enough to accommodate extreme amounts of requests.
Do WebSockets eliminate or decrease the number of TCP connections that a server needs to handle, and are they a good alternative to "normal" requests?
WebSockets are persistent connections, so it really depends on what you're talking about. The way socket.io uses XHR is different from a typical AJAX call in that it hangs onto the request for as long as possible before sending a response. It's a technique called long-polling, and it tries to simulate a persistent connection by never letting go of the request. When the request is about to time out, it sends a response and a new request is initiated immediately, which it hangs onto yet again, and the cycle continues.
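To make the long-polling idea concrete, here is a rough server-side sketch (Express is assumed, and getNewEvents() is a stubbed-out, hypothetical helper) showing how the server hangs onto a request until it has something to say or the request is about to time out:

```js
const express = require('express');
const app = express();

// Stand-in for real event storage; always empty here, so every request is
// held for ~25 seconds, which is the long-polling behaviour being shown.
const getNewEvents = (since) => [];

app.get('/poll', (req, res) => {
  const startedAt = Date.now();

  const check = () => {
    const events = getNewEvents(req.query.since);
    if (events.length > 0) {
      res.json(events);            // answer as soon as there is data...
    } else if (Date.now() - startedAt > 25000) {
      res.json([]);                // ...or just before the request would time out,
    } else {                       // after which the client re-polls immediately
      setTimeout(check, 1000);
    }
  };
  check();
});

app.listen(3000);
```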
So I guess if you're getting flooded with connections because of ajax calls then that's probably because your client code is polling the server at some sort of interval. This means that even idle clients will be hitting your server with fury because of this polling. If that's the case then yes, socket.io will reduce your number of connections because it tries to hang onto one single connection per client for as long as possible.
These days I recommend socket.io over doing plain ajax requests. Socket.io is designed to be performant with whatever transport it settles on. The way it gracefully degrades based on what connection is possible is great and means your server will be overloaded as little as possible while still reaching as wide an audience as it can.
We have a browser application (SaaS) where we would like to notify the user in the case of internet connection or server connection loss. Gmail does this very nicely: the moment I unplug the internet cable or disable network traffic, it immediately says it is unable to reach the server and gives me a countdown for retry.
What is the best way to implement something like this? Would I want the client browser issuing AJAX requests to the application server every second, or have a separate server that just reports back "alive"? Scalability will become an issue down the road.
Because GMail already checks for new e-mails every few seconds, and for chat information even more frequently, it can tell without a separate request whether the connection is down. If you're not using Ajax for some other sort of constant update, then yes, you would just have your server reply with some sort of "alive" signal. Note, however, that you couldn't use a separate server because of Ajax cross-domain restrictions.
With the server reporting to the client (push via Comet), you have to maintain an open connection for each client. This can be pretty expensive if you have a large number of clients. Scalability can be an issue, as you mentioned. The other option is to poll. Instead of doing it every second, you can have it poll every 5-10 seconds or so.
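A rough client-side sketch of that polling approach (the /alive endpoint and the banner helpers are made up):

```js
const POLL_INTERVAL = 10000; // poll every 10 seconds rather than every second

function showOfflineBanner() { /* hypothetical UI: "unable to reach the server" */ }
function hideOfflineBanner() { /* hypothetical UI: clear the warning */ }

function checkAlive() {
  fetch('/alive', { cache: 'no-store' })
    .then((res) => {
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      hideOfflineBanner();
    })
    .catch(() => showOfflineBanner())    // network error or bad status: warn the user
    .finally(() => setTimeout(checkAlive, POLL_INTERVAL));
}

checkAlive();
```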
Something else that you can look at is Web Sockets (developed as part of HTML 5), but I am not sure if it is widely supported (AFAIK only Chrome supports it).