Considering Web Socket for a JavaEE project - node.js

The requirements of the project are:
If any user updates a record (any record), all relevant parties must be notified immediately by displaying an alert somewhere in the webpage. In previous projects, the browser would poll the server for any relevant changes every N seconds.
I have been reading up on WebSockets and think this is the perfect solution for this problem (I do not like polling).
I have some questions regarding Web Sockets in JavaEE. Please correct me if I am wrong.
WebSocket seems to be supported on the GlassFish server but not in the latest version of JBoss/WildFly.
If 1000 clients are logged in and connected to the server using WebSocket, does the server keep 1000 separate sockets open, one for each connection? Or is the implementation similar to Node.js, where a single server socket is used for all client connections? This does not seem to be documented anywhere in the Java EE tutorials.

WebSockets are TCP connections; the WebSocket protocol is an upgrade of an HTTP connection, established with a handshake, and after the upgrade the connection is bidirectional.
I don't think you are getting a single web socket in Node.js. You have a connection per connected client anyway. In Node.js you have broadcast, but that is just sending a message to every connected client through its own web socket. You have the same functionality in GlassFish, where you simply loop over all the web sockets:
http://www.byteslounge.com/tutorials/java-ee-html5-websockets-with-multiple-clients-example
and you can do the same in WebLogic:
https://docs.oracle.com/middleware/1212/wls/WLPRG/websockets.htm#WLPRG872
This is the same as Node.js without any wrapper.
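To make the comparison concrete, here is a minimal Node.js sketch of that same broadcast loop, using the ws package; the port and message format are placeholders for illustration, not something prescribed by the tutorials above:

```javascript
// Broadcasting with plain WebSockets in Node.js: loop over the
// currently connected sockets, just like the GlassFish example above.
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 }); // placeholder port

function broadcast(message) {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(message); // push the same payload to every open socket
    }
  }
}

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // When any client reports an update, notify all connected parties.
    broadcast(`record updated: ${data}`);
  });
});
```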

Related

Multiple Socket.io app processes cause each client socket to connect and disconnect repeatedly

I am working on a Node.js app with Socket.io. I did a test with a single process using PM2 and there were no errors. Then I moved to our production environment (we use a Google Cloud Compute Engine instance).
I run three app processes, and an iOS client connects to the server.
The iOS client doesn't keep the socket connection. It doesn't send a disconnect to the server, but it gets disconnected and reconnects to the server, and this happens continuously.
I am not sure why the server disconnects the client.
If you have any hint or answer for this, I would appreciate it.
That's probably because requests end up on a different machine than the one they originated from.
Straight from Socket.io Docs: Using Multiple Nodes:
If you plan to distribute the load of connections among different processes or machines, you have to make sure that requests associated with a particular session id connect to the process that originated them.
What you need to do:
Enable session affinity, a.k.a. sticky sessions.
If you want to work with rooms/namespaces, you also need a centralised memory store to keep track of namespace information, such as Redis with the Redis adapter (a minimal sketch follows below).
But I'd advise you to read the documentation piece I posted; things might have changed a bit since the last time I implemented something like this.
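For reference, a minimal sketch of the Redis adapter setup with the current @socket.io/redis-adapter package looks roughly like this; the port and Redis URL are placeholders, and the package names have shifted between socket.io versions, so treat it as illustrative rather than definitive:

```javascript
// Attach a Redis-backed adapter so rooms/namespaces and broadcasts work
// across every socket.io process behind the load balancer.
const { Server } = require('socket.io');
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

const io = new Server(3000); // placeholder port
const pubClient = createClient({ url: 'redis://localhost:6379' }); // placeholder URL
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  io.adapter(createAdapter(pubClient, subClient));
});
```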
By default, the socket.io client "tests" out the connection to its server with a couple of HTTP requests. If you have multiple servers and those initial HTTP requests don't go to the exact same server each time, then the socket.io connection will never get established properly, will never switch over to webSocket, and will keep attempting to use HTTP polling.
There are two ways to fix this.
You can configure your clients to just assume the webSocket protocol will work. This initiates the connection with one and only one HTTP request, which is then immediately upgraded to the webSocket protocol (with socket.io running on top of that). In socket.io, this is a transport option specified with the initial connection (see the sketch below).
You can configure your server infrastructure to be sticky so that a request from a given client always goes back to the exact same server. There are lots of ways to do this depending upon your server architecture and how the load balancing is done between your servers.
If your servers keep any client state local to the server (and not in a shared database that all servers access), then you will need even a dropped-and-reconnected client to land back on the same server, and sticky connections are your only solution. You can read more about sticky sessions on the socket.io website.
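A minimal sketch of the first option, forcing the webSocket transport on the socket.io client; the URL is a placeholder and the script assumes the socket.io client library is already loaded (so io is available):

```javascript
// Skip the initial HTTP long-polling probe and connect with the
// webSocket transport directly.
const socket = io('https://example.com', { // placeholder URL
  transports: ['websocket']
});

socket.on('connect', () => {
  // Should report "websocket" rather than "polling".
  console.log('connected over', socket.io.engine.transport.name);
});
```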
Thanks for your replies.
I finally figured out the issue. It was caused by the TTL of the backend service in the Google Cloud Load Balancer. The default TTL was 30 seconds, which made each socket connection disconnect and reconnect.
So I updated the value to 3600 seconds, and then the connection stayed up.

Why do many websocket libraries implement their own application-level heartbeats?

For example, socket.io has pingInterval and pingTimeout settings, and nes for hapi has similar heartbeat-interval settings. This is ostensibly to prevent intermediaries such as over-zealous proxies from closing what looks like an inactive connection.
But ping/pong frames are part of the websocket protocol and seem to serve the same purpose. So why do websocket library implementors add another layer of ping/pong at the application level?
If I were pushed to guess, I would say it is in case the websocket server is dealing with a client that doesn't respond to or support the protocol-level ping/pongs.
I did some reading up and made some tests and I think it comes down to this:
WebSocket pings are, in practice, initiated by the server only.
The browser WebSocket API isn't able to send ping frames, and incoming pings from the server are not exposed in any way.
These pings are all about keepalive, not presence.
Therefore, if the server goes away without a proper TCP teardown (network loss, crash, etc.), the client doesn't know whether the connection is still open.
Adding a heartbeat at the application level is a way for the client to establish the server's presence, or lack thereof. These heartbeats must be sent as normal data messages, because that's all the browser WebSocket API is capable of.
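As an illustration, a browser-side heartbeat built only on the standard WebSocket API might look like the sketch below; the endpoint URL, message shape, and timing values are assumptions, not part of any particular library:

```javascript
const ws = new WebSocket('wss://example.com/socket'); // hypothetical endpoint
let heartbeatTimeout;
let pingInterval;

function resetHeartbeat() {
  clearTimeout(heartbeatTimeout);
  // If nothing arrives from the server within 30s, assume it is gone and
  // close the socket so the application can reconnect.
  heartbeatTimeout = setTimeout(() => ws.close(), 30000);
}

ws.addEventListener('open', () => {
  resetHeartbeat();
  // The browser API cannot send protocol-level ping frames, so the "ping"
  // goes out as an ordinary data message every 25s.
  pingInterval = setInterval(() => ws.send(JSON.stringify({ type: 'ping' })), 25000);
});

ws.addEventListener('message', () => resetHeartbeat()); // any traffic proves presence

ws.addEventListener('close', () => {
  clearInterval(pingInterval);
  clearTimeout(heartbeatTimeout);
});
```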

Push notifications with websockets in a distributed Node.js app

I have a Node.js application deployed on multiple servers, with an Nginx server load-balancing the browser traffic to them. The server uses a push notification mechanism (using the websocket module) to communicate with the browser.
In the current setup, the browser loads an application page where it opens a client socket, which connects to the server. The websocket request is sent to Nginx, which sends it to one of the servers in the cluster. When an event happens on the server, it notifies the client browsers listening to the server websocket.
The problem is that each server is communicating only with a subset of the websocket clients. Also, each server is only aware of the events that take place on the server. As a result not all clients are notified of all the server events.
I can see several potential solutions:
Configure Nginx to send websocket requests from each browser to all the servers in the cluster. I could not figure out how to do this; load balancing does not support broadcasting.
Store websocket connections in the database, so that all servers have access to them. I am not sure how to serialize the websocket connection object to store it in MongoDB.
Set up a communication mechanism among the servers in the cluster (some kind of message bus) and, whenever an event happens, have all the servers notify the websocket clients they are tracking (a sketch of this approach follows below). This somewhat complicates the system and requires the nodes to know each other's addresses. Which package is most suitable for such a solution?
What is the simplest way to implement distributed push notifications in a Node.js app?
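For what it's worth, a minimal sketch of the third option using Redis pub/sub as the message bus could look like the following; the ws package, channel name, port, and event shape are assumptions for illustration (the same idea applies to the websocket module):

```javascript
// Each server instance runs this: it subscribes to a shared Redis channel
// and forwards every published event to its own websocket clients.
const { createClient } = require('redis');
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 }); // placeholder port
const pub = createClient(); // assumes a reachable Redis instance
const sub = pub.duplicate();

async function start() {
  await pub.connect();
  await sub.connect();
  await sub.subscribe('events', (message) => {
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(message);
    }
  });
}

// Call this from whichever instance observed the event locally.
function publishEvent(event) {
  return pub.publish('events', JSON.stringify(event));
}

start();
```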

How to access TCP Socket via web client

I have a program in an embedded device that outputs an XML string to a socket. The embedded device has lighttpd as a web server. I want to use a web-based client (no Flash/Silverlight) to connect to the socket and pull the XML data every second.
I looked at Node.js with Socket.io to do what I want, but I am not clear about how to proceed. Searching through the Node.js and Socket.io documentation and examples, I see standard client-server behavior, nothing regarding what I am trying to do.
Basically, the web server is just there to let the client retrieve data from the raw TCP socket that the embedded application is writing to. Please advise.
I solved the problem using Websockify, which acts as a bridge between a TCP socket and a browser.
The HTML client connects to a WebSocket, and Websockify listens on the WebSocket port and relays data between the WebSocket and the TCP socket.
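For completeness, the browser side of that setup can be as simple as the sketch below; the host, port, and the assumption that the device writes XML as text frames are placeholders:

```javascript
// Connect to the Websockify bridge, which relays the device's raw TCP
// output as WebSocket messages.
const ws = new WebSocket('ws://device.local:6080'); // placeholder websockify address

ws.addEventListener('message', (event) => {
  // Assuming the embedded app writes XML as text frames.
  const doc = new DOMParser().parseFromString(event.data, 'application/xml');
  console.log('received update:', doc.documentElement.nodeName);
});

ws.addEventListener('close', () => console.log('bridge connection closed'));
```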
Web browsers have the ability to do HTTP requests (which can be web page requests or Ajax requests for data) and webSocket connections. You will need to pick one of these two mechanisms if you're sticking with stock browser access.
If the lighttpd web server in the embedded device does not support webSockets, then your choice will likely be an Ajax call from the browser to your server. This is basically just an HTTP request that may return something other than a web page (often JSON data) and is designed to fetch data from the server into a web client.
If the lighthttpd web server does support webSockets, then you could use a webSocket connection to fetch the data too. This has an advantage of being a persistent connection and allows for the server to directly send data to the client (without the client even requesting more data) whenever it wants to (more efficient for constant updates).
An Ajax connection is generally not persistent. A client sends an Ajax request, the server returns the answer and the connection is closed. The next request starts a new Ajax request.
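As a rough illustration of the Ajax approach, the client could poll a hypothetical URL on the device's web server once a second:

```javascript
// Poll the device's web server every second and parse the returned XML.
async function poll() {
  try {
    const response = await fetch('/data.xml'); // hypothetical endpoint served by lighttpd
    const text = await response.text();
    const doc = new DOMParser().parseFromString(text, 'application/xml');
    console.log('latest reading:', doc.documentElement.nodeName);
  } catch (err) {
    console.error('poll failed:', err);
  }
}

setInterval(poll, 1000); // matches the "every second" requirement
```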
Either Ajax requests or webSocket connections should work just fine for your use. All browsers still in use support Ajax. WebSockets are supported in modern browsers (IE10 and higher).
Once you decide upon a client connection strategy, you'd build your web app on the embedded device to serve as the middleman between the browser and the data on the embedded device. It would collect the appropriate data from the embedded device and then be able to send it to browser clients that connect and request the data.
I'm not sure exactly why you mentioned node.js. In this circumstance, it would be used as the web server and the environment for building your app and the logic that collects the data from your device and feeds it to the requesting web browser, but it sounds like you already have lighttpd for this purpose.

Personally, I recommend node.js if it works in your environment. Combined with socket.io (for webSocket support), it's a very nice way to connect browsers directly to an embedded device. I have an attic fan controller written in node.js and running on a Raspberry Pi. The node.js app monitors temperature probes and controls relays that switch the attic fans, and it also serves as a web server for me to administer and monitor the system. All in all, it's a pretty slick environment if you already know and like programming in JavaScript, and there's a rich set of add-on modules available through NPM to extend its capabilities.

If, however, your embedded device isn't a common device that node.js already supports, or it doesn't already have node.js on it, then you'd be facing a porting task to make node.js run on it, which might be more work than using some other development environment that already runs on the device, like lighttpd.

Full-duplex messaging between remote autonomous Node.js applications over WebSockets?

There will be no human being in the loop, and both endpoints are autonomous Node.js applications operating as independent services.
Endpoint A is responsible for contacting Endpoint B via secure web socket, and maintaining that connection 24/7/365.
Both endpoints will initiate messages independently (without human intervention), and both endpoints will have an API (RESTful or otherwise) to receive and process messages. You might say that each endpoint is both a client of, and a server to, the other endpoint.
I am considering frameworks like Sails.js and LoopBack (implemented on both endpoints), as well as simply passing JSON messages over ws, but remain unclear what the most idiomatic approach would be.
Web Sockets have a lot of overhead for connecting to browsers and what not, since they try to remain compatible with HTTP. If you're just connecting a pair of servers, a simple TCP connection will suffice. You can use the net module for this.
Now, once you have that connection, how do you initiate communication? You could go through the trouble of making your own protocol, but I don't recommend it. I found that a simple RPC was easiest. You can use the rpc-stream package over any duplex stream (including your TCP socket).
For my own application, I actually installed socket.io-client and let my servers use it for RPC. Although if I were to do it again, I would use rpc-stream to skip all the overhead required for setting up a Web Socket connection.
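As a rough sketch of the plain-TCP alternative described above, two Node.js services can exchange newline-delimited JSON over the built-in net module; the port and message shapes are assumptions, and rpc-stream could be layered on the same socket instead:

```javascript
const net = require('net');

// Buffer incoming chunks and emit one parsed JSON object per line.
function lineHandler(onMessage) {
  let buffer = '';
  return (chunk) => {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any trailing partial line
    for (const line of lines) {
      if (line.trim()) onMessage(JSON.parse(line));
    }
  };
}

// Endpoint B: listens, and can push messages whenever it wants.
const server = net.createServer((socket) => {
  socket.setEncoding('utf8');
  socket.on('data', lineHandler((msg) => console.log('B received:', msg)));
  socket.write(JSON.stringify({ from: 'B', type: 'hello' }) + '\n');
});
server.listen(9000); // placeholder port

// Endpoint A: connects, keeps the socket open, and sends independently.
const client = net.connect(9000, 'localhost', () => {
  client.write(JSON.stringify({ from: 'A', type: 'status', ok: true }) + '\n');
});
client.setEncoding('utf8');
client.on('data', lineHandler((msg) => console.log('A received:', msg)));
```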
