Developing XPages for Clusters

I am about to start a new XPages project which will be used world-wide. I am a bit concerned because they are worried about performance and are therefore thinking about using this application with a load balancer or in a cluster. I have been looking around and I have seen that there can be issues with scoped variables (for example, if the user starts the session on one server and is then sent to another, certain scoped variables go missing). I have also seen this wonderful article which focuses on performance, but does not really mention anything about a clustered environment.
Just a bit of extra info: concurrent users should not exceed 600, but may grow over time; there are about 3000 users in total. The XPages application will be a portal for two data sources (an active database and its archive).
My question is this: as a developer, what must I pay very close attention to when developing an application that may run behind a load balancer or in a clustered environment?
Thanks for your time!

This isn't really an answer...but I can't fit this in a comment.
We faced a very similar problem.
We have an XPages SPA (Single Page Application) that has been in production for 2-3 years, with a variable user load of up to 300-400 concurrent users who log in for 8-hour sessions. We have 4 clustered Domino servers: 1 is a "workhorse" running all scheduled jobs, and 3 are dedicated HTTP servers.
We use SSO in Domino and the 3 HTTP servers participate, so a user only has to authenticate once and can access all HTTP servers. We use a reverse proxy, so all users go to www.ourapp.com but get redirected to servera.ourapp.com, serverb.ourapp.com, etc. Once they get directed to a server, the reverse proxy issues a cookie to the client. This provides a "sticky" session to whichever server they have been directed to, and the reverse proxy will only move them to a different server if the server they are on becomes unavailable.
We use "user" managed session beans to store config for each user, so if the user moves server and the user's bean does not exist there, it will be created. But the key point is: because of the sticky session, the user will only move if we bring a server down or the server fails. Since our app is a SPA, a lot of the user "config" is stored client side, so if they get booted to a different server (to the user, they are still pointed at www.ourapp.com) nothing really changes.
This has worked really well for us so far.
The app is also accessed by an "offline" external app that points to the reverse proxy (www.ourapp.com), but we did initially run into problems because this app was not passing back the reverse proxy's "sticky" cookie token. So 1 device was sending a request to the proxy which got routed to server A, then 1 sec later to server B, then A..B..C, all sorts of headaches... since the cluster can be a few seconds out of sync, sending requests against the same doc caused conflicts. As soon as we got the external app to pass back the reverse proxy token for each session, problem solved.
The one bit of your question I don't quite understand is: "...The XPage application will be a portal for a single database (no replicas) and an archive database (no replicas)." Does that mean the portal will be clustered, but the databases users are connecting to will not be clustered?
We haven't really coded any differently than if the app was on 1 server, since the user's session is "stuck" to one server. We did need persistent document locking across all the servers. We initially used the native document locking, but $Writers does not cluster, so we had to implement our own... we set a field on the doc so that the "lock" clustered (we also had to then implement a single lock store... sigh, can talk about that another time). Because of requirements, we have to maintain close to 1 million docs in 3 of the app databases, and we generate huge amounts of audit data, but we push that out to SQL.
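For illustration, a hypothetical SSJS sketch of that field-based approach might look like the following; the "LockedBy" item name is made up, and as noted above a real implementation also needs a single lock store behind it:

```javascript
// Hypothetical SSJS helpers: the lock is an ordinary item on the
// document, so it replicates across the cluster like any other
// field, unlike $Writers.
function tryLock(doc) {
    var me = session.getEffectiveUserName();
    var lockedBy = doc.getItemValueString("LockedBy");
    if (lockedBy == "" || lockedBy == me) {
        doc.replaceItemValue("LockedBy", me);  // claim the lock
        doc.save(true, false);                 // persist so it clusters
        return true;   // lock acquired (or already held by us)
    }
    return false;      // someone else holds the lock
}

function unlock(doc) {
    doc.replaceItemValue("LockedBy", "");
    doc.save(true, false);
}
```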
So I'd say this is more of an admin headache (I'm also the admin for this project, so in our case I can confirm this!) than a designer headache.
I'd also be interested to hear what anyone else has to say on this subject.

Related

How can a web app work offline based on Service Workers?

I know there are many documents about Service Workers, and many questions have already been asked.
But it has been a long day, so I'm too tired to read through many docs right now.
I just want to explain my thinking about Service Workers and how they help us serve a web app offline, and I hope somebody can tell me whether it's right or not.
Everything I know about Service Workers is that they intercept the browser's network requests and do something with them. So I guess that when one intercepts, it will cache every request. Then, when there is no network connection, the Service Worker serves the cached data to the users.
Thanks for all replies,
Yes, your thoughts are right. Here I will provide some more details about how the whole thing works.
A service worker (SW), like a web worker, runs on a different thread than the one used by the main web app. This allows the SW to keep running even when the web app is not open, allowing it, for instance, to receive and show web notifications.
Unlike a web worker, which is used for generic purposes, a SW acts specifically as a proxy between our web application and the network. However, it is up to us to define and implement what the SW has to cache locally and how; by default, the SW doesn't know what to store in the cache.
For this we have to implement caching strategies that target static assets (like .js or .css files, for instance) or even URLs (but keep in mind that the Cache API, used by the SW, can only cache GET calls, not PUT/POST).
Once the assets or URLs we are interested in are defined within the scope of a specific strategy, the SW will intercept all outgoing requests, check whether there is a match, and if so serve the data from the local cache instead of going over the network.
Of course this depends on the strategy we choose/implement.
Since the requested data is already available locally, the SW can deliver it even when the user is offline.
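As a minimal sketch of one such strategy (cache-first), with hypothetical cache and asset names:

```javascript
// sw.js - a cache-first strategy; CACHE_NAME and STATIC_ASSETS are illustrative
const CACHE_NAME = 'app-cache-v1';
const STATIC_ASSETS = ['/', '/app.js', '/styles.css'];

// Pre-cache the static assets when the SW is installed
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(STATIC_ASSETS))
  );
});

// Serve from the cache first; fall back to the network (GET only)
self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return;
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```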
If you're interested, I wrote an article describing service workers in detail, along with some of the most common caching strategies applied to different scenarios.

How to direct a user to an available websocket server when she logs in to my multi-server Node.js app?

This is more like a design question but I have no idea where to start.
Suppose I have a realtime Node.js app that runs on multiple servers. When a user logs in, she doesn't know which server she will be assigned to. She will just log in, do something, and log out, and that's it. A user won't be interacting with other users on a different server, nor will her details be stored on another server.
In the backend, I assume the Node.js server will put the user's login details into some queue, and then when there is space it will assign this user to an available server (a server that has the lowest ping value or is not full). Because there is a limited number of users on one physical server, when a user tries to log in to a "full" server it will direct her to another available server.
I am using the ws module of Node.js. Is there any service available for this purpose or do I have to build my own? How difficult would that be?
I am not sure how WebSockets fit into this question, so I'm ignoring that part. I guess your actual question is about load balancing... Let me try paraphrasing it.
Q: Does Node.js have any load balancing feature that I can leverage?
Yes, and it is called cluster in Node.js. Instead of the traditional single node process listening on a single port, this module allows you to spawn a group of node processes and have them all bound to the same port.
This means that all the user knows is the service's endpoint. They send a request to it, and one of the available processes in the group will serve it whenever possible.
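A minimal sketch of the cluster module (the port and worker count are illustrative):

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) { // isPrimary in newer Node versions
  // Fork one worker per CPU core; they will all share port 3000
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died; forking a new one`);
    cluster.fork();
  });
} else {
  // Every worker binds the same port; the master distributes connections
  http.createServer((req, res) => {
    res.end(`Handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```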
Alternatively, using Nginx, the web server, as your load balancer is also a very popular approach to this problem.
References:
Cluster API: https://nodejs.org/api/cluster.html
Nginx as load balancer: http://nginx.org/en/docs/http/load_balancing.html
P.S.
I guess the keyword for googling solutions to your problem is "load balancer".
Of the 2 solutions, I would recommend going the Nginx way, as it is a much more scalable approach.
Example:
Your Node processes could possibly be spread across multiple hosts (horizontal scaling). The former solution is more for vertical scaling, taking advantage of multi-core machines.
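For the Nginx route, a minimal upstream sketch might look like this (the host names and port are illustrative; the upgrade headers matter since you are using ws):

```
upstream node_app {
    server node1.example.com:3000;
    server node2.example.com:3000;
    server node3.example.com:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;
        # Pass WebSocket upgrade headers through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```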

Scaling nodejs app with pm2

I have an app that receives data from several sources in realtime using logins and passwords. After data is received, it's stored in a memory store and replaced when new data is available. I also use sessions with mongo-db to auth user requests. The problem is that I can't scale this app using pm2, since I can use only one connection to my data source per login/password pair.
Is there a way to use a different login/password for each cluster instance, or to get the cluster ID inside the app?
Are memory values/sessions shared between cluster instances, or are they separate? Thank you.
So if I understood this question, you have a Node.js app that connects to a 3rd party using HTTP or another protocol, and since you only have a single credential, you cannot connect to said 3rd party using more than one instance. To answer your question: yes, it is possible to set up your cluster instances to use a unique user/pw combination; the tricky part would be how to assign these credentials to each instance (assuming you don't want to hard-code it). You'd have to do this assignment when the servers start up, and perhaps use a data store to hold these credentials and introduce some sort of locking mechanism for each credential (so that each credential is unique to a particular instance).
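One way to do that assignment, relying on pm2's documented behavior of setting a NODE_APP_INSTANCE environment variable (0, 1, 2, ...) per instance in cluster mode; the credentials array here is illustrative:

```javascript
// Each pm2 cluster instance picks its own credential by instance id.
const credentials = [
  { user: 'feed-user-1', pass: 'secret1' },
  { user: 'feed-user-2', pass: 'secret2' },
];

const instanceId = parseInt(process.env.NODE_APP_INSTANCE || '0', 10);
const myCredential = credentials[instanceId % credentials.length];

console.log(`Instance ${instanceId} connecting as ${myCredential.user}`);
// ...open the single data source connection with myCredential here...
```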
If I were in your shoes, however, what I would do is create a new server whose sole job would be to get this "realtime data" and store it somewhere available to the whole cluster, such as Redis or some other persistent store. That server would then be a standalone server, just getting this data. You can also attach a RESTful API to it, so that if your other servers need to communicate with it, they can do so via HTTP or a message queue (again, Redis would work fine there as well).
'Realtime' is vague; are you using WebSockets? If HTTP requests are being made often enough, that could also be considered 'realtime'.
Possibly your problem is like something we encountered scaling SocketStream (WebSockets) apps, where the persistent connection requires requests from the same client to be routed to the same process. (There are other network topologies/architectures which don't require this, but that's another topic.)
You'll need to run pm2 in fork mode with 1 process only, plus a solution to make sessions sticky, e.g.:
https://www.npmjs.com/package/sticky-session
I have some example code but need to find it (it's been over a year since I deployed it).
Basically you wind up just using pm2 for its 'always-on' feature; the sticky-session module handles the Node clusterisation stuff.
I may post example later.
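In the meantime, a minimal sketch along the lines of the sticky-session README (the port is illustrative):

```javascript
const http = require('http');
const sticky = require('sticky-session');

const server = http.createServer((req, res) => {
  // Requests from the same client always land on the same worker
  res.end(`Handled by worker ${process.pid}`);
});

if (!sticky.listen(server, 3000)) {
  // Master process: sticky-session forks the workers internally
  server.once('listening', () => {
    console.log('Server started on port 3000');
  });
}
// Worker processes fall through and serve requests on the shared socket
```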

Nodejs handling login on another server

Suppose you had 4 machines, each running an identical instance of a Node.js app, and users have to log in to access your website. After a user logs in, is it possible to move his connection to one of the other machines?
To clear it up:
- Node 1 only holds the main app page, handles login validation, and knows how many users are on each node; it routes the user who logs in to the node with the lowest number of users, or, to make it more complicated, to the server which has the lowest load (based not on the number of users but on the traffic).
- Each of the other nodes runs CentOS with a Node.js server instance / cluster of Node processes.
- I am using socket.io intensively, and after login I always have a persistent connection with the client; even on my client no AJAX requests are made, everything is handled using sockets.
- In my current source code, everything is combined in one Node.js app, and I do socket authentication for login.
The clients have no kind of interaction with each other, which makes the job easier.
Is it possible to pass a socket connection from one Node.js server to another?
How would you solve this problem yourself, considering that the Node.js app that handles the login and the actual Node.js app are on 2 separate machines?
I would keep it simple. I would create a load balancer node with its own balancer application. This node would redirect to the least loaded "worker" Node.js instance based on the number of authenticated user sessions on each node. This should happen even before authentication is done. All the other "worker" nodes would run the same main app with exactly the same logic: main page, authentication, and application logic.
Just save an access token (i.e. a cookie in the case of HTTP) into a database and send it from the client to the server every time it connects.
Well, it's tough to give a complete answer without having a better sense of your application's architecture... but I'll lay out my assumptions and go from there:
a) www.yourdomain.com points to Node 1.
b) Node 1 is the only server that responds to HTTP requests; Node 2 through Node 5 only communicate through sockets.
c) Once a user is authenticated through Node 1, they do not need to re-authenticate through Node 1 for subsequent communication through sockets. (This one is a bit tricky: if you really want to ensure that only authenticated users can access your app, that authentication must be passed over the socket connection, but it can be simpler than the full authentication performed by Node 1, and it sounds like you're doing this; I just want to raise the issue.)
Given those assumptions, I would assign subdomains to each app server (perhaps node2.yourdomain.com, node3.yourdomain.com, etc.?). Then, when Node 1 is ready to pass the client over to an app server, determine which node you want to send them to, pass that subdomain to the client, and have the client create a socket connection to the assigned app server; all of its communication will then happen through there.
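To make that concrete, a hypothetical sketch of Node 1's side of the hand-off (Express; pickLeastLoadedNode, the load table, and the /login route are all made up for illustration):

```javascript
const express = require('express');
const crypto = require('crypto');
const app = express();
app.use(express.json());

// Illustrative load table; in practice the app servers would report this
const nodeLoad = { 'node2.yourdomain.com': 12, 'node3.yourdomain.com': 7 };

function pickLeastLoadedNode() {
  return Object.keys(nodeLoad).reduce((a, b) =>
    nodeLoad[a] <= nodeLoad[b] ? a : b
  );
}

app.post('/login', (req, res) => {
  // ...validate req.body credentials against your user store here...
  const token = crypto.randomBytes(32).toString('hex');
  // The assigned app server must be able to verify this token,
  // e.g. via a store shared by all nodes
  res.json({ token, server: pickLeastLoadedNode() });
});

app.listen(80);
```

The client would then open its socket.io connection to the returned subdomain, passing the token for the simpler socket-level authentication mentioned in assumption (c).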
If I've misunderstood or over-simplified things, feel free to set me straight in comments.

Going session-less with NodeJS

I've been doing a lot of research lately, and it appears to me that going stateless server-side brings benefits to both performance and scalability.
I am, however, trying to figure out how to achieve session-less-ness in Node.js. It seems to me that basically all I have to do is assign a token to a logged-in user, so I would have something like this in my DB:
{ user:'foo@example.com', pass:'123456', token:'long_id_here' }
so that the token can be sent with every HTTP request like this:
/set/:key/:val/:token
to be checked against the aforementioned DB object. Is this what is actually meant by a session-less web service?
If this is the right way, then I do not understand things like token expiry and other security issues. Could someone point me to an NPM package of some sort?
On a side note, is it best for a token to use a hash of the user+password, or to assign a different one at every login?
The reason to go sessionless is that most default session implementations use an in-memory store. That means the session information is stored in memory local to that instance. Most websites these days scale out as traffic increases, meaning they add more servers and balance the load between them. The problem with in-memory session stores is that your user can log into Server 1, but if their next request is routed to Server 2, they don't have a session there yet and will appear to be logged off.
You don't necessarily need to go sessionless to scale out with Node or any other server-side language. You just need to use a session store that isn't in local memory and is accessible to all nodes. If you're using something like Express or Connect, you can easily use a session implementation like connect-redis, which will give you a fast session store accessible to all of your Node instances, so it doesn't matter which one is hit.
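A minimal sketch of that setup; connect-redis's API has changed across versions, so this follows the older require('connect-redis')(session) style and should be checked against the current docs:

```javascript
const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session);
const redis = require('redis');

const app = express();

app.use(session({
  // The session data lives in Redis, not in process memory
  store: new RedisStore({ client: redis.createClient() }),
  secret: 'replace-with-a-real-secret',
  resave: false,
  saveUninitialized: false,
}));

app.get('/', (req, res) => {
  // Any instance behind the load balancer sees the same session
  req.session.views = (req.session.views || 0) + 1;
  res.send('Views: ' + req.session.views);
});

app.listen(3000);
```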
