nodejs - keep session on windows file server

I have a problem with nodejs. I process a lot of files with nodejs. My admin has now informed me that I create hundreds of login and logoff actions on the Windows file server within a second. He asked me to check whether this is really necessary and whether it can be minimized. I suspect that each file access creates a new session on the Windows file server. Is it possible to set up some kind of thread pool to keep the session active on the server?
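Node itself does not manage SMB sessions; the OS does. But the usual answer to this shape of problem is to stop firing hundreds of file operations at once and instead funnel them through a small, fixed number of in-flight tasks, which gives the OS a chance to reuse its session rather than opening one per file. A minimal sketch of such a concurrency limiter (the limit of 4 is an arbitrary choice, and whether the server actually reuses the session depends on the OS and share configuration):

```javascript
// Sketch: run at most `limit` async tasks at a time.
function makeLimiter(limit) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= limit || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    Promise.resolve()
      .then(task)
      .then(resolve, reject)
      .finally(() => { active--; next(); });
  };
  return (task) => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    next();
  });
}

// Usage (hypothetical file list):
// const fs = require('fs');
// const limit = makeLimiter(4);
// files.forEach((f) => limit(() => fs.promises.readFile(f)));
```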

Related

REST API - Not Logging Out - Open Sessions

I know that I need to make sure that I'm logging out when working with the REST API. But if my program has crashed a few times before the logout could happen, I know there are some hanging sessions out there. Is there a way to kill those open sessions? Or do I just need to restart IIS?
Tim, your program should be written in a way that it does not completely crash when an exception is thrown. Ideally, it should handle all REST API exceptions and log them to a file for further analysis.
If you don't log out, IIS will automatically close an open session once the configured session timeout expires (see the Acumatica product help for more details). There is no way to "kill" an open session. If you restart IIS or recycle an app pool, you will close all open connections, from both the API and web browsers.

Node.js locking to one user at a time

I am building a really simple web application with Node.js. The purpose of this application is to allow a user to edit settings of some running computations from a browser. I would like to restrict the application to one user at a time, to avoid any conflicts. If another user connects to the application while some user is already there, the second one should be notified that the application is in use by another user.
What is the preferred way to achieve this with Node.js?
I would recommend you build a simple session object ("model") that manages the connected users and only allows one connected session at a time. Perhaps sessions could time out after 90 seconds of inactivity.
Here's a quick sessions tutorial which shows you how to use a raw session (req.session), a redis backend, or a mongodb backend. A basic express middleware could be used to manage the sessions and set a limit of 1.
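A minimal sketch of that single-session limit, with the bookkeeping kept in an in-memory object (the 90-second timeout follows the suggestion above; the names and the HTTP 423 status are illustrative):

```javascript
// Sketch: allow only one active session; expire it after 90s of inactivity.
const TIMEOUT_MS = 90 * 1000;

const lock = {
  owner: null,   // session id of the current user, or null
  lastSeen: 0,   // timestamp of the owner's last request
  tryAcquire(sessionId, now = Date.now()) {
    if (this.owner !== null && now - this.lastSeen > TIMEOUT_MS) {
      this.owner = null; // previous user timed out
    }
    if (this.owner === null || this.owner === sessionId) {
      this.owner = sessionId;
      this.lastSeen = now;
      return true;  // caller may proceed
    }
    return false;   // another user is active
  },
  release(sessionId) {
    if (this.owner === sessionId) this.owner = null;
  },
};

// As Express-style middleware it might look like:
// app.use((req, res, next) =>
//   lock.tryAcquire(req.session.id) ? next() : res.status(423).send('In use'));
```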
If you want something more advanced, maybe look into Passport.

restart nodejs server programmatically

User case:
My nodejs server starts with a configuration wizard that allows the user to change the port and scheme, and even update the express routes.
Question:
Is it possible to apply such configuration changes on the fly? Restarting the server would definitely bring all the changes online, but I'm not sure how to trigger it from code.
Changing core configuration on the fly is rarely done, and Node.js and most HTTP frameworks do not support it at this point either.
Modifying the configuration and then restarting the server is a completely valid solution, and I suggest you use it.
To restart the server programmatically you have to execute logic outside of node.js, so that the process can continue once the node.js process is killed. Assuming you are running the node.js server on Linux, a Bash script sounds like the best tool available to you.
Implementation will look something like this:
Client presses a switch somewhere on your site powered by node.js
Node.js then executes some JavaScript code which instructs your OS to run a bash script, let's say script.sh
script.sh restarts node.js
Done
If any of the steps is difficult, ask about it. Though step 1 is something you are likely handling yourself already.
I know this question was asked a long time ago but since I ran into this problem I will share what I ended up doing.
For my problem I needed to restart the server because the user is allowed to change the port from their website. What I ended up doing is wrapping the whole server creation (https.createServer/server.listen) into a function called startServer(port). I call this function at the end of the file with a default port. The user changes the port by accessing the endpoint /changePort?port=3000. That endpoint calls another function, restartServer(server, res, port), which in turn calls startServer(port) with the new port and then redirects the user to the site on the new port.
Much better than restarting the whole nodejs process.

Nodejs handling login on another server

Suppose you had 4 machines, each running an identical instance of a nodejs app, and users have to log in to access your website. After a user logs in, is it possible to move his connection to one of the other machines?
To clear it up:
- Node 1 only holds the main app page, handles login validation, and knows how many users are on each node; it routes the user who logs in to the node with the lowest number of users, or, to make it more complicated, to the server with the lowest load (based on traffic rather than the number of users).
- Each of the other nodes runs CentOS with a nodejs server instance (a cluster of Node processes).
- I am using socket.io intensively, and after login I always have a persistent connection with the client; even on my client no ajax requests are made, everything is handled using sockets.
- In my current source code everything is combined in one nodejs app, and I do socket authentication for login.
The clients have no kind of interaction with each other, which makes the job easier.
Is it possible to pass a socket connection from one nodejs server to another?
How would you solve this problem yourself, considering that the nodejs app that handles the login and the actual nodejs app are 2 separate machines?
I would keep it simple. I would create a load-balancer node with its own balancer application. This node will redirect to the least loaded "worker" node.js instance based on the number of authenticated user sessions on each node. This should happen even before authentication is done. All other "worker" nodes will run the same main app with exactly the same logic: main page, authentication, and application logic.
Just save an access token (i.e. a cookie in the case of http) into a database and send it from the client to the server every time it connects.
Well, it's tough to give a complete answer without having a better sense of your application's architecture... but I'll lay out my assumptions and go from there:
a) www.yourdomain.com points to Node 1.
b) Node 1 is the only server that responds to HTTP requests, Node 2 through Node 5 only communicate through sockets.
c) Once a user is authenticated through Node 1, it does not need to re-authenticate through Node 1 for subsequent communication through sockets (this one is a bit tricky: if you really want to ensure that only authenticated users can access your app, that authentication must be passed over the socket connection, but it can be simpler than the full authentication performed by Node 1; it sounds like you're doing this, I just want to raise the issue)
Given those assumptions, I would assign a subdomain to each app server (perhaps node2.yourdomain.com, node3.yourdomain.com, etc.). When Node 1 is ready to pass the client over to an app server, determine which node you want to send them to, pass that subdomain to the client, and have the client create a socket connection to the assigned app server; all of its communication will then happen through there.
If I've misunderstood or over-simplified things, feel free to set me straight in comments.

Developing XPages for Clusters

I am about to start a new XPage project which will be used world-wide. I am a bit concerned because they are worried about performance and are therefore thinking about running this application behind a load balancer or in a cluster. I have been looking around and have seen that there can be issues with scoped variables (for example, if the user starts the session on one server and is then sent to another, certain scoped variables go missing). I have also seen this wonderful article, which focuses on performance but does not really mention anything about a clustered environment.
Just a bit of extra info: concurrent users should not be higher than 600, but may grow over time; there are about 3000 users total. The XPage application will be a portal for two data sources (an active database and its archive).
My question is this: as a developer, what must I pay very close attention to when developing an application that may run behind a load balancer or in a clustered environment?
Thanks for your time!
This isn't really an answer...but I can't fit this in a comment.
We faced a very similar problem.
We have an XPage SPA (single-page application) that has been in production for 2-3 years, with a variable user load of up to 300-400 concurrent users who log in for 8-hour sessions. We have 4 clustered Domino servers: 1 is a "workhorse" running all scheduled jobs, and 3 are dedicated HTTP servers.
We use SSO in Domino and the 3 HTTP servers participate, so a user only has to authenticate once and can access all HTTP servers. We use a reverse proxy, so all users go to www.ourapp.com but get redirected to servera.ourapp.com, serverb.ourapp.com, etc. Once they are directed to a server, the reverse proxy issues a cookie to the client. This provides a "sticky" session to whichever server they have been directed to, and the reverse proxy will only move them to a different server if the server they are on becomes unavailable.
We use "user" managed session beans to store config for each user, so if the user moves server and the user's bean does not exist, it will be created. But the key point is: because of the sticky session, the user will only move if we bring a server down or the server fails. Since our app is a SPA, a lot of the user "config" is stored client side, so if they get booted to a different server (to the user, they are still pointed at www.ourapp.com) nothing really changes.
This has worked really well for us so far.
The app is also accessed by an "offline" external app that points to the reverse proxy (www.ourapp.com), but we initially ran into problems because this app was not passing back the reverse proxy's "sticky" cookie token, so 1 device was sending a request to the proxy which got routed to server A, then 1 second later to server B, then A..B..C: all sorts of headaches. Since the cluster can be a few seconds out of sync, sending requests to the same doc caused conflicts. As soon as we got the external app to pass back the reverse proxy token for each session, problem solved.
The one bit of your question I don't quite understand is: "...The XPage application will be a portal for a single database (no replicas) and an archive database (no replicas). " Does that mean the portal will be clustered, but the DB users are connecting to will not be clustered?
We haven't really coded any differently than if the app were on 1 server, since the user's session is "stuck" to one server. We did need persistent document locking across all the servers. We initially used the native document locking, but $Writers does not cluster, so we had to implement our own: we set a field on the doc so that the "lock" clustered (we also then had to implement a single lock store... sigh, we can talk about that another time). Because of requirements, we have to maintain close to 1 million docs in 3 of the app databases, and we generate huge amounts of audit data, but we push that out to SQL.
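The field-based lock described above might be sketched like this in plain JavaScript (this is not the Domino API; `doc` is a plain object standing in for a document, and the 30-minute stale window is an assumption):

```javascript
const STALE_MS = 30 * 60 * 1000; // reclaim locks older than 30 minutes (assumed)

function tryLock(doc, userName, now = Date.now()) {
  const free = !doc.lockedBy || now - doc.lockedAt > STALE_MS;
  if (free || doc.lockedBy === userName) {
    doc.lockedBy = userName; // a field on the doc itself, so the lock replicates
    doc.lockedAt = now;
    return true;
  }
  return false;
}

function unlock(doc, userName) {
  if (doc.lockedBy === userName) doc.lockedBy = null;
}
```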
So I'd say this is more of an admin headache (I'm also the admin for this project, so in our case I can confirm this!) than a designer headache.
I'd also be interested to hear what anyone else has to say on this subject.