I am looking for a technical explanation of what I am doing; I am not an expert in node-oracledb.
I have a Node.js application used as an API with Express.js. The database connection is a single oracledb connection, BUT I am using user impersonation when opening the connection so that it picks up the user's privileges in Oracle.
oracledb.getConnection({
  user: dbConfig.user + ((user_Id === '') ? '' : '[' + user_Id + ']'),
  password: dbConfig.password,
  connectString: dbConfig.connectString
}, function (err, connection) {
  // use the connection here
});
What I am asking is:
Does using the connection this way behave like a pooled connection, since the user changes every time, or is it still a single connection? The main user has been granted proxy access for the sub-user: main-user[sub-user].
Any help is appreciated.
BUT I am using user impersonation when opening the connection so that it picks up the user's privileges in Oracle
The term for the feature you're using is "proxy authentication".
Does using the connection this way behave like a pooled connection, since the user changes every time, or is it still a single connection?
It's still a single connection. The two features are distinct: connection pool and proxy authentication. You can use either one independently or both together.
The idea behind the pool is to reduce the overhead associated with creating new connections (new process, memory allocation, etc.). Since you're working with an Express web server, chances are you're creating a multi-user application. That's the type of application that would benefit from a connection pool (as opposed to a job that runs every hour, for example).
There are several uses for proxy authentication, but one of the primary uses is identity preservation. In other words, rather than connect as a single generic user, you can proxy connect as the end user. This allows for better integration with security features such as roles and auditing.
To combine both, see the section of the doc on Pool Proxy Authentication.
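For reference, a rough sketch of what that combination might look like, assuming node-oracledb's heterogeneous pool option (homogeneous: false) and reusing the dbConfig and user_Id names from the question; check the Pool Proxy Authentication section of the doc for the exact call shape:

// Sketch only: one pool is created as the main (proxy) user, and each
// request acquires a connection that runs with the sub-user's privileges.
const oracledb = require('oracledb');

let pool;

async function init() {
  // One heterogeneous pool, created with the main (proxy) user's credentials.
  pool = await oracledb.createPool({
    user: dbConfig.user,                   // main (proxy) user from the question's config
    password: dbConfig.password,
    connectString: dbConfig.connectString,
    homogeneous: false                     // needed for pool proxy authentication
  });
}

async function handleRequest(user_Id) {
  // Each request gets a pooled connection that runs with the sub-user's
  // privileges; the sub-user must have been granted "connect through" main-user.
  const connection = await pool.getConnection({ user: user_Id });
  try {
    // ... run queries as the sub-user
  } finally {
    await connection.close();              // return the connection to the pool
  }
}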
Related
I have created a web application with Node.js, and I have a situation now.
Consider a table with the columns:
username, password, IP address, database name, dbusername, dbpassword, etc.
The login from the web app should connect to this table and authenticate the user.
While authenticating, the DB detail columns are read for that user.
After a successful login, the Node.js connection should use this loaded DB (only for this user).
Now open one more instance of the app and log in with a different user; this time Node should use the DB for this user (which might be the same or different).
Given the single-threaded model, is it possible to have a dynamic connection for each user in Node.js?
Does opening and closing connections (with different DB configs) work well in this case?
Thanks,
Saran
express-session provides functionality that manages sessions: https://github.com/expressjs/session
Passport.js is a very good solution for authentication in Node.js: http://passportjs.org/
There are many tutorials out there for passport.js and session management.
For example: https://scotch.io/tutorials/easy-node-authentication-setup-and-local
Also, sessions have nothing to do with the single-threaded model. Even if you were multi-threaded, you would not keep a connection open for each user after sending them the page. Cookies are used to identify who is visiting the site. Node.js also has native functionality that can launch a cluster of Node.js processes to take advantage of multi-core systems. In that case sessions can still be managed, even across processes, by using something like Redis.
About clusters: https://nodejs.org/api/cluster.html
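As a rough sketch only (getUserDbConfig is a hypothetical helper for your users table, and the actual DB driver calls are left out), the per-user DB details can be looked up at login, kept in the session, and then used to open and close a connection for that user's requests:

const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(session({
  secret: 'change-me',                // placeholder; use a real secret
  resave: false,
  saveUninitialized: false
}));

app.post('/login', async (req, res) => {
  // Hypothetical helper: authenticate against your users table and
  // return that row's DB columns (ip, database name, dbusername, dbpassword).
  const dbDetails = await getUserDbConfig(req.body.username, req.body.password);
  if (!dbDetails) return res.status(401).send('Invalid credentials');
  req.session.dbConfig = dbDetails;   // only the config lives in the session
  res.send('Logged in');
});

app.get('/data', async (req, res) => {
  if (!req.session.dbConfig) return res.status(401).send('Not logged in');
  // Open a connection with req.session.dbConfig, run the query, then close it here.
  res.send('Would query database ' + req.session.dbConfig.databaseName);
});

app.listen(3000);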
I have an app that receives data from several sources in realtime using logins and passwords. After data is received it is stored in an in-memory store and replaced when new data is available. I also use sessions backed by MongoDB to authenticate user requests. The problem is that I can't scale this app using pm2, since only one connection to my data source is allowed per login/password pair.
Is there a way to use a different login/password for each cluster worker, or to get the cluster ID inside the app?
Are in-memory values/sessions shared between cluster workers, or are they separate? Thank you.
So, if I understood this question, you have a Node.js app that connects to a third party using HTTP or another protocol, and since you only have a single credential, you cannot connect to said third party from more than one instance. To answer your question: yes, it is possible to set up your cluster workers to each use a unique user/password combination; the tricky part is how to assign these credentials to each worker (assuming you don't want to hard-code them). You'd have to do this assignment when the servers start up, and perhaps use a data store to hold the credentials and introduce some sort of locking mechanism so that each credential is claimed by exactly one instance. A rough sketch of the assignment idea follows below.
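As a rough illustration only (the credentials array and env variables are assumptions, not something pm2 mandates): pm2 in cluster mode sets an instance number in process.env.NODE_APP_INSTANCE, which could index into a list of credentials; with the native cluster module you could use cluster.worker.id instead:

// Sketch: pick a credential per pm2 cluster instance.
// The credentials array here is a stand-in for your own secure store.
const credentials = [
  { user: 'feed_user_1', pass: process.env.FEED_PASS_1 },
  { user: 'feed_user_2', pass: process.env.FEED_PASS_2 },
  { user: 'feed_user_3', pass: process.env.FEED_PASS_3 }
];

const instanceId = parseInt(process.env.NODE_APP_INSTANCE || '0', 10);
const myCredential = credentials[instanceId % credentials.length];

console.log('Worker ' + instanceId + ' will connect as ' + myCredential.user);
// ...connect to the realtime data source with myCredential here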
If I were in your shoes, however, I would create a new server whose sole job is to fetch this "realtime data" and store it somewhere available to the whole cluster, such as Redis or some other persistent store. It would be a standalone server, just getting this data. You can also attach a RESTful API to it, so that if your other servers need to communicate with it, they can do so via HTTP or a message queue (again, Redis would work fine there as well). A sketch of that layout is below.
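As a sketch of that layout (the Redis key, port, and fetchFromDataSource helper are assumptions for illustration, using the callback-style redis client): one standalone process fetches and caches the data, and the clustered API workers only read the cache:

// fetcher.js - standalone process: the only one holding the data-source credential.
const redis = require('redis');
const cacheWriter = redis.createClient();

setInterval(async () => {
  const latest = await fetchFromDataSource();      // hypothetical realtime fetch
  cacheWriter.set('latest-data', JSON.stringify(latest));
}, 1000);

// api.js - run under pm2 in cluster mode; every worker just reads the cache.
const express = require('express');
const app = express();
const cacheReader = require('redis').createClient();

app.get('/data', (req, res) => {
  cacheReader.get('latest-data', (err, value) => {
    if (err || !value) return res.status(503).send('No data yet');
    res.json(JSON.parse(value));
  });
});

app.listen(3000);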
'Realtime' is vague; are you using WebSockets? If HTTP requests are being made often enough, that could also be considered 'realtime'.
Possibly your problem is like something we encountered scaling SocketStream (WebSockets) apps, where the persistent connection requires that the same requests be routed to the same process. (There are other network topologies/architectures which don't require this, but that's another topic.)
You'll need to use fork mode with one process only, plus a solution to make sessions sticky, e.g.:
https://www.npmjs.com/package/sticky-session
I have some example code but need to find it (it's been over a year since I deployed it).
Basically you wind up just using pm2 for the 'always-on' feature; the sticky-session module handles the Node clusterisation itself.
I may post an example later.
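In the meantime, here is a minimal sketch along the lines of the sticky-session README (the exact API is an assumption; check the package docs):

const cluster = require('cluster');
const http = require('http');
const sticky = require('sticky-session');

const server = http.createServer((req, res) => {
  res.end('handled by worker ' + cluster.worker.id);
});

// sticky.listen() returns false in the master process and true in workers,
// so the master only waits for 'listening' while the workers serve requests.
if (!sticky.listen(server, 3000)) {
  server.once('listening', () => {
    console.log('server started on port 3000');
  });
}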
I am building a really simple web application with Node.js. The purpose of this application is to allow a user to edit settings of some running computations from a browser. I would like to restrict the application to allow only one user at a time, so as to avoid any conflicts. If another user connects to the application while some user is already there, the second one should be notified that the application is in use by another user.
What is the preferred way to achieve this with Node.js?
I would recommend you build a simple session object ("model") that manages the connected users and only allows one connected session at a time. Perhaps sessions could time out after 90 seconds of inactivity.
Here's a quick sessions tutorial which shows you how to use a raw session (req.session), a Redis backend, or a MongoDB backend. A basic Express middleware could be used to manage the sessions and set a limit of 1, as sketched below.
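As a rough sketch of that "limit of 1" middleware (the 90-second timeout and the in-memory lock are assumptions, and this ignores multi-process deployments):

const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

// A single in-memory record of who currently "owns" the application.
let activeUser = null;            // { sessionId, lastSeen }
const TIMEOUT_MS = 90 * 1000;     // free the slot after 90s of inactivity

app.use((req, res, next) => {
  const now = Date.now();
  if (activeUser && now - activeUser.lastSeen > TIMEOUT_MS) {
    activeUser = null;            // previous user timed out
  }
  if (!activeUser || activeUser.sessionId === req.sessionID) {
    activeUser = { sessionId: req.sessionID, lastSeen: now };
    return next();
  }
  res.status(423).send('The application is currently in use by another user.');
});

app.get('/', (req, res) => res.send('You are the active user.'));
app.listen(3000);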
If you want something more advanced, maybe look into Passport.
I am about to start a new XPages project which will be used worldwide. I am a bit concerned because the client is worried about performance and is therefore thinking about running this application behind a load balancer or in a cluster. I have been looking around and have seen that there can be issues with scoped variables (for example, if the user starts the session on one server and is then sent to another, certain scoped variables go missing). I have also seen this wonderful article which focuses on performance, but it does not really mention anything about a clustered environment.
Just a bit of extra info: concurrent users should not exceed 600, though that may grow over time; there are about 3000 users total. The XPages application will be a portal for two data sources (an active database and its archive).
My question is this: as a developer, what must I pay very close attention to when developing an application that may run behind a load balancer or in a clustered environment?
Thanks for your time!
This isn't really an answer...but I can't fit this in a comment.
We faced a very similar problem.
We have an XPages SPA (single-page application) that has been in production for 2-3 years, with a variable user load of up to 300-400 concurrent users who log in for 8-hour sessions. We have 4 clustered Domino servers: 1 is a "workhorse" running all scheduled jobs, and 3 are dedicated HTTP servers.
We use SSO in Domino and the 3 HTTP servers participate, so a user only has to authenticate once and can access all HTTP servers. We use a reverse proxy, so all users go to www.ourapp.com but get redirected to servera.ourapp.com, serverb.ourapp.com, etc. Once they are directed to a server, the reverse proxy issues a cookie to the client. This provides a "sticky" session to whichever server they were directed to, and the reverse proxy will only move them to a different server if the server they are on becomes unavailable.
We use "user"-scoped managed session beans to store config for each user, so if the user moves server and the user's bean does not exist there, it will be created. But the key point is: because of sticky sessions, the user will only move if we bring a server down or the server fails. Since our app is an SPA, a lot of the user "config" is stored client side, so if they get bumped to a different server (to the user, they are still pointed at www.ourapp.com) nothing really changes.
This has worked really well for us so far.
The app is also accessed by an "offline" external app that points to the reverse proxy (www.ourapp.com). We did initially run into problems because this app was not passing back the reverse proxy's "sticky" cookie token, so one device was sending a request to the proxy that got routed to server A, then a second later to server B, then A..B..C, causing all sorts of headaches: since the cluster can be a few seconds out of sync, requests hitting the same doc produced conflicts. As soon as we got the external app to pass back the reverse proxy token for each session, the problem was solved.
The one bit of your question I don't quite understand is: "...The XPage application will be a portal for a single database (no replicas) and an archive database (no replicas)." Does that mean the portal will be clustered, but the database the users are connecting to will not be?
We haven't really coded any differently than if the app was on one server, since the user's session is "stuck" to one server. We did need persistent document locking across all the servers. We initially used the native document locking, but $Writers does not cluster, so we had to implement our own: we set a field on the doc so that the "lock" clustered (we also then had to implement a single lock store...sigh, we can talk about that another time). Because of requirements, we have to maintain close to 1 million docs in 3 of the app databases, and we generate huge amounts of audit data, but we push that out to SQL.
So I'd say this is more of an admin headache (I'm also the admin for this project, so in our case I can confirm this!) than a designer headache.
I'd also be interested to hear what anyone else has to say on this subject.
I am creating a TCP-based game server for iOS; it involves registration and login.
Users will be stored as a collection in MongoDB.
When login is done, I generate a unique session ID - how?
I wanted to know what data remains with the Node server and what can be stored in the DB -
data like session tokens, or the collection of sockets if I am maintaining persistent connections, etc.
Node.js does not have any sessions by default, but there are plug-ins for managing sessions with MongoDB.
It's not clear that you really need sessions, however. If you're opening a direct socket with socket.io, that is a de facto session.
Node.js itself does not manage sessions for you. It simply exposes an API over the underlying Unix facilities for socket communication. HTTP in itself is a stateless protocol and does not have sessions either. SSH, on the other hand, is a stateful protocol, but I do not think either one would be good for you.
Creating a unique ID is really simple: all you need to do is hash some data about the user, e.g. SHA(IP address + time + username). See: http://nodejs.org/api/crypto.html
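For example, a minimal sketch with Node's crypto module (the exact inputs to hash are your choice; a purely random token via crypto.randomBytes is another common option):

const crypto = require('crypto');

function createSessionId(ip, username) {
  // Hash some per-user data plus the current time, as suggested above.
  return crypto.createHash('sha256')
    .update(ip + Date.now() + username)
    .digest('hex');
}

// Alternatively, an unguessable random token:
// const sessionId = crypto.randomBytes(32).toString('hex');

console.log(createSessionId('203.0.113.7', 'player1'));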
One approach a lot of applications take is to create their own protocol and send messages using that. You will have to handle a lot of cases with that, and I myself have never dealt with mobile, where you have serious connectivity challenges and caching requirements that are not a big problem on desktops.
To solve these problems, a founder of Scribd started a company called Parse, which should make it much easier for you to do things. Have a look at their website: https://parse.com/.
If you want to do some authentication, however, have a look at Everyauth; it provides a lot of that for you. You can find it here: https://github.com/bnoguchi/everyauth/.