How can I use multiple dynamic DBs with a single code/project at run time with Node.js?

I want to connect my single codebase to a different Mongo DB at runtime, depending on the subdomain of the URL.
e.g.
if www.xyz.example.com then the Mongo DB is xyz
if www.abc.example.com then the Mongo DB is abc
if www.efg.example.com then the Mongo DB is efg
If someone hits the www.xyz.example.com URL, the xyz DB should connect automatically; if someone then hits www.abc.example.com, the abc DB should connect automatically.
But the xyz DB connection should not be disconnected; it should remain open, because there is a single codebase/project.
Please suggest a solution.

I'm not quite sure about your application's use case, so I cannot guarantee this is the best solution.
One feasible solution is to run three Node.js processes on three different ports, each connected to a specific DB instance. You can do this by running three Node.js processes with different environment variables, then forwarding requests for each subdomain to the corresponding port.
This approach has some advantages:
Ease of configuration: you only need to care about deployment settings, with no if/else hacking in the source code.
System availability: if one of the three DBs goes down, only one domain is affected; the others keep working.
NOTE: This approach only works well with a small number of subdomains. If you have 30 subdomains, or dynamic subdomains, then please reconsider your deployment architecture :). You may need more advanced techniques to deal with it. A quick (but not ideal) way is to maintain a list of mongoose instances inside the application at runtime, one per subdomain. Then use req.get('host') to determine the subdomain and use the corresponding mongoose instance for the DB operations.
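For that last approach, here is a minimal sketch, assuming Express and Mongoose and a hypothetical convention where the subdomain doubles as the database name on a local MongoDB:

// Lazily create and cache one mongoose connection per subdomain.
const express = require('express');
const mongoose = require('mongoose');

const connections = {}; // subdomain -> mongoose Connection

function getConnection(subdomain) {
  if (!connections[subdomain]) {
    // Hypothetical convention: subdomain 'xyz' -> database 'xyz' on localhost.
    connections[subdomain] = mongoose.createConnection('mongodb://localhost/' + subdomain);
  }
  return connections[subdomain];
}

const app = express();

app.use((req, res, next) => {
  const parts = req.get('host').split('.'); // ['www', 'xyz', 'example', 'com']
  const subdomain = parts[parts.length - 3]; // 'xyz'
  req.db = getConnection(subdomain); // connections for other subdomains stay open
  next();
});

Note that with createConnection, models have to be registered on req.db rather than on the default mongoose connection.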

Related

nodejs oracle-db multiple DB connection

I want to create two database connections and periodically check the status of each. If database 1 fails, I want to switch the connection to database 2. Can you give me some pointers please?
The first question is what exactly you are trying to do during the 'failover'. Is the Node.js app doing some work for users, or is it just a monitoring script? Next, how are your DBs configured (I'm guessing it's not RAC)?
There are all sorts of 'high availability' options and levels in Oracle, many of which are transparent to the application and are available when you use a connection pool. A few are described in the node-oracledb documentation - look at Connections and High Availability. Other things can be configured in a tnsnames.ora file, such as connection retries when connection requests fail.
At the most basic level, the answer to your question is that you could periodically check whether a query works, or just use connection.ping(). If you go this 'roll your own' route, use a connection pool of size 1 for each DB, then periodically get a connection from the pool and use it.
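As a rough sketch of that 'roll your own' check, assuming node-oracledb and placeholder credentials and connect strings; what the "switch" actually does depends on what your app is doing during the failover:

const oracledb = require('oracledb');

// One pool of size 1 per database (placeholder user/password/connect strings).
function makePool(connectString) {
  return oracledb.createPool({
    user: 'app_user',
    password: 'app_password',
    connectString: connectString,
    poolMin: 1,
    poolMax: 1,
  });
}

// Returns true if the database behind the pool answers a ping.
async function isAlive(pool) {
  let conn;
  try {
    conn = await pool.getConnection();
    await conn.ping(); // lightweight round trip to the server
    return true;
  } catch (err) {
    return false;
  } finally {
    if (conn) await conn.close(); // release the connection back to the pool
  }
}

// Poll db1 on an interval and point the app at db2 once isAlive(db1Pool) returns false.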
If you update your question with details, it would be easier to answer.

How to share database connection between different lambda functions

I went through some articles about taking advantage of Lambda's container reuse and sharing things like database connections between multiple instances. However, what if I have multiple Lambda functions accessing the database and I want them to share the same connection, given that these functions call each other? For example, API Gateway calls the authenticator Lambda function and then calls the insert-user function, and both of these functions make calls to the database. Is it possible for them to share the same connection?
I'm using Node.js, but I can use a different language if it would support that.
You can't share connections between instances. Concurrent invocations do not use the same instance.
You can, however, share connections between invocations (which might be executed on the same container/instance). In that case you have to check whether the connection is still open, in which case you can reuse it; otherwise open a new one.
If you are worried about too many connections to your DB, just close the connections when you exit your Lambda and instantiate new ones every time. You may also need to think about concurrency if that is a problem. A few weeks ago AWS added the ability to control concurrency on a per-function basis, which is neat.
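As a sketch of the reuse-between-invocations pattern, assuming the MongoDB Node.js driver and a MONGO_URI environment variable (the same idea applies to any driver):

const { MongoClient } = require('mongodb');

let cachedClient = null; // module scope, so it survives on a warm container

async function getClient() {
  if (!cachedClient) {
    cachedClient = new MongoClient(process.env.MONGO_URI);
    await cachedClient.connect(); // only opened on a cold start
  }
  return cachedClient;
}

exports.handler = async (event, context) => {
  // Don't let the open socket keep the invocation alive after the response.
  context.callbackWaitsForEmptyEventLoop = false;
  const client = await getClient();
  const users = client.db('app').collection('users');
  return users.findOne({ id: event.userId });
};

Each function (authenticator, insert user, ...) would keep its own cached connection; they cannot literally share one socket across functions.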

Do different ports on local DynamoDB mean that they are different databases?

I have two Node apps (one running on port 1000 and one on port 3000) and two local DynamoDB instances (one on port 2000 and one on port 4000). I want the app on port 1000 to talk only to the instance on port 2000, and the app on port 3000 to talk only to the one on port 4000. I tried to set this up, but the data is the same for both: a change in one is reflected in the other. Is it supposed to work like this, or is my setup at fault? I want to resolve a concurrency problem in Node.js without needing a session token (I just need a quick solution, to be honest), and spinning up a second instance seemed like an easy fix.
Tips?
*A different database, or a different instance of the database. I just don't want concurrency issues: I don't want test A to update the database and test B to fail because it expected something else.
I can suggest two alternatives:
Start both DynamoDB Local instances with separate -dbPath values. I am assuming you aren't doing this right now, which is why both of your instances must be using the same data file.
If you do not specify this option, the file will be written to the current directory.
Use the -inMemory option, in which case:
DynamoDB will run in memory, instead of using a database file.
See the DynamoDB Local documentation for more details on both options.
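Once the two local instances really hold separate data, each app just points at its own endpoint; a sketch with the AWS SDK for JavaScript v2 and the ports from the question:

const AWS = require('aws-sdk');

// App on port 1000 talks only to the DynamoDB Local instance on port 2000.
const dynamoA = new AWS.DynamoDB({
  region: 'localhost',
  endpoint: 'http://localhost:2000',
});

// App on port 3000 talks only to the instance on port 4000.
const dynamoB = new AWS.DynamoDB({
  region: 'localhost',
  endpoint: 'http://localhost:4000',
});

// The data stays separate only if the two instances were started with
// different -dbPath directories (or with -inMemory).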

Mongoose create connection for multi-tenancy support in node.js

I'm researching a good way to implement multiple databases for multi-tenant support using Node.js + mongoose and MongoDB.
I've found out that mongoose supports a method called createConnection(), and I'm wondering about the best practice for using it. Currently I am storing all of those connections in an array, keyed by tenant. It looks like this:
var connections = [
  { tenant: 'TenantA', connection: mongoose.createConnection('mongodb://localhost/tenant-a') },
  { tenant: 'TenantB', connection: mongoose.createConnection('mongodb://localhost/tenant-b') }
];
Let's say the user sends the tenant they will be logged into in the request headers, and I read it in a very early middleware in Express:
app.use(function (req, res, next) {
  var entry = connections.find(function (c) { return c.tenant === req.get('tenant'); });
  req.mongoConnection = entry && entry.connection;
  next(); // don't forget to pass control on
});
The question is: is it OK to store those connections statically, or would it be better to create the connection every time a request is made?
Edit 2014-09-09 - More info on software requirements
At first we are going to have around 3 tenants, but we plan to increase that number to 40 within a year or two. There are more read operations than writes; it's basically a big-data system with machine learning. It is not freemium software. The databases are quite big because of the amount of historical data, but it is not a problem to move very old data to another location (we have already thought about that). We plan to shard later if we run out of resources on our database machine; we could also move some tenants to separate machines.
The thing that most intrigues me is that some people say it's not a good idea to have prefixed collections for multi-tenancy, but the reasons they give are very brief.
https://docs.compose.io/use-cases/multi-tenant.html
http://themongodba.wordpress.com/2014/04/20/building-fast-scalable-multi-tenant-apps-with-mongodb/
I would not recommend manually creating and managing those separate connections. I don't know the details of your multi-tenant requirements (number of tenants, size of databases, expected number of transactions, etc.), but I think it would be better to go with something like Mongoose's useDb function. Then Mongoose can handle all the connection pool details.
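A minimal sketch of the useDb route, assuming a single base connection and tenant databases named after the tenant header:

const express = require('express');
const mongoose = require('mongoose');

const app = express();

// One underlying connection (and connection pool) shared by every tenant.
mongoose.connect('mongodb://localhost/admin');

app.use((req, res, next) => {
  // useDb returns a Connection scoped to the tenant's database,
  // reusing the same sockets under the hood.
  req.mongoConnection = mongoose.connection.useDb(req.get('tenant'));
  next();
});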
update
The first direction I would explore is to set up each tenant in a separate Node process. There are some interesting benefits to running your tenants in separate Node processes. It makes sense from a security standpoint (isolated memory) and from a stability standpoint (a crash in one tenant's process doesn't affect the others).
Assuming you're basing tenancy on the URL, you would set up a proxy server in front of the actual tenant servers. Its job would be to look at the URL and route to the correct process based on that information. This is a very straightforward node http proxy setup. Each tenant instance can be the exact same code base, launched with a different config (which tells it which Mongo connection string to use).
This means you can design your actual application as if it weren't multi-tenant. Each process only knows about one Mongo database, and no multi-tenant logic is necessary. It also lets you easily split up traffic later based on load. If you need to split up tenants for performance reasons, you can do it transparently at the proxy level. The DNS can all stay the same, and you can just move the servers the instances run on behind the scenes. You can even have the proxy balance the requests for a tenant between multiple servers.
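A rough sketch of that proxy layer, assuming the http-proxy package and a hypothetical map from subdomain to the port each tenant process listens on:

const http = require('http');
const httpProxy = require('http-proxy');

// Hypothetical mapping: each tenant process runs the same code base on its own
// port, launched with its own Mongo connection string in its environment.
const tenantPorts = {
  xyz: 4001,
  abc: 4002,
};

const proxy = httpProxy.createProxyServer({});

http.createServer((req, res) => {
  const parts = req.headers.host.split('.');
  const subdomain = parts[parts.length - 3]; // 'xyz' from www.xyz.example.com
  const port = tenantPorts[subdomain];
  if (!port) {
    res.statusCode = 404;
    return res.end('Unknown tenant');
  }
  proxy.web(req, res, { target: 'http://localhost:' + port });
}).listen(80);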

replicaset vs multi-mongos vs multiple connections

What is the difference between these mongoose features, and why would you use each of them?
For now I just need a method to transfer a document from one database to another.
Replica-Set
A replica set is two or more MongoDB servers that mirror the same data. Reads can be served by any member of the set, but writes can only be handled by a single server (the "master" or "primary").
An application can only connect to the replica-set members it knows about, so you need to tell it the hostnames and ports of all of them. There are cases where you want to restrict an application to specific members; in that case you wouldn't tell it about the other servers.
Multiple mongos
Another feature for scaling MongoDB across multiple servers is sharding. A sharded cluster consists of multiple replica sets or stand-alone MongoDB servers, each holding only a part of the data. This improves both read and write performance but is technically more complex. When an application wants to connect to a cluster, it doesn't connect to the MongoDB processes directly. Each connection goes through a MongoDB router (mongos) instead, which forwards each query to the mongod instances responsible for it. For increased performance and redundancy, a cluster can have multiple mongos servers; when this is the case, the clients should pick one at random for each connection.
Multiple connections
When your application opens multiple connections to the database, it can perform multiple requests in parallel. Usually the database driver does this automatically, so you don't have to worry about it unless you need to connect to multiple databases at the same time or you need connections with different settings for some reason.
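Since the immediate goal was only to transfer a document from one database to another, here is a sketch with two separate mongoose connections (the database names and schema are placeholders):

const mongoose = require('mongoose');

const source = mongoose.createConnection('mongodb://localhost/source_db');
const target = mongoose.createConnection('mongodb://localhost/target_db');

// strict: false lets the copy carry over fields not listed in the schema.
const userSchema = new mongoose.Schema({ name: String }, { strict: false });
const SourceUser = source.model('User', userSchema);
const TargetUser = target.model('User', userSchema);

async function transferUser(id) {
  const doc = await SourceUser.findById(id).lean(); // plain JS object, keeps the same _id
  if (doc) {
    await TargetUser.create(doc); // insert into the other database
  }
}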
