Switch Databases dynamically - node.js

I'm building a POS (point of sale) as SaaS, with React on the frontend, Node.js on the backend (REST API), and MongoDB as the database.
I've finished a basic version, and now I want every registered user to have their own database.
After reading some articles and questions on the internet, my conclusion was to switch between databases each time the frontend consumes the backend (API).
General Logic:
1. The user logs in.
2. In the backend, I use a general database to check the user's credentials and to fetch the name of that user's database.
3. Each time the frontend consumes the API, the following code runs in a middleware to determine which database the API should use:
var dbUser = db.useDb('nameDataBaseUser');
var Product = dbUser.model('Product', ProductSquema);
I have the schemas and the variable 'db' defined once, fixed in the code:
var db = mongoose.createConnection('mongodb://localhost');
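Put together, the middleware looks roughly like this (a sketch; req.user.dbName is assumed to have been fetched from the general database at login):

app.use(function (req, res, next) {
  // Switch to this user's database on the shared connection pool
  var dbUser = db.useDb(req.user.dbName);
  // The model is compiled again on every request
  // (the inefficiency described under "Problem" below)
  req.Product = dbUser.model('Product', ProductSquema);
  next();
});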
Problem:
I don't know if this is the correct solution for what I'm trying to build, but it seems inefficient to me that the models are compiled again every time the API is called; in some cases (i.e. in some middlewares) I use up to 4 different models.
Question:
Is this the best way, or is there a better approach to this problem?

Not sure about the idea of creating a new db for each new user. That seems to add a lot of complexity, makes the system harder to maintain, and makes it harder to access the data later for analytics and the like. Why not use a new collection per new user? That way you can use just one set of db access credentials. Furthermore, creating a new collection happens automatically when you store data for it.
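If you go that route, a sketch of the collection-per-user idea with Mongoose might look like this (the database name and collection naming scheme are illustrative; ProductSquema is the schema from the question):

var mongoose = require('mongoose');
var db = mongoose.createConnection('mongodb://localhost/pos');

var modelCache = {};

function productModelFor(userId) {
  if (!modelCache[userId]) {
    // The third argument pins the model to a per-user collection;
    // MongoDB creates the collection automatically on the first insert.
    modelCache[userId] = db.model(
      'Product_' + userId,
      ProductSquema,
      'products_' + userId
    );
  }
  return modelCache[userId];
}

Caching the compiled models also sidesteps the recompilation concern from the question.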

Related

Mongoose Sharing Validation/Pre-Save Methods Across NodeJS Apps

If one Node.js app connects to a Mongo instance, and that app has defined a User schema with pre-save hooks, validation, etc.
And then another Node.js app connects to the same database and tries to register a User schema with different properties.
And then the second app saves a User.
What happens?
I'm confused about how two Node.js apps can talk to the same database.
For example, it's easy to see how one might want to have V2 of an API in a separate Node.js app developed by a separate team. But it will plug into the same database and use the same schema (or will it?), and I'm unclear about how things are shared between the two apps.
Any help clarifying the best practices here would be appreciated.
I believe I've found the answer in the documentation.
This connection object is then used to create and retrieve models. Models are always scoped to a single connection. docs
And
Models are fancy constructors compiled from our Schema definitions. docs
This explains that Connection 1's schema definitions (pre-save hooks, etc.) do not affect Connection 2's writes.
Essentially, the two models are completely independent in validation and everything else; each only needs to be valid in its own context.
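A small sketch makes this concrete (connection strings and fields are illustrative): two apps register a 'User' model against the same database with different schemas, and neither sees the other's hooks or validation.

var mongoose = require('mongoose');

// App 1: strict schema with a pre-save hook
var conn1 = mongoose.createConnection('mongodb://localhost/shared');
var schema1 = new mongoose.Schema({ email: { type: String, required: true } });
schema1.pre('save', function (next) {
  this.email = this.email.toLowerCase(); // only app 1 knows about this
  next();
});
var User1 = conn1.model('User', schema1);

// App 2: same database and collection, looser schema, no hooks
var conn2 = mongoose.createConnection('mongodb://localhost/shared');
var User2 = conn2.model('User', new mongoose.Schema({ email: String, nickname: String }));

// Saving through User2 runs only User2's (empty) hooks and validation;
// User1's pre-save hook and required constraint are never consulted.
new User2({ nickname: 'anon' }).save();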

How to fetch from nodejs-api-starter into react-starter-kit

I am trying out React-Starter-Kit for the first time and loving all the cutting-edge features baked in (the Apollo GraphQL client in particular). A crucial part of any app for me is the database, and for that my understanding is that the same author provides nodejs-api-starter, which sets up a REST interface for accessing Postgres at localhost:5000 and has a GraphQL web UI at localhost:5000/graphql.
That is about as far as I have been able to understand of the setup so far. I have changed the frontend code a little so a new component, "Counter", is loaded on the home page. I need to be able to make a new counter, fetch the latest counter, and increment/decrement the counter. Right now the component just outputs the 'value' retrieved from the server at port 5000.
I do not think I am accessing the server on port 5000 correctly; do I put the port in this URL line somehow?
You can pull the repo down from: https://github.com/Falieson/react-starter-kit-crud-counter-demo
This is my first time setting up a Node.js API server; I am used to MeteorJS, which has pub/sub to MongoDB baked in. I am looking forward to the separation the RSK strategy (which seems more industry-standard?) provides.
I've just finished setting up a full site with a database using React-Starter-Kit. I'm also a newbie, so I understand your frustration.
To answer the question: you don't need nodejs-api-starter. It has extra features (such as a Redis cache) and it's not suited for newbies. You should look deeper into RSK itself; it already has a DB. If you run the boilerplate and play around, chances are you'll see a file database.sqlite in your folder: that's the database. Here are the things you should learn:
Use Sequelize to connect the Node.js server to the database. The database can be MySQL/MariaDB, PostgreSQL, or SQLite. The connection is easy, and there are tools to auto-generate models from your database (see the sketch after this list).
How to create GraphQL types and queries. If your queries need to search through the database, import the Sequelize models and use their functions.
Test your API via GraphiQL.
Note: if you want to use MongoDB or another NoSQL database, try Mongoose instead of Sequelize.
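A rough sketch of the first step, assuming RSK's default SQLite file and a hypothetical Counter model for the demo component:

var Sequelize = require('sequelize');

// Connect to the database.sqlite file RSK keeps in the project root
var sequelize = new Sequelize({
  dialect: 'sqlite',
  storage: 'database.sqlite',
});

// A minimal Counter model for the create/fetch/increment flow
var Counter = sequelize.define('Counter', {
  value: { type: Sequelize.INTEGER, defaultValue: 0 },
});

// Create the table if needed, then insert a first counter
sequelize.sync()
  .then(function () { return Counter.create({}); })
  .then(function (counter) { console.log('new counter id:', counter.id); });

From there the GraphQL resolvers can import Counter and call its query/update methods.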

Real-Time Database Messaging

We've got a Django application running against a PostgreSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So, for example, we show the contents of a customer table in our UI; as records are added/removed/updated in the backend customer table, we echo those updates to the UI in real time via some Redis/Socket.IO/Node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current functions, but as tables grow into GBs of data it is starting to slow down on some larger tables, as our engine digs through the currently 'subscribed' UIs and works out which updates need to go to which clients.
Curious what other options might exist here. I believe MongoDB and other NoSQL engines support constructs like this out of the box, but I'm not finding an exact hit when googling for better solutions.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models.
Instead of working at the app level, you might want to work at the lower, database level.
Add a PostgreSQL trigger after row insertion, and use pg_notify to notify external apps of the change.
Then, in Node.js:
var PGPubsub = require('pg-pubsub');

// Connection string: '@' before the host, and the *database* name at the end
var pubsubInstance = new PGPubsub('postgres://username@localhost/databasename');

pubsubInstance.addChannel('channelName', function (channelPayload) {
  // Handle the notification and its payload
  // If the payload was JSON it has already been parsed for you
});
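For the trigger side mentioned above, here is a sketch that installs it from Node with the pg client (the function, table, and channel names are illustrative):

var pg = require('pg');

var client = new pg.Client('postgres://username@localhost/databasename');

client.connect(function (err) {
  if (err) throw err;

  // A plpgsql function that publishes each inserted row as JSON on
  // 'channelName', plus a trigger firing it after inserts on customer
  client.query(
    "CREATE OR REPLACE FUNCTION notify_customer_insert() RETURNS trigger AS $$ " +
    "BEGIN PERFORM pg_notify('channelName', row_to_json(NEW)::text); RETURN NEW; END; " +
    "$$ LANGUAGE plpgsql; " +
    "DROP TRIGGER IF EXISTS customer_insert ON customer; " +
    "CREATE TRIGGER customer_insert AFTER INSERT ON customer " +
    "FOR EACH ROW EXECUTE PROCEDURE notify_customer_insert();",
    function (err) {
      if (err) throw err;
      client.end();
    }
  );
});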
And you can do the same in Python: https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)

PouchDB - start local, replicate later

Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
You're building an accommodation booking service such as hotel search or airbnb
You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list
i.e. the idea is to not break their flow by making them create an account when it isn't strictly necessary
If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, thereby pushing all existing data up to the hosted CouchDB database, or...
Do we instead need to create a new PouchDB database, copy all docs over from the existing (non-replicated) one to the new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
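A minimal sketch of that flow (the remote URL, field names, and ID scheme are placeholders):

var PouchDB = require('pouchdb');

var localDb = new PouchDB('favourites');

// Before the user has an account: store hearts locally, with IDs
// chosen so they won't conflict with anything server-side
localDb.put({
  _id: 'star_' + Date.now(),
  propertyId: 'property-123'
});

// Later, once the user opts in and the server has created their
// per-user CouchDB database, point the same local database at it:
function startSync(remoteUrl) {
  // All pre-existing local docs are pushed up; no copying between
  // local databases is needed
  return localDb.sync(remoteUrl, { live: true, retry: true });
}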

mongoose: send data back without saving

I started using Node, Backbone, and Mongoose not so long ago to create my first web app.
At the very beginning, I followed tutorials, and used backbone client side to define models. Those models mirror my mongoose schemas server side.
When I run schema.save() on one of my models, my data is automatically sent back to the client, with an _id.
But now that my app is almost finished, I realize that I don't really need to save anything, as the only thing I do is query an API, and the data doesn't have to be reused.
So my question is: what is the best way to keep the same mechanisms, but without saving anything?
The reason is that I plan to run the app on an EC2 instance, and knowing that I don't need to save anything, I think it is more beneficial to reduce IO usage by not having any database at all.
Thanks, and sorry if the question seems dumb.
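For what it's worth, a Mongoose document receives its _id when it is constructed, not when it is saved, so a sketch of a save-free handler might look like this (assuming an Express app; the route and schema are illustrative):

var mongoose = require('mongoose');

var itemSchema = new mongoose.Schema({ name: String, price: Number });
var Item = mongoose.model('Item', itemSchema);

app.post('/items', function (req, res) {
  // Constructing the document assigns an _id and applies defaults,
  // but nothing touches the database until save() is called
  var item = new Item(req.body);

  item.validate(function (err) {
    if (err) return res.status(400).json(err);
    res.json(item); // same shape the client saw before, minus persistence
  });
});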
