We are already using LoopBack as the backend server for our REST APIs.
Now our product demands multi-tenancy, i.e. a separate database per user.
After searching for an hour, we found the Loopback-MultiTenancy POC Sample.
The sample looks nice and is exactly what we need, though we are facing some issues with this POC, including at the architecture level.
This POC creates a separate folder per tenant. Each tenant folder has its own config, its own datasource and its own models, which is fine. But in our case, the models are common to all users.
So whenever a new user is created, we will have to create a new tenant folder and copy all models into that folder, either manually or with some script.
But once we have hundreds of users and want to change a particular model's schema, that change has to be reflected in every tenant folder as well, which is very troublesome for us.
So we are looking for a better solution, still based on LoopBack, that doesn't require duplication but serves the same purpose.
We are stuck and need some help or advice.
Thanks,
I want to build a web service and it looks like LoopBack is a good starting point.
To explain my question, I will describe the situation.
I have 2 MySQL Tables:
Users
Companies
Every User has its Company. It acts like the master User for its company.
I wish to create a Products table for each company, in the following way:
company1_Products,
company2_Products,
company3_Products
Each company has internal Users, like:
company1_Users,
company2_Users,
company3_Users
Internal users log in from the corresponding subdomain, like
company1.myservice.com
company2.myservice.com
For the API, I want the datasource to get Products from the corresponding table. So the question is, how to change the datasource dynamically?
And how to handle Users? Storing them in one table is not good, because internal company users could be in different companies...
Maybe there's a better way to do such models?
Disclaimer: I am a co-author and one of the current maintainers of LoopBack.
how to change the datasource dynamically?
The following StackOverflow answer describes how to attach a single model (e.g. Product) to multiple datasources: https://stackoverflow.com/a/28327323/69868 That solution would work if you were creating one MySQL database per company instead of using the company's name as a prefix of the Product table name.
To achieve what you described, you can use model subclassing: for each company, define a new company-specific Product model that inherits from the shared Product model and overrides the table name.
// common/models/company1-product.json
{
  "name": "Company1_Product",
  "base": "Product",
  "mysql": {
    "tableName": "company1_Products"
  }
  // etc.
}
You can even create these models on the fly using the app.registry.createModel() and app.model() APIs, and then run dataSource.autoupdate() to create the SQL tables for the new model(s).
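A rough sketch of what that could look like (assuming LoopBack 3.x, a datasource named "mysql", and that the shared Product model already exists; registerCompanyProduct is a hypothetical helper, and the per-connector tableName setting mirrors the JSON above):

// Sketch only: registers a company-specific Product model at runtime.
// Assumes the shared "Product" model is already defined and a datasource
// named "mysql" exists; registerCompanyProduct is a hypothetical helper.
module.exports = function registerCompanyProduct(app, companyName) {
  var CompanyProduct = app.registry.createModel(
    companyName + '_Product',
    {}, // no extra properties, everything is inherited from Product
    {
      base: 'Product',
      mysql: { tableName: companyName + '_Products' } // mirrors the JSON above
    }
  );

  // Attach the new model to the datasource and expose it over REST.
  app.model(CompanyProduct, { dataSource: 'mysql', public: true });

  // Create the backing SQL table for the new model.
  app.dataSources.mysql.autoupdate(CompanyProduct.modelName, function (err) {
    if (err) console.error('autoupdate failed', err);
  });
};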
And how to handle Users? Storing them in one table is not good, because internal company users could be in different companies...
I suppose you can use the same approach as you do for Products and as you described in your question.
Maybe there's a better way to do such models?
The problem you are facing is called multi-tenancy. I am afraid we haven't figured out an easy-to-use solution yet. There are many possible ways to implement multi-tenancy.
For example, you can create one LoopBack application for each Company (tenant) and then create a top-level LoopBack or Express application that routes incoming requests to the appropriate tenant-specific LB app instance. See the following repository for a proof-of-concept implementation: https://github.com/strongloop/loopback-multitenant-poc
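A very rough sketch of such a top-level router, assuming each tenant folder exports a bootable LoopBack/Express app as in the PoC (the file layout and tenant names below are made up for illustration):

// Sketch only: route requests to a tenant-specific app based on the subdomain.
var express = require('express');
var router = express();

// Hypothetical layout: each tenant folder exports its own LoopBack app.
var tenants = {
  company1: require('./tenants/company1/server/server'),
  company2: require('./tenants/company2/server/server')
};

router.use(function (req, res, next) {
  // e.g. "company1.myservice.com" -> "company1"
  var tenantName = req.hostname.split('.')[0];
  var tenantApp = tenants[tenantName];
  if (!tenantApp) return res.status(404).send('Unknown tenant');
  tenantApp(req, res, next); // delegate to the tenant's own app
});

router.listen(3000);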
Suppose one Node.js app connects to a Mongo instance, and that app has defined a User schema with pre-save hooks, validation, etc.
Then another Node.js app connects to the same database and tries to register a User schema with different properties.
Then the second app saves a User.
What happens?
I'm confused about how two Node.js apps can talk to the same database.
For example, it's very easy to see how one might want to have V2 of an API in a separate Node.js app developed by a separate team. But they will plug it into the same database and use the same schema (or will they?), and I'm confused about how things are shared between the two apps.
Any help clarifying the best practices here would be appreciated.
I believe I've found the answer in the Documentation.
This connection object is then used to create and retrieve models. Models are always scoped to a single connection. docs
And
Models are fancy constructors compiled from our Schema definitions. docs
This explains that DB Connection 1's schema definitions (pre-save hooks, etc.) do not affect DB Connection 2's writes.
Essentially, the two connections are completely independent with respect to validation and everything else. Each only needs to be valid in its own context.
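To make that concrete, here is a small sketch with Mongoose (both "apps" are shown in one file purely for illustration; the schemas, fields and database name are made up):

// Sketch only: two independent connections/models against the same database.
const mongoose = require('mongoose');

// "App 1": its User schema has an extra field and a pre-save hook.
const conn1 = mongoose.createConnection('mongodb://localhost/shared-db');
const userSchema1 = new mongoose.Schema({ name: String, role: String });
userSchema1.pre('save', function (next) {
  this.role = this.role || 'member'; // only runs for documents saved via conn1's model
  next();
});
const User1 = conn1.model('User', userSchema1);

// "App 2": a different User schema registered against the same database.
const conn2 = mongoose.createConnection('mongodb://localhost/shared-db');
const userSchema2 = new mongoose.Schema({ name: String });
const User2 = conn2.model('User', userSchema2);

// Saving through User2 does not run App 1's pre-save hook or validation,
// because each model is scoped to the connection it was compiled on.
User2.create({ name: 'alice' });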
I am using the MEAN stack for my project. I read online that it is not advisable to store images in the database itself, and hence I am not doing that.
Instead, I have set up a local server (using Express) and I am serving my static image files from there.
Now I am able to use an image via its URL, for example:
http://localhost:4200/images/a.jpg
I am planning to eventually host this Express app using a service like Heroku.
On my main website, I handle authentication (sign in and sign up) using MongoDB and Node.js.
I want the images to be shown according to the specific logged-in user.
Should I store my images in a folder named after that user's username, so that I can generate the URL string accordingly and access the image via:
http://localhost:4200/user1/a.jpg
Is the flow of my application correct? Is this the way I should be accessing the images for particular users?
I read somewhere that there would be a security issue, because anyone who has the URL of an image can access it. I am not very concerned with security right now, as this is a small project not meant for many users, but any suggestions for an approach that avoids such a security issue would be helpful.
I am new to this and any advice would be helpful.
Thanks in advance.
You could use Firebase for this. It's super easy.
There you can just create a folder with any name and save all the images in it.
In your database you can then just save the Firebase-generated link, which can easily be mapped to a user via a user_id or something like it.
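For example, a minimal sketch of that mapping with Mongoose (the schema, field names and helper functions below are made up for illustration):

// Sketch only: store the Firebase download URL per user and query it later.
const mongoose = require('mongoose');

const imageSchema = new mongoose.Schema({
  userId: { type: mongoose.Schema.Types.ObjectId, ref: 'User', index: true },
  url: String // the download link returned by Firebase after the upload
});
const Image = mongoose.model('Image', imageSchema);

// After uploading a file to Firebase Storage, save its link for the user.
async function saveImageLink(userId, firebaseUrl) {
  return Image.create({ userId, url: firebaseUrl });
}

// Later, return only the logged-in user's images.
async function imagesForUser(userId) {
  return Image.find({ userId }).select('url').lean();
}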
I have a working JHipster application, linked to a MySQL database.
I would like to create a new application and connect it to the first application's database.
Is this possible, with regard to Liquibase/entities/etc.?
Why should this not be possible? MySQL itself is a multi-user DBMS, so it can handle multiple connections.
The only problem would be Liquibase, because it checks whether your database is valid against your changelogs. So, if your second app also uses Liquibase and does not have the same changelogs with the same checksums, it will not start. Therefore your second app should not use Liquibase, and you should remove the Liquibase setup from it. That means: the first app is responsible for creating/updating the schema using Liquibase, and the second app just uses the same schema.
And you're right: the entities must be the same, because Hibernate/JPA assumes the same column and entity/table names (which are given by the database)...
In my opinion, a better approach would be the microservice way: the first application is the only one that accesses the database directly and offers interfaces for the entities via REST. Your second application then simply uses those interfaces via a REST client. This also allows you to expose other/modified entities via the REST service, so your second app does not have to use exactly the same entities as the first application.
Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as a hotel search or Airbnb
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (i.e. the idea is to not break their flow by making them create an account when it isn't strictly necessary)
3. If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, therefore pushing all existing data up to the hosted CouchDB database, or..
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
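A short sketch of the "reuse the existing local database" option (assumes PouchDB is loaded via a script tag or require('pouchdb'); the database names, credentials and fields below are made up):

// Sketch only: favourites are written locally first, then pushed up later.
var localDB = new PouchDB('favourites');

// Written before the user ever signs up; the 'star_' prefix keeps the IDs
// from conflicting with anything created server-side.
localDB.put({
  _id: 'star_' + Date.now(),
  propertyId: 'property-123',
  starredAt: new Date().toISOString()
});

// Later, once the user opts in and you have their per-user credentials:
var remoteDB = new PouchDB('https://user:pass@example.cloudant.com/userdb-abc123');

// Continuous two-way sync pushes all the pre-existing local docs up and
// keeps both sides in step from then on.
localDB.sync(remoteDB, { live: true, retry: true })
  .on('error', function (err) { console.error('sync failed', err); });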