Good day! I have created an application using nodejs + mongoose and now I want to make something like a superuser application. I need my admin panel application to connect to the same database. So, I have a question.
Should I store the same Schema file in both applications so that I can use my Schema methods in each? In other words, what is the best way to create another API on top of the same DB?
Thank you!
Why not create another service that is the only one to interact with the database? That way, the systems refer to the same schema/DB regardless of which application connects to it. The superuser application and the normal application would then simply query the DB microservice that wraps the database (a rough sketch follows the pro and con below).
Pro: a single source of truth for the schema across all applications, and DB queries become plain API calls
Con: additional overhead in creating your ecosystem
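As a rough sketch of this pattern (assuming Express, and a hypothetical User model standing in for your real schemas), the data service would be the only process that talks to MongoDB, and both the main app and the admin panel would call it over HTTP:

// data-service.ts (hypothetical): the only process that talks to MongoDB
import express from 'express';
import mongoose from 'mongoose';

// Example schema; in practice you would reuse your existing Schema files here
const User = mongoose.model('User', new mongoose.Schema({ name: String, role: String }));

const app = express();
app.use(express.json());

// The main app and the admin panel call this endpoint instead of querying MongoDB directly
app.get('/users/:id', async (req, res) => {
  const user = await User.findById(req.params.id);
  if (!user) {
    res.status(404).end();
    return;
  }
  res.json(user);
});

mongoose.connect('mongodb://localhost/app').then(() => app.listen(3001));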
If you are using the same DB from two different applications, you will want to make sure the schemas stay the same between the two. If one changes its inputs, the other might need to change its display (or risk receiving data it doesn't expect). Keep all of this in mind during your release process.
I would suggest making the schemas an external library shared by both, or having the admin panel require the current app as a dependency. You'll avoid the two sets drifting out of sync, and you'll know there is one place to look for the schema definitions.
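For example, a minimal sketch of such a shared library (package and model names are made up) could export schema factories that each application registers on its own connection:

// shared package, e.g. @myapp/models (hypothetical name)
import { Schema, Connection } from 'mongoose';

const userSchema = new Schema({ email: String, role: String });

// Schema methods live in one place and are available to every app
userSchema.methods.isAdmin = function () {
  return this.role === 'admin';
};

// Each app passes in its own mongoose connection and gets the same model definitions
export function registerModels(conn: Connection) {
  return { User: conn.model('User', userSchema) };
}

Both the main app and the admin panel would then depend on this one package, so the schema definitions and their methods stay in a single place.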
Related
I made a CRM app using NestJS with Node.js. I designed it so that each team has its own database, because every team's data is different and has no relation to other teams' data, and it also made the backup process much easier.
However, now that I want to deploy my service, I noticed that I must create a separate Node.js instance for each team, which makes RAM usage very high. Just for 10 teams I may need around ~500MB of RAM, which will hurt me economically even in the short run.
Solutions
I used TypeORM with NestJS, so my first thought was to find a way to have multiple databases (not multiple connections) sharing the same schema, but dynamically use one of them based on the request's scope and details. That seems like the best solution, since I could avoid creating another Node.js instance and at the same time keep a separate database for each team.
I read the NestJS and TypeORM documentation but didn't find any way to accomplish that. So my other idea was to just use one database for everyone and add something like a team_id column to each table to filter the data for each team.
Is that a good approach?
Are there any other solutions that use one NestJS instance with the same schema across multiple databases?
I recommend using one database.
The database can have a table storing all of the teams, and the other tables can get a new team_id column, as you suggested.
Using one database for each team has several disadvantages:
Multiple DB Connections
Since you need to use the same entities for all of the team databases, you cannot use a single database connection. For every incoming API request, the server will have to switch DB connections.
DB Configuration in TypeORM
For multiple databases, the configuration will look something like this:
imports: [
  // ...,
  TypeOrmModule.forRoot({
    name: 'team_a',        // example connection name for one team's database
    type: 'postgres',      // example values; use your own driver and credentials
    host: 'localhost',
    port: 5432,
    username: 'user',
    password: 'password',
    // ...
  }),
  TypeOrmModule.forRoot({
    name: 'team_b',        // a second named connection for another team
    type: 'postgres',
    host: 'localhost',
    port: 5432,
    username: 'user',
    password: 'password',
  }),
  // ...
]
If you need to add a new team, you have to update your code base to add a new DB for that team and redeploy your application (and you will probably also have to create the new database and run migrations).
Backup
I agree that backing up a single team is easier with one database per team. But what about when you want to back up all teams? In most cases, I believe you will need to back up all teams, not just a specific one.
Teams Management
Where do you save a team's information? How do you know which team has which DB?
Maybe you save the teams somewhere (in a separate DB?). Then, to know which database connection should be used for each request, you need to make an extra query?
Cost
If there are 100 teams, are you going to create 100 databases? Also, each application has a development and a production environment, and in some cases more, like staging. Two environments already double the number of DBs.
Conclusion
Of course, there are ways to automate some of the items in the list above, and it is still possible to use multiple databases with NestJS + TypeORM for your project, but it doesn't look like a good approach or one worth the effort for your project.
I have seen some big multi-tenant applications (like Grafana), and they weren't using a multiple-database strategy.
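To make the single-database approach concrete, here is a rough sketch (entity, column, and variable names are just examples) of adding a team_id column to a TypeORM entity and scoping every query to the requesting team:

// contact.entity.ts (example entity for illustration)
import { Entity, PrimaryGeneratedColumn, Column, Index, Repository } from 'typeorm';

@Entity()
export class Contact {
  @PrimaryGeneratedColumn()
  id: number;

  @Index() // team_id appears in almost every query, so index it
  @Column()
  teamId: number;

  @Column()
  name: string;
}

// Somewhere in a service: every query is filtered by the team taken from the request
export async function findContactsForTeam(repo: Repository<Contact>, currentTeamId: number) {
  return repo.find({ where: { teamId: currentTeamId } });
}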
I don't know how you are storing users, but since you are talking about teams, I suppose you have a place where users are stored and assigned to a team. Could that be a table in a common login database?
A solution could be to bind each team to its own database: once a user logs in (reading from the common login database), you look up the team they belong to and the database holding that team's data; then you can access the CRM data from the database bound to the user's team.
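If you do take that route, a very rough sketch (driver, credentials, and function names are assumptions) is to keep a lazily created TypeORM DataSource per team and pick it based on the database name read from the common login database:

import { DataSource } from 'typeorm';

// One DataSource per team database, created on first use and cached
const teamDataSources = new Map<string, DataSource>();

export async function getTeamDataSource(dbName: string): Promise<DataSource> {
  let ds = teamDataSources.get(dbName);
  if (!ds) {
    ds = new DataSource({
      type: 'postgres',      // assumption: adjust to your actual driver
      host: 'localhost',
      port: 5432,
      username: 'app',
      password: 'secret',
      database: dbName,      // the database name looked up from the common login DB
      entities: [],          // register your shared CRM entities here
    });
    await ds.initialize();
    teamDataSources.set(dbName, ds);
  }
  return ds;
}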
I'm trying to design a service architecture that includes two Node.js apps sharing the same database. The overall architecture, in simplified form, is two apps (api and service) connecting to one shared database.
I'm planning to use Sequelize as the ORM to access the database. As far as I know, if a service uses Sequelize, it needs model definitions describing the structure of the data tables. In my case, api and service will access the same database, which means they should share the same Sequelize models.
So here is the question: where should I put the shared Sequelize files? It seems I have two choices:
put them in an upper, common location (assuming the project structure is a monorepo) so that both apps use the same single set of files
maintain a copy of the files in each app's project folder. In this case, each app stays independent (say I want to dockerize each app), but whenever the Sequelize files are modified, the same change has to be made in the other copy.
I'm not sure whether my understanding is correct. Is my question valid? If so, which is the better choice and practice? I appreciate your answers in advance.
There is no single correct answer; it depends on your specific situation, but sharing a database between multiple microservices is a bad design.
Sharing a database means tight coupling at the data level. The direct consequence is that when one service modifies the table structure, for example deleting the name field of the user table, it may break the APIs of the other services that use the Sequelize user model. All services then need to update the model definition and adjust their API implementation code.
If all of your services are maintained by one team, I suggest you choose the first option, which costs less and is easier to maintain. If your services are maintained by different teams, the two options are actually similar, because whenever the table structure is modified, the application-layer model has to be updated, or at least verified to still work.
Therefore, I recommend following the best practices of microservice architecture, first splitting the database vertically according to the business model, and building application APIs on top of it.
Core principles of microservices:
loose coupling
high cohesion
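If you do keep the shared database and go with the first option, one common pattern (sketched here with made-up names) is a shared module in the monorepo that exports model factories, so that api and service each initialize the same definitions on their own Sequelize instance:

// packages/db/user.ts (example shared location in the monorepo)
import { Sequelize, DataTypes, Model } from 'sequelize';

export class User extends Model {}

// Each app calls this once with its own Sequelize instance
export function initUser(sequelize: Sequelize) {
  User.init(
    {
      email: { type: DataTypes.STRING, allowNull: false },
      role: { type: DataTypes.STRING },
    },
    { sequelize, modelName: 'User' }
  );
  return User;
}

Both api and service then import from the shared package, so a table change only has to be reflected in one place.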
Here is my situation. I have an extensive REST-based API that connects to a MongoDB database using Mongoose. The API is written as a standard "MEAN" stack application.
Currently, when a developer queries the API, they are always connecting to the live production database. What I want is an exact duplicate database acting as a "staging" database, where new data is added first, vetted over a period of time, and then moved to the live database. I then want developers to be able to query either one simply by modifying their query.
I started looking into this with the Mongoose documentation, and it appears as though the models are tied to the DB connection, and if I want to have multiple connections I also have to have multiple models, one for each connection. This would be a nightmare of WET code and not the path I want to take.
What I want to do is not touch any of my code at all and simply have a switch that changes to the proper database for a given query. So my question is, how can I achieve this? Is it possible? The documentation seems to imply it is not.
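To illustrate what the Mongoose documentation describes here, a minimal sketch (database URIs are just examples): the schema itself can be defined once, but a model object has to be registered on each connection:

import mongoose from 'mongoose';

const userSchema = new mongoose.Schema({ name: String });

// Example URIs; each connection gets its own model, registered from the one shared schema
const liveConn = mongoose.createConnection('mongodb://localhost/live');
const stagingConn = mongoose.createConnection('mongodb://localhost/staging');

const LiveUser = liveConn.model('User', userSchema);
const StagingUser = stagingConn.model('User', userSchema);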
Rather than trying to maintain connections to two environments in the same code base, have you considered setting up a staging version of your application? Which database it connects to could be set through an environment variable or some other configuration option.
Developers would then only need to make a small change to query one or the other, and you could migrate data from the staging database to the production/live database once your vetting process is finished.
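A minimal sketch of that switch (variable and database names are just examples) is to read the connection string from an environment variable, so the staging and production deployments run identical code against different databases:

import mongoose from 'mongoose';

// e.g. MONGO_URI=mongodb://localhost/myapp_staging for the staging deployment
const mongoUri = process.env.MONGO_URI || 'mongodb://localhost/myapp_production';

mongoose.connect(mongoUri).then(() => {
  console.log(`Connected to ${mongoUri}`);
});

Developers then point their API requests at either the staging deployment or the production one; the application code and the Mongoose models stay identical.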
I am new to SubSonic and I'd like to know the best practices for the following scenario:
SubSonic supports multiple database systems, e.g. SQL Server and MySQL. Our customers need to decide, while deploying our application to their servers, which database system should be used. Long story short: the providerName, normally specified in the application configuration, should be configurable after the application is finished.
How can this be done? Do I have to generate separate data libraries for each database system I want to support?
Thank you in advance
Marco
No, you do not need to generate separate libraries.
However, as you might expect, you cannot use raw SQL strings; you always need to build your queries with SubSonic's query-generation code.
It is also good to run some tests against the different databases, because not all code has been 100% tested in every case.
I developed an app that connects to a SQL Server 2005 database, so my DAL objects were generated using information from that DB.
It will also be possible to connect to an Oracle or MySQL DB, all with the same table structures (aside from the normal differences in field types, such as varbinary(max) in SQL Server and BLOB in Oracle, and so on). For this purpose, I have already defined multiple connection strings and multiple SubSonic providers for the different DBs the app will run on.
My question is: if I generated my objects using a SQL Server database, should the generated objects work transparently with the other DBs, or do I need to generate a different DAL for each database engine I use? Should I be aware of any possible bugs I may encounter while doing this?
Thanks in advance for any advice on this issue.
I'm using SubSonic 2.2 by the way....
From what I've been able to test so far, I can't see an easy way to achieve what I'm trying to do.
The ideal situation for me would have been to generate the SubSonic objects using SQL Server, for example, and then be able to switch dynamically to MySQL just by creating the correct provider and connection string at runtime. I got to a point where my app would correctly switch from SQL Server to a MySQL DB, but then the app fails, since SubSonic internally generates queries of the form
SELECT * FROM dbo.MyTable
which MySQL obviously doesn't support. I also noticed queries that enclose table names in square brackets ([]), so it seems there are a number of factors limiting the use of one provider across multiple DB engines.
I guess my only other option is to sort it out with multiple generated providers, although I must admit it doesn't make me comfortable knowing that I'll have N copies of basically the same classes across my project.
I would really love to hear from anyone else if they've had similar experiences. I'll be sure to post my results once I get everything sorted out and working for my project.
Has any of this changed in 3.0? This would definitely be a worthy reason for me to upgrade if life is any easier on this matter...