PostgreSQL - Continuous integration - node.js

I have a database (PostgreSQL) in a development environment, which I use to develop a GraphQL API in Node.js. I would like to know how, when I make modifications to the database, I can propagate these modifications to staging and then to production automatically, without having to redo all the queries and so on in each environment.
Do you know how to do it?
Thank you

A typical solution is to use something like migrations. You should have a special table that stores information about all applied migrations.
The first migration can just execute an initial script that creates all tables, relations, functions and so on.
The subsequent migrations modify the structure according to changes in your app, and you always know which migrations have been applied to a certain DB.
To work with migrations, you should find a suitable package that can create, execute and undo migrations, and maybe seeders as well (something like this package).
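For illustration, here is a minimal sketch of such a migration using Knex as one possible package (the file name, table and columns are just placeholders):

```js
// migrations/20240101120000_create_users.js -- a hypothetical changeset
exports.up = function (knex) {
  // Applied exactly once per environment; Knex records it in its own
  // knex_migrations table so it knows which migrations have already run.
  return knex.schema.createTable('users', (table) => {
    table.increments('id');
    table.string('email').notNullable().unique();
    table.timestamps(true, true);
  });
};

exports.down = function (knex) {
  // Undo the change so the migration can be rolled back if needed.
  return knex.schema.dropTable('users');
};
```

Running `knex migrate:latest` against development, staging and production then applies only the changesets each environment has not seen yet, which is what makes the promotion automatic.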

Related

How can I switch between a live and a production database without duplicating code?

Here is my situation. I have an extensive REST-based API that connects to a MongoDB database using Mongoose. The API is written as a standard "MEAN" stack application.
Currently, when a developer queries the API they're always connecting to the live production database. What I want to do is have an exact duplicate database as a "staging" database, where new data will be added first, vetted over a period of time, and then moved to the live database. Then I want developers to be able to query either one simply by modifying their query.
I started looking into this with the Mongoose documentation, and it appears as though the models are tied to the DB connection, and if I want to have multiple connections I also have to have multiple models, one for each connection. This would be a nightmare of WET code and not the path I want to take.
What I want to do is not touch any of my code at all and simply have a switch that changes to the proper database for a given query. So my question is, how can I achieve this? Is it possible? The documentation seems to imply it is not.
Rather than trying to maintain connections to two environments in the same code base, have you considered setting up a staging version of your application? Which database it connects to could be set through an environment variable or some other configuration option.
The developers would then only have to make a small change to query one or the other, and you could migrate data from the staging database to the production/live database once you have finished your vetting process.
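For example, a minimal sketch of that idea with Mongoose, where the variable name MONGODB_URI and the fallback URI are assumptions:

```js
// db.js -- picks the target database from the environment
const mongoose = require('mongoose');

// The staging deployment sets MONGODB_URI to the staging database and the
// production deployment sets it to the live one; the code stays identical.
const uri = process.env.MONGODB_URI || 'mongodb://localhost:27017/app-dev';

mongoose.connect(uri);

module.exports = mongoose;
```

Because the models hang off the default connection, none of the model or query code has to change between environments.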

Populate TingoDB with data for acceptance test

I have a Node.js app that uses MongoDB as its database. I'm using the native mongo driver (not Mongoose).
The application allows users to work on projects and share them, and the logic that decides which projects a user is allowed to see is built as a mongo criteria selector.
In order to test that, I've found TingoDB, which looks like a great candidate for mocking MongoDB so that I can run the real model code and check that it is working.
My question is: what is the best way to load the initial data? Keep it in a separate file? Keep it as another model?
Thank you,
Ido.
TingoDB actually stores its data in flat files, so if you want, you could just keep a copy of the database in a directory and load that.
However, if you're just testing with a small amount of data, you'd probably be better off keeping the test data in your testing scripts and inserting it through your application as part of the test. That way, you can easily compare the data in the application to the data you loaded when making your assertions.
Finally, if you're running MongoDB in production, then you should probably use MongoDB in your tests. While the two have nearly identical APIs, they have very different performance characteristics, which should be something you're keeping track of in testing. Unless there's a specific need to use TingoDB during testing, I'd try to make the test environment as similar to the production environment as possible.
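As a rough sketch of the second suggestion, assuming a mocha-style runner (the collection name and fixture documents are made up for illustration):

```js
// test/setup.js -- seed TingoDB with fixture data before the tests run
const Engine = require('tingodb')();
const db = new Engine.Db('./test-data', {});

const fixtures = [
  { name: 'Project A', owner: 'alice', sharedWith: ['bob'] },
  { name: 'Project B', owner: 'bob', sharedWith: [] },
];

before(function (done) {
  // Insert the fixtures through the same driver-style API the model code
  // uses, so assertions can compare query results against this array.
  db.collection('projects').insert(fixtures, done);
});
```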

Unit testing queries with MongoDB

I'm currently building a REST API and I'm struggling to find the best way to unit test each route.
A route handler performs various things, and one of them is to execute a query against MongoDB. I can unit test the route handler by using stubs, but if I'm testing the query I cannot stub the query itself; I need an in-memory MongoDB that I can reset and insert new data into for each test.
How do you test queries? I'm thinking that the only real way to ensure that a query does what I need is to use a real MongoDB database installed on the testing machine (typically the same machine used for development).
Yes, just as for relational databases, you need a real database. If Mongo offers an in-memory, auto-created version, then it's easy. If not, then each developer has to have Mongo running before they run the integration tests. For CI you can have one single dedicated Mongo instance, but then you have to prevent concurrent access (schema creation, multiple transactions, etc.). You should also implement automatic creation of the schema if needed and empty the database before each test. In a relational DB a rollback is usually enough; when it's not, truncating all tables helps, although we had to implement that manually as we couldn't find any existing tools.
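A minimal sketch of the "empty the database before each test" part with the native driver, where the TEST_MONGO_URI variable and the mocha-style hooks are assumptions:

```js
// test/setup.js -- wipe every collection before each test case
const { MongoClient } = require('mongodb');

let client;
let db;

before(async function () {
  client = await MongoClient.connect(process.env.TEST_MONGO_URI);
  db = client.db();
});

beforeEach(async function () {
  // Start every test from a known, empty state.
  const collections = await db.collections();
  await Promise.all(collections.map((c) => c.deleteMany({})));
});

after(function () {
  return client.close();
});
```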

DB migration: Single creation script vs change sets

I am creating a DB schema per customer. So whenever a new customer registers I need to quickly create their schema at runtime.
Option 1
At runtime, use Liquibase (or an equivalent) to run all the changesets to generate the latest schema.
Cons:
This is slow; there can be multiple historical changesets which are no longer relevant (create a table and a year later drop it).
Liquibase is used here at runtime and not just at "migration time". Not sure if this is a good idea.
Standardizing on Liquibase as a means to create the schema will force all developers to use it during development. We try to avoid loading more tools onto the developers.
Option 2
After each build we generate a temporary DB using the Liquibase changesets. Then, from that DB, we create a clean schema creation script based on the current snapshot. When a new customer comes we just run the clean script, not the full changeset history.
Cons:
The next time I run Liquibase it will try to run from changeset 1. A workaround might be to include in the generated script the creation of the changeset table and the insertion of the latest changeset into it.
New schemas are created using one script, while old schemas go through the changeset process. In theory this might produce a different schema. However, the single script went through the changeset process as well, so I can't think of an exact case that would cause an error; this is a theoretical problem for now.
What do you think?
I would suggest option #1 for consistency.
Database updates can be complex, and the less chance for variation the better. That means you should have your developers create the Liquibase changeSets initially to update their databases as they are implementing new features, so they know the changeSets run as they expect, and then know that those same steps will be run in QA and all the way through production. It is an extra tool they need to deal with, but it should be easy to integrate into their standard workflow in a way that is easy for them to use.
Similarly, I usually recommend leaving non-relevant historical changeSets in your changeLog, because if you remove them you are deviating from your known-good update path. Databases are fast with most operations, especially on a system with little to no data. If you have specific changeSets that are no longer needed and are excessively expensive for one reason or another, you can remove them on a case-by-case basis, but I would suggest doing that very rarely.
You are right that creating a database snapshot from a Liquibase script should be identical to running the changeLog, as long as you include the databasechangelog table in the snapshot. However, sticking with an actual Liquibase update all the way through to production will allow you to use features such as contexts, preconditions and changeLog parameters that may be helpful in your case as well.
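For illustration, a tiny changeLog sketch in Liquibase's SQL-formatted changelog syntax (the author, ids and table are placeholders); the first changeSet is the initial creation and later changeSets only describe deltas:

```sql
--liquibase formatted sql

--changeset alice:1
CREATE TABLE customer (
    id BIGINT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);
--rollback DROP TABLE customer;

--changeset alice:2
ALTER TABLE customer ADD email VARCHAR(255);
--rollback ALTER TABLE customer DROP COLUMN email;
```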
There are two approaches to database deployment:
Build once, deploy many – this approach uses the same principle as native code: compile once and copy the binaries across the environments. From a database point of view this means that the deploy script is generated once and then executed across environments.
Build & deploy on demand – this approach generates the delta script when needed, in order to handle any out-of-process changes.
If you use the build & deploy on demand approach, you can generate the delta script for the entire schema or per work item / changeset.

node-mongo-native migration framework

I'm working on a node.js server, and using MongoDB with node-mongo-native.
I'm looking for a db migration framework, similar to Rails migrations. Any recommendations?
I'm not aware of a specific native Node.js tool for doing MongoDB migrations, but you do have the option of using tools written in other languages (for example, Mongoid Rails Migrations).
It's worth noting that the approach to schema design and data modelling in MongoDB is different from relational databases. In particular, there is no requirement for a collection to have a consistent or predeclared schema, so many of the traditional migration actions, such as adding and removing columns, are not required.
However, migrations which involve data transformations can still be useful.
If your application expects data to be in a certain format (e.g. you want to split a "name" field into "first name" and "last name"), there are several strategies you could use if the idea of using migration tools written in another programming language isn't appealing:
handle data differences in your application logic, so old and new data formats are both acceptable (perhaps "upgrading" records to match a newer format as they are updated)
write a script to do a one-off data migration (see the sketch after this list)
contribute MongoDB helpers to node-migrate
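As a rough sketch of the one-off data migration strategy, splitting "name" into first and last name fields with the native driver (the connection string, database, collection and field names are assumptions):

```js
// split-name.js -- one-off data migration sketch using the native driver
const { MongoClient } = require('mongodb');

async function run() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const users = client.db('myapp').collection('users');

  // For a large collection you would iterate the cursor in batches instead.
  const docs = await users.find({ name: { $exists: true } }).toArray();
  for (const doc of docs) {
    const [firstName, ...rest] = doc.name.split(' ');
    await users.updateOne(
      { _id: doc._id },
      { $set: { firstName, lastName: rest.join(' ') }, $unset: { name: '' } }
    );
  }

  await client.close();
}

run().catch(console.error);
```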
I've just finished writing a basic migration framework based on node-mongo-native: https://github.com/afloyd/mongo-migrate. It will allow you to migrate up and down, as well as migrate up/down to a specific revision number. It was initially based on node-migrate, but obviously needed to be changed a bit to make it work.
The revision history is stored in MongoDB and not on the file system (as in node-migrate), allowing collaboration on the same project using a single database. Otherwise, each developer running migrations could cause migrations to run more than once against a database.
The migrations themselves are file-based, which also helps with collaboration on a single project where each developer is (or is not) using the same database. So when each dev runs the migrations, all migration files not already run against his/her database will be run.
Check out the documentation for more info.
