Both ActiveRecord (from Rails) and Sequelize (a Node.js ORM) provide a way to initialize a database, creating the table structures from the model definitions. Rails does this through the rails db:schema:load command, while Sequelize offers the sync() method for the same purpose. With either of those, we don't need to run the application's entire migration stack to start a fresh database, nor keep SQL dumps in the project repository.
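For reference, the Sequelize behaviour I mean looks roughly like this in current Sequelize versions (the connection string and model are made up for illustration):

const { Sequelize, DataTypes } = require('sequelize');

// Example connection; any supported database works the same way
const sequelize = new Sequelize('postgres://localhost:5432/myapp_dev');

const User = sequelize.define('User', {
  name: { type: DataTypes.STRING, allowNull: false },
  admin: { type: DataTypes.BOOLEAN, defaultValue: false }
});

// Creates the tables straight from the model definitions, no migrations involved
sequelize.sync().then(() => console.log('schema created'));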
Coming from this background, I was expecting Bookshelf.js or Knex.js to have some kind of similar functionality, but I couldn't find it in the documentation of either project.
I then decided to take a look at the source code of the Ghost blogging engine, which uses Bookshelf, and found out that they handle database initialization inside their own codebase:
https://github.com/TryGhost/Ghost/blob/e40290a/core/server/data/schema/schema.js
https://github.com/TryGhost/Ghost/blob/e40290a/core/server/data/migration/populate.js
https://github.com/TryGhost/Ghost/blob/e40290a/core/server/data/schema/commands.js
I'd like to avoid writing my own code to handle this, especially since other options like Sequelize offer it out of the box.
Is there any common practice, plugin or library recommended for database schema loading on Bookshelf?
I think you were on the right track and maybe just missed the docs.
http://knexjs.org/#Migrations
For loading a schema, you can run the migration stack if the migrations are written to be idempotent.
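For instance, a Knex migration can be written so that re-running it against an existing database is harmless; the table and column names below are just placeholders:

// migrations/20160101000000_create_users.js
exports.up = async function(knex) {
  // Guard makes the migration idempotent
  const exists = await knex.schema.hasTable('users');
  if (!exists) {
    await knex.schema.createTable('users', (table) => {
      table.increments('id');
      table.string('name').notNullable();
      table.timestamps();
    });
  }
};

exports.down = function(knex) {
  return knex.schema.dropTableIfExists('users');
};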
Another option is to use the database's export and import features. For example: https://www.postgresql.org/docs/9.1/static/app-pgdump.html
Rails does this with two rake tasks, db:schema:dump and db:schema:load. db:schema:dump writes your DB schema to a local schema.rb file in a custom Ruby format, and db:schema:load loads that file.
So you could achieve something similar using pg_dump and pg_restore and an npm package script. Just have the script use Node's exec method to call pg_dump and write to the same file every time.
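A minimal sketch of that idea (the database name and file paths are assumptions):

// scripts/schema-dump.js -- hypothetical helper, wired up as an npm script
const { exec } = require('child_process');

// Dump only the schema (no data) to the same file every time
exec('pg_dump --schema-only --file=db/schema.sql myapp_development', (err, stdout, stderr) => {
  if (err) {
    console.error(stderr);
    process.exit(1);
  }
  console.log('schema written to db/schema.sql');
});

A matching "schema:load" script could then feed that file to psql (or use pg_restore with a custom-format dump) against a freshly created database.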
Related
Most of my dev experience is based on Ruby on Rails. The framework supports having a DB schema in two formats:
RoR DSL
SQL for cases when the DSL is not enough. For instance, having an initially deferred unique constraint in PostgreSQL.
If a DB needs to be set up from scratch, for instance in CI, it's possible to run a CLI task that takes either of those files and applies it, without any need to run the migration files.
About two weeks ago we started a project based on ExpressJS + Prisma, and now we need custom SQL for the DB structure.
After reading the Prisma docs I found that it's possible to write custom SQL inside migration files, and this is exactly what we need for production. However, we would also like to have the same DB schema in our CI. Is there any way to get the same schema in CI as we have in production without running the migration files one by one, as I can do with RoR?
Is there a way to generate jHipster code without Liquibase?
Or is there a JHipster generator that produces the database scripts as some kind of separate project, like for Oracle, MySQL, P SQL, etc.?
What I mean is that Liquibase does not meet my needs. For example, I want to create tablespaces for my Oracle DB, create a user and grant privileges to it, and then write my SQL scripts in pure SQL rather than in a Liquibase XML file, because I want my table data to reside in my tablespace and I want to specify column sizes based on my requirements. And when I'm done creating the SQL file, I just have to run the jar file for all the DB work.
If there is no option for generating the schema without Liquibase, I'm considering disabling Liquibase and generating the tables manually. Can I disable Liquibase? There are some online suggestions, but they are for JHipster 3.12 and I have not tried them yet.
Disable Liquibase temporarily in JHipster 2.26
Yes, you can use the no-liquibase profile (here is the documentation), and Liquibase will not run anymore.
However, JHipster will still generate the Liquibase files for you: you can just ignore them, or even delete them.
I personally use that profile often, even though I use Liquibase a lot, as it speeds up deployment (of course, I only use it when I'm not modifying my database, but I'm not modifying it all the time). So this is a nice trick to know, whether you like Liquibase or not.
Then, I'm pretty sure you can do your database-specific scripts with Liquibase, or run them before Liquibase, so maybe this is another solution you could use.
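If I remember the JHipster documentation correctly, the profile is activated alongside your usual build profile, something like ./mvnw -Pno-liquibase in development or ./mvnw -Pprod,no-liquibase for a production build (the Gradle equivalents work the same way), but check the linked docs for the exact invocation in your JHipster version.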
Is there any tool that works similar to Django South, but for Node?
Now I'm working with Sequelize. If I got it right, Sequelize does not have an option to create migration files based on existing models.
So, to create a new model/table, the steps are:
Create the model with sequelize model:create <model meta>.
Edit the generated migration file, adding the actual code for creating the tables in the DB under the up section (see the sketch after this list).
Run the migration with sequelize db:migrate.
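For reference, the hand-written part of the second step usually ends up looking something like this (table and columns are placeholders):

// migrations/20160101000000-create-user.js
module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.createTable('Users', {
      id: { type: Sequelize.INTEGER, primaryKey: true, autoIncrement: true },
      name: { type: Sequelize.STRING, allowNull: false },
      createdAt: { type: Sequelize.DATE, allowNull: false },
      updatedAt: { type: Sequelize.DATE, allowNull: false }
    });
  },
  down: (queryInterface) => {
    return queryInterface.dropTable('Users');
  }
};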
I'm looking for something that can create migration files based on existing models and manage them, similar to what South does for Django.
Is there any option?
I have written a step-by-step guide on how to auto-create migrations with Sequelize in another post. Here is a summary...
The closest thing with Sequelize is Sequelize Auto Migrations.
It allows you to have an iteration cycle like the following:
Create/update model (by hand or with sequelize-cli)
Run makemigrations to auto-generate up and down migrations
Repeat as necessary
While this is very helpful, I've found it to be lacking in some critical areas:
The down migrations can be created incorrectly. So it may try to drop a table before its dependent tables have been dropped.
There are certain configurations around multi-field indexes that it does not output correctly.
There are currently 10 outstanding PRs, so it seems like a few additional contributors are attempting to make it more production-ready... but I've yet to find anything as clean and reliable as Django Migrations (formerly Django South).
TypeORM supports model-based migrations. It can sync your DB to your models directly, and it can also create migration files.
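As a rough illustration (connection details and entity paths are placeholders, and the setup differs between TypeORM versions; the DataSource class shown here is from TypeORM 0.3+): the synchronize flag keeps the schema in line with your models on startup, while the migration CLI can instead generate migration files by diffing the models against the database.

// data-source.js
const { DataSource } = require('typeorm');

module.exports = new DataSource({
  type: 'postgres',
  url: 'postgres://localhost:5432/myapp_dev',
  entities: ['src/entity/*.js'],
  migrations: ['src/migration/*.js'],
  // true keeps the schema synced to the models on startup (handy in dev, risky in prod)
  synchronize: false
});

Migration files are then generated with TypeORM's migration:generate CLI command, as covered in the TypeORM migrations docs.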
I think Prisma is another option. It doesn't seem to be that popular, but it's a promising one.
Either way, it's just ridiculous that there are no solid tools for this. I've worked on Django and .NET projects in recent years, and creating migrations is just easy with them. But when you try to use Node.js for the backend, you get stuck at a lot of points.
I was using Sequelize, and when I saw there was no official way to create automatic migrations from models, I dropped it. Maintaining your models with manually written migrations becomes very hard, in my experience.
Now my only alternative is TypeORM, and it bugs me that there is no other alternative in case TypeORM goes unmaintained or I want to use another library, etc.
I'm seriously thinking of dropping Node.js for the backend. Even though there are good tools for creating projects integrated with modern front-end tooling (like Next.js), finding a good ORM is a big problem.
Take a look at https://typeorm.io/#/migrations/generating-migrations. I am in the same situation you were in 4 years ago.
My options:
Waterline just for the ORM, plus a diff tool (like dbdiff) to produce a file with the differences between a new schema (generated by a Waterline migration with 'drop') and the production schema. With that output, you run the queries one by one in a safe mode.
The previous option plus Knex migrations. But you have to write your own migration files. Knex doesn't have a schema file to compare against, but there is a feature request: https://github.com/knex/knex/issues/1086.
Use Sails but swap Waterline for Sequelize and give the answer by paulmest a try.
Use Sails but swap Waterline for TypeORM and use its auto-generated migrations.
Over the years, my team has tried Sequelize and TypeORM, but neither of them was a great experience. TypeORM had tons of cryptic problems, and thousands of its issues have sat unaddressed/open for years on end. Sequelize was generally fine, but the lack of type support made it time-consuming to work with.
For the last 1.5 years, my team has been using Hasura. It's honestly been a breath of fresh air. Hasura's migration system is pretty barebones compared to Django South, but it is straightforward and has a clean happy path.
You can use the Hasura Console (a localhost web editor) to create tables and add/remove columns. Those changes will be automatically distilled into schema changes stored in .sql files in a migration directory. You can modify these files. You can run a migration command from the command line to apply them when you want.
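For what it's worth, the command-line side of this is minimal: as far as I recall, it's essentially hasura console to open the local web editor and hasura migrate apply to apply the pending .sql migrations to a target database.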
Since Hasura is a GraphQL engine, it also comes with a schematized SDK that is TypeScript compatible so you get incredible editor support.
All of this and much more is available in the open source version of their product. We have not needed to pay for any higher tier.
You can find more information here: https://hasura.io/docs/latest/graphql/core/migrations/index.html
I'm currently using PostgreSQL for the database.
I come from more of a Rails background, where we create a migration and then run rake db:migrate to migrate the database.
How can I do something similar in sails.js?
Do I need to?
With an unmodified config/models.js file, each time you run sails lift it will prompt you for one of 3 possible options, detailed in the docs here:
safe -- No migrations are run
alter -- Sails will attempt to migrate the data as intelligently as possible
drop -- Sails will drop the database and run all of the migrations. Equivalent to rake db:drop db:migrate.
It's recommended that you only use safe in production, and run your migrations either by hand or using one of the following modules (non-exhaustive list):
https://github.com/building5/sails-db-migrate
https://github.com/BlueHotDog/sails-migrations
In development, however, you're generally safe to modify your config/models.js file and set the migrate attribute to alter.
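That setting looks roughly like this:

// config/models.js
module.exports.models = {
  // 'safe' (production), 'alter' (development convenience), or 'drop'
  migrate: 'alter'
};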
I am exploring writing a Sails.js application that talks to a MongoDB data store. I've got it all working just fine, but I'd like to add some sort of initialisation process that runs at npm install time, analogously to rake admin:create in the Rails world.
I'm guessing that this could be done via a grunt task somehow, but I figured that, before I roll my own solution, I'd ask other Sails / MongoDB users what they actually do in the real world and see if a consistent or best-practice solution has emerged.
Database initialization for Sails apps is typically done in the config/bootstrap.js file. A simple one would be something like:
module.exports = function(cb) {
  User.findOrCreate(
    // Search for user with "admin" flag
    {admin: true},
    // Create one if no such user is found
    {admin: true, name: 'Admin User', ...}
  ).exec(cb);
}
The bootstrap will run only when Sails is lifted, and the server won't start until the callback is executed. If you send anything as the first argument to the callback, it'll be considered an error (like most standard Node callbacks) which will halt lifting of the server.
Note that you can't create the actual database with Waterline. A Grunt task would be a fine place for that, but you have to consider that Grunt doesn't have access to the Sails config, so you'd have to hard-code the name of the database to use, or find some other way to make it configurable. This could have implications for different environments (i.e. development vs. production). In our projects at Balderdash, we typically leave database creation as a manual step for the developer after they check out the code, and then use bootstrap.js to initialize the data. This allows the developer to maintain their own database settings using config/local.js.
All that being said, I can see the value in setting up a Grunt task for creating a db for production, to make it easier to deploy your Sails app to its final destination.
I personally think it was a wise decision by the sails.js team to use Grunt. Grunt provides a reliable and flexible solution to whatever automation you need.
I use grunt for everything in my sails.js projects including....
Importing data from legacy databases
Compiling sass (even though sails uses less by default)
Compiling coffeescript
Assembling knockout templates
Loading indexes into elasticsearch
You won't be disappointed using Grunt; it's pretty easy to get started with. So get to it!
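As a taste of how little ceremony a custom task needs, here is a minimal, hypothetical seed task (the task name and body are made up; in a Sails project it would typically live under tasks/register/):

// tasks/register/seed.js
module.exports = function(grunt) {
  grunt.registerTask('seed', 'Load initial data into the database', function() {
    const done = this.async();
    // ...connect to the database and insert seed records here...
    grunt.log.ok('seed data loaded');
    done();
  });
};

It can then be run on demand with grunt seed, or chained into other task aliases.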