How to deal with calling sequelize.sync() first? - node.js

I'm a bit new to developing in nodejs, so this is probably a simple problem. I'm building a typical webapp based on express + sequelize. I'm using an in-memory sqlite database since I'm just prototyping at the moment. I understand that if I were to use a persistent sqlite file, this might not be a problem, but that's not my goal at the moment. Consider the following:
var User = sequelize.define("User", {
  "username": DataTypes.STRING,
  // etc, etc, etc
});

sequelize.sync();

User.build({
  "username": "mykospark"
});
At first, I got an error from User.build() because the Users table didn't exist yet. I realized that sequelize.sync() runs asynchronously, and the insert was happening before the table was created. I then re-arranged my code so that the User.build() call was inside sequelize.sync().complete(), which fixes the problem, but I'm not sure how to apply this to the rest of my project.
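For reference, the re-arranged version looks roughly like this:

sequelize.sync().complete(function (err) {
  // the Users table is guaranteed to exist here
  User.build({
    "username": "mykospark"
  });
});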
My project uses models in a bunch of different places. It is my understanding that I just want to call sequelize.sync() once after my models are defined, then they can be used freely. I could probably find some way to block the entire nodejs app until sequelize.sync() finishes, but that doesn't seem like good form. I suppose I could wrap every single model operation into a sequelize.sync().complete() call, but that doesn't seem right either.
So how do people usually deal with this?

Your .sync() call should happen once, within your app.js file (you might have additional calls if you manage multiple databases from one server). Typically the .sync() call lives in your server file, while the var User = sequelize.define("ModelName"... definitions live in your models/modelName.js files. Sequelize suggests this layout to "create a maintainable application where the database logic is collected in the models folder", which will help as your project grows. Later in the answer, I'll provide an easy way to initialize this file structure.
So for your case, you would have app.js, models/index.js, and models/user.js, where app.js is your server running the .sync() method. In the models folder you have the required index.js file, where you configure a connection to the database and collect all the model definitions. Finally, you have your user.js file, where you add your model with class and instance methods. Below is an example of the models/user.js file you might find helpful.
user.js
module.exports = function(sequelize, DataTypes) {
  return sequelize.define('User', {
    username: DataTypes.STRING
  }, {
    classMethods: {
      doSomething: function(successcb, errcb, request) {}
    },
    instanceMethods: {
      someThingElse: function(successcb, errcb, request) {}
    }
  });
};
models/index.js --> See here
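A minimal app.js wiring this together might look roughly like the sketch below (the Express setup is an assumption; this uses the same old-style .complete() API as the question):

// app.js (sketch)
var express = require('express');
var db = require('./models'); // models/index.js collects the model definitions

var app = express();

// sync once at startup and only start accepting requests afterwards,
// so every route can assume the tables already exist
db.sequelize.sync().complete(function (err) {
  if (err) throw err;
  app.listen(3000);
});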
EDIT 03/14/17
Now the best option to set up your node app with sequelize is to use sequelize-cli. This is the tool behind sequelize migrations and has very useful functionality in development and production environments. For the scope of this question and this revision to the answer, the best approach is the following:
npm install sequelize-cli
Use npm install sequelize-cli -g if you want it installed globally.
Then initialize sequelize migrations:
sequelize init
It should create the following folder and file structure in the directory where you ran the command:
config:
-config.json
models:
-index.js
seeders:
migrations:
If you want to create a model, you can run the following command and it will auto-generate the file structure for you. Here is an example:
sequelize model:create --name User --attributes username:string,email:string
Next you should be able to see the new model in models/user.js (and, after generating a Page model the same way, models/page.js):
config:
-config.json
models:
-index.js
-user.js
-page.js
seeders:
migrations:
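Running model:create also adds a matching migration file under migrations; it looks roughly like this sketch (the column options are assumptions based on the attributes above):

// migrations/<timestamp>-create-user.js (sketch)
module.exports = {
  up: function (queryInterface, Sequelize) {
    return queryInterface.createTable('Users', {
      id: { allowNull: false, autoIncrement: true, primaryKey: true, type: Sequelize.INTEGER },
      username: { type: Sequelize.STRING },
      email: { type: Sequelize.STRING },
      createdAt: { allowNull: false, type: Sequelize.DATE },
      updatedAt: { allowNull: false, type: Sequelize.DATE }
    });
  },
  down: function (queryInterface, Sequelize) {
    return queryInterface.dropTable('Users');
  }
};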
You'll then need to go into your models/index.js and register your new model so your database code can access the correct path for that model. Here is an example:
models/index.js
var Sequelize = require('sequelize');

// dbname, user, password and config come from your config/config.json
var sq = new Sequelize(dbname, user, password, config);

var db = {
  Sequelize: Sequelize,
  sequelize: sq,
  page: sq.import(__dirname + '/page.js'),
  user: sq.import(__dirname + '/user.js')
};

module.exports = db;
If you need to make changes to a model, you can go into the migrations folder and add methods; follow the sequelize migration docs here. Now, about the app.js server: before you run your server you need to initialize your databases. I use the following script to initialize the database before running the server to set up a Postgres DB:
postgresInit.sh
[...]
sudo -u postgres createdb -U postgres -O $PG_USER $PG_DB
If you prefer a JavaScript solution, there is an SO answer here
app.js
[...]
// This will sync your tables to your database; the console should read:
// Executing (default): CREATE TABLE IF NOT EXISTS "TABLE NAME"...
db.sequelize.sync().then(function () {
  // safe to start handling requests here
});

Related

How to use postgres or SQL Server with nestjs

I am from a .NET background and a few days back I started using NestJS for one of my projects.
I love the way NestJS is built, but I have a few questions.
With .NET I can easily connect an application to a database using ADO.NET or an ORM like EF.
How do I connect NestJS to a Postgres or SQL Server database?
Whatever the answer is, I would also like to know whether it will be suitable for enterprise applications.
Is there any reference site?
Thanks in advance
The docs show examples of how to connect to a database using TypeORM and Sequelize, or you can roll your own dynamic module with custom providers if you want to use something else. There are a few packages around for knex, pg-promise, and massiveORM. These should all be more than suitable for enterprise applications, but if anything shows up as an issue, make sure to notify the owners of the repository.
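For illustration, the TypeORM route from the docs looks roughly like this sketch (all connection options here are assumptions):

// app.module.ts — a minimal sketch, assuming @nestjs/typeorm and pg are installed
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',  // or 'mssql' for SQL Server
      host: 'localhost',
      port: 5432,
      username: 'postgres',  // assumed credentials
      password: 'postgres',
      database: 'mydb',
      entities: [__dirname + '/**/*.entity{.ts,.js}'],
      synchronize: false,  // keep schema sync off in production
    }),
  ],
})
export class AppModule {}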
Here is the basic structure I created to set up my current project with NestJS and PostgreSQL just in case this helps.
I added the following packages:
$ npm install --save knex objection objection-graphql pg
Created the Nest database module and added it to the "imports" and "exports" within app.module.ts:
./src/database/database.module.ts
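The contents of that module aren't shown here; a minimal sketch of what it might contain, assuming the knex connection is built from the knexfile shown further down and bound to objection's Model (the provider token is made up):

// ./src/database/database.module.ts — a sketch, not the exact file
import { Module } from '@nestjs/common';
import Knex from 'knex';
import { Model } from 'objection';

const knexConfig = require('../../knexfile');

const knexProvider = {
  provide: 'KNEX_CONNECTION', // hypothetical injection token
  useFactory: () => {
    const knex = Knex(knexConfig[process.env.NODE_ENV || 'development']);
    Model.knex(knex); // bind all objection models to this connection
    return knex;
  },
};

@Module({
  providers: [knexProvider],
  exports: [knexProvider],
})
export class DatabaseModule {}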
Then added the following files and folders:
./src/database/migrations
./src/database/models
./src/database/base.model.ts
./src/database/user.model.ts
./src/database/seeds
./src/database/migration.stub
./src/database/seed.stub
At the root level, I placed the knexfile.ts
import 'dotenv/config';
import Knex from 'knex';
import { knexSnakeCaseMappers } from 'objection';

module.exports = {
  development: {
    client: 'pg',
    connection: process.env.DATABASE_URL,
    migrations: {
      directory: './src/database/migrations',
      stub: './src/database/migration.stub',
    },
    seeds: {
      directory: './src/database/seeds',
      stub: './src/database/seed.stub',
    },
    ...knexSnakeCaseMappers(),
  },
  production: {
    client: 'pg',
    connection: process.env.DATABASE_URL,
    migrations: {
      directory: __dirname + '/database/migrations',
    },
    seeds: {
      directory: __dirname + '/database/seeds/production',
    },
  },
} as { [env: string]: Knex.Config };
Then, for each new module, I import the model from the database folder:
import { UserModel } from '../database/models/user.model'
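A service can then query straight through the model; for example (a sketch; the active column is an assumption):

// somewhere in a service method (sketch)
async function findActiveUsers() {
  // objection models run through the knex connection bound in the database module
  return UserModel.query().where('active', true);
}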
That's it. Here you have connected PostgreSQL with NestJS.
As others have mentioned, the NestJS manual is a well-written resource for a detailed reference.
RON

How can I store and manage a DB schema in Arango or Neo4j in a Node.js project

I can't understand how to manage a schema so that the database structure stays the same across different environments (development, testing, and production) and on the deployment server.
How do I define, store, and update an ArangoDB or Neo4j schema?
I know the sequelize utility for SQL databases; it has a tool for migrations between different schema versions (http://docs.sequelizejs.com/manual/tutorial/migrations.html), which should help to better understand what I want.
There is an abstract migration framework, https://github.com/tj/node-migrate
You can use it with any database because it lets you execute arbitrary code and save state anywhere (by default, to a file).
Here is how I configured it for ArangoDB
npm i migrate --save-dev
add script to package.json
"scripts": {
"migrate": "migrate"
},
create migrations folder
npm run migrate init
create migration
npm run migrate create test
replace the generated migration implementation with:
require('dotenv').config();
const arangojs = require('arangojs');

const db = new arangojs.Database({ url: process.env.DB_HOST });
db.useDatabase(process.env.DB_NAME);
db.useBasicAuth(process.env.DB_USERNAME, process.env.DB_PASSWORD);

module.exports.up = async (next) => {
  const collection = db.collection('test');
  await collection.create();
  next();
};

module.exports.down = async (next) => {
  const collection = db.collection('test');
  await collection.drop();
  next();
};
add to .gitignore
.migrate
run migration
npm run migrate up
rollback migration
npm run migrate down
There are schema evolution tools for ArangoDB, namely Migrant Verde and its successor ArangoMigo.
ArangoMigo uses YAML files to describe schemata, while Migrant Verde used XML files.

How can I pass a library to a lower dependency in node?

I'm trying to create a library that uses the ORM Bookshelf.js (so basically a wrapper on top of Bookshelf.js, if you will). Now Bookshelf.js requires you to install a database driver in the root of your project to be able to connect to the correct database (such as: npm install mysql).
My own library also requires this, since Bookshelf.js requires it, but it keeps saying Error: Cannot find module 'mysql', which is expected, since the Bookshelf instance does not have the mysql dependency in its node_modules folder.
If I type out the structure it would look like this:
* New Project
** mysql
** LibThatUsesBookshelf
**** Bookshelf
Now Bookshelf.js probably requires this structure:
* New Project
** LibThatUsesBookshelf
**** mysql
**** Bookshelf
Is there a way that I am able to still get bookshelf working with the first structure shown?
P.S. I know the explanation might be confusing, but it is a hard problem and I have no better way of explaining it.
Let the client of your module pass in a Bookshelf instance:
// mysuperfancymodule.js
module.exports = function(bookshelf) {
  // do fancy things with the bookshelf instance
};
Then in your client application:
// some app code:
var mySuperFancyModule = require('mysuperfancymodule'); // assuming it is published to npm

var knex = require('knex')({
  // init knex...
});
var bookshelf = require('bookshelf')(knex);

mySuperFancyModule(bookshelf); // init my super fancy module
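Inside the module, the injected instance is then used to define models, so the module itself never has to require the mysql driver. A minimal sketch (the model and table names are made up):

// mysuperfancymodule.js (sketch)
module.exports = function (bookshelf) {
  var User = bookshelf.Model.extend({
    tableName: 'users' // hypothetical table
  });
  return { User: User };
};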

Testing database related code in NodeJS

Using PostgreSQL.
I'm trying to set up a proper test suite for an API. So far it works, but the tests run directly against the main database, and I then have to remove everything my tests created or edited.
I know that's a bad thing to do (because I can forget to revert a change, or mess up the sequences), so I would like to create a test database with the same structure and base data, and then delete it afterwards. Is this the right approach to take here?
And if I want to do it this way, how should I do it? Is there a way in NodeJS to execute an SQL script? I tried with a shell script, but so far it's been a complete mess with the permissions, so I figured it would be easier with NodeJS directly.
I'm using Mocha for my tests.
I would suggest a separate test database. It can be light, and you will want to know exactly what data is in there (so you can test it!). A base dataset that can exercise all your business rules can be exported as a SQL file (or via some other export method).
Typically your application will have a connection to the database, and your test framework will have some way to run a method before the tests start. That is where you specify the test DB. Your database access objects (DAOs), scripts, or methods will then utilize the main connection in some way, either as a method parameter, via a require statement, etc.
As an example, I'm using the knex module to connect to the DB and build queries. I initialize and reference my single DB connection as specified in their docs.
var Knex = require( 'knex' );

Knex.knex = Knex.initialize( {
  client : 'mysql',
  connection : {
    host : 'my.domain.com',
    user : 'dbrole',
    password : 'password',
    database : 'productiondb',
    charset : 'utf8'
  }
} );
My DAOs get the connection like this:
var knex = require('knex').knex;
Now in my unit tests, before a test suite is run, I can set my connection to be the test DB
var Knex = require( 'knex' );

Knex.knex = Knex.initialize( {
  client : 'mysql',
  connection : {
    host : '127.0.0.1',
    user : 'root',
    password : 'root',
    database : 'testdb',
    charset : 'utf8'
  }
} );
And there you have it! The exact same code is used in test and production, and your production DB is decoupled from your tests. This pattern can work with a lot of frameworks, so you'll have to adapt it (and clean up your tests if they are junking up the test DB; maybe restore it to the default state when all tests are complete).
Edit: By the way, knex works with Postgres and is a fun way to build queries in pure Node.js. It can also execute raw SQL.
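That raw-SQL support also answers the sub-question about executing an SQL script from Node; a rough sketch (the file path is an assumption):

var fs = require('fs');
var knex = require('knex').knex; // the connection configured above

// read a schema/base-data script and run it before the suite starts
var sql = fs.readFileSync('test/fixtures/testdb.sql', 'utf8');
knex.raw(sql).then(function () {
  // test DB is ready for the suite
});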

Using sails.js with an existing postgres database

I was looking at using Sails for an app that we are developing.
I'm using the sails-postgresql adapter which uses the waterline orm.
I have an existing database that I want to connect to.
If I create a model using sails generate something, and then in my model I have:
attributes: {
  title: { type: 'String' }
}
If I browse to localhost/something, the ORM deletes all the columns in the something table except title.
Is there a way to stop it from doing this? This app should not delete columns on this database.
Thanks!
I am the author of Sails-Postgresql. Sails has an ORM called Waterline that it uses for managing data. The default setting assumes that you want to auto-migrate your database to match your model attributes. Because PostgreSQL is a SQL database, the Sails-Postgresql adapter has a setting called syncable, which defaults to true (it would be false in a NoSQL database like Redis).
This is easy to turn off if you want to manage your database columns yourself. You can add migrate: 'safe' to your model and it won't try to update your database schema when you start Sails.
module.exports = {
  adapter: 'postgresql',
  migrate: 'safe',
  attributes: {
    title: { type: 'string' }
  }
};
Sails doesn't have anything like migrations in Rails. It uses auto-migrations to attempt to remove this from your development process and then leaves updating your production schema to you.
