How to resolve SequelizeDatabaseError NaN when using sequelize? - node.js

I am getting this error when running a simple sequelize.js query for my model.
Executing (default): CREATE TABLE IF NOT EXISTS `Books` (`id` INTEGER PRIMARY KEY, `title` VARCHAR(255), `author` VARCHAR(255), `genre` VARCHAR(255), `first_published` INTEGER);
Executing (default): PRAGMA INDEX_LIST(`Books`)
Executing (default): PRAGMA INDEX_INFO(`sqlite_autoindex_books_1`)
Executing (default): CREATE TABLE IF NOT EXISTS `Patrons` (`id` INTEGER PRIMARY KEY, `first_name` VARCHAR(255), `last_name` VARCHAR(255), `address` VARCHAR(255), `email` VARCHAR(255), `library_id` VARCHAR(255), `zip_code` INTEGER);
Executing (default): PRAGMA INDEX_LIST(`Patrons`)
Executing (default): SELECT count(*) AS `count` FROM `Books` AS `Book`;
Executing (default): SELECT `id`, `title`, `author`, `genre`, `first_published` FROM `Books` AS `Book` ORDER BY `Book`.`title` LIMIT 0, 4;
GET /1 304 410.149 ms - -
GET /stylesheets/style.css 304 1.446 ms - -
Executing (default): SELECT count(*) AS `count` FROM `Books` AS `Book`;
Executing (default): SELECT `id`, `title`, `author`, `genre`, `first_published` FROM `Books` AS `Book` ORDER BY `Book`.`title` LIMIT NaN, 4;
Unhandled rejection SequelizeDatabaseError: SQLITE_ERROR: no such column: NaN
at Query.formatError (C:\Users\user\Downloads\Sandbox\10\node_modules\sequelize\lib\dialects\sqlite\query.js:423:16)
at afterExecute (C:\Users\user\Downloads\Sandbox\10\node_modules\sequelize\lib\dialects\sqlite\query.js:119:32)
at replacement (C:\Users\user\Downloads\Sandbox\10\node_modules\sqlite3\lib\trace.js:19:31)
at Statement.errBack (C:\Users\user\Downloads\Sandbox\10\node_modules\sqlite3\lib\sqlite3.js:16:21)
The funny thing is that the code works fine; it's just that the console spits out the above error every time the route is hit.
I basically have a simple route using the Express router that takes in an id param, which I then use to calculate the LIMIT and OFFSET values for my pagination.
It seems to me that the query is being executed twice, and the second time the offset is NaN.
First execution:
Executing (default): SELECT `id`, `title`, `author`, `genre`, `first_published` FROM `Books` AS `Book` ORDER BY `Book`.`title` LIMIT 0, 4;
Second execution:
Executing (default): SELECT `id`, `title`, `author`, `genre`, `first_published` FROM `Books` AS `Book` ORDER BY `Book`.`title` LIMIT NaN, 4;
This is the code:
var express = require('express');
var router = express.Router();
var Book = require('../models').Book;

router.get('/:page', function (req, res, next) {
  Book.findAndCountAll({
    order: [ ['title'] ],
    offset: ((req.params.page - 1) * 4),
    limit: 4
  }).then(function (book) {
    let pages = Math.ceil(book.count / 4);
    res.render('index', {
      content: book.rows,
      pagination: pages,
      title: 'Express'
    });
  });
});

Cast req.params.page to a number (for example with +req.params.page or parseInt) before subtracting and multiplying. By default the params object contains string values (see the Express API docs).
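For illustration, here is a sketch of that cast applied to the route above. PAGE_SIZE is just a name introduced here for the hard-coded 4, and the NaN guard is an extra precaution rather than part of the original answer:

var express = require('express');
var router = express.Router();
var Book = require('../models').Book;

var PAGE_SIZE = 4; // hypothetical constant for the existing hard-coded page size

router.get('/:page', function (req, res, next) {
  // req.params values are always strings, so convert explicitly
  var page = parseInt(req.params.page, 10);
  if (isNaN(page) || page < 1) {
    // skip this route (e.g. fall through to a 404) instead of querying with NaN
    return next();
  }
  Book.findAndCountAll({
    order: [ ['title'] ],
    offset: (page - 1) * PAGE_SIZE,
    limit: PAGE_SIZE
  }).then(function (book) {
    res.render('index', {
      content: book.rows,
      pagination: Math.ceil(book.count / PAGE_SIZE),
      title: 'Express'
    });
  });
});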

Related

AzureMobileClient DatabaseFirst manually create datatable fail?

I am trying to use AzureMobileClient in my Xamarin.Forms app with a database-first model. For now I do not use offline sync.
So I use this script to create the table in my AZURE SQL DB:
CREATE TABLE [dbo].[TodoItems] (
    -- This must be a string suitable for a GUID
    [Id] NVARCHAR (128) NOT NULL,
    -- These are the system properties
    [Version] ROWVERSION NOT NULL,
    [CreatedAt] DATETIMEOFFSET (7) NOT NULL,
    [UpdatedAt] DATETIMEOFFSET (7) NULL,
    [Deleted] BIT NOT NULL,
    -- These are the properties of our DTO not included in EntityFramework
    [Text] NVARCHAR (MAX) NULL,
    [Complete] BIT NOT NULL,
);

CREATE CLUSTERED INDEX [IX_CreatedAt]
    ON [dbo].TodoItems([CreatedAt] ASC);

ALTER TABLE [dbo].[TodoItems]
    ADD CONSTRAINT [PK_dbo.TodoItems] PRIMARY KEY NONCLUSTERED ([Id] ASC);

CREATE TRIGGER [TR_dbo_TodoItems_InsertUpdateDelete] ON [dbo].[TodoItems]
    AFTER INSERT, UPDATE, DELETE AS
BEGIN
    UPDATE [dbo].[TodoItems]
    SET [dbo].[TodoItems].[UpdatedAt] = CONVERT(DATETIMEOFFSET, SYSUTCDATETIME())
    FROM INSERTED WHERE inserted.[Id] = [dbo].[TodoItems].[Id]
END;
This is based on the sample TodoItem provided by Azure. I can do a GetAllItems without any problem (the table is empty for now), but when I try to insert an item I get this error from my Azure backend:
{[Message, The operation failed with the following error: 'Cannot insert the value NULL into column 'CreatedAt', table 'TechCenterCentaur.dbo.TodoItems'; column does not allow nulls. INSERT fails.The statement has been terminated.'.]}
Isn't Azure supposed to take care of that automatically?
I just do that in my XF code:
TodoItem cl = new TodoItem();
cl.Name = "Test";
await _todoTable.InsertAsync(cl);
The call reaches the backend with a TodoItem containing only "Test"; all the other fields are null. The exception occurs in the backend:
public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
{
    try
    {
        TodoItem current = await InsertAsync(item); // crash here
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }
    catch (System.Exception e)
    {
        throw;
    }
}
Any suggestion?
OK, I found the solution. The problem was in my SQL table: I was missing the two ALTER TABLE statements that add the defaults, NEWID() for Id and SYSUTCDATETIME() for CreatedAt.
Here my new script:
USE [TechCenterCentaur]
GO
/****** Object: Table [dbo].[TodoItems]    Script Date: 2017-11-08 11:09:14 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[TodoItems](
    [Id] [nvarchar](128) NOT NULL,
    [Text] [nvarchar](max) NULL,
    [Complete] [bit] NOT NULL,
    [Version] [timestamp] NOT NULL,
    [CreatedAt] [datetimeoffset](7) NOT NULL,
    [UpdatedAt] [datetimeoffset](7) NULL,
    [Deleted] [bit] NOT NULL,
    CONSTRAINT [PK_dbo.TodoItems] PRIMARY KEY NONCLUSTERED
    (
        [Id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO
ALTER TABLE [dbo].[TodoItems] ADD DEFAULT (newid()) FOR [Id]
GO
ALTER TABLE [dbo].[TodoItems] ADD DEFAULT (sysutcdatetime()) FOR [CreatedAt]
GO
CREATE TRIGGER [TR_dbo_TodoItems_InsertUpdateDelete] ON [dbo].[TodoItems]
    AFTER INSERT, UPDATE, DELETE AS
BEGIN
    UPDATE [dbo].[TodoItems]
    SET [dbo].[TodoItems].[UpdatedAt] = CONVERT(DATETIMEOFFSET, SYSUTCDATETIME())
    FROM INSERTED WHERE inserted.[Id] = [dbo].[TodoItems].[Id]
END;
GO

Inserting timestamp into Cassandra

I have a table created as follows:
CREATE TABLE my_table (
    date text,
    id text,
    time timestamp,
    value text,
    PRIMARY KEY (id)
);
CREATE INDEX recordings_date_ci ON recordings (date);
I'm able to simply add a new row to the table using the following Node code:
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({ contactPoints: ['localhost'], keyspace: 'my_keyspace' });

const query = 'INSERT INTO my_table (date, id, time, url) VALUES (?, ?, ?, ?)';
client.execute(query, ['20160901', '0000000000', '2016-09-01 00:00:00+0000', 'random url'], function(err, result) {
  if (err) {
    console.log(err);
  }
  console.log('Insert row ended:' + result);
});
However, I get the following error:
Error: Expected 8 or 0 byte long for date (24)
When I change the timestamp to epoc time:
client.execute(query, ['20160901', '0000000000', 1472688000, 'random url']
I get:
OverflowError: normalized days too large to fit in a C int
I'm able to insert new rows via cqlsh, so I'm probably missing something with the Node.js driver.
Any idea?
Thanks
Where you have a string 2016-09-01 00:00:00+0000, instead use new Date('2016-09-01 00:00:00+0000').
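For example, the insert from the question could look like this (a sketch reusing the same query and client; the { prepare: true } option is an extra suggestion so the driver maps JavaScript types to the CQL column types):

// bind a JavaScript Date to the timestamp column instead of a string
const params = ['20160901', '0000000000', new Date('2016-09-01T00:00:00Z'), 'random url'];
client.execute(query, params, { prepare: true }, function(err, result) {
  if (err) {
    console.log(err);
    return;
  }
  console.log('Insert row ended: ' + result);
});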

Equivalent sequelize operation for given sql server query

SELECT *
FROM (SELECT *, ROW_NUMBER() OVER ( ORDER BY No_ ) AS RowNum
FROM Item) DerivedTable
WHERE RowNum >= 501 AND RowNum <= 501 + ( 5 - 1 );
I think older SQL Server versions do not support OFFSET ... FETCH NEXT ... ROWS, which is the equivalent of LIMIT/OFFSET in MySQL, so the query above seems to be the only way to apply that logic.
How can Sequelize implement the above query, which creates a derived table "DerivedTable" with a "RowNum" column that is then used in the WHERE clause?
Is there any other way to do this in Sequelize, perhaps with a raw query or something else?
It seems you are not alone with this issue. With SQL Server 2012 or later, you can just use:
Model
  .findAndCountAll({
    where: {
      title: {
        $like: 'foo%'
      }
    },
    offset: 10,
    limit: 2
  })
  .then(function(result) {
    console.log(result.count);
    console.log(result.rows);
  });
However, since you are on an earlier version, it seems you are stuck with hand-writing the query.
Something like this:
var theQuery = 'declare @rowsPerPage as bigint; ' +
  'declare @pageNum as bigint; ' +
  'set @rowsPerPage=' + rowsPerPage + '; ' +
  'set @pageNum=' + page + '; ' +
  'With SQLPaging As ( ' +
  'Select Top(@rowsPerPage * @pageNum) ROW_NUMBER() OVER (ORDER BY ID asc) ' +
  'as resultNum, * ' +
  'FROM myTableName) ' +
  'select * from SQLPaging with (nolock) where resultNum > ((@pageNum - 1) * @rowsPerPage);';
sequelize.query(theQuery)
  .spread(function(result) {
    console.log("Good old paginated results: ", result);
  });
see this and this
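If you do end up hand-writing the query, it may be safer to pass the paging values through Sequelize replacements instead of concatenating them into the SQL string. A sketch under the same assumptions (numeric rowsPerPage and page variables, a myTableName table with an ID column):

var theQuery =
  'declare @rowsPerPage as bigint; ' +
  'declare @pageNum as bigint; ' +
  'set @rowsPerPage = :rowsPerPage; ' +
  'set @pageNum = :pageNum; ' +
  'With SQLPaging As ( ' +
  'Select Top(@rowsPerPage * @pageNum) ROW_NUMBER() OVER (ORDER BY ID asc) as resultNum, * ' +
  'FROM myTableName) ' +
  'select * from SQLPaging where resultNum > ((@pageNum - 1) * @rowsPerPage);';

// named replacements (:rowsPerPage, :pageNum) are escaped by Sequelize
sequelize.query(theQuery, { replacements: { rowsPerPage: rowsPerPage, pageNum: page } })
  .spread(function(result) {
    console.log('Paginated results: ', result);
  });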

Nodejs Sequelize.sync() hanging on SHOW INDEX?

When restarting our Azure server, the sequelize.sync() method hangs on the SHOW INDEX for a table it has just created, even though that table is almost identical to one created just before it.
The model for our relation 'Teachers' is as follows:
'use strict';
module.exports = function(sequelize, DataTypes) {
  var Teacher = sequelize.define('Teacher', {
    email: DataTypes.STRING
  }, {
    classMethods: {
      associate: function(models) {
        // associations can be defined here
      }
    }
  });
  return Teacher;
};
And this is the model for the Student relation, which was created successfully, proceeds past SHOW INDEX, and does not cause the script to hang:
'use strict';
module.exports = function(sequelize, DataTypes) {
  var Student = sequelize.define('Student', {
    email: DataTypes.STRING
  }, {
    classMethods: {
      associate: function(models) {
        // associations can be defined here
      }
    }
  });
  return Student;
};
And this is where the code gets up to before hanging/crashing:
d-i89-237-11:IoTSchool-Backend mitchell$ npm start
> express-example#0.0.0 start /Users/mitchell/Documents/deco3801/IoTSchool-Backend
> node ./bin/www
Executing (default): CREATE TABLE IF NOT EXISTS `Events` (`id` INTEGER auto_increment , `ename` VARCHAR(255), `edata` VARCHAR(255), `type` INTEGER, `time` VARCHAR(255), `deviceid` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB;
Executing (default): SHOW INDEX FROM `Events` FROM `IoTSchool`
Executing (default): CREATE TABLE IF NOT EXISTS `Schools` (`id` INTEGER NOT NULL auto_increment , `name` VARCHAR(255), `email` VARCHAR(255), `password` VARCHAR(255), `accessToken` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB;
Executing (default): SHOW INDEX FROM `Schools` FROM `IoTSchool`
Executing (default): CREATE TABLE IF NOT EXISTS `Photons` (`id` INTEGER NOT NULL auto_increment , `sid` VARCHAR(255), `type` VARCHAR(255), `pid` VARCHAR(255) NOT NULL UNIQUE, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `SchoolId` INTEGER, PRIMARY KEY (`id`), FOREIGN KEY (`SchoolId`) REFERENCES `Schools` (`id`) ON DELETE SET NULL ON UPDATE CASCADE) ENGINE=InnoDB;
Executing (default): SHOW INDEX FROM `Photons` FROM `IoTSchool`
Executing (default): CREATE TABLE IF NOT EXISTS `Students` (`id` INTEGER NOT NULL auto_increment , `email` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB;
Executing (default): SHOW INDEX FROM `Students` FROM `IoTSchool`
Executing (default): CREATE TABLE IF NOT EXISTS `Users` (`id` INTEGER NOT NULL auto_increment , `username` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB;
Executing (default): SHOW INDEX FROM `Users` FROM `IoTSchool`
Executing (default): CREATE TABLE IF NOT EXISTS `Tasks` (`id` INTEGER NOT NULL auto_increment , `title` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `UserId` INTEGER NOT NULL, PRIMARY KEY (`id`), FOREIGN KEY (`UserId`) REFERENCES `Users` (`id`) ON DELETE CASCADE ON UPDATE CASCADE) ENGINE=InnoDB;
Executing (default): SHOW INDEX FROM `Tasks` FROM `IoTSchool`
Executing (default): CREATE TABLE IF NOT EXISTS `Teachers` (`teacherid` INTEGER auto_increment , `email` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`teacherid`)) ENGINE=InnoDB;
Executing (default): SHOW INDEX FROM `Tasks` FROM `iotschoolmysql`
Unhandled rejection SequelizeConnectionError: Quit inactivity timeout
at Quit._callback (/Users/mitchell/Documents/deco3801/IoTSchool-Backend/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:104:30)
at Quit.Sequence.end (/Users/mitchell/Documents/deco3801/IoTSchool-Backend/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
at /Users/mitchell/Documents/deco3801/IoTSchool-Backend/node_modules/mysql/lib/protocol/Protocol.js:393:18
at Array.forEach (native)
at /Users/mitchell/Documents/deco3801/IoTSchool-Backend/node_modules/mysql/lib/protocol/Protocol.js:392:13
at process._tickCallback (node.js:355:11)
Unhandled rejection SequelizeConnectionError: Quit inactivity timeout
at Quit._callback (/Users/mitchell/Documents/deco3801/IoTSchool-Backend/node_modules/sequelize/lib/dialects/mysql/connection-manager.js:104:30)
at Quit.Sequence.end (/Users/mitchell/Documents/deco3801/IoTSchool-Backend/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
at /Users/mitchell/Documents/deco3801/IoTSchool-Backend/node_modules/mysql/lib/protocol/Protocol.js:393:18
at Array.forEach (native)
at /Users/mitchell/Documents/deco3801/IoTSchool-Backend/node_modules/mysql/lib/protocol/Protocol.js:392:13
at process._tickCallback (node.js:355:11)

Stop sequelize promise chain without call back hell

I'm new to Node.js, so it's very likely I misunderstand the concepts of "promise" and "callback hell". In any case, I need suggestions on how to avoid the following code:
var Sequelize = require('sequelize');
var DB = new Sequelize('project1db', 'john', 'password123', {
  host: 'localhost',
  dialect: 'mysql'
});
var DB_PREFIX = 't_';

DB.query(
  'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'user` ( ' +
  '`user_id` int(11) UNSIGNED NOT NULL' +
  ') ENGINE=InnoDB DEFAULT CHARSET=utf8;', {type: DB.QueryTypes.RAW})
  .then(function(results) {
    DB.query(
      'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'organization` ( ' +
      '`organization_id` int(11) UNSIGNED NOT NULL AUTO_INCREMENT ' +
      ') ENGINE=InnoDB DEFAULT CHARSET=utf8; ', {type: DB.QueryTypes.RAW})
      .then(function(results) {
        DB.query(
          'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'user_organization` ( ' +
          '`user_id` int(11) UNSIGNED NOT NULL ' +
          ') ENGINE=InnoDB DEFAULT CHARSET=utf8; ')
          .then(function() {
            DB.query(
              'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'content` ( ' +
              '`content_id` int(11) UNSIGNED NOT NULL ' +
              ') ENGINE=InnoDB DEFAULT CHARSET=utf8; ', {type: DB.QueryTypes.RAW})
              .then(function() {
                // more queries
              }).catch(function(err) { console.log(err); });
          }).catch(function(err) { console.log(err); });
      }).catch(function(err) { console.log(err); });
  }).catch(function(err) { console.log(err); });
Ignore the fact that I'm creating tables with raw SQL instead of Sequelize migration scripts; I'm just trying to illustrate that I have a lot of MySQL queries that need to run in series. If a query fails, I need to stop the entire script and not let the subsequent .then() callbacks fire. In my Sequelize code I achieved this by nesting a lot of raw query calls, then and catch statements, which will be very difficult to troubleshoot once there are 100 of these nested callbacks.
Are there alternatives I should consider besides nesting all these callback functions?
Sequelize uses (a modified version of) the bluebird promises library, which means that this should work:
var Promise = Sequelize.Promise;

Promise.each([
  'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'user` ( ' +
  '`user_id` int(11) UNSIGNED NOT NULL' +
  ') ENGINE=InnoDB DEFAULT CHARSET=utf8;',
  'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'organization` ( ' +
  '`organization_id` int(11) UNSIGNED NOT NULL AUTO_INCREMENT ' +
  ') ENGINE=InnoDB DEFAULT CHARSET=utf8; ',
  'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'user_organization` ( ' +
  '`user_id` int(11) UNSIGNED NOT NULL ' +
  ') ENGINE=InnoDB DEFAULT CHARSET=utf8; ',
  'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'content` ( ' +
  '`content_id` int(11) UNSIGNED NOT NULL ' +
  ') ENGINE=InnoDB DEFAULT CHARSET=utf8; '
], function runQuery(query) {
  return DB.query(query, { type: DB.QueryTypes.RAW });
}).then(function() {
  console.log('all done');
}).catch(function(err) {
  console.log(err);
});
It uses the static version of .each(), which iterates over the array items sequentially, passes each one to the runQuery iterator (which returns a promise), and stops as soon as a promise is rejected.
Did you not already answer your own question by not using migration scripts? By default you'd want to run migration scripts to set up your database, with the runs logged so you know when you last migrated.
If you need sequential SQL commands, you can still do that within one command; the statements will run sequentially anyway. If you want every single table to be a model, write migration scripts for each model rather than doing it like this.
To avoid "promise hell", which is just the same problem as "callback hell", you can return each promise from the previous .then() callback so the chain stays flat:
DB.query(
  'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'user` ( ' +
  '`user_id` int(11) UNSIGNED NOT NULL' +
  ') ENGINE=InnoDB DEFAULT CHARSET=utf8;', {type: DB.QueryTypes.RAW})
.then(function(results) {
  return DB.query(
    'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'organization` ( ' +
    '`organization_id` int(11) UNSIGNED NOT NULL AUTO_INCREMENT ' +
    ') ENGINE=InnoDB DEFAULT CHARSET=utf8; ', {type: DB.QueryTypes.RAW});
}).then(function(results) {
  return DB.query(
    'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'user_organization` ( ' +
    '`user_id` int(11) UNSIGNED NOT NULL ' +
    ') ENGINE=InnoDB DEFAULT CHARSET=utf8; ');
}).then(function() {
  return DB.query(
    'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'content` ( ' +
    '`content_id` int(11) UNSIGNED NOT NULL ' +
    ') ENGINE=InnoDB DEFAULT CHARSET=utf8; ', {type: DB.QueryTypes.RAW});
}).then(function() {
  // more queries
}).catch(function(err) { console.log(err); });
The promise system allows chaining in this fashion, which removes the need for deep nesting and indentation. Note also that only one catch is required: if any promise in the chain is rejected, control skips forward to the next available .catch().
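For comparison only, on Node versions that support async/await (7.6+) the same sequence can be written as a flat loop with a single catch. This is a sketch outside the original answers, reusing DB and DB_PREFIX from the question:

var tableQueries = [
  'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'user` ( ' +
  '`user_id` int(11) UNSIGNED NOT NULL' +
  ') ENGINE=InnoDB DEFAULT CHARSET=utf8;',
  'CREATE TABLE IF NOT EXISTS `'+DB_PREFIX+'content` ( ' +
  '`content_id` int(11) UNSIGNED NOT NULL ' +
  ') ENGINE=InnoDB DEFAULT CHARSET=utf8;'
  // ...the remaining CREATE TABLE statements from the question
];

async function createTables() {
  for (var i = 0; i < tableQueries.length; i++) {
    // each query starts only after the previous one has finished;
    // a failed query throws and stops the loop
    await DB.query(tableQueries[i], { type: DB.QueryTypes.RAW });
  }
}

createTables()
  .then(function() { console.log('all done'); })
  .catch(function(err) { console.log(err); });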
