Knex vs. mysql2 based on performance, stability, and ES8+ support - node.js

Does anybody have hands-on experience with both DB libraries, knex and mysql2?
After some googling (e.g. at NPMCompare), I'm still curious to know, based on real experience, what the pros and cons of both options are.
So far, the only clear advantage I see in knex over mysql2 is its universal support for MSSQL, MySQL, PostgreSQL, SQLite3, and Oracle, while the latter supports MySQL only. But since I'm currently focusing on MySQL alone, this knex feature seems less relevant.
The parameters I would consider:
Performance & load resistance;
Stability (production ready);
Native ES8+ support (callback-hell-free, no extra Util.promisify wrappers, ESM/MJS support);
Short and clear, the less verbose the better.

I'm using knex on my primary project. I think you are trying to compare apples with oranges, because Knex is a query builder that under the hood uses mysql2 as the transport lib (in the case of MySQL usage).
Benefits that I see in Knex are:
Prevents SQL injection by default.
Lets you build queries really easily, without much effort.
Lets you compose queries as you would compose JavaScript functions (this is a big, big advantage in my opinion).
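On #1: knex (and the underlying mysql2 driver) sends values as bound parameters instead of splicing them into the SQL string. A toy illustration of the difference (the escaping here is deliberately simplified; real drivers handle this at the protocol level):

```javascript
// Naive string concatenation: attacker-controlled input becomes SQL.
function unsafeQuery(name) {
  return `SELECT * FROM users WHERE name = '${name}'`;
}

// Placeholder style: the value is kept separate from the SQL text, the
// way knex/mysql2 hand it to the server. (Toy escaping, for demo only.)
function safeQuery(name) {
  const sql = 'SELECT * FROM users WHERE name = ?';
  const escaped = `'${String(name).replace(/'/g, "''")}'`;
  return sql.replace('?', escaped);
}

const evil = "x'; DROP TABLE users; --";
console.log(unsafeQuery(evil)); // the quote breaks out of the string literal
console.log(safeQuery(evil));   // the quote stays escaped inside the literal
```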
Since #3 is such a big advantage in my opinion, it is better to demonstrate it:
Say you have 2 endpoints:
/users/list - which is supposed to return a list of users ({id, name})
/users/:id - which is supposed to return a single user with the same structure.
You can implement it like this:
function getAllUsers() {
  // Note: not async — it must return the knex query builder itself so that
  // callers can keep chaining; awaiting the builder is what runs the query.
  return db('users').columns('id', 'name'); // this could involve many joins
}
async function getUserById(userId) {
  return getAllUsers().where('id', userId);
}
Look how getUserById re-uses the same query (which may be really complex) and just "adds" the limitation it requires.
Performance-wise, I don't think this abstraction has a big cost (I haven't noticed any performance issues yet).
I'm not sure what you mean by stability, but Knex has really cool TS support which can make your queries strongly typed.
interface User {
  id: number;
  name: string;
}

const users = await db<User>('users').columns('id', 'name');
// It will autocomplete the column names & users will be of type User[] automatically.
In combination with auto-generating these DB types from the database using #typed-code/schemats, it makes the work & refactoring so much better.
As of ES6, Knex supports both Promises and callbacks by default, so you can choose whichever suits you.
Another cool feature I'm using is auto-converting between cases: my DB uses snake case for table & column names, but in my Node code I work with camel case, via the knex-stringcase plugin.
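The conversion that plugin performs can be sketched in a few lines (the helper names below are hypothetical; the actual plugin hooks into knex's `wrapIdentifier` and `postProcessResponse` config options):

```javascript
// Toy versions of the two directions knex-stringcase handles:
// identifiers going to the DB become snake_case, rows coming back
// become camelCase.
function toSnakeCase(s) {
  return s.replace(/([A-Z])/g, (m) => '_' + m.toLowerCase());
}

function toCamelCase(s) {
  return s.replace(/_([a-z])/g, (_, c) => c.toUpperCase());
}

console.log(toSnakeCase('createdAt')); // 'created_at'
console.log(toCamelCase('user_name')); // 'userName'
```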
Migrations allow you to define how to build / upgrade your schema in code, which can help you auto-update your production schema from CI.
Mysql2, on the other hand, is a low-level driver that sits directly above the DB.

JOOQ vs SQL Queries

I am on jOOQ queries now... I feel that native SQL queries look more readable and maintainable, so why do we need to use jOOQ instead of native SQL queries?
Can someone explain a few reasons for using it?
Thanks.
Here are the top value propositions that you will never get with native (string based) SQL:
Dynamic SQL is what jOOQ is really really good at. You can compose the most complex queries dynamically based on user input, configuration, etc. and still be sure that the query will run correctly.
An often underestimated effect of dynamic SQL is the fact that you will be able to think of SQL as an algebra, because instead of writing difficult-to-compose native SQL syntax (with all the keywords, and weird parenthesis rules, etc.), you can think in terms of expression trees, because you're effectively building an expression tree for your queries. Not only will this allow you to implement more sophisticated features, such as SQL transformation for multi-tenancy or row-level security, but everyday things like transforming a set of values into a SQL set operation become much simpler as well.
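The "SQL as an algebra" point can be illustrated with a toy sketch (in JavaScript rather than jOOQ's Java, with made-up helper names): predicates are plain values you can combine before any SQL text exists, so dynamic composition is just data manipulation.

```javascript
// Toy expression tree: conditions are data, not strings, so they compose.
const eq = (col) => ({ type: 'eq', col });
const and = (...preds) => ({ type: 'and', preds });

// Render the tree to SQL text (with placeholders) only at the very end.
function render(p) {
  if (p.type === 'eq') return `${p.col} = ?`;
  if (p.type === 'and') return p.preds.map(render).join(' AND ');
  throw new Error(`unknown node type ${p.type}`);
}

// Build the filter dynamically from user input, then render once.
function whereClause(userFilter) {
  const preds = [eq('status')];
  if (userFilter.region) preds.push(eq('region'));
  return 'WHERE ' + render(and(...preds));
}

console.log(whereClause({ region: 'EU' })); // WHERE status = ? AND region = ?
console.log(whereClause({}));               // WHERE status = ?
```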
Vendor agnosticity. As soon as you have to support more than one SQL dialect, writing SQL manually is close to impossible because of the many subtle differences in dialects. The jOOQ documentation illustrates this e.g. with the LIMIT clause. Once this is a problem you have, you have to use either JPA (much restricted query language: JPQL) or jOOQ (almost no limitations with respect to SQL usage).
Type safety. Now, you will get type safety when you write views and stored procedures as well, but very often, you want to run ad-hoc queries from Java, and there is no guarantee about table names, column names, column data types, or syntax correctness when you do SQL in a string based fashion, e.g. using JDBC or JdbcTemplate, etc. By the way: jOOQ encourages you to use as many views and stored procedures as you want. They fit perfectly in the jOOQ paradigm.
Code generation. Which leads to more type safety. Your database schema becomes part of your client code. Your client code no longer compiles when your queries are incorrect. Imagine someone renaming a column and forgetting to refactor the 20 queries that use it. IDEs only provide some degree of safety when writing the query for the first time, they don't help you when you refactor your schema. With jOOQ, your build fails and you can fix the problem long before you go into production.
Documentation. The generated code also acts as documentation for your schema. Comments on your tables and columns turn into Javadoc, which you can introspect in your client language, without needing to look them up on the server.
Data type bindings are very easy with jOOQ. Imagine using a library of 100s of stored procedures. Not only will you be able to access them type safely (through code generation), as if they were actual Java code, but you don't have to worry about the tedious and useless activity of binding each single in and out parameter to a type and value.
There are a ton of more advanced features derived from the above, such as:
The availability of a parser and by consequence the possibility of translating SQL.
Schema management tools, such as diffing two schema versions
Basic ActiveRecord support, including some nice things like optimistic locking.
Synthetic SQL features like type safe implicit JOIN
Query By Example.
A nice integration in Java streams or reactive streams.
Some more advanced SQL transformations (this is work in progress).
Export and import functionality
Simple JDBC mocking functionality, including a file based database mock.
Diagnostics
And, if you occasionally think something is much simpler to do with plain native SQL, then just:
Use plain native SQL, also in jOOQ
Disclaimer: As I work for the vendor, I'm obviously biased.

How is Node.js Knex similar/different to Sequelize?

The answer I got from an IRC channel:
Sequelize is an ORM that includes some query builder stuff; Knex is just a query builder, not an ORM.
ORMs don't actually fit very well in many use cases, it's easy to run up against the limits of what they can express, and end up needing to break your way out of them.
But that doesn't really explain the pros and cons of each. I am looking for an explanation, and possibly a simple example (use case) highlighting those similarities / differences.
Why would one use one over the other?
Sequelize is a full-blown ORM, forcing you to hide SQL behind an object representation. Knex is a plain query builder, which is way too low-level a tool for application development.
Better to use Objection.js: it combines the good parts of ORMs without compromising the power of writing any kind of SQL query.
Here is a good article about it from the author of Objection.js: https://www.jakso.me/blog/objection-to-orm-hatred
Disclaimer: I'm a knex maintainer and have also been involved in the development of Objection.js.
Think of it in terms of which gives better performance and which is easier to learn.
At the low level you have the database driver; for PostgreSQL that's pg.
At the intermediate level you can use knex as a query builder.
At the high level you can use an ORM like Sequelize, Bookshelf, or Objection (the latter two are built on knex).
Now, low level doesn't mean a bad thing: it's the best performance you can get, but the downside is that you need to learn the query dialect of the database you are using.
Knex, as a query builder, has nearly the same operating cost.
The highest level has the highest cost, but it's easy to learn. The downside is that if you learn Sequelize and then decide to use Objection, they are different, so you will need to learn another ORM.
My suggestion: if you want the best performance for a scalable, complex backend server, use a driver or a query builder like knex.
If you want to feel like you're dealing with object instances, as with Mongoose, use Sequelize.
The only difference is the operating cost, and it's not large; but ORMs have more functionality.
Of course, you can refer to this article to understand more about ORMs:
https://blog.logrocket.com/why-you-should-avoid-orms-with-examples-in-node-js-e0baab73fa5/

Node MVC app with dynamodb ORM including associations

I want to build a Node.js MVC app.
My data is stored in dynamoDB. I'm looking for a suitable framework for this.
I'm mainly debating between:
Express.js (for the controllers) with vogels as the ORM (for the models),
Sails.js with the DynamoDB adapter.
I prefer to have associations support between models so that I'll not need to implement it myself in my code.
Can anybody advise what the pros and cons of both options are? Can I do everything with the second option that I can do with the first, but with less code? Any other recommendations?
Firstly, this is a very opinion-based question, so I will just give my opinion; this does not mean one option is far better than the other.
I have used Vogels for some use cases and found it very useful. Some of the advantages of Vogels are:
1) Parallel scans - Helps to improve performance, which developers will most likely need at some point in the project, especially if you are going to maintain millions of records in DynamoDB
2) Supports both global and local secondary indexes - Based on the query pattern, the application would most likely require index on tables. So this feature is very helpful
3) Data type and validation support using Joi (Joi Link)
4) Automatic addition of audit timestamp fields such as updatedAt, createdAt
5) Automatic key value generation in UUID format
6) Chainable API for query and scan operations - You can chain multiple filter conditions with limit option for pagination and sort the results as well
7) Load multiple models within a single request (batch get items feature)
8) Basic streaming api
9) Some good sample code for many features, which is very important for developers
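The chainable API in point 6 can be mimicked with a toy in-memory version (illustrative only — vogels compiles such chains into real DynamoDB query/scan requests; the class and field names here are made up):

```javascript
// Toy chainable query, shaped like vogels' query/scan chaining but
// running over an in-memory array instead of DynamoDB.
class Query {
  constructor(items) { this.items = items; }
  where(field, value) {
    // Each step returns a new Query, so calls can keep chaining.
    return new Query(this.items.filter((i) => i[field] === value));
  }
  limit(n) { return new Query(this.items.slice(0, n)); }
  exec() { return this.items; }
}

const users = [
  { id: 1, role: 'admin' },
  { id: 2, role: 'user' },
  { id: 3, role: 'admin' },
];
const admins = new Query(users).where('role', 'admin').limit(1).exec();
console.log(admins); // [ { id: 1, role: 'admin' } ]
```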

SailsJS - SQL queries and Data Access Objects

I just recently started to learn NodeJS/SailsJS and have few questions.
Please, note that I have strong Java background ( this affects my style of thinking and the architecture ).
How to prevent SQL injection in SailsJS?
Basically, I have:
User.query(query, function(err, result) {
  if (err)
    return next(err);
  res.json({ data: result });
});
But where/how should I put parameters for SQL query?
In Java I used to make something like:
Query q = new Query("select * from User where id = :id");
q.setParameter("id", some-value);
What about Data Access Object?
I am feeling uncomfortable having SQL queries in Controller.
Do you have any best practices for that? Maybe some example projects?
All example projects I've found so far do not use some complex SQL queries.
They are more like school projects using some predefined methods on domain classes (like User.create, User.find), etc.
Thank you in advance.
Best Regards,
Maksim
Sails.js uses a DAO library written in JavaScript (of course) called Waterline. The documentation is here.
Basically, it means that if you want to find a User with a specific id, you only need:
User.findOne({ id: xxx }).then(function (data) {
  res.json(data);
});
This is the main advantage of using Sails.js: because Waterline can have different adapters, all of them can construct the User model and access it like this. So if you use the sails-mysql adapter, it will create a user table with id, name, etc. as columns; if you use the memory adapter, it will be stored in memory.
It does provide User.query('select ...', callback) in case what you want cannot be achieved by the DAO methods. But because it is a last resort, Sails.js has no native support for query building; you can certainly use a package like sprintf to build the SQL.
Since you are a Java programmer (just like me), as a side note I'd like you to remember that these findOne() methods provided by Waterline are asynchronous, and promise-based as well. They are quite different from Java. I spent a lot of time wrapping my head around this, but as soon as I got comfortable with the idea, I started to love it.
Let me know if this is clear.

Are the Node.JS MongoDB sorting/filtering functions available outside the database?

The MongoDB sorting functions are pretty neato. Can you use them on objects and/or arrays that have nothing to do with the database itself?
var mongo = require('mongodb'),
    Server = mongo.Server,
    Db = mongo.Db,
    sortingFun = mongo.internalSortFilterFunction(); // By the miracle of imagination, this is a made-up line.
There is, for example, this awesome little node project called sift: MongoDB-inspired array filtering. But there are more similar tools, different opinions, and projects merging and disappearing.
Considering its popularity, MongoDB is quite probably going to hang around. For that reason, plus the added bonus of behaving identically instead of just similarly, I was wondering if a specific object/model/function within node-mongodb could be pulled from require('mongodb') specifically for using its sorting and filtering functions on custom objects/arrays.
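For a sense of what sift-style filtering involves, a matcher for the common operators is small enough to sketch in plain JavaScript (only `$eq`/`$gt` shown; sift itself covers the full MongoDB query language):

```javascript
// Minimal MongoDB-style matcher: supports exact match and $gt/$eq.
function matches(doc, query) {
  return Object.entries(query).every(([field, cond]) => {
    if (cond !== null && typeof cond === 'object') {
      // Operator object, e.g. { $gt: 30 }
      return Object.entries(cond).every(([op, val]) => {
        if (op === '$gt') return doc[field] > val;
        if (op === '$eq') return doc[field] === val;
        throw new Error(`unsupported operator ${op}`);
      });
    }
    return doc[field] === cond; // bare value is shorthand for $eq
  });
}

const docs = [{ name: 'a', age: 20 }, { name: 'b', age: 35 }];
console.log(docs.filter((d) => matches(d, { age: { $gt: 30 } })));
// [ { name: 'b', age: 35 } ]
```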
The sorting is done on the mongo server, not the client. It's also not particularly fast -- big collections should be pre-sorted, but that's another issue.
The mongo server is AFAIK written in C++ and uses a custom type system, separate from the JS engine, called BSON.
So unless the client shipped its own JavaScript sort implementation -- which would be an absurd feature for a driver -- you can't reuse the server's sort.
Edit: If you really, really want to use the sort, performance be damned, you could insert the JS objects into the DB, effectively converting them to BSON in a mongo collection, then sort and pull them back out. Indexes etc. would need to be recreated on every call to such a function. MongoDB also refuses to sort big collections without an index (the limit being somewhere around 1000 documents, I believe).
PS. I haven't read the source. I can't imagine a JS realtime, indexless sort matching the speed of MongoDB's sort, especially when distributed (sharded). But you can write Node.js modules in C++, and if BSON is similar enough to V8 JS objects (I wouldn't think so), you might be able to port it. I wouldn't go down that road, because it's probably not a big speed increase compared to reimplementing it in JS -- a reimplementation that would be a lot easier to create and maintain.
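If all you need is the sort order itself (not the server's speed), a MongoDB-style sort spec maps directly onto a plain comparator. A sketch for flat fields only (no dotted paths, no BSON cross-type ordering):

```javascript
// Turn a MongoDB-style sort spec like { age: -1, name: 1 } into a
// comparator for Array.prototype.sort. 1 = ascending, -1 = descending.
function bySortSpec(spec) {
  const fields = Object.entries(spec);
  return (a, b) => {
    for (const [field, dir] of fields) {
      if (a[field] < b[field]) return -dir;
      if (a[field] > b[field]) return dir;
      // equal on this field: fall through to the next field in the spec
    }
    return 0;
  };
}

const people = [{ name: 'a', age: 30 }, { name: 'b', age: 25 }];
console.log(people.slice().sort(bySortSpec({ age: 1 })));
// [ { name: 'b', age: 25 }, { name: 'a', age: 30 } ]
```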
