I'm trying and failing to learn to use this kdbxweb library. The documentation is confusing me, probably because I lack some prerequisite knowledge that the documentation takes for granted.
Below is the code where I'm trying to learn to use it. All I really want is a place to store passwords, rather than keeping them in plain text, in a way that lets me send a script to a team member so they can set up a similar credentials database (either within the script or outside it) and have it pull in their various ODBC database passwords.
The idea eventually is to use the name of a given ODBC connection as the entry title, and then, on a request to initiate a connection, retrieve the UID and PWD and add them to the connection string. I'm trying to get away from MS Access/VBA for this sort of thing and learn to use Node.js/TypeScript instead.
import * as fs from 'fs';
import * as kdbx from 'kdbxweb';

(async () => {
    try {
        const database = kdbx.Kdbx.create(new kdbx.Credentials(kdbx.ProtectedValue.fromString('test')), 'credentials');
        //const group = database.createGroup(database.getDefaultGroup(), 'subgroup');
        //const entry = database.createEntry(group);
        //entry.fields.set('Password', kdbx.ProtectedValue.fromString('test'));
        //entry.pushHistory();
        //entry.times.update();
        await database.save();
        //fs.writeFileSync('credentials/credentials.kdbx', data);
    } catch (e: any) {
        throw e;
    }
})();
The error I'm getting when trying this is "argon2 not implemented", and while argon2 is mentioned at the top of the documentation, I don't understand what it's talking about in the least. It sounds like it involves an additional cryptography API that I don't think I should even need. I tried to take the code of the example implementation, but I had no idea how to actually make use of it.
I also tried reading the code of the web app written with this library, but the way it's integrated into the application makes it impossible for me to parse at this point. I can't tell what types of objects are being passed around, etc., to trace the flow of information.
I found a better solution. It was painful to learn how to do this, but I did eventually get it working. I'm using the node C++ implementation of argon2 instead, and it no longer echoes the minified script into the console.
This is set up as argon/argon2node.ts and requires the argon2 node library. Now that I have this working, I think that if I wanted to switch to the Rust version or something like that, I could probably work it out. It's mostly about figuring out exactly where the parameters need to go, since sometimes the names are a little different and you have to convert various parameters around.
import { Argon2Type, Argon2Version } from "kdbxweb/dist/types/crypto/crypto-engine";
import argon from 'argon2';

export default async function argon2(
    password: ArrayBuffer,
    salt: ArrayBuffer,
    memory: number,
    iterations: number,
    length: number,
    parallelism: number,
    type: Argon2Type,
    version: Argon2Version
): Promise<ArrayBuffer> {
    try {
        // https://github.com/keeweb/kdbxweb/blob/master/test/test-support/argon2.ts
        // - reviewed this and eventually figured out how to switch to the
        // C++ implementation below after much pain
        const hashArr = new Uint8Array(await argon.hash(
            Buffer.from(new Uint8Array(password)), {
                timeCost: iterations,
                memoryCost: memory,
                parallelism: parallelism,
                version: version,
                type: type,
                salt: Buffer.from(new Uint8Array(salt)),
                raw: true
            }
        ));
        return Promise.resolve(hashArr);
    } catch (e) {
        return Promise.reject(e);
    }
}
And below is my ODBC credentials lookup based on it:
import * as fs from 'fs';
import * as kdbx from 'kdbxweb';
import argon2 from './argon/argon2node';
import * as byteUtils from 'kdbxweb/lib/utils/byte-utils';

export default async (title: string) => {
    try {
        kdbx.CryptoEngine.setArgon2Impl(argon2);
        const readBuffer = byteUtils.arrayToBuffer(fs.readFileSync('./SQL/credentials/credentials.kdbx'));
        const database = await kdbx.Kdbx.load(
            readBuffer,
            new kdbx.Credentials(kdbx.ProtectedValue.fromString('CredentialsStorage1!'))
        );
        let result;
        database.getDefaultGroup().entries.forEach((e) => {
            if (e.fields.get('Title') === title) {
                const password = (<kdbx.ProtectedValue>e.fields.get('Password')).getText();
                const user = <string>e.fields.get('UserName');
                result = `UID=${user};PWD=${password}`;
                return; // note: exits this callback only; forEach still visits the remaining entries
            }
        });
        return result;
    } catch (e: any) {
        throw e;
    }
};
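For reference, here's roughly how I call this from a connection routine (a sketch; the import path, the 'odbc' title, and the DSN are illustrative):

import getCredentials from './SQL/credentials';

(async () => {
    const creds = await getCredentials('odbc'); // e.g. 'UID=user;PWD=secret'
    if (!creds) throw new Error('No credentials entry found for that title');
    const connectionString = `DSN=MyOdbcSource;${creds}`;
    // hand connectionString to whatever ODBC client is in use
})();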
Old solution below
To resolve this, I had to do a bit of reading in the documentation to understand buffers and ArrayBuffers and such, which wasn't easy, but I eventually figured it out and created the test below, which writes and reads back entries. I still have a bit to learn, but this is close enough that I thought it worth sharing for anyone else who tries to use this.
I also had to get a copy of argon2-asm.min.js and argon2.ts, which I pulled from the GitHub for keeweb, which is built with this library.
import * as fs from 'fs';
import * as kdbx from 'kdbxweb';
import { argon2 } from './argon/argon2';

function toArrayBuffer(buffer: Buffer) {
    return buffer.buffer.slice(buffer.byteOffset, buffer.byteOffset + buffer.byteLength);
}

function toBuffer(byteArray: ArrayBuffer) {
    return Buffer.from(byteArray);
}

(async () => {
    try {
        kdbx.CryptoEngine.setArgon2Impl(argon2);
        fs.unlinkSync('./SQL/credentials/credentials.kdbx');
        const database = kdbx.Kdbx.create(new kdbx.Credentials(kdbx.ProtectedValue.fromString('test')), 'credentials');
        const entry = database.createEntry(database.getDefaultGroup());
        entry.fields.set('Title', 'odbc');
        entry.fields.set('Password', kdbx.ProtectedValue.fromString('test'));
        const data = await database.save();
        fs.writeFileSync('./SQL/credentials/credentials.kdbx', new DataView(data));
        const readData = toArrayBuffer(fs.readFileSync('./SQL/credentials/credentials.kdbx'));
        console.log('hithere');
        const read = await kdbx.Kdbx.load(
            readData,
            new kdbx.Credentials(kdbx.ProtectedValue.fromString('test'))
        );
        console.log('bye');
        console.log(read.getDefaultGroup().entries[0].fields.get('Title'));
        const protectedPass = <kdbx.ProtectedValue>read.getDefaultGroup().entries[0].fields.get('Password');
        console.log(
            new kdbx.ProtectedValue(
                protectedPass.value,
                protectedPass.salt
            ).getText()
        );
    } catch (e: any) {
        console.error(e);
        throw e;
    }
})();
Things I don't grasp and would like to understand better include why the argon implementation isn't built in. The author says "Due to complex calculations, you have to implement it manually", but this just seems odd. Perhaps not appropriate for this forum, but it would also be nice to know about alternatives if this one is slow or something.
I'm referring to the node-postgres package below, but I guess this question is rather generic.
There is this trivial example where you 1) acquire (connect) a client from the pool in the top-level HTTP request handler, 2) do all the business inside that handler and 3) release it back to the pool when you're done.
I guess it works fine for that example, but as soon as your app grows somewhat bigger this quickly becomes painful.
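For reference, the trivial pattern I mean looks roughly like this (a sketch; the route and query are illustrative):

var pg = require('pg');
var express = require('express');
var app = express();

app.get('/users', function (req, res) {
    // 1) acquire a client from the pool
    pg.connect('postgres://localhost/postgres', function (err, client, done) {
        if (err) return res.status(500).end();
        // 2) do all the business inside the handler
        client.query('SELECT * FROM users', function (err, result) {
            done(); // 3) release the client back to the pool
            if (err) return res.status(500).end();
            res.json(result.rows);
        });
    });
});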
I'm thinking of these two options, but I'm not quite sure...
do the "get client + work + release client" approach everywhere I need to talk to db.
This seems like a good choice, but will it not lead to eating up more than one connection/client per the top http request (there are parallel async db calls in many places in my project)?
try to assign a globaly shared reference to one client/connection accessible via require()
Is this a good idea and actually reasonably doable? Is it possible to nicely handle the "back to the pool release" in all ugly cases (errors in parallel async stuff for example)?
Thank you.
Well, I lost some time trying to figure that out. In the end, after some consideration and influenced by John Papa's code, I decided to use a database module like this:
var Q = require('q');
var MongoClient = require('mongodb').MongoClient;

module.exports.getDb = getDb;

var db = null;

// mongourl and mongoOptions are assumed to be defined elsewhere in the module
function getDb() {
    return Q.promise(theDb);

    function theDb(resolve, reject, notify) {
        if (db) {
            resolve(db);
        } else {
            MongoClient.connect(mongourl, mongoOptions, function(err, theDb) {
                if (err) {
                    reject(err);
                } else {
                    db = theDb; // cache the connection for subsequent calls
                    resolve(db);
                }
            });
        }
    }
}
So, when I need to perform a query:
getDb().then(function(db) {
    // perform query here
});
At least for MongoDB this is good practice, as seen here.
The best advice would depend on the type of database and the basic framework that represents the database.
In the case of Postgres, the basic framework/driver is node-postgres, which has embedded support for connection pooling. That support is, however, low-level.
For high-level access see pg-promise, which provides automatic connection management, support for tasks, transactions and much more.
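A minimal sketch of the pg-promise style, assuming a local database (the library acquires and releases the connection for you):

var pgp = require('pg-promise')();
var db = pgp('postgres://localhost/postgres');

db.one('SELECT version()')
    .then(function (row) {
        console.log(row.version); // single-row result
    })
    .catch(function (error) {
        console.error(error);
    });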
Here is what has worked well for me.
var pg = require('pg');
var config = { pg: 'postgres://localhost/postgres' };

pg.connect(config.pg, function(err, client, done) {
    if (err) { /* handle the connection error */ return; }
    client.query('SELECT version();', function(err, results) {
        done(); // release the client back to the pool
        if (err) { /* handle the query error */ return; }
        // do something with results.rows
    });
});
A methodological question:
I'm implementing an API interface to some services, using node.js, mongodb and express.js.
On many (almost all) sites I see code like this:
method(function(err, data) {
    assert.equal(null, err);
});
The question is: should I keep assert statements in my code in production (at least for 'low significance' errors)? Or are these just for testing code, and should I handle all errors properly each time instead?
You definitely should not keep them in the production environment.
If you google a bit, there is a plethora of alternative approaches to strip them out.
Personally, I'd use the null object pattern by implementing two wrappers in a separate file: the former maps its methods directly to the ones exported by the assert module, the latter offers empty functions and nothing more.
Thus, at runtime, you can plug in the right one by relying on some global variable previously set, like process.env.mode. Within your files, you'll only have to import the above-mentioned module and use it instead of using assert directly.
This way, all around your code you'll never see error-prone stuff like myAssert && myAssert(cond); instead you'll always have a cleaner and safer call like myAssert.equal(actual, expected).
A brief example follows:
// myassert.js
var assert = require('assert');

if ('production' === process.env.mode) {
    var nil = function() { };
    module.exports = {
        equal: nil,
        notEqual: nil
        // all the other functions
    };
} else {
    // a wrapper like this one helps avoid polluting the exported object
    module.exports = {
        equal: function(actual, expected, message) {
            assert.equal(actual, expected, message);
        },
        notEqual: function(actual, expected, message) {
            assert.notEqual(actual, expected, message);
        }
        // all the other functions
    };
}

// another_file.js
var assert = require('path_to_myassert/myassert');
// ... your code
assert.equal(true, false);
// ... go on
Yes! Asserts are good in production code.
Asserts allow a developer to document assumptions that the code makes, making code easier to read and maintain.
It is better for an assert to fail in production than to allow the undefined behaviour that the assert was protecting against. When an assert fails, you can see the problem and fix it more easily.
Knowing your code is working within assumptions is far more valuable than a small performance gain.
I know opinions differ here. I have offered a 'Yes' answer because I am interested to see how people vote.
Probably no.
ref: When should assertions stay in production code?
Mostly in my code I put the error-handling function in a separate file and use the same error method everywhere; it mostly depends on the logic anyway.
For example, people generally forget this:

process.on('uncaughtException', function (err) {
    console.log(err);
});

And err == null doesn't hurt; it checks for both null and undefined.
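A two-line illustration of that loose comparison:

var a;           // undefined
var b = null;
console.log(a == null); // true
console.log(b == null); // true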
I'm relatively new to Node and am working on a project using knex and bookshelf. I'm having a little bit of trouble unit testing my code and I'm not sure what I'm doing wrong.
Basically I have a model (called VorcuProduct) that looks like this:
var VorcuProduct = bs.Model.extend({
    tableName: 'vorcu_products'
});

module.exports.VorcuProduct = VorcuProduct;
And a function that saves a VorcuProduct if it does not exist in the DB. Quite simple. The function doing this looks like this:
function subscribeToUpdates(productInformation, callback) {
    model.VorcuProduct
        .where({product_id: productInformation.product_id, store_id: productInformation.store_id})
        .fetch()
        .then(function(existing_model) {
            if (existing_model == undefined) {
                new model.VorcuProduct(productInformation)
                    .save()
                    .then(function(new_model) { callback(null, new_model); })
                    .catch(callback);
            } else {
                callback(null, existing_model);
            }
        });
}
What is the correct way to test this without hitting the DB? Do I need to mock fetch to return a model or undefined (depending on the test) and then do the same with save? Should I use rewire for this?
As you can see I'm a little bit lost, so any help will be appreciated.
Thanks!
I have been using in-memory Sqlite3 databases for automated testing with great success. My tests take 10 to 15 minutes to run against MySQL, but only 30 seconds or so with an in-memory sqlite3 database. Use :memory: for your connection string to utilize this technique.
A note about unit testing: this is not true unit testing, since we're still running a query against a database. Technically it is integration testing; however, it runs within a reasonable time period, and if you have a query-heavy application (like mine) then this technique will prove more effective at catching bugs than unit testing anyway.
Gotchas: Knex/Bookshelf initializes the connection at the start of the application, which means that you keep the context between tests. I would recommend writing a schema create/destroy script so that you can build and destroy the tables for each test (a sketch follows below). Also, SQLite3 is less strict about foreign key constraints than MySQL or PostgreSQL, so make sure you run your app against one of those every now and then to ensure that your constraints will work properly.
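A sketch of what that looks like with knex, borrowing the table from the question (the columns are illustrative, and beforeEach/afterEach assume a mocha-style runner):

var knex = require('knex')({
    client: 'sqlite3',
    connection: { filename: ':memory:' } // the whole DB lives in memory
});

beforeEach(function () {
    // build the tables fresh for every test
    return knex.schema.createTable('vorcu_products', function (t) {
        t.increments('id');
        t.integer('product_id');
        t.integer('store_id');
    });
});

afterEach(function () {
    // tear them down so no context leaks between tests
    return knex.schema.dropTable('vorcu_products');
});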
This is actually a great question which brings up both the value and limitations of unit testing.
In this particular case the non-stubbed logic is pretty simple -- just a single if block -- so it's arguable whether it's worth the unit-testing effort; the accepted answer is a good one and points out the value of small-scale integration testing.
On the other hand, the exercise of doing unit testing is still valuable in that it points out opportunities for code improvements. In general, if the tests are too complicated, the underlying code can probably use some refactoring. In this case a doesProductExist function can likely be refactored out (a sketch follows below). Returning the promises from knex/bookshelf instead of converting to callbacks would also be a helpful simplification.
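For illustration, such a refactor might look like this (doesProductExist is a hypothetical name; model is the module from the question):

function doesProductExist(productInformation) {
    return model.VorcuProduct
        .where({
            product_id: productInformation.product_id,
            store_id: productInformation.store_id
        })
        .fetch();
}

function subscribeToUpdates(productInformation) {
    return doesProductExist(productInformation)
        .then(function (existingModel) {
            // save only when nothing was found
            return existingModel || new model.VorcuProduct(productInformation).save();
        });
}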
But for comparison here's my take on what true unit-testing of the existing code would look like:
var rewire = require('rewire');
var sinon = require('sinon');
var chai = require('chai');
var expect = chai.expect;
chai.use(require('sinon-chai')); // needed for the to.be.called assertion
var Promise = require('bluebird');

var subscribeToUpdatesModule = rewire('./service/subscribe_to_updates_module');
var subscribeToUpdates = subscribeToUpdatesModule.__get__('subscribeToUpdates');

describe('subscribeToUpdates', function () {
    beforeEach(function () {
        var self = this;
        this.sandbox = sinon.sandbox.create();
        var VorcuProduct = subscribeToUpdatesModule.__get__('model').VorcuProduct;
        // the fakes read the promises lazily, so each test can set them
        // before exercising the code under test
        this.saveStub = this.sandbox.stub(VorcuProduct.prototype, 'save', function () {
            return self.saveResultPromise;
        });
        this.fetchStub = this.sandbox.spy(function () {
            return self.fetchResultPromise;
        });
        this.sandbox.stub(VorcuProduct, 'where', function () {
            return { fetch: self.fetchStub };
        });
    });
    afterEach(function () {
        this.sandbox.restore();
    });
    it('calls save when fetch finds no existing model', function (done) {
        var self = this;
        this.fetchResultPromise = Promise.resolve(null); // no existing model
        this.saveResultPromise = Promise.resolve('save result');
        var callback = function (err, result) {
            expect(err).to.be.null;
            expect(self.saveStub).to.be.called;
            expect(result).to.equal('save result');
            done();
        };
        subscribeToUpdates({}, callback);
    });
    // ... more it(...) blocks
});
While coding in Node.js, I have encountered many situations where it is hard to implement somewhat elaborate logic mixed with database queries (I/O).
Consider an example written in Python. We need to iterate over an array of values; for each value we query the database; then, based on the results, we compute the average.
def foo():
    a = [1, 2, 3, 4, 5]
    result = 0
    for i in a:
        record = find_from_db(i)  # I/O operation
        if not record:
            raise Exception('No record exist for %d' % i)
        result += record.value
    return result / len(a)
The same task in Node.js
function foo(callback) {
    var a = [1, 2, 3, 4, 5];
    var result = 0;
    var itemProcessed = 0;
    var error;
    function final() {
        if (itemProcessed == a.length) {
            if (error) {
                callback(error);
            } else {
                callback(null, result / a.length);
            }
        }
    }
    a.forEach(function(i) {
        // I/O operation
        findFromDb(i, function(err, record) {
            itemProcessed++;
            if (err) {
                error = err;
            } else if (!record) {
                error = 'No record exist for ' + i;
            } else {
                result += record.value;
            }
            final();
        });
    });
}
You can see that such code is much harder to write and read, and it is more prone to errors.
My questions:
Is there a way to make the above Node.js code cleaner?
Imagine more sophisticated logic. For example, after we obtain a record from the db, we might need to do another db query based on some conditions. In Node.js that becomes a nightmare. What are common patterns for dealing with such tasks?
Based on your experience, is the performance gain worth the productivity loss when you code with Node.js?
Is there other asynchronous I/O framework/language that is easier to work with?
To answer your questions:
There are libraries such as async which provide a variety of solutions for common scenarios when working with asynchronous tasks. As for "callback hell" concerns, there are many ways to avoid that as well, including (but not limited to) naming your functions and pulling them out, modularizing your code, and using promises.
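For example, with async.map the question's loop collapses to a few lines (a sketch; findFromDb is assumed to take (i, callback), and the no-record check is folded into the iterator):

var async = require('async');

function foo(callback) {
    var a = [1, 2, 3, 4, 5];
    async.map(a, function (i, cb) {
        findFromDb(i, function (err, record) {
            if (err) return cb(err);
            if (!record) return cb(new Error('No record exist for ' + i));
            cb(null, record.value);
        });
    }, function (err, values) {
        if (err) return callback(err);
        var sum = values.reduce(function (s, v) { return s + v; }, 0);
        callback(null, sum / a.length);
    });
}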
More or less what you currently have is a fairly common pattern: a counter and a function-index variable with an array of functions to call. Again, async can help here because it reduces the kind of boilerplate that you will probably find yourself repeating often. async currently doesn't have methods that really allow for skipping individual tasks, but you could easily do this yourself if you are writing the boilerplate (just increment the function-index variable by 2, for example).
From my own experience, if you properly design your javascript code with asynchrony in mind and make good use of tools like async, you will find it easier to develop with node. Writing asynchronous code in node is typically always going to be more complicated than writing synchronous code (although less so with generators, fibers, etc. as compared to callbacks/promises).
I personally think that deciding on a language based upon that single aspect is not worthwhile. You have to consider much more than just the design of the language: for example, the size of the community, the availability of third-party libraries, performance, technical support options, ease of debugging, etc.
Just write your code more compactly:
// parallel version
function foo (cb) {
    var items = [ 1, 2, 3, 4, 5 ];
    var pending = items.length;
    var result = 0;
    items.forEach(function (item) {
        findFromDb(item, function (err, record) {
            if (err) return cb(err);
            if (!record) return cb(new Error('No record for: ' + item));
            result += record.value / items.length;
            if (--pending === 0) cb(null, result);
        });
    });
}
That clocks in at 13 source lines of code, compared to the 9 sloc for the python you posted. However, unlike the python you posted, this code runs all the jobs in parallel.
To do the same thing in series, a trick I usually use is a next() function defined inline that invokes itself and shifts a job off the front of an array:
// sequential version
function foo (cb) {
    var items = [ 1, 2, 3, 4, 5 ];
    var len = items.length;
    var result = 0;
    (function next () {
        if (items.length === 0) return cb(null, result);
        var item = items.shift();
        findFromDb(item, function (err, record) {
            if (err) return cb(err);
            if (!record) return cb(new Error('No record for: ' + item));
            result += record.value / len;
            next();
        });
    })();
}
This time, 15 lines. The nice thing is that you can easily control whether the actions should happen in parallel, sequentially, or somewhere in between. That is not so easy in a language like python, where everything is synchronous and you've got to do lots of workarounds like threads or evented libraries to get things back up to asynchronous. Try implementing a parallel version of what you have in python! It would almost certainly be longer than the node version.
As for the promise/async route: it's not actually all that hard or bad to use ordinary functions for these relatively simple kinds of tasks. In the future (or in node 0.11+ with --harmony) you can use generators and a library like co, but that feature isn't widely deployed yet.
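For a taste, here is a sequential sketch with co; note it assumes a promisified findFromDb, unlike the callback version above, and that recent versions of co return a promise:

var co = require('co');

co(function* () {
    var a = [1, 2, 3, 4, 5];
    var result = 0;
    for (var i = 0; i < a.length; i++) {
        var record = yield findFromDb(a[i]); // findFromDb assumed to return a promise
        if (!record) throw new Error('No record exist for ' + a[i]);
        result += record.value;
    }
    return result / a.length;
}).then(function (avg) {
    console.log('average:', avg);
}, function (err) {
    console.error(err);
});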
Everyone here seems to be suggesting async, which is a great library. But to give another suggestion, you should take a look at Promises, which are a new built-in being introduced to the language (and currently have several very good polyfills). They allow you to write asynchronous code in a way that looks much more structured. For example, take a look at this code:
var items = [ 1, 2, 3, 4 ];

var processItem = function(item, callback) {
    // do something async ...
};

var values = [ ];
items.forEach(function(item) {
    processItem(item, function(err, value) {
        if (err) {
            // something went wrong
        }
        values.push(value);
        // all of the items have been processed, move on
        if (values.length === items.length) {
            doSomethingWithValues(values, function(err) {
                if (err) {
                    // something went wrong
                }
                // and we're done
            });
        }
    });
});

function doSomethingWithValues(values, callback) {
    // do something async ...
}
Using promises, it would be written something like this:
var items = [ 1, 2, 3, 4 ];

var processItem = function(item) {
    return new Promise(function(resolve, reject) {
        // do something async ...
    });
};

var doSomethingWithValues = function(values) {
    return new Promise(function(resolve, reject) {
        // do something async ...
    });
};

// Promise.all returns a new promise that will resolve when all of the promises passed to it have resolved
Promise.all(items.map(processItem))
    .then(doSomethingWithValues)
    .then(function() {
        // and we're done
    })
    .catch(function(err) {
        // something went wrong
    });
The second version is much cleaner and simpler, and that barely even scratches the surface of promises' real power. And, like I said, Promises are in ES6 as a new language built-in, so (eventually) you won't even need to load a library; it will just be available.
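And if the items had to be processed one at a time rather than with Promise.all, a common trick (a sketch building on the names above, not from the original answer) is to chain them with reduce:

items.reduce(function (chain, item) {
    return chain.then(function (values) {
        return processItem(item).then(function (value) {
            values.push(value);
            return values; // pass the accumulator along the chain
        });
    });
}, Promise.resolve([]))
    .then(doSomethingWithValues)
    .then(function () {
        // and we're done
    })
    .catch(function (err) {
        // something went wrong
    });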
Don't use anonymous (un-named) functions; they make the code ugly and they make debugging much harder, so always name your functions and define them outside the calling scope, not inline (a sketch follows at the end of this answer).
That is a real issue with Node.js (it is called callback hell or the pyramid of doom, ...). You can solve this issue by using promises or by using async.js, which has many functions for handling different situations (waterfall, parallel, series, auto, ...).
Well, the performance gain is absolutely a good thing, and the productivity loss is not that big (once you start to master it); the Node.js community is also great.
Check async.js and q.
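A sketch of the first point applied to the question's loop: every callback gets a name and the handler is hoisted out, so stack traces show real function names (makeHandler is a hypothetical helper, not from the question):

function foo(callback) {
    var a = [1, 2, 3, 4, 5];
    var result = 0;
    var pending = a.length;

    a.forEach(function visit(i) {
        findFromDb(i, makeHandler(i));
    });

    function makeHandler(i) {
        return function onRecord(err, record) {
            if (err) return callback(err);
            if (!record) return callback(new Error('No record exist for ' + i));
            result += record.value;
            if (--pending === 0) callback(null, result / a.length);
        };
    }
}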
The more I work with async, the more I love it and the more I like node. Let me give you a simple example of what I have for server initialization.
async.parallel({
    "job1": loadFromCollection1,
    "job2": loadFromCollection2
},
function (initError, results) {
    if (initError) {
        console.log("[INIT] Server initialization error occurred: " + JSON.stringify(initError, null, 3));
        return callback(initError); // 'callback' belongs to the enclosing init function
    }
    // Do more stuff with the results
});
In fact, this very same approach can be followed, and one can pass different arguments to the different functions that correspond to the various jobs; see for example Passing arguments to async.parallel in node.js.
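For instance, jobs that need arguments can simply be wrapped (a sketch; the argument values are illustrative):

async.parallel({
    "job1": function (cb) { loadFromCollection1("someArg", cb); },
    "job2": function (cb) { loadFromCollection2(42, cb); }
},
function (initError, results) {
    // same handling as above
});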
To be perfectly honest with you, I prefer the node way, which is also non-blocking. I think node forces you toward a better design: sometimes you spend time creating more definitions and grouping functions and objects in arrays so that you can write better code. The reason, I think, is that in the end you want to exploit some variant of async and mix and merge stuff accordingly. In my opinion, spending some extra time and thinking about the code a bit more is well worth it, when you also take into account that node is asynchronous.
Other than that, I think it is a habit. The more one writes code for node, the more one improves and writes better asynchronous code. What is good about node is that it really forces you to write more robust code, since you start respecting all the error codes from all the functions much more. For example, how often do people check, say, whether malloc or new has succeeded, and how often do they lack an error handler for a NULL pointer after the call has been issued? Writing asynchronous code forces one to respect the events and the error codes that come with them. I guess one obvious reason is that one respects the code that one writes, and in the end we have to write code that returns errors so that the caller knows what happened.
I really think that you need to give it more time and start working with async more. That's all.
"If you try to code bussiness db login using pure node.js, you go straight to callback hell"
I've recently created a simple abstraction named WaitFor to call async functions in sync mode (based on Fibers): https://github.com/luciotato/waitfor
Check the database example:
Database example (pseudocode)
pure node.js (mild callback hell):
var db = require("some-db-abstraction");

function handleWithdrawal(req, res){
    try {
        var amount = req.param("amount");
        db.select("* from sessions where session_id=?", req.param("session_id"), function(err, sessiondata) {
            if (err) throw err;
            db.select("* from accounts where user_id=?", sessiondata.user_ID, function(err, accountdata) {
                if (err) throw err;
                if (accountdata.balance < amount) throw new Error('insufficient funds');
                db.execute("withdrawal(?,?)", accountdata.ID, req.param("amount"), function(err, data) {
                    if (err) throw err;
                    res.write("withdrawal OK, amount: " + req.param("amount"));
                    db.select("balance from accounts where account_id=?", accountdata.ID, function(err, balance) {
                        if (err) throw err;
                        res.end("your current balance is " + balance.amount);
                    });
                });
            });
        });
    }
    catch(err) {
        res.end("Withdrawal error: " + err.message);
    }
}
Note: the above code, although it looks like it will catch the exceptions, will not.
Catching exceptions with callback hell adds a lot of pain, and I'm not sure whether you will still have the 'res' parameter
available to respond to the user. If somebody would like to fix this example... be my guest.
using wait.for:
var db = require("some-db-abstraction"), wait = require('wait.for');

function handleWithdrawal(req, res){
    try {
        var amount = req.param("amount");
        var sessiondata = wait.forMethod(db, "select", "* from sessions where session_id=?", req.param("session_id"));
        var accountdata = wait.forMethod(db, "select", "* from accounts where user_id=?", sessiondata.user_ID);
        if (accountdata.balance < amount) throw new Error('insufficient funds');
        wait.forMethod(db, "execute", "withdrawal(?,?)", accountdata.ID, req.param("amount"));
        res.write("withdrawal OK, amount: " + req.param("amount"));
        var balance = wait.forMethod(db, "select", "balance from accounts where account_id=?", accountdata.ID);
        res.end("your current balance is " + balance.amount);
    }
    catch(err) {
        res.end("Withdrawal error: " + err.message);
    }
}
Note: exceptions will be caught as expected.
The db methods (db.select, db.execute) will be called with this = db.
Your Code
In order to use wait.for, you'll have to STANDARDIZE YOUR CALLBACKS to function(err, data).
If you STANDARDIZE YOUR CALLBACKS, your code might look like:
var wait = require('wait.for');

// run in a Fiber
function process() {
    var a = [1, 2, 3, 4, 5];
    var result = 0;
    a.forEach(function(i) {
        // I/O operation
        var record = wait.for(findFromDb, i); // call & wait for async function findFromDb(i, callback)
        if (!record) throw new Error('No record exist for ' + i);
        result += record.value;
    });
    return result / a.length;
}

function inAFiber() {
    console.log('result is: ', process());
}

// run the loop in a Fiber (keeps node spinning)
wait.launchFiber(inAFiber);
See? Closer to python, and no callback hell.