Node.js Crypto AES Cipher - node.js

For some odd reason, Node's built-in Cipher and Decipher classes aren't working as expected. The documentation states that cipher.update
"Returns the enciphered contents, and can be called many times with new data as it is streamed."
The docs also state that cipher.final
"Returns any remaining enciphered contents."
However, in my tests you must call cipher.final to get all of the data, which renders the Cipher object worthless: to process the next block you have to create a new Cipher object.
var crypto = require('crypto')
, assert = require('assert');
var secret = crypto.randomBytes(16)
, source = crypto.randomBytes(8)
, cipher = crypto.createCipher("aes128", secret)
, decipher = crypto.createDecipher("aes128", secret);
var step = cipher.update(source);
var end = decipher.update(step);
assert.strictEqual(source.toString('binary'), end); // should not fail, but does
Note that this happens whether I use crypto.createCipher or crypto.createCipheriv (with the secret doubling as the initialization vector). The fix is to replace the two update lines above with the following:
var step = cipher.update(source) + cipher.final();
var end = decipher.update(step) + decipher.final();
But this, as previously noted, renders both cipher and decipher worthless.
This is how I expect Node's built-in cryptography to work, but it clearly doesn't. Is this a problem with how I'm using it or a bug in Node? Or am I expecting the wrong thing? I could go and implement AES directly, but that would be time-consuming and annoying. Should I just create a new Cipher or Decipher object every time I need to encrypt or decrypt? That seems expensive if I'm doing so as part of a stream.

I was having two problems: the first is that I assumed, incorrectly, that the block size would be 64 bits, or 8 bytes, which is what I used to create the "plaintext." In reality the internals of AES split the 128-bit plaintext into two 64-bit chunks and go from there.
The second problem was that even with the correct chunk size, the crypto module was applying auto padding; disabling auto padding solved it. Thus, the working example is as follows:
var crypto = require('crypto')
, assert = require('assert');
var secret = crypto.randomBytes(16)
, source = crypto.randomBytes(16)
, cipher = crypto.createCipheriv("aes128", secret, secret) // or createCipher
, decipher = crypto.createDecipheriv("aes128", secret, secret);
cipher.setAutoPadding(false);
decipher.setAutoPadding(false);
var step = cipher.update(source);
var end = decipher.update(step);
assert.strictEqual(source.toString('binary'), end); // does not fail

AES uses block sizes of 16 bytes (not two times 8 as you were suggesting). Furthermore, if padding is enabled it should always pad. The reason for this is that otherwise the unpadding algorithm cannot distinguish between padding and the last bytes of the plaintext.
Most of the time you should not expect the ciphertext to be the same size as the plaintext. Make sure that final() is always called. You should only use update this way for encryption/decryption if you are implementing your own encryption scheme.
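For illustration, here is a minimal sketch (not code from the question) of the intended streaming usage: call update() once per chunk and final() exactly once at the end, with an explicit key and IV.
const crypto = require('crypto');

const key = crypto.randomBytes(16);               // AES-128 key
const iv = crypto.randomBytes(16);                // block-sized IV

const cipher = crypto.createCipheriv('aes-128-cbc', key, iv);
const chunks = [crypto.randomBytes(8), crypto.randomBytes(8), crypto.randomBytes(5)];

const encrypted = Buffer.concat([
  ...chunks.map(chunk => cipher.update(chunk)),   // may return empty buffers until a full block accumulates
  cipher.final(),                                 // flushes the last, padded block
]);

const decipher = crypto.createDecipheriv('aes-128-cbc', key, iv);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
// decrypted now equals Buffer.concat(chunks)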

There was a Node.js issue with calling update multiple times in a row. I suppose it has since been solved and reflected in a later release.

Related

Encrypt/Decrypt aes256cbc in Nodejs

I'm working on a project where I need to encrypt and decrypt strings in Node.js.
I receive strings in the following format: pTS3JQzTxrSbd+cLESXHpg==
This string is generated by this page: https://encode-decode.com/aes-256-cbc-encrypt-online/
using the aes-256-cbc standard.
The code I implemented is the following:
var CryptoJS = require("crypto-js");
var key = 'TEST_KEY';
var text = 'pTS3JQzTxrSbd+cLESXHpg==';
function decript(text, key) {
return CryptoJS.AES.decrypt(text.trim(), key);
}
console.log(decript(text, key).toString(CryptoJS.enc.Utf8));
But I always get an empty response.
Could you tell me what the issue is?
Thanks a lot!
As the documentation explains (and as I answered just yesterday), when CryptoJS.AES is given a 'key' that is a string, it treats it as a password and uses password-based key derivation compatible with openssl enc. That is different from, and incompatible with, what your linked website does. The site doesn't state it clearly, but based on its list of cipher names it is almost certainly calling OpenSSL's 'EVP' interface internally. Among other things, that means if you specify a key too short for the algorithm, as you did, it uses whatever happens to be adjacent in memory, which apparently was zero-valued bytes (not unusual for programs run on operating systems newer than about 1980), and it either uses the default IV of zero bytes or similarly sets the IV to zero bytes. For CBC it uses PKCS5/7 padding, which is compatible with CryptoJS (and most other things). Therefore:
const CryptoJS = require('crypto-js');
var key = CryptoJS.enc.Latin1.parse("TEST_KEY\0\0\0\0\0\0\0\0"+"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
var iv = CryptoJS.enc.Latin1.parse("\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
var ctx = CryptoJS.enc.Base64.parse("pTS3JQzTxrSbd+cLESXHpg==")
var enc = CryptoJS.lib.CipherParams.create({ciphertext:ctx})
console.log( CryptoJS.AES.decrypt (enc,key,{iv:iv}) .toString(CryptoJS.enc.Utf8) )
->
Test text
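For comparison, a minimal sketch of the same decryption with Node's built-in crypto module, assuming the zero-padded key and all-zero IV described above (the expected output is an assumption based on the answer):
const crypto = require('crypto');

// zero-pad the 8-character passphrase to the 32 bytes AES-256 expects
const key = Buffer.alloc(32);
Buffer.from('TEST_KEY', 'latin1').copy(key);
const iv = Buffer.alloc(16);                      // all-zero 16-byte IV

const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
const plain = Buffer.concat([
  decipher.update(Buffer.from('pTS3JQzTxrSbd+cLESXHpg==', 'base64')),
  decipher.final(),                               // removes the PKCS#7 padding
]);
console.log(plain.toString('utf8'));              // expected: "Test text"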

Why Are node scrypt Hashes the Same Given the Same Inputs?

I was trying to find a compare or verify function for node's built-in crypto module, specifically for scrypt, as most password-hashing modules I have used have such a function. Then, I discovered why this was an impossible task: All hashes generated with these algorithms using the same parameters generate the same string (technically buffer). This is the case for many of crypto's hashing functions, including its pbkdf2 implementation.
Why is this safe? Isn't the whole (modern) point of a password/message hashing function that hashing the same input twice shouldn't give you the same output? This is how the various bcrypt modules work, as well as the original version of scrypt, from which the built-in version I'm asking about was derived.
For example:
const crypto = require('crypto');
let scryptHash1;
let scryptHash2;
let scryptHash3;
let pbkdfHash1;
let pbkdfHash2;
let pbkdfHash3;
const key1 = 'my secret key';
const key2 = 'my other secret key';
const salt = 'my salt';
// Note: the callbacks receive (err, derivedKey); the comparisons below assume
// the asynchronous callbacks have already completed.
crypto.scrypt(key1, salt, 16, (err, hash) => scryptHash1 = hash);
crypto.scrypt(key1, salt, 16, (err, hash) => scryptHash2 = hash);
crypto.scrypt(key2, salt, 16, (err, hash) => scryptHash3 = hash);
scryptHash1.toString() === scryptHash2.toString(); // true
scryptHash1.toString() === scryptHash3.toString(); // false
crypto.pbkdf2(key1, salt, 16, 16, 'sha256', (err, hash) => pbkdfHash1 = hash);
crypto.pbkdf2(key1, salt, 16, 16, 'sha256', (err, hash) => pbkdfHash2 = hash);
crypto.pbkdf2(key2, salt, 16, 16, 'sha256', (err, hash) => pbkdfHash3 = hash);
pbkdfHash1.toString() === pbkdfHash2.toString(); // true
pbkdfHash1.toString() === pbkdfHash3.toString(); // false
I originally asked this question on Cryptography, as I'm more concerned about the safety than anything else, as I want to move from bcrypt to scrypt. However, as multiple people pointed out, and as I feared, the question is more about API design. That being said, any accepted answer should include why this method is safe, or safe enough to switch over (granting that "safe enough" is never safe enough). I took security as my major, but I'm now a web dev, and security changes all the time, though the core concepts stay mostly the same.
You seem to have a fundamental misunderstanding about password hashing. First and foremost, just like any hash function, a password hashing function is a function in the mathematical sense, i.e. it is simply a mapping that assigns a fixed value from its range to every element of its input domain.
Two things set password hashes apart from regular hashes: first, they are designed to be slow and/or use large amounts of memory when evaluated (this is irrelevant for our discussion here); and second, they take a second input, the salt.
For a password hashing function H you want that for any fixed password m and any two salts s ≠ s' it not only holds that H(m,s) ≠ H(m,s'), but also that, given both hash values and salts, you should not be able to detect that they are hash values of the same m.
What you seem confused about are different choices of API design, specifically who gets to choose the salt. Every time a new password m is hashed (e.g. to be entered into a database), a fresh uniformly random salt s should be chosen, then the hash value h := H(m,s) is computed and both h and s are stored in the database. Whenever someone claiming to be that same user submits a password m' to authenticate themselves, (h,s) is retrieved and it's checked whether h = H(m',s).
Now the question is who chooses the salt. It appears that APIs you are familiar with do not trust the user to do so. So when you make a call to hash password m, the library will choose a salt s, compute h and output h'=(h,s) as a "hash value". To check whether a password m' is correct, you then submit h',m' and the library will extract the salt, recompute the hash and compare.
The library you are now looking at expects the user to choose the salt. I.e., each time you create a new entry in a password database you have to choose a new salt, compute h=H(m,s) and store both (h,s). Since the library in this case does not attempt to "hide" anything from you, you need to take care of the comparison.
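A minimal sketch of that workflow with Node's built-in crypto (the helper names hashPassword/verifyPassword are illustrative, not part of any library):
const crypto = require('crypto');

function hashPassword(password) {
  const salt = crypto.randomBytes(16);                     // fresh random salt per password
  const hash = crypto.scryptSync(password, salt, 32);      // h = H(m, s)
  return { salt, hash };                                   // store both together
}

function verifyPassword(password, { salt, hash }) {
  const candidate = crypto.scryptSync(password, salt, 32); // recompute with the stored salt
  return crypto.timingSafeEqual(candidate, hash);          // constant-time comparison
}

const record = hashPassword('my secret key');
console.log(verifyPassword('my secret key', record));      // true
console.log(verifyPassword('wrong password', record));     // false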

mongodb: Unique index on encrypted field

I'm encrypting SSNs in mongodb. However, I need to use the SSN as a unique identifier to make sure that a person with that SSN does not insert a duplicate. So basically I want to check for duplicate SSNs before saving. However I'm unsure if I'll be able to do this after encrypting this field with an AES function. If I encrypt and sign 2 strings which are identical with AES, will the output still be identical?
If not, what would be a good alternative? I had thought about hashing the SSN, but an SSN has so little entropy (it's 9 numeric digits, some of which are somewhat predictable). If I salt, I lose the ability to put a unique index on that field, unless I use a static salt, which doesn't really accomplish much.
Addition
I would be encrypting at the application level using the node.js crypto core module.
Using the same symmetric AES key to encipher 2 identical strings will produce identical output, provided the IV is also the same; that is the case below because crypto.createCipher derives both the key and the IV deterministically from the password string. Therefore you can check whether the encrypted field is unique by comparing it to a value enciphered with the same key.
PoC:
var crypto = require('crypto');
var cipher = crypto.createCipher('aes-256-ctr', "someString");
var cipher2 = crypto.createCipher('aes-256-ctr', "someString");
var crypted = cipher.update("hello world",'utf8','hex');
var crypted2 = cipher2.update("hello world",'utf8','hex');
crypted === crypted2 //true
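Since crypto.createCipher is deprecated in current Node.js, here is a sketch of the same deterministic comparison using createCipheriv with an explicit key and a fixed IV (the 'static-app-salt' string and the all-zero IV are illustrative assumptions; reusing a fixed IV is exactly what makes the output deterministic, which is the goal here):
const crypto = require('crypto');

const key = crypto.scryptSync('someString', 'static-app-salt', 32); // derive a 32-byte key
const iv = Buffer.alloc(16);                                        // fixed all-zero IV

function encryptDeterministic(text) {
  const cipher = crypto.createCipheriv('aes-256-ctr', key, iv);
  return Buffer.concat([cipher.update(text, 'utf8'), cipher.final()]).toString('hex');
}

console.log(encryptDeterministic('hello world') === encryptDeterministic('hello world')); // true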

How to generate short unique names for uploaded files in nodejs

I need to name uploaded files with a short unique identifier like nYrnfYEv, a4vhAoFG, hwX6aOr7. How can I ensure the uniqueness of the file names?
Update: shortid is deprecated. Use Nano ID instead. The answer below applies to Nano ID as well.
(Posting my comments as answer, with responses to your concerns)
You may want to check out the shortid NPM module, which generates short ids (shockingly, I know :) ) similar to the ones you were posting as example. The result is configurable, but by default it's a string between 7 and 14 characters (length is random too), all URL-friendly (A-Za-z0-9_- as a regex character class).
To answer your (and other posters') concerns:
Unless your server has a true random number generator (highly unlikely), every solution will use a PRNG (Pseudo-Random Number Generator). shortid, however, uses the Node.js crypto module to generate its pseudo-random numbers, which is a much better generator than Math.random().
shortids are not sequential, which makes them even harder to guess.
While shortids are not guaranteed to be unique, the likelihood of a collision is extremely small. Unless you generate billions of entries per year, you can safely assume that a collision will never happen.
For most cases, relying on probability to trust that collisions won't happen is enough. If your data is too important to risk even that tiny amount, you could make the shortid basically 100% unique by just prepending a timestamp to it. As an additional benefit, the file names will be harder to guess too. (Note: I wrote "basically 100% unique" because you could still, in theory, have a collision if two items are generated in the same timestamp, i.e. the same second. However, I would never be concerned of this. To have a real 100% certainty your only option is to run a check against a database or the filesystem, but that requires more resources.)
The reason why shortid doesn't do that by itself is because for most applications the likelihood of a collision is too small to be a concern, and it's more important to have the shortest possible ids.
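A minimal sketch using Nano ID (per the update above) to name an uploaded file; the 10-character length and the timestamp prefix are illustrative choices, and a CommonJS-compatible version of the nanoid package is assumed:
const { nanoid } = require('nanoid');
const path = require('path');

function shortFileName(originalName) {
  const id = nanoid(10);                       // e.g. "V1StGXR8_Z"
  const stamp = Date.now().toString(36);       // optional timestamp prefix for extra collision safety
  return `${stamp}-${id}${path.extname(originalName)}`;
}

console.log(shortFileName('avatar.png'));      // e.g. "lqz3k9xw-V1StGXR8_Z.png"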
One option could be to generate unique identifiers (UUID) and rename the file(s) accordingly.
Have a look at the kelektiv/node-uuid npm module.
EXAMPLE:
$ npm install uuid
...then in your JavaScript file:
const uuidv4 = require('uuid/v4'); // I chose v4 ‒ you can select others
var filename = uuidv4(); // '110ec58a-a0f2-4ac4-8393-c866d813b8d1'
Any time you execute uuidv4() you'll get a very-fresh-new-one.
NOTICE: There are other choices/types of UUIDs. Read the module's documentation to familiarize with those.
Very simple code that produces an almost-unique filename;
if that's not enough, you can check whether the file already exists:
function getRandomFileName() {
var timestamp = new Date().toISOString().replace(/[-:.]/g,"");
var random = ("" + Math.random()).substring(2, 8);
var random_number = timestamp+random;
return random_number;
}
const generateRandom = () => Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 5);
export default generateRandom;
As simple as that!
const path = require('path');
const fs = require('fs');
function uniqueFileName(filePath, stub)
{
let id = 0;
let test = path.join(filePath, stub + id++);
while (fs.existsSync(test))
{
test = path.join(filePath, stub + id++);
}
return test;
}
I think you might be confused about true-random versus pseudo-random.
Pseudo-random strings 'typically exhibit statistical randomness while being generated by an entirely deterministic causal process'. What this means is that if you are using these random values as entropy in a cryptographic application, you do not want a pseudo-random generator.
For your use, however, I believe it will be fine; just check for potential (highly unlikely) clashes.
All you want to do is create a random string, not ensure it is 100% secure and completely random.
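For instance, a minimal sketch of that idea using Node's built-in crypto (the 6-byte length and the 'base64url' encoding are illustrative choices; 'base64url' needs a reasonably recent Node.js):
const crypto = require('crypto');

// 6 random bytes -> 8 URL-safe characters, similar in shape to "nYrnfYEv"
function randomName() {
  return crypto.randomBytes(6).toString('base64url');
}

console.log(randomName());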
Try the following snippet:
function getRandomSalt() {
var milliseconds = new Date().getTime();
var timestamp = (milliseconds.toString()).substring(9, 13)
var random = ("" + Math.random()).substring(2, 8);
var random_number = timestamp+random; // string will be unique because the timestamp never repeats itself
var random_string = Buffer.from(random_number).toString('base64').substring(2, 8); // you can set the size of the returned string here
var return_string = '';
var Exp = /((^[0-9]+[a-z]+)|(^[a-z]+[0-9]+))+[0-9a-z]+$/i;
if (random_string.match(Exp)) { //check here whether the string is alphanumeric or not
return_string = random_string;
} else {
return getRandomSalt(); // call recursively again
}
return return_string;
}
The file name will be alphanumeric and unique for your requirement. The uniqueness is based on the current timestamp, which never repeats itself, and to strengthen it I have applied a Base64 encoding, which converts the result into an alphanumeric string.
var file = req.files.profile_image;
var tmp_path = file.path;
var fileName = file.name;
var file_ext = fileName.substr((Math.max(0, fileName.lastIndexOf(".")) || Infinity) + 1);
var newFileName = getRandomSalt() + '.' + file_ext;
Thanks

Javacard KeyAgreement differs from BouncyCastle KeyAgreement

My problem looks like this. I have generated keys on the card and the terminal side. On the terminal side I have the card's public and private keys and the terminal's public and private keys, and the same on the card side (I'm doing tests, so that's why I have all of them on both the terminal and the card). When I compute the KeyAgreement on the terminal side, both with the card's key as the private one and with the terminal's key as the private one, the secrets are the same, so the generation is OK and I get a 24-byte (192-bit) secret. When I compute the secrets on the card (the same 2 cases as on the terminal), the secrets also match each other, but they are shorter: 20 bytes (160 bits). Here are the generation codes. The terminal:
ECPublicKey publicKey;
ECPrivateKey privateKey;
...
KeyAgreement aKeyAgree = KeyAgreement.getInstance("ECDH", "BC");
aKeyAgree.init(privateKey);
aKeyAgree.doPhase(publicKey, true);
byte[] aSecret = aKeyAgree.generateSecret();
and the card side:
keyAgreement = KeyAgreement.getInstance(KeyAgreement.ALG_EC_SVDP_DH, false);
short length = terminalEcPublicKey.getW(array, (short) 0);
keyAgreement.init(cardEcPrivateKey);
short secretlength = keyAgreement.generateSecret(array, (short)0, length, buffer, (short)0);
There is a problem on the terminal side of your implementation: the output of the card's KeyAgreement.ALG_EC_SVDP_DH is always 20 bytes, since SHA-1 is performed on the derived output.
So on the terminal side you should perform SHA-1 on the secret after generating it, to match the card.
