DIGEST-MD5 implementation in NodeJS

I'm trying to approach a local XMPP server (Openfire) with a NodeJS application.
I would like to use the DIGEST-MD5 mechanism (I am aware that it has been declared obsolete).
I found this article explaining how the mechanism works:
https://wiki.xmpp.org/web/SASL_and_DIGEST-MD5
However, when implementing the mechanism as described in the article, my calculated response is incorrect.
I have done my best to find out what I'm doing wrong, but I can't seem to figure it out.
I am certain that the rest of my stanza is correct, it's just the response that isn't right.
Here is my code for calculating the response:
var x = username + ':' + realm + ':' + pswd;
var y = crypto.createHash('md5').update(x).digest();
var a1 = y + ':' + nonce + ':' + cnonce + ':' + authzid;
var a2 = 'AUTHENTICATE:' + digestUri;
var ha1 = crypto.createHash('md5').update(a1).digest("hex");
var ha2 = crypto.createHash('md5').update(a2).digest("hex");
var kd = ha1 + ':' + nonce + ':00000001:' + cnonce + ':auth:' + ha2;
var z = crypto.createHash('md5').update(kd).digest("hex");
Where z is the final response.
As you can see I am making use of the crypto library for my hashing.
The example mentioned in the article above is as follows:
username="rob",realm="cataclysm.cx",nonce="OA6MG9tEQGm2hh",cnonce="OA6MHXh6VqTrRk",nc=00000001,qop=auth,digesturi="xmpp/cataclysm.cx",response=d388dad90d4bbd760a152321f2143af7,charset=utf-8,authzid="rob@cataclysm.cx/myResource"
When I plug all these values into my own implementation (with the password being 'secret'), my calculated response is:
5093acf6b3bc5687231539507cc2fb20
instead of the expected d388dad90d4bbd760a152321f2143af7.
Other examples don't give me the right result either.
So, what on earth am I doing wrong?
Any and all help would be greatly appreciated!

When calculating the response, the third line concatenates the buffer y (which contains a raw MD5 hash) with strings, so y is implicitly converted to a string using UTF-8.
However, an arbitrary byte sequence such as a hash is corrupted by a UTF-8 decode (see here). To prevent this, the individual parts should be concatenated as buffers rather than strings.
In contrast, the remaining concatenations (in this and the other lines) are not critical, since they operate on true strings:
var crypto = require('crypto');
var charset = 'utf-8';
var username = 'chris';
var realm = 'elwood.innosoft.com';
var nonce = 'OA6MG9tEQGm2hh';
var nc = '00000001';
var cnonce = 'OA6MHXh6VqTrRk';
var digestUri = 'imap/elwood.innosoft.com';
var response = 'd388dad90d4bbd760a152321f2143af7';
var qop = 'auth';
var pswd = 'secret';
var x = username + ':' + realm + ':' + pswd;
var y = crypto.createHash('md5').update(x).digest();
var a1 = Buffer.concat([y, Buffer.from(':' + nonce + ':' + cnonce, 'utf8')]); // + ':' + authzid; // Concat buffers instead of strings
var a2 = 'AUTHENTICATE:' + digestUri;
var ha1 = crypto.createHash('md5').update(a1).digest('hex');
var ha2 = crypto.createHash('md5').update(a2).digest('hex');
var kd = ha1 + ':' + nonce + ':00000001:' + cnonce + ':auth:' + ha2;
var z = crypto.createHash('md5').update(kd).digest('hex');
console.log(z); // d388dad90d4bbd760a152321f2143af7
Note that the sample data in the code snippet is not from the website linked in the question, but from RFC2831 (Using Digest Authentication as a SASL Mechanism), chapter 4:
charset=utf-8,username="chris",realm="elwood.innosoft.com",nonce="OA6MG9tEQGm2hh",nc=00000001,cnonce="OA6MHXh6VqTrRk",digest-uri="imap/elwood.innosoft.com",response=d388dad90d4bbd760a152321f2143af7,qop=auth
The code returns the result specified in response, which confirms the correctness of the calculation.
I don't think the example on the website linked in the question is consistent: its expected result is identical to that of the RFC example, although some of the relevant input data for the digests differs.
A coincidental match is extremely unlikely (see the collision probability of MD5), so with high probability the input data and expected result on the linked website do not belong together. In addition, the password is not specified there.
The applied algorithm is described in detail in chapter 2.1.2.1 (Response-value). Note that the RFC example does not specify authzid, in contrast to the example on the linked website; according to chapter 2.1.2.1, ':' + authzid is then simply omitted.
As already mentioned in the question, MD5 and thus the algorithm used here are deprecated (see also RFC6331).

Related

Is it possible to create a JWT authentication or something similar in Node.js without using frameworks?

I'm creating a website and I understand that the best authentication is through JWT, but I'm not a fan of frameworks because I like to go deep into the code and understand everything in my files. So I'm asking whether someone has done this, or something similar, in pure Node.js and could give me an explanation of how to do it.
Thanks
Yes, of course it's possible; just consider how frameworks are made. There's no magic involved, just knowledge and a lot of JavaScript code.
You can find the sources of most frameworks on GitHub and study them there.
As a first step, you should familiarize yourself with the basics of JWT, e.g. with the help of this introduction and by reading RFC7519.
You'll find out that a JWT basically consists of base64url encoded JSON objects and a base64url encoded signature.
The simplest signature algorithm is HS256 (HMAC-SHA256).
In the jwt.io debugger window, you see in the right column the pseudo code for creating a JWT signature:
HMACSHA256(
  base64UrlEncode(header) + "." +
  base64UrlEncode(payload),
  secret
)
so basically you need to learn:
what information goes into the JWT header and payload (i.e. claims)
how to base64url encode a string or bytearray
how to create a SHA256 hash
how to use the hashing algorithm to create a HMAC.
With this, you already have a basic JWT framework that would allow you to create a signed token and verify the signature.
In the next step you can add "features" like
verification of expiration time
verification of issuer, audience
advanced signing algorithms.
You can use the jwt.io debugger to check whether your token can be decoded and verified.
Depending on your needs, you might use something like this:
export const setJwtToken = (headers, payload) => {
  const base64UrlEncode = str => {
    // Reduce to one byte per character first, so multi-byte UTF-8 characters
    // are encoded correctly (decodeURI(encodeURI(str)) is a no-op and would
    // mishandle non-ASCII input)
    const utf8str = unescape(encodeURIComponent(str))
    const b64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"
    const len = utf8str.length
    let dst = ""
    let i
    for (i = 0; i <= len - 3; i += 3) {
      dst += b64.charAt(utf8str.charCodeAt(i) >>> 2)
      dst += b64.charAt(((utf8str.charCodeAt(i) & 3) << 4) | (utf8str.charCodeAt(i + 1) >>> 4))
      dst += b64.charAt(((utf8str.charCodeAt(i + 1) & 15) << 2) | (utf8str.charCodeAt(i + 2) >>> 6))
      dst += b64.charAt(utf8str.charCodeAt(i + 2) & 63)
    }
    if (len % 3 == 2) {
      dst += b64.charAt(utf8str.charCodeAt(i) >>> 2)
      dst += b64.charAt(((utf8str.charCodeAt(i) & 3) << 4) | (utf8str.charCodeAt(i + 1) >>> 4))
      dst += b64.charAt((utf8str.charCodeAt(i + 1) & 15) << 2)
    } else if (len % 3 == 1) {
      dst += b64.charAt(utf8str.charCodeAt(i) >>> 2)
      dst += b64.charAt((utf8str.charCodeAt(i) & 3) << 4)
    }
    return dst
  }
  // Don't shadow the parameters: stringify into new names
  const encodedHeaders = base64UrlEncode(JSON.stringify(headers))
  const encodedPayload = base64UrlEncode(JSON.stringify(payload))
  const token = `${encodedHeaders}.${encodedPayload}`
  console.log(token)
  return token
}

How to get byte length of a base64 string in node.js?

I'd like to calculate the size of an image file received as base64 encoded string like:
'data:image/png;base64,aBdiVBORw0fKGgoAAA...'
in order to make sure that the file is not larger than a certain size, say 5 MB.
How can I achieve this in node.js?
I've seen similar question here but could not apply the answer in my node app as I get:
SyntaxError: Unexpected token :
You need to remove the data:... prefix and decode with the base64 encoding (otherwise Buffer.from treats the string as UTF-8 and you get the length of the encoded string, not the decoded size):
const img = 'data:image/png;base64,aBdiVBORw0fKGgoAAA';
const base64Data = img.substring(img.indexOf(',') + 1);
const buffer = Buffer.from(base64Data, 'base64');
console.log("Byte length: " + buffer.length);
console.log("MB: " + buffer.length / 1e+6);
Actually, there is not much to it.
If you know the length of the Base64 string, all you have to do is divide by roughly 1.37: Base64 encodes every 3 bytes of input as 4 output characters (a ratio of 4/3 ≈ 1.33), and with padding and line breaks the overall overhead is commonly quoted as about 37%.
Because the encoding is linear, this ratio holds regardless of input size.
For more info see here.
To calculate the string size that you already have you can use this solution:
function byteCount(s) {
return encodeURI(s).split(/%..|./).length - 1;
}
and divide the result by 1.37.
var src = "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7";
var base64str = src.substring(src.indexOf(',') + 1)
var decoded = atob(base64str);
console.log("FileSize: " + decoded.length);
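If you want the decoded size without actually decoding anything, you can also compute it directly from the string length, since every 4 base64 characters encode 3 bytes. A small sketch (it assumes standard '=' padding and no embedded whitespace; the function name is my own):

```javascript
// Decoded byte count from a base64 string's length alone
function base64ByteLength(b64) {
  // Each trailing '=' means one fewer byte in the decoded output
  var padding = (b64.match(/=+$/) || [''])[0].length;
  return (b64.length * 3) / 4 - padding;
}

console.log(base64ByteLength('aGVsbG8=')); // 5 ("hello")
```

This avoids allocating a buffer just to check a size limit.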

Retrieve only the Whole Number

I want to retrieve the whole number part of 1.027863, so the code/function should give me 1 as the answer.
My requirement is to report the number of SMS parts in a string by splitting it into blocks of 153 characters.
So if there are 307 characters, 307/153 ≈ 2.007; I take 2 using parseInt() and add 1, indicating that there are 3 parts to the SMS.
However, if there are 306 characters, which is an exact multiple of 153, my code still adds 1, making the answer wrong.
Sample of what I have done :
var msg = "Hello test SMS for length and split it into sections of 153 characters for Bulk SMS system to send to respective customers and indicate the count of messages. SMS file is getting generated as per attached file. After successful campaign Launch, file can be downloaded from Documents view in Campaign screen.";
var i = 0;
var pat = new RegExp("^[a-zA-Z0-9]+$");
var flg = pat.test(msg);
if (flg == true) // checking the language to be English
{
    var length = msg.length;
    if (length > 160)
    {
        var msgcount = length / 153;
        var sAbs = parseInt(msgcount);
        var sTot = sAbs + 1;
        TheApplication().RaiseErrorText("Character " + length + " / 670 ( " + sTot + " msg)");
    }
}
Where RaiseErrorText is used to Display in the format:
Characters 307 / 1530 (3 msg)
Maybe there is a better way to write this. Any suggestions experts?
Instead of your:
var msgcount = length / 153;
var sAbs = parseInt(msgcount);
var sTot = sAbs + 1;
You could just go:
var sTot = Math.ceil(length / 153);
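A quick check with the two lengths from the question shows why Math.ceil handles the exact-multiple case correctly:

```javascript
// One SMS part holds 153 characters; Math.ceil avoids the off-by-one
// for lengths that are exact multiples of 153
var partsFor = function (length) { return Math.ceil(length / 153); };

console.log(partsFor(307)); // 3 (307 / 153 ≈ 2.007, rounded up)
console.log(partsFor(306)); // 2 (exact multiple, no extra part added)
```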

Querying on bson fields in Mongo DB using the native mongodb module for node

I have a mongo collection which has been filled with documents by a program using the Mongo DB C# driver.
If I run a find
client.connect('mongodb://127.0.0.1:27017/foo', function(err, db) {
    var things = db.collection('things');
    things.find({}, ['ThingId']).limit(1).toArray(function(err, docs) {
        console.log(docs[0]);
    });
});
and look at what is stored then I see something like
{ _id: 1234235341234,
ThingID:
{ _bsontype: 'Binary',
sub_type: 3,
position: 16,
buffer: <Buffer a2 96 8d 7f fa e4 a4 48 b4 80 4a 19 f3 32 df 8e> }}
I've read the documentation and tried things like:
console.log(mongojs.Binary(docs[i].SessionId.buffer, 3).value());
but I can't print the ThingId as a UUID string to the console
and I definitely can't query on it!
My goal is to query by passing in the GUID strings to find so I can select documents using Ids I know the C# generated (and can see using RoboMongo)
Any help appreciated hugely!
Update:
As pointed out by @wes-widner, the MongoDB C# driver team have a UUID helper js file that helps convert between different UUIDs, and we use that in RoboMongo to query directly. But the BinData it uses is only available in the mongo shell, and I'm not aware of how to access it from node.
The linked answer shows how to query using the uuidhelpers and BinData in the mongo shell; essentially what I'm asking is how to do that within node.
Not really sure if this is what you are looking for, but it was what I was looking for when I got to this page. I have UUIDs created with java.util.UUID/fromString as primary keys in Mongo, and I want to use the normal string UUID in the UI. I am using node-mongodb-native.
var Binary = require('mongodb').Binary;
var toUUID, toBinData;

module.exports.toUUID = toUUID = function(binId) {
    var hex = binId.toString('hex');
    // Keep the expression on the same line as `return`; a line break here
    // would trigger automatic semicolon insertion and return undefined
    return hex.substr(0, 8) + '-' +
        hex.substr(8, 4) + '-' +
        hex.substr(12, 4) + '-' +
        hex.substr(16, 4) + '-' +
        hex.substr(20, 12);
};

module.exports.toBinData = toBinData = function(uuid) {
    var buf = Buffer.from(uuid.replace(/-/g, ''), 'hex'); // new Buffer() is deprecated
    return new Binary(buf, Binary.SUBTYPE_UUID_OLD);
};
Update
It turned out that while the above works just fine (since it applies the same conversion in both directions), it does not produce the same string UUID that I see in my Clojure code. But the same uuidhelpers come to the rescue: the code below works for Java legacy UUIDs.
var Binary = require('mongodb').Binary;
var toJUUID, toBinData;

module.exports.toJUUID = toJUUID = function(binId) {
    var hex = binId.buffer.toString('hex');
    var msb = hex.substr(0, 16);
    var lsb = hex.substr(16, 16);
    // Java legacy UUIDs store each 8-byte half in reversed byte order
    msb = msb.substr(14, 2) + msb.substr(12, 2) + msb.substr(10, 2) + msb.substr(8, 2) + msb.substr(6, 2) + msb.substr(4, 2) + msb.substr(2, 2) + msb.substr(0, 2);
    lsb = lsb.substr(14, 2) + lsb.substr(12, 2) + lsb.substr(10, 2) + lsb.substr(8, 2) + lsb.substr(6, 2) + lsb.substr(4, 2) + lsb.substr(2, 2) + lsb.substr(0, 2);
    hex = msb + lsb;
    return hex.substr(0, 8) + '-' + hex.substr(8, 4) + '-' + hex.substr(12, 4) + '-' + hex.substr(16, 4) + '-' + hex.substr(20, 12);
};

module.exports.toBinData = toBinData = function(uuid) {
    var hex = uuid.replace(/[{}-]/g, "");
    var msb = hex.substr(0, 16);
    var lsb = hex.substr(16, 16);
    msb = msb.substr(14, 2) + msb.substr(12, 2) + msb.substr(10, 2) + msb.substr(8, 2) + msb.substr(6, 2) + msb.substr(4, 2) + msb.substr(2, 2) + msb.substr(0, 2);
    lsb = lsb.substr(14, 2) + lsb.substr(12, 2) + lsb.substr(10, 2) + lsb.substr(8, 2) + lsb.substr(6, 2) + lsb.substr(4, 2) + lsb.substr(2, 2) + lsb.substr(0, 2);
    hex = msb + lsb;
    return new Binary(Buffer.from(hex, 'hex'), Binary.SUBTYPE_UUID_OLD);
};
Following the same copy/paste method you can strip working C# code from the helpers. You only need to handle the buffers a little differently.
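For the C# legacy layout specifically (BSON binary subtype 3, which matches the sub_type shown in the question), the byte reordering differs from the Java one above: only the first three GUID groups are stored little-endian. A sketch of just the hex reordering, under that assumption (the function name is my own):

```javascript
// Reorder a canonical GUID string into C# legacy (subtype 3) byte order.
// C#'s Guid stores the first 4-byte group and the two following 2-byte
// groups little-endian; the remaining 8 bytes keep their order.
function csharpLegacyHexFromGuid(guid) {
  var hex = guid.replace(/[{}-]/g, '');
  var swap = function (s) { return s.match(/../g).reverse().join(''); };
  return swap(hex.substr(0, 8)) +
         swap(hex.substr(8, 4)) +
         swap(hex.substr(12, 4)) +
         hex.substr(16);
}

console.log(csharpLegacyHexFromGuid('00112233-4455-6677-8899-aabbccddeeff'));
// 33221100554477668899aabbccddeeff
```

The resulting hex can then be wrapped in a Binary with Binary.SUBTYPE_UUID_OLD for querying, mirroring the toBinData helper above.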

Different output from toBase64() in CFML on 2 different machines

FINAL EDIT: SOLVED, upgrading local dev to railo 3.3.4.003 resolved the issue.
I have to RC4 encrypt some strings and have them base64 encoded and I'm running into a situation where the same input will generate different outputs on 2 different dev setups.
For instance, if I have the string test2@mail.com
On one machine (DEV-1) I'll get: DunU+ucIPz/Z7Ar+HTw=
and on the other (DEV-2) it'll be: DunU+ucIlZfZ7Ar+HTw=
First, I'm rc4 encrypting it through a function found here.
Next I'm feeding it to: toBase64( my_rc4_encrypted_data, "iso-8859-1")
As far as I can tell the rc4 encryption output is the same on both (or I'm missing something).
Below are SERVER variables from both machines as well as the encryption function.
Is this something we'll simply have to live with, or is there something I can do to handle it properly (for lack of a better phrase)?
I'm concerned that this will bite me in the future and wonder if it can be averted.
edit 1:
Output from my_rc4_encrypted_data.getBytes() returns:
dev-1:
Native Array (byte[])
14--23--44--6--25-8-63-63--39--20-10--2-29-60
dev-2:
Native Array (byte[])
14--23--44--6--25-8-63-63--39--20-10--2-29-60
(no encoding passed to getBytes() )
DEV-1 (remote)
server.coldfusion
productname Railo
productversion 9,0,0,1
server.java
archModel 64
vendor Sun Microsystems Inc.
version 1.6.0_26
server.os
arch amd64
archModel 64
name Windows Server 2008 R2
version 6.1
server.railo
version 3.3.2.002
server.servlet
name Resin/4.0.18
DEV-2 (local)
server.coldfusion
productname Railo
productversion 9,0,0,1
server.java
vendor Oracle Corporation
version 1.7.0_01
server.os
arch x86
name Windows 7
version 6.1
server.railo
version 3.2.2.000
server.servlet
name Resin/4.0.18
RC4 function:
function RC4(strPwd, plaintxt) {
    var sbox = ArrayNew(1);
    var key = ArrayNew(1);
    var tempSwap = 0;
    var a = 0;
    var b = 0;
    var intLength = len(strPwd);
    var temp = 0;
    var i = 0;
    var j = 0;
    var k = 0;
    var cipherby = 0;
    var cipher = "";
    for (a = 0; a lte 255; a = a + 1) {
        key[a + 1] = asc(mid(strPwd, (a MOD intLength) + 1, 1));
        sbox[a + 1] = a;
    }
    for (a = 0; a lte 255; a = a + 1) {
        b = (b + sbox[a + 1] + key[a + 1]) Mod 256;
        tempSwap = sbox[a + 1];
        sbox[a + 1] = sbox[b + 1];
        sbox[b + 1] = tempSwap;
    }
    for (a = 1; a lte len(plaintxt); a = a + 1) {
        i = (i + 1) mod 256;
        j = (j + sbox[i + 1]) Mod 256;
        temp = sbox[i + 1];
        sbox[i + 1] = sbox[j + 1];
        sbox[j + 1] = temp;
        k = sbox[((sbox[i + 1] + sbox[j + 1]) mod 256) + 1];
        cipherby = BitXor(asc(mid(plaintxt, a, 1)), k);
        cipher = cipher & chr(cipherby);
    }
    return cipher;
}
Leigh wrote:
But be sure to use the same encoding in your test ie
String.getBytes(encoding) (Edit) If you omit it, the jvm default is
used.
Leigh is right - RAILO-1393 resulted in a change to toBase64 related to charset encodings in 3.3.0.017, which is between the 3.3.2.002 and 3.2.2.000 versions you are using.
As far as I can tell the rc4 encryption output is the same on both (or I'm missing something). Below are SERVER variables from both machines as well as the encryption function.
I would suggest saving the output to two files and then comparing the file sizes or, even better, using a file comparison tool. Base64 encoding is a standard approach to converting binary data into string data.
Assuming that your binary files are both exactly 100% the same, on both of your servers try converting the data to base 64 and then back to binary again. I would predict that only one (or neither) of the servers are able to convert the data back to binary again. At that point, you should have a clue about which server is causing your problem and can dig in further.
If they both can reverse the base 64 data to binary and the binary is correct on both servers... well, I'm not sure.
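The reference output can actually be pinned down outside CFML. Transcribing the signed bytes from the question's getBytes() dump into a buffer and base64 encoding them (sketched in Node.js here only because the rest of this page uses Node; the bytes are taken from the question) reproduces DEV-1's result, which supports the conclusion that DEV-1 was already correct:

```javascript
// Signed bytes from the dump "14--23--44--6--25-8-63-63--39--20-10--2-29-60",
// i.e. 14, -23, -44, -6, -25, 8, 63, 63, -39, -20, 10, -2, 29, 60, as unsigned hex
var rc4Bytes = Buffer.from([0x0e, 0xe9, 0xd4, 0xfa, 0xe7, 0x08, 0x3f,
                            0x3f, 0xd9, 0xec, 0x0a, 0xfe, 0x1d, 0x3c]);
console.log(rc4Bytes.toString('base64')); // DunU+ucIPz/Z7Ar+HTw=
```

That matches the DEV-1 output from the question, so it was DEV-2 (the older Railo) producing the wrong encoding.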