CryptoJS with hex key not decrypting properly - node.js

I have a dataset that was encrypted and that I'm trying to decrypt with CryptoJS. I've tried various combinations of things, but for some reason the result is not what I'm expecting. I have provided the key below and also the text I want to decrypt. I'm expecting msg1 to be 32 characters long, but I keep getting 48. It's as if it's padding it with an extra 16 characters.
Thanks in advance for the help.
key = 'd13484fc2f28fd0426ffd201bbd2fe6ac213542d28a7ca421f17adc0cf234381';
text = '8bf3955488af91feb7bd87220910cee0';
decrypt(text: string): void {
    let msg1 = CryptoJS.AES.decrypt(text, CryptoJS.enc.Hex.parse(key), { mode: CryptoJS.mode.ECB, padding: CryptoJS.pad.NoPadding });
    msg1 = CryptoJS.enc.Hex.stringify(msg1);
}

Solving it is pretty simple, but reading the docs and the code, I'm not quite clear why.
This is clearly wrong:
let msg1 = CryptoJS.AES.decrypt(text, CryptoJS.enc.Hex.parse(key), { mode: CryptoJS.mode.ECB, padding: CryptoJS.pad.NoPadding});
Given your description, you are expecting the byte sequence represented by the hex digits "8bf3955488af91feb7bd87220910cee0" to be the ciphertext. But that's not what you're passing. You're passing the characters. So when it decrypts, the first byte is the ASCII value of '8' (0x38), not 0x8b. Given that, you should be parsing the hex like this:
let msg1 = CryptoJS.AES.decrypt(CryptoJS.enc.Hex.parse(text), ...
But, for reasons I'm having trouble understanding, that doesn't work. decrypt expects Base64 (or at least it will accept Base64). I can't find any documentation that says this, and the code creates the decrypt function magically in a way that I don't fully understand, and this is why I really hate doing crypto work in JavaScript.
That's out of my system now, so let's get to the answer:
cipher = CryptoJS.enc.Base64.stringify(CryptoJS.enc.Hex.parse(text))
let msg1 = CryptoJS.AES.decrypt(cipher, CryptoJS.enc.Hex.parse(key), { mode: CryptoJS.mode.ECB, padding: CryptoJS.pad.NoPadding});
And that should give you the result you're expecting.
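Putting it together, here's a minimal sketch of the whole fix (assuming the crypto-js npm package; key and text are the values from the question):
const CryptoJS = require('crypto-js');
const key = 'd13484fc2f28fd0426ffd201bbd2fe6ac213542d28a7ca421f17adc0cf234381';
const text = '8bf3955488af91feb7bd87220910cee0';
// Re-encode the hex ciphertext as Base64, which decrypt accepts as a string
const cipher = CryptoJS.enc.Base64.stringify(CryptoJS.enc.Hex.parse(text));
const msg1 = CryptoJS.AES.decrypt(cipher, CryptoJS.enc.Hex.parse(key), { mode: CryptoJS.mode.ECB, padding: CryptoJS.pad.NoPadding });
// 32 hex characters = 16 bytes = exactly one AES block
console.log(CryptoJS.enc.Hex.stringify(msg1));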

Given:
key = 'd13484fc2f28fd0426ffd201bbd2fe6ac213542d28a7ca421f17adc0cf234381';
text = '8bf3955488af91feb7bd87220910cee0';
Decrypting the text with the key actually produces: C5640000B550000079320000217C0000.
See AES Calc
Verify the encoding that CryptoJS.AES.decrypt requires for its inputs, and check its output encoding.
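As an independent cross-check, here's a sketch that decrypts the same raw bytes with Node's built-in crypto module (AES-256-ECB, padding disabled):
const crypto = require('crypto');
const key = Buffer.from('d13484fc2f28fd0426ffd201bbd2fe6ac213542d28a7ca421f17adc0cf234381', 'hex');
const text = Buffer.from('8bf3955488af91feb7bd87220910cee0', 'hex');
// ECB takes no IV, so pass null; disable padding to see the raw block
const decipher = crypto.createDecipheriv('aes-256-ecb', key, null);
decipher.setAutoPadding(false);
const plain = Buffer.concat([decipher.update(text), decipher.final()]);
console.log(plain.toString('hex')); // c5640000b550000079320000217c0000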

Related

Crypto.decipher.final for 'aes-256-cbc' algorithm with invalid key fails with bad decrypt

I am able to use the Node.js Crypto module to encrypt and decrypt a message using the Cipher and Decipher classes with the 'aes-256-cbc' algorithm, like so:
var crypto = require('crypto');
var cipherKey = crypto.randomBytes(32); // aes-256 => key length is 256 bits => 32 bytes
var cipherIV = crypto.randomBytes(16); // aes block size = initialization vector size = 128 bits => 16 bytes
var cipher = crypto.createCipheriv('aes-256-cbc', cipherKey, cipherIV);
var message = 'Hello world';
var encrypted = cipher.update(message, 'utf8', 'hex') + cipher.final('hex');
console.log('Encrypted \'' + message + '\' as \'' + encrypted + '\' with key \''+ cipherKey.toString('hex') + '\' and IV \'' + cipherIV.toString('hex') + '\'');
// Outputs: Encrypted 'Hello world' as '2b8559ce4227c3c3c200ea126cb50957' with key '50f7a656cfa3c4f90796a972b2f6eedf41b589da705fdec95b9d25c180c16cf0' and IV '6b28c13d63af14cf05059a2a2caf370c'
var decipher = crypto.createDecipheriv('aes-256-cbc', cipherKey, cipherIV);
var decrypted = decipher.update(encrypted, 'hex', 'utf8') + decipher.final('utf8');
console.log('Decrypted \'' + encrypted + '\' as \'' + decrypted + '\' with key \''+ cipherKey.toString('hex') + '\' and IV \'' + cipherIV.toString('hex') + '\'');
// Outputs: Decrypted '2b8559ce4227c3c3c200ea126cb50957' as 'Hello world' with key '50f7a656cfa3c4f90796a972b2f6eedf41b589da705fdec95b9d25c180c16cf0' and IV '6b28c13d63af14cf05059a2a2caf370c'
However, when I try to decrypt the message using a wrong key to, perhaps naively, demonstrate that an attacker will not be able to decrypt the message unless the key is known, I get Error: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt at Decipheriv.final (internal/crypto/cipher.js:164:28):
var differentCipherKey = crypto.randomBytes(32);
var decipherDifferentKey = crypto.createDecipheriv('aes-256-cbc', differentCipherKey, cipherIV);
decrypted = decipherDifferentKey.update(encrypted, 'hex', 'utf8') + decipherDifferentKey.final('utf8');
What I was hoping to get is unintelligible text. bad decrypt was featured in other SO questions, either regarding an openssl version mismatch between encrypting and decrypting or a too-short initialization vector, but I believe my case is a different scenario. Does AES somehow know that the encrypted text was generated with a different key?
Tested on node v12.13.0 on Windows 10 and also in repl.it running v10.16.0.
EDIT:
As suggested in the answers, the issue was with the default padding; in order to see unintelligible output, one needs to disable auto-padding on both the cipher and the decipher and pad manually:
var requirePadding = 16 - Buffer.byteLength(message, 'utf8');
var paddedMessage = Buffer.alloc(requirePadding, 0).toString('utf8') + message;
cipher.setAutoPadding(false)
Full example here
Another answer has correctly identified the issue as a padding problem. I might summarize the issue like so:
Block ciphers can only operate on data that has a length that is a multiple of the cipher's block size. (AES has a block size of 128 bits.)
In order to make variously-sized inputs conform to the block size, the library adds padding. This padding has a particular format (For example, when adding padding of length N, repeat the value N for the last N bytes of the input.)
When decrypting, the library checks that correct padding exists. Since your badly-decrypted data is arbitrary noise, it is very unlikely to have a valid pad.
You may turn this check off with decipher.setAutoPadding(false) before you do update. However, note that this will include the padding in your decrypted output. Here is a modified repl.it instance that uses setAutoPadding.
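For reference, a minimal sketch of that approach against the question's setup (cipherIV and encrypted are the variables from the question's code):
var differentCipherKey = crypto.randomBytes(32);
var decipherDifferentKey = crypto.createDecipheriv('aes-256-cbc', differentCipherKey, cipherIV);
// Skip the padding check so final() no longer throws "bad decrypt"
decipherDifferentKey.setAutoPadding(false);
var decrypted = decipherDifferentKey.update(encrypted, 'hex', 'utf8') + decipherDifferentKey.final('utf8');
console.log(decrypted); // unintelligible text, with the garbled padding bytes included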
CBC mode requires padding; you did not define one, but the library applied one for you by default. The default is PKCS7 padding, which supports block sizes from 1 up to 255 bytes.
Each padding scheme has a specific format so that it can be removed from the decrypted text unambiguously. For example, if the plaintext is two bytes short of the block size (16 bytes in AES), then PKCS7 padding adds 0202 (in hex), indicating that 2 bytes were added, each with a value equal to the number of bytes added. If 5 bytes are missing, it adds 0505050505, and so on. In the below, xy is an arbitrary byte.
xyxyxyxyxyxyxyxyxyxyxyxyxyxyxy01
xyxyxyxyxyxyxyxyxyxyxyxyxyxy0202
xyxyxyxyxyxyxyxyxyxyxyxyxy030303
...
xyxy0E0E0E0E0E0E0E0E0E0E0E0E0E0E
xy0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F
and if the last block is a full block, a new block completely filled with padding
xyxyxyxyxyxyxyxyxyxyxyxyxyxyxyxy 10101010101010101010101010101010
After the decryption, the padding is checked first. If the padding does not have the correct format as specified in RFC 2315, then one can say that there is a padding error.
In this case, the library checks the padding while decrypting and warns you. To prevent padding oracle attacks, you don't get an "incorrect padding" warning; you get a generic bad decrypt.
The library only knows whether the key results in a valid padding, nothing more. There may even be more than one key that results in a valid padding, albeit with negligible probability; this is where integrity checks are helpful.
In modern cryptography, we don't use CBC mode anymore. We prefer Authenticated Encryption (AE) modes like AES-GCM or ChaCha20-Poly1305. AE modes provide confidentiality, integrity, and authentication in a bundle.
Galois/Counter Mode (GCM) internally uses CTR mode, in which there is no padding; therefore it is free from padding oracle attacks.
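For illustration, here's a minimal AES-GCM sketch in Node (the variable names are mine, not from the question); with a wrong key the authentication check fails explicitly instead of leaking padding information:
const crypto = require('crypto');
const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(12); // a 96-bit nonce is conventional for GCM
const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const encrypted = Buffer.concat([cipher.update('Hello world', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag(); // covers integrity and authenticity
const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
console.log(decrypted.toString('utf8')); // Hello world
// With a wrong key or a tampered ciphertext, final() throws an
// authentication error instead of returning garbage.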

How to convert padbytes function to coldfusion

I have the following code in node and I am trying to convert to ColdFusion:
// a correct implementation of PKCS7. The rijndael js has a PKCS7 padding already implemented
// however, it incorrectly pads expecting the phrase to be multiples of 32 bytes when it should pad based on multiples
// 16 bytes. Also, every javascript implementation of PKCS7 assumes utf-8 char encoding. C# however is unicode or utf-16.
// This means that chars need to be treated in our code as 2 byte chars and not 1 byte chars.
function padBytes(string){
    const strArray = [...new Buffer(string, 'ucs2')];
    const need = 16 - ((strArray.length) % 16);
    for(let i = 0; i < need; i++) {
        strArray.push(need);
    }
    return Buffer.from(strArray);
}
I'm trying to understand exactly what this function is doing in order to convert it. As I understand it, it's converting the string to UTF-16 (UCS2) and then adding padding to each character. However, I don't understand why the need variable has the value it does, nor exactly how to achieve that in CF.
I also don't understand why it's only pushing the same value into the array over and over again. For starters, in my example script the string is 2018-06-14T15:44:10Z testaccount. The string array length is 64. I'm not sure how to achieve even that in CF.
I've tried character encoding, converting to binary, and converting to UTF-16, and I just don't understand the JS function well enough to replicate it in ColdFusion. I feel I'm missing something with the encoding.
EDIT:
The selected answer solves this problem, but because I was eventually going to use the input data for encryption, the easier method was to not use this function at all and instead do the following:
<cfset stringToEncrypt = charsetDecode(input,"utf-16le") />
<cfset variables.result = EncryptBinary(stringToEncrypt, theKey, theAlgorithm, theIV) />
Update:
We followed up in chat, and it turns out the value is ultimately used with encrypt(). Since encrypt() already handles padding automatically, there's no need for the custom padBytes() function. However, it did require switching to the less commonly used encryptBinary() function to maintain the UTF-16 encoding. The regular encrypt() function only handles UTF-8, which produces totally different results.
Trycf.com Example:
// Result with sample key/iv: P22lWwtD8pDrNdQGRb2T/w==
result = encrypt("abc", theKey, theAlgorithm, theEncoding, theIV);
// Result with sample key/iv: LJCROj8trkXVq1Q8SQNrbA==
input = charsetDecode("abc", "utf-16le");
result = binaryEncode(encryptBinary(input, theKey, theAlgorithm, theIV), "base64");
it's converting the string to utf-16 (ucs2) and then adding padding to each character. ... I feel I'm missing something with the encoding.
Yes, the first part is decoding the string as UTF-16 (or UCS2; the two are slightly different). As to what you're missing, you're not the only one. I couldn't get it to work either until I found this comment, which explained that "UTF-16" prepends a BOM. To omit the BOM, use either "UTF-16BE" or "UTF-16LE", depending on the endianness needed.
why it's only pushing the same value into the array over and over again.
Because that's the definition of PKCS7 padding. Instead of padding with something like nulls or zeroes, it calculates how many bytes of padding are needed, then uses that number as the padding value. For example, say a string needs an extra three bytes of padding: PKCS7 appends the value 3, three times: "string" + "3" + "3" + "3".
The rest of the code is similar in CF. Unfortunately, the results of charsetDecode() aren't mutable. You must build a separate array to hold the padding, then combine the two.
Note, this example combines the arrays using CF2016-specific syntax, but it could also be done with a simple loop instead.
Function:
function padBytes(string text){
    var combined = [];
    var padding = [];
    // decode as utf-16
    var decoded = charsetDecode(arguments.text, "utf-16le");
    // how many padding bytes are needed?
    var need = 16 - (arrayLen(decoded) % 16);
    // fill array with any padding bytes
    for(var i = 0; i < need; i++) {
        padding.append(need);
    }
    // concatenate the two arrays
    // CF2016+ specific syntax. For earlier versions, use a loop
    combined = combined.append(decoded, true);
    combined = combined.append(padding, true);
    return combined;
}
Usage:
result = padBytes("2018-06-14T15:44:10Z testaccount");
writeDump(binaryEncode( javacast("byte[]", result), "base64"));

How to convert a hex string into an ASCII string in Lua

I have no knowledge at all about Lua, and I'm trying to craft a small script for nginx.
I'm using the following library (https://github.com/openresty/lua-resty-string) to encrypt some data. Specifically I'm using the code for AES 256 CBC (SHA-512, salted) encryption and storing the hex-encoded encrypted string as shown in the example.
The issue now is that I need to get that hex string back to the decrypt method which expects an ASCII string.
This is an example of the encrypted hex string:
fdbcc47fe5825d49ac3429d4f8408fa4b6528dd99d938f122ee7f00ab71ae0c5c73d29d4f54ea1fbefe706b5dca04f6b6c6b8b96d9807ef58eaba07c6c6cefaf6ad8673b43a4e243fb2912fb4ff93de6488c4795ebb09ecd7a40b7c9dc2003be4ff93425d2d74688208fa4d2a8d22f32490666550f4b01340de708d7aa5bc8468d171da400f59fcff4e7d371d7ab9b48fdfde29aefc0af78b2f934927a7713994c1e8f9435067c851efc5d300405c74d
I just had to write one recently for pretty much the same reason. Abuse gsub: capture each pair of characters and replace them with pre-calculated values from a hex-number-to-character map.
-- Needs to be only done once
local hex_to_char = {}
for idx = 0, 255 do
    hex_to_char[("%02X"):format(idx)] = string.char(idx)
    hex_to_char[("%02x"):format(idx)] = string.char(idx)
end
-- Sometime later
str = "fdbcc47fe5825d49ac3429d4f8408fa4b6528dd99d938f122ee7f00ab71ae0c5c73d29d4f54ea1fbefe706b5dca04f6b6c6b8b96d9807ef58eaba07c6c6cefaf6ad8673b43a4e243fb2912fb4ff93de6488c4795ebb09ecd7a40b7c9dc2003be4ff93425d2d74688208fa4d2a8d22f32490666550f4b01340de708d7aa5bc8468d171da400f59fcff4e7d371d7ab9b48fdfde29aefc0af78b2f934927a7713994c1e8f9435067c851efc5d300405c74d"
-- gsub returns both the string and the match count; the extra
-- parentheses select just the string
print((str:gsub("(..)", hex_to_char)))

Is this kind of encryption "safe"?

I must first say I have never studied cryptography, and everything I know on this topic is just basic notions.
We were looking at a fast and easy way to encrypt some data (to be stored into a database) using a password.
I know the "safest" algorithm is AES, but it's probably too complicated for us and I know it requires us to obtain authorizations from the US government, etc.
We thought about this (simple) algorithm, which reminds me (but I may be wrong) of a sort of "One Time Pad".
(it's not written in any specific language... it's just the idea :) )
// The string we need to encrypt
string data = "hello world";
// Long string of random bytes that will be generated the first time we need to encrypt something
string randomData = "aajdfskjefafdsgsdewrbhf";
// The passphrase the user selected
string passphrase = "foo";
// Let's generate the encryption key, using randomData XOR passphrase (repeating this one)
string theKey = "";
j = 0;
for(i = 0; i < randomData.length; i++)
{
    theKey += randomData[i] ^ passphrase[j];
    j++;
    if(j == passphrase.length) j = 0;
}
// Encrypt the data, using data XOR theKey (with theKey.length >= data.length)
string encryptedData = "";
for(i = 0; i < data.length; i++)
{
    encryptedData += data[i] ^ theKey[i];
}
On disk, we will then store only randomData and encryptedData.
The passphrase will be requested from the user every time.
How safe will an algorithm like this be?
Except for brute force, are there other ways this could be cracked? I don't think statistical analysis will work on this, will it?
Is it "as safe as" a One Time Pad?
Thank you!
You can just import an AES library and let it do all the heavy lifting. Authorizations from the US government? AES is a public algorithm, and the US government itself uses it.
No, this is not secure.
If the random data is stored alongside the encrypted data, then it is simply equivalent to XORing with the passphrase: this is because the attacker can simply XOR the encrypted data with the random data, and obtain plaintext XOR passphrase as the result.
This is extremely weak. Statistical analysis would crack it in the blink of an eye. Some diligent pen-and-paper guesswork would probably crack it pretty quickly too.
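To make that attack concrete, here's a sketch in JavaScript (the question's scheme is pseudocode, so the byte handling here is my own):
const data = Buffer.from('hello world', 'utf8');
const randomData = Buffer.from('aajdfskjefafdsgsdewrbhf', 'utf8');
const passphrase = Buffer.from('foo', 'utf8');
// The scheme from the question: data XOR randomData XOR repeating passphrase
const encrypted = Buffer.alloc(data.length);
for (let i = 0; i < data.length; i++) {
    encrypted[i] = data[i] ^ randomData[i] ^ passphrase[i % passphrase.length];
}
// The attack: randomData sits on disk next to the ciphertext, so XOR it away
const recovered = Buffer.alloc(encrypted.length);
for (let i = 0; i < encrypted.length; i++) {
    recovered[i] = encrypted[i] ^ randomData[i];
}
// recovered is now plaintext XOR a repeating 3-byte passphrase -- a
// Vigenere-style cipher that statistical analysis breaks quickly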
The only exception would be if (1) randomData was taken from a truly crypto-strength source, (2) randomData was at least as long as your plaintext data, (3) randomData was never, ever re-used for a different message, and (4) you got rid of passphrase altogether and treated randomData as your key. In that case you'd have what amounts to a one-time pad.
No, it isn't safe. Using XOR with random data and a password this way is completely wrong.
A one-time pad needs the random data (the pad) to be the same length as the data to be encrypted.

Remove trailing "=" when base64 encoding

I am noticing that whenever I base64 encode a string, a "=" is appended at the end. Can I remove this character and then reliably decode it later by adding it back, or is this dangerous? In other words, is the "=" always appended, or only in certain cases?
I want my encoded string to be as short as possible, that's why I want to know if I can always remove the "=" character and just add it back before decoding.
The = is padding.
Wikipedia says:
An additional pad character is allocated which may be used to force the encoded output into an integer multiple of 4 characters (or equivalently when the unencoded binary text is not a multiple of 3 bytes); these padding characters must then be discarded when decoding but still allow the calculation of the effective length of the unencoded text, when its input binary length would not be a multiple of 3 bytes (the last non-pad character is normally encoded so that the last 6-bit block it represents will be zero-padded on its least significant bits; at most two pad characters may occur at the end of the encoded stream).
If you control the other end, you could remove it when in transport, then re-insert it (by checking the string length) before decoding.
Note that the data will not be valid Base64 in transport.
Also, another user pointed out (relevant to PHP users):
Note that in PHP base64_decode will accept strings without padding, hence if you remove it to process it later in PHP it's not necessary to add it back. – Mahn Oct 16 '14 at 16:33
So if your destination is PHP, you can safely strip the padding and decode without fancy calculations.
I wrote part of Apache's commons-codec-1.4.jar Base64 decoder, and in that logic we are fine without padding characters. End-of-file and End-of-stream are just as good indicators that the Base64 message is finished as any number of '=' characters!
The URL-Safe variant we introduced in commons-codec-1.4 omits the padding characters on purpose to keep things smaller!
http://commons.apache.org/codec/apidocs/src-html/org/apache/commons/codec/binary/Base64.html#line.478
I guess a safer answer is, "depends on your decoder implementation," but logically it is not hard to write a decoder that doesn't need padding.
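Node's built-in decoder is a concrete example: Buffer.from tolerates missing padding, as a quick check shows (assuming Node.js):
console.log(Buffer.from('YWJjZA==', 'base64').toString()); // abcd
console.log(Buffer.from('YWJjZA', 'base64').toString()); // abcd -- padding optional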
In JavaScript you could do something like this:
// if this is your Base64 encoded string
var str = 'VGhpcyBpcyBhbiBhd2Vzb21lIHNjcmlwdA==';
// make URL friendly:
str = str.replace(/\+/g, '-').replace(/\//g, '_').replace(/\=+$/, '');
// reverse to original encoding
if (str.length % 4 != 0){
    str += ('===').slice(0, 4 - (str.length % 4));
}
str = str.replace(/-/g, '+').replace(/_/g, '/');
See also this Fiddle: http://jsfiddle.net/7bjaT/66/
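A quick round trip of that snippet in Node, using the sample string above:
const original = 'VGhpcyBpcyBhbiBhd2Vzb21lIHNjcmlwdA==';
let str = original.replace(/\+/g, '-').replace(/\//g, '_').replace(/\=+$/, '');
// URL-friendly form, padding stripped: VGhpcyBpcyBhbiBhd2Vzb21lIHNjcmlwdA
if (str.length % 4 != 0){
    str += ('===').slice(0, 4 - (str.length % 4));
}
str = str.replace(/-/g, '+').replace(/_/g, '/');
console.log(str === original); // true
console.log(Buffer.from(str, 'base64').toString()); // the original plaintext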
= is added for padding. The length of a Base64 string should be a multiple of 4, so one or two = characters are added as necessary.
Read: No, you shouldn't remove it.
On Android I am using this:
Global
String CHARSET_NAME ="UTF-8";
Encode
String base64 = new String(
Base64.encode(byteArray, Base64.URL_SAFE | Base64.NO_PADDING | Base64.NO_CLOSE | Base64.NO_WRAP),
CHARSET_NAME);
return base64.trim();
Decode
byte[] bytes = Base64.decode(base64String,
Base64.URL_SAFE | Base64.NO_PADDING | Base64.NO_CLOSE | Base64.NO_WRAP);
This is equivalent to the following in Java:
Encode
private static String base64UrlEncode(byte[] input)
{
    Base64 encoder = new Base64(true);
    byte[] encodedBytes = encoder.encode(input);
    return StringUtils.newStringUtf8(encodedBytes).trim();
}
Decode
private static byte[] base64UrlDecode(String input) {
    byte[] originalValue = StringUtils.getBytesUtf8(input);
    Base64 decoder = new Base64(true);
    return decoder.decode(originalValue);
}
I have never had problems with the trailing "=", and I am using Bouncy Castle as well.
If you're encoding bytes (at fixed bit length), then the padding is redundant. This is the case for most people.
Base64 consumes 6 bits at a time and produces an 8-bit character that uses only 64 of its 256 possible values.
If your input is 1 byte (8 bits), the output will be 12 bits, the smallest multiple of 6 that 8 bits fit into, with 4 bits extra. If your input is 2 bytes (16 bits), you have to output 18 bits, with 2 bits extra. Matching multiples of 6 against multiples of 8, the remainder can only be 0, 2, or 4 bits.
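A quick way to see all three remainder cases in Node:
// 3 bytes = 24 bits: divides evenly into 6-bit groups, no padding
console.log(Buffer.from('abc').toString('base64')); // YWJj
// 1 byte = 8 bits: 4 bits left over, two pad characters
console.log(Buffer.from('a').toString('base64')); // YQ==
// 2 bytes = 16 bits: 2 bits left over, one pad character
console.log(Buffer.from('ab').toString('base64')); // YWI=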
The padding says to ignore those extra four (==) or two (=) bits; it is there to tell the decoder about your padding.
The padding isn't really needed when you're encoding bytes. A base64 encoder can simply ignore left over bits that total less than 8 bits. In this case, you're best off removing it.
The padding might be of some use for streaming and for arbitrary-length bit sequences, as long as they're a multiple of two. It might also be used for cases where people want to send only the last 4 bits when more bits are remaining, if the remaining bits are all zero. Some people might want to use it to detect incomplete sequences, though it's hardly reliable for that. I've never seen this optimisation in practice. People rarely have these situations; most people use Base64 for discrete byte sequences.
If you see answers suggesting to leave it on, that's not good advice if you're simply encoding bytes: it enables a feature for a set of circumstances you don't have. The only reason to keep it in that case might be to add tolerance for decoders that don't work without the padding. If you control both ends, that's a non-concern.
If you're using PHP the following function will revert the stripped string to its original format with proper padding:
<?php
$str = 'base64 encoded string without equal signs stripped';
$str = str_pad($str, strlen($str) + (4 - ((strlen($str) % 4) ?: 4)), '=');
echo $str, "\n";
Using Python you can remove base64 padding and add it back like this:
from math import ceil
stripped = original.rstrip('=')
original = stripped.ljust(ceil(len(stripped) / 4) * 4, '=')
Yes, there are valid use cases where padding is omitted from a Base 64 encoding.
The JSON Web Signature (JWS) standard (RFC 7515) requires Base64-encoded data to omit padding. It expects:
Base64 encoding [...] with all trailing '=' characters omitted (as permitted by Section 3.2) and without the inclusion of any line breaks, whitespace, or other additional characters. Note that the base64url encoding of the empty octet sequence is the empty string. (See Appendix C for notes on implementing base64url encoding without padding.)
The same applies to the JSON Web Token (JWT) standard (RFC 7519).
In addition, Julius Musseau's answer has indicated that Apache's Base 64 decoder doesn't require padding to be present in Base 64 encoded data.
I do something like this with Java 8+:
private static String getBase64StringWithoutPadding(String data) {
    if (data == null) {
        return "";
    }
    Base64.Encoder encoder = Base64.getEncoder().withoutPadding();
    return encoder.encodeToString(data.getBytes());
}
This method gets an encoder which leaves out padding.
As mentioned in other answers already, padding can be added back after calculations if you need to decode the string later.
On Android, you may have trouble if you want to use the android.util.Base64 class, since it doesn't let you run plain unit tests, only instrumented tests that require the Android environment.
On the other hand, if you use java.util.Base64, the compiler warns you that your minimum SDK may be too low (below 26) to use it.
So I suggest Android developers use
implementation "commons-codec:commons-codec:1.13"
Encoding object
fun encodeObjectToBase64(objectToEncode: Any): String {
    val objectJson = Gson().toJson(objectToEncode).toString()
    return encodeStringToBase64(objectJson.toByteArray(Charsets.UTF_8))
}
fun encodeStringToBase64(byteArray: ByteArray): String {
    return Base64.encodeBase64URLSafeString(byteArray).toString() // encode with no padding
}
Decoding to Object
fun <T> decodeBase64Object(encodedMessage: String, encodeToClass: Class<T>): T {
    val decodedBytes = Base64.decodeBase64(encodedMessage)
    val messageString = String(decodedBytes, StandardCharsets.UTF_8)
    return Gson().fromJson(messageString, encodeToClass)
}
Of course, you may omit the Gson parsing and pass your string, converted to a ByteArray, straight into the method.
