Generating Alternative Initial Value while wrapping keys with AES - node.js

I am following the instructions at https://datatracker.ietf.org/doc/html/rfc5649#section-3 and have gotten to the point where I need to generate LSB(32, A) for the Alternative Initial Value (AIV). I am using Node.js with buffers to implement the algorithm. My understanding is that the 32 bits referenced in the article correspond to a buffer of length 4 (buffer.length === 4). I have converted the key to a buffer and padded it with 8 - (length % 8) zero-valued bytes, as indicated in the article. The thing I have not been able to figure out is the value of the 32-bit MLI. How do I get the MLI? I just know it stands for Message Length Indicator, but that is all I know about it.
Example:
const key = Buffer.from('base64 key', 'base64'); // the key data to be wrapped
const kek = Buffer.from('A65959A6', 'hex');      // this is really MSB(32, A) of the AIV, not the KEK
Here I have only MSB(32, A) but not LSB(32, A). How do I get that value, and is there anything I am doing wrong? Please help; I have already spent a lot of time trying to figure this out.
Scenario: let's say my key length is 75, so I have to pad with 5 more bytes to reach a multiple of 8. How do I generate LSB(32, A) in this case?
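My best guess from re-reading section 3 of the RFC is that the MLI is simply the length in octets of the key before padding, written as a 32-bit big-endian integer, so for the 75-byte scenario LSB(32, A) would be 0x0000004B. A minimal sketch of that guess (the random key and the variable names here are placeholders, not my real code):
const crypto = require('crypto');

const keyData = crypto.randomBytes(75);      // placeholder for the real key material

// MSB(32, A): the fixed prefix from the RFC
const msb = Buffer.from('A65959A6', 'hex');

// LSB(32, A): the MLI, i.e. the unpadded key length as a 32-bit big-endian integer
const mli = Buffer.alloc(4);
mli.writeUInt32BE(keyData.length, 0);        // 75 -> 00 00 00 4b

const aiv = Buffer.concat([msb, mli]);       // a65959a60000004b

// zero-pad the key itself up to the next multiple of 8 octets (75 -> 80)
const padLen = (8 - (keyData.length % 8)) % 8;
const padded = Buffer.concat([keyData, Buffer.alloc(padLen)]);

console.log(aiv.toString('hex'), padded.length);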

Related

What is the difference between a Non-base58 character and an invalid checksum private WIF key?

I am just playing around with a NodeJS dependency called CoinKey
Here is the question:
I randomly generate WIF keys with this function:
case 'crc':
    let randomChars = 'cbldfganhijkmopqwesztuvxyr0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    let generatedPrivateKey = ''
    for (let i = 0; i < 50; i++)
        generatedPrivateKey += randomChars.charAt(Math.floor(Math.random() * randomChars.length))
    return prefix + generatedPrivateKey
I have here 2 real examples:
1 = LSY3MrnXcjnW5QU2ymwZn6UYu412jr577U9AnR9akwTg2va7h9B -> Invalid checksum
2 = L2l37v6EPDN423O02BqV02T2PK1A6vnjBO1HRmgsZ6384LNdSCj -> Non-base58 character
I call the function CoinKey.fromWif(privateKey) with of course one of the 2 private keys above. But why does key 1 give me the error Invalid checksum and key 2 gives me the error Non-base58 character?
I am just a simple developer; I don't have any knowledge about encryption, etc. The only thing I know is that I am trying to generate a WIF key, and that a WIF key is a shorter encoding of a larger private key. And yes, I also know that it's practically impossible to brute-force such a big private key, but as I said, I am just playing around.
The Base58 character set only contains 58 characters. a-z, A-Z and 0-9 make 62, so four of them (0, O, I and l) are not valid. Case 2 apparently contains one or more of the invalid characters.
And each key has a checksum. So if all the characters are valid the checksum is checked. Looks like your first case has all valid characters, purely by coincidence, but not the correct checksum.
It's pretty nonstandard and a bit inefficient; base 64 is commonly used.
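To make the two-stage check concrete, here is a rough sketch (not CoinKey's actual code) of what a Base58Check decoder does. Leading-'1' handling is omitted for brevity, since mainnet WIF keys start with 5, K or L:
const crypto = require('crypto');

// Bitcoin's Base58 alphabet: 0 (zero), O, I and l are deliberately left out.
const BASE58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

function base58Decode(str) {
  let n = 0n;
  for (const c of str) {
    const idx = BASE58.indexOf(c);
    if (idx === -1) throw new Error('Non-base58 character');   // your key 2 fails here
    n = n * 58n + BigInt(idx);
  }
  let hex = n.toString(16);
  if (hex.length % 2) hex = '0' + hex;
  return Buffer.from(hex, 'hex');
}

function decodeWif(wif) {
  const decoded = base58Decode(wif);
  const payload = decoded.subarray(0, decoded.length - 4);
  const checksum = decoded.subarray(decoded.length - 4);
  const hash = crypto.createHash('sha256')
    .update(crypto.createHash('sha256').update(payload).digest())
    .digest();
  if (!hash.subarray(0, 4).equals(checksum)) {
    throw new Error('Invalid checksum');                        // your key 1 fails here
  }
  return payload;   // version byte + private key bytes (+ 0x01 flag if compressed)
}
A randomly generated 50-character string will essentially always fail one of those two checks, which is exactly what you are seeing.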

Node.JS AES decryption truncates initial result

I'm attempting to replicate some python encryption code in node.js using the built in crypto library. To test, I'm encrypting the data using the existing python script and then attempting to decrypt using node.js.
I have everything working except for one problem, doing the decryption results in a truncated initial decrypted result unless I grab extra data, which then results in a truncated final result.
I'm very new to the security side of things, so I apologize in advance if my vernacular is off.
Python encryption logic:
encryptor = AES.new(key, AES.MODE_CBC, IV)
# header logic, like including digest, salt, and IV
for rec in vect:
    chunk = rec.pack()  # just adds disparate pieces of data into a contiguous bytearray of length 176
    encChunk = encryptor.encrypt(chunk)
    outfile.write(encChunk)
Node decryption logic:
let offset = 0;
let derivedKey = crypto.pbkdf2Sync(secret, salt, iterations, 32, 'sha256');
let decryptor = crypto.createDecipheriv('aes-256-cbc', derivedKey, iv);
let chunk = data.slice(offset, offset + RECORD_LEN);
while (chunk.length > 0) {
    let clearChunk = decryptor.update(chunk);
    // unpack clearChunk and do something with that data
    offset += RECORD_LEN;
    chunk = data.slice(offset, offset + RECORD_LEN);
}
I would expect my initial result to print something like this to hex:
54722e34d8b2bf158db6b533e315764f87a07bbfbcf1bd6df0529e56b6a6ae0f123412341234123400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
And it gets close, except it cuts off the final 16 bytes (in the example above, the final 32 "0"s would be missing). This shifts all following decryptions by those 16 bytes, meaning those 32 "0"s are added to the front of the next decrypted chunk.
If I add 16 bytes to the initial chunk size (meaning I actually grab more data, not just shift the offset) then this solves everything on the front end, but results in the final chunk losing its last 16 bytes of data.
One thing that seems weird to me: the initial chunk has a length of 176, but the decrypted result has a length of 160. All other chunks have lengths of 176 before and after decryption. I'm assuming I'm doing something wrong with how I'm initializing the decryptor, which is causing it to expect an extra 16 bytes of data at the beginning, but I can't for the life of me figure out what.
I must be close since the decrypted data is correct, minus the mystery shifting, even when reading in large amounts of data. Just need to figure out this final step.
Short version based on your updated code: if you are absolutely certain that every block will be 176 bytes (i.e. a multiple of 16), then you can add decryptor.setAutoPadding(false) to your Node code. If that's not true, or for more about why, read on.
At the end of your decryption, you need to call decryptor.final() to get the final block.
If you have all the data together, you can decrypt it in one call:
let clearChunk = Buffer.concat([decryptor.update(chunk), decryptor.final()]);
update() exists so that you can pass data to the decryptor in chunks. For example, if you had a very large file, you may not want a full copy of the encrypted data plus a full copy of the decrypted data in memory at the same time. You can therefore read encrypted data from the file in chunks, pass it to update(), and write out the decrypted data in chunks.
The input data in CBC mode must be a multiple of 16 bytes long. To ensure this, we typically use PKCS7 padding. That pads out your input data to a multiple of 16; if it's already a multiple of 16, it adds an extra block of 16 bytes. The padding value is the number of padding bytes. So if your block is 12 bytes long, it will be padded with four bytes of 0x04. If it's already a multiple of 16, the padding is 16 bytes of 0x10. This scheme lets the decryptor validate that it's removing the right amount of padding. This is likely what's causing your 176/160 issue.
This padding issue is why there's a final() call. The system needs to know which block is the last block so it can remove the padding. So the first call to update() will always return one block fewer than you pass in, since it holds onto the last block until it knows whether it's the final one.
Looking at your Python code, I think it's not padding at all (most Python libraries I'm familiar with don't pad automatically). As long as the input is certain to be a multiple of 16, that's OK. But the default for Node is to expect padding. If you know that your size will always be a multiple of 16, then you can change the Node side with decryptor.setAutoPadding(false). If you don't know for certain that the input size will always be a multiple of 16, then you need to add a pad() call on the Python side for the final block.
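Putting that together with your loop, here is a minimal sketch of the no-padding variant, assuming (as above) that every record really is a whole number of 16-byte blocks and that derivedKey/iv/data are the same values as in your question:
const crypto = require('crypto');
const RECORD_LEN = 176;

function decryptRecords(data, derivedKey, iv) {
  const decryptor = crypto.createDecipheriv('aes-256-cbc', derivedKey, iv);
  decryptor.setAutoPadding(false);               // input has no PKCS7 padding to strip

  const clearChunks = [];
  for (let offset = 0; offset < data.length; offset += RECORD_LEN) {
    const chunk = data.subarray(offset, offset + RECORD_LEN);
    clearChunks.push(decryptor.update(chunk));   // nothing is held back, so each record comes out in full
  }
  clearChunks.push(decryptor.final());           // empty here, but throws if a partial block remains
  return Buffer.concat(clearChunks);
}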

Advice for decoding binary/hex WAV file metadata - Pro Tools UMID chunk

Pro Tools (AVID's DAW software) has a process for managing and linking to all of its unique media using a Unique ID field, which gets embedded into the WAV file in the form of a umid metadata chunk. Examining a particular file inside Pro Tools, I can see that the file's Unique ID comes in the form of an 11-character string, looking like: rS9ipS!x6Tf.
When I examine the raw data inside the WAV file, I find a 32-byte block of data - 4 bytes for the chars 'umid'; 4 bytes for the size of the following data block - 24; then the 24-byte data block, which, when examined in Hex Fiend, looks like this:
00000000 0000002A 5B7A5FFB 0F23DB11 00000000 00000000
As you can see, there are only 9 bytes that contain any non-zero information, but this is somehow being used to store the 11-character Unique ID field. It looks to me as if something is being done to interpret this raw data to retrieve that Unique ID string, but all my attempts to decode the raw data have been fruitless. I have tried using https://gchq.github.io/CyberChef/ to run it through all the different formats that would make sense, but nothing is pointing me in the right direction. I have also tried looking at the data in 6-bit increments to see if it's being compressed in some way (9 bytes * 8 bits == 72 == 12 blocks * 6 bits) but have not had any luck stumbling on a pattern yet.
So I'm wondering if anyone has any specific tips/tricks/suggestions on how best to figure out what might be happening here - how to unpack this data in such a way that I can end up with enough information to generate those 11 characters, which I'm guessing are most likely UTF-8.
Any and all help/suggestions welcome! Thanks.
It seems to be a base64-style encoding, only with a slightly different character map. Here is my Python implementation that I find best matches Pro Tools.
char_map = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789#!"

def encode_unique_id(uint64_value):
    # unique id is a uint64_t, clamp
    value = uint64_value & 0xFFFFFFFFFFFFFFFF
    if value == 0:
        return ""
    # calculate the minimum number of bytes needed to store the value
    byte_length = 0
    tmp = value
    while tmp:
        tmp = tmp >> 8
        byte_length += 1
    # calculate the number of chars needed to store the encoding
    char_total, remainder = divmod(byte_length * 8, 6)
    if remainder:
        char_total += 1
    s = ""
    for i in range(char_total):
        value, index = divmod(value, 64)
        s += char_map[index]
    return s
Running encode_unique_id(0x2A5B7A5FFB0F23DB11) should give you rS9ipS!x6Tf
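In case it is useful to anyone working in Node rather than Python, here is a sketch of the same mapping using BigInt (the value is too large for a plain JS number); it reproduces the example value above:
const CHAR_MAP = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789#!';

function encodeUniqueId(value) {
  value &= 0xFFFFFFFFFFFFFFFFn;                  // clamp to uint64, like the Python version
  if (value === 0n) return '';
  const byteLength = Math.ceil(value.toString(2).length / 8);
  const charTotal = Math.ceil((byteLength * 8) / 6);
  let s = '';
  for (let i = 0; i < charTotal; i++) {
    s += CHAR_MAP[Number(value % 64n)];          // least-significant 6 bits first
    value /= 64n;
  }
  return s;
}

console.log(encodeUniqueId(0x2A5B7A5FFB0F23DB11n));   // rS9ipS!x6Tf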

How to convert a padBytes function to ColdFusion

I have the following code in node and I am trying to convert to ColdFusion:
// a correct implementation of PKCS7. The rijndael js has a PKCS7 padding already implemented
// however, it incorrectly pads expecting the phrase to be multiples of 32 bytes when it should pad based on multiples
// 16 bytes. Also, every javascript implementation of PKCS7 assumes utf-8 char encoding. C# however is unicode or utf-16.
// This means that chars need to be treated in our code as 2 byte chars and not 1 byte chars.
function padBytes(string){
    const strArray = [...new Buffer(string, 'ucs2')];
    const need = 16 - (strArray.length % 16);
    for (let i = 0; i < need; i++) {
        strArray.push(need);
    }
    return Buffer.from(strArray);
}
I'm trying to understand exactly what this function is doing to convert it. As I think I understand it, it's converting the string to UTF-16 (UCS2) and then adding padding to each character. However, I don't understand why the need variable is the value it is, nor how exactly to achieve that in CF.
I also don't understand why it's only pushing the same value into the array over and over again. For starters, in my example script the string is 2018-06-14T15:44:10Z testaccount. The string array length is 64. I'm not sure how to achieve even that in CF.
I've tried character encoding, converting to binary, converting to UTF-16, and so on, but I just don't understand the JS function well enough to replicate it in ColdFusion. I feel I'm missing something with the encoding.
EDIT:
The selected answer solves this problem, but because I was eventually trying to use the input data for encryption, the easier method was to not use this function at all but do the following:
<cfset stringToEncrypt = charsetDecode(input,"utf-16le") />
<cfset variables.result = EncryptBinary(stringToEncrypt, theKey, theAlgorithm, theIV) />
Update:
We followed up in chat and it turns out the value is ultimately used with encrypt(). Since encrypt() already handles padding (automatically), there is no need for the custom padBytes() function. However, it did require switching to the less commonly used encryptBinary() function to maintain the UTF-16 encoding. The regular encrypt() function only handles UTF-8, which produces totally different results.
Trycf.com Example:
// Result with sample key/iv: P22lWwtD8pDrNdQGRb2T/w==
result = encrypt("abc", theKey, theAlgorithm, theEncoding, theIV);
// Result with sample key/iv: LJCROj8trkXVq1Q8SQNrbA==
input = charsetDecode("abc", "utf-16le");
result = binaryEncode(encryptBinary(input, theKey, theAlgorithm, theIV), "base64");
it's converting the string to UTF-16 (UCS2) and then adding padding to each character.
... I feel I'm missing something with the encoding.
Yes, the first part seems to be decoding the string as UTF-16 (or UCS2, which is slightly different). As to what you're missing, you're not the only one. I couldn't get it to work either until I found a comment which explained that "UTF-16" prepends a BOM. To omit the BOM, use either "UTF-16BE" or "UTF-16LE" depending on the endianness needed.
why it's only pushing the same value into the array over and over again.
Because that's the definition of PKCS7 padding. Instead of padding with something like nulls or zeroes, it calculates how many bytes of padding are needed, then uses that number as the padding value. For example, say a string needs an extra three bytes of padding: PKCS7 appends the byte value 3 three times, i.e. "string" + "3" + "3" + "3".
The rest of the code is similar in CF. Unfortunately, the results of charsetDecode() aren't mutable. You must build a separate array to hold the padding, then combine the two.
Note: this example combines the arrays using CF2016-specific syntax, but it could also be done with a simple loop instead.
Function:
function padBytes(string text){
    var combined = [];
    var padding = [];
    // decode as utf-16
    var decoded = charsetDecode(arguments.text, "utf-16le");
    // how many padding bytes are needed?
    var need = 16 - (arrayLen(decoded) % 16);
    // fill array with the padding bytes
    for (var i = 0; i < need; i++) {
        padding.append(need);
    }
    // concatenate the two arrays
    // CF2016+ specific syntax. For earlier versions, use a loop
    combined = combined.append(decoded, true);
    combined = combined.append(padding, true);
    return combined;
}
Usage:
result = padBytes("2018-06-14T15:44:10Z testaccount");
writeDump(binaryEncode( javacast("byte[]", result), "base64"));

What does it mean to convert octet strings to nonnegative integers in RSA?

I am trying to implement RSA PKCS #1 based on this spec
http://www.emc.com/emc-plus/rsa-labs/pkcs/files/h11300-wp-pkcs-1v2-2-rsa-cryptography-standard.pdf
However, I am not sure what the purpose of OS2IP is on page 9. Assume my message is the integer 258 and my private key is e. Also assume we don't do any other formatting besides OS2IP.
So I will convert 258 into an octet string and store it in char buf[2] = {0x02, 0x01}. Now, before I compute the exponentiation 258^e, I need to call OS2IP to reverse the byte order and save it as buf_new[2] = {0x01, 0x02}, since 0x0102 = 258.
However, if I initially stored 258 as buf[2] = {0x01, 0x02}, then there is no need to call OS2IP, correct? Or is it the convention that I have to save it as {0x02, 0x01}?
OS2IP converts an octet string into a nonnegative integer by interpreting the octets as a big-endian representation (I2OSP is the inverse operation, writing the integer back out as big-endian octets).
However, if I initially stored 258 as buf[2] = {0x01, 0x02}, then there is no need to call OS2IP, correct?
That is correct. 258 is already encoded in big-endian form, although depending on the length you chose (if it is not 2) you might be missing the leading zeros.
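For instance, a small sketch of OS2IP and its inverse I2OSP (names from the spec), using Node.js BigInts rather than the C-style buffers in your question, shows where those leading zeros come from:
function os2ip(buf) {
  let x = 0n;
  for (const byte of buf) x = (x << 8n) | BigInt(byte);   // first octet is the most significant
  return x;
}

function i2osp(x, len) {
  const out = Buffer.alloc(len);                 // unused leading positions stay 0x00
  for (let i = len - 1; i >= 0; i--) {
    out[i] = Number(x & 0xFFn);
    x >>= 8n;
  }
  if (x !== 0n) throw new Error('integer too large for ' + len + ' octets');
  return out;
}

console.log(os2ip(Buffer.from([0x01, 0x02])));   // 258n
console.log(i2osp(258n, 2));                     // <Buffer 01 02>
console.log(i2osp(258n, 4));                     // <Buffer 00 00 01 02> (note the leading zeros)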
Or is it the convention that I have to save it as {0x02, 0x01}?
I don't understand your question :/
