Node.js Crypto throwing error

The following code gives a bad decryption error:
vaultEngine.AESDecrypt = function (encKey, data) {
    var cipherObject = crypto.createDecipheriv('aes-256-cbc', encKey, "a2xhcgAAAAAAAAAA");
    var Fcontent = cipherObject.update(data, vaultEngine.outputEncoding, vaultEngine.inputEncoding);
    Fcontent += cipherObject.final(vaultEngine.inputEncoding);
    //console.log("Decryption data is:"+Fcontent);
    return Fcontent;
}
Specifically this error:
TypeError: error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt

FIRST OF ALL
I'm concerned that your IV is hard coded directly into your method, which suggests that you're using the same IV for every encryption, which is bad bad bad. The IV should be cryptographically random (unpredictable), and different for every encryption. You can store it with your encrypted text and then pull it back out to use to decrypt, but you should not be using the same IV. If you're making this level of error, it suggests you need to do a lot more research on how to use encryption appropriately so that it actually protects the data you intend to protect. Start here.
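For illustration only, here is a minimal sketch of what "random IV stored alongside the ciphertext" can look like with Node's crypto module. The vaultEngine name is borrowed from the question; the AESEncrypt helper, the prepended-IV layout, and the 32-byte key are assumptions, not the poster's actual code.
var crypto = require('crypto');

// Sketch (assumption): generate a fresh, random 16-byte IV for every
// encryption and prepend it to the ciphertext so decryption can find it.
vaultEngine.AESEncrypt = function (encKey, plaintext) {
    var iv = crypto.randomBytes(16); // unpredictable and different every time
    var cipher = crypto.createCipheriv('aes-256-cbc', encKey, iv);
    var encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
    // The IV is not secret, only unpredictable, so storing it with the data is fine.
    return Buffer.concat([iv, encrypted]).toString('base64');
};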
And now, to address your question directly:
According to the docs, it looks like you've reversed your input and output encoding variables; it should be:
var Fcontent = cipherObject.update(data, vaultEngine.inputEncoding, vaultEngine.outputEncoding);
Fcontent += cipherObject.final(vaultEngine.outputEncoding);
... if that doesn't work, I'd recommend the following changes:
Use the streaming write() and end() methods on your cipherObject instead of the legacy update() and final() methods. The crypto module is considered "unstable" specifically because of the move to Node streams (see here); the legacy methods may remain, but they'd be the first on the chopping block if breaking changes are introduced.
Create a buffer from your data before sending it to be decrypted. This will ensure that you've created your buffer correctly, and will minimize the work required at the decryption stage:
var dataBuffer = new Buffer(data, vaultEngine.inputEncoding);
cipherObject.write(dataBuffer);
cipherObject.end();
return cipherObject.read().toString(vaultEngine.outputEncoding);
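Putting those pieces together, a full decrypt helper might look like the sketch below. It follows the write()/end()/read() pattern from this answer and assumes the layout suggested earlier (a 16-byte IV prepended to base64 ciphertext, and a 32-byte key); it is not the poster's original design.
vaultEngine.AESDecrypt = function (encKey, data) {
    // Assumes data is base64: a 16-byte IV followed by the ciphertext.
    var raw = new Buffer(data, 'base64');
    var iv = raw.slice(0, 16);
    var ciphertext = raw.slice(16);
    var decipher = crypto.createDecipheriv('aes-256-cbc', encKey, iv);
    decipher.write(ciphertext);
    decipher.end();
    return decipher.read().toString('utf8');
};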

Related

Docusign: Verify HMAC key from header response with the secret key

I am working with DocuSign Connect and am planning to use HMAC keys to authenticate the messages. I am referring to the https://developers.docusign.com/esign-rest-api/guides/connect-hmac#example-hmac-workflow guide.
I find a few terms in the documentation confusing. Here is the Python code snippet from the doc:
def ComputeHash(secret, payload):
    import hmac
    import hashlib
    import base64
    hashBytes = hmac.new(secret, msg=payload, digestmod=hashlib.sha256).digest()
    base64Hash = base64.b64encode(hashBytes)
    return base64Hash

def HashIsValid(secret, payload, verify):
    return verify == ComputeHash(secret, payload)
Can you explain what payload (I didn't understand exactly what it is), secret (I am guessing the secret key), and verify mean in the above code, and how do I verify my secret key against the X-Docusign-Signature-1 value I get from the header?
My code:
message = request.headers
hashBytes = hmac.new(secret_key.encode('utf-8'), msg=message.encode('utf-8'), digestmod=hashlib.sha256).hexdigest()
base64Hash = base64.b64encode(hashBytes)
[Edited]
I found the solution on my own. Please read the first answer; I have explained it in detail.
Sorry for the confusion.
Payload is "The entire body of the POST request is used, including line endings."
This is what you are hashing with the HMAC function.
An HMAC-SHA256 digest takes an array of bytes (the payload) and a secret (a key) and produces a keyed digest of the payload that can later be verified.
I highly recommend you first make sure you understand how the Connect webhook works without HMAC. This feature is meant to secure your application and it adds complexity. If you first get it working without it, you'll get a better grasp of what's going on (and feel a bit better about accomplishing a subtask).
Once you have it working, you can add HMAC to make it secure, and it will be easier at that point.
I found the solution to my problem.
expected_signature = request.headers['X-Docusign-Signature-1']
message = request.data  # It is already bytes. No need to encode it again.
# Use digest() (raw bytes), as in the doc's ComputeHash, then base64-encode it.
hashBytes = hmac.new(secret_key.encode('utf-8'), msg=message, digestmod=hashlib.sha256).digest()
actual_signature = base64.b64encode(hashBytes)
is_valid = hmac.compare_digest(actual_signature.decode('utf-8'), expected_signature)

How to save records with an Asset field using server-to-server cloudkit.js

I want to use server-to-server CloudKit JS to save a record with an Asset field.
The Asset field holds an m4a audio file. After saving, the downloaded audio file is corrupted and won't play.
Apple's docs are not clear about the Asset field:
In a record that is being saved to the database, the value of an Asset field must be a window.Blob type. In the code fragment above, the type of the assetFile variable is window.File.
Docs:
https://developer.apple.com/documentation/cloudkitjs/cloudkit/database/1628735-saverecords
But in Node.js there is no Blob or File, so I filled the field with a Buffer, like this:
var dstFile = path.join(__dirname, "../test.m4a");
var data = fs.readFileSync(dstFile);
let buffer = Buffer.from(data);
var rec = {
    recordType: "MyAttachment",
    fields: {
        ext: { value: ".m4a" },
        file: { value: buffer }
    }
}
//console.debug(rec);
mydatabase.newRecordsBatch().create(rec).commit().then(function (response) {
    if (response.hasErrors) {
        console.log(">>> saveAttachFile record failed");
        console.warn(response.errors[0]);
    } else {
        var createdRecord = response.records[0];
        console.log(">>> saveAttachFile record success:", createdRecord);
    }
});
The record is saved successfully, but when I download the audio from icloud.developer.apple.com/dashboard, the audio file is corrupted and won't play.
What's wrong with it? Thank you for any reply.
I was having the same problem and have found a working solution!
Remembering that CloudKitJS needs you to define your own fetch method, I implemented a custom one to see what was going on. I then attached a debugger on the custom fetch to inspect the data that was passing through it.
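As a rough illustration of that debugging step, a custom fetch can simply delegate to node-fetch and log the outgoing body. The debugFetch name is made up here, and how you hand the function to CloudKitJS depends on your configuration; this is a sketch, not the answerer's actual code.
const nodeFetch = require('node-fetch');

// Hypothetical debugging wrapper: log what CloudKitJS is about to send,
// then delegate to the real fetch implementation.
function debugFetch(url, options) {
    if (options && options.body) {
        console.log('CloudKitJS request body type:', options.body.constructor.name);
    }
    return nodeFetch(url, options);
}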
After stepping through the caller, I found that all asset values are transformed using their toString() method, but only when the library is running in Node.js. This is determined by the absence of the global window object.
When toString() is called on a Buffer, its contents are encoded to UTF-8 (by default), which causes binary assets to become malformed. If you're using node-fetch for your fetch implementation, it supports Buffer and stream.Readable, so this toString() call does nothing but harm.
The most unobtrusive fix I've found is to swap the toString() method on any Buffer or stream.Readable instances passed as asset field values. You should probably use stream.Readable, by the way, so that you don't load the entire asset into memory when uploading.
Anyway, here's what it looks like in practice:
// Put this somewhere in your implementation
const swizzleBuffer = (buffer) => {
    buffer.toString = () => buffer;
    return buffer;
};
// Use this asset value instead
{ asset: swizzleBuffer(fs.readFileSync(path)) }
Please be aware that this workaround mutates a Buffer in an ugly way (since Buffer apparently can't be extended). It's probably a good idea to design an API that doesn't take Buffer arguments, so that you only mutate instances you create yourself and avoid unintended side effects anywhere else in your code.
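If you take the stream.Readable route suggested above, the same trick can presumably be applied to the stream itself. This is an assumption based on node-fetch accepting stream.Readable bodies, not something shown in the original answer.
const fs = require('fs');

// Hypothetical variant: hand CloudKitJS a readable stream whose toString()
// returns the stream itself, so the library's toString() call becomes a no-op.
const swizzleStream = (stream) => {
    stream.toString = () => stream;
    return stream;
};

// Assumed usage:
// { asset: swizzleStream(fs.createReadStream(path)) }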
Also, be sure to vendor (make a local copy of) CloudKitJS in your project, as this behavior may change in the future.
ORIGINAL ANSWER
I ran into the same problem and solved it by encoding my data using Base64. It appears that there's a bug in their SDK which mangles Buffer instances containing non-ascii characters (which, um, seems problematic).
Anyway, try something like this:
const assetField = { value: Buffer.from(data.toString('base64'), 'ascii') }
Side note:
You'll need to decode the asset(s) on the device before using them. There's no way to do this efficiently without writing your own routines, as the methods included on Data / NSData instances require all data to be in memory.
This is a problem with CloudKitJS (and not the native CloudKit client / service), so the other option is to write your own routine to upload assets.
Neither of these options seems particularly great, but rolling your own at least means there aren't extra steps for clients to take in order to use the asset.

AES Encryption in Nodejs does not work as it works in PHP

NODE.JS CODE (DOES NOT WORK AS EXPECTED)
var crypto = require('crypto');
var input = '200904281000001|DOM|IND|INR|10|orderno_unique1|others|http://localhost/sample/Success.php|http://localhost/sample/failure.php|TOML';
var Key = "qcAHa6tt8s0l5NN7UWPVAQ==";
Key = new Buffer(Key || '', 'base64');
var cipher = crypto.createCipher('aes128', Key);
var actual = cipher.update(input, "utf8", "base64");
actual += cipher.final("base64");
console.log(actual);
Actual Output
bIK4D0hv2jcKP3eikoaM7ddqRee+RrT2FDOZA+c2sldyrqP+NrmgYOEXklUrSBQiU7w7e90nzFl/mpidy/Q8FD692bFLnESiNqGEQ7er44BXxFtNo6AKvpuohz31zm9JupJXL3jhOC+47mvDHokR4b9euDzPFitTJQW55JuSyvJphOKdiXjH+lGKxXKWsODq
Expected Output
ncJ+HX6zIdrUfEedi7YC82QOUARkySblivzysFbMqaYEMPj7UfMlE4SEkDcjg+D9dE5StGJgebSOkL7UuR6fXwodcgL0CSRds0Y+hX27gKUZK45b7Tc0EjXhepwHJ/olSdWUCkwRcZcv+wxtYzOH7+KKijJabJkU1/SF1ugExzcnqfV2wOZ9q79a4y/g3cb5
PHP CODE (WORKS AS EXPECTED)
include('CryptAES.php');
//Testing key
$Key = "qcAHa6tt8s0l5NN7UWPVAQ==";
//requestparam Testing - TOML
$input ="200904281000001|DOM|IND|INR|10|unique_10005|others|http://www.yourdomain.com/paymentSuccess.php|http://www.yourdomain.com/paymentFailure.php|TOML";
$aes = new CryptAES();
$aes->set_key(base64_decode($Key));
$aes->require_pkcs5();
echo $aes->encrypt($input);
There is a known issue with using PHP's built-in mcrypt library: it pads the key in a different manner than Node.js does. This issue was bugging me a lot a few months ago, and there is a workaround here. What I did was use a small PHP command-line script alongside my Node.js app to handle encryption and decryption.
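One Node-side detail worth checking (my observation, not something confirmed by the question): crypto.createCipher() treats its second argument as a password and derives the key and IV from it, whereas the PHP code base64-decodes the key and presumably uses the raw 16 bytes directly. Below is a sketch of the raw-key route with createCipheriv(); the all-zero IV and the padding are guesses about what the unknown CryptAES class does, not facts from the question.
var crypto = require('crypto');

var input = '200904281000001|DOM|IND|INR|10|orderno_unique1|others|' +
    'http://localhost/sample/Success.php|http://localhost/sample/failure.php|TOML';
var key = new Buffer('qcAHa6tt8s0l5NN7UWPVAQ==', 'base64'); // 16 raw bytes -> AES-128

// Guess: CBC mode with an all-zero IV; Node's default PKCS#7 padding
// matches what require_pkcs5() in the PHP code suggests.
var iv = new Buffer(16);
iv.fill(0);

var cipher = crypto.createCipheriv('aes-128-cbc', key, iv);
var encrypted = cipher.update(input, 'utf8', 'base64');
encrypted += cipher.final('base64');
console.log(encrypted);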
First of all, your two input strings are different. On Node.js, you are using
200904281000001|DOM|IND|INR|10|orderno_unique1|others|
http://localhost/sample/Success.php|
http://localhost/sample/failure.php|TOML
whereas on PHP it's:
200904281000001|DOM|IND|INR|10|unique_10005|others|
http://www.yourdomain.com/paymentSuccess.php|
http://www.yourdomain.com/paymentFailure.php|TOML
They differ in the domain as well as in the second-to-last field of the first line. Then, you do not explain where your CryptAES class in PHP comes from. As you do not specify any parameters, one can only guess that it's AES with a 128-bit key. What are its defaults? What kind of padding is used? CBC mode or ECB mode? …?
Questions upon questions that cannot be answered for now.
For testing, I wrote a small Node.js script that takes both of your input strings, and tries all ciphers available in Node.js (you can get them using crypto.getCiphers()) combined with all possible encodings Node.js supports (i.e., utf8, ascii, utf16le and ucs2). None of them resulted in the string you gave as expected.
So, although this is not a real answer, I hope that this helps you anyway, at least a little step into the right direction.
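For reference, the brute-force test described above can be sketched roughly like this; it is a reconstruction of the idea, not the answerer's actual script.
var crypto = require('crypto');

// Key and input taken from the PHP side of the question.
var key = new Buffer('qcAHa6tt8s0l5NN7UWPVAQ==', 'base64');
var input = '200904281000001|DOM|IND|INR|10|unique_10005|others|' +
    'http://www.yourdomain.com/paymentSuccess.php|' +
    'http://www.yourdomain.com/paymentFailure.php|TOML';
var expected = '...'; // paste the expected output from the question here

['utf8', 'ascii', 'utf16le', 'ucs2'].forEach(function (encoding) {
    crypto.getCiphers().forEach(function (name) {
        try {
            var cipher = crypto.createCipher(name, key);
            var out = cipher.update(input, encoding, 'base64') + cipher.final('base64');
            if (out === expected) {
                console.log('Match found:', name, encoding);
            }
        } catch (e) {
            // some ciphers reject this key/input combination; skip them
        }
    });
});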

NodeJS "crypto" hash seems to produce different output than Crypto-JS javascript library

I am using NodeJS's bundled crypto module for SHA256 hashing on the server-side.
On the client-side, I am using a javascript library called Crypto-JS.
I am using SHA256 hashes for a login system that uses classical nonce-based authentication. However, my server-side and client-side hash-digests don't match up even when the hash-messages are the same (I have checked this). Even the length of the hash-digests are different.
This is a snippet of my client-side implementation:
var password_hash = CryptoJS.SHA256( token.nonce /*this is the server's nonce*/ + cnonce + password ).toString(CryptoJS.enc.Base64);
This is a snippet of my server-side implementation:
var sha256 = CRYPTO.createHash("sha256");
sha256.update(snonce+cnonce+password, "utf-8");
var hash = sha256.digest("base64");
This is some sample data:
client-digest: d30ab96e65d09543d7b97d7cad6b6cf65f852f5dd62c256595a7540c3597eec4
server-digest: vZaCi0mCDufqFUwVO40CtKIW7GS4h+XUhTUWxVhu0HQ=
client-message: O1xxQAi2Y7RVHCgXoX8+AmWlftjSfsrA/yFxMaGCi38ZPWbUZBhkVDc5eadCHszzbcOdgdEZ6be+AZBsWst+Zw==b3f23812448e7e8876e35a291d633861713321fe15b18c71f0d54abb899005c9princeofnigeria
server-message: O1xxQAi2Y7RVHCgXoX8+AmWlftjSfsrA/yFxMaGCi38ZPWbUZBhkVDc5eadCHszzbcOdgdEZ6be+AZBsWst+Zw==b3f23812448e7e8876e35a291d633861713321fe15b18c71f0d54abb899005c9princeofnigeria
Does anyone know why the hashes are different? I thought that if it is the same protocol/algorithm, it will always produce the same hash.
Edit: Wow. I went to this online hashing tool and it produces yet another digest for the same message:
4509a6d5028b217585adf41e7d49f0e7c1629c59c29ce98ef7fbb96c6f27502c
Edit 2: On second thought, the reason the online hashing tool gives a different result is probably that it uses hex encoding while I used base64.
The problem was indeed with encodings.
Look at the client-side implementation:
var password_hash = CryptoJS.SHA256(message).toString(CryptoJS.enc.Base64);
The CryptoJS.enc.Base64 parameter actually requires another component in the CryptoJS library that I did not include (stored in a js file: enc-base64-min.js). So, in the absence of a valid encoding type, it defaulted to hex.
Thanks @dhj for pointing out the encoding issue!
The problem is that your client produces hex-encoded digest, while server uses base64 encoding.
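To make the two sides comparable, either include CryptoJS's Base64 encoder component (enc-base64-min.js) on the client, or ask Node for a hex digest. A small sketch of the latter, reusing the variable names from the snippets above:
var crypto = require('crypto');

// Server side: emit a hex digest so it matches CryptoJS's default hex output.
var sha256 = crypto.createHash('sha256');
sha256.update(snonce + cnonce + password, 'utf8');
var hash = sha256.digest('hex'); // now comparable to the client digest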

How to resolve an InvalidMd5 error returned from the Windows Azure Blob Storage service?

I am building an application that needs to allow users to upload large images (up to about 100 MB) to the Windows Azure Blob Storage service. Having read Rob Gillen's excellent article on file upload optimization for Windows Azure, I borrowed his approach for doing parallel upload of file chunks, using the CloudBlockBlob.PutBlock() method within a Parallel.For loop (code is available here).
The problem I have is that whenever I try to upload a file I get an "InvalidMd5" exception from the storage client. Suspecting that the problem may be in the development storage, I also tried running the code against my live Azure storage account, but I got the same error. Looking at the traffic with Fiddler I see that the "Content-MD5" header is set to a valid MD5 hash. The description of the error says that "The MD5 value specified in the request is invalid. The MD5 value must be 128 bits and Base64-encoded.", but to the best of my knowledge the value I see being sent in Fiddler is valid (e.g. a91c588092cedbdb1b82c2d3786fd509).
Here is the code I use for calculating the hash (courtesy of Rob Gillen):
public static string GetMD5HashFromStream(byte[] data)
{
    MD5 md5 = new MD5CryptoServiceProvider();
    byte[] retVal = md5.ComputeHash(data);
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < retVal.Length; i++)
    {
        sb.Append(retVal[i].ToString("x2"));
    }
    return sb.ToString();
}
And this is the actual call to PutBlock():
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), blockHash, options);
I also tried passing the hash like so:
Convert.ToBase64String(Encoding.UTF8.GetBytes(blockHash))
but the result was the same - "InvalidMd5" error :(
The MD5 hash being passed to PutBlock() with base64 encoding (e.g. YTkxYzU4ODA5MmNlZGJkYjFiODJjMmQzNzg2ZmQ1MDk=) and without it (e.g. a91c588092cedbdb1b82c2d3786fd509) doesn't seem to make a difference.
Rob's code obviously worked for him and I really have no idea what may be causing the problem in my case. The only change I've made to Rob's code is to alter the ParallelUpload() extension method to take a Stream instead of a file name and to dynamically determine the block size depending on the size of the file being uploaded.
Please, if anyone has an idea how to solve this problem, let me know! I will be really grateful! I already lost two days struggling with this.
Rob, thank you for offering to help and pointing out the difference in the MD5 hashes. Your answer got me thinking in the right direction. I spent another whole day digging into this but luckily (and thanks to your remark :)) I finally managed to resolve the problem. It turned out there were actually two issues in my case:
1) The MD5 hash: I noticed the hash you pasted in your answer is shorter than the one I was getting, but it took me a while to see that yours was exactly half the length. After some experimentation I found out that the GetMD5HashFromStream() method from your test application converts the 16-byte hash generated by the MD5CryptoServiceProvider to a 32-character hex string. And it was this 32-character string that was causing the problem, because it was base64-encoded and passed to the PutBlock() method, hence the twice-as-long and thus invalid hash that the blob storage service was complaining about. Here is the code I ended up with:
Original:
public static string GetMD5HashFromStream(byte[] data)
{
    MD5 md5 = new MD5CryptoServiceProvider();
    byte[] retVal = md5.ComputeHash(data);
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < retVal.Length; i++)
    {
        sb.Append(retVal[i].ToString("x2"));
    }
    return sb.ToString();
}
and the call to PutBlock():
// calculate the block-level hash
string blockHash = Helpers.GetMD5HashFromStream(buff);
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), blockHash, options);
Final:
MD5 md5 = new MD5CryptoServiceProvider();
byte[] blockHash = md5.ComputeHash(buff);
string convertedHash = Convert.ToBase64String(blockHash, 0, 16);
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), convertedHash, options);
Rob, I'm really curious how your code worked in your case and why it didn't in mine - is it something specific to the setup on my machine, or perhaps a differing version of the Azure tools (I'm using v1.2)... Please let me know if you have any idea.
2) A bug in the development storage: lots of combing through the web led me to this page that mentions an obscure but apparently known bug in the development storage:
If two requests attempt to upload a block to a blob that does not yet exist in development storage, one request will create the blob, and the other may return status code 409 (Conflict), with storage services error code BlobAlreadyExists.
Here is what I came up with to work around it:
public static bool IsDevelopmentStorageRunning()
{
    return new Microsoft.ServiceHosting.Tools.DevelopmentStorage.DevStore().IsRunning();
}
You will need to add a reference to Microsoft.ServiceHosting.Tools.dll, which was located in "C:\Program Files\Windows Azure SDK\v1.2\bin" on my machine. Then, I use this method before the Parallel.For loop that processes the file chunks as follows:
bool isDevStorageRunning = StorageProxy.IsDevelopmentStorageRunning();
ParallelOptions parallelOptions = new ParallelOptions();
parallelOptions.MaxDegreeOfParallelism = isDevStorageRunning ? 1 : 4;
Parallel.For(0, transferDetails.Length, parallelOptions, j => { ... });
I hope this will save someone all the hassles I went through. Rob, thank you once again for helping out :)
tishon,
After seeing this post, I went back and re-tested my code, and I'm thinking that there is a problem with the data being passed (possibly what you are passing into the function?).
One thing that jumped out at me immediately was the md5 hash you provided... in every case I've tested, my md5 hashes end with two equals signs like the following (captured from fiddler):
Content-MD5: D1Mxthoqhlwm9cC0729mWA==
I'm not a crypto expert, but I know from working with the block IDs for block blobs that if you have invalid/unsafe characters in your blob ID prior to converting it to a base64-encoded value, you'll get invalid data and block IDs that Azure can't interpret.
