Content Security Policy (CSP) nonce: how long or complex should a nonce be?

I have a site which uses a nonce. Everything works well, but how long or complex should the nonce be?
My little nonce maker is just this:
let generateNonce = (length = 32) => {
    const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
    let nonce = '';
    for (let i = 0; i < length; i++)
        nonce += chars.charAt(Math.floor(Math.random() * chars.length));
    return nonce;
};
A call to generateNonce() returns something like hERnT30lr0G3Hw4b5eQCjuC423a3PcBl: 32 characters drawn from digits and lower- and upper-case letters. Is this complex enough, or even too long?

See the CSP spec section at https://w3c.github.io/webappsec-csp/#security-nonces, which says:
[nonce values] should be at least 128 bits long (before encoding), and should be generated via a cryptographically secure random number generator
The question's hERnT30lr0G3Hw4b5eQCjuC423a3PcBl value is more than 128 bits (32 characters from a 62-character alphabet encode roughly 190 bits), so the length is fine.
But Math.random() isn't cryptographically secure (see https://stackoverflow.com/a/5651854/441757 and https://security.stackexchange.com/q/181580/86150). Use Crypto.getRandomValues instead.
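In the browser, that could look roughly like this (a minimal sketch, assuming a Web Crypto-capable environment; the helper name is mine):

// minimal sketch: 128-bit nonce from the Web Crypto API
function generateNonce() {
    const bytes = new Uint8Array(16);   // 16 bytes = 128 bits
    crypto.getRandomValues(bytes);      // cryptographically secure RNG
    // hex-encode; base64 would also work and is shorter
    return Array.from(bytes, b => b.toString(16).padStart(2, '0')).join('');
}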

With the help of #sideshowbarker, the nonce generator could look like this (Node.js):
// require Node's native crypto module
const crypto = require('crypto');
// create a 128-bit nonce synchronously
const nonce = crypto.randomBytes(16).toString('hex');
Output: 1e31b6130c5be9ef4cbab7eb38df5491
From the Node.js documentation for crypto.randomBytes(size[, callback]):
Generates cryptographically strong pseudo-random data. The size argument is a number indicating the number of bytes to generate.
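If a shorter attribute value is preferred, the same 16 random bytes can be base64-encoded instead of hex-encoded (a sketch; Buffer's 'base64url' encoding is available on recent Node versions, otherwise plain 'base64' works too):

const crypto = require('crypto');
// 128 bits of CSPRNG output, base64url-encoded: 22 characters instead of 32 hex characters
const nonceB64 = crypto.randomBytes(16).toString('base64url');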

Related

Why are secp256k1 privateKeys not always 32 bytes in nodejs?

I was generating a lot of secp256k1 keys using Node's crypto module when I ran into a problem: some generated private keys were not 32 bytes in length. I wrote a test script, and it clearly shows that this happens quite often.
What is the reason for that and is there a fix or do I have to check for length and then regenerate until I get 32 bytes?
This is the test script for reproducing the issue:
const { createECDH, ECDH } = require("crypto");
const privateLens = {};
const publicLens = {};
for (let i = 0; i < 10000; i++) {
    const ecdh = createECDH("secp256k1");
    ecdh.generateKeys();
    const privateKey = ecdh.getPrivateKey("hex");
    const publicKey = ecdh.getPublicKey("hex");
    privateLens[privateKey.length + ""] = (privateLens[privateKey.length + ""] || 0) + 1;
    publicLens[publicKey.length + ""] = (publicLens[publicKey.length + ""] || 0) + 1;
}
console.log(privateLens);
console.log(publicLens);
The output (of multiple runs) looks like this:
% node test.js
{ '62': 32, '64': 9968 }
{ '130': 10000 }
% node test.js
{ '62': 40, '64': 9960 }
{ '130': 10000 }
% node test.js
{ '62': 39, '64': 9961 }
{ '130': 10000 }
I just don't get it... if I encode it in base64 it's always the same length, but decoding that back to a buffer shows 31 bytes for some keys again.
Thanks, any insights are highly appreciated!
For EC cryptography the key is not fully random over the bytes; it's a random number in the range [1, N) where N is the order of the curve. Generally the number generated will be in the same ballpark as the 256-bit order. This is especially true since N has been (deliberately) chosen to be very close to 2^256, i.e. the high-order bits are all set to 1 for secp256k1.
However, about once in 256 times, the top eight bits of the chosen private key s are all zero. That means it takes 31 or fewer bytes instead of 32. Once in 65536 times it will be 30 bytes or fewer, once in roughly 16.7 million times 29 bytes or fewer, and so on.
Base64 uses one character for 6 bits, excluding overhead. Generally it encodes blocks of 3 bytes into 4 characters at a time (possibly including padding with = characters). That means that 32 bytes take ceil(32 / 3) * 4 = 44 characters. Since ceil(31 / 3) * 4 = 44 as well, you won't notice anything for 31-byte keys. However, once in 65536 times you'll get ceil(30 / 3) * 4 = 40 characters. After that, dropping to 36 characters becomes extremely unlikely (although not negligibly small cryptographically speaking, roughly once in 2^40 times - there are lotteries that do worse, I suppose)...
So no, you don't have to regenerate the keys - they are perfectly valid for the algorithm after all. For private keys you don't generally have many compatibility requirements; however, you would usually encode such keys at a static size (32 bytes, left-padded with 00-valued bytes). Re-encoding them as statically sized keys might be a good idea...
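A left-padding step in Node could look roughly like this (my own sketch, not part of the original answer):

// pad a private key Buffer (as returned by ecdh.getPrivateKey()) to a fixed 32 bytes
function padPrivateKey(privateKey) {
    const padded = Buffer.alloc(32);                  // zero-filled
    privateKey.copy(padded, 32 - privateKey.length);  // copy to the right, zeros stay on the left
    return padded;
}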

How to sign on a javacard applet and return the signature to the host application

I have the following function in the Java Card applet that is supposed to receive a challenge from the host application, sign it, and return the signature to the host via command-response APDU communication.
private void sign(APDU apdu) {
    if (!pin.isValidated()) ISOException.throwIt(SW_PIN_VERIFICATION_REQUIRED);
    else {
        byte[] buffer = apdu.getBuffer();
        byte[] output = new byte[20];
        short length = 20;
        short x = 0;
        Signature signature = Signature.getInstance(Signature.ALG_RSA_SHA_PKCS1, false);
        signature.init(privKey, Signature.MODE_SIGN);
        short sigLength = signature.sign(buffer, offset, length, output, x);
        // This sequence of three methods sends the data contained in
        // 'serial' with offset '0' and length 'serial.length'
        // to the host application.
        apdu.setOutgoing();
        apdu.setOutgoingLength((short) output.length);
        apdu.sendBytesLong(output, (short) 0, (short) output.length);
    }
}
The host computes the challenge as follows and sends it to the javacard applet for signing:
// produce challenge
SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
byte[] bytes = new byte[20];
random.nextBytes(bytes);
CommandAPDU challenge;
ResponseAPDU resp3;
challenge = new CommandAPDU(IDENTITY_CARD_CLA, SIGN_CHALLENGE, 0x00, 0x00, bytes, 20);
resp3 = c.transmit(challenge);
if (resp3.getSW() == 0x9000) {
    card_signature = resp2.getData();
    String s = new String(card_signature);
    System.out.println("signature " + s);
} else System.out.println("Challenge signature error: " + resp3.getSW());
As you can see, I check for both successful and unsuccessful signing, but I get the following printed out:
Challenge signature error:28416
Where exactly do I go wrong? Is it possible I retrieve the challenge in a faulty way with byte[] buffer = apdu.getBuffer(), or is my signature code all wrong?
You are trying to sign using an RSA key. However, the size of an RSA-generated signature is identical to the key size (the modulus size) encoded in a minimum number of bytes. So e.g. a 2048-bit key results in a signature of ceil(2048 / 8) = 256 bytes (the maximum response size, unless you use extended-length APDUs).
You should never create byte arrays in a Java Card applet except when the class is created or when the applet is personalized. Any array created in persistent memory using new byte[] will likely remain until the garbage collector is run, and writing it may wear out the EEPROM or flash. And for signatures you don't need persistent memory.
If you look at the Signature.sign method:
The input and output buffer data may overlap.
So you can just generate the signature into the APDU buffer instead. Otherwise you can generate it into a buffer created with JCSystem.makeTransientByteArray, but if you want to communicate it to the client you'll have to copy it into the APDU buffer anyway.
Please don't ever do the following:
String s = new String(card_signature);
A signature is almost indistinguishable from random bytes, so printing it this way will just produce garbage. If you need text output, try hexadecimal or base-64 encoding of the signature, or print it as a decimal number (but note that this may lose leading bytes with value 00).

Is it possible to decipher at random position with nodejs crypto?

My understanding is that an AES block cipher in CTR mode allows, in theory, deciphering any location of a large file without needing to read the whole file.
However, I don't see how to do this with nodejs crypto module. I could feed the Decipher.update method with dummy blocks until I get to the part I'm interested in, at which point I would feed actual data read from the file, but that would be an awful hack, inefficient, and fragile, since I need to be aware of the block size.
Is there a way to do it with the crypto module, and if not, what module can I use?
I could feed the Decipher.update method with dummy blocks until I get to the part I'm interested in
As #Artjom already commented, assuming CTR mode, you don't need to feed the start of the file or any dummy blocks. You can directly feed the ciphertext you are interested in (aligned to the 128-bit block size of AES).
See the CTR mode of operation: you just need to set the IV counter to the starting block of the ciphertext and feed only the part of the encrypted file you want to decipher (feeding dummy bytes for the beginning of the starting block if needed).
Example:
you need to decrypt a file from position 1048577; using AES that is block 65536 (1048577 / 16) plus 1 byte. So you set the IV to nonce|65536, decrypt 1 dummy byte (to move the position to 16*65536 + 1), and then you can feed your ciphertext from the part of the file you are interested in.
I've found different approaches to solve this problem:
Method 1: CTR mode
This answer is based on #ArtjomB.'s and #gusto2's comments and answer, which really gave me the solution. However, here is a new answer with a working code sample, which also shows implementation details (for example, the IV must be incremented as a big-endian number).
The idea is simple: to decrypt starting at an offset of n blocks, you just increment the IV by n. Each block is 16 bytes.
import crypto = require('crypto');
let key = crypto.randomBytes(16);
let iv = crypto.randomBytes(16);
let message = 'Hello world! This is test message, designed to be encrypted and then decrypted';
let messageBytes = Buffer.from(message, 'utf8');
console.log(' clear text: ' + message);
let cipher = crypto.createCipheriv('aes-128-ctr', key, iv);
let cipherText = cipher.update(messageBytes);
cipherText = Buffer.concat([cipherText, cipher.final()]);
// this is the interesting part: we just increment the IV, as if it were a big 128-bit unsigned integer. The IV is now valid for decrypting block n°2, which corresponds to byte offset 32
incrementIV(iv, 2); // set counter to 2
let decipher = crypto.createDecipheriv('aes-128-ctr', key, iv);
let decrypted = decipher.update(cipherText.slice(32)); // we slice the cipherText to start at byte 32
decrypted = Buffer.concat([decrypted, decipher.final()]);
let decryptedMessage = decrypted.toString('utf8');
console.log('decrypted message: ' + decryptedMessage);
This program will print:
clear text: Hello world! This is test message, designed to be encrypted and then decrypted
decrypted message: e, designed to be encrypted and then decrypted
As expected, the decrypted message is shifted by 32 bytes.
And finally, here is the incrementIV implementation:
function incrementIV(iv: Buffer, increment: number) {
    if (iv.length !== 16) throw new Error('Only implemented for 16 bytes IV');
    const UINT32_RANGE = 0x100000000; // 2^32
    // split the increment into a high and a low 32-bit part
    let incrementBig = Math.floor(increment / UINT32_RANGE);
    let incrementLittle = increment % UINT32_RANGE;
    // treat the 128-bit IV as four big-endian 32-bit words and add with carry,
    // starting from the least significant word
    let overflow = 0;
    for (let idx = 0; idx < 4; ++idx) {
        let num = iv.readUInt32BE(12 - idx * 4);
        let inc = overflow;
        if (idx == 0) inc += incrementLittle;
        if (idx == 1) inc += incrementBig;
        num += inc;
        overflow = Math.floor(num / UINT32_RANGE);
        iv.writeUInt32BE(num % UINT32_RANGE, 12 - idx * 4);
    }
}
Method 2: CBC mode
Since CBC uses the previous ciphertext block as the IV, and all ciphertext blocks are known at decryption time, there is nothing special to do: you can decrypt at any point of the stream. The only catch is that the first block you decrypt will be garbage unless you supply the preceding ciphertext block as the IV, so either do that or simply start one block before the part you actually want to decrypt.
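A minimal sketch of that idea (my own illustration, not from the original answer), using the preceding ciphertext block as the IV so that no garbage block is produced:

const crypto = require('crypto');

const key = crypto.randomBytes(16);
const iv = crypto.randomBytes(16);
const message = 'Hello world! This is a test message, designed to be encrypted and then decrypted';

const cipher = crypto.createCipheriv('aes-128-cbc', key, iv);
const cipherText = Buffer.concat([cipher.update(message, 'utf8'), cipher.final()]);

// decrypt blocks 2 and 3 only (byte offsets 32..63), using ciphertext block 1 as the IV
const previousBlock = cipherText.slice(16, 32);
const decipher = crypto.createDecipheriv('aes-128-cbc', key, previousBlock);
decipher.setAutoPadding(false); // we are decrypting a middle fragment, not the padded tail
const fragment = Buffer.concat([decipher.update(cipherText.slice(32, 64)), decipher.final()]);
console.log(fragment.toString('utf8')); // 32 bytes of plaintext starting at offset 32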

Converting ElGamal encryption from encrypting numbers to strings

I have the following ElGamal encryption scheme:
const forge = require('node-forge');
const bigInt = require("big-integer");

// Generates private and public keys
function keyPairGeneration(p, q, g) {
    var secretKey = bigInt.randBetween(2, q.minus(2));
    var publicKey = g.modPow(secretKey, p);
    const keys = {
        secret: secretKey,
        public: publicKey
    }
    return keys;
}

// Generates a proxy and a user key
function generateProxyKeys(secretKey) {
    const firstKey = bigInt.randBetween(1, secretKey);
    const secondKey = secretKey.minus(firstKey);
    const keys = {
        firstKey: firstKey,
        secondKey: secondKey
    }
    return keys;
}

// Re-encrypts
function preEncrypt(p, q, g, m, publicKey) {
    const k = bigInt.randBetween(1, q.minus(1));
    const c1 = g.modPow(k, p);
    // g^x = publicKey
    // m.publicKey^k
    const c2 = bigInt(m).multiply(publicKey.modPow(k, p)).mod(p);
    const c = {
        c1: c1,
        c2: c2
    }
    return c;
}

function preDecrypt(p, c1, c2, key) {
    // (mg^xr) / (g^rx1)
    var decrypt = c2.multiply(c1.modPow(key, p).modInv(p)).mod(p);
    return decrypt;
}
This works fine with numbers. However, I want to be able to use it to encrypt strings (by the way, it's not regular ElGamal; I don't think the difference is that relevant in this context, but for more details see this question I asked).
I thought about converting the string to an integer, running the encryption, and converting back to a string whenever I needed it. I couldn't find a way of doing this in JS (there was this question posted here but the code didn't work). There is another similar question but it's in Java and the method mentioned there is not provided by the BigInt implementation in JS.
Is there any easy way of converting a string to a BigInt?
Arbitrarily long messages
Asymmetric encryption should not be used to encrypt messages of arbitrary length, because it is much slower than symmetric encryption. So, we can use symmetric encryption for the actual message and asymmetric encryption for the key that encrypted the message.
There are basically two ways to handle arbitrarily sized messages:
If prime p is big enough to fit a common key size of a symmetric cipher such as AES, then you can simply generate a random AES key (128, 192 or 256 bit) and use an AES-based scheme such as AES-GCM to encrypt your message. Afterwards, you decode a number from the AES key (use fromArray) to be used as m in your ElGamal-like encryption scheme. This is called hybrid encryption (a sketch follows below).
Regardless of how big prime p is, you can always generate a random number m in the range 1 to p-1 and use that to produce your asymmetric ciphertext. Afterwards, you take the previously generated m, encode it into a byte array (use toString(16) to produce a hex-encoded string and then parse it as hex for the hashing) and hash it with a cryptographic hash function such as SHA-256 to get your AES key. Then you use the AES key to encrypt the message with a symmetric scheme like AES-GCM. This is called key encapsulation.
The main remaining thing you have to look out for is the data format: how do you serialize the asymmetric part and the symmetric part of the ciphertext, and how do you read them back so that you can always tell them apart? There are many possible solutions for that.
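A minimal sketch of the first (hybrid encryption) option, using Node's crypto for the symmetric part. This is my own illustration; it assumes the question's preEncrypt/preDecrypt functions and suitable p, q, g and publicKey values are already in scope, with p larger than 2^128:

const crypto = require('crypto');
const bigInt = require('big-integer');

// 1. symmetric part: encrypt the actual message with AES-128-GCM
const aesKey = crypto.randomBytes(16);
const gcmIv = crypto.randomBytes(12);
const gcm = crypto.createCipheriv('aes-128-gcm', aesKey, gcmIv);
const ciphertext = Buffer.concat([gcm.update('an arbitrarily long message', 'utf8'), gcm.final()]);
const tag = gcm.getAuthTag();

// 2. asymmetric part: encrypt the AES key with the ElGamal-like scheme from the question
const m = bigInt.fromArray(Array.from(aesKey), 256);   // big-endian key bytes -> number
const c = preEncrypt(p, q, g, m, publicKey);           // hypothetical in-scope values

// To decrypt: recover m with preDecrypt, re-encode it as 16 key bytes, then
// decrypt ciphertext with AES-128-GCM using gcmIv and tag.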
Short messages
If the messages that you want to encrypt have a maximum size that is smaller than the prime that you use, then you don't need the two approaches above. You just need to take the byte representation of the message and convert it to a big integer. Something like this:
const bigInt = require("big-integer");
var arr = Array.prototype.slice.call(Buffer.from("some message"), 0);
var message = bigInt.fromArray(arr, 256);
This is a big endian encoding.
This only makes sense if your prime is big enough, which it should be for security anyway.
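For completeness, converting such a big integer back into the original string could look roughly like this (my own sketch, using the same big-integer API; the same pattern applies to the result of preDecrypt):

// toArray(256) returns an object of the form { value: [...byte values], isNegative: false }
var bytes = message.toArray(256).value;
var text = Buffer.from(bytes).toString('utf8');  // "some message"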

Adding a simple MAC to url parameters?

I want to add a simple kind of MAC to some of my URL parameters. This is only intended as an additional line of defense against application bugs and caching related problems/bugs, and not intended as any form of replacement of the actual login security in the application. A given business-object-id is already protected by backends to be limited to a single user.
So basically I'd like to add a short authentication code to my url parameters, on the size of 2-4 characters. I think I'd like to have a reversible function along the lines of f(business-data-id + logged-on-user-id + ??) = hash, but I am open to suggestions.
The primary intention is to stop id guessing and to make sure that URLs are fairly distinct per logged-on user. I also don't want something big and clunky like MD5.
Since you aren't looking for cryptographic quality, maybe a 24-bit CRC would fit your needs. While MD5 is "fast" in absolute terms, CRC is, relatively, "blindingly fast". Then the 3-byte CRC could be text-encoded into four characters with Base-64 encoding.
Here's a Java implementation of the check used for OpenPGP ASCII-armor checksums:
private static byte[] crc(byte[] data)
{
    int crc = 0xB704CE;
    for (int octets = 0; octets < data.length; ++octets) {
        crc ^= (data[octets] & 0xFF) << 16;
        for (int i = 0; i < 8; ++i) {
            crc <<= 1;
            if ((crc & 0x1000000) != 0)
                crc ^= 0x1864CFB;
        }
    }
    byte[] b = new byte[3];
    for (int shift = 16, idx = 0; shift >= 0; shift -= 8) {
        b[idx++] = (byte) (crc >>> shift);
    }
    return b;
}
I would hash a secret key (which is known only by the server), together with whatever you want to protect—probably the combination of object identifier and user identifier.
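A minimal sketch of that suggestion in Node.js (my own illustration; the names and the 4-character truncation are assumptions, not part of the original answer):

const crypto = require('crypto');
const SERVER_SECRET = 'known-only-to-the-server';   // hypothetical server-side secret

function urlToken(businessId, userId) {
    return crypto.createHmac('sha256', SERVER_SECRET)
        .update(businessId + ':' + userId)
        .digest('base64')
        .replace(/[+/=]/g, '')   // keep the token URL-friendly
        .slice(0, 4);            // 4 characters, roughly 24 bits
}

// usage: append token=urlToken(id, user) to the URL and recompute/compare it server-side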
If what you want is basically MD5 but smaller, why not use MD5 and keep just the last 4 characters? That doesn't add a huge blob to your URLs; it's always 4 nice hex digits.
A quick question for which I'm sure there's a good answer, but why not store this information in a cookie?
Then you could use something big and clunky like MD5 and your URLs would still be pretty.
