Brute Force Attack (decryption) AES - security

I'm trying to program a brute force attack. The idea is:
I already have the ciphertext after encryption
I have the first 4 characters of the plaintext (the plaintext is 41 characters long)
I have the first 12 characters of the secret key
What I need is to find the 4 missing characters of the key
Let's assume I have the key:
"ABCDEFGHIJKL????"
How can I apply a brute force attack to find the missing characters?

There are 2^32 possibilities for the 4 missing key bytes, which is exactly the range of an unsigned 32-bit integer. So loop over all values of such an integer, taking your four candidate bytes from its value. In C, something like this:
unsigned int i = 0;
do {
    unsigned char b0 = i & 0xFF;           /* first candidate missing key byte  */
    unsigned char b1 = (i >> 8) & 0xFF;    /* second candidate missing key byte */
    unsigned char b2 = (i >> 16) & 0xFF;   /* third candidate missing key byte  */
    unsigned char b3 = (i >> 24) & 0xFF;   /* fourth candidate missing key byte */
    /* here: put b0..b3 into the last four key bytes, try the candidate key
       with your AES encryption, and break if it reproduces the known data */
    ++i;
} while (i != 0);    /* wraps back to 0 after all 2^32 values have been tried */
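Below is a hedged sketch of the "try the candidate" step. It assumes a 16-byte AES-128 key (your 12 known bytes plus b0..b3) and, purely for illustration, ECB mode via OpenSSL's EVP interface, checking the first ciphertext block against the 4 known plaintext bytes; the helper name try_key is hypothetical, so adapt the cipher, mode/IV and comparison to your actual setup.
#include <string.h>
#include <openssl/evp.h>

/* Hypothetical helper: returns 1 if the candidate key reproduces the known
   plaintext prefix, 0 otherwise. AES-128-ECB is assumed only for illustration. */
int try_key(const unsigned char key[16],
            const unsigned char first_cipher_block[16],
            const unsigned char known_plain[4])
{
    unsigned char out[32];
    int outlen = 0, tmplen = 0, match;
    EVP_CIPHER_CTX ctx;
    EVP_CIPHER_CTX_init(&ctx);
    EVP_DecryptInit_ex(&ctx, EVP_aes_128_ecb(), NULL, key, NULL);
    EVP_CIPHER_CTX_set_padding(&ctx, 0);       /* decrypt one raw block only */
    EVP_DecryptUpdate(&ctx, out, &outlen, first_cipher_block, 16);
    EVP_DecryptFinal_ex(&ctx, out + outlen, &tmplen);
    match = (memcmp(out, known_plain, 4) == 0);
    EVP_CIPHER_CTX_cleanup(&ctx);
    return match;
}
Inside the loop above you would copy the 12 known key bytes followed by b0..b3 into a 16-byte buffer and pass that buffer to try_key.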

Related

Getting CryptoException.ILLEGAL_VALUE when using HMACKey.setKey

I want to generate an HMAC-SHA1 signature in a JavaCard applet
I am trying to sign a message contained in the byte array S (64 bytes). A snippet of the function from the JavaCard (JC) applet module is given below. I am using the JavaCard 3.0.1 library for developing the JC applet.
Signature m_sessionMAC = null;
HMACKey keyType = null;
// Create the Signature object used for the MAC
m_sessionMAC = Signature.getInstance(Signature.ALG_HMAC_SHA_1, false);
// Create HMAC Key Used in Mac
keyType = (HMACKey) KeyBuilder.buildKey(KeyBuilder.TYPE_HMAC, KeyBuilder.LENGTH_HMAC_SHA_256_BLOCK_64, false);
keyType.setKey(S,(short) 0, (short) S.length);
This keyType.setKey call results in an ILLEGAL_VALUE exception. Please guide me: what am I doing wrong?
Key length is specified in bits -- citing KeyBuilder.buildKey() documentation:
keyLength - the key size in bits. The valid key bit lengths are key type dependent. Some common key lengths are listed above in the LENGTH_* constants, for example LENGTH_DES.
Which means:
use 512 for a 64-byte key
use 64 for an 8-byte key
Note that you can use any key length for HMAC-SHA1, but keys longer than the block size (which is 64 bytes for SHA-1) are transformed into their SHA-1 hash before use.
Good luck with your project!

Converting ElGamal encryption from encrypting numbers to strings

I have the following ElGamal encryption scheme:
const forge = require('node-forge');
const bigInt = require("big-integer");
// Generates private and public keys
function keyPairGeneration(p, q, g) {
  var secretKey = bigInt.randBetween(2, q.minus(2));
  var publicKey = g.modPow(secretKey, p);
  const keys = {
    secret: secretKey,
    public: publicKey
  };
  return keys;
}

// Generates a proxy and a user key
function generateProxyKeys(secretKey) {
  const firstKey = bigInt.randBetween(1, secretKey);
  const secondKey = secretKey.minus(firstKey);
  const keys = {
    firstKey: firstKey,
    secondKey: secondKey
  };
  return keys;
}

// Re-encrypts
function preEncrypt(p, q, g, m, publicKey) {
  const k = bigInt.randBetween(1, q.minus(1));
  const c1 = g.modPow(k, p);
  // g^x = publicKey
  // m.publicKey^k
  const c2 = bigInt(m).multiply(publicKey.modPow(k, p)).mod(p);
  const c = {
    c1: c1,
    c2: c2
  };
  return c;
}

function preDecrypt(p, c1, c2, key) {
  // (mg^xr) / (g^rx1)
  var decrypt = c2.multiply(c1.modPow(key, p).modInv(p)).mod(p);
  return decrypt;
}
This works fine with numbers. However, I want to be able to use it to encrypt strings (by the way, it's not regular ElGamal; I don't think the difference is relevant in this context, but for more details see this question I asked).
I thought about converting the string to an integer, running the encryption, and converting back to a string whenever I needed it. I couldn't find a way of doing this in JS (there was this question posted here but the code didn't work). There is another similar question but it's in Java and the method mentioned there is not provided by the BigInt implementation in JS.
Is there any easy way of converting a string to a BigInt?
Arbitrarily long messages
Asymmetric encryption should not be used to encrypt messages of arbitrary length, because it is much slower than symmetric encryption. So, we can use symmetric encryption for the actual message and asymmetric encryption for the key that encrypted the message.
There are basically two ways to handle arbitrarily sized messages:
If the prime p is big enough to fit a common key size of a symmetric cipher such as AES, then you can simply generate a random AES key (128, 192 or 256 bits) and use an AES-based scheme such as AES-GCM to encrypt your message. Afterwards, you decode a number from the AES key (use fromArray) to be used as m in your ElGamal-like encryption scheme. This is called hybrid encryption.
Regardless of how big the prime p is, you can always generate a random number m in the range 1 to p-1 and use that to produce your asymmetric ciphertext. Afterwards, you take the previously generated m, encode it into a byte array (use toString(16) to produce a hex-encoded string and then simply parse it as hex for the hashing) and hash it with a cryptographic hash function such as SHA-256 to get your AES key. Then you can use the AES key to encrypt the message with a symmetric scheme like AES-GCM. This is called key encapsulation.
The main remaining thing that you have to look out for is the data format: how do you serialize the asymmetric part and the symmetric part of the ciphertext, and how do you read them back so that you can always tell them apart? There are many possible solutions there.
Short messages
If the messages that you want to encrypt have a maximum size that is smaller than the prime that you use, then you don't need the two approaches above. You just need to take the byte representation of the message and convert it to a big integer. Something like this:
var arr = Array.prototype.slice.call(Buffer.from("some message"), 0);
var message = bigInt.fromArray(arr, 256);
This is a big-endian encoding. It only makes sense if your prime is big enough, which it should be anyway for security.

What kind of cryptographic mechanism uses repeated XORs?

I'm attempting to analyze a short encryption program and figure out which mechanism it's using.
#include <stdio.h>
#include <stdlib.h>
int main( int argc, char * argv[] ) {
    long int key;
    char * endptr;
    key = strtol( argv[1], &endptr, 10 );
    srandom( key );
    {   /* now copy input to output through crypt transformation */
        char ch;
        while (!feof( stdin )) {
            putc( (getc(stdin) ^ random())&0xFF, stdout );
        }
        fclose( stdout );
    }
}
I can follow this easily enough, but I'm having trouble working out which mechanism it's using.
I'm looking at the following:
http://en.wikipedia.org/wiki/Public-key_cryptography
http://en.wikipedia.org/wiki/Block_cipher
http://en.wikipedia.org/wiki/Stream_cipher
http://en.wikipedia.org/wiki/Diffie-Hellman
I'm leaning towards iterated block ciphers, but I really have no idea at this point.
You need to clearly distinguish the categories of ciphers in your mind. There are:
Block ciphers, which operate on fixed-size blocks of input
Stream ciphers, which operate on data streams (i.e. one byte at a time)
The above only distinguishes ciphers by the size of the input they accept; it has nothing to do with the mechanism they use to produce the encrypted text.
Regarding this mechanism, we have:
Substitution ciphers
Transposition ciphers
And many other types which are basically combinations of the above, possibly with many iterations
So try to answer this question first:
Is your example a stream cipher or a block cipher? Remember, this has nothing to do with how it encrypts!
It's a stream cipher. The keystream is generated by seeding srandom with the given key.
In cryptography, a stream cipher is a symmetric key cipher where plaintext bits are combined with a pseudorandom cipher bit stream (keystream), typically by an exclusive-or (xor) operation. In a stream cipher the plaintext digits are encrypted one at a time, and the transformation of successive digits varies during the encryption.
That is exactly what you're doing here: key is the symmetric key, and the cipher stream is generated by random(). The call to srandom(key) ensures that the random stream will be the same as long as your key is the same.
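Because XOR is its own inverse, decryption is the same operation: seed srandom() with the same key and XOR the ciphertext with the regenerated keystream. A minimal sketch (assuming encryption and decryption run on the same platform, since the output of srandom()/random() is implementation-specific):
#include <stdio.h>
#include <stdlib.h>

/* Decryption sketch: identical to the encryption above because
   (p ^ k) ^ k == p. Run it with the same numeric key and pipe the
   ciphertext through stdin. */
int main(int argc, char *argv[])
{
    char *endptr;
    long int key = strtol(argv[1], &endptr, 10);
    int c;
    srandom(key);                 /* same seed -> same keystream */
    while ((c = getc(stdin)) != EOF)
        putc((c ^ random()) & 0xFF, stdout);
    return 0;
}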

OpenSSL encryption and decryption using the EVP library

I have a plaintext and a ciphertext, and my task is to find the key that produced the declared ciphertext. The key comes from a word list, like a dictionary. I have written the code in C; it compiles fine and creates a file with all the ciphers.
The problem I am facing is that every time I run the code the ciphertext is completely different. I have no clue where I am making a mistake.
The following is the code I have written:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <openssl/evp.h>
int print_hex(unsigned char *buf, int len, FILE *outFile);

int main()
{
    int i;
    char words[32], t;
    FILE *key, *outFile;
    const char *out = "Output.txt";
    unsigned char outbuf[1024 + EVP_MAX_BLOCK_LENGTH];
    unsigned char iv[] = "0000000000000000";
    int outlen, tmplen;
    int num;
    EVP_CIPHER_CTX ctx;
    EVP_CIPHER_CTX_init(&ctx);
    char inText[] = "This is a top secret.";
    char cipherText[] = "8d20e5056a8d24d0462ce74e4904c1b513e10d1df4a2ef2ad4540fae1ca0aaf9";

    key = fopen("words.txt", "r");
    if( remove("ciphertext.txt") == -1 ) {
        perror("Error deleting file");
    }
    outFile = fopen("ciphertext.txt", "a+");
    if( key == NULL || outFile == NULL )
    {
        perror ("Cannot open file");
        exit(1);
    }
    char pbuffer[1024];
    while ( fgets(words, 32, key) )
    {
        i = strlen(words);
        words[i-1] = '\0';        /* strip the trailing newline */
        //printf("%s",words);
        i = 0;
        EVP_EncryptInit_ex(&ctx, EVP_aes_128_cbc(), NULL, (unsigned char *)words, iv);
        if(!EVP_EncryptUpdate(&ctx, outbuf, &outlen, (unsigned char *)inText, strlen(inText)))
        {
            EVP_CIPHER_CTX_cleanup(&ctx);
            return 0;
        }
        if(!EVP_EncryptFinal_ex(&ctx, outbuf + outlen, &tmplen))
        {
            EVP_CIPHER_CTX_cleanup(&ctx);
            return 0;
        }
        outlen += tmplen;
        print_hex(outbuf, outlen, outFile);
    }
    fclose(key);
    fclose(outFile);
    return 1;
}

int print_hex(unsigned char *buf, int len, FILE *outFile)
{
    int i, n;
    char x = '\n';
    for ( i = 0; i < len; i++ )
    {
        fprintf(outFile, "%02x", buf[i]);
    }
    fprintf(outFile, "%c", x);
    return (0);
}
Since the key is a word, the words in the word list can be shorter or longer than 16 bytes, and from my research on OpenSSL it was said that there will be PKCS#5 padding if the data does not fill a 16-byte block. Is the same true for the key?
The ciphertext I declared does not match the ciphertext I am generating from the program, and I am unable to find the key for the ciphertext.
I need help from the experts. I would appreciate it if someone could help me get out of this trouble.
Thanks in advance
What are you actually trying to achieve? Your code looks like an attempt to carry out a brute-force attack using a dictionary of passwords ... I'm not sure I should be trying to help with that!
I'll assume it's just an exercise ...
The first thing that strikes me is that you are setting your initialization vector (the variable iv) to a string of ASCII zeros. That's almost certainly wrong, and you probably need to use binary zeros.
unsigned char iv[16] = { 0 };
I don't know how the ciphertext that you have was generated (by another program, presumably) but I would imagine that that program didn't use the dictionary word itself as a key, but went through some sort of key derivation process first. You are using 128-bit AES as your encryption algorithm, so your keys should be 16 bytes long. You could achieve that by padding, as you suggest, but it's more usual to go through some process that mixes up the bits of the key to make it look more random and to distribute the key bits throughout the whole key. It wouldn't be unusual to hash the word and to use the output of the hash function rather than the word itself as key. Another possibility is that the dictionary word may be used as the input to a passphrase-based key derivation function such as that defined in PKCS#5.
You really need to find out how the word is used to generate a key before you can get any further with this.
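If it does turn out that the exercise uses the dictionary word directly, padded out to the AES-128 key size, the key-preparation step might look something like the sketch below. This is only an assumption: the pad byte (zero here) and the truncation rule are guesses, and word_to_key is a hypothetical helper, not part of OpenSSL.
#include <string.h>

/* Hypothetical helper: turn a dictionary word into a 16-byte AES-128 key
   by truncating long words and zero-padding short ones (pad byte assumed). */
static void word_to_key(const char *word, unsigned char key[16])
{
    size_t n = strlen(word);
    memset(key, 0x00, 16);
    memcpy(key, word, n > 16 ? 16 : n);
}

/* ...and in the loop of the program above:
       unsigned char aes_key[16];
       word_to_key(words, aes_key);
       EVP_EncryptInit_ex(&ctx, EVP_aes_128_cbc(), NULL, aes_key, iv);
*/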
Thank you very much for the reply.
Yes it is just an exercise and is like a dictionary attack.
I am supposed to use an IV of binary zeros, not ASCII zeros, which is one of the mistakes I had made.
I assume the given ciphertext is encrypted purely with a word from the word list without any hashing, and maybe some padding is done, but I am not sure, because I am supposed to find the key from the ciphertext. The word list might have words shorter than 16 bytes or longer than 16 bytes, so I am thinking the problem might be with the padding.
I am thinking that if the word length is less than 16 bytes, then I have to pad with ASCII zeros or something like that. Which one do you suggest I do? With a little push I may be finished.
Thanks

Is it possible to convert UTF32 text to UTF16 using only Windows API?

I'm trying to find out whether converting UTF-32 text to/from any code page is possible using the Windows API alone. I cannot use the CLR for this task.
The Code page identifiers page at Microsoft at http://msdn.microsoft.com/en-us/library/dd317756(VS.85).aspx lists UTF-32 as being available only to managed applications.
ConvertStringTo/FromUnicode fails when UTF-32 is used.
You can use the function below. It takes the UTF-32 code point to be converted as its first argument and produces the equivalent UTF-16 encoding (a single code unit or a surrogate pair, as the case may be) through the second and third arguments.
The high and low surrogate values are returned by reference.
If the code point is below 0x10000, we simply return that code point in the low surrogate by reference, while the high surrogate is 0.
If the code point is 0x10000 or above, we calculate the high and low surrogates using the rules given on this Wikipedia page:
https://en.wikipedia.org/wiki/UTF-16#Example_UTF-16_encoding_procedure
Here's the code:
unsigned int convertUTF32ToUTF16(unsigned int cUTF32, unsigned int &h, unsigned int &l)
{
    if (cUTF32 < 0x10000)
    {
        h = 0;
        l = cUTF32;
        return cUTF32;
    }
    unsigned int t = cUTF32 - 0x10000;
    h = (((t<<12)>>22) + 0xD800);
    l = (((t<<22)>>22) + 0xDC00);
    unsigned int ret = ((h<<16) | ( l & 0x0000FFFF));
    return ret;
}
With a bit of knowledge of Unicode you should be able to create a UTF32 to UTF16 converter without using any APIs.
All characters in the range U+0000 to U+FFFF can simply have the upper 16 bits removed.
Values in the range U+10000 to U+10FFFF can be converted into two 16-bit words, called surrogate pairs:
http://en.wikipedia.org/wiki/UTF-16#Encoding_of_characters_outside_the_BMP
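As a rough sketch of those two rules in C (encode_utf16 is just an illustrative name; error handling for invalid inputs such as unpaired surrogate values is omitted):
#include <stdint.h>

/* Encode one Unicode code point (up to U+10FFFF) as UTF-16.
   Writes one or two 16-bit units into out and returns how many were written. */
int encode_utf16(uint32_t cp, uint16_t out[2])
{
    if (cp < 0x10000) {                          /* BMP: value fits in one unit */
        out[0] = (uint16_t)cp;
        return 1;
    }
    cp -= 0x10000;                               /* 20 bits left to split */
    out[0] = (uint16_t)(0xD800 + (cp >> 10));    /* high surrogate */
    out[1] = (uint16_t)(0xDC00 + (cp & 0x3FF));  /* low surrogate */
    return 2;
}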
You can use the iconv library in Windows. It fully supports UTF-32 (big and little endian).
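For example, a minimal sketch using the standard iconv API (this assumes a libiconv build such as win_iconv is installed and linked; the wrapper name utf32_to_utf16 is just illustrative):
#include <iconv.h>
#include <stddef.h>

/* Convert a UTF-32LE byte buffer to UTF-16LE.
   Returns the number of output bytes written, or -1 on failure. */
int utf32_to_utf16(const char *in, size_t inlen, char *out, size_t outlen)
{
    iconv_t cd = iconv_open("UTF-16LE", "UTF-32LE");
    if (cd == (iconv_t)-1)
        return -1;

    char *inp = (char *)in;
    char *outp = out;
    size_t inleft = inlen, outleft = outlen;
    size_t rc = iconv(cd, &inp, &inleft, &outp, &outleft);
    iconv_close(cd);

    return rc == (size_t)-1 ? -1 : (int)(outlen - outleft);
}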
