How to crack a WPA2 Password using HashCat?

I need to brute-force a .hccapx file which contains a WPA2 handshake, because a dictionary attack didn't work. How can I do that with hashcat? I don't know the password length, etc.
Thanks!

First of all, you should use this at your own risk. Don't do anything illegal with hashcat.
If you want to perform a brute-force attack, you will need to know the length of the password. The following command is an example of how your scenario would work with a password of length 8.
hashcat -m 2500 -a 3 capture.hccapx ?d?d?d?d?d?d?d?d
The -a 3 denotes the "mask attack" (which is brute force, but more optimized).
The -m 2500 denotes the hash mode for WPA/WPA2 handshakes.
The capture.hccapx is the .hccapx file you already captured.
The ?d?d?d?d?d?d?d?d denotes a string composed of 8 digits.
If you want to specify other charsets, these are the built-in ones supported by hashcat:
?l = abcdefghijklmnopqrstuvwxyz
?u = ABCDEFGHIJKLMNOPQRSTUVWXYZ
?d = 0123456789
?h = 0123456789abcdef
?H = 0123456789ABCDEF
?s = «space»!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
?a = ?l?u?d?s
?b = 0x00 – 0xff
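Since you said you don't know the password length, here is one hedged sketch of how that is often handled (the charset mix and the length range are assumptions, not something taken from your capture): hashcat lets you define custom charsets with -1 .. -4 and step through mask lengths with --increment:

# try lowercase letters and digits (?1 = ?l?d) for lengths 8 through 10
hashcat -m 2500 -a 3 --increment --increment-min 8 --increment-max 10 -1 ?l?d capture.hccapx ?1?1?1?1?1?1?1?1?1?1

Keep in mind that a full brute force over mixed charsets grows exponentially with the length, so larger masks quickly become infeasible.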

Related

What is the difference between a non-base58 character and an invalid checksum in a private WIF key?

I am just playing around with a NodeJS dependency called CoinKey.
Here is the question:
I randomly generate WIF keys with this function:
case 'crc':
let randomChars = 'cbldfganhijkmopqwesztuvxyr0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
let generatedPrivateKey = ''
for (let i = 0; i < 50; i++)
generatedPrivateKey += randomChars.charAt(Math.floor(Math.random() * randomChars.length))
return prefix + generatedPrivateKey
I have here 2 real examples:
1 = LSY3MrnXcjnW5QU2ymwZn6UYu412jr577U9AnR9akwTg2va7h9B -> Invalid checksum
2 = L2l37v6EPDN423O02BqV02T2PK1A6vnjBO1HRmgsZ6384LNdSCj -> Non-base58 character
I call the function CoinKey.fromWif(privateKey) with, of course, one of the 2 private keys above. But why does key 1 give me the error Invalid checksum while key 2 gives me the error Non-base58 character?
I am just a simple developer, I don't have any knowledge about encryption etc. The only thing I know is that I'm trying to generate a WIF key, and a WIF key is a shorter encoding of a larger private key. And yes, I also know that it's almost impossible to brute-force such a big private key, but as I said, I'm just playing around.
The base 58 character set only contains 58 characters. a-z, A-Z, 0-9 are 62, so four of them are not valid: 0 (zero), O (capital o), I (capital i) and l (lowercase L) are excluded because they are easily confused. Case 2 apparently contains one or more of the invalid characters.
Each key also has a checksum, which is only checked once all the characters are valid. Your first case happens to contain only valid characters, purely by coincidence, but not the correct checksum.
It's pretty nonstandard and a bit inefficient; base 64 is commonly used.
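To make the distinction concrete, here is a small Python sketch (the two keys are simply copied from the question) showing which characters fall outside Bitcoin's Base58 alphabet; only when every character passes this test does a decoder go on to verify the checksum:

# Bitcoin's Base58 alphabet deliberately leaves out 0, O, I and l
BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def invalid_base58_chars(s):
    # return every character of s that is not in the Base58 alphabet
    return sorted(set(c for c in s if c not in BASE58_ALPHABET))

key1 = "LSY3MrnXcjnW5QU2ymwZn6UYu412jr577U9AnR9akwTg2va7h9B"
key2 = "L2l37v6EPDN423O02BqV02T2PK1A6vnjBO1HRmgsZ6384LNdSCj"

print(invalid_base58_chars(key1))  # [] -> all characters valid, so the checksum gets checked (and fails)
print(invalid_base58_chars(key2))  # ['0', 'O', 'l'] -> rejected before the checksum is ever looked at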

Can someone explain what iv = enc[:16] and enc = enc[16:] mean?

iv = enc[:16]
enc = enc[16:]
What is the meaning of the above?
As of now I'm using it blindly, so please explain; I just want to know the meaning of what I'm using ...
It means that you're assigning the first 16 characters of enc to iv and the rest of the characters back to enc:
string = "abcdefg"
print(string[:3]) # abc
print(string[3:]) # defg
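In the crypto code this snippet usually comes from, enc is a bytes object rather than text: the first 16 bytes are the prepended IV (16 bytes being the AES block size) and the rest is the actual ciphertext. A minimal Python sketch with made-up data, assuming that layout:

import os

enc = os.urandom(16) + b"stand-in for the real ciphertext"  # pretend blob: IV followed by ciphertext
iv = enc[:16]    # first 16 bytes -> initialisation vector
enc = enc[16:]   # everything after that -> ciphertext to decrypt
print(len(iv), enc)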

What does it mean to convert octet strings to nonnegative integers in RSA?

I am trying to implement RSA PKCS #1 based on this spec
http://www.emc.com/emc-plus/rsa-labs/pkcs/files/h11300-wp-pkcs-1v2-2-rsa-cryptography-standard.pdf
However, I am not sure what the purpose of OS2IP on page 9 is. Assume my message is the integer 258 and the private key is e. Also assume we don't do any other formatting besides OS2IP.
So I will convert 258 into an octet string and store it in char buf[2] = {0x02, 0x01}. Now, before I compute the exponentiation 258^e, I need to call OS2IP to reverse the byte order and save it as buf_new[2] = {0x01, 0x02}. Now 0x0102 = 258.
However, if I initially stored 258 as buf[2] = {0x01, 0x02}, then there is no need to call OS2IP, correct? Or is it the convention that I have to save it as {0x02, 0x01}?
OS2IP converts an octet string into a nonnegative integer by interpreting the octets as a big-endian number (I2OSP is the inverse operation).
However, if I initially stored 258 as buf[2] = {0x01, 0x02}, then there is no need to call OS2IP, correct?
That is correct. 258 is already encoded in big endian, although depending on the length you chose (!= 2) you might be missing the leading zeros.
Or is it the convention that I have to save it as {0x02, 0x01}?
I don't understand your question :/
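For what it's worth, here is a minimal Python sketch of the two primitives as PKCS #1 defines them (OS2IP and its inverse I2OSP); it only illustrates the big-endian convention discussed above:

def os2ip(octets):
    # octet string -> nonnegative integer, most significant octet first (big endian)
    result = 0
    for byte in octets:
        result = (result << 8) | byte
    return result

def i2osp(x, length):
    # nonnegative integer -> octet string of the given length, big endian
    return bytes((x >> (8 * i)) & 0xFF for i in reversed(range(length)))

print(os2ip(bytes([0x01, 0x02])))  # 258
print(i2osp(258, 2).hex())         # 0102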

What the heck is a Perl string anyway?

I can't find a basic description of how string data is stored in Perl! It's like all the documentation is assuming I already know this for some reason. I know about encode(), decode(), and I know I can read raw bytes into a Perl "string" and output them again without Perl screwing with them. I know about open modes. I also gather Perl must use some internal format to store character strings and can differentiate between character and binary data. Please, where is this documented???
Equivalent question is; given this perl:
$x = decode($y);
Decode to WHAT and from WHAT??
As far as I can figure, there must be a flag on the string data structure that says this is binary XOR character data (of some internal format which, BTW, is a superset of Unicode - http://perldoc.perl.org/Encode.html#DESCRIPTION). But I'd like it if that were stated in the docs or confirmed/discredited here.
This is a great question. To investigate, we can dive a little deeper by using Devel::Peek to see what is actually stored in our strings (or other variables).
First, let's start with an ASCII string:
$ perl -MDevel::Peek -E 'Dump "string"'
SV = PV(0x9688158) at 0x969ac30
REFCNT = 1
FLAGS = (POK,READONLY,pPOK)
PV = 0x969ea20 "string"\0
CUR = 6
LEN = 12
Then we can turn on the Unicode IO layers and do the same:
$ perl -MDevel::Peek -CSAD -E 'Dump "string"'
SV = PV(0x9eea178) at 0x9efcce0
REFCNT = 1
FLAGS = (POK,READONLY,pPOK)
PV = 0x9f0faf8 "string"\0
CUR = 6
LEN = 12
From there, let's try to manually add some wide characters:
$ perl -MDevel::Peek -CSAD -e 'Dump "string \x{2665}"'
SV = PV(0x9be1148) at 0x9bf3c08
REFCNT = 1
FLAGS = (POK,READONLY,pPOK,UTF8)
PV = 0x9bf7178 "string \342\231\245"\0 [UTF8 "string \x{2665}"]
CUR = 10
LEN = 12
From that you can clearly see that Perl has interpreted this correctly as UTF-8. The problem is that if I don't give the octets using the \x{} escaping, the representation looks more like the regular string:
$ perl -MDevel::Peek -CSAD -E 'Dump "string ♥"'
SV = PV(0x9143058) at 0x9155cd0
REFCNT = 1
FLAGS = (POK,READONLY,pPOK)
PV = 0x9168af8 "string \342\231\245"\0
CUR = 10
LEN = 12
All Perl sees is bytes, and it has no way to know that you meant them as a Unicode character, unlike when you entered the escaped octets above. Now let's use decode and see what happens:
$ perl -MDevel::Peek -CSAD -MEncode=decode -E 'Dump decode "utf8", "string ♥"'
SV = PV(0x8681100) at 0x8683068
REFCNT = 1
FLAGS = (TEMP,POK,pPOK,UTF8)
PV = 0x869dbf0 "string \342\231\245"\0 [UTF8 "string \x{2665}"]
CUR = 10
LEN = 12
TADA! Now you can see that the string is correctly represented internally, matching what you entered when you used the \x{} escaping.
The actual answer is it is "decoding" from bytes to characters, but I think it makes more sense when you see the Peek output.
Finally, you can make Perl see your source code as UTF-8 by using the utf8 pragma, like so:
$ perl -MDevel::Peek -CSAD -Mutf8 -E 'Dump "string ♥"'
SV = PV(0x8781170) at 0x8793d00
REFCNT = 1
FLAGS = (POK,READONLY,pPOK,UTF8)
PV = 0x87973b8 "string \342\231\245"\0 [UTF8 "string \x{2665}"]
CUR = 10
LEN = 12
Rather like the fluid string/number status of its scalar variables, the internal format of Perl's strings is variable and depends on the contents of the string.
Take a look at perluniintro, which says this.
Internally, Perl currently uses either whatever the native eight-bit character set of the platform (for example Latin-1) is, defaulting to UTF-8, to encode Unicode strings. Specifically, if all code points in the string are 0xFF or less, Perl uses the native eight-bit character set. Otherwise, it uses UTF-8.
What that means is that a string like "I have £ two" is stored as (bytes) I have \x{A3} two. (The pound sign is U+00A3.) Now if I append a multi-byte Unicode character such as U+263A - a smiling face - Perl will convert the whole string to UTF-8 before it appends the new character, giving (bytes) I have \xC2\xA3 two\xE2\x98\xBA. Removing this last character again leaves the string UTF-8 encoded, as I have \xC2\xA3 two.
But I wonder why you need to know this. Unless you are writing an XS extension in C the internal format is transparent and invisible to you.
Perl's internal string format is implementation dependent, but usually a superset of UTF-8. It doesn't matter what it is, because you use decode and encode to convert strings between the internal format and other encodings.
decode converts to Perl's internal format, encode converts from Perl's internal format.
Binary data is stored internally the same way characters 0 through 255 are.
encode and decode just convert between formats. For example, UTF-8 encoding means each character will only be an octet, using Perl character values 0 through 255, i.e. the string consists of UTF-8 octets.
Short answer: It's a mess
Slightly longer: The difference isn't visible to the programmer.
Basically you have to remember if your string contains bytes or characters, where characters are unicode codepoints. If you only encounter ASCII, the difference is invisible, which is dangerous.
Data itself and the representation of such data are distinct, and should not be confused. Strings are (conceptually) a sequence of codepoints, but are represented as a byte array in memory, and represented as some byte sequence when encoded. If you want to store binary data in a string, you re-interpret the number of a codepoint as a byte value, and restrict yourself to codepoints in 0–255.
(E.g. a file has no encoding. The information in that file has some encoding (be it ASCII, UTF-16 or EBCDIC at a character level, and Perl, HTML or .ini at an application level))
The exact storage format of a string is irrelevant, but you can store complete integers inside such a string:
# this will work if your perl was compiled with large integers
my $string = chr 2**64; # this is so not unicode
say ord $string; # 18446744073709551615
The internal format is adjusted accordingly to accommodate such values; normal strings won't take up one integer per character.
Perl can handle more than Unicode can, so it's very flexible. Sometimes you want to interface with something that cannot, so you can use encode(...) and decode(...) to handle those transformations. See http://perldoc.perl.org/utf8.html

Help needed with Perl's crypt function

I am trying to generate a unique string/id given another relatively large string (consisting of a directory path name), so I thought of using the crypt function. However, it's not working as expected, most probably due to my inability to understand it.
Here is the code and output:
#!/usr/bin/perl
print "Enter a string:";
chomp(my $string = <STDIN>);
my $encrypted_string = crypt($string,'di');
print "\n the encrypted string is:$encrypted_string";
output:
$ perl crypt_test
Enter a string:abcdefghi
the encrypted string is:dipcn0ADeg0Jc
$
$ perl crypt_test
Enter a string:abcdefgh
the encrypted string is:dipcn0ADeg0Jc
$
$
$ perl crypt_test
Enter a string:abcde
the encrypted string is:diGyhSp4Yvj4M
$
I couldn't understand why it returned the same encrypted string for the first two strings but a different one for the third. Note that the salt is the same for all three.
The crypt(3) function only takes into account the first eight chars of the input string:
By taking the lowest 7 bits of each of the first eight characters of the key, a 56-bit key is obtained. This 56-bit key is used to encrypt repeatedly a constant string (usually a string consisting of all zeros). The returned value points to the encrypted password, a series of 13 printable ASCII characters (the first two characters represent the salt itself).
So what you are seeing is by design - from perlfunc:
crypt PLAINTEXT,SALT
Creates a digest string exactly like the crypt(3) function in the C library
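The truncation is easy to demonstrate outside Perl as well. A quick Python sketch (Python's crypt module, available up to 3.12, wraps the same libc crypt(3); this assumes a platform where a two-character salt selects the classic DES scheme):

import crypt

# classic DES crypt only looks at the first 8 characters of the key
print(crypt.crypt("abcdefgh", "di"))   # dipcn0ADeg0Jc
print(crypt.crypt("abcdefghi", "di"))  # dipcn0ADeg0Jc -- the 9th character is ignored
print(crypt.crypt("abcde", "di"))      # diGyhSp4Yvj4M -- a genuinely different input

If the goal is just a unique id for a directory path rather than password hashing, a general-purpose digest such as SHA-256 (Digest::SHA in Perl) does not have this 8-character limit.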
