Is there a way to control the output of the md5sum command? Is it possible to set how many bits the MD5 output should be?
$ echo $var | md5sum -b
45819a662f88d80bfb35ad703ca8676f
But the output is too long. I want it to be shorter, yet I don't want to omit any part of it.
No. An MD5 hash is 128 bits long; you cannot make it shorter without leaving part of it out.
An MD5 hash is always 128 bits (16 bytes), which md5sum prints as 32 hexadecimal characters.
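A quick illustration (a minimal sketch; $var is whatever you were hashing): the digest is always 32 hex characters, and anything shorter necessarily drops part of it.
echo "$var" | md5sum | awk '{print length($1)}'   # always prints 32
echo "$var" | md5sum | cut -c1-16                 # shorter, but keeps only the first 64 bits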
I have a string s like:
username-password-{random}
I would like to generate the SHA256 hash of the string s such that the first 2 bytes of the hash are equal to 0, like:
0000afcbd546843....
So username-password- is fixed and we can vary the {random} part. Is there any way to control the random part so that the SHA256 satisfies that condition?
I have the bash script below, but I have run it many times and it never meets the condition.
pt="0000"
counter=10000000
while [ $counter -le 20000000 ]
do
echo $counter
fibyte=$(echo -n ""username-password-"$counter" | sha256sum | cut -c1-4)
if [ "$fibyte" == "$pt" ]
then
echo ""username-password-"$counter"
echo =======================================================
break
fi
((counter++))
done
echo DONE
An important property of a cryptographic hash is that the result of the hashing operation cannot be predicted from the input. So no, unless somebody breaks SHA256, there is no way to control the output directly.
Trying different inputs until you get an output which meets your constraint is feasible here, though, because the constraint is small: a four-hex-digit (16-bit) zero prefix matches about one attempt in 2^16 = 65,536, so at 10,000 attempts per second (in reality it's certainly less, but humor me) you would expect a hit within seconds. The astronomical figures, such as needing 1.767e68 seconds, or (Wolfram Alpha to the rescue) about 4.1e50 times the estimated age of the universe, only apply when you need to match a specific value for most of the hash rather than a 16-bit prefix.
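For completeness, a minimal sketch of such a brute-force search (assuming bash and GNU coreutils; the username-password- prefix is taken from the question):
# keep trying random suffixes until the first four hex digits of the SHA-256 are 0000
while :; do
    r=$RANDOM$RANDOM$RANDOM
    h=$(printf 'username-password-%s' "$r" | sha256sum)
    if [ "${h:0:4}" = "0000" ]; then
        echo "username-password-$r  ->  $h"
        break
    fi
done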
I have a situation where I have one VERY large file that I'm using the linux "split" command to break into smaller parts. Later I use the linux "cat" command to bring the parts all back together again.
In the interim, however, I'm curious...
If I get an MD5 fingerprint of the large file before splitting it, and later get MD5 fingerprints of all the independent file parts produced by the split command, is there a way to take the independent fingerprints and somehow deduce that the sum or average (or whatever you like to call it) of the parts equals the fingerprint of the single large file?
By (very) loose example...
bigoldfile.txt MD5 = 737da789
smallfile1.txt MD5 = 23489a89
smallfile2.txt MD5 = 1238g89d
smallfile3.txt MD5 = 01234cd7
someoperator(23489a89,1238g89d,01234cd7) = 737da789 (the fingerprint of the original file)
You likely can't do that: MD5 is complex enough inside, and each step depends on the actual data processed so far as well as the "initial" hash value, so there is no operator that recombines the parts' fingerprints into the whole file's fingerprint.
You could instead generate "incremental" hashes: the hash of the first part, the hash of the first plus second part, and so on.
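For example, with the (hypothetical) file names from the question, each prefix gets its own digest, and the last one equals the digest of the original file, since concatenating the parts in order reproduces it:
md5sum smallfile1.txt
cat smallfile1.txt smallfile2.txt | md5sum
cat smallfile1.txt smallfile2.txt smallfile3.txt | md5sum   # same digest as bigoldfile.txt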
Not exactly, but the next best thing would be to do this:
cat filepart1 filepart2 | md5sum
or
cat filepart* | md5sum
Be sure to cat them back together in the correct order.
By piping the output of cat, you don't have to worry about creating a combined file that is too large.
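A full round trip might look like this (the 100M chunk size and the part_ prefix are only examples); split's default alphabetical suffixes mean the glob reassembles the parts in the right order:
md5sum bigoldfile.txt                 # fingerprint of the original
split -b 100M bigoldfile.txt part_    # produces part_aa, part_ab, ...
cat part_* | md5sum                   # should print the same digest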
I work with Bash. I want to generate a random string based on the system time. The length of the unique string must be between 10 and 30 characters. Can anybody help me?
There are many ways to do this; my favorite uses the urandom device:
burhan@sandbox:~$ tr -cd '[:alnum:]' < /dev/urandom | fold -w30 | head -n1
CCI4zgDQ0SoBfAp9k0XeuISJo9uJMt
tr (translate) makes sure that only alphanumeric characters are kept
fold wraps the stream to a 30-character width
head makes sure we get only the first line
To use the current system time (as you have this specific requirement):
burhan@sandbox:~$ date +%s | sha256sum | base64 | head -c30; echo
NDc0NGQxZDQ4MWNiNzBjY2EyNGFlOW
date +%s = this is our date based seed
We run it through a hash (sha256sum) and base64-encode the result to get a "random"-looking string
Finally we truncate it to 30 characters
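If the length itself should vary between 10 and 30 characters, a small sketch building on the /dev/urandom approach above:
len=$(( RANDOM % 21 + 10 ))                               # random length between 10 and 30
tr -cd '[:alnum:]' < /dev/urandom | head -c "$len"; echo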
Other ways (including the two I listed above) are easy to find if you simply google.
Maybe you can use uuidgen -t.
Generate a time-based UUID. This method creates a UUID based on the system clock plus the system's ethernet hardware address, if present.
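A time-based UUID is 36 characters with hyphens (32 without), so you could trim it into the 10-30 character range, for example:
uuidgen -t | tr -d '-' | cut -c1-20    # first 20 hex characters of a time-based UUID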
I recently put together a script to handle this; the output is a 33-digit md5 checksum, but you can trim it down with sed to between 10 and 30 characters.
E.g. gen_uniq_id.bsh | sed 's/\(.\{20\}\)\(.*$\)/\1/'
The script is fairly robust: it uses the current time to nanosecond precision, /dev/urandom, and mouse movement data, and allows for optionally changing the collection times for the random and mouse data.
It also has a -s option that allows an additional string argument to be incorporated, so you can seed it from anything.
https://code.google.com/p/gen-uniq-id/
This is somewhat related to the post:
Perform OR on two hash outputs of sha1sum
I have a sample set of TPM measurements, e.g. the following:
10 1ca03ef9cca98b0a04e5b01dabe1ff825ff0280a ima 0ea26e75253dc2fda7e4210980537d035e2fb9f8 boot_aggregate
10 7f36b991f8ae94141753bcb2cf78936476d82f1d ima d0eee5a3d35f0a6912b5c6e51d00a360e859a668 /init
10 8bc0209c604fd4d3b54b6089eac786a4e0cb1fbf ima cc57839b8e5c4c58612daaf6fff48abd4bac1bd7 /init
10 d30b96ced261df085c800968fe34abe5fa0e3f4d ima 1712b5017baec2d24c8165dfc1b98168cdf6aa25 ld-linux-x86-64.so.2
According to the TPM spec, also referred to in the above post, the PCR extend operation is PCR := SHA1(PCR || data), i.e. "concatenate the old value of PCR with the data, hash the concatenated string and store the hash in PCR". Also, the spec and multiple papers and presentations I have found mention that data is a hash of the software to be loaded.
However, when I do an operation like echo H(PCR)||H(data) | sha1sum, I do not obtain the correct resulting value. I.e., when calculating (using the above hashes) echo 1ca03ef9cca98b0a04e5b01dabe1ff825ff0280a0ea26e75253dc2fda7e4210980537d035e2fb9f8 | sha1sum, the resulting value is NOT 7f36b991f8ae94141753bcb2cf78936476d82f1d.
Is my understanding of the TPM_Extend operation correct? If so, why is the resulting hash different from the one in the sample measurement file?
Thanks!
To answer your very first question: your understanding of the extend operation is more or less correct. But you have two problems:
You are misinterpreting the things you have copied in here
You can't calculate hashes the way you are doing it on the shell
The log output you provided here is from Linux's IMA. According to the documentation, the first hash is the template-hash, defined as
template-hash: SHA1(filedata-hash | filename-hint)
filedata-hash: SHA1(filedata)
So for the first line: SHA1(0ea26e75253dc2fda7e4210980537d035e2fb9f8 | "boot_aggregate")
results in 1ca03ef9cca98b0a04e5b01dabe1ff825ff0280a.
Note that the filename-hint is 256 bytes long - it is zero-padded at the end.
(thumbs up for digging this out of the kernel source ;))
So to make it clear: In your log are no PCR values.
I wrote something in Ruby to verify my findings:
require 'digest/sha1'
filedata_hash = ["0ea26e75253dc2fda7e4210980537d035e2fb9f8"].pack('H*')
filename_hint = "boot_aggregate".ljust(256, "\x00")
puts Digest::SHA1.hexdigest(filedata_hash + filename_hint)
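The same computation can be done on the shell (a sketch assuming xxd is available to turn the hex digest back into raw bytes; 242 zero bytes pad the 14-character name out to 256):
( printf '0ea26e75253dc2fda7e4210980537d035e2fb9f8' | xxd -r -p
  printf 'boot_aggregate'
  head -c 242 /dev/zero ) | sha1sum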
Now to your commands:
The way you are using them here, you are interpreting the hashes as ASCII strings.
Also note that echo will add an additional newline character to the output.
The character sequence 1ca03ef9cca98b0a04e5b01dabe1ff825ff0280a is the hexadecimal
encoding of 160 bits of binary data - a SHA1 hash value. So basically you are right:
you have to concatenate the two binary values and calculate the SHA1 of the resulting
320 bits of data.
So the correct command for the command line would be something like
printf "\x1c\xa0\x3e\xf9\xcc\xa9\x8b\x0a\x04\xe5\xb0\x1d\xab\xe1\xff\x82\x5f\xf0\x28\x0a\x0e\xa2\x6e\x75\x25\x3d\xc2\xfd\xa7\xe4\x21\x09\x80\x53\x7d\x03\x5e\x2f\xb9\xf8" | sha1sum
The \xXX in the printf string will convert the hex code XX into one byte of
binary output.
This will result in the output of d14f958b2804cc930f2f5226494bd60ee5174cfa,
and that's fine.
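If typing all the \xXX escapes by hand is too error-prone, you can concatenate the two hex strings and let xxd convert them to binary instead (assuming xxd is installed):
printf '%s%s' \
    1ca03ef9cca98b0a04e5b01dabe1ff825ff0280a \
    0ea26e75253dc2fda7e4210980537d035e2fb9f8 | xxd -r -p | sha1sum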
I've started some work which requires quality random bytes, such as 32 at a time for an initialisation vector for certain cryptographic applications. My issue is that this may be called upon multiple times simultaneously, and I cannot afford to have /dev/random block while it waits for more entropy to be collected.
I could use it to seed other algorithms, which is for example what /dev/urandom may do. However, I do not trust what I cannot understand: I do not have any readily available resource on its method, nor do I know whether it stays the same across kernel versions; I would prefer a well-defined method of some sort.
Are you aware of any methods you can think of over standard PRNGs that would be suited enough to use for (simultaneous) key generation and alike?
Would certain ciphers such as RC4 with a large seed be sufficient to generate random output? (I've seen a /dev/frandom implementation that uses this, however am not entirely sure of it.)
If it means anything, I am on a headless Debian server, hence the lack of entropy gathering.
The response is simple: use /dev/urandom, not /dev/random. /dev/urandom is cryptographically secure and will not block. The "superiority" of /dev/random over /dev/urandom exists only in a specific theoretical setting which makes no sense if the random bytes are to be used with just about any "normal" cryptographic algorithm, such as encryption or signatures.
See this for more details.
(Trust me, I am a cryptographer.)
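For example, pulling 32 bytes for an IV is a one-liner (two equivalent variants; the first assumes xxd is installed):
head -c 32 /dev/urandom | xxd -p -c 64    # 32 random bytes, hex-encoded
openssl rand -hex 32                      # same idea via OpenSSL's CSPRNG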
Consider using a hardware random number generator, for example the Entropy Key or Whirlygig. Using /dev/urandom instead will avoid blocking, but may (depending on your level of paranoia) degrade security: you'll output more random bits than you have input entropy, so in theory the output is predictable. This isn't a problem if you're just using it for IVs, however.
On a modern CPU with AES hardware acceleration, you can easily reach more than 1 GiB/s of random data by encrypting a stream of zeros using a random password (from /dev/urandom), as shown by another answer on Serverfault. Note that the random password is passed via process substitution, so it doesn't show up in the process list.
On my machine, this approach is roughly 100 times faster than /dev/urandom:
$ openssl enc -aes-256-ctr -pass file:<(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64) -nosalt < /dev/zero | pv > /dev/null
11.2GiB 0:00:09 [1.23GiB/s] [ <=> ]
$
$ # Let's look at /dev/urandom for comparison:
$ pv < /dev/urandom > /dev/null
48MiB 0:00:04 [12.4MiB/s] [ <=> ]
If you put this in a shell script, you can easily pipe it into other processes:
$ cat ~/.bin/fast_random
#!/bin/bash
openssl enc -aes-256-ctr \
-pass file:<(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64) \
-nosalt < /dev/zero
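Example usage (the fast_random name and location are just what this answer chose):
~/.bin/fast_random | head -c 32 > iv.bin    # grab 32 random bytes for an IV
~/.bin/fast_random | pv > /dev/null         # check the throughput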