XML SignatureMethod and DigestMethod - digital-signature

Here is a code example:
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <ds:SignedInfo>
    <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
    <ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256" />
    <ds:Reference URI="">
      <ds:Transforms>
        <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
        <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
      </ds:Transforms>
      <ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256" />
      <ds:DigestValue>...</ds:DigestValue>
    </ds:Reference>
  </ds:SignedInfo>
  <ds:SignatureValue>...</ds:SignatureValue>
  <ds:KeyInfo>
    <ds:KeyName>...</ds:KeyName>
  </ds:KeyInfo>
</ds:Signature>
There is a SignatureMethod algorithm (http://www.w3.org/2001/04/xmldsig-more#rsa-sha256) and a DigestMethod algorithm (http://www.w3.org/2001/04/xmlenc#sha256).
If I am correctly informed, the SignatureMethod means that the content of the XML is first hashed (with SHA-256) and then signed with RSA.
I recently read an article about increasing the security level by switching to SHA-512.
What effect would that have on my code? Would it be slower? And what are the main arguments for definitely switching to SHA-512? Thank you.

SHA-256 already provides 128 bits of security against collisions, so there isn't all that much need to upgrade; breaking 128 bits of security is not considered feasible. The SHA-3 competition has shown us that, length-extension attacks aside, SHA-2 is still pretty secure. SHA-512 raises that to 256 bits of security, so if that is your target then it makes sense to use it.
Quantum computers could eventually halve those 128 bits to 64 bits of security using Grover's algorithm. That is currently not feasible at all, and a quantum computer of that scale could very likely also break RSA, so upgrading the hash for that reason alone doesn't seem all that useful.
SHA-512 is often faster than SHA-256 on modern computers. That sounds strange, but SHA-512 uses 64-bit operations internally while SHA-256 uses 32-bit operations. As desktop CPUs are geared towards 64-bit operation, you could speed up processing time and be more secure at the same time. Expect a performance hit when switching to 32-bit processing or, worse, 8- or 16-bit embedded CPUs, though.
There is also SHA-512/256, which is the same as SHA-512 except for the output size (and a few different initial constants). It is, however, not as widely supported as SHA-256, which makes it a poor option in most situations.
The signed data is usually much larger than the SignedInfo element, which is what the signature algorithm itself processes. So if performance is the concern, changing the DigestMethod matters most.
Finally, note that you need an RSA key of roughly 15,000 bits (NIST lists 15,360) to reach 256 bits of security. So if you want to upgrade everything to a 256-bit security level, you may want to switch to ECDSA with a 512- or 521-bit named curve instead.
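If you want to check the speed claim on your own hardware, here is a minimal sketch using Python's standard hashlib and timeit modules; the input size and iteration count are arbitrary choices, and the comment lists the XML-DSig algorithm identifiers you would swap in for SHA-512:

# Rough throughput comparison of SHA-256 vs SHA-512 on this machine.
# Input size and iteration count are arbitrary; adjust to your workload.
import hashlib
import timeit

data = b"\x00" * (1024 * 1024)  # 1 MiB of input
iterations = 100

for name in ("sha256", "sha512"):
    seconds = timeit.timeit(lambda: hashlib.new(name, data).digest(),
                            number=iterations)
    print(f"{name}: {iterations / seconds:.1f} MiB/s")

# For the XML itself, the SHA-512 algorithm identifiers are:
#   SignatureMethod: http://www.w3.org/2001/04/xmldsig-more#rsa-sha512
#   DigestMethod:    http://www.w3.org/2001/04/xmlenc#sha512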

Related

Is it a good idea to use a 16384-bit key length for OpenVPN?

I would like to know whether it is a good idea to use a 16384-bit key length for the OpenVPN CA on pfSense, and what the main differences are between that, 8192-bit, and 4096-bit keys. Which of these is best?
It depends on what computational power you want to be protected against. For most use cases, 16384 bits doesn't make any sense today; much shorter keys are secure for the foreseeable future and are far more efficient.
For example, GnuPG advises against even 4096-bit keys, stating that 2048 bits is enough, while SSL Labs requires a 4096-bit key for its maximum score.
NIST rates a 2048-bit key as equivalent to a 112-bit symmetric key (116.8 in reality, see this), which is sufficient for most applications.
Longer keys are also a lot more resource intensive, see the comparisons here. For signing operations, for example, using a 4096-bit key instead of 2048 bits reduces the signature rate to roughly a tenth.
What would have a great impact is quantum computing, but we don't have that working yet (for this application), and against such an attack none of these key lengths would likely be effective anyway.
Key length is also just one aspect: if your systems, applications, or data ever get compromised, it is very unlikely that the cause will be a 4096-bit key being used instead of a 16384-bit one.
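To get a feel for that performance difference, here is a rough benchmark sketch using the pyca/cryptography package; the key sizes, message, and iteration count are arbitrary illustrative choices, not figures from the posts above:

# Rough benchmark of RSA signing speed at different key sizes.
# Requires the pyca/cryptography package (pip install cryptography).
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"benchmark payload"
count = 50

for bits in (2048, 4096):
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    start = time.perf_counter()
    for _ in range(count):
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    elapsed = time.perf_counter() - start
    print(f"RSA-{bits}: {count / elapsed:.1f} signatures per second")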

Why does my Linux entropy have such a low upper limit?

I noticed I was getting poor performance when running cryptographic operations.
I ran cat /proc/sys/kernel/random/entropy_avail.
After some testing using watch and my own observations I can see that my entropy levels never surpass 200!
Even when I generate entropy using mouse movements etc. (when my computer is completely idle) it briefly surpasses 200 then suddenly dips back down below it for no reason.
Why is this and how do I fix it?
Perhaps the entropy-accumulating system has only about 200 bits of state, and simply cannot get more "unknown" than that. The people most concerned about having enough entropy tend to be cryptologists, and 200 bits of entropy is plenty for most (maybe all?) cryptographic applications.
You can substantially improve the available entropy with haveged. It may already be included in your distribution; CentOS/Red Hat users can install it from the EPEL repository.
Haveged was created to remedy low-entropy conditions in the Linux random device that can occur under some workloads, especially on headless servers.
Don't worry about it. 200 bits of entropy is more than enough.
Here's a quote from RFC 4086 (Randomness Requirements for Security):
3.1. Volume Required
How much unpredictability is needed? Is it possible to quantify the requirement in terms of, say, number of random bits per second?
The answer is that not very much is needed. For AES, the key can be 128 bits, and, as we show in an example in Section 8, even the highest security system is unlikely to require strong keying material of much over 200 bits.
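If you want to observe that ceiling yourself without the watch command, here is a minimal sketch that polls the same kernel counter the question refers to (path as in the question):

# Poll the kernel's entropy estimate once per second, similar to
# watching /proc/sys/kernel/random/entropy_avail. Stop with Ctrl+C.
import time

ENTROPY_PATH = "/proc/sys/kernel/random/entropy_avail"

while True:
    with open(ENTROPY_PATH) as f:
        print(f"entropy_avail: {f.read().strip()} bits")
    time.sleep(1)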

Is there a chance of reading 16 bytes of /dev/urandom data twice and getting the same result?

Working with Linux 3.2, I would like to implement a UID algorithm using /dev/urandom.
There may be a chance of reading 16 random bytes twice, and getting the same result. But is the chance small enough to be negligible?
/dev/urandom is supposed to be a random device that should look uniformly random, and in a uniformly random sequence you would expect to find repeated patterns. However, since there are 2^128 possible 16-byte sequences, this should happen with probability 2^-128, which is vanishingly small.
That said, /dev/urandom is not known to be cryptographically safe and there may be attacks that aren't in the open literature to force the behavior to degenerate (perhaps some government agency knows how to do this, for example). From the man pages:
A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.
(My emphasis) Therefore, I wouldn't rely on this if you are trying to go for cryptographic security.
In short, if you just need random values, this is probably fine. If you want to go for cryptographic security, I would not recommend doing this.
Hope this helps!
You have a 1 in 2^128 chance of reading the same data, so yes, the probability is negligible. It is roughly the same as the probability of guessing an AES-128 key by chance.
Assuming the values are perfectly random, the Birthday Paradox is what matters: collisions become likely after about 2^64 UIDs (the square root of the 2^128 space). That is, at roughly 2^64 UIDs the probability of finding a colliding pair exceeds 50%.
For most applications that should be fine.
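As an illustration of that estimate, here is a small sketch that draws one 16-byte ID from os.urandom and evaluates the standard birthday-bound approximation p = 1 - exp(-n(n-1)/(2*2^128)) for the chance of any collision among n such IDs; the sample values of n are arbitrary:

# One 128-bit identifier from the kernel CSPRNG, plus the birthday-bound
# estimate for the probability of any collision among n such identifiers.
import math
import os

uid = os.urandom(16)  # 16 random bytes = 128 bits
print("example UID:", uid.hex())

def collision_probability(n, bits=128):
    # Standard approximation: p = 1 - exp(-n*(n-1) / (2 * 2**bits)).
    # expm1 keeps precision when the probability is extremely small.
    return -math.expm1(-n * (n - 1) / (2 * 2**bits))

for n in (10**6, 10**12, 2**64):
    print(f"{n} UIDs -> collision probability about {collision_probability(n):.3e}")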

How weak are 64-bit hashes? Can I generate a hash of length X?

I am working on an application where I need to hash binary data and store the hash in a structure 64 bits long. I am looking for a cryptographic hash function. Ripemd-64 and elf-64 are some possibilities that I have found, but I can't find much data on them (e.g., whether they have been broken with less than brute force, how long they would take to break, etc.). Any links or details are most welcome.
I understand that 64 bits is going to be somewhat insecure due to the length of the hash. I may have some additional bits to play with (72-74). The problem is that I am not a cryptographer, so I have no idea how to modify a hash function to return a hash of length X. I figure that if I can use 72 bits instead of 64, I will gain a much bigger hash space. How do I change a hash function so that the length is some non-standard amount?
Any help is most welcome!
Thanks,
Erick
Yes, 64 bits isn't a whole lot for security purposes. It could be brute-forced, depending on your application. But assuming you accept that fact and still want to move forward, I don't see any problem with just truncating a normal 128- or 256-bit hash.
Meaning, just use a strong hash function from any cryptographic library you want and only use the first 64 bits of its output. A "proper" method would be to find a hash algorithm that natively outputs 64 bits, but as far as I know people have pretty much stopped making them, and it would be even harder to find an implementation available.
Having said that, I'd still urge you to look into making this data structure of yours larger.
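For illustration, here is a minimal sketch of both approaches with Python's hashlib: truncating a standard digest, and using BLAKE2b's adjustable output length (9 bytes covers the 72-bit variant mentioned in the question). This is just one way to do it, not a recommendation of a specific algorithm:

# Two ways to get a short, fixed-length hash with Python's hashlib.
import hashlib

data = b"example payload"

# 1. Truncate a standard digest: the first 8 bytes (64 bits) of SHA-256.
truncated = hashlib.sha256(data).digest()[:8]

# 2. Ask a variable-length hash for the size directly: BLAKE2b accepts
#    digest_size from 1 to 64 bytes, so 9 bytes gives a 72-bit hash.
native = hashlib.blake2b(data, digest_size=9).digest()

print("truncated SHA-256:", truncated.hex())
print("BLAKE2b-72:", native.hex())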

Is 1024-bit RSA secure?

Is 1024-bit RSA secure, or is it crackable now? Is it safe for my program to use 1024-bit RSA? I read at http://pcworld.about.com/od/privacysecurity1/Researcher-RSA-1024-bit-encry.htm that 1024-bit encryption is insecure, but I find 2048-bit slower, and I also see that various HTTPS sites (even PayPal) use 1024-bit encryption. Is 1024-bit encryption secure enough?
Last time I checked, NIST recommends 2048-bit RSA and predicts that it will remain secure until 2030. Page 67 of this PDF has the table.
Edit: They actually predict 1024-bit is OK until 2010, then 2048-bit until 2030, then 3072-bit after that. And it's NIST, not the NSA. Been too long since I did my thesis, LOL.
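For reference, here is a small sketch of the comparable-strength figures NIST publishes in SP 800-57, mapping RSA modulus size to the approximate equivalent symmetric security level:

# NIST SP 800-57 comparable-strength figures: RSA modulus size
# mapped to the approximate equivalent symmetric security level.
NIST_RSA_STRENGTH = {
    1024: 80,
    2048: 112,
    3072: 128,
    7680: 192,
    15360: 256,
}

for modulus, bits in NIST_RSA_STRENGTH.items():
    print(f"RSA-{modulus}: about {bits}-bit symmetric security")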
What are you trying to protect? If you are encrypting something that is not terribly vital, then 1024 bits may be fine, but if you are protecting something very vital, such as someone's medical or financial information, then 4096 bits would be better.
The size of the key really depends on what you are protecting and how long you expect the encryption to hold. If the information is only valid for 10 minutes, then 1024 bits works fine; for 10 years of protection, it doesn't.
So, what are you protecting?
There is no easy answer to the question "is size n secure?" because it depends on the resources of the expected attacker. This has two parts:
The resources that an attacker is willing to invest depend heavily on the situation: defeating your grandmother, a bored computer-science student, or the full secret service of some big, rich country does not involve the same attack power. It also depends on the perceived value of the protected data.
When designing the system, you want some margin of security, which means that you will make some prophecies on how computing power will evolve in the future, and this raises the difficult question of the notion of cost.
So there are several estimates which have been proposed by various researchers and government institutes. This site offers a survey of such methods, with online calculators so that you may play a bit with some of the input parameters.
The short answer is that if you want short-term security (i.e., security that is not relevant beyond, say, the year 2015) and 1024 bits are not enough for you, then your enemies must be very powerful indeed. Scarily so. To the point that you should have other, more urgent troubles on your hands.
It is necessary to define the meaning of secure to get a useful answer.
Is your house secure? Mostly we make it "good enough." For example, making it harder to break in than the neighbors is often adequate. That way the thieves spend time trying to break into next door rather than your place.
It might be secure if it requires X hours to break in and the valuable content is worth Y. Converting time to money is tricky, but if it takes a cracker 100 hours of his time to break in and the information is worth, say, $100, then your data is probably secure enough.
Nothing is going to be totally secure forever. If you're that worried about it, just use 2048-bit and sacrifice speed for better security.
Besides, as the article states:
But determining the prime numbers that make up a huge integer is nearly impossible without lots of computers and lots of time.
It all depends on whether or not you think people will actually try that hard to get at whatever information you're trying to protect.
Found a recent paper addressing exactly this question:
On the Security of 1024-bit RSA and 160-bit Elliptic Curve Cryptography, version 2.1, September 1, 2009
http://eprint.iacr.org/2009/389.pdf
It is said that 1024-bit numbers currently cannot be factored, but 1024-bit RSA (a modulus of about 310 decimal digits) is not considered secure enough. It is advisable to use RSA with 2048 bits or more if one needs long-term security. There are many well-funded research organizations working in this area, and there is a chance that they would not share everything they find. So I think we can say it is not secure at all. I mean, if one day I happened to encrypt important data, I would prefer 2048 bits or more, considering long-term security and the unknown developments in that field.
