Can't we enforce a limited number of attempts to prevent quantum computing from cracking our RSA encryption in the future? Or am I missing something?

It is often said that quantum computing could eventually be the end of modern encryption, and that data security companies will eventually need to discover and implement quantum-safe encryption. But can't a company just set limits on how many password attempts are allowed in a given time window? Couldn't this slow down a brute-force attack enough to keep data safe for several hundred years or more?
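For illustration, here is a minimal sketch of the kind of attempt limiter the question has in mind; the names and thresholds are made up, and note that it only throttles guesses that are submitted to the system enforcing the limit.

    #include <stdbool.h>
    #include <time.h>

    /* Hypothetical online attempt limiter: after MAX_ATTEMPTS failed tries
       the account is locked for LOCKOUT_SECONDS. This only slows down
       guesses that go through the server performing the check. */
    #define MAX_ATTEMPTS    5
    #define LOCKOUT_SECONDS 900

    struct attempt_state {
        int    failures;
        time_t locked_until;
    };

    static bool attempt_allowed(const struct attempt_state *s)
    {
        return time(NULL) >= s->locked_until;
    }

    static void record_failure(struct attempt_state *s)
    {
        if (++s->failures >= MAX_ATTEMPTS) {
            s->failures = 0;
            s->locked_until = time(NULL) + LOCKOUT_SECONDS;
        }
    }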

Related

Why does BitTorrent need chunk hashes?

In the torrent file, every chunk (or piece) has a SHA1 hash.
Sure, this hash is used for verification because the public network is unreliable.
In a private network, if all peers are reliable, can this hash be ignored, i.e. can the client skip chunk verification?
Are there other considerations for using the hash, e.g. network transfer errors or software bugs?
In a private network, if all peers are reliable
Hardware is never 100% reliable. At large scale you're going to see random bitflips everywhere. TCP and UDP only have weak checksums that will miss a bit flip happening in flight every now and then. Memory may not be protected by ECC. Storage might not even be protected by checksums.
So eventually some corruption will go uncaught if the data isn't verified.
Generic SHA1 software implementations are already quite fast and should be faster than most common network or storage systems. With the specialized SHA1 instructions in recent CPUs the cost of checksumming becomes even lower, assuming the software makes use of them.
So generally speaking the risk of bitrot is not worth the very small saving in CPU load. There might be exceptional situations where that is not the case, but it would be up to the operator of that specific system to measure the impact and decide whether they can accept bitrot to save a few CPU cycles.
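As a rough illustration of how cheap that verification is, here is a minimal sketch of per-piece checking as a client might do it, assuming OpenSSL's SHA1() is available; the function name is made up.

    #include <stdbool.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Verify one downloaded piece against its 20-byte SHA1 from the .torrent.
       Returns true if the piece is intact, false if it should be re-fetched. */
    static bool piece_ok(const unsigned char *piece, size_t piece_len,
                         const unsigned char expected[SHA_DIGEST_LENGTH])
    {
        unsigned char digest[SHA_DIGEST_LENGTH];

        SHA1(piece, piece_len, digest);   /* one-shot hash of the whole piece */
        return memcmp(digest, expected, SHA_DIGEST_LENGTH) == 0;
    }

One such call per piece (typically a few hundred KiB to a few MiB) is usually negligible next to the disk and network I/O for the same piece.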

Spectre with device memory

Regarding the Spectre security issues and side-channel attacks:
Both x86 and ARM provide a way to disable caching/speculative access for specific memory pages, so any side-channel attack (Spectre, Meltdown) on those memory regions should be impossible. So why are we not using this to prevent side-channel attacks by storing all sensitive information (passwords, keys, etc.) in slow but secure (?) memory regions, while placing the non-sensitive data in fast but insecure normal memory? Access to those pages would become slower by a huge factor (~100), but the kernel fixes are not cheap either. So maybe reducing the performance of only a few memory pages is faster than a slight overall decrease?
It would shift the responsibility of fixing the issues from the OS to the application developer, which would be a huge change. But hoping that the kernel will somehow fix all bugs does not seem to be a good approach either.
So my questions are:
Will the use of "device" memory-pages really prevent such attacks?
What are the downsides of it? (Besides the obvious performance issues)
How practical would be the usage?
Because our compilers / toolchains / OSes don't have support for using uncacheable memory for some variables, and for avoiding spilling copies of them to the stack. (Or temporaries calculated from them.)
Also AFAIK, you can't even allocate a page of UC memory in a user-space process on Linux even if you wanted to. That could of course be changed with a new flag for mmap and/or mprotect. Hopefully it could be designed so that running new binaries on an old system would get regular write-back memory (and thus still work, but without security advantages).
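To make that concrete, here is what such a user-space API might look like; MAP_UNCACHEABLE is hypothetical and, as said above, no such mmap() flag exists in current Linux.

    #include <stddef.h>
    #include <sys/mman.h>

    /* HYPOTHETICAL flag: no such mmap() flag exists in Linux today. */
    #define MAP_UNCACHEABLE 0x4000000

    /* Sketch: map one anonymous, uncacheable (UC) region for secrets. */
    static void *alloc_secret_page(size_t len)
    {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_UNCACHEABLE, -1, 0);
        /* On a kernel that ignores unknown flag bits this would silently
           fall back to ordinary write-back memory, losing the protection. */
        return p == MAP_FAILED ? NULL : p;
    }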
I don't think there are any denial-of-service implications to letting unprivileged user-space map WC or UC memory; you can already use NT stores and / or clflush to force memory access and compete for a larger share of system memory-controller time / resources.

Java Card PBKDF2 implementation

I am trying to implement PBKDF2 on Java Card, but the card does not support it natively. Can someone help?
PBKDF2 is a key strengthening algorithm. Although top of the line smart card processors are getting near 100 MHz by now (some 33 times the speed of my old MSX, and that's not including advances in caching, instructions and timings), it is not a good idea to perform a function such as PBKDF2 on a smart card.
The idea of PBKDF2 is that you trade CPU cycles for security of the input keying material. Unfortunately, any desktop processor core will have at least 50 times the performance of a smart card processor. So even if we do not consider parallelization, an adversary will have an advantage of at least 50 over the implementation.
Instead you could use OwnerPIN, which has a retry counter that limits the number of attempts an adversary can make. Another possibility is to use a split implementation of PBKDF2 (or PBKDF2 followed by a key-based KDF / HMAC) where only the last step is performed on the smart card.
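To illustrate the split idea, here is a sketch of the host side using OpenSSL's PKCS5_PBKDF2_HMAC(); the function name, iteration count, and output length are illustrative, and the final card step (an HMAC with a key that never leaves the card) is only described in the closing comment.

    #include <string.h>
    #include <openssl/evp.h>

    /* Host side of a split PBKDF2: the PC burns the CPU cycles, the smart
       card later applies one fast HMAC with a card-resident secret key. */
    static int derive_intermediate(const char *password,
                                   const unsigned char *salt, int salt_len,
                                   unsigned char out[32])
    {
        /* All the expensive PBKDF2 iterations run on the fast host CPU. */
        return PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                                 salt, salt_len,
                                 100000,          /* illustrative count */
                                 EVP_sha256(),
                                 32, out);        /* 256-bit intermediate */
    }

    /* The card would then compute, e.g.,
       final_key = HMAC-SHA-256(card_secret, intermediate)
       so the derived key is useless without the card. */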
(An addition to an existing answer that should be accepted)
Citing a thesis where the author implemented an optimized PBKDF2 on Java Card (emphasis mine):
Our implementation takes 136 seconds to compute an HMAC_SHA1 with 2048 iterations. Using a different hashing algorithm, such as SHA-256, did not raise the computation time significantly, only to 148 seconds. This means that just to conform to Kerberos standard, we would need 4 minutes to compute the key. Generating the key for VeraCrypt partition encryption would take more than 4 hours.
The respective implementation is here (I have never used it or verified it).
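Those figures line up with simple arithmetic: 136 s for 2048 iterations is roughly 66 ms per HMAC_SHA1 iteration on the card, so the 4096 iterations that RFC 3962 specifies as the Kerberos default come to about 4.5 minutes, and iteration counts in the hundreds of thousands (the range VeraCrypt uses) land in the hours.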

Difference between "instruction fetch" and "data read"?

I have a question regarding a paper I am reading right now, which demonstrates an attack against some tamper-resistant software that uses a self-hashing mechanism. This kind of self-hashing works because the authors assume that the executed code is the same as the hashed code, which is true except under certain manipulations of the way the processor accesses memory.
In the paper, there is the following sentence which troubles me: "A critical (implicit) assumption of both the hashing in Aucsmith’s IVK and checksum systems employing networks is that processors operate such that D(x) = I(x), where D(x) is the bit-string result of a “data read” from memory address x, and I(x) is the bit-string result of an “instruction fetch” of corresponding length from x."
How do you tell the difference between D(x) and I(x)? What is the difference between a data read and an instruction fetch?
Thanks for your help.
The difference in these operations is when they occur and where the data is stored before use. Most processors have dedicated caches for instructions. This may mean that the data is fetched from main memory twice: once into a data cache for calculating the hash and again into the instruction cache.
I cannot find it now, but a year ago I read of a means of hiding malicious code on an Intel processor by causing a cache incoherency between these two caches. The processor would execute the malicious code, but any other tool reading the same memory as mere data would see good code. Here is a means of accomplishing this on an ARM chip.
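To make the D(x) versus I(x) distinction concrete, here is a toy sketch (not strictly portable C, but it works on common Linux/x86 toolchains) that performs a "data read" of a function's own code bytes, i.e. the same addresses the CPU would use for an "instruction fetch" when the function is called.

    #include <stdio.h>
    #include <stdint.h>

    /* The function whose machine code we will read back as data. */
    static int add(int a, int b) { return a + b; }

    int main(void)
    {
        /* "Data read" D(x) of the first bytes of add(); a self-checking
           program would hash these and assume they equal what the CPU
           actually fetches and executes, i.e. that D(x) = I(x). */
        const uint8_t *code = (const uint8_t *)(void *)&add;
        unsigned sum = 0;

        for (int i = 0; i < 16; i++)      /* toy 16-byte "checksum" */
            sum += code[i];

        printf("toy checksum of add(): %u, add(2,3) = %d\n", sum, add(2, 3));
        return 0;
    }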

How does the kernel entropy pool work?

I'm using /dev/urandom to generate random data for my programs. I learned that /dev/random can be empty because, unlike /dev/urandom, it doesn't use SHA when there are not enough bytes generated. /dev/random uses "the kernel entropy pool". Apparently it relies on keyboard timings, mouse movements, and IDE timings.
But how does this really work?
And wouldn't it be possible to "feed" the entropy pool making the /dev/random output predictable?
What you are saying is spot on: yes, theoretically it is possible to feed entropy into /dev/random, but you'd need to control a lot of the kernel "noise" sources for it to be significant. You can look at the source of random.c to see where /dev/random picks up noise from. Basically, if you control a significant number of the noise sources, then you can guess what the others are contributing to the entropy pool.
Since /dev/urandom is a hash chain seeded from /dev/random, you could actually predict the next numbers if you knew the seed. If you have enough control over the entropy pool, then from the output of /dev/urandom you might be able to guess this seed, which would enable you to predict all the following numbers from /dev/urandom, but only if you keep /dev/random exhausted; otherwise /dev/urandom will be reseeded.
That being said, I haven't seen anyone actually do it, not even in a controlled environment. Of course this isn't a guarantee, but I wouldn't worry.
So I'd rather use /dev/urandom and guarantee that my program doesn't block while waiting for entropy, instead of using /dev/random and asking the user to do silly things like moving the mouse or banging on the keyboard.
I think you should read On entropy and randomness from LWN, hopefully it will calm your worries :-).
Should you still be worried, then get yourself a HRNG.
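For completeness, reading from /dev/urandom is as simple as reading any file and never blocks; a minimal sketch (error handling kept short):

    #include <stdio.h>

    /* Fill buf with n random bytes from /dev/urandom (never blocks). */
    static int get_random_bytes(unsigned char *buf, size_t n)
    {
        FILE *f = fopen("/dev/urandom", "rb");
        if (!f)
            return -1;
        size_t got = fread(buf, 1, n, f);
        fclose(f);
        return got == n ? 0 : -1;
    }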
Edit
Here is a small note on entropy:
I think the concept of entropy is generally difficult to grasp. There is an article with more information on Wikipedia. But basically, in this case, you can read entropy as randomness.
The way I see it, you have a big bag of coloured balls: the higher the entropy in this bag, the harder it is to predict the next colour drawn from it.
In this context, your entropy pool is just a bunch of random bytes where no byte can be derived from the previous one, or from any of the others. That means you have high entropy.
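To put a number on the analogy: if the bag holds four colours in equal proportion, each draw carries H = -(sum over i of p_i * log2 p_i) = log2 4 = 2 bits of entropy; if 97% of the balls are red, that drops to roughly 0.24 bits, and guessing "red" is almost always right.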
I appreciate the depth of jbr's answer.
Adding a practical update for anyone currently staring at an ipsec pki command (or something similar) blocking on an empty entropy pool:
I just installed rng-tools in another window and my pki command completed.
apt-get install rng-tools
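What rngd (from rng-tools) does, roughly, is feed bytes from a hardware source into the kernel pool and credit entropy for them via the RNDADDENTROPY ioctl; a simplified sketch (requires root, and the bytes must really come from a good source):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/random.h>   /* RNDADDENTROPY, struct rand_pool_info */

    /* Credit 'len' bytes of entropy to the kernel pool through random_fd,
       a file descriptor open on /dev/random. */
    static int add_entropy(int random_fd, const unsigned char *bytes, int len)
    {
        union {
            struct rand_pool_info info;
            unsigned char raw[sizeof(struct rand_pool_info) + 256];
        } req;

        if (len < 0 || len > 256)
            return -1;
        req.info.entropy_count = len * 8;   /* claim 8 bits per byte */
        req.info.buf_size = len;
        memcpy(req.info.buf, bytes, len);   /* entropy bytes follow the header */
        return ioctl(random_fd, RNDADDENTROPY, &req.info);
    }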
I am in the midst of reading a paper at factorable and made note of the section where it says:
"For library developers:
Default to the most secure configuration. Both OpenSSL
and Dropbear default to using /dev/urandom instead of
/dev/random, and Dropbear defaults to using a less secure
DSA signature randomness technique even though
a more secure technique is available as an option."
The authors address the tradeoff between an application hanging while it waits for /dev/random to build up entropy, gaining better security, and getting a quick but less secure result from /dev/urandom.
Some additional info:
IRQF_SAMPLE_RANDOM: this interrupt flag specifies that interrupts generated by a device should contribute to the kernel entropy pool.
Interrupts are what devices such as the mouse and keyboard send asynchronously.
