Newer Intel processors include a DRBG that generates random numbers you can read with the RDRAND instruction. It involves a 256-bit seed S generated from a hardware entropy source dependent on noise in a metastable oscillator. The algorithm used to arrive at the numbers is effectively AES(K,V), where K is an ephemeral key derived from half of S, and V is an IV derived from the other half of S. I think, anyway; this is explained much better by some people who audited it.
For various reasons, I would like to audit the performance of this mechanism programmatically in situ, which requires the ability to read or derive two things:
The value of S
The value of either K or V
Using these values together with the output of RDRAND across several iterations would provide me with the test data required for this audit.
However, nowhere in the software developer's manual or elsewhere can I find any documented means of accomplishing either of these tasks.
Assuming that I am willing to write a Linux kernel module to accomplish this, and that I am willing to use RDMSR for it or any other means available including calls to on-die devices such as the MEI, is it possible to acquire this data?
The internal state of the DRBG is within a FIPS 140-2 compliant security boundary. You cannot access those state variables.
I am thinking of implementing left-to-right binary modular exponentiation in Java Card.
I know that there are libraries which can perform RSA encryption etc., but in my case I just need to perform the modular exponentiation.
The only thing I am confused about is the restriction on data types: Java Card accepts at most the int data type, but in my case the numbers could also be doubles.
Is it still possible to implement this algorithm for big numbers using the Java Card API?
Modular exponentiation in general can be performed through raw RSA (RSA without padding) or Diffie-Hellman calculations on a Java Card. That way the co-processor - which is generally present on high-end Java Card implementations - can be used directly. A hardware-assisted Montgomery calculation in the cryptographic coprocessor will outperform any calculation you implement yourself by a very large margin. Performing calculations on very large numbers may not even be feasible on a low-end processor due to efficiency issues.
Usually int is not available in Java Card implementations - if only because the whole Java Card API doesn't use int anywhere. This goes double for double, as the processor is extremely unlikely to contain a floating-point unit (FPU). So generally you're stuck with (signed) short values. Of course you can perform any kind of calculation using short - see my answer here - but it won't be pretty or fast.
In the end, the Java Card subset of Java is easily a Turing-complete machine. So yes, anything is possible until you run out of memory or - indeed - time.
Note that security measures may make some tricks such as raw RSA impossible to use for generic modular arithmetic. I would recommend trying DH first and digging deep into the manuals to find out what the requirements of your particular platform may be.
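For what it's worth, here is a minimal sketch of the raw-RSA trick described above (my own illustration, not tested on any particular card). It assumes the platform exposes ALG_RSA_NOPAD and 2048-bit RSA keys, and that setExponent accepts an arbitrary exponent; the base must occupy the full modulus length (left-padded with zeros):

    // Hypothetical helper: computes base^exp mod n on the crypto coprocessor
    // by "encrypting" with a raw (no padding) RSA public key.
    import javacard.security.KeyBuilder;
    import javacard.security.RSAPublicKey;
    import javacardx.crypto.Cipher;

    public class ModExpHelper {
        private RSAPublicKey pub;
        private Cipher rawRsa;

        public ModExpHelper() {
            pub = (RSAPublicKey) KeyBuilder.buildKey(
                    KeyBuilder.TYPE_RSA_PUBLIC, KeyBuilder.LENGTH_RSA_2048, false);
            rawRsa = Cipher.getInstance(Cipher.ALG_RSA_NOPAD, false);
        }

        // modulus and base: 256-byte big-endian arrays; exp: big-endian array of expLen bytes.
        // The 256-byte result is written to out at outOff; returns the output length.
        public short modExp(byte[] modulus, byte[] exp, short expLen,
                            byte[] base, byte[] out, short outOff) {
            pub.setModulus(modulus, (short) 0, (short) 256);
            pub.setExponent(exp, (short) 0, expLen);
            rawRsa.init(pub, Cipher.MODE_ENCRYPT);
            return rawRsa.doFinal(base, (short) 0, (short) 256, out, outOff);
        }
    }

Whether an arbitrary exponent is accepted, and whether NOPAD is exposed at all, varies per platform - which is exactly the caveat above.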
I'm new to Smart Cards and Java Card. I'm planning to implement a variation of the ElGamal key generation algorithm. It's not easy to find information, so is it possible to calculate these steps on a Java Card?
Find the smallest prime number greater than a given number x (about 2048 bits)
Determine if a number g is a primitive root mod p
Modular exponentiation and arithmetic on big numbers (about 2048 bits)
I know that the RSA key generation is possible on a Smart Card, but are the individual steps of the generation (like finding a prime number) also possible? If not, are there other kinds of security tokens that can do this? I'm planning to use the NXP J3D081 Card.
Probably all you have is the Java Card's RSA implementation (including the CRT variant). This way you can generate some large primes (as components of the CRT private key) and do some modular arithmetic (see this recent question and the RSAPrivateCrtKey class).
Your platform might have some restrictions which could complicate things a bit.
Manual implementation of anything will probably be slow (even if you had the signed 32-bit integer type supported by the card).
Disclaimer: I have never done this sort of computation, so please do verify my thoughts.
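As an illustration of the idea above (a sketch only, with my own naming; in real code the KeyPair should be allocated once in the applet constructor), one way to obtain a fresh large prime is to let the key generator do the work and read back the P component:

    // Sketch: obtain a 1024-bit prime as the P component of a freshly
    // generated 2048-bit RSA CRT key pair.
    import javacard.security.KeyBuilder;
    import javacard.security.KeyPair;
    import javacard.security.RSAPrivateCrtKey;

    public class PrimeSource {
        // Writes a 1024-bit prime into out at outOff and returns its length in bytes.
        public static short randomPrime(byte[] out, short outOff) {
            KeyPair kp = new KeyPair(KeyPair.ALG_RSA_CRT, KeyBuilder.LENGTH_RSA_2048);
            kp.genKeyPair();
            RSAPrivateCrtKey priv = (RSAPrivateCrtKey) kp.getPrivate();
            return priv.getP(out, outOff);
        }
    }

Note that this gives you a random large prime rather than the smallest prime greater than a chosen x, so it only covers the answer's suggestion of using key generation as a prime source.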
EDIT>
The OV chip 2.0 project contains a Bignat library which offers arithmetic on big numbers (download here).
EDIT2>
The OpenCrypto project provides JCMathLib, which implements mathematical operations on big numbers and elliptic curve points.
The ElGamal algorithm itself is not implemented on any card as far as I know. The required cryptographic primitives are not available in Java Card. Manual implementations are too slow as well.
I have smartcards by NXP that support ECC over GF(p) and that do not support ECC over GF(2^n).
In my project I need to use this particular type of smartcard (thousands of instances are used already). However, I need to add verification of EC signature over sect193r1, which is a curve over GF(2^n).
Performance is not an issue for me. It can take some time. Signature verification does not involve any private keys, so the security and key management are not issues, either. Unfortunately, I have to verify the signature inside my smartcard, not in the device equipped with smartcard reader.
Is there any solution? Is there any existing source code of a pure software JavaCard implementation of EC cryptography over GF(2^n)?
Smart cards that are able to perform asymmetric cryptography always do this using a co-processor (which usually contains a Montgomery multiplier). Most smart cards (e.g. the initial NXP SmartMX processors) still operate using an 8-bit or 16-bit CPU. Those CPUs are not designed to perform operations on large numbers. Unfortunately Java Card doesn't provide direct support for calls to the multiplier - if that would be of use at all. Most cards (e.g. again the SmartMX) also don't support 32-bit (Java int) operations.
So if you want to perform such calculations you will have to program them yourself, using signed 8-bit and signed 16-bit primitives. This will require a lot of work and will be very slow. Add to this the overhead required to process Java bytecode and you will have an amazing amount of sluggishness.
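To give an idea of what that looks like, here is a small illustrative fragment (my own sketch, not vendor code) that adds two equal-length big-endian byte-array numbers using only short arithmetic and a manual carry; every big-number primitive would have to be built this way:

    // Adds b into a (both big-endian, len bytes), propagating the carry by hand.
    // The final carry is discarded here; a real implementation would report overflow.
    public static void add(byte[] a, byte[] b, short len) {
        short carry = 0;
        for (short i = (short) (len - 1); i >= 0; i--) {
            short sum = (short) ((short) (a[i] & 0xFF) + (short) (b[i] & 0xFF) + carry);
            a[i] = (byte) sum;
            carry = (short) ((sum >> 8) & 1);
        }
    }

Multiplication, reduction and inversion follow the same pattern, only with far more bookkeeping, which is why this route ends up so slow.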
Just updating with some extra info in case someone is still looking for a solution.
The OpenCryptoJC lib indeed provides BigNumbers, EC curve primitive operations etc. So you should be able to load your own curve and its parameters.
However, if the curve is not supported natively by the card, you have to use the lib to implement the operations on the curve yourself. That's non-trivial though...
Alternatively, if there is any mapping between the GF(2^n) curve you want to use and a GF(p) curve, you could try to do all operations in GF(p) and then map the results back to GF(2^n). That could be easier, assuming such a mapping exists.
Disclaimer: I'm one of the lib authors. :)
For instance, could /dev/random be used to generate a one-time pad key?
Also, what are its sources and how could it be used to generate a random number between x and y?
Strictly speaking, /dev/random is not really completely random. /dev/random feeds on hardware sources which are assumed to be unpredictable in some way; then it mixes such data using functions (hash functions, mostly) which are also assumed to be one-way. The "true randomness" of /dev/random is thus relative to the inherent security of the mixing functions, security which is no more guaranteed than that of any other cryptographic primitive, in particular the PRNG hidden in /dev/urandom.
The difference between /dev/random and /dev/urandom is that the former will try to maintain an estimate (which means "a wild guess") of how much entropy it has gathered, and will refuse to output more bits than that. On the other hand, /dev/urandom will happily produce megabytes of data from the entropy it has.
The security difference between the two approaches is meaningless unless you assume that "classical" cryptographic algorithms can be broken, and you use one of the very few information-theoretic algorithms (e.g. OTP or Shamir's secret sharing); and, even then, /dev/random may be considered more secure than /dev/urandom only if the mixing functions are still considered to be one-way, which is not compatible with the idea that a classical cryptographic algorithm can be broken. So, in practice and even in theory, there is no difference whatsoever. You can use the output of /dev/urandom for an OTP and it will not be broken because of any structure internal to /dev/urandom -- actual management of the obtained stream will be the weak point (especially long-term storage). On the other hand, /dev/random has very real practical issues, namely that it can block at untimely instants. It is really irksome when an automated OS install blocks (for hours!) because SSH server key generation insists on using /dev/random and needlessly stalls for entropy.
There are many applications which read /dev/random as a kind of ritual, as if it were "better" than /dev/urandom, probably on a karmic level. This is plain wrong, especially when the randomness is to be used with classical cryptographic algorithms (e.g. to generate an SSH server public key). Do not do that. Instead, use /dev/urandom and you will live longer and happier. Even for a one-time pad.
(Just for completeness, there is a quirk with /dev/urandom as implemented on Linux: it will never block, even if it has not gathered any entropy at all since previous boot. Distributions avoid this problem by creating a "random seed" at installation time, with /dev/random, and using that seed at each boot to initialize the PRNG used by /dev/urandom; a new random seed is regenerated immediately, for next boot. This ensures that /dev/urandom always works over a sufficiently big internal seed. The FreeBSD implementation of /dev/urandom will block until a given entropy threshold is reached, which is safer.)
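In code, using /dev/urandom really is as simple as reading the device file; here is a minimal Java sketch (class and method names are mine) for a Linux host:

    // Reads `count` random bytes directly from /dev/urandom.
    import java.io.FileInputStream;
    import java.io.IOException;

    public class UrandomReader {
        public static byte[] readRandomBytes(int count) throws IOException {
            byte[] buf = new byte[count];
            try (FileInputStream in = new FileInputStream("/dev/urandom")) {
                int off = 0;
                while (off < count) {
                    int n = in.read(buf, off, count - off);
                    if (n < 0) throw new IOException("unexpected EOF on /dev/urandom");
                    off += n;
                }
            }
            return buf;
        }
    }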
The only thing in this universe that can be considered truly random is something based on quantum effects. A common example is radioactive decay. For certain atoms you can only be sure about the half-life, but you can't be sure which nucleus will break up next.
As for /dev/random - it depends on the implementation. On Linux it uses the following entropy sources:
"The Linux kernel generates entropy from keyboard timings, mouse movements, and IDE timings and makes the random character data available to other operating system processes through the special files /dev/random and /dev/urandom." (Wikipedia)
It means that it is better than algorithmic random generators, but it is not perfect either. The entropy may not be uniformly distributed and can be biased.
That was the philosophy. In practice, on Linux, /dev/random is random enough for the vast majority of tasks.
There are implementations of random generators that have more entropy sources, including noise on audio inputs, CPU temperature sensors, etc. Even so, they are not truly random.
There is an interesting site where you can get genuine random numbers, generated by radioactive decay.
/dev/random will block if there's not enough random data in the entropy pool whereas /dev/urandom will not. Instead, /dev/urandom will fall back to a PRNG (kernel docs). From the same docs:
The random number generator [entropy pool] gathers environmental noise from device drivers and other sources into an entropy pool.
So /dev/random is not algorithmic, like a PRNG, but it may not be "truly random" either. Mouse movements and keystroke timings tend to follow patterns and can be used for exploits but you'll have to weigh the risk against your use case.
To get a random number between x and y using /dev/random, assuming you're happy with a 32-bit integer, you could have a look at the way the Java java.util.Random class does it (nextInt()), substituting in appropriate code to read from /dev/random for the nextBytes() method.
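Here is a rough sketch of that approach in Java (the class and method names are mine): it reads 32-bit values from /dev/random and uses rejection sampling so the result in [x, y] is unbiased rather than taking a plain modulo:

    // Returns a uniformly distributed int in [x, y] using bytes from /dev/random.
    // Assumes y >= x.
    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class DevRandomRange {
        public static int randomBetween(int x, int y) throws IOException {
            long range = (long) y - x + 1;                   // number of possible values
            long limit = (1L << 32) - ((1L << 32) % range);  // rejection threshold
            try (DataInputStream in =
                     new DataInputStream(new FileInputStream("/dev/random"))) {
                while (true) {
                    long v = in.readInt() & 0xFFFFFFFFL;     // unsigned 32-bit value
                    if (v < limit) {                         // reject to avoid modulo bias
                        return (int) (x + (v % range));
                    }
                }
            }
        }
    }

Keep in mind that /dev/random may block here; substituting /dev/urandom avoids that, as discussed in the other answers.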
I am dealing with some computer security issues at school at the moment, and I am interested in the general programming public's preferences, customs, ideas, etc. If you have to use a random number generator or extractor, which one do you choose? Why do you choose it: the mathematical properties, the fact that it's already implemented as a package, or some other reason? Do you write your own or use a package?
If computational time is no object, then you can't go wrong with Blum Blum Shub (http://en.wikipedia.org/wiki/Blum_blum_shub). Informally speaking, predicting its output is at least as hard as integer factorization.
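As a toy illustration (any real parameters would be far larger than anything shown here), the core of Blum Blum Shub is just repeated squaring modulo M = p*q with p ≡ q ≡ 3 (mod 4), emitting the low bit of each state:

    // Minimal Blum Blum Shub sketch; p, q must be primes congruent to 3 mod 4,
    // and the seed must be co-prime to p*q and not 0 or 1.
    import java.math.BigInteger;

    public class BlumBlumShub {
        private BigInteger m;
        private BigInteger state;

        public BlumBlumShub(BigInteger p, BigInteger q, BigInteger seed) {
            m = p.multiply(q);
            state = seed.mod(m);
        }

        public int nextBit() {
            state = state.multiply(state).mod(m);  // x_{i+1} = x_i^2 mod M
            return state.testBit(0) ? 1 : 0;       // output the least significant bit
        }
    }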
/dev/random, or the equivalent on your platform.
It returns bits from an entropy pool fed by device drivers. No need to worry about mathematical properties.
If you're after a cryptographically secure PRNG, then repeated application of a secure hash to a large seed array is generally the way to go. Don't invent your own algorithm, though; go for a version of Fortuna or something else reasonably well reviewed.
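For illustration only (this is a naive sketch of the general shape, not Fortuna itself): hashing a large secret seed together with a counter is the basic pattern such generators build on.

    // Derives an output stream by hashing a secret seed with an incrementing counter.
    import java.nio.ByteBuffer;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class HashPrng {
        private byte[] seed;     // large, secret seed material
        private long counter = 0;

        public HashPrng(byte[] seed) {
            this.seed = seed.clone();
        }

        public byte[] nextBlock() throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(seed);
            md.update(ByteBuffer.allocate(8).putLong(counter++).array());
            return md.digest();  // 32 fresh pseudo-random bytes
        }
    }

A reviewed design such as Fortuna adds entropy pooling, reseeding and forward secrecy on top of this idea, which is why you should prefer it over a home-grown construction like the one above.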
The keys for encryption of phone calls between the presidents of the USA and USSR were said to be generated from cosmic rays. We checked it in the physics lab at our university -- their energies yield a true Gaussian distribution. ;-) So for the best encryption you should use these, because such a random sequence cannot be replayed. Unless, of course, your adversary covertly builds a particle accelerator near your random number generator.
Ah... about computers... Well, acquire a stream that comes from something physical, not computed. /dev/random is the easiest solution, but your hand-made Geiger counter attached via USB would give the best randomness ever.
For a little school project, I'd use whatever the OS provides for random number generation.
For a serious security application (e.g. COMSEC-level encryption), I use a hardware random number generator. Pure algorithms with no hardware access by definition don't produce random numbers.
HotBits.