Java Card: can these operations be implemented? - security

I'm new to smart cards and Java Card. I'm planning to implement a variation of the ElGamal key generation algorithm. It's not easy to find information, so: is it possible to compute these steps on a Java Card?
Find smallest prime number greater than a number x (about 2048 bit)
Determine if a number g is a primitive root mod p
Modular exponentation, arithmetic on big numbers (about 2048 bit)
I know that the RSA key generation is possible on a Smart Card, but are the individual steps of the generation (like finding a prime number) also possible? If not, are there other kinds of security tokens that can do this? I'm planning to use the NXP J3D081 Card.
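For reference, all three steps are ordinary big-integer operations. The sketch below is desktop Java (which, unlike Java Card, has `java.math.BigInteger`) and only shows *what* each step computes, not whether it is feasible on-card. Step 3 is simply `BigInteger.modPow`. Note the primitive-root test assumes you can factor p-1; real 2048-bit ElGamal parameters normally use a safe prime p = 2q + 1 precisely so that p-1 factors trivially as 2 * q.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class ElGamalSteps {

    // Step 1: smallest (probable) prime greater than x, via Miller-Rabin.
    static BigInteger nextPrime(BigInteger x) {
        return x.nextProbablePrime();
    }

    // Trial-division factoring, adequate for this demo only; for 2048-bit
    // parameters you would construct p so the factorization of p-1 is known.
    static List<BigInteger> primeFactors(BigInteger n) {
        List<BigInteger> fs = new ArrayList<>();
        BigInteger d = BigInteger.TWO;
        while (d.multiply(d).compareTo(n) <= 0) {
            if (n.mod(d).signum() == 0) {
                fs.add(d);
                while (n.mod(d).signum() == 0) n = n.divide(d);
            }
            d = d.add(BigInteger.ONE);
        }
        if (n.compareTo(BigInteger.ONE) > 0) fs.add(n);
        return fs;
    }

    // Step 2: g is a primitive root mod p iff g^((p-1)/q) != 1 (mod p)
    // for every prime factor q of p-1.
    static boolean isPrimitiveRoot(BigInteger g, BigInteger p) {
        BigInteger pm1 = p.subtract(BigInteger.ONE);
        for (BigInteger q : primeFactors(pm1)) {
            if (g.modPow(pm1.divide(q), p).equals(BigInteger.ONE)) return false;
        }
        return true;
    }
}
```

On a card, none of these BigInteger conveniences exist, which is exactly the problem the answers below wrestle with.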

Probably all you have is the javacard's RSA implementation (including the CRT variant). This way you can generate some large primes (as components of CRT private key) and do some modular arithmetic (see this recent question and the RSAPrivateCrtKey class).
Your platform might have some restrictions which could complicate things a bit.
Manual implementation of anything will probably be slow (even if you had the signed 32-bit integer type supported by the card).
Disclaimer: I have never done this sort of computation, so please do verify my thoughts.
EDIT>
The OV chip 2.0 project contains a Bignat library which offers arithmetic on big numbers (download here).
EDIT2>
OpenCrypto project provides JCMathLib which implements mathematical operations with big numbers and elliptic curve points.

The ElGamal algorithm itself is not implemented on any card as far as I know. The required cryptographic primitives are not available in Java Card, and manual implementations are too slow as well.

Related

Left to right binary modular exponentiation in Javacard

I am thinking to implement left to right binary modular exponentiation in Javacard.
I know that there are libraries which can perform RSA encryption etc. but in my case I just need to perform the modular exponentiation.
The only thing I am confused about is the restriction on data types, as Java Card accepts at most the int data type, but in my case the numbers could also need to be in double.
Is it still possible to implement this algorithm using the Java Card API for big numbers?
Modular exponentiation in general can be used through raw RSA (RSA without padding) or Diffie-Hellman calculations on a Java Card. That way the co-processor - which is generally present on high-end Java Card implementations - can be used directly. A hardware-assisted Montgomery calculation in the cryptographic co-processor will outperform any bespoke calculation by a very large margin. Performing calculations on very large numbers may not even be possible on a low-end processor due to efficiency issues.
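The raw-RSA trick can be demonstrated on a desktop JVM (the on-card analogue would use `Cipher.ALG_RSA_NOPAD` from `javacardx.crypto`): encrypting a value, left-padded to the modulus length, with an unpadded RSA public key computes base^e mod n inside the RSA engine. A minimal sketch:

```java
import java.math.BigInteger;
import java.security.interfaces.RSAPublicKey;
import javax.crypto.Cipher;

public class RawRsaModExp {

    // Computes base^e mod n by handing the value to unpadded RSA encryption;
    // on a card this is what lets the Montgomery co-processor do the work
    // instead of the application CPU. Requires 0 <= base < modulus.
    static BigInteger modExpViaRsa(BigInteger base, RSAPublicKey pub) throws Exception {
        Cipher raw = Cipher.getInstance("RSA/ECB/NoPadding");
        raw.init(Cipher.ENCRYPT_MODE, pub);
        int k = (pub.getModulus().bitLength() + 7) / 8;
        byte[] in = new byte[k];                 // left-pad to modulus length
        byte[] mag = base.toByteArray();         // may carry a leading zero byte
        int len = Math.min(mag.length, k);
        System.arraycopy(mag, mag.length - len, in, k - len, len);
        return new BigInteger(1, raw.doFinal(in));
    }
}
```

The limitation is that the exponent and modulus must be loadable as an RSA public key, which is exactly where the platform restrictions mentioned below can bite.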
Usually int is not available in Java Card implementations - if only because the whole Java Card API doesn't use int anywhere. This goes double for double, as the processor is extremely unlikely to contain a floating-point unit (FPU). So generally you're stuck with (signed) short values. Of course you can perform any kind of calculation using short - see my answer here - but it won't be pretty nor fast.
In the end, the Java Card subset of Java is easily a Turing-complete machine. So yes, anything is possible until you run out of memory or - indeed - time.
Note that security measures may make some tricks such as raw RSA impossible to use for generic modular arithmetic. I would recommend trying DH first and digging deep into the manuals to find out what the requirements of your particular platform may be.
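For completeness, the left-to-right binary (square-and-multiply) algorithm the question names is itself tiny; no floating point is involved anywhere, since modular exponentiation is pure integer arithmetic, so double never enters the picture. The sketch below uses desktop Java's BigInteger purely to make the control flow concrete; on-card each operation would become a routine over short arrays or an offload to the crypto engine:

```java
import java.math.BigInteger;

public class LtrModExp {

    // Left-to-right binary modular exponentiation: scan the exponent from
    // its most significant bit down to bit 0, squaring at every step and
    // multiplying by the base whenever the current bit is 1.
    static BigInteger modPow(BigInteger base, BigInteger exp, BigInteger mod) {
        BigInteger r = BigInteger.ONE;
        base = base.mod(mod);
        for (int i = exp.bitLength() - 1; i >= 0; i--) {
            r = r.multiply(r).mod(mod);           // square
            if (exp.testBit(i)) {
                r = r.multiply(base).mod(mod);    // conditional multiply
            }
        }
        return r;
    }
}
```

Note that this naive branch-on-bit structure is also a textbook timing/SPA side-channel, which is one more reason to prefer the hardware engine on a card.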

JavaCard - pure software implementation of ECC over GF(2^n)

I have smartcards by NXP that support ECC over GF(p) and that do not support ECC over GF(2^n).
In my project I need to use this particular type of smartcard (thousands of instances are used already). However, I need to add verification of EC signature over sect193r1, which is a curve over GF(2^n).
Performance is not an issue for me. It can take some time. Signature verification does not involve any private keys, so the security and key management are not issues, either. Unfortunately, I have to verify the signature inside my smartcard, not in the device equipped with smartcard reader.
Is there any solution? Is there any existing source code of a pure software JavaCard implementation of EC cryptography over GF(2^n)?
Smart cards that are able to perform asymmetric cryptography always do this using a co-processor (that usually contains a Montgomery multiplier). Most smart cards (e.g. the initial NXP SmartMX processors) still operate using an 8-bit or 16-bit CPU. Those CPUs are not designed to perform operations on large numbers. Unfortunately, Java Card doesn't provide direct support for calls to the multiplier - if that would be of use at all. Most cards (e.g., again, the SmartMX) also don't support 32-bit (Java int) operations.
So if you want to perform such calculations you will have to program it yourself, using signed 8 bit and signed 16 bit primitives. This will require a lot of work and will be very slow. Add to this the overhead required to process Java byte-code and you will have an amazing amount of sluggishness.
Just updating with some extra info in case someone is still looking for a solution.
The OpenCryptoJC lib indeed provides big numbers, EC curve primitive operations, etc., so you should be able to load your own curve and its parameters.
However, if the curve is not supported natively by the card, you have to use the lib to implement the operations on the curve yourself. That's non-trivial, though...
Alternatively, if there is a mapping between the GF(2^n) curve you want to use and some GF(p) curve, you could try to do all operations in GF(p) and then map the results back to GF(2^n). That could be easier, assuming such a mapping exists.
Disclaimer: I'm one of the lib authors. :)

Read Intel DRBG parameters

Newer Intel processors include a DRBG, which generates random numbers which you can read with the RDRAND instruction. It involves a 256-bit seed S generated from a hardware entropy source dependent on noise in a metastable oscillator. The algorithm used to arrive at the numbers is effectively AES(K,V), where K is an ephemeral key derived from half of S, and V is an IV which is derived from the other half of S. I think, anyway; this is explained much better by some people who audited it.
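As an aside, the AES(K,V) construction described above can be sketched in a few lines. This is a grossly simplified model of CTR_DRBG (NIST SP 800-90A), not Intel's actual implementation, which adds entropy conditioning, a derivation function, and continuous reseeding:

```java
import java.math.BigInteger;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class TinyCtrDrbg {
    // A 256-bit seed S is split into a 128-bit AES key K and a 128-bit
    // counter V; each output block is AES_K(V), with V incremented per block.
    private final byte[] k;   // first half of S
    private byte[] v;         // second half of S

    TinyCtrDrbg(byte[] seed256) {
        k = Arrays.copyOfRange(seed256, 0, 16);
        v = Arrays.copyOfRange(seed256, 16, 32);
    }

    byte[] nextBlock() throws Exception {
        Cipher aes = Cipher.getInstance("AES/ECB/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(k, "AES"));
        byte[] out = aes.doFinal(v);
        // increment V as a 128-bit big-endian counter
        byte[] inc = new BigInteger(1, v).add(BigInteger.ONE).toByteArray();
        v = new byte[16];
        int len = Math.min(inc.length, 16);
        System.arraycopy(inc, inc.length - len, v, 16 - len, len);
        return out;
    }
}
```

The point relevant to the question: without knowing S (hence K and V), the output of such a construction is indistinguishable from random, which is precisely why reading S back out is the crux.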
For various reasons, I would like to audit the performance of this mechanism programmatically in situ, which requires the ability to read or derive two things:
The value of S
The value of either K or V
Using this and the output of RDRAND across several iterations will provide me with the required test data to make this determination.
However, nowhere in the software developer's manual or elsewhere can I find any documented means of accomplishing either of these tasks.
Assuming that I am willing to write a Linux kernel module to accomplish this, and that I am willing to use RDMSR for it or any other means available including calls to on-die devices such as the MEI, is it possible to acquire this data?
The internal state of the DRBG is within a FIPS 140-2 compliant security boundary. You cannot access those state variables.

How bad is 3 as an RSA public exponent

I'm creating an application where I have to use RSA to encrypt some stuff using a public key. I want this encryption to be really fast. Initially, I tried a 2048 bit key with F4 (=65537) as the exponent but it is not fast enough. So now I'm considering the following 2 options:
2048 bit modulus, e=3
1024 bit modulus, e=65537
Both satisfy my performance requirements but which one provides better security? I should also note that I use the PKCS#1 padding scheme.
If you use random padding such as OAEP in PKCS#1, most (all?) of the known weaknesses from using low exponents are no longer relevant.
Also have you tried using e=17? There's no rule saying you have to choose either 3 or 65537.
Provided that you use a good padding scheme, there is no known reason why e=3 should have worse security than any other public exponent. Using a short exponent has issues if you also do not use a good padding scheme, but the problem lies more in the padding scheme than in the exponent.
The "gut feeling" of many researchers is that e=3 is not better than any other public exponent, and e=3 might turn out to be slightly weaker at some unspecified future date, although nothing points at such a weakness right now.
Key length has a much higher practical impact on security. A 768-bit RSA key has been cracked recently (this was not easy! Four years of work with big computers and bigger brains). A 1024-bit key is deemed adequate for the short term, but long-term uses (e.g. the encrypted data has high value and must still be confidential in year 2030) would mandate something bigger, e.g. 2048 bits. See this site for much information on how cryptographic strength can be estimated and has been estimated by various researchers and organizations.
If you are after very fast asymmetric encryption, you may want to investigate the Rabin-Williams encryption scheme which is faster than RSA, while providing at least the same level of security for the same output length (but there is no easy-to-use detailed standard for that scheme, contrary to RSA with PKCS#1, so you are a bit on your own here).
While there is currently no known attack against it when correct padding is used, small exponents are more likely to lead to exploits in case of implementation errors. And implementation errors are unfortunately still a threat. E.g. this is a vulnerability that was quite "popular". (Note, this is for signatures; I just want to show that even commercial software can have serious bugs.)
If you have to cut corners, then you have to consider the potential implications of your actions. I.e. choosing a small modulus or a small exponent both have their own drawbacks.
If you choose a small (1024 bit) modulus then you can't assume that your data can be kept confidential for decades.
If you choose a small exponent you might be more susceptible to implementation errors.
In the first case, you pretty much know when your secrets are in danger, since it is quite easy to follow the progress made in factoring. (This assumes, of course, that agencies that don't publish their results, e.g. the NSA, are not your enemy.)
In the second case (implementation errors), you don't know when you made a mistake. You might be safe using e=3, or you might have made a big blunder. I.e. in one case you have a rather good way to estimate your risk, and in the other case you have not.
Therefore, I'd recommend not to use e=3 at all.
I'd use more safety margin against those threats that are hard to predict, than those threats that are widely publicized.
In their book 'Practical Cryptography', Bruce Schneier and Niels Ferguson suggest using a public exponent of 3 for signatures and 5 for encryption. You should double check on the other criteria they recommend which avoid catastrophes. Section 13.4 covers this (p229ff), and discusses the not very complex requirement that given n = pq (where p and q are random primes), neither (p-1) nor (q-1) can be a multiple of 3 or 5. But still double check the book for the details.
(I believe there is a new edition of the book due out in 2010.)
To cite Don Coppersmith's 1997 paper "Small Solutions to Polynomial Equations, and Low Exponent RSA Vulnerabilities":
RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message.
While this may not be a problem if the RSA-OAEP padding scheme is used, the PKCS#1 padding scheme (which the OP is using) is vulnerable if public exponent 3 is used.
FYI, see this for a bit of history:
http://chargen.matasano.com/chargen/2006/9/18/rsa-signature-forgery-explained-with-nate-lawson-part-iv.html
If your exponent is low and m^e < modulus, you can just take the eth root of the ciphertext to decrypt.
This is from my notes on crypto from two years ago. But, in answer to your question, it would seem that option 2 is better.
Someone who is more eager to do math might be able to give you a better explanation why.
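The low-exponent failure mode described above is easy to demonstrate: with e=3, no padding, and a message short enough that m^3 stays below the modulus, "decryption" is a plain integer cube root, no private key needed. A minimal sketch (the cube-root routine is an ordinary binary search):

```java
import java.math.BigInteger;

public class CubeRootAttack {

    // Largest x with x^3 <= n, found by binary search over BigIntegers.
    static BigInteger cbrt(BigInteger n) {
        BigInteger lo = BigInteger.ZERO;
        BigInteger hi = BigInteger.ONE.shiftLeft(n.bitLength() / 3 + 1);
        while (lo.compareTo(hi) < 0) {
            BigInteger mid = lo.add(hi).add(BigInteger.ONE).shiftRight(1);
            if (mid.pow(3).compareTo(n) <= 0) lo = mid;
            else hi = mid.subtract(BigInteger.ONE);
        }
        return lo;
    }

    public static void main(String[] args) {
        // A short message "encrypted" with e = 3 and no padding: m^3 never
        // wraps a 2048-bit modulus, so the ciphertext is literally m^3.
        BigInteger m = new BigInteger("48656c6c6f", 16);   // "Hello"
        BigInteger c = m.pow(3);                           // m^3 < n: no reduction
        System.out.println(cbrt(c).equals(m));             // prints true
    }
}
```

Proper randomized padding (e.g. OAEP) makes the effective m full-length and random, which is exactly why the earlier answers say the padding scheme matters more than the exponent.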

Why do you use a random number generator/extractor?

I am dealing with some computer security issues at the school at the moment and I am interested in general programming public preferences, customs, ideas etc. If you have to use a random number generator or extractor, which one do you choose? Why do you choose it? The mathematical properties, already implemented as a package or for what reason? Do you write your own or use some package?
If computational time is no object, then you can't go wrong with Blum Blum Shub (http://en.wikipedia.org/wiki/Blum_blum_shub). Informally speaking, it's at least as secure (hard to predict) as integer factorization.
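The generator itself is only a few lines; a toy sketch (the parameters here are demonstration-sized, not secure; real use needs large secret primes p and q, both congruent to 3 mod 4):

```java
import java.math.BigInteger;

public class BlumBlumShub {
    // State update: x_{i+1} = x_i^2 mod n, where n = p*q and
    // p ≡ q ≡ 3 (mod 4); each step outputs only the least significant
    // bit of the state, which is what makes prediction as hard as
    // factoring n.
    private final BigInteger n;
    private BigInteger x;

    BlumBlumShub(BigInteger p, BigInteger q, BigInteger seed) {
        n = p.multiply(q);
        x = seed.mod(n);      // seed must be co-prime to n in real use
    }

    int nextBit() {
        x = x.modPow(BigInteger.TWO, n);
        return x.testBit(0) ? 1 : 0;
    }
}
```

The "computational time is no object" caveat is real: one modular squaring per output *bit* is orders of magnitude slower than a block-cipher-based DRBG.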
/dev/random, or equivalent on your platform.
It returns bits from an entropy pool fed by device drivers. No need to worry about mathematical properties.
If you're after a cryptographically secure PRNG, then repeated application of a secure hash to a large seed array is generally the way to go. Don't invent your own algorithm, though, go for a version of Fortuna or something else reasonably well reviewed.
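To illustrate the "repeated application of a secure hash to a seed" idea only (this is not Fortuna, which adds entropy pools, rekeying, and reseeding): output block i can be SHA-256(seed || i), with the seed kept secret.

```java
import java.security.MessageDigest;

public class HashPrng {
    // Hash-counter generator sketch: block i = SHA-256(seed || i).
    // Illustrative only; use a reviewed design such as Fortuna or
    // HMAC_DRBG for anything real.
    private final byte[] seed;
    private long counter = 0;

    HashPrng(byte[] seed) { this.seed = seed.clone(); }

    byte[] nextBlock() throws Exception {
        MessageDigest d = MessageDigest.getInstance("SHA-256");
        d.update(seed);
        // append the 64-bit counter, big-endian
        for (int i = 7; i >= 0; i--) d.update((byte) (counter >>> (8 * i)));
        counter++;
        return d.digest();
    }
}
```

The security rests entirely on the seed staying secret and having enough entropy, which is the part the reviewed designs actually get right.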
The keys for encryption of phone calls between the presidents of the USA and the USSR were said to be generated from cosmic rays. We checked it in the physics lab at our university -- their energies yield a true Gaussian distribution. ;-) So for the best encryption you should use these, because such a random sequence cannot be replayed. Unless, of course, your adversary covertly builds a particle accelerator near your random number generator.
Ah... about computers... Well, acquire a stream that comes from something physical, not computed. /dev/random is the easiest solution, but your hand-made Geiger counter attached to USB would give the best randomness ever.
For a little school project, I'd use whatever the OS provides for random number generation.
For a serious security application (e.g. COMSEC-level encryption), I use a hardware random number generator. Pure algorithms with no hardware access by definition don't produce random numbers.
HotBits.