Are FIPS certified algorithms in .NET Framework backward compatible?

Our software is based on .NET Framework 4.5. We are making our application FIPS compliant, so we are replacing the older classes with FIPS-compliant ones:
MD5CryptoServiceProvider -> SHA1CryptoServiceProvider
RijndaelManaged -> AesCryptoServiceProvider
But we have certain data in our database that was encrypted with the older algorithms. How do I retrieve it now that we are using the newer algorithms? Are the newer algorithms backward compatible?
Thanks

we have certain data in our database that was encrypted with the older algorithms. How do I retrieve it now that we are using the newer algorithms
Upsize the data. Rather than storing just MD5(data), add an extra column to the table called upsized. If upsized = false, then calculate SHA256(MD5(data)) and store it. Finally, set upsized = true.
There are some small/trivial technical defects in the construction, but it gets you past the C&A requirements of FIPS 140-2 and the SP 800-53 audit.
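A minimal sketch of the upsizing step, shown here in Java purely for illustration (the question is about .NET, but the idea is language-agnostic); the method name and the assumption that the stored MD5 value is available as a byte array are mine, not the original answer's:

import java.security.MessageDigest;

// Hedged sketch: "upsize" a stored MD5 value by wrapping it in SHA-256.
// storedMd5 is the 16-byte MD5 value already sitting in the database row.
static byte[] upsize(byte[] storedMd5) throws Exception {
    MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
    return sha256.digest(storedMd5); // store this 32-byte value, then set upsized = true
}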

Related

How to implement SHA256 on Java Card 2.2.1?

I'm trying to implement RSA signing on Java Card version 2.2.1. I have implemented RSA 2048 and tested this successfully, but when trying to hash using the MessageDigest class, I'm unable to get the correct result.
Here is my code:
MessageDigest md = MessageDigest.getInstance(MessageDigest.ALG_SHA, false);
md.reset();
md.doFinal(toSign, bOffset, bLength, tempBuffer, (short) 0);
But I do not get the correct answer, neither for ALG_SHA nor for ALG_MD5.
I'm wondering what the problem is. All samples I have seen use the same methods and parameters.
The Java Card 2.2.1 specification does not support SHA-256 (or any of the other SHA-2 message digests). It only supports SHA-1 and MD5, two completely different cryptographic hash functions. Consequently, neither MessageDigest.ALG_SHA nor MessageDigest.ALG_MD5 will get you an instance of MessageDigest that can calculate the SHA-256 hash function.
Only Java Card 2.2.2 and later supports the SHA-2 functions. In that specification, the MessageDigest class also supports:
SHA-256: MessageDigest.ALG_SHA_256,
SHA-384: MessageDigest.ALG_SHA_384, and
SHA-512: MessageDigest.ALG_SHA_512.
So if you are lucky and your card actually supports Java Card 2.2.2, you can use those constants to obtain a proper MessageDigest object.
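For example, a hedged sketch reusing the names from the question's snippet, assuming a 2.2.2-level card:

// Hedged sketch for a Java Card 2.2.2+ card: request SHA-256 instead of ALG_SHA.
MessageDigest md256 = MessageDigest.getInstance(MessageDigest.ALG_SHA_256, false);
md256.reset();
short digestLen = md256.doFinal(toSign, bOffset, bLength, tempBuffer, (short) 0); // writes a 32-byte digest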
If your card does not support Java Card 2.2.2 then, of course, you should not¹ be able to use those constants. You could still check the manual of your card to see whether it supports some proprietary implementation of MessageDigest that also supports SHA-256, though I highly doubt that.
1) Thanks to vlp for pointing out that there are actually Java Card 2.2.1 (or below) cards that seemingly support the constants for SHA-2 algorithms introduced in the Java Card 2.2.2 API. This might just be caused by other implementation bugs, and nobody seems to have tested whether these algorithms actually work on those cards. See the JCAlgTest list for findings on that.

Java Card: can these operations be implemented?

I'm new to smart cards and Java Card. I'm planning to implement a variation of the ElGamal key generation algorithm. It's not easy to find information, so: is it possible to perform these steps on a Java Card?
Find the smallest prime number greater than a number x (about 2048 bits)
Determine whether a number g is a primitive root mod p
Modular exponentiation and arithmetic on big numbers (about 2048 bits)
I know that the RSA key generation is possible on a Smart Card, but are the individual steps of the generation (like finding a prime number) also possible? If not, are there other kinds of security tokens that can do this? I'm planning to use the NXP J3D081 Card.
Probably all you have is the card's RSA implementation (including the CRT variant). This way you can generate some large primes (as components of the CRT private key) and do some modular arithmetic (see this recent question and the RSAPrivateCrtKey class).
Your platform might have some restrictions which could complicate things a bit.
Manual implementation of anything will probably be slow (even if the card supported the signed 32-bit integer type).
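A hedged illustration of the CRT-prime idea (a sketch only; buffer is an assumed transient byte array, and note that each CRT prime is half the modulus size, so a 2048-bit key yields two roughly 1024-bit primes):

// Hedged sketch: harvest large primes produced by the card's RSA key generation
// (javacard.security.KeyPair, KeyBuilder and RSAPrivateCrtKey).
KeyPair kp = new KeyPair(KeyPair.ALG_RSA_CRT, KeyBuilder.LENGTH_RSA_2048);
kp.genKeyPair();
RSAPrivateCrtKey priv = (RSAPrivateCrtKey) kp.getPrivate();
short pLen = priv.getP(buffer, (short) 0); // first prime factor, ~1024 bits
short qLen = priv.getQ(buffer, pLen);      // second prime factor, ~1024 bits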
Disclaimer: I have never done this sort of computation, so please do verify my thoughts.
EDIT>
The OV chip 2.0 project contains a Bignat library which offers arithmetic on big numbers (download here).
EDIT2>
The OpenCrypto project provides JCMathLib, which implements mathematical operations on big numbers and elliptic curve points.
The ElGamal algorithm itself is not implemented on any card as far as I know. The required cryptographic primitives are not available in Java Card, and manual implementations are too slow as well.

JavaCard - pure software implementation of ECC over GF(2^n)

I have smartcards by NXP that support ECC over GF(p) and that do not support ECC over GF(2^n).
In my project I need to use this particular type of smartcard (thousands of instances are used already). However, I need to add verification of EC signature over sect193r1, which is a curve over GF(2^n).
Performance is not an issue for me. It can take some time. Signature verification does not involve any private keys, so the security and key management are not issues, either. Unfortunately, I have to verify the signature inside my smartcard, not in the device equipped with smartcard reader.
Is there any solution? Is there any existing source code of a pure software JavaCard implementation of EC cryptography over GF(2^n)?
Smart cards that are able to perform asymmetric cryptography always do this using a co-processor (which usually contains a Montgomery multiplier). Most smart cards (e.g. the initial NXP SmartMX processors) still operate using an 8-bit or 16-bit CPU. Those CPUs are not designed to perform operations on large numbers. Unfortunately, Java Card doesn't provide direct support for calls to the multiplier - if that would be of use at all. Most cards (e.g. again the SmartMX) also don't support 32-bit (Java int) operations.
So if you want to perform such calculations you will have to program them yourself, using signed 8-bit and signed 16-bit primitives. This will require a lot of work and will be very slow. Add to this the overhead required to process Java byte-code and you will have an amazing amount of sluggishness.
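To give an idea of what "program them yourself" means, here is a hedged sketch of just the simplest building block, multi-precision addition over byte arrays using only byte/short arithmetic (the names and layout are my assumptions, not from the answer); multiplication and modular reduction are far more involved:

// Hedged sketch: add two equal-length big-endian multi-precision numbers
// using only byte/short arithmetic, as required on cards without int support.
static short add(byte[] a, byte[] b, byte[] out, short len) {
    short carry = 0;
    for (short i = (short) (len - 1); i >= 0; i--) {
        short sum = (short) ((short) (a[i] & 0xFF) + (short) (b[i] & 0xFF) + carry);
        out[i] = (byte) sum;
        carry = (short) ((sum >> 8) & 0x01);
    }
    return carry; // 1 if the result overflowed len bytes
}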
Just updating with some extra info in case someone is still looking for a solution.
The OpenCryptoJC lib indeed provides BigNumbers, EC curve primitive operations etc. So you should be able to load your own curve and its parameters.
However, if this curve is not supported natively by the card, you have to use the lib to implement the operations on the curve yourself. That's non-trivial, though...
Alternatively, if there is a mapping between the GF(2^n) curve you want to use and another curve over GF(p), you could try to do all operations in GF(p) and then map the results back to GF(2^n). That could be easier, assuming such a mapping exists.
Disclaimer: I'm one of the lib authors. :)

Sample code for public key encryption/decryption on Mac?

Where can I find some simple sample code for public key encryption and decryption on Mac OS X? I'm frustrated that Apple's "Certificate, Key, and Trust Services Programming Guide" shows how to do this stuff on iOS, but the needed APIs (SecKeyEncrypt, SecKeyDecrypt) are apparently not available on Mac OS X. There's probably a way to do it in "CryptoSample", but it doesn't look clear or simple, and the sample project is too old to open with the current version of Xcode.
The Security Framework APIs change rather frequently between Mac OS releases. The best approach depends on what version you target:
If your code only needs to run on 10.7 and above, you can use Security Transforms, a new high-level public API for cryptography transformations. The Security Transforms Programming Guide has useful (and simple!) example code:
https://developer.apple.com/library/archive/documentation/Security/Conceptual/SecTransformPG/SecurityTransformsBasics/SecurityTransformsBasics.html
You'll want to create a transform using SecEncryptTransformCreate or SecDecryptTransformCreate, set its input using SecTransformSetAttribute and execute it with SecTransformExecute.
If you need to support Mac OS 10.6 or below, you must use the low-level and rather scary CDSA APIs. CryptoSample's cdsaEncrypt is a concise example.
https://developer.apple.com/library/archive/samplecode/CryptoSample/Listings/libCdsaCrypt_libCdsaCrypt_cpp.html
You can get a CSSM_CSP_HANDLE and a CSSM_KEY from a SecKeyRef by using SecKeyGetCSPHandle and SecKeyGetCSSMKey, respectively.
To learn more about CDSA, the full specification is available from the Open Group (free, but requires registration):
https://www2.opengroup.org/ogsys/jsp/publications/PublicationDetails.jsp?publicationid=11287
Good luck!
If the private key was created exportable, you can export it in an unprotected format and use openssl directly. This puts the raw key data directly in the address space of your application, so it defeats one of the primary purposes of the Keychain. Don't do this.
Finally, you can mess around with private functions. Mac OS 10.6 and 10.7 include, but do not publicly declare, SecKeyEncrypt and SecKeyDecrypt, with the same arguments as on iOS. The quick'n'dirty solution is to simply declare and use them (weakly linked, with the usual caveats). This is probably a bad idea to do in code that you plan to distribute to others.
There's an implementation of decrypting data using the Public-Key at: https://github.com/karstenBriksoft/CSSMPublicKeyDecrypt.
The Security.framework does not have a public API for that kind of functionality, which is why CSSM needs to be used directly even though it's marked as deprecated.
To encrypt with the public key, simply use SecEncryptTransformCreate, but for public-key decryption you need to use the CSSMPublicKeyDecrypt class.
Mac OS X contains OpenSSL in libcrypto. The CommonCrypto framework seems to be derived from SSLeay, the precursor of OpenSSL.

Best general-purpose digest function?

When writing an average new app in 2009, what's the most reasonable digest function to use, in terms of security and performance? (And how can I determine this in the future, as conditions change?)
When similar questions were asked previously, answers have included SHA1, SHA2, SHA-256, SHA-512, MD5, bCrypt, and Blowfish.
I realize that to a great extent, any one of these could work if used intelligently, but I'd rather not roll a die and pick one at random. Thanks.
I'd follow NIST/FIPS guidelines:
March 15, 2006: The SHA-2 family of hash functions (i.e., SHA-224, SHA-256, SHA-384 and SHA-512) may be used by Federal agencies for all applications using secure hash algorithms. Federal agencies should stop using SHA-1 for digital signatures, digital time stamping and other applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010. After 2010, Federal agencies may use SHA-1 only for the following applications: hash-based message authentication codes (HMACs); key derivation functions (KDFs); and random number generators (RNGs). Regardless of use, NIST encourages application and protocol designers to use the SHA-2 family of hash functions for all new applications and protocols.
You say "digest function"; presumably that means you want to use it to compute digests of "long" messages (not just hashing "short" "messages" like passwords). That means bCrypt and similar choices are out; they're designed to be slow to inhibit brute-force attacks on password databases. MD5 is completely broken, and SHA-0 and SHA-1 are too weakened to be good choices. Blowfish is a stream cipher (though you can run it in a mode that produces digests), so it's not such a good choice either.
That leaves several families of hash functions, including SHA-2, HAVAL, RIPEMD, WHIRLPOOL, and others. Of these, the SHA-2 family is the most thoroughly cryptanalyzed, and so it would be my recommendation for general use. I would recommend either SHA2-256 or SHA2-512 for typical applications, since those two sizes are the most common and likely to be supported in the future by SHA-3.
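As a concrete, hedged illustration of that recommendation, both sizes are available through the standard JCA MessageDigest API in Java (the helper method below is mine, not from the answer):

import java.security.MessageDigest;

// Hedged example: pass "SHA-256" for a 32-byte digest or "SHA-512" for a 64-byte digest.
static byte[] digest(String algorithm, byte[] data) throws Exception {
    return MessageDigest.getInstance(algorithm).digest(data);
}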
It really depends on what you need it for.
If you are in need of actual security, where the ability to find a collision easily would compromise your system, I would use something like SHA-256 or SHA-512 as they come heavily recommended by various agencies.
If you are in need of something that is fast and can be used to uniquely identify something, but there are no actual security requirements (i.e., an attacker wouldn't be able to do anything nasty if they found a collision), then I would use something like MD5.
MD4, MD5, and SHA-1 have been shown to be easier to break than expected, in the sense that collisions can be found faster than a generic birthday attack would suggest. RIPEMD-160 is well regarded, but at only 160 bits a birthday attack needs only 2^80 operations, so it won't last forever. Whirlpool has excellent characteristics and appears the strongest of the lot, though it doesn't have the same backing as SHA-256 or SHA-512 does - in the sense that if there were a problem with SHA-256 or SHA-512 you'd be more likely to find out about it via proper channels.