Is the initialization vector part of the output ESP packet in IPsec?

Encryption Algorithm: AES-CBC
Authentication Algorithm: HMAC-SHA1-96
Is it necessary that in ESP the initialization vector is always part of the output packet?
If not, which algorithms omit it?
If so, why do some of the diagrams on Google and in books show an ESP packet with no IV field?

Citing RFC-2406, section 2.3 (emphasis mine):
Payload Data is a variable-length field containing data described by
the Next Header field. The Payload Data field is mandatory and is an
integral number of bytes in length. If the algorithm used to encrypt
the payload requires cryptographic synchronization data, e.g., an
Initialization Vector (IV), then this data MAY be carried explicitly
in the Payload field. Any encryption algorithm that requires such
explicit, per-packet synchronization data MUST indicate the length,
any structure for such data, and the location of this data as part of
an RFC specifying how the algorithm is used with ESP. If such
synchronization data is implicit, the algorithm for deriving the data
MUST be part of the RFC.
So the presence of an initialization vector is conditional and depends on the particular encryption algorithm. For the AES-CBC in your configuration, RFC 3602 requires an explicit IV: it occupies the first 16 octets of the Payload Data field. Generic ESP diagrams often draw only the fixed header fields and fold the IV into "Payload Data", which is why some pictures appear to have no IV field.
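As a concrete illustration, here is a minimal sketch of an explicitly carried IV, the way ESP with AES-CBC transmits it (Python with the third-party cryptography package; the helper names are mine, and the PKCS7 padding stands in for ESP's own trailer padding): the sender prepends a fresh IV to the ciphertext and the receiver reads it back from the packet.

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def esp_style_encrypt(key: bytes, plaintext: bytes) -> bytes:
        iv = os.urandom(16)                         # fresh random IV per packet
        padder = padding.PKCS7(128).padder()        # stand-in for ESP's padding
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(padded) + enc.finalize()   # IV = first 16 octets

    def esp_style_decrypt(key: bytes, packet: bytes) -> bytes:
        iv, ct = packet[:16], packet[16:]           # receiver reads IV from packet
        dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
        padded = dec.update(ct) + dec.finalize()
        unpadder = padding.PKCS7(128).unpadder()
        return unpadder.update(padded) + unpadder.finalize()

    key = os.urandom(16)
    packet = esp_style_encrypt(key, b"inner packet bytes")
    assert esp_style_decrypt(key, packet) == b"inner packet bytes"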

Related

Why can Serde not be used for Libra's serialization and deserialization for cryptographic purposes?

In Libra, participants pass around messages or data structures that often need to be signed by a prover and verified by one or more verifiers. Serialization in this context refers to the process of converting a message into a byte array. Many serialization approaches support loose standards such that two implementations can produce two different byte streams that would represent the same, identical message. While for many applications non-deterministic serialization causes no issues, it does for applications using serialization for cryptographic purposes. For example, given a signature and a message, a verifier may be unable to produce the same serialized byte array constructed by the prover when the prover signed the message, resulting in a non-verifiable message. In other words, to ensure message verifiability when using non-deterministic serialization, participants must either retain the original serialized bytes or risk losing the ability to verify messages. This creates a burden requiring participants to maintain both a copy of the serialized bytes and the deserialized message, often leading to confusion about safety and correctness. While there exist a handful of existing deterministic serialization formats, there is no obvious choice. To address this, we propose Libra Canonical Serialization, which defines a deterministic means for translating a message into bytes.
That is what the Libra project says. What is the real deterministic serialization? If serde is not deterministic, how can it deserialize correctly after serializing the data structure?
HashSet and HashMap can place items into different slots depending on the exact order of inserts, updates, and deletes. Serializers for these data structures operate in slot order, emitting a sequence of (key, value) pairs. Since slot order is nondeterministic, the serialized bytes will be nondeterministic. Deserialization, on the other hand, only has to rebuild the logical map from whatever pair order it reads, which is why serde can still round-trip the data structure even though the bytes differ between runs.
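To make this concrete, here is a small Python sketch (Python rather than Rust/serde, purely for illustration) of the same phenomenon: two maps that are equal as data structures serialize to different byte streams, and a canonical rule (sorted keys) restores determinism, which is essentially what Libra Canonical Serialization standardizes.

    import json, hashlib

    a = {"to": "alice", "amount": 5}
    b = {"amount": 5, "to": "alice"}   # same logical message, different insertion order
    assert a == b                      # equal as mappings...

    # ...but the default serializations differ, so a signature over one
    # byte stream will not verify against the other.
    print(json.dumps(a) == json.dumps(b))                         # False
    print(hashlib.sha256(json.dumps(a).encode()).hexdigest()
          == hashlib.sha256(json.dumps(b).encode()).hexdigest())  # False

    # A canonical form (here: sorted keys) makes serialization deterministic.
    print(json.dumps(a, sort_keys=True) == json.dumps(b, sort_keys=True))  # True

Either byte stream still deserializes back to the same map; determinism only matters once the bytes themselves are hashed or signed.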

How standard is HMAC(SHA-1)

HMAC(SHA-1) is a hash-computation algorithm that also accepts a key as an input value. The algorithm follows certain rules and guarantees a certain level of security and resilience against attacks.
Moving to its implementation: is HMAC(SHA-1) standardized to the point that all "official" and correct implementations of it produce exactly the same result for a given input message and key? Or does the algorithm admit different implementations that might produce different results?
Any given implementation of HMAC-SHA1 will produce the same set of bytes given the same set of bytes as the input message and key.
That said, there can be a lot of variation on how various interfaces work and how they accept those bytes. For example, one library may output the hash as a hex string, and another may output it as an array of bytes. Or one would take a string as input with a UTF-8 encoding, whereas another would take it in as a UTF-16 encoding. You would need to be careful that the same bytes are hitting the algorithm in different libraries to ensure you get the same result.
Also, while HMAC-SHA1 is probably okay from a security perspective, you should probably be using HMAC-SHA256 instead.
It's very standard. It's a standard, even!
RFC2104 specifies the actual HMAC algorithm and block sizes.
RFC2202 contains test cases for both HMAC-MD5 and HMAC-SHA1.
For further study, RFC4868 gives more guidance on HMAC for the SHA-2 family, with an emphasis on IPsec.
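One way to convince yourself: run the first HMAC-SHA1 test case from RFC 2202 through any implementation and compare digests. A minimal sketch with Python's standard hmac module (the expected value below is the digest published in RFC 2202); it also illustrates the hex-string vs. raw-bytes presentation difference mentioned above:

    import hmac, hashlib

    # RFC 2202, test case 1 for HMAC-SHA1
    key = b"\x0b" * 20
    msg = b"Hi There"
    mac = hmac.new(key, msg, hashlib.sha1)

    assert mac.hexdigest() == "b617318655057264e28bc0b6fb378c8ef146be00"
    print(mac.digest())   # same value, presented as raw bytes rather than hex

Any correct implementation, in any language, must produce those exact 20 bytes for that key and message.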

Is a Caesar cipher on purely binary data unbreakable?

As far as I can see, methods to break a simple Caesar cipher rely on patterns in the source data, such as the frequency of vowels in the English language. I struggle to see how a simple cipher on binary data could be compromised, assuming the key length is equal to or greater than the length of the data to be encrypted (so the key bytes never repeat) and assuming the source binary data is scrambled so any underlying pattern is removed.
If I offer you a byte value of 152, there is no mathematical way to determine that the original data was 52 and the key was 100 without the key.
Are my assumptions here correct, and if not, how could this simple encryption method be broken?
key length is equal to or greater than the length of the data to be encrypted
You're describing a one-time pad. It is secure, provided the key is truly random, kept secret, and never reused.
Caesar ciphers have keys that are much shorter than the message.
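A sketch of the scheme the question describes, in Python for illustration, using byte-wise addition mod 256. Note the point about 152: every candidate plaintext byte is explained by some key byte, so the ciphertext alone rules nothing out; that is exactly the one-time pad's perfect secrecy, and it holds only while the key is random, as long as the message, and never reused.

    import os

    def shift_encrypt(data: bytes, key: bytes) -> bytes:
        # byte-wise Caesar shift: c[i] = (p[i] + k[i]) mod 256
        return bytes((p + k) % 256 for p, k in zip(data, key))

    def shift_decrypt(ct: bytes, key: bytes) -> bytes:
        return bytes((c - k) % 256 for c, k in zip(ct, key))

    # The question's example: ciphertext byte 152 is consistent with ANY plaintext.
    for guess in (52, 7, 200):
        k = (152 - guess) % 256            # some key byte explains every guess
        assert shift_decrypt(bytes([152]), bytes([k])) == bytes([guess])

    # Full one-time pad: key as long as the message, random, used once.
    msg = b"attack at dawn"
    key = os.urandom(len(msg))
    assert shift_decrypt(shift_encrypt(msg, key), key) == msg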

Secret vs. Non-secret Initialization Vector

Today I was doing some leisurely reading and stumbled upon Section 5.8 (on page 45) of Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography (Revised) (NIST Special Publication 800-56A). I was very confused by this:
An Approved key derivation function (KDF) shall be used to derive secret keying material from a shared secret. The output from a KDF shall only be used for secret keying material, such as a symmetric key used for data encryption or message integrity, a secret initialization vector, or a master key that will be used to generate other keys (possibly using a different process). Nonsecret keying material (such as a non-secret initialization vector) shall not be generated using the shared secret.
Now I'm no Alan Turing, but I thought that initialization vectors need not be kept secret. Under what circumstances would one want a "secret initialization vector?"
Thomas Pornin says that IVs are public and he seems well-versed in cryptography. Likewise with caf.
An initialization vector need not be secret (it is not a key), but it need not be public either (sender and receiver must know it, but it is not necessary that the Queen of England also knows it).
A typical key establishment protocol results in both involved parties computing a piece of data which they, but only they, both know. With Diffie-Hellman (or any elliptic-curve variant thereof), the shared piece of data has a fixed length and they have no control over its value (they just both get the same seemingly random sequence of bits). In order to use that shared secret for symmetric encryption, they must derive from it a sequence of bits of the appropriate length for whatever symmetric encryption algorithm they are about to use.
In a protocol in which you use a key establishment algorithm to obtain a shared secret between the sender and the receiver, and will use that secret to symmetrically encrypt a message (possibly a very long streamed message), it is possible to use the KDF to produce the key and the IV in one go. This is how it goes in, for instance, SSL: from the shared secret (called the "pre-master secret" in the SSL spec) is computed a big block of derived secret data, which is then split into symmetric keys and initialization vectors for both directions of encryption. You could do otherwise and, for instance, generate a random IV and send it along with the encrypted data instead of using an IV obtained through the KDF (that's how it goes in recent versions of TLS, the successor to SSL). Both strategies are equally valid (TLS uses an external random IV because a fresh random IV is wanted for each "record" -- a packet of data within a TLS connection -- which is why using the KDF was no longer deemed appropriate).
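A sketch of the "key and IV in one go" idea (Python, using HKDF from the third-party cryptography package; this is an illustrative stand-in, as SSL/TLS use their own PRF rather than HKDF): derive one block of secret data from the shared secret and split it.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    shared_secret = os.urandom(32)      # e.g. the output of a Diffie-Hellman exchange

    block = HKDF(
        algorithm=hashes.SHA256(),
        length=32 + 16,                 # 256-bit encryption key + 128-bit IV
        salt=None,
        info=b"key expansion",
    ).derive(shared_secret)

    enc_key, iv = block[:32], block[32:]    # split the derived block, SSL-style

Both sides run the same derivation on the same shared secret and therefore agree on the key and the IV without ever transmitting either.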
Well, consider that if two parties have the same cryptographic function but don't have the same IV, they won't get the same results. So it seems the proposal there is that the two parties obtain the same shared secret and each deterministically generate an IV (which will be the same for both), and then they can communicate. That's just how I read it; I've not actually read the document, and I'm not completely sure my description is accurate, but it's how I'd start investigating.
Whether the IV is public or private doesn't matter much.
Suppose the IV is known to an attacker. Looking at the encrypted packet, with knowledge of the IV but no knowledge of the encryption key, can he or she guess anything about the input data? (Think about it for a while.)
Now let's go slightly backwards and say that no IV is used in the encryption:
    AES(input, K) = E1
The same input will always produce the same encrypted text. An attacker can then recognize repeated plaintexts from the ciphertext alone and combine that with prior knowledge of the input data (e.g., the initial exchange of some protocols).
This is where the IV helps: it is mixed with the input value, so the encrypted text changes even for the same input data:
    AES(input, IV, K) = E2
Hence the attacker sees that encrypted packets differ (even with the same input data) and can't guess easily, even with knowledge of the IV.
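The determinism problem is easy to demonstrate. A minimal sketch (Python, third-party cryptography package assumed): with a fixed IV, equal plaintexts produce equal ciphertexts, which is precisely what an eavesdropper can spot; with a fresh random IV per message, they don't.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    msg = b"sixteen byte msg"               # exactly one AES block

    def cbc_encrypt(iv: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return enc.update(msg) + enc.finalize()

    fixed_iv = bytes(16)
    print(cbc_encrypt(fixed_iv) == cbc_encrypt(fixed_iv))    # True: repeats are visible
    print(cbc_encrypt(os.urandom(16))
          == cbc_encrypt(os.urandom(16)))                    # False: repeats are hidden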
The starting value of the counter in CTR mode encryption can be thought of as an IV. If you make it secret, you end up with some amount of added security over the security granted by the key length of the cipher you're using. How much extra is hard to say, but not knowing it does increase the work required to figure out how to decrypt a given message.

Initialization vector uniqueness

Best practice is to use unique IVs, but what counts as unique? Unique for each record? Or absolutely unique (unique for each field too)?
If it's per field, that sounds awfully complicated: how do you manage the storage of so many IVs if you have 60 fields in each record?
I started an answer a while ago, but suffered a crash that lost what I'd put in. What I said was along the lines of:
It depends...
The key point is that if you ever reuse an IV, you open yourself up to cryptographic attacks that are easier to execute than those available when you use a different IV every time. So, for every sequence where you need to start encrypting again, you need a new, unique IV.
You also need to look up cryptographic modes of operation - Wikipedia has an excellent illustration of why you should not use ECB. CTR mode can be very beneficial.
If you are encrypting each record separately, then you need to create and record one IV for the record. If you are encrypting each field separately, then you need to create and record one IV for each field. Storing the IVs can become a significant overhead, especially if you do field-level encryption.
However, you have to decide whether you need the flexibility of field level encryption. You might - it is unlikely, but there might be advantages to using a single key but different IVs for different fields. OTOH, I strongly suspect that it is overkill, not to mention stressing your IV generator (cryptographic random number generator).
If you can afford to do encryption at a page level instead of the row level (assuming rows are smaller than a page), then you may benefit from using one IV per page.
Erickson wrote:
You could do something clever like generating one random value in each record, and using a hash of the field name and the random value to produce an IV for that field.
However, I think a better approach is to store a structure in the field that collects an algorithm identifier, necessary parameters (like the IV) for that algorithm, and the ciphertext. This could be stored as a little binary packet, or encoded into some text like Base-85 or Base-64.
And Chris commented:
I am indeed using CBC mode. I thought about an algorithm to do a 1:many so I can store only 1 IV per record. But now I'm considering your idea of storing the IV with the ciphertext. Can you give me some more advice: I'm using PHP + MySQL, and many of the fields are either varchar or text. I don't have much experience with binary in the database; I thought binary was database-unfriendly, so I always base64_encode when storing binary (like the IV, for example).
To which I would add:
IBM DB2 LUW and Informix Dynamic Server both use a Base-64 encoded scheme for the character output of their ENCRYPT_AES() and related functions, storing the encryption scheme, IV and other information as well as the encrypted data.
I think you should look at CTR mode carefully, as I said before. You could create a 64-bit IV from, say, 48 bits of random data plus a 16-bit counter, using the counter part as an index into the record (probably in 16-byte chunks - one crypto block for AES); see the sketch after this answer.
I'm not familiar with how MySQL stores data at the disk level. However, it is perfectly possible to encrypt the entire record including the representation of NULL (absence of) values.
If you use a single IV for a record but a separate CBC encryption for each field, then each field has to be padded to 16 bytes, and you are definitely indulging in 'IV reuse'. I think this is cryptographically unsound. You would be much better off using a single IV for the entire record and either one unit of padding for the record with CBC mode, or no padding with CTR mode (since CTR does not require padding - one of its merits; another is that you only use the encryption function of the block cipher for both encrypting and decrypting the data).
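Picking up the 48-bit-random-plus-16-bit-counter idea from above, the IV layout might look like this (a Python sketch; all names are illustrative):

    import os

    record_prefix = os.urandom(6)           # 48 random bits, fixed per record

    def chunk_iv(chunk_index: int) -> bytes:
        # 64-bit IV: 48 random bits || 16-bit counter indexing 16-byte chunks
        assert 0 <= chunk_index < 2**16
        return record_prefix + chunk_index.to_bytes(2, "big")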
Once again, Appendix C of NIST SP 800-38A might be helpful. For example, according to it, you could generate an IV for CBC mode simply by encrypting a unique nonce with your encryption key. Even simpler, if you use OFB then the IV just needs to be unique.
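A sketch of that Appendix C construction (Python, cryptography package assumed): the nonce only has to be unique, and encrypting it under the key makes the resulting IV unpredictable to anyone who doesn't hold the key.

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_iv_from_nonce(key: bytes, message_number: int) -> bytes:
        nonce = message_number.to_bytes(16, "big")      # unique per message
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(nonce) + enc.finalize()       # unpredictable without the key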
There is some confusion about what the real requirements are for good IVs in the CBC mode. Therefore, I think it is helpful to look briefly at some of the reasons behind these requirements.
Let's start by reviewing why IVs are even necessary. IVs randomize the ciphertext: if the same message is encrypted twice with the same key (but different IVs), the ciphertexts are distinct. An attacker who is given two (equally long) ciphertexts should not be able to determine whether the two ciphertexts encrypt the same plaintext or two different plaintexts. This property is usually called ciphertext indistinguishability.
Obviously this is an important property for encrypting databases, where many short messages are encrypted.
Next, let's look at what can go wrong if the IVs are predictable. Take, for example, Erickson's proposal:
"You could do something clever like generating one random value in each record, and using a hash of the field name and the random value to produce an IV for that field."
This is not secure. For simplicity, assume that a user Alice has a record in which only two values, m1 or m2, are possible for a field F. Let Ra be the random value that was used to encrypt Alice's record. Then the ciphertext for the field F would be
    EK(hash(F || Ra) xor m),
where m is either m1 or m2.
The random value Ra is also stored in the record, since otherwise it wouldn't be possible to decrypt. An attacker Eve, who would like to learn the value in Alice's record, can proceed as follows. First, she finds an existing record to which she can add a value chosen by her.
Let Re be the random value used for this record and let F' be the field for which Eve can submit her own value v. Since the record already exists, the IV for the field F' is predictable, namely
    hash(F' || Re).
Eve can exploit this by selecting her value v as
    v = hash(F' || Re) xor hash(F || Ra) xor m1
and letting the database encrypt it. In CBC mode the block is encrypted as EK(v xor IV), and with IV = hash(F' || Re) the two hashes of F' cancel, giving
    EK(hash(F || Ra) xor m1).
She then compares the result with Alice's record. If the two match, she knows that m1 was the value stored in Alice's record; otherwise it is m2.
You can find variants of this attack by searching for "block-wise adaptive chosen plaintext attack" (e.g. this paper). There is even a variant that worked against TLS.
The attack can be prevented, possibly by encrypting the random value before putting it into the record and deriving the IV from the result. But probably the simplest thing to do is what NIST already proposes: generate a unique nonce for every field that you encrypt (this could simply be a counter), encrypt the nonce with your encryption key, and use the result as the IV.
Also note that the attack above is a chosen-plaintext attack. Even more damaging attacks are possible if the attacker can mount chosen-ciphertext attacks, i.e., if she can modify your database. Since I don't know how your databases are protected, it is hard to make any claims there.
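To see the attack end to end, here is a single-block sketch (Python, cryptography package assumed; all names are illustrative). Eve never touches the key; she only needs both IVs to be predictable, and the CBC xor with the IV cancels out exactly as described above.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(16)        # the database's secret key (unknown to Eve)

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def encrypt_block(block: bytes, iv: bytes) -> bytes:
        # CBC encryption of a single 16-byte block: EK(block xor iv)
        enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
        return enc.update(block) + enc.finalize()

    # Alice's field F holds one of two known 16-byte values.
    m1, m2 = b"APPROVED........", b"DENIED.........."
    iv_alice = os.urandom(16)   # = hash(F || Ra) in the flawed scheme: predictable
    c_alice = encrypt_block(m1, iv_alice)

    # Eve predicts the IV her own submission will get, then crafts v so that
    # v xor iv_eve == guess xor iv_alice, i.e. the IVs cancel.
    iv_eve = os.urandom(16)     # = hash(F' || Re): also predictable
    for guess in (m1, m2):
        v = xor(xor(iv_eve, iv_alice), guess)
        print(encrypt_block(v, iv_eve) == c_alice)   # True only for the right guess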
The requirements for IV uniqueness depend on the "mode" in which the cipher is used.
For CBC, the IV should be unpredictable for a given message.
For CTR, the IV has to be unique, period.
For ECB, of course, there is no IV. If a field is a short, random identifier that fits in a single block, you can use ECB securely.
I think a good approach is to store a structure in the field that collects an algorithm identifier, necessary parameters (like IV) for that algorithm, and the ciphertext. This could be stored as a little binary packet, or encoded into some text like Base-85 or Base-64.
