I have a question about Bluetooth Low Energy (BLE) and AES-encrypted data sizes. BLE uses AES, and the AES block size is 16 bytes. If the data is less than 16 bytes, we need to add padding. But when I sniff BLE traffic with a Nordic sniffer, it shows payload sizes of 5 or 7 bytes. I do not understand how data shorter than 16 bytes can be decrypted with AES.
In most cases the BLE MTU size is 20 bytes, but the AES block size is 16 bytes. How is 17 bytes of data handled? After AES encryption it becomes 32 bytes, which exceeds the MTU.
Related
I have read an article about the difference between the methods update() and doFinal() in Cipher.
It described what happens if we want to encrypt a 4-byte array when the block size of the cipher is, for example, 8 bytes. If we call update() here, it will return null. My question is: if we call doFinal() with a 4-byte array to encrypt and the block size is 8 bytes, how many bytes of encrypted data will we receive in return?
update(): feeds in the data piece by piece, which lets you encrypt long files or streams.
doFinal(): applies the requested padding scheme to the data, if requested and necessary, then encrypts. ECB and CBC modes require padding, but CTR mode does not. If NoPadding is used, some libraries may silently pad; in others you have to handle the padding yourself.
When you call doFinal() with 4 bytes of data and NoPadding is not set, the data is padded to a full block and then encrypted, so you get one full block (8 bytes in your example) of ciphertext back.
From the Java documentation:
update(byte[] input)
Continues a multiple-part encryption or decryption operation (depending on how this cipher was initialized), processing another data part.
doFinal()
Finishes a multiple-part encryption or decryption operation, depending on how this cipher was initialized.
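A minimal Java sketch of the behaviour described above, assuming the default JCA provider and using AES (16-byte blocks) in place of the 8-byte example cipher; the class and variable names are illustrative only:

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class UpdateVsDoFinal {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            byte[] fourBytes = {1, 2, 3, 4};

            // ECB is used only to keep the sketch short; it needs no IV.
            Cipher padded = Cipher.getInstance("AES/ECB/PKCS5Padding");
            padded.init(Cipher.ENCRYPT_MODE, key);

            // update() buffers the 4 bytes: not enough for a full block, so no output yet.
            byte[] fromUpdate = padded.update(fourBytes);
            System.out.println("update(): " + (fromUpdate == null ? "null" : fromUpdate.length + " bytes"));

            // doFinal() pads the buffered bytes to one full block and encrypts: 16 bytes out.
            byte[] fromFinal = padded.doFinal();
            System.out.println("doFinal(): " + fromFinal.length + " bytes");

            // With NoPadding the same 4-byte input is rejected, because it is not a
            // multiple of the block size and the provider will not pad for you.
            Cipher unpadded = Cipher.getInstance("AES/ECB/NoPadding");
            unpadded.init(Cipher.ENCRYPT_MODE, key);
            try {
                unpadded.doFinal(fourBytes);
            } catch (javax.crypto.IllegalBlockSizeException e) {
                System.out.println("NoPadding: " + e);
            }
        }
    }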
I have an Android app that writes several bytes to a Bluetooth device.
Looking at btsnoop_hci.log I can see that, when a large amount of data is sent to the BLE device, the app uses Prepare Write Request several times and then Execute Write Request: Immediately Write All.
Now my problem is how to do the same from my own application using an RN4870 module.
At the moment I can connect, read services and characteristics, and write using the CHW command as described in the manual when there are only a few bytes.
But I cannot write what the remote BLE device expects when there are a lot of bytes.
Thank you for your support,
Marco
This is the Microchip answer:
Hello,
The core specifications are handled by the firmware.
The user doesn't have access at this level, so there is nothing you can set.
Regarding the long data question:
"Does RN4870 module support the Data Length Extension feature? "
RN4870 rev 1.28 support DLE, but partially. The normal packet size in BLE without DLE is 20 bytes.
With a standard DLE feature, the normal packet size should be 251 bytes.
However, in RN4870 Rev 1.28, the packet size is 151 bytes. So it is not a full implementation of the DLE.
The DLE feature (Data Length Extension) is embedded into the lower levels of the Bluetooth stack and there are no specific commands to enable or disable DLE. Essentially, if the peer device also supports DLE, then the DLE will be enabled.
So there are no specific (commands) that you need to do to increase throughput through DLE.
Regards,
In other words, there is nothing to do!
In an Android application you can't set the DLE length directly; instead you should set the MTU size. The Android Bluetooth stack will calculate the DLE length based on the MTU. The maximum data length supported by the Bluetooth protocol is 251 bytes, but it can be anywhere between 27 and 251 bytes depending on the BT controller hardware capabilities. During connection, the BT device will negotiate with the peer device (if the peer supports DLE) to set the maximum DLE size supported by both devices.
To increase your throughput you can request the maximum supported MTU size of 512. You can also write without response and do error checking on the data with your own logic, such as a parity check or CRC, and retransmit from the application for better throughput.
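As a hedged illustration of the Android side only (not RN4870 code), here is a sketch that requests a 512-byte MTU after service discovery and then writes without response; findMyCharacteristic() and nextChunk() are hypothetical helpers standing in for your own characteristic lookup and data chunking:

    import android.bluetooth.BluetoothGatt;
    import android.bluetooth.BluetoothGattCallback;
    import android.bluetooth.BluetoothGattCharacteristic;

    public abstract class ThroughputCallback extends BluetoothGattCallback {

        @Override
        public void onServicesDiscovered(BluetoothGatt gatt, int status) {
            if (status == BluetoothGatt.GATT_SUCCESS) {
                // Ask for the largest ATT MTU; the stack negotiates DLE on its own.
                gatt.requestMtu(512);
            }
        }

        @Override
        public void onMtuChanged(BluetoothGatt gatt, int mtu, int status) {
            // The usable payload per write is the negotiated MTU minus 3 bytes of ATT header.
            BluetoothGattCharacteristic ch = findMyCharacteristic(gatt);
            ch.setWriteType(BluetoothGattCharacteristic.WRITE_TYPE_NO_RESPONSE);
            ch.setValue(nextChunk(mtu - 3));
            gatt.writeCharacteristic(ch);
        }

        // Hypothetical helpers for characteristic lookup and data chunking.
        protected abstract BluetoothGattCharacteristic findMyCharacteristic(BluetoothGatt gatt);
        protected abstract byte[] nextChunk(int maxLen);
    }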
We are working on an application that will use SPP (Serial Port Profile) over Bluetooth, and the developers are debating whether to use some kind of protocol with packet delivery, versus just streaming the data without any ACK, sequence, or size information.
Does Bluetooth provide guaranteed delivery and data integrity, so that we do not need the overhead of a packet protocol design? Can we rely on Bluetooth alone to ensure the data was delivered?
Is delivery guaranteed?
The order of delivery is guaranteed. This is due to the acknowledgement/sequence-numbering scheme built into the lower layers of the Bluetooth protocol: at the lower layer a packet is retransmitted until it is acknowledged. Note that this is equivalent to a stop-and-wait ARQ scheme. If acknowledgement takes longer than a timeout (generally 30 seconds), the connection is considered lost.
Is data integrity guaranteed?
Bluetooth 4.2 introduced BT Secure Connections. This includes a Message Integrity Check (MIC) with each transmitted packet; a MIC mismatch at the receiving end triggers a retransmission, and repeated MIC mismatches may drop the connection.
So if you are not using the Secure Connections feature, integrity is not guaranteed. There is a 16-bit CRC scheme protecting the data, but over a long enough period there will be CRC escapes (bit flips that happen to leave the CRC correct). This is relatively rare and happens mainly in noisy environments. If your application requires very high data integrity, either use Secure Connections or introduce application-level integrity checks.
Note that the SPP profile itself does not have any error/sequence checking. RFCOMM has an 8-bit FCS (Frame Check Sequence) that only covers the header; L2CAP streaming/retransmission mode has an optional 16-bit FCS covering the L2CAP header and data; and basic L2CAP mode has no FCS at all.
If you have the option to enable the L2CAP FCS, then the 16-bit CRC at the lower level plus the 16-bit FCS at the L2CAP layer plus the 8-bit FCS at the RFCOMM level provide data integrity that is good enough for most applications. However, as mentioned above, if integrity is really critical you need to add application-level integrity checks.
In essence, BT has its own safety mechanisms for transfer. However, and this is just as important: in order for YOU to know where each packet starts and ends, you should use packet-style transmission, for example STX and ETX bytes to delimit each packet. There are dongles with the bad habit of repeating the last sent byte if there is a pause in the transmission, but they will stop when an ETX or EOT is encountered.
And, for your own safety, you might as well include a checksum at the end of the packet. Then you can be reasonably sure.
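A minimal framing sketch along these lines; the STX/ETX values and the simple additive checksum are illustrative choices, not anything defined by SPP or RFCOMM:

    import java.io.ByteArrayOutputStream;

    public class SimpleFramer {
        private static final byte STX = 0x02;
        private static final byte ETX = 0x03;

        public static byte[] frame(byte[] payload) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            out.write(STX);
            int sum = 0;
            for (byte b : payload) {
                out.write(b);
                sum += b & 0xFF;      // simple additive checksum over the payload
            }
            out.write(ETX);
            out.write(sum & 0xFF);    // checksum byte follows the ETX delimiter
            return out.toByteArray();
        }
    }

A real framer would also escape STX/ETX bytes that occur inside the payload, or carry a length field, so the receiver cannot be confused by payload bytes that look like delimiters.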
I'm using ip xfrm under Linux to add an IPsec SA with AES in GCM mode to the system.
The command I'm using is like this:
ip xfrm state add src 10.66.21.164 dst 10.66.21.166 proto esp spi 0x201 mode transport aead "rfc4106(gcm(aes))" 0x010203047aeaca3f87d060a12f4a4487d5a5c335 96
Now I'm wondering:
The key is seemingly 20 B = 160 bits long. The normal AES key is 128 bits and, as can be seen above, the IV length given is 96 bits. If I lengthen or shorten the key it doesn't work, so clearly the expected input is sizeof(AES key) = 128 bits (it does, of course, also work with 256 bits) plus 32 extra bits.
Why is this so? The only thing I know of that is 4 bytes long in this context is an unsigned int, which is the data type of the IV-length variable, but that has nothing to do with the key.
Shouldn't the key plus IV be 224 bits long (128 for the key plus 96 for the IV)?
The 96 in your command is the size of the authentication tag (the ICV), not the IV. The authentication tag is part of each message in the session; it is not something you have to specify. The same goes for the IV: it is generated by the protocol per message.
The key material consists of a 16-byte (128-bit) AES key and a 4-byte (32-bit) salt, given in hexadecimal format, which uses 2 characters per byte.
The KEYMAT requested for each AES-GCM key is 20 octets. The first 16 octets are the 128-bit AES key, and the remaining four octets are used as the salt value in the nonce.
Source: RFC 4106.
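A small sketch of that split, using the key string from the ip xfrm command above (0x prefix dropped); the nonce layout shown (4-byte salt followed by the 8-byte per-packet IV) is the RFC 4106 construction:

    import java.util.Arrays;

    public class Rfc4106KeyMat {
        public static void main(String[] args) {
            byte[] keymat = hexToBytes("010203047aeaca3f87d060a12f4a4487d5a5c335");

            byte[] aesKey = Arrays.copyOfRange(keymat, 0, 16);   // 128-bit AES key
            byte[] salt   = Arrays.copyOfRange(keymat, 16, 20);  // 32-bit salt

            // Per packet: nonce = salt (4 bytes) || explicit IV carried in the ESP packet (8 bytes).
            byte[] explicitIv = new byte[8];
            byte[] nonce = new byte[12];
            System.arraycopy(salt, 0, nonce, 0, 4);
            System.arraycopy(explicitIv, 0, nonce, 4, 8);
            System.out.println("key: " + aesKey.length + " bytes, nonce: " + nonce.length + " bytes");
        }

        private static byte[] hexToBytes(String hex) {
            byte[] out = new byte[hex.length() / 2];
            for (int i = 0; i < out.length; i++) {
                out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return out;
        }
    }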
I am making a protocol that uses packets (i.e., not a stream) encrypted with AES. I've decided on GCM (based on CTR) because it provides integrated authentication and is part of the NSA's Suite B. The AES keys are negotiated using ECDH, where the public keys are signed by trusted contacts as part of a web of trust using something like ECDSA. I believed I needed a 128-bit nonce / initialization vector for GCM because, even though I'm using a 256-bit key, AES is always a 128-bit block cipher (right?), but after reading the BouncyCastle code I'll be using a 96-bit IV.
I'm definitely not implementing my own algorithms (just the protocol; my crypto provider is BouncyCastle), but I still need to know how to use this nonce without shooting myself in the foot. The AES key used between two people with the same DH keys will remain constant, so I know that the same nonce must not be used for more than one packet.
Could I simply prepend a 96-bit pseudo-random number to the packet and have the recipient use it as the nonce? This is peer-to-peer software and packets can be sent by either side at any time (e.g., an instant message, a file transfer request, etc.), and speed is a big issue, so it would be good not to have to use a secure random number source. The nonce doesn't have to be secret at all, right? Or necessarily as random as a "cryptographically secure" PRNG? Wikipedia says it should be random or else it is susceptible to a chosen-plaintext attack, but there's a "citation needed" next to both claims and I'm not sure whether that applies to block ciphers. Could I instead use a counter of the number of packets sent with a given AES key (separate from the counter of 128-bit blocks), starting at 1? Obviously this would make the nonce predictable. Considering that GCM authenticates as well as encrypts, would that compromise its authentication functionality?
GCM is a block-cipher counter mode with authentication. A counter mode effectively turns a block cipher into a stream cipher, and therefore many of the rules for stream ciphers still apply. It's important to note that the same Key+IV will always produce the same keystream, and reusing that keystream can let an attacker obtain plaintext with a simple XOR. In a protocol the same Key+IV can be used for the life of the session, as long as the mode's counter doesn't wrap (integer overflow). For example, a protocol could have two parties with a pre-shared secret key who negotiate a fresh cryptographic nonce to use as the IV for each session (remember, nonce means a number used ONLY ONCE).
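To make the keystream-reuse point concrete, here is a small sketch using plain AES/CTR (the counter core that GCM is built on) with a deliberately reused key and IV; XORing the two ciphertexts together with one known plaintext recovers the other without the key. The messages and zero IV are illustrative only:

    import java.nio.charset.StandardCharsets;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;

    public class CtrReuseDemo {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            IvParameterSpec iv = new IvParameterSpec(new byte[16]); // deliberately reused

            byte[] p1 = "attack at dawn!!".getBytes(StandardCharsets.US_ASCII);
            byte[] p2 = "retreat at dusk!".getBytes(StandardCharsets.US_ASCII);

            Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, iv);
            byte[] ct1 = c.doFinal(p1);
            c.init(Cipher.ENCRYPT_MODE, key, iv);                   // same Key+IV: same keystream
            byte[] ct2 = c.doFinal(p2);

            // ct1 XOR ct2 equals p1 XOR p2, so knowing p1 leaks p2 byte by byte.
            boolean recovered = true;
            for (int i = 0; i < ct1.length; i++) {
                if ((byte) (ct1[i] ^ ct2[i] ^ p1[i]) != p2[i]) recovered = false;
            }
            System.out.println("p2 recovered without the key: " + recovered);
        }
    }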
If you want to use AES as a block cipher you should look into CMAC, or perhaps the OMAC1 variant. With CMAC, all of the rules for CBC still apply. In that case you would have to make sure that each packet uses a unique IV that is also random. However, it is important to note that reusing an IV there does not have nearly as dire consequences as reusing a keystream.
I'd suggest against designing your own security protocol. There are several things you need to consider, and even a qualified cryptographer can get them wrong. I'd refer you to the TLS protocol (RFC 5246) and the Datagram TLS protocol (RFC 4347). Pick a library and use them.
Concerning your question about the IV in GCM mode, I'll tell you how DTLS and TLS do it. They use an explicit nonce, i.e. the 64-bit message sequence number that is included in every packet, combined with a secret part that is not transmitted (the upper 32 bits), which is derived from the initial key exchange (check RFC 5288 for more information).
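A hedged sketch of that construction using the standard JCA API (which BouncyCastle also exposes): a 4-byte secret salt from the key exchange plus an 8-byte per-packet sequence number form the 96-bit GCM nonce, and only the sequence number needs to travel with the packet. The class and field names are illustrative:

    import java.nio.ByteBuffer;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class SequenceNonceGcm {
        private final SecretKey key;     // negotiated, e.g. via your ECDH exchange
        private final byte[] salt;       // 4 secret bytes derived from the key exchange
        private long sequenceNumber = 0; // incremented for every packet, never reused

        public SequenceNonceGcm(SecretKey key, byte[] salt) {
            this.key = key;
            this.salt = salt.clone();
        }

        public byte[] seal(byte[] plaintext) throws Exception {
            byte[] nonce = ByteBuffer.allocate(12)
                    .put(salt)                      // implicit part (not transmitted)
                    .putLong(sequenceNumber++)      // explicit part (sent in the packet header)
                    .array();
            Cipher gcm = Cipher.getInstance("AES/GCM/NoPadding");
            gcm.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
            return gcm.doFinal(plaintext);          // ciphertext with the 16-byte tag appended
        }
    }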