Simple authentication and encryption methods over TCP for a microcontroller-based embedded device

I have designed a simple communications protocol, over raw TCP sockets, to enable basic messaging between some embedded devices. To put things in context, my embedded device is a box of electronics containing a relatively small microcontroller that is running a basic embedded RTOS (which essentially provides just task priority and message queueing) and a TCP/IP stack. The intended use for the TCP protocol is to:
Enable two or more 'boxes' to communicate with each other over a LAN in one building.
Allow a box to exchange data with an external server over the Internet.
I now have a messaging protocol working between my metal boxes that I'm happy with. The basic procedure for an exchange between two boxes is as follows (a socket-level sketch is shown after the steps):
Box 'A' initiates a socket connection to 'B'.
Box 'A' sends a message report or command sequence.
Box 'B' responds with an acknowledge and / or command response.
Box 'A' closes the socket.
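As a rough illustration of the exchange above, here is box A's side written against a BSD-style sockets API (most small TCP/IP stacks, such as lwIP's sockets layer, provide one). The peer address, port, function name and buffer handling are placeholders, not part of the original question.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Box A's side of one exchange: connect, send, wait for the acknowledge /
 * command response, then close.  PEER_IP and PEER_PORT are placeholders. */
#define PEER_IP   "192.168.0.20"
#define PEER_PORT 5000

int send_report(const uint8_t *msg, size_t msg_len,
                uint8_t *resp, size_t resp_max)
{
    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family      = AF_INET;
    peer.sin_port        = htons(PEER_PORT);
    peer.sin_addr.s_addr = inet_addr(PEER_IP);

    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    int rc = -1;
    if (connect(s, (struct sockaddr *)&peer, sizeof(peer)) == 0 &&
        send(s, msg, msg_len, 0) == (ssize_t)msg_len) {
        rc = (int)recv(s, resp, resp_max, 0);   /* box B's response */
    }
    close(s);                                   /* box A closes the socket */
    return rc;
}
```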
What I would now like to do is to incorporate some level of security and authentication. The great restriction here is that I don't have any kind of OS or glorified TCP stack that can provide me with any security features; I just have a simple TCP stack, and therefore I must implement security measures at the application level, with microcontroller limitations in mind.
The objectives I would like to meet are:
Authentication between devices. To achieve this, I'm intending to do the following measures:
Hold a table of known IPs from which connections shall only be accepted.
Each time a socket connection is established, unique identifiers are always exchanged first. Each identifier might be a unique serial number for a given device, and must be known to the other devices.
Encryption of data, in the event that packets are somehow intercepted. Presumably I need some kind of cipher algorithm that isn't too 'expensive' to perform on a small microcontroller, used in conjunction with a unique key that is programmed into all devices. One such algorithm I've seen, which looks compact enough to implement in my code, is TEA (Tiny Encryption Algorithm); a reference sketch of it is shown below.
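For reference, this is the widely published TEA encryption routine by Wheeler and Needham; it encrypts one 64-bit block in place with a 128-bit key.

```c
#include <stdint.h>

/* TEA: encrypt one 64-bit block (v[0], v[1]) in place with the 128-bit key k. */
void tea_encrypt(uint32_t v[2], const uint32_t k[4])
{
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9;          /* key schedule constant */

    for (int i = 0; i < 32; i++) {              /* 32 rounds */
        sum += delta;
        v0  += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1  += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0;
    v[1] = v1;
}
```

Decryption runs the same 32 rounds in reverse, starting from sum = delta * 32. Note that TEA has known weaknesses (notably related-key attacks and equivalent keys), so XTEA is a common drop-in alternative of similar size, and a block cipher alone is not a complete scheme: you would still need a mode of operation (e.g. CBC or CTR) and, ideally, a message authentication code.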
I would be most grateful for any advice or pointers.

Check MatrixSSL - they boast a tiny size and embeddable capabilities. This is much better than re-inventing SSL/TLS yourself (which you would end up doing).

TEA looks to be quite simple and will probably do what you need.
Compiling for Thumb with -O2 or similar optimizations:
arm-none-linux-gnueabi-gcc (Sourcery G++ Lite 2011.03-41) 4.5.2: encrypt 136 bytes, decrypt 128 bytes
llvm 29: encrypt 92 bytes, decrypt 96 bytes
Compiling for generic ARM:
gnu: encrypt 188 bytes, decrypt 184 bytes
llvm: encrypt 112 bytes, decrypt 116 bytes
For authentication, is there a one-to-one relationship between the IP address table and the number of devices? Basically, does there need to be more than one unique identifier per device? Do you want the side making the connection to the embedded system to log in in some form or fashion? Or do you control the binaries/code on both ends, with no users and no selection of devices (the programs know what to do)? If each device has a known IP address, that IP address could be the key to the encryption (plus other bytes common to all devices, or derived in some fashion that everyone knows). Then reject an incoming connection if 1) it is not from the approved list, or 2) decryption using the key based on that device's IP address fails. A sketch of this idea follows.
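A minimal sketch of that idea; the table size, addresses and secret bytes below are made up for illustration and not part of the answer. Check the caller's IP against an approved table, then build a per-device 128-bit key from that IP plus bytes common to all devices.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_PEERS 4

/* Hypothetical whitelist of approved peer IPv4 addresses (192.168.0.16..19). */
static const uint32_t approved_ip[NUM_PEERS] = {
    0xC0A80010, 0xC0A80011, 0xC0A80012, 0xC0A80013
};

/* Bytes common to all devices, programmed at production; placeholder values. */
static const uint8_t common_secret[12] = { 0x5A, 0x3C /* ... */ };

bool peer_is_approved(uint32_t ip)
{
    for (int i = 0; i < NUM_PEERS; i++)
        if (approved_ip[i] == ip)
            return true;
    return false;
}

/* Build a 128-bit key for this peer: 4 bytes of its IP + 12 common bytes. */
void derive_peer_key(uint32_t ip, uint8_t key_out[16])
{
    memcpy(key_out, &ip, 4);                 /* peer-specific part */
    memcpy(key_out + 4, common_secret, 12);  /* shared part        */
}
```

The derived key could then feed a cipher such as TEA, as discussed above.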
Your security can only go so far anyway; if you need something really robust, you probably need more horsepower in the embedded system so that you can implement one of the common/standard systems like SSL/TLS.

Related

Any secure USB dongle/token with internal AES and RSA, with simple API?

I have a C# .NET 6 desktop application to ship to customers; important functions have been removed from it and implemented on a server.
I have a public server on which I want to authenticate the desktop app (license, features, ...), receive its blob, process it, and send it back.
I consider the C# app crackable whatever obfuscator/protector I use (though I'll use one anyway); the server is considered secure; I need a secure point at the customer's premises.
The idea is to use a USB dongle to bring up a secure and authenticated session between the desktop app and the server.
Requirements for the dongle are:
Be able to do AES-128 (at least) and/or RSA-1024 (at least)
EAL5+/6+ secure MCU (nothing that could be dumped with glitches or baths in acid)
A DLL and API to talk to it
So far I've looked at various software protection dongles, but:
some use 15-year-old MCUs and I'm not sure the vendors are still in business
most don't say what MCU is inside; some are fast (but silly) STM32s, some are slow 8051s
the expensive ones are the most complex ones; it takes days to read unclear documentation only to find that I don't need 90% of the package (enveloper, MSSQL DB for my 50 customers, ...)
I don't need their C# enveloper at all; I want to use a third-party/specific protector with a VM
So I've looked at USB tokens (PKI, FIDO2, PIV, ...), but:
FIDO2 allows the customer to reset the PIN and clear all certificates; no good, as I want to burn the key pair inside before shipping to the customer
PIV: I haven't found any cheap PIV-only USB token; some expensive FIDO2 tokens also have a PIV interface, but...
to talk to FIDO2 and PIV I would need all the library bloat that I very much dislike (and it also needs admin rights, which I want to avoid)
PC/SC USB tokens are the most low-level to use (the mscard library and do whatever), which is nice, but... ISO 7816-8/-9 are not public and cost around 300 bucks, possibly only to find that my card vendor implemented custom stuff
I have a 0x80-byte blob to send to the dongle for it to powmod(), that's all: no X.509, no PKCS#11, no Base64, nothing human; I just need a powmod(data) or an aes_dec(data).
Any suggestions?
While this is not a full answer, I would like to address some issues:
You may underestimate the complexity required. Some specification of whether an RSA or an AES operation is requested is obviously necessary. This has to show up somewhere, either as a command parameter or as a set-up command (between the host and the connected token).
Pure modular exponentiation is unlikely to reach the desired level of security, since RSA depends on padding to exclude some kinds of attack (see the padding sketch below).
You may not like the PKCS#11 interface, but it is proven and known to introduce no security issues; achieving the same on your own may require notable effort.
Given the mentioned EAL levels, my guess would be that you need a smart card chip with a USB interface.
The MCU is pretty irrelevant: to get crypto operations hardened, you need special hardware (such as cryptographic coprocessors). How old the architecture of the chip that feeds bytes to them is has little influence.
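To illustrate the padding point: feeding raw data straight into powmod() enables several classic attacks, which is why RSA encryption schemes wrap the message first. Below is a sketch of the RSAES-PKCS1-v1_5 encoding step (EM = 0x00 || 0x02 || PS || 0x00 || M, per RFC 8017 section 7.2.1); get_nonzero_random_bytes() is a hypothetical RNG hook, and a modern design would prefer OAEP.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical RNG hook: fills buf with nonzero random bytes. */
extern void get_nonzero_random_bytes(uint8_t *buf, size_t len);

/* RSAES-PKCS1-v1_5 encoding: EM = 0x00 || 0x02 || PS || 0x00 || M,
 * where PS is at least 8 nonzero random bytes and k is the modulus
 * length in bytes.  The padded block EM is what gets fed to powmod(). */
bool pkcs1_v15_pad(const uint8_t *msg, size_t msg_len,
                   uint8_t *em, size_t k)
{
    if (k < 11 || msg_len > k - 11)     /* message too long for this modulus */
        return false;

    size_t ps_len = k - msg_len - 3;    /* guaranteed >= 8 by the check above */

    em[0] = 0x00;
    em[1] = 0x02;                       /* block type 2: encryption */
    get_nonzero_random_bytes(&em[2], ps_len);
    em[2 + ps_len] = 0x00;
    memcpy(&em[3 + ps_len], msg, msg_len);
    return true;
}
```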

Bluetooth Security Concern

We are developing sensors which will be distributed in large quantities and which broadcast BLE advertisements every 5 s in order to give access to DFU and data sending. The DFU is encrypted by the manufacturer; however, the data-sending service (NUS/UART) is left open, so we are looking for ways to encrypt the data or limit access to this service from unwanted users. A static PIN could be used; however, since it is usually only 4 digits long, there are only 10,000 combinations. It would be appreciated if you could shed some light on this.
The Bluetooth standard won't help you solve this in a good way. Its pairing / bonding features are designed to prevent remote attacks while a user is pairing with the device, not to prevent any person from pairing at all. You should see the question as a general question and not a Bluetooth-specific one in my opinion.
Unless you want to pre-bond all the sensors to some legit device and then prevent new pairings (which would of course solve your problem, but might be cumbersome in practice), you should use something other than what the Bluetooth standard offers.
For example, if you are happy with having a password to access the sensors, you can implement a PAKE scheme (https://en.m.wikipedia.org/wiki/Password-authenticated_key_agreement) and then encrypt and sign all data using the derived key. You can also simply use TLS, or some other certificate-based solution.
If you are lazy and think it's too hard to implement proper cryptography, you can otherwise just have a characteristic that the user writes a password to, and if it's accepted, the data-sending service opens up (a sketch follows below). This is of course insecure, because an attacker can sniff the connection and find the password. The same applies when you have a static PIN and use standard Bluetooth pairing.
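For completeness, a sketch of that password-gated characteristic, written stack-agnostically: the handler name, password storage and lengths are placeholders rather than any particular SDK's API. As noted above, anyone sniffing the connection still sees the password, so this is only a weak gate.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Placeholder password, provisioned per product/customer. */
static const uint8_t unlock_password[8] = {
    0x6b, 0x1f, 0x9a, 0x42, 0x07, 0xd3, 0x55, 0xe8
};

static bool data_service_unlocked = false;

/* Constant-time comparison so the check doesn't leak how many leading
 * bytes matched. */
static bool const_time_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}

/* Hypothetical write handler for the "unlock" characteristic: the NUS/UART
 * data path stays disabled until the correct password has been written. */
void on_unlock_char_write(const uint8_t *value, uint16_t len)
{
    if (len == sizeof(unlock_password) &&
        const_time_equal(value, unlock_password, len))
        data_service_unlocked = true;
}
```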

How to do bluetooth pairing at factory time

I have some Bluetooth LE v4.2 beacons that I will connect ONLY with known devices that we may call "readers". The beacons are programmed and installed by me. I consume the data and I sell the service.
I want to use a hard-coded shared secret to realize the pairing or communication. My primary concern is that only a known and authenticated device SHALL be able to send data (with integrity protection).
What would be my best option?
A few details:
We are talking about 1000s of devices, and more will join the network every day.
I am already doing advertisement filtering, etc. I only connect to devices with my vendor ID.
Replacement is preferable to any kind of lack of security in the authentication; my added value is the trust in the data.
I have an OTA update system for all the devices.
Interesting documentation I found about Bluetooth Low Energy (BLE) security:
NIST Guide to Bluetooth Security
An answer to my question on the Nordic DevZone gave me some hints. Below are the answers I was looking for; I hope they will help.
Mode 1 Level 4 (encryption) vs Mode 2 Level 2 (signing)
Resources:
Nordic DevZone question
Forget about CSRK. It's a bad idea that almost no BLE stacks support. One reason is that it only supports Write Without Response in one direction. Another is that you need to keep a write counter stored in flash. A third is that a MITM could potentially delay a message for an arbitrary time and doesn't need an active connection during this time. It has no benefits at all compared to the normal AES-CCM except that CCM takes 2.5 round trips to set up for BLE.
How to ensure secure encryption with a pre-shared secret
Resources:
Nordic DevZone question
Stack Overflow question
Nordic DevZone: Pre-shared key = OOB
Nordic DevZone: OOB LESC vs Legacy
Do we need pairing?
No pairing:
If you remove the pairing step from BLE security you basically just have AES-CCM with pre-shared keys, where each connection has its own key derived from the shared key and a nonce from each side. LESC is about the pairing step, which you want to remove, so that doesn't apply in that case.
Vs Out-of-Band (OOB):
A pre-shared key is an example of OOB (Out of band) pairing. That might sound a bit strange, but essentially you are using the production setup in your factory as the medium to share keys. You do not want to have the LTK or any BLE bonding data pre-shared, but rather just a key at some location in flash which can be used in a regular OOB pairing.
The preferred solution is Out-of-Band pairing.
LESC with pre-shared passkey vs OOB with pre-shared key?
Resources:
Nordic DevZone: Pre-shared key
The first time you connect you should authenticate the other device, and you can do this by using your pre-shared key when you bond. You can bond by using Passkey Entry or OOB. The key used with Passkey Entry is short, so I would recommend using a 128-bit key with OOB, this is much more secure.
Out of band LESC Vs Out Of Band Legacy
Both LESC and Legacy end up with 128-bit encryption keys, and these are equally secure. The power consumption will be the same after pairing is done. LESC uses a more complex algorithm so it will use more power during the pairing process. The difference is in the key generation algorithm. It depends on what kind of attacks you want to protect against. If you do OOB with legacy and you are sure that the attacker can't get the OOB data, you are secure. If the attacker can get this data, you should go for LESC. What kind of central device are you connecting to? Does it support OOB and/or LESC?
In fact, LESC Out-of-Band with a pre-shared key is quite complicated to achieve, because the OOB payload is supposed to be a random number signed with a private key, and this mechanism is implemented in the SoftDevice but not accessible. Thus we could either reinvent the wheel, or decide that this computation is pointless, since eavesdropping on the out-of-band channel is simply impossible with a pre-shared key. Also, LESC OOB pairing is more computation-intensive for no benefit.
Out of band Legacy
For more detailed explanations of Out of band Legacy pairing, see bluetooth.com.
Temporary key calculation
A master key will be included in the new FW release code (that's probably my major weakness, but I cannot do much about it). I will use legacy Out-of-Band pairing. The Temporary Key (TK), used to encrypt the pairing communication, will be derived from the master key using a key-generation function fc (inspired by the f5 function described in the Bluetooth specification); a code sketch follows the definition below.
The definition of this key generation function fc makes use of the MAC function AES-CMAC_T with a 128-bit key T.
The inputs of the function are:
M is 128 bits
A1 is 56 bits
A2 is 56 bits
The string “******” is mapped into keyID using extended ASCII as follows:
keyID = 0xXXXXXXXXXXXX
The output of the key generation function fc is as follows:
fc(M, A1, A2) = AES-CMAC_M(keyID || 0x00 || A1 || A2 || Length = 128)
The TK is calculated as:
TK = fc(Master key, DB_ADDR_master, DB_ADDR_slave)
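A sketch of this derivation in C, assuming an aes_cmac() primitive is available from the device's crypto library (its name and signature here are placeholders), and assuming Length = 128 is appended as the two bytes 0x00 0x80; since fc is a custom function, the exact concatenation encoding is up to its author.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Placeholder for an AES-CMAC primitive from the device's crypto library. */
extern void aes_cmac(const uint8_t key[16],
                     const uint8_t *msg, size_t msg_len,
                     uint8_t mac[16]);

/*
 * fc(M, A1, A2) = AES-CMAC_M(keyID || 0x00 || A1 || A2 || Length)
 *
 *   M      : 128-bit master key (used directly as the CMAC key)
 *   A1, A2 : 56-bit device addresses (7 bytes each)
 *   keyID  : 6-byte constant derived from the chosen ASCII string
 *   Length : 128, encoded here as the two bytes 0x00 0x80 (an assumption)
 */
void fc_derive_tk(const uint8_t master_key[16],
                  const uint8_t key_id[6],
                  const uint8_t a1[7], const uint8_t a2[7],
                  uint8_t tk[16])
{
    uint8_t msg[6 + 1 + 7 + 7 + 2];
    size_t off = 0;

    memcpy(&msg[off], key_id, 6); off += 6;
    msg[off++] = 0x00;
    memcpy(&msg[off], a1, 7);     off += 7;
    memcpy(&msg[off], a2, 7);     off += 7;
    msg[off++] = 0x00;            /* Length = 128 */
    msg[off++] = 0x80;

    aes_cmac(master_key, msg, off, tk);   /* TK = fc(M, A1, A2) */
}
```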
I wouldn’t pair in factory, but instead add other programmatically controlled mechanisms in the FW. I’m thinking bondable LE links, whitelisted MAC-addresses (as long as we’re not talking about random/obfuscated addresses).
If you have access to the chip/design in production, you could let the production test station use an available wired/wireless interface and add the whitelisted MAC addresses there...?
Or, use vendor-specific data in the BLE advertising data and add X identification bytes that you filter on in the LE central (a parsing sketch follows this answer).
Or, use groups of custom Service UUIDs and add to the adv data, allowing centrals to filter on that.
Etc., etc. The point is: my experience of setting up production pre-paired stuff has always ended in chaos, and there should always be a mechanism to clear the pairing and manually set things up as you, or your customer, want. How else would you deal with replacements, upgrades, and sudden implicit or explicit breaking changes? Always design things so that there's a way to get them up and running from scratch again. Depending on the product, that might be a config tool on a PC, an admin mode in your phone app, or the like, but don't rely on production-defined pairings.
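As an illustration of the vendor-data filtering suggestion above, here is a generic central-side parser for BLE advertising data (AD structures) that accepts only advertisements carrying manufacturer-specific data (AD type 0xFF) with a given company ID and tag byte; the company ID and tag values are placeholders.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define AD_TYPE_MANUFACTURER_DATA 0xFF
#define MY_COMPANY_ID             0x1234   /* placeholder Bluetooth SIG company ID */
#define MY_DEVICE_TAG             0xA7     /* placeholder product identification byte */

bool adv_is_ours(const uint8_t *adv, size_t adv_len)
{
    size_t i = 0;
    while (i + 1 < adv_len) {
        uint8_t field_len = adv[i];            /* length of AD type + payload */
        if (field_len == 0 || i + 1 + field_len > adv_len)
            break;                             /* malformed advertisement */
        uint8_t type = adv[i + 1];
        const uint8_t *data = &adv[i + 2];

        if (type == AD_TYPE_MANUFACTURER_DATA && field_len >= 4) {
            /* Company ID is little-endian in the first two data bytes. */
            uint16_t company = (uint16_t)(data[0] | (data[1] << 8));
            if (company == MY_COMPANY_ID && data[2] == MY_DEVICE_TAG)
                return true;                   /* one of our beacons */
        }
        i += 1 + field_len;                    /* next AD structure */
    }
    return false;
}
```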

Protocol/Packet Design Questions

I'm looking to a design a protocol for a client-server application and need some links to some resources that may help me.
The big part is that I'm trying to create my own "packet" format so I can minimize the amount of information being sent. I'm looking at existing protocols for resources to dissect their design, but it seems some completely lack packet design, such as SMTP (which just sends strings terminated by CRLF). What are the advantages/disadvantages of using a system like SMTP over a system that uses a custom-made packet? Couldn't SMTP use only a couple of bytes to cover all commands through bit flags and save bandwidth/space?
Just trying to get my head around all this.
True, but SMTP wasn't particularly optimized for space, nor is it a packet-based protocol. It sits atop TCP, and uses the stream functionality of TCP. You need to decide what is desirable in your protocol: is it performance sensitive? latency? bandwidth?
Is it going to need to run as superuser? If not, you'll probably want to use UDP or TCP.
Are you going to need guarantees on delivery? If so, TCP is probably your best option, unless you are dealing with fairly extreme performance or size issues.
Few protocols these days design individual packets, though many do send very specific data structures across the wire using TCP, or, less commonly, UDP.
If you want to really optimize for space or bandwidth, consider condensing your data as much as possible into individual bits and bytes, and defining and packing structures to send it across TCP (see the sketch below). Modern network adapters are so optimized for TCP anyway that there is often little advantage to other strategies.
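A small sketch of what "packing structures" can look like in practice: a compact binary header serialized field by field into network byte order, rather than sending a raw struct (which would depend on compiler padding and host endianness). The field names and sizes are invented for illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Invented example header: one byte of message type, one byte of bit
 * flags, a 16-bit payload length and a 32-bit device ID. */
typedef struct {
    uint8_t  msg_type;
    uint8_t  flags;
    uint16_t payload_len;
    uint32_t device_id;
} msg_header_t;

/* Serialize field by field into big-endian (network) byte order, so the
 * wire format does not depend on struct padding or host endianness. */
size_t serialize_header(const msg_header_t *h, uint8_t out[8])
{
    out[0] = h->msg_type;
    out[1] = h->flags;
    out[2] = (uint8_t)(h->payload_len >> 8);
    out[3] = (uint8_t)(h->payload_len & 0xFF);
    out[4] = (uint8_t)(h->device_id >> 24);
    out[5] = (uint8_t)(h->device_id >> 16);
    out[6] = (uint8_t)(h->device_id >> 8);
    out[7] = (uint8_t)(h->device_id & 0xFF);
    return 8;   /* bytes written */
}
```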
First of all, are you about to implement an enhanced transport protocol (like RTP on top of UDP) or an application protocol (like HTTP/SMTP)?
There are several things you should think about in both cases concerning your design of the protocol or the demands of your application:
Stream based or packet based,
unidirectional / bi-directional,
stateful and sessionful or stateless,
reliable or best effort,
timing demands,
flow/congestion control,
secure or plain.
Towards an application layer protocol, you should also think about:
Textual or binary data, mapping of application data to network data units/packets, security demands and integrity, etc.
SMTP, HTTP and other TCP based protocols do not concern themselves with packet design because they are stream based. So it makes more sense to talk about protocol design.
As for why use text based protocols vs binary protocols...
Readability of the protocol by packet sniffing programs like Wireshark is very useful.
Also it is often very useful to be able to simply telnet into your port and communicate with the server by typing plain text.
Also, with a protocol like HTTP, the actual resource is usually the payload of the communication; the resource can be in binary or any other specified format. So having just the headers and status in plain text is not a bad thing.
TCP is a stream based protocol, not packet based.
Using text with lines makes ad hoc debugging a lot easier
Using text with lines makes it possible to exercise your protocol with telnet

Answer modem using voip

I have an application where I have about 10,000 pieces of monitoring equipment across the US that periodically dial into a bank of 32 phone lines. I have two receivers of 16 lines each that answer the calls and temporarily store a small alpha string. I then have a computer that polls the receivers, parses the string, and copies it to a database.
I am looking to replace the phone lines and the receivers with a voip solution and rewrite the software to parse the data string.
Any ideas on where to get started?
Tom's suggestion about Asterisk is a good one for the overall system.
However, you will still need to decode the data sent from your remote equipment from an audio signal to a data signal. That task is what the "dem" part of "modem" stands for (modulate/demodulate). Either you do this with a canned hardware/software package (as you are currently doing with a commercial modem), or you have to emulate the modem in software yourself, which will be extremely tricky to code at the very least (there are heaps of standards to comply with for a general modem solution, and the solution needs to work in real time).
For the software approach, you could start with Linmodems.org (just something I saw on Google, prompted by your question). Alternatively, do lots of searches on Google for software modems. Getting someone else's code is the best approach for this sort of thing :)
Whatever you end up doing I suspect it will be rather custom.
A good place to start is probably Asterisk PBX.
I take it you don't want to replace the modems at the client sites (the easiest thing on the server side would be if each client had its own IP software stack, used its modem to call an ISP and establish an Internet connection, and then talked to your server using TCP or UDP or HTTP or whatever).
Assuming that you don't have IP capability on the client sites, Googling suggests that the relevant technology is called "Modem over IP" or "MoIP" (which Wikipedia seems to be confusing with "Mobile over IP").
VoIP consists of SIP for signalling (e.g. for call set-up and call tear-down) plus some codecs (e.g. G.711) for traffic (encoded voice) while the call is established.
I'm guessing that MoIP can keep the SIP signalling, but needs to use some different codecs.
V.150 Modem over IP White Paper looks like an introduction to the technologies. I don't know what vendors there are.
I presume you are looking for a way to do this without modifying the modem hardware at your remote sites. If this is the case, you will have to find or write signal-processing software to demodulate the encoded signal from the modem. Fortunately, the signal encodings used by modems are designed to be easy to do this with.
Maybe somebody makes software modem libraries that do this sort of thing. The other part of the problem will be emulating the handshaking of the modem so it plays nicely with the remote sites.
If you can modify the software (really just the number to dial, but it would have to include the data you want to transfer) at the 10,000 sites (not likely!), you could in theory use DTMF in the "dial" string to key the data over into Asterisk. OK, more than a bit hacky, but it would avoid having to have a software modem. Note: you'd want a checksum (and maybe send it multiple times), and a way to tell the caller whether it was received correctly. Like I said, hacky but cute.
