Military-grade engineering challenge - security

I am trying to create a special military RADIO transmitter.
Basically, the flow is:
A soldier will receive a message to transmit (about 10 times a day). Each message is exactly 1024 bits long.
He will insert this message into the radio and validate it is inserted correctly.
The RADIO will repetitively transmit this message.
It is very important that the transmitter cannot be hacked, because it is critical in times of emergency.
So, the assistance I ask from you is: how do I perform stage 2 without risking getting infected?
If I transfer the data using a DOK (a USB flash drive), it may be hacked.
If I make the user type in 1024 bits, it will be safe but exhausting.
Any ideas? (unlimited budget)
(It’s important to say that the data being transmitted is not a secret)
Thanks for any help you can supply!
Danny
Edit:
Basically, I want to create the most secure way to transfer a fixed number of bits (in this case 1024) from one computer (which may be infected) to another, air-gapped computer, without any risk of a virus being transferred as well.
I don't mind if a hacker changes the data in transit from the infected computer; I just want the transferred data to be exactly 1024 bits long, and to prevent any virus from being introduced onto the other computer.
Punch card (https://en.wikipedia.org/wiki/Punched_card) sounds like a good option, but an old one.
Any alternatives?

The transmitter is in the field, and is one dead soldier away from falling into enemy hands at any time. The enemy will take it apart, dissect it, learn how it works, and use the protocol to send fraudulent messages that may contain exploit code to you, with or without the original equipment. You simply cannot prevent a transmitter, or otherwise mocked-up "enemy" version of a transmitter, from potentially transmitting bad stuff, because those are outside of your control. This is the old security adage "Never trust the client" taken to its most extreme level. Even punch cards could be tampered with.
Focus on what you can control: the receiving (or host) computer (which, contrary to your description, is not air-gapped, as it receives outside communication in your model) will need to validate the messages that come in from the client source; this validation will need to check for malicious messages and handle them safely (don't do anything functional with them; just log it, alert somebody, and move on with life).
Your protocol should only treat inbound messages as text or as identifiers for message types. Under no circumstances should you try to interpret them as machine-language instructions, and any SQL query or string that a message is appended to should be properly parameterized or sanitized. This will prevent the host from executing any nasties that do come in.
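To make that concrete, here is a minimal sketch of the host-side validation, assuming the host receives raw frames over some one-way link; handle_frame, alert_operator and transmit are hypothetical names, not part of any real system:

    # A minimal sketch of the validation idea: reject anything that is not
    # exactly 1024 bits, treat accepted payloads as opaque data, and never
    # execute or interpret them. All names here are hypothetical.
    import logging

    EXPECTED_BITS = 1024
    EXPECTED_BYTES = EXPECTED_BITS // 8  # 128 bytes

    def handle_frame(frame: bytes) -> None:
        if len(frame) != EXPECTED_BYTES:
            logging.warning("rejected frame of %d bytes", len(frame))
            alert_operator()      # hypothetical: alert a human, then move on
            return
        # Treat the payload as opaque: store/forward it, never execute it,
        # and use parameterized queries if it ever touches a database, e.g.:
        #   cur.execute("INSERT INTO messages (payload) VALUES (?)", (frame,))
        transmit(frame)           # hypothetical transmit hook

    def alert_operator() -> None: ...
    def transmit(frame: bytes) -> None: ...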

Related

Getting the relevant data from a VISA resource in LabVIEW to use in pyVISA

I'm volunteering at a university lab, and I was tasked with removing the dependency on LabVIEW (among other things).
The only problem there for me is the VISA resource. I have no clue (and can't seem to figure out) what exactly the format of the data being sent is.
The VISA buffer seems to get a string, but I've been told that what's being sent is just numbers (0-255), which makes sense, except for the fact that the buffer receives a string.
When I looked at the COM port using MAX, I saw that there's a termination character on write only (which does make sense, given that the device isn't meant to send any data back).
The baud rate on the COM port also says 96,000, while the block diagram inputs a higher number when initializing the VISA resource (though I didn't check it through MAX after running the thing, so it may just stay at the default until I run it).
The device also doesn't respond to an *IDN? query (times out), though I hope it's not a problem since, as mentioned, the device isn't meant to send back data, but I'm assuming whatever chip implements the VISA protocol on that side should also respond. pyVISA throws no errors (even with logging enabled), and any attempt to write just gives me success code 0.
All in all, short of debugging LabVIEW to see exactly what's being fed to the buffer (which I haven't done yet; as a volunteer I'm not sure I'm even entitled to a license of LabVIEW on my laptop), I'm at a loss as to how to get all the information I need to imitate what's going on in LabVIEW with pyVISA. Right-clicking on the VISA resource and looking at its properties is of little help.
Note: I'm using pyVISA-py as a backend for pyVISA, since it seems I'd also need a license for NI's VISA drivers.
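For what it's worth, here is a minimal pyVISA-py sketch of mirroring a LabVIEW VISA setup, assuming a serial (ASRL) resource; the resource name, baud rate, terminator and payload below are placeholders you would replace with the values MAX and the block diagram actually show:

    # Minimal pyVISA-py sketch: open the same serial resource LabVIEW uses
    # and mirror its settings. The resource name, baud rate and terminator
    # are placeholders -- copy the real values from MAX / the block diagram.
    import pyvisa

    rm = pyvisa.ResourceManager('@py')        # force the pyvisa-py backend
    print(rm.list_resources())                # find the ASRL name of the port

    inst = rm.open_resource('ASRL3::INSTR')   # hypothetical port name
    inst.baud_rate = 96000                    # match whatever LabVIEW sets
    inst.write_termination = '\n'             # MAX showed a write-only termchar
    inst.read_termination = None              # device never sends data back

    # If the device really just takes raw 0-255 values, bypass the string
    # encoding entirely and send bytes:
    inst.write_raw(bytes([0x01, 0xFF, 0x10])) # hypothetical payload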

How to protect against Replay Attacks

I am trying to figure out a way to implement decent crypto on a micro-controller project. I have an ARMv4 based MCU that will control my garage door and receive commands over a WiFi module.
The MCU will run a TCP/IP server, that will listen for commands from Android clients that can connect from anywhere on the Internet, which is why I need to implement crypto.
I understand how to use AES with shared secret key to properly encrypt traffic, but I am finding it difficult to deal with Replay Attacks. All solutions I see so far have serious drawbacks.
There are two fundamental problems which prevent me from using well-established methods like session tokens, timestamps, or nonces:
The MCU has no reliable source of entropy, so I can't generate quality random numbers.
The attacker can reset the MCU by cutting power to the garage, thus erasing any stored state at will, and resetting the time counter to zero (or just wait 49 days until it loops).
With these restrictions in mind, I can see only one approach that seems OK to me (i.e. I don't see how to break it yet). Unfortunately, this requires non-volatile storage, which means writing to external flash, which introduces some serious complexity due to a variety of technical details.
I would love to get some feedback on the following solution. Even better, is there a solution I am missing that does not require non-volatile storage?
Please note that the primary purpose of this project is education. I realize that I could simplify this problem by setting up a secure relay inside my firewall, and let that handle Internet traffic, instead of exposing the MCU directly. But what would be the fun in that? ;)
= Proposed Solution =
A pair of shared AES keys will be used. One key to turn a Nonce into an IV for the CBC stage, and another for encrypting the messages themselves:
Shared message Key
Shared IV_Key
Here's a picture of what I am doing:
https://www.youtube.com/watch?v=JNsUrOVQKpE#t=10m11s
1) Android takes the current time in milliseconds (Ti) (64-bit long) and uses it as a nonce input into the CBC stage to encrypt the command:
a) IV(i) = AES_ECB(IV_Key, Ti)
b) Ci = AES_CBC(Key, IV(i), COMMAND)
2) Android utilizes /dev/random to generate the IV_Response that the MCU will use to answer the current request.
3) Android sends [<Ti, IV_Response, Ci>, <== HMAC(K)]
4) MCU receives and verifies integrity using the HMAC, so an attacker can't modify the plaintext Ti.
5) MCU checks that Ti > T(i-1) stored in flash. This ensures that recorded messages can't be replayed.
6) MCU calculates IV(i) = AES_ECB(IV_Key, Ti) and decrypts Ci.
7) MCU responds using AES_CBC(Key, IV_Response, RESPONSE)
8) MCU stores Ti in external flash memory.
Does this work? Is there a simpler approach?
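To make the MCU side concrete, here is a sketch of steps 4, 5 and 8 (HMAC verification plus the monotonic-timestamp check), using Python's stdlib HMAC-SHA256 in place of whatever MAC is chosen; the AES decryption of Ci is elided, and last_ti_from_flash/store_ti are hypothetical flash hooks:

    # Sketch of steps 4-5 and 8: verify the HMAC over the plaintext fields,
    # then enforce Ti > T(i-1). HMAC-SHA256 stands in for the MAC; the AES
    # decryption of Ci is elided. Flash hooks are hypothetical.
    import hmac, hashlib, struct

    def verify_request(k_mac: bytes, msg: bytes):
        if len(msg) < 8 + 32:
            return None
        body, tag = msg[:-32], msg[-32:]
        expect = hmac.new(k_mac, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            return None                        # step 4: integrity check failed
        ti = struct.unpack('>Q', body[:8])[0]  # 64-bit big-endian Ti
        if ti <= last_ti_from_flash():         # step 5: reject replays
            return None
        store_ti(ti)                           # step 8
        return body[8:]                        # IV_Response || Ci, still encrypted

    def last_ti_from_flash() -> int: ...
    def store_ti(ti: int) -> None: ...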
EDIT: It was already shown in comments that this approach is vulnerable to a Delayed Playback Attack. If the attacker records and blocks messages from reaching the MCU, then the messages can be played back at any later time and still be considered valid, so this algorithm is no good.
As suggested by #owlstead, a challenge/response system is likely required. Unless I can find a way around that, I think I need to do the following:
Port or implement a decent PRGA. (Any recommendations?)
Pre-compute a lot of random seed values for the PRGA. A new seed will be used for every MCU restart. Assuming 128-bit seeds, 16K of storage buys me 1000 unique seeds, so the values won't loop until the MCU successfully uses at least one PRGA output value and restarts 1000 times. That doesn't seem too bad.
Use the output of PRGA to generate the challenges.
Does that sound about right?
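A sketch of what that plan might look like, with an HMAC-based generator standing in for "a decent PRGA"; the seed table and boot counter here are in-memory stand-ins for data that would really live in flash:

    # Sketch of the seed-table idea: a boot counter selects the next unused
    # 128-bit seed on every restart, and the seed drives an HMAC-based
    # generator for challenges. In reality the table and counter live in flash.
    import hmac, hashlib, struct, os

    SEED_TABLE = [os.urandom(16) for _ in range(1000)]  # stand-in for flash
    _boot_count = 0

    def increment_boot_count() -> int:
        global _boot_count
        _boot_count += 1      # hypothetical: persisted to flash in reality
        return _boot_count

    class ChallengeGen:
        """HMAC-based generator standing in for 'a decent PRGA'."""
        def __init__(self, seed: bytes):
            self.seed, self.ctr = seed, 0
        def next_challenge(self) -> bytes:
            self.ctr += 1
            return hmac.new(self.seed, struct.pack('>Q', self.ctr),
                            hashlib.sha256).digest()[:16]

    boot = increment_boot_count()             # new seed on every restart
    gen = ChallengeGen(SEED_TABLE[boot % len(SEED_TABLE)])
    challenge = gen.next_challenge()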
Having an IV_KEY is unnecessary. IVs (and similar constructs, such as salts) do not need to be encrypted, and if you look at the image you linked to in the youtube video you'll see their description of the payload includes the IV in plaintext. They are used so that the same plaintext does not encode to the same ciphertext under the same key every time, which presents information to an attacker. The protection against the IV being altered is the HMAC on the message, not the encryption. As such, you can remove that requirement. EDIT: This paragraph is incorrect, see discussion below. As noted though, your approach described above will work fine.
Your system does have a flaw though, namely the IV_Response. I assume, from the fact that you include it in your system, that it serves some purpose. However, because it is not in any way encoded, it allows an attacker to respond affirmatively to a device's request without the MCU receiving it. Let's say that your devices were instructing an MCU running a small robot to throw a ball. Your commands might look like:
1) Move to loc (x,y).
2) Lower anchor to secure bot to table.
3) Throw ball
Our attacker can allow messages 1 and 3 to pass as expected, and block 2 from getting to the MCU while still responding affirmatively, causing our bot to be damaged when it tosses the ball without being anchored. This does have an imperfect solution. Append the response (which should fit into a single block) to the command so that it is encrypted as well, and have the MCU respond with AES_ECB(Key, Response), which the device will verify. As such, the attacker will not be able to forge (feasibly) a valid response. Note that as you consider /dev/random untrustworthy this could provide an attacker with plaintext-ciphertext pairs, which can be used for linear cryptanalysis of the key provided an attacker has a large set of pairs to work with. As such, you'll need to change the key with some regularity.
Other than that, your approach looks good. Just remember it is crucial that you use the stored Ti to protect against the replay attack, and not the MCU's clock. Otherwise you run into synchronization problems.
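A sketch of the suggested fix, using the third-party pyca/cryptography package for the single-block AES-ECB response (ECB is tolerable here only because exactly one block is encrypted per exchange, as the answer suggests):

    # Sketch of the fix: the expected response token travels inside the
    # encrypted command, and the MCU proves receipt by returning
    # AES_ECB(Key, response). Uses the third-party pyca/cryptography package.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_block(key: bytes, block16: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block16) + enc.finalize()

    key = os.urandom(16)        # shared key (demo value)
    expected = os.urandom(16)   # response token, sent inside the command

    mcu_reply = encrypt_block(key, expected)          # what the MCU sends back
    assert mcu_reply == encrypt_block(key, expected)  # device-side verification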

Securely transmit commands between PIC microcontrollers using nRF24L01 module

I have created a small wireless network using a few PIC microcontrollers and nRF24L01 wireless RF modules. One of the PICs is a PIC18F46K22, and it is used as the main controller which sends commands to all other PICs. All other (slave) microcontrollers are PIC16F1454; there are 5 of them so far. These slave controllers are attached to various devices (mostly lights). The main microcontroller is used to transmit commands to those devices, such as turning lights on or off. These devices also report the status of the attached devices back to the main controller, which then displays it on an LCD screen. This whole setup is working perfectly fine.
The problem is that anybody who has these cheap nRF24L01 modules could simply listen to the commands which are being sent by the main controller and then repeat them to control the devices.
Encrypting the commands wouldn't be helpful by itself: these are simple instructions, so when encrypted they will always look the same, and one does not need to decrypt a message to be able to retransmit it.
So how would I implement a level of security in this system?
What you're trying to do is to prevent replay attacks. The general solution to this involves two things:
Include a timestamp and/or a running message number in all your messages. Reject messages that are too old or that arrive out of order.
Include a cryptographic message authentication code in each message. Reject any messages that don't have the correct MAC.
The MAC should be at least 64 bits long to prevent brute force forgery attempts. Yes, I know, that's a lot of bits for small messages, but try to resist the temptation to skimp on it. 48 bits might be tolerable, but 32 bits is definitely getting into risky territory, at least unless you implement some kind of rate limiting on incoming messages.
If you're also encrypting your messages, you may be able to save a few bytes by using an authenticated encryption mode such as SIV that combines the MAC with the initialization vector for the encryption. SIV is a pretty nice choice for encrypting small messages anyway, since it's designed to be quite "foolproof". If you don't need encryption, CMAC is a good choice for a MAC algorithm, and is also the MAC used internally by SIV.
Most MACs, including CMAC, are based on block ciphers such as AES, so you'll need to find an implementation of such a cipher for your microcontroller. A quick Google search turned up this question on electronics.SE about AES implementations for microcontrollers, as well as this blog post titled "Fast AES Implementation on PIC18F4550". There are also small block ciphers specifically designed for microcontrollers, but such ciphers tend to be less thoroughly analyzed than AES, and may harbor security weaknesses; if you can use AES, I would. Note that many MAC algorithms (as well as SIV mode) only use the block cipher in one direction; the decryption half of the block cipher is never used, and so need not be implemented.
The timestamp or message number should be long enough to keep it from wrapping around. However, there's a trick that can be used to avoid transmitting the entire number with each message: basically, you only send the lowest one or two bytes of the number, but you also include the higher bytes of the number in the MAC calculation (as associated data, if using SIV). When you receive a message, you reconstruct the higher bytes based on the transmitted value and the current time / last accepted message number and then verify the MAC to check that your reconstruction is correct and the message isn't stale.
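A sketch of that truncated-counter trick, with HMAC-SHA256 truncated to 8 bytes standing in for CMAC, and the sender/recipient identities folded into the MAC input (see the Ps. below); the frame layout is made up for illustration:

    # Sketch of the truncated-counter trick: only the low 2 bytes of the
    # counter go on the air, but the full counter (plus sender/recipient IDs)
    # is MACed. HMAC-SHA256 truncated to 8 bytes stands in for CMAC.
    import hmac, hashlib, struct

    def mac(key, sender, recipient, counter, payload):
        data = bytes([sender, recipient]) + struct.pack('>Q', counter) + payload
        return hmac.new(key, data, hashlib.sha256).digest()[:8]

    def send(key, sender, recipient, counter, payload):
        return struct.pack('>H', counter & 0xFFFF) + payload + \
               mac(key, sender, recipient, counter, payload)

    def recv(key, sender, recipient, last_counter, msg):
        low = struct.unpack('>H', msg[:2])[0]
        payload, tag = msg[2:-8], msg[-8:]
        # Reconstruct the full counter: the smallest value > last_counter
        # whose low 16 bits match what was transmitted.
        counter = (last_counter & ~0xFFFF) | low
        if counter <= last_counter:
            counter += 0x10000
        if hmac.compare_digest(tag, mac(key, sender, recipient, counter, payload)):
            return counter, payload       # accept and advance last_counter
        return None                       # stale, forged, or desynchronized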
If you do this, it's a good idea to have the devices regularly send synchronization messages that contain the full timestamp / message number. This allows them to recover e.g. from prolonged periods of message loss causing the truncated counter to wrap around. For schemes based on sequential message numbering, a typical synchronization message would include both the highest message number sent by the device so far as well as the lowest number they'll accept in return.
To guard against unexpected power loss, the message numbers should be regularly written to permanent storage, such as flash memory. Since you probably don't want to do this after every message, a common solution is to only save the number every, say, 1000 messages, and to add a safety margin of 1000 to the saved value (for the outgoing messages). You should also design your data storage patterns to avoid directly overwriting old data, both to minimize wear on the memory and to avoid data corruption if power is lost during a write. The details of this, however, are a bit outside the scope of this answer.
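A sketch of that save-every-N-with-margin idea; flash_read/flash_write here are in-memory stand-ins for a real storage driver:

    # Sketch of counter persistence: save every SAVE_INTERVAL messages, and
    # on boot resume from saved + SAVE_INTERVAL so that numbers sent after
    # the last save can never be reused.
    SAVE_INTERVAL = 1000
    _flash = {'ctr': 0}                    # stand-in for real flash storage

    def flash_read() -> int: return _flash['ctr']
    def flash_write(v: int) -> None: _flash['ctr'] = v

    def next_outgoing_counter(state: dict) -> int:
        state['ctr'] += 1
        if state['ctr'] % SAVE_INTERVAL == 0:
            flash_write(state['ctr'])
        return state['ctr']

    def restore_after_boot() -> dict:
        # The true counter is somewhere in [saved, saved + SAVE_INTERVAL),
        # so resuming above that whole margin is always safe.
        return {'ctr': flash_read() + SAVE_INTERVAL}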
Ps. Of course, the MAC calculation should also always include the identities of the sender and the intended recipient, so that an attacker can't trick the devices by e.g. echoing a message back to its sender.

How long should a message header/prefix be?

I've worked with a few protocols, and written my own. I have written some message formats with only 1 char to identify the message, and some with 4 chars. I don't feel that I'm experienced enough to tell which is better, so I'm looking for an answer which describes in which scenario one might be better than the other.
For performance, you would imagine that sending 2 bytes (A%1i) is faster than sending 5 bytes (ABCD%1i). However, I have noticed that when writing the protocol with the 1-byte prefix, if you have a bug which causes your code to not read enough data from the socket, you might get garbage data coming into your system.
So is the purpose of a 4-byte prefix just to provide a guarantee that your message is clean? Is it worth it for the performance you sacrifice? Do you really sacrifice any performance at all? Maybe it's better to have a 2- or 3-byte prefix?
I'm not sure if this question should be specific to TCP, or whether it applies to all transport protocols. Advice on this would be interesting.
Update: For interest, I will mention that Synergy uses 4-byte message prefixes, so for a mouse move delta the header is the same size as the actual data. Some have suggested having just a 1- or 2-byte prefix to improve efficiency. I wonder what drawbacks this would have?
Update: Also, I wonder whether only the handshake really matters, if you're worried about garbage data. Synergy has a long handshake (a few bytes), so are the 4-byte message prefixes needed? I made a protocol recently that has only a 1-byte handshake, and that turned out to be a bad idea, since incompatible protocols were spamming the system with bad data (off the back of this, I would recommend at least having a long handshake).
The purpose of the header is to make it easier to solve the frame synchronization problem (byte alignment in serial communication).
To synchronize, the receiver looks for anything in the data stream that "looks like" a start-of-message header.
If you have lots of different kinds of valid start-of-message headers, and all of them are 1 byte long, then you will inevitably get a lot of "false frame synchronizations" -- garbage from something that "looks like" a start-of-message header, but isn't.
It would be better to pick some other header that makes it "unlikely" that anything in the serial data stream "looks like" a valid start-of-message header.
It is inevitable that you will get garbage data coming into your system, no matter how you design the packet header.
Whatever you use to handle these other problems (such as occasional bit errors in the middle of the message) should also be adequate to handle the occasional "false frame synchronization" garbage.
In some systems, any bad data is quickly overwritten by fresh new good data, and if you blink you might never see the bad data.
Other systems need at least some sort of error detection in the footer to reject the bad data.
Yet other systems need to not only detect such errors, but somehow keep re-sending that message -- until both sides are convinced that an error-free version of that message has been successfully received.
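To illustrate, here is a sketch of resynchronization against a multi-byte start-of-message header; the frame layout (4-byte magic, 2-byte length, CRC-32 footer) is made up for illustration:

    # Sketch of frame resynchronization: scan for a 4-byte magic header,
    # check length and CRC, and on any mismatch slide forward one byte and
    # try again. The frame layout here is illustrative, not any real protocol.
    import struct, zlib

    MAGIC = b'SYNC'

    def extract_frames(stream: bytes):
        i, frames = 0, []
        while i + 6 <= len(stream):          # 4 magic bytes + 2 length bytes
            if stream[i:i+4] != MAGIC:
                i += 1                       # false sync: slide one byte
                continue
            (length,) = struct.unpack('>H', stream[i+4:i+6])
            end = i + 6 + length + 4         # payload + CRC-32 footer
            if end > len(stream):
                break                        # incomplete: wait for more data
            payload = stream[i+6:i+6+length]
            (crc,) = struct.unpack('>I', stream[end-4:end])
            if crc == zlib.crc32(payload):
                frames.append(payload)
                i = end                      # good frame: jump past it
            else:
                i += 1                       # corrupt: resynchronize
        return frames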
As Oleksi implied, in some systems the latency is not significantly different between sending a single binary bit (100 ms) and sending 10 bytes (102.4 ms).
So the advantages of using a tiny header (2.4% less latency!) may not be worth it compared to the advantages of using a more verbose header (easier debugging; easier to make backward-compatible and forward-compatible; easier to test the effect of minor changes "in isolation" without upgrading both sides in lockstep to the new protocol which is completely incompatible with the old protocol).
Perhaps you could get the best of both worlds by (a) keeping the verbose, easy-to-debug headers on messages that are so rarely used that the effect of tiny headers is too small to measure (which I suspect is nearly all messages), and (b) introducing a "tiny header" format for any kind of message where the effect of tiny headers is noticeably better, or at least measurable.
It looks like the Synergy protocol is flexible enough to add such a "tiny header" format in a way that is easily distinguishable from the other kinds of message headers.
I use Synergy between my laptop and a few desktop machines. I am glad someone is trying to make it even better.
The performance will depend on the content of the message you are sending. If your content is several kilobytes, it doesn't really matter how many bytes your header is. For now, I would choose the scheme that's easiest to work with, because the performance difference between sending one byte, or four bytes is going to be negligible compared to the actual data that you're sending.

How RealVNC works?

I would like to know how the RealVNC remote viewer works.
Does it frequently send screenshots to the client in real time, or does it use another approach?
As a very high-level overview, there are two types of VNC servers:
Screen-grabbing. These servers will capture the current display into a buffer, compare it to the client state, and send only the rectangles that differ to the client.
Hook-assisted. Hooking into the display update process, these servers will be informed when the screen changes by the display manager or OS. They can then use that information to send only the changed rectangles to the client.
In both cases, it is effectively a stream of screen updates; however, only the changed regions of the screen are transmitted to the client. Depending on the version of the VNC protocol in use, these updates may be compressed as well.
(Note that the client is free to request a complete screen update any time it wants to, but the server will only do this on its own if the entire screen is changed.)
Also, screen updates are not the only things transmitted. There are separate channels that the server can use to send clipboard updates and mouse position updates (since a user physically at the remote machine may be able to move the mouse too).
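A sketch of the screen-grabbing diff described above: split the framebuffer into tiles, compare against the client's last-known state, and queue only the changed tiles as update rectangles; the tile size and raw-byte buffer layout are illustrative:

    # Sketch of a screen-grabbing diff: compare two raw framebuffers tile by
    # tile and report only the changed tiles as update rectangles. A real
    # server would merge adjacent rectangles and compress the pixel data.
    TILE = 16

    def dirty_rects(prev: bytes, curr: bytes, width: int, height: int, bpp=4):
        rects = []
        stride = width * bpp
        for y in range(0, height, TILE):
            for x in range(0, width, TILE):
                h = min(TILE, height - y)
                w = min(TILE, width - x) * bpp
                changed = any(
                    prev[(y+r)*stride + x*bpp : (y+r)*stride + x*bpp + w] !=
                    curr[(y+r)*stride + x*bpp : (y+r)*stride + x*bpp + w]
                    for r in range(h))
                if changed:
                    rects.append((x, y, min(TILE, width - x), h))
        return rects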
The display side of the protocol is based around a single graphics primitive: "put a rectangle of pixel data at a given x,y position". At first glance this might seem an inefficient way of drawing many user interface components. However, allowing various different encodings for the pixel data gives us a large degree of flexibility in how to trade off various parameters such as network bandwidth, client drawing speed and server processing speed. A sequence of these rectangles makes a framebuffer update (or simply update). An update represents a change from one valid framebuffer state to another, so in some ways is similar to a frame of video. The rectangles in an update are usually disjoint but this is not necessarily the case.
Read the RFB protocol specification to find out more about how it works.
Yes. It just sends a sort of screenshot (compressed, and reusing unchanged portions of the previous screenshot).
This is, by the way, the VNC protocol; any client works that way (although the actual way images are compressed, etc., may change).
Essentially the server sends Frame Buffer Updates to the client and the client sends keyboard and mouse input and frame buffer update requests to the server.
Frame Buffer Update messages can have different encodings, but in essence they are different ways of representing square screen areas of pixel data. Generally the client asks for Frame Buffer Updates for the entire screen but it can ask for just an area of the screen (for example, small screen clients showing a viewport of the servers screen). The server then sends a FBU (frame buffer update) that contains rectangles where the screen has changed since the last FBU was sent to the client.
The best reference for the RFB/VNC protocol is here. The IETF has a recent (2011) standards document, RFC 6143, that covers RFB, although it is not as extensive as the reference guide.
It essentially works by sending screenshots on the fly. ("Real time" is something of a misnomer here, in that there is no clear deadline.) It does attempt to optimize by sending only areas of the screen that have changed, and some forks of the VNC code line use a mirror driver to receive notification when areas of the display are written to, while others use window message hooks to detect repaint requests.
