Possible severe security vulnerabilities that could go unnoticed for a long time

I've been wondering about this for a while, ever since reading an interesting article (probably old news for some of you here) about a technique one could use to introduce intrinsically hidden backdoors into compilers (article here). Another interesting article: (Did NSA Put a Secret Backdoor in New Encryption Standard?).
Which other methods can you think of that could threaten data security (in the broad sense) and could go unnoticed for a (very) long time?
I'm making this community wiki. Let the paranoia flow!

This may not be a complicated one, but doing an assignment instead of a comparison can be a big hole that's easily overlooked:
if (userRole = "admin") { ... }
instead of:
if (userRole == "admin") { ... }
Lately, I've taken to putting my constant or literal as the first part of the conditional, like so:
if ("admin" == userRole) { ... }
That way, if I ever mess up the comparison operator, the code will fail to compile or bomb right then and there during testing, instead of silently granting the user "admin" privileges.

Having just seen a round of code fixes for things like this: using an insufficiently secure random number generator is a great way to open yourself up to security vulnerabilities, and to do so in a way that won't be detected until you're subject to a deliberate attack.
As an extreme example of this, consider the Debian openssl/ssh/etc vulnerability. That it went undetected for just under two years is amazing - in something less widespread than Debian, I imagine it would lie buried at least twice as long.
As a simpler example of something that's insecure, consider java.util.Random. It uses a very simple and very fast algorithm to go from one random number to the next, and has only 48 bits of internal state; if you ever expose a 64-bit number generated from it to the user, you've exposed the RNG's entire internal state. (A replacement for Java programmers is the aptly named java.security.SecureRandom class.)
Yes, in some sense it's "just" a matter of educating programmers to pay attention to when they're generating a security-sensitive random number and when they aren't, but you only need to miss one spot in your authentication-token code for it to really hurt you.
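To make that concrete, here is a small C sketch of the attack the previous paragraph implies (the multiplier and increment are the ones java.util.Random actually uses; the rest is my own illustration): from two consecutive 32-bit outputs, brute-forcing the 16 hidden state bits recovers the full 48-bit state, and with it every future output.

#include <stdint.h>
#include <stdio.h>

/* Toy LCG with java.util.Random's parameters: 48 bits of state, and each
   call exposes the top 32 bits of that state. */
#define MUL 0x5DEECE66DULL
#define ADD 0xBULL
#define MASK ((1ULL << 48) - 1)

static uint64_t state = 123456;                 /* the "secret" seed */

static uint32_t next32(void)
{
    state = (state * MUL + ADD) & MASK;
    return (uint32_t)(state >> 16);             /* leaks 32 of 48 state bits */
}

int main(void)
{
    uint32_t out1 = next32();                   /* attacker observes these two */
    uint32_t out2 = next32();

    /* Attacker: try all 2^16 candidates for the hidden low bits of state. */
    for (uint64_t lo = 0; lo < (1ULL << 16); lo++) {
        uint64_t s = ((uint64_t)out1 << 16) | lo;
        uint64_t next = (s * MUL + ADD) & MASK;
        if ((uint32_t)(next >> 16) == out2) {
            uint64_t after = (next * MUL + ADD) & MASK;
            printf("state recovered; next 'random' value: %u\n",
                   (uint32_t)(after >> 16));
            return 0;
        }
    }
    return 1;
}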

If you wanted to write a non-obvious source code backdoor, a good place to start is to read through the C standard and look for anything that says "undefined behaviour".
The Wikipedia page on undefined behavior has information on a few such bugs; all you have to do is learn how to deterministically control the outcome.
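As a hedged sketch of what such a backdoor might look like (my own example): the function below appears to guard against integer overflow, but signed overflow is undefined behaviour in C, so an optimizing compiler is entitled to assume it never happens and silently delete the check.

/* Reviewers see an overflow guard; the optimizer may remove it entirely,
   because the compiler can assume the undefined behaviour (signed
   overflow) never occurs - gcc and clang both do this at -O2. An attacker
   who can pass pad close to INT_MAX gets a wrapped, unchecked length. */
int checked_length(int len, int pad)
{
    if (len + pad < len)        /* "overflow check" that itself invokes UB */
        return -1;              /* supposed error path */
    return len + pad;
}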

Related

How secure is the "if" statement?

Regardless of the language, I'm always puzzled by the concept of security through an if. All the code I write relies on the success of that one line with an if statement:
user = getUserName();
password = getPassword();
if (match(user, password)) {
    print secret information;
}
Since it's only one line I feel like sabotage can be relatively simple. Am I overlooking things, or is a single if really the best way to do this?
You are right, an if like this is easily hacked. If someone reverse engineers this application, they can easily modify a few instructions to skip the if.
There are various options, like obfuscating the executable or adding more complex checks in various places in your application. But whatever you do, your application can always be hacked.
The best thing is not to worry about it. By the time your application is so good and so widely used that people are actually willing to put effort into cracking it, you will probably be making enough money to protect it better. Until then, it's a waste of time to even think about it.
In the specific case you are showing, if you were really worried about unauthorized people seeing the secret information output by "print secret information;" you would encrypt the "secret information" with the supplied password. This would ensure that only the person who was able to provide the proper password would be able to see the secret information.
There's one thing about ifs that is often overlooked: the timing attack. Suppose you have a web application that compares the password sent against the password stored in the DB by direct matching (yes, I know that nobody in their right mind will store plain-text passwords in the DB, but as the Cheshire Cat said, "we are all mad here"). Then the comparison takes a different amount of time depending on whether the passwords differ at the first character, the second one, or the last one. While the time difference might seem tiny, it's enough for an attacker to guess the password even across the internet, let alone via local analysis. A real timing attack is a bit more complicated than I've described, but in general an if comparison is not 100% safe, at least not in all cases.
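A minimal C sketch of the difference (assuming both buffers have the same known length; real code must avoid leaking length differences too): the first function returns as soon as a byte differs, so response time grows with the length of the correct prefix, while the second always inspects every byte.

#include <stddef.h>

/* Leaky: bails out at the first mismatch, so latency reveals how many
   leading characters of the guess were correct. */
int leaky_equals(const char *a, const char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;           /* early exit = timing side channel */
    return 1;
}

/* Constant time: XOR-accumulates differences over the whole buffer, so
   the running time does not depend on where (or whether) they differ. */
int constant_time_equals(const char *a, const char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}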
The if statement is absolutely secure, and can never be the cause of a vulnerability. Vulnerabilities arise from nearly everything else in your code.
It is possible that the comparison operator you are using is flawed. For instance, the == operator in some languages employs fuzzy matching, where a range of possible values is accepted. That might not be good for security, but it's hard to come up with a good example; it doesn't really matter for a password. A simple $password == $_GET['password'] should work just fine.
Your if statement could also be relying on a bad regular expression, such as:
if (preg_match('/(.+)\.js/', $_GET['file'])) {
    readfile($_GET['file']);
}
In this case the regex looks for .js anywhere in the string, rather than enforcing it at the end (which would require a $ anchor, e.g. /\.js$/).
?file=../../.js/../../../../../../../etc/passwd
(And this vulnerability won me $3,000 in the Mozilla bug bounty program ;)
If this is server code, it's not a problem, as long as you keep your server secure.
If this is client code, you are right: someone can manipulate your code - either the binary file or the memory image (once loaded). However, this is true for any client application. You can only make it harder (by using tools like PECompact with an anti-debug plugin, for example), but you can't achieve very strong security.
I'm not sure I understand your question.
Software security techniques are imperfect, and AFAIK they presuppose few bugs in the compiler and "perfect" hardware (that is, a processor that correctly interprets the machine code).
I am not familiar with (but am interested in) approaches for imperfect hardware (except, of course, using redundancy or other techniques such as ECC to detect hardware errors).
There is nothing insecure about one line with an if in it.
If the code is running on your server, what matters is how secure that server is. If a hacker gains access to it, it doesn't matter how complicated your code is; they will be able to circumvent it.
Similarly, if your code runs on the computer of a potential attacker (like a computer game that you want to protect), there is nothing you can do to stop them. You can make their work slightly more difficult, but that's all.
You shouldn't worry about the security of one line, but of the system as a whole. If you make your code more complicated, all you've done is introduce more potential for bugs. Using more complicated code is an attempt at security through obscurity, which doesn't work.
If you can't trust your computer to execute a simple if correctly, you can't trust it at all.

Do Lisp apps and webapps need special input sanitizing?

EDIT 3 Quite a few new developments have happened since I asked this question. Basically, I wasn't "seeing things": webapps written in Clojure have been found to be vulnerable, which prompted changes in Clojure 1.5 and a very heated discussion on the Clojure Google groups.
Here's a quote from someone on Hacker News about the changes in Clojure 1.5:
Another slightly interesting thing is the sudden enhancement to read-eval and EDN[2]. That's mainly because of the rough weather Ruby/Rubygems was in with the YAML exploits, which caused a heated discussion on how the Clojure reader should act by default.
Holes have been found and it's too late to really fix Clojure, so *read-eval* will still ship set to true by default (because otherwise it would break too many things), and anyone parsing input in Clojure should not use the default read functions but the EDN ones.
So I certainly wasn't seeing things and it didn't take long (not even 18 months) for people to find ways to attack common Clojure webapp stacks.
EDIT 2 I didn't know it but my question is a dupe of the following question (which has been described as a 'killer question'): Lisp data security/validation
If anyone's interested in the answer(s) to this question, I'd suggest they open the above question and read the answers made there by Lisp gurus instead of the ones of the type "nothing to see here, move along, it's just like PHP or JavaScript".
EDIT: I'd like to know if, somehow, because it is Lisp, it would be "easier" for an attacker to transform "data" (i.e. "crafted user input with a malicious intent") into "code". For example, do I need to escape/replace all the parentheses in the user input before starting to "evaluate" / parse or whatever the data?
Original question
I'm still reading about Lisp and suddenly I was wondering, with this entire "code is data" / "data is code" thing, do Lisp need to perform input sanitizing in order to prevent attacks?
I was thinking specifically of webapps, say when a user does some HTTP POST.
What if the data he's sending contains things like:
This is some malicious (eval '(nasty-stuff ...)) or whatever.
(I'm no Lisp programmer, it's just an example of what I've got in mind, it's not meant to be actually mean code)
Is there anything special to keep in mind due to how Lisp works? For example if some dark-side hacker would know that some webserver is running on Clojure, can he exploit that fact and then inject "code between parentheses" that would then be evaluated on the webserver?
Is this a concern at all when receiving/parsing user data (and hence potentially crafted data) from Lisp?
I have written some webapps in Lisp (i.e. Common Lisp) and here are the things I've kept in mind:
if you use read, you should always bind *read-eval* to nil when processing any untrusted data
if you are dealing with code generation - for example, HTML, JS, CSS or SQL generation, which is very common in Lisp-land - you shouldn't forget to use the sanitizing facilities provided by the corresponding libraries (rather than using raw input strings)
Basically, that's all. Moreover, since it's Lisp, it usually makes your system less prone to attack, because:
there are no standard attacks (as Lisp's use is relatively rare)
the system is rather secure in terms of defaults - this isn't unique, but many web-oriented languages (PHP first among them) suffer from insecurity by default, although this is mitigated by modern frameworks
You should always assume that injection attacks are possible until proven otherwise. Without knowing more about your specific Lisp environment and what you are comparing it with, it is impossible to answer whether you need "special" sanitization.
We know that machine code attacks are possible.
We know that SQL injection is possible.
We should assume that it is possible to hijack any Turing-complete system, whether it is hardware or software.
Note that the soft barrier between "code" and "data" is not unique to Lisp. Perl, once the workhorse of the web world, has eval. So does PHP. It looks like Java bytecode injection may be possible as well.
It really does boil down to: don't use READ and don't use EVAL. You need to know exactly what you are sending to either or both of those functions, as well as the contexts within which they are executed. If you do not call either of these, then you're fine.

How to Protect an Exe File from Decompilation

What are the methods for protecting an exe file from reverse engineering? Many packers are available to pack an exe file. One such approach is mentioned in http://c-madeeasy.blogspot.com/2011/07/protecting-your-c-programexe-files-from.html
Is this method effective?
The only good way to prevent a program from being reverse-engineered ("understood") is to revise its structure to essentially force the opponent into understanding Turing Machines. Essentially what you do is:
take some problem which is generally proven to be computationally difficult
synthesize an instance of it whose outcome you know; this is generally pretty easy compared to solving an arbitrary instance
make the correct program execution dependent on the correct answer
make the program compute nonsense if the answer is not correct
Now an opponent staring at your code has to figure out what the "correct" computation is, by solving algorithmically hard problems. There are tons of NP-hard problems that nobody has solved efficiently in 40 years of literature; it's a pretty good bet that if your program depends on one of these, J. Random Reverse-Engineer won't suddenly be able to solve them.
One generally does this by transforming the original program to obscure its control flow, and/or its dataflow. Some techniques scramble the control flow by converting some control flow into essentially data flow ("jump indirect through this pointer array"), and then implementing data flow algorithms that require precise points-to analysis, which is both provably hard and has proven difficult in practice.
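A deliberately tiny C sketch of the "jump indirect through this pointer array" idea (function names invented for illustration): the visible branch disappears into a table lookup whose index is an opaque arithmetic expression, so recovering the real control flow requires reasoning about the data.

#include <stdio.h>

typedef void (*step_fn)(int);

static void real_step(int v)  { printf("real work on %d\n", v); }
static void decoy_step(int v) { printf("bogus checksum %d\n", v ^ 0x5A); }

static step_fn dispatch[] = { decoy_step, real_step };

/* n*n + n = n*(n+1) is always even (even under unsigned wraparound), so
   the index below is always 1 and real_step is always called - but a
   decompiler sees two live targets and must prove the arithmetic to know
   the decoy is dead. */
void run(unsigned n, int value)
{
    dispatch[(((n * n + n) & 1u) ^ 1u)](value);
}

int main(void)
{
    run(7, 42);
    return 0;
}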
Here's a paper that describes a variety of techniques rather shallowly, but it's an easy read:
http://www.cs.sjsu.edu/faculty/stamp/students/kundu_deepti.pdf
Here's another that focuses on how to ensure that the obfuscating transformations lead to results that are guaranteed to be computationally hard:
http://www.springerlink.com/content/41135jkqxv9l3xme/
Here's one that surveys a wide variety of control flow transformation methods, including those that provide levels of guarantees about security:
http://www.springerlink.com/content/g157gxr14m149l13/
This paper obfuscates control flows in binary programs with low overhead:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.167.3773&rank=2
Now, one could go through a lot of trouble to prevent a program from being decompiled. But if the decompiled result is impossible to understand, you simply might not bother preventing decompilation at all; that's the approach I'd take.
If you insist on preventing decompilation, you can attack that by considering what decompilation is intended to accomplish. Decompilation essentially proposes that you can convert each byte of the target program into some piece of code. One way to make that fail is to ensure that the application can apparently use each byte as both computer instructions and as data, even if it does not actually do so, and that the decision to do so is obfuscated by the above kinds of methods. One variation on this is to have lots of conditional branches in the code that are in fact unconditional (using control flow obfuscation methods); the other side of the branch falls into nonsense code that looks valid but branches to crazy places in the existing code. Another variant on this idea is to implement your program as an obfuscated interpreter, and implement the actual functionality as a set of interpreted data.
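Here's a deliberately minimal C sketch of that last variant (opcode names invented): the compiled binary contains only a generic dispatch loop, while the actual behaviour (computing and printing 42) lives in an opaque byte array; a real obfuscator would additionally encrypt or permute that bytecode.

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

/* The "program" is data: push 40, push 2, add, print, halt. */
static const unsigned char bytecode[] = {
    OP_PUSH, 40, OP_PUSH, 2, OP_ADD, OP_PRINT, OP_HALT
};

int main(void)
{
    int stack[16], sp = 0;
    for (const unsigned char *pc = bytecode;;) {
        switch (*pc++) {
        case OP_PUSH:  stack[sp++] = *pc++;              break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
        case OP_HALT:  return 0;
        }
    }
}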
A fun way to make this fail is to generate code at run time and execute it on the fly; most conventional languages such as C have pretty much no way to represent this.
A program built like this would be difficult to decompile, let alone understand after the fact.
Tools that are claimed to do a good job at protecting binary code are listed at:
https://security.stackexchange.com/questions/1069/any-comprehensive-solutions-for-binary-code-protection-and-anti-reverse-engineeri
Packing, compression and any other methods of binary protection will only ever serve to hinder or slow the reversal of your code; they have never been, and never will be, 100% secure solutions (though the marketing of some would have you believe that). You basically need to evaluate what sort of level of hacker you are up against. If they are script kiddies, then any packer that requires real effort and skill to defeat (i.e. those that lack unpacking scripts/programs/tutorials) will deter them. If you're facing people with skills and resources, then you can forget about keeping your code safe (as many of the comments say: if the OS can read it to execute it, so can you, it'll just take a while longer). If your concern is not so much your IP but rather the security of something your program does, then you might be better served by redesigning it in a manner where it cannot be attacked even with the original source (Chrome takes this approach).
Decompilation is always possible. The statement
This threat can be eliminated to extend by packing/compressing the executable (.exe).
on your linked site is a plain lie.
Currently, many solutions can be used to protect your application from being decompiled, such as compression, obfuscation, and so on.
You can look for a company to help you achieve this.
One such company is Nalpeiron: https://www.nalpeiron.com/
It covers many platforms: Windows, Linux, ARM Linux, Android.
Virbox can also be taken into consideration: https://lm-global.virbox.com/index.html
I recommend it because they have more options to protect your code, such as import table protection and memory checks.

Code obfuscation usage in various languages

I recently learned about code obfuscation. It's a nice thing to do when you have spare time, but I have a different question: why do it?
First, there are languages in which I am sure it's a great thing: interpreted ones, like PHP, JavaScript and many more. There it seems like a good and more secure thing to do.
Second, there are languages where this seems to have no real effect to me: all the natively compiled languages. Take C, for example: when compiled, all the variable names, function names and most obfuscation techniques go away. If anything makes it into native code, it would be things like recursion instead of for loops; and the disassembled code will carry disassembler-generated identifiers instead of names anyway, right?
And the last category is languages I am not quite sure about, which is the main reason I ask: Java, C# (.NET), and Silverlight as used on WP7. I ask because I read some article stating that on WP7 apps, code obfuscation helps prevent code from being hacked. But I always thought of bytecode as very similar to standard assembler code, and therefore as not having any information about pre-compilation variable names, function names, etc. So where is the truth?
Do it if you want, but don't expect any determined person to be scared away by it. De-obfuscators exist, and people can read obfuscated code as well (just as there are people who can read optimized assembly and reconstruct the original C code). Obfuscation might deter a person who is just curious, but not those who are serious about stealing your code. All it gives you is a false sense of security, not a real one. Schneier aptly calls this "security theater".
Yes, many modern languages that retain more information about the source can be obfuscated better than those that are compiled right to machine code. For the latter the compiler already does quite a good job with optimization. Your notion of bytecode being akin to traditional assembler is slightly wrong here, though. Especially .NET bytecode retains enough metadata to reconstruct the original source almost exactly (see Reflector). What isn't retained there are the names of local variables and arguments to methods. But you still need and retain the method and class names.
Another issue you should be aware of: If you give your customers an obfuscated executable and your program crashes, make sure you have a way of getting the real stacktrace back instead of the obfuscated one. Saying "Sorry, I cannot determine the root cause of why my program killed hours of your work since I chose to obfuscate it" isn't going to cut it, I guess :-)
Obfuscation is a common technique for mobile applications where you have hardware restrictions. Obfuscated code tends to have shorter identifiers and therefore smaller binaries.

Pitfalls of cryptographic code

I'm modifying existing security code. The specifications are pretty clear, there is example code, but I'm no cryptographic expert. In fact, the example code has a disclaimer saying, in effect, "Don't use this code verbatim."
While auditing the code I'm to modify (which is supposedly feature complete) I ran across this little gem which is used in generating the challenge:
static uint16 randomSeed;
...
uint16 GetRandomValue(void)
{
    return randomSeed++; /* This is not a good example of very random generation :o) */
}
Of course, the first thing I immediately did was pass it around the office so we could all get a laugh.
The programmer who produced this code knew it wasn't a good algorithm (as indicated by the comment), but I don't think they understood the security implications. They didn't even bother to call it in the main loop, which would at least turn it into a free-running counter - still not ideal, but worlds beyond this.
However, I know that the code I produce is going to similarly cause a real security guru to chuckle or quake.
What are the most common security problems, specific to cryptography, that I need to understand?
What are some good resources that will give me suitable knowledge about what I should know beyond common mistakes?
-Adam
Don't try to roll your own - use a standard library if at all possible. Subtle changes to security code can have huge impacts that aren't easy to spot but can open security holes. For example, two modified lines in one library opened a hole that wasn't readily apparent for quite some time.
Applied Cryptography is an excellent book to help you understand crypto and code. It goes over a lot of fundamentals, like how block ciphers work, and why choosing a poor cipher mode will make your code useless even if you're using a perfectly implemented version of AES.
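To illustrate the cipher-mode point with a concrete (hedged) example, here's a small C program using OpenSSL's EVP API: AES itself is fine, but in ECB mode identical plaintext blocks encrypt to identical ciphertext blocks, so the ciphertext leaks the structure of the data (the all-zero key is for demonstration only).

#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char key[16] = {0};       /* demo key only - never hardcode keys */
    unsigned char pt[32], ct[48];
    int len = 0;

    memset(pt, 'A', sizeof pt);        /* two identical 16-byte blocks */

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);
    EVP_EncryptUpdate(ctx, ct, &len, pt, (int)sizeof pt);
    EVP_CIPHER_CTX_free(ctx);

    /* Equal plaintext blocks came out as equal ciphertext blocks. */
    printf("blocks identical: %s\n",
           memcmp(ct, ct + 16, 16) == 0 ? "yes - structure leaked" : "no");
    return 0;
}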
Some things to watch out for:
Poor Sources of Randomness
Trying to design your own algorithm or protocol - don't do it, ever.
Not getting it code reviewed. Preferably by publishing it online.
Not using a well established library and trying to write it yourself.
Crypto as a panacea - encrypting data does not magically make it safe
Key Management. These days it's often easier to steal the key with a side-channel attack than to attack the crypto.
Your question shows one of the more common ones: poor sources of randomness. It doesn't matter if you use a 256-bit key if the bits aren't random enough.
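As a hypothetical sketch of how this goes wrong in practice (my example, not from the answer above): the function below fills a "256-bit" key, but because the generator is seeded from the clock, an attacker who knows roughly when the key was created only has to search the plausible timestamps, not 2^256 keys.

#include <stdlib.h>
#include <time.h>

/* 32 bytes of "key", but at most ~2^25 distinct keys per year of seed
   uncertainty: the attacker brute-forces the seed, not the key. */
void make_key(unsigned char key[32])
{
    srand((unsigned)time(NULL));       /* one-second resolution seed */
    for (int i = 0; i < 32; i++)
        key[i] = (unsigned char)(rand() & 0xFF);
}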
Number 2 is probably assuming that you can design a system better than the experts. This is an area where a quality implementation of a standard is almost certainly going to be better than innovation. Remember, it took 3 major versions before SSL was really secure. We think.
IMHO, there are four levels of attacks you should be aware of:
social engineering attacks. You should train your users not to do stupid things and write your software such that it is difficult for users to do stupid things. I don't know of any good reference about this stuff.
don't execute arbitrary code (buffer overflows, XSS exploits, and SQL injection are all grouped here). The minimal thing to do in order to learn about this is to read Writing Secure Code from someone at MS and watch the How to Break Web Software Google tech talk. This should also teach you a bit about defense in depth.
logical attacks. If your code is manipulating plain text, certificates, signatures, cipher texts, public keys or any other cryptographic objects, you should be aware that handling them in bad ways can lead to bad things. Minimal things you should be aware of include offline and online dictionary attacks, replay attacks, and man-in-the-middle attacks. The starting point for learning about this, and generally a very good reference for you, is http://www.soe.ucsc.edu/~abadi/Papers/gep-ieee.ps
cryptographic attacks. Cryptographic vulnerabilities include:
stuff you can avoid: bad random number generation, usage of a broken hash function, a broken implementation of a security primitive (e.g. an engineer forgets a -1 somewhere in the code, which renders the encryption function reversible)
stuff you cannot avoid except by staying as up-to-date as possible: a new attack against a hash function or an encryption function (see e.g. the recent MD5 talk), or a new attack technique (see e.g. recent attacks against protocols that send encrypted voice over the network)
A good reference in general should be Applied Cryptography.
Also, it is very worrying to me that stuff that goes on a mobile device, which is probably locked and difficult to update, is being written by someone who is asking about security on Stack Overflow. I believe yours would be one of the few cases where you need an external (good) consultant to help you get the details right. Even if you hire a security consultant, which I recommend you do, please also read the above (minimalistic) references.
What are the most common security problems, specific to cryptography, that I need to understand?
Easy - you(1) are not smart enough to come up with your own algorithm.
(1) And by you, I mean you, me and everyone else reading this site...except for possibly Alan Kay and Jon Skeet.
I'm not a crypto guy either, but S-boxes can be troublesome when messed with (and they do make a difference). You also need a real source of entropy, not just a PRNG (no matter how random it looks). PRNGs are useless. Next, you should ensure the entropy source isn't deterministic and that it can't be tampered with.
My humble advice is: stick with known crypto algorithms, unless you're an expert and understand the risks. You could be better off using some tested, publicly-available open source / public domain code.

Resources