Strong Link - Weak Link in software security

Can you give me an example of how I could apply the Strong Link - Weak Link principle when designing a security component for a piece of software? Is there such a concept as "weak" modules in software security: modules that, under attack, deliberately fail first and thereby make it impossible for the attacker to reach and compromise any other, more sensitive data?

One thing that can happen accidentally is failing (effectively a self-inflicted DoS) under a dictionary attack. Generally you would want to throttle instead, which I guess is a milder version of a deliberately weak module.
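As a minimal sketch of that kind of throttling (the class and field names are hypothetical, not from any particular framework), you could track failed attempts per account and lock the account out for a while once a threshold is crossed, so a dictionary attack exhausts itself against the throttle rather than the real authentication path:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch: per-account login throttling. After MAX_FAILURES failed attempts,
    // further attempts for that account are rejected for LOCKOUT_MILLIS.
    public class LoginThrottle {
        private static final int MAX_FAILURES = 5;
        private static final long LOCKOUT_MILLIS = 15 * 60 * 1000; // 15 minutes

        private static final class Entry {
            int failures;
            long lockedUntil;
        }

        private final Map<String, Entry> attempts = new ConcurrentHashMap<>();

        // Returns true if this account may attempt a login right now.
        public boolean allowAttempt(String username) {
            Entry e = attempts.get(username);
            return e == null || System.currentTimeMillis() >= e.lockedUntil;
        }

        public void recordFailure(String username) {
            Entry e = attempts.computeIfAbsent(username, u -> new Entry());
            synchronized (e) {
                e.failures++;
                if (e.failures >= MAX_FAILURES) {
                    e.lockedUntil = System.currentTimeMillis() + LOCKOUT_MILLIS;
                    e.failures = 0; // start counting again after the lockout expires
                }
            }
        }

        public void recordSuccess(String username) {
            attempts.remove(username);
        }
    }

In a real deployment you would also throttle by source IP and add monitoring, but the shape is the same: the throttle fails closed long before the more sensitive authentication backend is in danger.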

Related

What if security through obscurity fails?

I know that security through obscurity means that a system of any sort can be secure as long as nobody outside of its implementation group is allowed to find out anything about its internal mechanisms.
But what if someone does find out? Are there still mechanisms built into the system that protect it if anyone tries to attack it? Are there any examples of systems using security through obscurity?
Security through obscurity refers to systems which are only secure inasmuch as their design and implementation remain a secret. This is like burying a treasure chest on a deserted island. If somebody finds a map, it's just a matter of time before your treasure gets dug up. It's typically hard to keep the design and implementation details secure when multiple parties will need access to the thing in order to use it. For personal standalone systems which do not require frequent access, security through obscurity is not a bad choice.
However, personal standalone systems which do not require frequent access are not the kind of system computer security typically considers. Computer security typically concerns itself mostly with multi-user connected systems which are accessed all the time. In such cases, effectively concealing all the relevant implementation details is prohibitively difficult. In the treasure chest analogy, imagine if there were a thousand people who required regular access to the treasure. Aside from it being difficult to access the treasure, any bad actor has lots of opportunities to steal the map, look at the map, follow somebody to the treasure, or just trick somebody into divulging the location.
So, what's the alternative? Imagine a safety deposit box in a vault at a large bank in the middle of a city. Everybody knows (hypothetically) everything about the security setup: there are alarm systems, cameras, guards, police, the vault itself, and then locks on the boxes. All of this can be known, and still many of the attack vectors which would succeed against the buried treasure would fail in this case: stealing one key would not be enough and, even if all keys were stolen, you'd still have to defeat surveillance, guards and police. Furthermore, access to the contents of the safety deposit box is relatively more convenient for end users: properly authenticated users (with ID, key, etc.) just need to go to the (local) bank, present their credentials and get access.
The most secure systems - imagine defense systems - use a combination of these techniques. Design and implementation details are not publicly known and are actively protected. However, knowing these details does not give you a map to the treasure, just an understanding of the various other systems protecting the loot. Governments use obscurity because no (useful) system is impervious to all attacks.

What techniques can preventively protect your own software, developed in Java etc., against 0-day exploits?

I just stumbled upon this article:
Lessons from Anonymous on cyberwar
A cyberwar is brewing, and Anonymous reprisal attacks on HBGary Federal shows how deep the war goes.
http://english.aljazeera.net/indepth/opinion/2011/03/20113981026464808.html
Are there techniques that can really protect our own software against such highly sophisticated attacks?
For example, couldn't a simple CRC check be enough?
UPDATE: My question is not about protecting the software against cracking; it's about protecting the USER's PC from being infiltrated. Why wouldn't it be enough to just check the CRC and refuse to run if it is not right?
There is no guaranteed full protection. If you want your software to be uncrackable or unexploitable, don't release it at all.
You want your software to verify checksums? Of what?
If the highly sophisticated techniques exploit problems in the network, the operating system or the hardware, then no: user-ring software doesn't enter the picture, so verifying checksums won't help.
If you want to checksum every incoming message, then you need to be able to enumerate all the possible safe incoming messages, in which case you've got an easier-to-use filter than checksums.
If you want to reject every incoming message that doesn't match a small list of checksums, then you've just turned one kind of attack into another: a denial of availability. This may be fine for some systems, but not most.
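For concreteness, here is a minimal sketch of the kind of self-check the question has in mind: hashing a file and comparing it against a known-good digest (SHA-256 rather than CRC, since a CRC is trivial to forge). As explained above, this only detects modification of that one file; it does nothing against exploits in the network stack, the OS, or the running process itself. The expected digest is a placeholder, and HexFormat needs Java 17+:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.HexFormat;

    // Sketch: compare a file's SHA-256 digest against a known-good value.
    public class IntegrityCheck {
        public static boolean matchesKnownDigest(Path file, String expectedHex)
                throws IOException, NoSuchAlgorithmException {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            try (InputStream in = Files.newInputStream(file)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    sha256.update(buf, 0, n);
                }
            }
            String actualHex = HexFormat.of().formatHex(sha256.digest());
            return actualHex.equalsIgnoreCase(expectedHex);
        }
    }

Note that an attacker who already controls the process or the OS can simply patch out this check, which is one more reason it is not a defense against 0-days.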
There is no protection against 0-days; after all, that's the very definition of a 0-day. ALL software is vulnerable.
But you can plan for failure. That is the point behind defense in depth.
What you can keep in mind is the principle of least-privilege access. Security in layers.

Isn't a password a form of security through obscurity?

I know that security through obscurity is frowned upon and considered not really secure, but isn't a password security through obscurity? It's only secure so long as no one finds it.
Is it just a matter of the level of obscurity? (i.e. a good password well salted and hashed is impractical to break)
Note I'm not asking about the process of saving passwords (assume they are properly hashed and salted). I'm asking about the whole idea of using a password, which is a piece of information that, if known, could compromise a person's account.
Or am I misunderstanding what security through obscurity means? That's what I take it to mean: that there exists some information which, if known, would compromise a system (in this case, the system being defined as whatever the password is meant to protect).
You are right in that a password is only secure if it is obscure. But the "obscure" part of "security through obscurity" refers to obscurity of the system. With passwords, the system is completely open -- you know the exact method that is used to unlock it, but the key, which is not part of the system, is the unknown.
If we were to generalize, then yes, all security is by means of obscurity. However, the phrase "security through obscurity" does not refer to this.
Maybe it's easier to understand what Security-by-Obscurity is about, by looking at something that is in some sense the opposite: Auguste Kerckhoffs's Second Principle (now simply known usually as Kerckhoffs's Principle), formulated in 1883 in two articles on La Cryptographie Militaire:
[The cipher] must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience.
Claude Shannon reformulated it as:
The enemy knows the system.
And Eric Raymond as:
Any security software design that doesn't assume the enemy possesses the source code is already untrustworthy.
An alternative formulation of that principle is that
The security of the system must depend only on the secrecy of the key, not the secrecy of the system.
So, we can simply define Security-by-Obscurity to be any system that does not follow that principle, and thus we cleverly out-defined the password :-)
There are two basic reasons why this Principle makes sense:
Keys tend to be much smaller than systems, therefore they are easier to protect.
Compromising the secrecy of a key compromises only the communications protected by that key; compromising the secrecy of the system compromises all communications.
Note that it doesn't say anywhere that you can't keep your system secret. It just says you shouldn't depend on it. You may use Security-by-Obscurity as an additional line of defense, you just shouldn't assume that it actually works.
In general, however, cryptography is hard, and cryptographic systems are complex, therefore you pretty much need to publish it, to get as many eyeballs on it as possible. There are only very few organizations on this planet that actually have the necessary smart people to design cryptographic systems in secrecy: in the past, when mathematicians were patriots and governments were rich, those were the NSA and the KGB, right now it's IBM and a couple of years from now it's gonna be the Chinese Secret Service and international crime syndicates.
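To make Kerckhoffs's principle concrete, here is a minimal sketch using the standard Java crypto APIs (error handling omitted, the plaintext is just illustrative): everything below -- the algorithm, the mode, the code itself -- can be published, and the system remains secure as long as the key stays secret:

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    // Sketch: AES-GCM encryption where only the key is secret.
    public class KerckhoffsDemo {
        public static void main(String[] args) throws Exception {
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(256);
            SecretKey key = keyGen.generateKey();   // the only secret

            byte[] nonce = new byte[12];            // GCM nonce: must be unique, need not be secret
            new SecureRandom().nextBytes(nonce);

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
            byte[] ciphertext = cipher.doFinal("attack at dawn".getBytes(StandardCharsets.UTF_8));

            // Knowing everything above except 'key' does not let an attacker recover the plaintext.
            System.out.println(ciphertext.length + " bytes of ciphertext");
        }
    }

A password-protected system has the same shape: the login procedure is public, and only the password (the key) carries the secrecy.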
No. Let's look at a definition of security through obscurity from Wikipedia:
a pejorative referring to a principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to provide security.
The phrase refers to the code itself, or the design of a system. Passwords on the other hand are something a user has to identify themselves with. It's a type of authentication token, not a code implementation.
I know that security through obscurity is frowned upon and considered not really secure, but isn't a password security through obscurity? It's only secure so long as no one finds it.
In order to answer this question, we really need to consider why "security through obscurity" is considered to be flawed.
The big reason that security through obscurity is flawed is that it's actually really easy to reverse-engineer a system based on its interactions with the outside world. If your computer system is sitting somewhere, happily authenticating users, I can just watch what packets it sends, watching for patterns, and figure out how it works. And then it's straightforward to attack it.
In contrast, if you're using a proper open cryptographic protocol, no amount of wire-sniffing will let me steal the password.
That's basically why obscuring a system is flawed, but obscuring key material (assuming a secure system) is not. Security through obscurity will never in and of itself secure a flawed system, and the only way to know your system isn't flawed is to have it vetted publicly.
Passwords are a form of authentication. They are meant to identify that you are interacting with who you are supposed to interact with.
Here is a nice model of the different aspects of security (I had to memorize this in my security course)
http://en.wikipedia.org/wiki/File:Mccumber.jpg
Passwords fall under the confidentiality aspect of security.
While probably the weaker of the forms of authentication (something you know, something you have, something you are), I would still say that it does not constitute security through obscurity. With a password, you are not trying to mask a facet of the system to try to keep it hidden.
Edit:
If you follow the reasoning that passwords are also a means of "security through obscurity" to its logical end, then all security, including things like encryption, is security through obscurity. By that logic, the only system not secured through obscurity is one that is encased in concrete and sunk to the ocean floor, with no one ever allowed to use it. This reasoning, however, is not conducive to getting anything done. Therefore we use "security through obscurity" to describe practices that rely on the implementation of a system not being understood as the means of security. With passwords, the implementation is known.
No, they are not.
Security through obscurity means that the process that provides the access protection is only secure because its exact details are not publicly available.
Publicly available here means that all the details of the process are known to everyone, except, of course, a randomized portion that constitutes the key. Note that the range from which keys can be chosen is still known to everyone.
The effect of this is that it can be proven that the only part that needs to be secret is the password itself, and not other parts of the process. Or conversely, that the only way to gain access to the system is by somehow getting at the key.
In a system that relies on the obscurity of its details, you cannot have such an assurance. It might well be that anyone who finds out what algorithm you are using can find a back door into it (i.e. a way to access the system without the password).
The short answer is no. Passwords by themselves are not security by obscurity.
A password can be thought of as analogous to the key in cryptography. If you have the key you can decode the message. If you do not have the key you can not. Similarly, if you have the right password you can authenticate. If you do not, you can not.
The obscurity part in security by obscurity refers to how the scheme is implemented. For example, if passwords were stored somewhere in the clear and their precise location was kept a secret that would be security by obscurity. Let's say I'm designing the password system for a new OS and I put the password file in /etc/guy/magical_location and name it "cooking.txt" where anyone could access it and read all the passwords if they knew where it was. Someone will eventually figure out (e.g. by reverse engineering) that the passwords are there and then all the OS installations in the world will be broken because I relied on obscurity for security.
Another example is if the passwords are stored where everyone can access them but encrypted with a "secret" key. Anyone who has access to the key could get at the passwords. That would also be security by obscurity.
The "obscurity" refers to some part of the algorithm or scheme that is kept secret where if it was public knowledge the scheme could be compromised. It does not refer to needing a key or a password.
Yes, you are correct and it is a very important realisation you are having.
Too many people say "security through obscurity" without having any idea of what they mean. You are correct that all that matters is the level of "complexity" of decoding any given implementation. Usernames and passwords are just one complex realisation of it, as they greatly increase the amount of information required to gain access.
One important thing to keep in mind in any security analysis is the threat model: Who are you worried about, why, and how are you defending against them? What aren't you covering? And so on. Keep up the analytical and critical thinking; it will serve you well.

Bugs versus vulnerabilities?

What, if any, is the difference between a software bug and a software vulnerability?
A bug is when a system isn't behaving as it's designed to behave.
A vulnerability is a way of abusing the system (most commonly in a security-related way) - whether that's due to a design fault or an implementation fault. In other words, something can have a vulnerability due to a defective design, even if the implementation of that design is perfect.
Vulnerability is a subset of bug.
A bug is any defect in a product.
A vulnerability is a bug that manifests as an opportunity for malicious use of the product. Vulnerabilities are generally not that clearly evident, but require ingenuity to be exploited.
The two can sometimes overlap, but I'd say a "bug" is a mistake, while a "vulnerability" is, like the name suggests, a weakness.
From a programming perspective, I believe there is no difference between a bug and a vulnerability. They are both mistakes in the software.
However, from a security perspective, a vulnerability is a class of bugs that can be manipulated in some fashion by a malicious person.
A bug is a failure of your system to meet requirements.
Vulnerability is a subset of bug - it is when your system can be forced into a failure mode that does not meet requirements, usually by (ab)using your system (or something your system relies on) in an unexpected way.
Usually a vulnerability may result in failure to meet a requirement in one or more of these areas:
confidentiality
integrity
availability
or you can combine the last two:
confidentiality
reliability (= integrity + availability)
If you use Bugzilla, anything you need to do something with is a bug ;)
In my eyes vulnerabilities are a subset of bugs that enable someone to perform a malicious or harmful operation with your software.
Bugs are just code that does not work properly (how you define properly is subject to opinion).
Wikipedia:
In computer security, the term vulnerability is applied to a weakness in a system which allows an attacker to violate the integrity of that system.
For example, home computers are vulnerable to physical threats like floods and hand grenades, but these are not considered "bugs". In an enterprise environment, such threats are treated more seriously if the risk of the system shutting down is great enough, say for air traffic control or nuclear reactor management.
Business continuity planning/disaster recovery and high availability usually deal with physical threats and failures through redundant hardware and by distributing servers across remote locations.
Classification of a software defect (or "bug") can be subjective, since it depends on the intent of the software design and requirements. A feature intended for one audience may be interpreted as a vulnerability by another if it is abused. For example, stackoverflow.com now discloses self-closed questions to those with 10k rep. Some may say that is a vulnerability, since it violates the common expectations of ordinary users (like I said, it's a subjective call).
A bug is the failure of software to meet requirements. I would consider these to be the ideal requirements, so it would make sense to say that there's a bug in the requirements analysis, although that's more debatable.
A vulnerability is a feature, intended or otherwise, that can be exploited maliciously. It is not necessarily a bug, provided that it was deliberate.
To change subjects, it is a vulnerability that my home wireless has a guessable WPA password, but that was a conscious choice, to facilitate use by my guests. That's an example of requirements leading to a vulnerability. If I'd entered a weak password because I didn't know better, that would have been a bug as well as a vulnerability.

What are the best programmatic security controls and design patterns?

There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
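A minimal sketch of the same principle applied inside an application (the connection URL and account name are hypothetical, and the appropriate JDBC driver is assumed to be on the classpath): give each code path only the access it actually needs, e.g. a reporting job that connects with a read-only database account, so even a bug or injection in that path cannot modify or delete data:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Sketch: the reporting path holds only read privileges.
    public class ReportingJob {
        public Connection openReadOnlyConnection() throws SQLException {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://db.example.com/app",   // hypothetical URL
                    "report_reader",                           // account with SELECT grants only
                    System.getenv("REPORT_DB_PASSWORD"));
            conn.setReadOnly(true); // a hint to the driver; the real limit is the account's grants
            return conn;
        }
    }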
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
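As a minimal, hypothetical sketch of that object-capability style in plain Java: instead of handing a component a file path plus ambient authority to open anything on the filesystem, hand it an already-opened Reader. Its authority is then exactly the references it was given, nothing more:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.Reader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Sketch: the report generator can only read the stream it was handed.
    public class ReportGenerator {
        public String summarize(Reader input) throws IOException { // the capability
            try (BufferedReader reader = new BufferedReader(input)) {
                long lines = reader.lines().count();
                return "lines read: " + lines;
            }
        }

        public static void main(String[] args) throws IOException {
            // The caller, which does hold filesystem authority, decides exactly what to share.
            try (Reader r = Files.newBufferedReader(Path.of("report-input.txt"))) {
                System.out.println(new ReportGenerator().summarize(r));
            }
        }
    }

Even if summarize() contained a bug, it has no reference through which to touch any other file, socket, or secret.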
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
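A minimal sketch of what opting in looks like in code (the pattern and the column names are just illustrative): only values matching a known-good pattern or a fixed set are accepted, rather than trying to enumerate "bad" input:

    import java.util.Set;
    import java.util.regex.Pattern;

    // Sketch: whitelist-based input validation.
    public class InputWhitelist {
        // Usernames: 3-32 characters, lowercase letters, digits, underscore only.
        private static final Pattern USERNAME = Pattern.compile("^[a-z0-9_]{3,32}$");

        // Sort columns: a fixed set, never interpolated from raw user input.
        private static final Set<String> SORT_COLUMNS = Set.of("name", "created_at", "size");

        public static boolean isValidUsername(String candidate) {
            return candidate != null && USERNAME.matcher(candidate).matches();
        }

        public static String sortColumnOrDefault(String requested) {
            return SORT_COLUMNS.contains(requested) ? requested : "name";
        }
    }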
Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
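For instance, a minimal sketch of reusing the platform's vetted primitives for something as simple as a session token, instead of rolling your own scheme on top of java.util.Random or timestamps (both predictable):

    import java.security.SecureRandom;
    import java.util.Base64;

    // Sketch: session tokens from the platform's cryptographic RNG and a standard encoding.
    public class TokenGenerator {
        private static final SecureRandom RNG = new SecureRandom();

        public static String newSessionToken() {
            byte[] bytes = new byte[32];   // 256 bits of entropy
            RNG.nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        }
    }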
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between, eg, processes in order that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions behind crypto building blocks can be important. E.g., stream ciphers such as RC4 are very useful but can easily be used to build an insecure system (e.g., WEP and the like).
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event.
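As a purely illustrative calculation (the figures are invented): if the probability of a particular breach is estimated at Pe = 0.05 per year and the harm at H = $200,000, then R = 0.05 × 200,000 = $10,000 per year, which bounds what it is rational to spend annually on mitigating that specific threat.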
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult to follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.

Resources