What if security through obscurity fails?

I know that security through obscurity means that a system of any sort can be secure as long as nobody outside of its implementation group is allowed to find out anything about its internal mechanisms.
But what if someone does find it? Are there any mechanisms built into the system that still protect it if someone tries to attack it? Are there any examples of systems using security through obscurity?

Security through obscurity refers to systems which are only secure inasmuch as their design and implementation remain a secret. This is like burying a treasure chest on a deserted island. If somebody finds a map, it's just a matter of time before your treasure gets dug up. It's typically hard to keep the design and implementation details secure when multiple parties will need access to the thing in order to use it. For personal standalone systems which do not require frequent access, security through obscurity is not a bad choice.
However, personal standalone systems which do not require frequent access are not the kind of system computer security typically considers. Computer security typically concerns itself mostly with multi-user connected systems which are accessed all the time. In such cases, effectively concealing all the relevant implementation details is prohibitively difficult. In the treasure chest analogy, imagine if there were a thousand people who required regular access to the treasure. Aside from it being difficult to access the treasure, any bad actor has lots of opportunities to steal the map, look at the map, follow somebody to the treasure, or just trick somebody into divulging the location.
So, what's the alternative? Imagine a safety deposit box in a vault at a large bank in the middle of a city. Everybody knows (hypothetically) everything about the security setup: there are alarm systems, cameras, guards, police, the vault itself, and then locks on the boxes. All of this can be known, and still many of the attack vectors which would succeed against the buried treasure would fail in this case: stealing one key would not be enough and, even if all keys were stolen, you'd still have to defeat surveillance, guards and police. Furthermore, access to the contents of the safety deposit box is more convenient for end users: properly authenticated users (with ID, key, etc.) just need to go to the (local) bank, present their credentials, and get access.
The most secure systems - imagine defense systems - use a combination of these techniques. Design and implementation details are not publicly known and are actively protected. However, knowing these details does not give you a map to the treasure, just an understanding of the various other systems protecting the loot. Governments use obscurity because no (useful) system is impervious to all attacks.

Related

What is the security standard for a small business?

This may be a very newbie question, but what exactly do I need so that I can say my network is considered "secure"?
To be more specific, if I have a website that deals with login/signup and lots of money transactions, what do I need to protect it?
So far I know I need an EV SSL certificate and login-system protections like brute-force login protection, password hashing, and key stretching. Is there anything I missed?
Besides, is a firewall really necessary in my case? I just feel like everything I want to do can be accomplished by the server itself, so is there really a need to get a software/hardware firewall?
To be completely blunt, you should probably hire a security professional to assess and make recommendations about your site. Alternatively, a part or full-time network administrator with security experience/certifications might be a good hire.
I recommend the "don't do-it-yourself" approach not because I want to increase work for my peers, or because I don't believe you are a fully competent individual. Rather, I recommend it because security is really, really hard to get right, and any site that handles money is an ideal target for any attacker out there. From a professional perspective, you would be best served by getting an expert to secure your network, perhaps on an ongoing basis; this is a situation that security professionals are very used to, and very well equipped to handle. From a legal perspective, getting an expert opinion on such a sensitive matter is essential due diligence, and trying to do it entirely on your own opens you to significant liability if your system gets breached and attackers are able to carry off your customers' data. That, as your business grows and you gain more visibility online, only becomes more and more likely to happen without ongoing, professional help.

UUID on database level used as a security measure instead of a true rights control?

Can UUID on database level be used as a security measure instead of a true rights control?
Consider a web application where all servlets implement "normal" access control by having a session ID connected to the user calling them (through the web client). All users are therefore authenticated.
The next level of security needed is checking whether an authenticated user actually "owns" the data being changed. In a web application this could, for example, be editing some text in a form. The client (JavaScript) makes sure a user doesn't do something wrong by accident. The issue, of course, is that any number of network tools could easily repeat the call made by the browser and, by changing only the ID, edit a different row in the database table behind the servlet, one that the user does not "own".
My question is whether it would be sufficient to use UUIDs as keys in the database table, thereby making it practically impossible to guess a valid ID (https://en.wikipedia.org/wiki/Universally_unique_identifier#Random_UUID_probability_of_duplicates). As far as I know, a similar approach is used in Google Photos (http://www.theverge.com/2015/6/23/8830977/google-photos-security-public-url-privacy-protected), but I'm not sure it is 100% comparable.
Another option is of course to have every servlet verify that the user is only performing an action on their own data, but in a big application with 200+ servlets and 50-100 tables this could be a very cumbersome task where mistakes could easily happen. In my mind this weakens the security far more, but I'm not sure if that is true.
I'm leaning towards the UUID solution, but I'm also curious if there are other obvious approaches to this problem that I ought to consider.
Update:
I should probably have clarified that my plan would be to use UUIDv4, which is supposed to be random. I know that entropy comes into play here in regards to how random the UUIDs actually are, but as far as I have read, Java (which is the selected platform/language) uses SecureRandom, which is supposed to be "cryptographically strong" (link).
And in that case the wiki states (link):
In other words, only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%.
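For reference, here is a minimal sketch of what the question assumes, using the standard java.util.UUID class (which generates version-4 UUIDs from a SecureRandom instance); this is only an illustration of the claim, not code from the application in question:

    import java.util.UUID;

    public class UuidDemo {
        public static void main(String[] args) {
            // UUID.randomUUID() draws its 122 random bits from SecureRandom
            UUID id = UUID.randomUUID();
            System.out.println("UUID:    " + id);
            System.out.println("version: " + id.version()); // 4 = randomly generated
        }
    }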
Using UUIDs in this manner has two major issues:
If there are no additional authentication methods, any attacker could simply guess UUIDs until they find one belonging to someone else. Google Photos doesn't need to worry about this as much, because they only use UUIDs to obfuscate publicly-shared photo views; you still need to authenticate to modify the photos. This is especially dangerous because:
UUIDs are intended to be unique, not random. There are likely to be predictable patterns in your UUIDs that an attacker would be able to observe and take advantage of. In addition, even without a clear pattern, the number of UUIDs an attacker needs to test to find a valid one swiftly decreases as your userbase grows.
I will always recommend using secure, continuously-checked authentication. However, if you have a fairly small userbase, and you are only using this to obfuscate public data access, then using UUIDs in this manner might be alright. Even then, you should be using actual random strings, and not UUIDs.
Another option is of course to have every servlet verify that the user is only performing an action on their own data, but in a big application with 200+ servlets and 50-100 tables this could be a very cumbersome task where mistakes could easily happen. In my mind this weakens the security far more, but I'm not sure if that is true.
With a large legacy application adding in security later is always a complex task. And you're right - the more complicated an application, the harder it is to verify security. Complexity is the main enemy of security.
However, this is the better way to go, rather than trying to obscure insecure direct object reference problems.
If you are using these UUIDs in the query string, then this information within URLs may be logged in various locations, including the user's browser, the web server, and any forward or reverse proxy servers between the two endpoints. URLs may also be displayed on-screen, bookmarked or emailed around by users, and they may be disclosed to third parties via the Referer header when any off-site links are followed. Placing direct object references into the URL increases the risk that they will be captured by an attacker. An existing user of the application who then has their access to certain bits of data revoked will still be able to access that data by using a previously bookmarked URL (or their browser history). Even where the ID is passed outside of the URL mechanism, a local attacker who knows (or has figured out) how your system works could have purposely saved IDs just for the occasion.
As said in other answers, GUIDs/UUIDs are not meant to be unguessable; they are just meant to be unique. Granted, the Java implementation does actually generate cryptographically secure random numbers. However, what if this implementation changes in future releases, or what if your system is ported elsewhere where this functionality is different? If you're going to do this, you might as well generate your own cryptographically secure random numbers using your own implementation to use as identifiers. If you have 128 bits of entropy in your identifiers, it is completely infeasible for anyone to ever guess them (even if they had all of the world's computing power).
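As a rough sketch of that suggestion (the class and method names here are illustrative, not from the question's code base), a 128-bit identifier can be generated directly from SecureRandom:

    import java.security.SecureRandom;
    import java.util.Base64;

    public final class TokenGenerator {
        private static final SecureRandom RNG = new SecureRandom();

        // Returns a URL-safe identifier carrying 128 bits of entropy.
        public static String newToken() {
            byte[] bytes = new byte[16]; // 16 bytes = 128 bits
            RNG.nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        }

        public static void main(String[] args) {
            System.out.println(newToken());
        }
    }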
However, for the above reasons I recommend you implement access checks instead.
You are trying to bypass authorisation controls by hoping that the key is unguessable. This is a security no-no. Depending on whom you ask, they may refer to it as an insecure direct object reference or a violation of the complete mediation principle.
As noted by F. Stephen Q, your assumption that UUIDs are unique does not imply that they are not predictable. The threat here is: if a user knows a few UUIDs, say his own, does that allow him to predict other people's UUIDs? This is a very real threat; see: Cautionary note: UUIDs generally do not meet security requirements. Especially note what the UUID RFC says:
Do not assume that UUIDs are hard to guess; they should not be used as security capabilities (identifiers whose mere possession grants access), for example.
You can use UUIDs for keys, but you still need to do authorisation checks. When a user wants to access his data, the database should identify the owner of the data, and the server logic needs to enforce that the current user is the same as the database claims the owner is.
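To make that concrete, here is a rough sketch of such a per-request check; the servlet, DAO and field names are hypothetical, and the point is only that the ownership test happens on the server for every call:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical stand-ins for the application's own data layer.
    class Record { String ownerId; String text; }
    interface RecordDao { Record findById(String id); void save(Record r); }

    public class EditRecordServlet extends HttpServlet {
        private RecordDao recordDao; // wired up elsewhere

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String currentUser = (String) req.getSession().getAttribute("userId");
            Record record = recordDao.findById(req.getParameter("recordId"));

            // The authorisation check: the server, not the client, decides.
            if (record == null || !record.ownerId.equals(currentUser)) {
                resp.sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            record.text = req.getParameter("text");
            recordDao.save(record);
        }
    }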

Isn't a password a form of security through obscurity?

I know that security through obscurity is frowned upon and considered not really secure, but isn't a password security through obscurity? It's only secure so long as no one finds it.
Is it just a matter of the level of obscurity? (i.e. a good password well salted and hashed is impractical to break)
Note I'm not asking about the process of saving passwords (Assume they are properly hashed and salted). I'm asking about the whole idea using a password, which is a piece of information, which if known could compromise a person's account.
Or am I misunderstanding what security through obscurity means? That is what I assume it to mean: that there exists some information which, if known, would compromise a system (in this case, the system being whatever the password is meant to protect).
You are right in that a password is only secure if it is obscure. But the "obscure" part of "security through obscurity" refers to obscurity of the system. With passwords, the system is completely open -- you know the exact method that is used to unlock it, but the key, which is not part of the system, is the unknown.
If we were to generalize, then yes, all security is by means of obscurity. However, the phrase "security through obscurity" does not refer to this.
Maybe it's easier to understand what Security-by-Obscurity is about by looking at something that is in some sense the opposite: Auguste Kerckhoffs's Second Principle (now usually known simply as Kerckhoffs's Principle), formulated in 1883 in two articles on La Cryptographie Militaire:
[The cipher] must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience.
Claude Shannon reformulated it as:
The enemy knows the system.
And Eric Raymond as:
Any security software design that doesn't assume the enemy possesses the source code is already untrustworthy.
An alternative formulation of that principle is that
The security of the system must depend only on the secrecy of the key, not the secrecy of the system.
So, we can simply define Security-by-Obscurity to be any system that does not follow that principle, and thus we cleverly out-defined the password :-)
There are two basic reasons why this Principle makes sense:
Keys tend to be much smaller than systems, therefore they are easier to protect.
Compromising the secrecy of a key only compromises the secrecy of the communications protected by that key; compromising the secrecy of the system compromises all communications.
Note that it doesn't say anywhere that you can't keep your system secret. It just says you shouldn't depend on it. You may use Security-by-Obscurity as an additional line of defense, you just shouldn't assume that it actually works.
In general, however, cryptography is hard, and cryptographic systems are complex; therefore you pretty much need to publish your system to get as many eyeballs on it as possible. There are only very few organizations on this planet that actually have the necessary smart people to design cryptographic systems in secrecy: in the past, when mathematicians were patriots and governments were rich, those were the NSA and the KGB; right now it's IBM; and a couple of years from now it's gonna be the Chinese Secret Service and international crime syndicates.
No. Let's look at a definition of security through obscurity from Wikipedia:
a pejorative referring to a principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to provide security.
The phrase refers to the code itself, or the design of a system. Passwords on the other hand are something a user has to identify themselves with. It's a type of authentication token, not a code implementation.
I know that security through obscurity is frowned upon and considered not really secure, but isn't a password security through obscurity? It's only secure so long as no one finds it.
In order to answer this question, we really need to consider why "security through obscurity" is considered to be flawed.
The big reason that security through obscurity is flawed is that it's actually really easy to reverse-engineer a system based on its interactions with the outside world. If your computer system is sitting somewhere, happily authenticating users, I can just watch what packets it sends, watching for patterns, and figure out how it works. And then it's straightforward to attack it.
In contrast, if you're using a proper open cryptographic protocol, no amount of wire-sniffing will let me steal the password.
That's basically why obscuring a system is flawed, but obscuring key material (assuming a secure system) is not. Security through obscurity will never in and of itself secure a flawed system, and the only way to know your system isn't flawed is to have it vetted publicly.
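As a toy illustration of that point (a sketch of the general idea, not a vetted protocol): in a challenge-response scheme the server sends a fresh random nonce and the client answers with a MAC computed from a password-derived key, so the password itself never crosses the wire.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Arrays;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class ChallengeResponseDemo {
        // Toy key derivation for the sketch; a real system would use PBKDF2 or similar.
        static byte[] keyFromPassword(String password) {
            return Arrays.copyOf(password.getBytes(StandardCharsets.UTF_8), 32);
        }

        static byte[] hmac(byte[] key, byte[] message) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(message);
        }

        public static void main(String[] args) throws Exception {
            byte[] sharedKey = keyFromPassword("correct horse battery staple");

            // Server side: issue a fresh random challenge.
            byte[] nonce = new byte[16];
            new SecureRandom().nextBytes(nonce);

            // Client side: prove knowledge of the key without transmitting it.
            byte[] proof = hmac(sharedKey, nonce);

            // Server side: recompute and compare in constant time.
            boolean ok = MessageDigest.isEqual(proof, hmac(sharedKey, nonce));
            System.out.println("authenticated: " + ok);
        }
    }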
Passwords are a form of authentication. They are meant to identify that you are interacting with who you are supposed to interact with.
Here is a nice model of the different aspects of security (I had to memorize this in my security course)
http://en.wikipedia.org/wiki/File:Mccumber.jpg
Passwords fall under the confidentiality aspect of security.
While passwords are probably the weakest of the forms of authentication (something you know, something you have, something you are), I would still say that they do not constitute security through obscurity. With a password, you are not trying to mask a facet of the system to keep it hidden.
Edit:
If you follow the reasoning that passwords are also a means of "security through obscurity" to its logical end, then all security, including things like encryption, is security through obscurity. That would mean the only system not secured through obscurity is one that is encased in concrete and sunk to the ocean floor, with no one ever allowed to use it. This reasoning, however, is not conducive to getting anything done. Therefore we use "security through obscurity" to describe practices that rely on the implementation of a system not being understood as a means of security. With passwords, the implementation is known.
No, they are not.
Security through obscurity means that the process that provides the access protection is only secure because its exact details are not publicly available.
Publicly available here means that all the details of the process are known to everyone, except, of course, a randomized portion that constitutes the key. Note that the range from which keys can be chosen is still known to everyone.
The effect of this is that it can be proven that the only part that needs to be secret is the password itself, and not other parts of the process. Or conversely, that the only way to gain access to the system is by somehow getting at the key.
In a system that relies on the obscurity of its details, you cannot have such an assurance. It might well be that anyone who finds out what algorithm you are using can find a back door into it (i.e. a way to access the system without the password).
The short answer is no. Passwords by themselves are not security by obscurity.
A password can be thought of as analogous to the key in cryptography. If you have the key you can decode the message. If you do not have the key you can not. Similarly, if you have the right password you can authenticate. If you do not, you can not.
The obscurity part in security by obscurity refers to how the scheme is implemented. For example, if passwords were stored somewhere in the clear and their precise location was kept a secret that would be security by obscurity. Let's say I'm designing the password system for a new OS and I put the password file in /etc/guy/magical_location and name it "cooking.txt" where anyone could access it and read all the passwords if they knew where it was. Someone will eventually figure out (e.g. by reverse engineering) that the passwords are there and then all the OS installations in the world will be broken because I relied on obscurity for security.
Another example is if the passwords are stored where everyone can access them but encrypted with a "secret" key. Anyone who has access to the key could get at the passwords. That would also be security by obscurity.
The "obscurity" refers to some part of the algorithm or scheme that is kept secret where if it was public knowledge the scheme could be compromised. It does not refer to needing a key or a password.
Yes, you are correct and it is a very important realisation you are having.
Too many people say "security through obscurity" without having any idea of what they mean. You are correct that all that matters is the level of "complexity" of decoding any given implementation. Usernames and passwords are just a complex realisation of this, as they greatly increase the amount of information required to gain access.
One important thing to keep in mind in any security analysis is the threat model: Who are you worried about, why, and how are you preventing them? What aren't you covering? etc. Keep up the analytical and critical thinking; it will serve you well.

Website hacking - Why it is always possible to do?

We know that any executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to crack it, they will; it is just a question of time.
What about websites? Can we say that a website can be completely safe from attacks by hackers (assuming the hosting is not vulnerable)? If not, what is the reason?
Yes it is always possible to do. There is always a way in.
It's like my grandfather always said:
Locks are meant to keep the honest people out
Can we say that a website can be completely safe from attacks by hackers?
No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.
You can easily write a webapp that is mathematically proven to be secure... But that proof will only hold as long as the underlying operating system, interpreter|compiler, and hardware are secure, which is never the case.
The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.
To sum up and add to the posts that precede:
Web as a shared resource - websites are useful so long as they are accessible. Render the website inaccessible, and you've broken it. Denial-of-service attacks - flooding the server so that it can no longer respond to legitimate requests - will always be a factor. It's a game of keep-away: big server sites find ways to distribute, hackers find ways to deluge.
Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection, but once one avenue for cracking is figured out, chances are high that another mechanism will rise. You could, conceivably, argue that a totally static site can be secure from this, but then how many useful sites fit that bill?
Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt that any web developer could say with 100% confidence that a modern website was secure - there's too much unknown code. Setting the host aside (the server, network protocols, OS, and maybe database), there are still all the great new libraries in Java EE and .Net. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
The authentication problem = by definition, the web site lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one's managed to hack PKI. But getting user authentication ironed out is still quite tricky. It's doable, but it's a tradeoff between difficulty for the user and for configuration, and a system with a higher risk of vulnerability. And even a strong system can be broken when users don't follow the rules or when accidents happen. None of this applies if you want to make a public site open to all users, but that severely limits the features you'll be able to implement.
I'd say that web sites simply change the nature of the security challenge from the challenges of client side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.
Websites suffer greatly from injection and cross-site scripting attacks:
Cross-site scripting carried out on websites were roughly 80% of all documented security vulnerabilities as of 2007
Also, part of a website (in some websites a great deal) is sent to the client in the form of CSS, HTML and JavaScript, which is open for inspection by anyone.
Not to nitpick, but your assumption that the hosting is not vulnerable does not mean the HTTP service running on the host is completely free from exploits.
Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.
For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.
It's not possible to make anything 100% secure.
All that can be done is to make something hard enough to break into, that the time and effort spent doing so makes it not worth doing.
Can I crack your site? Sure, I'll just hire a few suicide bombers to blow up your servers. Or... I'll blow up those power plants that power up your site, or I do some sort of social engineering, and DDOS attacks would quite likely be effective in a large scale not to mention atom bombs...
Short answer: yes.
This might be the wrong website to discuss that. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier for example (which refers to another website, but on Schneier's blog there's a lot of interesting readings on the issue).
Assuming the server itself isn't compromised, and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It Works!" page.
Saying 'completely secure' is a bad thing as it will state two things:
there has not been a proper threat analysis, because secure enough would be the 'correct' term
since security is always a tradeoff, a system that is completely secure will have abysmal usability, and the site will be a huge resource hog, as security has been taken to insane levels.
So instead of trying to achieve "complete security" you should:
Do a proper threat analysis
Test your application (or have someone professional test it) against common attacks
Apply best practices, not extreme measures
The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.
An excellent case in point is passwords. The easy way to go about it is to just have one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated variable-length sequence of characters across the encoding spectrum that only the user himself knows.
Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application could end up leading to situations that compromise the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if the list is compromised, the security measures that had been put into place are for naught). Hence, most of the time a balance gets struck: places ask that you put a number in your password and tell you not to do anything stupid like tell it to other people.
Even if you remove the possibility of a malicious person with the keys to everything leaking data from the equation, human stupidity is infinite. There is no such thing as 100% security.
Can we say that a website can be completely safe from attacks by hackers (assuming the hosting is not vulnerable)?
Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.
If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.
Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.
You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.
The answer to this question lies close to ideas from computational theory that arise from considering the halting problem (http://en.wikipedia.org/wiki/Halting_problem). To wit, if you could say with clarity that you had devised a way to programmatically determine whether any particular program was secure, you might be close to disproving the undecidability of the halting problem on the class of machines you were working with. Since the undecidability of the halting problem has been proven, we know that over Turing machines you would be unable to prove securability, since the problem of security reduces to the halting problem. Even for finite machines you might be able to decide all of the states of the program, but Minsky would tell us that the time required to build a complete state tree for even simplistic modern-day machines and web servers would be huge. You probably know a lot about a specific piece of code, but as soon as you change or update the code, a complete retest is required. Fundamentally this is interesting because it all boils back to the concept of information and meaning. Read about automated theorem proving to understand more about the limits of computational systems (http://en.wikipedia.org/wiki/Automated_theorem_proving).
The fact is that hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious stuff as much as you can!
In fact, you should follow a whitelist approach rather than a blacklist approach when it comes to security.

What are the best programmatic security controls and design patterns?

There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
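A rough, hypothetical Java sketch of the idea: the logger below can append to exactly one file, because the only authority it holds is the object reference it was handed; there is no ambient "open any path" call in reach.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // The capability: holding a reference to this object *is* the permission.
    final class AppendOnlyFile {
        private final Path path;
        AppendOnlyFile(Path path) { this.path = path; }
        void append(String line) throws IOException {
            Files.writeString(path, line + System.lineSeparator(),
                    StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    final class AuditLogger {
        private final AppendOnlyFile log; // the only authority this class holds
        AuditLogger(AppendOnlyFile log) { this.log = log; }
        void record(String event) throws IOException {
            log.append(event); // it cannot reach any other file
        }
    }

    public class CapabilityDemo {
        public static void main(String[] args) throws IOException {
            // Only the composition root decides which file the logger may touch.
            AuditLogger logger = new AuditLogger(new AppendOnlyFile(Path.of("audit.log")));
            logger.record("user logged in");
        }
    }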
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
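A small illustrative sketch of the idea (the allowed patterns and column names here are made up): validate input by accepting only what you expect, rather than trying to strip out everything that looks dangerous.

    import java.util.Set;
    import java.util.regex.Pattern;

    public class WhitelistDemo {
        private static final Pattern USERNAME = Pattern.compile("^[a-z0-9_]{3,32}$");
        private static final Set<String> SORT_COLUMNS = Set.of("name", "created", "size");

        static String requireValidUsername(String input) {
            if (!USERNAME.matcher(input).matches()) {
                throw new IllegalArgumentException("username not in allowed form");
            }
            return input;
        }

        static String sortColumnOrDefault(String input) {
            // Anything not on the whitelist falls back to a known-safe value.
            return SORT_COLUMNS.contains(input) ? input : "name";
        }

        public static void main(String[] args) {
            System.out.println(requireValidUsername("alice_01"));
            System.out.println(sortColumnOrDefault("created"));
            System.out.println(sortColumnOrDefault("1; DROP TABLE users")); // -> "name"
        }
    }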
Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about these more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
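For example, a minimal sketch that leans on the JDK's own PBKDF2 implementation for password hashing instead of a home-grown scheme (the iteration count and output length are illustrative, not a recommendation):

    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class PasswordHashDemo {
        // Hash a password with a random salt using the platform's PBKDF2.
        public static String hash(char[] password) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);

            PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            byte[] hash = f.generateSecret(spec).getEncoded();

            Base64.Encoder b64 = Base64.getEncoder();
            return b64.encodeToString(salt) + ":" + b64.encodeToString(hash);
        }

        public static void main(String[] args) throws Exception {
            System.out.println(hash("correct horse battery staple".toCharArray()));
        }
    }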
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between, eg, processes in order that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions of crypto building blocks can be important. For example, stream ciphers such as RC4 are very useful but can easily be used to build an insecure system (e.g., WEP and the like).
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event.
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult to follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.

Resources