How to securely distinguish my app's traffic from browser traffic

I'm designing a game that makes queries to a database on the web. The database is fronted by a web service. For example, a request could look like this:
Endpoint: "server.com/user/UID/buygold"
POST:
amount: 100
The web service would make sure that userid has enough funds to purchase 100 gold, then would return a Boolean answer based on the success of the transaction.
However, I want to limit the amount of scripting someone could possibly do to automate gameplay. For example, they could figure out their userid and have automated tasks that buy gold for them while they are at work.
On the web service side, what are some sound security measures that I can put in place to decline all but real app traffic? Is there also a way to trump reverse engineers who will take the app apart and look for keys/certs?

I hope this is not for a production environment; the security implications alone are mind-boggling and go well beyond the scope of a single answer. A complete treatment would require a lengthy, in-depth list of requirements and recommendations that depend on many other factors in your web-service environment: the network topology, authentication, session control and management, and various other variables all play important roles in implementing sound security countermeasures.
However, assuming that you have all that taken care of, I will answer your main questions as follows:
For question:
"what are some sound security measures that I can put in place to decline all but real app traffic"
Answer:
One of many options that would address your particular concern is to check the client's User-Agent header in the request, which may look something like this:
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:42.0) Gecko/20100101 Firefox/42.0"
How well this works depends on how the automation script is run. If it takes the form of a browser extension, the User-Agent is a very poor countermeasure; if, on the other hand, the script talks to your server (web service) directly, you can detect it right away, and there are heuristics for detecting whether someone spoofed a User-Agent just to bypass this countermeasure.
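To make that concrete, here is a minimal sketch of such a check, assuming a Flask-style web service and a hypothetical User-Agent prefix that only your official client would send; remember the header is trivially spoofable, so treat it as a first, cheap filter only.

```python
# Minimal sketch: reject requests whose User-Agent does not match the
# official game client. Assumes a Flask app; the expected prefix is a
# hypothetical value your real client would send.
from flask import Flask, request, abort

app = Flask(__name__)
EXPECTED_UA_PREFIX = "MyGameClient/"  # hypothetical app User-Agent

@app.before_request
def require_app_user_agent():
    ua = request.headers.get("User-Agent", "")
    if not ua.startswith(EXPECTED_UA_PREFIX):
        # A spoofed User-Agent passes this check, so treat it only as a
        # first, cheap filter against casual scripting.
        abort(403)
```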
Another countermeasure you can utilize is session management at the client level. This would require an architectural overview of how you implemented your particular project, but a general summary would follow a pattern like this:
Customer/GamePlayer would naturally be required to login (authentication of some sort)
The client system (the user interface) that the game player is using will have countermeasures implemented in a front-end scripting language, e.g. JavaScript or any framework that makes use of JS, such as jQuery, Dojo, etc.
Register event handlers that monitor actions, such as type of input, some Boolean logic that will follow something like this:
"if input is not from keyboard or mouse, then send flag with request"
The server/web-service will have logic to handle this request appropriately. That lets you catch/detect the player after they commit the violation, which is useful for legal purposes and for building a profile of evidence. If, on the other hand, you want to prevent it from happening, you could have Boolean logic along the lines of: "if input is not from the keyboard or mouse (or whatever permitted input device), then do not allow the action (gameplay), and still report back to the web server".
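As a rough server-side sketch of that pattern (the input_flag field name, the Flask route and the logging are all invented for illustration):

```python
# Sketch of handling a client-reported "suspicious input" flag on the
# buy-gold endpoint. Field names and the Flask route are illustrative.
import logging
from flask import Flask, request, jsonify

app = Flask(__name__)
log = logging.getLogger("gameplay")

@app.route("/user/<user_id>/buygold", methods=["POST"])
def buy_gold(user_id):
    amount = int(request.form.get("amount", 0))
    flagged = request.form.get("input_flag") == "not_user_input"

    if flagged:
        # Record the event for later review/evidence, then refuse the action.
        log.warning("automation suspected for user %s (amount=%s)", user_id, amount)
        return jsonify(success=False, reason="automation suspected"), 403

    # ... normal funds check and purchase logic would go here ...
    return jsonify(success=True)
```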
There are a dozen other ways, but this one generally addresses your question, provided you take into account that there are hundreds of other factors to think about, from the network level all the way up to the application layer, and down to which pattern your web service uses (whether it's a REST/API style environment, whether it follows an MVC pattern, and so on). There is no silver bullet when it comes to security; it is a proactive, constant effort on your end to ensure that all stakeholders' assets are protected. In this case the asset is the web service and the gameplay, and the threat is the risk of gameplay tampering, which would affect the integrity of your game.
Now, regarding your second question:
" Is there also a way to trump reverse engineers who will take the app apart and look for keys/certs?"
When reverse engineers really put their minds to it, there is nothing you can do: whatever countermeasure you implement, they will find a workaround. That's why it's called reverse engineering - they will reverse-engineer your countermeasure. Not to be cynical about it, but you have to accept that there is no such thing as a "trump" countermeasure in security. You can, however, employ mechanisms from the network layer all the way up to the application level, combined with proactive measures such as intrusion detection and watching for abnormal behavioral patterns in gameplay; all of these will mitigate your risk. With all that said, your final frontier is to ensure you have a good legal policy in place in your TOS (terms of service). Depending on where your web service is hosted (geographically), you will have some protection when users violate those terms, especially with verbiage that forbids users from attempting to reverse-engineer, tamper with gameplay scoreboards or currency, and so on.
Another good approach is to really connect with users. Users are people, and people sometimes forget that their actions also affect others. Once a user is aware of how their actions may negatively affect others - for example, that scripting their way to an extra 100 gold may financially and emotionally affect the people who put real time and effort into making the service possible - they may think twice; a simple introductory welcome video upon signing up can do wonders. Sometimes, though, a user may not really know that auto-scripts for gameplay are prohibited, or can at least raise an affirmative defense along those lines, so well-published policies can mitigate and potentially eliminate these risks altogether. Despite this optimistic outlook on users, you still have to exercise good programming practices and keep security countermeasures in place.
I hope that I have given you some insight and direction to assist you with this matter; as you can see, it is an involved and potentially very complicated process.
Good luck with your initiatives.

Related

what is the simplest protocol to securely tether a hardware device to a network?

After the Sony PSN debacle, I am trying to find examples of secure hardware tethering to a network. There are two use cases in particular:
1- computer downloads a piece of software that then uniquely and securely labels it to a cloud service
2- a hardware manufacturer uniquely labels a hardware device that then negotiates membership on the network.
Given the fact that the hardware device might have to change (revoke or service enhancements) it feels like #2 becomes #1.
The broad outline is this:
- connect to the service via HTTPS to protect against man in the middle
- device generates a GUID and presents it via HTTPS to service
- service records GUID against account
- on success, service 'enables' device
But how do you protect the GUID so that it cannot be stolen?
I just wanted to comment here:
Sony's PSN issues started with horrible practices with regards to their QA environment.
First, they defaulted to trusting anything that was sent to those servers using their developers toolkit. The reason they did this was that the dev kit used to cost upwards of $10k US and therefore they thought anyone who paid that amount would be on the up and up. However, when they radically lowered the price things changed externally and they didn't account for it.
The second issue with PSN was that the security between QA and live was, well, weak at best and easily circumvented. My understanding is that you could send commands to live using QA credentials. Because QA credentials were used, all chargeable actions were approved without money changing hands and the actions were applied to live accounts. When several people told Sony about this they did nothing.
A third issue was a reliance on hardware based encryption keys. Even hardware encryption keys installed on the devices can be figured out.
Point is, Sony dug their own grave on it so I wouldn't use anything they did as a template for how to do things. Heck, a lot of their websites were open to SQL injection which in today's day and age should get you fired.
Another example here is the iPhone. Each iPhone has a unique identifier (its UDID) that installed apps can grab and send back across the network, similar to a serial number. Some apps use this ID to try to tie a particular device to a person. However, it's trivial to fabricate IDs and broadcast them, so this hasn't worked out so well for the partners. Apple also does not expose a way for app producers to verify that a given ID is valid.
A third example is mobile phone carriers. They use a particular ID baked into your SIM card to identify your account in order to know who to bill when a call is made. This ID is verified whenever the phone checks in with the network. However, we're dealing with radio signals and any device that can broadcast a correct ID can gain access. Point is, honest people think that only AT&T approved devices can get on an AT&T network. Reality is, anything can but they are going to bill the owner of the particular ID...
That said, any software you have running on a remote device that is not under your direct control is likely to be hacked. The popularity of the device will increase the likelihood of it happening sooner rather than later.
Where do we go from here?
On a basic level you associate an ID with an account in your service. PSN, Apple and others have done this. When an ID is broadcast, you need to verify that it exists AND that it's tied to an active account. If both pass then you have two options: either perform the action requested OR request additional verification.
For any actions that require money to be spent, do the additional verification (usually some form of username/password), capture the funds, then perform the action. Go one step further and every time a bad login is entered, send an email to the user on file. Further, automatically send a receipt. These are typically done so that your honest users can tell when something is going on.
Anything else just let through.
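A hedged sketch of that basic flow, with an in-memory dictionary standing in for your real account store and a deliberately simplified password check:

```python
# Sketch of the basic flow: verify the device ID maps to an active
# account, and demand extra verification for chargeable actions.
# The account store and password hashing are illustrative placeholders.
import hmac, hashlib

accounts = {
    "7b0e9c1a-guid": {"user": "alice", "active": True,
                      "password_hash": hashlib.sha256(b"s3cret").hexdigest()},
}

def check_password(account, password):
    # Illustrative only; a real system would use a slow KDF such as bcrypt/scrypt.
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(candidate, account["password_hash"])

def handle_request(device_id, costs_money, password=None):
    account = accounts.get(device_id)
    if account is None or not account["active"]:
        return "reject: unknown or inactive ID"
    if costs_money:
        if password is None or not check_password(account, password):
            return "reject: additional verification required"
        return "capture funds, send receipt, then perform action"
    return "perform"  # free actions just go through
```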
Bearing in mind, of course, that QA credentials should NOT work in your Live environment. Those systems should not be tied to each other under any condition and, quite frankly, should even live on separate hardware. In other words, QA and Live should NOT share a login database.
The thing here is that you shouldn't care about the device itself; just the account. You can't control the device as it's out of your hands; heck you can't even be sure it hasn't been physically tampered with. (XBox has been fighting this one with people adding resistors or burning out certain components to get past physical security features).
So, IMHO, do a bit to keep honest people honest but overall don't worry about it. Now, you should transfer everything via SSL or some other encrypted connection between the device and your cloud so that you don't leak IDs to anyone that wants to grab them. This will help protect those honest people.
Further, you shouldn't have a direct way to query whether an ID is valid or not from the outside. This will make it a bit more difficult for a hacker to find existing valid IDs and take over accounts. If you want to get fancy you could honey pot those and track the hackers down in order to sue them into oblivion, but that takes time and resources companies don't normally have. Also you could log all of the requests that contained bad IDs and use that to track hackers down.
Note that even after the device has been "enabled" I still suggest you have two levels of authentication. The first is for simple actions like downloading free content; the second kicks in anytime there is a fee associated. Again, we're trying to protect your honest subscribers.
For the dishonest ones you will have to apply some statistical analysis on the transactions coming across. Things like the transaction rate can help identify bots that are running and allow you to kill their IDs. There are others but they'll be unique to your application.
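For the transaction-rate idea specifically, a small sliding-window sketch might look like the following; the window size and threshold are made-up numbers you would tune against real gameplay data.

```python
# Flag accounts whose transaction rate is implausible for a human player.
# Threshold and window size are illustrative; tune them per application.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_TX_PER_WINDOW = 30   # hypothetical: more than this looks like a bot

_recent = defaultdict(deque)

def record_transaction(account_id, now=None):
    now = now or time.time()
    window = _recent[account_id]
    window.append(now)
    # Drop events that have aged out of the window.
    while window and window[0] < now - WINDOW_SECONDS:
        window.popleft()
    if len(window) > MAX_TX_PER_WINDOW:
        return "suspend"   # kill/flag the ID for manual review
    return "ok"
```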
This was long winded. But my point is:
You can't secure the ID or anything else you pass out.
You can't ensure the requests are coming from your devices or your own approved devices.
You better take actions to keep QA and production separate for those building software for these devices using your services.
You better take actions to protect your normal honest users.
Trust NOTHING.
Due to the above you should evaluate your business model so that you don't care what device was used and instead focus on the individual accounts themselves; which you do have control over.
I am not sure I entirely understand the question, but I think you want some sort of device to hold on to a GUID assigned to it by a web service, and you don't want someone finding out what that GUID is, correct?
If so, there isn't a lot you can do. You have already mentioned one option... using HTTPS during the assigning of the ID. That is a good start, but remember that anyone who has physical access to the device can do a lot of things to look up this ID.
In short, it is impossible to completely hide. Someone can always reverse engineer it. There are folks out there reading data right out of memory with hardware.

Security Beyond a Username/Password?

I have a webapp that requires security beyond that of a normal web application. When any user visits the domain name, they are presented with two text fields, a username field, and a password field. If they enter a valid user/pass, they get access to the web application. Standard stuff.
However, I'm looking for additional security beyond this standard setup. Ideally it would be a software solution, but I'm also open for hardware solution as well (hardware=key fobs), or even procedural changes (one time use passwords on a password pad for example).
The webapp is unique in that we know all our users ahead of time, and we create their username and password and give it to them. In this sense, we can be assured that the username and password are "strong".
However, our clients have requested additional security beyond this. Anyone have any ideas on how to add another layer of complexity to the security?
Our company used PhoneFactor and we absolutely love it.
We've also used Safeword Tokens in the past.
However, it's not the only game in town. I'd start by googling "two-factor authentication".
The OWASP guide to authentication is another good place to start. Actually, OWASP is the first place I'd look for ANY web security question.
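If you go the two-factor route, here is a hedged sketch of a TOTP (RFC 6238 style) check using only the Python standard library; in practice you would more likely reach for a vetted library such as pyotp, and the shared secret shown is just an example value.

```python
import base64, hmac, hashlib, struct, time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Compute the current time-based one-time password (RFC 6238 style)."""
    key = base64.b32decode(secret_b32)
    counter = int((now or time.time()) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# After the normal username/password check, also require the code from
# the user's authenticator app or hardware token.
SHARED_SECRET = "JBSWY3DPEHPK3PXP"   # example secret, provisioned per user

def second_factor_ok(submitted_code):
    return hmac.compare_digest(submitted_code, totp(SHARED_SECRET))
```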
Another option for additional security is to use a piece of physical 'evidence' such as a Smart Card: Protect Your Data Via Managed Code And The Windows Vista Smart Card APIs
There are lots of different areas that web apps can have their security improved on. Before getting started you need to determine what, exactly, your problem areas might be and what you want to focus on.
You might start this process by having a third party do Penetration Testing (PEN Testing) on your application. This should give a quick hit list of things you can take care of and, when you have a passing grade, is something to use in your sales literature.
Next you'll want to talk to your customers to understand what they mean by "more secure". Is it simply two factor authentication like David and Mitch mentioned or are they more concerned about things such as data in motion (ARP Poisoning, SSL, and the like), data at rest (everything from hard drive encryption to database encryption), authorization, impersonation (cross site and replay), personnel (ongoing background checks on who has access to the machines), etc..
The concept of security covers a lot of ground.

Paranoid attitude: What's your degree about web security concerns?

This question might look like a subjective one, but it really isn't.
When you develop a website, there are several things you must know about: XSS attacks, SQL injection, etc.
It can be very difficult (and take a long time to code) to secure against all potential attacks.
I always try to secure my application, but I don't know when to stop.
Let's take an example: a social networking site like Facebook. (A banking website, by contrast, must secure all of its data.)
I see some approaches:
Do not secure against XSS, SQL injection, etc. This is only realistic when you trust your users, e.g. a back end for a private enterprise. But would you secure this type of application?
Secure only against attempts to access data the user doesn't own: to me this is the best approach.
Secure everything: you protect all data (owned or not), so a user can't break their own data or other users' data. This takes a very long time - is it really useful?
Secure against common attacks but don't bother with very hard ones (because coding for them takes too long compared to the chance of being hacked).
Well, I don't really know what to do... I try to do 1, 2 and 4, but I don't know if that's the right choice.
Is there an acceptable level of risk in not securing all of your data? Should I secure everything even if it doubles the time it takes to code each feature? What is the business trade-off between risk and "time is money"?
Thank you for sharing your views, because I think a lot of developers don't know where the right limit is.
EDIT: I see a lot of replies talking about XSS and SQL injection, but those are not the only things to take care of.
Let's take a forum. A thread can be written in a forum where we are a moderator, so when you send data to the client view you add or remove the "add" button for this forum. But when a user tries to save a thread on the server side, you must check that the user has the right to do it (you can't rely on client-side security).
This is a very simple example, but in some of my apps I have a hierarchy of rights that can be very difficult to check (it needs a lot of SQL queries). On the other hand, the hack is really hard to find (data is pseudo-encrypted in the client view, a lot of data has to be modified to make the hack work, and the hacker needs a good understanding of my app's rules). In this case, should I check only surface security holes (really easy hacks), or also the very hard ones (which will decrease performance for all users and take me a long time to develop)?
The second question is: can we "trust" the client view for very hard hacks (so as not to write long, complex code that hurts performance)?
Here is another post about this sort of hack (Hibernate and collection checking): Security question: how to secure Hibernate collections coming back from client to server?
I think you should try to secure everything you can; the time spent doing this is nothing compared to the time needed to clean up the mess left by someone exploiting a vulnerability you left somewhere.
Most things are quite easy to fix anyway:
SQL injection really has nothing to do with SQL itself; it's just string manipulation, so if you don't feel comfortable with that, use prepared statements with bound parameters and forget about the problem (see the sketch after this list)
cross-site scripting exploits are easily negated by escaping (with htmlentities or similar) all untrusted data before sending it out as output -- of course this should be coupled with extensive data filtering, but it's a good start
credential theft: never store data that could provide permanent access to protected areas -- instead save a hashed version of the username in the cookie and set a time limit on sessions; this way an attacker who happens to steal this data gets limited access instead of permanent access
never assume that just because a user is logged in they can be trusted -- apply security rules to everybody
treat everything you get from outside as potentially dangerous: even a trusted site you pull data from might be compromised, and you don't want to go down with it -- even your own database could be compromised (especially in a shared environment), so don't trust its data either
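As promised in the first point, here is a minimal sketch of those first two fixes, with sqlite3 and html.escape standing in for whatever database driver and templating layer you actually use:

```python
# Parameterized query + output escaping, the two cheapest wins above.
# sqlite3 and html.escape stand in for your real driver/template engine.
import sqlite3, html

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, bio TEXT)")

def add_user(name, bio):
    # Bound parameters: the driver keeps data out of the SQL text entirely.
    conn.execute("INSERT INTO users (name, bio) VALUES (?, ?)", (name, bio))

def render_bio(name):
    row = conn.execute("SELECT bio FROM users WHERE name = ?", (name,)).fetchone()
    if row is None:
        return ""
    # Escape untrusted data before it goes into HTML output.
    return "<p>{}</p>".format(html.escape(row[0]))

add_user("mallory", '<script>alert("xss")</script>')
print(render_bio("mallory"))   # the script tag comes out inert
```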
Of course there is more, like session hijacking attacks, but those are the first things you should look at.
EDIT regarding your edit:
I'm not sure I fully understand your examples, especially what you mean by "trust on client security". Of course all pages with restricted access must start with a check to see whether the user has the right to see the content, and optionally whether they have the correct level of privilege: there can be some actions available to all users, and others only available to a more restricted group (like moderators in a forum). All these controls have to be done on the server side, because you can never trust what the client sends you, whether that data comes through GET, POST or even cookies. None of these checks are optional.
"Breaking data" is not something that should ever be possible, by the authorized user or anybody else. I'd file this under "validation and sanitation of user input", and it's something you must always do. If there's just the possibility of a user "breaking your data", it'll happen sooner or later, so you need to validate any and all input into your app. Escaping SQL queries goes into this category as well, which is both a security and data sanitation concern.
The general security in your app should be sound regardless. If you have a user management system, it should do its job properly. I.e. users that aren't supposed to access something should not be able to access it.
The other problem, straight up XSS attacks, has not much to do with "breaking data" but with unauthorized access to data. This is something that depends on the application, but if you're aware of how XSS attacks work and how you can avoid them, is there any reason not to?
In summary:
SQL injection, input validation and sanitation go hand in hand and are a must anyway
XSS attacks can be avoided by best practices and a bit of awareness; you shouldn't need to do much extra work for it
anything beyond that, like "pro-active" brute force attack filters or such things, that do cause additional work, depend on the application
Simply making it a habit to stick to best practices goes a long way in making a secure app, and why wouldn't you? :)
You need to see web apps as the server-client architecture they are. The client can ask a question, the server gives answers. The question is just a URL, sometimes with a bit of attached POST data.
Can I have /forum/view_thread/12345/ please?
Can I POST this $data to /forum/new_thread/ please?
Can I have /forum/admin/delete_all_users/ please?
Your security can't rely only on the client not asking the right question. Never.
The server always needs to evaluate the question and answer No when necessary.
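A bare-bones sketch of that rule, with a hypothetical permission table and the forum URLs from the examples above; the point is only that the decision happens on the server, per request:

```python
# The server decides, per request, whether this user may ask this question.
# The permission table and role names are hypothetical placeholders.
PERMISSIONS = {
    "/forum/view_thread":            {"user", "moderator", "admin"},
    "/forum/new_thread":             {"user", "moderator", "admin"},
    "/forum/admin/delete_all_users": {"admin"},
}

def handle(path, session_role):
    # Match the route prefix (e.g. /forum/view_thread/12345/).
    for prefix, allowed_roles in PERMISSIONS.items():
        if path.startswith(prefix):
            if session_role in allowed_roles:
                return "200 OK"
            return "403 Forbidden"   # the answer is simply "No"
    return "404 Not Found"

print(handle("/forum/view_thread/12345/", "user"))        # 200 OK
print(handle("/forum/admin/delete_all_users/", "user"))   # 403 Forbidden
```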
All applications should have some degree of security. You generally don't ask for SSL on intranet websites, but you need to take care of SQL/XSS attacks.
All data your users enter into your application is your responsibility. You must make sure nobody unauthorized gets access to it. Sometimes a piece of "non-critical" information can pose a very real security problem, because we're all lazy people.
Some time ago a friend of mine ran a games website: users created their profiles, posted on the forum, all that stuff. Then one day someone found an SQL injection hole somewhere, and that attacker got all of the username and password information.
Not a big deal, huh? I mean, who cares about a player account on a games website? But most users used the same username/password for MSN, Counter-Strike, etc., so it became a big problem very fast.
The bottom line is: all applications need some level of security. You should take a look at STRIDE to understand your attack vectors and take the best action.
I personally prefer to secure everything at all times. It might be a paranoid approach, but when I see tons of websites across the internet that are vulnerable to SQL injection or even much simpler attacks, and whose owners don't bother to fix them until someone "hacks" them and steals their precious data, it makes me pretty afraid. I don't really want to be the one responsible for leaked passwords or other user info.
Just ask someone with hacking experience to check your application / website. It should give you a fair idea of what's wrong and what should be updated.
You want to have strong API-side ACLs. A few days ago I saw a problem where a guy had secured every single UI, but the website was still vulnerable through AJAX, because his API (where the requests were actually sent) simply trusted every request as already checked. I could basically pull the whole database through this bug.
I think it's helpful to distinguish between preventing code injection and plain data authorization.
In my opinion, all it takes is a few good coding habits to completely eliminate SQL injection. There is simply no excuse for it.
XSS is a little bit different - I think it can always be prevented, but it may not be trivial if your application features user-generated content. By that I simply mean that securing your app against XSS may not be as trivial as securing it against SQL injection. I do not mean that it is OK to allow XSS - I still think there is no excuse for allowing it; it's just harder to prevent than SQL injection if your app revolves around user-generated content.
So SQL injection and XSS are purely technical matters - the next level is authorization: how thoroughly should one shield off access to data that is none of the current user's business? Here I think it really does depend on the application, and I can imagine it makes sense to distinguish between "user X may not see anything of user Y" and "not bothering user X with user Y's data would improve usability and make the application more convenient to use".

Website hacking - Why it is always possible to do?

We know that every executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to crack it, they will; it is only a question of time.
What about websites? Can we say that a website can be completely safe from attacks by hackers (assuming the hosting is not vulnerable)? If not, what is the reason?
Yes it is always possible to do. There is always a way in.
It's like my grandfather always said:
Locks are meant to keep the honest people out.
May we say that website can be completely safe from attacks of hackers?
No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.
You can easily write a webapp that is mathematically proven to be secure... But that proof will only hold as long as the underlying operating system, interpreter|compiler, and hardware are secure, which is never the case.
The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.
To sum up and add to the posts that precede:
The web as a shared resource - websites are useful only so long as they are accessible; render the website inaccessible and you've broken it. Denial-of-service attacks, which amount to flooding the server so that it can no longer respond to legitimate requests, will always be a factor. It's a game of keep-away - big sites find ways to distribute the load, hackers find ways to deluge it.
Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection, but once one avenue for cracking is figured out, chances are high that another mechanism will arise. You could conceivably argue that a totally static site can be secure from this, but then how many useful sites fit that bill?
Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt that any web developer could say with 100% confidence that a modern website is secure; there's too much unknown code. Setting the host aside (the server, network protocols, OS, and maybe database), there are still all the great new libraries in Java EE and .NET. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
The authentication problem = by definition, the website lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one has managed to hack PKI. But getting user authentication ironed out is still quite tricky. It's doable, but it's a tradeoff between difficulty for the user, configuration effort, and a system with a higher risk of vulnerability. And even a strong system can be broken when users don't follow the rules or when accidents happen. None of this applies if you want to make a public site open to all users, but that severely limits the features you'll be able to implement.
I'd say that web sites simply change the nature of the security challenge from the challenges of client side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.
Websites suffer greatly from injection and cross-site scripting attacks.
Cross-site scripting carried out on websites accounted for roughly 80% of all documented security vulnerabilities as of 2007.
Also, part of a website (in some sites a great deal of it) is sent to the client in the form of CSS, HTML and JavaScript, which is open for inspection by anyone.
Not to nitpick, but the premise that "the hosting is not vulnerable" does not mean the HTTP service running on the host is completely free from exploits.
Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.
For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.
It's not possible to make anything 100% secure.
All that can be done is to make something hard enough to break into, that the time and effort spent doing so makes it not worth doing.
Can I crack your site? Sure, I'll just hire a few suicide bombers to blow up your servers. Or I'll blow up the power plants that power your site, or do some sort of social engineering; large-scale DDoS attacks would quite likely be effective too, not to mention atom bombs...
Short answer: yes.
This might be the wrong website to discuss that. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier for example (which refers to another website, but on Schneier's blog there's a lot of interesting readings on the issue).
Assuming the server itself isn't compromised, and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It works!" page.
Saying 'completely secure' is a bad sign, as it implies two things:
there has not been a proper threat analysis, because 'secure enough' would be the correct term
since security is always a tradeoff, a system that is 'completely secure' will have abysmal usability, and the site will be a huge resource hog because security has been taken to insane levels.
So instead of trying to achieve "complete security" you should:
Do a proper threat analysis
Test your application (or have someone professional test it) against common attacks
Apply best practices, not extreme measures
The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.
An excellent case in point is passwords. The easy way to go about it is to just have one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated variable-length sequence of characters across the encoding spectrum that only the user himself knows.
Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application could end up compromising the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if that list is compromised, the security measures that had been put in place are for naught). Hence, most of the time a balance gets struck: sites ask that you put a number in your password and tell you not to do anything stupid like telling it to other people.
Even if you remove the possibility of a malicious person with the keys to everything leaking data from the equation, human stupidity is infinite. There is no such thing as 100% security.
May we say that website can be completely safe from attacks of hackers (we assume that hosting is not vulnerable)?
Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.
If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.
Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.
You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.
The answer to this question lies close to ideas from computability theory that arise from the halting problem (http://en.wikipedia.org/wiki/Halting_problem). To wit: if you could programmatically determine with certainty whether any particular program was secure, you would be close to disproving the undecidability of the halting problem for the class of machines you were working with. Since the halting problem has been proven undecidable, we know that over Turing machines you cannot prove security in general, because the problem of security reduces to the halting problem. Even for finite machines you might be able to enumerate all of a program's states, but Minsky would tell us that the time required to build a complete state tree for even a simplistic modern machine or web server would be huge. You may know a lot about a specific piece of code, but as soon as you change or update it, a complete retest is required. Fundamentally this is interesting because it all comes back to the concepts of information and meaning. Read about automated theorem proving to understand more about the limits of computational systems: http://en.wikipedia.org/wiki/Automated_theorem_proving
The fact is that hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious stuff as much as you can.
In fact, when it comes to security you should follow a whitelist approach rather than a blacklist approach.

What are the best programmatic security controls and design patterns?

There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
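A small code sketch of the same principle (Unix-only, and the choice of the "nobody" user is illustrative): acquire the privileged resource first, then drop the privilege for the rest of the process's lifetime.

```python
# Hold root only long enough to bind the privileged port, then drop it.
# Unix-only sketch; the target user "nobody" is illustrative.
import os, pwd, socket

def bind_privileged_port(port=80):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("", port))          # requires root for ports < 1024
    sock.listen(5)
    return sock

def drop_privileges(username="nobody"):
    target = pwd.getpwnam(username)
    os.setgid(target.pw_gid)       # group first, while we still may change it
    os.setuid(target.pw_uid)       # from here on the process is unprivileged

if __name__ == "__main__":
    listener = bind_privileged_port()
    drop_privileges()
    # ... serve requests without root from this point on ...
```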
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
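A toy sketch of the capability style (the classes and names are invented for illustration): rather than handing code a path plus an ACL lookup, you hand it an object that can only do what you intended to delegate.

```python
# Toy illustration of object capabilities: authority travels with the
# reference you hand out, so a component can only do what it was given.
class ReadOnlyFile:
    """A capability that permits reading one file and nothing else."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()
    # no write(), and no way to reach other paths

def word_count(doc):
    # This code never sees a raw path or the wider filesystem.
    return len(doc.read().split())

# The caller decides exactly how much authority to delegate:
# word_count(ReadOnlyFile("/var/data/report.txt"))
```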
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
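A tiny sketch of opting in only what you accept; the field and the allowed set are hypothetical:

```python
# Accept only the values you explicitly expect; reject everything else.
ALLOWED_SORT_FIELDS = {"name", "created_at", "score"}   # hypothetical columns

def parse_sort_param(raw_value):
    # Opt in: anything not on the list is rejected, not "cleaned up".
    if raw_value in ALLOWED_SORT_FIELDS:
        return raw_value
    return "created_at"   # safe default

print(parse_sort_param("score"))                 # score
print(parse_sort_param("1; DROP TABLE users"))   # created_at
```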
Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
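For example, a hedged sketch using only vetted standard-library primitives (a CSPRNG token, a standard KDF and a constant-time comparison), rather than anything home-grown:

```python
# Lean on vetted primitives instead of inventing your own.
import secrets, hashlib, hmac

session_token = secrets.token_urlsafe(32)   # CSPRNG-backed session/reset token

def hash_password(password, salt=None):
    # PBKDF2 from the standard library; bcrypt/scrypt/argon2 via vetted
    # libraries are equally reasonable choices.
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)   # constant-time comparison
```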
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between, eg, processes in order that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions behind crypto building blocks can be important. For example, stream ciphers such as RC4 are very useful but can easily be used to build an insecure system (e.g., WEP and the like).
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event.
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult to follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.
