What should every programmer know about security? [closed] - security

I am an IT student, now in my 3rd year at university. Until now we've been studying a lot of subjects related to computers in general (programming, algorithms, computer architecture, maths, etc.).
I am sure that nobody can learn everything about security, but there is surely a "minimum" level of knowledge every programmer or IT student should have about it, and my question is: what is this minimum knowledge?
Can you suggest some e-books, courses, or anything else that can help me start down this road?

Principles to keep in mind if you want your applications to be secure:
Never trust any input!
Validate input from all untrusted sources - use whitelists, not blacklists (see the sketch after this list)
Plan for security from the start - it's not something you can bolt on at the end
Keep it simple - complexity increases the likelihood of security holes
Keep your attack surface to a minimum
Make sure you fail securely
Use defence in depth
Adhere to the principle of least privilege
Use threat modelling
Compartmentalize - so your system is not all or nothing
Hiding secrets is hard - and secrets hidden in code won't stay secret for long
Don't write your own crypto
Using crypto doesn't mean you're secure (attackers will look for a weaker link)
Be aware of buffer overflows and how to protect against them
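As a small illustration of the whitelist idea above, here is a minimal sketch in Python; the field names and patterns are invented for the example and are nowhere near a complete rule set.

```python
import re

# Whitelist validation: accept only what you explicitly allow, reject everything else.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")      # letters, digits, underscore only
ALLOWED_ROLES = {"reader", "editor", "admin"}        # a closed set, not a blacklist

def validate_signup(username: str, role: str) -> None:
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    if role not in ALLOWED_ROLES:
        raise ValueError("invalid role")

validate_signup("alice_01", "reader")                 # passes
# validate_signup("alice; DROP TABLE users", "root")  # would raise ValueError
```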
There are some excellent books and articles online about making your applications secure:
Writing Secure Code 2nd Edition - I think every programmer should read this
Building Secure Software: How to Avoid Security Problems the Right Way
Secure Programming Cookbook
Exploiting Software
Security Engineering - an excellent read
Secure Programming for Linux and Unix HOWTO
Train your developers on application security best practices:
Codebashing (paid)
Security Innovation(paid)
Security Compass (paid)
OWASP WebGoat (free)

Rule #1 of security for programmers: Don't roll your own
Unless you are yourself a security expert and/or cryptographer, always use a well-designed, well-tested, and mature security platform, framework, or library to do the work for you. These things have spent years being thought out, patched, updated, and examined by experts and hackers alike. You want to gain those advantages, not dismiss them by trying to reinvent the wheel.
Now, that's not to say you don't need to learn anything about security. You certainly need to know enough to understand what you're doing and make sure you're using the tools correctly. However, if you ever find yourself about to start writing your own cryptography algorithm, authentication system, input sanitizer, etc, stop, take a step back, and remember rule #1.
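For instance, here is a sketch of "use the library instead of rolling your own", assuming the third-party Python `cryptography` package is available; the point is that key generation, cipher choice, padding, and authentication are all handled by code that experts have already reviewed.

```python
# Use a vetted primitive (Fernet from the `cryptography` package) rather than
# inventing your own cipher, padding scheme, or key-handling code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice, store this in a secrets manager
f = Fernet(key)
token = f.encrypt(b"sensitive data")   # authenticated encryption (AES + HMAC underneath)
assert f.decrypt(token) == b"sensitive data"
```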

Every programmer should know how to write exploit code.
Without knowing how systems are exploited, you are only preventing vulnerabilities by accident. Knowing how to patch code is meaningless unless you know how to test your patches. Security isn't just a bunch of thought experiments; you must be scientific and test your experiments.

Security is a process, not a product.
Many seem to forget this obvious fact.

I suggest reviewing CWE/SANS TOP 25 Most Dangerous Programming Errors. It was updated for 2010 with the promise of regular updates in the future. The 2009 revision is available as well.
From http://cwe.mitre.org/top25/index.html
The 2010 CWE/SANS Top 25 Most Dangerous Programming Errors is a list of the most widespread and critical programming errors that can lead to serious software vulnerabilities. They are often easy to find, and easy to exploit. They are dangerous because they will frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.
The Top 25 list is a tool for education and awareness to help programmers to prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped. Software customers can use the same list to help them to ask for more secure software. Researchers in software security can use the Top 25 to focus on a narrow but important subset of all known security weaknesses. Finally, software managers and CIOs can use the Top 25 list as a measuring stick of progress in their efforts to secure their software.

A good starter course might be the MIT course in Computer Networks and Security. One thing that I would suggest is to not forget about privacy. Privacy, in some senses, is really foundational to security and isn't often covered in technical courses on security. You might find some material on privacy in this course on Ethics and the Law as it relates to the internet.

The Web Security team at Mozilla put together a great guide, which we abide by in the development of our sites and services.

The importance of secure defaults in frameworks and APIs:
Lots of early web frameworks didn't escape HTML by default in templates and had XSS problems because of this
Lots of early web frameworks made it easier to concatenate SQL than to create parameterized queries, leading to lots of SQL injection bugs (see the sketch after this list)
Some versions of Erlang (R13B, maybe others) don't verify SSL peer certificates by default, and there is probably a lot of Erlang code that is susceptible to SSL MITM attacks
Java's XSLT transformer by default allows execution of arbitrary Java code. Many serious security bugs have been created by this.
Java's XML parsing APIs by default allow the parsed document to read arbitrary files on the filesystem. More fun :)
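To make the SQL item above concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and the attacker string are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Dangerous: concatenation turns attacker input into SQL:
#   "SELECT email FROM users WHERE name = '" + user_input + "'"

# Safer: a parameterized query keeps the input as data, never as SQL.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] : the injection attempt matches nothing
```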

You should know about the three A's: Authentication, Authorization, Audit. A classic mistake is to authenticate a user while not checking whether that user is authorized to perform some action, so a user may look at other users' private photos - the mistake Diaspora made. Many, many more people forget about Audit: in a secure system, you need to be able to tell who did what and when.
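A tiny sketch of that authorization point; `load_photo` and `audit_log` are hypothetical helpers, not a real framework API.

```python
# Authentication tells you who the user is; authorization decides what they may do;
# audit records that they did it. The Diaspora-style bug is skipping the middle step.
def view_photo(current_user, photo_id, load_photo, audit_log):
    photo = load_photo(photo_id)                      # hypothetical data-access helper
    if not (photo.is_public or photo.owner_id == current_user.id):
        audit_log(current_user.id, "view_denied", photo_id)
        raise PermissionError("not authorized to view this photo")
    audit_log(current_user.id, "view", photo_id)      # who did what (and, implicitly, when)
    return photo
```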

Remember that you (the programmer) have to secure all parts, but the attacker only has to find one chink in your armour.
Security is an example of "unknown unknowns". Sometimes you won't know what the possible security flaws are (until afterwards).
The difference between a bug and a security hole depends on the intelligence of the attacker.

I would add the following:
How digital signatures and digital certificates work
What's sandboxing
Understand how different attack vectors work:
Buffer overflows/underflows/etc on native code
Social engineering
DNS spoofing
Man-in-the-middle
CSRF/XSS et al. (see the escaping sketch below)
SQL injection
Crypto attacks (e.g. exploiting weak crypto algorithms such as DES)
Program/framework errors (e.g. GitHub's latest security flaw)
You can easily Google all of these. This will give you a good foundation.
If you want to see web app vulnerabilities, there's a project called Google Gruyere that shows you how to exploit a working web app.
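As a concrete illustration of the XSS item in the list above (a sketch using only Python's standard library): escape untrusted data before it is written into HTML. Good templating engines do this by default.

```python
import html

untrusted = '<script>alert("xss")</script>'          # attacker-supplied value

# Escaping turns markup characters into harmless entities before they reach the page.
safe_fragment = "<p>Hello, {}</p>".format(html.escape(untrusted))
print(safe_fragment)
# <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```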

When you are building enterprise software, or any software of your own, you should think like a hacker. As we know, hackers are not experts in everything either, but when they find a vulnerability they start digging into it, gathering information about everything, and finally attack our software. To prevent such attacks we should follow some well-known rules, like:
Always try to break your own code (use cheat sheets and Google things for more information).
Stay up to date on security flaws in your programming field.
As mentioned above, never trust any kind of user or automated input.
Use open-source applications (most of their security flaws are known and solved).
You can find more security resources at the following links:
OWASP Security
CERT Security
SANS Security
netcraft
SecuritySpace
openwall
PHP Sec
thehackernews (keep updating yourself)
For more information, Google your application vendor's security flaws.

Why it is important.
It is all about trade-offs.
Cryptography is largely a distraction from security.

For general information on security, I highly recommend reading Bruce Schneier. He's got a website, his crypto-gram newsletter, several books, and has done lots of interviews.
I would also get familiar with social engineering (and Kevin Mitnick).
For a good (and pretty entertaining) book on how security plays out in the real world, I would recommend the excellent (although a bit dated) 'The Cuckoo's Egg' by Cliff Stoll.

Also be sure to check out the OWASP Top 10 List for a categorization of all the main attack vectors/vulnerabilities.
These things are fascinating to read about. Learning to think like an attacker will train you in what to think about as you're writing your own code.

Salt and hash your users' passwords. Never save them in plaintext in your database.
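A minimal sketch using only the Python standard library; a real application might prefer a dedicated library such as bcrypt or Argon2, which handles these details for you.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Return (salt, derived_key). Store both; never store the plaintext password."""
    salt = os.urandom(16)                              # unique, random salt per user
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, key)         # constant-time comparison
```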

Just wanted to share this for web developers:
security-guide-for-developers: https://github.com/FallibleInc/security-guide-for-developers

Related

How do we "test" our security policy?

DISCLAIMER: At my place of work we are aware that, as none of us are security experts, we can't avoid hiring security consultants to get a true picture of our security status and remedial actions for vulnerabilities. This question is asked in the spirit of trying to be a little less dumb and a bit more aware of the issues.
In my place of work, a small business with a sum total of 7 employees, we need to do some work on reviewing our application for security flaws and vulnerabilities. We have identified two main requirements in a security tester:
They are competent, thorough and know their stuff.
They are able to leave us with a clear idea of the work we need to do to make our security better.
This process will be iterative so we will have a scan, do the remedial work and repeat. This will be a regular occurrence going forward.
The problem we have is: How do we know 1? And, even if we're reasonably sure of 1, how on earth do we proceed to 2?
Our first idea was to do some light security scanning on our code ourselves and see if we could identify any definite issues. Then, if the security consultants we choose identify those issues and a few more we're well on the way to 1 and 2. The only problem is that I've been trawling the interweb for days now looking at OWASP, Metasploit, w3af, burp, wikto, sectools (and Stack Overflow, natch)...
As far as I can tell security software seems to come in two flavours, complex open source security stuff for security experts and expensive complex proprietary security stuff for security experts.
I am not a security expert, I am an intermediate level business systems programmer looking for guidance. Is there no approachable scanner type software or similar which will give me an overview of the state of my codebase? Am I just going to have to take a part time degree in order to understand this stuff at a brass tacks level? Or am I missing something?
I read that you're first interested in hiring someone and knowing they're good. Well, you've got a few options, but the easiest is to talk to someone in the know. I've worked with a few companies, and can tell you that Neohapsis and Matasano are very good (though it'll cost you).
The second option you have is to research the company. Who have they worked with? Can they give you references? What do the references have to say? What vulns has the company published to the world? What was the community response (were they shouted down, was the vuln considered minor, or was it game changing, like the SSL MitM vuln)? Have any of the company's employees talked at a conference? Was it a respected conference? Was the talk considered good by the attendees?
Second, you're interested in understanding the vulnerabilities that are reported to you. A good testing company will (a) give you a document describing what they did and did not do, what vulnerabilities they found, how to reproduce the vulnerabilities, and how they know the vulnerability is valid, and (b) will meet with you (possibly teleconference) to review the vulnerabilities and explain how the vulns work, and (c) will have written into the contract that they will retest once after you fix the vulns to validate that they are truly fixed.
You can also get training for your developers (or hire someone who has a good reputation in the field) so they can understand what's what. SafeLight is a good company. SANS offers good training, too. You can use training tools like OWASP's webgoat, which walks you through common web app vulns. Or you can do some reading - NIST SP 800 is a freely downloadable fantastic intro to computer security concepts, and the Hacking Exposed series do a good job teaching how to do the very basic stuff. After that Microsoft Press offers a great set of books about security and security development lifecycle activities. SafeCode offers some good, short recommendations.
Hope this helps!
If you can afford to hire expert security consultants, then that may be your best bet given that your in-house security skills are low.
If not, there is no escaping the fact that you are going to need to understand more about security, how to identify threats, and how to write tests for common security exploits like XSS, SQL injection, CSRF, and so on.
Automated security vulnerability software (static code analysis and runtime vulnerability scanning) are useful, but they are only ever going to be one piece in your overall security approach. Automated tools do not identify all exploits, and they can leave you with a false sense of security, or a huge list of false positives. Without the ability to interpret the output of these tools, you might as well not have them.
One tool I would recommend for external vulnerability scanning is QualysGuard. They have a huge and up to date database of common exploits that they can scan for in public facing web applications, web servers, DNS servers, firewalls, VPN servers etc., and the output of the reports usually leaves you with a very clear idea of what is wrong, and what to do about it. But again, this would only be one part in your overall security approach.
If you want to take a holistic approach to security that covers not only the components in your network, applications, databases, and so on, but also the processes (eg. change management, data retention policy, patching) you may find the PCI-DSS specification to be a useful guide, even if you are not storing credit card numbers.
Wow. I wasn't really expecting this little activity.
I may have to alter this answer depending on my experiences but in continuing to wade through the acres of verbiage on my quest for something approachable I happened on a project which has been brought into the OWASP fold:
http://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
It boasts, and I quote from the project documentation's introduction:
[ZAP] is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who a (sic) new to penetration testing.
EDIT: After having a swift play with ZAP this morning, although I couldn't directly switch on the attack mode on our site right away, I can see that the proxy works in a manner very similar to OWASP's WebScarab (I would link, but lack of rep and the anti-spam rules prevent this). WebScarab is more technically oriented, it seems; looking over the feature list, Scarab does more stuff, but it doesn't have a pen-test vulnerability scanner. I'll update more once I've worked out how to have a go with the vulnerability scanner.
Anyone else who would like to pitch in and have a go would be welcome to do so and comment or answer as well below.

Tools to test softwares against any attacks for programmers? [closed]

These days I'm interested in software security. As I read papers I see that there are many attacks, and researchers are trying to invent new methods to make software systems more secure.
This question is a general one, covering all types of attacks. There are many experienced programmers on SO; I just want to learn what you use to check your code against these attacks. Are there any tools you use, or don't you care?
For example, I have heard about static/dynamic code analysis and fuzz testing.
SQL injection attacks
Cross Site Scripting
Buffer overflow attacks
Logic errors
Any kind of Malwares
Covert Channels
... ...
thanks
I'm going to focus on web application security here...
Really you want to get used to manually trawling through a website/application and playing with various parameters etc. so proxy tools are of great help (they allow you to capture and interact with forms, before they reach the server):
LiveHTTPHeaders - Firefox plugin.
Burp Proxy - Java based.
Obviously there becomes a point where manually crawling a whole website becomes rather time consuming/tedious and this is where automated scanning tools can be of help.
Black box:
WebSecurify - I haven't used it, but it's been created by a well-known web app security guy.
Skipfish - Google released this recently so it's probably worth a look.
And there are many other commercial tools: WhiteHat Sentinel, HP Web Inspect and probably many others I can't remember.
White box:
A lot of the academic research I've seen is related to static code analysis tools; I've not used any because they all focused on PHP only and had some limitations.
Other resources:
ha.ckers.org - great blog, with an active forum related to web app sec.
OWASP - as previously mentioned, there are lots of insightful articles/guides/tutorials here.
If you want to learn more about manually attacking sites yourself the Damn Vulnerable Web App is a nice learning project. By that I mean, it's a web application that is written to be deliberately insecure, so you can test your knowledge of web application security vulnerabilities legally.
I wrote a black box scanner in Perl for my third-year dissertation, which was quite an interesting project. If you wanted to build something yourself, it really just consists of the following (a rough sketch follows this list):
crawler
parser
attacker
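For what it's worth, here is a very rough structural sketch of those three pieces in Python (standard library only); the probe string and the "detection" check are toy placeholders, nowhere near a real scanner.

```python
from html.parser import HTMLParser
from urllib import parse, request

PAYLOAD = "'\"<probe>"   # toy probe string, not a real attack payload

class FormParser(HTMLParser):
    """Parser: collect form actions and the input names inside each form."""
    def __init__(self):
        super().__init__()
        self.forms = []                  # list of (action, [field names])

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.forms.append((attrs.get("action", ""), []))
        elif tag == "input" and self.forms:
            self.forms[-1][1].append(attrs.get("name", ""))

def crawl(url):
    """Crawler: fetch one page (a real crawler would follow links recursively)."""
    with request.urlopen(url) as resp:
        return resp.read().decode(errors="replace")

def attack(base_url, action, fields):
    """Attacker: submit the probe in every field and see if it is reflected unescaped."""
    data = parse.urlencode({name: PAYLOAD for name in fields if name}).encode()
    with request.urlopen(parse.urljoin(base_url, action), data=data) as resp:
        return PAYLOAD in resp.read().decode(errors="replace")

def scan(url):
    parser = FormParser()
    parser.feed(crawl(url))
    return [action for action, fields in parser.forms if attack(url, action, fields)]
```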
Something that you haven't mentioned but I think is important: code reviews.
When you're just trying to implement something as fast as you can it is easy to overlook a security issue. A second pair of eyes can pick up many problems or potential problems, especially if the reviewer is experienced at spotting typical security holes.
I believe that it is possible in many cases to do manual code reviews without special tools. Just sit together at the same computer or even print out the code and do the review on the paper copy. But since you specifically asked for tools, a tool to help with manual code review is Rietveld. I haven't used it myself, but it is based on the same ideas used internally at Google (and written by the same guy, who also happens to be the author of Python).
Security is definitely a concern and developers should at least be aware of common vulnerabilities (and how to avoid them). Here are some resources that I find interesting:
OWASP Top 10 for 2010
OWASP Guide for Secure Web Applications
OWASP Testing Guide v3
There are 2 types of software defects that can cause security problems: implementation bugs and design flaws.
Implementation bugs usually appear in a specific area in the code; they are relatively easy to detect and (usually) not too complicated to fix. You can detect most of these with automated tools that do static code analysis (tools like Fortify or Ounce), although these tools are expensive. With that said, you still have to remember that there are no "silver bullets" and you cannot blindly rely on the tool output without some sort of manual code review to confirm/understand the real risk behind the issues the tool reports.
The other problem is design flaws; that's another story. They are usually complex issues that are not a consequence of a mistake in the code but of a poor choice in the design or architecture of the application. Those cannot be identified by an automated tool and can really only be detected manually, by a code/design/architecture review. They are usually very hard and expensive to fix past the design phase.
So I recommend, reviewing your code for implementation bugs that can have impact on security (code review using automated tools like Fortify/Ounce + manual review of tool results) and reviewing your design for security flaws (no tools for this, has to be done by someone who knows about security).
For a good read on software security and the complexity behind designing secure software, check Software Security: Building Security In, by Gary McGraw (amazon link)
I use tools to aid in the hunt for vulnerabilities, but you can't just fire off some test and assume everything is okay. When I am auditing a project I look at the code and I try and get a feel for the programmers style and skill level. If the code looks messy then chances are they are a novice and they will probably make novice mistakes.
It is important to identify security-related functions in a project and manually audit them. Tamper Data is very helpful for manual auditing and exploit development because you can build custom HTTP requests. A good example of manual auditing for PHP: are they using mysql_real_escape_string($var), or are they using htmlspecialchars($var,ENT_QUOTES), to stop SQL injection? (ENT_QUOTES doesn't stop backslashes, which are just as dangerous as quote marks for MySQL; MSSQL is a different story.) Security functions are also places for "logic errors" to crop up, and no tool is going to be able to detect those; that requires manual auditing.
If you are doing web application testing then Acunetix is the best testing tool you can use, and Wapiti is a very good open source alternative, although any tool can be used improperly. Before you do a web application test, make sure error reporting is turned on, and also make sure you aren't suppressing SQL errors, for example with a try/catch.
If you are doing automated static code analysis for vulnerabilities such as buffer overflows then Coverity is the best tool you can use (Fortify is nearly identical to Coverity). Coverity costs tens of thousands of dollars, but big names like the Department of Homeland Security use it. RATS is an open source alternative, although Coverity is a far more complex tool. Both of these tools will produce a lot of false positives and false negatives. RATS looks for nasty function calls but doesn't check whether a given use is actually safe. So RATS will report every call to strcpy(), strcat() and sprintf(), but these can be safe if, for instance, you are just copying static text. This means you will have to dig through a lot of crap, but if you are doing a peer review then RATS helps a lot by narrowing the manual search. If you are trying to find a single exploitable vulnerability in a large code base, like Linux, then RATS isn't going to help much.
I have used Coverity, and their sales team will claim it will "detect ****ALL**** vulnerabilities in your code base." But I can tell you from first-hand experience that I found vanilla stack-based buffer overflows with Peach that Coverity didn't detect. (RATS did, however, pick up these issues, along with 1,000+ other function calls that were safe...) If you want a secure application, or you want to find an exploitable buffer overflow, then Peach is the platform you can use to build the tools you need.
If you are looking for more exotic memory corruption issues such as Dangling Pointers then Valgrind will help.
There's a bunch of web application security scanners on the market.
Take a look at this list:
WASC - Web application security scanner list - and Netsparker Community Edition, the free version of Netsparker.
A tool doesn't know if your code is insecure.
Only you do (and the attackers).
At best the tool will spot a few vulnerabilities of one type in your code and make you realize you never protected against that type of vulnerability, but you will still have to go clean up all the instances the tool missed.

Website hacking - why is it always possible?

We know that every executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to crack it, they will; it is just a question of time.
What about websites? Can we say that a website can be completely safe from attacks by hackers (assuming the hosting is not vulnerable)? If not, what is the reason?
Yes it is always possible to do. There is always a way in.
It's like my grandfather always said:
Locks are meant to keep the honest people out
Can we say that a website can be completely safe from attacks by hackers?
No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.
You can easily write a webapp that is mathematically proven to be secure... But that proof will only hold as long as the underlying operating system, interpreter|compiler, and hardware are secure, which is never the case.
The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.
To sum up and add to the posts that precede:
Web as a shared resource - websites are useful only so long as they are accessible. Render the website inaccessible and you've broken it. Denial of service attacks - flooding the server so that it can no longer respond to legitimate requests - will always be a factor. It's a game of keep-away: big server sites find ways to distribute, hackers find ways to deluge.
Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection, but once one avenue for cracking is figured out, chances are high that another mechanism will arise. You could, conceivably, argue that a totally static site can be secure from this, but then how many useful sites fit that bill?
Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt that any web developer could say with 100% confidence that a modern website was secure - there's too much unknown code. Taking the host aside (the server, network protocols, OS, and maybe database), there's still all the great new libraries in Java EE and .Net. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
The authentication problem = by definition, the web site lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one has managed to hack PKI. But getting user authentication ironed out is still quite tricky. It's doable, but it's a tradeoff between difficulty for the user and for configuration, and a system with a higher risk of vulnerability. And even a strong system can be broken when users don't follow the rules or when accidents happen. All this doesn't apply if you want to make a public site for all users, but that severely limits the features you'll be able to implement.
I'd say that web sites simply change the nature of the security challenge from the challenges of client side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.
Websites suffer greatly from injection and cross site scripting attacks
Cross-site scripting carried out on websites were roughly 80% of all documented security vulnerabilities as of 2007
Also, part of a website (in some websites, a great deal) is sent to the client in the form of CSS, HTML and JavaScript, which is open for inspection by anyone.
Not to nitpick, but the assumption that the hosting is "not vulnerable" does not mean the HTTP service running on the host is completely free from exploits.
Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.
For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.
It's not possible to make anything 100% secure.
All that can be done is to make something hard enough to break into, that the time and effort spent doing so makes it not worth doing.
Can I crack your site? Sure, I'll just hire a few suicide bombers to blow up your servers. Or I'll blow up the power plants that power your site, or do some sort of social engineering; DDoS attacks would quite likely be effective at large scale too, not to mention atom bombs...
Short answer: yes.
This might be the wrong website to discuss that. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier for example (which refers to another website, but on Schneier's blog there's a lot of interesting readings on the issue).
Assuming the server itself isn't compromised, and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It Works!" page.
Saying 'completely secure' is a bad thing, as it states two things:
there has not been a proper threat analysis, because 'secure enough' would be the correct term
since security is always a tradeoff, a system that is completely secure will have abysmal usability, and the site will be a huge resource hog because security has been taken to insane levels.
So instead of trying to achieve "complete security" you should;
Do a proper threat analysis
Test your application (or have someone professional test it) against common attacks
Apply best practices, not extreme measures
The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.
An excellent case in point is passwords. The easy way to go about it is to just have one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated variable-length sequence of characters across the encoding spectrum that only the user himself knows.
Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application could end up leading to situations that compromise the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if the list is compromised, the security measures that had been put into place are for naught). Hence, most of the time a balance gets struck: places ask that you put a number in your password and tell you not to do anything stupid like tell it to other people.
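For what it's worth, the "secure way" described above is easy to generate programmatically (a sketch using Python's standard `secrets` module); the hard part, as the answer says, is the human who has to remember the result.

```python
import secrets
import string

# Generate a random password from a broad character set using a CSPRNG.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())   # strong, but nobody will remember it without a password manager
```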
Even if you remove the possibility of a malicious person with the keys to everything leaking data from the equation, human stupidity is infinite. There is no such thing as 100% security.
May we say that website can be completely safe from attacks of hackers (we assume that hosting is not vulnerable)?
Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.
If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.
Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.
You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.
The answer to this question lies close to the ideas about computational theory that arise from considering the halting problem (http://en.wikipedia.org/wiki/Halting_problem). To wit, if you could say with clarity that you'd devised a way to programmatically determine whether any particular program was secure, you might be close to disproving the undecidability of the halting problem on the class of machines you were working with. Since the undecidability of the halting problem has been proven, we know that over Turing machines you would be unable to prove securability, since the problem of security reduces to the halting problem. Even for finite machines you might be able to decide all of the states of the program, but Minsky would tell us that the time required to build a complete state tree for even simplistic modern-day machines and web servers would be huge. You probably know a lot about a specific piece of code, but as soon as you changed the code, or updated it, a complete retest would be required. Fundamentally this is interesting because it all boils back to the concept of information and meaning. Read about automated theorem proving to understand more about the limits of computational systems: http://en.wikipedia.org/wiki/Automated_theorem_proving
The fact is hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious stuff as much as you can!
In fact, you should follow a whitelist approach rather than a blacklist approach when it comes to security.

Is there a good service for checking website/server vulnerability [closed]

I have been asked to provide information on available techniques for assessing our current, and any future, websites for security problems. The request is in the form of:
Do you know of any good free one that examines for security holes?
I think our data security is probably worth a small amount of upfront spend so any non-free methods would be appreciated too.
Our systems are a mishmash of MySQL, Oracle, SQL Server, PHP, ASP.NET, etc. systems, though I guess that does not matter too much. All the systems are secured in as much as they are patched and the firewalls are set sensibly, so outside people cannot get directly to the database boxes, etc.
It is XSS and similar attacks that we wish to prevent.
What do YOU use to give you confidence in your systems? ');DROP TABLE answer;
OWASP would be a good place to start. There's too much to cover to include here.
If the security of your site is worth nothing to your company then that's what you should pay. For my company the security of our data and the brand image has quite a high value.
We pay a whole bunch of money for regular scans, we've trained the developers in basic hacking/security of applications, our code reviews include a security review and now we're looking at AppScan from IBM (which is expensive but in the long run probably cheaper than all the pen' testing we pay for).
You get what you pay for. Making sure you understand the owasp issues would be a good start though.
Personally, I choose not to be confident in the security of our systems. I am convinced there is always something that I am missing and thus I keep looking for it.
What you seem to be looking for is something to make others feel confident (even if that confidence is an illusion). Penetration testing is probably the right choice for that. Depending upon the tool, it shows potential vulnerabilities in a nice report and then you can report how you mitigated them.
We use IBM AppScan and it is a good tool for this. As with any tester of this type, you will find yourself following a lot of bad leads. Most of them are not false positives per se, more just things that might be an issue or appear to be, and you will have to investigate and determine whether they actually are.
I would not put a lot of faith in this kind of testing. If your app scans clean, it really does not mean your app is clean. That does not mean it is worthless, but don't make it out to be more than it is.
The next thing I would look into is static analysis tools in your various languages. A lot of these are free. Hand in hand with that is developer education. That is usually a pretty cheap solution to the issue, just making sure they understand what the risks are.
There is no silver bullet, no simple answer, you need to define security as an EVERYONE problem and make sure it is given both priority and commitment.
Check out dotDefender - they've got versions for IIS/Apache/ISA. I use this app to protect against SQL Injection/XSS/DDOS/probing/encoding attacks. No piece of software will ever be perfect but in my case I run systems with sites being developed in .NET, PHP, and classic ASP with some of our sites being new and others being 5+ years old.
http://www.applicure.com/?page=dotDefender
I do also have a company do penetration testing / social engineering every year or so as well but with dotDefender I'm at least happy that I've got a baseline security blanket to protect my sites.
Of particular interest to me was that their app is fully x64 compatible - necessary since I'm using x64 web servers.

How do you protect your software from illegal distribution? [closed]

I am curious about how you protect your software against cracking, hacking, etc.
Do you employ some kind of serial number check? Hardware keys?
Do you use any third-party solutions?
How do you go about solving licensing issues? (e.g. managing floating licenses)
EDIT: I'm not talking any open source, but strictly commercial software distribution...
There are many, many, many protections available. The key is:
Assessing your target audience, and what they're willing to put up with
Understanding your audience's desire to play with no pay
Assessing the amount someone is willing to put forth to break your protection
Applying just enough protection to prevent most people from avoiding payment, while not annoying those that use your software.
Nothing is unbreakable, so it's more important to gauge these things and pick a good protection than to simply slap on the best (worst) protection you are able to afford.
Simple registration codes (verified online once) - see the sketch after this list.
Simple registration with revokable keys, verified online frequently.
Encrypted key holds portion of program algorithm (can't just skip over the check - it has to be run for the program to work)
Hardware key (public/private key cryptography)
Hardware key (includes portion of program algorithm that runs on the key)
Web service runs critical code (hackers never get to see it)
And variations of the above.
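As an illustration of the "simple registration codes" item above, here is a minimal HMAC-based sketch using only the Python standard library; a real scheme would also verify the code online and tie it to a product and version, and of course any secret shipped inside the client can eventually be extracted.

```python
import hashlib
import hmac

SIGNING_SECRET = b"vendor-only-secret"    # illustrative; the real one stays on your server

def issue_key(customer_email: str) -> str:
    """Generate a registration code bound to the purchaser's email address."""
    mac = hmac.new(SIGNING_SECRET, customer_email.strip().lower().encode(), hashlib.sha256)
    return mac.hexdigest()[:20].upper()

def check_key(customer_email: str, key: str) -> bool:
    return hmac.compare_digest(issue_key(customer_email), key.strip().upper())
```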
Whatever route you go, charge a fair price, make it easy to activate, give free minor updates and never deactivate their software. If you treat your users with respect they'll reward you for it. Still, no matter what you do some people are going to end up pirating it.
Don't.
Pirates will pirate. No matter what solution you come up with, it can and will be cracked.
On the other hand, your actual, paying customers are the ones who are being inconvenienced by the crap.
Make it easier to buy than to steal. If you put mounds of copy protection then it just makes the value of owning the real deal pretty low.
Use a simple activation key and assure customers that they can always get an activation key or re-download the software if they ever lose theirs.
Any copy protection (aside from online-only components like multiplayer games and finance software that connects to your bank, etc.) you can just assume will be defeated. You want downloading your software illegally, at the very least, to be slightly harder than buying it.
I have PC games that I've never opened, because there is so much copy protection junk on them that it's actually easier to download the pirated version.
Software protections aren't worth the money -- if your software is in demand it will be defeated, no matter what.
That said, hardware protections can work well. An example way it can work well is this: Find a (fairly) simple but necessary component of your software and implement it in Verilog/VHDL. Generate a public-private keypair and make a webservice that takes a challenge string and encrypts it with the private key. Then make a USB dongle that contains your public key and generates random challenge strings. Your software should ask the USB dongle for a challenge string and send it up to the server for encryption. The software then sends it to the dongle. The dongle validates the encrypted challenge string with the public key and goes into an 'enabled' mode. Your software then calls into the dongle any time it needs to do the operation you wrote in HDL. This way anyone wanting to pirate your software has to figure out what the operation is and reimplement it -- much harder than just defeating a pure software protection.
Edit: Just realized some of the verification stuff is backwards from what it should be, but I'm pretty sure the idea comes across.
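Here is a small sketch of the corrected flow (sign on the server with the private key, verify with the public key), using the Python `cryptography` package purely for illustration; in the scheme described above, the verification and the HDL-implemented operation would live on the dongle itself.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Vendor side: the private key stays on the licensing server.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # this part is baked into the dongle

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# 1. Dongle generates a random challenge string.
challenge = os.urandom(32)
# 2. Server signs the challenge with the private key.
signature = private_key.sign(challenge, pss, hashes.SHA256())
# 3. Dongle verifies the signature with its public key before entering 'enabled' mode.
public_key.verify(signature, challenge, pss, hashes.SHA256())   # raises if invalid
```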
The Microsoft Software License scheme is crazy expensive for a small business. The server cost is around $12,000 if you want to set it up yourself. I don't recommend it for the faint of heart.
We actually just implemented Intellilock in our product. It lets you have all of the decisions for how strict you want your license to be, and it is very cost effective as well. In addition it does obfuscation, compiler prevention, etc.
Another good solution I have seen small/med businesses use is SoloServer. It is much more of an ecommerce and license control system. It is very configurable to the point of maybe a little too complex. But it does a very good job from what I have heard.
I have also used the Desaware license system for .NET in the past. It is a pretty lightweight system compared to the two above. It is a very good license control system in terms of being cryptographically sound, but it is a very low-level API in which you have to implement almost everything your app will actually use.
Digital "Rights" Management is the single biggest software snake-oil product in the industry. To borrow a page from classic cryptography, the typical scenario is that Alice wants to get a message to Bob without Charlie being able to read it. DRM doesn't work because in its application, Bob and Charlie are the same person!
You would be better off asking the inverse question, which is "How do I get people to buy my software instead of stealing it?" And that is a very broad question. But it generally starts by doing research. You figure out who buys the type of software you wish to sell, and then produce software that appeals to those people.
The additional prong to this is to limit updates/add-ons to legit copies only. This can be something as simple as an order code received during the purchase transaction.
Check out Stardock software, makers of WindowBlinds and games such as Sins of a Solar Empire, the latter has no DRM and turned a sizable profit off a $2M budget.
There are several methods, such as using the processor ID to generate an "activation key."
The bottom line is that if someone wants it bad enough -- they'll reverse engineer any protection you have.
The most failsafe methods are to use online verification at runtime or a hardware hasp.
Good luck!
Given a little time your software will always be cracked. You can search for cracked versions of any well known piece of software in order to confirm this. But it is still well worth adding some form of protection to your software.
Remember that dishonest people will never pay for your software and always find/use a cracked version. Very honest people will always stick to the rules even without a licensing scheme just because that is the kind of person they are. But the majority of people are between these two extremes.
Adding some simple protection scheme is a good way of making that bulk of people in the middle act in an honest way. It is a way to nudge them into remembering that the software is not free and they should be paying for the appropriate number of licenses. Many people do actually respond to this. Businesses are especially good at sticking to the rules because the manager is not spending his/her own money. Consumers are less likely to stick to the rules because it is their own money.
But recent experience with releases such as Spore from Electronic Arts shows that you can go too far with licensing. If you make even legit people feel like criminals because they are constantly being validated, then they start to rebel. So add some simple licensing to remind people if they are being dishonest, but anything more than that is unlikely to boost sales.
Online-only games like World of Warcraft (WoW) have it made, everyone has to connect to the server every time and thus accounts can be constantly verified. No other method works for beans.
Generally there are two systems that often get confused -
Licensing or activation tracking, legal legitimate usage
Security preventing illegal usage
For licensing, use a commercial package such as FlexLM. Many companies invest huge sums of money into licensing and think they also get security; this is a common mistake - key generators for these commercial packages are prolifically abundant.
I would only recommend licensing if you're selling to corporations who will legitimately pay based on usage; otherwise it's probably more effort than it's worth.
Remember that as your products become successful, each and every licensing and security measure will be breached eventually. So decide now whether it is really worth the effort.
We implemented a clean-room clone of FlexLM a number of years ago; we also had to harden our applications against binary attacks. It's a long process - you have to revisit it every release. It also really depends on which global markets you sell to, or where your major customer base is, as to what you need to do.
Check out another of my answers on securing a DLL.
As has been pointed out, software protection is never guaranteed to be foolproof. What you intend to use depends largely on your target audience. A game, for instance, is not something you are going to be able to protect forever. A server software, on the other hand, is something far less likely to be distributed on the Internet, for a number of reasons (product penetration and liability come to mind; a large corporation does not want to be held liable for bootleg software, and the pirates only bother with things in large-enough demand). In all honesty, for a high-profile game, the best solution is probably to seed the torrent yourself (clandestinely!) and modify it in some way (for instance, so that after two weeks of play it pops up with messages telling you to please consider supporting the developers by purchasing a legitimate copy).
If you put protection in place, bear two things in mind. First, a lower price will supplement any copy protection by making people more inclined to pay the purchase price. Secondly, the protection must not get in the way of users - see Spore for a recent example.
DRM this, DRM that - publishers who force DRM on their projects are doing it because it's profitable. Their economists are concluding this on data which none of us will ever see. The "DRM is evil" trolls are going a little too far.
For a low-visibility product, a simple internet activation is going to stop casual copying. Any other copying is likely negligible to your bottom line.
Illegal distribution is practically impossible to prevent; just ask the RIAA. Digital content can just be copied; analog content can be digitised, and then copied.
You should focus your efforts on preventing unauthorised execution. It's never possible to completely prevent the execution of code on someone else's machine, but you can take certain steps to raise the bar sufficiently high that it becomes easier to purchase your software than to pirate it.
Take a look at the article Developing for Software Protection and Licensing that explains how best to go about developing your application with licensing in mind.
Obligatory disclaimer & plug: the company I co-founded produces the OffByZero Cobalt software licensing solution for .NET.
The trouble with the idea of "just let the pirates use it, they won't buy it anyway and will show their friends who might buy it" is twofold.
With software that uses third-party services, the pirated copies use up valuable bandwidth/resources, which gives legit users a worse experience, makes my software look more popular than it is, and has the third-party services asking me to pay more for their services because of the bandwidth being used.
Many casual users wouldn't dream of cracking the software themselves, but if there is an easily accessible crack on a site like The Pirate Bay they will use it; if there wasn't, they might buy it.
This concept of not disabling pirated software once discovered also seems crazy; I don't understand why I should let someone continue to use software they shouldn't be using. I guess this is just the view/hope of the pirates.
Also, it's worth noting that making a program hard to crack is one thing, but you also need to prevent legit copies being shared; otherwise somebody could simply buy one copy and then share it with thousands of others via a torrent site. Having their name/email address embedded in the license isn't going to be enough to dissuade everyone from doing this, and it only really takes one for there to be a problem.
The only way I can see to prevent this is to either:
Have the server check and lock the license on program startup every time, and release the license on program exit. If another client starts with the same license while the first client holds it, the request is rejected. This doesn't prevent the license being used by more than one user, but it does prevent it being used concurrently by more than one user - which is good enough. It also allows a legitimate user to transfer the license to any of their computers, which provides a better experience. (A rough sketch of this option follows below.)
On first client startup, the client sends the license to the server and the server verifies it, causing some flag to be set within the client software. Further requests from other clients with the same license are rejected. The trouble with this approach is that the original client would have problems if they reinstalled the software or wanted to use a different computer.
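A minimal sketch of the first option above: a server-side check-out/check-in license lock. All names here are illustrative; a real service would need persistence, authentication of the client, and a lease timeout for clients that crash without releasing.

```python
import threading
import time

class LicenseLockService:
    def __init__(self, lease_seconds=3600):
        self._active = {}              # license_key -> (client_id, lease_expires_at)
        self._lock = threading.Lock()
        self._lease = lease_seconds

    def acquire(self, license_key, client_id):
        """Grant the license if it is free (or its lease expired); reject concurrent use."""
        now = time.time()
        with self._lock:
            holder = self._active.get(license_key)
            if holder and holder[0] != client_id and holder[1] > now:
                return False           # already in concurrent use by another client
            self._active[license_key] = (client_id, now + self._lease)
            return True

    def release(self, license_key, client_id):
        """Called on program exit so another machine can use the same license."""
        with self._lock:
            holder = self._active.get(license_key)
            if holder and holder[0] == client_id:
                del self._active[license_key]
```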
Even if you used some kind of biometric fingerprint authentication, someone would find a way to crack it. There's really no practical way around that. Instead of trying to make your software hack-proof, think about how much extra revenue will be brought in by adding additional copy protection vs. the amount of time and money it will take to implement it. At some point, it gets to be cheaper to go with a less rigorous copy protection scheme.
It depends on what exactly your software product is, but one possibility is to move the "valuable" part of the program out of the software and keep it under your exclusive control. You would charge a modest fee for the software (mostly to cover print and distribution costs) and would generate your revenue from the external component. For example, an anti-virus program that is sold for cheap (or bundled for free with other products) but sells subscriptions to its virus definitions update service. With that model, a pirated copy that subscribes to your update service wouldn't represent much of a financial loss. With the increasing popularity of applications "in the cloud", this method is becoming easier to implement; host the application on your cloud, and charge users for cloud access. This doesn't stop someone from re-implementing their own cloud to eliminate the need for your service, but the time and effort involved in doing so would most likely outweigh the benefits (if you keep your pricing model reasonable).
If you're interested in protecting software that you intend to sell to consumers, I would recommend any of a variety of license-key-generating libraries (do a Google search on license key generation). Usually the user has to give you some sort of seed, like their email address or name, and they get back the registration code.
Several companies will either host and distribute your software or provide a complete installation/purchase application that you can integrate with and do this automatically probably at no additional cost to you.
I have sold software to consumers and I find this the right balance of cost/ease of use/protection.
The simple, and best solution, is just to charge them up front. Set a price that works for you and them.
Asking paying customers to prove that they are paying customers after they've already paid just pisses them off. Implementing the code to make your software not run wastes your time and money, and introduces bugs and annoyances for legitimate customers. You'd be better off spending that time making a better product.
Lots of games/etc will "protect" the first version, then drop the protections in the first patch due to compatibility problems with real customers. It's not an unreasonable strategy if you insist on a modicum of protection.
Almost all copy protection is both ineffective and a usability nightmare. Some of it, such as putting rootkits on your customers' machines, becomes downright unethical.
I suggest a simple activation key (even if you know that it can be broken); you really don't want your software to get in your users' way, or they'll simply push it away.
Make sure that they can re-download the software; I suggest a web page where they can log in and download your software only after they have paid (and yes, they should be able to download it as many times as they wish, directly, without a single question about why on your part).
Trust your paid users above all; there is nothing more irritating than being accused of being a criminal when you are a legit user (DVDs' anti-piracy warnings, anyone?).
You can add a service that checks the key against a server when online, and if two different IPs are using the same key, pop up a suggestion to buy another license.
But please don't deactivate it - it might be a happy user showing your software to a friend!
Make part of your product an online component which requires connection and authentication. Here are some examples:
Online Games
Virus Protection
Spam Protection
Laptop tracking software
This paradigm only goes so far though and can turn some consumers off.
I agree with a lot of posters that no software-based copy protection scheme will deter a skilled software pirate. For commercial .NET-based software, Microsoft Software License Protection (SLP) is a very reasonably priced solution. It supports time-limited and floating licenses. Their pricing starts at $10/month + $5 per activation, and the protection components seem to work as advertised. It's a fairly new offering, though, so buyer beware.
