How do we "test" our security policy? - security

DISCLAIMER: At my place of work we are aware that, as none of us are security experts, we can't avoid hiring security consultants to get a true picture of our security status and remedial actions for vulnerabilities. This question is asked in the spirit of trying to be a little less dumb and a bit more aware of the issues.
In my place of work, a small business with a sum total of 7 employees, we need to do some work on reviewing our application for security flaws and vulnerabilities. We have identified two main requirements in a security tester:
1. They are competent, thorough and know their stuff.
2. They are able to leave us with a clear idea of the work we need to do to make our security better.
This process will be iterative so we will have a scan, do the remedial work and repeat. This will be a regular occurrence going forward.
The problem we have is: How do we know 1? And, even if we're reasonably sure of 1, how on earth do we proceed to 2?
Our first idea was to do some light security scanning on our code ourselves and see if we could identify any definite issues. Then, if the security consultants we choose identify those issues and a few more we're well on the way to 1 and 2. The only problem is that I've been trawling the interweb for days now looking at OWASP, Metasploit, w3af, burp, wikto, sectools (and Stack Overflow, natch)...
As far as I can tell security software seems to come in two flavours, complex open source security stuff for security experts and expensive complex proprietary security stuff for security experts.
I am not a security expert, I am an intermediate level business systems programmer looking for guidance. Is there no approachable scanner type software or similar which will give me an overview of the state of my codebase? Am I just going to have to take a part time degree in order to understand this stuff at a brass tacks level? Or am I missing something?

First, you're interested in hiring someone and knowing they're good. Well, you've got a few options, but the easiest is to talk to someone in the know. I've worked with a few companies, and can tell you that Neohapsis and Matasano are very good (though it'll cost you).
The second option you have is to research the company. Who have they worked with? Can they give you references? What do the references have to say? What vulns has the company published to the world? What was the community response (were they shouted down, was the vuln considered minor, or was it game changing, like the SSL MitM vuln)? Have any of the company's employees talked at a conference? Was it a respected conference? Was the talk considered good by the attendees?
Second, you're interested in understanding the vulnerabilities that are reported to you. A good testing company will (a) give you a document describing what they did and did not do, what vulnerabilities they found, how to reproduce the vulnerabilities, and how they know the vulnerability is valid, and (b) will meet with you (possibly teleconference) to review the vulnerabilities and explain how the vulns work, and (c) will have written into the contract that they will retest once after you fix the vulns to validate that they are truly fixed.
You can also get training for your developers (or hire someone who has a good reputation in the field) so they can understand what's what. SafeLight is a good company. SANS offers good training, too. You can use training tools like OWASP's WebGoat, which walks you through common web app vulns. Or you can do some reading - the NIST SP 800 series is a freely downloadable, fantastic intro to computer security concepts, and the Hacking Exposed series does a good job teaching how to do the very basic stuff. After that, Microsoft Press offers a great set of books about security and security development lifecycle activities. SafeCode offers some good, short recommendations.
Hope this helps!

If you can afford to hire expert security consultants, then that may be your best bet given that your in-house security skills are low.
If not, there is no escaping the fact that you are going to need to understand more about security: how to identify threats, and how to write tests for common security exploits like XSS, SQL injection, CSRF, and so on.
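To make that concrete: a test for one of these exploit classes can be as small as an ordinary integration test. Here is a minimal sketch for reflected XSS in Python with the requests library; the localhost URL and the "q" parameter are made-up placeholders, not anything from your application.

import requests

XSS_PROBE = "<script>alert('xss-probe')</script>"

def test_search_does_not_reflect_unescaped_input():
    # Hypothetical endpoint that echoes the "q" parameter back in HTML.
    resp = requests.get("http://localhost:8000/search", params={"q": XSS_PROBE})
    # If the probe comes back verbatim, output is not being escaped and
    # the page is likely vulnerable to reflected XSS.
    assert XSS_PROBE not in resp.text

Run it with pytest as part of the regular build, so a security regression shows up the same way a broken feature would.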
Automated security vulnerability tools (static code analysis and runtime vulnerability scanning) are useful, but they are only ever going to be one piece of your overall security approach. Automated tools do not identify all exploits, and they can leave you with a false sense of security or a huge list of false positives. Without the ability to interpret the output of these tools, you might as well not have them.
One tool I would recommend for external vulnerability scanning is QualysGuard. They have a huge and up to date database of common exploits that they can scan for in public facing web applications, web servers, DNS servers, firewalls, VPN servers etc., and the output of the reports usually leaves you with a very clear idea of what is wrong, and what to do about it. But again, this would only be one part in your overall security approach.
If you want to take a holistic approach to security that covers not only the components in your network, applications, databases, and so on, but also the processes (e.g. change management, data retention policy, patching), you may find the PCI-DSS specification to be a useful guide, even if you are not storing credit card numbers.

Wow. I wasn't really expecting this little activity.
I may have to alter this answer depending on my experiences, but in continuing to wade through the acres of verbiage on my quest for something approachable, I happened on a project which has been brought into the OWASP fold:
http://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
It boasts, and I quote from the project documentation's introduction:
[ZAP] is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who a (sic) new to penetration testing.
EDIT: After having a swift play with ZAP this morning, although I couldn't switch on the attack mode against our site right away, I can see that the proxy works in a manner very similar to OWASP's WebScarab (I would link, but lack of rep and the anti-spam rules prevent this). WebScarab is more technically oriented, it seems; looking over the feature list, it does more stuff, but it doesn't have a pen test vulnerability scanner. I'll update once I've worked out how to have a go with the vulnerability scanner.
Anyone else who would like to pitch in and have a go would be welcome to do so and comment or answer as well below.

Related

Jenkins security as an open-source tool

I work in a corporate development environment that is fairly risk-averse where management is often afraid of change. I've prototyped out how a Jenkins solution for our development team might work, and highlighted some success stories where the pilot implementation has helped, but the time has now come to get it approved to a wider audience and in a more permanent way, and some security concerns have been raised.
Primarily, the concerns so far have focused on the fact that the tool is open-sourced and the plugins are open-sourced and made by community contributors, so management is concerned that somebody could insert malicious code that would go unnoticed by us when we update. My opinion is that if so many other places can make Jenkins work, we probably can too, but that is not necessarily a very compelling argument to our security testing team.
My question is: can anybody tell me how they have secured their own Jenkins implementations, or what specific Jenkins capabilities (sandboxing, etc.) are in place to prevent malicious code from being executed on our systems?
Using 3rd party components, whether in your software or your infrastructure, always carries risk. One very important thing to note is that open source is not inherently less secure than closed source. While in principle anybody might contribute code to an open source project, in most cases there is review before it actually makes its way into the project. Of course, a vulnerability may slip through, but how is that different from a software company with lots of developers? A vulnerability may slip through there too, and based on the experience of many of us, it quite often does. :) And in the case of closed source, you don't even have the power of a diverse community to spot such security flaws; the best you can rely on are 3rd party penetration tests or code scans, both of which miss many issues.
In the case of such a well-established project as Jenkins, you can be pretty sure that there is lots of scrutiny on its security, probably more than for any closed source commercial tool you may currently have.
As with any 3rd party component, you should exercise due diligence though. Have a look at online vulnerability databases like NVD regularly to find security issues, and install updates as they come out to mitigate the risk. You should do this for closed source components too.
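As a sketch of what "have a look at NVD regularly" can mean in practice, here is a small Python script against the NVD 2.0 REST API; the endpoint and parameter names are my best understanding of that API, so verify them against the NVD documentation before relying on this.

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_jenkins_cves(limit=20):
    # keywordSearch does a free-text match; resultsPerPage caps the reply.
    resp = requests.get(NVD_URL, params={"keywordSearch": "jenkins",
                                         "resultsPerPage": limit})
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        print(cve["id"], "-", cve["descriptions"][0]["value"][:100])

if __name__ == "__main__":
    recent_jenkins_cves()

Scheduled from cron (or from a Jenkins job itself), this gives you a cheap early warning to pair with timely updates.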
As for how to secure a Jenkins installation, an answer here is not the right format I think, but there is a whole set of pages on their website dedicated to the topic.
Having said all this and looking at past vulnerabilities in Jenkins, there are quite a few. It's up to you (and your security department) to assess how exactly you would want to deploy Jenkins, and whether those past vulnerabilities are serious enough for you to think the whole tool is not adequate for your environment considering the way you want to deploy it. Again, it's the same process you would follow with a closed source tool too.

Tools for programmers to test software against attacks? [closed]

These days I'm interested in software security. As I read papers, I see that there are many attacks, and that researchers are trying to invent new methods to make software systems more secure.
This question is general and covers all types of attacks. There are many experienced programmers on SO; I just want to learn what you use to check your code against these attacks. Are there any tools you use, or don't you care?
For example, I've heard about static/dynamic code analysis and fuzz testing.
SQL injection attacks
Cross Site Scripting
Buffer overflow attacks
Logic errors
Any kind of Malwares
Covert Channels
... ...
thanks
I'm going to focus on web application security here...
Really you want to get used to manually trawling through a website/application and playing with various parameters etc. so proxy tools are of great help (they allow you to capture and interact with forms, before they reach the server):
LiveHTTPHeaders - Firefox plugin.
Burp Proxy - Java based.
Obviously there comes a point where manually crawling a whole website becomes rather time consuming/tedious, and this is where automated scanning tools can help.
Black box:
WebSecurify - I haven't used it, but it was created by a well known web app security guy.
Skipfish - Google released this recently so it's probably worth a look.
And there are many other commercial tools: WhiteHat Sentinel, HP WebInspect and probably many others I can't remember.
White box:
A lot of the academic research I've seen is related to static code analysis tools; I've not used any because they all focused on PHP only and had some limitations.
Other resources:
ha.ckers.org - great blog, with an active forum related to web app sec.
OWASP - as previously mentioned, there are lots of insightful articles/guides/tutorials here.
If you want to learn more about manually attacking sites yourself the Damn Vulnerable Web App is a nice learning project. By that I mean, it's a web application that is written to be deliberately insecure, so you can test your knowledge of web application security vulnerabilities legally.
I wrote a black box scanner in Perl for my third year dissertation, which was quite an interesting project. If you wanted to build something yourself, it really just consisted of three parts (a toy sketch follows the list):
crawler
parser
attacker
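As a toy illustration of those three parts (in Python rather than Perl here, with a made-up probe and target), the skeleton looks something like this:

import requests
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    # Parser: pull the href of every anchor tag out of a page.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawl(start, max_pages=20):
    # Crawler: breadth-first walk of discovered links, capped for safety.
    seen, queue = set(), [start]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        parser = LinkParser()
        parser.feed(requests.get(url).text)
        queue.extend(urljoin(url, link) for link in parser.links)
        yield url

def attack(url):
    # Attacker: send a marker that should never be echoed verbatim.
    probe = "\"'><zzz-probe>"
    resp = requests.get(url, params={"q": probe})
    if probe in resp.text:
        print("possible reflected XSS at", url)

for page in crawl("http://localhost:8000/"):
    attack(page)

A real scanner adds same-site filtering, form submission and a payload library on top, but the structure stays the same. Only point something like this at systems you are authorised to test.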
Something that you haven't mentioned but I think is important: code reviews.
When you're just trying to implement something as fast as you can it is easy to overlook a security issue. A second pair of eyes can pick up many problems or potential problems, especially if the reviewer is experienced at spotting typical security holes.
I believe that it is possible in many cases to do manual code reviews without special tools. Just sit together at the same computer or even print out the code and do the review on the paper copy. But since you specifically asked for tools, a tool to help with manual code review is Rietveld. I haven't used it myself, but it is based on the same ideas used internally at Google (and written by the same guy, who also happens to be the author of Python).
Security is definitely a concern and developers should at least be aware of common vulnerabilities (and how to avoid them). Here are some resources that I find interesting:
OWASP Top 10 for 2010
OWASP Guide for Secure Web Applications
OWASP Testing Guide v3
There are 2 types of software defects that can cause security problems: implementation bugs and design flaws.
Implementation bugs usually appear in a specific area of the code; they are relatively easy to detect and (usually) not too complicated to fix. You can detect most of these with automated static code analysis tools (like Fortify or Ounce), although these tools are expensive. That said, remember there are no "silver bullets": you cannot blindly rely on the tool output without some sort of manual code review to confirm and understand the real risk behind the issues the tool reports.
The other problem is design flaws, and that's another story. They are usually complex issues that are not the consequence of a mistake in the code, but of a poor choice in the design or architecture of the application. Those cannot be identified by an automated tool and can really only be detected manually, by a code/design/architecture review. They are usually very hard and expensive to fix past the design phase.
So I recommend, reviewing your code for implementation bugs that can have impact on security (code review using automated tools like Fortify/Ounce + manual review of tool results) and reviewing your design for security flaws (no tools for this, has to be done by someone who knows about security).
For a good read on software security and the complexity behind designing secure software, check Software Security: Building Security In, by Gary McGraw.
I use tools to aid in the hunt for vulnerabilities, but you can't just fire off some tests and assume everything is okay. When I am auditing a project I look at the code and try to get a feel for the programmer's style and skill level. If the code looks messy then chances are they are a novice and will probably make novice mistakes.
It is important to identify security related functions in a project and manually audit them. Tamper Data is very helpful for manual auditing and exploit development because you can build custom HTTP requests. A good example of manual auditing in PHP: are they using mysql_real_escape_string($var), or are they using htmlspecialchars($var,ENT_QUOTES), to stop SQL injection? (ENT_QUOTES doesn't stop backslashes, which are just as dangerous as quote marks for MySQL; MSSQL is a different story.) Security functions are also places for "logic errors" to crop up, and no tool is going to be able to detect these; this requires manual auditing.
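The underlying fix being hinted at, sketched here in Python with sqlite3 for illustration (the table and input are invented), is to bind user input as data instead of splicing it into the SQL string:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")

user_input = "alice' OR '1'='1"

# Dangerous: concatenation lets the input rewrite the query itself.
# query = "SELECT * FROM users WHERE name = '" + user_input + "'"

# Safe: the placeholder binds the input as a value, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)  # [] - the injection attempt matches nothing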
If you are doing web application testing then Acunetix is the best testing tool you can use, and Wapiti is a very good open source alternative, although any tool can be used improperly. Before you do a web application test, make sure error reporting is turned on, and also make sure you aren't suppressing SQL errors, for example with a try/catch.
If you are doing automated static code analysis for vulnerabilities such as buffer overflows then Coverity is the best tool you can use (Fortify is nearly identical to Coverity). Coverity costs tens of thousands of dollars, but big names like the Department of Homeland Security use it. RATS is an open source alternative, although Coverity is a far more sophisticated tool. Both of these tools will produce a lot of false positives and false negatives. RATS looks for nasty function calls, but doesn't check whether a given call is actually safe. So RATS will report every call to strcpy(), strcat() and sprintf(), but these can be safe if, for instance, you are just copying static text. This means you will have to dig through a lot of crap, but if you are doing a peer review then RATS helps a lot by narrowing the manual search. If you are trying to find a single exploitable vulnerability in a large code base, like Linux, then RATS isn't going to help much.
I have used Coverity, and their sales team will claim it will "detect ****ALL**** vulnerabilities in your code base." But I can tell you from first hand experience that I found vanilla stack based buffer overflows with Peach that Coverity didn't detect. (RATS did pick up these issues, along with 1,000+ other function calls that were safe...) If you want a secure application, or you want to find an exploitable buffer overflow, then Peach is the platform you can use to build the fuzzing tools you need.
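For a feel of what a fuzzing platform automates, here is the core loop reduced to a toy Python sketch; parse_record is a stand-in target with a planted bug, not anything from Peach itself.

import random

def mutate(data: bytes) -> bytes:
    # Flip a handful of random bytes in an otherwise valid input.
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def parse_record(data: bytes):
    # Stand-in for the code under test; the NUL check is the planted bug.
    if b"\x00" in data:
        raise ValueError("NUL byte crashes the parser")

seed = b"REC:" + b"A" * 32
for i in range(10_000):
    sample = mutate(seed)
    try:
        parse_record(sample)
    except Exception as exc:
        print(f"iteration {i}: {exc!r} on {sample!r}")
        break

Platforms like Peach add input modelling, instrumentation and crash triage around this loop, which is where the real value is.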
If you are looking for more exotic memory corruption issues such as Dangling Pointers then Valgrind will help.
There's a bunch of web application security scanners on the market.
Take a look at this list:
WASC - Web application security scanner list - and Netsparker Community Edition, the free version of Netsparker.
A tool doesn't know if your code is insecure.
Only you do (and the attackers).
At best the tool will spot a few vulnerabilities of one type in your code and make you realize you never protected against that type of vulnerability, but you will still have to go clean up all the instances the tool missed.

What should every programmer know about security? [closed]

I am an IT student and I am now in the 3rd year at university. Until now we've been studying a lot of subjects related to computers in general (programming, algorithms, computer architecture, maths, etc).
I am very sure that nobody can learn everything about security, but surely there is a "minimum" knowledge every programmer or IT student should have, and my question is: what is this minimum knowledge?
Can you suggest some e-books or courses or anything else that can help me start down this road?
Principles to keep in mind if you want your applications to be secure:
Never trust any input!
Validate input from all untrusted sources - use whitelists not blacklists (a sketch follows this list)
Plan for security from the start - it's not something you can bolt on at the end
Keep it simple - complexity increases the likelihood of security holes
Keep your attack surface to a minimum
Make sure you fail securely
Use defence in depth
Adhere to the principle of least privilege
Use threat modelling
Compartmentalize - so your system is not all or nothing
Hiding secrets is hard - and secrets hidden in code won't stay secret for long
Don't write your own crypto
Using crypto doesn't mean you're secure (attackers will look for a weaker link)
Be aware of buffer overflows and how to protect against them
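As a minimal sketch of the first two principles, here is whitelist validation in Python; the username rules are an invented example, not a standard.

import re

# Whitelist: state exactly what valid input looks like...
VALID_USERNAME = re.compile(r"[a-z][a-z0-9_]{2,31}")

def validate_username(raw: str) -> str:
    # ...and reject everything that does not match it.
    if not VALID_USERNAME.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

validate_username("alice_42")           # ok
# validate_username("alice'; DROP--")   # raises ValueError

A blacklist of known-bad characters would miss whatever the attacker thinks of next; the whitelist fails closed.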
There are some excellent books and articles online about making your applications secure:
Writing Secure Code 2nd Edition - I think every programmer should read this
Building Secure Software: How to Avoid Security Problems the Right Way
Secure Programming Cookbook
Exploiting Software
Security Engineering - an excellent read
Secure Programming for Linux and Unix HOWTO
Train your developers on application security best practices:
Codebashing (paid)
Security Innovation(paid)
Security Compass (paid)
OWASP WebGoat (free)
Rule #1 of security for programmers: Don't roll your own
Unless you are yourself a security expert and/or cryptographer, always use a well-designed, well-tested, and mature security platform, framework, or library to do the work for you. These things have spent years being thought out, patched, updated, and examined by experts and hackers alike. You want to gain those advantages, not dismiss them by trying to reinvent the wheel.
Now, that's not to say you don't need to learn anything about security. You certainly need to know enough to understand what you're doing and make sure you're using the tools correctly. However, if you ever find yourself about to start writing your own cryptography algorithm, authentication system, input sanitizer, etc, stop, take a step back, and remember rule #1.
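For instance, rather than inventing an encryption scheme, reach for a vetted recipe. A sketch using the cryptography package's Fernet API (assuming that third-party package is acceptable in your environment):

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this in a secret store, not in code
f = Fernet(key)

token = f.encrypt(b"secret message")   # authenticated encryption
print(f.decrypt(token))                # b'secret message'

Fernet chooses the algorithms, handles the IV and authenticates the ciphertext for you - exactly the places where home-grown crypto typically goes wrong.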
Every programmer should know how to write exploit code.
Without knowing how systems are exploited, you are only stopping vulnerabilities by accident. Knowing how to patch code is absolutely meaningless unless you know how to test your patches. Security isn't just a bunch of thought experiments; you must be scientific and test your experiments.
Security is a process, not a product.
Many seem to forget this obvious fact.
I suggest reviewing CWE/SANS TOP 25 Most Dangerous Programming Errors. It was updated for 2010 with the promise of regular updates in the future. The 2009 revision is available as well.
From http://cwe.mitre.org/top25/index.html
The 2010 CWE/SANS Top 25 Most Dangerous Programming Errors is a list of the most widespread and critical programming errors that can lead to serious software vulnerabilities. They are often easy to find, and easy to exploit. They are dangerous because they will frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.
The Top 25 list is a tool for education and awareness to help programmers to prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped. Software customers can use the same list to help them to ask for more secure software. Researchers in software security can use the Top 25 to focus on a narrow but important subset of all known security weaknesses. Finally, software managers and CIOs can use the Top 25 list as a measuring stick of progress in their efforts to secure their software.
A good starter course might be the MIT course in Computer Networks and Security. One thing that I would suggest is to not forget about privacy. Privacy, in some senses, is really foundational to security and isn't often covered in technical courses on security. You might find some material on privacy in this course on Ethics and the Law as it relates to the internet.
The Web Security team at Mozilla put together a great guide, which we abide by in the development of our sites and services.
The importance of secure defaults in frameworks and APIs:
Lots of early web frameworks didn't escape HTML by default in templates, and had XSS problems because of this
Lots of early web frameworks made it easier to concatenate SQL than to create parameterized queries, leading to lots of SQL injection bugs
Some versions of Erlang (R13B, maybe others) don't verify SSL peer certificates by default, and there is probably lots of Erlang code that is susceptible to SSL MITM attacks
Java's XSLT transformer by default allows execution of arbitrary Java code. There have been many serious security bugs created by this
Java's XML parsing APIs by default allow the parsed document to read arbitrary files on the filesystem. More fun :) (a sketch of the safe-defaults fix follows this list)
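To make those last two items concrete in Python terms (its stdlib XML parsers have had analogous unsafe defaults), the third-party defusedxml package exists specifically to provide safe parsing defaults. A sketch, assuming that package is installed:

# defusedxml mirrors the stdlib ElementTree API but refuses entity
# expansion and external entity resolution by default.
import defusedxml.ElementTree as ET

doc = ET.fromstring("<note><to>Tove</to></note>")
print(doc.find("to").text)   # Tove

# A document carrying a malicious DTD or external entity raises an
# exception here instead of silently reading local files.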
You should know about the three A's: Authentication, Authorization, Audit. A classic mistake is to authenticate a user while not checking whether that user is authorized to perform some action, so a user may look at other users' private photos - the mistake Diaspora made. Many, many more people forget about Audit: in a secure system, you need to be able to tell who did what and when.
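A sketch of the authorization half, with a hypothetical photo store (none of these names come from Diaspora):

class Forbidden(Exception):
    pass

def audit_log(user_id, action, resource_id):
    # The third A: record who did what and when.
    print(f"user={user_id} action={action} resource={resource_id}")

def get_photo(photos, current_user_id, photo_id):
    photo = photos[photo_id]                  # caller is already authenticated
    if photo["owner_id"] != current_user_id:  # the authorization check people forget
        raise Forbidden("not your photo")
    audit_log(current_user_id, "viewed", photo_id)
    return photo

photos = {1: {"owner_id": 42, "data": b"..."}}
get_photo(photos, 42, 1)    # the owner: allowed
# get_photo(photos, 7, 1)   # anyone else: raises Forbidden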
Remember that you (the programmer) have to secure all parts, but the attacker only has to succeed in finding one chink in your armour.
Security is an example of "unknown unknowns". Sometimes you won't know what the possible security flaws are (until afterwards).
The difference between a bug and a security hole depends on the intelligence of the attacker.
I would add the following:
How digital signatures and digital certificates work
What's sandboxing
Understand how different attack vectors work:
Buffer overflows/underflows/etc on native code
Social engineering
DNS spoofing
Man-in-the-middle
CSRF/XSS et al
SQL injection
Crypto attacks (ex: exploiting weak crypto algorithms such as DES)
Program/framework errors (ex: GitHub's latest security flaw)
You can easily google for all of this. This will give you a good foundation.
If you want to see web app vulnerabilities, there's a project called Google Gruyere that shows you how to exploit a working web app.
When you are building any enterprise software, or any of your own, you should think like a hacker. As we know, hackers are not experts in all things either, but when they find a vulnerability they start digging into it, gathering information about everything, and finally attack our software. To prevent such attacks we should follow some well-known rules:
Always try to break your own code (use cheat sheets and Google things for more information).
Stay updated on security flaws in your programming field.
As mentioned above, never trust any type of user or automated input.
Use open source applications (most of their known security flaws have been found and solved).
You can find more security resources at the following links:
owasp security
CERT Security
SANS Security
netcraft
SecuritySpace
openwall
PHP Sec
The Hacker News (keep yourself updated)
For more information, Google your application vendor's security flaws.
Why it is important.
It is all about trade-offs.
Cryptography is largely a distraction from security.
For general information on security, I highly recommend reading Bruce Schneier. He's got a website, his crypto-gram newsletter, several books, and has done lots of interviews.
I would also get familiar with social engineering (and Kevin Mitnick).
For a good (and pretty entertaining) book on how security plays out in the real world, I would recommend the excellent (although a bit dated) 'The Cuckoo's Egg' by Cliff Stoll.
Also be sure to check out the OWASP Top 10 List for a categorization of all the main attack vectors/vulnerabilities.
These things are fascinating to read about. Learning to think like an attacker will train you of what to think about as you're writing your own code.
Salt and hash your users' passwords. Never save them in plaintext in your database.
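A minimal sketch of that advice with Python's standard library PBKDF2; the iteration count is illustrative, and a maintained library (bcrypt, argon2) is preferable where you can use one.

import hashlib, hmac, os

def hash_password(password: str, iterations: int = 600_000):
    salt = os.urandom(16)   # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest     # store both; the salt is not a secret

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 600_000) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)   # constant-time compare

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)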
Just wanted to share this for web developers:
security-guide-for-developers: https://github.com/FallibleInc/security-guide-for-developers

Hacking and exploiting - How do you deal with any security holes you find?

Today online security is a very important factor. Many businesses are completely based online, and there is tons of sensitive data available to check out only by using your web browser.
Seeking knowledge to secure my own applications, I've found that I'm often testing others' applications for exploits and security holes, maybe just out of curiosity. As my knowledge in this field has expanded (by testing my own applications, reading zero-day exploits, and reading the book The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws), I've come to realize that a majority of online web applications are really exposed to a lot of security holes.
So what do you do? I have no interest in destroying or ruining anything, but after my biggest "breakthrough" in hacking I decided to alert the administrators of the page. My inquiry was promptly ignored, and the security hole has still not been fixed. Why wouldn't they want to fix it? How long will it be before someone with bad intentions breaks in and chooses to destroy everything?
I wonder why there's not more focus on this these days, and I would think there would be plenty of business opportunities in actually offering to test web applications for security flaws. Is it just me who has too much curiosity, or are there others out there who experience the same? It is punishable by law in Norway to even try to break into a web page; if you just check the source code, find the "hidden password" there, and use it to log in, you're already breaking the law.
I once reported a serious authentication vulnerability in an online audiobook store that allowed you to switch accounts once you were logged in. I too was wary about whether I should report it, because in Germany hacking is forbidden by law as well, so I reported the vulnerability anonymously.
The answer was that, although they couldn't check the vulnerability themselves, as the software was maintained by the parent company, they were glad for my report.
Later I got a reply in which they confirmed how dangerous the vulnerability was and said that it was now fixed. They thanked me again for the security report and offered me an iPod and audiobook credits as a gift.
So I’m convinced that reporting a vulnerability is the right way.
"Ive found that Im often testing others applications for exploits and security holes, maybe just for curiosity".
In the UK, we have the "Computer Misuse Act". Now if these applications you're proverbially "looking at" are say Internet based and the ISP's concerned can be bothered to investigate (for purely political motivations) then you're opening yourself up getting fingered. Even doing the slightest "testing" unlesss you are the BBC is sufficient to get you convicted here.
Even Penetration Test houses require Sign Off from companies who wish to undertake formal work to provide security assurance on their systems.
To set expectations about the difficulty of reporting vulnerabilities: I have seen this with actual employers, where some pretty serious stuff was raised (ranging from brand damage to completely shutting down the operations behind an annual £100m e-commerce environment) and people sat on it for months.
I usually contact the site administrator, although the response is almost ALWAYS "omg you broke my javascript page validation I'll sue you."
People just don't like to hear that their stuff is broken.
Informing the administrator is the best thing to do, but some companies just won't take unsolicited advice. They don't trust or don't believe the source.
Some people would advise you to exploit the security flaw in a damaging way to draw their attention to the danger, but I would recommend against this; you could face serious consequences because of it.
Basically if you've informed them it's no longer your problem (not that it ever was in the first place).
Another way to ensure you get their attention is to provide specific steps as to how it can be exploited. That way it will be easier for whoever receives the email to verify it and pass it on to the right people.
But at the end of the line, you owe them nothing, so anything you choose to do is sticking your neck out.
Also, you could even create a new email address for yourself to use to alert the websites, because, as you mentioned, in some places it would be illegal even to verify the exploit, and some companies would choose to go after you instead of the security flaw.
If it doesn't affect many users, then I think notifying the site administrators is the most you can be expected to do. If the exploit has widespread ramifications (like a Windows security exploit) then you should notify someone in a position to fix the problem, then give them time to fix it before you publish the exploit (if publishing it is your intention).
A lot of people cry about exploit publication, but sometimes that's the only way to get a response. Keep in mind that if you found an exploit, there's a high likelihood that someone with less altruistic intentions has found it and has started exploiting it already.
Edit: Consult a lawyer before you publish anything that could damage a company's reputation.
I experienced the same as you. I once found an exploit in an osCommerce shop where you could download ebooks without paying. I wrote two mails:
1) Developers of oscommerce, they answered "Known issue, just don't use this paypal module, we won't fix"
2) Shop administrator: no answer at all
Actually I have no idea what the best way to behave is... maybe even publish the exploit to force the admins to react.
Contact the administrator, not a business-type person. Generally the admin will be thankful for the notice, and the chance to fix the problem before something happens and he gets blamed for it. A higher-up, or the channels a customer service person is going to go through, are the channels where lawyers get involved.
I was part of a group of people who reported an issue we stumbled across on the NAS system at University. The admins were very grateful we found the hole and reported it, and argued with their bosses on our behalf (the people in charge wanted to crucify us).
We informed the main developer about a SQL injection vulnerability on their login page. Seriously, it's the classic '<your-sql-here>-- variety. You can't bypass the login, but you can easily execute arbitrary SQL. Still hasn't been fixed in 2 months! Not sure what to do now... no one else at my office really cares, which amazes me since we pay so much for every little upgrade and new feature. It also scares me when I think about the code quality and how much stock we are putting in this software.
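For readers who haven't met that class of bug, here is roughly what it looks like, sketched with Python and sqlite3 since the vendor's actual stack isn't stated:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'hunter2')")

name = "x' UNION SELECT name, pw FROM users--"

# Vulnerable: the quote closes the string and "--" comments out the
# rest, so the attacker's UNION runs as part of the query.
q = f"SELECT name, pw FROM users WHERE name = '{name}' AND pw = 'zzz'"
print(conn.execute(q).fetchall())   # [('admin', 'hunter2')] - table dumped

# Fixed: placeholders keep the quote and the comment inert.
print(conn.execute("SELECT name, pw FROM users WHERE name = ? AND pw = ?",
                   (name, "zzz")).fetchall())   # []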

Application Security Audit of a .NET Web Application?

Anyone have suggestions for security auditing of a .NET Web Application?
I'm interested in all options. I'd like to be able to have something agnostically probe my application for security risks.
EDIT:
To clarify, the system has been designed with security in mind. The environment has been setup with security in mind. I want an independent measure of security, other than - 'yeah it's secure'... The cost of having someone audit 1M+ lines of code is probably more expensive than the development. It looks like there really isn't a good automated/inexpensive approach to this yet. Thanks for your suggestions.
The point of an audit would be to independently verify the security that was implemented by the team.
BTW - there are several automated hack/probe tools for probing applications/web servers, but I'm a bit concerned about whether they are worms or not...
The best thing to do: hire a security guy for source code analysis.
The second best thing to do: hire a security guy / pentesting company for black-box analysis.
The following tools will help:
Static analysis tools such as Fortify / Ounce Labs for code review
Consider solutions such as HP WebInspect's secure object (VS.NET addon)
Buying a blackbox application scanner such as Netsparker, AppScan, WebInspect, Hailstorm, Acunetix, or the free version of Netsparker
Hiring a security specialist is a much better idea (it will cost more, though), because they won't just find the injection and technical issues an automated tool might find; they will find all the logical issues as well.
Anyone in your situation has the following options available:
Code Review,
Static Analysis of the code base using a tool,
Dynamic Analysis of the application at run time.
Mitchel has already pointed out the use of Fortify. In fact, Fortify has two products to cover the areas of static and dynamic analysis - SCA (static analysis tool, to be used in development) and PTA (that performs analysis of the application as test cases are executed during testing).
However, no tool is perfect and you can end up with false positives (fragments of your code base although not vulnerable will be flagged) and false negatives. Only a code review could solve such problems. Code reviews are expensive - not everyone in your organization would be capable of reviewing code with the eyes of a security expert.
To begin with, one can start with OWASP. Understanding the principles behind security is highly recommended before studying the OWASP Development Guide (3.0 is in draft; 2.0 can be considered stable). Finally, you can prepare to perform the first scan of your code base.
One of the first things that I have started to do with our internal application is use a tool such as Fortify that does a security analysis of your code base.
Otherwise, you might consider enlisting the services of a third-party company that specializes in security to have them test your application.
Testing and static analysis is a very poor way to find security vulnerabilities, and is really a method of last resort if you haven't thought of security throughout the design and implementation process.
The problem is that you are now trying to enumerate all of the ways your application could fail, and deny those (by patching), rather than trying to specify what your application should do, and prevent everything that isn't that (by defensive programming). Since your application probably has infinite ways to go wrong and only a few things that it is meant to do, you should take an approach of 'deny by default' and allow only the good stuff.
Put another way, it's easier and more effective to build in controls that prevent whole classes of typical vulnerabilities (for examples, see OWASP as mentioned in other answers), no matter how they may arise, than to go looking for the specific screwup some version of your code contains. You should be trying to evidence the presence of good controls (which can be done), rather than the absence of bad stuff (which can't).
If you get somebody to review your design and security requirements (what exactly are you trying to protect against?), with full access to code and all details, that will be more valuable than some kind of black box test. Because if your design is wrong then it won't matter how well you implemented it.
We have used Telus to conduct Pen Testing for us a few times and have been impressed with the results.
May I recommend you contact Artec Group, Security Compass and Veracode and check out their offerings...
