URL Scanning tool - security

I am looking for a tool to scan a given URL for security vulnerabilities. I've googled around a bit and found a few, but most of them either require something that's publicly facing (and my DEV environment isn't publicly facing) or are expensive solutions that are more than I need for now. I don't need anything super powerful, as I'm just doing lightweight testing for now; the QA folks will run their more sophisticated battery of tests later.
EDIT: Use case for clarification
I hand the tool a URL to scan, e.g. http://www.host.com/path/to/page.asp
It runs a series of tests on that page to see whether it exposes any possible security vulnerabilities. Examples include, but are not limited to, SQL injection, cross-site scripting, etc.

Assuming that you wish to scan your web application by providing the 'base' URL of the application to a penetration testing tool, you will find the OWASP Live CD project to be useful. Grendel-Scan, available on the CD, might prove to be the most useful, since it appears to be the most mature of the penetration testing tools in the list. Nikto and the OWASP Wapiti project are the other penetration testing tools on the Live CD.
Additionally, the Watcher plug-in for Fiddler can detect certain vulnerabilities in the application, although it requires that the individual pages in the application be visited with Fiddler as the proxy.

There are two kinds of tools you will find for this. One kind has a list of known problems (a bug in IIS version 5.34 or whatever) and goes through the list trying each issue. Tools of this kind also try common filenames like robots.txt, web.config, etc. Nikto is an example of this type.
There is also the kind that looks at all the query string/cookie/form parameters and tweaks them to try to trigger faults. I believe this is what would serve you best, and for this I recommend Burp Proxy: http://portswigger.net/proxy/ There is a free version and a pro version. Also in this set of tools are expensive products like IBM's AppScan and HP's WebInspect.
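To give a feel for how this second kind of tool works, here is a minimal sketch of the idea (the target URL, payloads and error signatures are made-up placeholders, nothing like a real scanner's rule set): re-send the request with each query-string parameter swapped for a classic test payload and look for tell-tale strings in the response.

    import urllib.parse
    import urllib.request

    # Hypothetical target, payloads and error signatures -- purely illustrative.
    TARGET = "http://www.host.com/path/to/page.asp?id=1&name=foo"
    PAYLOADS = ["'", "\"", "<script>alert(1)</script>"]
    ERROR_SIGNS = ["SQL syntax", "ODBC", "Unclosed quotation mark", "<script>alert(1)</script>"]

    def fuzz(url):
        parts = urllib.parse.urlsplit(url)
        params = urllib.parse.parse_qsl(parts.query)
        for i, (key, _) in enumerate(params):
            for payload in PAYLOADS:
                tampered = list(params)
                tampered[i] = (key, payload)   # tweak one parameter at a time
                test_url = urllib.parse.urlunsplit(
                    parts._replace(query=urllib.parse.urlencode(tampered)))
                try:
                    body = urllib.request.urlopen(test_url, timeout=10).read().decode("utf-8", "replace")
                except Exception as exc:       # server errors (500 etc.) are interesting too
                    print(f"{key} -> {payload!r}: request failed ({exc})")
                    continue
                for sign in ERROR_SIGNS:
                    if sign in body:
                        print(f"{key} -> {payload!r}: response contains {sign!r}")

    fuzz(TARGET)

A real scanner such as Burp also covers cookies, form fields and headers, follows links, and uses far richer detection logic; this is only the shape of the approach.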

Are you talking about scanning the URI that someone has requested from your site?
If so, you can use the .htaccess file to simply redirect any URI that doesn't exist or isn't found in the database (depending on how you're building the site) to a 404 page.
You can therefore force requests to come in a specific form, and anything that doesn't will automatically get canned.
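If the site is application code rather than static files, the same deny-by-default idea can be expressed directly in the handler instead of in .htaccess. A minimal sketch as a WSGI app (the allowlist below is a made-up placeholder; in practice it would come from your routing table or database):

    from wsgiref.simple_server import make_server

    KNOWN_PATHS = {"/", "/about", "/contact"}   # hypothetical allowlist of valid URIs

    def app(environ, start_response):
        # Anything not explicitly known gets a 404 up front.
        if environ.get("PATH_INFO", "/") not in KNOWN_PATHS:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"Not Found"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello"]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()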

How could I protect the users of my webpage from being tracked?

This is maybe quite a broad question, and I tried to look for other Stack Exchange sites where my question would fit better, but in the end I decided that it is still a question of a technical nature, so I am posting it here:
I recently started to think more about privacy and security, and I realized that I as a web user can only do so much about staying untracked. A VPN, (slow) Tor, privacy helpers, ad-blockers and Firefox are just a few tools to name, but I realize that the information I normally share (like installed add-ons, browser size, IP location etc.) can still very well be fingerprinted.
Normally, as a web developer, I am told that we should add analytics and find out more about the users to «make a better service», but I think I would like to do the opposite.
So:
Are there steps I could take, when building a website, that help the visitors stay untracked? And I don't mean «not installing Google Analytics»; I mean things like somehow actively messing with the statistics, so that my hoster's server is incapable of tracking things correctly, or similar things...
Right now I can't really think of anything, but I somehow believe that I, as a person who builds bricks of the internet, could and should be able to influence these kinds of things directly...
For now I see the obvious things:
- not using statistics services
- using HTTPS
- not using any third-party tools that might include tracking or open doors for other trackers
But this still just omits the bad things; I can't actually do anything active...
So I would be very glad to hear your thoughts about this. (Or point me to a place where this discussion fits better.)
Cheers
merc
As a web developer, you can only control your website.
Assuming you aren't caching any data or setting cookies, users shouldn't be trackable while using your website through mechanisms like third-party cookies.
Here is a good article about online tracking and how it works.
As far as I know, there isn't an effective way to actively mess with tracking statistics. Your best bet is to avoid installing libraries or tools that track your users.

Good resources for versioning

I have a number of Windows servers at work that are used for staging web sites for clients while they are being created.
I wanted to start using versioning on them so that when we work with outside vendors on a project, if/when they overwrite my work, I'd like to be able to go back and get the version before.
My problem is that I don't think I'm searching with the correct terms. What resources are there for learning how to install version-control software, or a site to help me get started?
Any and all suggestions would be appreciated.
Steph
Since your development workflow can be decentralized (as in "there isn't always one central repository"), DVCS tools (with their common tasks described here) can be a better fit:
Git-Scm
Mercurial (see HgInit.com for a very good tutorial like the kind you are after)
Plastic SCM (which has a DVCS nature)
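Whichever tool you pick, the day-to-day workflow is essentially "snapshot the site directory before and after anyone touches it, so you can diff and roll back". A minimal sketch using Git driven from Python (the staging path is a placeholder, and it assumes Git is installed on the server and a committer identity is configured):

    import datetime
    import subprocess

    SITE_DIR = r"C:\inetpub\wwwroot\clientsite"   # hypothetical staging directory

    def git(*args):
        # Run a git command inside the site directory; raise if it fails.
        subprocess.run(["git", *args], cwd=SITE_DIR, check=True)

    def snapshot(message=None):
        message = message or f"snapshot {datetime.datetime.now():%Y-%m-%d %H:%M}"
        git("init")                               # harmless if the repo already exists
        git("add", "-A")                          # stage all changes, including deletions
        git("commit", "-m", message, "--allow-empty")

    snapshot("before vendor uploads their changes")
    # Later: `git log` shows the snapshots, and `git checkout <hash> -- .` restores the files.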

Advice on code scanning / penetration testing tools

As far as I can see, the offerings fall into two categories: scanning services such as McAfee, Comodo, etc., and tools such as Burp Proxy, HP's WebInspect, CodeScan, etc.
In an ideal world, I'd use something that actively scanned a certain URL (the target being a LAMP stack) on a daily basis (or as required, if it's a standalone tool), but I'm a bit wary of standalone tools in terms of their coverage and frequency of updating. (The 'remote' scanners such as McAfee are presumably updated as required.)
I've also had issues with some standalone tools (can't remember which ones, unfortunately) that managed to get themselves lost within our URL rewriting system (there's a faceted search in play, so you can imagine things get fairly deep on the URL front).
As such, I’m just wondering what experiences people have had with the offerings out there and whether the standalone tools stack up against the scanning services.
(Incidentally, I'm aware of Penetration testing tools - I'm just wondering if the situation has changed since then)
I have done penetration testing and exploit development. I can tell you from first-hand experience that hacking isn't just firing off some tool. Sometimes tools can make life easier, but if you don't know what you are doing, then a tool isn't going to help.
If you want to KNOW that your system is secure, then you need to hire a skilled hacker to try to break in. PCI DSS is a certification required for credit card processing, and it mandates that you have regular penetration testing conducted on your servers. Conducting regular penetration testing is something you should adopt if you want to have a very secure server.
A very good security measure for web servers is a Web Application Firewall (WAF). WAFs are also required by PCI DSS. ModSecurity is a free and open-source WAF. ModSecurity can be used to prevent hundreds of different types of attacks. A WAF can be a nightmare for a penetration tester or would-be hacker.
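To make the WAF idea concrete, here is a toy request filter in the spirit of what WAF rules do. The patterns are deliberately simplistic placeholders; ModSecurity itself ships with large, maintained rule sets and inspects far more than the query string, so treat this only as an illustration of the rule-matching idea.

    import re
    import urllib.parse

    # Deliberately simplistic, illustrative rules -- nowhere near a real rule set.
    RULES = [
        ("sql injection", re.compile(r"union\s+select|(--|#)\s*$|' *or *'1' *= *'1", re.I)),
        ("xss", re.compile(r"<script\b|onerror\s*=|javascript:", re.I)),
        ("path traversal", re.compile(r"\.\./|%2e%2e%2f", re.I)),
    ]

    def inspect(raw_query):
        """Return the name of the first rule a decoded query string trips, or None."""
        decoded = urllib.parse.unquote_plus(raw_query)
        for name, pattern in RULES:
            if pattern.search(decoded):
                return name
        return None

    print(inspect("id=1' OR '1'='1"))      # -> 'sql injection'
    print(inspect("q=hello+world"))        # -> None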

Application Security Audit of a .NET Web Application?

Anyone have suggestions for security auditing of a .NET web application?
I'm interested in all options. I'd like to be able to have something agnostically probe my application for security risks.
EDIT:
To clarify, the system has been designed with security in mind. The environment has been set up with security in mind. I want an independent measure of security, other than 'yeah, it's secure'... Having someone audit 1M+ lines of code probably costs more than the development itself. It looks like there really isn't a good automated/inexpensive approach to this yet. Thanks for your suggestions.
The point of an audit would be to independently verify the security that was implemented by the team.
BTW - there are several automated hack/probe tools for probing applications/web servers, but I'm a bit concerned about whether they are worms or not...
Best thing to do: hire a security specialist for source code analysis.
Second best thing to do: hire a security specialist / pentesting company for black-box analysis.
The following tools will help:
Static analysis tools such as Fortify / Ounce Labs for code review
Solutions such as HP WebInspect's secure object (a VS.NET add-on)
A black-box application scanner such as Netsparker, AppScan, WebInspect, Hailstorm, or Acunetix, or the free version of Netsparker
Hiring a security specialist is a much better idea (it will cost more, though), because they won't only find the injection and other technical issues that an automated tool might find; they will also find the logical issues as well.
Anyone in your situation has the following options available:
- Code review
- Static analysis of the code base using a tool
- Dynamic analysis of the application at run time
Mitchel has already pointed out the use of Fortify. In fact, Fortify has two products to cover the areas of static and dynamic analysis - SCA (static analysis tool, to be used in development) and PTA (that performs analysis of the application as test cases are executed during testing).
However, no tool is perfect, and you can end up with false positives (fragments of your code base flagged even though they are not vulnerable) and false negatives. Only a code review can catch those. Code reviews are expensive - not everyone in your organization would be capable of reviewing code with the eyes of a security expert.
To begin with, one can start with OWASP. Understanding the principles behind security is highly recommended before studying the OWASP Development Guide (3.0 is in draft; 2.0 can be considered stable). Then you can prepare to perform the first scan of your code base.
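To give a flavour of what such a first pass looks like, here is a minimal pattern-based sweep over a code base (the directory, file extension and patterns are illustrative placeholders; real static analysis tools such as Fortify do data-flow analysis rather than text matching):

    import pathlib
    import re

    # Illustrative "dangerous pattern" list -- only shows the shape of a first pass.
    PATTERNS = {
        "possible SQL built by string concatenation": re.compile(r"\"SELECT .*\"\s*\+", re.I),
        "eval of dynamic input": re.compile(r"\beval\s*\("),
        "possible hard-coded credential": re.compile(r"password\s*=\s*\"[^\"]+\"", re.I),
    }

    def scan(root="src"):                       # hypothetical source directory
        for path in pathlib.Path(root).rglob("*.cs"):
            for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
                for label, pattern in PATTERNS.items():
                    if pattern.search(line):
                        print(f"{path}:{lineno}: {label}")

    scan()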
One of the first things that I have started to do with our internal application is use a tool such as Fortify that does a security analysis of your code base.
Otherwise, you might consider enlisting the services of a third-party company that specializes in security to have them test your application.
Testing and static analysis are a very poor way to find security vulnerabilities, and are really a method of last resort if you haven't thought about security throughout the design and implementation process.
The problem is that you are now trying to enumerate all of the ways your application could fail, and deny those (by patching), rather than trying to specify what your application should do, and prevent everything that isn't that (by defensive programming). Since your application probably has infinite ways to go wrong and only a few things that it is meant to do, you should take an approach of 'deny by default' and allow only the good stuff.
Put another way, it's easier and more effective to build in controls that prevent whole classes of typical vulnerabilities (for examples, see OWASP as mentioned in other answers), no matter how they may arise, than it is to go looking for which specific screwup some version of your code has. You should be trying to demonstrate the presence of good controls (which can be done), rather than the absence of bad stuff (which can't).
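A small sketch of that 'deny by default' approach (the table, column and validation rule are made-up examples): validate input against an allowlist describing what it should look like, and use parameterized queries so the entire class of SQL injection is ruled out regardless of the value.

    import re
    import sqlite3

    USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")   # allowlist: only this shape is accepted

    def find_user(conn, username):
        # Deny by default: reject anything that doesn't match the allowlist,
        # rather than trying to strip out known-bad characters.
        if not USERNAME_RE.fullmatch(username):
            raise ValueError("invalid username")
        # Parameterized query: prevents SQL injection here as a class,
        # no matter what the (already validated) value contains.
        cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
        return cur.fetchone()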
If you get somebody to review your design and security requirements (what exactly are you trying to protect against?), with full access to code and all details, that will be more valuable than some kind of black box test. Because if your design is wrong then it won't matter how well you implemented it.
We have used Telus to conduct Pen Testing for us a few times and have been impressed with the results.
May I recommend you contact Artec Group, Security Compass and Veracode and check out their offerings...

Is it good practice to hide web server information in HTTP headers?

This question is more security-related than programming-related; sorry if it shouldn't be here.
I'm currently developing a web application and I'm curious as to why most websites don't mind displaying their exact server configuration in HTTP headers, like versions of Apache and PHP, with complete "mod_perl, mod_python, ..." listing and so on.
From a security point of view, I'd prefer that it would be impossible to find out if I'm running PHP on Apache, ASP.NET on IIS or even Rails on Lighttpd.
Obviously "obscurity is not security" but should I be worried at all that visitors know what version of Apache and PHP my server is running ? Is it good practice or totally unnecessary to hide this information ?
Prevailing wisdom is to remove the server ID and the version; better yet, change them to another legitimate server ID and version - that way the attacker goes off trying IIS vulnerabilities against Apache or something like that. Might as well mislead the attacker.
But honestly, there are so many other clues to go by, I wonder about whether this is worth it. I suppose it could stop attackers using a search engine to find servers with known vulnerabilities.
(Personally, I don't bother on my HTTP server, but it's written in Java and much less vulnerable to the typical kinds of attack.)
I think you usually see those headers because the systems send them by default.
I routinely remove them, as they provide no real value and could, as you suggested, reveal information about the server.
Hiding the information in the headers usually just slows down the lazy and ignorant villains. There are many ways to fingerprint a system.
Running nmap -O -sV against an IP will give you the OS and service versions with a fairly high degree of accuracy. The only extra info you're giving away by having your server advertise that information is which modules you have loaded.
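If you want to see exactly what your own server currently advertises before deciding whether to suppress it, a quick check along these lines works (the URL and header names are just the usual suspects; adjust for your stack):

    import urllib.request

    resp = urllib.request.urlopen("http://localhost/")   # point this at your own server
    for name in ("Server", "X-Powered-By", "X-AspNet-Version"):
        print(name, ":", resp.headers.get(name, "(not sent)"))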
It seems that some of the answers are missing an obvious advantage of turning off the headers.
Yes, you are all right; turning off the headers (and the status line present e.g. in directory listings) does not stop an attacker from finding out what software you use.
However, turning this information off prevents malware that uses Google to look for vulnerable systems from finding you.
tl;dr: Don't use it as a (or even as THE) security measure, but as a measure to drive away unwanted traffic.
I normally turn off Apache's long header version information with ServerTokens; it adds nothing useful.
One point which nobody has picked up on is that it looks like better security to a prospective client, pen-testing company, etc., if you're giving out less information from your web server.
So giving less information out boosts the perceived security (i.e. it shows you have actually thought about it and done something).
