The Content-Security-Policy header seems to be a great way to make websites more secure. However, we tried to find large websites using this header and couldn't find a single one, unlike with many other security-related headers. That is strange, and I would like to know if there are any problems (caching, bugs, etc.) that this header may cause.
Yes, CSP is safe, but you cannot rely on it alone.
CSP will make XSS attacks very difficult (though not impossible) for visitors to your site whose browsers support it.
Lots of browsers don't support it yet, though; IE11 still doesn't, so you still need to strictly manage any user input that is displayed or echoed in order to limit your risk.
Implementing CSP in an existing application can be very painful: to get the full benefit you have to stop using inline CSS and JavaScript, which in turn breaks lots of libraries and frameworks; for instance, Modernizr breaks with CSP enabled.
For this reason it isn't widely used yet.
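For what it's worth, experimenting with CSP is cheap, since it is only a response header. As a minimal sketch in a Node/Express app (Express is my assumption here, purely for illustration):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Allow scripts and styles only from our own origin; this is exactly
  // the restriction that breaks inline <script>/<style> blocks and the
  // libraries that depend on them.
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self'; style-src 'self'"
  );
  next();
});

app.get('/', (req, res) => res.send('hello'));
app.listen(3000);

Loading such a page and watching the browser console for CSP violation reports is a quick way to see how much of an existing application would break.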
I am a newbie in web application development. My application uses AngularJS/NodeJS. A security tool reported a reflected XSS vulnerability in our application. From my search on the internet, I found that there is an HTTP X-XSS-Protection response header which appears to protect the application against reflected XSS attacks. However, I am not sure whether that is sufficient for handling reflected XSS attacks, or whether input sanitization should also be done in the application.
X-XSS-Protection is not enough, and it is already enabled by default in most browsers. All it does is enable the browser's built-in XSS protection, but the catch is that, to the best of my knowledge (maybe somebody will correct me), it is not specified exactly what the browser should do. Based on my experience, browsers only filter the most trivial XSS, where a parameter contains JavaScript between script tags; such JavaScript from the parameter will not be run. For example, if the parameter is
something <script>alert(1)</script> something
this will not be run by the browser. And apparently that's it.
For example if you have server-side code like
<div class="userclass-{{$userinput}}">
then this can be exploited with the string abc" onclick="alert(1)" x=", which gets past the built-in filter in every browser I tried.
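With that input, the template above renders the following markup, so the attacker's onclick handler becomes a live attribute on the element:

<div class="userclass-abc" onclick="alert(1)" x="">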
And we haven't even talked about stored or DOM-based XSS, where this protection is totally useless.
With Angular, it is slightly harder to make your code vulnerable to XSS (see here how to bind values as HTML, for example), but it is far from impossible, and because you would almost never need an actual script tag for that, this built-in protection is of very limited use in an Angular app.
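To illustrate (a minimal sketch; vm.comment is a made-up name): AngularJS binds values as text by default, and rendering them as HTML requires an explicit opt-in.

<!-- Safe: interpolation and ng-bind insert the value as text -->
<p>{{vm.comment}}</p>
<p ng-bind="vm.comment"></p>

<!-- Dangerous with user input: renders the value as HTML
     (requires ngSanitize, or explicitly trusting the value
     via $sce.trustAsHtml in the controller) -->
<p ng-bind-html="vm.comment"></p>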
Also, while input validation/sanitization is nice to have, what you actually need against XSS in general is output encoding. With Angular, that is somewhat easier than with many other frameworks, but Angular is not a silver bullet; your application can still be vulnerable.
In short, any time you display user input anywhere (more precisely, any time you insert user input into the page DOM), you have to make sure that JavaScript from the user can't run. This can be achieved via output encoding and secure JavaScript use, which can mean a lot of things; mostly it means using your template engine or data binding in a way that only binds values as text, and never as anything else (e.g. tags).
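As a minimal sketch of output encoding for the HTML text and attribute contexts (the function name is mine; in practice, rely on your framework's encoder):

// Replace the characters that are significant to the HTML parser,
// so user input is displayed as text instead of being interpreted.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}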
What X-XSS-Protection does is browser dependent and can't be relied upon for security. It is meant as an extra layer to make it more difficult to exploit vulnerable applications, but is not meant as a replacement for correct encoding. It is a warning that an application is vulnerable and the responsibility to fix it is on the application developers.
This is why in Chrome, errors in the XSS filter are not even considered security bugs. See the Chrome Security FAQ:
Are XSS filter bypasses considered security bugs?
No. Chromium contains a reflected XSS filter (called XSSAuditor) that is a best-effort second line of defense against reflected XSS flaws found in web sites. We do not treat these bypasses as security bugs in Chromium because the underlying issue is in the web site itself. We treat them as functional bugs, and we do appreciate such reports.
The XSSAuditor is not able to defend against persistent XSS or DOM-based XSS. There will also be a number of infrequently occurring reflected XSS corner cases, however, that it will never be able to cover. Among these are:
Multiple unsanitized variables injected into the page.
Unexpected server side transformation or decoding of the payload.
And there are plenty of bugs: ways to bypass the filter are found regularly. Just last week there was a fairly simple-looking injected style attribute bypass.
Such a filter would need to tell where values in the output came from by looking only at the input and the output, without seeing any templates or server-side code. When any processing is done to the input, or there are multiple kinds of decoding occurring, or multiple values are combined together, this can be very difficult.
Much easier is to take care of the encoding at the template level. There, we can easily tell the difference between variable values and static code, because they haven't been merged together yet.
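A toy example of that idea (all names are mine; real template engines and Angular's data binding do this for you, and escapeHtml is the helper sketched in the earlier answer):

// At render time, static template code and variable values are still
// separate, so every value can be encoded before they are merged.
function render(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g,
    (match, name) => escapeHtml(values[name]));
}

// render('<div class="userclass-{{cls}}">', { cls: 'abc" onclick="alert(1)' })
// returns '<div class="userclass-abc&quot; onclick=&quot;alert(1)">'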
I'm currently in the process of building a browser helper object.
One of the things the BHO has to do is to make cross-site requests that bypass the cross-domain policy.
For this, I'm exposing a __MyBHONameSpace.Request method that uses WebClient internally.
However, it has occurred to me that anyone using my BHO now has a CSRF vulnerability everywhere, as a smart attacker can make arbitrary requests from my clients' computers.
Is there any clever way to mitigate this?
The only way to fully protect against such attacks is to separate the execution context of the page's JavaScript and your extension's JavaScript code.
When I researched this issue, I found that Internet Explorer does provide a way to create such a context, namely via IActiveScript. I have not implemented this solution, though, for the following reasons:
Lack of documentation/examples combining IActiveScript with BHOs.
Lack of certainty about the future (e.g. https://stackoverflow.com/a/17581825).
Possible performance implications (IE is not known for its superb performance; how would running two JavaScript engine instances for each page affect browsing speed?).
Cost of maintenance: I already had an existing solution which was working well, based on very reasonable assumptions. Because I'm not certain whether the alternative method (using IActiveScript) would be bug-free and future-proof (see 2), I decided to drop the idea.
What I have done instead is:
Accept that very determined attackers will be able to access (part of) my extension's functionality.
@Benjamin asked whether access to a persistent storage API would pose a threat to the user's privacy. I consider this risk acceptable, because a storage quota is enforced, all stored data is validated before it's used, and it doesn't give an attacker any more tools to attack the user. If an attacker wants to track the user via persistent storage, they can just use localStorage on some domain and communicate with that domain via an <iframe> using the postMessage API. This method works across all browsers, not just IE with my BHO installed, so it is unlikely that any attacker would dedicate time to reverse-engineering my BHO in order to use the API when there's a method that already works in all modern browsers (IE8+).
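A rough sketch of that cross-site storage trick (origins and names invented for illustration):

// Served from https://tracker.example/store.html and embedded by any
// participating site as a hidden <iframe>:
window.addEventListener('message', (event) => {
  // localStorage here belongs to tracker.example, so the value is
  // shared across every site that embeds this frame.
  localStorage.setItem('visitor-id', event.data);
});

// On the embedding page:
const frame = document.createElement('iframe');
frame.src = 'https://tracker.example/store.html';
frame.style.display = 'none';
document.body.appendChild(frame);
frame.onload = () =>
  frame.contentWindow.postMessage('some-visitor-id', 'https://tracker.example');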
Restrict the functionality of the extension:
The extension should only be activated on pages where it needs to be activated. This greatly reduces the attack surface, because it's far harder for an attacker to run code on a trusted page such as https://trusted.example.com than to lure the user to an arbitrary attacker-controlled page.
Create and enforce whitelisted URLs for cross-domain access at the extension level, in native code (e.g. C++) inside the BHO.
For sensitive APIs, limit their exposure to a very small set of trusted URLs (again, not in JavaScript, but in native code).
The part of the extension that handles the cross-domain functionality does not share any state with Internet Explorer. Cookies and authorization headers are stripped from the request and response. So, even if an attacker manages to get access to my API, they cannot impersonate the user at some other website, because of missing session information.
This does not protect against sites that use the requestor's IP for authentication (such as intranet sites or routers), but that attack vector is already covered by a correct implementation of the whitelist (see step 2).
"Enforce in native code" does not mean "hard-code in native code". You can still serve updates that include metadata and the JavaScript code. MSVC++ (2010) supports ECMAScript-style regular expressions <regex>, which makes implementing a regex-based whitelist quite easy.
If you want to go ahead and use IActiveScript, you can find sample code in the source code of ceee, Gears (both discontinued) or any other project that attempts to enhance the scripting environment of IE.
Web applications on uncompromised computers are vulnerable to XSS, CSRF, SQL injection attacks, and cookie stealing in insecure Wi-Fi environments.
To prevent those security issues, there are the following remedies:
SQL injection: a good data mapper (like LINQ-to-SQL) does not carry the risk of SQL injection (am I naïve to believe this?)
CSRF: every form post is verified with <%:Html.AntiForgeryToken() %> (this is a token in an ASP.NET MVC environment that is stored in a cookie and verified on the server)
XSS: every form that is allowed to post HTML is converted; only BB code is allowed, and the rest is encoded. All save actions are done with a POST request, so rogue img tags should have no effect
cookie stealing: HTTPS
Am I now invulnerable to web-based hacking attempts (when all of this is implemented correctly)? Or am I missing some other security issues in web development (except for possible holes in the OS platform or other software)?
The easy answer is "No you're not invulnerable - nobody is!"
This is a good start, but there are a few other things you could do. The main one you haven't mentioned is validation of untrusted data against a whitelist, and this is important because it spans multiple exploits, such as SQLi and XSS. Take a look at OWASP Top 10 for .NET developers part 1: Injection and in particular the section about "All input must be validated against a whitelist of acceptable value ranges".
Next up, you should apply the principle of least privilege to the accounts connecting to your SQL Server. See the heading under this name in the previous link.
Given you're working with ASP.NET, make sure Request Validation remains on and if you absolutely, positively need to disable it, just do it at a page level. More on this in Request Validation, DotNetNuke and design utopia.
For your output encoding, the main thing is to ensure that you're encoding for the right context. HTML encoding != JavaScript encoding != CSS encoding. More on this in OWASP Top 10 for .NET developers part 2: Cross-Site Scripting (XSS).
For the cookies, make them HTTP only and if possible, only allow them to be served securely (if you're happy to only run over HTTPS). Try putting your web.config through the web.config security analyser which will help point you in the right direction.
Another CSRF defense - albeit one with a usability impact - is CAPTCHA. Obviously you want to use this sparingly but if you've got any really critical functions you want to protect, this puts a pretty swift stop to it. More in OWASP Top 10 for .NET developers part 5: Cross-Site Request Forgery (CSRF).
Other than that, it sounds like you're aware of many of the important principles. It won't make you invulnerable, but it's a good start.
Am I now invulnerable to web-based hacking attempts?
Because everyone makes mistakes, no matter how good they are, the answer is no. You almost certainly forgot to sanitize some input or use some anti-forgery token somewhere, and if you haven't yet, you or another developer will as your application grows larger.
This is one of the reasons we use frameworks: MVC, for example, will automatically generate anti-CSRF tokens, while LINQ-to-SQL (as you mentioned) will parameterize input to the database. So, if you are not already using a framework which makes anti-XSS and anti-CSRF measures the default, you should begin now.
Of course, these will protect you against these specific threats, but it's never possible to be secure against all threats. For instance, if you have a weak SQL connection password, it's possible that someone will brute-force your DB password and gain access. If you don't keep your versions of .NET/SQL Server/everything else up to date, you could be the victim of an online worm (and even if you do, it's still possible to be zero-dayed).
There are even problems you can't solve in software: a script kiddie could DDoS your site. Your hosting company could go bankrupt. A shady competitor could simply take a pair of hedge clippers to your internet line. Your warehouse could burn down. A developer could sell the source code to a company in Russia.
The point is, again, you can't ever be secure against everything - you can only be secure against specific threats.
This is the definitive guide to web attacks. Also, I would recommend you use Metasploit against your web app.
It definitely is not enough! There are several other security issues you have to keep in mind when developing a web-app.
To get an overview, you can use the OWASP Top Ten.
I think this is a very interesting post to read when thinking about web security: What should a developer know before building a public web site?
It has a section about security that contains good links for most of the threats you face when developing web apps.
The most important thing to keep in mind when thinking about security is:
Never trust user input!
[I am answering this "old" question because I think it is still a relevant topic.]
About what you didn't mention:
You missed a dangerous attack in MVC frameworks: the Over Posting attack (also known as mass assignment; see the sketch after this list)
You also missed the most annoying threats: Denial of Service
You should also pay attention to file uploads (if any...) and much more...
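For the over-posting point above, a sketch in Node/Express terms (all names invented; the same mistake exists in any MVC framework):

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

const currentUser = { displayName: '', email: '', isAdmin: false };

// Vulnerable: copies every posted field onto the model, so an attacker
// can POST isAdmin=true even though the form never exposes that field.
app.post('/profile', (req, res) => {
  Object.assign(currentUser, req.body);
  res.sendStatus(204);
});

// Safer: bind only the fields you expect.
app.post('/profile-safe', (req, res) => {
  const { displayName, email } = req.body;
  Object.assign(currentUser, { displayName, email });
  res.sendStatus(204);
});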
About what you mentioned:
XSS is really, really vast and annoying to mitigate. There are several types of encoding, including HTML encoding, JavaScript encoding, CSS encoding, HTML attribute encoding, URL encoding, ...
Each of them must be applied to the proper content, in the proper place; i.e., just HTML-encoding the content is not enough in all situations.
And the most annoying thing about XSS is that in some situations you have to perform combinational encoding (i.e., first JavaScript-encode and then HTML-encode...!!!)
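A sketch of that JavaScript-then-HTML case (the helper names and the show() function are mine, and the encoders are deliberately minimal, not production-grade):

const encodeJs = (s) => JSON.stringify(String(s));   // safe JS string literal
const encodeHtmlAttr = (s) => String(s)
  .replace(/&/g, '&amp;')
  .replace(/"/g, '&quot;')
  .replace(/</g, '&lt;');

// The onclick value is HTML-decoded first and executed as JavaScript
// second, so we encode in the reverse order: JavaScript, then HTML.
const userInput = '"); alert(1); //';
const handler = encodeHtmlAttr('show(' + encodeJs(userInput) + ')');
const html = '<button onclick="' + handler + '">Click</button>';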
Take a look at the following link to become more familiar with a nightmare called XSS...!!!
XSS Filter Evasion Cheat Sheet - OWASP
One of the more powerful features of modern-day browsers is the ability for software developers to write browser extensions to enhance, modify and tweak the pages visited by the user. As more of our lives migrate onto the browser, aren't we potentially exposing ourselves to massive privacy and security holes created by the installation of a browser extension that is malicious in nature?
I realize the source code of these extensions is extractable and readable if the author has not made attempts to obfuscate its behavior. But the effectiveness of this type of review is undermined by the browser encouraging users to keep their extensions up to date. While version 1.0 of an extension may be innocuous, a user's browser may suggest an upgrade to version 1.1, which could contain malicious code used to scrape information from the screen of the compromised browser.
As both a user and developer of browser extensions, is the developer's reputation the only thing in place to provide assurances to their users that their browsing activity will be secure? Are there any mechanisms in place to help protect users from a compromised browser extension?
Are there any best-practices to develop extensions in a manner that provides users with the assurance that the code they install and update is benign in nature?
Browser extensions can do almost anything the user can do. They can send your bank passwords, read files on the local disk, execute commands, etc. The security of a browser depends not only on the browser itself, but also on all installed extensions.
I've written a few extensions for Chrome recently, and I had no idea how much harm extensions could really do before that.
Extensions ask for permissions, but these are very broad. Any non-trivial extension would most likely end up asking for "Full Permission", and most users would just bang the "YES" button. Even a tech-savvy user may shrug this off as legitimate; I know I have.
Most extensions are free. It costs time and money to code them up, so how are developers recouping their investment? Some do it for fun, but the Chrome Web Store specifically asks if you are planning to inject ads; I can only deduce that this is common practice among extension developers. Extensions could also act as tracking cookies and sell usage stats to whomever.
It's near trivial to write an extension that would grab your passwords and send them on to a third party, even if those passwords are 'saved'. One of my extensions had a legitimate use case for modifying all input fields on all pages, and I found out that Chrome would just happily paste stored passwords in as plain text. The same goes for credit card information.
Many extensions include analytics packages to help developers identify who their users are, which parts of the app are used, and so forth. I think this is a legitimate use case, but you may not necessarily agree.
If you are a developer, be advised that Chrome extensions can significantly impact page load times. My own extension, which I tirelessly optimized to be as lightweight as possible, still added 50-200 ms to every page's load time.
So after I've seen what's possible, I've disabled all extensions in Chrome except for my own. I really only miss AdBlock.
Internet Explorer Browser Helper Objects are extremely unsafe. They basically allow the browser to run native code, which could be anything. I'm not sure if they're still as pervasive now as they were in years past, but they're one of the reasons why Internet Explorer is so much less secure than Firefox and other browsers.
Mozilla style plug-ins using XUL and Microsoft's Silverlight plug-ins are sandboxed to try and prevent malicious behavior. Ultimately it rests on the developer's reputation for any kind of software to be deemed trustworthy by its users, however. Even in cases where the developer is not trying to write malware, bugs in the program may expose security exploits.
This is why you have multiple machines, and if you can't afford a new one, use a virtual machine to run most of this stuff and monitor its behavior. That's what I do, at least, before I install anything.
This question is more security related than programming related, sorry if it shouldn't be here.
I'm currently developing a web application, and I'm curious as to why most websites don't mind displaying their exact server configuration in HTTP headers, like the versions of Apache and PHP, complete with "mod_perl, mod_python, ..." listings and so on.
From a security point of view, I'd prefer that it would be impossible to find out if I'm running PHP on Apache, ASP.NET on IIS or even Rails on Lighttpd.
Obviously "obscurity is not security" but should I be worried at all that visitors know what version of Apache and PHP my server is running ? Is it good practice or totally unnecessary to hide this information ?
Prevailing wisdom is to remove the server ID and version; better yet, change them to another legitimate server ID and version so that the attacker goes off trying IIS vulnerabilities against Apache, or something like that. You might as well mislead the attacker.
But honestly, there are so many other clues to go by, I wonder about whether this is worth it. I suppose it could stop attackers using a search engine to find servers with known vulnerabilities.
(Personally, I don't bother on my HTTP server, but it's written in Java and much less vulnerable to the typical kinds of attack.)
I think you usually see those headers because the systems send them by default.
I routinely remove them, as they provide no real value and could, as you suggested, reveal information about the server.
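How you remove them depends on the stack; as one illustration (assuming a Node/Express app, which the question does not specify), Express advertises itself via an X-Powered-By header that can be switched off in one line:

const express = require('express');
const app = express();

// Stop advertising "X-Powered-By: Express" on every response.
app.disable('x-powered-by');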
Hiding the information in the headers usually just slows down the lazy and ignorant villains. There are many ways to fingerprint a system.
Running nmap -O -sV against an IP will give you the OS and service versions with a fairly high degree of accuracy. The only extra info you're giving away by having your server advertise that information is which modules you have loaded.
It seems that some of the answers are missing an obvious advantage of turning off the headers.
Yes, you are all right: turning off the headers (and the status line present, e.g., in directory listings) does not stop an attacker from finding out what software you use.
However, turning this information off prevents malware that uses Google to look for vulnerable systems from finding you.
tl;dr: Don't use it as a (or even as THE) security measure, but as a measure to drive away unwanted traffic.
I normally turn off Apache's long header version information with ServerTokens Prod; the extra detail adds nothing useful.
One point which nobody has picked up on: giving out less information from your web server looks like better security to a prospective client, a pen-testing company, etc.
So giving out less information boosts your perceived security (i.e., it shows you have actually thought about the issue and done something about it).