Exploit DOM-based XSS - security

While scanning, Burp Suite has reported some tentative DOM-based XSS findings, all stating the following:
The application may be vulnerable to DOM-based cross-site scripting. Data is read from window.location.hash and passed to $() via the following statement:
$('html, body').animate({ scrollTop: $(window.location.hash).offset().top }, 1000)
Below is the response tab.
I'm wondering what payload I should try to exploit it, or is it a false positive?
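For what it's worth, whether this sink is exploitable depends largely on the jQuery version in use. A sketch for context (the example URL is hypothetical):

// The sink reported by the scanner:
$('html, body').animate({ scrollTop: $(window.location.hash).offset().top }, 1000)

// jQuery older than 1.6.3 (see CVE-2011-4969) parses a selector string
// containing markup as HTML, so a crafted fragment such as
//   https://victim.example/page#<img src=x onerror=alert(document.domain)>
// executes the onerror handler. From jQuery 1.9 on, only strings that
// *start* with "<" are treated as HTML; location.hash always starts with
// "#", so on a current jQuery this finding is likely a false positive.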

Related

Best practices for form processing with Express

I'm writing a website which implements a user management system, and I wonder which best practices regarding form processing I have to consider.
Performance, security, SEO, and user experience are especially important to me. While working on it I came across a couple of questions, and I didn't find a complete Node/Express code snippet that would answer all of my questions below.
Use case: Someone is going to update the birthday of his profile. Right now I am doing a POST request to the same URL to process the form on that page and the POST request will respond with a 302 redirect to the same URL.
General questions about form processing:
Should I do a POST request + 302 redirect for form processing or rather something else like an AJAX request?
How should I handle invalid FORM requests (for example invalid login, or email address is already in use during signup)?
Express specific questions about form processing:
I assume before inserting anything into my DB I need to sanitize and validate all form fields on the server side. How would you do that?
I read some things about CSRF but I have never implemented a CSRF protection. I'd be happy to see that in the code snippet, too.
Do I need to take care of any other possible vulnerabilities when processing forms with Express?
Example HTML/Pug:
form#profile(method='POST', action='/settings/profile')
  input#profile-real-name.validate(type='text', name='profileRealName', value=profile.name)
  label(for='profile-real-name') Name
  textarea#profile-bio.materialize-textarea(placeholder='Tell a little about yourself', name='profileBio')
    | #{profile.bio}
  label(for='profile-bio') About
  input#profile-url.validate(type='url', name='profileUrl', value=profile.url)
  label(for='profile-url') URL
  input#profile-location.validate(type='text', name='profileLocation', value=profile.location)
  label(for='profile-location') Location
  .form-action-buttons.right-align
    a.btn.grey(href='' onclick='resetForm()') Reset
    button.btn.waves-effect.waves-light(type='submit')
Example Route Handlers:
router.get('/settings/profile', isLoggedIn, profile)
router.post('/settings/profile', isLoggedIn, updateProfile)

function profile(req, res) {
  res.render('user/profile', { title: 'Profile', profile: req.user.profile })
}

function updateProfile(req, res) {
  var userId = req.user._id
  var form = req.body
  var profile = {
    name: form.profileRealName,
    bio: form.profileBio,
    url: form.profileUrl,
    location: form.profileLocation
  }
  // Insert into DB
}
Note: A complete code snippet which takes care of all form processing best practices adapted to the given example is highly appreciated. I'm fine with using any publicly available express middleware.
Should I do a POST request + 302 redirect for form processing or rather something else like an AJAX request?
No. Best practice for a good user experience since 2004 or so (basically since Gmail launched) has been form submission via AJAX rather than web 1.0 full-page-load form POSTs. In particular, error handling via AJAX is less likely to leave your user at a dead-end browser error page and then hit issues with the back button. The AJAX call in this case should send an HTTP PATCH request to be most semantically correct, but POST or PUT will also get the job done.
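A minimal sketch of such a submission using the browser fetch API (the route and field names come from the question; the response handling is illustrative):

document.querySelector('#profile').addEventListener('submit', function (e) {
  e.preventDefault() // stay on the page, no full-page POST
  fetch('/settings/profile', {
    method: 'PATCH', // semantically a partial update; POST/PUT also work
    credentials: 'same-origin', // include the session cookie
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      profileRealName: e.target.elements.profileRealName.value,
      profileBio: e.target.elements.profileBio.value
    })
  }).then(function (res) {
    if (!res.ok) {
      // render the error payload next to the offending fields
      return res.json().then(function (errors) { console.log(errors) })
    }
    // success: update the page in place, no redirect needed
  })
})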
How should I handle invalid FORM requests (for example invalid login, or email address is already in use during signup)?
Invalid user input should result in an HTTP 400 Bad Request response, with details about the specific error(s) in a JSON response body (the format varies per application, but either a general message or field-by-field errors are common themes).
For an email address that is already in use, I use the HTTP 409 Conflict status code as a more particular flavor of the general bad-request payload.
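In Express terms, a sketch (the error shape and the emailAlreadyInUse check are illustrative, not a fixed format):

// inside a handler, after validation produced an errors array:
if (errors.length) {
  return res.status(400).json({ errors: errors })
}
// during signup, when the email is taken:
if (emailAlreadyInUse) {
  return res.status(409).json({ errors: [{ param: 'email', msg: 'Email already in use' }] })
}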
I assume before inserting anything into my DB I need to sanitize and validate all form fields on the server side. How would you do that?
Absolutely. There are many tools. I generally define a schema for a valid request in JSON Schema and use an npm library such as is-my-json-valid or ajv to validate it. In particular, I recommend being as strict as possible: reject incorrect types (or coerce types if you must), remove unexpected properties, use small but reasonable string length limits and strict regular expression patterns for strings when you can, and of course make sure your DB library properly prevents injection attacks.
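As a sketch with ajv (the schema below is an assumption fitted to the question's form fields):

var Ajv = require('ajv')
// strict settings: drop unknown properties, coerce types if you must
var ajv = new Ajv({ removeAdditional: 'all', coerceTypes: true })

var validateProfile = ajv.compile({
  type: 'object',
  additionalProperties: false,
  required: ['profileRealName'],
  properties: {
    profileRealName: { type: 'string', minLength: 1, maxLength: 100 },
    profileBio:      { type: 'string', maxLength: 2000 },
    profileUrl:      { type: 'string', maxLength: 500, pattern: '^https?://' },
    profileLocation: { type: 'string', maxLength: 100 }
  }
})

// in the route handler:
if (!validateProfile(req.body)) {
  return res.status(400).json({ errors: validateProfile.errors })
}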
I read some things about CSRF but I have never implemented a CSRF protection.
The OWASP NodeGoat Project CSRF exercise is a good place to start: take a vulnerable app, understand and exploit the vulnerability, then implement the fix (in this case with a straightforward integration of the express.csrf() middleware).
Do I need to take care of any other possible vulnerabilities when processing forms with Express?
Yes, generally application developers must understand and actively code securely. There's a lot of material out there on this, but particular care must be taken when user input gets involved in database queries, subprocess spawning, or being written back out to HTML. Solid query libraries and template engines will handle most of the work here; you just need to be aware of the mechanics and the potential places malicious user input could sneak in (like image filenames, etc).
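One concrete example of that class of issue, assuming MongoDB (as another answer below infers from the question; the query is illustrative):

// if req.body.email arrives as the object { "$gt": "" } instead of a
// string, a naive equality query suddenly matches every document:
db.collection('users').findOne({ email: req.body.email })          // risky
db.collection('users').findOne({ email: String(req.body.email) })  // coerce to a string first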
I am certainly no Express expert but I think I can answer at least #1:
You should follow the Post/Redirect/Get web development pattern in order to prevent duplicate form submissions. I've heard a 303 redirect is the proper HTTP status code for redirecting form submissions.
I process forms using the POST route, and once I'm done I trigger a 302 redirect.
As for #3, I recommend looking into express-validator, which is well introduced here: https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/forms . It's a middleware which allows you to validate and sanitize like this:
req.checkBody('name', 'Invalid name').isAlpha();
req.checkBody('age', 'Invalid age').notEmpty().isInt();
req.sanitizeBody('name').escape();
I wasn't able to comment, hence the answer, even though it's not a complete one. Just thought it might help you.
If user experience is something you're thinking about, a page redirection is a strong no. Providing a smooth flow for the people visiting your website is important to prevent drop-offs, and since forms are already not such a pleasure to fill in, easing their usage is primary. You don't want to reload a page that might have already taken some time to load just to display an error message. Once the form is valid and you have created the user cookie, a redirection is fine, though, even if you could do things on the client app to prevent it; but that's out of scope.
As stated by Levent, you should check out express-validator, which is the more established solution for this kind of purpose.
req.check('profileRealName', 'Bad name provided').notEmpty().isAlpha()
req.check('profileLocation', 'Invalid location').optional().isAlpha()

req.getValidationResult().then(function (result) {
  if (!result.isEmpty()) {
    var errors = result.array()
    // [
    //   { param: "profileRealName", msg: "Bad name provided", value: ".." },
    //   { param: "profileLocation", msg: "Invalid location", value: ".." }
    // ]
    return res.status(400).send(errors)
  }
  // everything is fine! insert into the DB and respond..
})
From what it looks like, I can assume you are using MongoDB. Given that, I would recommend using an ODM like Mongoose. It will allow you to define models for your schemas and put restrictions directly on them, letting the model handle these kinds of redundant validations for you.
For example, a model for your user could be
var mongoose = require('mongoose')

var userSchema = new mongoose.Schema({
  name: { type: String, required: [true, 'Name required'] },
  bio: { type: String, match: /[a-z]/ },
  age: { type: Number, min: 18 } // I don't know the kind of site you run ;)
})
var User = mongoose.model('User', userSchema)
Using this model in your route would look like
var user = new User({
  name: form.profileRealName,
  bio: form.profileBio,
  url: form.profileUrl,
  location: form.profileLocation
})
user.save(function (err) {
  // and you could grab the error here if it exists, and react accordingly
});
As you can see it provides a pretty cool api, which you should read about in their docs if you want to know more.
About CSRF, you should install csurf, which has pretty good instructions and example usage in its readme.
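A minimal wiring sketch for csurf with cookie-based secrets (adapted freely from its readme; the render call mirrors the question's profile route):

var cookieParser = require('cookie-parser')
var csrf = require('csurf')

app.use(cookieParser())
app.use(csrf({ cookie: true })) // store the CSRF secret in a cookie

router.get('/settings/profile', isLoggedIn, function (req, res) {
  // hand the token to the template; emit it as a hidden _csrf form field,
  // which the middleware then checks on the subsequent POST
  res.render('user/profile', {
    title: 'Profile',
    profile: req.user.profile,
    csrfToken: req.csrfToken()
  })
})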
After that you're pretty much good to go; there is not much more I can think of apart from making sure you stay up to date with your critical dependencies in case a 0-day occurs, for example the one that happened in 2015 with JWTs, but that's still kinda rare.

How to detect an extension in the browser?

I'm trying to detect if an extension is installed on a user's browser.
I tried this:
var detect = function(base, if_installed, if_not_installed) {
  var s = document.createElement('script');
  s.onerror = if_not_installed;
  s.onload = if_installed;
  document.body.appendChild(s);
  s.src = base + '/manifest.json';
}
detect('chrome-extension://' + addon_id_youre_after, function() { alert('boom!'); });
If the browser has the extension installed I will get an error like:
Resources must be listed in the web_accessible_resources manifest key
in order to be loaded by pages outside the extension
GET chrome-extension://invalid net::ERR_FAILED
If not, I will get a different error.
GET chrome-extension://addon_id_youre_after/manifest.json net::ERR_FAILED
Here is an image of the errors I am getting:
I tried to catch the errors (fiddle)
try {
  var s = document.createElement('script');
  //s.onerror = window.setTimeout(function() {throw new Error()}, 0);
  s.onload = function(){alert("installed")};
  document.body.appendChild(s);
  s.src = 'chrome-extension://gcbommkclmclpchllfjekcdonpmejbdp/manifest.json';
} catch (e) {
  debugger;
  alert(e);
}

window.onerror = function (errorMsg, url, lineNumber, column, errorObj) {
  alert('Error: ' + errorMsg + ' Script: ' + url + ' Line: ' + lineNumber
    + ' Column: ' + column + ' StackTrace: ' + errorObj);
}
So far I am not able to catch the errors.
Any help will be appreciated.
The first error is informative from Chrome, injected directly into the console and not catchable by you (as you noticed).
The GET errors are from the network stack. Chrome denies the load in either case and simulates a network error, which you can catch with an onerror handler on the element itself, but not in the window.onerror handler. Quote, emphasis mine:
When a resource (such as an <img> or <script>) fails to load, an error event using interface Event is fired at the element, that initiated the load, and the onerror() handler on the element is invoked. These error events do not bubble up to window, but (at least in Firefox) can be handled with a single capturing window.addEventListener.
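In code, such a capturing listener could look like this (a sketch; as the quote notes, behavior is browser-dependent):

// resource load errors do not bubble to window, but a capturing
// listener still observes them during the capture phase:
window.addEventListener('error', function (e) {
  if (e.target && e.target.src) {
    console.log('failed to load:', e.target.src);
  }
}, true); // true = use capture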
Here's an example that will, at least, detect the network error. Note that, again, you can't catch them, as in prevent them from showing in the console. It was the source of an embarrassing problem when the Google Cast extension (which was exposing a resource) was being used as a detection method.
s.onload = function(){alert("installed")};
s.onerror = function(){alert("I still don't know")};
Notice that you can't distinguish between the two. Internally, Chrome redirects one of the requests to chrome-extension://invalid, but such redirects are transparent to your code, be it loading a resource (like you do) or using XHR. Even the new Fetch API, which is supposed to give more control over redirects, can't help, since it's not an HTTP redirect. All it gets is an uninformative network error.
As such, you can't detect whether the extension is not installed or installed, but does not expose the resource.
Please understand that this is intentional. The method you refer to used to work: you could fetch any resource known by name. But it was a method of fingerprinting browsers, something that Google explicitly calls "malicious" and wants to prevent.
As a result, the web_accessible_resources model was introduced in Chrome 18 (all the way back in Aug 2012) to shield extensions from sniffing, by requiring exposed resources to be explicitly declared. Quote, emphasis mine:
Prior to manifest version 2 all resources within an extension could be accessed from any page on the web. This allowed a malicious website to fingerprint the extensions that a user has installed or exploit vulnerabilities (for example XSS bugs) within installed extensions. Limiting availability to only resources which are explicitly intended to be web accessible serves to both minimize the available attack surface and protect the privacy of users.
With Google actively fighting fingerprinting, only cooperating extensions can be reliably detected. There may be extension-specific hacks, such as specific DOM changes, request interceptions, or exposed resources you can fetch, but there is no general method, and an extension may change its "visible signature" at any time. I explained this in another question, Javascript check if user has a third party chrome extension installed, but I hope you can now see the reason for it better.
To sum this up, if you indeed were to find a general method that exposed arbitrary extensions to fingerprinting, this would be considered malicious and a privacy bug in Chrome.
Have your Chrome extension look for a specific DIV or other element on your page, with a very specific ID.
For example:
<div id="ExtensionCheck_JamesEggersAwesomeExtension"></div>
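For completeness, the extension's side of this handshake could be a content script along these lines (a sketch; the marker ID comes from the example above):

// content script: flag the marker element so the page can detect us
var marker = document.getElementById('ExtensionCheck_JamesEggersAwesomeExtension');
if (marker) {
  marker.setAttribute('data-extension-installed', 'true');
}
// the page then checks for that attribute (or watches it with a MutationObserver)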

Why would 'data:' be a script-src for CSP?

I've seen the following Content Security Policy violation in my server's logs. When would there be a 'data:' script-src? Isn't 'data:' only for, e.g., base64-encoded images?
CSP violation!
{ 'csp-report':
   { 'blocked-uri': 'data:',
     'document-uri': 'https://certsimple.com/blog/domain-validated-ssl',
     'original-policy': longPolicyGoesHere,
     referrer: '',
     'violated-directive': 'script-src https://example.com https://use.typekit.net \'unsafe-inline\' https://js.stripe.com \'unsafe-eval\' https://platform.twitter.com https://cdn.mxpnl.com https://syndication.twitter.com' } }
data: is for base64-encoded, embedded data. While the most popular usage is to encode images into stylesheets to reduce the number of requests, that is not the only use. The URI scheme can also be used for scripts, like the following:
<script src="data:application/javascript;charset=utf-8;base64,YWxlcnQoJ1hTUycpOw=="></script>
(The base64 payload decodes to alert('XSS');.) Also available on jsfiddle.
The report you're seeing is legitimate: something is attempting to inject arbitrary JavaScript into your page using the data: URI scheme to obfuscate what it's doing. While this could reflect an issue in your application, it's more likely to be a rogue browser extension that's either malicious and trying to do sneaky things, or benign and very badly coded.

Potential vulnerability using setInterval in a Firefox add-on?

I've written a Firefox add-on for the first time, and it was reviewed and accepted a few months ago. This add-on frequently calls a third-party API. Meanwhile it was reviewed again, and now the way it calls setInterval is criticized:
setInterval called in potentially dangerous manner. In order to prevent vulnerabilities, the setTimeout and setInterval functions should be called only with function expressions as their first argument. Variables referencing function names are acceptable but deprecated as they are not amenable to static source validation.
Here's some background about the »architecture« of my addon. It uses a global Object which is not much more than a namespace:
if ( 'undefined' == typeof myPlugin ) {
  var myPlugin = {
    //settings
    settings : {},
    intervalID : null,
    //called once on window.addEventListener( 'load' )
    init : function() {
      //load settings
      //load remote data from cache (file)
    },
    //get the data from the API
    getRemoteData : function() {
      // XMLHttpRequest to the API
      // retrieve data (application/json)
      // write it to a cache file
    }
  }

  //start
  window.addEventListener(
    'load',
    function load( event ) {
      window.removeEventListener( 'load', load, false ); // no longer needed
      myPlugin.init();
    },
    false
  );
}
So this may not be the best practice, but I keep on learning. The interval itself is set inside the init() method like so:
myPlugin.intervalID = window.setInterval(
  myPlugin.getRemoteData,
  myPlugin.settings.updateMinInterval * 1000 //milliseconds!
);
There's another place where the interval is set: an observer on the settings (preferences) clears the current interval and sets it again exactly as above when a change to the updateMinInterval setting occurs.
If I get this right, a solution using »function expressions« should look like:
myPlugin.intervalID = window.setInterval(
  function() {
    myPlugin.getRemoteData();
  },
  myPlugin.settings.updateMinInterval * 1000 //milliseconds!
);
Am I right?
What is a possible scenario of »attacking« this code that I've overlooked so far?
Should setInterval and setTimeout basically be used differently in Firefox add-ons than in »normal« frontend JavaScript? The documentation of setInterval shows exactly this way of using declared functions in some of its examples.
Am I right?
Yes, although I imagine by now you've tried it and found it works.
As for why you are asked to change the code, it's because of the part of the warning message saying "Variables referencing function names are acceptable but deprecated as they are not amenable to static source validation".
This means that unless you follow the recommended pattern for the first parameter it is impossible to automatically calculate the outcome of executing the setInterval call.
Since setInterval is susceptible to the same kinds of security risks as eval(), it is important to check that the call is safe, even more so in privileged code such as an add-on, so this warning serves as a red flag to the add-on reviewer, prompting them to carefully evaluate the safety of this line of code.
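To illustrate the eval() comparison (a sketch; the literal interval value is made up):

// the string form is evaluated like eval(), which static review tools
// cannot reason about; this is what the warning guards against:
window.setInterval( "myPlugin.getRemoteData()", 5000 );

// the recommended function-expression form contains no evaluated string:
window.setInterval( function() { myPlugin.getRemoteData(); }, 5000 );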
Your initial code should be accepted and cause no security issues but the add-on reviewer will appreciate having one less red flag to consider.
Given that the ability to automatically determine the outcome of executing JavaScript is useful for performance optimisation as well as automatic security checks I would wager that a function expression is also going to execute more quickly.

Best way to handle security and avoid XSS with user entered URLs

We have a high security application and we want to allow users to enter URLs that other users will see.
This introduces a high risk of XSS hacks - a user could potentially enter javascript that another user ends up executing. Since we hold sensitive data it's essential that this never happens.
What are the best practices in dealing with this? Is any security whitelist or escape pattern alone good enough?
Any advice on dealing with redirections (a "this link goes outside our site" message on a warning page before following the link, for instance)?
Is there an argument for not supporting user entered links at all?
Clarification:
Basically our users want to input:
stackoverflow.com
And have it output to another user as a clickable link:
<a href="http://stackoverflow.com">stackoverflow.com</a>
What I really worry about is them using this in an XSS hack. I.e. they input:
javascript:alert('hacked!');
So other users get this link:
<a href="javascript:alert('hacked!');">stackoverflow.com</a>
My example is just to explain the risk - I'm well aware that javascript and URLs are different things, but by letting them input the latter they may be able to execute the former.
You'd be amazed how many sites you can break with this trick - HTML is even worse. If they know to deal with links, do they also know to sanitise <iframe>, <img> and clever CSS references?
I'm working in a high security environment - a single XSS hack could result in very high losses for us. I'm happy that I could produce a Regex (or use one of the excellent suggestions so far) that could exclude everything that I could think of, but would that be enough?
If you think URLs can't contain code, think again!
https://owasp.org/www-community/xss-filter-evasion-cheatsheet
Read that, and weep.
Here's how we do it on Stack Overflow:
/// <summary>
/// returns "safe" URL, stripping anything outside normal charsets for URL
/// </summary>
public static string SanitizeUrl(string url)
{
    return Regex.Replace(url, @"[^-A-Za-z0-9+&@#/%?=~_|!:,.;\(\)]", "");
}
The process of rendering a link "safe" should go through three or four steps:
Unescape/re-encode the string you've been given (RSnake has documented a number of tricks at http://ha.ckers.org/xss.html that use escaping and UTF encodings).
Clean the link up: regexes are a good start; make sure to truncate the string or throw it away if it contains a " (or whatever you use to close the attributes in your output). If you're doing the links only as references to other information, you can also force the protocol at the end of this process: if the portion before the first colon is not 'http' or 'https', append 'http://' to the start. This allows you to create usable links from incomplete input, as a user would type into a browser, and gives you a last shot at tripping up whatever mischief someone has tried to sneak in. (See the sketch after this list.)
Check that the result is a well formed URL (protocol://host.domain[:port][/path][/[file]][?queryField=queryValue][#anchor]).
Possibly check the result against a site blacklist or try to fetch it through some sort of malware checker.
If security is a priority I would hope that the users would forgive a bit of paranoia in this process, even if it does end up throwing away some safe links.
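A minimal JavaScript sketch of steps 2 and 3, under the assumptions above (the helper name is made up):

function cleanLink(raw) {
  var s = String(raw).trim()
  if (s.indexOf('"') !== -1) return null             // would escape href="..."
  if (!/^https?:/i.test(s)) {
    if (/^[a-z][a-z0-9+.-]*:/i.test(s)) return null  // some other scheme: reject
    s = 'http://' + s                                // force the protocol
  }
  try {
    return new URL(s).href                           // step 3: well formed?
  } catch (e) {
    return null
  }
}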
Use a library, such as OWASP-ESAPI API:
PHP - http://code.google.com/p/owasp-esapi-php/
Java - http://code.google.com/p/owasp-esapi-java/
.NET - http://code.google.com/p/owasp-esapi-dotnet/
Python - http://code.google.com/p/owasp-esapi-python/
Read the following:
https://www.golemtechnologies.com/articles/prevent-xss#how-to-prevent-cross-site-scripting
https://www.owasp.org/
http://www.secbytes.com/blog/?p=253
For example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$esapi = new ESAPI( "/etc/php5/esapi/ESAPI.xml" ); // Modified copy of ESAPI.xml
$sanitizer = ESAPI::getSanitizer();
$sanitized_url = $sanitizer->getSanitizedURL( "user-homepage", $url );
Another example is to use a built-in function. PHP's filter_var function is an example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$sanitized_url = filter_var($url, FILTER_SANITIZE_URL);
Using filter_var allows javascript calls, and filters out schemes that are neither http nor https. Using the OWASP ESAPI Sanitizer is probably the best option.
Still another example is the code from WordPress:
http://core.trac.wordpress.org/browser/tags/3.5.1/wp-includes/formatting.php#L2561
Additionally, since there is no way of knowing where the URL links (i.e., it might be a valid URL, but the contents of the URL could be mischievous), Google has a safe browsing API you can call:
https://developers.google.com/safe-browsing/lookup_guide
Rolling your own regex for sanitization is problematic for several reasons:
Unless you are Jon Skeet, the code will have errors.
Existing APIs have many hours of review and testing behind them.
Existing URL-validation APIs consider internationalization.
Existing APIs will be kept up-to-date with emerging standards.
Other issues to consider:
What schemes do you permit (are file:/// and telnet:// acceptable)?
What restrictions do you want to place on the content of the URL (are malware URLs acceptable)?
Just HTMLEncode the links when you output them. Make sure you don't allow javascript: links. (It's best to have a whitelist of protocols that are accepted, e.g., http, https, and mailto.)
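A sketch of that whitelist idea in JavaScript (identifiers are illustrative; the link text still needs HTML-encoding on output):

var ALLOWED_PROTOCOLS = ['http:', 'https:', 'mailto:']

function safeHref(raw) {
  try {
    var u = new URL(raw, 'http://example.com') // base lets relative input parse
    return ALLOWED_PROTOCOLS.indexOf(u.protocol) !== -1 ? u.href : null
  } catch (e) {
    return null // unparseable: render as plain text, not a link
  }
}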
You don't specify the language of your application; I will then presume ASP.NET, and for this you can use the Microsoft Anti-Cross Site Scripting Library.
It is very easy to use; all you need is an include, and that is it :)
While you're on the topic, why not give Design Guidelines for Secure Web Applications a read?
If any other language: if there is a library for ASP.NET, one has to be available as well for other kinds of languages (PHP, Python, RoR, etc.).
For Pythonistas, try Scrapy's w3lib.
OWASP ESAPI pre-dates Python 2.7 and is archived on the now-defunct Google Code.
How about not displaying them as a link? Just use the text.
Combined with a warning to proceed at your own risk, that may be enough.
Addition: see also Should I sanitize HTML markup for a hosted CMS? for a discussion on sanitizing user input.
There is a JavaScript library that solves this problem:
https://github.com/braintree/sanitize-url
Try it =)
In my project written in JavaScript I use this regex as white list:
url.match(/^((https?|ftp):\/\/|\.{0,2}\/)/)
The only limitation is that you need to put ./ in front of files in the same directory, but I think I can live with that.
Using regular expressions to prevent XSS vulnerabilities becomes complicated, and thus hard to maintain over time, while it can still leave some vulnerabilities behind. URL validation with regular expressions is helpful in some scenarios, but it is better not mixed with vulnerability checks.
A solution is probably to use a combination of an encoder like AntiXssEncoder.UrlEncode for encoding the query portion of the URL and UriBuilder for the rest:
public sealed class AntiXssUrlEncoder
{
    public string EncodeUri(Uri uri, bool isEncoded = false)
    {
        // Encode the query portion of the URL to prevent XSS attacks if it is
        // not already encoded. Otherwise let UriBuilder take care of it.
        var encodedQuery = isEncoded ? uri.Query.TrimStart('?') : AntiXssEncoder.UrlEncode(uri.Query.TrimStart('?'));
        var encodedUri = new UriBuilder
        {
            Scheme = uri.Scheme,
            Host = uri.Host,
            Path = uri.AbsolutePath,
            Query = encodedQuery.Trim(),
            Fragment = uri.Fragment
        };
        if (uri.Port != 80 && uri.Port != 443)
        {
            encodedUri.Port = uri.Port;
        }
        return encodedUri.ToString();
    }

    public static string Encode(string uri)
    {
        var baseUri = new Uri(uri);
        var antiXssUrlEncoder = new AntiXssUrlEncoder();
        return antiXssUrlEncoder.EncodeUri(baseUri);
    }
}
You may need to include whitelisting to exclude some characters from encoding; that can be helpful for particular sites.
HTML-encoding the page that renders the URL is another thing you may need to consider, too.
BTW, please note that encoding the URL may break Web Parameter Tampering, so the encoded link may appear not to work as expected.
Also, you need to be careful about double encoding.
P.S. AntiXssEncoder.UrlEncode would have been better named AntiXssEncoder.EncodeForUrl to be more descriptive. Basically, it encodes a string for use in a URL; it does not encode a given URL and return a usable URL.
You could use a hex code to convert the entire URL and send it to your server. That way the client would not understand the content at first glance. After reading the content, you could decode the content URL = ? and send it to the browser.
Allowing a URL and allowing JavaScript are 2 different things.
