I've seen the following Content Security Policy violation in my server's logs. When would there be a 'data:' script-src? Isn't 'data:' only for, e.g., base64-encoded images?
CSP violation!
{ 'csp-report':
{ 'blocked-uri': 'data:',
'document-uri': 'https://certsimple.com/blog/domain-validated-ssl',
'original-policy': longPolicyGoesHere,
referrer: '',
'violated-directive': 'script-src https://example.com https://use.typekit.net \'unsafe-inline\' https://js.stripe.com \'unsafe-eval\' https://platform.twitter.com https://cdn.mxpnl.com https://syndication.twitter.com' } }
data: is for embedded, often base64-encoded, data. While the most popular use is encoding images into stylesheets to reduce the number of requests, that is not the only one. The URI scheme can also be used for scripts, like the following:
<script src="data:application/javascript;charset=utf-8;base64,YWxlcnQoJ1hTUycpOw=="></script>
The report you're seeing is legitimate: something is attempting to inject arbitrary JavaScript into your page, using the data: URI scheme to obfuscate what it's doing. While this could reflect an issue in your application, it's more likely a rogue browser extension that is either malicious and trying to do sneaky things, or benign and very badly coded.
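For reference, such a policy is just a response header. A minimal sketch of setting one in Express (the source list is trimmed and the report path is an assumption):
app.use(function (req, res, next) {
    // Any script not matching these sources -- including data: URIs -- gets blocked and reported.
    res.set('Content-Security-Policy',
        "script-src 'self' https://js.stripe.com; report-uri /csp-report");
    next();
});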
I'm having an issue with a Node.js REST API created using Express.
I have two calls, a GET and a POST, set up like this:
const express = require('express');
const router = express.Router();

router.get('/:id', (request, response) => {
    console.log(request.params.id);
    response.end(); // end the response so the request doesn't hang
});

router.post('/:id', (request, response) => {
    console.log(request.params.id);
    response.end();
});
Now, I want the ID to be able to contain special (UTF-8) characters.
The problem is, when I use Postman to test the requests, it looks like they are encoded very differently:
GET http://localhost:3000/api/â outputs â
POST http://localhost:3000/api/â outputs Ã¢
Does anyone have any idea what I am missing here?
I must mention that the POST call also contains a file upload, so the content type will be multipart/form-data.
You should encode your URL on the client and decode it on the server. See the following articles:
What is the proper way to URL encode Unicode characters?
Can urls have UTF-8 characters?
Which characters make a URL invalid?
For JavaScript, encodeURI may come in handy.
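For instance, a quick sketch of that round trip (Express decodes percent-encoded path parameters for you):
// Client side: percent-encode the id before putting it in the path.
var id = encodeURIComponent('â'); // '%C3%A2'
fetch('http://localhost:3000/api/' + id);

// Server side: request.params.id arrives already decoded,
// so console.log(request.params.id) prints 'â'.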
It looks like Postman does UTF-8 encoding but NOT proper URL encoding. Consequently, what you type in the request URL box translates to something different from what would happen if you typed that URL in a browser.
I'm requesting: GET localhost/ä but it encodes it on the wire as localhost/ä
(This is now an invalid URL, because it contains non-ASCII characters.)
But when I type localhost/ä into Google Chrome, it correctly encodes the request as localhost/%C3%A4
So you could try manually URL-encoding your request as http://localhost:3000/api/%C3%A2
In my opinion this is a bug (perhaps a regression). I am using the latest version of Postman, v7.11.0, on macOS.
Does anyone have any idea what I am missing here?
Yeah, it doesn't output Ã¢; it outputs â. But whatever you're checking the result with thinks it's reading something else (ISO-8859-1, maybe?), not UTF-8, and renders â as Ã¢.
Most likely you're viewing the result in a web browser, and the web server is sending the wrong Content-Type header. Have the server declare the charset explicitly, e.g. Content-Type: text/plain; charset=utf-8 or Content-Type: text/html; charset=utf-8; then your browser should render your â correctly.
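In Express terms, a minimal sketch (route shape borrowed from the question):
router.get('/:id', function (request, response) {
    // Declare the charset explicitly so browsers decode the body as UTF-8.
    response.set('Content-Type', 'text/plain; charset=utf-8');
    response.send(request.params.id);
});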
I'm developing a web crawler in Node.js. I've created a unique list of the URLs found in the crawled page bodies, but some of them have extensions like jpg, mp3, mpeg, etc. I want to avoid crawling those with such extensions. Is there any simple way to do that?
Two options stick out.
1) Use path to check every URL
As stated in the comments, you can use path.extname to check for a file extension. For example:
var path = require('path');

var test = "http://example.com/images/banner.jpg";
path.extname(test); // '.jpg'
This would work, but it feels like you'll wind up having to maintain a list of file types you can crawl or must avoid. That's work.
Side note -- be careful using path. Typically, url is your best tool for parsing links, because path is aimed at files and directories, not URLs. On some systems (Windows), using path to manipulate a URL can result in drama because of the slashes involved. Fair warning!
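For example, a small sketch combining the two, using the WHATWG URL class (global in modern Node) so the query string can't confuse path.extname:
var path = require('path');

// Parse first, then take the extension of the pathname only.
var pathname = new URL('http://example.com/images/banner.jpg?size=large').pathname;
path.extname(pathname); // '.jpg'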
2) Get the HEAD for each link & see if content-type is set to text/html
You may have reasons to avoid making more network calls. If so, this isn't an option. But if it is OK to make additional calls, you could grab the HEAD for each link and check the MIME type stored in content-type.
Something like this:
var http = require('http');

var headersOptions = {
    method: "HEAD",
    host: "example.com", // just the hostname -- no scheme here
    path: "/articles/content.html"
};
var req = http.request(headersOptions, function (res) {
// you will probably need to also do things like check
// HTTP status codes so you handle 404s, 301s, and so on
if ((res.headers['content-type'] || '').indexOf("text/html") > -1) {
// do something like queue the link up to be crawled
// or parse the link or put it in a database or whatever
}
});
req.end();
One benefit is that you only grab the HEAD, so even if the file is a gigantic video or something, it won't clog things up. You get the HEAD, see the content-type is a video or whatever, then move along because you aren't interested in that type.
Second, you don't have to keep track of file names because you're using a standard MIME type to differentiate html from other data formats.
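If you do want option 1 as a cheap pre-filter before spending a HEAD request, here is a hedged sketch (the extension list is an assumption -- extend it as needed):
var path = require('path');

var SKIP_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.gif', '.mp3', '.mpeg', '.mp4'];

function shouldCrawl(link) {
    // Compare the lowercased extension of the URL's pathname against the skip list.
    var ext = path.extname(new URL(link).pathname).toLowerCase();
    return SKIP_EXTENSIONS.indexOf(ext) === -1;
}

shouldCrawl('http://example.com/articles/content.html'); // true
shouldCrawl('http://example.com/images/banner.jpg');     // false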
I use Content Security Policy. I get genuinely useful warnings like this:
CSP violation!
{ 'csp-report':
{ 'document-uri': 'about:blank',
referrer: '',
'violated-directive': 'img-src \'self\' data: pbs.twimg.com syndication.twitter.com p.typekit.net',
'original-policy': 'longPolicyGoesHere',
'blocked-uri': 'https://platform.twitter.com',
'source-file': 'https://platform.twitter.com',
'line-number': 2 } }
Cool, I need to add 'platform.twitter.com' as an img-src
But sometimes I get blank CSP warnings like this:
CSP violation!
{}
I.e., there's been a POST, but the JSON is empty. What do I do?
I found the problem in my case; it might not be the problem for you.
Since the CSP reporter calls the report-uri file with the POST method, I assumed that the $_POST variable would contain the posted data. This turned out to be false, because the data was not sent from a form or file upload (see PHP "php://input" vs $_POST).
The following code works for me perfectly (thanks to inspiration by the slightly buggy code in https://mathiasbynens.be/notes/csp-reports):
<?php
// Receive and log a Content-Security-Policy report.
// (WriteLog function omitted here: it just writes text into a log file.)
$data = file_get_contents('php://input');
if (!$data) { // data is usually non-empty
    exit(0);
}
// Prettify the JSON-formatted data.
$val = json_decode($data);
$data = json_encode($val, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES);
WriteLog($data);
?>
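If your report endpoint is Node/Express rather than PHP, the same caveat applies: reports arrive as a raw POST body with Content-Type application/csp-report, which JSON body parsers skip by default. A minimal sketch (the route path is an assumption):
var express = require('express');
var app = express();

// Accept application/csp-report as JSON; express.json() would otherwise ignore it.
app.use(express.json({ type: ['application/json', 'application/csp-report'] }));

app.post('/csp-report', function (req, res) {
    console.log(JSON.stringify(req.body, null, 2)); // pretty-print the report
    res.status(204).end();
});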
I searched around quite a bit but couldn't find a solution for my problem.
My app uses i18next and it works fine except for one issue: German umlauts (ü, ö, ä) are displayed as �.
I don't understand where I got it wrong, since this example app has no problem with umlauts: http://i18next-example1.eu01.aws.af.cm/?setLng=de-DE (github: https://github.com/rbeere/i18next-jade-express-sample)
How can I figure this one out?
The culprit might be one of the following:
The Translation.json file is not saved as UTF-8.
If any specific fonts are used, their Unicode support is very limited (this is very unlikely with modern fonts).
The layout.jade file doesn't declare the page encoding, so it's up to the browser to auto-detect it. Whether or not this fixes the problem, it's good practice to declare the page encoding in the header:
meta(http-equiv="Content-Type",content="text/html; charset=utf-8")
The Content-Type HTTP header field is not set properly. Change the HTTP response as follows:
app.get('/', function(req, res) {
res.header("Content-Type", "text/html; charset=utf-8");
res.render('index', { title: 'Localization with Express, Jade and i18next-node'});
});
We have a high security application and we want to allow users to enter URLs that other users will see.
This introduces a high risk of XSS hacks - a user could potentially enter javascript that another user ends up executing. Since we hold sensitive data it's essential that this never happens.
What are the best practices in dealing with this? Is any security whitelist or escape pattern alone good enough?
Any advice on dealing with redirections (for instance, showing a "this link goes outside our site" warning page before following the link)?
Is there an argument for not supporting user entered links at all?
Clarification:
Basically our users want to input:
stackoverflow.com
And have it output to another user:
<a href="http://stackoverflow.com">stackoverflow.com</a>
What I really worry about is them using this in an XSS hack. I.e. they input:
javascript:alert('hacked!');
So other users get this link:
<a href="javascript:alert('hacked!');">stackoverflow.com</a>
My example is just to explain the risk - I'm well aware that javascript and URLs are different things, but by letting them input the latter they may be able to execute the former.
You'd be amazed how many sites you can break with this trick - HTML is even worse. If they know to deal with links, do they also know to sanitise <iframe>, <img>, and clever CSS references?
I'm working in a high security environment - a single XSS hack could result in very high losses for us. I'm happy that I could produce a Regex (or use one of the excellent suggestions so far) that could exclude everything that I could think of, but would that be enough?
If you think URLs can't contain code, think again!
https://owasp.org/www-community/xss-filter-evasion-cheatsheet
Read that, and weep.
Here's how we do it on Stack Overflow:
using System.Text.RegularExpressions;

/// <summary>
/// returns "safe" URL, stripping anything outside normal charsets for URL
/// </summary>
public static string SanitizeUrl(string url)
{
    return Regex.Replace(url, @"[^-A-Za-z0-9+&@#/%?=~_|!:,.;\(\)]", "");
}
The process of rendering a link "safe" should go through three or four steps:
Unescape/re-encode the string you've been given (RSnake has documented a number of tricks at http://ha.ckers.org/xss.html that use escaping and UTF encodings).
Clean the link up. Regexes are a good start; make sure to truncate the string or throw it away if it contains a " (or whatever you use to close attributes in your output). If you're rendering the links only as references to other information, you can also force the protocol at the end of this process: if the portion before the first colon is not 'http' or 'https', append 'http://' to the start. This allows you to create usable links from incomplete input (as a user would type into a browser) and gives you a last shot at tripping up whatever mischief someone has tried to sneak in.
Check that the result is a well formed URL (protocol://host.domain[:port][/path][/[file]][?queryField=queryValue][#anchor]).
Possibly check the result against a site blacklist or try to fetch it through some sort of malware checker.
If security is a priority I would hope that the users would forgive a bit of paranoia in this process, even if it does end up throwing away some safe links.
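As a rough illustration only (a sketch under the steps above, not a vetted implementation), steps 1-3 might look like this in JavaScript:
function makeSafeLink(input) {
    // Step 1: unescape percent-encoding so later checks see the real characters.
    var url;
    try {
        url = decodeURIComponent(input.trim());
    } catch (e) {
        return null; // malformed escape sequence: throw it away
    }
    // Step 2: clean up -- reject quotes/angle brackets, then force an http(s) protocol.
    if (/["'<>]/.test(url)) return null;
    if (!/^https?:\/\//i.test(url)) {
        if (/^[a-z][a-z0-9+.-]*:/i.test(url)) return null; // some other scheme, e.g. javascript:
        url = 'http://' + url;
    }
    // Step 3: confirm the result parses as a well-formed URL.
    try {
        return new URL(url).href;
    } catch (e) {
        return null;
    }
}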
Use a library, such as OWASP-ESAPI API:
PHP - http://code.google.com/p/owasp-esapi-php/
Java - http://code.google.com/p/owasp-esapi-java/
.NET - http://code.google.com/p/owasp-esapi-dotnet/
Python - http://code.google.com/p/owasp-esapi-python/
Read the following:
https://www.golemtechnologies.com/articles/prevent-xss#how-to-prevent-cross-site-scripting
https://www.owasp.org/
http://www.secbytes.com/blog/?p=253
For example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$esapi = new ESAPI( "/etc/php5/esapi/ESAPI.xml" ); // Modified copy of ESAPI.xml
$sanitizer = ESAPI::getSanitizer();
$sanitized_url = $sanitizer->getSanitizedURL( "user-homepage", $url );
Another option is to use a built-in function; PHP's filter_var is an example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$sanitized_url = filter_var($url, FILTER_SANITIZE_URL);
Note that filter_var with FILTER_SANITIZE_URL only strips characters that are illegal in a URL; it lets javascript: URLs through and does not restrict the scheme to http or https. Using the OWASP ESAPI Sanitizer is probably the best option.
Still another example is the code from WordPress:
http://core.trac.wordpress.org/browser/tags/3.5.1/wp-includes/formatting.php#L2561
Additionally, since there is no way of knowing where the URL links (i.e., it might be a valid URL, but the contents of the URL could be mischievous), Google has a safe browsing API you can call:
https://developers.google.com/safe-browsing/lookup_guide
Rolling your own regex for sanitization is problematic for several reasons:
Unless you are Jon Skeet, the code will have errors.
Existing APIs have many hours of review and testing behind them.
Existing URL-validation APIs consider internationalization.
Existing APIs will be kept up-to-date with emerging standards.
Other issues to consider:
What schemes do you permit (are file:/// and telnet:// acceptable)?
What restrictions do you want to place on the content of the URL (are malware URLs acceptable)?
Just HTMLEncode the links when you output them. Make sure you don't allow javascript: links. (It's best to have a whitelist of protocols that are accepted, e.g., http, https, and mailto.)
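A sketch of that protocol whitelist in JavaScript (the function name is mine, not from a library):
function hasAllowedProtocol(link) {
    var allowed = ['http:', 'https:', 'mailto:'];
    try {
        return allowed.indexOf(new URL(link).protocol) !== -1;
    } catch (e) {
        return false; // not parseable as an absolute URL
    }
}

hasAllowedProtocol('https://example.com');     // true
hasAllowedProtocol("javascript:alert('XSS')"); // false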
You don't specify the language of your application, so I'll presume ASP.NET; for that you can use the Microsoft Anti-Cross Site Scripting Library.
It is very easy to use; all you need is an include, and that's it :)
While you're on the topic, why not give Design Guidelines for Secure Web Applications a read?
If you're using another language: where a library exists for ASP.NET, equivalents are likely available for other languages too (PHP, Python, RoR, etc.).
For Pythonistas, try Scrapy's w3lib.
OWASP ESAPI pre-dates Python 2.7 and is archived on the now-defunct Google Code.
How about not displaying them as a link? Just use the text.
Combined with a warning to proceed at their own risk, that may be enough.
Addition: see also Should I sanitize HTML markup for a hosted CMS? for a discussion on sanitizing user input.
There is a JavaScript library that solves this problem:
https://github.com/braintree/sanitize-url
Try it =)
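A quick usage sketch, based on the package's README (check it for the current API):
var sanitizeUrl = require('@braintree/sanitize-url').sanitizeUrl;

sanitizeUrl('https://example.com');         // 'https://example.com'
sanitizeUrl("javascript:alert('hacked!')"); // 'about:blank'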
In my project, written in JavaScript, I use this regex as a whitelist:
url.match(/^((https?|ftp):\/\/|\.{0,2}\/)/)
The only limitation is that you need to put ./ in front of files in the same directory, but I can live with that.
Using regular expressions to prevent XSS vulnerabilities gets complicated, and thus hard to maintain over time, while it can still leave some vulnerabilities behind. URL validation with a regular expression is helpful in some scenarios, but it's best not mixed with vulnerability checks.
The solution is probably to use a combination of an encoder like AntiXssEncoder.UrlEncode for encoding the query portion of the URL and UriBuilder for the rest:
public sealed class AntiXssUrlEncoder
{
public string EncodeUri(Uri uri, bool isEncoded = false)
{
// Encode the query portion of the URL to prevent XSS attacks, if it is not already encoded. Otherwise let UriBuilder take care of it.
var encodedQuery = isEncoded ? uri.Query.TrimStart('?') : AntiXssEncoder.UrlEncode(uri.Query.TrimStart('?'));
var encodedUri = new UriBuilder
{
Scheme = uri.Scheme,
Host = uri.Host,
Path = uri.AbsolutePath,
Query = encodedQuery.Trim(),
Fragment = uri.Fragment
};
if (uri.Port != 80 && uri.Port != 443)
{
encodedUri.Port = uri.Port;
}
return encodedUri.ToString();
}
public static string Encode(string uri)
{
var baseUri = new Uri(uri);
var antiXssUrlEncoder = new AntiXssUrlEncoder();
return antiXssUrlEncoder.EncodeUri(baseUri);
}
}
You may need to add whitelisting to exclude some characters from encoding; that can be helpful for particular sites.
HTML-encoding the page that renders the URL is another thing you may need to consider.
BTW, please note that encoding the URL may break web parameter tampering, so the encoded link may appear not to work as expected.
Also, you need to be careful about double encoding.
P.S. AntiXssEncoder.UrlEncode would have been better named AntiXssEncoder.EncodeForUrl, to be more descriptive; basically, it encodes a string for use in a URL, it does not encode a given URL and return a usable URL.
You could hex-encode the entire URL and send it to your server. That way the client would not understand the content at first glance. After reading the content, you could decode the URL and send it to the browser.
Allowing a URL and allowing JavaScript are two different things.