How do browsers assess the origin domain for XMLHttpRequest?

I thought that the origin domain for XMLHttpRequest was the domain that loaded the JavaScript that is using it.
For instance, I thought that if on a page http://mydomain1.com/ I have:
<script src="http://mydomain2.com/script.js"></script>
script.js can interact with mydomain2.com via XHR. This was, I thought, one of the nice things about JSONP.
I am seeing a bit of evidence in some testing that even though JS loads from mydomain2.com, XHR's origin is still mydomain1.com. Have I just been way off base all this time?

It is the domain of the page (possibly in a frame) which executes the JavaScript.
If it were the domain from which the JavaScript was loaded, what would happen to all the people using jQuery.ajax after loading jQuery from a CDN?
JSONP doesn't allow acting as another domain, but rather it allows injecting from another domain. The source URI of a script element is not subject to the same-origin restriction that applies to XHR: in this manner JSONP can be used to freely send data (in the URI) and execute the returned JavaScript (not JSON) directly in the context of the current page.
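For illustration, a minimal JSONP sketch (the endpoint and callback name here are made up):

// Minimal JSONP sketch; the endpoint and callback name are illustrative.
// The other domain must cooperate by wrapping its data in a call to the
// named callback, e.g. by responding with: handleData({"funds": 42});
function handleData(data) {
  console.log("Got cross-domain data:", data);
}

var s = document.createElement("script");
s.src = "http://mydomain2.com/data?callback=handleData";
document.head.appendChild(s);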
Including script tags from remote sites allows the remote sites to inject any content into a website. If the remote sites have vulnerabilities that allow JavaScript injection, the original site can also be affected.
Happy coding.

Related

From content scripts, can I make an ajax call to a REST API hosted on my server?

After reading blogs and some Stack Overflow answers while building a Chrome extension, I had for some reason thought that we cannot make an ajax call to a REST API hosted on a server under a different domain than the hosting page. Is this correct? While developing my extension, I mistakenly made a call from a content script on clicking a button on my extension UI (the UI is injected into the DOM using a content script). I did not run into any error. Everything went smoothly. The host page in my test case is in fact a page from Stack Overflow, and the REST API is hosted on my localhost. Could it be because the API was on localhost?
From Chrome XHR documentation:
Regular web pages can use the XMLHttpRequest object to send and receive data from remote servers, but they're limited by the same origin policy. Extensions aren't so limited. An extension can talk to remote servers outside of its origin, as long as it first requests cross-origin permissions.
Furthermore, from the Content Script documentation:
Content scripts can also make cross-site XMLHttpRequests to the same sites as their parent extensions [...]
So the only thing you need is to add your API endpoint to host permissions in the manifest:
"permissions" : [
"*://api.example.com/*"
]
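With that permission in place, a content script can issue the request directly. A quick sketch (reusing the example host from above):

// Content-script sketch; assumes "*://api.example.com/*" is listed
// in the manifest "permissions" block shown above.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.example.com/v1/items", true);
xhr.onload = function () {
  if (xhr.status === 200) {
    console.log(JSON.parse(xhr.responseText));
  }
};
xhr.send();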

Why do browser APIs restrict cross-domain requests?

XMLHttpRequests require CORS to work cross-domain. Similarly for web fonts, WebGL textures, and a few other things. In general all new APIs seem to have this restriction.
Why?
It's so easy to circumvent: all it takes is a simple server-side proxy. In other words, server-side code isn't prohibited from doing cross-domain requests; why is client-side code? How does this give any security, to anyone?
And it's so inconsistent: I can't XMLHttpRequest, but I can <script src> or <link rel> or <img src> or <iframe>. What does restricting XHR etc. even accomplish?
If I visit a malicious website, I want to be sure that:
It cannot read my personal data from other websites I use. Think attacker.com reading gmail.com
It cannot perform actions on my behalf on other websites that I use. Think attacker.com transferring funds from my account on bank.com
Same Origin Policy solves the first problem. The second problem is called cross site request forgery, and cannot be solved with the cross-domain restrictions currently in place.
The same origin policy is in general consistent with the following rules:
Rule 1: Doesn't let you read anything from a different domain
Rule 2: Lets you write whatever you want to a different domain, but rule #1 will not allow you to read the response.
Rule 3: You can freely make cross-domain GET requests and POST requests, but you cannot control the HTTP headers
Let's see how the various things you have listed line up with the above rules:
<img> tags let you make an HTTP request, but there is no way to read the contents of the image other than simply displaying it. For example, if I do this <img src="http://bank.com/get/latest/funds"/>, the request will go through (rule 2). But there is no way for the attacker to see my balance (rule 1).
<script> tags work mostly like <img>. If you do something like <script src="http://bank.com/get/latest/funds">, the request will go through. The browser will also try to parse the response as JavaScript, and will fail.
There is a well-known abuse of <script> tags called JSONP, where you collude with the cross-domain server so that you can 'read' cross-domain. But without the explicit involvement of the cross-domain server, you cannot read the response via the <script> tag.
<link> for stylesheets work mostly like <script> tags, except the response is evaluated as CSS. In general, you cannot read the response - unless the response somehow happens to be well-formed CSS.
<iframe> is essentially a new browser window. You cannot read the HTML of a cross-domain iframe. Incidentally, you can change the URL of a cross-domain iframe, but you cannot read the URL. Notice how it follows the two rules I mentioned above.
XMLHttpRequest is the most versatile method to make HTTP requests. This is completely in the developer's control; the browser does not do anything with the response. For example, in the case of <img>, <script> or <link>, the browser assumes a particular format and in general will validate it appropriately. But in XHR, there is no prescribed response format. So browsers enforce the same origin policy and prevent you from reading the response unless the cross-domain website explicitly allows you.
Fonts via font-face are an anomaly. AFAIK, only Firefox requires the opt-in behavior; other browsers let you use fonts just like you would use images.
In short, the same origin policy is consistent. If you find a way to make a cross-domain request and read the response without explicit permission from the cross-domain website - you'll make headlines all over the world.
EDIT: Why can't I just get around all of this with a server-side proxy?
For gmail to show personalized data, it needs cookies from your browser. Some sites use HTTP basic authentication, in which the credentials are stored in the browser.
A server-side proxy cannot get access to either the cookies or the basic auth credentials. And so, even though it can make a request, the server will not return user-specific data.
Consider this scenario...
You go to my malicious website.
My site makes an XHR to your banking website and requests the form for bank transfer.
The XHR reads the anti-CSRF token from the form and POSTs the form, security token and all, transferring a sum of money to my account.
Profit!!!
Without the Same Origin Policy, you could still POST that form, but you wouldn't be able to read the CSRF token that protects it in the first place.
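To make that concrete, here is a sketch of the read step that the same origin policy forbids (the domains and field names are made up):

// Running on attacker.example: try to read bank.example's transfer form
// in order to steal its anti-CSRF token. All names are illustrative.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://bank.example/transfer-form", true);
xhr.withCredentials = true; // the victim's cookies would ride along
xhr.onload = function () {
  // Under the same origin policy the browser never exposes this
  // response to attacker.example, so the token cannot be extracted.
  var m = /name="csrf_token" value="([^"]+)"/.exec(xhr.responseText);
  if (m) { /* forge a POST that includes m[1]... */ }
};
xhr.send();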
Server side code does not run on the client's computer.
The main issue with XHR is that you can not just send a request; you are also able to read the response. Sending almost arbitrary cross-origin requests was already possible, but reading their responses was not. That's why the original XHR did not allow any cross-origin requests at all.
Later, when the demand for cross-origin requests with XHR arose, CORS was established to allow cross-origin requests under specific conditions. One condition is that particular request methods, request header fields, and requests that would contain user credentials require a so-called preflight request, with which the client can check whether the server would allow the request. With this, the server has the ability to restrict access to only specific origins, as otherwise any origin could send requests.
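As an illustration, a request like the following triggers such a preflight (the endpoint is made up; the non-simple method, the custom header, and the credentials flag are what matter):

// Sketch: a cross-origin request that triggers a CORS preflight.
// The browser first sends an OPTIONS request, and only performs the
// PUT if the response carries matching Access-Control-Allow-* headers.
var xhr = new XMLHttpRequest();
xhr.open("PUT", "https://api.other-domain.example/profile", true);
xhr.withCredentials = true;                   // include cookies
xhr.setRequestHeader("X-Custom-Header", "1"); // non-simple header
xhr.onload = function () {
  // Only runs if the server's CORS headers permit this page's origin,
  // this method, this header, and credentialed requests.
  console.log(xhr.responseText);
};
xhr.send(JSON.stringify({ name: "example" }));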

What ways can you secure a web page so that it can ONLY be viewed from within an iFrame?

This thread was created back in 2008: Restricting IFRAME access in PHP.
I am looking to do almost exactly the same thing, i.e. I want to have sites which are publicly accessible as long as they are being viewed from a specific iFrame, from a specific app. The iFrame app will have user authentication giving users access to URLs outside the core application. The URLs are all likely to be built using open-source PHP tools, e.g. WordPress.
Both the viewing iFrame and the viewed sites/pages will be owned by us.
Have there been any developments in last few years on ways to do this?
For various reasons not related to this particular issue, I am considering using the server-side RIA framework Vaadin (Java) for building the app that will contain the iFrame viewer.
The demo of the embed widget is here: http://demo.vaadin.com/sampler#WebEmbed. Looking at the page source, I don't see anywhere that the address of the embedded webpage is displayed. So to some extent I wonder: if I hide my URLs from search engines and give them very long, randomly generated URIs, maybe they will be impossible to find anyway?
You should be able to modify a framekiller to do the opposite. A framekiller is a piece of JavaScript that prevents clickjacking by detecting whether the page has been loaded within an iframe.
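An inverted framekiller might look like this sketch (the redirect target is made up):

// Inverted framekiller sketch: a classic framekiller breaks *out* of
// frames; this one refuses to display the page unless it is framed.
if (window.top === window.self) {
  // Not inside an iframe: hide the content and send the user to the
  // wrapper application (URL is illustrative).
  document.documentElement.style.display = "none";
  window.location.replace("http://wrapper-app.example/");
}

Like any client-side check, this only deters casual access; a visitor can disable JavaScript or strip the script out.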
Limiting the iframe to load within a specific page is more difficult. Looking at the Referer is easy, but also easy to bypass; if you load the iframe from an HTTPS page, the Referer will be blank. A better way would be to require the server to obtain a nonce and include this in the iframe URL, such as http://iframe_url?key=difhj8j84528423j423894hfdj897 or whatever. Having the server make a request to your server would be ideal. Doing it with client-side code and JSONP to fetch the nonce is problematic, because an attacker could deliver modified JavaScript to fetch the nonce.
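A rough sketch of that nonce scheme, assuming a Node/Express server and single-use, short-lived keys (all names and routes are illustrative):

// Nonce-gated iframe sketch (Node/Express; names are illustrative).
const crypto = require("crypto");
const express = require("express");
const app = express();
const nonces = new Set(); // outstanding single-use keys

app.get("/wrapper", (req, res) => {
  // The wrapper page mints a key server-side and embeds it in the URL.
  const key = crypto.randomBytes(16).toString("hex");
  nonces.add(key);
  setTimeout(() => nonces.delete(key), 60 * 1000); // expire after 1 min
  res.send(`<iframe src="/framed?key=${key}"></iframe>`);
});

app.get("/framed", (req, res) => {
  // delete() returns false for unknown or already-used keys.
  if (!nonces.delete(req.query.key)) {
    return res.status(403).send("Load this page via the wrapper.");
  }
  res.send("Framed content goes here.");
});

app.listen(3000);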

Firefox or Chrome plugin to block and filter all outgoing connections

In Firefox or Chrome I'd like to prevent a private web page from making outgoing connections. That is, if the URL starts with http://myprivatewebpage/ or https://myprivatewebpage/ in a browser tab, then that browser tab must be restricted so that it is allowed to load images, CSS, fonts, JavaScript, XmlHttpRequests, Java applets, flash animations and all other resources only from http://myprivatewebpage/ or https://myprivatewebpage/. For example, an <img src="http://www.google.com/images/logos/ps_logo.png"> (or the corresponding <script>new Image(...)</script>) must not be able to load that image, because it's not on myprivatewebpage. I need a 100% foolproof solution: not even a single resource outside myprivatewebpage may be accessible, not even with low probability. There must be no resource-loading restrictions on web pages other than myprivatewebpage; e.g. http://otherwebpage/ must be able to load images from google.com.
Please note that I assume that the users of myprivatewebpage are willing to cooperate to keep the web page private unless it's too much work for them. For example, they would be happy to install a Chrome or Firefox extension once, and they wouldn't be offended if they see an error message stating that access is denied to myprivatewebpage until they install the extension in a supported browser.
The reason why I need this restriction is to keep myprivatewebpage really private, without exposing any information about its use to the webmasters of other web pages. If http://www.google.com/images/logos/ps_logo.png were allowed, then the use of myprivatewebpage would be logged in the access.log for Google's ps_logo.png, so Google's webmasters would have some information about how myprivatewebpage is used, and I don't want that. (In this question I'm not interested in whether the restriction is reasonable; I'm only interested in the technical solutions and their strengths and weaknesses.)
My ideas how to implement the restriction:
Don't impose any restrictions, and just rely on the same origin policy. (This doesn't provide the necessary protection: the same origin policy lets all images pass through.)
Change the web application on the server so it generates HTML, JavaScript, Java applets, flash animations etc. which never attempt to load anything outside myprivatewebpage. (This is almost impossible to make foolproof everywhere on a complicated web application, especially with user-generated content.)
Over-sanitize the web page using an HTML output filter on the server, i.e. remove all <script>, <embed> and <object> tags, restrict the target of <img src=, <link rel=, <form action= etc., and also restrict the links in the CSS files. (This can prevent all unwanted resources if I can remember all HTML tags properly, e.g. I mustn't forget about <video>. But this is too restrictive: it removes all dynamic web page functionality like JavaScript, Java applets and flash animations; without these most web applications are useless.)
Sanitize the web page, i.e. add an HTML output filter into the webserver which removes all offending URLs from the generated HTML. (This is not foolproof, because there can be a tricky JavaScript which generates a disallowed URL. It also doesn't protect against URLs loaded by Java applets and flash animations.)
Install an HTTP proxy which blocks requests based on the URL and the HTTP Referer, and force all browser traffic (including myprivatewebpage, otherwebpage, google.com) through that HTTP proxy. (This would slow down traffic to sites other than myprivatewebpage, and maybe it doesn't protect properly if XmlHttpRequests, Java applets or flash animations can forge the HTTP Referer.)
Find or write a Firefox or Chrome extension which intercepts all outgoing connections, and blocks them based on the URL of the tab and the target URL of the connection. I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
Some links I've found for the Chrome extension:
http://www.chromium.org/developers/design-documents/extensions/notifications-of-web-request-and-navigation
https://groups.google.com/a/chromium.org/group/chromium-extensions/browse_thread/thread/90645ce11e1b3d86?pli=1
http://code.google.com/chrome/extensions/trunk/experimental.webRequest.html
As far as I can see, only the Firefox or Chrome extension is feasible from the list above. Do you have any other suggestions? Do you have some pointers how to write or where to find such an extension?
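For what it's worth, a minimal sketch of the kind of blocking extension the Chrome webRequest links above describe might look like this. Treat the details as assumptions: the API was experimental at the time, and the manifest would need the "webRequest", "webRequestBlocking", "tabs" and "<all_urls>" permissions.

// background.js sketch; not a complete extension.
var PRIVATE = "http://myprivatewebpage/";
var tabUrls = {}; // tabId -> URL currently shown in that tab

// Track each tab's URL, because the blocking listener below has to
// answer synchronously and cannot look the tab up asynchronously.
chrome.tabs.onUpdated.addListener(function (tabId, changeInfo, tab) {
  tabUrls[tabId] = tab.url;
});
chrome.tabs.onRemoved.addListener(function (tabId) {
  delete tabUrls[tabId];
});

chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    var pageUrl = tabUrls[details.tabId];
    // Restrict only tabs showing the private page; leave others alone.
    if (pageUrl && pageUrl.indexOf(PRIVATE) === 0 &&
        details.url.indexOf(PRIVATE) !== 0) {
      return { cancel: true }; // block the outgoing request
    }
    return {};
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);

(A real version would also have to cover the https:// variant of the private page, plus requests from tabs that onUpdated has not seen yet.)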
I've found https://developer.mozilla.org/en/Setting_HTTP_request_headers and thinkahead.js in https://addons.mozilla.org/en-US/firefox/addon/thinkahead/ and http://thinkahead.mozdev.org/ . Am I correct that it's possible to write a Firefox extension using that? Is there such a Firefox extension already?
I am the author of the latter extension, though I have yet to update it to support newer versions of Firefox. My initial guess is that, yes, it will do what you want:
User visits your web page without the plugin. The web page contains a ThinkAhead block that would send a simple version header to the server, but this is ignored as the plugin is not installed.
Since the server does not see that header, it redirects the client to a page to install the plugin.
User installs plugin.
User visits web page with plugin. Page sends version header to server, so server allows access.
The ThinkAhead block matches all pages that are not myprivatewebpage, and does something like setting the HTTP status to 403 Forbidden. Thus:
When the user visits any webpage that is in myprivatewebpage, there is normal behaviour.
When the user visits any webpage outside of myprivatewebpage, access is denied.
If you want to catch bad requests earlier, instead of modifying incoming headers, you could modify outgoing headers, perhaps screwing up "If-Match" or "Accept" so that the request is never honoured.
This solution is extremely lightweight, but might not be strong enough for your concerns. This depends on what you want to protect: given the above, the client would not be able to see blocked content, but external "blocked" hosts might still notice that a request has been sent, and might be able to gather information from the request URL.

JSF - Forcing use of JSESSIONID in url for iFrame without 3rd party cookie support

Hi all! I am working on a Java/JSF app that runs within an iFrame. The client authenticates outside of the iFrame, then redirects back to a page that contains the application inside of an iFrame. If the client has 3rd-party cookies disabled, the iFrame will not be able to access the cookie, and it will never see the jsessionid.
What I would like to do is test for the cookie in the app, and if not found, redirect using JS to the current page, with ;jsessionid appended to the end. I tried that with
;jsessionid=#{session.getId()}
which looked OK... but would never maintain the current session. I then added an
<h:form><h:commandButton/></h:form>
to the page, turned off cookies, viewed the page in a browser, and saw that the jsessionid listed on the form was different from the one provided by session.getId().
My question is this: how can I get the correct jsessionid, the one that would be part of the form?
Thanks! Mason
--Update--
I should mention that this is on the same domain, webserver, and application. An <h:form> and the #{session.getId()} on the same page will return a different jsessionid at the same time.
Sessions are by default domain- and context bound. Your issue indicates that the page which the iframe is serving runs at a different domain and/or context.
If the page in the iframe runs at a different domain, then you'll have to write a "local" servlet which acts as a proxy with the help of java.net.URLConnection or Apache HttpClient, and let the iframe link to that instead.
If the page in the iframe runs at the same domain but in a different context (and on the same webserver), then you need to configure the server to share the same session among all running webapps. How to do that exactly depends on the server in question. If it's Tomcat or a clone/fork, then check the emptySessionPath attribute of the HTTP connector.
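As a sketch (for Tomcat 6.x; verify against your Tomcat version, since later versions replace this attribute with sessionCookiePath), the connector in server.xml would look something like:

<!-- With emptySessionPath="true" the JSESSIONID cookie is issued with
     path "/", so all webapps behind this connector share the cookie. -->
<Connector port="8080" protocol="HTTP/1.1" emptySessionPath="true" />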
