I have been working on a uni project and I'm really stuck on why the cross-site authentication cookie from our backend isn't set when I make a CORS request to it from our frontend.
Our setup is as follows:
A frontend on https://frontend-domain.com sends a CORS request to https://backend-domain.com with the credentials in the POST body, expecting a Set-Cookie: auth-token header in the response if the credentials are correct.
The fetch to the backend has credentials: 'include' set.
The backend response includes Access-Control-Allow-Credentials: true and explicitly states Access-Control-Allow-Origin: https://frontend-domain.com. The Access-Control-Allow-Methods header is also correct.
The token cookie in the Set-Cookie header has the attributes SameSite=None and Secure, and its domain attribute is Domain=backend-domain.com.
As far as I could find on the Mozilla docs or here on Stack Overflow, these are all the requirements for cross-site cookies to work. I expected the Set-Cookie header would make the browser set the cookie, which would then be sent along with all further requests to https://backend-domain.com, given credentials: 'include' is set.
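To make the setup concrete, here is a minimal sketch of what I described above (Express is used only for illustration; the route paths, cookie value and login payload are placeholders, not our actual code):

```js
// Frontend (https://frontend-domain.com): the login call
fetch('https://backend-domain.com/login', {
  method: 'POST',
  credentials: 'include', // ask the browser to store and send cross-site cookies
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ username: 'alice', password: 'secret' })
});
```

```js
// Backend (https://backend-domain.com): simplified Express handler
const express = require('express');
const app = express();
app.use(express.json());

app.post('/login', (req, res) => {
  // ...credential check omitted...
  res.set('Access-Control-Allow-Origin', 'https://frontend-domain.com');
  res.set('Access-Control-Allow-Credentials', 'true');
  res.cookie('auth-token', 'example-token-value', {
    domain: 'backend-domain.com',
    sameSite: 'none', // required for a cross-site cookie
    secure: true      // required when SameSite=None
  });
  res.sendStatus(200);
});

app.listen(3000); // behind an HTTPS-terminating proxy in reality
```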
However, the cookie is never set.
Can anyone help me? I am absolutely clueless by now.
Thank you very much for reading and helping!
Edit
I am using Firefox right now.
Here is a screenshot of the request:
And here is the response:
All of the Set-Cookie headers you can see in the response don't result in an actual cookie.
The SameSite attribute of a cookie controls whether this cookie is included in
subrequests (such as the ones made by an <img> or <iframe> element or a JavaScript fetch call) to a different origin
top-level navigation requests (which load a new page into the current or a new browser tab).
Details are given here. Note especially the subtly different treatment of navigation with GET and POST ("Lax-Allowing-Unsafe").
Cookies in subrequests (but not top-level navigation requests) may be additionally restricted based on browser settings if they are third-party cookies, that is, if the top-level domains of their origin and the sending web page differ. In other words: Cookies from backend-domain.com count as third-party cookies when a request is made by an HTML page from frontend-domain.com, and this is what caused the issue in your case.
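For illustration, a cross-site cookie like the one in the question is delivered with roughly this response header (the value is a placeholder):

```
Set-Cookie: auth-token=abc123; Domain=backend-domain.com; Path=/; Secure; SameSite=None
```

Even with SameSite=None and Secure, the browser may still refuse to store or send it when third-party cookie blocking is active (e.g. Enhanced Tracking Protection in Firefox or Intelligent Tracking Prevention in Safari), because from the point of view of the page on frontend-domain.com it is a third-party cookie.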
Related
I noticed that when Prevent cross-site tracking is checked in Safari, I am unable to set the secure cookies. I described this issue in great detail in this question.
Then how do you set the secure cookies in Express with that setting enabled?
From MDN:
Values
The SameSite attribute accepts three values:
Lax
Cookies are allowed to be sent with top-level navigations and will be sent along with GET requests initiated by a third-party website. This is the default value in modern browsers.
Strict
Cookies will only be sent in a first-party context and not be sent along with requests initiated by third party websites.
None
Cookies will be sent in all contexts, i.e. sending cross-origin is allowed.
None used to be the default value, but recent browser versions made Lax the default value to have reasonably robust defense against some classes of cross-site request forgery (CSRF) attacks.
None requires the Secure attribute in latest browser versions. See below for more information.
It says in this article that Apple is phasing out third-party cookies in Safari. I'm reading online that third-party cookies are generated by a different domain than the one the user is visiting, for cross-site tracking, retargeting, and ad serving.
I am working on a project where the frontend is served on Netlify and the backend is on Heroku. Since the backend has a different domain than the frontend, are the cookies generated by the Node/Express backend considered third-party cookies?
Does that mean that I should have both frontend and backend on the same server going forward following this security practice?
Netlify allows you to proxy requests to your backend on a different hostname.
https://docs.netlify.com/routing/redirects/rewrites-proxies/#proxy-to-another-service
I'm not sure if it will let you set the cookies that way, Netlify might strip all headers, you should try.
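As a sketch, a _redirects file along these lines (the path and backend hostname are placeholders) proxies API calls through the Netlify site, so the frontend can call /api/... on its own origin and the backend's cookies are no longer third-party:

```
# _redirects file at the root of the Netlify site
/api/*  https://your-backend.herokuapp.com/api/:splat  200
```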
If for some reason that does not work for you, then you should either serve frontend and backend from the same hostname or set the cookie with JS on the client side (which I don't recommend); also, you can't set an HttpOnly cookie from the JS side.
I'm having some issues with the cookie for a session ID returned by Express to the browser. On consecutive requests the received cookie is not passed back, so a new session is generated for each request and the login state cannot be maintained.
This issue only seems to occur on IE11 and below on an OS lower than Win10 or so (e.g. IE11 on Win7). Edge, Chrome, Safari, FF... no issues.
Some more context:
We have two applications, say one.example.com and two.example.com. With requests from both applications the logged-in user should be kept track of.
The MEAN stack accessible on two.example.com returns set-cookie headers for one day, HTTP only, with a domain of '.example.com' on path '/'.
When I load a page with say, 10 resources, all these requests receive a new cookie for the sessionID. Even on consecutive page loads. The cookie is never returned back to the server.
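Roughly, the session setup looks like this (the secret and exact option values are placeholders, not the real production config):

```js
const express = require('express');
const session = require('express-session');

const app = express();

app.use(session({
  secret: 'placeholder-secret',
  resave: false,
  saveUninitialized: true,
  name: 'webSessionId',
  cookie: {
    domain: '.example.com',       // shared across one./two.example.com
    path: '/',
    httpOnly: true,
    maxAge: 24 * 60 * 60 * 1000   // one day
  }
}));
```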
HTTP trace from Chrome:
GET http://localhost:3000/
HTTP/1.1 200 OK
set-cookie: test=123
set-cookie: webSessionId=s%3Av6R-...
GET http://localhost:3000/lib/bootstrap/dist/css/bootstrap.css
Cookie: test=123; webSessionId=s%3Av6R-...
HTTP/1.1 200 OK
Set-Cookie: test=123
As can be seen, Chrome returns the session cookie, so Express does not set a new one for the second request.
HTTP trace from IE (screenshots omitted): the response for the first requested resource, the request for the second resource, and the response for the second requested resource.
The test=123 is a hardcoded cookie I set on every request (regardless of whether it has been returned) by using res.setHeader('Set-Cookie', 'test=123');. At one point I was looking at the difference between 'set-cookie' and 'Set-Cookie' (as can be seen in the traces above), but that does not seem to impact IE.
So I started to play around with the other cookie properties (expiry date, domain, path, Secure & HttpOnly): as soon as I provide a domain, the test cookie is not returned by IE.
In our setup '.example.com' really is a requirement. The domain does not contain an underscore (_). In dev & test it does contain a hyphen ("two-dev.example.com"), but the IE11 problem also exists in production (two.example.com).
Anyone has any idea as to why IE refuses to return cookies with a domain?
This sh*t is driving me bananas
using: express 4.13.1; express-session 1.11.3; cookie-parser 1.3.2
I think I found the cause... the domains we are using are two-letter domains, e.g. one.xx.yy & two.xx.yy. So setting the Domain part of the cookie to .xx.yy is causing IE to ignore the cookie, I reckon.
Can anyone confirm this issue is still present in IE11?
How to set cookies for two-letter domains in IE8?
And/or how to circumvent this? Besides the obvious of using other domains.
I'm a bit confused about the security aspects of CORS POST requests. I know there is a lot of information about this topic online, but I couldn't find a definite answer to my questions.
If I understood it correctly, the goal of the same-origin policy is to prevent CSRF attacks and the goal of CORS is to enable resource sharing if (and only if) the server agrees to share its data with applications hosted on other sites (origins).
HTTP specifies that POST requests are not 'safe', i.e. they might change the state of the server, e.g. by adding a new comment. When initiating a CORS request with the HTTP method POST, the browser only performs a 'safe' preflight request if the content-type of the request is non-standard (or if there are non-standard http headers). So POST requests with standard content-type and standard headers are executed and might have negative side effects on the server (although the response might not be accessible to the requesting script.)
There is this technique of adding a random token to every form, which the server then requires to be part of every non-'safe' request. If a script tries to forge a request, it either
does not have the random token and the server declines the request, or
it tries to access the form where the random token is defined. This response with the random token should have the appropriate head fields, such that the browser does not grant the evil script access to this response. Also in this case the attempt fails.
My conclusion is that the only protection against forged POST requests with standard content-type and headers is the technique described above (or a similar one). For any other non-'safe' request such as PUT or DELETE, or a POST with JSON content, it is not necessary to use the technique because CORS performs a 'safe' OPTIONS request.
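For illustration, the token technique I mean is roughly this (a sketch, not actual production code; the field and route names are made up):

```js
const crypto = require('crypto');
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'placeholder', resave: false, saveUninitialized: true }));

// Serving the form: embed a fresh random token in the page and remember it.
app.get('/comment-form', (req, res) => {
  req.session.csrfToken = crypto.randomBytes(32).toString('hex');
  res.send(`<form method="post" action="/comment">
    <input type="hidden" name="csrfToken" value="${req.session.csrfToken}">
    <textarea name="text"></textarea>
    <button>Post</button>
  </form>`);
});

// Handling the non-'safe' request: decline it unless the token matches.
app.post('/comment', (req, res) => {
  if (req.body.csrfToken !== req.session.csrfToken) {
    return res.sendStatus(403); // token missing or forged
  }
  // ... store the comment ...
  res.sendStatus(201);
});
```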
Why did the authors of CORS exempt these POST requests from preflight requests and therefore make it necessary to employ the technique described above?
See What is the motivation behind the introduction of preflight CORS requests?.
The reason CORS doesn’t require browsers to do a preflight for application/x-www-form-urlencoded, multipart/form-data, or text/plain content types is that if it did, that’d make CORS more restrictive than what browsers have already always allowed (and it’s not the intent of CORS to put new restrictions on what was already possible without CORS).
That is, with CORS, POST requests that you could do previously cross-origin are not preflighted—because browsers already allowed them before CORS existed, and servers knew about them. So CORS changes nothing about those “old” types of requests.
But prior to CORS, browsers wouldn’t allow you to do a cross-origin application/json POST at all, and so servers could assume they wouldn’t receive them. That’s why a CORS preflight is required for those types of “new” requests and not for the “old” ones—to give a heads-up to the server: this is a different “new” type of request that they must explicitly opt-in to supporting.
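To make the distinction concrete, here is a hedged pair of examples of how a browser treats cross-origin fetch calls (the URLs and payloads are placeholders):

```js
// "Old" style request: an HTML form could always send this cross-origin,
// so the browser sends it directly, without a preflight.
fetch('https://api.example.org/comments', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: 'text=hello'
});

// "New" style request: a cross-origin application/json POST was not possible
// before CORS, so the browser first sends an OPTIONS preflight and only performs
// the POST if the server opts in via the Access-Control-Allow-* response headers.
fetch('https://api.example.org/comments', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: 'hello' })
});
```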
After reading a lot about CORS and pre-flight requests I still don't quite get why there are some exceptions for doing a pre-flight. Why does it matter if the Content-Type is text/plain or application/json?
If I get it right, the value of CORS is to restrict the returned data (it doesn't care if the POST destroyed the database, it only cares that the browser can't read the output of that operation). But if that's true (and probably it's not), why are there pre-flight requests at all? Wouldn't it suffice to just check for a header like Access-Control-Allow-Cross-Origin-Request: true in the response?
The best answer so far I found in the: What is the motivation behind the introduction of preflight CORS requests? question, but it's still a bit confusing for me.
Why does it matter if the Content-Type is 'text/plain' or 'application/json'?
The three content types (enctype) supported by a form are as follows:
application/x-www-form-urlencoded
multipart/form-data
text/plain
If a form is received by a handler on the web server, and it is not one of the above content types then it can be assumed that it was an AJAX request that sent the form, and not an HTML <form /> tag.
Therefore, an existing pre-CORS system might use the content type as a way of ensuring that a request is not cross-site, in order to prevent Cross-Site Request Forgery (CSRF), and the authors of the CORS spec did not want to introduce any new security vulnerabilities into such existing websites. They did this by insisting such requests initiate a preflight to ensure both the browser and the server are CORS compatible first.
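A hedged sketch of the kind of pre-CORS check this refers to (the handler and route are made up for illustration):

```js
const express = require('express');
const app = express();
app.use(express.json());

// Pre-CORS defence: an HTML <form> can only send the three enctypes listed above,
// so requiring JSON was taken as proof that the request came from same-origin AJAX.
app.post('/transfer', (req, res) => {
  if (!req.is('application/json')) {
    return res.sendStatus(400); // a plain form submission is rejected
  }
  // ... perform the state-changing action ...
  res.sendStatus(200);
});
```

Allowing cross-origin JSON POSTs without a preflight would silently break that assumption, which is exactly what the spec authors wanted to avoid.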
It doesn't care if the POST destroyed the database, it only cares that the browser can't read the output of that operation
Exactly right. By default browsers obey the Same Origin Policy. CORS relaxes this restriction, allowing another Origin to read responses from it made by AJAX.
why are there pre-flight requests at all?
As said: to ensure that both client and server are CORS compatible, and that it is not just an HTML form being sent, which has always been able to be submitted cross-domain.
e.g. this has always worked. A form on example.com POSTing to example.org:
<form method="post" action="//example.org/handler.php" />
Wouldn't it suffice to just check for a header like 'Access-Control-Allow-Cross-Origin-Request: true' in the response?
Because of the CSRF vector. The preflight ensures that the cross-origin request is authorised before the browser will send it (by examining the CORS response headers). This enables the browser to protect the current user's session. Remember that the attacker here is not the one running the browser; the victim is running the browser in a CSRF attack, so a manipulated browser that doesn't properly check CORS headers or that spoofs a preflight would be of no advantage for an attacker to run themselves. Similarly, the preflight enables CSRF mitigations such as custom headers to work.
To summarise:
HTML form cross-origin
Can only be sent with certain enctypes
Cannot have custom headers
Browser will just send it without preflight because everything about a <form> submission will be standard (or "simple" as CORS puts it)
If server handler receives a request from such a form it will act upon it
AJAX cross-origin
Only possible via CORS
Early versions of some browsers, like IE 8 & 9, could send cross-origin requests, but not with non-standard headers or enctypes
Can have custom headers and enctypes in fully supported browsers
In order to ensure that a cross-origin AJAX request is not spoofing a same-origin AJAX request (remember that cross-origin didn't used to be possible), if the AJAX request is not simple then the browser will send a preflight to ensure this is allowed
If the server handler receives a request, it will act upon it, but only once it has passed the preflight check: the initial request is made with the OPTIONS verb, and not until the browser agrees that the server is talking CORS will it send the actual GET or POST
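For completeness, a hedged sketch of what "the server talking CORS" looks like on the server side (the origin, route and header names are placeholders; Express is used only for illustration):

```js
const express = require('express');
const app = express();

// Reply to the preflight: for a non-simple request the browser sends OPTIONS first.
app.options('/api/items', (req, res) => {
  res.set('Access-Control-Allow-Origin', 'https://app.example.com');
  res.set('Access-Control-Allow-Methods', 'GET, POST');
  res.set('Access-Control-Allow-Headers', 'Content-Type, X-Custom-Header');
  res.sendStatus(204);
});

// The actual request is only sent after the preflight succeeds, and its response
// must also carry Access-Control-Allow-Origin for the page to be allowed to read it.
app.post('/api/items', (req, res) => {
  res.set('Access-Control-Allow-Origin', 'https://app.example.com');
  res.json({ ok: true });
});

app.listen(3000);
```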
I recently had to set Access-Control-Allow-Origin to * in order to be able to make cross-subdomain AJAX calls. I feel like this might be a security problem. What risks am I exposing myself to if I keep the setting?
By responding with Access-Control-Allow-Origin: *, the requested resource allows sharing with every origin. This basically means that any site can send an XHR request to your site and access the server’s response which would not be the case if you hadn’t implemented this CORS response.
So any site can make a request to your site on behalf of their visitors and process its response. If you have something implemented like an authentication or authorization scheme that is based on something that is automatically provided by the browser (cookies, cookie-based sessions, etc.), the requests triggered by the third party sites will use them too.
This indeed poses a security risk, particularly if you allow resource sharing not just for selected resources but for every resource. In this context you should have a look at When is it safe to enable CORS?.
Update (2020-10-07)
The current Fetch Standard fails the CORS check for credentialed requests (credentials mode set to include) when Access-Control-Allow-Origin is set to *.
Therefore, if you are using cookie-based authentication, responses to such requests will not be exposed to the requesting site.
Access-Control-Allow-Origin: * is totally safe to add to any resource, unless that resource contains private data protected by something other than standard credentials. Standard credentials are cookies, HTTP basic auth, and TLS client certificates.
Eg: Data protected by cookies is safe
Imagine https://example.com/users-private-data, which may expose private data depending on the user's logged in state. This state uses a session cookie. It's safe to add Access-Control-Allow-Origin: * to this resource, as this header only allows access to the response if the request is made without cookies, and cookies are required to get the private data. As a result, no private data is leaked.
Eg: Data protected by location / ip / internal network is not safe (unfortunately common with intranets and home appliances):
Imagine https://intranet.example.com/company-private-data, which exposes private company data, but this can only be accessed if you're on the company's wifi network. It's not safe to add Access-Control-Allow-Origin: * to this resource, as it's protected using something other than standard credentials. Otherwise, a bad script could use you as a tunnel to the intranet.
Rule of thumb
Imagine what a user would see if they accessed the resource in an incognito window. If you're happy with everyone seeing this content (including the source code the browser received), it's safe to add Access-Control-Allow-Origin: *.
AFAIK, Access-Control-Allow-Origin is just an HTTP header sent from the server to the browser. Limiting it to a specific address (or disabling it) does not make your site safer for, for example, robots. If robots want to, they can just ignore the header. The regular browsers out there (Explorer, Chrome, etc.) by default honor the header. But an application like Postman simply ignores it.
The server end doesn't actually check what the 'origin' is of the request when it returns the response. It just adds the http header. It's the browser (the client end) which sent the request that decides to read the access-control header and act upon it. Note that in the case of XHR it may use a special 'OPTIONS' request to ask for the headers first.
So, anyone with creative scripting abilities can easily ignore the whole header, whatever is set in it.
See also Possible security issues of setting Access-Control-Allow-Origin.
Now to actually answer the question
I can't help but feel that I'm putting my environment to security risks.
If anyone wants to attack you, they can easily bypass Access-Control-Allow-Origin. But by enabling '*' you do give the attacker a few more 'attack vectors' to play with, like using regular web browsers that honor that HTTP header.
Here are 2 examples, posted as comments, of when a wildcard is really problematic:
Suppose I log into my bank's website. If I go to another page and then go back to my bank, I'm still logged in because of a cookie. Other users on the internet can hit the same URLs at my bank as I do, yet they won't be able to access my account without the cookie. If cross-origin requests are allowed, a malicious website can effectively impersonate the user. – Brad
Suppose you have a common home router, such as a Linksys WRT54g or something. Suppose that router allows cross-origin requests. A script on my web page could make HTTP requests to common router IP addresses (like 192.168.1.1) and reconfigure your router to allow attacks. It can even use your router directly as a DDoS node. (Most routers have test pages which allow for pings or simple HTTP server checks. These can be abused en masse.) – Brad
I feel that these comments should have been answers, because they explain the problem with a real life example.
This answer was originally written as a reply to What are the security implications of setting Access-Control-Allow-Headers: *, if any? and was merged despite being irrelevant to this question.
Setting it to a wildcard (*) means allowing all headers apart from the safelisted ones, and removing the restrictions that keep those safe.
These are the restrictions for the 4 safelisted headers to be considered safe:
For Accept-Language and Content-Language: can only have values consisting of 0-9, A-Z, a-z, space or *,-.;=.
For Accept and Content-Type: can't contain a CORS-unsafe request header byte: 0x00-0x1F (except for 0x09 (HT), which is allowed), "():<>?#[\]{}, and 0x7F (DEL).
For Content-Type: needs to have a MIME type of its parsed value (ignoring parameters) of either application/x-www-form-urlencoded, multipart/form-data, or text/plain.
For any header: the value’s length can't be greater than 128.
For simplicity's sake, I'll base my answer on these headers.
Depending on server implementation, simply removing these limitations can be very dangerous (to the user).
For example, this outdated WordPress plugin has a reflected XSS vulnerability where the value of Accept-Language was parsed and rendered on the page as-is, causing script execution in the user's browser should a malicious payload be included in the value.
With the wildcard header Access-Control-Allow-Headers: *, a third-party site redirecting to your site could set the value of the header to Accept-Language: <script src="https://example.com/malicious-script.js"></script>, given that the wildcard removes the restriction in Point 1 above.
The preflight response would then give the green light to this request, and the user will be redirected to your site, triggering an XSS in their browser, whose impact can range from an annoying popup to losing control of their account through cookie hijacking.
Thus, I would strongly recommend against setting a wildcard unless it is for an API endpoint where nothing is being rendered on the page.
You can set Access-Control-Allow-Headers: Pragma as an alternative solution to your problem.
Note that the value * only counts as a special wildcard value for requests without credentials (requests without HTTP cookies or HTTP authentication information), otherwise it will be read as a literal header. Documentation
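If all you need is one extra header, a hedged sketch of listing it explicitly instead of using the wildcard (the route, origin and use of Express are assumptions for illustration):

```js
const express = require('express');
const app = express();

// Explicitly allow only the extra header the frontend actually needs,
// instead of lifting every restriction with the * wildcard.
app.options('/api/data', (req, res) => {
  res.set('Access-Control-Allow-Origin', 'https://app.example.com');
  res.set('Access-Control-Allow-Methods', 'GET');
  res.set('Access-Control-Allow-Headers', 'Pragma');
  res.sendStatus(204);
});

app.listen(3000);
```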
Consider a scenario where the server attempts to disable CORS restrictions completely by setting the headers below:
Access-Control-Allow-Origin: * (tells the browser that the server accepts cross-site requests from any ORIGIN)
Access-Control-Allow-Credentials: true (tells the browser that cross-site requests can send cookies)
There is a fail-safe implemented in browsers that will result in the error below:
"Credential is not supported if the CORS header ‘Access-Control-Allow-Origin’ is ‘*’"
So in most scenarios setting Access-Control-Allow-Origin to * will not be a problem. However, to secure against attacks, the server can maintain a list of allowed origins; whenever the server gets a cross-origin request, it can validate the Origin header against that list and echo the value back in the Access-Control-Allow-Origin header.
Since the Origin header can't be changed by JavaScript running in the browser, a malicious site will not be able to spoof it.
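A hedged sketch of that allowlist approach (the origins and the Express middleware shape are placeholders):

```js
const express = require('express');
const app = express();

const allowedOrigins = ['https://app.example.com', 'https://admin.example.com'];

app.use((req, res, next) => {
  const origin = req.get('Origin');
  // Echo the origin back only if it is on the allowlist; never combine '*'
  // with Access-Control-Allow-Credentials.
  if (origin && allowedOrigins.includes(origin)) {
    res.set('Access-Control-Allow-Origin', origin);
    res.set('Access-Control-Allow-Credentials', 'true');
    res.set('Vary', 'Origin'); // keep caches from reusing responses across origins
  }
  next();
});

app.listen(3000);
```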