NodeJS & Express cookie issue with IE

I'm having some issues with the cookie for a sessionID returned by Express to the browser. On consecutive requests the received cookie is not passed back, so a new session is generated for each request and the login state cannot be maintained.
This issue only seems to occur on IE11 and below, on an OS older than Win10 (e.g. IE11 on Win7). Edge, Chrome, Safari, FF... no issues.
Some more context:
We have two applications, say one.example.com and two.example.com. The logged-in user should be kept track of across requests from both applications.
The MEAN stack accessible on two.example.com returns Set-Cookie headers with a one-day lifetime, HTTP only, with a domain of '.example.com' on path '/'.
When I load a page with, say, 10 resources, each of these requests receives a new cookie for the sessionID, even on consecutive page loads. The cookie is never returned to the server.
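For reference, a minimal sketch of an express-session setup matching the cookie attributes described above (the session name and secret here are placeholders, not the actual config):

var session = require('express-session');

app.use(session({
  name: 'webSessionId',            // cookie name seen in the traces below
  secret: 'placeholder-secret',    // placeholder, not our real secret
  resave: false,
  saveUninitialized: true,
  cookie: {
    domain: '.example.com',        // shared across one. and two.example.com
    path: '/',
    httpOnly: true,
    maxAge: 24 * 60 * 60 * 1000    // one day
  }
}));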
HTTP trace from Chrome:
GET http://localhost:3000/
HTTP/1.1 200 OK
set-cookie: test=123
set-cookie: webSessionId=s%3Av6R-...
GET http://localhost:3000/lib/bootstrap/dist/css/bootstrap.css
Cookie: test=123; webSessionId=s%3Av6R-...
HTTP/1.1 200 OK
Set-Cookie: test=123
As can be seen, Chrome returns the session cookie, so Express does not set a new one on the second request.
HTTP trace from IE:
[screenshots: response for the first requested resource, request for the second resource, response for the second requested resource]
The test=123 cookie is one I hardcode on every request (regardless of whether it has been returned) using res.setHeader('Set-Cookie', 'test=123');. At one point I was looking at the difference between 'set-cookie' and 'Set-Cookie' (as can be seen in the traces and screenshots above), but that does not seem to impact IE.
So I started to play around with the other cookie properties (expiry date, domain, path, Secure & HttpOnly). As soon as I provide a domain, the test cookie is not returned by IE.
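For illustration, a sketch of that experiment (the second cookie name is hypothetical; the only difference is the Domain attribute):

app.use(function (req, res, next) {
  res.setHeader('Set-Cookie', [
    'test=123',                               // IE returns this cookie
    'test2=456; Domain=.example.com; Path=/'  // IE never returns this one
  ]);
  next();
});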
In our setup '.example.com' really is a requirement. The domain does not contain an underscore (_). In dev & tst it does contain a hyphen "two-dev.example.com". But the IE(11) problem also exists on prd (two.example.com).
Does anyone have any idea why IE refuses to return cookies with a domain?
This sh*t is driving me bananas
using: express 4.13.1; express-session 1.11.3; cookie-parser 1.3.2

I think I found the cause... the domains we are using are two-letter domains, e.g. one.xx.yy & two.xx.yy. So setting the Domain part of the cookie to .xx.yy is causing IE to ignore the cookie, I reckon.
Can anyone confirm this issue is still valid on IE11?
How to set cookies for two-letter domains in IE8?
And/or how to circumvent this? Besides the obvious of using other domains.

Related

Why are my cross site cookies not working?

I have been working on a uni project and I'm really stuck on why the cross-site authentication cookie from our backend isn't set when I do a CORS request to it from our frontend.
Our setup is as follows:
A frontend on https://frontend-domain.com sends a CORS request to https://backend-domain.com with credentials in the post body, expecting a Set-Cookie: auth-token header in the response, if credentials are correct.
The fetch to the backend has credentials: 'include' set.
The backend response includes Access-Control-Allow-Credentials: true and explicitly states Access-Control-Allow-Origin: https://frontend-domain.com. The Access-Control-Allow-Methods header is also correct.
The token cookie in the Set-Cookie header has the attributes SameSite=None and Secure; its domain attribute is Domain=backend-domain.com.
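To make the setup concrete, here is a sketch of the two sides (the route, field names and token value are illustrative, not taken from the actual project):

// Frontend, served from https://frontend-domain.com:
fetch('https://backend-domain.com/login', {
  method: 'POST',
  credentials: 'include',          // send and accept cross-site cookies
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ user: 'alice', password: 'secret' })
});

// Backend response headers (Express-style):
res.setHeader('Access-Control-Allow-Origin', 'https://frontend-domain.com');
res.setHeader('Access-Control-Allow-Credentials', 'true');
res.setHeader('Set-Cookie',
  'auth-token=<token>; Domain=backend-domain.com; Path=/; SameSite=None; Secure; HttpOnly');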
As far as I could find in the Mozilla docs or here on Stack Overflow, these are all the requirements for cross-site cookies to work. I expected the Set-Cookie header to make the browser set the cookie, which would then be sent along with all further requests to https://backend-domain.com, given credentials: 'include' is set.
However, the cookie is never set.
Can anyone help me? I am absolutely clueless by now.
Thank you very much for reading and helping!
Edit
I am using Firefox right now.
Here is a screenshot of the request, and here of the response: [screenshots not reproduced]
None of the Set-Cookie headers you can see in the response result in an actual cookie.
The SameSite attribute of a cookie controls whether the cookie is included in subrequests (such as the ones made by an <img> or <iframe> element or a JavaScript fetch command) to a different origin, and in top-level navigation requests (which load a new page into the current or a new browser tab).
Details are given here. Note especially the subtly different treatment of navigation with GET and POST ("Lax-Allowing-Unsafe").
Cookies in subrequests (but not top-level navigation requests) may be additionally restricted based on browser settings if they are third-party cookies, that is, if the top-level domains of their origin and the sending web page differ. In other words: Cookies from backend-domain.com count as third-party cookies when a request is made by an HTML page from frontend-domain.com, and this is what caused the issue in your case.
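In Express terms, the attributes discussed above would be set roughly like this (a sketch; token is a placeholder variable):

res.cookie('auth-token', token, {
  sameSite: 'none',  // included in cross-site subrequests and navigations,
  secure: true,      // but only over HTTPS, and only if the browser's
  httpOnly: true     // third-party cookie settings allow it
});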

How to bypass the SameSite and Secure attributes when setting cookies if I don't have a secure URL?

I have recently pushed my server and web app to the cloud. It is still in the testing phase and we're working with the default links given for both the server and the website. These links are plain HTTP, not secure. I'm setting some cookies when the user logs in, but the client is not saving them now; it throws an error stating "Set-Cookie was blocked because SameSite is empty so it has been set to Lax".
Questions:
I have tried setting SameSite to None, but this also requires me to set Secure to true, which is not possible as the links are still HTTP only.
If I do not set SameSite, it defaults to Lax, which blocks my cross-site cookie.
Is there any way I can get this to work, either by completely avoiding SameSite or by setting it to None while still bypassing the Secure requirement?
Note: the server and the website have different URLs; these are the default EC2 (server) and S3 static-hosting (website) links.
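For illustration, assuming an Express-style backend (names are placeholders), the bind looks roughly like this:

// SameSite=None is only honoured together with Secure...
res.cookie('session', sessionId, { sameSite: 'none', secure: true });
// ...but a Secure cookie is ignored when the site is served over plain http://.

// Omitting sameSite instead makes the browser default the cookie to Lax:
res.cookie('session', sessionId, {});
// -> "Set-Cookie was blocked because SameSite is empty so it has been set to Lax"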

Why does RFC 6797 forbid sending of the Strict-Transport-Security header over plain HTTP responses?

When reading the spec for HSTS (Strict-Transport-Security), I see an injunction in section 7.2 against sending the header when accessed over http instead of https:
An HSTS Host MUST NOT include the STS header field in HTTP responses
conveyed over non-secure transport.
Why is this? What are the risks if this is violated?
The danger is to the availability of the website itself. If the website is able to respond (either now or in the future) over HTTP but not over HTTPS, it will semi-permanently prevent browsers from accessing the site:
Browser: "I want http://example.com"
ExampleCom: "You should go to the https:// URL now and for the next 3 months!"
Browser: "I want https://example.com"
ExampleCom: [nothing]
By only serving the STS header over HTTPS connections, the site guarantees that at least right now it is not pointing browsers to an inaccessible site. Of course, if the max-age is set to 3 months and the HTTPS site breaks tomorrow, the effect is the same. This is merely an incremental protection.
If your server cannot positively tell from request characteristics whether it is being accessed over HTTP vs. HTTPS, but you believe you have set up your website to only be accessible over HTTPS anyhow (e.g. due to SSL/TLS termination in an nginx proxy), it should be safe to serve the header all the time. But if you want to serve both, e.g. if you wish to serve HTTP->HTTPS redirects from your server, find out how your proxy tells you about the connection and start gating the STS header response on that.
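As a sketch of that gating, assuming an Express app behind a TLS-terminating proxy that sets X-Forwarded-Proto:

app.use(function (req, res, next) {
  // Only advertise HSTS when the original connection was HTTPS.
  if (req.secure || req.headers['x-forwarded-proto'] === 'https') {
    res.setHeader('Strict-Transport-Security',
      'max-age=7776000; includeSubDomains');  // ~3 months
  }
  next();
});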
(Thanks to Deirdre Connolly for this explanation!)
Not sure if you have a specific issue you are trying to solve or are only asking for curiosity's sake, but this might be better asked on http://security.stackexchange.com
Like you, I can't see the threat from the server sending this over HTTP. It doesn't really make sense, but I'm not sure there is a risk, to be honest. Except to say that if you can't set up the header properly, then perhaps you're not ready to implement HSTS, as it can be dangerous if misconfigured!
The far bigger danger is if a browser were to process an HSTS header received over HTTP, which section 8.1 explicitly states it MUST ignore:
If an HTTP response is received over insecure transport, the UA MUST
ignore any present STS header field(s).
The risk here is that a malicious attacker (or an accidentally misconfigured header) could take an HTTP-only website offline (or the HTTP-only parts of a mixed site) if a browser incorrectly processed it. This would effectively cause a DoS for those users until either the header expires or the site implements HTTPS.
Additionally, if a browser did accept this header over HTTP rather than HTTPS, it would be possible for a MITM attacker to expire the header by setting a max-age of 0. For example, if you have a long HSTS header set on https://www.example.com but an attacker was able to publish a max-age=0 header with includeSubDomains over http://example.com, and the browser incorrectly processed that, then it could effectively remove the protection HTTPS gives to your www site.
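For illustration, the two competing headers would look like this (the max-age values are examples):

Strict-Transport-Security: max-age=7776000; includeSubDomains   (legitimate policy, served over HTTPS)
Strict-Transport-Security: max-age=0; includeSubDomains         (attacker's clearing header, injected over HTTP)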
For these reasons it's very important that clients (i.e. web browsers) implement this correctly: ignore the HSTS header if served over HTTP and only process it over HTTPS. This could be another reason the RFC states servers must not send it over HTTP - just in case a browser implements this wrong. But, to be honest, if that happens then that browser is putting all HTTP-only websites at risk, as a MITM attacker could add the header as per the above.

Is access-control-origin: * safe if session based auth is disallowed? [duplicate]

I recently had to set Access-Control-Allow-Origin to * in order to be able to make cross-subdomain AJAX calls. I feel like this might be a security problem. What risks am I exposing myself to if I keep the setting?
By responding with Access-Control-Allow-Origin: *, the requested resource allows sharing with every origin. This basically means that any site can send an XHR request to your site and access the server’s response which would not be the case if you hadn’t implemented this CORS response.
So any site can make a request to your site on behalf of their visitors and process its response. If you have something implemented like an authentication or authorization scheme that is based on something that is automatically provided by the browser (cookies, cookie-based sessions, etc.), the requests triggered by the third party sites will use them too.
This indeed poses a security risk, particularly if you allow resource sharing not just for selected resources but for every resource. In this context you should have a look at When is it safe to enable CORS?.
Update (2020-10-07)
The current Fetch Standard omits credentials when the credentials mode is set to include but Access-Control-Allow-Origin is set to *.
Therefore, if you are using cookie-based authentication, your credentials will not be sent on the request.
Access-Control-Allow-Origin: * is totally safe to add to any resource, unless that resource contains private data protected by something other than standard credentials. Standard credentials are cookies, HTTP basic auth, and TLS client certificates.
Eg: Data protected by cookies is safe
Imagine https://example.com/users-private-data, which may expose private data depending on the user's logged in state. This state uses a session cookie. It's safe to add Access-Control-Allow-Origin: * to this resource, as this header only allows access to the response if the request is made without cookies, and cookies are required to get the private data. As a result, no private data is leaked.
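A sketch of what that means for a cross-origin script (using the hypothetical URL above):

// Without credentials the request carries no cookies, so the response is
// readable but only ever shows the logged-out view:
fetch('https://example.com/users-private-data')
  .then(function (res) { return res.text(); })
  .then(function (body) { console.log(body); });

// With credentials the browser refuses to expose the response at all,
// because the server answered with Access-Control-Allow-Origin: *
fetch('https://example.com/users-private-data', { credentials: 'include' })
  .catch(function (err) { console.log('blocked:', err); });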
Eg: Data protected by location / ip / internal network is not safe (unfortunately common with intranets and home appliances):
Imagine https://intranet.example.com/company-private-data, which exposes private company data, but this can only be accessed if you're on the company's wifi network. It's not safe to add Access-Control-Allow-Origin: * to this resource, as it's protected using something other than standard credentials. Otherwise, a bad script could use you as a tunnel to the intranet.
Rule of thumb
Imagine what a user would see if they accessed the resource in an incognito window. If you're happy with everyone seeing this content (including the source code the browser received), it's safe to add Access-Control-Allow-Origin: *.
AFAIK, Access-Control-Allow-Origin is just an HTTP header sent from the server to the browser. Limiting it to a specific address (or disabling it) does not make your site safer against, for example, robots. If robots want to, they can just ignore the header. The regular browsers out there (Explorer, Chrome, etc.) honor the header by default, but an application like Postman simply ignores it.
The server end doesn't actually check what the 'origin' is of the request when it returns the response. It just adds the http header. It's the browser (the client end) which sent the request that decides to read the access-control header and act upon it. Note that in the case of XHR it may use a special 'OPTIONS' request to ask for the headers first.
So, anyone with creative scripting abilities can easily ignore the whole header, whatever is set in it.
See also Possible security issues of setting Access-Control-Allow-Origin.
Now to actually answer the question
I can't help but feel that I'm putting my environment to security risks.
If anyone wants to attack you, they can easily bypass Access-Control-Allow-Origin. But by enabling '*' you do give the attacker a few more 'attack vectors' to play with, like using regular web browsers that honor that HTTP header.
Here are two examples, posted as comments, of when a wildcard is really problematic:
Suppose I log into my bank's website. If I go to another page and then go back to my bank, I'm still logged in because of a cookie. Other users on the internet can hit the same URLs at my bank as I do, yet they won't be able to access my account without the cookie. If cross-origin requests are allowed, a malicious website can effectively impersonate the user.
– Brad
Suppose you have a common home router, such as a Linksys WRT54g or something. Suppose that router allows cross-origin requests. A script on my web page could make HTTP requests to common router IP addresses (like 192.168.1.1) and reconfigure your router to allow attacks. It can even use your router directly as a DDoS node. (Most routers have test pages which allow for pings or simple HTTP server checks. These can be abused en masse.)
– Brad
I feel that these comments should have been answers, because they explain the problem with a real life example.
This answer was originally written as a reply to What are the security implications of setting Access-Control-Allow-Headers: *, if any? and was merged despite being irrelevant to this question.
Setting it to the wildcard * means allowing all headers apart from the safelisted ones, and removing the restrictions that keep them safe.
These are the restrictions for the 4 safelisted headers to be considered safe:
For Accept-Language and Content-Language: can only have values consisting of 0-9, A-Z, a-z, space or *,-.;=.
For Accept and Content-Type: can't contain a CORS-unsafe request header byte: 0x00-0x1F (except for 0x09 (HT), which is allowed), "():<>?#[\]{}, and 0x7F (DEL).
For Content-Type: needs to have a MIME type of its parsed value (ignoring parameters) of either application/x-www-form-urlencoded, multipart/form-data, or text/plain.
For any header: the value’s length can't be greater than 128.
For simplicity's sake, I'll base my answer on these headers.
Depending on server implementation, simply removing these limitations can be very dangerous (to the user).
For example, this outdated WordPress plugin has a reflected XSS vulnerability in which the value of Accept-Language was parsed and rendered on the page as-is, causing script execution in the user's browser should a malicious payload be included in the value.
With the wildcard header Access-Control-Allow-Headers: *, a third-party site redirecting to your site could set the value of the header to Accept-Language: <script src="https://example.com/malicious-script.js"></script>, given that the wildcard removes the restriction in point 1 above.
The preflight response would then give the green light to this request, and the user would be redirected to your site, triggering an XSS in their browser, whose impact can range from an annoying popup to losing control of their account through cookie hijacking.
Thus, I would strongly recommend against setting a wildcard unless it is for an API endpoint where nothing is being rendered on the page.
You can set Access-Control-Allow-Headers: Pragma as an alternative solution to your problem.
Note that the value * only counts as a special wildcard value for requests without credentials (requests without HTTP cookies or HTTP authentication information), otherwise it will be read as a literal header. Documentation
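A sketch of that alternative in Express (the allowed origin is a placeholder):

app.use(function (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', 'https://trusted.example.com');
  // List only the header(s) the client actually needs, instead of '*':
  res.setHeader('Access-Control-Allow-Headers', 'Pragma');
  if (req.method === 'OPTIONS') return res.sendStatus(204);  // answer preflights
  next();
});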
In a scenario where the server attempts to disable CORS completely by setting the headers below:
Access-Control-Allow-Origin: * (tells the browser that the server accepts cross-site requests from any origin)
Access-Control-Allow-Credentials: true (tells the browser that cross-site requests can send cookies)
there is a fail-safe implemented in browsers that results in the following error:
"Credential is not supported if the CORS header ‘Access-Control-Allow-Origin’ is ‘*’"
So in most scenarios setting Access-Control-Allow-Origin to * will not be a problem. However, to secure against attacks, the server can maintain a list of allowed origins; whenever the server gets a cross-origin request, it can validate the Origin header against that list and echo the value back in the Access-Control-Allow-Origin header.
Since the Origin header can't be changed by JavaScript running in the browser, a malicious site will not be able to spoof it.
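A sketch of that allow-list validation (the origins are placeholders):

var allowedOrigins = ['https://one.example.com', 'https://two.example.com'];

app.use(function (req, res, next) {
  var origin = req.headers.origin;
  if (allowedOrigins.indexOf(origin) !== -1) {
    // Echo the validated origin instead of '*' so credentialed requests still work.
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Access-Control-Allow-Credentials', 'true');
    res.setHeader('Vary', 'Origin');  // keep caches from mixing origins
  }
  next();
});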

What factors influence IE in determining whether or not to send a cross-domain cookie?

Working on troubleshooting an interface consumed by 3rd parties. The quick overview:
The 3rd party sends the user to our site, example.com/login, to let the user authenticate with us
After sign-in we redirect the user back to thirdparty.com
thirdparty.com consumes a dynamic JS file on our site, example.com/dynamicJs.js, used to return information about the logged-in user
Since this request is made against example.com it should include the cookies dropped during login (they are required for it to serve its purpose)
For IE, they are no longer being included in the request
In researching:
the cookies themselves don't appear to have changed, and manually navigating IE to the URL of dynamicJs.js results in the necessary cookies being transmitted.
example.com has P3P policies in place and is not generating any visible warnings/errors with IE
other browsers include the cookies
So, what other variables could be influencing IE and resulting in it omitting the example.com cookies when loading example.com/dynamicJs.js?
After much research we identified the root of the issue was within IIS's Custom HTTP Response Headers.
Previously we had configured the site to return a P3P header, but in diagnosing this issue we found that the header name was somehow now being returned as 3P. Restoring the key to P3P resolved our issue.
In researching the actual cause of this change we found that the bad header originated in the web.config, within the <httpProtocol><customHeaders> element -- however it appeared to have been placed there some time ago and remained dormant until the AppPool was stopped/restarted for maintenance.
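For reference, the header lives in a web.config section like this sketch (the policy value is a placeholder; the bug was the name attribute having been corrupted to "3P"):

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- name had become "3P"; restoring "P3P" fixed the issue -->
      <add name="P3P" value='CP="..."' />
    </customHeaders>
  </httpProtocol>
</system.webServer>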
