So, recently I ran a security scan on a webpage. One of the findings was "Cacheable HTTPS response" for image/JS/font files, which seems counterintuitive. And this suggestion seems to be the consensus among different security products:
https://portswigger.net/kb/issues/00700100_cacheable-https-response
https://www.valencynetworks.com/kb/cacheable-https-response.html
https://support.microfocus.com/kb/doc.php?id=7009057
So, why does caching static files matter in HTTP versus HTTPS mode? I thought modern practice was to make everything HTTPS to avoid tampering, and to rely on browser caching to avoid extra downloads.
Lastly, if we WANT to cache static content in HTTPS mode, what is the correct thing to do? Add extra logic to make it call HTTP instead? That seems like a terrible idea.
That is a warning, designed to protect people from unknowingly allowing private responses to be cached. Those descriptions all say some variation of "If sensitive information is stored in the cache...". Note the "If".
If you are knowingly allowing the content to be cached, then there's no problem. I assume the tool is showing this only on secure requests because of a presumption that such information might be sensitive.
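If you do want static content cached over HTTPS, the fix is not to fall back to HTTP; it is to say so explicitly with Cache-Control headers, so that static assets are cacheable and anything sensitive is not. A minimal sketch of that split, assuming a Flask app (the routes and values are illustrative, not something the scanner prescribes):

```python
from flask import Flask, jsonify, send_from_directory

app = Flask(__name__)

@app.route("/assets/<path:filename>")
def assets(filename):
    # Static images/JS/fonts: fine to cache, even when served over HTTPS.
    resp = send_from_directory("static", filename)
    resp.headers["Cache-Control"] = "public, max-age=31536000, immutable"
    return resp

@app.route("/account")
def account():
    # Per-user, potentially sensitive response: tell browsers and proxies not to store it.
    resp = jsonify({"balance": 42})
    resp.headers["Cache-Control"] = "no-store"
    return resp
```

With that in place, the cacheable responses are the ones you chose to make cacheable, and the warning no longer points at anything sensitive.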
Some time ago I came across an option in a piece of software I use at work to turn off server-side XSRF protection by including a special HTTP header value on the client side. Therefore, I wonder:
How is this not a security vulnerability?
Why would you implement a security feature and allow clients to turn it off? Is there a use-case I am missing?
I am doubting my knowledge of XSRF protection at the moment, and since we could not reach a consensus at work, I decided to post my concerns here.
The product is Bamboo and they publicly report the option in https://confluence.atlassian.com/bamkb/rest-api-calls-fail-due-to-missing-xsrf-token-899447048.html#RESTAPIcallsfailduetoMissingXSRFToken-Workaround. I first mentioned this in an old answer here: https://stackoverflow.com/a/45090321/410939.
I can understand allowing the server to turn it off on a per-API basis. However, allowing the client to turn it off is a very bad idea... it's as good as the protection not being there. The only reason I can think this is OK is backwards compatibility: maybe there is an older version of the client that relies on this way of mitigating CSRF, while newer clients use a newer mechanism and switch the older one off (but one of the two must be used).
I would turn the question around: why would you implement security features and then ask users to turn them on? This is the opt-in model of security you will find everywhere; for example, hardly anyone forces 2FA even though it is a huge security improvement.
If the XSRF token is session based, you run multiple tabs with the same application, and you are forced to reauthenticate in one of them, you will typically get a new XSRF token. Other tabs might then no longer pass the XSRF check, with the risk of losing unsaved work. There could be other similar scenarios.
There are sometimes trade-offs between security and usability; in this case they make security the default and let people who run into problems take an informed risk.
There are other ways to mitigate XSRF. So, if cookies are not an option (maybe the client doesn't support them), you might want to disable the cookie-based mechanism.
Some other ways to mitigate XSRF:
State variable (Auth0 uses it) - the client generates a cryptographically strong random nonce and passes it with every request; the server echoes it back with its response, allowing the client to validate the nonce. It's explained in the Auth0 docs.
Always check the referer header and accept requests only when the referer is a trusted domain. If the referer header is absent or points to a non-whitelisted domain, simply reject the request. When using SSL/TLS, the referer is usually present. Landing pages (which are mostly informational and do not contain a login form or any secured content) may be a little more relaxed and allow requests with a missing referer header.
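For the referer-based approach, a rough sketch of the server-side check (Flask; the trusted-domain list and route are made up for illustration):

```python
from urllib.parse import urlparse
from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical set of domains we trust as request origins.
TRUSTED_DOMAINS = {"example.com", "www.example.com"}

@app.before_request
def check_referer():
    # Only guard state-changing requests; informational GETs can stay relaxed.
    if request.method in ("POST", "PUT", "PATCH", "DELETE"):
        referer = request.headers.get("Referer")
        if not referer or urlparse(referer).hostname not in TRUSTED_DOMAINS:
            abort(403)  # absent or non-whitelisted referer: reject

@app.route("/transfer", methods=["POST"])
def transfer():
    return "ok"
```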
I think my website has been injected with some script that is using rawgit.com. Recently my website has been running very slowly, with the browser's lower-bar notification showing "Transferring data from rawgit.com..." or "Read rawgit.com...". I have never used RawGit to serve raw files directly from GitHub. I can see they are using the https://cdn.rawgit.com/ domain to serve files.
I would like my website to block everything related to this domain; how can I achieve that?
As I said in the comments, you are going about this problem in the wrong way. If your site already includes sources you do not recognise or allow, you are already compromised and your main focus should be on figuring out how you got compromised, and how much access an attacker may have gotten. Based on how much access they have gotten, you may need to scrap everything and restore a backup.
The safest thing to do is to bring the server offline while you investigate. Make sure that you still have access to the systems you need to access (e.g. SSH), but block any other remote IP. Just "blocking rawgit.com" blocks one of the symptoms you can see and allows an attacker to change their attack while you are fumbling with that.
I do not recommend only blocking rawgit.com, not even as your first move to counter this problem, but if you want, you can use the Content-Security-Policy header. You can whitelist the URLs you do expect and thus block the ones you do not. See MDN for more information.
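For completeness, a whitelisting policy could be set like this (a sketch; the Flask app and the trusted CDN hostname are placeholders):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(resp):
    # Only allow scripts from our own origin and an explicitly trusted CDN;
    # anything loaded from rawgit.com (or any other origin) is blocked by the browser.
    resp.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://trusted-cdn.example.com"
    )
    return resp
```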
Do you know how to change the response header in CouchDB? Right now it sends Cache-Control: must-revalidate, and I want to change it to no-cache.
I do not see any way to configure CouchDB's cache header behavior in its configuration documentation for general (built-in) API calls. Since this is not a typical need, lack of configuration for this does not surprise me.
Likewise, last I tried even show and list functions (which do give custom developer-provided functions some control over headers) do not really leave the cache headers under developer control either.
However, if you are hosting your CouchDB instance behind a reverse proxy like nginx, you could probably override the headers at that level. Another option would be to add the usual "cache busting" hack of adding a random query parameter in the code accessing your server. This is sometimes necessary in the case of broken client cache implementations but is not typical.
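As an illustration of the cache-busting hack, here is a client-side sketch using Python's requests (the CouchDB URL and database name are placeholders, and you should verify that the endpoints you call tolerate the extra parameter):

```python
import uuid
import requests

def get_doc_uncached(base_url: str, db: str, doc_id: str) -> dict:
    # Append a throwaway query parameter so intermediaries and broken client
    # caches treat every request as a distinct URL.
    params = {"_nocache": uuid.uuid4().hex}
    resp = requests.get(f"{base_url}/{db}/{doc_id}", params=params)
    resp.raise_for_status()
    return resp.json()

doc = get_doc_uncached("http://localhost:5984", "mydb", "some-doc-id")
```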
But taking a step back: why do you want to make responses no-cache instead of must-revalidate? I could see perhaps occasionally wanting to override in the other direction, letting clients cache documents for a little while without having to revalidate. Not letting clients cache at all seems a little curious to me, since the built-in CouchDB behavior using revalidated Etags should not yield any incorrect data unless the client is broken.
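For context, that built-in behaviour is just conditional requests: the client sends back the ETag it cached, and CouchDB answers 304 Not Modified when the document is unchanged, so a working client never sees stale data. Roughly (the URL is a placeholder):

```python
import requests

url = "http://localhost:5984/mydb/some-doc-id"

# First fetch: CouchDB returns the document plus an ETag.
first = requests.get(url)
etag = first.headers["ETag"]

# Revalidation: send the cached ETag back; an unchanged document yields 304
# with an empty body, so the client keeps using its cached copy.
second = requests.get(url, headers={"If-None-Match": etag})
print(second.status_code)  # 304 if the document has not changed
```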
I have a web application that has some pretty intuitive URLs, so people have written some Chrome extensions that use these URLs to make requests to our servers. Unfortunately, these extensions cause problems for us, hammering our servers, issuing malformed requests, etc., so we are trying to figure out how to block them, or at least make it difficult to craft requests to our servers, to dissuade people from using these extensions (we provide an API they should use instead).
We've tried adding some custom headers to requests and a junk JSON preamble to responses, but the extension authors have updated their code to match.
I'm not familiar with chrome extensions, so what sort of access to the host page do they have? Can they call JavaScript functions on the host page? Is there a special header the browser includes to distinguish between host-page requests and extension requests? Can the host page inspect the list of extensions and deny certain ones?
Some options we've considered are:
Rate-limiting QPS by user, but the problem is not all queries are equal, and extensions typically kick off several expensive queries that look like user entered queries.
Restricting the amount of server time a user can use, but the problem is that users might hit this limit by just navigating around or running expensive queries several times.
Adding static custom headers/response text, but they've updated their code to mimic our code.
Figuring out some sort of token (probably cryptographic in some way) that we include in our requests and that the extension can't easily guess. We minify/obfuscate our JS, so we are OK with embedding it in the JS source code (since the variable name it would have would be hard to guess).
I realize this may not be a 100% solvable problem, but we hope to either gain an upper hand in combating it or make it sufficiently hard to scrape our UI that fewer people do it.
Welp, guess nobody knows. In the end we just sent a custom header and started tracking who wasn't sending it.
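For anyone landing here later, the cryptographic-token idea from the question could look roughly like this; the secret, token format and expiry are purely illustrative, and the token would be embedded in the served page and echoed back by our JS:

```python
import hmac
import hashlib
import time

SECRET = b"server-side secret, never sent to the client"  # illustrative

def issue_token(user_id: str, ttl: int = 300) -> str:
    # Embedded into the page/JS when it is served; expires after `ttl` seconds.
    expiry = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{user_id}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{expiry}:{sig}"

def verify_token(token: str) -> bool:
    # Requests without a valid, unexpired token get logged (or rejected).
    try:
        user_id, expiry, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user_id}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()
```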
I was looking for a way to protect a web service from "Synthetic queries". Read this security stack question for more details.
It seemed that I had little alternative, until I came across NSE India's website, which implements a certain kind of measure against such synthetic queries.
I would like to know how they could have implemented a protection that works somewhat like this: you go to their website and search for a quote, let's say RELIANCE, and you get a page displaying the latest quote.
On analysis we find that the query being sent across is something like:
http://www.nseindia.com/marketinfo/equities/ajaxGetQuote.jsp?symbol=RELIANCE&series=EQ
But when we directly copy-paste the query into the browser, it returns "referral denied".
I guess such a procedure may also help me. Any ideas how I may implement something similar?
It won't help you. Faking a referrer is trivial. The only protection against queries the attacker constructs is server-side validation.
Referrers can sometimes be used to prevent hotlinking from other websites, but even that is rather hard to do, since certain programs create fake referrers and you don't want to block those users.
Referrer validation could help against other websites trying to manipulate the user's browser into accessing your site, like some kinds of cross-site request forgery. But it never protects against malicious users. Against those, the only thing that helps is server-side validation. You can never trust the client if you don't trust the user of that client.
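To see why, here is all it takes to fake the referer from any client (Python requests; the URL is the one from the question):

```python
import requests

# Any client can simply claim to have come from the "trusted" page.
resp = requests.get(
    "http://www.nseindia.com/marketinfo/equities/ajaxGetQuote.jsp",
    params={"symbol": "RELIANCE", "series": "EQ"},
    headers={"Referer": "http://www.nseindia.com/"},
)
print(resp.status_code)
```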