I want to display different HTML styles to users based on $_SERVER['HTTP_USER_AGENT']. How can I achieve this with Varnish, so that it keeps a separate cache for each specific user agent?
I know I can achieve something similar with JS, but that's not reliable enough for me; I want to do it server side.
The PHP I will use in my HTML to detect user agents will look like this:
<?php if($_SERVER['HTTP_USER_AGENT'] == $target):?>
<style>
/* CSS for the targeted user agent */
</style>
<?php endif;?>
How can I set up Varnish so it works neatly with this?
All you need to do is modify the vcl_hash subroutine to add more information to the cache key:
https://varnish-cache.org/docs/trunk/users-guide/vcl-hashing.html
sub vcl_hash {
    # Add the raw User-Agent to the hash; the built-in vcl_hash still appends the URL and Host
    hash_data(req.http.User-Agent);
}
Be aware that there are no real standards followed for User-Agent strings, so the variations are huge even for what are identical browsers. I would expect a 99% cache miss rate with this technique unless you control the User Agents yourself (an internal system, etc.).
If you want a different cache for mobile devices, the following might be more successful as it tries to detect a mobile browser, then uses a normalised cache key value to improve hit rate:
sub vcl_hash {
    # Case-insensitive match, since many User-Agent strings contain "Mobile"
    if (req.http.User-Agent ~ "(?i)mobile") {
        # Hash a normalised token instead of the raw User-Agent
        hash_data("mobile");
    }
}
Varnish supports that by default. You don't need to change Varnish's configuration. You only need to send the Vary header:
The Vary HTTP response header determines how to match future request headers to decide whether a cached response can be used rather than requesting a fresh one from the origin server.
In your specific case where you want it to vary based on the User-Agent, Varnish will understand that it needs to create different versions of the same object in cache for each different User-Agent.
Beware that varying your cache this way might reduce your hit rate considerably due to the number of variations the User-Agent header has. To avoid that, normalization is required. You can read more about normalizing User-Agent headers in Varnish's documentation.
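For completeness, the key point is that the origin response must carry Vary: User-Agent; in the asker's PHP that is a single header('Vary: User-Agent'); call before any output. Purely as an illustration, a minimal Node/TypeScript origin doing the same might look like this sketch:
import * as http from 'http';

// Minimal origin sketch: the Vary header tells Varnish to keep one cached object per User-Agent
http.createServer((_req, res) => {
    res.setHeader('Vary', 'User-Agent');
    res.setHeader('Content-Type', 'text/html');
    res.end('<html><body><!-- user-agent specific markup here --></body></html>');
}).listen(8080);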
What I have done:
Uploaded KYC documents and attachments to an S3 bucket
Integrated S3 with CloudFront
Blocked all public access on the S3 bucket
The only way of accessing the content is the CloudFront URL
My requirement is:
Currently, anyone can access the documents if the CloudFront URL is known
I want to restrict access to that URL to my application only
Mainly, block access to that URL from Chrome, Safari, and all other browsers
Is it possible to restrict the URL? How?
Lambda@Edge will let you do almost anything you want with a request as it's processed by CloudFront.
You could look at the user agent and return a 403 if it doesn't match what you expect. Beware, however, that it's not difficult to change the User-Agent string; a better approach is to use an authentication token.
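As a rough sketch of that idea (not this answer's exact code), a Lambda@Edge viewer-request handler written in TypeScript could look like the following; EXPECTED_UA_TOKEN is a hypothetical marker that only your application sends:
import { CloudFrontRequestEvent, CloudFrontRequestResult } from 'aws-lambda';

// Hypothetical token that only your application puts in its User-Agent header
const EXPECTED_UA_TOKEN = 'MyKycApp/1.0';

export const handler = async (event: CloudFrontRequestEvent): Promise<CloudFrontRequestResult> => {
    const request = event.Records[0].cf.request;
    const userAgent = request.headers['user-agent']?.[0]?.value ?? '';

    // Block anything that does not carry the expected token (easily spoofed, as noted above)
    if (!userAgent.includes(EXPECTED_UA_TOKEN)) {
        return { status: '403', statusDescription: 'Forbidden', body: 'Forbidden' };
    }

    // Let the request continue to CloudFront / the origin
    return request;
};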
To be honest, I don't understand your question well and you should make an attempt to describe the issue again. From a bird's eye view, I feel you are describing an IDOR vulnerability. But I will address multiple parts in my response.
AWS WAF will allow you to perform quite a bit of blocking on a wide variety of request content.
Specifically for this problem, if you choose to use AWS WAF, you can do the following to address this issue:
Create a WAF web ACL; for CloudFront it must be global, not regional, and set its default action to Allow
Build regex pattern sets of what you would like to block, or hard-code specific examples
Create a rule that blocks requests whose User-Agent header matches your regex pattern set
But at the end of the day, you might just be fighting a battle that should not be fought in the first place. Think about it like this: if you want to block all User-Agent headers that identify a browser, that is fine, but the User-Agent header can easily be overwritten and spoofed so that you won't see the typical browser User-Agent at all. I don't suggest blocking requests based on this criterion, because I can just use a proxy that replaces that request content before forwarding the traffic to the server, bypassing the WAF or even Lambda@Edge.
What I would suggest is to develop some sort of authorization/authentication requirement to access these specific files. Since KYC can be sensitive, this would be a good control to put in place to be sure the files are not accessed by those who should not access them.
It seems to me like you are running into a case where an attacker can exploit an IDOR vulnerability. If that is the case, you need to program this logic in the application layer. There will be no way to prevent this at the AWS WAF layer.
If you truly wanted to fix the issue and you were dealing with an IDOR, I would use Lambda@Edge to validate that the cookie included in the request is allowed to access the KYC document. You should store in a database which KYC documents can be accessed by which specific user, and check that the Cookie header contains the cookies of the user who uploaded the document. This would effectively implement authorization/authentication not just at the application layer, but also at the Lambda@Edge (or CDN) layer.
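A heavily simplified sketch of that check, again as a TypeScript Lambda@Edge handler; getUserIdFromSession and lookupDocumentOwner are hypothetical placeholders for whatever session store and database you actually use:
import { CloudFrontRequestEvent, CloudFrontRequestResult } from 'aws-lambda';

async function getUserIdFromSession(cookieHeader: string): Promise<string | null> {
    // Placeholder: parse the session cookie and resolve it against your session store
    return null;
}

async function lookupDocumentOwner(documentPath: string): Promise<string | null> {
    // Placeholder: look up in your database which user owns the document at this path
    return null;
}

export const handler = async (event: CloudFrontRequestEvent): Promise<CloudFrontRequestResult> => {
    const request = event.Records[0].cf.request;
    const cookieHeader = request.headers['cookie']?.[0]?.value ?? '';

    const userId = await getUserIdFromSession(cookieHeader);
    const ownerId = await lookupDocumentOwner(request.uri);

    // Only the user who owns the KYC document may fetch it
    if (!userId || !ownerId || userId !== ownerId) {
        return { status: '403', statusDescription: 'Forbidden', body: 'Forbidden' };
    }
    return request;
};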
I am using Angular 5 with a Hapi Node.js backend. When I send the "Cache-Control: private, max-age=3600" header in the HTTP response, the response is cached correctly. The problem is that when I make the exact same request in a different tab, connected to a different database, the data cached for browser tab 1 is shared with browser tab 2 when it makes the same request. Is there a way, using the Cache-Control header, for the cache to be used only per application instance?
Same web app in browser tab 1 (same domain) -> Database 1
Same web app in browser tab 2 (same domain) -> Database 2
The user agent needs to somehow differentiate these cache entries. Probably your best option is to adjust the cache entry key: add a subdomain, path, or query parameter that identifies the database to the URI.
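For instance, a minimal sketch with Angular's HttpClient, assuming a hypothetical /api/report endpoint and a databaseId the app already knows:
import { HttpClient, HttpParams } from '@angular/common/http';

export function getReport(http: HttpClient, databaseId: string) {
    // The db parameter makes the request URL, and therefore the browser cache entry, unique per database
    const params = new HttpParams().set('db', databaseId);
    return http.get<any>('/api/report', { params });
}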
You can also use a custom HTTP header (such as X-Database) paired with the Vary HTTP header, but in this case the user agent may store only a single response at a time, because it still uses the URI as the cache key and the Vary header only for response validation. A relevant excerpt from The State of Browser Caching, Revisited by Mark Nottingham:
The one hiccup I saw was that all tested browser caches would only store one variant at a time; i.e., if your responses contain Vary: Foo and you get two requests, the first with Foo: 1 and the second with Foo: 2, the second response will evict the first from cache.
Whether that’s a problem depends on how you use Vary; if you want to reuse cached responses with old values, it might reduce efficiency. However, that doesn’t seem like a common use case;
For more information, check RFC 7234, Hypertext Transfer Protocol (HTTP/1.1): Caching, and the article Understanding the Vary Header by Andrew Betts.
Do you know how to change the response header in CouchDB? Right now it sends Cache-Control: must-revalidate, and I want to change it to no-cache.
I do not see any way to configure CouchDB's cache header behavior in its configuration documentation for general (built-in) API calls. Since this is not a typical need, lack of configuration for this does not surprise me.
Likewise, last I tried, even show and list functions (which do give custom developer-provided functions some control over headers) did not really leave the cache headers under developer control.
However, if you are hosting your CouchDB instance behind a reverse proxy like nginx, you could probably override the headers at that level. Another option would be to add the usual "cache busting" hack of adding a random query parameter in the code accessing your server. This is sometimes necessary in the case of broken client cache implementations but is not typical.
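As an illustration of that second workaround (a sketch only; the document URL is hypothetical), the client can append a throwaway query parameter so each request bypasses any cached copy:
// Hypothetical CouchDB document URL; the extra query parameter defeats any cached copy
const docUrl = 'http://localhost:5984/mydb/some_doc';

async function fetchFresh(): Promise<any> {
    const response = await fetch(`${docUrl}?nocache=${Date.now()}`);
    return response.json();
}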
But taking a step back: why do you want to make responses no-cache instead of must-revalidate? I could see perhaps occasionally wanting to override in the other direction, letting clients cache documents for a little while without having to revalidate. Not letting clients cache at all seems a little curious to me, since the built-in CouchDB behavior using revalidated Etags should not yield any incorrect data unless the client is broken.
As a web developer, I'm increasingly debugging issues only to find that our IT department are using our firewall to filter HTTP response headers.
They are using a whitelist of known headers, so certain newer technologies (CORS, websockets, etc) are automatically stripped until I debug the problem and request whitelisting.
The affected responses are from third-party services we are consuming; so if we have an internal site that uses Disqus, comments cannot be loaded because the response from Disqus is having its headers stripped. The resources we are serving are not affected, as only traffic coming into the office is filtered.
Are there genuine reasons to block certain response headers? Obviously there are concerns such as man-in-the-middle attacks, redirects to phishing sites, etc., but these require more than just an errant header to be successful.
What are the security reasons to maintain a whitelist of allowed HTTP response headers?
Fingerprinting could be the main reason to strip the response headers:
https://www.owasp.org/index.php/Fingerprint_Web_Server_%28OTG-INFO-002%29
It depends on the stack you're running. Most of the time, the information included in the response headers is configurable in each server, but that requires tampering with each serving application individually (and there may be cases where the software is proprietary and doesn't offer an option to set the HTTP headers).
Let's go with an easy example:
In our example datacenter, we're running a set of servers for different purposes, and we have configured them properly, so that they're returning no unnecessary metadata on the headers.
However, a new (imaginary) closed-source application for managing the print jobs is installed on one of the servers, and it offers a web interface that we want to use for whatever reason.
If this application returns an additional header such as (let's say) "x-printman-version" (and it might want to do that, to ensure compatibility with clients that use its API), it will be effectively exposing its version.
And, if this print job manager has known vulnerabilities for certain versions, an attacker just has to query it to know whether this particular install is vulnerable.
This might not seem so important, but it opens a window for automated/random attacks: scanning ports of interest and waiting for the right headers ("fingerprints") to appear, in order to launch an attack with certainty of success.
Therefore, stripping most of the additional HTTP headers (while setting policies and rules for those we want to keep) can sound sensible in an organisation.
With that being clear, stripping the headers from the responses to outgoing connections is overkill. It's true that they can constitute a vector, but since it's an outgoing connection, it means we "trust" the endpoint. There's no straightforward reason why an attacker with control over the trusted endpoint would use the metadata instead of the payload.
I hope this helps!
We're working on a website. Our client wants to check the website daily, but they're facing a problem: whenever we make a change on the website, they have to clear their browser cache.
So I added the following header to my server configuration
Cache-Control: no-cache
As far as I can see, Firefox is receiving this header and I'm pretty sure it is obeying it.
My question is: is "Cache-Control: no-cache" guaranteed to work across all browsers (including the various versions of IE)?
I find it's handy to use a "useless" version number in the requests. For example, instead of requesting script.js, request script.js?v=1.0
If you are generating your pages dynamically (PHP, etc) you can just keep the version number in a variable and only have to update it in one place whenever you update. If you want the content never to be cached, just use the output of time() as your version number.
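The answer describes this for PHP; purely as an illustration of the same idea for a Node/TypeScript-rendered page, a sketch might look like this (assetVersion and assetUrl are hypothetical names):
// Bump this on every deploy, or use Date.now() if the content should never be served from cache
const assetVersion = '1.0';

// Append the version as a "useless" query parameter so a new release yields a new URL
function assetUrl(path: string): string {
    return `${path}?v=${assetVersion}`;
}

// Usage when rendering the page:
const scriptTag = `<script src="${assetUrl('/script.js')}"></script>`;
console.log(scriptTag);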
EDIT: have you tried asking your client to change their browser caching settings? That way you can bypass the problem entirely.