How would I use client.ip as a conditional when setting headers in the fetch section of a Varnish 3.0 VCL? I have some troubleshooting headers that I like to set when debugging caching issues; however, I don't want them publicly visible. I'd love to be able to whitelist the headers for my IP address only.
Is there any way to access client.ip in vcl_fetch?
You can simply set all your troubleshooting headers unconditionally in vcl_fetch and remove them again in vcl_deliver. That way you don't need to repeat the same IP check for every header.
If you want to allow an IP range (or several addresses), you can use an ACL:
acl debug {
    "your.first.ip.address";
    "your.second.ip.address";
}
In your vcl_recv:
if (client.ip ~ debug) {
    set req.http.x-debug = "debug";
}
In your vcl_deliver:
if (!req.http.x-debug) {
    remove resp.http.debugheader1;
    remove resp.http.debugheader2;
}
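Putting it together for Varnish 3.0 (the X-Debug-* header names below are just examples, not anything Varnish requires), the troubleshooting headers are set unconditionally and then stripped in vcl_deliver for everyone who is not flagged by the ACL check above:
sub vcl_fetch {
    # Always set the troubleshooting header; it is stripped again
    # in vcl_deliver for clients that are not in the debug ACL.
    set beresp.http.X-Debug-Info = "fetched-from-backend";
}
sub vcl_deliver {
    # Another common troubleshooting header: cache hit/miss indication.
    if (obj.hits > 0) {
        set resp.http.X-Debug-Cache = "HIT";
    } else {
        set resp.http.X-Debug-Cache = "MISS";
    }
    if (!req.http.x-debug) {
        remove resp.http.X-Debug-Info;
        remove resp.http.X-Debug-Cache;
    }
}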
Is there any way to cache requests with auth headers in Varnish?
I want to ignore the auth headers when caching the request.
There are various ways to approach this, depending on the importance of auth headers.
1. You don't care about auth
If you don't care about the auth part and are willing to risk serving cached content to unauthorized users, you can just use the following VCL code:
sub vcl_recv {
    unset req.http.Authorization;
}
2. Ignore authorization to some extent
It is also possible to care about auth a bit, but not too much.
The following VCL snippet will allow caching even if there is an Authorization header:
sub vcl_recv {
    if (req.http.Authorization) {
        return (hash);
    }
}
The consequence of this is that the initial cache miss will pass through to the backend and be processed there; potential unauthorized access is handled by the backend.
But as soon as that initial miss has been dealt with, the object is stored in the cache and subsequent requests will receive cached content regardless of their authorization status.
3. Perform auth on the edge
It is also possible to handle the auth part in Varnish while caching the content.
The following VCL code will handle this:
sub vcl_recv {
    if (req.http.Authorization != "Basic YWRtaW46c2VjcmV0") {
        return (synth(401, "Restricted"));
    }
    unset req.http.Authorization;
}
sub vcl_synth {
    if (resp.status == 401) {
        set resp.http.WWW-Authenticate = {"Basic realm="Restricted area""};
    }
}
This code will actively inspect the content of the Authorization header and will ensure the username admin is used with password secret.
The YWRtaW46c2VjcmV0 string is nothing more than a base64 encoding of admin:secret.
4. Use vmod_basicauth
A more advanced and flexible way to terminate auth on the edge is to use vmod_basicauth (https://git.gnu.org.ua/vmod-basicauth.git/). This VMOD is compiled from source, which can be downloaded from ftp://download.gnu.org.ua/release/vmod-basicauth.
Assuming the credentials are stored in /var/www/.htpasswd, you can leverage this VMOD to match the Authorization header to the content of the .htpasswd file.
Here's the VCL:
vcl 4.1;
import basicauth;
sub vcl_recv {
    if (!basicauth.match("/var/www/.htpasswd", req.http.Authorization)) {
        return (synth(401, "Restricted"));
    }
    unset req.http.Authorization;
}
sub vcl_synth {
    if (resp.status == 401) {
        set resp.http.WWW-Authenticate = {"Basic realm="Restricted area""};
    }
}
Caching requests that carry auth headers is entirely possible, but also extremely dangerous: Varnish would return the same cached (authorized) content to all requests.
Example:
User A requests resource Z with proper authentication. Varnish relays the request to the backend, caches the response and returns the resource.
User B requests resource Z with proper authentication. They will get the cached resource Z even if Z contains user A's content.
User X requests resource Z with invalid authentication. They too will get the cached resource anyway, since the backend is bypassed.
Having said that, you can override Varnish's built-in VCL. The details are documented, but the main idea is:
Copy the default vcl_recv logic (for your Varnish version) from the built-in VCL source and append it to your own vcl_recv.
Remove the safeguard from that copy: just drop the vcl_req_authorization part, which is what disables caching:
sub vcl_req_authorization {
    if (req.http.Authorization) {
        # Not cacheable by default.
        return (pass);
    }
}
Finally, end your vcl_recv with an explicit return statement (for example return (hash);) so the built-in VCL is not executed afterwards.
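A condensed sketch of what that can look like (this only approximates the built-in request handling of recent Varnish versions; copy the exact built-in VCL shipped with your release rather than relying on this shortened version):
sub vcl_recv {
    # Approximation of the built-in checks, minus the Authorization
    # safeguard. Compare with the built-in VCL for your version.
    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "DELETE" &&
        req.method != "PATCH") {
        # Non-standard request methods are piped straight through.
        return (pipe);
    }
    if (req.method != "GET" && req.method != "HEAD") {
        # Only GET and HEAD are cacheable by default.
        return (pass);
    }
    if (req.http.Cookie) {
        # The built-in also refuses to cache requests with cookies;
        # keep or remove this check depending on your setup.
        return (pass);
    }
    # Deliberately no "if (req.http.Authorization) { return (pass); }" here.
    # Explicit return so the built-in vcl_recv is not executed afterwards.
    return (hash);
}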
We're currently using the s-maxage directive in the Cache-Control header from our origin to control the TTL in Varnish. However, I'd like to remove it from the response before delivery, so that no other caches in the request chain act on it.
I'm currently looking at the header VMOD, to remove s-maxage from the header, but leave the rest of it intact. I believe this could be achieved with something like this:
sub vcl_deliver {
header.regsub(resp, "s-maxage=[0-9]+,?\s?", "")
}
As a newcomer to Varnish, I wanted to sanity-check this approach and make sure there isn't a better way to tackle it?
Appreciate any support or advice.
Replace header at delivery time
The following VCL snippet will strip off the s-maxage attribute from the Cache-Control header before it is sent to the client.
sub vcl_deliver {
    set resp.http.cache-control = regsub(resp.http.cache-control,
        "(,\s*s-maxage=[0-9]+\s*$)|(\s*s-maxage=[0-9]+\s*,)", "");
}
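For example, a backend response with Cache-Control: public, max-age=60, s-maxage=300 would be delivered to the client as Cache-Control: public, max-age=60.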
Replace header at storage time
It is also possible to strip off this attribute from the Cache-Control header before it gets stored into a cache object. In that case, you'll use the beresp.http.cache-control variable inside vcl_backend_response.
sub vcl_backend_response {
    set beresp.http.cache-control = regsub(beresp.http.cache-control,
        "(,\s*s-maxage=[0-9]+\s*$)|(\s*s-maxage=[0-9]+\s*,)", "");
}
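Note that stripping s-maxage in vcl_backend_response does not change how long Varnish caches the object: beresp.ttl is already initialized from the original Cache-Control header (including s-maxage) before vcl_backend_response runs, so the origin still controls the TTL as before.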
Using vmod_headerplus
If you're using Varnish Enterprise, you can use the vmod_headerplus module to easily delete header attributes:
vcl 4.1;
import headerplus;
sub vcl_deliver {
    headerplus.init(resp);
    headerplus.attr_delete("Cache-Control", "s-maxage", ",");
    headerplus.write();
}
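The same attribute delete can also be applied on the backend side, before the object is stored in the cache: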
vcl 4.1;
import headerplus;
sub vcl_backend_response {
    headerplus.init(beresp);
    headerplus.attr_delete("Cache-Control", "s-maxage", ",");
    headerplus.write();
}
Although Varnish Enterprise is the commercial version of Varnish Cache, you can still use it without upfront license payments if you use it on AWS, Azure or GCP.
Varnish Enterprise on AWS
Varnish Enterprise on Azure
Varnish Enterprise on GCP
It seems that purging doesn't work when using Cloudflare (Cloudflare returns 403 Forbidden).
This is what I found when I searched for a solution online:
"The problem is that when you are using Cloudflare, Varnish does not get the original IP of the sender. Instead it gets the IP of Cloudflare, so purging cannot be done. We need to tell Varnish the original IP of the sender."
Add the following lines inside vcl_recv:
if (req.restarts == 0) {
    if (req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
}
if (req.method == "PURGE" || req.url == "/purge") {
    # Replace these IPs with your own
    if (req.http.X-Forwarded-For !~ "(209\.152\.41\.21|105\.45\.120\.37)") {
        return (synth(405, "This IP is not allowed to send PURGE requests."));
    }
    ban("req.url ~ /");
    return (purge);
}
I tried this solution but it didn't work.
This question is old but could still use an answer. :-) The quote you found is partially correct. Here's why:
Your setup is likely similar to the below:
------      -----------      --------------        --------
| WP |  <-  | Varnish |  <-  | CloudFlare |  <---  | User |
------      -----------      --------------        --------
There are two ways that the purge can happen:
User -> CloudFlare -> Varnish, or
User -> CloudFlare -> Varnish -> WP -> WP plugin -> Varnish.
The second situation can successfully cause a purge. If you have a plugin which triggers a cache purge/invalidate, it will come from a predictable IP address. In fact, if you run Varnish on the same server as WP, that IP address will be 127.0.0.1. There's a nice implementation for this situation.
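A minimal sketch of that case, assuming Varnish and WordPress run on the same machine so the plugin's purge requests arrive from localhost:
acl local_purgers {
    "127.0.0.1";
    "::1";
}
sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ local_purgers) {
            return (synth(405, "Not allowed."));
        }
        return (purge);
    }
}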
The problem with the first situation (purging directly from CloudFlare) is that CloudFlare has many IP addresses which you would have to keep up to date. But more importantly, there's nothing that would prevent a bad actor from also creating a service using CloudFlare which would also be allowed to send a purge request to your server, basically rendering this security worthless.
Since X-Forwarded-For is basically just a list of all of the previous IP addresses along the way, this would also be easy to fake and bypass.
Since filtering on IP address is not useful for traffic that comes directly through CloudFlare, you could alternatively use a token/secret that you send as part of the purge request, i.e. only allow a special URL like:
if (req.url == "/purge-719179c7-6226-4b87-9503-1b6d54d5fea5") { ...
(with some other GUID, of course). One could argue that this is still not secure, but it is arguably better than allowing all CloudFlare IPs.
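A sketch of that token approach, reusing the GUID above purely as a placeholder secret:
sub vcl_recv {
    # The GUID is a placeholder; generate your own secret value.
    if (req.url == "/purge-719179c7-6226-4b87-9503-1b6d54d5fea5") {
        # Invalidate everything; narrow the ban expression as needed.
        ban("req.url ~ /");
        return (synth(200, "Purged."));
    }
}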
I'm trying to get CORS working on my trigger.io app:
I've got the following setup in my .htaccess
Header set Access-Control-Allow-Headers: "Accept,Origin,Content-Type,X-Requested-With"
Header set Access-Control-Allow-Methods "GET,PUT,POST,DELETE,OPTIONS"
Header set Access-Control-Allow-Credentials: "true"
Header set Access-Control-Allow-Origin "http://localhost:3000,content://io.trigger.forge99d5a0b8621e11e28cc2123139286d0c"
Running the Trigger.io app on the web (localhost:3000) works fine.
But when I deploy it to an (Android) device, I see the following error in the debug output:
[ERROR] XMLHttpRequest cannot load http://mydevtest.lan/api/auth/currentuser. Origin content://io.trigger.forge99d5a0b8621e11e28cc2123139286d0c is not allowed by Access-Control-Allow-Origin. -- From line 1 of null
I fear that using a content:// origin in the Access-Control-Allow-Origin header is not legal.
The Access-Control-Allow-Origin header as you have it is invalid. Valid values are either '*', or a space separated list of origins. One of the following should work:
Header set Access-Control-Allow-Origin "*"
or
Header set Access-Control-Allow-Origin "http://localhost:3000 content://io.trigger.forge99d5a0b8621e11e28cc2123139286d0c"
Note that I've never tested the latter form (with multiple origins). While the CORS spec allows it, browsers may not yet support it.
One other thing you could do is read in the value of the Origin header, validate it on your server (i.e. manually check that the value equals either "http://localhost:3000" or "content://io.trigger.forge99d5a0b8621e11e28cc2123139286d0c"), and then echo only that value in the Access-Control-Allow-Origin response header. However this requires a little more work since it introduces some server-side conditional processing.
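For example, with Apache this can be done declaratively (an untested sketch; it assumes mod_setenvif and mod_headers are enabled and uses the two origins from the question):
SetEnvIf Origin "^(http://localhost:3000|content://io\.trigger\.forge99d5a0b8621e11e28cc2123139286d0c)$" CORS_ORIGIN=$1
Header set Access-Control-Allow-Origin "%{CORS_ORIGIN}e" env=CORS_ORIGIN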
I also fear that content:// is not allowed in CORS. Could you try setting Access-Control-Allow-Origin to *? If that works, then that is probably the problem.
A better solution would be to avoid XHR requests altogether and use forge.request.ajax, which makes the request from native code and avoids any cross-domain restrictions. You can find the documentation for that here: http://docs.trigger.io/en/v1.4/modules/request.html#modules-request
Is it possible to let clients with certain IPs pass through to the backend and not be cached by Varnish? I don't see this in any of the example configs.
I think a better way is described here: https://www.varnish-cache.org/lists/pipermail/varnish-misc/2011-October/021278.html
If you have a list of IPs, you should create an ACL:
acl passem {
    "192.168.55.0"/24;
}
and then, in vcl_recv, you should add:
if (client.ip ~ passem) {
    return (pass);
}
I received this answer from the mailing list.
Yes, you can:
if (client.ip == IP) {
    return (pass);
}