We are using the code below in Varnish 4.x:
if (req.http.X-Pool) {
ban("obj.http.X-Pool ~ " + req.http.X-Pool);
}
Now we are moving to Fastly, which uses Varnish 2.x, and we can't work out what the alternative to ban is in Varnish 2.x.
For help with Fastly Varnish/VCL I recommend reading through both:
https://developer.fastly.com/
https://www.integralist.co.uk/posts/fastly-varnish/
I would also generally recommend reaching out to support@fastly.com (they're a good group of people).
With regards to your question, I'm unfamiliar with bans in standard Varnish, but reading through https://varnish-cache.org/docs/trunk/users-guide/purging.html#bans it suggests that a ban is a way to prevent cached content from being served.
So the solution depends on what you're trying to achieve when that happens.
If you just want to avoid the cache, you can return a pass from the various subroutines such as vcl_recv and vcl_hit (although returning pass from vcl_hit causes a hit-for-pass, which sends the request straight to the backend).
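For example, a minimal sketch of the pass approach (using the same <some_condition> placeholder as the examples below):
sub vcl_recv {
  #FASTLY recv
  # Skip the cache entirely for requests matching your condition.
  if (<some_condition>) {
    return(pass);
  }
}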
You could also add custom logic to either vcl_recv or maybe even vcl_hit (if you want to be sure the requested content was actually cached), and from there trigger an error that sends you to vcl_error, where you can construct a synthetic response:
sub vcl_hit {
#FASTLY hit
if (<some_condition>) {
error 700;
}
}
sub vcl_error {
#FASTLY error
if (obj.status == 700) {
set obj.status = 404;
synthetic {"
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<h1>404 Not Found (varnish)</h1>
</body>
</html>
"};
return(deliver);
}
}
Alternatively, from either vcl_recv or vcl_hit you might want to restart the request, then check for the restart and do something different (change the backend or modify the request in some way):
sub vcl_recv {
#FASTLY recv
if (req.restarts > 0) {
if (<some_condition>) {
// do something
}
}
}
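And for completeness, a rough sketch of how the restart itself might be triggered; I believe Fastly exposes this as a restart statement, available from vcl_hit among other subroutines, and the condition here is again just a placeholder:
sub vcl_hit {
  #FASTLY hit
  # Placeholder condition; after the restart, vcl_recv runs again
  # with req.restarts incremented.
  if (<some_condition>) {
    restart;
  }
}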
I'm very new to Varnish and I've recently been handed a project: HTTP caching for a local magazine website (the tech stack is JavaScript + PHP). I'm trying to use Varnish 4 to cache the site. The requirements are: any new article should appear on the front end immediately; any deleted article should disappear from the front end immediately; any change to the site's current appearance should be applied immediately (articles can be dragged anywhere on the site as their popularity changes); and any change to an existing article should show up on the site immediately.
As you can see in the config below, in the vcl_recv block I tried to use return(purge) for POST requests, because new articles and article changes are submitted via POST requests. But it doesn't work at all: when I create a new dummy article or change an existing one, the cache isn't purged and the fresh content isn't shown, even though the POST request succeeds. On the backend side I also tried if (beresp.status == 404) for deleted articles, but that doesn't work either; when I delete the dummy article I created, I still see the stale content.
How should I change my config to get all of this done? Thank you.
My Varnish config is:
import directors;
import std;
backend server1 {
.host = "<some ip>";
.port = "<some port>";
}
sub vcl_init {
new bar = directors.round_robin();
bar.add_backend(server1);
}
sub vcl_recv {
set req.backend_hint = bar.backend();
if (req.http.Cookie == "") {
unset req.http.Cookie;
}
set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)") {
unset req.http.cookie;
}
if (req.url ~ "\.*") {
unset req.http.cookie;
}
if (req.method == "POST") {
return(purge);
}
}
sub vcl_deliver {
# A bit of debugging info.
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
}
else {
set resp.http.X-Cache = "MISS";
}
}
sub vcl_backend_response {
set beresp.grace = 1h;
set beresp.ttl = 120s;
if (bereq.url ~ "\.*") {
unset beresp.http.Set-Cookie;
unset beresp.http.Cache-Control;
}
if (bereq.method == "POST") {
return(abandon);
}
if (beresp.status == 404) {
return(abandon);
}
return (deliver);
}
There's no need to use a director if you only have one backend: Varnish will automatically select the single backend you declared.
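A minimal sketch of that simplification, keeping only your backend declaration and dropping the director plumbing:
# import directors, vcl_init, and the req.backend_hint assignment can all go:
# with a single backend declared, Varnish selects it automatically.
backend server1 {
    .host = "<some ip>";
    .port = "<some port>";
}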
Purging content
The POST purge call you're doing is not ideal. Please have a look at the following page to learn more about content invalidation in Varnish: https://varnish-cache.org/docs/6.0/users-guide/purging.html#http-purging
The snippet on that page contains an ACL to protect your platform from unauthorized purges.
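Roughly, the pattern from that page looks like this (the ACL entries are placeholders you'd replace with your own CMS/app-server addresses):
acl purge {
    "localhost";
    "192.168.0.0"/24;    # placeholder: the hosts allowed to purge
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed."));
        }
        # Remove the matching object from the cache.
        return (purge);
    }
}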
It's important to know that you'll need to create a hook in your CMS or MVC controller that performs the purge call. Here's a simple example using cURL in PHP:
$curl = curl_init("http://your.varnish.cache/url-to-purge");
curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "PURGE");
curl_exec($curl);
As you can see, this is an HTTP request made with cURL that uses the custom PURGE request method. The call needs to be executed in your code right after the changes are stored in the database. This post-publishing hook ensures that Varnish clears the specific object from the cache.
VCL cleanup
The statement below doesn't look like a reliable way to remove cookies: the pattern "\.*" matches zero or more dots, so it matches every URL and cookies end up being removed from all pages:
if (req.url ~ "\.*") {
unset req.http.cookie;
}
The same applies to the following statement coming from the vcl_backend_response hook:
if (bereq.url ~ "\.*") {
unset beresp.http.Set-Cookie;
unset beresp.http.Cache-Control;
}
I assume some pages do actually need cookies to properly function. An admin panel for example, or the CMS, or maybe even a header that indicates whether or not you're logged in.
The best way forward is to define a blacklist or whitelist of URL patterns that can or cannot be cached.
Here's an example:
if (req.url !~ "^/(admin|user)") {
unset req.http.Cookie;
}
The example above will only keep cookies for pages that start with /admin or /user. There are other ways as well.
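As a rough sketch, the same idea applied to the vcl_backend_response statement quoted earlier (again assuming /admin and /user are the only paths that need cookies):
sub vcl_backend_response {
    # Only strip cookies and caching headers outside the admin/user area.
    if (bereq.url !~ "^/(admin|user)") {
        unset beresp.http.Set-Cookie;
        unset beresp.http.Cache-Control;
    }
}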
Conclusion
I hope the purging part is clear. If not, please take a closer look at https://varnish-cache.org/docs/6.0/users-guide/purging.html#http-purging.
In regards to the VCL cleanup: purging can only work if the right things are stored in cache. Dealing with cookies can be tricky in Varnish.
Just try to define under what circumstances cookies should be kept for specific pages. Otherwise, you can just remove the cookies.
Hope that helps. Good luck.
Thijs
(Varnish 2.1.5)
I've got a strange situation with my Varnish. I'm trying to invalidate cached objects through PURGE requests initiated from Node.js.
My test consists of requesting the object, letting it cache, doing a PURGE request, requesting it again (which results in a fetch), and then requesting it once more, which results in a hit on the refreshed cache object.
When I run this through the Firefox debug console, it works fine; all steps behave as expected. When I run the entire process from Node.js, it also works just fine. However, when I let the object get cached via Firefox and then try to invalidate it from Node.js, Varnish reports a 404 "Not in cache."
I'm 100% sure I'm using the same URI, and I have no idea why it acts this way. Has anyone else experienced this problem? If so, what is the solution?
This is my VCL:
backend default {
.host = "127.0.0.1";
.port = "80";
}
acl purge {
"localhost";
"*loadbalancer-ip*";
}
sub vcl_recv {
if (req.request == "PURGE") {
if(!client.ip ~ purge) {
error 405 "Not allowed.";
}
return (lookup);
} else if (req.url ~ "(?i)\.(jpeg|jpg|png|gif|ico|js|css|xml)$") {
unset req.http.Cookie;
return (lookup);
} else {
return (pass);
}
}
sub vcl_hit {
if (req.request == "PURGE") {
set obj.ttl = 0s;
error 200 "Purged";
}
}
sub vcl_miss {
if (req.request == "PURGE") {
error 404 "Not in cache.";
}
}
sub vcl_fetch {
if (req.url ~ "(?i)\.(jpeg|jpg|png|gif|ico|js|css|xml)$") {
unset beresp.http.set-cookie;
return (deliver);
}
}
sub vcl_deliver {
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
} else {
set resp.http.X-Cache = "MISS";
}
}
As you can see, my configuration is pretty straightforward. This configuration is for testing purposes; I know using the load balancer IP is not safe, and I will change it to use the X-Forwarded-For IP once everything works.
With a little help from this thread:
What is the function of the "Vary: Accept" HTTP header?
I learned that Varnish takes the Vary header into account when deciding whether to cache an object and which cached variant to return.
In my case the Vary header contained User-Agent, which is why I was getting different results depending on which client made the request.
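For anyone hitting the same issue, one possible workaround (a sketch, not what I actually deployed, and only sensible if the backend doesn't genuinely need per-User-Agent responses) is to stop varying on User-Agent at fetch time:
sub vcl_fetch {
    # Drop User-Agent from Vary so all clients share (and can purge) one variant.
    if (beresp.http.Vary) {
        set beresp.http.Vary = regsub(beresp.http.Vary, "User-Agent,? *", "");
        set beresp.http.Vary = regsub(beresp.http.Vary, ", *$", "");
        if (beresp.http.Vary == "") {
            unset beresp.http.Vary;
        }
    }
}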
Right, I'm going to be honest: I don't know Varnish VCL. I can work out some basic stuff, but I don't know it very well, which is obviously why I'm having issues.
I'm trying to set up cache banning via an HTTP request. The request can't come in via DNS; it has to go to the IP address of each Varnish box directly, because we have several Varnish boxes behind an ELB, so you can't guarantee that a ban request won't hit the same box twice. Going box by box via IP is the only way to be sure every Varnish cache has the target flushed.
I'm using this to ensure that only the allowed IPs can issue a ban, but it isn't working:
sub vcl_hit {
if (req.request == "BAN") {
ban("req.url ==" + req.url);
error 200 "Purged";
}
}
I don't really know what to do to get this working. I've looked around, but most of the tutorials I've found seem to cover banning full URLs rather than an IP plus a pattern to purge.
From your config example I expect you are using Varnish 3.
You can add a list of IPs that are allowed to issue the ban as follows:
acl ban_allowed_ip {
"127.0.0.1";
"127.0.0.2";
}
Inside your if (req.request == "BAN") block, add the following:
if (!client.ip ~ ban_allowed_ip) {
error 405 "Not allowed.";
}
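Put together, a rough sketch of how the pieces could look (the IPs are placeholders, and this assumes the BAN handling lives in vcl_recv so it runs whether or not the object is currently in cache):
acl ban_allowed_ip {
    "127.0.0.1";
    "127.0.0.2";
}

sub vcl_recv {
    if (req.request == "BAN") {
        if (!client.ip ~ ban_allowed_ip) {
            error 405 "Not allowed.";
        }
        # Ban every cached object whose URL matches the requested URL.
        ban("req.url == " + req.url);
        error 200 "Ban added";
    }
}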
The answer is to use:
if (req.request == "BAN") {
if (req.http.X-Debug != "True") {
error 405 "Not allowed.";
}
ban("obj.http.x-url ~ " + req.url);
error 200 "ban added";
}
Whilst this will return 200 regardless of whether the item exists in the cache or not, it does add the ban.
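One caveat, as an assumption about the rest of the config (it isn't shown in the snippet above): a ban on obj.http.x-url can only match objects that actually carry an x-url header, so it has to be stored at fetch time, roughly like this:
sub vcl_fetch {
    # Store the request URL on the cached object so bans on
    # obj.http.x-url can match it later.
    set beresp.http.x-url = req.url;
}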
Is there a way to tell, at a point in the Varnish subroutines where you have write access to the response object, whether the request was passed to the backend directly or first went through a cache lookup? Right now I'm fiddling around with application logic that receives a header and sends it back so that Varnish can read it, but I would prefer the Varnish behaviour to be a bit more application-independent.
What I'm looking for would be something like the code below, though the subroutine where I // Do Some Stuff doesn't have to be vcl_deliver.
sub vcl_recv {
if( req.url ~ "^/something/ignored.*$" ) {
return ( pass );
}
else {
unset req.http.Cookie;
return( hash );
}
}
sub vcl_deliver {
if( resp.lookup == 1 ) {
//Do Some Stuff
}
}
Yes, there are multiple ways. You can hook into vcl_hash{} and set a custom header on req there; you can do the same in vcl_pass{}; you can do it one step earlier in vcl_recv{}, or one step later in vcl_hit{} and vcl_miss{} (note that a hit-for-pass also goes through vcl_pass{}).
Look up the request processing states to get a clearer picture.
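As a rough sketch of the marker-header idea (X-Lookup is just an illustrative name):
sub vcl_hit {
    set req.http.X-Lookup = "hit";
}

sub vcl_miss {
    set req.http.X-Lookup = "miss";
}

sub vcl_pass {
    # Reached both for plain passes and for hit-for-pass objects.
    set req.http.X-Lookup = "pass";
}

sub vcl_deliver {
    if (req.http.X-Lookup) {
        # Do Some Stuff, e.g. expose the marker for debugging.
        set resp.http.X-Lookup = req.http.X-Lookup;
    }
}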
In more recent varnish-cache versions, bereq.uncacheable gives you the answer.
Can I configure Varnish in such a way that it shows the original error page from the backend when the backend throws a 500 error?
It's the default behaviour. I do have some if (beresp.status == 500) checks in my VCL, though :s
I assume you want to show the original 500 error only in some environments, like development.
If so, then you can assign Varnish an identity:
$ varnishd -i development
And then check that identity in your VCL:
sub vcl_fetch {
if (server.identity ~ "^development") {
return (deliver);
}
if (beresp.status == 500) {
# ...
}
}