I'm very new to Varnish and I was recently handed a project: HTTP caching for a local magazine website (the tech stack is JavaScript + PHP). I'm trying to use Varnish 4 to cache the site. What they want from me is: any new article should appear on the front end immediately, any deleted article should disappear from the front end immediately, any change to the website's current appearance should be applied right away (articles can be dragged to different positions on the page as their popularity changes), and finally any change to an existing article should show up on the website immediately.
As you can see in the config below, in the vcl_recv block I tried to use return(purge) for POST requests, because new articles and article changes are applied via POST requests. But it doesn't work at all: when I create a new dummy article or change an existing one, the cache isn't purged and the fresh content isn't shown, even though the POST request succeeds. On the backend side I also tried if (beresp.status == 404) for deleted articles, but that doesn't work either; when I delete the dummy article I created, it isn't removed and I still see the stale content. How should I change my config to get all of this done? Thank you.
My Varnish config is:
import directors;
import std;

backend server1 {
    .host = "<some ip>";
    .port = "<some port>";
}

sub vcl_init {
    new bar = directors.round_robin();
    bar.add_backend(server1);
}

sub vcl_recv {
    set req.backend_hint = bar.backend();

    if (req.http.Cookie == "") {
        unset req.http.Cookie;
    }

    set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");

    if (req.url ~ "\.(css|js|png|gif|jp(e)?g|swf|ico)") {
        unset req.http.cookie;
    }

    if (req.url ~ "\.*") {
        unset req.http.cookie;
    }

    if (req.method == "POST") {
        return(purge);
    }
}

sub vcl_deliver {
    # A bit of debugging info.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}

sub vcl_backend_response {
    set beresp.grace = 1h;
    set beresp.ttl = 120s;

    if (bereq.url ~ "\.*") {
        unset beresp.http.Set-Cookie;
        unset beresp.http.Cache-Control;
    }

    if (bereq.method == "POST") {
        return(abandon);
    }

    if (beresp.status == 404) {
        return(abandon);
    }

    return (deliver);
}
No need to use a director if you only have one backend: when only a single backend is declared, Varnish selects it automatically.
Purging content
The POST purge call you're doing is not ideal. Please have a look at the following page to learn more about content invalidation in Varnish: https://varnish-cache.org/docs/6.0/users-guide/purging.html#http-purging
The snippet on that page contains an ACL to protect your platform from unauthorized purges.
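For reference, the setup on that page boils down to roughly the following (a sketch; the ACL entries are placeholders you should replace with the addresses of the machines that are allowed to purge):

acl purge {
    # Only these clients may send PURGE requests (adjust to your environment)
    "localhost";
    "192.168.55.0/24";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            # Reject purges from unauthorized clients
            return (synth(405, "Not allowed."));
        }
        # Drop the cached object(s) for this host + URL
        return (purge);
    }
}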
It's important to know that you'll need to create a hook in your CMS or your MVC controller that performs the purge call. Here's a simple example using cURL in PHP:
// Purge the URL of the article that was just created, updated or deleted
$curl = curl_init("http://your.varnish.cache/url-to-purge");
curl_setopt($curl, CURLOPT_CUSTOMREQUEST, "PURGE");
curl_exec($curl);
curl_close($curl);
As you can see, this is an HTTP request done in cURL that uses the custom PURGE HTTP request method. This call needs to be executed in your code right after the changes are stored in the database. This post-publishing hook will ensure that Varnish clears this specific object from the cache.
VCL cleanup
The statement below doesn't look like a reliable way to remove cookies: the expression \.* means "zero or more dots" and isn't anchored to anything, so it matches every URL and strips cookies from all pages:
if (req.url ~ "\.*") {
    unset req.http.cookie;
}
The same applies to the following statement coming from the vcl_backend_response hook:
if (bereq.url ~ "\.*") {
    unset beresp.http.Set-Cookie;
    unset beresp.http.Cache-Control;
}
I assume some pages do actually need cookies to properly function. An admin panel for example, or the CMS, or maybe even a header that indicates whether or not you're logged in.
The best way forward is to define a blacklist or whitelist of URL patterns that can or cannot be cached.
Here's an example:
if (req.url !~ "^/(admin|user)") {
    unset req.http.Cookie;
}
The example above will only keep cookies for pages that start with /admin or /user. There are other ways as well.
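If you go that route, a matching rule on the backend side can keep stray Set-Cookie headers from making those pages uncacheable. This is just a sketch that reuses the same example prefixes, /admin and /user:

sub vcl_backend_response {
    if (bereq.url !~ "^/(admin|user)") {
        # Strip Set-Cookie so these pages remain cacheable
        unset beresp.http.Set-Cookie;
    }
}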
Conclusion
I hope the purging part is clear. If not, please take a closer look at https://varnish-cache.org/docs/6.0/users-guide/purging.html#http-purging.
In regards to the VCL cleanup: purging can only work if the right things are stored in cache. Dealing with cookies can be tricky in Varnish.
Just try to define under what circumstances cookies should be kept for specific pages. Otherwise, you can just remove the cookies.
Hope that helps. Good luck.
Thijs
Related
This is what my varnish.vcl looks like.
vcl 4.0;

import directors;
import std;

backend client {
    .host = "service1";
    .port = "80";
}

sub vcl_recv {
    std.log("varnish log info:" + req.http.host);

    # caching pages in client
    set req.backend_hint = client;

    # If the request is for content or pages, remove cookies and cache
    if ((req.url ~ "/content/") || (req.url ~ "/cms/api/") || req.url ~ "\.(png|gif|jpg|jpeg|json|ico)$" || (req.url ~ "/_nuxt/")) {
        unset req.http.Cookie;
        std.log("Cachable request");
    }
    # If the request doesn't match the above, do not cache and pass to the backend.
    else {
        std.log("Non cachable request");
        return (pass);
    }
}

sub vcl_backend_response {
    if ((bereq.url ~ "/content/") || (bereq.url ~ "/cms/api/") || bereq.url ~ "\.(png|gif|jpg|jpeg|json|ico)$" || (bereq.url ~ "/_nuxt/")) {
        unset beresp.http.set-cookie;
        set beresp.http.cache-control = "public, max-age=259200";
        set beresp.ttl = 12h;
        return (deliver);
    }
}

# Add some debug info headers when delivering the content:
# X-Cache: if content was served from Varnish or not
# X-Cache-Hits: Number of times the cached page was served
sub vcl_deliver {
    # Was it a HIT or a MISS?
    if (obj.hits > 0) {
        set resp.http.X-Cache-Varnish = "HIT";
    } else {
        set resp.http.X-Cache-Varnish = "MISS";
    }
    # And add the number of hits in the header:
    set resp.http.X-Cache-Hits = obj.hits;
}
If I hit a page from the same browser network tab, it shows:
X-Cache-Varnish = "HIT";
X-Cache-Hits = ;
Let's say I hit it from Chrome 10 times; this is what I get:
X-Cache-Varnish = "HIT";
X-Cache-Hits = 9;
9 because the first request was a miss and the remaining 9 were served from cache.
If I try an incognito window or a different browser, it gets its own count starting from 0. I think cookies are somehow still being cached, but I could not identify what I am missing.
Ideally, I want to delete all cookies for specific paths, but somehow unset does not seem to be working for me.
If you really want to make sure these requests are cached, do a return(hash); at the end of your if-statement.
If you don't return, the built-in VCL will take over, and continue executing its standard behavior.
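Applied to your existing condition, that would look roughly like this (a sketch of only the relevant part of vcl_recv):

sub vcl_recv {
    if (req.url ~ "/content/" || req.url ~ "/cms/api/" ||
        req.url ~ "\.(png|gif|jpg|jpeg|json|ico)$" || req.url ~ "/_nuxt/") {
        unset req.http.Cookie;
        std.log("Cachable request");
        # Explicitly end vcl_recv here so the built-in VCL can't turn this into a pass
        return (hash);
    } else {
        std.log("Non cachable request");
        return (pass);
    }
}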
Apart from that, it's unclear whether or not your backend sets a Vary header which might affect your hit rate.
Instead of guessing, I suggest we use the logs to figure it out.
Run the following command to track your requests:
varnishlog -g request -q "ReqUrl ~ '^/content/'"
This statement's VSL Query expression assumes the URL starts with /content. Please adjust accordingly.
Please send me an extract of the varnishlog output for one specific URL, covering both situations:
The one that hits the cache on a regular browser tab
The one that results in a cache miss in incognito mode or from a different browser
The logs will give more context and explain what happened.
(Varnish 2.1.5)
I've got some strange situation in my Varnish. I'm trying to invalidate cache objects through PURGE requests initiated from NodeJS.
My testing consists of requesting the object, letting it cache, then do a purge request, then request it again (resulting in a fetch), and then request it again resulting in a hit of the refreshed cache object.
When I test this through the Firefox debug console, it works fine. All steps seem to work as expected. When I test the entire process in NodeJS, it works as expected, just fine. However, when I let the object cache through Firefox, and then try to invalidate it through NodeJS, it reports a 404 Not in cache.
I'm 100% sure I'm using the same URI, and I have no idea why it acts this way. Has anyone else experienced this problem? And if so, what is the solution?
This is my VCL:
backend default {
    .host = "127.0.0.1";
    .port = "80";
}

acl purge {
    "localhost";
    "*loadbalancer-ip*";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    } else if (req.url ~ "(?i)\.(jpeg|jpg|png|gif|ico|js|css|xml)$") {
        unset req.http.Cookie;
        return (lookup);
    } else {
        return (pass);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        set obj.ttl = 0s;
        error 200 "Purged";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        error 404 "Not in cache.";
    }
}

sub vcl_fetch {
    if (req.url ~ "(?i)\.(jpeg|jpg|png|gif|ico|js|css|xml)$") {
        unset beresp.http.set-cookie;
        return (deliver);
    }
}

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
As you can see, my configuration is pretty straightforward. This configuration is for testing purposes; I know using the load balancer IP is not safe, and I will change it to use the X-Forwarded-For IP once everything works.
With a little help from this thread:
What is the function of the "Vary: Accept" HTTP header?
I came to know that Varnish takes the Vary header into account when deciding whether to cache a response and which variant to serve from the cache.
In my case, the Vary header contained User-Agent, which is why I was getting different results depending on how I made the request.
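A common workaround, sketched below under the assumption that you don't actually need separate cached variants per browser, is to normalize the User-Agent header at the start of vcl_recv. That way every request, including the PURGE coming from NodeJS, maps to the same Vary variant:

sub vcl_recv {
    # Collapse all User-Agent values so "Vary: User-Agent" no longer
    # creates (and purges) a separate cache object per browser
    set req.http.User-Agent = "normalized";
}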
Is there a way to tell, at a point in the varnish subroutines when you have write access to the response object, whether the object was passed to the backend directly, or first sent through a cache lookup? Right now I am fiddling around with adding application logic to receive some header and then send it back, which can be read by varnish, but I would prefer to have the varnish behaviour be a bit more application independent.
What I'm looking for would be something like the below, though the method where I //DoSomeStuff doesn't have to be the deliver.
sub vcl_recv {
    if (req.url ~ "^/something/ignored.*$") {
        return (pass);
    } else {
        unset req.http.Cookie;
        return (hash);
    }
}

sub vcl_deliver {
    if (resp.lookup == 1) {
        // Do Some Stuff
    }
}
Yes, there are multiple ways. You can hook into vcl_hash{} and add a custom header to req there, you can do the same in vcl_pass{}, you can do it one step earlier in vcl_recv{}, or one step later in vcl_hit{} and vcl_miss{} (note that a hit-for-pass also calls vcl_pass{}).
Look up the processing states to get a clearer picture.
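A minimal sketch of the custom-header approach; the header name X-Cache-Path is just an example:

sub vcl_recv {
    if (req.url ~ "^/something/ignored.*$") {
        set req.http.X-Cache-Path = "pass";
        return (pass);
    } else {
        unset req.http.Cookie;
        set req.http.X-Cache-Path = "lookup";
        return (hash);
    }
}

sub vcl_deliver {
    if (req.http.X-Cache-Path == "lookup") {
        # This response went through a cache lookup; do your stuff here,
        # for example expose it as a debug header
        set resp.http.X-From-Cache-Lookup = "true";
    }
}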
In more recent varnish-cache versions, bereq.uncacheable gives you the answer.
How can we get the time an object has been in the cache in Varnish?
My requirement is something like this: if an object has been in the cache for 5 minutes and the request comes from a specified IP, I want to serve the content from the backend rather than from the cache.
You can set up your VCL so it will always miss when certain headers are set or when the request comes from certain clients. In your vcl_recv, set:
sub vcl_recv {
    # "editors" is assumed to be an ACL you define elsewhere with the allowed client IPs
    if (req.http.Cache-Control ~ "no-cache" && client.ip ~ editors) {
        set req.hash_always_miss = true;
    }
}
https://www.varnish-cache.org/trac/wiki/VCLExampleEnableForceRefresh
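For testing, such a forced refresh can be triggered from the command line with curl (the URL is just a placeholder), provided the requesting IP is covered by the editors ACL:

curl -H "Cache-Control: no-cache" http://your.site/some/page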
I am using Varnish 3.0.3 and want to use it to leverage browser caching by setting a maximum age in the HTTP headers for static resources. I tried adding the following configuration to default.vcl:
sub vcl_fetch {
    if (beresp.cacheable) {
        /* Remove Expires from backend, it's not long enough */
        unset beresp.http.expires;
        /* Set the clients TTL on this object */
        set beresp.http.cache-control = "max-age=900";
        /* Set how long Varnish will keep it */
        set beresp.ttl = 1w;
        /* marker for vcl_deliver to reset Age: */
        set beresp.http.magicmarker = "1";
    }
}

sub vcl_deliver {
    if (resp.http.magicmarker) {
        /* Remove the magic marker */
        unset resp.http.magicmarker;
        /* By definition we have a fresh object */
        set resp.http.age = "0";
    }
}
This is copied from https://www.varnish-cache.org/trac/wiki/VCLExampleLongerCaching . Maybe I just made a typo, but after restarting Varnish it no longer worked.
I have two questions. Is this the correct way to do it for Varnish 3? If so, what am I doing wrong? Secondly, is there a way to test the Varnish configuration file before a restart? Something along the lines of what Apache has with "/sbin/service httpd configtest", which catches mistakes before going live. Thank you.
Yes, in general this is the way of overriding the backend's TTL.
Remove beresp.http.expires, set beresp.http.cache-control, set beresp.ttl.
beresp.cacheable is a 2.[01]-ism. The same test in 3.0 is to check that beresp.ttl > 0.
A small tip is to store your magic marker on req.http instead, then you don't have to clean it up before handing the object to the client.
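Putting both suggestions together, a Varnish 3 version of the snippet would look roughly like this (a sketch, not tested against your exact setup):

sub vcl_fetch {
    if (beresp.ttl > 0s) {
        /* Remove Expires from backend, it's not long enough */
        unset beresp.http.expires;
        /* Set the clients TTL on this object */
        set beresp.http.cache-control = "max-age=900";
        /* Set how long Varnish will keep it */
        set beresp.ttl = 1w;
        /* Marker for vcl_deliver; stored on req so it never reaches the client */
        set req.http.magicmarker = "1";
    }
}

sub vcl_deliver {
    if (req.http.magicmarker) {
        /* By definition we have a fresh object */
        set resp.http.age = "0";
    }
}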
With regards to testing a configuration file, you can call the VCL compiler directly, for example with "varnishd -C -f /etc/varnish/default.vcl". If your VCL is faulty you get an error message; if it is valid you get a few pages of generated C code.