Combining headers in Varnish - varnish

I'm running multiple Varnish cache servers layered in front of each other. I want to "combine" the headers of each of them, so that when I make a request to my website I can see which cache server produced a hit. Right now, both cache servers have this code:
sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
On my second cache server, I'd like to have something like this:
sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT, " + responsefromfirst;
    } else {
        set resp.http.X-Cache = "MISS, " + responsefromfirst;
    }
}
With responsefromfirst being the "X-Cache" header from the previous cache. How can I do this?

How about:
sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT, " + resp.http.X-Cache;
    } else {
        set resp.http.X-Cache = "MISS, " + resp.http.X-Cache;
    }
}
You really just want to prepend information to the header that is already there.
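If the upstream layer might not have set X-Cache at all (for example when a request reaches the second Varnish directly, bypassing the first), the concatenation above would produce a trailing ", ". A sketch that guards against the empty case (the X-Cache-Local temporary header is just illustrative):

```vcl
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache-Local = "HIT";
    } else {
        set resp.http.X-Cache-Local = "MISS";
    }
    # Only append the upstream verdict when the first cache actually set one
    if (resp.http.X-Cache) {
        set resp.http.X-Cache = resp.http.X-Cache-Local + ", " + resp.http.X-Cache;
    } else {
        set resp.http.X-Cache = resp.http.X-Cache-Local;
    }
    unset resp.http.X-Cache-Local;
}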

Related

In Angular, how to run a loop that hits the back end one by one

From Angular, when I send certain parameters to my Node.js backend, it generates a big array and I get an error or a timeout.
I wanted to send a limited set of parameters to the backend in a loop.
So how do I create a loop that hits the backend, waits for its response, and only then runs the next iteration, one by one?
N.B.: Previously I created a for loop, but it fired all the requests together, which didn't meet my need.
My previous Angular loop that didn't work well:
for (let i = 0; i < this.branchList.length; i++) {
    this.form.get('branch_code').setValue(this.branchList[i].branch_code);
    this.salSub = this.payrollService.submitForSalaryProcess(this.form.value, this.cycleList)
        .subscribe((data) => {
            if (data['isExecuted'] == false) {
                this.commonService.showErrorMsg(data['message']);
            } else {
                this.salProcessList = data['data'];
            }
            this.salSub.unsubscribe();
        });
}

Varnish not evicting cached item

I am experiencing Varnish (6.4) crashing very regularly once about 5K items are in the cache.
The problem is that I don't see any MAIN.n_lru_nuked entry in varnishstat.
Does that mean that no eviction is taking place?
We have set the storage as malloc with 5g. Varnish is running in a Docker container with 10g of memory allocated to it.
varnishd -F -f /etc/varnish/default.vcl -a http=:80,HTTP -a proxy=:8443,PROXY -s malloc,5g
Here is the vcl
vcl 4.0;
import directors;

backend back1 {
    .host = "xxx.xx.xx.xx";
    .port = "80";
    .connect_timeout = 600s;
    .first_byte_timeout = 600s;
    .between_bytes_timeout = 600s;
}

acl purge {
    "localhost";
    #back1 1
    "xxx.xx.xx.xx";
}

sub vcl_init {
    new loadbalancer = directors.round_robin();
    loadbalancer.add_backend(back1);
}

sub vcl_backend_response {
    set beresp.grace = 30s;
    if (bereq.url ~ "assets") {
        unset beresp.http.set-cookie;
        set beresp.http.cache-control = "public, max-age=120";
        set beresp.ttl = 2h;
        return (deliver);
    }
    # Default: any other content is cached for 2 hours in Varnish and 120s
    # in the browser, except for the admin area backend
    if (!(bereq.url ~ "adminarea")) {
        unset beresp.http.set-cookie;
        set beresp.http.cache-control = "public, max-age=120";
        set beresp.ttl = 2h;
        return (deliver);
    }
}
sub vcl_deliver {
    # Dynamically set the Expires header on every request from the web.
    set resp.http.Expires = "" + (now + 120s);
    # Delete the temporary headers from the response.
    unset resp.http.via;
    unset resp.http.x-powered-by;
    # unset resp.http.server;
    # unset resp.http.x-varnish;
}
sub vcl_recv {
    if (req.method == "BAN") {
        if (!client.ip ~ purge) {
            return (synth(403, "Not allowed."));
        }
        ban("obj.http.Pid == " + req.http.Varnish-Ban-Pid);
        # Throw a synthetic page so the
        # request won't go to the backend.
        return (synth(200, "Banned pid " + req.http.Varnish-Ban-Pid));
    }
    # Enable caching only for GET/HEAD methods
    if (req.method != "GET" && req.method != "HEAD") {
        set req.http.X-Varnish-Pass = "y";
        return (pass);
    }
    # Do not cache multimedia
    if (req.url ~ "\.(mp3|mp4|flv)$") {
        return (pass);
    }
    # Do not check the cache for TYPO3 backend and AJAX requests
    if (req.url ~ "^/adminarea/") {
        set req.http.X-Varnish-Pass = "y";
        return (pass);
    }
    if (req.http.Accept-Language) {
        if (req.http.Accept-Language ~ "^fr") {
            set req.http.Accept-Language = "fr";
        } elsif (req.http.Accept-Language ~ "^es") {
            set req.http.Accept-Language = "es";
        } elsif (req.http.Accept-Language ~ "^en") {
            set req.http.Accept-Language = "en";
        } else {
            set req.http.Accept-Language = "fr";
        }
    }
    # Force gzip compression if the client allows compression of any kind
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } else {
            unset req.http.Accept-Encoding;
        }
    }
    # Update the X-Forwarded-For header by adding the client IP address to it
    if (req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
    # Tell Varnish to cache anything stored in /fileadmin /assets /Resources
    # (ignoring web server cache control header directives)
    if (req.url ~ "assets") {
        return (hash);
    }
    # Tell Varnish to always cache the calendar
    if (req.url ~ "calendar") {
        return (hash);
    }
    if (!(req.url ~ "adminarea")) {
        return (hash);
    }
    set req.http.X-Varnish-Pass = "y";
    return (pass);
}
DISCLAIMER: this is just a working theory; I cannot prove it.
Theory: transient storage makes the container go out of memory.
I notice that over time 17.37G has been allocated to the Transient storage. Your stats show that this number has been freed as well.
Transient storage consumes memory that is not contained within the -s malloc,5g.
You say that your container has 10G allocated to it, so that means if the transient storage reaches 5G at some point, your container might crash.
What goes into transient?
As the name indicates, transient is temporary storage. This type of storage is used for:
Short-lived objects (objects with a TTL lower than the shortlived runtime parameter that defaults to 10 seconds)
Non-cacheable objects that are in-flight
Request bodies
Transient is primarily used to store items that aren't going to be in regular memory for long.
Even non-cacheable objects are temporarily put in transient, because you don't want fast backends to be blocked by slow clients. This means the backend streams the response to transient and can handle other tasks, while the client can pick this response up at its own convenience.
What happened in your case?
Does your Varnish container process large files, such as video or audio? Even if they are not cached, they need to pass through transient.
Again, it's just a theory, no way to prove this. But if you can reproduce the problem, please check the transient varnishstat counters.
If you see the SMA.Transient.g_bytes increasing, you know that transient is the reason for the crash.
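If transient turns out to be the culprit, its growth can be bounded explicitly: varnishd lets you declare the Transient storage backend yourself with a size cap, and varnishstat can filter the transient counters. A sketch based on your startup command (the 512m cap is illustrative, not a recommendation; note that a capped Transient can cause allocation failures for in-flight uncacheable objects under pressure):

```
# Cap transient storage instead of leaving it unbounded
varnishd -F -f /etc/varnish/default.vcl \
    -a http=:80,HTTP -a proxy=:8443,PROXY \
    -s malloc,5g -s Transient=malloc,512m

# Watch the transient counters while reproducing the problem
varnishstat -1 -f SMA.Transient.g_bytes -f SMA.Transient.g_space
```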

Varnish serves wrong files

I'm using Varnish 3 behind an nginx to proxy multiple sites into one domain.
The basic setup works fine, but I now have a problem with Varnish serving the wrong files when the filename already exists in its cache.
Basically all I do in my default.vcl is this:
if (req.url ~ "^/foo1") {
    set req.backend = foo1;
    set req.url = regsub(req.url, "^/foo1/", "/");
}
else if (req.url ~ "^/foo2") {
    set req.backend = foo2;
    set req.url = regsub(req.url, "^/foo2/", "/");
}
If I now call /foo1/index.html, /foo2/index.html will serve the same file. After a restart of varnish and a call of /foo2/index.html, /foo1/index.html will serve foo2's index.html.
As far as I found out, this is an issue with the creation of the hash, which does not take the backend used into account but only the URL (after shortening) and the domain:
11 VCL_call c hash
11 Hash c /index.html
11 Hash c mydomain
I solved this issue for now by altering my vcl_hash to also use the backend, but I'm sure there must be a better, more convenient way:
sub vcl_hash {
    hash_data(req.url);
    hash_data(req.backend);
}
Any hint would be appreciated, thank you very much!
You have two different ways of doing this. The first one is to do what you suggested: add extra values (e.g. req.backend) in vcl_hash.
sub vcl_hash {
    hash_data(req.url);
    hash_data(req.backend);
}
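One detail worth knowing: in Varnish 3, a custom vcl_hash that does not end in return (hash) falls through to the built-in vcl_hash, which hashes the URL and Host header again. To keep full control, you can spell out the built-in logic plus the backend explicitly; a sketch of what the combined subroutine would look like:

```vcl
sub vcl_hash {
    hash_data(req.url);
    # Mirror the built-in behavior: hash the Host header, or the
    # server IP when no Host header is present
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    # Differentiate identical URLs that map to different backends
    hash_data(req.backend);
    return (hash);
}
```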
The second way is to not update req in vcl_recv, but only bereq in vcl_miss/vcl_pass.
sub vcl_urlrewrite {
    if (req.url ~ "^/foo1") {
        set bereq.url = regsub(req.url, "^/foo1/", "/");
    }
    else if (req.url ~ "^/foo2") {
        set bereq.url = regsub(req.url, "^/foo2/", "/");
    }
}

sub vcl_miss {
    call vcl_urlrewrite;
}

sub vcl_pass {
    call vcl_urlrewrite;
}

sub vcl_pipe {
    call vcl_urlrewrite;
}
This second approach requires more VCL, but it comes with advantages as well. For example, when analyzing logs with varnishlog, you can see both the vanilla client request (c column) and the rewritten backend request (b column).
$ varnishlog /any-options-here/
(..)
xx RxURL c /foo1/index.html
(..)
xx TxURL c /index.html
(..)
$

Varnish VCL simply replace client.ip for req.http.x-forwarded-for

In my Varnish 2 setup I have a purging/banning block like so:
acl purge {
    "localhost";
    "x.x.x.x"/24;
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
    if (req.request == "BAN") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        ban("obj.http.x-host == " + req.http.host + " && obj.http.x-url ~ " + req.url);
        # Throw a synthetic page so the
        # request won't go to the backend.
        error 200 "Ban added";
    }
}
I'd expect that I could simply replace client.ip in the if-statements with req.http.x-forwarded-for, but when I do, the following compile error occurs:
Message from VCC-compiler:
Expected CSTR got 'purge'
(program line 944), at
('purging-banning.vcl' Line 16 Pos 41)
if (!req.http.x-forwarded-for ~ purge) {
----------------------------------------#####----
Running VCC-compiler failed, exit 1
VCL compilation failed
I have been searching Google and Stack Overflow, but I haven't found a good solution to my problem yet, or the reason why req.http.x-forwarded-for won't work here.
Who can help?
Try using "ip" from the vmod_std. See: https://varnish-cache.org/docs/trunk/reference/vmod_std.generated.html#func-ip
Like so:
if (std.ip(req.http.x-forwarded-for, "0.0.0.0") !~ purge) {
    error 405 "Not allowed.";
}
This simply converts a string object to an IP object. Then, the IP object can be compared to the IP acl lists.
I don't have the rep to comment, so I'm affirming the answer using std.ip here. I had the identical situation, and using std.ip fixed it. Remember to add import std; in your default.vcl.
Also, in my case, nginx was forwarding to Varnish and X-Forwarded-For sometimes contained two IPs, so I used X-Real-IP, which was set to $remote_addr in my nginx forwarding config.
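Alternatively, the first entry of a multi-valued X-Forwarded-For can be extracted with regsub before the ACL check. A sketch (the X-Actual-IP header name is illustrative; vmod_std requires Varnish 3 or later, so this won't work on Varnish 2):

```vcl
import std;

sub vcl_recv {
    if (req.request == "PURGE") {
        # X-Forwarded-For may hold "client, proxy1, proxy2"; keep the first entry
        set req.http.X-Actual-IP = regsub(req.http.x-forwarded-for, ",.*$", "");
        if (std.ip(req.http.X-Actual-IP, "0.0.0.0") !~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
}
```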

Is there a way to set req.connection_timeout for specific requests in Varnish?

I've got a Varnish setup sitting in front of PHP machines. For 98% of pages, a single request timeout (req.connect_timeout in VCL) works. I've got a couple of pages, however, that we expect to take up to 3 minutes before they should time out. Is there a way to set the connection timeout for specific requests in Varnish? If so, please show me the light in VCL. I'd like to keep the same req.connect_timeout for all pages but raise that number for these few specific pages.
Unfortunately, this does not work for Varnish > 3.
Very sad. There does not seem to be a way to actually achieve this in versions > 3.0.
I was banging my head on this issue for hours.
I now do have a solution:
Use vcl_miss!
Here is an example:
sub vcl_recv {
    set req.backend = director_production;
    if (req.request == "POST") {
        return (pipe);
    }
    else {
        return (lookup);
    }
}

sub vcl_miss {
    if (req.url ~ "/longrunning") {
        set bereq.first_byte_timeout = 1h; # one hour!
        set bereq.between_bytes_timeout = 10m;
    } else {
        set bereq.first_byte_timeout = 10s;
        set bereq.between_bytes_timeout = 1s;
    }
}
This works for me.
What got me worried was that the Varnish documentation states that vcl_miss is always called when an object is not found in the cache. In my first version I omitted the if/else in vcl_recv. I then had to learn (once again) that somehow the documentation is wrong: one needs to explicitly state return(lookup), otherwise vcl_miss is not called. :(
I think connect_timeout limits the time for setting up the connection to the back end, while first_byte_timeout and between_bytes_timeout limit the processing time. Have you tried setting bereq.first_byte_timeout programmatically in vcl_recv? E.g. with something like:
backend mybackend {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 100ms;
    .first_byte_timeout = 5s;
    .between_bytes_timeout = 5s;
}

sub vcl_recv {
    set req.backend = mybackend;
    if (req.url ~ "/slowrequest") {
        # set req.connect_timeout = 180s; # old naming convention?
        set bereq.connect_timeout = 180s;
    }
    # .. do default stuff
}
Let me know if it works...
I would solve it by declaring multiple backends in Varnish, each with a different timeout, but probably referring to the very same IP and server. Then you can simply set a new backend for certain URLs to force them to use the timeouts declared there.
if (req.url ~ "[something]") {
    set req.backend = backend_with_higher_timeout;
}
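Spelled out, the two backends can point at the same origin and differ only in their timeouts. A sketch (host, port, URL pattern, and timeout values are all assumptions, not values from the question):

```vcl
backend default_backend {
    .host = "127.0.0.1";
    .port = "8080";
    .first_byte_timeout = 15s;
}

# Same origin, but allowed up to 3 minutes before the first byte
backend backend_with_higher_timeout {
    .host = "127.0.0.1";
    .port = "8080";
    .first_byte_timeout = 180s;
}

sub vcl_recv {
    set req.backend = default_backend;
    if (req.url ~ "^/slowreport") {
        set req.backend = backend_with_higher_timeout;
    }
}
```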
In VCL 4.0 you can define your backend and give Varnish a hint to use it:
sub vcl_recv {
    if (req.method == "POST" && req.url ~ "^/admin") {
        set req.backend_hint = backend_admin_slow;
    }
}
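For this to compile, backend_admin_slow needs a declaration carrying the relaxed timeouts; a sketch (host, port, and timeout values are placeholders):

```vcl
backend backend_admin_slow {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 5s;
    .first_byte_timeout = 180s;
    .between_bytes_timeout = 60s;
}
```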
Use vcl_backend_fetch and set the timeout there:
sub vcl_backend_fetch {
    if (bereq.method == "POST" && bereq.url == "/slow") {
        set bereq.first_byte_timeout = 300s;
    }
}
