Ban or purge the Varnish cache

Q1: I am caching content for both mobile and desktop. I want to purge or ban the cache only for mobile or only for desktop. How can I purge and ban the cache separately for mobile and desktop?
Q2: I want to bypass the cache for the desktop user agent and, for now, only cache the mobile user agent. Please help. This is my VCL code for caching the mobile and desktop user agents.

Regular purging in Varnish is done based on the URL and removes all variations. If you only want to remove specific objects for one of the cache variations (mobile vs desktop), you'll need to use banning.
Here's an official banning tutorial: https://www.varnish-software.com/developers/tutorials/ban/
The VCL code
If we use your VCL code as the basis, here's the complete VCL including the banning logic:
vcl 4.1;
backend default {
.host = "127.0.0.1"; # assumed backend address; a backend needs a .host (or .path) to compile
.port = "8000";
}
acl purge {
"localhost";
"192.168.55.0"/24;
}
include "devicedetect.vcl";
sub vcl_recv {
call devicedetect;
if(req.http.X-UA-Device ~ "^(mobile|tablet)\-.+$") {
set req.http.X-UA-Device = "mobile";
} else {
set req.http.X-UA-Device = "desktop";
}
}
sub vcl_recv {
if (req.method == "BAN") {
if (!client.ip ~ purge) {
return (synth(405));
}
if (!req.http.x-invalidate-pattern) {
if(!req.http.x-invalidate-ua-device) {
return (purge);
}
ban("obj.http.x-url == " + req.url
+ " && obj.http.x-host == " + req.http.host
+ " && obj.http.x-ua-device == " + req.http.x-invalidate-ua-device);
return (synth(200,"Ban added"));
}
if(!req.http.x-invalidate-ua-device) {
ban("obj.http.x-url ~ " + req.http.x-invalidate-pattern
+ " && obj.http.x-host == " + req.http.host);
return (synth(200,"Ban added"));
}
ban("obj.http.x-url ~ " + req.http.x-invalidate-pattern
+ " && obj.http.x-host == " + req.http.host
+ " && obj.http.x-ua-device == " + req.http.x-invalidate-ua-device);
return (synth(200,"Ban added"));
}
}
sub vcl_backend_response {
set beresp.http.x-url = bereq.url;
set beresp.http.x-host = bereq.http.host;
set beresp.http.x-ua-device = bereq.http.X-UA-Device;
}
sub vcl_deliver {
unset resp.http.x-url;
unset resp.http.x-host;
unset resp.http.x-ua-device;
}
sub vcl_hash {
hash_data(req.http.X-UA-Device);
}
How to run
Here are a few examples of how to execute the bans.
1. Invalidate a page for both mobile and desktop
The following command will remove the /my-page page from the cache for the domain.ext domain. This will remove both the mobile and the desktop version:
curl -XBAN http://domain.ext/my-page
2. Invalidate a page for the mobile version of the website
The following command will remove the /my-page page from the cache for the domain.ext domain, but only for the mobile version:
curl -XBAN -H "x-invalidate-ua-device: mobile" http://domain.ext/my-page
3. Invalidate a page for the desktop version of the website
The following command will remove the /my-page page from the cache for the domain.ext domain, but only for the desktop version:
curl -XBAN -H "x-invalidate-ua-device: desktop" http://domain.ext/my-page
4. Invalidate multiple pages for both the mobile and the desktop version
The following command will remove all pages from the cache that start with /my-* for the domain.ext domain, both for the mobile and the desktop version of the website:
curl -XBAN -H "x-invalidate-pattern: /my-" http://domain.ext/my-page
5. Invalidate multiple pages for the mobile website
The following command will remove all pages from the cache that start with /my-* for the domain.ext domain, but only for the mobile version of the website:
curl -XBAN -H "x-invalidate-pattern: /my-" -H "x-invalidate-ua-device: mobile" http://domain.ext/my-page
6. Invalidate multiple pages for the desktop website
The following command will remove all pages from the cache that start with /my-* for the domain.ext domain, but only for the desktop version of the website:
curl -XBAN -H "x-invalidate-pattern: /my-" -H "x-invalidate-ua-device: desktop" http://domain.ext/my-page
Further customizations
The VCL code assumes that the 192.168.55.0/24 IP range will be used to invalidate the cache remotely. Please make sure the right IP addresses, hostnames and CIDRs are part of the purge ACL.
The ban executions were done using the domain.ext domain name. Please use the right hostname to invalidate your cache.
If the hostname you're using to invalidate (e.g. "localhost") is not the hostname under which the objects are stored in the cache, assign an explicit Host header to your invalidation calls.
Here's an example where the ban call is done locally, but the Host header to match is domain.ext:
curl -XBAN -H "Host: domain.ext" -H "x-invalidate-pattern: /my-" -H "x-invalidate-ua-device: desktop" http://localhost/my-page
Bypassing the cache for desktop users
To answer your second question, here's how you bypass the cache for the desktop website:
sub vcl_recv {
if(req.http.X-UA-Device == "desktop") {
return(pass);
}
}
This small snippet of VCL code can be added to your existing configuration. The X-UA-Device header is set in one of the earlier vcl_recv definitions, so it can be reused here; just make sure this check runs after the device detection logic.
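If you want to verify the bypass, here's a quick sketch (assuming the domain.ext hostname from the examples above and the standard devicedetect User-Agent classification): on a cache hit the X-Varnish response header contains two transaction IDs, while a pass or miss only shows one.
# Desktop User-Agent: always passed to the backend, so X-Varnish contains a single ID
curl -s -o /dev/null -D - -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" http://domain.ext/my-page | grep -i x-varnish
# Mobile User-Agent: run it twice; the second response should be a hit and show two IDs
curl -s -o /dev/null -D - -A "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)" http://domain.ext/my-page | grep -i x-varnish
curl -s -o /dev/null -D - -A "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)" http://domain.ext/my-page | grep -i x-varnish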

Related

Varnish returns a 503 and backend_health shows a 400 error

May I ask what causes this problem? It has been bothering me for a long time. How should I solve it? Thank you in advance.
This is my VCL. I have actually tried both /health_check.php and /pub/health_check.php.
# VCL version 5.0 is not supported so it should be 4.0 even though actually used Varnish version is 6
vcl 4.0;
import std;
# The minimal Varnish version is 6.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'
backend default {
.host = "localhost";
.port = "8080";
.first_byte_timeout = 600s;
.probe = {
.url = "health_check.php";
.timeout = 2s;
.interval = 5s;
.window = 10;
.threshold = 5;
}
}
acl purge {
"localhost";
"127.0.0.1";
"::1";
}
sub vcl_recv {
if (req.restarts > 0) {
set req.hash_always_miss = true;
}
if (req.method == "PURGE") {
if (client.ip !~ purge) {
return (synth(405, "Method not allowed"));
}
# To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
# has been added to the response in your backend server config. This is used, for example, by the
# capistrano-magento2 gem for purging old content from varnish during its deploy routine.
if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
}
if (req.http.X-Magento-Tags-Pattern) {
ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
}
if (req.http.X-Pool) {
ban("obj.http.X-Pool ~ " + req.http.X-Pool);
}
return (synth(200, "Purged"));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
# We only deal with GET and HEAD by default
if (req.method != "GET" && req.method != "HEAD") {
return (pass);
}
# Bypass customer, shopping cart, checkout
if (req.url ~ "/customer" || req.url ~ "/checkout") {
return (pass);
}
# Bypass health check requests
if (req.url ~ "^/(pub/)?(health_check.php)$") {
return (pass);
}
# Set initial grace period usage status
set req.http.grace = "none";
# normalize url in case of leading HTTP scheme and domain
set req.url = regsub(req.url, "^http[s]?://", "");
# collect all cookies
std.collect(req.http.Cookie);
# Compression filter. See https://www.varnish-cache.org/trac/wiki/FAQ/Compression
if (req.http.Accept-Encoding) {
if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
# No point in compressing these
unset req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
set req.http.Accept-Encoding = "deflate";
} else {
# unknown algorithm
unset req.http.Accept-Encoding;
}
}
# Remove all marketing get parameters to minimize the cache objects
if (req.url ~ "(\?|&)(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=") {
set req.url = regsuball(req.url, "(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&?", "");
set req.url = regsub(req.url, "[?|&]+$", "");
}
# Static files caching
if (req.url ~ "^/(pub/)?(media|static)/") {
# Static files should not be cached by default
return (pass);
# But if you use a few locales and don't use CDN you can enable caching static files by commenting previous line (#return (pass);) and uncommenting next 3 lines
#unset req.http.Https;
#unset req.http.X-Forwarded-Proto;
#unset req.http.Cookie;
}
# Authenticated GraphQL requests should not be cached by default
if (req.url ~ "/graphql" && req.http.Authorization ~ "^Bearer") {
return (pass);
}
return (hash);
}
sub vcl_hash {
if (req.http.cookie ~ "X-Magento-Vary=") {
hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
}
# To make sure http users don't see ssl warning
if (req.http.X-Forwarded-Proto) {
hash_data(req.http.X-Forwarded-Proto);
}
if (req.url ~ "/graphql") {
call process_graphql_headers;
}
}
sub process_graphql_headers {
if (req.http.Store) {
hash_data(req.http.Store);
}
if (req.http.Content-Currency) {
hash_data(req.http.Content-Currency);
}
}
sub vcl_backend_response {
set beresp.grace = 3d;
if (beresp.http.content-type ~ "text") {
set beresp.do_esi = true;
}
if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
set beresp.do_gzip = true;
}
if (beresp.http.X-Magento-Debug) {
set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
}
# cache only successful responses and 404s
if (beresp.status != 200 && beresp.status != 404) {
set beresp.ttl = 0s;
set beresp.uncacheable = true;
return (deliver);
} elsif (beresp.http.Cache-Control ~ "private") {
set beresp.uncacheable = true;
set beresp.ttl = 86400s;
return (deliver);
}
# validate if we need to cache it and prevent from setting cookie
if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
unset beresp.http.set-cookie;
}
# If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
if (beresp.ttl <= 0s ||
beresp.http.Surrogate-control ~ "no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "no-cache|no-store") ||
beresp.http.Vary == "*") {
# Mark as Hit-For-Pass for the next 2 minutes
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
sub vcl_deliver {
if (resp.http.X-Magento-Debug) {
if (resp.http.x-varnish ~ " ") {
set resp.http.X-Magento-Cache-Debug = "HIT";
set resp.http.Grace = req.http.grace;
} else {
set resp.http.X-Magento-Cache-Debug = "MISS";
}
} else {
unset resp.http.Age;
}
# Not letting browser to cache non-static files.
if (resp.http.Cache-Control !~ "private" && req.url !~ "^/(pub/)?(media|static)/") {
set resp.http.Pragma = "no-cache";
set resp.http.Expires = "-1";
set resp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
}
unset resp.http.X-Magento-Debug;
unset resp.http.X-Magento-Tags;
unset resp.http.X-Powered-By;
unset resp.http.Server;
unset resp.http.X-Varnish;
unset resp.http.Via;
unset resp.http.Link;
}
sub vcl_hit {
if (obj.ttl >= 0s) {
# Hit within TTL period
return (deliver);
}
if (std.healthy(req.backend_hint)) {
if (obj.ttl + 300s > 0s) {
# Hit after TTL expiration, but within grace period
set req.http.grace = "normal (healthy server)";
return (deliver);
} else {
# Hit after TTL and grace expiration
return (restart);
}
} else {
# server is not healthy, retrieve from cache
set req.http.grace = "unlimited (unhealthy server)";
return (deliver);
}
}
My application is Magento and my health_check.php file is in pub.
This is my Nginx error:
2022/11/04 09:15:23 [crit] 3378368#0: *538 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: *.*.*.224, server: 0.0.0.0:443
2022/11/04 09:37:30 [notice] 3383586#0: signal process started
I ran varnishlog -g request -q "ReqUrl eq '/health_check.php'" and got no response.
I also ran sudo varnishlog -g request -q "ReqUrl eq '/'" with no response. Then I ran sudo varnishlog -g request -q "VCL_call eq 'BACKEND_ERROR'" and it returned a 503.
It's clear that the 400 error of the health probe causes the backend to be unhealthy. Varnish will return a 503 because the backend is unhealthy regardless of the status code.
Please share the backend & probe configuration from your VCL file so I can figure out how the health checking endpoint is probed.
It also makes sense for you to run a varnishlog -g request -q "ReqUrl eq '/health_check'" where /health_check is replaced by the URL of the health checking probe.
Please also have a look at the webserver logs of the backend to see if there's any indication on why the HTTP 400 status code is returned.
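If it helps, you can also watch the probe results live on the Varnish server. This is a small sketch using standard Varnish tooling; the Backend_health records show the result of each probe (including the 400 in your case) and whether the backend is considered healthy:
# Show the result of every health probe, including the status it received
sudo varnishlog -g raw -i Backend_health
# Show the current health state of all backends
sudo varnishadm backend.list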
Update: run the health check manually
The Nginx error logs didn't contain anything useful. Please check the access logs as well. If there is useful information in there, don't hesitate to add it to your question.
Please also check the Magento (debug) logs for more information.
You could also run curl -I http://localhost:8080/health_check.php from the command line on the Varnish server to see if anything specific comes up in the output for the script or in the Magento logs.
Update 2: provide the right health check host header
It looks as though your Nginx server doesn't route requests coming from localhost to the right Magento virtual host.
In that case I suggest adding the .host_header property to your backend definition as illustrated below:
backend default {
.host = "localhost";
.host_header = "mydomain";
.port = "8080";
.first_byte_timeout = 600s;
.probe = {
.url = "/health_check.php";
.timeout = 2s;
.interval = 5s;
.window = 10;
.threshold = 5;
}
}
This host header will ensure that the right host header is sent to the backend while performing health checks.
Since you mentioned in your comment that https://mydomain/health_check.php returns a valid 200 OK status code, this seems like a first step in the right direction.
Please update .host_header = "mydomain" and ensure the right domain name is used.
Update 3: figure out why the page doesn't load
Your backend seems to be healthy now (based on the comments), but the site still doesn't work.
In that case, run the following command to debug:
sudo varnishlog -g request -q "ReqUrl eq '/'"
This command will display the full VSL transaction log for the homepage. Go ahead and run it and add the full output to your question in case you need help.
Update 4: first byte timeout issues
The logs you provided give a clear indication of what's causing the failure.
Here's the relevant log extract:
BackendOpen 31 default 127.0.0.1 8080 127.0.0.1 57786
BackendStart 127.0.0.1 8080
Timestamp Bereq: 1668044657.403995 0.000161 0.000161
FetchError first byte timeout
BackendClose 31 default
Timestamp Beresp: 1668045257.502267 600.098433 600.098272
Timestamp Error: 1668045257.502275 600.098441 0.000008
Varnish is opening up a connection to the backend via a backend fetch thread and sends the backend request. It took Varnish 0.000161 seconds to send the request.
Now the Varnish backend thread is waiting for the backend to respond. The first byte timeout is set to 600 seconds, which is incredibly long. However, the timeout is still triggered.
The Timestamp Beresp log line indicates that it took Varnish 600.098272 seconds to receive the backend response. But we know that this is just the timeout firing, and the error is triggered directly after that.
Something really weird is going on and you need to figure out why your backend is taking so long to respond. This doesn't really have anything to do with Varnish itself, but it could be that your application behaves in a strange way when a proxy server is put in front of it.
It's also possible that your backend application is just slow under all circumstances. That's information I don't have and that's beyond the scope of this question.
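To rule Varnish out, it may help to time the backend directly from the Varnish server. This is a hedged sketch that assumes the backend address from your VCL (localhost:8080) and the mydomain host header from update 2; curl's timing breakdown shows whether the backend itself needs that long to produce a first byte:
curl -s -o /dev/null -H "Host: mydomain" -w "connect: %{time_connect}s first byte: %{time_starttransfer}s total: %{time_total}s\n" http://localhost:8080/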

How can I customize the message shown to a user who reaches a Guru Meditation error when making a request through my Varnish server?

I am trying to customize what appears on screen when a user reaches a Guru Meditation error, i.e. when a request to my backend (which has a Varnish reverse proxy in front of it) fails. I have tried placing log.info(client.ip) in the different subroutines of default.vcl, only to run into compilation errors when trying to start the Varnish service. I am using a Linux virtual machine.
You certainly can. Have a look at the following tutorial I created: https://www.varnish-software.com/developers/tutorials/vcl-synthetic-output-template-file/.
Modifying the synthetic output templates
Here's the VCL code you need to extend the regular vcl_synth subroutine in case you call return(synth()) from your VCL code, as well as the code you need to extend vcl_backend_error in case of a backend fetch error:
vcl 4.1;
import std;
sub vcl_synth {
set resp.http.Content-Type = "text/html; charset=utf-8";
set resp.http.Retry-After = "5";
set resp.body = regsuball(std.fileread("/etc/varnish/synth.html"),"<<REASON>>",resp.reason);
return (deliver);
}
sub vcl_backend_error {
set beresp.http.Content-Type = "text/html; charset=utf-8";
set beresp.http.Retry-After = "5";
set beresp.body = regsuball(std.fileread("/etc/varnish/synth.html"),"<<REASON>>",beresp.reason);
return (deliver);
}
As explained in the tutorial, you can store the HTML code you want to display in an HTML file and load it into your VCL output.
The trick is to put a <<REASON>> placeholder in your HTML where the actual error message gets inserted.
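As a hedged illustration (the /etc/varnish/synth.html path comes from the snippet above; the markup itself is just an example), a minimal template could be created like this:
cat > /etc/varnish/synth.html <<'EOF'
<!DOCTYPE html>
<html>
<head><title>Sorry, something went wrong</title></head>
<body>
<h1>Sorry, something went wrong</h1>
<p>Error details: <<REASON>></p>
</body>
</html>
EOF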
Adding custom logging to your VCL
If you want to add custom logging that gets sent to VSL, you can use the std.log() function that is part of vmod_std.
Here's some example VCL code that uses this function:
vcl 4.1;
import std;
sub vcl_recv {
std.log("Client IP: " + client.ip);
}
The log will be displayed through a VCL_Log tag in your VSL output.
If you want to filter out VCL_Log tags, you can use the following command:
varnishlog -g request -i VCL_Log
This is the output you may receive:
* << Request >> 32770
- VCL_Log Client IP: 127.0.0.1
** << BeReq >> 32771
If you're not filtering on the VCL_Log tag, you'll still see it appear in your VSL output if you run varnishlog -g request.
Tip: if you want to see the full log transaction but only for a specific URL, just run varnishlog -g request -q "ReqUrl eq '/'". This will only display the logs for the homepage.
Update: displaying the client IP in the synthetic output
The VCL code below injects the X-Forwarded-For header into the output by concatenating to the reason phrase:
vcl 4.1;
import std;
sub vcl_synth {
set resp.http.Content-Type = "text/html; charset=utf-8";
set resp.http.Retry-After = "5";
set resp.body = regsuball(std.fileread("/etc/varnish/synth.html"),"<<REASON>>",resp.reason + " (" + req.http.X-Forwarded-For + ")");
return (deliver);
}
sub vcl_backend_error {
set beresp.http.Content-Type = "text/html; charset=utf-8";
set beresp.http.Retry-After = "5";
set beresp.body = regsuball(std.fileread("/etc/varnish/synth.html"),"<<REASON>>",beresp.reason + " (" + bereq.http.X-Forwarded-For + ")");
return (deliver);
}
It's also possible to provide a second placeholder in the template and perform an extra regsuball() call. But for the sake of simplicity, the X-Forwarded-For header is just attached to the reason string.

Varnish Cache v4: Incorrect Backend Health Check Response

I've set up Varnish Cache (4) in front of my CMS to help cache requests. In the event that my CMS goes down, I'd like to deliver cached items for a set grace period. I've followed many examples provided online but am running into an issue where Varnish doesn't recognize that my backend is down. When I manually shut down the CMS, std.healthy(req.backend_hint) continues to return true and Varnish attempts to retrieve the item from the backend, which then returns a 503 response.
Question: Am I incorrectly assuming std.healthy(req.backend_hint) will recognize that my CMS is down?
Here is the VCL script I've been using to test:
sub vcl_recv {
# Initial State
set req.http.grace = "none";
set req.http.x-host = req.http.host;
set req.http.x-url = req.url;
return(hash);
}
sub vcl_backend_response {
set beresp.ttl = 10s;
set beresp.grace = 1h;
}
sub vcl_deliver {
# Copy grace to resp so we can tell from client
set resp.http.grace = req.http.grace;
# Add debugging headers to cache requests
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
}
else {
set resp.http.X-Cache = "MISS";
}
}
sub vcl_hit {
if (obj.ttl >= 0s) {
# normal hit
return (deliver);
}
# We have no fresh content, lets look at the cache
elsif (std.healthy(req.backend_hint)) {
# Backend is healthy. Limit age to 10s.
if (obj.ttl + 10s > 0s) {
set req.http.grace = "normal(limited)";
return (deliver);
} else {
# No candidate for grace. Fetch a fresh object.
return(fetch);
}
} else {
# backend is sick - use full grace
if (obj.ttl + obj.grace > 0s) {
set req.http.grace = "full";
return (deliver);
} else {
# no graced object.
return (fetch);
}
}
}
Again, when I shut down the CMS, std.healthy(req.backend_hint) still reports the backend as healthy and never reaches the final else branch.
Thanks for taking a look.
To properly use std.healthy you of course need to configure backend probes. So at the top of your VCL file you would first configure a probe:
probe site_probe {
.request =
"HEAD / HTTP/1.1"
"Host: example.com"
"Connection: close";
.interval = 5s; # check the health of each backend every 5 seconds
.timeout = 3s; # time out after 3 seconds
.window = 5; # If 3 out of the last 5 polls succeeded the backend is considered healthy, otherwise it will be marked as sick
.threshold = 3;
}
Make sure to replace example.com with your main website domain. It is important to include (or omit) the www. prefix correctly so that the probe does not get redirected and marked as failed.
And of course your backend definition should be configured to use the defined probe:
backend default {
.host = "127.0.0.1";
.port = "8080";
.probe = site_probe;
}
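A quick way to confirm the probe is doing its job (a generic check, not specific to your CMS): stop the CMS and verify that the backend flips from healthy to sick once enough probes have failed.
# The probe column should change from Healthy to Sick after the CMS is stopped
# (exact output format varies by Varnish version)
varnishadm backend.list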

Amazon CloudFront invalidation API

The images used in our application are rendered from Amazon CloudFront.
When an existing image is modified, the change is not reflected immediately, since CloudFront takes around 24 hours to update.
As a workaround, I'm planning to call CreateInvalidation to reflect the file change immediately.
Is it possible to use this invalidation call without the SDK?
We are using the ColdFusion programming language, which does not seem to have an SDK for this.
You can simply make a POST request. Here is a PHP example from Steve Jenkins:
<?php
/**
* Super-simple AWS CloudFront Invalidation Script
* Modified by Steve Jenkins <steve stevejenkins com> to invalidate a single file via URL.
*
* Steps:
* 1. Set your AWS Access Key
* 2. Set your AWS Secret Key
* 3. Set your CloudFront Distribution ID (or pass one via the URL with &dist)
* 4. Put cf-invalidate.php in a web accessible and password protected directory
* 5. Run it via: http://example.com/protected_dir/cf-invalidate.php?filename=FILENAME
* or http://example.com/cf-invalidate.php?filename=FILENAME&dist=DISTRIBUTION_ID
*
* The author disclaims copyright to this source code.
*
* Details on what's happening here are in the CloudFront docs:
* http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
*
*/
$onefile = $_GET['filename']; // You must include ?filename=FILENAME in your URL or this won't work
if (!isset($_GET['dist'])) {
$distribution = 'DISTRIBUTION_ID'; // Your CloudFront Distribution ID, or pass one via &dist=
} else {
$distribution = $_GET['dist'];
}
$access_key = 'AWS_ACCESS_KEY'; // Your AWS Access Key goes here
$secret_key = 'AWS_SECRET_KEY'; // Your AWS Secret Key goes here
$epoch = date('U');
$xml = <<<EOD
<InvalidationBatch>
<Path>{$onefile}</Path>
<CallerReference>{$distribution}{$epoch}</CallerReference>
</InvalidationBatch>
EOD;
/**
* You probably don't need to change anything below here.
*/
$len = strlen($xml);
$date = gmdate('D, d M Y G:i:s T');
$sig = base64_encode(
hash_hmac('sha1', $date, $secret_key, true)
);
$msg = "POST /2010-11-01/distribution/{$distribution}/invalidation HTTP/1.0\r\n";
$msg .= "Host: cloudfront.amazonaws.com\r\n";
$msg .= "Date: {$date}\r\n";
$msg .= "Content-Type: text/xml; charset=UTF-8\r\n";
$msg .= "Authorization: AWS {$access_key}:{$sig}\r\n";
$msg .= "Content-Length: {$len}\r\n\r\n";
$msg .= $xml;
$fp = fsockopen('ssl://cloudfront.amazonaws.com', 443,
$errno, $errstr, 30
);
if (!$fp) {
die("Connection failed: {$errno} {$errstr}\n");
}
fwrite($fp, $msg);
$resp = '';
while(! feof($fp)) {
$resp .= fgets($fp, 1024);
}
fclose($fp);
print '<pre>'.$resp.'</pre>'; // Make the output more readable in your browser
Some alternatives to invalidating an object are:
Updating Existing Objects Using Versioned Object Names, such as image_1.jpg changing to image_2.jpg
Configuring CloudFront to Cache Based on Query String Parameters, such as configuring CloudFront to recognize parameters (e.g. ?version=1) as part of the filename, so your app can reference a new version by using ?version=2 and force CloudFront to treat it as a different object
For frequent modifications, I think the best approach is to append a query string to the image URL (a timestamp or a hash of the object) and configure CloudFront to forward query strings, which will always return the latest image whenever the query string differs.
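As a small sketch (using the example distribution domain d111111abcdef8.cloudfront.net from the AWS documentation; the x-cache response header indicates whether CloudFront served the object from its edge cache), repeating the same query string hits the cache, while a new query string forces a fresh fetch from the origin:
# Same query string twice: the second response should show "Hit from cloudfront"
curl -sI "https://d111111abcdef8.cloudfront.net/images/banner.jpg?v=1" | grep -i x-cache
curl -sI "https://d111111abcdef8.cloudfront.net/images/banner.jpg?v=1" | grep -i x-cache
# Different query string: treated as a separate object and fetched from the origin again
curl -sI "https://d111111abcdef8.cloudfront.net/images/banner.jpg?v=2" | grep -i x-cache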
For infrequent modifications, apart from the SDK, you can use the AWS CLI, which also allows you to invalidate the cache on builds by integrating with your CI/CD tools.
E.g.:
aws cloudfront create-invalidation --distribution-id S11A16G5KZMEQD \
--paths /index.html /error.html

Varnish round robin director with backend virtual hosts

I've been trying like mad to figure out the VCL for how to do this and am beginning to think it's not possible. I have several backend app servers that serve a variety of different hosts. I need varnish to cache pages for any host and send requests that miss the cache to the app servers with the original host info in the request ("www.site.com"). However, all the VCL examples seem to require me to use a specific host name for my backend server ("backend1" for example). Is there any way around this? I'd love to just point the cache miss to an IP and leave the request host intact.
Here's what I have now:
backend app1 {
.host = "192.168.1.11";
.probe = {
.url = "/heartbeat";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3;
}
}
backend app2 {
.host = "192.168.1.12";
.probe = {
.url = "/heartbeat";
.interval = 5s;
.timeout = 1s;
.window = 5;
.threshold = 3;
}
}
director pwms_t247 round-robin {
{
.backend = app1;
}
{
.backend = app2;
}
}
sub vcl_recv {
# pass on any requests that varnish should not handle
if (req.request != "HEAD" && req.request != "GET" && req.request != "BAN") {
return (pass);
}
# pass requests to the backend if they have a no-cache header or cookie
if (req.http.x-varnish-no-cache == "true" || (req.http.cookie && req.http.cookie ~ "x-varnish-no-cache=true")) {
return (pass);
}
# Lookup requests that we know should be cached
if (req.url ~ ".*") {
# Clear cookie and authorization headers, set grace time, lookup in the cache
unset req.http.Cookie;
unset req.http.Authorization;
return(lookup);
}
}
etc...
This is my first StackOverflow question so please let me know if I neglected to mention something important! Thanks.
Here is what I actually got to work. I credit ivy because his answer is technically correct, and because one of the problems was my host (they were blocking ports that prevented my normal web requests from getting through). The real problem I was having was that heartbeat messages had no host info, so the vhost couldn't route them correctly. Here's a sample backend definition with a probe that crafts a completely custom request:
backend app1 {
.host = "192.168.1.11";
.port = "80";
.probe = {
.request = "GET /heartbeat HTTP/1.1"
"Host: something.com"
"Connection: close"
"Accept-Encoding: text/html" ;
.interval = 15s;
.timeout = 2s;
.window = 5;
.threshold = 3;
}
}
I need varnish to cache pages for any host and send requests that miss
the cache to the app servers with the original host info in the
request ("www.site.com"). However, all the VCL examples seem to require
me to use a specific host name for my backend server ("backend1" for example)
backend1 is not a hostname, it's a backend definition with an IP address. You're defining some routing logic in your VCL file (which backend a request is proxied to), but you're not changing the Host header in the request. What you're asking for (keeping the hostname the same) is the default behavior.
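If you want to see that for yourself, a simple check (192.168.1.10 stands in for the address Varnish listens on; www.site.com is the hostname from your question) is to send a request through Varnish with the original Host header and confirm in the app server's access log that the same Host arrives at app1 or app2:
# Send a request through Varnish with the original Host header intact
curl -sI -H "Host: www.site.com" http://192.168.1.10/
# Then check the access log on 192.168.1.11 / 192.168.1.12: the proxied request
# should still carry Host: www.site.com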
