I have only one public IP address, so I use Varnish as a reverse proxy for multiple servers. Here is the configuration.
1st physical server: Varnish/Apache - ports 80 and 8080, IP address 10.0.0.40
2nd physical server: 3 Drupal vhosts - port 80, IP address 10.0.0.30
3rd physical server: 2 non-Drupal vhosts - port 80, IP address 10.0.0.31
In /etc/sysconfig/varnish,
DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -s file,/var/lib/varnish/varnish_storage.bin,1G"
In default.vcl,
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
The reverse proxy is working OK and I can see the Varnish cache working by checking the HTTP headers. However, I am not sure the above configuration is correct or optimal, especially with only one backend definition in the default VCL file. Any advice?
I suggest the following approach:
NGINX > VARNISH > APACHE
Nginx: to handle SSL termination easily; you can also use it to cache static content. As far as I know, Nginx is better than Varnish at caching static content, and Varnish is not really meant to cache static content.
Varnish: will receive requests from Nginx and pass them to Apache.
Apache: will act as a load balancer, sending the requests on to the backend servers (Drupal/non-Drupal); see the sketch at the end of this answer.
Check the following resources:
1- HTTPS Everywhere With Nginx, Varnish And Apache
2- Simple load balancing with Apache
If my answer is not clear enough let me know.
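For the Apache layer, a rough sketch of what the vhost-based routing could look like (the ServerName values are hypothetical, the backend IPs are the ones from your setup, Apache is assumed to keep listening on 8080, and mod_proxy plus mod_proxy_http are required):
<VirtualHost *:8080>
    ServerName drupal1.example.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.30/
    ProxyPassReverse / http://10.0.0.30/
</VirtualHost>

<VirtualHost *:8080>
    ServerName other1.example.com
    ProxyPreserveHost On
    ProxyPass        / http://10.0.0.31/
    ProxyPassReverse / http://10.0.0.31/
</VirtualHost>
Varnish would then keep a single default backend pointing at Apache on 8080, much like in your current VCL.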
I would like to try a Varnish config where it listens on the default port 6081 and Apache stays on 80. The idea came from this blog post about Varnish.
An iptables redirect then sends all 80 traffic to 6081. Doing it this way enables me to continue using my web control panel without breaking it (the panel runs on 8080 itself and also breaks when Apache's listen is changed).
Right now I am on a clean install of the server with only Apache and Varnish installed, just to see if this works as is. I can get Varnish up and running with:
curl -I 192.168.0.1:6081
However, it doesn't work on the IP alone, even though the iptables rule is in place. Below are my results and settings, obviously using the dummy IP 192.168.0.1.
iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
REDIRECT tcp -- anywhere anywhere tcp dpt:http redir ports 6081
iptables rule (idea from here):
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-port 6081
Results of curl -I with port 6081
curl -I http://192.168.0.1:6081
HTTP/1.1 200 OK
Date: Wed, 06 Jun 2018 21:45:20 GMT
Server: Apache/2.4.25 (Debian)
Last-Modified: Wed, 06 Jun 2018 21:08:27 GMT
Vary: Accept-Encoding
Content-Type: text/html
X-Varnish: 2
Age: 0
Via: 1.1 varnish (Varnish/6.0)
ETag: W/"29cd-56dff9168052e-gzip"
Accept-Ranges: bytes
Connection: keep-alive
Results of curl -I with no port
curl -I http://192.168.0.1
HTTP/1.1 200 OK
Date: Wed, 06 Jun 2018 21:36:49 GMT
Server: Apache/2.4.25 (Debian)
Last-Modified: Wed, 06 Jun 2018 21:08:27 GMT
ETag: "29cd-56dff9168052e"
Accept-Ranges: bytes
Content-Length: 10701
Vary: Accept-Encoding
Content-Type: text/html
/etc/default/varnish
DAEMON_OPTS="-a :6081 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
/etc/varnish/default.vcl
# Default backend definition. Set this to point to your content server.
backend default {
.host = "127.0.0.1";
.port = "80";
}
What am I missing? Apache is on 80, Varnish is on 6081, 80 traffic is redirected to 6081 where Varnish is listening.
I'm not sure I totally grasp the problem here. Why the redirect from 80 to 6081?
By default, Varnish is exposed on 6081; this is mainly so it does not collide with other existing services running on popular ports like 80.
Given your setup, I'd do it the other way around: start Varnish on port 80 and Apache on 6081 (or any other port for that matter; I'm assuming 8089 further down), and of course make sure that Apache is set as the correct backend for Varnish.
After all, it's the proxy that you'd like to have in front for taking the heat.
E.g.:
/etc/varnish/default.vcl
backend default {
.host = "127.0.0.1";
.port = "8089"; # I will assume Apache runs under 8089.
}
Therefore, something like this:
$ curl -is http://127.0.0.1/foo/bar
will first hit Varnish, which in turn will try to honour the request by asking its backend (the above defined Apache).
Having said this, you can disable the 80 to 6081 redirect.
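A rough sketch of the corresponding changes, assuming Debian paths as in your output (on systemd-based installs the listen port may instead live in the varnish.service unit's ExecStart line):
# /etc/default/varnish - Varnish takes over port 80
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

# /etc/apache2/ports.conf - Apache moves to the assumed port 8089
Listen 8089
Remember to also change the matching <VirtualHost *:80> entries to <VirtualHost *:8089>.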
Why this way?
In my opinion you should use Varnish on port 80 and two sites on Apache, let's say on :8080 and :8081.
Apache: set up 2 vhosts
Site 1: your panel on port 8080
Site 2: your site on port 8081
Varnish: set up 2 backends
Backend 1 ("panel") on 8080 for your web panel
Backend 2 ("site") on 8081 for the actual site
Tell Varnish to pass everything for backend 1 ("panel") straight to Apache on 8080 (so Varnish just hands you over to Apache)
Tell Varnish to cache whatever you like for backend 2 ("site") on 8081
So, in a few words:
the panel is served through Varnish, which passes everything to Apache
the site is served through Varnish, and there you can apply your caching rules (hits/misses etc.); a VCL sketch follows at the end of this answer. Remember to change /etc/default/varnish and set Varnish to listen on port 80.
PS: I have never applied this to a Varnish/Apache combo, but I have done it with Varnish/nginx.
You should check whether Apache can handle this; I doubt it can't.
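As promised, a minimal VCL sketch of the two-backend layout (VCL 4.0 syntax assumed; the hostnames are hypothetical):
vcl 4.0;

backend panel {
    .host = "127.0.0.1";
    .port = "8080";    # Apache vhost for the web panel
}

backend site {
    .host = "127.0.0.1";
    .port = "8081";    # Apache vhost for the actual site
}

sub vcl_recv {
    if (req.http.host ~ "^panel\.example\.com$") {   # hypothetical panel hostname
        set req.backend_hint = panel;
        return (pass);                               # never cache the panel
    } else {
        set req.backend_hint = site;
        # normal caching logic applies to the site
    }
}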
Let me see if I can help sort this out. So you want to run Varnish in parallel with your web server just to try it out. If that's the case, it's not a problem.
First, a note on ports: 6081 is Varnish's default listen port, and 6082 is its management (admin) port; there's a lot you can do remotely over the management port.
Assuming your web server is on port 80, you can configure Varnish to listen on :8080. You could set up your Varnish startup configuration (e.g. /etc/sysconfig/varnish) like this:
NFILES=131072
MEMLOCK=82000
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/mysite.vcl
VARNISH_LISTEN_ADDRESS=            # left empty so -a binds to all interfaces
VARNISH_LISTEN_PORT=8080
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=100
VARNISH_MAX_THREADS=8000
VARNISH_THREAD_TIMEOUT=240
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
VARNISH_STORAGE_SIZE=12G
#VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"
VARNISH_TTL=120
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
-f ${VARNISH_VCL_CONF} \
-T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
-t ${VARNISH_TTL} \
-w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
-u varnish -g varnish \
-S ${VARNISH_SECRET_FILE} \
-s ${VARNISH_STORAGE}"
Then, in your "mysite.vcl" configuration, you can point Varnish at your web server:
backend webserver { # Define one backend
.host = "127.0.0.1"; # IP or Hostname of backend
.port = "80"; # Port for backend listener (Apache, NGINX, etc.)
}
Then just set up iptables to accept traffic for both 8080 and 80, and you can test Varnish on :8080 and the web server on :80 independently. By the way, you should not expose Varnish's management port (6082) to the outside.
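A minimal sketch of the corresponding iptables rules, assuming the INPUT chain's default policy drops unsolicited traffic:
# Allow the test ports: Varnish on :8080 and the web server on :80
iptables -A INPUT -p tcp --dport 80   -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
# No ACCEPT rule for the management port 6082; keep it reachable from localhost only.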
If you decide to go with Varnish, you would put it in front of your web server. Set the Varnish listen port to 80 and your web server to 8080 (or any other port) if they are on the same server. If they are on different servers, you can leave your web server's port at 80; just pull it out of the firewall so it can't be contacted directly from the outside world.
Best of luck!
Ubuntu 16.04.2
varnish-4.1.1
I'm stuck here:
https://varnish-cache.org/docs/4.1/tutorial/starting_varnish.html
It is the very first configuration change in the whole book. It says: change the host to www.varnish-cache.org and reload.
/etc/varnish/default.vcl
vcl 4.0;
backend default {
.host = "www.varnish-cache.org";
.port = "80";
}
I executed:
sudo service varnish restart
sudo service varnish reload
But I still constantly get "Error 503 Backend fetch failed".
I have tried:
$ sudo varnishd -d -f default.vcl
Error:
Failed to create vcl_boot/vgc.so: Permission denied
VCL compilation failed
It seems that compilation fails. Could you help me here?
It's a somewhat broken tutorial for a few reasons:
They ask you to point the backend to a DNS name. The proper way is to specify an IP address in backend definitions.
Whatever you specify (DNS name or IP), Varnish will end up passing along the Host header of the site you access it with and ask the backend server to deliver the site with that hostname.
So here is why you're getting an error when following the tutorial:
You access, e.g., http://localhost/ (or whatever hostname you access your Varnish with).
Varnish then talks to the HTTP server at varnish-cache.org and asks it for http://localhost.
Obviously the varnish-cache.org server has no idea about that hostname and will most likely (depending on its configuration) issue a redirect, an error, etc.; hence the error that you see.
It is best to point it to your own web server instead and do it like this:
vcl 4.0;
backend default {
.host = "127.0.0.1";
.port = "8080";
}
The above assumes that you run a web server (nginx, Apache, etc.) on the same machine as Varnish and that you made it listen on port 8080.
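A quick way to sanity-check that setup (assuming the web server really answers on 8080 and Varnish is still on its default listen port 6081):
# Does the backend answer locally?
curl -I http://127.0.0.1:8080/

# Reload the VCL, then request the same page through Varnish:
sudo service varnish reload
curl -I http://127.0.0.1:6081/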
I have a Magento website on a Linux server (with Varnish cache); some of the product detail pages show this error:
Error 503 Backend fetch failed Guru Meditation: XID: 98757
My website IP is 52.163.xxx.xx
Please find the below details and help me to fix this issue.
/etc/default/varnish
DAEMON_OPTS="-a :8080 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
/etc/varnish/default.vcl
backend default {
.host = "127.0.0.1";
.port = "8080";
}
sudo service varnish restart
Stopping HTTP accelerator varnishd No /usr/sbin/varnishd found running; none killed.
[fail]
Starting HTTP accelerator varnishd [fail]
bind(): Address already in use
bind(): Address already in use
Error: Failed to open (any) accept sockets.
As I understand it, you are running Varnish and the backend web server (say nginx or Apache) on the very same Linux machine, right?
First of all, try to run this command:
sudo netstat -anp | grep LISTEN | grep 8080
And see which process is bound to port 8080 and on which IP.
The first part of your question suggests Varnish is running but just not able to connect to the backend.
But the second part tells me you are not able to start Varnish at all.
So please clarify, and perhaps attach the output of the command above.
Let's continue with the second part, i.e. Varnish not being able to start.
I guess you have a backend server running on 8080, be it nginx, Apache, whatever.
Your Varnish backend config confirms it, after all.
Check that the web server is bound to 127.0.0.1 and not to 0.0.0.0, so public traffic cannot connect directly to the backend web server.
If this is the case, you have to change Varnish's listening ip:port to a non-colliding combination.
You can either:
change Varnish's port to something other than 8080, let's say 80,
change the port of the backend web server to something else if you need 8080 to be public, or
double-check that your backend web server is listening on localhost only and bind Varnish to your public IP instead of 0.0.0.0 (the default, meaning all of the machine's IPs); see the nginx sketch at the end of this answer.
You can do the last option by changing the main Varnish configuration to:
DAEMON_OPTS="-a 52.163.xxx.xx:8080 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
This scenario has one important drawback: if you ever get a new public IP, you have to change it in the main Varnish configuration too. If that is something you can encode into an automation recipe, it shouldn't be a problem. But if you manage it by hand, be sure you have a really good documentation practice or you'll be hunting ghost bugs in the future. :)
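And for binding the backend to the loopback interface only, a minimal nginx sketch (Apache's Listen directive works the same way; the hostname and docroot are hypothetical):
server {
    listen 127.0.0.1:8080;        # loopback only, not reachable from outside
    server_name shop.example.com; # hypothetical hostname
    root /var/www/magento;        # hypothetical docroot
}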
One mistake is having both Varnish and your backend server running on the same port 8080. You have 2 options to solve this:
Most straightforward and simple: adjust Varnish's DAEMON_OPTS to listen on port 80 (see the sketch after this list).
It may still work on the same ports, provided that you make Varnish and your backend server listen on different interfaces:
Varnish would normally listen on external interface. Thus, adjust your Varnish listen parameter to be bound to specific IP: DAEMON_OPTS="-a 52.163.xxx.xx:8080 ...
Bind your backend server (Apache, Nginx, whatever) to listen only on the loopback interface, 127.0.0.1.
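A sketch of option 1, keeping everything else from your current /etc/default/varnish and only changing the -a listen port:
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"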
Also, your VCL is "empty"; you should be using the corresponding plugin for Magento, which will ensure that Varnish actually caches things by generating the correct VCL file for you:
Magento 1.x: Turpentine plugin
Magento 2.x: .. is able to generate the VCL from the admin backend of your Magento installation.
I bought some domains at godaddy.com (e.g. mydomain.com) for my droplet at digitalocean.com (e.g. 199.216.110.210). I run a Node.js application on port 80 on the droplet. From godaddy.com, I forward with masking mydomain.com to 199.216.110.210, and I can see my app.
Now I want to run several Node.js applications on 199.216.110.210 on different ports, using nginx as a reverse proxy. I followed the instructions here (www.digitalocean.com/community/articles/how-to-host-multiple-node-js-applications-on-a-single-vps-with-nginx-forever-and-crontab).
My nginx .conf file is
server {
listen 80;
server_name mydomain.com;
location / {
proxy_pass http://localhost:3000;
# same as in the link above
}
}
(and I am sure it is being read: if I put an error in it, nginx reports it on start).
I start the Node.js application on port 3000.
I try mydomain.com, but nginx always shows the welcome page.
mydomain.com:3000 does not work either;
it works only with 199.216.110.210:3000.
From godaddy.com, if I forward with masking mydomain.com to 199.216.110.210:3000, I can see my app.
But I do not like this solution. I would like the domains to point to my droplet without specifying the port, and to manage them with nginx.
How can I get a domain name to use with nginx as a reverse proxy to select my apps, mapping different domains to different ports? I suppose that forwarding from godaddy.com is somehow limited.
On your server, go to /var/log/nginx and do a tail -F *log. Now, in another shell, restart nginx.
I suspect that your domain name is too long and nginx is complaining that its server_names_hash_bucket_size is too small. If this is the case, open /etc/nginx/nginx.conf and make sure that the line
server_names_hash_bucket_size 64;
exists, has a value of 64, and is uncommented. Then do sudo service nginx reload and check whether everything works as expected.
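For reference, that directive belongs inside the http block of /etc/nginx/nginx.conf, roughly like this:
http {
    server_names_hash_bucket_size 64;

    # ... the rest of your existing http-level settings and includes stay unchanged
}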
I am going to detail, step by step, how I am able to do it on my AWS EC2 instance:
I set up a DNS record for my instance, so I can point mydomain.com to 192.168.123.123 (my specific IP).
Inside my instance I have forever running my Node.js app on port 3000 (I test that it works by issuing curl localhost:3000 from the command line).
I then download this .sh file in order to properly start nginx: curl -o nginxStarter.sh https://gist.githubusercontent.com/renatoargh/dda1fbc854f7957ec7b3/raw/c0bc1a1ec76e50cdb4336182c53a0b222edb6c0e/start.sh
I configure nginx with this configuration file, placed at /etc/nginx/nginx.conf.
I start nginx with this command: sudo sh nginxStarter.sh start
PS: For multiple apps, just replicate the lines that route the requests to specific ports, very easy! If not needed, you can eliminate the lines regarding SSL. A sketch of that kind of configuration follows.
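Not the exact file from the gist above, but a minimal sketch of this kind of nginx configuration, assuming two hypothetical subdomains and Node.js apps on ports 3000 and 3001:
# /etc/nginx/conf.d/node-apps.conf (sketch; the domain names and ports are examples)
server {
    listen 80;
    server_name app1.mydomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name app2.mydomain.com;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}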
I'm trying to configure Varnish Cache on my Ubuntu VPS. I've got it installed and have tried following setup guides, googling, etc., but my headers never show that Varnish is caching.
I am running a Node server on port 3000, BUT port 3000 is forwarded to port 80, so I'm not exactly sure how this plays with Varnish caching. Here are the relevant config options I have changed in Varnish; I haven't touched anything else.
File: /etc/varnish/default.vcl
backend default {
.host = "127.0.0.1";
.port = "3000";
}
File: /etc/default/varnish
DAEMON_OPTS="-a :80 \
-T localhost:80 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
If that is your entire VCL file, then there are multiple reasons why Varnish may not be caching. First, you should read about the default VCL.
The default VCL only caches GET and HEAD HTTP requests, and won't cache any page that has ANY cookies. Since most sites set some cookies nowadays (such as Google Analytics tracking cookies), this means most sites won't be cached by the default VCL.
You should create your own VCL, specific to your site. For example, here is the documentation on removing cookies. You could remove cookies that don't affect the page. The reason Varnish won't cache pages with cookies is to avoid caching pages with login cookies that may change the page contents (for example, logged-in users see their names; you don't want that page cached and served to everyone).
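As an illustration only (not a drop-in config), here is a VCL sketch that strips a couple of common tracking cookies so those requests become cacheable; it assumes VCL 4.0, keeps your existing backend on port 3000, and the cookie names are just examples:
vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "3000";
}

sub vcl_recv {
    if (req.http.Cookie) {
        # Strip Google-Analytics-style cookies that do not affect the page content
        set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__utm[a-z]+|_ga|_gid)=[^;]*", "");
        # If nothing meaningful is left, drop the header so the default logic can cache
        if (req.http.Cookie ~ "^\s*$") {
            unset req.http.Cookie;
        }
    }
}
With the tracking cookies gone and no other cookies present, GET and HEAD requests fall through to the default caching behaviour.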