Varnish with multiple sites and multiple IPs

I am trying to run Varnish for two domains, each on a different IP and each configured with its own .vcl file.
I succeeded in writing all the config files, so that Varnish listens on each IP and Apache listens for Varnish on two ports.
Everything looks great, BUT!
When I load the first domain in a browser, it redirects (302) to the second domain.
In my previous setup, the first domain worked without Varnish and the second domain worked with Varnish.
Can anybody suggest a solution or a debugging approach?
Thanks

I have this setup working without issues. I am using one VCL file (the logic for both sites/backends is almost the same). The server has multiple IPs, Apache uses them all, and it serves different sites on different IPs. Some of the IPs have virtual hosts on them as well.
First, check that your Apache installation is valid and that there are no redirects:
    curl -I -L http://hostname1.com
Second, in your VCL, define the backends (the first example is for when backend1 is a virtual host; the second is for when backend2 is not a vhost and is directly accessible at its IP):
    backend backend1 {
        .host = "127.0.0.1";
        .port = "81";
        .host_header = "hostname1.com";
    }
    backend backend2 {
        .host = "192.168.1.1";
        .port = "80";
    }
Third, in your vcl_recv you will have something like this:
    if (req.http.host ~ "^(www\.)?hostname1\.com$") {
        set req.http.host = "hostname1.com";
        set req.backend_hint = backend1;
    }
    else if (req.http.host ~ "^(www\.)?hostname2\.com$") {
        set req.http.host = "hostname2.com";
        set req.backend_hint = backend2;
    }
That's it.
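To verify which backend answers each hostname, a quick check along these lines can help (the IPs below are placeholders for your two Varnish listen addresses, and the varnishlog query syntax assumes Varnish 4 or newer):
    # Request each site through its own Varnish IP with an explicit Host header
    curl -I -H "Host: hostname1.com" http://192.0.2.10/
    curl -I -H "Host: hostname2.com" http://192.0.2.11/
    # Watch the backend selection for one hostname in the Varnish log
    varnishlog -g request -q 'ReqHeader:Host ~ "hostname1.com"'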

Related

Varnish host_header is not sent to backends

I am trying to run Varnish with two backends that need exact hostnames, but my nginx is receiving a localhost Host header.
This is my configuration:
probe healthcheck {
    .url = "/";
    .interval = 5s;
    .timeout = 15s;
    .window = 5;
    .threshold = 3;
}
# Define the list of backends (web servers).
# Port 443 Backend Servers for SSL
backend bimer1 {
    .host = "nginx-proxy";
    .host_header = "site1.example.com.br";
    .port = "80";
    .probe = healthcheck;
}
backend bimer2 {
    .host = "nginx-proxy";
    .host_header = "site2.example.com.br";
    .port = "80";
    .probe = healthcheck;
}
This is my nginx access log:
bimer-cache-nginx-ssl-proxy_1 | 172.17.0.3 - - [21/Jun/2017:13:41:47 +0000] "POST /ws/Servicos/Geral/Localizacoes.svc/REST/LocalizarPessoas HTTP/1.1" 502 575 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36" <-> localhost 172.17.0.1, 172.17.0.3
It looks like the host_header parameter on the backend is not applied to regular requests, although the health check is working well.
Varnish is a transparent HTTP proxy. It will forward to the backend whatever Host header was sent to it by the client (your browser). So if you accessed it via http://localhost/, then localhost is what your backend will see in the Host header.
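As a quick way to see this, you can send the intended Host header explicitly through Varnish and watch what nginx logs (hostname taken from the question):
    curl -I -H "Host: site1.example.com.br" http://localhost/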
Additionally, you should almost never use DNS names in Varnish backend definitions. It should look like this instead:
    backend bimer1 {
        .host = "1.2.3.4";
        # ... etc.
    }
At present, both of your configured backends resolve to the same machine, nginx-proxy. Also, the access.log result is not from the health checks: the health checks you have configured would show up as requests to the root URL /.
Perhaps you have misunderstood the Varnish configuration. If your plan was to serve multiple websites through the same machine, then you should use only one backend for both; multiple backends are for multiple machines.
The answer from Danila is correct; however, it doesn't really tell you how to solve the original problem, i.e. how to get Varnish to use the host_header value in regular (non-probe) requests.
The solution is to remove the Host header using unset. However, the built-in vcl_recv (which is appended to your own vcl_recv) performs a sanity check to make sure this header is set, and returns a 400 error if it is not. So what I did was remove the header in vcl_backend_fetch instead:
    sub vcl_backend_fetch {
        # With no Host header on the backend request, Varnish falls back
        # to the .host_header defined on the selected backend.
        unset bereq.http.host;
    }
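If you only want this behaviour for the backends that define .host_header, a minimal sketch (backend names taken from the question) would be:
    sub vcl_backend_fetch {
        # Drop the client's Host header only for these two backends,
        # so any other backend still sees the original value.
        if (bereq.backend == bimer1 || bereq.backend == bimer2) {
            unset bereq.http.host;
        }
    }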

Limit users on nginx using wildcard

I'm starting to work with nginx to reverse proxy my app for internet access outside my customer's network.
I managed to make it work, limiting the URLs that need to be exposed etc., but one thing is still missing to finish my work.
I want to limit user access based on the username. But instead of creating an if for every user I want to block, I would like to use a wildcard, because all the users I want to block end with a specific string: #saspw
Sample of my /etc/nginx/conf.d/reverseproxy.conf:
    server {
        listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
        server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;
        if ($remote_user = '*#saspw'){
            return 401;
        }
        location /SASVisualAnalyticsTransport {
            proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
        }
    }
In the $remote_user if, I would like all users whose usernames end in #saspw to get a 401 error (which I will change to a 404 later).
It only works if I put the whole username, like joe#saspw. Using a wildcard (*, ?) does not work.
Is there a way to make $remote_user resolve wildcards that way?
Thank you,
Joe
Use the nginx map module:
    map $remote_user $is_denied {
        default 0;
        "~.*#saspw" 1;
    }
    server {
        listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
        server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;
        if ($is_denied){
            return 401;
        }
        location /SASVisualAnalyticsTransport {
            proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
        }
    }
It lets you use regexes. Note that map must be placed outside the server directive, at the http level.
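To test, assuming HTTP basic auth is what populates $remote_user (the credentials below are hypothetical):
    # A user whose name ends in #saspw should now be rejected
    curl -I -u 'joe#saspw:secret' http://mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com/SASVisualAnalyticsTransport/
    # Expected response status: 401 Unauthorized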

Dividing express routes among Node JS Clusters

I have a large set of routes in a Node.js application that I'm trying to scale to multiple CPU cores (via Node.js clusters).
The plan I had in mind was to have different workers handling a different set of express.js routes. For example:
/api/ requests handled by WorkerA
/admin/ handled by WorkerB
/blog/ handled by WorkerC
etc
Simply using a conditional on the worker ID is not sufficient, since requests can still land on the wrong worker. Also, the processes all run on the same port, so I can't just match and proxy_pass on the URL from inside nginx.
At this point, I'm thinking about swapping out the cluster routing (from master to worker) to match on the URL and route to the correct worker, instead of just using the built-in round-robin approach. But this seems a bit hacky, and I'm wondering if anyone else has solved this or might have any other ideas.
My solution was to run multiple Express apps listening on different ports, and to set up an nginx server in front to proxy requests.
Say you have three Express apps, each handling a specific set of routes and listening on a separate port (8081, 8082, 8083); and of course, they should run in cluster mode:
    // API app used to handle /api routing
    apiApp.listen(8081);
    // Admin app used to handle /admin routing
    adminApp.listen(8082);
    // Blog app used to handle /blog routing
    blogApp.listen(8083);
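For completeness, here is a minimal sketch of running one of these apps in cluster mode (the ./apiApp module path and the one-worker-per-core choice are assumptions):
    // cluster-api.js - fork one worker per CPU core for the API app
    var cluster = require('cluster');
    var os = require('os');

    if (cluster.isMaster) {
        // The master process only forks and supervises workers.
        for (var i = 0; i < os.cpus().length; i++) {
            cluster.fork();
        }
        cluster.on('exit', function (worker) {
            console.log('worker ' + worker.process.pid + ' died, restarting');
            cluster.fork();
        });
    } else {
        // Each worker runs its own copy of the Express app on the same port;
        // the cluster module distributes incoming connections among them.
        var apiApp = require('./apiApp'); // hypothetical module exporting the app
        apiApp.listen(8081);
    }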
And configure the nginx server to proxy the requests:
    server {
        # let the nginx server run on a public port
        listen 80;
        location /api {
            proxy_pass http://127.0.0.1:8081;
        }
        location /admin {
            proxy_pass http://127.0.0.1:8082;
        }
        location /blog {
            proxy_pass http://127.0.0.1:8083;
        }
    }
proxy_pass simply tells nginx to forward requests for /api to the server listening on port 8081. You can check the full nginx documentation on proxy_pass for details.

Server-sent events behind varnish-cache: messages not sent or never pushed

I have backends that use Redis pub/sub to publish messages to subscribers. This works very well behind nginx alone, but when I place Varnish in front of my nginx, messages are never pushed to the browsers, although they are being published by the Go servers.
My Varnish config is the default installed by apt-get, using a VCL config. I updated the default config to point to my nginx:
    backend default {
        .host = "NGINX_url";
        .port = "80";
    }
Other than this, I left everything commented out.
Sorry if I have asked this twice, on the forums and here. I think Varnish is a great and awesome piece of software, and I'm eager to implement it on our production apps.
Thank you in advance.
When pushing messages from the server to the browsers, I suppose you are using a WebSocket. To use WebSockets with Varnish you have to set up the following in your VCL:
    sub vcl_pipe {
        if (req.http.upgrade) {
            set bereq.http.upgrade = req.http.upgrade;
        }
    }
    sub vcl_recv {
        if (req.http.Upgrade ~ "(?i)websocket") {
            return (pipe);
        }
    }
https://www.varnish-cache.org/docs/3.0/tutorial/websockets.html
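Note that the question is about server-sent events rather than WebSockets; an SSE stream is a long-lived plain-HTTP response, so it will not match the Upgrade check above. A minimal sketch that bypasses the cache for SSE, assuming the clients send the conventional Accept: text/event-stream header:
    sub vcl_recv {
        # Server-sent event streams must not be cached or buffered;
        # pipe them straight through to the backend.
        if (req.http.Accept ~ "text/event-stream") {
            return (pipe);
        }
    }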

How to run multiple StrongLoop LoopBack apps on the same server?

I'm currently running two StrongLoop LoopBack apps (Node.js apps) on a single server with different ports. Both apps were created using slc lb project and slc lb model from the command line.
Is it possible to run these apps on a single port with different paths and/or subdomains? If so, how do I do that on a Linux machine?
Example:
http://api.server.com:3000/app1/ for the first app.
http://api.server.com:3000/app2/ for the second app.
Thanks.
Since LoopBack applications are regular Express applications, you can mount them on a path of the master app:
    var loopback = require('loopback');
    var app1 = require('path/to/app1');
    var app2 = require('path/to/app2');

    var root = loopback(); // or express();
    root.use('/app1', app1);
    root.use('/app2', app2);
    root.listen(3000);
The obvious drawback is the high runtime coupling between app1 and app2: whenever you upgrade either of them, you have to restart the whole server (i.e. both of them). Also, a fatal failure in one app brings down the whole server.
The solution presented by @fiskeben is more robust, since each app is isolated.
On the other hand, my solution is probably easier to manage (you have only one Node process instead of nginx plus per-app Node processes) and it also allows you to configure middleware shared by both apps:
    var root = loopback();
    root.use(express.logger());
    // etc.
    root.use('/app1', app1);
    root.use('/app2', app2);
    root.listen(3000);
You would need some sort of proxy in front of your server, for example nginx. nginx will listen on a port (say, 80) and redirect incoming requests to other servers on the machine based on rules you define (hostname, path, headers, etc.).
I'm no expert on nginx, but I would configure it something like this:
    server {
        listen 80;
        server_name api.server.com;
        location /app1 {
            proxy_pass http://localhost:3000;
        }
        location /app2 {
            proxy_pass http://localhost:3001;
        }
    }
nginx also supports passing query strings, paths and everything else, but I'll leave it up to you to put the pieces together :)
Look at the nginx proxy module documentation for details.
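One detail to watch: with proxy_pass http://localhost:3000 (no URI part), nginx forwards the original /app1 path to the app, so the app must expect that prefix. If each app expects to live at /, a trailing slash on both sides makes nginx strip the matched prefix; a sketch:
    location /app1/ {
        # nginx replaces the matched /app1/ prefix with / before proxying
        proxy_pass http://localhost:3000/;
    }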
