I'm trying to run Varnish with two backends that each require an exact hostname, but my nginx is receiving a localhost Host header.
This is my configuration:
probe healthcheck {
    .url = "/";
    .interval = 5s;
    .timeout = 15s;
    .window = 5;
    .threshold = 3;
}

# Define the list of backends (web servers).
# Port 443 backend servers for SSL
backend bimer1 {
    .host = "nginx-proxy";
    .host_header = "site1.example.com.br";
    .port = "80";
    .probe = healthcheck;
}

backend bimer2 {
    .host = "nginx-proxy";
    .host_header = "site2.example.com.br";
    .port = "80";
    .probe = healthcheck;
}
This is my nginx access log:
bimer-cache-nginx-ssl-proxy_1 | 172.17.0.3 - - [21/Jun/2017:13:41:47 +0000] "POST /ws/Servicos/Geral/Localizacoes.svc/REST/LocalizarPessoas HTTP/1.1" 502 575 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36" <-> localhost 172.17.0.1, 172.17.0.3
It looks like the host_header parameter on the backend is not applied to regular requests, although the health checks work fine.
Varnish is a transparent HTTP proxy: it forwards to the backend whatever Host header the client (your browser) sent. So if you accessed it via http://localhost/, then localhost is what your backend will see in the Host header.
Additionally, you should almost never use DNS names in Varnish backend definitions. It should look like this instead:
backend bimer1 {
    .host = "1.2.3.4";
    # ... etc.
At present both of your configured backends resolve to the same machine, nginx-proxy. Also, the access.log line you posted is not from a health check: the health checks you configured would show up as requests to the root URL /.
Perhaps you have misunderstood how Varnish backends work. If your plan is to serve multiple websites through the same machine, you should use a single backend for all of them; multiple backends are for multiple machines.
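For illustration, a minimal sketch of the single-backend approach (the IP is a placeholder; nginx then separates the sites by the forwarded Host header):

backend web {
    # Hypothetical IP of the nginx-proxy machine
    .host = "172.17.0.2";
    .port = "80";
    .probe = healthcheck;
}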
The answer from Danila is correct; however, it doesn't really tell you how to solve the original problem, i.e. how to get Varnish to use the host_header value in regular (non-probe) requests.
The solution is to remove the Host header using unset. However, the built-in vcl_recv (which is appended to your own vcl_recv) performs a sanity check to make sure this header is set, and returns a 400 error if it is not. So what I did was remove the header in vcl_backend_fetch instead:
sub vcl_backend_fetch {
    unset bereq.http.host;
}
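Putting it together with the backends from the question, a minimal sketch (assuming Varnish 4 or later, where .host_header is used as a fallback when the backend request carries no Host header):

vcl 4.0;

backend bimer1 {
    .host = "nginx-proxy";
    .port = "80";
    # Used by probes, and by regular requests once the Host header is unset
    .host_header = "site1.example.com.br";
    .probe = healthcheck;
}

sub vcl_backend_fetch {
    # With the client's Host header removed, Varnish falls back to .host_header
    unset bereq.http.host;
}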
Note: I was able to get things operational by taking CloudFront out of the picture for the server.com domain, using just Route 53 and the Elastic Beanstalk environment. It would still be great to know why CloudFront was blocking this, but it is not an immediate concern for development.
I am serving a static Node.js Socket.IO client from an S3 bucket using CloudFront and Route 53. I am attempting to get this client to talk to a Node.js web server running on Elastic Beanstalk. The web server sits behind an SSL certificate generated by AWS Certificate Manager, using a Route 53 domain and CloudFront.
Using an HTTP client and connecting directly to the Beanstalk environment, I see the desired functionality. However, when I try to move the client and server to SSL/HTTPS, I receive this (log from /var/log/nginx/error.log on the Elastic Beanstalk instance):
"GET /socket.io/ HTTP/1.1" 400 51 "-" "Amazon CloudFront"
Here is the client code that runs from the static S3 HTTPS domain:
import ioClient from "socket.io-client";
const ENDPOINT = "https://server.com";
export const socket = ioClient(ENDPOINT);
Here is the server side. process.env.port is set to 8080, and I can verify through the Elastic Beanstalk logs that the app is listening on 8080.
const express = require("express");
const http = require("http");
const socket_io = require("socket.io");
const index = require("./routes/index");

const app = express();
app.use(index);

// Attach Socket.IO to the underlying HTTP server
const server = http.createServer(app);
const io = socket_io(server);

const port = process.env.port || 4001;
server.listen(port, () => console.log(`http server listening on port ${port}`));
Inside the ALB I have an HTTPS listener set up on port 443 with an AWS Certificate Manager (ACM) SSL certificate. The listener maps 443/HTTPS to a process on 8080/HTTP, from where I think nginx should be acting as a reverse proxy to my Socket.IO listener on 8080.
(Screenshot: Listeners & Processors)
In the root of my node project folder I have a .ebextensions directory and inside that a file named 01_files.config with these contents:
files:
  "/etc/nginx/conf.d/websocketupgrade.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      proxy_pass http://localhost:8080;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_http_version 1.1;
      proxy_set_header Host $host;
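As an aside, bare proxy_* directives in a dropped-in conf.d file normally have to end up inside a location block in the server context before nginx will accept them; a hedged sketch of what the file contents might need to look like (path and port from the question; where the file must live so that nginx includes it at the right level varies by Elastic Beanstalk platform version):

# Sketch only: assumes the platform includes this file inside a server block
location /socket.io/ {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}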
From the comments in this post: socket.io handshake return error "Transport unknown", I found the following Socket.IO error codes:
engine.io message types:
open = 0
close = 1
ping = 2
pong = 3
message = 4
upgrade = 5
noop = 6

socket.io message types:
connect = 0
disconnect = 1
event = 2
ack = 3
error = 4
binary_event = 5
binary_ack = 6
So, 400 51 could possibly mean "Bad request, upgrade-disconnect."
Here is the response body:
{"code":0,"message":"Transport unknown"}
And here is the error from the client app in the browser:
polling-xhr.js:268 GET https://server.com/socket.io/?EIO=3&transport=polling&t=NIgQkyr 400
which looks like a failed polling request on the XHR transport layer.
I received this solution from AWS support:
I have discussed your case with a CloudFront (CF) expert and I have some information for you.

After going through the CF setup with my colleague, we noticed that CF is not forwarding the Host header to Elastic Beanstalk (EB). This means that if the connection between CF and EB is established over HTTPS, it will fail. To fix this, we had to take a look at the CF policies. We noticed that you are making use of Cache Policies. In this policy there is a setting for Headers; here I would recommend that you specify the Host header. This way, it secures the HTTPS connection against the origin's certificate.

If that does not work, there is something else I could suggest. The error you were seeing (GET /socket.io/ HTTP/1.1" 400 51 "-" "Amazon CloudFront" "(redacted IP, redacted IP)") should have returned an "X-Amz-Cf-Id" ID along with a value (something like "X-Amz-Cf-Id: jkahDAKJSHDAjhAKSJDHasd=="). Since it did not, I suspect something may have gone wrong with the websocket request parameters. For this I would recommend taking a look at our documentation and ensuring that the correct request parameters required for CloudFront are being used. You can find a link to that documentation here: [1].

If all else fails, I would recommend making a request to your CF distribution and recording the key-value pair that is returned in the response header from CF. It should look something like "X-Amz-Cf-Id: jkahDAKJSHDAjhAKSJDHasd==" (the same ID and value I mentioned earlier). This value allows us as support engineers to pull the internal logs for CF using our tooling, and it must be recorded at the time of the request. Once you have this key-value pair, you can open a support ticket with the CloudFront team, provide them with the error details as well as the key-value pair, and a CF engineer would be more than happy to assist you. Additionally, you could reference this case (case ID = 7399701121) on the new case so that the engineer assisting you has more context.

To sum up, my first recommendation is that you forward the Host header. Second is to use the request parameters required for CloudFront. And last, if you encounter another error after that, record the value from the HTTP response given by CloudFront for the error. Since I am not a CloudFront expert, I would suggest opening a new CloudFront case and providing that key-value pair in the message.

I hope you find this information useful. If you have any more questions or concerns, please feel free to reach out.
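As a concrete version of that last suggestion, the X-Amz-Cf-Id response header can be captured with a plain curl request (domain and query string taken from the question):

curl -sI "https://server.com/socket.io/?EIO=3&transport=polling" | grep -i x-amz-cf-id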
I'm starting to work with Nginx to reverse proxy my app for internet access outside my customer's network.
I managed to make it work, limiting the URLs that need to be exposed, etc., but one thing is still missing to finish my work.
I want to limit user access based on the username. But instead of creating an if for every user I want to block, I would like to use a wildcard, because all the usernames I want to block end with a specific string: #saspw
Sample of my /etc/nginx/conf.d/reverseproxy.conf
server {
    listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
    server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;

    if ($remote_user = '*#saspw'){
        return 401;
    }

    location /SASVisualAnalyticsTransport {
        proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
    }
}
With the $remote_user if, I would like all users whose username ends in #saspw to get a 401 error (which I will change to a 404 later).
It only works if I put the whole username, like joe#saspw; using a wildcard (* or ?) does not work.
Is there a way to make $remote_user match wildcards like that?
Thank you,
Joe
Use the nginx map module:
map $remote_user $is_denied {
    default 0;
    "~.*#saspw" 1;
}

server {
    listen 80; # Proxy traffic for SAS Visual Analytics Transport Services on HTTP
    server_name mcdonalds-va-reverseproxy.cons.sashq-r.openstack.sas.com;

    if ($is_denied){
        return 401;
    }

    location /SASVisualAnalyticsTransport {
        proxy_pass https://mtbis.mcdonalds.com.au:8343/SASVisualAnalyticsTransport/;
    }
}
It lets you use regexes. Note that map must be placed outside the server directive, at the http level.
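If you want to be strict that #saspw only matches as a suffix of the username, a hedged variant of the same map, anchoring the regex at the end of the string:

map $remote_user $is_denied {
    default      0;
    "~#saspw$"   1;   # matches only when the username ends in #saspw
}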
I am trying to run Varnish for two domains, each on a different IP and each configured with its own .vcl file.
I succeeded in writing all the config files, so that Varnish listens on each IP and Apache listens for Varnish on two ports.
Everything looks great, BUT!
When I load the first domain in a browser, it forwards (302) to the second domain.
My previous setup worked with the first domain running without Varnish and the second domain behind Varnish.
Can anybody suggest a solution or a debugging approach?
Thanks
I have this setup working without issues. I am using one VCL file (the logic on both sites/backends is almost the same). The server has multiple IPs, Apache uses them all, and it serves different sites on different IPs. Some of the IPs have virtual hosts on them as well.
First, check that your Apache installation is valid and that there are no redirects:
curl -I -L http://hostname1.com
Second, in your VCL, define the backends (the first example is for when backend1 is a virtual host; the second is for when backend2 is not a vhost and is directly accessible at its IP):
backend backend1 {
    .host = "127.0.0.1";
    .port = "81";
    .host_header = "hostname1.com";
}

backend backend2 {
    .host = "192.168.1.1";
    .port = "80";
}
Third, in your vcl_recv you will have something like this:
if (req.http.host ~ "^(www\.)?hostname1\.com$") {
    set req.http.host = "hostname1.com";
    set req.backend_hint = backend1;
}
elseif (req.http.host ~ "^(www\.)?hostname2\.com$") {
    set req.http.host = "hostname2.com";
    set req.backend_hint = backend2;
}
That's it.
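To check which backend answers for each domain, you can test against each Varnish IP directly (hostnames from the example; the 192.0.2.x addresses are placeholders for the IPs Varnish listens on):

curl -I -H "Host: hostname1.com" http://192.0.2.10/
curl -I -H "Host: hostname2.com" http://192.0.2.11/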
I have a single Node file which does the following:
It listens on two ports: 80 and 443 (for HTTPS).
It redirects connections on 80 to 443.
On 443 is a reverse proxy that does round-robin forwarding to several local servers over plain HTTP.
The problem I have is that on the actual target servers I am unable to get the real remote IP address of the browser.
I get the address of the reverse proxy.
The request is made by the reverse proxy, so that's expected, I guess.
So, I did the following in the reverse proxy (only relevant code lines shown):
proxy = httpProxy.createServer();

var https_app = express();
https.createServer(sslCerts, https_app).listen(443, function () {
    ...
});

https_app.all("/", function(req, res) {
    ...
    //res.append('X-Forwarded-For', req.connection.remoteAddress);
    proxy.web(req, res, {target: local_server});
    ...
});
I need to do something like res.append('X-Forwarded-For', req.connection.remoteAddress), but on the request the proxy sends to the target server, i.e. in its request headers. The issue of setting the address is secondary; I first need to set the header itself so that the target server can read it. The proxy itself does not set this header, which I think it should by default. Or should it? Or does it, and I am doing something wrong when reading it?
I cannot see your definition of proxy, but I'm assuming it is the same as in the following code, using express-http-proxy. I expect this alternative method will work for you:
var proxy = require('express-http-proxy');
var myProxy = proxy('localhost:443', {
    forwardPath: function (req, res) {
        return require('url').parse(req.url).path;
    }
});
And then simply:
app.use("/*", myProxy);
I hope this helps.
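Alternatively, since the question's proxy is created with httpProxy.createServer(), it looks like node-http-proxy, and that library has a built-in xfwd option that adds the X-Forwarded-* headers for you. A minimal sketch (https_app and local_server are names from the question):

var httpProxy = require('http-proxy');

// xfwd: true makes http-proxy add X-Forwarded-For, X-Forwarded-Port
// and X-Forwarded-Proto to every proxied request.
var proxy = httpProxy.createProxyServer({ xfwd: true });

https_app.all("/", function (req, res) {
    proxy.web(req, res, { target: local_server });
});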
I have backends that use Redis pub/sub to publish messages to subscribers. This works very well behind nginx alone, but when I place Varnish in front of my nginx, messages are never pushed to the browsers, although they are being published by the Go servers.
My Varnish config is the default one installed by apt-get, using a VCL config. I updated the default config to point to my nginx:
backend default {
    .host = "NGINX_url";
    .port = "80";
}
Other than this, I left everything commented out.
Sorry if I have asked this twice, on the forums and here. I think Varnish is a great and awesome piece of software, and I'm eager to implement it on our production apps.
Thank you in advance.
When pushing messages from the server to the browsers, I suppose you are using a websocket. To use websockets with Varnish you have to set up the following in your VCL. The return (pipe) turns the connection into a raw bidirectional tunnel, which a long-lived websocket needs, and the Upgrade header has to be copied onto the backend request explicitly:
sub vcl_pipe {
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
}

sub vcl_recv {
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}
https://www.varnish-cache.org/docs/3.0/tutorial/websockets.html
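To verify that the handshake makes it through Varnish to nginx, you can send the upgrade headers by hand (varnish-host is a placeholder for wherever Varnish listens; a 101 Switching Protocols response indicates the pipe works, and --max-time just stops curl from waiting on the open socket):

curl -i --max-time 5 \
     -H "Connection: Upgrade" -H "Upgrade: websocket" \
     -H "Sec-WebSocket-Version: 13" \
     -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
     "http://varnish-host/socket.io/?EIO=3&transport=websocket"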