Varnish PURGE allowed only from a public IP

My setup is the following:
Nginx(443 https) -> Varnish(port 6081) -> Nginx(port 83 - the app itself)
#nginx https conf:
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 1800;
proxy_request_buffering off;
proxy_buffering off;
proxy_pass http://127.0.0.1:6081;
}
#part of default.vcl conf:
backend default {
.host = "127.0.0.1";
.port = "83";
}
Of course, there's another nginx config for port 83, which is the application itself.
I've configured it this way, so I can run varnish behind HTTPS.
Trying to setup purge to invalidate cache for specific endpoints, I've configured the following in the default.vcl:
acl purge {
"127.0.0.1";
"some_public_ip";
}
sub vcl_recv {
if (req.method == "PURGE") {
if (!client.ip ~ purge) {
return (synth(405, "This IP is not allowed to send PURGE requests."));
}
return (purge);
}
}
Everything works; I can execute:
curl -X PURGE -I "https://web_server/index.php"
The issue is that if I remove "127.0.0.1" from the ACL and leave only "some_public_ip", it no longer works: it returns "This IP is not allowed to send PURGE requests".
I want PURGE to work for "some_public_ip" only.
Is that possible?

Because Nginx sits in front of Varnish, every connection Varnish sees comes from localhost, so client.ip always contains 127.0.0.1. The actual client IP address is stored in the X-Forwarded-For header.
You can extract it by calling std.ip(req.http.X-Forwarded-For, client.ip). This converts the value of X-Forwarded-For from a string to an IP address, and falls back to client.ip if the conversion fails.
Because Nginx already sets an X-Forwarded-For header, Varnish appends the IP address of its own client (here 127.0.0.1) to it.
My varnishlog output shows the following log lines:
- ReqUnset X-Forwarded-For: 178.118.13.77
- ReqHeader X-Forwarded-For: 178.118.13.77, 127.0.0.1
We have to remove the second part in order to get the right IP address. We can do this using the following VCL snippet:
set req.http.X-Forwarded-For = regsub(req.http.X-Forwarded-For,"^([^,]+),.*$","\1");
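For illustration, here is a hypothetical JavaScript equivalent of that regsub call (not part of the Varnish config); it keeps only the left-most X-Forwarded-For entry:

```javascript
// Hypothetical JS mirror of the VCL regsub above:
// keep only the first (left-most) X-Forwarded-For entry.
function firstForwardedIp(xff) {
  return xff.replace(/^([^,]+),.*$/, '$1').trim();
}

console.log(firstForwardedIp('178.118.13.77, 127.0.0.1')); // 178.118.13.77
```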
In the end, this is the VCL code we need to make this work:
vcl 4.0;
import std;
acl purge {
"127.0.0.1";
"some_public_ip";
}
sub vcl_recv {
if(req.http.X-Forwarded-For ~ ","){
set req.http.X-Forwarded-For = regsub(req.http.X-Forwarded-For,"^([^,]+),.*$","\1");
}
if (req.method == "PURGE") {
if (!std.ip(req.http.X-Forwarded-For,client.ip) ~ purge) {
return (synth(405, "This IP is not allowed to send PURGE requests."));
}
return (purge);
}
}

Related

Use Varnish cache tool with Node.js and Nginx

I have two Node.js servers on two different ports (3636, 4646) and use Nginx as a reverse proxy for them. My problem is: how do I add Varnish as a cache in front of both servers?
/etc/nginx/sites-enabled/yourdomain:
upstream app_yourdomain {
server 127.0.0.1:3636;
keepalive 8;
}
server {
listen 0.0.0.0:8080;
server_name yourdomain.com yourdomain;
access_log /var/log/nginx/yourdomain.log;
# pass the request to the node.js server with the correct headers
# and much more can be added, see nginx config options
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://app_yourdomain/;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
/etc/nginx/sites-enabled/domain2:
server {
listen 8080;
server_name domain2.com;
access_log /var/log/nginx/domain2.access.log;
location / {
proxy_pass http://127.0.0.1:4646/;
}
}
varnish config file:
DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
but when I run curl -I http://localhost there is no sign of Varnish:
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Mon, 20 Nov 2017 12:22:17 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 18 Sep 2017 06:18:46 GMT
Connection: keep-alive
ETag: "59bf6546-264"
Accept-Ranges: bytes
/etc/varnish/default.vcl:
backend default {
.host = "127.0.0.1";
.port = "8080";
}
Is there anything I am missing?
That's hard to tell without seeing your default.vcl.
Maybe you have something like this:
sub vcl_deliver {
unset resp.http.Via;
unset resp.http.X-Varnish;
}
Also make sure to have the correct backend config:
backend default {
.host = "localhost";
.port = "8080";
}
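As a quick sanity check, a response that passed through Varnish normally carries a Via and/or X-Varnish header. A hypothetical Node.js sketch (not part of the setup above) that inspects a parsed header object for those fingerprints:

```javascript
// Hypothetical helper: does a parsed response-header object
// (lower-cased keys, as Node's http module provides) show Varnish?
function servedByVarnish(headers) {
  return 'x-varnish' in headers || /varnish/i.test(headers.via || '');
}

// The curl output above has neither header, so either the request
// never went through Varnish or the headers were unset in VCL.
console.log(servedByVarnish({ server: 'nginx/1.10.3 (Ubuntu)' })); // false
console.log(servedByVarnish({ via: '1.1 varnish' }));              // true
```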

Nginx configuration: allow IP is not working, deny all is working fine

I created a new conf file to block access from all public IPs and allow only one public IP address (our office IP). But when I try to access the site, it shows "403 Forbidden nginx".
upstream backend_solr {
ip_hash;
server ip_address:port;
}
server {
listen 80;
server_name www.example.com;
index /example/admin.html;
charset utf-8;
access_log /var/log/nginx/example_access.log main;
location / {
allow **office_public_ip**;
deny all;
proxy_pass http://backend_solr/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~ /favicon\.ico {
root html;
}
location ~ /\. {
deny all;
}}
but the logs show the request arriving from the public IP and still being forbidden:
IP_Address - - [31/Jul/2017:12:43:05 +0800] "Get /example/admin.html HTTP/1.0" www.example.com "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36" "my_office _IP" "-" "-" "-" 403 564 0.000 - - -
At last I found out why
allow office_public_ip;
deny all;
was not working: the site is reached through a proxy, so the source address nginx checks is the proxy's IP, not the office IP. We therefore have to allow the proxy IP as well if we want to allow a specific public IP. Here is the configuration:
upstream backend_solr {
ip_hash;
server ip_address:port;
}
server {
listen 80;
server_name www.example.com;
index /example/admin.html;
charset utf-8;
access_log /var/log/nginx/example_access.log main;
location / {
set $allow false;
# office public IP
if ($http_x_forwarded_for ~ " ?12\.22\.22\.22$") {
set $allow true;
}
# proxy IP
if ($http_x_forwarded_for ~ " ?11\.123\.123\.123$") {
set $allow true;
}
if ($allow = false) {
return 403;
}
proxy_pass http://backend_solr/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~ /favicon\.ico {
root html;
}
location ~ /\. {
deny all;
}
}
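The intent of the $allow logic can be stated more directly. A hypothetical JavaScript sketch (slightly looser than the anchored regexes above, since it accepts a match on any entry): the request is allowed when any X-Forwarded-For entry is on the allowlist:

```javascript
// Hypothetical allowlist check mirroring the $allow logic: allowed
// when any X-Forwarded-For entry (client or proxy) is allowlisted.
function isAllowed(xff, allowlist) {
  return xff.split(',')
            .map(function (s) { return s.trim(); })
            .some(function (ip) { return allowlist.indexOf(ip) !== -1; });
}

console.log(isAllowed('12.22.22.22, 11.123.123.123', ['12.22.22.22'])); // true
console.log(isAllowed('99.99.99.99', ['12.22.22.22']));                 // false
```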
This nginx configuration works for me:
location / { ## Use the request url, not the directory on the filesystem.
allow xxx.xxx.xxx.xxx; ## Your specific IP
deny all;
}
You can also place allow xxx.xxx.xxx.xxx outside the location block if you want the rule to apply to the whole server instead of a single location.

Prerender with nginx and node.js returns 504

If I understand things correctly, I can set up nginx so that it handles crawlers (instead of Node.js doing it). So I removed app.use(require('prerender-node').set('prerenderToken', 'token')) from the Express configuration and made the following nginx setup (I do not use a Prerender token):
# Proxy / load balance (if more than one node.js server used) traffic to our node.js instances
upstream my_server_upstream {
server 127.0.0.1:9000;
keepalive 64;
}
server {
listen 80;
server_name test.local.io;
access_log /var/log/nginx/test_access.log;
error_log /var/log/nginx/test_error.log;
root /var/www/client;
# Static content
location ~ ^/(components/|app/|bower_components/|assets/|robots.txt|humans.txt|favicon.ico) {
root /;
try_files /var/www/.tmp$uri /var/www/client$uri =404;
access_log off;
sendfile off;
}
# Route traffic to node.js for specific route: e.g. /socket.io-client
location ~ ^/(api/|user/|en/user/|ru/user/|auth/|socket.io-client/|sitemap.xml) {
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_pass_header X-CSRFToken;
sendfile off;
# Tells nginx to use the upstream server
proxy_pass http://my_server_upstream;
}
location / {
root /var/www/client;
index index.html;
try_files $uri #prerender;
access_log off;
sendfile off;
}
location #prerender {
set $prerender 0;
if ($http_user_agent ~* "baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator") {
set $prerender 1;
}
if ($args ~ "_escaped_fragment_") {
set $prerender 1;
}
if ($http_user_agent ~ "Prerender") {
set $prerender 0;
}
#resolve using Google's DNS server to force DNS resolution and prevent caching of IPs
resolver 8.8.8.8;
if ($prerender = 1) {
#setting prerender as a variable forces DNS resolution since nginx caches IPs and doesnt play well with load balancing
set $prerender "127.0.0.1:3000";
rewrite .* /$scheme://$host$request_uri? break;
proxy_pass http://$prerender;
}
if ($prerender = 0) {
rewrite .* /index.html$is_args$args break;
}
}
}
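The decision logic in the #prerender block boils down to three checks. A hypothetical JavaScript sketch of it (the bot list is abbreviated):

```javascript
// Hypothetical sketch of the #prerender decision: prerender for known
// bots or _escaped_fragment_ requests, but never for Prerender itself.
var BOTS = /baiduspider|twitterbot|facebookexternalhit|linkedinbot|slackbot/i;

function shouldPrerender(userAgent, queryString) {
  if (/Prerender/.test(userAgent)) return false; // avoid recursion loops
  return BOTS.test(userAgent) || /_escaped_fragment_/.test(queryString);
}

console.log(shouldPrerender('curl/7.47.0', '_escaped_fragment_=')); // true
console.log(shouldPrerender('Mozilla/5.0', ''));                    // false
```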
But when I test it with curl test.local.io?_escaped_fragment_= I get "got 504 in 344ms for http://test.local.io".
The Node version is 6.9.1. I use Vagrant to set up the environment.
The above configuration works fine. All it was missing was an entry in /etc/hosts: 127.0.0.1 test.local.io

Enable Cors on node.js app with nginx proxy

I have set up a digital ocean droplet that is a reverse proxy server using nginx and node. I used this tutorial from digital ocean as a starting point
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-14-04.
I have also set up SSL with Let's Encrypt. The issue I am currently having is that I am unable to make cross-domain AJAX calls to the server; I get a No 'Access-Control-Allow-Origin' header is present error. I have set up the appropriate header response in my Node app and have tried to follow the few examples I could find for nginx, with no luck. Below is my code.
nginx config, with my attempted header settings removed:
server {
listen 443 ssl;
server_name lefthookservices.com www.lefthookservices.com;
ssl_certificate /etc/letsencrypt/live/lefthookservices.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/lefthookservices.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-$
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location ~ /.well-known {
allow all;
}
}
server {
listen 80;
server_name lefthookservices.com www.lefthookservices.com;
return 301 https://$host$request_uri;
}
Here is my app.js script using express
'use strict';
var colors = require('colors/safe');
var express = require('express');
var knack = require('./knack_call.js');
var bodyParser = require('body-parser');
var cors = require('cors');
colors.setTheme({
custom: ['blue', 'bgWhite']
});
var app = express();
app.use(bodyParser.json());
// allow for cross domain ajax
app.get('/', function(request, response){
response.send('hello\n');
});
app.post('/', function(request, response){
response.header("Access-Control-Allow-Origin", "*");
response.header("Access-Control-Allow-Headers", "X-Requested-With");
response.header("Access-Control-Allow-Methods", "GET,POST");
knack.getData(request, response);
});
app.listen(8080, '127.0.0.1', function(m){
console.log(colors.custom("Captain, the server is at full strength"));
});
Any suggestion that could help me set the correct headers to allow CORS would be greatly appreciated. Thank you in advance.
As a result of Tristan's answer below, my nginx config now looks like this.
server {
listen 443 ssl;
server_name lefthookservices.com www.lefthookservices.com;
ssl_certificate /etc/letsencrypt/live/lefthookservices.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/lefthookservices.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES$
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
location / {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
if ($http_origin ~* (https?://.*\.exponential\.singularityu\.org(:[0-9]+)?$)) {
set $cors "1";
}
if ($request_method = 'OPTIONS') {
set $cors "${cors}o";
}
if ($cors = "1") {
more_set_headers 'Access-Control-Allow-Origin: $http_origin';
more_set_headers 'Access-Control-Allow-Credentials: true';
proxy_pass http://127.0.0.1:8080;
}
if ($cors = "1o") {
more_set_headers 'Access-Control-Allow-Origin: $http_origin';
more_set_headers 'Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE';
more_set_headers 'Access-Control-Allow-Credentials: true';
more_set_headers 'Access-Control-Allow-Headers: Origin,Content-Type,Accept';
add_header Content-Length 0;
add_header Content-Type text/plain;
return 204;
}
proxy_pass http://127.0.0.1:8080;
}
location ~ /.well-known {
allow all;
}
}
Sadly this is still not working.
server {
listen 80;
server_name lefthookservices.com www.lefthookservices.com;
return 301 https://$host$request_uri;
}
It turns out the error message I was getting was inaccurate. The issue was not header setting: I actually needed to make the request with JSONP and handle the incoming data differently. A function called by app.js was throwing an error and causing the connection to time out, so the appropriate headers were never returned to the browser, which produced the misleading error message.
For anyone hoping to find an NGINX config that worked this is mine.
proxy_pass http://127.0.0.1:8080;
# proxy_http_version 1.1;
proxy_set_header Access-Control-Allow-Origin *;
# proxy_set_header Upgrade $http_upgrade;
# proxy_set_header Connection '';
proxy_set_header Host $host;
# proxy_cache_bypass $http_upgrade;
Thanks you for the suggestions.
Pull nginx out of this equation. It doesn't have anything to do with your CORS problem if your setup is as similar to mine as I believe it is. I see that you're requiring the cors module, but as far as I can tell you're not actually using it.
Your settings are simple enough that you might be able to get away with the defaults, so, right below app.use(bodyParser.json());, update your app.js with:
app.use(cors());
That might work right out of the box. If it doesn't, you can pass a set of options. Mine looks something like this:
app.use(cors({
origin: 'myorigin.tld',
allowedHeaders: [ 'Accept-Version', 'Authorization', 'Credentials', 'Content-Type' ]
}));
Other config options are available in the docs.
You're almost there.
You have to think of the proxy as an external server as well as your Node.js application.
So, in short, you need to add a header to your nginx configuration.
Take a look at this link,
https://gist.github.com/pauloricardomg/7084524
In case this ever gets deleted:
#
# Acts as a nginx HTTPS proxy server
# enabling CORS only to domains matched by regex
# /https?://.*\.mckinsey\.com(:[0-9]+)?)/
#
# Based on:
# * http://blog.themillhousegroup.com/2013/05/nginx-as-cors-enabled-https-proxy.html
# * http://enable-cors.org/server_nginx.html
#
server {
listen 443 default_server ssl;
server_name localhost;
# Fake certs - fine for development purposes :-)
ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
ssl_session_timeout 5m;
location / {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Nginx doesn't support nested If statements, so we
# concatenate compound conditions on the $cors variable
# and process later
# If request comes from allowed subdomain
# (*.mckinsey.com) then we enable CORS
if ($http_origin ~* (https?://.*\.mckinsey\.com(:[0-9]+)?$)) {
set $cors "1";
}
# OPTIONS indicates a CORS pre-flight request
if ($request_method = 'OPTIONS') {
set $cors "${cors}o";
}
# Append CORS headers to any request from
# allowed CORS domain, except OPTIONS
if ($cors = "1") {
more_set_headers 'Access-Control-Allow-Origin: $http_origin';
more_set_headers 'Access-Control-Allow-Credentials: true';
proxy_pass http://serverIP:serverPort;
}
# OPTIONS (pre-flight) request from allowed
# CORS domain. return response directly
if ($cors = "1o") {
more_set_headers 'Access-Control-Allow-Origin: $http_origin';
more_set_headers 'Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE';
more_set_headers 'Access-Control-Allow-Credentials: true';
more_set_headers 'Access-Control-Allow-Headers: Origin,Content-Type,Accept';
add_header Content-Length 0;
add_header Content-Type text/plain;
return 204;
}
# Requests from non-allowed CORS domains
proxy_pass http://serverIP:serverPort;
}
}
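The branching in that config reduces to three outcomes. A hypothetical JavaScript summary of the decision (the origin regex follows the gist):

```javascript
// Hypothetical summary of the gist's $cors branching:
// 'pass'      -> origin not allowed, proxy without CORS headers
// 'cors'      -> allowed origin ("1"): add CORS headers and proxy
// 'preflight' -> allowed origin + OPTIONS ("1o"): answer 204 directly
var ALLOWED = /^https?:\/\/.*\.mckinsey\.com(:[0-9]+)?$/;

function corsDecision(origin, method) {
  if (!ALLOWED.test(origin || '')) return 'pass';
  return method === 'OPTIONS' ? 'preflight' : 'cors';
}

console.log(corsDecision('https://www.mckinsey.com', 'GET'));     // cors
console.log(corsDecision('https://www.mckinsey.com', 'OPTIONS')); // preflight
console.log(corsDecision('https://evil.example', 'GET'));         // pass
```

Note that more_set_headers is not a stock nginx directive; it comes from the headers-more module, which must be installed for the gist's config to load.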

Serve root static file with nginx as node reverse proxy

I have a Node.js server that is served by nginx as a reverse proxy. That part is OK, and the static file locations are set up correctly. But I want the root address to serve a static HTML file, and I don't know how to configure nginx so that the root URL is not redirected to the Node app. Here's my server block:
upstream promotionEngine {
server 127.0.0.1:3001;
}
server {
listen 3000;
server_name localhost;
root C:/swaven/dev/b2b.pe/promotionEngine/templates/;
index index.html;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://promotionEngine;
proxy_redirect off;
}
location /public/ {
alias C:/swaven/dev/b2b.pe/promotionEngine/public/;
}
location /assets/ {
alias C:/swaven/dev/b2b.pe/promotionEngine/assets/;
}
}
http://localhost:3000/ping and http://localhost:3000/public/js/riot.js are correctly served.
But http://localhost:3000 keeps being sent to the Node server, where I would like it to return a static index.html. If I remove the / location block, the HTML file is correctly served. How would I configure the location to work as a reverse proxy for all URLs except the root one?
UPDATED: (based on comments and discussion)
You'll need 2 exact location blocks. One to intercept the / location and another to serve just /index.html.
An exact location block is described on nginx docs:
Also, using the “=” modifier it is possible to define an exact match of URI and location. If an exact match is found, the search terminates.
Simply using the index directive does not work, because nginx performs an internal redirect to /index.html so that other blocks can match it, and that redirect gets picked up by your proxy block.
upstream promotionEngine {
server 127.0.0.1:3001;
}
server {
listen 3000;
server_name localhost;
# Do an exact match on / and rewrite to /index.html
location = / {
rewrite ^/$ /index.html;
}
# Do an exact match on index.html to serve just that file
location = /index.html {
root C:/swaven/dev/b2b.pe/promotionEngine/templates/;
}
# Everything else will be served here
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://promotionEngine;
proxy_redirect off;
}
location /public/ {
alias C:/swaven/dev/b2b.pe/promotionEngine/public/;
}
location /assets/ {
alias C:/swaven/dev/b2b.pe/promotionEngine/assets/;
}
}
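To see why this works, recall that exact-match blocks (location = ...) win before prefix locations. A hypothetical JavaScript toy router mirroring the lookup order of the config above:

```javascript
// Hypothetical toy router mirroring nginx's lookup for this config:
// exact matches ("location = ...") are checked before prefix locations.
function route(path) {
  if (path === '/') return 'index.html';            // location = / -> /index.html
  if (path.startsWith('/public/')) return 'static'; // location /public/
  if (path.startsWith('/assets/')) return 'static'; // location /assets/
  return 'proxy';                                   // location / (catch-all)
}

console.log(route('/'));                  // index.html
console.log(route('/public/js/riot.js')); // static
console.log(route('/ping'));              // proxy
```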
You can use location = /; this type of location has a higher priority during lookup:
location =/ {
root ...
}
This request will not even try to reach other locations.
Something like this, adjust for your own use case.
http {
map $request_uri $requri {
default 1;
/ 0;
}
...........
server {
listen 80;
server_name www.mydomain.eu;
root /webroot/www.mydomain.eu;
if ($requri) { return 301 https://www.mydomain.eu$request_uri; }
location / {
..........
}
}