Use Varnish cache with Node.js and Nginx

I have two Node.js servers on two different ports (3636 and 4646), and I use Nginx as a reverse proxy in front of them. My problem is: how do I add the Varnish cache to both servers?
/etc/nginx/sites-enabled/yourdomain:
upstream app_yourdomain {
    server 127.0.0.1:3636;
    keepalive 8;
}

server {
    listen 0.0.0.0:8080;

    server_name yourdomain.com yourdomain;
    access_log /var/log/nginx/yourdomain.log;

    # pass the request to the node.js server with the correct headers
    # and much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://app_yourdomain/;
        proxy_redirect off;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
/etc/nginx/sites-enabled/domain2:
server {
    listen 8080;

    server_name domain2.com;
    access_log /var/log/nginx/domain2.access.log;

    location / {
        proxy_pass http://127.0.0.1:4646/;
    }
}
Varnish daemon options file:
DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
But when I run curl -I http://localhost, there is no sign of Varnish in the response:
HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Mon, 20 Nov 2017 12:22:17 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 18 Sep 2017 06:18:46 GMT
Connection: keep-alive
ETag: "59bf6546-264"
Accept-Ranges: bytes
/etc/varnish/default.vcl:
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
Is there anything I am missing?

That's hard to tell without seeing your default.vcl.
Maybe you have something like this:
sub vcl_deliver {
    unset resp.http.Via;
    unset resp.http.X-Varnish;
}
Also make sure you have the correct backend config:
backend default {
    .host = "localhost";
    .port = "8080";
}
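You can also verify that Varnish, not Nginx, is the process answering on port 80, and that the daemon really picked up the options you set. A minimal check from the shell (standard Varnish and Linux tools; adjust the service name to your setup) could look like this:

sudo ss -ltnp | grep ':80 '          # should show varnishd, not nginx
ps aux | grep '[v]arnishd'           # confirm it was started with "-a :80"
sudo systemctl restart varnish       # restart after changing the daemon options
curl -I http://localhost             # look for Via / X-Varnish / Age headers
curl -I http://127.0.0.1:8080        # Nginx directly, for comparison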

Related

Unable to access Elastic Beanstalk (single instance) from custom domain HTTPS

Greetings SO community,
I am attempting to configure my single-instance Elastic Beanstalk application to use a custom domain and HTTPS. Both the custom domain and the SSL certificate were obtained from a third party, and I am using their DNS servers (rather than Route 53).
I have added the .ebextensions/https-instance-securitygroup.config per AWS documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance.html) as well as the files for Node application (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-nodejs.html). The only difference in the last step is that I did not create a .ebextensions/https-instance.config file as I am pushing my code to GitHub and using CodePipeline to build my code. So, the https.conf and certificates were manually created and uploaded to the EC2 instance.
Also, I have checked my instance's inbound rules to ensure that 80 & 443 are open on the EB instance and for the associated security group.
proxy.conf
upstream nodejs {
server 127.0.0.1:5000;
keepalive 256;
}
server {
listen 8080;
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
access_log /var/log/nginx/access.log main;
location / {
proxy_pass http://nodejs;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
gzip on;
gzip_comp_level 4;
gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
location /static {
alias /var/app/current/client/build/static;
}
}
https.conf
# HTTPS server
server {
listen 443 ssl;
server_name localhost;
ssl_certificate /etc/pki/tls/certs/server.crt;
ssl_certificate_key /etc/pki/tls/certs/server.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
# For enhanced health reporting support, uncomment this block:
#if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
# set $year $1;
# set $month $2;
# set $day $3;
# set $hour $4;
#}
#access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
#access_log /var/log/nginx/access.log main;
location / {
proxy_pass http://nodejs;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
}
}
So after rereading the AWS documentation for what felt like the hundredth time, I was finally able to resolve my issues. And because I wasn't strictly using the preferred method of using an .ebextensions folder, I had to fiddle with the Nginx proxy on the Elastic Beanstalk-created EC2 instance directly.
In short, I was missing the following section from my /etc/nginx/conf.d/proxy.conf file:
location / {
    ### START MISSING ###
    set $redirect 0;
    if ($http_x_forwarded_proto != "https") {
        set $redirect 1;
    }
    if ($http_user_agent ~* "ELB-HealthChecker") {
        set $redirect 0;
    }
    if ($redirect = 1) {
        return 301 https://$host$request_uri;
    }
    ### END OF MISSING ###

    proxy_pass http://nodejs;
    proxy_set_header Connection "";
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
This is documented here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-httpredirect.html
Specifically, it is a snippet from the AWS-provided default Nginx proxy config file for Node.js: https://github.com/awsdocs/elastic-beanstalk-samples/blob/master/configuration-files/aws-provided/security-configuration/https-redirect/nodejs/https-redirect-nodejs.config
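As a quick sanity check after adding the block above (a hedged sketch; the domain is a placeholder), the three branches of the redirect logic can be exercised with curl:

curl -I http://yourdomain.example.com/                                   # plain HTTP: expect a 301 to https
curl -I http://yourdomain.example.com/ -H "X-Forwarded-Proto: https"     # already HTTPS-terminated: no redirect
curl -I http://yourdomain.example.com/ -A "ELB-HealthChecker/2.0"        # health checker: must not be redirected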

Extend nginx on elastic beanstalk with websockets

I want to set up an Elastic Beanstalk app to work with HTTP and WebSockets. I can get HTTP working on port 8081, but I can't access my websocket server because I get a 301 redirect error from nginx. I can tell my websocket server is running; does anyone know why I can't access it? Here is my proxy.config file:
upstream nodejs {
server 127.0.0.1:8081;
keepalive 256;
}
upstream wsserver {
server 127.0.0.1:3000;
keepalive 256;
}
server {
listen 8080;
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
access_log /var/log/nginx/access.log main;
# prevents 502 bad gateway error
large_client_header_buffers 8 32k;
location /ws/ {
# prevents 502 bad gateway error
proxy_buffers 8 32k;
proxy_buffer_size 64k;
proxy_pass http://wsserver;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
}
location / {
proxy_pass http://nodejs;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
gzip on;
gzip_comp_level 4;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
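One way to narrow down where the 301 comes from (a hedged debugging sketch; the host name is a placeholder) is to request the /ws/ path directly with curl, with and without the WebSocket upgrade headers, and compare the responses:

curl -v http://your-eb-app.example.com/ws/
curl -v http://your-eb-app.example.com/ws/ \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="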

connect() failed (111: Connection refused) while connecting upstream

[error] 7697#7697: *100335 connect() failed (111: Connection refused) while connecting to upstream, client: XXX.XXX.XXX.XXX, server: v4.domain.com, request: "GET /socket.io/?__sails_io_sdk_version=0.13.8&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=Luvcibs HTTP/1.1", upstream: "http://127.0.0.1:1338/socket.io/?__sails_io_sdk_version=0.13.8&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=Luvcibs", host: "v4.domain.com", referrer: "http://v4.domain.com/?ct=t(Flash_Sals_Videotoolz_copy_05_12_29_2016)&mc_cid=404a630ab2&mc_eid=c44f7937fe"
[error] 7700#7700: *101735 connect() failed (111: Connection refused) while connecting to upstream, client: XX.XX.XX.XX, server: v4.domain.com, request: "GET /socket.io/?__sails_io_sdk_version=0.13.8&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=LuvciVy HTTP/1.1", upstream: "http://127.0.0.1:1338/socket.io/?__sails_io_sdk_version=0.13.8&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=LuvciVy", host: "v4.domain.com", referrer: "http://v4.domain.com/"
I recently configured my Sails.js application on an Ubuntu 16.04 VPS with Nginx as a reverse proxy; my Nginx config for the site is below.
The site runs fine, but all of a sudden it breaks and shows a 502 Bad Gateway.
I have tried almost everything I can think of.
Please help me get it sorted.
server {
listen 80 default_server;
listen [::]:80 default_server;
# SSL configuration
#
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
root /var/www/html/php;
# Add index.php to the list if you are using PHP
index index.html index.php index.htm index.nginx-debian.html;
server_name domain.com www.domain.com;
ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem; # managed by Certbot
ssl_dhparam /etc/ssl/certs/dhparam.pem;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
#try_files $uri $uri/ =404;
#try_files $uri $uri/ /index.php?q=$uri&$args;
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
try_files not-existing-file @php;
}
location @php {
#fastcgi_pass 127.0.0.1:9000;
fastcgi_read_timeout 300;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
include snippets/fastcgi-php.conf;
}
location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
expires 1d;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
upstream sails_server {
server 127.0.0.1:1338; # fail_timeout=0;
# keepalive 64;
}
server {
listen 80;
listen [::]:80;
server_name v4.domain.com;
root /root/domain/;
#Logging
error_log /root/domain/log/error.log notice;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_read_timeout 300;
proxy_pass http://sails_server;
proxy_redirect off;
# proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
# proxy_set_header Connection "";
# proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-NginX-Proxy true;
# proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10ms;
# proxy_cache one;
# proxy_cache_key sfs$request_uri$scheme;
# proxy_pass_request_headers on;
}
location /socket.io/ {
proxy_pass http://sails_server/socket.io/;
proxy_http_version 1.1;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header Port $server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Nginx-Proxy true;
proxy_pass_request_headers on;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_buffers 8 32k;
proxy_buffer_size 64k;
}
}
My PM2 config file for the Sails application is below:
{
    "apps": [
        {
            "name": "dj",
            "script": "./app.js",
            "watch": false,
            "ignore_watch": ["node_modules", ".tmp"],
            "watch_options": {
                "followSymlinks": false
            },
            "env": {
                "PORT": 1338,
                "NODE_ENV": "production"
            }
        }
    ]
}
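Since the error is a connection refused on 127.0.0.1:1338, a reasonable first check (a sketch with standard tools; the app name "dj" comes from the PM2 config above) is whether the Sails process is still alive and listening on that port when the 502s appear:

sudo ss -ltnp | grep 1338        # is anything listening on the upstream port?
pm2 status                       # is the PM2-managed app still online?
pm2 logs dj --lines 100          # recent application output / crash traces
curl -I http://127.0.0.1:1338/   # hit the app directly, bypassing nginx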

Heroku nginx websockets

I've been trying to get WebSockets to work on Heroku over Nginx and seem to be stuck. I'm using the nginx-buildpack, which has worked great, but I haven't had any success so far getting an upgraded WebSocket connection.
Here is my nginx.conf.erb, which is just slightly modified from the buildpack example:
daemon off;
#Heroku dynos have 4 cores.
worker_processes 4;
events {
use epoll;
accept_mutex on;
worker_connections 1024;
}
http {
gzip on;
gzip_comp_level 2;
gzip_min_length 512;
log_format l2met 'measure.nginx.service=$request_time request_id=$http_heroku_request_id';
access_log logs/nginx/access.log l2met;
error_log logs/nginx/error.log;
include mime.types;
default_type application/octet-stream;
sendfile on;
#Must read the body in 5 seconds.
client_body_timeout 5;
proxy_read_timeout 950s;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen <%= ENV["PORT"] %>;
server_name _;
keepalive_timeout 5;
location /test {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://127.0.0.1:3000;
}
location / {
proxy_pass http://127.0.0.1:3001;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
}
To test this configuration, I've just been using the websocket.org Echo Test. Unfortunately it is not able to connect.
On port 3001 I've got a simple socket.io server, and I'm logging any connections (none so far):
var app2 = require('http').createServer().listen(3001);
var io = require('socket.io').listen(app2);

io.on('connection', function (socketconnection) {
    socketconnection.send("Connected to Server-1");
    console.log("connected to websocket server!");
    socketconnection.on('message', function (message) {
        socketconnection.send(message);
    });
});
When I try connecting with the WebSocket tester, this is the error I get in my Heroku logs:
*3 upstream prematurely closed connection while reading response header from upstream,
client: 10.140.231.210, server: _, request: "GET /?encoding=text HTTP/1.1",
upstream: "http://127.0.0.1:3001/?encoding=text", host: "www.mydomain.com"
Any ideas what I may be doing wrong here?
UPDATE 1:
OK, so I've found that if I alter the section of my config where it says:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
to:
map $http_upgrade $connection_upgrade {
default upgrade;
}
this seems to prevent that error from occurring. I suppose this is due to a blank response being returned on connect. However, now that I've done this I get a new error:
*5 connect() failed (111: Connection refused) while connecting to upstream,
client: 10.99.212.2, server: _, request: "GET /?encoding=text HTTP/1.1",
upstream: "http://127.0.0.1:3001/?encoding=text", host: "www.mydomain.com"
I suppose this may be an issue unrelated to the first, but I'm not sure!
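For the connection-refused error in UPDATE 1, it can help to confirm locally (a hedged check, run outside Heroku against a local copy of the app) that the socket.io server on port 3001 is actually up and answering; socket.io exposes an HTTP long-polling endpoint that plain curl can hit:

curl -i "http://127.0.0.1:3001/socket.io/?EIO=3&transport=polling"   # should return a socket.io handshake payload
curl -i "http://127.0.0.1:3001/?encoding=text"                       # the path the Echo Test effectively requests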

What MongoDB user role can make POST calls?

I have just created a MongoDB instance on Ubuntu 14.04, with username/password authentication.
The user I've created is like this:
{
"_id" : "myDatabase.myUser",
"user" : "myUser",
"db" : "myDatabase",
"roles" : [ { "role" : "readWrite", "db" : "myDatabase" } ]
}
And the URI string that I use in my REST API, written in Node.js (with Express and Mongoose), is:
mongodb://myUser:password@localhost:27017/myDatabase
The connection is OK and the GET methods work fine, but when I use a POST method, like a signup by email/password, the response is:
Status Code:405 Not Allowed
Any idea? Thanks in advance!
FYI: I'm using Nginx as a reverse proxy and as the web server for the frontend (an AngularJS app), and the config is:
server {
listen 80;
server_name example.com;
access_log /var/log/nginx/nginx.access.log;
error_log /var/log/nginx/nginx.error.log;
location / {
expires -1;
add_header Pragma "no-cache";
add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
root /usr/share/www;
try_files $uri $uri/ /index.html =404;
}
location /api/v1 {
proxy_set_header "Access-Control-Allow-Origin";
proxy_set_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, PUT, DELETE";
proxy_set_header "Access-Control-Allow-Headers" "X-Requested-With,Accept,Content-Type, Origin";
proxy_pass http://127.0.0.1:3000/api/v1;
proxy_buffering on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header origin "http://example.com";
}
}
I don't think this is a MongoDB restriction; MongoDB never sees whether the request was a POST or a GET. Have you verified that the request actually reaches the Node.js server? I suspect it is Nginx that returns the 405 status code.
It's possible that the failure comes from trying to return a static page as the response to the POST request. Try adding this to the nginx.conf file:
# To dispatch static pages on POST request
error_page 405 = 200 $uri;
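To confirm whether the 405 really comes from Nginx serving a static file rather than from the API (a minimal sketch; the /signup path and payload are placeholders), compare a POST sent through Nginx with one sent straight to the Node.js process:

curl -i -X POST http://example.com/api/v1/signup -H "Content-Type: application/json" -d '{"email":"a@b.c","password":"x"}'
curl -i -X POST http://127.0.0.1:3000/api/v1/signup -H "Content-Type: application/json" -d '{"email":"a@b.c","password":"x"}'
# If only the first request returns "405 Not Allowed" with a "Server: nginx" header, nginx answered before the API ever saw it.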
I added the following and Nginx at least works now:
proxy_redirect off;
The Nginx config for the default site is now:
server {
listen 80;
server_name example.com;
access_log /var/log/nginx/nginx.access.log;
error_log /var/log/nginx/nginx.error.log;
location / {
expires -1;
add_header Pragma "no-cache";
add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
root /usr/share/www;
try_files $uri $uri/ /index.html =404;
}
location /api/v1 {
proxy_set_header 'Access-Control-Allow-Origin' 'http://example.com';
proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
proxy_set_header 'Access-Control-Allow-Headers' 'X-Requested-With,Accept,Content-Type, Origin';
proxy_pass http://127.0.0.1:3000/api/v1;
proxy_redirect off;
proxy_buffering on;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header origin "http://example.com";
}
}
I hope this is useful to somebody.
