How can I use OS environment variables in Nginx config?

How do I use environment variables set in the OS inside nginx configs?
For example, the environment variables set are ENVIRON=dev and APP_NAME=test.
Here's my Dockerfile:
FROM openresty/openresty:alpine

RUN set -ex && \
    rm /etc/nginx/conf.d/default.conf

ADD nginx.conf /etc/nginx/
ADD custom.conf /etc/nginx/conf.d/
Here's my main nginx.conf
user nginx;
worker_processes auto;
pcre_jit on;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

env ENVIRON;
env APP_NAME;

set_by_lua $environ 'return os.getenv("ENVIRON")';
set_by_lua $appname 'return os.getenv("APP_NAME")';

http {
    server_tokens off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
}
Here's my custom.conf from /etc/nginx/conf.d/:
upstream app.$environ-$appname {
    server $environ-$appname:80;
}

server {
    listen 80;
    server_name $hostname;

    error_log /dev/stdout info;
    access_log /dev/stdout;

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        add_header X-Info proxied;
        proxy_pass http://app.$environ-$appname;
    }
}
Thanks!

I don't think you can do what you are trying to do.
From the documentation on the server directive of the upstream module:
Defines the address and other parameters of a server. The address can
be specified as a domain name or IP address, with an optional port, or
as a UNIX-domain socket path specified after the “unix:” prefix
You can't create upstream servers on the fly using variables.
If you want to route requests like this, either use the map directive (if the set of possible backends is known in advance), or set up an auth_request directive that points at an application on your server which can return the correct variables for each request in real time, and then have Nginx store that response in a variable using the auth_request_set directive.
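Here is a minimal sketch of the map approach, assuming the set of possible backends is known when the config is written; the hostnames dev-test and prod-test and the map key are hypothetical placeholders:

map $host $backend {
    default          dev-test:80;
    app.example.com  prod-test:80;
}

server {
    listen 80;

    location / {
        # proxy_pass with a variable requires a resolver; 127.0.0.11 is
        # Docker's embedded DNS, adjust for your environment.
        resolver 127.0.0.11;
        proxy_set_header Host $host;
        proxy_pass http://$backend;
    }
}

The map block lives in the http context (for example in a file under /etc/nginx/conf.d/), and because proxy_pass receives a variable, nginx resolves the backend name per request instead of requiring a fixed upstream block.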

Related

Should I open port 3000 of my server to serve NodeJS application?

I have an Angular + Node.JS app. When I was running the program locally, I defined baseurl = http://localhost:3000/ in my Angular app and used this prefix to access my NodeJS backend from the links defined in my program. But now that I want to deploy my app on a remote server, I changed the baseurl definition to baseurl = http://111.222.333.444:3000/ (111.222.333.444 is my server IP address, for example), and it doesn't work!
How should I connect my Angular app to the NodeServer on a remote server?
EDIT: This is my /etc/nginx/nginx.conf file content:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /demo/stock-front9/dist/strategy;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            #proxy_pass http://localhost:3000;
            #proxy_http_version 1.1;
            # First attempt to serve request as file, then
            # as directory, then redirect to index(angular) if no file found.
            try_files $uri $uri/ /index.html;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
I would not. I think it is better to run the Node app with a tool like PM2 and place an Nginx reverse proxy in front of it; PM2 will act as an orchestrator for your service, while Nginx provides access only through the standard web ports (80, 443).
As for Angular, compiling it should produce a static web app which you can serve with the same Nginx reverse proxy. Doing it this way saves you the effort of configuring things like CORS, API routes and so forth, since everything goes through Nginx.
Update: here is an example Nginx config file.
server {
    listen 80;
    server_name example.org;

    location /api {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_redirect off;
        proxy_http_version 1.1;
    }

    location / {
        root /path/to/angular/compiled/app;
        index index.html;
    }
}
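If the compiled Angular app uses client-side routing, the static location usually also needs a try_files fallback so that deep links fall back to index.html; a minimal sketch, inside the same server block and with /path/to/angular/compiled/app as a placeholder:

location / {
    root /path/to/angular/compiled/app;
    index index.html;
    try_files $uri $uri/ /index.html;
}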
And then the angular app should point to the same host.
Good luck and cheers :)
You can still run your Angular app locally, and for the backend server you can use a proxy. Please take a look at this:
https://github.com/angular/angular-cli/blob/master/docs/documentation/stories/proxy.md#using-corporate-proxy

Nginx Reverse Proxy displays default page instead of remote home page

I have configured nginx as a reverse proxy and load balancer on one server, and a web application is running on another server. When I access the public URL of nginx, it displays the default RHEL page instead of the homepage of the application on the remote server. Also, when I add a path to the nginx IP, the browser redirects me to the IP of the application server instead of staying on the nginx server. I want the IP to stay the same as the nginx server.
Example:
Nginx IP: 52.2.2.2
Remote IP: 52.2.2.3
Browser:
http://52.2.2.2/admin_portal
IP changes in browser:
http://52.2.2.3/admin_portal
Below is my configuration:
/etc/nginx/conf.d/load_balancer.conf
upstream backend {
    server 10.128.0.2;
}

# This server accepts all traffic to port 80 and passes it to the upstream.
# Notice that the upstream name and the proxy_pass need to match.
server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://backend;
    }
}
My Nginx configuration file
/etc/nginx/nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Can someone please help me?
Before you proxy_pass the request, you have to rewrite it.
location / {
    rewrite ^/reclaimed/Ip / last;
    proxy_pass http://backend;
}
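Applied to the question's /admin_portal path, a hedged sketch of the same idea might look like this (the rewrite target is an assumption about how the backend expects its URIs):

location /admin_portal {
    # strip the /admin_portal prefix before handing the request upstream
    rewrite ^/admin_portal/?(.*)$ /$1 break;
    proxy_pass http://backend;
}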

Nginx 502 Bad Gateway error on EC2 Instance

I've been having some trouble configuring an nginx server on a EC2 Linux instance. I'm running an application on port 3000 and want to map that to port 80 using nginx.
Here is my configuration file:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_names_hash_bucket_size 128;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    index index.html index.htm;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name localhost;
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    include /etc/nginx/sites-enabled/default;
}
This is the default file that comes with nginx with very slight changes by me, most notably the inclusion of a custom file called default, whose contents are as follows:
server {
    listen 80;
    server_name [my_domain_name];

    location / {
        proxy_pass http://[my_private_ip]:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
With the items in square brackets replaced with the correct values. Whenever I try to navigate to the website I get 502 Bad Gateway nginx/1.12.1.
My server is a node.js server running on port 3000.
I've tried troubleshooting and reading other stackoverflow questions about bad gateways but I can't figure out the solution. Thank you
Follow a different approach. Allow your application to run on port 3000 (and listen on 3000 as well). In this case, you would then have to open it as
http://url:3000
Now we just need to forward all requests coming to port 80 to 3000 which can be easily done using iptables:
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
Now you should simply be able to open it with the URL, without the port number.

Upstream Node server closing connection to nginx

I'm using nginx as a proxy for a Node server that's rate-limiting requests. The rate is one request every 30 seconds; most requests return a response fine, but if a request is kept open for an extended period of time, I get this:
upstream prematurely closed connection while reading response header from upstream
I cannot figure out what might be causing this. Below is my nginx configuration:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
# include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /srv/www/main/htdocs;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location /vcheck {
            proxy_pass http://127.0.0.1:8080$is_args$query_string;
            # proxy_buffer_size 128k;
            # proxy_buffers 4 256k;
            # proxy_busy_buffers_size 256k;
            # proxy_http_version 1.1;
            # proxy_set_header Upgrade $http_upgrade;
            # proxy_set_header Connection 'upgrade';
            # proxy_set_header Host $host;
            # proxy_cache_bypass $http_upgrade;
            # proxy_redirect off;
            proxy_read_timeout 600s;
        }

        location ~ \.php$ {
            include fastcgi.conf;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
            fastcgi_index routes.php$is_args$query_string;
        }

        location / {
            if (-f $request_filename) {
                expires max;
                break;
            }
            if ($request_filename !~ "\.(js|htc|ico|gif|jpg|png|css)$") {
                rewrite ^(.*) /routes.php last;
            }
        }
    }
}
Is there a reason why Node could be closing the connection early?
EDIT: I'm using Node's built-in HTTP server.
It seems like you have to extend the response timeout of your Node.js application.
If it's an Express app, you can try this:
install: npm i --save connect-timeout
use:
var timeout = require('connect-timeout');
app.use(timeout('60s'));
But I recommend not keeping the connection waiting; fix the issue in the Node.js app and find out why it is halting for so long.
It seems the Node.js app has an issue that prevents it from responding, so the request gets lost and leaves nginx waiting.

EPIPE Error with ExpressJS, nginx proxy server

I am running multiple ExpressJS Node apps through an Nginx proxy server, and am getting an EPIPE Error thrown whenever my users try to download a file. This does not happen on my local setup (which is identical to the server's except for the proxy server), so I figure it has something to do with my Nginx configuration.
Here are my Nginx configs:
/etc/nginx/nginx.conf
user www www;
worker_processes 1;
error_log /home/alex/logs/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 8192;

events {
    worker_connections 4096; ## Default: 1024
}

http {
    include mime.types;
    index index.html index.htm index.php;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /home/alex/logs/access.log main;
    sendfile on;
    tcp_nopush on;
    gzip on;
    server_names_hash_bucket_size 128; # this seems to be required for some vhosts
    include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/default.conf
server {
    listen 80;
    server_name example.com;

    # log access and stuff
    access_log /home/alex/logs/example-site.log main;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    # Proxy to the NodeJS server
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:8999;
    }

    # redirect server error pages to their HTML
    include /etc/nginx/errpages.conf;
}
The ExpressJS server is sending the download using the following code:
app.get('/citrite/p/:patch', function(req, res)
{
    if(set.citFiles.indexOf(req.params.patch) == -1)
    {
        res.send(mbuild.get404());
    }
    else
    {
        track.incrViewcount(req.params.patch, 'citrite');
        res.download(set.citDir + '/' + req.params.patch, files.doneSaving);
    }
});
That code and everything else works fine on my local git repo, but when I push from there and pull on the server-side, the site kicks and screams - it times out on the user's end and gives me an EPIPE error in the console. I am running Node.js version 4.2.1 and ExpressJS version 4.13.3.
I figured out what the problem was: apparently, having sendfile set to on is what was causing the downloads to stall, and turning that off (specifically, removing the directive in the config) fixed the issue. Not exactly sure why this would interfere, but getting rid of the setting cleared things up.
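For reference, a minimal sketch of that change in /etc/nginx/nginx.conf (either delete the existing sendfile line or set it to off explicitly):

http {
    # ... rest of the http block unchanged ...
    sendfile off;
}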
