Nginx proxy_pass to docker container doesn't work - python-3.x

I have two identical Docker containers running on different ports on a CentOS 7 server. The older version runs on port 81, the newer one on port 8080 (ports 82 and 83 were tried as well).
When I try to proxy the second container by changing the port from 81 to 8080, I get an nginx error (HTTP/1.1 502 Bad Gateway).
Nginx is not in a container; it is installed directly on the server.
Here is my proxy_pass setting:
location / {
    proxy_pass http://0.0.0.0:8080/;
}
And some additional information:
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If I access the containers directly via their ports, everything works fine.
curl 0.0.0.0:81
{"msg":"Phone Masks service"}
curl 0.0.0.0:8080
{"msg":"Phone Masks service"}
nginx version: nginx/1.16.1
Docker version 19.03.4, build 9013bf583a
The full server config is pretty standard; I didn't change anything except the proxy_pass setting:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://0.0.0.0:8080/;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
The command I use to start the container:
sudo docker run --rm -it -p 8080:8080 -e PORT="8080" api
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
47ef127e3e49 api "/start.sh" 26 minutes ago Up 26 minutes 80/tcp, 0.0.0.0:8080->8080/tcp infallible_borg
5d5fe891ba30 api "/start.sh" 7 hours ago Up 7 hours 80/tcp, 0.0.0.0:81->81/tcp hopeful_cerf

This is SELinux-related:
setsebool -P httpd_can_network_connect true
According to this thread:
The second one [httpd_can_network_connect] allows httpd modules and scripts to make outgoing connections to ports which are associated with the httpd service. To see a list of those ports run semanage port -l | grep -w http_port_t
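Once the boolean is set, it is also cleaner to point proxy_pass at a real loopback address; 0.0.0.0 is a listen-side wildcard, not a connect address. A minimal sketch of the location block under that assumption (the published port 8080 comes from the -p 8080:8080 flag in the question):

```nginx
location / {
    # Connect to the host loopback, where Docker publishes the container's port 8080
    proxy_pass http://127.0.0.1:8080/;
}
```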

Related

Can't redirect traffic to localhost with nginx and docker

I'm new to Docker and nginx so this may be a simple question but I've been searching through question/answers for a while and haven't found the correct solution.
I'm trying to run an nginx server through docker to reroute all requests to my.example.com/api/... to localhost:3000/api/...
I have the following Dockerfile:
FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
and the nginx.conf file:
server {
    server_name my.example.com;
    location / {
        proxy_pass http://localhost:3000/;
    }
}
When I make the calls to the API on localhost:3000 this works fine, but when I try to run against my.example.com I get a network error that the host isn't found. To be clear, the domain I want to 'redirect' traffic from to localhost is a valid server address, but I want to mock its API for development.
This isn't working because your nginx is proxying the request to localhost, which is the container itself, while your app is running on the host's port 3000, outside the container. Check this article.
Change
proxy_pass http://localhost:3000/;
to
proxy_pass http://host.docker.internal:3000/;
and add 127.0.0.1 example.com my.example.com to /etc/hosts.
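Putting the pieces together, the nginx.conf would look roughly like the sketch below. Note that host.docker.internal resolves out of the box on Docker Desktop for Mac/Windows; on Linux you may need to start the container with --add-host=host.docker.internal:host-gateway (available since Docker 20.10):

```nginx
server {
    server_name my.example.com;
    location / {
        # host.docker.internal points at the Docker host,
        # where the app is listening on port 3000
        proxy_pass http://host.docker.internal:3000/;
    }
}
```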

Two Nginx server blocks, only one works with React app

I want to host several React apps on my Linux (CentOS) server with nginx.
Currently I have two server blocks in nginx.conf.
The first server is where I proxy different requests to different servers.
The second server is my React app.
I couldn't get my React app hosted by nginx.
Block 1
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    ...
    location /app1/ {
        proxy_pass http://localhost:3010;
    }
    ...
}
Block 2
server {
    listen 3010;
    listen [::]:3010;
    server_name localhost;
    root /home/administrator/Projects/app1/build;
    index index.html index.htm;
    location / {
        try_files $uri $uri/ =404;
    }
}
When I use telnet to check whether the server is listening, only port 80 is open; no server is listening on port 3010.
How should I change my Nginx configuration to make it work?
Update
I checked the nginx error log and got:
1862#0: bind() to 0.0.0.0:3010 failed (13: Permission denied)
I've searched on this, and there are a lot of answers about non-root users not being able to listen on ports below 1024. But the server is trying to bind on 3010.
Do I still have to run nginx as root?
This is probably related to the SELinux security policy. If you run the command below, you will see that http is only allowed to bind to a limited list of ports:
[root@localhost]# semanage port -l | grep http_port_t
http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000
You can either use one of the listed ports or add your custom port using the command below:
$ semanage port -a -t http_port_t -p tcp 3010
If the semanage command is not installed, you can run yum provides semanage to identify which package to install.
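An alternative that sidesteps the SELinux port policy altogether is to serve the React build directly from the already-allowed port-80 server block instead of proxying to a second server on 3010. A sketch, assuming the build path from the question:

```nginx
server {
    listen 80;
    server_name localhost;

    location /app1/ {
        # Serve the static build directly instead of proxying to a
        # second server block bound to a non-standard port
        alias /home/administrator/Projects/app1/build/;
        index index.html;
        try_files $uri $uri/ /app1/index.html;
    }
}
```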

Configure nginx to serve files from the file system

I am inside the /root directory, and I have a folder in it called testfolder. Inside that folder are a bunch of folders and subfolders that I want to host on the nginx server.
I am running the following command to start my Nginx server:
docker run --name file-server -v $(pwd)/testfolder:/app -p 8080:80 -d nginx
/etc/nginx/sites-available/default file has the following contents:
location /testfolder {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    alias /root/testfolder/;
    autoindex on;
    try_files $uri $uri/ =404;
}
Now when I start my server and hit /testfolder, it gives me a 403 error.
Serving static files with nginx as the web server is a good option.
To make the static files available, you need to copy (or mount) your testfolder into /usr/share/nginx/html inside the nginx image. After that you will be able to see the files in your browser on port 8080.
Docker command:
docker run -it --rm -d -p 8080:80 --name web -v ~/code/docker/testfolder:/usr/share/nginx/html nginx
To get a directory listing of the static files, we need to create a custom nginx conf file and pass it to the nginx container. For example:
Docker command:
docker run -it --rm -d -p 8080:80 --name web -v ~/code/nginx-static:/usr/share/nginx/html -v ~/code/nginx-static/default.conf:/etc/nginx/conf.d/default.conf nginx
default.conf:-
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    location / {
        autoindex on;
        root /usr/share/nginx/html;
    }
}
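If you specifically want the listing under a /testfolder path rather than at the site root, a location block along these lines should work inside the container. (The 403 in the original setup most likely comes from /root being readable only by root, so where you mount the folder matters more than the nginx config; the /app mount target below matches the docker run command in the question.)

```nginx
server {
    listen 80;

    location /testfolder/ {
        # Assumes the host folder is mounted with: -v $(pwd)/testfolder:/app
        alias /app/;
        autoindex on;
    }
}
```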

Nginx Load Balancer High availability running in Docker

I am running an nginx load-balancer container for my backend web servers.
Since I have only one container running as the nginx load balancer, if the container dies or crashes, clients cannot reach the web servers.
Below is the nginx.conf
events {}
http {
    upstream backend {
        server 1.18.0.2;
        server 1.18.0.3;
    }

    # This server accepts all traffic to port 80 and passes it to the upstream.
    # Notice that the upstream name and the proxy_pass need to match.
    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
Below is the Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
I run the nginx container like below:
docker run -d -it -p 8082:80 nginx-ls
I access the above server at http://server-ip:8082.
The container may crash or die at any time, and in that case my application is not reachable. So I tried to run another container as below, and I get the following error since I am using the same port; obviously, the same host port cannot be reused.
docker run -d -it -p 8082:80 nginx-ls
06a20239bd303fab2bbe161255fadffd5b468098424d652d1292974b1bcc71f8
docker: Error response from daemon: driver failed programming external connectivity on endpoint
suspicious_darwin (58eeb43d88510e4f67f618aaa2ba06ceaaa44db3ccfb0f7335a739206e12a366): Bind for
0.0.0.0:8082 failed: port is already allocated.
So I ran it on a different port, and it works fine:
docker run -d -it -p 8083:80 nginx-ls
But how do we tell/configure clients to use the port-8083 container when the port-8082 container is down?
Or is there another good method to achieve an nginx load balancer with high availability?
Note: for certain reasons, I cannot use docker-compose.
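A single nginx container remains a single point of failure no matter which port it uses. A common pattern outside of an orchestrator is to run two nginx instances (ideally on two hosts) sharing a virtual IP via VRRP with keepalived, so clients always connect to one stable address and the backup takes over the IP if the primary fails. A minimal keepalived.conf sketch for the primary node, where eth0 and 192.168.1.100 are assumed values:

```
vrrp_instance VI_1 {
    state MASTER            # the standby node uses state BACKUP
    interface eth0          # assumed NIC name
    virtual_router_id 51    # must match on both nodes
    priority 100            # standby uses a lower priority, e.g. 90
    virtual_ipaddress {
        192.168.1.100       # assumed virtual IP that clients connect to
    }
}
```

On a single host, starting the container with docker run --restart unless-stopped at least restarts nginx automatically after a crash, though it does not protect against host failure.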

Running Nginx as non root user

I installed Nginx using Ansible. To install on CentOS 7, I used the yum package, so by default it runs as the root user. I want it to start and run as a different user (e.g. an nginx user) on the CentOS box. When I try to run it as a different user I get the following error:
Job for nginx.service failed because the control process exited with
error code. See "systemctl status nginx.service" and "journalctl -xe"
for details.
I know it's not advisable to run as root. So how do I get around this and run nginx as a non-root user? Thanks.
Add or change the following in your /etc/nginx/nginx.conf:
user nginx;
You should create the user and grant permissions on the webroot directories recursively.
This way only the master process runs as root, because only root processes can listen on ports below 1024. A web server typically runs on port 80 and/or 443, which means it needs to be started as root.
Note from the documentation on master and worker processes:
The main purpose of the master process is to read and evaluate
configuration files, as well as maintain the worker processes.
The worker processes do the actual processing of requests.
To run the master process as a non-root user:
Change the ownership of the files whose paths are specified by the following Nginx directives:
error_log
access_log
pid
client_body_temp_path
fastcgi_temp_path
proxy_temp_path
scgi_temp_path
uwsgi_temp_path
Change the listen directives to ports above 1024, log in as the desired user, and run nginx with nginx -c /path/to/nginx.conf.
Just in case it helps: for testing/debugging purposes, I sometimes run an nginx instance as a non-privileged user on my Debian (stretch) laptop.
I use a minimal config file like this:
worker_processes 1;
error_log stderr;
daemon off;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    access_log access.log;

    server {
        listen 8080;
        server_name localhost;
        location / {
            include /etc/nginx/uwsgi_params;
            uwsgi_pass localhost:8081;
        }
    }
}
and I start the process with:
/usr/sbin/nginx -c nginx.conf -p $PWD
Just in case it helps someone stumbling over this question in 2020, here is my minimal nginx.conf for running a web server on port 8088. It works for a non-root user, and no modding of file permissions is necessary! (Tested on CentOS 7.4 with nginx 1.16.1.)
error_log /tmp/error.log;
pid /tmp/nginx.pid;

events {
    # No special events for this simple setup
}

http {
    server {
        listen 8088;
        server_name localhost;

        # Set a number of log, temp and cache file options that will otherwise
        # default to restricted locations accessible only to root.
        access_log /tmp/nginx_host.access.log;
        client_body_temp_path /tmp/client_body;
        fastcgi_temp_path /tmp/fastcgi_temp;
        proxy_temp_path /tmp/proxy_temp;
        scgi_temp_path /tmp/scgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;

        # Serve local files
        location / {
            root /home/<your_user>/web;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
Why not use the rootless bitnami/nginx image:
$ docker run --name nginx bitnami/nginx:latest
More info
To verify it is not running as root but as your standard user (belonging to the docker group):
$ docker exec -it nginx id
uid=1**8 gid=0(root) groups=0(root)
And to verify that Nginx isn't listening to a root-restricted port 443 even internally:
$ docker ps -a | grep nginx
2453b37a9084 bitnami/nginx:latest "/opt/bitnami/script…" 4 minutes ago Up 30 seconds 8080/tcp, 0.0.0.0:8443->8443/tcp jenkins_nginx
It's easy to configure (see docs) and runs even under random UIDs defined at run time (i.e. not hard-coded in the Dockerfile). In fact, it is Bitnami's policy to keep all their containers rootless and prepared for UID changes at runtime, which is why we have been using them for a few years now under the very security-conscious OpenShift 3.x (bitnami/nginx in particular as a reverse proxy needed to enable authentication for the MLflow web app).
