I am inside the /root directory and I have a folder in it called testfolder. Inside that folder I have a bunch of folders and subfolders which I want to host on the nginx server.
I am running the following command to start my Nginx server:
docker run --name file-server -v $(pwd)/testfolder:/app -p 8080:80 -d nginx
The /etc/nginx/sites-available/default file has the following contents:
location /testfolder {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    alias /root/testfolder/;
    autoindex on;
    try_files $uri $uri/ =404;
}
Now when I start my server and hit /testfolder, it gives me a 403 error.
Serving static files using nginx as a web server is a good option.
To make the static files available, you need to mount your testfolder at /usr/share/nginx/html inside the nginx container. After that you will be able to see the files in your browser on port 8080.
Docker command:
docker run -it --rm -d -p 8080:80 --name web -v ~/code/docker/testfolder:/usr/share/nginx/html nginx
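To quickly verify the mount (a sketch; hello.txt is just a placeholder, use any file that actually exists in your testfolder):
# list what nginx sees inside the container
docker exec web ls /usr/share/nginx/html
# request one of the files through the published port
curl http://localhost:8080/hello.txt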
To get a directory listing of the static files, we need to create a custom nginx conf file and mount it into the nginx container.
Example:
Docker command:
docker run -it --rm -d -p 8080:80 --name web -v ~/code/nginx-static:/usr/share/nginx/html -v ~/code/nginx-static/default.conf:/etc/nginx/conf.d/default.conf nginx
default.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        autoindex on;
        root /usr/share/nginx/html;
    }
}
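With autoindex on, requesting the server root now returns an auto-generated HTML listing of the folders and subfolders. A quick check against the container started above:
# expect an HTML index page listing the directory contents
curl http://localhost:8080/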
I'm new to Docker and nginx so this may be a simple question but I've been searching through question/answers for a while and haven't found the correct solution.
I'm trying to run an nginx server through Docker to reroute all requests for my.example.com/api/... to localhost:3000/api/...
I have the following Dockerfile:
FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
and the nginx.conf file:
server {
    server_name my.example.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
When I make the calls to the API on localhost:3000 this works fine, but when I try to run against my.example.com I get a network error that the host isn't found. To be clear, the domain I want to 'redirect' traffic from to localhost is a valid server address, but I want to mock its API for development.
This isn't working because nginx is proxying the request to localhost, which is the container itself, but your app is running on the host's port 3000 -- outside of the container. Check this article.
Change
proxy_pass http://localhost:3000/;
to
proxy_pass http://host.docker.internal:3000/;
and add 127.0.0.1 example.com my.example.com to /etc/hosts.
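Putting it together, a possible test flow (a sketch: the api-mock-proxy tag and the /api/some-endpoint path are made-up names; on Linux, host.docker.internal is not defined by default, so it is mapped to the host gateway explicitly, which needs Docker 20.10 or newer):
# build the image from the Dockerfile above
docker build -t api-mock-proxy .
# on Linux, map host.docker.internal to the host's gateway IP
docker run -d -p 80:80 --add-host=host.docker.internal:host-gateway api-mock-proxy
# with "127.0.0.1 my.example.com" in /etc/hosts, the request now hits the local mock
curl http://my.example.com/api/some-endpoint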
I have two identical Docker containers running on different ports on a CentOS 7 server. The older version runs on port 81, the newer one on port 8080 (ports 82 and 83 were checked as well).
When I try to proxy the second container and change the port from 81 to 8080, I receive an nginx error (HTTP/1.1 502 Bad Gateway).
Nginx is not in a container. I just have it installed on the server.
Here is my proxy_pass setting:
location / {
    proxy_pass http://0.0.0.0:8080/;
}
And some additional information:
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If I try to access containers directly via their ports everything works fine.
curl 0.0.0.0:81
{"msg":"Phone Masks service"}
curl 0.0.0.0:8080
{"msg":"Phone Masks service"}
nginx version: nginx/1.16.1
Docker version 19.03.4, build 9013bf583a
The full server config is pretty standard; I didn't change anything except the proxy_pass setting:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://0.0.0.0:8080/;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
The command I use to start the container:
sudo docker run --rm -it -p 8080:8080 -e PORT="8080" api
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
47ef127e3e49 api "/start.sh" 26 minutes ago Up 26 minutes 80/tcp, 0.0.0.0:8080->8080/tcp infallible_borg
5d5fe891ba30 api "/start.sh" 7 hours ago Up 7 hours 80/tcp, 0.0.0.0:81->81/tcp hopeful_cerf
This is SELinux-related:
setsebool -P httpd_can_network_connect true
According to this thread:
The second one [httpd_can_network_connect] allows httpd modules and scripts to make outgoing connections to ports which are associated with the httpd service. To see a list of those ports run semanage port -l | grep -w http_port_t
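To inspect and apply the change (standard SELinux tooling on CentOS 7):
# check the current value of the boolean
getsebool httpd_can_network_connect
# list the ports SELinux associates with httpd
semanage port -l | grep -w http_port_t
# allow httpd/nginx to make outgoing network connections, persistently
sudo setsebool -P httpd_can_network_connect true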
I installed nginx using Ansible. To install it on CentOS 7 I used the yum package, so by default it runs as the root user. I want it to start and run as a different user (e.g. an nginx user) on the CentOS box. When I try to run it with a different user I get the following error:
Job for nginx.service failed because the control process exited with
error code. See "systemctl status nginx.service" and "journalctl -xe"
for details.
I know it's not advisable to run as root. So how do I get around this and run nginx as a non-root user? Thanks.
Add/Change the following in your /etc/nginx/nginx.conf:
user nginx;
You should create the user and grant permissions on the webroot directories recursively.
This way only the master process runs as root, because only root processes can listen on ports below 1024. A web server typically runs on port 80 and/or 443, which means it needs to be started as root.
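A minimal sketch of those two steps (the nginx user name and webroot path are assumptions; on CentOS the yum package may already have created the user):
# create a system user for nginx if the package did not already create one
sudo useradd --system --no-create-home --shell /sbin/nologin nginx
# grant it the webroot recursively
sudo chown -R nginx:nginx /usr/share/nginx/html
# restart so the worker processes are re-spawned under the new user
sudo systemctl restart nginx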
Note from the documentation on master and worker processes:
The main purpose of the master process is to read and evaluate
configuration files, as well as maintain the worker processes.
The worker processes do the actual processing of requests.
To run the master process as a non-root user:
Change the ownership of the files whose paths are specified by the following nginx directives:
error_log
access_log
pid
client_body_temp_path
fastcgi_temp_path
proxy_temp_path
scgi_temp_path
uwsgi_temp_path
Change the listen directives to ports above 1024, log in as the desired user, and run nginx with nginx -c /path/to/nginx.conf.
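As a rough sketch (the appuser name and paths are examples; use whatever your nginx.conf actually points to):
# hand the log and temp locations over to the unprivileged user
sudo chown -R appuser:appuser /var/log/nginx /var/cache/nginx
# then, logged in as appuser, start nginx against a config that listens above 1024
nginx -c /path/to/nginx.conf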
Just in case it helps, for testing/debugging purposes, I sometimes run an nginx instance as a non-privileged user on my Debian (stretch) laptop.
I use a minimal config file like this:
worker_processes 1;
error_log stderr;
daemon off;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log access.log;

    server {
        listen 8080;
        server_name localhost;

        location / {
            include /etc/nginx/uwsgi_params;
            uwsgi_pass localhost:8081;
        }
    }
}
and I start the process with:
/usr/sbin/nginx -c nginx.conf -p $PWD
Just in case it helps someone stumbling over this question in 2020, here is my minimal nginx.conf for running a web server on port 8088 that works for a non-root user. No modding of file permissions necessary! (Tested on CentOS 7.4 with nginx 1.16.1.)
error_log /tmp/error.log;
pid /tmp/nginx.pid;

events {
    # No special events for this simple setup
}

http {
    server {
        listen 8088;
        server_name localhost;

        # Set a number of log, temp and cache file options that will otherwise
        # default to restricted locations accessible only to root.
        access_log /tmp/nginx_host.access.log;
        client_body_temp_path /tmp/client_body;
        fastcgi_temp_path /tmp/fastcgi_temp;
        proxy_temp_path /tmp/proxy_temp;
        scgi_temp_path /tmp/scgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;

        # Serve local files
        location / {
            root /home/<your_user>/web;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
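Started the same way as the previous answer's config (nginx -c /path/to/nginx.conf as your normal user), it can be checked with a plain curl; everything nginx writes lands under /tmp, so no root-owned paths are touched:
curl -I http://localhost:8088/
# logs end up in /tmp rather than /var/log/nginx
tail /tmp/nginx_host.access.log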
Why not use the rootless bitnami/nginx image:
$ docker run --name nginx bitnami/nginx:latest
More info
To verify it is not running as root but as your standard user (belonging to the docker group):
$ docker exec -it nginx id
uid=1**8 gid=0(root) groups=0(root)
And to verify that nginx isn't listening on the root-restricted port 443 even internally:
$ docker ps -a | grep nginx
2453b37a9084 bitnami/nginx:latest "/opt/bitnami/script…" 4 minutes ago Up 30 seconds 8080/tcp, 0.0.0.0:8443->8443/tcp jenkins_nginx
It's easy to configure (see the docs) and runs even under random UIDs defined at run time (i.e. not hard-coded in the Dockerfile). In fact, it is Bitnami's policy to have all their containers rootless and prepared for UID changes at runtime, which is why we've been using them for a few years now under the very security-conscious OpenShift 3.x (bitnami/nginx in particular as a reverse proxy needed to enable authentication to the MLflow web app).
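A quick way to try it locally (a sketch; the image listens on the unprivileged port 8080 internally, as the docker ps output above shows):
# run the rootless image and publish its internal non-root port
docker run -d --name nginx -p 8080:8080 bitnami/nginx:latest
# confirm the main process is not running as root
docker exec -it nginx id
# and that it answers on the unprivileged port
curl -I http://localhost:8080/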
I am running an Express-based website in a Docker container based on the Node.js image. How do I use Let's Encrypt with a container based on that image?
The first thing I've done is to create a simple express-based docker image.
I am using the following app.js, taken from express's hello world example in their docs:
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
I also ended up with the following package.json file after running npm init as described in the same doc:
{
  "name": "exampleexpress",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.14.0"
  }
}
I've created the following Dockerfile:
FROM node:onbuild
EXPOSE 3000
CMD node app.js
Here's the output when I do my docker build step. I've removed most of the npm install output for brevity's sake:
$ docker build -t exampleexpress .
Sending build context to Docker daemon 1.262 MB
Step 1 : FROM node:onbuild
# Executing 3 build triggers...
Step 1 : COPY package.json /usr/src/app/
Step 1 : RUN npm install
---> Running in 981ca7cb7256
npm info it worked if it ends with ok
<snip>
npm info ok
Step 1 : COPY . /usr/src/app
---> cf82ea76e369
Removing intermediate container ccd3f79f8de3
Removing intermediate container 391d27f33348
Removing intermediate container 1c4feaccd08e
Step 2 : EXPOSE 3000
---> Running in 408ac1c8bbd8
---> c65c7e1bdb94
Removing intermediate container 408ac1c8bbd8
Step 3 : CMD node app.js
---> Running in f882a3a126b0
---> 5f0f03885df0
Removing intermediate container f882a3a126b0
Successfully built 5f0f03885df0
Running this image works like this:
$ docker run -d --name helloworld -p 3000:3000 exampleexpress
$ curl 127.0.0.1:3000
Hello World!
We can clean this up by doing: docker rm -f helloworld
Now, I've got my very basic express-based website running in a Docker container, but it doesn't yet have any TLS set up. Looking again at the expressjs docs, the security best practice when using TLS is to use nginx.
Since I want to introduce a new component (nginx), I'll do that with a second container.
Since nginx will need some certificates to work with, let's go ahead and generate those with the letsencrypt client. The letsencrypt docs on how to use letsencrypt in Docker can be found here: http://letsencrypt.readthedocs.io/en/latest/using.html#running-with-docker
Run the following commands to generate the initial certificates. You will need to run this on a system that is connected to the public internet, and has port 80/443 reachable from the letsencrypt servers. You'll also need to have your DNS name set up and pointing to the box that you run this on:
export LETSENCRYPT_EMAIL=<youremailaddress>
export DNSNAME=www.example.com
docker run --rm \
-p 443:443 -p 80:80 --name letsencrypt \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
quay.io/letsencrypt/letsencrypt:latest \
certonly -n -m $LETSENCRYPT_EMAIL -d $DNSNAME --standalone --agree-tos
Make sure to replace the values for LETSENCRYPT_EMAIL and DNSNAME. The email address is used for expiration notifications.
Now, let's set up an nginx server that will make use of this newly generated certificate. First, we'll need an nginx config file that is configured for TLS:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        #add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        server_name www.example.com;

        ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

        location ^~ /.well-known/ {
            root /usr/share/nginx/html;
            allow all;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://expresshelloworld:3000;
        }
    }
}
We can put this config file into our own custom nginx image with the following Dockerfile:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
This can be built with the following command: docker build -t expressnginx .
Next, we'll create a custom network so we can take advantage of Docker's service discovery feature:
docker network create -d bridge expressnet
Now, we can fire up the helloworld and nginx containers:
docker run -d \
--name expresshelloworld --net expressnet exampleexpress
docker run -d -p 80:80 -p 443:443 \
--name expressnginx --net expressnet \
-v /etc/letsencrypt:/etc/letsencrypt \
-v /usr/share/nginx/html:/usr/share/nginx/html \
expressnginx
Double check that nginx came up properly by taking a look at the output of docker logs expressnginx.
The nginx config file should redirect any requests on port 80 over to port 443. We can test that by running the following:
curl -v http://www.example.com/
We should also, at this point, be able to make a successful TLS connection, and see our Hello World! response back:
curl -v https://www.example.com/
Now, to set up the renewal process. The nginx.conf above has provisions for the letsencrypt .well-known path for the webroot verification method. If you run the following command, it will handle renewal. Normally, you'll run this command on some sort of cron so that your certs will be renewed before they expire:
export LETSENCRYPT_EMAIL=me@example.com
export DNSNAME=www.example.com
docker run --rm --name letsencrypt \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
-v "/usr/share/nginx/html:/usr/share/nginx/html" \
quay.io/letsencrypt/letsencrypt:latest \
certonly -n --webroot -w /usr/share/nginx/html -d $DNSNAME --agree-tos
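If you run the renewal from cron, a crontab entry could look like this (the twice-monthly schedule and the /root/renew-certs.sh wrapper, which would contain the docker run above, are assumptions; adjust to your setup):
# m h dom mon dow  command
30 3 1,15 * * /root/renew-certs.sh && docker exec expressnginx nginx -s reload
# the reload tells nginx to pick up the renewed certificate without downtime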
There are many ways to achieve this depending on your setup. One popular way is to set up nginx in front of your Docker container and handle the certificates entirely within your nginx config.
The nginx config can contain a list of 'upstreams' (your Docker containers) and 'servers', which essentially map requests to particular upstreams. As part of that mapping you can also handle SSL.
You can use certbot to help you set this up.
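As a rough sketch of that mapping (the container's published port, domain and certificate paths are examples; certbot would populate the /etc/letsencrypt paths):
# write a minimal upstream/server mapping into nginx's conf.d
cat > /etc/nginx/conf.d/app.conf <<'EOF'
upstream app_upstream {
    # the Docker container's published port on the host (example)
    server 127.0.0.1:3000;
}
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://app_upstream;
    }
}
EOF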
I've recently implemented HTTPS with Let's Encrypt using nginx. I'm listing the challenges I faced and the way I implemented it, step by step, here.
Challenge:
The Docker file system is ephemeral. That means that each time you make a build, any certificates stored or generated inside the container will vanish, so it's very tricky to generate certificates inside the container.
Steps to overcome it:
The guide below is independent of the kind of app you have, as it only involves nginx and Docker.
First install nginx on your server (not in a container, but directly on the server). You can follow this guide to generate a certificate for your domain using certbot.
Now stop this nginx server and start the build of your app. Install nginx in your container and open ports 80 and 443 on your Docker container (if using AWS, open them on the EC2 instance as well, since by default AWS opens only port 80).
Next run your container and mount the volumes that contain the certificate files directly on the container. I've answered a question here on how to do the same.
This will enable HTTPS on your app. In case you don't see the change and are using Chrome, try clearing Chrome's DNS cache.
Auto renewal process :
Let's Encrypt certificates are valid for only 3 months. The guide above also includes steps to configure auto-renewal, but you have to stop and restart your container at least every 3 months to make sure the certificates mounted on your Docker container are up to date. (You will also have to restart the nginx server we set up in the first step to make the renewal happen smoothly.)
You may have a look here: https://certbot.eff.org/docs/using.html?highlight=docker#running-with-docker
Then what I personally do is:
Create a Docker volume to store the certs and generate the certs with the above image
Create a Docker user-defined network (https://docs.docker.com/engine/userguide/networking/#/user-defined-networks)
Create an image based on nginx with your configuration (maybe this will be useful)
Create a Nginx container based on your image, mount the volume in it and connect it to the network (also forward port 80 and 443 to whatever you want)
Create a container for your Node.js app and connect it to the same network
Now if you configured nginx correctly (pointing to the right path for the TLS certs and proxying to the right URL, like http://my-app:3210), you should have access to your app over HTTPS.
Front end: nginx, which listens on port 443 and proxies to the back end.
Back end: your Docker container.
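A condensed sketch of the steps above as commands (the volume, network and image names are examples):
# a volume for the certs and a user-defined network for name resolution
docker volume create letsencrypt-certs
docker network create webnet
# nginx built from your own config; certs mounted, ports 80/443 forwarded
docker run -d --name proxy --net webnet -p 80:80 -p 443:443 \
    -v letsencrypt-certs:/etc/letsencrypt:ro my-nginx-image
# the app container is then reachable from nginx as http://my-app:3210
docker run -d --name my-app --net webnet my-node-app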
I'm trying to use a dockerized version of nginx as a proxy server for my Node (ExpressJS) application. Without any nginx configuration, just publishing port 80 for the container, I am able to see the default nginx landing page, so I know that much is working.
Now I can mount my sites-enabled directory that contains the configuration for proxy_pass localhost:3000. I have my Node application running locally (not in any Docker container) and I can access it via port 3000 (i.e. localhost:3000). However, I would assume that with the nginx container running, mapped to port 80, and proxying to localhost:3000, I would be able to see my very simple (hello world) application. Instead I receive a 502.
Do I need to pass something into Docker? Is this likely an nginx configuration error? Here is my nginx configuration:
server {
    listen 0.0.0.0:80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:3000;
    }
}
I have tried using this question but it did not seem to help. That is unless I'm doing something completely wrong.
If you're using Docker for Mac 18.03 or newer, it automatically creates a special DNS entry, host.docker.internal, that dynamically resolves to the host's IP. You can then use that DNS name from inside a container as a stand-in for localhost to proxy services running on the host machine.
e.g. an nginx config file:
server {
    listen 0.0.0.0:80;
    server_name localhost;

    location / {
        proxy_pass http://host.docker.internal:3000;
    }
}
You can get your current IP address as shown here:
ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}'
Then you can use the --add-host flag with docker run:
docker run --add-host localnode:$(ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}') ...
In your proxy_pass use localnode instead of localhost.
Yes. Docker needs to know about your host machine. You can set an alias to it with the --add-host switch. On a *nix box, to create an alias to the name "localbox", this would be:
docker run --add-host=localbox:<host_name> my_repo/my_image
On boot2docker it would be:
docker run --add-host=localbox:192.168.59.3 my_repo/my_image
where you should replace "192.168.59.3" with whatever boot2docker ip returns.
Then, you should access your host machine always through the alias localbox, so just change your nginx config to:
location / {
    proxy_pass http://localbox:3000;
}
On Linux, this works for me:
In docker-compose.yml, mount an entrypoint script into the nginx container:
nginx:
  image: nginx:1.19.2
  # ...
  volumes:
    - ./nginx-entrypoint.sh:/docker-entrypoint.d/nginx-entrypoint.sh:ro
The entrypoint script maps the name docker.host.internal to the host's gateway address in the container's /etc/hosts:
apt update
apt install iproute2 -y
echo "`ip route | awk '/default/ { print $3 }'`\tdocker.host.internal" >> /etc/hosts
Then, instead of using localhost inside the container, you can use docker.host.internal.
I had the same problem. I fixed it by using the local IP address of the Docker host instead of localhost.
So if the local IP address of your Docker host in your LAN is 192.168.2.2:
location / {
    proxy_pass http://192.168.2.2:3000;
}
Of course this solution only works well if you have assigned a static IP to your Docker host.
And finally, if you are using nginx as a reverse proxy for multiple services, you can spin all of that up with docker-compose. Make sure to expose ports "80:80" only on the nginx service. For the other services, expose only the service port without mapping it to the host network, like so:
web:
  .....
  expose:
    - 8080
nginx:
  .....
  ports:
    - "80:80"
and then in the nginx configuration use proxy_pass http://service-name:port.
You don't need an upstream block at all.
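For instance, the matching location block for the web service above could be written like this (a sketch; it assumes the conf.d directory is mounted into the nginx service):
# proxy to the compose service by its DNS name on the shared compose network
cat > conf.d/web.conf <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://web:8080;
    }
}
EOF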