How to use Let's Encrypt with a Docker container based on the Node.js image

I am running an Express-based website in a Docker container based on the Node.js image. How do I use Let's Encrypt with a container based on that image?

The first thing I've done is create a simple Express-based Docker image.
I am using the following app.js, taken from Express's Hello World example in their docs:
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
I also ended up with the following package.json file after running npm init as described in the same doc:
{
  "name": "exampleexpress",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.14.0"
  }
}
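For reference, a package.json much like this can be scaffolded from scratch; a minimal sketch (npm init -y accepts all defaults, so you would then change main to app.js by hand):
npm init -y                  # scaffold a default package.json
npm install --save express   # record express under "dependencies"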
I've created the following Dockerfile:
FROM node:onbuild
EXPOSE 3000
CMD node app.js
Here's the output when I do my docker build step. I've removed most of the npm install output for brevity's sake:
$ docker build -t exampleexpress .
Sending build context to Docker daemon 1.262 MB
Step 1 : FROM node:onbuild
# Executing 3 build triggers...
Step 1 : COPY package.json /usr/src/app/
Step 1 : RUN npm install
---> Running in 981ca7cb7256
npm info it worked if it ends with ok
<snip>
npm info ok
Step 1 : COPY . /usr/src/app
---> cf82ea76e369
Removing intermediate container ccd3f79f8de3
Removing intermediate container 391d27f33348
Removing intermediate container 1c4feaccd08e
Step 2 : EXPOSE 3000
---> Running in 408ac1c8bbd8
---> c65c7e1bdb94
Removing intermediate container 408ac1c8bbd8
Step 3 : CMD node app.js
---> Running in f882a3a126b0
---> 5f0f03885df0
Removing intermediate container f882a3a126b0
Successfully built 5f0f03885df0
Running this image works like this:
$ docker run -d --name helloworld -p 3000:3000 exampleexpress
$ curl 127.0.0.1:3000
Hello World!
We can clean this up by doing: docker rm -f helloworld
Now, I've got my very basic Express-based website running in a Docker container, but it doesn't yet have any TLS set up. Looking again at the Express docs, the security best practice for TLS is to put nginx in front.
Since I want to introduce a new component (nginx), I'll do that with a second container.
Since nginx will need some certificates to work with, let's go ahead and generate those with the letsencrypt client. The letsencrypt docs on how to use letsencrypt in Docker can be found here: http://letsencrypt.readthedocs.io/en/latest/using.html#running-with-docker
Run the following commands to generate the initial certificates. You will need to run this on a system that is connected to the public internet and has ports 80 and 443 reachable from the Let's Encrypt servers. You'll also need to have your DNS name set up and pointing to the box that you run this on:
export LETSENCRYPT_EMAIL=<youremailaddress>
export DNSNAME=www.example.com
docker run --rm \
-p 443:443 -p 80:80 --name letsencrypt \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
quay.io/letsencrypt/letsencrypt:latest \
certonly -n -m $LETSENCRYPT_EMAIL -d $DNSNAME --standalone --agree-tos
Make sure to replace the values for LETSENCRYPT_EMAIL and DNSNAME. The email address is used for expiration notifications.
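If the run succeeds, the client writes the certificate files under /etc/letsencrypt/live/<your domain> on the host. A quick way to confirm (this directory layout is standard certbot output):
sudo ls -l /etc/letsencrypt/live/$DNSNAME/
# expect symlinks: cert.pem  chain.pem  fullchain.pem  privkey.pem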
Now, let's set up an nginx server that will make use of this newly generated certificate. First, we'll need an nginx config file that is configured for TLS:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stdout main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        #add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        server_name www.example.com;

        ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

        location ^~ /.well-known/ {
            root /usr/share/nginx/html;
            allow all;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://expresshelloworld:3000;
        }
    }
}
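As a sanity check, you can run nginx's built-in config test against this file in a throwaway container. Note that nginx resolves both the certificate paths and the proxy_pass hostname at startup, so run the check on the expressnet network after the expresshelloworld container (both created below) is up:
docker run --rm --net expressnet \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  nginx:alpine nginx -t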
We can put this config file into our own custom nginx image with the following Dockerfile:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
This can be built with the following command: docker build -t expressnginx .
Next, we'll create a custom network so we can take advantage of Docker's service discovery feature:
docker network create -d bridge expressnet
Now, we can fire up the helloworld and nginx containers:
docker run -d \
--name expresshelloworld --net expressnet exampleexpress
docker run -d -p 80:80 -p 443:443 \
--name expressnginx --net expressnet \
-v /etc/letsencrypt:/etc/letsencrypt \
-v /usr/share/nginx/html:/usr/share/nginx/html \
expressnginx
Double check that nginx came up properly by taking a look at the output of docker logs expressnginx.
The nginx config file should redirect any requests on port 80 over to port 443. We can test that by running the following:
curl -v http://www.example.com/
We should also, at this point, be able to make a successful TLS connection, and see our Hello World! response back:
curl -v https://www.example.com/
Now, to set up the renewal process. The nginx.conf above has provisions for the Let's Encrypt .well-known path used by the webroot verification method. If you run the following command, it will handle renewal. Normally, you'll run this command from some sort of cron job so that your certs are renewed before they expire:
export LETSENCRYPT_EMAIL=me@example.com
export DNSNAME=www.example.com
docker run --rm --name letsencrypt \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
-v "/usr/share/nginx/html:/usr/share/nginx/html" \
quay.io/letsencrypt/letsencrypt:latest \
certonly -n --webroot -w /usr/share/nginx/html -d $DNSNAME --agree-tos
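Let's Encrypt certificates are valid for 90 days, so a monthly renewal leaves a comfortable margin. Note that nginx only reads certificates at startup, so it needs a reload after each renewal. A sketch of a cron entry, assuming the renewal command above is wrapped in a hypothetical /usr/local/bin/renew-certs.sh script:
# /etc/cron.d/letsencrypt-renew -- 03:30 on the 1st of each month
30 3 1 * * root /usr/local/bin/renew-certs.sh && docker exec expressnginx nginx -s reload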

There are many ways to achieve this, depending on your setup. One popular way is to set up nginx in front of your Docker container and handle the certificates entirely within your nginx config.
The nginx config can contain a list of 'upstreams' (your Docker containers) and 'servers', which essentially map requests to particular upstreams. As part of that mapping, you can also handle SSL.
You can use certbot to help you set this up.
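For example, with nginx already installed on the host, certbot's nginx plugin can obtain a certificate and edit the matching server block in one step (the domain below is a placeholder):
sudo certbot --nginx -d www.example.com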

I've recently implemented HTTPS with Let's Encrypt using nginx. I'm listing the challenges I faced and the way I implemented it, step by step, here.
Challenge:
The Docker file system is ephemeral. That means that after every build, any certificates stored or generated inside the container will vanish, so it's very tricky to generate certificates inside the container.
Steps to overcome it:
The guide below is independent of the kind of app you have, as it only involves nginx and Docker.
First, install nginx on your server (not in a container, but directly on the server). You can follow this guide to generate a certificate for your domain using certbot.
Now stop this nginx server and start the build of your app. Install nginx in your container and open ports 80 and 443 on your Docker container. (If using AWS, open them on the EC2 instance as well, since by default AWS opens only port 80.)
Next, run your container and mount the volumes that contain the certificate files directly on the container. I've answered a question here on how to do the same.
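A hedged sketch of that run (myapp and myapp-image are placeholder names; the key part is bind-mounting the host's /etc/letsencrypt into the container read-only):
docker run -d --name myapp \
  -p 80:80 -p 443:443 \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  myapp-image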
This will enable HTTPS on your app. In case you don't see the change and are using Chrome, try clearing Chrome's DNS cache.
Auto renewal process:
Let's Encrypt certificates are valid for only three months. The guide above also sets up auto renewal, but you have to stop and restart your container at least every three months to make sure the certificates mounted on your Docker container are up to date. (You will also have to restart the nginx server we set up in the first step to make the renewal happen smoothly.)
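certbot's renew subcommand accepts --pre-hook and --post-hook, which can do the stop/start for you; a sketch of a cron entry (the container name myapp is hypothetical, and certbot renew only acts on certificates close to expiry, so a twice-daily schedule is safe):
# /etc/cron.d/certbot-renew (sketch)
0 */12 * * * root certbot renew --pre-hook "docker stop myapp" --post-hook "docker start myapp"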

You may have a look here: https://certbot.eff.org/docs/using.html?highlight=docker#running-with-docker
Then what I personally do is:
Create a Docker volume to store the certs and generate the certs with the above image
Create a Docker user-defined network (https://docs.docker.com/engine/userguide/networking/#/user-defined-networks)
Create an image based on nginx with your configuration (maybe this will be useful)
Create a Nginx container based on your image, mount the volume in it and connect it to the network (also forward port 80 and 443 to whatever you want)
I would create a container for your node.js app and connect it to the same network
Now, if you configured nginx correctly (pointing to the right path for the TLS certs and proxying to the right URL, like http://my-app:3210), you should have access to your app over HTTPS.
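A hedged sketch of that sequence, with assumed names throughout (the letsencrypt volume, my-net network, my-nginx image, and my-node-app image are placeholders, and certbot/certbot stands in for the image from the linked docs):
docker volume create letsencrypt                        # volume for the certs
docker run --rm -p 80:80 -p 443:443 \
  -v letsencrypt:/etc/letsencrypt \
  certbot/certbot certonly --standalone \
  -d www.example.com -m you@example.com --agree-tos -n  # generate certs into the volume
docker network create my-net                            # user-defined network
docker build -t my-nginx .                              # nginx image with your config
docker run -d --name nginx-front --net my-net \
  -v letsencrypt:/etc/letsencrypt:ro \
  -p 80:80 -p 443:443 my-nginx                          # nginx container, ports forwarded
docker run -d --name my-app --net my-net my-node-app    # node.js app on the same network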

Front end: nginx, listening on port 443 and proxying to the back end.
Back end: your Docker container.

Related

Configure nginx to serve files from the file system

I am inside the /root directory, and I have a folder inside it called testfolder. Inside that folder I have a bunch of folders and subfolders which I want to host on the nginx server.
I am running the following command to start my nginx server:
docker run --name file-server -v $(pwd)/testfolder:/app -p 8080:80 -d nginx
The /etc/nginx/sites-available/default file has the following contents:
location /testfolder {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    alias /root/testfolder/;
    autoindex on;
    try_files $uri $uri/ =404;
}
Now when I start my server and hit /testfolder, it gives me a 403 error.
Serving static files using nginx as the web server is a good option.
To make the static files available, you need to mount your testfolder at /usr/share/nginx/html inside the nginx image, after which you will be able to see the files in your browser on port 8080.
Docker command:
docker run -it --rm -d -p 8080:80 --name web -v ~/code/docker/testfolder:/usr/share/nginx/html nginx
To access the directory in list view for static files, we need to create a custom nginx conf file and pass it to the nginx container.
Example Docker command:
docker run -it --rm -d -p 8080:80 --name web -v ~/code/nginx-static:/usr/share/nginx/html -v ~/code/nginx-static/default.conf:/etc/nginx/conf.d/default.conf nginx
default.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        autoindex on;
        root /usr/share/nginx/html;
    }
}

Nginx Load Balancer High availability running in Docker

I am running an nginx load balancer container for my backend web servers.
Since I have only one container running as the nginx load balancer, if that container dies or crashes, clients cannot reach the web servers.
Below is the nginx.conf:
events {}

http {
    upstream backend {
        server 1.18.0.2;
        server 1.18.0.3;
    }

    # This server accepts all traffic to port 80 and passes it to the upstream.
    # Notice that the upstream name and the proxy_pass need to match.
    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
Below is the Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
I am running the nginx container like below:
docker run -d -it -p 8082:80 nginx-ls
I am accessing the above server at http://server-ip:8082.
The container may crash or die at any time, in which case my application is not reachable. So I tried to run another container as shown below, but I get the following error, since I am using the same port; obviously we cannot reuse the same port on the same host.
docker run -d -it -p 8082:80 nginx-ls
06a20239bd303fab2bbe161255fadffd5b468098424d652d1292974b1bcc71f8
docker: Error response from daemon: driver failed programming external connectivity on endpoint
suspicious_darwin (58eeb43d88510e4f67f618aaa2ba06ceaaa44db3ccfb0f7335a739206e12a366): Bind for
0.0.0.0:8082 failed: port is already allocated.
So I ran it on a different port, and it works fine:
docker run -d -it -p 8083:80 nginx-ls
But how do we tell/configure clients to use the port 8083 container when the port 8082 container is down? Or is there another, better method to achieve an nginx load balancer with high availability?
Note: for various reasons, I cannot use docker-compose.

How to set up an Nginx Docker reverse proxy

I am trying to use an Nginx reverse proxy container for my web application (another Docker container), which runs on a non-standard port. Unfortunately I cannot edit my web application container, as it's developed by a vendor, so I have a plain request: I need to set up nginx as a frontend on 80/443 and forward all requests to 10.0.0.0:10101 (the web app container).
I have tried jwilder/nginx-proxy and the default Docker nginx container, but I was not able to get the right configuration. Any lead would be great.
At the moment I haven't shared any conf files; I can share them on demand. Here are the environment details:
OS - Ubuntu
Azure
Use nginx's proxy_pass feature.
Assuming you have both containers linked and the web app container's name is webapp, use this configuration in the nginx container:
events {}

http {
    upstream backend {
        server webapp:10101;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
NOTE: I am skipping some configuration here, as this is just an example.
Put the configuration in an nginx.conf file and then deploy the container like this:
docker run -d -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf -p 80:80 nginx
Then you'll be able to access your webapp at http://localhost.
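Since --link is a legacy Docker feature, a user-defined network gives the same name resolution; a hedged sketch, with vendor/webapp as a hypothetical image name:
docker network create proxynet
docker run -d --net proxynet --name webapp vendor/webapp
docker run -d --net proxynet -p 80:80 \
  -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" nginx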

Running Nginx as a non-root user

I installed nginx using Ansible. To install on CentOS 7 I used the yum package, so by default it runs as the root user. I want it to start and run as a different user (e.g., the nginx user) on the CentOS box. When I try to run it as a different user I get the following error:
Job for nginx.service failed because the control process exited with
error code. See "systemctl status nginx.service" and "journalctl -xe"
for details.
I know it's not advisable to run as root. So how do I get around this and run nginx as a non-root user? Thanks.
Add/Change the following in your /etc/nginx/nginx.conf:
user nginx;
You should create the user and grant permissions on the webroot directories recursively.
This way, only the master process runs as root. Because only root processes can listen on ports below 1024, and a web server typically runs on port 80 and/or 443, it needs to be started as root.
Note from the documentation on master and worker processes:
The main purpose of the master process is to read and evaluate
configuration files, as well as maintain the worker processes.
The worker processes do the actual processing of requests.
To run the master process as a non-root user:
Change the ownership of the files whose paths are specified by the following nginx directives:
error_log
access_log
pid
client_body_temp_path
fastcgi_temp_path
proxy_temp_path
scgi_temp_path
uwsgi_temp_path
Change the listen directives to ports above 1024, log in as the desired user, and run nginx with nginx -c /path/to/nginx.conf
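A hedged sketch of those steps as shell commands (the webuser account and the /opt/webuser-nginx prefix are hypothetical; match them to the directives in your nginx.conf):
sudo useradd -r webuser
sudo mkdir -p /opt/webuser-nginx/logs /opt/webuser-nginx/tmp
sudo chown -R webuser:webuser /opt/webuser-nginx
# nginx.conf should point error_log, access_log, pid and the *_temp_path
# directives into /opt/webuser-nginx, and listen on a port above 1024
sudo -u webuser nginx -c /opt/webuser-nginx/nginx.conf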
Just in case it helps: for testing/debugging purposes, I sometimes run an nginx instance as a non-privileged user on my Debian (stretch) laptop.
I use a minimal config file like this:
worker_processes 1;
error_log stderr;
daemon off;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log access.log;

    server {
        listen 8080;
        server_name localhost;

        location / {
            include /etc/nginx/uwsgi_params;
            uwsgi_pass localhost:8081;
        }
    }
}
and I start the process with:
/usr/sbin/nginx -c nginx.conf -p $PWD
Just in case it helps someone stumbling over this question in 2020, here is my minimal nginx.conf for running a web server on port 8088; it works for a non-root user, with no modding of file permissions necessary! (Tested on CentOS 7.4 with nginx 1.16.1.)
error_log /tmp/error.log;
pid /tmp/nginx.pid;

events {
    # No special events for this simple setup
}

http {
    server {
        listen 8088;
        server_name localhost;

        # Set a number of log, temp and cache file options that will otherwise
        # default to restricted locations accessible only to root.
        access_log /tmp/nginx_host.access.log;
        client_body_temp_path /tmp/client_body;
        fastcgi_temp_path /tmp/fastcgi_temp;
        proxy_temp_path /tmp/proxy_temp;
        scgi_temp_path /tmp/scgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;

        # Serve local files
        location / {
            root /home/<your_user>/web;
            index index.html index.htm;
            try_files $uri $uri/ /index.html;
        }
    }
}
Why not use the rootless bitnami/nginx image:
$ docker run --name nginx bitnami/nginx:latest
More info
To verify it is not running as root but as your standard user (belonging to the docker group):
$ docker exec -it nginx id
uid=1**8 gid=0(root) groups=0(root)
And to verify that nginx isn't listening on the root-restricted port 443 even internally:
$ docker ps -a | grep nginx
2453b37a9084 bitnami/nginx:latest "/opt/bitnami/script…" 4 minutes ago Up 30 seconds 8080/tcp, 0.0.0.0:8443->8443/tcp jenkins_nginx
It's easy to configure (see the docs) and even runs under random UIDs defined at run time (i.e., not hard-coded in the Dockerfile). In fact, it is Bitnami's policy to keep all their containers rootless and prepared for UID changes at runtime, which is why we've been using them for a few years now under the very security-conscious OpenShift 3.x (bitnami/nginx in particular, as a reverse proxy needed to enable authentication to the MLflow web app).

nginx/apache redirection for output port on docker container on vps

I'm a Linux noob at administering Docker containers with Apache or nginx on a VPS. I use a classic OVH VPS (4 GB RAM, 25 GB SSD) with a pre-installed image of Ubuntu 15.04 + Docker.
Installing a Docker container is really easy; in my case I installed the sharelatex image without any problem.
docker run -d \
-v ~/sharelatex_data:/var/lib/sharelatex \
-p 5000:80 \
--name=sharelatex \
sharelatex/sharelatex
The site is accessible on the IP of the VPS at http://51.255.47.40:5000, which shows that the site works without any problem.
I already have a subdomain (tools.sebastienreycoyrehourcq.fr) configured to point to the VPS server IP (51.255.47.40, routed to External in the Webfaction panel); it's not working, and I don't understand why.
I installed an Apache server on 51.255.47.40, but I suppose the best option is probably to install a Docker image of nginx or Apache? Can you advise me on this point? And after that, how can I redirect port 5000 of the Docker image to a classic port 80 of Apache or nginx linked to my subdomain?
The previous answers probably cover most of the issues, especially if there were redirection problems with your domain name.
In order to be fully portable and use all the possibilities of Docker, my recommendation would be to use the official nginx Docker image, make it the only container accessible from the outside (by publishing its ports), and use --link to manage connectivity between your nginx container and your other containers.
I have done that in a similar situation, and it works pretty well. Below is a tentative translation of what I have done to your situation.
You start your sharelatex container without specifying any external port:
docker run -d \
-v ~/sharelatex_data:/var/lib/sharelatex \
--name=sharelatex \
sharelatex/sharelatex
You prepare an nginx conf file for your sharelatex server, placed in $HOME/nginx/conf, that will look like this:
upstream sharelatex {
    # this will refer to the name you pass as link to the nginx container
    server sharelatex;
}

server {
    listen 80;
    server_name tools.sebastienreycoyrehourcq.fr;

    location ^~ / {
        proxy_pass http://sharelatex/;
    }
}
You then start your nginx Docker container with the appropriate volume and container links:
docker run -d --link sharelatex:sharelatex --name NginxMain -v $HOME/nginx/conf:/etc/nginx/sites-available -p 80:80 kekev76/nginx
P.S.: this has been done with our own kekev76/nginx image, which is public on GitHub and Docker, but you can adapt the principle to the official nginx image.
Running nginx-proxy (https://github.com/jwilder/nginx-proxy) and then running sharelatex with VIRTUAL_HOST set to tools.sebastienreycoyrehourcq.fr should be enough to get this working.
For example:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
followed by
docker run -d \
-e VIRTUAL_HOST=tools.sebastienreycoyrehourcq.fr \
-v ~/sharelatex_data:/var/lib/sharelatex \
-p 5000:80 \
--name=sharelatex \
sharelatex/sharelatex
The subdomain tools.sebastienreycoyrehourcq.fr is not configured properly. It does not resolve to any IP address, which is the reason it does not work.
After you configure your subdomain, you can run the sharelatex container on port 80 with this command:
docker run -d \
-v ~/sharelatex_data:/var/lib/sharelatex \
-p 80:80 \
--name=sharelatex \
sharelatex/sharelatex
This way you can access the app at http://tools.sebastienreycoyrehourcq.fr
