HTTPS server in Docker container - linux

I have a problem deploying an HTTPS server in Docker: accessing the content fails with an SSL error. To test whether SSL works inside a Docker container at all, I ran an experiment: listen on a port (TLS) and, when a connection comes in, send back the contents of a file.
My Dockerfile is like:
FROM ruanhao/centos-dev
EXPOSE 8443
COPY banner .
COPY server.crt.pem .
COPY server.key.pem .
CMD socat -U openssl-listen:8443,reuseaddr,cert=server.crt.pem,key=server.key.pem,verify=0,fork open:banner
I run the container as docker run -d -p 8443:8443 --name tls -it ruanhao/socat-tls
Then I used curl to get the content. curl -k -v -L https://192.168.99.100:8443, but it failed:
* Rebuilt URL to: https://192.168.99.100:8443/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8443 (#0)
* Unknown SSL protocol error in connection to 192.168.99.100:-9850
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to 192.168.99.100:-9850
I don't know why this happens. Is there something about using TLS in Docker that I'm missing? Do you know how to fix it? Thank you.
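One way to narrow this down (a debugging sketch, not from the original question: openssl's own test server stands in for socat, and the /tmp paths are placeholders) is to reproduce the listener on the host and probe the handshake with openssl s_client. If the handshake works here but not against the container, the problem is in the socat/cert setup rather than in Docker's port mapping:

```shell
# Generate a throwaway self-signed certificate (hypothetical /tmp paths):
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=localhost" \
  -keyout /tmp/server.key.pem -out /tmp/server.crt.pem 2>/dev/null

# Stand in for the socat listener with openssl's test server
# (-quiet also implies -ign_eof, so it keeps serving after stdin ends):
echo "banner contents" > /tmp/banner
openssl s_server -quiet -accept 8443 \
  -cert /tmp/server.crt.pem -key /tmp/server.key.pem < /tmp/banner &
SERVER_PID=$!
sleep 1

# Probe the handshake; a healthy listener prints the certificate subject
# instead of curl's bare "Unknown SSL protocol error":
echo | openssl s_client -connect 127.0.0.1:8443 2>/dev/null | grep subject

kill $SERVER_PID
```

If the same s_client probe against the container's 8443 also shows a subject line, the TLS side is fine and the problem lies elsewhere; if it fails only against the container, suspect the socat invocation or the location/permissions of the certificate files inside the image.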

Dockerfile
ADD ./apache.conf /etc/apache2/sites-enabled/000-default.conf
ADD ./ssl/ /ssl/
RUN a2enmod ssl
CMD service apache2 start
The ssl folder contains the server.crt & server.key files.
apache.conf
<VirtualHost _default_:443>
  DocumentRoot "/var/www/"
  SSLEngine on
  SSLCertificateFile /ssl/server.crt
  SSLCertificateKeyFile /ssl/server.key
</VirtualHost>
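A side note on this Dockerfile (an assumption on my part, since no error output is shown): `CMD service apache2 start` backgrounds the daemon, so the container's main process exits as soon as the service script returns and the container stops. Running apache in the foreground keeps the container alive:

```dockerfile
# Replace `CMD service apache2 start` with a foreground process (PID 1):
CMD ["apachectl", "-D", "FOREGROUND"]
```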

Related

What's the simplest way to deploy a node app on linux mint?

I have a node app, it runs great in VS Code, and I can launch it using the script command from the terminal window.
It's just a utility I'd like running in the background on my dev machine. It watches for keystrokes and sets my busy / idle indicator to other apps. It uses iohook if you're curious, to do this.
How can I deploy it to just be running in the background (all the time, including at startup)? I'm going to deploy it as a web server so I don't have to mess with linux services.
I already have apache and nginx and all that other web server stuff installed from the numerous tutorials I've done, but I don't know how to deploy to any of them.
I tried this:
https://plainenglish.io/blog/deploying-a-localhost-server-with-node-js-and-express-js
but that only enables launching from VS Code or the command line; it's not a real "server" that runs all the time, needs no terminal window, and starts at system startup.
I need this running from apache or nginx or something like that.
https://www.ionos.com/digitalguide/websites/web-development/nodejs-for-a-website-with-apache-on-ubuntu/
I tried:
To access the Node.js script from the web, install the Apache modules proxy and proxy_http with the commands:
sudo a2enmod proxy
sudo a2enmod proxy_http
Once the installation is complete, restart Apache for the changes to take effect:
sudo service apache2 restart
[Linux Mint] Running a web app on local machine
First website
[Terminal] Install node.js: sudo apt-get install nodejs
[Terminal] Install apache: sudo apt-get install apache2
[Terminal] Install PM: sudo npm install -g pm2
[Terminal] start apache: sudo systemctl start apache2
[Browser] test apache: localhost
[Terminal] Go to your web root. It can be wherever you like: cd /home/homer-simpson/websites
[Nemo][root] Create app folder: /home/homer-simpson/websites/hello-app
[Nemo][root] Create node.js hello world file: /home/homer-simpson/websites/hello-app/hello.js
[Terminal] Make hello.js executable: sudo chmod 755 hello.js
[xed][root] Create node.js app in this file: See hello.js listing
[Terminal] Run from terminal as test: node hello.js
[Browser] Test web site: http://localhost:4567/
Shut down node app CTRL+C
Start up app in PM: sudo pm2 start hello.js
[Terminal][root] Add PM to startup scripts: sudo pm2 startup systemd
[Terminal][root] Save PM apps: pm2 save
[Terminal] enable apache modules: sudo a2enmod proxy && sudo a2enmod proxy_http && sudo service apache2 restart
[Nemo][root] Open as root apache config: /etc/apache2/sites-available/000-default.conf
[xed][root] add / replace config for your website: See 000-default.conf listing
[Terminal] restart apache: sudo systemctl restart apache2
[Browser] test the website: http://localhost/hello-app
Hello World!
Subsequent websites:
[Nemo][root] Create new app folder under your website root: /home/homer-simpson/websites/another-app
[Nemo][root] Copy scripts into there and make executable
[Terminal] Start up app in PM: sudo pm2 start another-app.js
[Terminal][root] Save PM config: pm2 save
[Terminal][root] Add new website to apache config under a new Location tag with the port number of the new app (must be unique): sudo xed /etc/apache2/sites-available/000-default.conf
[Terminal] restart apache: sudo systemctl restart apache2
View it over the LAN:
[Firewall] Set to "Home" profile. Incoming deny, outgoing allow, enabled
[Firewall] Add a rule, simple tab, port 80, name "Apache"
[Terminal] Get your hostname: hostname
[Terminal][root] Change your machine name to something cool: hostname tazerface
[xed][root] Change the host name in /etc/hosts
[xed][root] Change the host name in /etc/hostname
Reboot tazerface. <= that's your machine name now. Omg that's such a cool name.
Make sure pm2 started automatically and has your apps listed as "online": pm2 list
[phone][browser] Test your website: http://tazerface/hello-app
If it doesn't work, make sure tazerface isn't using a wifi provided by a network repeater. It needs to be on the same wifi network as the phone (but can be on either the 5GHz or 2.4GHz variant)
Add free ssl certificate:
[Terminal] Add ssl module: sudo a2enmod ssl
[Firewall] Add a rule, simple tab, port 443, name "Apache ssl"
[Terminal] create self signed free certificate: sudo openssl req -x509 -nodes -days 999999 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt
[Terminal] Leave all answers default (by hitting enter) except common name, enter tazerface
[Terminal] Test certificate (a self-signed cert must be checked against itself): openssl verify -CAfile /etc/ssl/certs/apache-selfsigned.crt /etc/ssl/certs/apache-selfsigned.crt
[Terminal] Replace contents on .conf with ssl .conf listing below: sudo xed /etc/apache2/sites-available/000-default.conf
[Terminal] Restart Apache: sudo systemctl restart apache2
[phone][browser] Test your website: https://tazerface/hello-app
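The certificate step above can also be done non-interactively (a sketch: the /tmp paths let it run without root; swap in the /etc/ssl paths from the list for real use):

```shell
# Answer the prompts up front with -subj instead of interactively;
# only the common name (tazerface) matters here:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=tazerface" \
  -keyout /tmp/apache-selfsigned.key -out /tmp/apache-selfsigned.crt 2>/dev/null

# Confirm the common name made it into the certificate:
openssl x509 -in /tmp/apache-selfsigned.crt -noout -subject
```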
hello.js
var http = require('http');
const port = 4567;

// create a server object:
http.createServer(function (req, res) {
  res.write('Hello World!'); // write a response to the client
  res.end();                 // end the response
}).listen(port);             // the server object listens on port 4567

// Console will print the message
console.log(`Server running at ${port}`);
000-default.conf (no ssl)
<VirtualHost *:80>
  ServerName example.com
  <Directory /var/www/>
    Options -Indexes +FollowSymLinks
    AllowOverride None
    Require all granted
  </Directory>
  ProxyRequests Off
  ProxyPreserveHost On
  ProxyVia Full
  <Proxy *>
    Require all granted
  </Proxy>
  <Location /hello-app>
    ProxyPass http://127.0.0.1:4567
    ProxyPassReverse http://127.0.0.1:4567
  </Location>
</VirtualHost>
000-default.conf (ssl)
# The only thing in the firewall that needs to be open is 80/443
<VirtualHost *:80>
  Redirect / https://tazerface/
</VirtualHost>

<VirtualHost *:443>
  ServerName tazerface
  <Directory /var/www/>
    Options -Indexes +FollowSymLinks
    AllowOverride None
    Require all granted
  </Directory>
  SSLEngine on
  SSLCertificateFile /etc/ssl/certs/apache-selfsigned.crt
  SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key
  ProxyRequests Off
  ProxyPreserveHost On
  ProxyVia Full
  <Proxy *>
    Require all granted
  </Proxy>
  <Location /hello-app>
    ProxyPass http://127.0.0.1:4567
    ProxyPassReverse http://127.0.0.1:4567
  </Location>
</VirtualHost>

Issues getting Apache VirtualHost to properly Map

I'm setting up a docker container on a linux server and I'm trying to set up a VirtualHost so that when I visit the domain I own it will show that website.
I have a DNS record on my domain to use the IP address of my linux server, and I installed apache on there to test and it worked properly.
If I start my container with
docker run -dit --name web-app -p 8080:80 web-image
I can go to mydomain.com:8080 and see my website, but it doesn't work if I just navigate to mydomain.com.
My VirtualHost stanza in httpd.conf is
<VirtualHost *:80>
  ServerAdmin admin@mydomain.com
  ServerName mydomain.com
  ServerAlias mydomain.com
  DocumentRoot /usr/local/apache2/htdocs
</VirtualHost>
The only thing I can think is that I need to update my domain DNS definition to accept the Docker Container IP address?
Is there something I'm missing?
It's quite obvious that the website is available on port 8080 but not on port 80, since -p 8080:80 maps host port 8080 to container port 80. You need to publish host port 80 instead:
docker run -dit --name web-app -p 80:80 web-image

How to use Let's Encrypt with Docker container based on the Node.js image

I am running an Express-based website in a Docker container based on the Node.js image. How do I use Let's Encrypt with a container based on that image?
The first thing I've done is to create a simple express-based docker image.
I am using the following app.js, taken from express's hello world example in their docs:
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello World!');
});

app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
});
I also ended up with the following package.json file after running npm init as described in the same doc:
{
  "name": "exampleexpress",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.14.0"
  }
}
I've created the following Dockerfile:
FROM node:onbuild
EXPOSE 3000
CMD node app.js
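As an aside, the node:onbuild image variants have since been deprecated; an explicit Dockerfile doing the same thing (a sketch, assuming the same app layout) looks like:

```dockerfile
FROM node:18-alpine
WORKDIR /usr/src/app
# copy the manifest and install dependencies first so this layer caches
COPY package.json .
RUN npm install
# then copy the rest of the application
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```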
Here's the output of my docker build step. I've removed most of the npm install output for brevity's sake:
$ docker build -t exampleexpress .
Sending build context to Docker daemon 1.262 MB
Step 1 : FROM node:onbuild
# Executing 3 build triggers...
Step 1 : COPY package.json /usr/src/app/
Step 1 : RUN npm install
---> Running in 981ca7cb7256
npm info it worked if it ends with ok
<snip>
npm info ok
Step 1 : COPY . /usr/src/app
---> cf82ea76e369
Removing intermediate container ccd3f79f8de3
Removing intermediate container 391d27f33348
Removing intermediate container 1c4feaccd08e
Step 2 : EXPOSE 3000
---> Running in 408ac1c8bbd8
---> c65c7e1bdb94
Removing intermediate container 408ac1c8bbd8
Step 3 : CMD node app.js
---> Running in f882a3a126b0
---> 5f0f03885df0
Removing intermediate container f882a3a126b0
Successfully built 5f0f03885df0
Running this image works like this:
$ docker run -d --name helloworld -p 3000:3000 exampleexpress
$ curl 127.0.0.1:3000
Hello World!
We can clean this up by doing: docker rm -f helloworld
Now I've got my very basic express-based website running in a Docker container, but it doesn't yet have any TLS set up. Looking again at the expressjs docs, the security best practice is to handle TLS with nginx in front of the app.
Since I want to introduce a new component (nginx), I'll do that with a second container.
Since nginx will need some certificates to work with, let's go ahead and generate those with the letsencrypt client. The letsencrypt docs on how to use letsencrypt in Docker can be found here: http://letsencrypt.readthedocs.io/en/latest/using.html#running-with-docker
Run the following commands to generate the initial certificates. You will need to run this on a system that is connected to the public internet, and has port 80/443 reachable from the letsencrypt servers. You'll also need to have your DNS name set up and pointing to the box that you run this on:
export LETSENCRYPT_EMAIL=<youremailaddress>
export DNSNAME=www.example.com
docker run --rm \
-p 443:443 -p 80:80 --name letsencrypt \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
quay.io/letsencrypt/letsencrypt:latest \
certonly -n -m $LETSENCRYPT_EMAIL -d $DNSNAME --standalone --agree-tos
Make sure to replace the values for LETSENCRYPT_EMAIL and DNSNAME. The email address is used for expiration notifications.
Now, let's set up an nginx server that will make use of this newly generated certificate. First, we'll need an nginx config file that is configured for TLS:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /dev/stdout main;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        #add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        server_name www.example.com;

        ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

        location ^~ /.well-known/ {
            root /usr/share/nginx/html;
            allow all;
        }

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://expresshelloworld:3000;
        }
    }
}
We can put this config file into our own custom nginx image with the following Dockerfile:
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
This can be built with the following command: docker build -t expressnginx .
Next, we'll create a custom network so we can take advantage of Docker's service discovery feature:
docker network create -d bridge expressnet
Now, we can fire up the helloworld and nginx containers:
docker run -d \
--name expresshelloworld --net expressnet exampleexpress
docker run -d -p 80:80 -p 443:443 \
--name expressnginx --net expressnet \
-v /etc/letsencrypt:/etc/letsencrypt \
-v /usr/share/nginx/html:/usr/share/nginx/html \
expressnginx
Double check that nginx came up properly by taking a look at the output of docker logs expressnginx.
The nginx config file should redirect any requests on port 80 over to port 443. We can test that by running the following:
curl -v http://www.example.com/
We should also, at this point, be able to make a successful TLS connection, and see our Hello World! response back:
curl -v https://www.example.com/
Now, to set up the renewal process: the nginx.conf above has provisions for the letsencrypt .well-known path used by the webroot verification method. Running the following command handles renewal; normally you'll run it from cron of some sort so that your certs are renewed before they expire:
export LETSENCRYPT_EMAIL=me@example.com
export DNSNAME=www.example.com
docker run --rm --name letsencrypt \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
-v "/usr/share/nginx/html:/usr/share/nginx/html" \
quay.io/letsencrypt/letsencrypt:latest \
certonly -n --webroot -w /usr/share/nginx/html -d $DNSNAME --agree-tos
There are many ways to achieve this depending on your setup. One popular way is to set up nginx in front of your Docker container and handle the certificates entirely within your nginx config.
The nginx config can contain a list of 'upstreams' (your Docker containers) and 'servers', which essentially map requests to particular upstreams. As part of that mapping, you can also handle SSL.
You can use certbot to help you set this up.
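As a sketch of that upstream/server mapping (the names, ports, and certificate paths here are placeholders, not from the question):

```nginx
upstream node_app {
    server 127.0.0.1:3000;   # the port your Docker container publishes
}

server {
    listen 443 ssl;
    server_name www.example.com;

    ssl_certificate     /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://node_app;
    }
}
```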
I've recently implemented https with let's encrypt using nginx. I'm listing the challenges I've faced, and the way I've implemented step-by-step here.
Challenge:
Docker's file system is ephemeral: certificates stored or generated inside the container vanish every time you rebuild, so generating certificates inside the container is tricky.
Steps to overcome it:
The guide below is independent of the kind of app you have, as it only involves nginx and Docker.
First, install nginx on your server (not in a container, but directly on the server). You can follow this guide to generate a certificate for your domain using certbot.
Now stop this nginx server and start the build of your app. Install nginx in your container and open ports 80 and 443 on your Docker container. (If you're on AWS, open them on the EC2 instance as well, since by default AWS opens only port 80.)
Next, run your container and mount the volumes that contain the certificate files directly into the container. I've answered a question here on how to do the same.
This will enable https on your app. In case you don't see the change and you're using Chrome, try clearing Chrome's DNS cache.
Auto renewal process :
Let's Encrypt certificates are valid for only 3 months. The guide above also covers configuring auto-renewal, but you have to stop and restart your container at least every 3 months to make sure the certificates mounted into your Docker container are up to date. (You will have to restart the nginx server set up in the first step to make the renewal happen smoothly.)
You may have a look here : https://certbot.eff.org/docs/using.html?highlight=docker#running-with-docker
Then what I personally do is :
Create a Docker volume to store the certs and generate the certs with the above image
Create a Docker user-defined network (https://docs.docker.com/engine/userguide/networking/#/user-defined-networks)
Create an image based on nginx with your configuration (maybe this will be useful)
Create a Nginx container based on your image, mount the volume in it and connect it to the network (also forward port 80 and 443 to whatever you want)
I would create a container for your node.js app and connect it to the same network
Now if you configured nginx correctly (point to the right path for the TLS certs and proxy to the right URL, like http://my-app:3210) you should have access to your app in https.
Front end: nginx, which listens on port 443 and proxies to the back end.
Back end: your Docker container.

ssl certificate not working on apache linux ec2 instance

I am enabling an SSL certificate on my Apache Linux EC2 instance, but when I add the following lines
NameVirtualHost *:443
<VirtualHost *:443>
  ServerName www.example.com
  # other configurations
  SSLEngine on
  SSLCertificateFile /etc/httpd/conf/ssl.crt/mydomain.crt
  SSLCertificateKeyFile /etc/httpd/conf/ssl.key/mydomain.key
</VirtualHost>
apache restart fails. But when I change the port in the lines above to 80, apache starts working, even though I have enabled port 443 in the EC2 admin panel. I don't know what the issue is.
I got four certificates from the Comodo SSL organisation. Of them, I have used only mydomain.crt; the others are intermediate certificates. Do I need to use them as well?
Ensure you have the Apache SSL module installed. You can check whether it's installed by running:
apachectl -t -D DUMP_MODULES | grep ssl
If it's not listed, try this (assuming a standard Amazon Linux AMI):
sudo yum install -y mod_ssl
or if you're using apache 2.4
sudo yum install -y mod24_ssl
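As for the intermediate certificates: yes, clients need the full chain, or browsers will reject the certificate even once apache starts. On Apache before 2.4.8 that means a separate chain directive (the bundle filename below is a placeholder for whatever Comodo shipped):

```apache
# Inside the same <VirtualHost *:443> block:
SSLCertificateChainFile /etc/httpd/conf/ssl.crt/mydomain.ca-bundle
```

On Apache 2.4.8 and later, SSLCertificateChainFile is deprecated; append the intermediate certificates to the file given to SSLCertificateFile instead.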

How to change port GitLab on CentOS 6?

I tried to change the port number in these files, but after running gitlab-ctl reconfigure to update, I can't access my link (http://myaddress.example:8790). The files I changed are:
/var/opt/gitlab/gitlab-rails/etc/gitlab.yml
/opt/gitlab/embedded/conf/nginx.conf
/opt/gitlab/embedded/cookbooks/gitlab/attributes/default.rb
/opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml
/var/opt/gitlab/nginx/conf/gitlab-http.conf
In /etc/gitlab/gitlab.rb I change external_url "http://myaddress.example:8790"
How can I fix it?
Using GIT_LAB_PATH as your GitLab base directory (the root where it is installed):
Modify GIT_LAB_PATH/gitlab/gitlab.yml
## GitLab settings
gitlab:
  ## Web server settings
  host: HOSTNAME
  port: HTTP_PORT
  https: false
If you use a non-standard ssh port, you need to specify it:
  ssh_port: SSH_PORT
Modify GIT_LAB_PATH/gitlab-shell/config.yml to use the new HTTP port:
# Url to gitlab instance. Used for api calls. Should end with a slash.
gitlab_url: "http://HOSTNAME:HTTP_PORT/"
Modify /etc/nginx/sites-available/gitlab
listen *:HTTP_PORT default_server; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
Modify /etc/ssh/sshd_config
# What ports, IPs and protocols we listen for
Port SSH_PORT
Restart SSH, restart GitLab, restart nginx:
sudo service ssh restart
sudo service gitlab restart
sudo service nginx restart
Test GitLab:
sudo -u git -H bundle exec rake gitlab:env:info RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
Update the remote origins for your repos to use the new SSH port:
ssh://git@HOSTNAME:SSH_PORT/repo_name
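Note that the steps above target a source install. For an Omnibus install (which having gitlab-ctl suggests), the generated files under /var/opt/gitlab and /opt/gitlab are rewritten on every reconfigure, so edits there are lost; /etc/gitlab/gitlab.rb should be the only file you change. A sketch (the sed is just one way to edit the file, and a /tmp copy stands in for the real one so this can be tried without root):

```shell
# GITLAB_RB would be /etc/gitlab/gitlab.rb on a real Omnibus box.
GITLAB_RB=/tmp/gitlab.rb
printf 'external_url "http://myaddress.example"\n' > "$GITLAB_RB"

# Point external_url at the new port:
sed -i 's|^external_url .*|external_url "http://myaddress.example:8790"|' "$GITLAB_RB"
cat "$GITLAB_RB"

# On the real file, regenerate everything from it and restart:
#   sudo gitlab-ctl reconfigure
#   sudo gitlab-ctl restart
```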
Dear all, I was missing an open port in the firewall. I typed this in the Linux terminal:
/etc/init.d/iptables stop
It worked.
