proxy_pass https in a Docker Container - node.js

I am trying to add SSL to my Docker setup. Everything is set up except for my Node.js app.
This is my nginx configuration:
map $http_user_agent $is_desktop {
default 0;
~*linux.*android|windows\s+(?:ce|phone) 0; # exceptions to the rule
~*spider|crawl|slurp|bot 1; # bots
~*windows|linux|os\s+x\s*[\d\._]+|solaris|bsd 1; # OSes
}
map $is_desktop $is_mobile {
1 0;
0 1;
}
server {
listen 443 ssl;
server_name example.com;
charset utf-8;
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
}
access_log /usr/logs/nginx/lion/lion.$year-$month-$day.log;
ssl_certificate /etc/nginx/ssl/example.com/b2d6debd2142c643.crt;
ssl_certificate_key /etc/nginx/ssl/example.com/example.key;
location / {
index index.html;
if ($is_mobile) {
root /usr/src/lion;
}
if ($is_desktop) {
proxy_pass http://martinez:3000;
}
error_page 404 =301 /;
}
location /dist {
alias /usr/src/martinez/dist;
access_log off;
}
location /img {
if ($is_desktop) {
root /usr/src/martinez/dist;
access_log off;
}
}
error_page 405 =200 $uri;
}
server {
listen 443;
server_name www.example.com;
return 301 $scheme://$host$request_uri;
}
When I access the site, the address bar shows "Not secure", probably because the proxy_pass is not using https. When I change it to https I get a 502 Bad Gateway, which I can't resolve. I have already tried exposing and opening the ports in my docker-compose.yml.
docker-compose.yml
martinez:
  restart: always
  build: ./martinez/
  expose:
    - "3000"
    - "443"
  ports:
    - "3000:443"
  tty: true
  env_file: .env
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
    - "443:443"
    - "3000:3000"
  links:
    - martinez:martinez
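For reference, the usual pattern in this situation is to terminate TLS in nginx only and keep the hop to the Node container on plain HTTP inside the Docker network; proxy_pass https://martinez:3000 can only work if the Node app itself serves TLS on that port, which is why switching the scheme alone tends to produce a 502. The browser's "not secure" warning is typically caused by the certificate or by mixed (http) content in the page, not by the scheme of the internal proxy_pass. A minimal sketch, reusing the names from the question (not a tested drop-in config):

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com/b2d6debd2142c643.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com/example.key;

    location / {
        # plain HTTP to the container; TLS ends at nginx
        proxy_pass http://martinez:3000;
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        # lets the Node app know the original request used https
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}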

Related

Nginx direct into nothing

I try to connect to https://localhost:4001 and get nothing returned. I don't know where my error is. In my console I get the following log:
chatcloud-nginx-1 | xxx_nginx to: - [\x16\x03\x01\x02\x00\x01\x00\x01\xFC\x03\x03\xCD\x90\xB1\x0F$\xB8w\x8EtM4\xC9q\xE8\xFD\xFE\xE7\x17\x93\xA8a\xD2GR\xE0\xC8gcJ\xD62\xD0 \xAA\xCF\x1A\xCB\xE5\xFDj\xFA\x19Po\xBF\xE2'\xAC\xDF\xC1\xE2\xCE\x17|P\xA9\x96\xCF4\xD4\x84\xD3k\x83\xDA\x00 \x8A\x8A\x13\x01\x13\x02\x13\x03\xC0+\xC0/\xC0,\xC00\xCC\xA9\xCC\xA8\xC0\x13\xC0\x14\x00\x9C\x00\x9D\x00/\x005\x01\x00\x01\x93\x9A\x9A\x00\x00\x00\x00\x00\x0E\x00\x0C\x00\x00\x09localhost\x00\x17\x00\x00\xFF\x01\x00\x01\x00\x00] msec 1652730101.543 request_time 0.001processed by 172.23.0.1172.23.0.1sendfileon
It seems like my nginx is working, but I don't get my application running.
DockerfileNginx:
FROM nginx
RUN rm /etc/nginx/conf.d/*
COPY ./nginx /etc/nginx
COPY /*.html /etc/nginx/html/
COPY /login_register/*.html /etc/nginx/html/
COPY ./cert /etc/nginx/html/
COPY /public/style.css /etc/nginx/html/
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
default_type application/octet-stream;
log_format upstreamlog '$server_name to: $upstream_addr [$request] '
'msec $msec request_time $request_time'
'processed by $proxy_add_x_forwarded_for'
'$remote_addr'
sendfile on;
keepalive_timeout 65;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
upstream nginxServer {
hash $remote_addr consistent;
server chatclient1:4000;
server chatclient2:4000;
}
server {
listen 4001;
listen 443 ssl;
server_name ChatClient_nginx;
access_log /var/log/nginx/access.log upstreamlog;
error_log /var/log/nginx/error-ssl.log;
include /etc/nginx/mime.types;
keepalive_timeout 70;
ssl_certificate_key /etc/nginx/cert/key.pem;
ssl_certificate /etc/nginx/cert/cert.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass https://nginxServer;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_cache_bypass $http_upgrade;
}
location = /status {
access_log off;
default_type text/plain;
add_header Content-Type text/plain;
return 200 "alive";
}
location = / {
proxy_pass https://nginxServer/;
}
location = /register {
proxy_pass https://nginxServer/register;
}
location = /chatroom {
proxy_pass https://nginxServer/chatroom;
}
}
}
docker-compose.yml :
version: '3.2'

services:
  userdb:
    image: mongo:latest
    container_name: mongodb
    volumes:
      - userdb:/data/db
    networks:
      - chatnetworks
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus-data:/prometheus
    command: --web.enable-lifecycle --config.file=/etc/prometheus/prometheus.yml
    networks:
      - chatnetworks
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: always
    volumes:
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana/provisioning:/etc/grafana/provisioning
      - ./grafana/data:/var/lib/grafana
    user: "1000"
    ports:
      - 3000:3000
    networks:
      - chatnetworks
  chatclient1:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000'
    restart: 'on-failure'
    networks:
      - chatnetworks
    labels:
      - "traefik.http.routers.chatapp.rule=PathPrefix(`/`)"
      - traefik.http.services.chatapp.loadBalancer.sticky.cookie.name=server_id
      - traefik.http.services.chatapp.loadBalancer.sticky.cookie.httpOnly=true
  chatclient2:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000'
    restart: 'on-failure'
    networks:
      - chatnetworks
    labels:
      - "traefik.http.routers.chatapp.rule=PathPrefix(`/`)"
      - traefik.http.services.chatapp.loadBalancer.sticky.cookie.name=server_id
      - traefik.http.services.chatapp.loadBalancer.sticky.cookie.httpOnly=true
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    build:
      context: .
      dockerfile: ./DockerfileNginx
    volumes_from:
      - chatclient1
      - chatclient2
    ports:
      - "4001:4001"
    networks:
      - chatnetworks
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    networks:
      - chatnetworks
    labels:
      - traefik.enable=false

volumes:
  userdb:
  prometheus-data:

networks:
  chatnetworks:
The nginx image works, because I get redirected to an empty page, and in my console log I see the request log.
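A likely explanation for the empty response: the request body logged above (\x16\x03\x01...) is a raw TLS ClientHello, which usually shows up when the browser speaks https to a listener that has no ssl parameter. In the config above, port 4001 is declared as listen 4001; without ssl, and the compose file only publishes 4001, so https://localhost:4001 arrives as raw TLS bytes on a plain-HTTP port. A minimal sketch of the change, reusing the certificate already configured for port 443:

server {
    listen 4001 ssl;  # add "ssl" so the 4001 listener terminates TLS too
    listen 443 ssl;
    # ... certificates and locations as in the config above ...
}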

Not found error with two Node projects served by NGINX with Docker

I'm learning Docker and my goal is to serve two Node.js projects with the same docker-compose using NGINX. My two projects (A and B) are simple hello-world apps:
'use strict';
const express = require('express');
// Constants
const PORT = 8301;
const HOST = '0.0.0.0';
const PATH = '/project-a';
// App
const app = express();
app.get(PATH, (req, res) => {
res.send('<h1>Hello World</h1><p>Project A</p>');
});
app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}${PATH}`);
Above is project A; B has the same code, except the port is 8302 and the path is /project-b. Below is the docker-compose file:
version: '3.7'
services:
  website_a:
    image: project_a/node
    build:
      context: ./projects_a
      dockerfile: Dockerfile
    container_name: project_a
    restart: always
    command: sh -c "node server.js"
    expose:
      - 8301
  website_b:
    image: project_b/node
    build:
      context: ./projects_b
      dockerfile: Dockerfile
    container_name: project_b
    restart: always
    command: sh -c "node server.js"
    expose:
      - 8302
  nginx:
    image: node-project-multisite/nginx
    build: nginx
    container_name: multisite_project_nginx
    ports:
      - 80:80
    depends_on:
      - website_a
      - website_b
And the nginx config:
server {
listen 80;
listen [::]:80;
server_name 127.0.0.1;
# Logging
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location /project-a {
proxy_pass http://website_a:8301/project-a;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
# max uploadable file size
client_max_body_size 4G;
}
}
server {
listen 80;
listen [::]:80;
server_name 127.0.0.1;
# Logging
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location /project-b {
proxy_pass http://website_b:8302/project-b;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
# max uploadable file size
client_max_body_size 4G;
}
}
Using docker-compose without NGINX I can see both hello worlds, but with NGINX only A works; for B I only get a Not Found error.
Where is my mistake?
You should use only one "server" block, since both share the same server_name and listen port; nginx sends every request to the first matching server block, so the second one (the /project-b one) is never used:
server {
listen 80;
listen [::]:80;
server_name 127.0.0.1;
# Logging
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location /project-a {
proxy_pass http://website_a:8301/project-a;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
# max uploadable file size
client_max_body_size 4G;
}
location /project-b {
proxy_pass http://website_b:8302/project-b;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
# max uploadable file size
client_max_body_size 4G;
}
}

Could not find named location "@app" containerised node application

I have a containerised Node.js app on my server, and an nginx webserver in front of it so it can use https. Nginx is supposed to redirect to the node app, but I always get the error in the title and I have no clue why. My node app is showing as restarting, which might be a problem, but again I don't know why it's restarting, as it gives me nothing in the logs.
My Dockerfile:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8080
CMD ["npm", "start"]
My docker compose file:
version: '3'
services:
  app:
    container_name: app
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - db
    networks:
      - app-network
  db:
    container_name: db
    image: mongo
    ports:
      - '27017:27017'
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - dhparam:/etc/ssl/certs
    depends_on:
      - app
    links:
      - app
    networks:
      - app-network
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email email@gmail.com --agree-tos --no-eff-email --force-renewal -d domain.com -d www.domain.com

volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /home/root/app/views/
      o: bind
  dhparam:
    driver: local
    driver_opts:
      type: none
      device: /home/root/app/dhparam/
      o: bind

networks:
  app-network:
    driver: bridge
And my nginx conf file:
server {
listen 80;
listen [::]:80;
server_name domain.com www.domain.com;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
}
location / {
rewrite ^ https://$host$request_uri? permanent;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name domain.com www.domain.com;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/api.wasdstudios.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.wasdstudios.com/privkey.pem;
ssl_buffer_size 8k;
ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
ssl_ecdh_curve secp384r1;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8;
location / {
try_files $uri @app;
}
location @nodejs {
proxy_pass http://app:8080;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
# enable strict transport security only if you understand the implications
}
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
}
First part of my node file:
const express = require('express');
const app = express();
const path = require('path');
const db = require('mongoose');
// Import routes
const authRoute = require('./routes/auth');
const recoverRoute = require('./routes/recover');
const getUser = require('./routes/getUser');
// Connect to db
console.log(process.env.DB_CONNECT);
db.connect('mongodb://db:27017/app-mongo-database', { useNewUrlParser: true }, (err, client) =>
{
if(err){
console.log(err);
}
else{
console.log("connected to db");
}
});
// Middleware
app.use(express.json());
app.use('/static', express.static(path.join(__dirname, 'static')));
app.use('/auth/getUser', getUser);
// Route middlewares
app.use('/auth', authRoute);
app.use('/auth/recover', recoverRoute )
app.listen(8080, () => console.log('Server started'));
When I look at the app container's logs I get nothing useful, and when I go to my domain I get the error from the title.
Update:
I ran docker-compose up --build -d and the output shows npm start running correctly.
docker-compose ps now shows npm start as it should, but it still does not work, with the same error.
Solved it.
I had to add the app-network to my db service in the compose file:
version: '3'
services:
  app:
    container_name: app
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - db
    networks:
      - app-network
  db:
    container_name: db
    image: mongo
    ports:
      - '27017:27017'
    networks: # <<<<< here
      - app-network
and update my nginx.conf file to this; I had made an error, and my try_files fallback was called @app while the named location was @nodejs:
location / {
    try_files $uri @app; # <<<<< this should be @nodejs, not @app
}
location @nodejs { # <<<<< as long as it matches the name used in try_files
    proxy_pass http://app:8080;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
}
You can compare this with the code in my question, if needed, to see whether you ran into the same error as me.

How to set up nginx reverse proxy with multiple node apps

I have two Vue.js apps that I want to run on the same domain (e.g., https://localhost:8080/app1 and https://localhost:8080/app2). Both apps run in separate docker containers, and I have set up a third docker container running nginx with a reverse proxy in order to have SSL.
I am able to visit the apps at the wanted locations, but there are some resources missing (images, fonts etc). I realize that my nginx server looks for them at https://localhost:8080/my_resource, but I can't figure out how to forward these to the correct locations (i.e., https://localhost:8080/app1/my_resource, and similar for app2).
I've tried using the "try_files" directive in nginx, like so:
location / {
try_files $uri $uri/ http://app1:8080 http://app2:8080
}
but it does not work.
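For context on why that fails: try_files checks its arguments as files or URIs under the location's root, and only the last argument may be a fallback URI or named location; it cannot take an upstream URL like http://app1:8080. The usual way to express "serve the file if it exists, otherwise hand off to the app" is a named location, roughly like this sketch (names taken from the question):

location / {
    try_files $uri $uri/ @app1;
}
location @app1 {
    proxy_pass http://app1:8080;
}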
Here is my nginx config file
server {
listen 80;
listen [::]:80;
server_name localhost;
return 301 https://$server_name$request_uri;
}
# Change the default configuration to enable ssl
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /etc/nginx/certs/my_app.crt;
ssl_certificate_key /etc/nginx/certs/my_app.key;
server_name localhost;
server_tokens off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
location / {
if ($http_referer = "https://localhost:8080/app1/") {
proxy_pass http://app1:8080;
break;
}
if ($http_referer = "https://localhost:8080/app2/") {
proxy_pass http://app2:8080;
break;
}
}
location /app1/ {
proxy_pass http://app1:8080/;
}
location /app2/ {
proxy_pass http://app2:8080/;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
And this is my docker-compose
version: "3.6"
services:
app1:
image: "app1"
expose:
- "8080"
command: ["serve", "-s", "/app/app1/dist", "-l", "8080"]
app2:
image: "app2"
expose:
- "8080"
command: ["serve", "-s", "/app/app2/dist", "-l", "8080"]
nginx:
image: "nginx"
ports:
- "8080:443"
depends_on:
- "app1"
- "app2"
Thanks for any input :)
After a lot of trial and error, I found a solution. I do not think this is the optimal solution, but it's working. Here is my nginx configuration:
# Pass any http request to the https service
server {
listen 80;
listen [::]:80;
server_name localhost;
return 301 https://$server_name$request_uri;
}
# Configure the ssl service
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl_certificate /etc/nginx/certs/my_app.crt;
ssl_certificate_key /etc/nginx/certs/my_app.key;
server_name localhost;
server_tokens off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
location / {
proxy_intercept_errors on;
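# if app1 answers with a 404, re-dispatch the request to the @second location below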
error_page 404 = @second;
proxy_pass http://app1:80;
}
location @second {
proxy_pass http://app2:80;
}
location /app1/ {
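# strip the /app1 prefix before proxying, so the app receives the request at the root path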
rewrite ^/app1(.*) /$1 break;
proxy_pass http://app1:80;
}
location /app2/ {
rewrite ^/app2(.*) /$1 break;
proxy_pass http://app2:80;
}
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}

Webpack development server separate subdomain proxied by nginx

I'm currently stuck on a problem with the webpack-dev-server, which listens on the wrong domain with the wrong port. I've dockerized my Symfony application into 3 containers: node, php and nginx. On the node container the webpack-dev-server is running with the following (shortened) configuration:
output: {
filename: '[name].[hash].js',
chunkFilename: '[name].[chunkhash].js',
path: Path.resolve(__dirname, 'web/static'),
publicPath: '/static/'
},
devServer: {
contentBase: Path.join(__dirname, 'web'),
host: '0.0.0.0',
port: 8080,
headers: {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, PATCH, OPTIONS",
"Access-Control-Allow-Headers": "X-Requested-With, content-type, Authorization"
},
disableHostCheck: true,
open: false,
overlay: true,
compress: true
},
Nginx is configured to serve the PHP application on www.application.box (Docker port mapping 80 => 80).
The webpack-dev-server is reachable on static.application.box (port 80 proxied to 8089) and running on port 8080; port 8080 is also mapped to the host.
While all assets are correctly resolved from static.application.box/static/some-assets.css/js, the sockjs-node/info request as well as the websocket itself go to www.application.box:8080/sockjs-node/info?t= (which works, since the port is mapped to the node container).
I've tried several things, but without success. So how can I modify the webpack-dev-server/nginx configuration to get the JS and websocket on static.application.box/sockjs-node/info?t= ?
I ran into the same problem with webpack-dev-server a week ago, but it should be noted that I modified /etc/hosts to have separate project.local domains and that I used https.
Description:
In this case the webpack-dev-server ran on a docker container client:8080 and was proxied to client.project.local:80 via nginx.
Like you, I didn't find a way to configure webpack-dev-server to use my host and port, so I created another nginx proxy especially for that: :8080/sockjs-node. [1]
But then I had the problem that the dev-server tried to access https://client.project.local:8080/sockjs-node/info?t=1234567890,
which is one port too many for nginx, since client.project.local is already a proxy to client:8080. So in webpack.conf.js I added config.output.publicPath = '//client.project.local/' and ... voilà:
https://client.project.local/sockjs-node/info?t=1234567890.
works like a charm.
Configs
webpack.conf.js:
const fs = require('fs')
const sslCrt = fs.readFileSync('/path/to/ssl/ca.crt')
const sslKey = fs.readFileSync('/path/to/ssl/ca.key')
// ...
{
  // ...
  devServer: {
    hot: true, // <- responsible for all of this, but still don't wanna miss it ;)
    inline: true,
    compress: true,
    host: process.env.HOST, // set in Dockerfile for client container
    port: process.env.PORT, // set in Dockerfile for client container
    disableHostCheck: true, // when manipulating /etc/hosts
    headers: { 'Access-Control-Allow-Origin': '*' },
    https: {
      cert: sslCrt,
      key: sslKey
    },
    // ...
  },
  output: {
    publicPath: '//client.project.local/' // host from /etc/hosts (note // at beginning)
  },
}
nginx client config:
# http
server {
listen 80 default;
listen [::]:80 default ipv6only=on;
server_name www.client.project.local client.project.local www.project.local project.local;
# your other config like root, access_log, charset ..
location / {
proxy_pass https://client:8080/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
# https
server {
listen 443 ssl default;
listen [::]:443 ssl default ipv6only=on;
ssl_certificate project.local.crt;
ssl_certificate_key project.local.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl on;
server_name www.client.project.local client.project.local www.project.local project.local;
# your other config like root, access_log, charset ..
location / {
proxy_pass https://client:8080/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
# http/s websocket for webpack-dev-server
server {
listen 8080 default;
listen [::]:8080 default ipv6only=on;
ssl_certificate project.local.crt;
ssl_certificate_key project.local.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl on;
server_name www.client.project.local client.project.local www.project.local project.local;
# your other config like root, access_log, charset ..
location /sockjs-node/ {
proxy_pass https://client:8080/sockjs-node/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
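One detail worth noting about the proxy blocks above: $connection_upgrade is not a built-in nginx variable, so this config assumes a map in the http context (the complete nginx.conf in the answer further down includes exactly this), for example:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}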
Remember to expose port 8080 for the nginx container as well, for example in docker-compose.yml. I added a shortened version for the sake of completeness.
docker-compose.yml
version: "3"
networks:
project-net-ext:
project-net:
internal: true
driver: bridge
services:
client:
hostname: client
build: ./container/client
volumes:
- ./path/to/code:/code:ro # read-only
# write needed only for initial package download
ports:
- "8080:8080"
networks:
- project-net
# project-net-ext only needed for initial package download
nginx:
hostname: nginx
build: ./container/nginx
volumes:
- ./path/to/code:/code:ro # read-only
# write needed only for initial package download
ports:
- "80:80" # http
- "443:443" # https
- "8080:8080" # webpack-dev-server :8080/sockjs-node/info
links:
- client
networks:
- project-net # needed for nginx to connect to client container,
# even though you've linked them
- project-net-ext # nginx of course needs to be public
[1]: I don't know if it's considered dirty. It feels a bit like it is, but it works, and as the name suggests it's a dev server: once you run npm build for production, it's gone for good.
This can be fixed by setting devServer.sockPort: 'location', which makes the dev-server client connect back on the port the page was served from instead of the dev server's own port.
webpack.config.js:
devServer: {
  sockPort: 'location'
  // ...
}
Here's a complete nginx.conf that will allow you to proxy webpack-dev-server without requiring any changes other than sockPort:
nginx.conf:
events {}
http {
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8081;
# uncomment if you need ssl
# listen 4443 ssl;
# ssl_certificate cert.pem;
# ssl_certificate_key privkey.pem;
location / {
# webpack-dev-server port
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Host localhost;
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
}
