Could not find named location "@app" containerised node application - node.js

I have a containerised Node.js app on my server, behind an nginx webserver that terminates HTTPS and is supposed to proxy to the Node app, but I always get the error in the title and I have no clue why. My Node container is also showing as restarting, which might be part of the problem, but again I don't know why it's restarting, as the logs give me nothing.
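(As an aside, a restart loop can usually be diagnosed even when the compose logs look empty; a rough sketch of the commands, using the container name app from the compose file below:)

docker logs --tail=100 app                                           # last output before each restart
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' app   # exit code / error of the last run
docker-compose ps                                                    # current state of each service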
My Dockerfile:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8080
CMD ["npm", "start"]
My docker compose file:
version: '3'
services:
  app:
    container_name: app
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - db
    networks:
      - app-network
  db:
    container_name: db
    image: mongo
    ports:
      - '27017:27017'
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - dhparam:/etc/ssl/certs
    depends_on:
      - app
    links:
      - app
    networks:
      - app-network
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email email@gmail.com --agree-tos --no-eff-email --force-renewal -d domain.com -d www.domain.com
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /home/root/app/views/
      o: bind
  dhparam:
    driver: local
    driver_opts:
      type: none
      device: /home/root/app/dhparam/
      o: bind
networks:
  app-network:
    driver: bridge
And my nginx conf file:
server {
    listen 80;
    listen [::]:80;
    server_name domain.com www.domain.com;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name domain.com www.domain.com;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/api.wasdstudios.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.wasdstudios.com/privkey.pem;
    ssl_buffer_size 8k;
    ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_ecdh_curve secp384r1;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;

    location / {
        try_files $uri @app;
    }

    location @nodejs {
        proxy_pass http://app:8080;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
        # enable strict transport security only if you understand the implications
    }

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
}
First part of my node file:
const express = require('express');
const app = express();
const path = require('path');
const db = require('mongoose');

// Import routes
const authRoute = require('./routes/auth');
const recoverRoute = require('./routes/recover');
const getUser = require('./routes/getUser');

// Connect to db
console.log(process.env.DB_CONNECT);
db.connect('mongodb://db:27017/app-mongo-database', { useNewUrlParser: true }, (err, client) => {
    if (err) {
        console.log(err);
    } else {
        console.log("connected to db");
    }
});

// Middleware
app.use(express.json());
app.use('/static', express.static(path.join(__dirname, 'static')));
app.use('/auth/getUser', getUser);

// Route middlewares
app.use('/auth', authRoute);
app.use('/auth/recover', recoverRoute);

app.listen(8080, () => console.log('Server started'));
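(A small sketch, not necessarily the cause of the restarts: if the connection to db fails, exiting with a non-zero code at least makes the reason visible in the container logs instead of the app silently running without a database.)

db.connect('mongodb://db:27017/app-mongo-database', { useNewUrlParser: true }, (err) => {
    if (err) {
        console.error('Mongo connection failed:', err.message); // shows up in `docker logs app`
        return process.exit(1);                                  // fail loudly instead of running without a DB
    }
    console.log('connected to db');
});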
This is the log I get when I look at the app container's logs, and when I go to my domain I get the obvious error.
Update:
I run docker-compose up --build -d and this is the output (npm start is running correctly):
docker-compose ps now shows npm start as the running command, as it should, but it still does not work and I get the same error.

Solved it.
I had to add the app-network network to my db service:
version: '3'
services:
  app:
    container_name: app
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - db
    networks:
      - app-network
  db:
    container_name: db
    image: mongo
    ports:
      - '27017:27017'
    networks:        # <<<<< added this
      - app-network
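(With both services attached to app-network, the db hostname should resolve from inside the app container; a quick check, assuming the container names above and the busybox tools in the alpine image:)

docker network inspect app-network   # both 'app' and 'db' should appear under Containers
docker exec app nslookup db          # the service name should now resolve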
I also had to update my nginx.conf. I had made an error: the try_files fallback was called @app, while the named location was @nodejs. The two names have to match:
location / {
    try_files $uri @nodejs;    # <<<<< this must be 'nodejs', not 'app' ...
}

location @nodejs {             # <<<<< ... so that it matches this named location
    proxy_pass http://app:8080;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
}
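(After editing the config, nginx can be checked and reloaded in place rather than recreating the container; a sketch using the webserver container name from the compose file:)

docker exec webserver nginx -t          # validate the configuration
docker exec webserver nginx -s reload   # reload without downtime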
You can compare this with the code in my question, if needed, to see whether you've run into the same mistake.

Related

MEAN+Docker: Hide backend from outside the world

I'm new to the backend, and I'm trying to serve my application, built on the MEAN stack, using docker-compose.
My app: Docker + nginx, with an Angular frontend and a Node.js backend.
Problem: I want to hide my backend from the outside world.
My docker-compose.yml: (one network)
version: '3'
services:
  front: .....   # NO PORTS
  back: ....     # No PORTS
  nginx:
    .......
    ports:
      - "80:80"
      - "443:443"
    .......
  certbot: .....
networks:
  app-network:
    driver: bridge
My nginx.conf
server {
    listen 80;
    server_name example.com www.example.com;
    root /usr/share/nginx/html;

    location / {
        try_files $uri /index.html;
        rewrite ^ https://$host$request_uri? permanent;
    }

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name ....
    server_tokens off;
    ssl_ ....

    location /back {
        try_files $uri @back;
    }

    location @back {
        proxy_pass http://back:3001;
        rewrite ^/back/(.*) /$1 break;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
    }

    location / {....}

    location @front {
        proxy_pass http://front:80;
        add_header .....
    }
}
Angular service TS:
getItems() {
    return this.http.get('/back/items')
        .pipe(map(res => res));
}
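(For context on why this hides the backend: the browser only ever talks to nginx on ports 80/443; the location /back block rewrites /back/items to /items and proxies it to http://back:3001 over the internal Docker network, so back and front need no ports: mapping at all. A rough sketch of the back service, with the build path assumed for illustration only:)

back:
  build: ./back    # path assumed, not from the question
  expose:
    - "3001"       # visible to other containers on app-network, not to the host
  networks:
    - app-network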

Nginx direct into nothing

I try to connect to https://localhost:4001 and get nothing returned. I don't know where my error is. In my console I get the following log:
chatcloud-nginx-1 | xxx_nginx to: - [\x16\x03\x01\x02\x00\x01\x00\x01\xFC\x03\x03\xCD\x90\xB1\x0F$\xB8w\x8EtM4\xC9q\xE8\xFD\xFE\xE7\x17\x93\xA8a\xD2GR\xE0\xC8gcJ\xD62\xD0 \xAA\xCF\x1A\xCB\xE5\xFDj\xFA\x19Po\xBF\xE2'\xAC\xDF\xC1\xE2\xCE\x17|P\xA9\x96\xCF4\xD4\x84\xD3k\x83\xDA\x00 \x8A\x8A\x13\x01\x13\x02\x13\x03\xC0+\xC0/\xC0,\xC00\xCC\xA9\xCC\xA8\xC0\x13\xC0\x14\x00\x9C\x00\x9D\x00/\x005\x01\x00\x01\x93\x9A\x9A\x00\x00\x00\x00\x00\x0E\x00\x0C\x00\x00\x09localhost\x00\x17\x00\x00\xFF\x01\x00\x01\x00\x00] msec 1652730101.543 request_time 0.001processed by 172.23.0.1172.23.0.1sendfileon
It seems like my nginx is working, but I don't get my application running.
DockerfileNginx:
FROM nginx
RUN rm /etc/nginx/conf.d/*
COPY ./nginx /etc/nginx
COPY /*.html /etc/nginx/html/
COPY /login_register/*.html /etc/nginx/html/
COPY ./cert /etc/nginx/html/
COPY /public/style.css /etc/nginx/html/
nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    default_type application/octet-stream;

    log_format upstreamlog '$server_name to: $upstream_addr [$request] '
                           'msec $msec request_time $request_time'
                           'processed by $proxy_add_x_forwarded_for'
                           '$remote_addr'

    sendfile on;
    keepalive_timeout 65;

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    upstream nginxServer {
        hash $remote_addr consistent;
        server chatclient1:4000;
        server chatclient2:4000;
    }

    server {
        listen 4001;
        listen 443 ssl;
        server_name ChatClient_nginx;

        access_log /var/log/nginx/access.log upstreamlog;
        error_log /var/log/nginx/error-ssl.log;

        include /etc/nginx/mime.types;
        keepalive_timeout 70;

        ssl_certificate_key /etc/nginx/cert/key.pem;
        ssl_certificate /etc/nginx/cert/cert.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass https://nginxServer;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_cache_bypass $http_upgrade;
        }

        location = /status {
            access_log off;
            default_type text/plain;
            add_header Content-Type text/plain;
            return 200 "alive";
        }

        location = / {
            proxy_pass https://nginxServer/;
        }

        location = /register {
            proxy_pass https://nginxServer/register;
        }

        location = /chatroom {
            proxy_pass https://nginxServer/chatroom;
        }
    }
}
docker-compose.yml:
version: '3.2'
services:
  userdb:
    image: mongo:latest
    container_name: mongodb
    volumes:
      - userdb:/data/db
    networks:
      - chatnetworks
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus-data:/prometheus
    command: --web.enable-lifecycle --config.file=/etc/prometheus/prometheus.yml
    networks:
      - chatnetworks
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: always
    volumes:
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana/provisioning:/etc/grafana/provisioning
      - ./grafana/data:/var/lib/grafana
    user: "1000"
    ports:
      - 3000:3000
    networks:
      - chatnetworks
  chatclient1:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000'
    restart: 'on-failure'
    networks:
      - chatnetworks
    labels:
      - "traefik.http.routers.chatapp.rule=PathPrefix(`/`)"
      - traefik.http.services.chatapp.loadBalancer.sticky.cookie.name=server_id
      - traefik.http.services.chatapp.loadBalancer.sticky.cookie.httpOnly=true
  chatclient2:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '4000'
    restart: 'on-failure'
    networks:
      - chatnetworks
    labels:
      - "traefik.http.routers.chatapp.rule=PathPrefix(`/`)"
      - traefik.http.services.chatapp.loadBalancer.sticky.cookie.name=server_id
      - traefik.http.services.chatapp.loadBalancer.sticky.cookie.httpOnly=true
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    build:
      context: .
      dockerfile: ./DockerfileNginx
    volumes_from:
      - chatclient1
      - chatclient2
    ports:
      - "4001:4001"
    networks:
      - chatnetworks
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    networks:
      - chatnetworks
    labels:
      - traefik.enable=false
volumes:
  userdb:
  prometheus-data:
networks:
  chatnetworks:
The nginx image works, because I get redirected to an empty page, and in my console I get the request log.
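(A hedged observation on the log above: the bytes starting with \x16\x03\x01 look like a raw TLS ClientHello, which is what ends up in $request when an https:// client talks to a listener that has no ssl enabled, and port 4001 is declared as a plain listen 4001;. A minimal sketch of a change that would let https://localhost:4001 terminate TLS, reusing the certificate lines already present in the config:)

server {
    listen 4001 ssl;          # terminate TLS on 4001 as well
    listen 443 ssl;
    ssl_certificate     /etc/nginx/cert/cert.pem;
    ssl_certificate_key /etc/nginx/cert/key.pem;
    # ... rest of the server block unchanged ...
}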

Cannot make requests using AXIOS between two containers when using NGINX

I'm creating three containers using docker-compose.
The first one (front) runs Vue.js, the second one is an API running Node.js, and the third one runs nginx to "protect" them with SSL (to do something like this).
After some tests to make Vue.js work properly, the docker-compose.yml became this:
version: '3'
services:
  api:
    image: node-image
    container_name: api
    build:
      context: 'api.equilibrista.app'
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      TZ: America/Sao_Paulo
    restart: always
    networks:
      - equilibrista-network
  front:
    image: vue-image
    container_name: front
    build:
      context: 'www.equilibrista.app'
      dockerfile: Dockerfile
    ports:
      - "8888:8888"
    environment:
      TZ: America/Sao_Paulo
    restart: always
    networks:
      - equilibrista-network
    depends_on:
      - api
  server:
    image: nginx:latest
    container_name: server
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./equilibrista_nginx/web-root:/var/www/html
      - ./equilibrista_nginx/nginx-conf:/etc/nginx/conf.d
      - ./equilibrista_nginx/certbot-etc:/etc/letsencrypt
      - ./equilibrista_nginx/certbot-var:/var/lib/letsencrypt
      - ./equilibrista_nginx/dhparam:/etc/ssl/certs
    depends_on:
      - api
      - front
    networks:
      - equilibrista-network
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - ./equilibrista_nginx/certbot-etc:/etc/letsencrypt
      - ./equilibrista_nginx/certbot-var:/var/lib/letsencrypt
      - ./equilibrista_nginx/web-root:/var/www/html
    depends_on:
      - server
    command: certonly --webroot --webroot-path=/var/www/html --email eu@rafaelmiller.com --agree-tos --no-eff-email --force-renewal -d equilibrista.app -d www.equilibrista.app
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: equilibrista_nginx/views
      o: bind
  dhparam:
    driver: local
    driver_opts:
      type: none
      device: equilibrista_nginx/dhparam/
      o: bind
networks:
  equilibrista-network:
    driver: bridge
(it's working fine, as you can see in equilibrista.app)
But when I try to make a GET request, axios tries to use http://api:3000/api/ as the API_URL, and the container name does not resolve the way I expected.
env.API_URL = 'http://api:3000/api/'
axios.get(env.API_URL + 'exam', { params: { examId: '5e2b58d0a0d4295c96fa5e75' } }).then(response => ...
Maybe this can be useful:
nginx.conf file:
server {
    listen 80;
    listen [::]:80;
    server_name equilibrista.app www.equilibrista.app;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name api.equilibrista.app;

    location @api {
        proxy_pass http://api:3000;
    }

    location / {
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, DELETE, PATCH, PUT';
        try_files $uri @api;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name equilibrista.app www.equilibrista.app;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/equilibrista.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/equilibrista.app/privkey.pem;
    ssl_buffer_size 8k;
    ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_ecdh_curve secp384r1;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;

    location / {
        try_files $uri @front;
    }

    location @front {
        proxy_pass http://front:8888;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
        # enable strict transport security only if you understand the implications
    }

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name api.equilibrista.app;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/equilibrista.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/equilibrista.app/privkey.pem;
    ssl_buffer_size 8k;
    ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_ecdh_curve secp384r1;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;

    location / {
        try_files $uri @api;
    }

    location @api {
        proxy_pass http://api:3000;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
        # enable strict transport security only if you understand the implications
    }

    root /home/node/app;
    index index.html index.htm index.nginx-debian.html;
}
I've been stuck on this for a few days...
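(One detail that may explain the behaviour, offered as an assumption rather than a confirmed diagnosis: the axios call runs in the visitor's browser, and a browser cannot resolve Docker service names like api; only containers on equilibrista-network can. Since the nginx config above already serves the API on api.equilibrista.app, a sketch of the client-side change would be:)

// use the public hostname served by nginx instead of the Docker-internal service name
env.API_URL = 'https://api.equilibrista.app/api/'

axios.get(env.API_URL + 'exam', { params: { examId: '5e2b58d0a0d4295c96fa5e75' } })
    .then(response => console.log(response.data))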

Webpack development server separate subdomain proxied by nginx

I'm currently stuck on a problem with webpack-dev-server, which listens on the wrong domain with the wrong port. I've dockerized my Symfony application into 3 containers: node, php and nginx. On the node container, webpack-dev-server runs with the following (shortened) configuration:
output: {
    filename: '[name].[hash].js',
    chunkFilename: '[name].[chunkhash].js',
    path: Path.resolve(__dirname, 'web/static'),
    publicPath: '/static/'
},
devServer: {
    contentBase: Path.join(__dirname, 'web'),
    host: '0.0.0.0',
    port: 8080,
    headers: {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, PATCH, OPTIONS",
        "Access-Control-Allow-Headers": "X-Requested-With, content-type, Authorization"
    },
    disableHostCheck: true,
    open: false,
    overlay: true,
    compress: true
},
nginx is configured to serve the PHP application on www.application.box (Docker port mapping 80 => 80).
The webpack-dev-server is reachable on static.application.box (port 80 proxied to 8089) and runs on port 8080; port 8080 is also mapped to the host.
While all assets resolve correctly from static.application.box/static/some-assets.css/js, the sockjs-node/info request, as well as the websocket itself, goes to www.application.box:8080/sockjs-node/info?t= (which works, since the port is mapped to the node container).
I've tried several things without success. How can I modify the webpack-dev-server/nginx configuration so that the JS and websocket go through static.application.box/sockjs-node/info?t= ?
I ran into the same problem with webpack-dev-server a week ago, but it should be noted that I modified /etc/hosts to have separate project.local domains and that I used https.
Description:
In this case webpack-dev-server ran on a Docker container client:8080 and was proxied to client.project.local:80 via nginx.
Like you, I didn't find a way to configure webpack-dev-server to use my host and port, so I created another nginx proxy especially for that: :8080/sockjs-node. [1]
But then I had the problem that the dev-server tried to access https://client.project.local:8080/sockjs-node/info?t=1234567890,
which is one port too many for nginx, since client.project.local is already a proxy to client:8080. So I added config.output.publicPath = '//client.project.local/' in webpack.conf.js and ... voilà:
https://client.project.local/sockjs-node/info?t=1234567890.
It works like a charm.
Configs
webpack.conf.js:
const fs = require('fs')
const sslCrt = fs.readFileSync('/path/to/ssl/ca.crt')
const sslKey = fs.readFileSync('/path/to/ssl/ca.key')
// ...
{
    // ...
    devServer: {
        hot: true, // <- responsible for all of this, but still don't wanna miss it ;)
        inline: true,
        compress: true,
        host: process.env.HOST, // set in Dockerfile for client container
        port: process.env.PORT, // set in Dockerfile for client container
        disableHostCheck: true, // when manipulating /etc/hosts
        headers: { 'Access-Control-Allow-Origin': '*' },
        https: {
            cert: sslCrt,
            key: sslKey
        },
        // ...
    },
    output: {
        publicPath: '//client.project.local/' // host from /etc/hosts (note // at the beginning)
    },
}
nginx client config:
# http
server {
    listen 80 default;
    listen [::]:80 default ipv6only=on;

    server_name www.client.project.local client.project.local www.project.local project.local;

    # your other config like root, access_log, charset ..

    location / {
        proxy_pass https://client:8080/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

# https
server {
    listen 443 ssl default;
    listen [::]:443 ssl default ipv6only=on;

    ssl_certificate project.local.crt;
    ssl_certificate_key project.local.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl on;

    server_name www.client.project.local client.project.local www.project.local project.local;

    # your other config like root, access_log, charset ..

    location / {
        proxy_pass https://client:8080/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

# http/s websocket for webpack-dev-server
server {
    listen 8080 default;
    listen [::]:8080 default ipv6only=on;

    ssl_certificate project.local.crt;
    ssl_certificate_key project.local.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl on;

    server_name www.client.project.local client.project.local www.project.local project.local;

    # your other config like root, access_log, charset ..

    location /sockjs-node/ {
        proxy_pass https://client:8080/sockjs-node/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
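(One thing these snippets assume, and which also appears in the second answer's nginx.conf below: $connection_upgrade is not a built-in nginx variable, so the http block needs the usual map for the websocket upgrade to work:)

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}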
Remember to expose port 8080 for the nginx container as well, for example in docker-compose.yml. I've added a shortened version for the sake of completeness.
docker-compose.yml
version: "3"
networks:
project-net-ext:
project-net:
internal: true
driver: bridge
services:
client:
hostname: client
build: ./container/client
volumes:
- ./path/to/code:/code:ro # read-only
# write needed only for initial package download
ports:
- "8080:8080"
networks:
- project-net
# project-net-ext only needed for initial package download
nginx:
hostname: nginx
build: ./container/nginx
volumes:
- ./path/to/code:/code:ro # read-only
# write needed only for initial package download
ports:
- "80:80" # http
- "443:443" # https
- "8080:8080" # webpack-dev-server :8080/sockjs-node/info
links:
- client
networks:
- project-net # needed for nginx to connect to client container,
# even though you've linked them
- project-net-ext # nginx of course needs to be public
[1]: I don't know if it's considered dirty. At least it feels a bit like it is, but it works, and as the name suggests: it's a dev server, and once you npm build for production it's gone, for ever.
This can be fixed by setting devServer.sockPort: 'location'.
webpack.config.js:
devServer: {
    sockPort: 'location'
    // ...
}
Here's a complete nginx.conf that will allow you to proxy webpack-dev-server without requiring any changes other than sockPort.
nginx.conf:
events {}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 8081;

        # uncomment if you need ssl
        # listen 4443 ssl;
        # ssl_certificate cert.pem;
        # ssl_certificate_key privkey.pem;

        location / {
            # webpack-dev-server port
            proxy_pass http://localhost:3000;
            proxy_http_version 1.1;
            proxy_set_header Host localhost;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
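(A quick way to verify the proxying works end to end, assuming webpack-dev-server is listening on port 3000 as in the comment above, is to request the sockjs info endpoint through nginx; it should answer with a small JSON document:)

curl http://localhost:8081/sockjs-node/info
# e.g. {"websocket":true,"origins":["*:*"],"cookie_needed":false,"entropy":...}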

proxy_pass https in a Docker Container

I am trying to add SSL to my Docker setup. Everything is set up except for my Node.js app.
This is my nginx configuration:
map $http_user_agent $is_desktop {
    default 0;
    ~*linux.*android|windows\s+(?:ce|phone) 0;       # exceptions to the rule
    ~*spider|crawl|slurp|bot 1;                      # bots
    ~*windows|linux|os\s+x\s*[\d\._]+|solaris|bsd 1; # OSes
}

map $is_desktop $is_mobile {
    1 0;
    0 1;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})") {
        set $year $1;
        set $month $2;
        set $day $3;
    }

    access_log /usr/logs/nginx/lion/lion.$year-$month-$day.log;

    ssl_certificate /etc/nginx/ssl/example.com/b2d6debd2142c643.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com/example.key;

    location / {
        index index.html;
        if ($is_mobile) {
            root /usr/src/lion;
        }
        if ($is_desktop) {
            proxy_pass http://martinez:3000;
        }
        error_page 404 =301 /;
    }

    location /dist {
        alias /usr/src/martinez/dist;
        access_log off;
    }

    location /img {
        if ($is_desktop) {
            root /usr/src/martinez/dist;
            access_log off;
        }
    }

    error_page 405 =200 $uri;
}

server {
    listen 443;
    server_name www.example.com;
    return 301 $scheme://$host$request_uri;
}
When I access this, the browser shows "not secure" in the URL address bar, and this is probably because the proxy_pass is not over https. When I tried changing it to https I got a 502 Bad Gateway, which I can't resolve. I have already tried exposing and opening the ports in my docker-compose.yml.
docker-compose.yml
martinez:
  restart: always
  build: ./martinez/
  expose:
    - "3000"
    - "443"
  ports:
    - "3000:443"
  tty: true
  env_file: .env
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
    - "443:443"
    - "3000:3000"
  links:
    - martinez:martinez
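(For what it's worth, the usual pattern for this setup, stated here as an assumption rather than something confirmed in the question, is to terminate TLS at nginx only and keep proxy_pass on plain HTTP, since the Node app listens for plain HTTP on port 3000; pointing proxy_pass at https://martinez:3000 makes nginx attempt a TLS handshake with an upstream that doesn't speak TLS, which is a classic cause of 502 Bad Gateway. A sketch of the corresponding compose side, with only nginx publishing ports:)

martinez:
  restart: always
  build: ./martinez/
  expose:
    - "3000"        # reachable by nginx over the link/network only, not from the host
  tty: true
  env_file: .env
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
    - "443:443"
  links:
    - martinez:martinez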
