Docker flask uwsgi 404 on a remote host - python-3.x

Update 2
I configure my routes in configure_blueprints, called from create_app. I don't want to put all my view handlers in uwsgi.py; they are stored in separate modules.
def configure_blueprints(app):
    from .root import root_bp
    ...
    blueprints = [
        (root_bp, None),
        ...
    ]
    for bp, endpoint in blueprints:
        app.register_blueprint(bp, url_prefix=endpoint)
    return app

def create_app(config_fn=None):
    app = Flask(__name__, template_folder='../templates', static_folder='../static')
    ...
    configure_blueprints(app)
    return app
app/root/views.py
root_bp = Blueprint('root_bp', __name__)

@root_bp.route('/')
def root():
    if not current_user.is_authenticated:
        return redirect('/login/')
    return render_template('index.html')
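The registration loop above can be modeled in plain Python to show what the (blueprint, url_prefix) pairs do: each blueprint's routes end up mounted under its prefix, with None meaning the application root. The route tables below are hypothetical stand-ins, not the project's real blueprints.

```python
# Toy model of register_blueprint with url_prefix: routes from each
# blueprint are mounted under that blueprint's prefix (None = app root).
def mount(blueprints):
    url_map = {}
    for routes, prefix in blueprints:
        for path, endpoint in routes.items():
            url_map[(prefix or '') + path] = endpoint
    return url_map

root_routes = {'/': 'root_bp.root'}            # hypothetical route table
admin_routes = {'/users/': 'admin_bp.users'}   # hypothetical route table

print(mount([(root_routes, None), (admin_routes, '/admin')]))
# {'/': 'root_bp.root', '/admin/users/': 'admin_bp.users'}
```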
Here is the output of SIMPLE_SETTINGS=app.config,instance.docker python3 manage.py list_routes:
2017-03-01 06:48:11,381 - passlib.registry - DEBUG - registered 'sha512_crypt' handler: <class 'passlib.handlers.sha2_crypt.sha512_crypt'>
...
root_bp.root HEAD,OPTIONS,GET /
...
This is the implementation of the list_routes command:

@manager.command
def list_routes():
    import urllib
    output = []
    for rule in flask.current_app.url_map.iter_rules():
        options = {}
        for arg in rule.arguments:
            options[arg] = '[{0}]'.format(arg)
        methods = ','.join(rule.methods)
        try:
            url = flask.url_for(rule.endpoint, **options)
        except BaseException as e:
            print('Exc={}'.format(e))
            continue  # url is unbound here; skip the rule instead of crashing
        line = urllib.parse.unquote('{:50s} {:20s} {}'.format(rule.endpoint, methods, url))
        output.append(line)
    for line in sorted(output):
        print(line)
I do not understand why routes would have to be placed in a single file and could not be configured dynamically. If this does not work, what should I do?
Update
uwsgi.ini
[uwsgi]
env = SIMPLE_SETTINGS=app.config,instance.docker
callable = app
wsgi-file = /var/www/app/uwsgi.py
uid = www-data
gid = www-data
socket = /var/www/app/uwsgi.sock
chmod-socket = 666
logto = /var/log/uwsgi/app/app.log
chdir = /var/www/app
plugin = python3
master = true
processes = 1
/var/www/app/uwsgi.py
from app import create_app
app = create_app()
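For context, uwsgi's wsgi-file/callable pair works roughly like this: the file is imported as a module and the attribute named by callable is looked up on it. Here is a rough sketch of that lookup; the source string is a trivial hypothetical stand-in for uwsgi.py, not the real file.

```python
import types

# Rough sketch of `wsgi-file` + `callable = app`: import the file as a
# module, then fetch the module-level attribute named by `callable`.
def load_callable(source, callable_name):
    module = types.ModuleType('uwsgi_entry')
    exec(source, module.__dict__)
    return getattr(module, callable_name)

# hypothetical stand-in for /var/www/app/uwsgi.py
source = "def create_app():\n    return 'flask-app'\napp = create_app()\n"
print(load_callable(source, 'app'))  # flask-app
```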
I configure blueprints inside the create_app function, so I think they should be available at application start. I also use Flask-Script.
SIMPLE_SETTINGS=app.config,instance.docker python3 manage.py shell
The shell starts without errors.
Original post
I have studied all related questions but could not solve my problem. I deploy my project through docker-machine on a remote host. Here is my configuration:
Dockerfile
FROM ubuntu:latest

RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8

RUN apt-get update && apt-get install -y \
    python3 python3-pip git libpq-dev libevent-dev uwsgi-plugin-python3 \
    nginx supervisor

COPY nginx/flask.conf /etc/nginx/sites-available/
COPY supervisor/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY . /var/www/app

RUN mkdir -p /var/log/nginx/app /var/log/uwsgi/app /var/log/supervisor \
    && rm /etc/nginx/sites-enabled/default \
    && ln -s /etc/nginx/sites-available/flask.conf /etc/nginx/sites-enabled/flask.conf \
    && echo "daemon off;" >> /etc/nginx/nginx.conf \
    && pip3 install -r /var/www/app/python_modules \
    && chown -R www-data:www-data /var/www/app \
    && chown -R www-data:www-data /var/log

WORKDIR /var/www/app
CMD ["/usr/bin/supervisord"]
docker-compose.yml
version: '2'
services:
  db:
    image: postgres
    volumes:
      - moderator-db:/var/lib/postgresql/data
  redis:
    image: redis
  rabbitmq:
    image: rabbitmq:3.6
  api:
    build: .
    mem_limit: 1000m
    ports:
      - "80:80"
    depends_on:
      - redis
      - db
      - rabbitmq

volumes:
  moderator-redis:
    driver: local
  moderator-db:
    driver: local
supervisord.conf
[supervisord]
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx
[program:uwsgi]
command=uwsgi --ini /var/www/app/uwsgi.ini
flask nginx conf
server {
    server_name localhost;
    listen 80 default_server;
    charset utf-8;
    sendfile on;
    client_max_body_size 70M;
    keepalive_timeout 0;
    proxy_buffering on;
    proxy_buffer_size 8k;
    proxy_buffers 2048 8k;
    proxy_ignore_client_abort on;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:///var/www/app/uwsgi.sock;
    }

    location /static {
        root /var/www/app/static/;
    }
}
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9dec354d97e moderator_api "/usr/bin/supervisord" 13 seconds ago Up 9 seconds 0.0.0.0:80->80/tcp moderator_api_1
243c48dac303 postgres "docker-entrypoint..." 3 hours ago Up 14 seconds 5432/tcp moderator_db_1
23901a761ef1 redis "docker-entrypoint..." 3 hours ago Up 14 seconds 6379/tcp moderator_redis_1
7cc0683bfe18 rabbitmq:3.6 "docker-entrypoint..." 3 hours ago Up 16 seconds 4369/tcp, 5671-5672/tcp, 25672/tcp moderator_rabbitmq_1
tail -f /var/log/uwsgi/app/app.log
[pid: 24|app: 0|req: 1/1] 123.23.7.216 () {44 vars in 945 bytes} [Tue Feb 28 14:53:57 2017] GET / => generated 233 bytes in 10 msecs (HTTP/1.1 404) 3 headers in 311 bytes (1 switches on core 0)
If you need any additional information, please let me know. I tried different configurations: the nginx-proxy image, gunicorn, etc. I have the same problem on the remote host. What am I still missing?

I spent a lot of time and was not able to solve the problem with the uwsgi socket, so I decided to switch to the gunicorn HTTP WSGI server. I liked the simplicity of the setup.
supervisord.conf
[supervisord]
nodaemon=true
environment=SIMPLE_SETTINGS="app.config,instance.docker,instance.prod"
[program:nginx]
command=/usr/sbin/nginx
[program:app]
command=gunicorn app:app -b 0.0.0.0:8000 --name app --log-level=debug --log-file=- --worker-class gevent
nginx flask.conf
server {
    server_name your-domain;
    listen 80;
    charset utf-8;
    sendfile on;
    client_max_body_size 70M;
    keepalive_timeout 0;
    root /var/www/app/static/;
    proxy_buffering on;
    proxy_buffer_size 8k;
    proxy_buffers 2048 8k;
    proxy_ignore_client_abort on;

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://localhost:8000;
    }
}
One last caveat: I used a Flask snippet with minor modifications for Docker.
class ReverseProxied(ProxyFix):
    def __init__(self, app, config, **kwargs):
        self.config = config
        super().__init__(app, **kwargs)

    def __call__(self, environ, start_response):
        script_name = environ.get('HTTP_X_SCRIPT_NAME', '')
        if script_name:
            environ['SCRIPT_NAME'] = script_name
            path_info = environ['PATH_INFO']
            if path_info.startswith(script_name):
                environ['PATH_INFO'] = path_info[len(script_name):]
        scheme = environ.get('HTTP_X_SCHEME', '')
        if scheme:
            environ['wsgi.url_scheme'] = scheme
        # fix for docker to proxy server_name
        server_name = self.config.get('SERVER_NAME')
        if server_name:
            environ['HTTP_X_FORWARDED_HOST'] = server_name
        return super().__call__(environ, start_response)

...
app.wsgi_app = ReverseProxied(app.wsgi_app, app.config)
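The header rewriting inside __call__ can be exercised without werkzeug or Flask. This standalone sketch applies the same X-Script-Name/X-Scheme convention to a hand-built WSGI environ; the /api prefix is a hypothetical example.

```python
# Same SCRIPT_NAME/PATH_INFO/scheme rewrite as ReverseProxied.__call__,
# extracted so it can be run against a plain dict.
def apply_proxy_headers(environ):
    script_name = environ.get('HTTP_X_SCRIPT_NAME', '')
    if script_name:
        environ['SCRIPT_NAME'] = script_name
        path_info = environ.get('PATH_INFO', '')
        if path_info.startswith(script_name):
            environ['PATH_INFO'] = path_info[len(script_name):]
    scheme = environ.get('HTTP_X_SCHEME', '')
    if scheme:
        environ['wsgi.url_scheme'] = scheme
    return environ

environ = {'HTTP_X_SCRIPT_NAME': '/api', 'PATH_INFO': '/api/users',
           'HTTP_X_SCHEME': 'https', 'wsgi.url_scheme': 'http'}
apply_proxy_headers(environ)
print(environ['SCRIPT_NAME'], environ['PATH_INFO'], environ['wsgi.url_scheme'])
# /api /users https
```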
app/config.py
SERVER_NAME = 'localhost:8000'
instance/prod.py
SERVER_NAME = 'your-domain'
Now everything works fine.

Related

NGINX (13 Permission Denied) When Passing Port Through PM2 EcoSystem Config File

I am using NGINX and PM2 to run a Node.js app on an EC2 instance. The app runs on port 9443 and NGINX is listening on port 443.
When I hardcode port 9443 directly into my index.js everything works great, but if I pass in the port via the PM2 ecosystem config file, I get a 502/bad gateway error and the 13: Permission Denied error in my NGINX error.log file. Could someone help me with this?
nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
}
server.conf
server {
    listen 80;
    listen [::]:80;
    server_name $MY_SERVER_NAME$;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    access_log /location/to/logs;
    ssl_certificate /location/to/cert;
    ssl_certificate_key /location/to/cert/key;
    server_name $MY_SERVER_NAME$;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://127.0.0.1:9080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
ecosystem.config.js
module.exports = {
    apps: [
        {
            name: 'my-server',
            cwd: '/home/centos/my-server',
            script: '/home/centos/my-server/index.js',
            watch: ['/home/centos/my-server/'],
            ignore_watch: ['/home/centos/my-server/node_modules'],
            env_production: {
                NODE_ENV: 'production',
                PROD_PORT: 9080,
            },
        },
    ],
};
Output of ps aux | grep nginx and ps aux | grep pm2
root 847 0.0 0.0 119320 2240 ? Ss 16:52 0:00 nginx: master process /usr/sbin/nginx
nginx 848 0.0 0.2 152000 10084 ? S 16:52 0:00 nginx: worker process
nginx 849 0.0 0.2 152000 8076 ? S 16:52 0:00 nginx: worker process
centos 1259 0.6 1.4 837268 54648 ? Ssl 17:00 0:00 PM2 v4.5.6: God Daemon (/home/centos/.pm2)
index.js (not the full file, but the part where the port is used):
const http = require('http');
const winston = require('winston');
const nconf = require('nconf');

module.exports = async function (app) {
    const DEV_PORT = 9080;
    if (nconf.get('nodeEnv') === 'local') {
        app.listen(DEV_PORT, () => winston.info(`Listening on port ${DEV_PORT}...`));
    } else {
        http.createServer(app).listen(process.env.PROD_PORT);
    }
};
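One failure mode worth ruling out here: PM2 applies env_production only when started with --env production, so otherwise process.env.PROD_PORT is undefined and listen() lets the OS pick an arbitrary port that nginx's proxy_pass cannot reach. A minimal sketch of that fallback logic (resolve_port is a hypothetical helper, not part of the app):

```python
# If the env var is missing, Node's listen(undefined) ends up on an
# OS-assigned ephemeral port; modeling that as None makes the mismatch
# with proxy_pass http://127.0.0.1:9080 visible.
def resolve_port(env):
    raw = env.get('PROD_PORT')
    return int(raw) if raw else None  # env vars arrive as strings

print(resolve_port({'PROD_PORT': '9080'}))  # 9080 - matches proxy_pass
print(resolve_port({}))                     # None - nothing on 9080, 502
```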

Not found error with two Node projects served by NGINX with Docker

I'm learning Docker and my goal is to serve two Node.js projects with the same docker-compose using NGINX. My two projects (A and B) are simple hello-worlds:
'use strict';
const express = require('express');

// Constants
const PORT = 8301;
const HOST = '0.0.0.0';
const PATH = '/project-a';

// App
const app = express();
app.get(PATH, (req, res) => {
    res.send('<h1>Hello World</h1><p>Project A</p>');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}${PATH}`);
Above is A; B has the same code, changing only the port (8302) and the path (/project-b). Below is the docker-compose:
version: '3.7'
services:
  website_a:
    image: project_a/node
    build:
      context: ./projects_a
      dockerfile: Dockerfile
    container_name: project_a
    restart: always
    command: sh -c "node server.js"
    expose:
      - 8301
  website_b:
    image: project_b/node
    build:
      context: ./projects_b
      dockerfile: Dockerfile
    container_name: project_b
    restart: always
    command: sh -c "node server.js"
    expose:
      - 8302
  nginx:
    image: node-project-multisite/nginx
    build: nginx
    container_name: multisite_project_nginx
    ports:
      - 80:80
    depends_on:
      - website_a
      - website_b
And the nginx conf:
server {
    listen 80;
    listen [::]:80;
    server_name 127.0.0.1;

    # Logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location /project-a {
        proxy_pass http://website_a:8301/project-a;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        # max uploadable file size
        client_max_body_size 4G;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name 127.0.0.1;

    # Logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location /project-b {
        proxy_pass http://website_b:8302/project-b;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        # max uploadable file size
        client_max_body_size 4G;
    }
}
Using the docker-compose without NGINX I can see both hello worlds, but with NGINX I can reach only A; for B there is the message below:
Where did I make a mistake?
You should use only one server block, since both share the same server_name and port:
server {
    listen 80;
    listen [::]:80;
    server_name 127.0.0.1;

    # Logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location /project-a {
        proxy_pass http://website_a:8301/project-a;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        # max uploadable file size
        client_max_body_size 4G;
    }

    location /project-b {
        proxy_pass http://website_b:8302/project-b;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        # max uploadable file size
        client_max_body_size 4G;
    }
}
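Why the duplicate server block failed can be sketched with a toy model: nginx selects exactly one server block per request (by listen and server_name) and only then matches locations inside it, so a second block with identical listen/server_name, and its /project-b location, is never consulted. This is only an illustrative simplification, not nginx's full matching algorithm.

```python
# Toy of nginx server-block selection on the BROKEN config: the first
# block matching the host wins, and location matching happens only
# inside that one block.
servers = [
    {'name': '127.0.0.1', 'locations': {'/project-a': 'website_a:8301'}},
    {'name': '127.0.0.1', 'locations': {'/project-b': 'website_b:8302'}},
]

def route(host, path):
    srv = next(s for s in servers if s['name'] == host)  # first match only
    for prefix, upstream in srv['locations'].items():
        if path.startswith(prefix):
            return upstream
    return '404'

print(route('127.0.0.1', '/project-a'))  # website_a:8301
print(route('127.0.0.1', '/project-b'))  # 404 - merging the locations fixes it
```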

Could not find named location "#app" containerised node application

I have a containerised Node.js app on my server, fronted by an nginx webserver so it can use https, which is supposed to redirect to the node app. But I always get the error in the title and I have no clue why. My node app is showing as restarting, which might be a problem, but again I don't know why it's restarting, as it gives me nothing in the logs:
My Dockerfile:
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8080
CMD ["npm", "start"]
My docker compose file:
version: '3'
services:
  app:
    container_name: app
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - db
    networks:
      - app-network
  db:
    container_name: db
    image: mongo
    ports:
      - '27017:27017'
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - dhparam:/etc/ssl/certs
    depends_on:
      - app
    links:
      - app
    networks:
      - app-network
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email email@gmail.com --agree-tos --no-eff-email --force-renewal -d domain.com -d www.domain.com

volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /home/root/app/views/
      o: bind
  dhparam:
    driver: local
    driver_opts:
      type: none
      device: /home/root/app/dhparam/
      o: bind

networks:
  app-network:
    driver: bridge
And my nginx conf file:
server {
    listen 80;
    listen [::]:80;
    server_name domain.com www.domain.com;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    location / {
        rewrite ^ https://$host$request_uri? permanent;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name domain.com www.domain.com;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/api.wasdstudios.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.wasdstudios.com/privkey.pem;
    ssl_buffer_size 8k;
    ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
    ssl_ecdh_curve secp384r1;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;

    location / {
        try_files $uri @app;
    }

    location @nodejs {
        proxy_pass http://app:8080;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
        # enable strict transport security only if you understand the implications
    }

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
}
First part of my node file:
const express = require('express');
const app = express();
const path = require('path');
const db = require('mongoose');

// Import routes
const authRoute = require('./routes/auth');
const recoverRoute = require('./routes/recover');
const getUser = require('./routes/getUser');

// Connect to db
console.log(process.env.DB_CONNECT);
db.connect('mongodb://db:27017/app-mongo-database', { useNewUrlParser: true }, (err, client) => {
    if (err) {
        console.log(err);
    } else {
        console.log("connected to db");
    }
});

// Middleware
app.use(express.json());
app.use('/static', express.static(path.join(__dirname, 'static')));
app.use('/auth/getUser', getUser);

// Route middlewares
app.use('/auth', authRoute);
app.use('/auth/recover', recoverRoute);

app.listen(8080, () => console.log('Server started'));
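As an aside on the mounting order used above: Express dispatches to the first mounted prefix that matches, which is why the more specific '/auth/getUser' router is registered before the broader '/auth' one. A simplified toy model of that dispatch (real Express matching is segment-aware; this only illustrates the ordering):

```python
# First-match prefix dispatch, mirroring the app.use ordering above.
mounts = [('/auth/getUser', 'getUser'), ('/auth', 'authRoute')]

def dispatch(path):
    for prefix, handler in mounts:
        if path.startswith(prefix):
            return handler
    return '404'

print(dispatch('/auth/getUser'))  # getUser
print(dispatch('/auth/login'))    # authRoute
```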
This is the log I get when I look into the app container's logs:
And when I go to my domain I get the obvious:
Update:
I ran docker-compose up --build -d and this is the output (npm start is running correctly):
This is what docker-compose ps now displays:
It's showing npm start now as it should, but it still does not work, with the same error.
Solved it.
I had to add the app-network into my db service:
version: '3'
services:
  app:
    container_name: app
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - db
    networks:
      - app-network
  db:
    container_name: db
    image: mongo
    ports:
      - '27017:27017'
    networks:        # <<<<< here
      - app-network
and update my nginx.conf file to this. I made an error: my try_files target was called @app, not @nodejs:

location / {
    try_files $uri @app;   # <<<<< this should be @nodejs, not '@app'
}

location @nodejs {         # <<<<< as long as it matches the name above
    proxy_pass http://app:8080;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
}
You can compare the code, if needed, with my question to see if you fell into the same error as me.
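For reference, the try_files $uri @name pair behaves like the sketch below: serve the file from root if it exists on disk, otherwise fall through to the named location, which is why the name in try_files must match a defined location exactly. The paths here are hypothetical.

```python
import os

# try_files: return the on-disk file if present, else the named-location
# fallback; nginx refuses to start if the @name has no matching location.
def try_files(root, uri, fallback):
    candidate = os.path.join(root, uri.lstrip('/'))
    return candidate if os.path.isfile(candidate) else fallback

print(try_files('/nonexistent-root', '/index.html', 'proxy -> http://app:8080'))
# proxy -> http://app:8080
```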

How to run flask appbuilder with uWSGI and Nginx

I built a web server with flask appbuilder. I can run this project with the command python3 run.py or fabmanager run, but it always stops responding after some hours without interaction, so I tried to run it with Nginx.
Here is my config:
uwsgi.ini:
[uwsgi]
base = /root/flask_spider/gttx_spider/web
all_base = /root/flask_spider/gttx_spider/
app = run
module = %(app)
chdir = %(base)
virtualenv = %(all_base)/venv
socket = %(all_base)/uwsgi_gttx_spider.sock
logto = /var/log/uwsgi/%n.log
master = true
processes = 500
chmod-socket = 666
vacuum = true
callable = app
nginx.conf
server {
    listen 82;
    server_name gttx_spider;
    charset utf-8;
    client_max_body_size 75M;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/root/flask_spider/gttx_spider/uwsgi_gttx_spider.sock;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
    }
}
and modify run.py:

from app import app
app.run(host='0.0.0.0')
# app.run(host='0.0.0.0', port=8080, debug=True)
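One thing worth checking in this run.py: app.run() is executed at module level, so it also runs when uwsgi imports the file, starting werkzeug's dev server, which may be why the app answers on :5000 instead of through the socket. The usual fix, which the working hello-world below already uses, keys off __name__, since that differs between direct execution and import:

```python
# __name__ is "__main__" only when a module is executed directly; when
# uwsgi imports run.py it is "run", so a guarded app.run() never fires
# and uwsgi is free to serve the `app` callable itself.
def dev_server_starts(module_name):
    return module_name == "__main__"

print(dev_server_starts("__main__"))  # True  - python3 run.py
print(dev_server_starts("run"))       # False - imported by uwsgi
```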
and then:
sudo ln -s /root/flask_spider/gttx_spider/nginx.conf /etc/nginx/conf.d/
sudo /etc/init.d/nginx restart
uwsgi --ini uwsgi_gttx_spider.ini
When I access IP:82, I get this log in nginx:
[error] 11104#11104: *3 upstream timed out (110: Connection timed out) while reading response header from upstream
When I access IP:5000, this is the log in uwsgi:
2018-09-10 19:36:25,747:INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2018-09-10 19:36:38,434:INFO:werkzeug:115.192.37.57 - - [10/Sep/2018 19:36:38] "GET / HTTP/1.1" 302 -
2018-09-10 19:36:38,466:INFO:werkzeug:115.192.37.57 - - [10/Sep/2018 19:36:38] "GET /home/ HTTP/1.1" 200 -
Also, I tried this:
mv web/run.py web/run_bak.py
vi run.py

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8090)
When I access IP:82 it returns 'Hello World' and everything is fine.
The difference is that werkzeug runs the flask appbuilder project on port 5000; how do I make it run through uwsgi.sock? Please help, thanks!

Webpack development server separate subdomain proxied by nginx

I'm currently stuck on a problem with the webpack-dev-server, which listens on a wrong domain with a wrong port. I've dockerized my Symfony application into 3 containers: node, php and nginx. On the node container the webpack-dev-server is running with the following (shortened) configuration:
output: {
    filename: '[name].[hash].js',
    chunkFilename: '[name].[chunkhash].js',
    path: Path.resolve(__dirname, 'web/static'),
    publicPath: '/static/'
},
devServer: {
    contentBase: Path.join(__dirname, 'web'),
    host: '0.0.0.0',
    port: 8080,
    headers: {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, PATCH, OPTIONS",
        "Access-Control-Allow-Headers": "X-Requested-With, content-type, Authorization"
    },
    disableHostCheck: true,
    open: false,
    overlay: true,
    compress: true
},
nginx is configured to serve the php application on www.application.box (docker port mapping 80 => 80).
The webpack-dev-server is reachable on static.application.box (proxied port 80 to 8089) and running on port 8080; port 8080 is also mapped to the host.
While all assets are correctly resolved from static.application.box/static/some-assets.css/js, the sockjs-node/info request, as well as the websocket itself, goes to www.application.box:8080/sockjs-node/info?t= (which works, since the port is mapped to the node container).
I've tried several things, but without success. So how can I modify the webpack-dev-server/nginx configuration to get the js and websocket on static.application.box/sockjs-node/info?t= ?
I ran into the same problem with webpack-dev-server a week ago, but it should be noted that I modified /etc/hosts to have separate project.local domains and that I used https.
Description:
In this case the webpack-dev-server ran on a docker container client:8080 and was proxied to client.project.local:80 via nginx.
Like you, I didn't find a way to configure webpack-dev-server to use my host and port, so I created another nginx proxy especially for that :8080/sockjs-node. [1]
But then I had the problem that the dev-server tried to access https://client.project.local:8080/sockjs-node/info?t=1234567890
which is a port too much for nginx, since client.project.local is already a proxy to client:8080. So in webpack.conf.js I added config.output.publicPath = '//client.project.local/' and ... voilà:
https://client.project.local/sockjs-node/info?t=1234567890
works like a charm.
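The effect of that protocol-relative publicPath can be checked with the standard library: a reference starting with // keeps the page's scheme but replaces host and port, which is exactly what drops the stray :8080.

```python
from urllib.parse import urljoin

# '//host/path' is a network-path reference: the scheme comes from the
# base URL, the netloc (host:port) comes from the reference itself.
print(urljoin('https://client.project.local:8080/sockjs-node/info',
              '//client.project.local/sockjs-node/info'))
# https://client.project.local/sockjs-node/info
```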
Configs
webpack.conf.js:
const fs = require('fs')
const sslCrt = fs.readFileSync('/path/to/ssl/ca.crt')
const sslKey = fs.readFileSync('/path/to/ssl/ca.key')
// ...
{
    // ...
    devServer: {
        hot: true, // <- responsible for all of this, but still don't wanna miss it ;)
        inline: true,
        compress: true,
        host: process.env.HOST, // set in Dockerfile for client container
        port: process.env.PORT, // set in Dockerfile for client container
        disableHostCheck: true, // when manipulating /etc/hosts
        headers: { 'Access-Control-Allow-Origin': '*' },
        https: {
            cert: sslCrt,
            key: sslKey
        },
        // ...
    },
    output: {
        publicPath: '//client.project.local/' // host from /etc/hosts (note // at beginning)
    },
}
nginx client config:
# http
server {
    listen 80 default;
    listen [::]:80 default ipv6only=on;
    server_name www.client.project.local client.project.local www.project.local project.local;
    # your other config like root, access_log, charset ..
    location / {
        proxy_pass https://client:8080/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

# https
server {
    listen 443 ssl default;
    listen [::]:443 ssl default ipv6only=on;
    ssl_certificate project.local.crt;
    ssl_certificate_key project.local.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl on;
    server_name www.client.project.local client.project.local www.project.local project.local;
    # your other config like root, access_log, charset ..
    location / {
        proxy_pass https://client:8080/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

# http/s websocket for webpack-dev-server
server {
    listen 8080 default;
    listen [::]:8080 default ipv6only=on;
    ssl_certificate project.local.crt;
    ssl_certificate_key project.local.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl on;
    server_name www.client.project.local client.project.local www.project.local project.local;
    # your other config like root, access_log, charset ..
    location /sockjs-node/ {
        proxy_pass https://client:8080/sockjs-node/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
Remember to expose port 8080 for the nginx container as well, for example in docker-compose.yml. I added a shortened version for the sake of completeness.
docker-compose.yml
version: "3"
networks:
  project-net-ext:
  project-net:
    internal: true
    driver: bridge
services:
  client:
    hostname: client
    build: ./container/client
    volumes:
      - ./path/to/code:/code:ro # read-only
        # write needed only for initial package download
    ports:
      - "8080:8080"
    networks:
      - project-net
      # project-net-ext only needed for initial package download
  nginx:
    hostname: nginx
    build: ./container/nginx
    volumes:
      - ./path/to/code:/code:ro # read-only
        # write needed only for initial package download
    ports:
      - "80:80" # http
      - "443:443" # https
      - "8080:8080" # webpack-dev-server :8080/sockjs-node/info
    links:
      - client
    networks:
      - project-net # needed for nginx to connect to client container,
                    # even though you've linked them
      - project-net-ext # nginx of course needs to be public
[1]: I don't know if it's considered to be dirty. At least it feels a bit like it is, but it works, and as the name suggests: it's a dev-server, and once you npm build for production, it's gone - for ever.
This can be fixed by setting devServer.sockPort: 'location'.
webpack.config.js:
devServer: {
    sockPort: 'location'
    // ...
}
Here's a complete nginx.conf that will allow you to proxy webpack-dev-server without requiring any changes other than sockPort
nginx.conf:
events {}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        listen 8081;
        # uncomment if you need ssl
        # listen 4443 ssl;
        # ssl_certificate cert.pem;
        # ssl_certificate_key privkey.pem;

        location / {
            # webpack-dev-server port
            proxy_pass http://localhost:3000;
            proxy_http_version 1.1;
            proxy_set_header Host localhost;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
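The map block at the top is what lets WebSocket upgrades and plain requests share one proxy: its logic reduces to a two-way switch on the incoming Upgrade header, sketched here outside nginx.

```python
# Equivalent of `map $http_upgrade $connection_upgrade`: an empty header
# (plain HTTP) maps to "close", anything else hits `default upgrade;`.
def connection_upgrade(http_upgrade):
    return 'close' if http_upgrade == '' else 'upgrade'

print(connection_upgrade('websocket'))  # upgrade
print(connection_upgrade(''))           # close
```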
