I have an Express.js backend and a React frontend. The Express API is reachable, but the React app is not.
===> React Dockerfile:
FROM node:16 as client-build
WORKDIR /front
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 80
# Setup the server
FROM nginx
WORKDIR /front
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=client-build /front/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
===> Backend Dockerfile:
FROM node:16
WORKDIR /back
COPY package*.json ./
RUN npm install
COPY . .
CMD npm run start
===> docker-compose:
version: "3.8"
services:
  mongodb:
    image: mongo:3.6.8
    restart: unless-stopped
    ports:
      - "27017:27017"
    volumes:
      - db:/data/db
    networks:
      - backend
  back:
    depends_on:
      - mongodb
    build: ./back
    ports:
      - "9000:9000"
    networks:
      - backend
      - frontend
    restart: on-failure
  front:
    depends_on:
      - back
    build: ./front
    ports:
      - "80:9001"
    networks:
      - frontend
    restart: unless-stopped
volumes:
  db:
    driver: local
networks:
  backend:
  frontend:
I use Docker on Windows (PowerShell) and on an Ubuntu server; the result is the same on both.
localhost:9000 or domain.com:9000 - backend reachable
but localhost / localhost:9001 / domain.com / domain.com:9001 - not reachable
Only when I run curl in a terminal against the container's local IP (which I got from Docker, 192.172****) do I get a result, but not from outside.
I'm new to Docker. My goal is to make the frontend accessible via the domain and keep the API private. But for the very next step, all I need is to access the frontend in a browser.
===> Nginx conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    server {
        listen 80;
        server_name _;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html =404;
        }

        location /api/ {
            proxy_pass http://back:9000/api/;
        }
    }
}
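A likely cause, given the files above: the nginx container serving the React build listens on port 80 (per `listen 80` in the nginx conf), but the compose file publishes "80:9001", which forwards host port 80 to container port 9001, where nothing is listening. A sketch of the fix:

```yaml
# front service: forward host port 80 to the port nginx actually listens on
front:
  depends_on:
    - back
  build: ./front
  ports:
    - "80:80"   # was "80:9001"; nginx listens on 80 inside the container
  networks:
    - frontend
  restart: unless-stopped
```

Note also that the EXPOSE 80 in the build stage of the React Dockerfile has no effect on the final nginx stage; EXPOSE is per-stage and is only documentation in any case.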
Related
I am trying to set up a Node + React app for a local development environment using Docker + nginx.
nginx is running on 3002.
For the route /api, strip off the /api prefix and forward to the api service, i.e. the Node app.
Forward all other routes to the client service, i.e. the React app.
docker-compose file
version: "3"
services:
  api:
    build:
      context: api
      dockerfile: Dockerfile.dev
    ports:
      - 3000:3000
    volumes:
      - ./api:/app
  client:
    stdin_open: true
    build:
      context: client
      dockerfile: Dockerfile.dev
    ports:
      - 3001:3000
    volumes:
      - ./client:/app
  nginx:
    restart: always
    build:
      context: nginx
      dockerfile: Dockerfile.dev
    ports:
      - 3002:80
    volumes:
      - ./nginx/default.dev.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - client
      - api
nginx default.dev.conf
upstream api {
    server api:3000;
}

upstream client {
    server client:3001;
}

server {
    listen 80;

    location / {
        proxy_pass http://client;
    }

    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api {
        rewrite /api/(.*) /$1;
        proxy_pass http://api/;
    }
}
client Dockerfile.dev
FROM node:alpine
WORKDIR /app
CMD ["npm", "run", "start"]
api Dockerfile.dev
FROM node:alpine
WORKDIR /app
CMD ["npm", "run", "dev"]
nginx Dockerfile.dev
FROM nginx
On visiting http://localhost:3002/api - works.
On visiting http://localhost:3002 - I get 502 Bad Gateway.
http://localhost:3000 and http://localhost:3001 work.
On docker exec -it bytecode_api_1 sh (i.e. opening a terminal inside the api container), ping client and ping nginx both work.
The issue is in the connection between the React app and nginx: neither is a request from React reaching nginx, nor is a request to http://localhost:3002 being directed to the React dev server.
EDIT: I figured it out with the help of some good people on Discord. I had made a silly mistake; it should be
upstream client {
    server client:3000;
}
i.e. the container port.
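The general rule behind that fix: in a compose port mapping like 3001:3000, the left side is the host port and only matters for reaching the service from the host machine. Container-to-container traffic on the compose network always uses the right side, the container port, so nginx must target 3000:

```yaml
client:
  ports:
    - 3001:3000   # host:container - other containers must use 3000, not 3001
```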
I have this problem where when I run Docker locally it works, but when I try to run a multi-container app on Azure it gives me this error:
2020/11/02 19:10:39 [emerg] 1#1: host not found in upstream "php" in /etc/nginx/nginx.conf:38
nginx: [emerg] host not found in upstream "php" in /etc/nginx/nginx.conf:38
It's obviously referring to the fastcgi_pass php:9000; line. I tried using localhost, but that didn't work. Any ideas?
docker-compose.yml:
version: '3.1'
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - "3306:3306"
  php:
    image: joelcastillo/php:1.1
    ports:
      - "9000:9000"
    restart: always
  web:
    image: joelcastillo/nginx:1.3
    ports:
      - "80:80"
    links:
      - php
PHP Dockerfile:
FROM php:7-fpm
# Use the default production configuration
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install mysqli
RUN usermod -u 1000 www-data
RUN apt-get update && apt-get install -y libpng-dev libmagickwand-dev --no-install-recommends
RUN docker-php-ext-install opcache
RUN docker-php-ext-install gd
RUN pecl install imagick
RUN docker-php-ext-enable imagick
Nginx Dockerfile:
FROM nginx
COPY ./ /code
COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80 default_server;
        root /code/public_html;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php;
        }

        # Forward images to prod site
        location ~ ^/app/uploads {
            try_files $uri @prod;
        }

        location @prod {
            rewrite ^/app/uploads/(.*)$ https://jee-o.com/app/uploads/$1 redirect;
        }

        location ~ \.php$ {
            proxy_read_timeout 3600;
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }
    }

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
}
If you are using docker-compose, all services are created on a shared network through which they can reach each other by service name (e.g. "php").
If you use some other mechanism (your question does not go into detail about how you configure your Azure instance), you have to make sure that:
You create a network to which you can attach your containers (e.g. docker network create my-network)
The backend container is created with the name "php" and attached to your network (e.g. docker run --name php --network my-network <image>)
If you have already started a container you can still attach it to a network: docker network connect my-network php
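Put together, the manual (non-compose) setup described above would look something like this (the image names are taken from the compose file; everything else is a sketch):

```shell
# Create a user-defined network so containers can resolve each other by name
docker network create my-network

# Start the PHP-FPM backend with the exact name nginx expects in its upstream
docker run -d --name php --network my-network joelcastillo/php:1.1

# Start nginx on the same network; it can now resolve "php" in fastcgi_pass php:9000
docker run -d --name web --network my-network -p 80:80 joelcastillo/nginx:1.3

# If a container is already running, attach it to the network after the fact
docker network connect my-network php
```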
I want:
(1) https://example.com to serve static content using SSL
(2) https://sub.example.com/myapp to serve dynamic content from NodeJS using SSL.
I have a domain example.com and one subdomain sub.example.com. I have a Docker container that has nginx and a nodejs app in it. Using Ubuntu on Digital Ocean.
I am getting 502 Bad Gateway errors. Not sure if this setup CAN work or, if it can, how to fix it.
The nodejs app listens on 8080 (no host specified, no SSL)
Nginx listens on 80 and 443 for example.com but redirects all http (80) requests to https (443)
Nginx listens on 443 for sub.example.com.
docker-compose.yml:
version: '3'
services:
  nginx:
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
      - ./data/nginx/html:/etc/nginx/html
    ports:
      - "80:80"
      - "443:443"
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
  certbot:
    image: certbot/certbot
    restart: unless-stopped
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
  node:
    image: node:11
    volumes:
      - ./data/node:/usr/local/src/node
    ports:
      - "8080:8080"
    working_dir: /usr/local/src/node/myapp
    command: node server.js
nginx.conf:
server {
    listen 80;
    server_name example.com;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name example.com;
    server_tokens off;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location /myapp {
        proxy_pass http://localhost:8080;
    }
}
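One likely source of the 502 here: inside the nginx container, localhost is the nginx container itself, not the node container, so proxy_pass http://localhost:8080 has nothing to connect to. Since compose puts both services on the same network, the proxy should target the service name instead (a sketch; note that sub.example.com would also need its own server block, which the posted conf doesn't show):

```nginx
location /myapp {
    # "node" is the compose service name, resolvable via Docker's internal DNS
    proxy_pass http://node:8080;
}
```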
I am googling and trying to use nginx for a simple Node.js application with Docker Compose. But when I visit localhost:8081, my request returns 502 Bad Gateway. How can I fix this error?
My file structure is below:
Load-Balancer:
Dockerfile:
FROM nginx:stable-alpine
LABEL xxx yyyyyy
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 8081
CMD ["nginx", "-g", "daemon off;"]
nginx.conf:
events { worker_connections 1024; }

http {
    upstream localhost {
        server backend1:3001;
        server backend2:3001;
        server backend3:3001;
    }

    server {
        listen 8081;
        server_name localhost;

        location / {
            proxy_pass http://localhost;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
        }
    }
}
docker-compose.yml
version: '3.2'
services:
  backend1:
    build: ./backend
    tty: true
    volumes:
      - './backend'
  backend2:
    build: ./backend
    tty: true
    volumes:
      - './backend'
  backend3:
    build: ./backend
    tty: true
    volumes:
      - './backend'
  loadbalancer:
    build: ./load-balancer
    tty: true
    links:
      - backend1
      - backend2
      - backend3
    ports:
      - '8081:8081'
volumes:
  backend:
My repository: https://github.com/yusufkaratoprak/nginx_docker_loadbalancer
There is no command set for the backend container images to run.
The official Node.js images run node by default, which starts the CLI when a tty exists. I assume the tty was enabled in the compose definition to keep the containers from crashing.
A simple Dockerfile for an app would look like:
FROM node:boron
WORKDIR /app
COPY src/. /app/
RUN npm install
EXPOSE 3001
CMD [ "node", "/app/index.js" ]
A tty shouldn't be needed for most daemons, so remove the tty settings from the docker-compose.yml. The links are also redundant in version 2+ compose files.
version: '3.2'
services:
  backend1:
    build: ./backend
    volumes:
      - './backend'
  backend2:
    build: ./backend
    volumes:
      - './backend'
  backend3:
    build: ./backend
    volumes:
      - './backend'
  loadbalancer:
    build: ./load-balancer
    ports:
      - '8081:8081'
volumes:
  backend:
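One more thing worth checking in both versions of this compose file: an entry like - './backend' under a service's volumes is a single path, which compose treats as an anonymous volume at a container path rather than a bind mount of the source code (and a relative path there is likely to be rejected). If the intent is to mount the local ./backend directory into the container, a host:container pair is needed; the container path below is an assumption, matching the WORKDIR /app in the sample Dockerfile above:

```yaml
backend1:
  build: ./backend
  volumes:
    - ./backend:/app   # bind-mount the source; /app is an assumed container path
```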
My application is working: console.log output appears in the logs, and process.env.URL is displayed on the command line. However, the browser returns a 502 Bad Gateway error.
This is my docker-compose
version: "2"
volumes:
  mongostorage:
services:
  app:
    build: ./app
    ports:
      - "3000"
    links:
      - mongo
      - redis
    command: node ./bin/www
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    links:
      - app:app
  mongo:
    image: mongo:latest
    environment:
      - MONGO_DATA_DIR=/data/db
    volumes:
      - mongostorage:/data/db
    ports:
      - "27017:27017"
  redis:
    image: redis
    volumes:
      - ./data/redis/db:/data/db
    ports:
      - "6379:6379"
nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream app.dev {
        least_conn;
        server app:3000 weight=10 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        server_name app.dev;

        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
This is app/Dockerfile
FROM node:6.3
WORKDIR /var/www/app
RUN mkdir -p /var/www/app
COPY package.json /var/www/app
RUN npm install
COPY . /var/www/app
This is nginx/Dockerfile
FROM nginx:latest
EXPOSE 80
COPY nginx.conf /etc/nginx/nginx.conf
In your nginx.conf file, the proxy_pass directive should use the upstream name, i.e.
proxy_pass http://app.dev/ instead of proxy_pass http://app, and it should work. As written, "app" does not match any declared upstream, so nginx resolves it as a hostname and connects to the app container on port 80 instead of 3000.
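Applied to the conf above, the location block would then read (everything else unchanged):

```nginx
location / {
    proxy_pass http://app.dev;   # matches the upstream name declared above
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
```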