How to use Traefik PathPrefix on an Nginx Docker image? - node.js

I am making a React front-end application, and my aim is to serve the final (npm run build) version using a Docker image, which contains a multi-stage build with my source code and Nginx. There are multiple other Dockerized services involved, and Traefik acts as the load balancer / forwarder / whatever else Traefik does.
Now here's the thing: When I build the image it works fine as long as Traefik is not involved and I'm simply forwarding the application to a port in my docker-compose.yml file as follows:
version: '2.4'
services:
  ui:
    ports:
      - someport:80
By doing this, I find the application served at http://address:someport no problem.
Now what I eventually have to do is give the application a PathPrefix, and Traefik is a good tool for this. Say I want to eventually find the application from http://address/app.
From what I understand, this is easily done in the docker-compose.yml file using labels, like:
version: '2.4'
services:
  ui:
    ports:
      - 80
    labels:
      - traefik.enable=true
      - traefik.http.routers.myapp.rule=PathPrefix("/app")
      - traefik.http.routers.myapp.entrypoints=web
Please notify me if these labels are insufficient for what I want to achieve!
So after launching my application and navigating to http://address/app, nothing is rendered. Important! -> Looking at the console, it seems that the application is trying to access the http://address/static/css... and http://address/static/js... source files and does not find them. There is nothing behind these addresses, but by adding /app in between, like http://address/app/static..., the source files would be found like they are supposed to be. How do I get my application to do this?
Is this a Traefik issue or an Nginx issue, or perhaps related to npm, which I find unlikely?
The Dockerfile for the ui service is somewhat like this:
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build
# production env
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

I think what you're missing is that you need to let React know that it is not hosted at the root / but at a sub-path. In order to configure a different path, and thus enable React to properly build the paths used to access assets such as CSS and JS, you need to set the "homepage" key in your package.json to a different value, for example,
"homepage": "http://address/app",
You can read more on the topic in the deployment section of the React docs.

This is an interesting problem we had a couple days ago as well. I'll share what I found in the hopes that it points you in some form of right direction.
Using PathPrefix only, the application has to fully listen on the prefix.
From https://docs.traefik.io/routing/routers/#rule
Since the path is forwarded as-is, your service is expected to listen on /products.
There is a Middleware that strips out prefixes should your app not listen on the subpath:
https://docs.traefik.io/middlewares/stripprefix/
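In label form, a minimal sketch of wiring that middleware up could look like the following (the router name myapp is reused from the question; the middleware name myapp-strip is just an illustrative choice, assuming Traefik v2 label syntax):
services:
  ui:
    labels:
      - traefik.enable=true
      - traefik.http.routers.myapp.rule=PathPrefix(`/app`)
      - traefik.http.routers.myapp.entrypoints=web
      # middleware that strips /app before the request reaches the container
      - traefik.http.middlewares.myapp-strip.stripprefix.prefixes=/app
      # attach the middleware to the router
      - traefik.http.routers.myapp.middlewares=myapp-strip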
However the caveat there is
If your backend is serving assets (e.g., images or Javascript files), chances are it must return properly constructed relative URLs.
We haven't quite found solutions yet, other than changing the paths in our applications. But maaaybe this helps

For example in Next.js, I had to create next.config.js with content:
module.exports = {
  basePath: '/nextjs',
}
where nextjs is my PathPrefix value from traefik.
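For reference, the matching Traefik side could then be a plain PathPrefix router without any stripping, roughly like this sketch (router and service names are illustrative):
services:
  nextjs:
    labels:
      - traefik.enable=true
      # must match the basePath set in next.config.js
      - traefik.http.routers.nextjs.rule=PathPrefix(`/nextjs`)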

Related

Is it a good practice to keep nodejs server and react frontend in same directory?

I need to serve my React build files (the build directory) using a Node.js server. With React being wrapped in Docker, my Node.js server cannot access the build directory within /frontend. So what I am thinking of is to move my server.js into /frontend and have a single Dockerfile for both of them.
It would have something like this: CMD ['npm run build', 'node server.js']
Would that be illegal and bad practice?
Modern stack monoliths
If you are developing a single site with its backend, it can work to have the frontend and backend in the same repository or directory. It will be like the modern monoliths (MEAN, MERN, MEVN), with some challenges related to having different application types in one repository.
SPA with several APIs
But if your site will be a complex SPA with several menus, modules, roles, calls to several REST APIs, etc., I advise a distributed or decoupled architecture, i.e. one app or process per repository/server. Here are some advantages:
Frontend (SPA) with React
its own domain, like acme.com
you can use specialized services for static sites
you are not bound to only one API; you can consume several APIs
React developers can have their own git strategy
custom builds for the web app
Backend (API) with Node.js
its own domain, like acme.api.com
you can scale only the backend, because the heavy processing is in this layer
Node.js developers can have their own git strategy
Serving the SPA
If your web app is simple, you could use:
https://www.npmjs.com/package/http-server
https://www.npmjs.com/package/serve
https://www.npmjs.com/package/static-server
But if your web app has more complex requirements (environment variable portability, a single build, etc.), you could use:
https://github.com/usil/nodeboot-spa-server
References
SPA with several APIs: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern
Modern stack monoliths: https://lzomedia.com/blog/different-ways-to-connect-react-frontend-and-node-backend/
You can't (*) run multiple servers in a single container.
For your setup, though, you don't need multiple servers. You can compile the React application to static files and serve those from your other application. There are some more advanced approaches where a server injects the data or a rendered copy of the page as it serves these up; these won't necessarily work with a React dev server in a separate container (probably some of the setup described in #JRichardsz's answer goes into this more).
If both halves are in the same repository, you can potentially use a Docker multi-stage build to compile the front-end application to static files, then copy the result into the main server image. This could look like:
# Build the frontend.
FROM node:lts AS frontend
WORKDIR /app
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build
# Built files are left in /app/build; this stage has no CMD.
# Build the main server.
FROM node:lts
WORKDIR /app
COPY server/package*.json ./
RUN npm ci
COPY server/ ./
# Copy the build tree from the frontend image into this one.
COPY --from=frontend /app/build ./static
RUN npm run build
# Normal metadata to set up and run the server container.
EXPOSE 3000
CMD ["npm", "run", "start"]
(*) It's technically possible, but you need to install some sort of process manager, which adds significant complication. It's less of a concern with the approach I describe here but you also lose some things like the ability to see just a single process's logs or the ability to update only one part of the system without restarting the rest of it. The CMD you suggest won't do it. I'd almost always use multiple containers over trying to shoehorn in something like supervisord.
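If you do go the multiple-container route instead, a minimal docker-compose sketch (directory names and ports here are assumptions, not taken from the question) could be:
version: "3.8"
services:
  frontend:
    build: ./frontend   # e.g. an nginx image serving the static build
    ports:
      - "80:80"
  server:
    build: ./server
    ports:
      - "3000:3000"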

How to use a docker image to generate static files to serve from nginx image?

I'm either missing something really obvious or I'm approaching this totally the wrong way, either way I could use some fresh insights.
I have the following docker images (simplified) that I link together using docker-compose:
frontend (a Vue.js app)
backend (Django app)
nginx
postgres
In development, I don't use nginx but instead the Vue.js app runs as a watcher with yarn serve and Django uses manage.py runserver.
What I would like to do for production (in CI/CD):
build and push backend image
build and push nginx image
build the frontend image with yarn build command
get the generated files in the nginx container (through a volume?)
deploy the new images
The problem is: if I put yarn build as the CMD in the Dockerfile, the compilation happens when the container is started, and I want it to be done in the build step in CI/CD.
But if I put RUN yarn build in the image, what do I put as CMD? And how do I get the generated static files to nginx?
The solutions I could find use multistage builds for the frontend that have an nginx image as the last step, combining the two. But I need the nginx image to be independent of the frontend image, so that doesn't work for me.
I feel like this is a problem that has been solved by many, yet I cannot find an example. Suggestions are much appreciated!
Create a volume in your docker-compose.yml file and mount the same volume in both your frontend container (at the path where the built files are, like the dist folder) and your nginx container (at your web root path). This way both containers have access to the same volume.
Also, keep your yarn build as a RUN command.
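A minimal sketch of that shared-volume idea, assuming the frontend image leaves its built files in /app/dist and nginx serves /usr/share/nginx/html (paths and service names are illustrative, not taken from the question):
version: "3.8"
services:
  frontend:
    build: ./frontend
    volumes:
      - frontend-dist:/app/dist
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - frontend-dist:/usr/share/nginx/html:ro
volumes:
  frontend-dist:
Keep in mind that a named volume is only populated from the image's contents the first time it is created, so it typically has to be removed and recreated for a new frontend build to show up.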
EDIT:
A container needs to run a program or command in order to stay up, so that it can be started, stopped, etc. That is by design.
If you are not planning to serve anything from the frontend container using a command, then you should either remove it as a service from docker-compose.yml (since it isn't one) and add it as a build stage in your nginx Dockerfile, or use some kind of command that will run indefinitely in your frontend container, for example tail -f index.html. The first solution is the better practice.
In your nginx Dockerfile, add the frontend build as the first stage:
FROM node AS frontend-build
WORKDIR /app
# copy the frontend sources and build them
COPY . ./
RUN yarn install
RUN yarn build

FROM nginx
COPY --from=frontend-build /app/dist /usr/share/nginx/html
...
CMD ["nginx", "-g", "daemon off;"]

Deploy and run Docker in AWS

I just finished my React and Node.js project and created a Dockerfile for each one, then created a docker-compose file to build the Docker image for each one (frontend and backend).
I also pushed my images to my repository on Docker Hub.
What should I do now? I want to run my Docker project on AWS EC2, so I created a new EC2 instance in my AWS dashboard, installed Docker there, and also managed to download my images from Docker Hub...
But now I'm pretty stuck: do I need to create a container to run both of them? Do I need to run each one on its own?
Also, I'm using Nginx to use port 80 instead of the default 3000.
I'm really lost now (first time working with Docker and AWS).
Thanks!
EDIT 1
My Dockerfile for React is:
# build environment
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci --silent
RUN npm install react-scripts#3.4.1 -g --silent
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
# new
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
My Dockerfile for Node.js is:
FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
and my Nginx config file is:
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
my docker-compose.yml file is:
version: "2"
services:
frontend:
image: itaik/test-docker:frontend
build: ./frontend
ports:
- "80:80"
depends_on:
- backend
backend:
image: itaik/test-docker:backend
build: ./backend
ports:
- "5000:5000"
On my computer (Windows) I downloaded the Docker Desktop app, and using the docker-compose up command it creates containers for both of my images and everything is great. This is what I'm trying to achieve on my AWS EC2 (Linux).
Hope things more clear now.
EDIT 2
OK, so somehow I managed to run both of my images in different containers and they are now both online like I wanted.
I used the commands:
docker run -d -p 5000:5000 itaik/test-docker:backend
docker run -d -p 80:80 itaik/test-docker:frontend
But now, for some reason all my API calls to "localhost:5000" are getting an error:
GET http://localhost:5000/user/ net::ERR_CONNECTION_REFUSED
Is this related to my React/Node.js code or to the way I set up my Docker images?
Thanks!
EDIT 3
Even after I use docker-compose and both images are running on the same network bridge, I still get that error.
The only solution I can think of is to manually edit my code and change "localhost" to the AWS public IP, but I guess I'm just missing something that needs to be done...
Maybe my network bridge is not visible to the public IP somehow?
Because I do get a response when I go to the public IP (port 80) and to port 5000.
But the API call to localhost:5000 gets an error.
Probably the shortest path to get this working is to:
Merge the frontend and backend into the same Docker image (because you have to serve your bundle from somewhere, and the backend is the nearest already-prepared place for it)
Make sure hosts and ports are set up and the container works properly on your machine
Push the image to Docker Hub (or e.g. AWS ECR)
Get a machine at AWS EC2 and install Docker there (and, if needed, Docker Compose)
Make sure the security groups for your machine allow incoming traffic on the port your application will be served on (right-click on the instance in the AWS Console); for your example you should open port 80
Pull the image and start the container (or two via docker-compose) with the port mapped as 80:3000 (let's assume your app serves on :3000 in the container)
In the same AWS EC2 console, click on your instance and you will see the public address of your machine (something like ec2-203-0-113-25.compute-1.amazonaws.com)
I think those are the most common pitfalls.
I would recommend deploying your app as a single image (e.g. Express can easily serve your frontend app after you build it into a static bundle), because it may take some time to make the containers communicate with each other.
But if you still want to serve your app with two components (frontend from Nginx and backend from Node.js), then you can use docker-compose. Just make sure the hosts for each component are set up properly (they won't see each other on localhost).
UPD 2
There are actually two ways to make the frontend communicate with the backend (I suppose you are probably using axios or something similar for API calls):
The 1st way is to explicitly set http://<host>:<port> as the axios baseURL, so all your API calls become interpolated into http://ec2-203-....amazonaws.com:3000/api/some-data. But you need to know the exact host your backend is served on. Maybe this is the most explicit and easiest way :)
The 2nd way is to make the browser locate your server. I'm not deeply familiar with how that works under the hood, but on a high level it works like this: the user gets your application as a bundle from yoursite.com. The app needs to fetch data from the backend, but in code those calls are "relative": only /api/some-data is stated.
As there is no backend host set up, the app asks the browser to locate its "home" and, I suppose, location.hostname becomes the host for such calls, so the full URL becomes http://yoursite.com/api/some-data.
But, as you can see, if the user makes this call, it will hit Nginx instead of the actual backend, because Nginx is the one serving the frontend.
So the next thing you have to do is to proxy such calls from Nginx to Node.js. And there is another thing: only API calls should be proxied; other calls should, as usual, return your bundle. So you have to set up the proxy this way:
location /api/ {
    proxy_pass http://backend:5000;
}
Note that the Node.js app is at backend:5000, not localhost:5000; this is how docker-compose sets up DNS.
So now Nginx does two things:
Serves static content
Reverse-proxies API calls to your backend
As you may have noticed, there are many tricky parts to making your app work in two containers. This is how you can serve static content (from the build directory) with Express, so you won't need to use Nginx:
const express = require('express');
const path = require('path');
const app = express();
app.use('/', express.static(path.join(__dirname, 'build')));
I would recommend trying the 1st way (explicitly setting host:port) and enabling all the logging you can, so you know what's going on. After you are done with the 1st part, update Nginx to make it work in the proper way.
Good luck!
First of all, to interact with another Docker service running on the same EC2 instance, you don't need to go through the public internet.
Second, your React application cannot access the backend service via http://localhost:5000, because the backend service is not running in the same container.
You should be able to access the backend service through http://backend:5000 from the React container. Please check and get back to me.
Not related to your problem, but a suggestion:
Is your React application public-facing or an internal application? If it's public-facing, you can easily serve your React application from Amazon S3 itself, since S3 supports static website hosting. You could simply copy the React application into an S3 bucket and run it as a static website. This also allows you to set up SSL via CloudFront.
Hope this helps.
Reference:
how to setup the s3 static website

switching command based on node env variable for the docker-compose.yml file

I am trying to host an application, so I have created a server and a client folder. In the server folder there is Node.js, and the client is React using create-react-app.
So for Docker, I have created two Dockerfiles, one in the server folder and one in the client folder. In the project root I created a docker-compose.yml file.
For local development, I need auto-reloading for the server, so I have put
command: nodemon index.js
in the docker-compose.yml file. Everything is working fine when I build the Docker image. But when I host this application I need to change it to
`command: node index.js`
The only way I can think of is to add a Node environment variable when I host the application, but the problem is that I can only access it in the server folder's index.js, like
process.env.APPLICATION_ENVIRONMENT
but how can I access it in the docker-compose.yml file? I want to use the same docker-compose.yml file for hosting, while also making it easy for developers to start working by keeping server auto-reloading.
Is there any better way to do this?
docker-compose files support variable substitution. You can then use this to store and set the command you want to run directly in the docker-compose file.
E.g. a sample docker-compose.yml:
version: "3"
services:
server:
build: ./server
command: ${NODE_COMMAND:-nodemon} index.js
${NODE_COMMAND:-nodemon} will default to nodemon if no NODE_COMMAND variable is present in your shell. You can override this value in production by providing a value for NODE_COMMAND when you start the containers, i.e.:
$ NODE_COMMAND=node docker-compose up -d
Alternatively, on your hosted server you could create a .env file in the same directory where you run your docker-compose commands, like so:
NODE_COMMAND=node
And docker-compose will automatically substitute the value in your compose file. See the linked page about variable substitution for more details.
Hopefully this helps.

Git repo structure with multiple docker files

What is the best way to structure a repository / project that has multiple Dockerfiles for provisioning services?
e.g.
Dockerfile # build Nodejs app server
Dockerfile # build Nginx forward proxy
Dockerfile # build Redis cache server
What are the best practices and the standard structure within a repository for containing this information?
You generally have one folder per Dockerfile, as:
each one can use multiple other resource files (config files, data files, ...) when doing their respective docker build -t xxx .
each one can have its own .dockerignore
My project b2d, for instance, has one Dockerfile per application.
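As an illustration (the folder names here simply mirror the services in the question), the layout and the matching docker-compose build sections might look like:
# repository layout, one folder per Dockerfile:
#   app/Dockerfile     - Node.js app server
#   nginx/Dockerfile   - Nginx forward proxy
#   redis/Dockerfile   - Redis cache server
version: "3.8"
services:
  app:
    build: ./app
  nginx:
    build: ./nginx
  redis:
    build: ./redis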
