How to serve/update content from one Docker container in another - node.js

I'm trying to run a NodeJS app behind nginx. I want nginx to proxy to the NodeJS server and to serve static content like images, CSS, and JS.
Here's my docker-compose file:
version: "3.7"
services:
web:
build: .
image: 127.0.0.1:5000/test
volumes:
- public:/app/public
deploy:
replicas: 2
nginx:
image: nginx:stable-alpine
ports:
- 80:80
volumes:
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
# share files from web
- public:/static:ro
volumes:
public:
My Dockerfile runs npx webpack -p to build all the static files (into /app/public), and then starts the NodeJS server with node /app/src/server.min.js. The NodeJS server renders server-side React, with AJAX endpoints for minimal page updates. As mentioned above, the nginx container serves the static content (CSS, JS, images, etc.).
The problem is that I can't update the static files. Once the volume is created and populated, those files can't be altered, i.e. I can't update the CSS or JS.
You can see this behavior with docker-compose up or docker stack deploy.
Is there some way that I could recreate the volume, or serve the files in some other way between the containers?

There are three basic approaches you can take here.
The first is, at build time, to copy those static assets somewhere outside of Docker space where Nginx or something else can host them. If you're running in AWS anyway, you can serve them directly out of S3; if you're in a local environment, use a host path instead of a named volume. This mostly just avoids the "only on first use" volume behavior, but it requires some work outside of Docker.
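In the local-environment variant, that might look something like this sketch, assuming you run the Webpack build on the host so its output lands in ./public:
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - 80:80
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./public:/static:ro   # host path instead of the named volume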
The second is to build the same content into two images, and not try to share it using a volume. You'd have to add a second Dockerfile for the nginx image, and if there's some sort of build pipeline (Webpack?) for the static content you'd have to ensure it runs up front. A Dockerfile line like
COPY --from=127.0.0.1:5000/test /app/public /static
might work.
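For context, a sketch of what that second Dockerfile could look like, assuming the application image has already been built and tagged as 127.0.0.1:5000/test and reusing the nginx.conf from the compose file above:
# Dockerfile.nginx (hypothetical file name)
FROM nginx:stable-alpine
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
# Copy the prebuilt static assets out of the already-built application image
COPY --from=127.0.0.1:5000/test /app/public /static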
You can also have the web image copy its own data manually at startup, instead of relying on Docker to do this for you. You can have an entrypoint script like
#!/bin/sh
if [ -d /static ]; then
  cp -r /app/public/* /static
fi
exec "$@"
Add that to your image, and mount the shared volume on /static in both containers.
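For completeness, wiring that script in could look like this sketch (entrypoint.sh is the name I'm assuming for the script above):
# In the web image's Dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["node", "/app/src/server.min.js"]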
All of these cases avoid the behavior of Docker automatically populating volumes. I try to avoid that in general because of exactly this issue: it only happens the first time you run a container, but in reality you're frequently making updates and redeploying, and because that volume already exists Docker won't update it. (This behavior also doesn't work at all in Kubernetes, in spite of it otherwise being able to run standard Docker images without modification.)

Related

Is it possible to use a docker volume without overwriting node_modules? [duplicate]

Suppose I have a Docker container and a folder on my host, /hostFolder. Now if I want to add this folder to the Docker container as a volume, I can do this either by using ADD in the Dockerfile or by mounting it as a volume.
So far, so good.
Now /hostFolder contains a sub-folder, /hostFolder/subFolder.
I want to mount /hostFolder into the Docker container (whether as read-write or read-only does not matter, both work for me), but I do NOT want /hostFolder/subFolder included. I want to exclude it, and I also want the Docker container to be able to make changes to this sub-folder without having it changed on the host as well.
Is this possible? If so, how?
Using docker-compose, I'm able to use node_modules locally but ignore it in the Docker container, using the following syntax in docker-compose.yml:
volumes:
  - './angularApp:/opt/app'
  - /opt/app/node_modules/
So everything in ./angularApp is mapped to /opt/app, and then I create another mount volume at /opt/app/node_modules/ which is now an empty directory, even if ./angularApp/node_modules on my local machine is not empty.
If you want to have subdirectories ignored by docker-compose but persistent, you can do the following in docker-compose.yml:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules
This will mount your current directory as a shared volume, but mount a persistent Docker volume in place of your local node_modules directory. This is similar to the answer by @kernix, but this will allow node_modules to persist between docker-compose up runs, which is likely the desired behavior.
For those trying to get a nice workflow going where node_modules isn't overridden by the local directory, this might help.
Change your docker-compose to mount an anonymous persistent volume to node_modules to prevent your local overriding it. This has been outlined in this thread a few times.
services:
  server:
    build: .
    volumes:
      - .:/app
      - /app/node_modules
This is the important bit we were missing: when spinning up your stack, use docker-compose up -V. Without this, if you added a new package and rebuilt your image, the container would still be using the node_modules from your initial docker-compose launch.
-V, --renew-anon-volumes    Recreate anonymous volumes instead of retrieving
                            data from the previous containers.
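In practice that usually means running something like this when you've added or updated dependencies and want the image rebuilt as well:
docker-compose up --build -V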
To exclude a file, use the following
volumes:
  - /hostFolder:/folder
  - /dev/null:/folder/fileToBeExcluded
With the docker command line:
docker run \
  --mount type=bind,src=/hostFolder,dst=/containerFolder \
  --mount type=volume,dst=/containerFolder/subFolder \
  ...other-args...
The -v option may also be used (credit to Bogdan Mart), but --mount is clearer and recommended.
First, using the ADD instruction in a Dockerfile is very different from using a volume (either via the -v argument to docker run or the VOLUME instruction in a Dockerfile). The ADD and COPY commands just take a copy of the files at the time docker build is run. These files are not updated until a fresh image is created with the docker build command. By contrast, using a volume is essentially saying "this directory should not be stored in the container image; instead use a directory on the host"; whenever a file inside a volume is changed, both the host and container will see it immediately.
I don't believe you can achieve what you want using volumes; you'll have to rethink your directory structure if you want to do this.
However, it's quite simple to achieve using COPY (which should be preferred to ADD). You can either use a .dockerignore file to exclude the subdirectory, or you could COPY all the files then do a RUN rm bla to remove the subdirectory.
Remember that any files you add to image with COPY or ADD must be inside the build context i.e. in or below the directory you run docker build from.
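For example, if you build from /hostFolder, a .dockerignore file next to the Dockerfile could keep the sub-folder out of the build context entirely (a sketch; adjust the path to your layout):
# .dockerignore
subFolder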
For the people who also had the issue that the node_modules folder would still be overwritten from your local system, and the other way around:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules/
This is the solution, with the trailing / after node_modules being the fix.
Looks like the old solution doesn't work anymore (at least for me). Creating an empty folder and mapping the target folder to it helped, though.
volumes:
  - ./angularApp:/opt/app
  - .empty:/opt/app/node_modules/
I found this link which saved me: Working with docker bind mounts and node_modules.
This working solution will create an "exclude" named volume in Docker's volume manager. The volume name "exclude" is arbitrary, so you can use a custom name for the volume instead.
services:
  node:
    command: nodemon index.js
    volumes:
      - ./:/usr/local/app/
      # the named volume below prevents the host system's node_modules from being mounted
      - exclude:/usr/local/app/node_modules/
volumes:
  exclude:
You can see more info about volumes in the official docs: Use a volume with Docker Compose.
To exclude a file contained in a volume mounted from your machine, you have to overwrite it by allocating a volume to that same file.
In your config file:
services:
  server:
    build: ./Dockerfile
    volumes:
      - .:/app
An example in your Dockerfile:
# Image Location
FROM node:13.12.0-buster
VOLUME /app/you_overwrite_file

How to make containers for multiple servers in one code base to deploy a Golang app with Docker?

I have a repo with multiple servers running. The structure is like this:
// Golang Apps
- account = port 4001
- event = port 4002
- place = port 4003
// Node js
- gateway = port 4000
I usually run it locally using a script like this:
// script.sh here:
#!/bin/bash
EnvAPP="${ENV_APP:-dev}"
function cleanup {
  kill "$ACCOUNTS_PID"
  kill "$EVENTS_PID"
  kill "$PLACES_PID"
}
trap cleanup EXIT
go build -tags $EnvAPP -o ./tmp/srv-accounts ./cmd/server/accounts
go build -tags $EnvAPP -o ./tmp/srv-events ./cmd/server/events
go build -tags $EnvAPP -o ./tmp/srv-places ./cmd/server/places
./tmp/srv-accounts &
ACCOUNTS_PID=$!
./tmp/srv-events &
EVENTS_PID=$!
./tmp/srv-places &
PLACES_PID=$!
sleep 1
node ./cmd/gateway/index.js
Is it possible to create one Dockerfile for this case for production? Should I run script.sh in the Dockerfile? Which image should I use in the Dockerfile? I have no idea how to do this with Docker, because it's one code base with multiple servers running, and there's also the problem of the servers' ports.
Maybe one of you has had this case before? It would be great to know how to solve this problem.
I am using GraphQL Federation (Go) for this, so I have multiple services and a gateway (NodeJS).
I want to deploy this to production.
You need four separate Dockerfiles for this, to launch four separate containers with four different programs. The Go component Dockerfiles can be fairly straightforward:
# Dockerfile.accounts
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
ARG ENV_APP=dev
RUN go build -tags "$ENV_APP" -o /accounts ./cmd/server/accounts
FROM ubuntu:20.04
COPY --from=build /accounts /usr/local/bin
CMD accounts
(If the three images are really identical aside from the specific command directory being built, you could pass that in as an ARG as well. I'm assuming the ./cmd/server/* packages require packages elsewhere in your source directory like a ./pkg/support or whatever, which would require the Dockerfiles to be at the top level.)
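If you did go that route, a sketch of a single parameterized Dockerfile might look like this (SERVER is a hypothetical build argument, not something from the question):
# Dockerfile (hypothetical parameterized variant)
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
ARG ENV_APP=dev
ARG SERVER=accounts
# Build whichever server this image variant is for
RUN go build -tags "$ENV_APP" -o /server ./cmd/server/$SERVER

FROM ubuntu:20.04
COPY --from=build /server /usr/local/bin/server
CMD ["/usr/local/bin/server"]
Each image build would then pass --build-arg SERVER=events (or the equivalent args: in Compose) to select the program.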
Since your script is just running the four programs, I'd generally recommend using Docker Compose as a way to launch the four containers together. "Launch some containers with known options" is the only thing Compose does, but it would do everything your script does.
# docker-compose.yml
version: '3.8'
services:
  accounts:
    build:
      context: .
      dockerfile: Dockerfile.accounts
  events:
    build:
      context: .
      dockerfile: Dockerfile.events
  places:
    build:
      context: .
      dockerfile: Dockerfile.places
  gateway:
    build:
      context: .
      dockerfile: Dockerfile.gateway
      # (Since a Node app can't reuse Go code, this could also
      # reasonably be `build: cmd/gateway` using a
      # `cmd/gateway/Dockerfile`)
    ports:
      - 3000:3000
Just running docker-compose up will start all four containers in the foreground; once it's up, pressing Ctrl+C will stop them all. You can configure the gateway to use the other container names accounts, events, places as host names; http://accounts/graphql for example.
You could also adapt your launcher script as-is. Run docker build instead of go build to build images, docker run to start a container (probably with fixed --names), and docker stop && docker rm to stop them. You should docker network create a network and docker run --net all of the containers on it so they can communicate in the same way as the Compose setup.
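A sketch of what that adapted launcher could look like (the image, container, and network names here are my own assumptions, not from the question):
#!/bin/bash
# Hypothetical adaptation of script.sh using docker instead of go build and background processes.
docker network create federation 2>/dev/null || true

docker build -f Dockerfile.accounts -t srv-accounts .
docker build -f Dockerfile.events   -t srv-events   .
docker build -f Dockerfile.places   -t srv-places   .
docker build -f Dockerfile.gateway  -t srv-gateway  .

function cleanup {
  docker rm -f accounts events places gateway
}
trap cleanup EXIT

docker run -d --name accounts --net federation srv-accounts
docker run -d --name events   --net federation srv-events
docker run -d --name places   --net federation srv-places
# Run the gateway in the foreground, like the original script does with node.
docker run --name gateway --net federation -p 3000:3000 srv-gateway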

Docker - volumes explanation

As far as I know, a volume in Docker is some permanent data for the container, which can map a local folder to a container folder.
Early on, I was facing the Error: Cannot find module 'winston' issue in Docker, which is mentioned in:
docker - Error: Cannot find module 'winston'
Someone told me in this post:
Remove volumes: - ./:/server from your docker-compose.yml. It overrides the whole directory contains node_modules in the container.
After I remove volumes: - ./:/server, the above problem is solved.
However, another problem occurs.
[solved but want explanation] nodemon --legacy-watch src/ not working in Docker
I solved the above issue by adding back volumes: - ./:/server, but I don't know the reason why it works.
Question
What is the cause and explanation for the above 2 issues?
What happens between build and volumes, and what is the relationship between build and volumes in docker-compose.yml?
Dockerfile
FROM node:lts-alpine
RUN npm install --global sequelize-cli nodemon
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    ...
  test-web:
    environment:
      - NODE_ENV=local
      - PORT=3030
    build: . # <------------------------ It takes the Dockerfile in the current directory
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server # <------------------------ how and when does this line work?
    ports:
      - "3030:3030"
    depends_on:
      - test-db
When you don't have any volumes:, your container runs the code that's built into the image. This is good! But, the container filesystem is completely separate from the host filesystem, and the image contains a fixed copy of your application. When you change your application, after building and testing it in a non-Docker environment, you need to rebuild the image.
If you bind-mount a volume over the application directory (.:/server) then the contents of the host directory replace the image contents; any work you do in the Dockerfile gets completely ignored. This also means /server/node_modules in the container is ./node_modules on the host. If the host and container environments don't agree (MacOS host/Linux container; Ubuntu host/Alpine container; ...) there can be compatibility issues that cause this to break.
If you also mount an anonymous volume over the node_modules directory (/server/node_modules) then only the first time you run the container the node_modules directory from the image gets copied into the volume, and then the volume content gets mounted into the container. If you update the image, the old volume contents take precedence (changes to package.json get ignored).
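Putting those two behaviors together is what the commonly suggested layout does; roughly (a sketch based on the compose file above):
services:
  test-web:
    build: .
    volumes:
      - ./:/server            # bind mount: host source replaces the image's /server
      - /server/node_modules  # anonymous volume: keeps the image's node_modules in place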
When the image is built only the contents of the build: block have an effect. There are no volumes: mounted, environment: variables aren't set, and the build environment isn't attached to networks:.
The upshot of this is that if you don't have volumes at all:
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
It is completely disconnected from the host environment. You need to docker-compose build the image again if your code changes. On the other hand, you can docker push the built image to a registry and run it somewhere else, without needing a separate copy of Node or the application source code.
If you have a volume mount replacing the application directory then everything in the image build is ignored. I've seen some questions that take this to its logical extent and skip the image build entirely, just bind-mounting the host directory over an unmodified node image. There's not really a benefit to using Docker here, especially for a front-end application; install Node instead of installing Docker and use ordinary development tools.

How to run nodejs and reactjs in Docker

I have a nodejs app for the backend and another reactjs app for the frontend of a website, and I put them into a Docker image. But I don't know how to deal with the CMD command in the Dockerfile. Does Docker have any command to solve this?
I thought that I could use docker-compose to build 2 separate images, but it seems wasteful because the node image has to be installed 2 times.
Does anyone have a solution?
Rule of thumb: single process per container.
I thought that I could use docker-compose to build 2 separate image
but it seems to be wasted because node image has to be installed 2
times.
First thing: managing 2 separate Docker images is fine, but running two processes in one container is not fine at all.
Second thing: you do not need to build 2 separate images; if you can run two processes from the same code base, then you can run both applications from a single docker-compose file.
version: '3.7'
services:
  react-app:
    image: myapp:latest
    command: react-scripts start
    ports:
      - 3000:3000
  node-app:
    image: myapp:latest
    command: node server.js
    ports:
      - 3001:3001
Each container should have only one concern. Decoupling applications
into multiple containers makes it easier to scale horizontally and
reuse containers. For instance, a web application stack might consist
of three separate containers, each with its own unique image, to
manage the web application, database, and an in-memory cache in a
decoupled manner.
Limiting each container to one process is a good rule of thumb
Dockerfile Best practice
Whether to put your backend and front-end inside the same container is a design choice (remember that Docker containers are designed to share a lot of resources with the host machine).
You can use a shell script and run that shell script with CMD in your Dockerfile.
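If you do go that route, a minimal sketch of such a wrapper could be the following (the file name start.sh and the commands are assumptions based on the compose example above, and this works against the one-process-per-container guidance):
#!/bin/sh
# start.sh - run the backend in the background, keep the frontend in the foreground
node server.js &
exec react-scripts start
You would then copy this script into the image and point CMD at it.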

Making files in different docker containers accessible to each other via file path

I have used docker-compose to dockerise a python app dependent on a database, which works fine. The python app generates a PowerPoint file which it stores in /tmp within the container. This then needs to be converted to PDF for the dockerised frontend to render it. I intend to do this using a dockerised LibreOffice image: https://hub.docker.com/r/domnulnopcea/libreoffice-headless/
The libreoffice container is run as follows
sudo docker run -v /YOUR_HOST_PATH/:/tmp libreoffice-headless libreoffice --headless --convert-to pdf /tmp/MY_PPT_FILE --outdir /tmp
where YOUR_HOST_PATH is a path within my python app container.
What I need to happen
I need the python app to call the libreoffice container to convert the ppt file residing in the python app container, and then make the path of the converted document available for the frontend to render.
Basically: how do I make files in different Docker containers accessible to each other using docker-compose?
My docker-compose.yaml:
version: '3'
services:
  backend:
    image: interrodata_backend
    build: ./backend
    ports:
      - "9090:9090"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
  db:
    image: nielsen_db
    restart: always
    build: ./db
How to call commands in another container?
In this answer, @Horgix explains ways to invoke an executable that resides in another container. For your case, the cleanest way is to make your libreoffice container a service and expose an HTTP API to the outside. Then call this API from the Python app container.
How to share files between different containers?
You can use either volumes or bind-mounts to achieve this.
For example, to use bind-mounts:
docker run -v /host/path:/tmp python-app
docker run -v /host/path:/tmp libreoffice-headless
The Python app generates files into its own /tmp directory, and the libreoffice app will find the same files in its own /tmp directory; they are sharing the same directory.
Same idea for using volumes. You can find more information here.
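Translated into the docker-compose.yaml above, that could look like the following sketch (the libreoffice service and the shared-tmp volume name are my assumptions):
services:
  backend:
    image: interrodata_backend
    build: ./backend
    volumes:
      - shared-tmp:/tmp
  libreoffice:
    image: domnulnopcea/libreoffice-headless
    volumes:
      - shared-tmp:/tmp
volumes:
  shared-tmp:
The backend writes the .pptx into /tmp, the libreoffice container converts it in place, and the resulting PDF path is the same in both containers.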
