nginx doesn't see updated static content - node.js

I'm using docker-compose to set up nginx and node
services:
  nginx:
    container_name: nginx
    build: ./nginx/
    ports:
      - "80:80"
      - "443:443"
    links:
      - node:node
    volumes_from:
      - node
    volumes:
      - /etc/nginx/ssl:/etc/nginx/ssl
  node:
    container_name: node
    build: .
    env_file: .env
    volumes:
      - /usr/src/app
      - ./logs:/usr/src/app/logs
    expose:
      - "8000"
    environment:
      - NODE_ENV=production
    command: npm run package
I have node and nginx share the same volume so that nginx can serve the static content generated by node.
When I update the source code in node, I remove the node container and rebuild it via the commands below:
docker rm node
docker-compose -f docker-compose.prod.yml up --build -d node
I can see that the new node container has the updated source code with the proper updated static content:
docker exec -it node bash
root@e0cd1b990cd2:/usr/src/app# cat public/style.css
This shows the updated content I want to see:
.project_detail .owner{color:#ccc;padding:10px}
However, when I log in to the nginx container:
docker exec -it nginx bash
root@a459b271e787:/# cat /usr/src/app/public/style.css
.project_detail .owner{padding:10px}
As you can see, nginx is not able to see the newly updated static files served by node, despite the node update. It does work if I restart the nginx container as well.
Am I doing something wrong? Do I have to restart both the nginx and node containers to see the updated content?

Instead of sharing one container's volumes with another via volumes_from, share a common directory on the host with both containers. volumes_from mounts the anonymous /usr/src/app volume that the node container had when nginx started; recreating node gives it a brand-new anonymous volume, which the already-running nginx container never picks up until nginx itself is recreated. That is why restarting nginx appears to fix things. For example, if the directory is at /home/user/app, then it should be present in the volumes section like:
volumes:
  - /home/user/app:/usr/src/app
This should be done for both containers, as sketched below.
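For concreteness, a minimal sketch of the two services rewritten around a shared host directory (the /home/user/app path is illustrative; everything else comes from the compose file in the question):

services:
  nginx:
    build: ./nginx/
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # both containers mount the same host directory
      - /home/user/app:/usr/src/app
      - /etc/nginx/ssl:/etc/nginx/ssl
  node:
    build: .
    env_file: .env
    volumes:
      - /home/user/app:/usr/src/app
      - ./logs:/usr/src/app/logs
    expose:
      - "8000"
    environment:
      - NODE_ENV=production
    command: npm run package

Because both mounts point at the same host path, recreating the node container no longer matters: nginx always reads whatever is currently in /home/user/app on the host.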

Related

Running Gulp in a Node.js container, Error: Cannot find module but volumes seem right

I'm setting up an older WordPress theme in a Docker environment, going through the process of creating a docker-compose.yml file that lets me run various services in various containers.
I've gotten very far… nginx and mysql running with php and ssl, all working.
The final piece of the puzzle is setting up a container for Node that will run Gulp to build the final theme (a gulpfile that processes all the css and js).
I've been through dozens of answers on Stack Overflow and looked at many projects on GitHub that are similar but not the same. They've taught me a lot; I think I'm a step or two away from a deep enough understanding to grasp what I'm missing.
The Node service is mounted to a volume mapped to the local directory where gulp needs to run. npm install builds node_modules somewhere, but not where I want, and even so, the end result is always…
internal/modules/cjs/loader.js:834
throw err;
^
Error: Cannot find module '/gulp'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:831:15)
…despite multiple attempts at mounting, working directories, COPYing and anything else that feels logical.
This is what I have so far…
docker-compose.yml
version: '3.9'
networks:
  wordpress:
services:
  nginx:
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx
    depends_on:
      - php
      - mysql
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./wordpress:/var/www/html:delegated
    networks:
      - wordpress
  mysql:
    image: mysql:latest
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    environment:
      MYSQL_DATABASE: wpdb
      MYSQL_USER: wpdbuser
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    restart: always
    tty: true
    ports:
      - 3306:3306
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      - wordpress
  php:
    build:
      context: .
      dockerfile: php.dockerfile
    container_name: php
    volumes:
      - ./wordpress:/var/www/html:delegated
    networks:
      - wordpress
  wp:
    build:
      context: .
      dockerfile: wp.dockerfile
    container_name: wp
    entrypoint: ['wp', '--allow-root']
    links:
      - mysql:mysql
    volumes:
      - ./wordpress:/var/www/html:delegated
    networks:
      - wordpress
  phpmyadmin:
    build:
      context: .
      dockerfile: phpmyadmin.dockerfile
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    depends_on:
      - mysql
    ports:
      - 8081:80
      - 8082:443
    environment:
      PMA_HOST: mysql
      MYSQL_ROOT_PASSWORD: secret
    restart: always
    networks:
      - wordpress
  node:
    build:
      context: .
      dockerfile: node.dockerfile
    image: node:12.19.1-alpine3.9
    container_name: nodejs
    volumes_from:
      - nginx
    working_dir: /var/www/html/wp-content/themes/the-theme
    networks:
      - wordpress
Notable and, I think, relevant:
nginx service mounts a local directory that has WP inside it.
wp service is wp-cli, not a wordpress image; instead, wp is run to perform tasks such as
docker-compose run --rm wp core download
I want to run node gulp in much the same way as wp.
the node.dockerfile is empty right now, but I'm conscious of and interested in what someone else is doing with a shell script here.
Basically I'm knowledgeable enough to have a sense of where I'm messing up (with paths) but not enough to work out what to try next. Or to articulate a question.
I can show all the information I've gathered…
docker-compose run --rm node --version
…works great, result: v12.19.0
docker-compose run --rm node npm install also succeeds. Then…
docker-compose run --rm node gulp -v
Error: Error: Cannot find module '/var/www/html/wp-content/themes/the-theme/gulp'
Not really knowing what I'm doing, and assuming the node service is mounted to the theme directory, I thought I'd try:
docker-compose run --rm node node_modules/gulp -v but that gave an error:
/usr/local/bin/docker-entrypoint.sh: exec: line 8: node_modules/gulp: Permission denied
I can confirm that a node_modules directory was indeed created where I wanted it to be when I ran npm install, and it had read from the package.json and installed everything. I could run npm list and everything.
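(Aside: npm puts package executables in node_modules/.bin, not at the package root, so node_modules/gulp is a directory rather than a binary; that is what the "Permission denied" is about. The direct invocations would presumably be:

# run the locally installed gulp binary directly
docker-compose run --rm node node_modules/.bin/gulp -v
# or let npm resolve the local binary
docker-compose run --rm node npx gulp -v

npx ships with npm 5.2+, so it should be available in a node:12 image.)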
In case it's relevant, my project folder is…
- workspace
-- wordpress
--- wp-content
---- themes
----- the-theme
------ package.json
------ node_modules
-- docker-compose.yml
-- various-docker-files
…and it's inside that the-theme directory that I want to run gulp just like I would locally. It seems I can run npm install there, but nothing installed can be found.
I go on to attempt numerous things, such as setting working_dir: to node_modules, or mounting node_modules.
I try things like docker-compose run --rm node ls and I can see the insides of the-theme, some linux system directories, and node_modules.
docker-compose run --rm node ls -al node_modules
…shows me all the installed node packages.
Multiple answers elsewhere suggest rebuilding, rerunning npm install, etc., with no effect. And I feel I'm hampered by the fact that many of those questions and answers feature simple node.js apps where people have an untouched, ready-to-go package.json they can just COPY into a simple directory structure (i.e. /app) in their dockerfile, whereas I'm dealing with a package.json that has to be read from a deeper subdirectory on a mounted local volume. Perhaps that's confusing me.
Additional answers suggested that it would be correct to have my node service volumes like this…
volumes:
  - ./wordpress:/var/www/html:delegated
  - /var/www/html/wp-content/themes/the-theme/node_modules
working_dir: /var/www/html/wp-content/themes/the-theme
But that was more of the same: gulp not found, and a discrepancy between what was in node_modules via docker-compose run and what was in my local node_modules.
I also at various points learned that perhaps my node_modules should not even be in my theme directory as it would be locally; they should live off in the node container instead. That is reasonable, but I'm not sure how to approach it while still having access to the package.json that lives persistently in ./wordpress/wp-content/themes/the-theme, and also having access to the css and js that are being updated in a subdirectory of that.
To summarize:
I want to run
docker-compose run --rm node gulp build
on /wordpress/wp-content/themes/the-theme
and for the gulp command to do its thing, with output at
/wordpress/wp-content/themes/the-theme/dist or similar
Update #1:
node:
  build:
    context: .
    dockerfile: node.dockerfile
  image: node:12.19.1-alpine3.9
  container_name: nodejs
  depends_on:
    - nginx
  ports:
    - 3001:3000
  tty: true
  volumes:
    - ./wordpress:/var/www/html:delegated
    - /var/www/html/wp-content/themes/inti-acf-starter/node_modules
  working_dir: /var/www/html/wp-content/themes/inti-acf-starter
  networks:
    - wordpress
I've added tty: true so that I can -it into the node container and poke around.
This drops me in my working_dir. I can npm install here. I can see a node_modules directory. I cd into it and it's full. But if I try to run any of the installed binaries, I get:
sh: gulp: not found
OK. So npm install --global I guess.
This doesn't install everything in my package.json, and what is installed and appears in node_modules under /var/www/html/wp-content/themes/the-theme
…is not reflected locally in ./wordpress/wp-content/themes/the-theme/node_modules
I feel I should remove that second volume and just use the locally mapped one, but I'm not sure what to do about the rest. Installing just gulp globally means I can run just that inside the container with -it, but the gulpfile, full of requires, now can't find any of them.
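(Aside: npm install --global with no package name installs the current directory itself as a global package, rather than the dependencies listed in package.json, which is consistent with what Update #2 below finds under /usr/local/lib/node_modules. Roughly:

# with no argument, installs ./ (the theme) as a global package
npm install -g
# installs just the gulp CLI onto the PATH
npm install -g gulp-cli
)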
Update #2:
node:
  build:
    context: .
    dockerfile: node.dockerfile
  image: node:12.19.1-alpine3.9
  container_name: nodejs
  depends_on:
    - nginx
  ports:
    - 3001:3000
  tty: true
  volumes:
    - ./wordpress:/var/www/html:delegated
    - /var/www/html/wp-content/themes/the-theme/node_modules
  working_dir: /var/www/html/wp-content/themes/the-theme
  networks:
    - wordpress
I understand now that with each npm install, node is actually dropping everything into:
/usr/local/lib/node_modules/inti-acf-starter/node_modules
With npm install -g gulp-cli@2.1.0 first, I can now run Gulp!
My project compiles correctly (well, most of it). I have to keep the node service as a tty, and run npm and gulp inside of that only.
Running things like:
docker-compose run --rm node gulp build
…still can't find gulp.
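One plausible way to make the one-shot docker-compose run work is to bake the gulp CLI into the node image itself, since node.dockerfile is currently empty. A minimal sketch (the base image and gulp-cli version come from the question; the rest is an assumption, not a verified fix):

# node.dockerfile (sketch)
FROM node:12.19.1-alpine3.9

# Put the gulp CLI on the PATH inside the image, so that
# `docker-compose run --rm node gulp build` can resolve it.
RUN npm install -g gulp-cli@2.1.0

# The theme is bind-mounted at runtime; default to it so gulp
# finds the gulpfile and the locally installed node_modules.
WORKDIR /var/www/html/wp-content/themes/the-theme

gulp-cli then delegates to the gulp package installed in the mounted node_modules, so npm install still has to have been run in the theme directory first.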

How to copy build files from one container to another or host on docker

I am trying to dockerize an application with a PHP backend and a Vue.js frontend. The backend works as I expect; however, after running npm run build within the frontend container, I need to copy the build files from the dist folder to the nginx container, or to the host and then use a volume to bring those files into the nginx container.
I tried to use a named volume:
services:
  frontend:
    .....
    volumes:
      - static:/var/www/frontend/dist
  nginx:
    .....
    volumes:
      - static:/var/www/frontend/dist
volumes:
  static:
I also tried the following, as suggested here, to bring the dist folder back to the host:
services:
  frontend:
    .....
    volumes:
      - ./frontend/dist:/var/www/frontend/dist
However, none of the above options works for me. Below are my docker-compose.yml file and frontend Dockerfile.
version: "3"
services:
database:
image: mysql:5.7.22
.....
backend:
build:
context: ./docker/php
dockerfile: Dockerfile
.....
frontend:
build:
context: .
dockerfile: docker/node/Dockerfile
target: 'build-stage'
container_name: frontend
stdin_open: true
tty: true
volumes:
- ./frontend:/var/www/frontend
nginx:
build:
context: ./docker/nginx
dockerfile: Dockerfile
container_name: nginx
restart: unless-stopped
ports:
- 80:80
volumes:
- ./backend/public:/var/www/backend/public:ro
- ./frontend/dist:/var/www/frontend/dist:ro
depends_on:
- backend
- frontend
Frontend Dockerfile
# Develop stage
FROM node:lts-alpine as develop-stage
WORKDIR /var/wwww/frontend
COPY /frontend/package*.json ./
RUN npm install
COPY /frontend .
# Build stage
FROM develop-stage as build-stage
RUN npm run build
You can combine the frontend image and the Nginx image into a single multi-stage build. This basically just involves copying your docker/node/Dockerfile as-is into the start of docker/nginx/Dockerfile, and then adding a COPY --from=build-stage into the final image. You will also need to adjust some paths, since the build context will need to be the root of your project.
# Essentially what you had in the question
FROM node:lts AS frontend
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm install
COPY frontend .
RUN npm run build
# And then assemble the Nginx image with content
FROM nginx
COPY --from=frontend /frontend/dist /var/www/html
Once you've done this, you can completely delete the frontend container from your docker-compose.yml file. Note that it never did anything: the image didn't declare a CMD, and the docker-compose.yml didn't provide a command: to run either, so this shouldn't really change your application.
You can use a similar technique to copy the static files from your PHP application into the Nginx proxy. When all is said and done, this leaves you with a simpler docker-compose.yml file:
version: "3.8"
services:
database:
image: mysql:5.7.22
.....
backend:
build: ./docker/php # don't need to explicitly name Dockerfile
# note: will not need volumes: to export files
.....
# no frontend container any more
nginx:
build:
context: . # because you're COPYing files from other components
dockerfile: docker/nginx/Dockerfile
restart: unless-stopped
ports:
- 80:80
# no volumes:, everything is built into the image
depends_on:
- backend
(There are two practical problems with trying to share content using Docker named volumes. The first is that volumes never update their content once they're created, and in fact hide the content of the original image at that path, so using volumes here actually causes changes in your source code to be ignored in favor of arbitrarily old content. The second is environment-specific: in environments like Kubernetes, the cluster doesn't even provide the copy-on-first-use that Docker has, and there are significant limitations on how files can be shared between containers. This works better if you build self-contained images and don't try to share files.)
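The first problem is easy to demonstrate with a named volume; a sketch with hypothetical image and volume names:

# first use: Docker copies the image's dist/ into the empty volume
docker volume create static
docker run --rm -v static:/var/www/frontend/dist frontend-image true
# ...rebuild frontend-image with changed source...
# second use: the volume is no longer empty, so the old files win
docker run --rm -v static:/var/www/frontend/dist frontend-image ls /var/www/frontend/dist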

1 way sync instead of 2 way sync for Docker Volume?

I am using Docker Compose for my local development environment for a full-stack JavaScript project.
Part of my Docker Compose file looks like this:
version: "3.5"
services:
frontend:
build:
context: ./frontend/
dockerfile: dev.Dockerfile
env_file:
- .env
ports:
- "${FRONTEND_PORT_NUMBER}:${FRONTEND_PORT_NUMBER}"
container_name: frontend
volumes:
- ./frontend:/code
- frontend_deps:/code/node_modules
- ../my-shared-module:/code/node_modules/my-shared-module
I am trying to develop a custom Node module called my-shared-module; that's why I added - ../my-shared-module:/code/node_modules/my-shared-module to the Docker Compose file. The node module is hosted in a private Git repo, and is defined like this in package.json:
"dependencies": {
  "my-shared-module": "http://gitlab+deploy-token....@gitlab.com/.....git",
My problem is:
When I update my node modules in the docker container using npm install, it downloads my-shared-module from my private Git repo into /code/node_modules/my-shared-module, and that overwrites the files on the host in ../my-shared-module, because they are synced.
So my question is: is it possible to have one-way volume sync in Docker?
When the host changes, update the container.
When the container changes, don't update the host.
Unfortunately I don't think this is possible in Docker. Mounting a host volume is always two-way, unless you consider a read-only mount to be one-way, but that prevents you from being able to modify the file system with things like npm install.
Your best options here would be either to rebuild the image with the new files each time, or to bake into your CMD a step that copies the mounted files into a new folder outside of the mounted volume. That way any file changes won't be persisted back to the host machine.
You can script something to do this. Mount your host node_modules to another directory inside the container, and in the entrypoint, copy the directory:
version: "3.5"
services:
frontend:
build:
context: ./frontend/
dockerfile: dev.Dockerfile
env_file:
- .env
ports:
- "${FRONTEND_PORT_NUMBER}:${FRONTEND_PORT_NUMBER}"
container_name: frontend
volumes:
- ./frontend:/code
- frontend_deps:/code/node_modules
- /code/node_modules/my-shared-module
- ../my-shared-module:/host/node_modules/my-shared-module:ro
Then add an entrypoint script to your Dockerfile with something like:
#!/bin/sh
# Copy the read-only host mount into the writable module directory,
# so changes made inside the container never reach the host.
if [ -d /host/node_modules/my-shared-module ]; then
  cp -r /host/node_modules/my-shared-module/. /code/node_modules/my-shared-module/.
fi
# Hand off to the container's CMD
exec "$@"

Docker is not building container with changes in source code

I'm relatively new to Docker and I just created a Node.js application that should connect to other services also running on Docker.
So I have the source code, a Dockerfile to set up this image, and a docker-compose file to orchestrate the environment.
I had a few problems in the beginning, so I updated my source code and found out that the changes don't make it into the next build of docker-compose.
For example, I commented out all the lines that connect to Redis and MongoDB. I run the application locally and it's fine. But when I create it again in a container, I get the "Connection refused..." errors.
I tried many things and this is what I have at the moment:
Dockerfile
FROM node:9
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node app.js
EXPOSE 8090
docker-compose.yml
version: '3'
services:
  app:
    build: .
    ports:
      - "8090:8090"
    container_name: app
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    container_name: redis
  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
up.sh
sudo docker stop app
sudo docker rm app
docker-compose build --no-cache app
sudo docker-compose up --force-recreate
Any ideas on what the problem could be? Why doesn't it use the current source code? Is it using some sort of cache?
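(For what it's worth, the stop/rm/build/up sequence in up.sh can usually be collapsed into a single compose invocation, which removes one place where a stale container could survive:

# rebuild the image and recreate the container in one step
docker-compose up --build --force-recreate -d app
)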

Docker how to start nodejs app with redis in the Container?

I have a simple but curious question. I have based my image on the nodejs image and installed redis on the image; now I want both redis and the nodejs app to start in the container when I do docker-compose up. However, I can only get one working; node always gives me an error. Does anyone have any idea:
How to start the nodejs application on docker-compose up?
How to start redis running in the background in the same image/container?
My Dockerfile is below.
# Set the base image to node
FROM node:0.12.13
# Update the repository and install Redis Server
RUN apt-get update && apt-get install -y redis-server libssl-dev wget curl gcc
# Expose Redis port 6379
EXPOSE 6379
# Bundle app source
COPY ./redis.conf /etc/redis.conf
EXPOSE 8400
WORKDIR /root/chat/
CMD ["node","/root/www/helloworld.js"]
ENTRYPOINT ["/usr/bin/redis-server"]
The error I get from the console logs is:
chat_1 | [1] 18 Apr 02:27:48.003 # Fatal error, can't open config file 'node'
My docker-compose.yml is like below:
chat:
  build: ./.config/etc/chat/
  volumes:
    - ./chat:/root/chat
  expose:
    - 8400
  ports:
    - 6379:6379
    - 8400:8400
  environment:
    CODE_ENV: debug
    MYSQL_DATABASE: xyz
    MYSQL_USER: xyz
    MYSQL_PASSWORD: xyz
  links:
    - mysql
  #command: "true"
A Dockerfile can only launch one process, and when both ENTRYPOINT and CMD are set, the CMD is passed as arguments to the ENTRYPOINT; that is exactly why redis-server is trying, and failing, to open 'node' as a config file. You can, however, run multiple processes in a single docker image using a process manager like systemd or supervisord. There are countless recipes for doing this all over the internet. You might use this docker image as a base:
https://github.com/million12/docker-centos-supervisor
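A minimal supervisord.conf along those lines might look like the following (the program names and paths are assumptions based on the Dockerfile above, not a tested config):

[supervisord]
nodaemon=true

[program:redis]
command=/usr/bin/redis-server /etc/redis.conf

[program:node]
command=node /root/www/helloworld.js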
However, I don't see why you wouldn't use docker compose to spin up a separate redis container, just like you seem to want to do with mysql. BTW where is the mysql definition in the docker-compose file you posted?
Here's an example of a compose file I use to build a node image in the current directory and spin up redis as well.
web:
  build: .
  ports:
    - "3000:3000"
    - "8001:8001"
  environment:
    NODE_ENV: production
    REDIS_HOST: redis://db:6379
  links:
    - "db"
db:
  image: docker.io/redis:2.8
It should work with a Dockerfile looking like the one you have, minus trying to start up redis.
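That is, roughly (a sketch of the question's Dockerfile with the redis pieces removed):

# Set the base image to node
FROM node:0.12.13

# no redis install, no EXPOSE 6379: redis runs in its own container
WORKDIR /root/chat/
EXPOSE 8400
CMD ["node", "/root/www/helloworld.js"]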
