Docker /dist output not mounted into host directory - node.js

I have recently added Docker to my javascript monorepo to build and serve a particular package. Everything is working great; however, I have not succeeded in making the contents of ./packages/common/dist available in the host directory ./common-dist, which is one of my requirements.
When running docker-compose up, the directory common-dist is indeed created on the host, but the files built under packages/common/dist in the container do not appear there; the folder remains completely empty.
docker-compose.yml
version: "3"
services:
  nodejs:
    image: nodejs
    container_name: app_nodejs
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - ./common-dist:/app/packages/common/dist
Dockerfile
FROM node:12-alpine
# Install mozjpeg system dependencies
# #see https://github.com/imagemin/imagemin-mozjpeg/issues/1#issuecomment-52784569
RUN apk --update add \
    build-base \
    autoconf \
    automake \
    libtool \
    pkgconf \
    nasm
WORKDIR /app
COPY . .
RUN yarn install
RUN yarn run common:build
RUN ls /app/packages/common/dist # -> Yip, all files are there!
# CMD ["node", "/app/packages/common/dist/index.js"]
$ docker-compose build
$ docker-compose up # -> ./common-dist appears, but remains empty
Could this be related to some permission issues or am I lacking an understanding of what docker-compose actually does here?
Many thanks in advance!
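(For context on the likely mechanism: a bind mount such as ./common-dist:/app/packages/common/dist replaces the container path with the, initially empty, host directory at run time, hiding the dist files that were baked into the image at build time. One possible workaround, sketched here as an untested assumption rather than a verified fix, is to mount the host directory at a separate path and copy the build output into it when the container starts:)

```yaml
# hypothetical sketch: mount the host dir outside the build output,
# then copy the image's dist files into it at container start
services:
  nodejs:
    volumes:
      - ./common-dist:/common-dist
    command: sh -c "cp -r /app/packages/common/dist/. /common-dist/ && node /app/packages/common/dist/index.js"
```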

Related

docker-compose permissions error? pm2.json not found

I'm trying to set up a Profittrailer Docker container through Docker Compose.
I've tested the Docker image using docker run which launches the container just fine.
When using docker-compose up instead, PM2 (NodeJS process manager) fails to find the configuration file. I believe this happens because the container is unable to write to the shared volume.
Below is my Dockerfile:
FROM eclipse-temurin:8-jdk-alpine
ARG PT_VERSION=2.5.32
ENV PT_VERSION ${PT_VERSION}
RUN mkdir -p /app/
# install tools
RUN apk update && apk add unzip curl
# install nodejs
RUN apk add --update nodejs npm
RUN npm install pm2@latest -g
# install profittrailer
RUN curl https://github.com/taniman/profit-trailer/releases/download/$PT_VERSION/ProfitTrailer-$PT_VERSION.zip -L -o /app/profittrailer.zip
RUN unzip /app/profittrailer.zip -d /app/ && mv /app/ProfitTrailer-$PT_VERSION /app/ProfitTrailer
WORKDIR /app/ProfitTrailer
RUN chmod +x ProfitTrailer.jar
VOLUME /app/ProfitTrailer
CMD pm2 start pm2-ProfitTrailer.json && pm2 log 0
EXPOSE 8081
And the docker-compose.yml file:
version: '3'
services:
  profittrailer:
    container_name: profittrailer
    image: "doccie/profittrailer:latest"
    volumes:
      - /home/[user]/.profittrailer:/app/ProfitTrailer
    restart: unless-stopped
    ports:
      - "8081:8081"
The output logged (inside the container) is:
profittrailer | [PM2] Spawning PM2 daemon with pm2_home=/root/.pm2
profittrailer | [PM2] PM2 Successfully daemonized
profittrailer | [PM2][ERROR] File pm2-ProfitTrailer.json not found
I managed to get it working by not sharing the entire app directory, but limiting it to the folders and files I needed for configuration editing and migration.
version: '3'
services:
  profittrailer:
    container_name: profittrailer
    image: "doccie/profittrailer:latest"
    volumes:
      - /home/[USERNAME]/.profittrailer/data:/app/ProfitTrailer/data
      - /home/[USERNAME]/.profittrailer/application.properties:/app/ProfitTrailer/application.properties
    restart: unless-stopped
    ports:
      - "8081:8081"
I needed to set up the application.properties file first, since otherwise Docker would create that path as a folder instead of a file.
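The file-vs-folder behavior above can be checked on the host side; a minimal sketch, using a mktemp scratch directory purely as a stand-in for /home/[USERNAME]/.profittrailer:

```shell
# If the host path of a bind mount does not exist, Docker creates it as a
# directory, so a file -> file mount must have the file pre-created.
PT_DIR=$(mktemp -d)   # stand-in for the real home directory path
mkdir -p "$PT_DIR/data"
touch "$PT_DIR/application.properties"
# A mount like "$PT_DIR/application.properties:/app/ProfitTrailer/application.properties"
# now maps an existing file instead of letting Docker create a directory there.
test -f "$PT_DIR/application.properties" && echo "regular file"
```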

How to dockerize aspnet core application and postgres sql with docker compose

I am integrating Docker into my previous sample project so that everything is automated for easy code sharing and execution. I have run into a problem with dockerizing it and have tried to solve it, but to no avail. I hope someone can help. Thank you. Here is my problem:
My repository: https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate
Branch for dockerize (my problem in macOS):
https://github.com/ThanhDeveloper/WebApplicationAspNetCoreTemplate/pull/1
Docker file:
# syntax=docker/dockerfile:1
FROM node:16.11.1
FROM mcr.microsoft.com/dotnet/sdk:5.0
RUN apt-get update && \
    apt-get install -y wget && \
    apt-get install -y gnupg2 && \
    wget -qO- https://deb.nodesource.com/setup_6.x | bash - && \
    apt-get install -y build-essential nodejs
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
RUN dotnet tool restore
EXPOSE 80/tcp
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Docker compose:
version: "3.9"
services:
  web:
    container_name: backendnet5
    build: .
    ports:
      - "5005:5000"
    depends_on:
      - database
  database:
    container_name: postgres
    image: postgres:latest
    ports:
      - "5433:5433"
    environment:
      - POSTGRES_PASSWORD=admin
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
Commands:
docker-compose build
docker compose up
Problems:
I guess the problem is that I am not able to run `dotnet ef database update` for my migrations. Many thanks for any help.
In your appsettings.json file, you say that the database hostname is 'localhost'. In a container, localhost means the container itself.
Docker Compose creates a bridge network where you can address each container by its service name.
Your connection string is
User ID=postgres;Password=admin;Host=localhost;Port=5432;Database=sample_db;Pooling=true;
but should be
User ID=postgres;Password=admin;Host=database;Port=5432;Database=sample_db;Pooling=true;
You also map port 5433 on the database to the host, but postgres listens on port 5432. If you want to map it to port 5433 on the host, the mapping in the docker compose file should be 5433:5432. This is not what's causing your issue though. This just prevents you from connecting to the database from the host, if you need to do that.
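Putting both fixes together, the database service would look something like this (a sketch of the two changes described above, not the poster's verified config):

```yaml
database:
  container_name: postgres
  image: postgres:latest
  ports:
    - "5433:5432"   # host port 5433 -> container port 5432, where postgres actually listens
  environment:
    - POSTGRES_PASSWORD=admin
```

with Host=database;Port=5432 in the connection string, since containers on the compose network reach postgres directly on 5432.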

Docker / docker-compose workflow: angular changes not being reflected

When I make changes to my app source code and rebuild my docker images, the changes are not being reflected in the updated containers. I have:
Checked that the changes are being pulled to the remote machine correctly
Cleared the browser cache and double checked with different browsers
Checked that the development build files are not being pulled onto the remote machine by mistake
Banged my head against a number of nearby walls
Every time I pull new code from the repo or make a local change, I do the following in order to do a fresh rebuild:
sudo docker ps -a
sudo docker rm <container-id>
sudo docker image prune -a
sudo docker-compose build --no-cache
sudo docker-compose up -d
But despite all that, the changes do not make it through. I simply don't know why it isn't working, as the output during the build appears to be using the local files. Where can it be getting the old files from, given that I've checked and double-checked that the local source has changed?
Docker-compose:
version: '3'
services:
  angular:
    build: angular
    depends_on:
      - nodejs
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/usr/share/nginx/html
      - ./dhparam:/etc/ssl/certs
      - ./nginx-conf/prod:/etc/nginx/conf.d
    networks:
      - app-net
  nodejs:
    build: nodejs
    ports:
      - "8080:8080"
volumes:
  certbot-etc:
  certbot-var:
  web-root:
Angular dockerfile:
FROM node:14.2.0-alpine AS build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/
RUN apk update && apk add --no-cache bash git
RUN npm install
COPY . /app
RUN ng build --outputPath=./dist --configuration=production
### prod ###
FROM nginx:1.17.10-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I found it. I followed a tutorial to get HTTPS working, and that's where the named volumes came in. It's a two-step process, and it needed all those named volumes for the first step, but the web-root volume is what was screwing things up; deleting that volume solved my problem. At least I understand Docker volumes better now...
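For anyone hitting the same thing: docker-compose build does not touch named volumes, so stale content in web-root survives rebuilds until the volume itself is removed. A possible cleanup sequence (the <project> prefix is hypothetical; compose namespaces volumes by project name, so check docker volume ls for the real name):

```
$ sudo docker-compose down
$ sudo docker volume ls
$ sudo docker volume rm <project>_web-root
$ sudo docker-compose up -d --build
```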

Docker x NodeJS - Issue with node_modules

I'm a web developer who is currently working on a next.js project (a framework for server-side rendering ReactJS). I'm using a Docker config on this project and I discovered an issue when I add/remove dependencies. When I add a dependency, build my project, and bring it up with docker-compose, my new dependency isn't added to my Docker image. I have to clean my Docker system with docker system prune to reset everything before I can build and up my project again. After that, my dependency is added to my Docker container.
I use Dockerfile to configure my image and different docker-compose files to set different configurations depending on my environments. Here is my configuration:
Dockerfile
FROM node:10.13.0-alpine
# SET environment variables
ENV NODE_VERSION 10.13.0
ENV YARN_VERSION 1.12.3
# Install Yarn
RUN apk add --no-cache --virtual .build-deps-yarn curl \
    && curl -fSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" \
    && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ \
    && ln -snf /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn \
    && ln -snf /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg \
    && rm yarn-v$YARN_VERSION.tar.gz \
    && apk del .build-deps-yarn
# Create app directory
RUN mkdir /website
WORKDIR /website
ADD package*.json /website
# Install app dependencies
RUN yarn install
# Build source files
COPY . /website/
RUN yarn run build
docker-compose.yml (dev env)
version: "3"
services:
  app:
    container_name: website
    build:
      context: .
    ports:
      - "3000:3000"
      - "3332:3332"
      - "9229:9229"
    volumes:
      - /website/node_modules/
      - .:/website
    command: yarn run dev 0.0.0.0 3000
    environment:
      SERVER_URL: https://XXXXXXX.com
Here my commands to run my Docker environment:
docker-compose build --no-cache
docker-compose up
I suppose that something is wrong in my Docker's configuration but I can't catch it. Do you have an idea to help me?
Thanks!
Your volumes right now are not set up to do what you intend to do. The current setup below means that you are overriding the contents of your website directory in the container with your local . directory.
volumes:
  - /website/node_modules/
  - .:/website
I'm sure your intention is to map your local directory into the container first, and then override node_modules with the original contents of the image's node_modules directory, i.e. /website/node_modules/.
Changing the order of your volumes like below should solve the issue.
volumes:
  - .:/website
  - /website/node_modules/
You are explicitly telling Docker you want this behavior. When you say:
volumes:
  - /website/node_modules/
You are telling Docker you don't want to use the node_modules directory that's baked into the image. Instead, it should create an anonymous volume to hold the node_modules directory (which has some special behavior on its first use) and persist the data there, even if other characteristics like the underlying image change.
That means if you change your package.json and rebuild the image, Docker will keep using the volume version of your node_modules directory. (Similarly, the bind mount of .:/website means everything else in the last half of your Dockerfile is essentially ignored.)
I would remove the volumes: block in this setup to respect the program that's being built in the image. (I'd also suggest moving the command: to a CMD line in the Dockerfile.) Develop and test your application without using Docker, and build and deploy an image once it's essentially working, but not before.
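If you do want to keep the anonymous node_modules volume for development, docker-compose can recreate it on each up instead of reusing the stale copy, assuming a docker-compose version that supports --renew-anon-volumes (1.22+):

```
$ docker-compose build --no-cache
$ docker-compose up -V    # -V / --renew-anon-volumes: recreate anonymous volumes instead of reusing their old data
```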

Docker Compose Django, Webpack and building static files

I’m trying to figure out how to build static files with Node and Webpack during the production build and mount them as a volume that is served to the Django web app and used by Django's collectstatic.
I’ve got all services separated to its own containers and each has own Dockerfile.
The current problem is that I can’t access the files generated by Webpack inside the Django app. The question is: can I achieve this using separate Dockerfiles for Node and Django, or should this be done in one Dockerfile?
Node Dockerfile
FROM node:alpine
WORKDIR ./code
COPY ./package.json ./yarn.lock /code/
COPY ./webpack.base.config.js ./webpack.prod.config.js /code/
RUN yarn install --production
ADD static /code/static/
RUN yarn run prod
Python app Dockerfile
FROM python:3.6.2-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
    && apk add \
    bash \
    curl \
    build-base \
    postgresql-dev \
    postgresql-client \
    libpq \
    tzdata
WORKDIR /code
ADD requirements.txt /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ADD ./ /code
ENV TZ=Europe/London
EXPOSE 8000
Docker Compose production
version: '3'
services:
  frontend:
    build: docker/services/node
    volumes:
      - static_files:/code/static
  webapp:
    build: .
    env_file:
      - .env
    expose:
      - "8000"
    volumes:
      - ./public:/code/public/
      - static_files:/code/static
    command: ["./docker/scripts/wait-for-it.sh", "database:5432", "--", "./docker/services/webapp/run-prod.sh"]
    depends_on:
      - frontend
      - database
  database:
    image: postgres
    env_file:
      - .env
    expose:
      - "5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: docker/services/nginx
    env_file:
      - .env
    ports:
      - "80:80"
    volumes:
      - ./public:/www/public/
    depends_on:
      - webapp
    healthcheck:
      test: ["CMD", "curl", "-f", "http://0.0.0.0:8000"]
      interval: 30s
      timeout: 10s
      retries: 3
volumes:
  postgres_data:
  static_files:
You can use a multi stage build for this. In your case the first stage would generate your static files, while the second stage would package your python app and copy the static files from the node.js docker image. The resulting image will only contain python dependencies.
Here is the multistage dockerfile, documentation can be found here https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
# static files build stage (the stage is named "static" for easy reference)
FROM node:alpine as static
WORKDIR /code
COPY ./package.json ./yarn.lock /code/
COPY ./webpack.base.config.js ./webpack.prod.config.js /code/
RUN yarn install --production
ADD static /code/static/
RUN yarn run prod

# python app package stage (named "final" because it's final :))
FROM python:3.6.2-alpine as final
ENV PYTHONUNBUFFERED 1
RUN apk update \
    && apk add \
    bash \
    curl \
    build-base \
    postgresql-dev \
    postgresql-client \
    libpq \
    tzdata
WORKDIR /code
ADD requirements.txt /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ADD ./ /code
# This next command has access to the file contents of the previous stage.
# Ideally you should rewrite the paths to copy the static files from where
# they have been generated to where they should end up.
# The 'from' clause is used to reference the first build stage.
COPY --from=static /code/static/path/to/static/files /code/desired/location
ENV TZ=Europe/London
EXPOSE 8000
You can then use this single image in your docker-compose file.
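With the multi-stage image, the separate frontend service and the static_files volume can be dropped from the compose file; a sketch of the reduced webapp service (only the parts that change, assuming the rest of the compose file stays as posted):

```yaml
version: '3'
services:
  webapp:
    build: .    # the multi-stage Dockerfile; static files are already baked in
    env_file:
      - .env
    expose:
      - "8000"
```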
