I’m trying to figure out how to build static files with Node and Webpack during the production build, and mount them as a volume that is served by the Django webapp and used by Django's collectstatic.
I’ve got each service separated into its own container, and each has its own Dockerfile.
The current problem is that I can’t access the files generated by webpack from inside the Django app. The question is: can I achieve this using separate Dockerfiles for Node and Django, or does this have to be done in a single Dockerfile?
Node Dockerfile
FROM node:alpine
WORKDIR /code
COPY ./package.json ./yarn.lock /code/
COPY ./webpack.base.config.js ./webpack.prod.config.js /code/
RUN yarn install --production
ADD static /code/static/
RUN yarn run prod
Python app Dockerfile
FROM python:3.6.2-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
&& apk add \
bash \
curl \
build-base \
postgresql-dev \
postgresql-client \
libpq \
tzdata
WORKDIR /code
ADD requirements.txt /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ADD ./ /code
ENV TZ=Europe/London
EXPOSE 8000
Docker Compose production
version: '3'
services:
  frontend:
    build: docker/services/node
    volumes:
      - static_files:/code/static
  webapp:
    build: .
    env_file:
      - .env
    expose:
      - "8000"
    volumes:
      - ./public:/code/public/
      - static_files:/code/static
    command: ["./docker/scripts/wait-for-it.sh", "database:5432", "--", "./docker/services/webapp/run-prod.sh"]
    depends_on:
      - frontend
      - database
  database:
    image: postgres
    env_file:
      - .env
    expose:
      - "5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: docker/services/nginx
    env_file:
      - .env
    ports:
      - "80:80"
    volumes:
      - ./public:/www/public/
    depends_on:
      - webapp
    healthcheck:
      test: ["CMD", "curl", "-f", "http://0.0.0.0:8000"]
      interval: 30s
      timeout: 10s
      retries: 3
volumes:
  postgres_data:
  static_files:
You can use a multi-stage build for this. In your case the first stage would generate your static files, while the second stage would package your Python app and copy in the static files from the Node.js build stage. The resulting image will only contain Python dependencies.
Here is the multi-stage Dockerfile; the documentation can be found at https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
# static files build stage
# named "static" for easy reference
FROM node:alpine as static
WORKDIR /code
COPY ./package.json ./yarn.lock /code/
COPY ./webpack.base.config.js ./webpack.prod.config.js /code/
RUN yarn install --production
ADD static /code/static/
RUN yarn run prod
# python app package stage
# named "final" because it's final :)
FROM python:3.6.2-alpine as final
ENV PYTHONUNBUFFERED 1
RUN apk update \
&& apk add \
bash \
curl \
build-base \
postgresql-dev \
postgresql-client \
libpq \
tzdata
WORKDIR /code
ADD requirements.txt /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ADD ./ /code
# This next command has access to the file contents of the previous stage.
# Ideally you should rewrite the paths to copy the static files from where they have been generated to where they should end up
# The 'from' clause is used to reference the first build stage
COPY --from=static /code/static/path/to/static/files /code/desired/location
ENV TZ=Europe/London
EXPOSE 8000
You can then use this single image in your docker-compose file.
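As an illustration, here is a minimal sketch of what the Compose file could then look like (service and volume names reused from the question; the frontend service and the static_files volume are dropped, since the static files are now baked into the webapp image):
version: '3'
services:
  webapp:
    build: .
    env_file:
      - .env
    expose:
      - "8000"
    volumes:
      - ./public:/code/public/
    command: ["./docker/scripts/wait-for-it.sh", "database:5432", "--", "./docker/services/webapp/run-prod.sh"]
    depends_on:
      - database
  database:
    ...
  nginx:
    ...
volumes:
  postgres_data: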
Related
Dockerfile:
FROM node:lts-slim AS base
# Install dependencies
RUN apt-get update \
&& apt-get install --no-install-recommends -y openssl
# Create app directory
WORKDIR /usr/src
FROM base AS builder
# Files required by npm install
COPY package*.json ./
# Files required by prisma
COPY prisma ./prisma
# Install app dependencies
RUN npm ci
# Bundle app source
COPY . .
# Build app
RUN npm install -g prisma --force
RUN prisma generate
RUN npm run build \
&& npm prune --omit=dev
FROM base AS runner
# Copy from build image
COPY --from=builder /usr/src/node_modules ./node_modules
COPY --from=builder /usr/src/dist ./dist
COPY --from=builder /usr/src/package*.json ./
COPY prisma ./prisma
RUN apt-get update \
&& apt-get install --no-install-recommends -y procps openssl
RUN chown -R node /usr/src/node_modules
RUN chown -R node /usr/src/dist
RUN chown -R node /usr/src/package*.json
USER node
# Start the app
EXPOSE 80
CMD ["node", "dist/index.js"]
docker-compose.yml
version: '3'
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    ports:
      - 3306:3306
  bot:
    container_name: bot
    build:
      context: .
    depends_on:
      - mysql
docker-compose.prod.yml
version: '3'
services:
  mysql:
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: '123123'
      MYSQL_DATABASE: 'test'
      MYSQL_USER: 'test'
      MYSQL_PASSWORD: '123123'
  bot:
    ports:
      - "3000:80"
    env_file:
      - docker-compose.prod.bot.env
volumes:
  mysql:
For some reason, after running these commands:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run bot npx prisma migrate deploy
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
I'm getting an error when the bot container starts up, saying that it can't find a node module...
I'm using Ubuntu 20.04 to run Docker inside; I installed Docker, and for some reason only this part is not working. When I run the build on a normal machine without Docker, the build works fine.
The only problem is with Docker.
error:
bot | node:internal/modules/cjs/loader:936
bot | throw err;
bot | ^
bot |
bot | Error: Cannot find module 'envalid'
bot | Require stack:
bot | - /usr/src/dist/config.js
I am a newbie in Docker, and while going through a course online, I stumbled into a problem. While trying to build separate dev, prod and test images, only my dev one seems to build correctly.
My prod and test images build with their tag as "<none>", even though I run the build with the following command:
sudo docker build -t ultimatenode:test --target test .
Also, my prod image is supposed to be smaller in size, because I removed the original node_modules and the ./tests folder, but there seems to be some mistake on my part.
Would anyone be kind enough to check the problem in the following Dockerfile:
FROM node:16 as base
EXPOSE 80
WORKDIR /app
COPY package*.json ./
RUN npm config list
RUN npm ci \
&& npm cache clean --force
ENV PATH /app/node_modules/.bin:$PATH
CMD ["node", "server.js"]
#DEVELOPMENT
FROM base as dev
ENV NODE_ENV=development
# NOTE: these apt dependencies are only needed
# for testing. they shouldn't be in production
RUN apt-get update -qq \
&& apt-get install -qy --no-install-recommends \
bzip2 \
ca-certificates \
curl \
libfontconfig \
&& rm -rf /var/lib/apt/lists/*
RUN npm config list
RUN npm install --only=development \
&& npm cache clean --force
COPY . /app
CMD ["nodemon", "server.js"]
#TEST
FROM dev as test
COPY . .
RUN npm audit
#PREPROD
FROM test as preprod
#Removing unnecessary folders
RUN rm -rf ./tests && rm -rf ./node_modules
FROM base as prod
COPY --from=pre-prod /app /app
WORKDIR /app
HEALTHCHECK CMD curl http://127.0.0.1/ || exit 1
CMD ["node", "server.js"]
Also, here is my docker-compose.yml
version: '2.4'
services:
  redis:
    image: redis:alpine
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    volumes:
      - db-data:/var/lib/postgresql/data
  vote:
    image: bretfisher/examplevotingapp_vote
    ports:
      - '5000:80'
    depends_on:
      - redis
  result:
    build:
      context: .
      target: dev
    ports:
      - '5001:80'
    volumes:
      - .:/app
    environment:
      - NODE_ENV=development
    depends_on:
      - db
  worker:
    image: bretfisher/examplevotingapp_worker
    depends_on:
      - redis
      - db
volumes:
  db-data:
I've recently configured my Django project with Docker. When I run my container, my Django app is running but is missing (404) all of the static assets:
GET http://localhost:8000/static/theme/css/sb-admin-2.min.css net::ERR_ABORTED 404 (Not Found)
The assets load without issue outside of the Docker container. My Django project is configured to use WhiteNoise. I know many people say it's not worth the trouble and to just move static assets to S3; I'll resort to that if I must, but I'd really like to get this working for my local environment configuration.
Here's the relevant configuration:
project structure:
- cs-webapp
    - cs
        - settings.py
        - urls.py
        - wsgi.py
        - staticfiles/
        ...
    - webapp
        - migrations/
        - models/
        - views/
        ...
    - Dockerfile
    - docker-compose.yml
    - manage.py
    ...
settings.py
BASE_DIR = Path(__file__).resolve().parent.parent
print("!!!!!! BASE_DIR IS " + str(BASE_DIR)) # this prints '/app'
STATIC_URL = '/static/'
STATIC_ROOT = BASE_DIR / 'staticfiles'
#STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
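(Not shown in the question: WhiteNoise is enabled through Django's middleware stack. As context, here is a minimal sketch of the standard setup from the WhiteNoise docs; it is an assumption that the project follows it, since the actual middleware list isn't included above.)
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # WhiteNoise sits right below SecurityMiddleware so it can serve
    # the files that collectstatic gathers into STATIC_ROOT
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ... remaining middleware ...
]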
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - /app
    command: >
      sh -c "sleep 10; python manage.py collectstatic --noinput &&
      python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8000"
    env_file:
      - .env
      - db.env
    depends_on:
      - db
  rabbitmq:
    ...
  celery-worker:
    ...
  celery-beat:
    ...
  db:
    ...
Dockerfile
FROM python:3.7-alpine
ENV PYTHONUNBUFFERED=1
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache git libffi-dev
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install -r /requirements.txt
RUN apk del .tmp-build-deps
RUN mkdir /app
COPY . /app
WORKDIR /app
I have the following Dockerfile:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN yarn build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
RUN addgroup -g 1001 nodejs
RUN adduser -S nextjs -u 1001
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
USER nextjs
EXPOSE 3000
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry.
ENV NEXT_TELEMETRY_DISABLED 1
ENV HOST=0.0.0.0
CMD yarn start
and the following docker-compose file
version: '3.5'
services:
  ellis-development:
    image: ellis-development
    container_name: app
    build:
      context: .
      dockerfile: dockerfile.dev
    environment:
      - NEXT_PUBLIC_ENV
      - SENDGRID_API_KEY
      - MONGODB_URI
      - NEXTAUTH_SECRET
      - NEXTAUTH_URL
    ports:
      - 3000:3000
    volumes:
      - .:/app
    links:
      - mongodb
    depends_on:
      - mongodb
  mongodb:
    image: mongo:latest
    # environment:
    #   MONGO_INITDB_ROOT_USERNAME: root
    #   MONGO_INITDB_ROOT_PASSWORD: testing123
    ports:
      - 27017:27017
    volumes:
      - data:/data/db
volumes:
  data:
I have all my environment variables set up in a .env file like so:
NEXT_PUBLIC_ENV=local
SENDGRID_API_KEY=<redacted>
MONGODB_URI=mongodb://localhost:27017/<redacted>
NEXTAUTH_SECRET=eb141u85
NEXTAUTH_URL="http://localhost:3000"
Everything comes up as expected when running docker compose, and I can connect to localhost:27017 using MongoDB Compass.
However, for some reason my application inside Docker cannot connect to MongoDB.
What am I doing wrong here? First time setting up MongoDB with Docker, so 🤷‍♂️
Fixed, spotted the error as soon as I posted the question... 😅
Changed
MONGODB_URI=mongodb://localhost:27017/<redacted>
to
MONGODB_URI=mongodb://mongodb:27017/<redacted>
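This works because Compose attaches the services to a shared network where each container is reachable by its service name, so from inside the app container the database host is mongodb, not localhost. A quick way to confirm the name resolves (a hypothetical check, assuming the dev image is Alpine-based so BusyBox nslookup is available):
docker-compose exec ellis-development nslookup mongodb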
I have recently added Docker to my JavaScript monorepo to build and serve a particular package. Everything is working great; however, I did not succeed in making the contents under ./packages/common/dist available to the host directory ./common-dist, which is one of my requirements.
When running docker-compose up, the directory common-dist is indeed created on the host, but the files built under packages/common/dist in the image never appear there; the folder stays empty.
docker-compose.yml
version: "3"
services:
nodejs:
image: nodejs
container_name: app_nodejs
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
ports:
- "8080:8080"
volumes:
- ./common-dist:/app/packages/common/dist
Dockerfile
FROM node:12-alpine
# Install mozjpeg system dependencies
# #see https://github.com/imagemin/imagemin-mozjpeg/issues/1#issuecomment-52784569
RUN apk --update add \
build-base \
autoconf \
automake \
libtool \
pkgconf \
nasm
WORKDIR /app
COPY . .
RUN yarn install
RUN yarn run common:build
RUN ls /app/packages/common/dist # -> Yip, all files are there!
# CMD ["node", "/app/packages/common/dist/index.js"]
$ docker-compose build
$ docker-compose up # -> ./common-dist appears, but remains empty
Could this be related to some permission issue, or am I lacking an understanding of what docker-compose actually does here?
Many thanks in advance!