Situation: I want a Docker setup where I can keep different projects under a common root (normally a MEAN-style structure), but using different images rather than one image for everything. I want all the projects shared into the container that I create for a specific project, but only launch the configuration of the project I want. The image only installs the basic dependencies; the modules specific to each app are installed through the docker-compose override.
That is my ideal structure:
apps
| +--app1
| | +--node_modules //empty in host
| | +--package.json
| | +--docker-compose.app1.yml //override compose
| | +--index.js
| | +--...
| +--app2
| | +--node_modules //empty in host
| | +--...
| ....
| +--node_modules //global node_modules folder (empty in host)
| docker-compose.yml //principal compose
| Dockerfile //my personalized nodejs image config
For now I'm using an intermediate solution:
apps
| +--app1
| | +--index.js
| | +--...
| +--app2
| | +--index.js
| | +--...
| ....
| +--node_modules //global for all projects (empty in host)
| docker-compose.yml //principal compose
| docker-compose.app1.yml //override for app1
| docker-compose.app2.yml //override for app2
| ....
| Dockerfile //my personalized nodejs image config
Dockerfile
# Base image
FROM node
# Non-root user
RUN useradd --user-group --create-home --shell /bin/false apps
# My home folder
ENV HOME=/home/apps
# Copy module config and dependency manifests
COPY package.json npm-shrinkwrap.json $HOME/
# Change ownership to the non-root user
RUN chown -R apps:apps $HOME/
WORKDIR $HOME
# Install modules
RUN npm install
# Clean the cache
RUN npm cache clean
# Switch to the non-root user
USER apps
docker-compose.yml
version: "2"
services:
  web:
    build: .
    image: myNodejs:0.1
    container_name: my_nodejs
    volumes:
      - .:/home/apps # share my host path with the container
      - /home/apps/curso_mean/node_modules # path that keeps node_modules inside the container
docker-compose.app1.yml
version: "2"
services:
  web:
    container_name: app1
    command: node app1/index.js
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
Command to launch app1
docker-compose -f docker-compose.yml -f docker-compose.app1.yml up -d
The problem here is that all the projects share the same node_modules, and they are installed in the image (I would prefer to install them from the compose file); on the other hand, I only use one image and I can still have a different configuration for every project.
Questions
Is it possible to have the first structure?
Is it possible to install the app-specific modules from the docker-compose override (by running more than one command, or in some other way) while keeping the global node_modules installed in the image?
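As a reference for the second question, here is a minimal sketch (my own, not part of the original setup) of a per-app override that installs the app-specific modules at container start while keeping the global node_modules baked into the image; it assumes the bind mount from the main compose file above:
version: "2"
services:
  web:
    container_name: app1
    working_dir: /home/apps/app1
    volumes:
      # Keep app1's own node_modules in an anonymous volume so it stays empty on the host.
      - /home/apps/app1/node_modules
    # Install app1's specific dependencies, then launch it; the global
    # node_modules installed by the image stays untouched.
    command: sh -c "npm install && node index.js"
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development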
Related
I'm new to Bitbucket Pipelines and I'm trying to use it the way I use GitLab CI.
Now I'm facing an issue while trying to build a Docker image inside a Docker container using dind.
error during connect: Post "http://docker:2375/v1.24/auth": dial tcp: lookup docker on 100.100.2.136:53: no such host
The error above appeared, and after some research I believe it comes from the Docker daemon.
Since Atlassian has stated there is no intention to support privileged mode, I'm looking for any other option.
bitbucket-pipelines.yml
definitions:
  services:
    docker: # can only be used with a self-hosted runner
      image: docker:23.0.0-dind
pipelines:
  default:
    - step:
        name: 'Login'
        runs-on:
          - 'self.hosted'
        services:
          - docker
        script:
          - echo $ACR_REGISTRY_PASSWORD | docker login -u $ACR_REGISTRY_USERNAME registry-intl.ap-southeast-1.aliyuncs.com --password-stdin
    - step:
        name: 'Build'
        runs-on:
          - 'self.hosted'
        services:
          - docker
        script:
          - docker build -t $ACR_REGISTRY:latest .
          - docker tag $(docker images | awk '{print $1}' | awk 'NR==2') $ACR_REGISTRY:$CI_PIPELINE_ID
          - docker push $ACR_REGISTRY:$CI_PIPELINE_ID
    - step:
Dockerfile
FROM node:14.17.0
RUN mkdir /app
#working DIR
WORKDIR /app
# Copy Package Json File
COPY ["package.json","./"]
# Expose port 80
EXPOSE 80
# Install git
RUN npm install git
# Install Files
RUN npm install
# Copy the remaining sources code
COPY . .
# Run prisma db
RUN npx prisma db pull
# Install the Prisma client
RUN npm i @prisma/client
# Build
RUN npm run build
CMD [ "npm","run","dev","node","build/server.ts"]
I have a NestJS project that uses TypeORM with a MySQL database.
I dockerized it using docker compose, and everything works fine on my machine (Mac).
But when I run it on my remote instance (Ubuntu 22.04), I get the following error:
server | yarn run v1.22.19
server | $ node dist/main
server | node:internal/modules/cjs/loader:998
server | throw err;
server | ^
server |
server | Error: Cannot find module '/usr/src/app/dist/main'
server | at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
server | at Module._load (node:internal/modules/cjs/loader:841:27)
server | at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
server | at node:internal/main/run_main_module:23:47 {
server | code: 'MODULE_NOT_FOUND',
server | requireStack: []
server | }
server |
server | Node.js v18.12.0
server | error Command failed with exit code 1.
server | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
server exited with code 1
Here is my Dockerfile:
FROM node:18-alpine AS development
# Create app directory
WORKDIR /usr/src/app
# Copy files needed for dependencies installation
COPY package.json yarn.lock ./
# Disable postinstall script that tries to install husky
RUN npx --quiet pinst --disable
# Install app dependencies
RUN yarn install --pure-lockfile
# Copy all files
COPY . .
# Increase the memory limit to be able to build
ENV NODE_OPTIONS=--max_old_space_size=4096
ENV GENERATE_SOURCEMAP=false
# Build the application
RUN yarn build
FROM node:18-alpine AS production
# Set env to production
ENV NODE_ENV=production
# Create app directory
WORKDIR /usr/src/app
# Copy files needed for dependencies installation
COPY package.json yarn.lock ./
# Disable postinstall script that tries to install husky
RUN npx --quiet pinst --disable
# Install app dependencies
RUN yarn install --production --pure-lockfile
# Copy all files
COPY . .
# Copy dist folder generated in development stage
COPY --from=development /usr/src/app/dist ./dist
# Entrypoint command
CMD ["node", "dist/main"]
And here is my docker-compose.yml file:
version: "3.9"
services:
server:
container_name: blognote_server
image: bladx/blognote-server:latest
build:
context: .
dockerfile: ./Dockerfile
target: production
environment:
RDS_HOSTNAME: ${MYSQL_HOST}
RDS_USERNAME: ${MYSQL_USER}
RDS_PASSWORD: ${MYSQL_PASSWORD}
JWT_SECRET: ${JWT_SECRET}
command: yarn start:prod
ports:
- "3000:3000"
networks:
- blognote-network
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
links:
- mysql
depends_on:
- mysql
restart: unless-stopped
mysql:
container_name: blognote_database
image: mysql:8.0
command: mysqld --default-authentication-plugin=mysql_native_password
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
ports:
- "3306:3306"
networks:
- blognote-network
volumes:
- blognote_mysql_data:/var/lib/mysql
restart: unless-stopped
networks:
blognote-network:
external: true
volumes:
blognote_mysql_data:
Here is what I tried to do:
I cleaned everything on my machine and then ran docker compose --env-file .env.docker up, and this did work.
I ran my server image using docker (not docker compose), and it worked too.
I made a snapshot, connected to it, and ran node dist/main manually, and this also worked.
So I don't know why I'm still getting this error.
And why do I get different behavior using docker compose on my remote instance?
Am I missing something?
Your docker-compose.yml contains two lines that hide everything the image does:
volumes:
  # Replace the image's `/usr/src/app`, including the built
  # files, with content from the host.
  - .:/usr/src/app
  # But: the `node_modules` directory is user-provided content
  # that needs to be persisted separately from the container
  # lifecycle. Keep that tree in an anonymous volume and never
  # update it, even if it changes in the image or the host.
  - /usr/src/app/node_modules
You should delete this entire block.
You will often see volumes: blocks like this that try to simulate a local-development environment in an otherwise isolated Docker container. This only works if the Dockerfile does nothing but COPY the source code into the image without modifying it, and the node_modules library tree never changes.
In your case, the Dockerfile produces a /usr/src/app/dist directory in the image which may not be present on the host. Since the first bind mount hides everything in the image's /usr/src/app directory, you don't get to see this built tree; and your image is directly running node on that built application and not trying to simulate a local development environment. The volumes: don't make sense here and cause problems.
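To make the fix concrete, here is a minimal sketch of the server service with that block removed (only the keys relevant to this point are shown; everything else stays as in the question's compose file):
services:
  server:
    image: bladx/blognote-server:latest
    build:
      context: .
      target: production
    command: yarn start:prod
    ports:
      - "3000:3000"
    # no volumes: block -- the container runs the dist/ tree that was built into the image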
I found many answers online for the exception Cannot find module '@angular-devkit/build-angular/package.json', among which adding @angular-devkit/build-angular@0.1102.9 --force, but none worked in my case.
Problem
Running an Angular application using docker-compose: after building the application, the error below occurs when running docker-compose up; note, the app compiles successfully when running docker run <image>, but I need it to work with Docker Compose.
defuse-ui | An unhandled exception occurred: Cannot find module '@angular-devkit/build-angular/package.json'
defuse-ui | Require stack:
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/node_modules/@angular-devkit/architect/node/node-modules-architect-host.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/node_modules/@angular-devkit/architect/node/index.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/models/architect-command.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/commands/serve-impl.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/node_modules/@angular-devkit/schematics/tools/export-ref.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/node_modules/@angular-devkit/schematics/tools/index.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/utilities/json-schema.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/models/command-runner.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/lib/cli/index.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/lib/init.js
defuse-ui | - /usr/local/lib/node_modules/@angular/cli/bin/ng
defuse-ui | See "/tmp/ng-nVUdCb/angular-errors.log" for further details.
defuse-ui exited with code 127
Dockerfile
FROM node:14.16.1
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install -g @angular/cli@11.2.9 @angular-devkit/build-angular@0.1102.9 --force
RUN npm install
COPY . /app
CMD ng serve --host 0.0.0.0 --port 4200
package.json, package-lock.json and other files are accessible at https://github.com/radon-h2020/radon-defuse/tree/905d7a339a31bc68dbd9b7a5258b5e9b19e65c68 release 1.0.0
Versions:
Docker version 20.10.17, build 100c701
docker-compose version 1.29.2, build unknown
After days of trials, I finally found a solution on this useful webpage.
The docker-compose.yml has to be updated with a new volume pointing to the node_modules:
services:
  defuse-ui:
    build: ./defuse
    container_name: defuse-ui
    ports:
      - 4200:4200
    expose:
      - 4200
    volumes:
      - /app/node_modules  # <-- add this
      - ./defuse:/app      # <-- or remove this
A possible explanation, as described on that webpage, is that
when docker builds the image, the node_modules directory is created within the working directory, and all the dependencies are installed there. Then at runtime the working directory from outside docker is mounted into the docker instance (which does not have the installed node_modules), hiding the node_modules you just installed.
A workaround is to use a data volume to store all the node_modules, as data volumes copy in the data from the built docker image before the working directory is mounted.
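One practical follow-up (my own note, not from the quoted webpage): an anonymous volume like /app/node_modules is only seeded from the image the first time it is created, so after rebuilding the image with changed dependencies you may need to recreate it:
docker-compose build
docker-compose up -d --renew-anon-volumes
# or, more drastically, drop the volumes and start fresh:
docker-compose down -v && docker-compose up -d --build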
Two possible solutions:
Uninstall/remove node_modules and install again:
npm install -g @angular/cli@11
npm install @angular-devkit/build-angular@0.1102.9
Remove this line in docker-compose.yml:
angular-ayaresa:
  build:
    context: build/angular
  ports:
    - "4200:4200"
  volumes:
    - ../source/angular:/usr/src/app/
    - /usr/src/app/node_modules/  # <-- remove this line
  container_name: angular-ayaresa
  networks:
    - ayaresa-network
In package.json:
Instead of
"@angular/cli": "1.6.0",
change it to
"@angular/cli": "^1.6.0",
In other words, add "^" in front of the particular version installed on your local machine.
Then run these commands:
npm update
npm update -g @angular/cli
My folder structure looks like this (monorepo):
project
|
+--- /api
| |
| +--- /.offline-cache
| +--- /src
| | +--- index.js
| | +--- ...
| |
| +--- Dockerfile
| +--- package.json
| +--- yarn.lock
|
+--- /common
| |
| +--- /src
| | +--- index.js
| |
| +--- package.json
|
+--- /ui
| |
| +--- /.offline-cache
| +--- /src
| | +--- index.js
| | +--- ...
| |
| +--- Dockerfile
| +--- package.json
| +--- yarn.lock
|
+--- docker-compose.yml
The offline cache and the Docker image builds for every 'service' (ui, api) are working.
Now I want to access/install the common module inside api and ui as well.
Running yarn add ./../common inside /api works and installs the module inside the api folder and adds it to the package.json and yarn.lock files.
But when I try to rebuild the Docker image I get an error telling me:
error Package "" refers to a non-existing file '"/common"'.
That's because there is no common folder inside the docker container and the installed package isn't added to the offline-mirror :(
I can't copy the common folder to the docker-image because it is outside the build context and I don't want to publish to NPM. What else can I do to get this working?
You can specify a context in your docker-compose.yml, which does not need to be the same directory as your Dockerfile.
So you can create something like this:
version: '3.5'

services:
  ui:
    build:
      context: .
      dockerfile: ui/Dockerfile
    ports:
      - 'xxxx:xxxx'

  api:
    build:
      context: .
      dockerfile: api/Dockerfile
    ports:
      - 'xxxx:xxxx'
The same thing can be done with plain docker build as well, by adding the -f option and running the command from the root directory.
docker build -f ui/Dockerfile -t xxxxxx/ui .
docker build -f api/Dockerfile -t xxxxxx/api .
Be aware that you also have to modify your Dockerfile slightly to match the file structure of the project (using WORKDIR).
FROM node:18-alpine
# start from the image root and copy all relevant files from the build context
WORKDIR /app
COPY ./ui/ ./ui/
COPY ./common/ ./common/
# switch to relevant path (in this case ui)
WORKDIR /app/ui
RUN yarn && yarn build
CMD ["yarn", "start"]
I read this about long-term caching and tried to implement it in my project, but the manifest file generates wrong links to assets when I build it in a Docker container, even though the generation process itself works well.
Dockerfile.web
FROM node:8.2.1-alpine
WORKDIR /web
ADD /tmp/app.tar.gz /web
# At the end, node_modules is removed because of a bug with npm prune.
# In this case we need to re-install production-only deps to reduce the image size.
RUN yarn install && \
yarn run build-production && \
rm -rf node_modules && \
yarn install --production && \
rm -rf src
RUN adduser -D mySecretUser
USER mySecretUser
Does anyone know what this could be, and why building inside a Docker container behaves differently?
I tried removing all images, switching off Docker build caching, and removing the dist directory before generation, but nothing worked.
I found that the problem was introduced by my build script, which uses docker-compose.
docker-compose.yml
version: '3'
services:
  # Web server which is responsible for server-side rendering.
  # It also adds additional middleware layers when proxying requests
  # to dependent systems such as the API (for example, enriching them
  # with auth data before sending).
  web:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.web
    ports:
      - "${WEBSERVER_PORT}"
    env_file:
      - ./docker/web.env
    volumes:
      - assets:/web/dist/assets
    command: ["yarn", "run", "_production"]

  # Nginx is used as a proxy wrapper placed in front of the web server.
  # It's responsible for static files and proxying to the web server.
  # It can also be used to proxy websocket connections to a specific game server.
  nginx:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.nginx
    volumes:
      - assets:/www/assets
    ports:
      - "80:${NGINX_PORT}"
    env_file:
      - ./docker/web.env
      - ./nginx/nginx.env
    depends_on:
      - web
    command: ["/bin/sh", "-c", "envsubst < /etc/nginx/templates/default.template > /etc/nginx/sites-enabled/default && nginx -g 'daemon off;'"]

volumes:
  assets:
There I had used the shared volume assets, and I didn't clear it when starting docker-compose again.
I was using this command to build:
rm -rf tmp && mkdir -p tmp && tar -czvf tmp/app.tar.gz src config .babelrc mq.json postcss.config.js webpack.*.js package.json yarn.lock && export $(cat ./docker/web.env | grep -v ^# | xargs) && docker-compose -p cruiserwars build
Here I prepare a tar.gz with the sources, set up the environment variables, and then use docker-compose.yml to build my project. But I forgot to remove the volume that had been created before...
So the solution is to use this command instead:
rm -rf tmp && mkdir -p tmp && tar -czvf tmp/app.tar.gz src config .babelrc mq.json postcss.config.js webpack.*.js package.json yarn.lock && export $(cat ./docker/web.env | grep -v ^# | xargs) && docker-compose down -v && docker-compose -p cruiserwars build
You can see that I have added docker-compose down -v to stop the containers and remove the volume that was created before.
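Alternatively (my own addition), if you only want to drop the stale assets volume instead of everything, something like this should work, assuming the default <project>_<volume> naming together with the -p cruiserwars project prefix:
docker-compose -p cruiserwars down          # remove containers and networks, keep volumes
docker volume rm cruiserwars_assets         # drop the stale shared assets volume
docker-compose -p cruiserwars up -d --build # rebuild and start with a fresh volume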