docker can't find file to start app after build - node.js

I am using TypeScript here with node:latest in Docker, and I am using docker-compose as well.
It always fails when I run it with docker-compose, but when I run docker run manually it works fine.
Here is my Dockerfile:
FROM node:latest
RUN mkdir -p /home/myapp
WORKDIR /home/myapp
RUN npm i -g prisma2
ENV PATH /home/myapp/node_modules/.bin:$PATH
COPY package.json /home/myapp/
RUN npm install
COPY . /home/myapp
RUN prisma2 lift save --name 'init'
RUN prisma2 lift up
EXPOSE 8100
RUN npm run build
RUN pwd
RUN ls
RUN ls dist
CMD node dist/server.js
and my docker-compose.yml:
version: "3"
services:
app:
environment:
DB_URI: postgres://myuser:password#postgres:5555/prod
NODE_ENV: production
build:
context: .
dockerfile: Dockerfile
depends_on:
- postgres
volumes:
- ./home/edupro/:/home/myapp/
- ./node_modules:/home/myapp/node_modules
ports:
- "8100:8100"
postgres:
container_name: postgres
image: postgres:10-alpine
ports:
- "5555:5555"
environment:
POSTGRES_USER: myuser
POSTGRES_PASSWORD: password
POSTGRES_DB: prod
When it finishes and runs CMD node dist/server.js (dist is the folder I build into, because I am using TypeScript), it fails with an error like this:
Cannot find module '/home/edupro/dist/server.js'
I have tried changing the volumes in docker-compose.yml as well, like this:
- /home/myapp/node_modules:/home/myapp/node_modules
or
- ./:/home/myapp/node_modules
but the result is still the same. Am I missing something, or did I mount it wrong?
What is the correct way to resolve this?

You need to remove the volumes section from your compose file, since those mounts overwrite all the files you built in your Dockerfile. Delete this:
volumes:
  - ./home/edupro/:/home/myapp/
  - ./node_modules:/home/myapp/node_modules
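For reference, a minimal sketch of the app service with those mounts removed; everything else is unchanged from the compose file in the question:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      DB_URI: postgres://myuser:password@postgres:5555/prod
      NODE_ENV: production
    depends_on:
      - postgres
    ports:
      - "8100:8100"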

Related

Docker Prisma Error P1001: Can't reach database server at `postgres`:`5432`

After hours of searching, I must bow down and ask you for some advice on my problem:
My backend (Express + Prisma + PostgreSQL) is Dockerized and functioning, BUT I can't use npx prisma commands from my WSL2 zsh terminal.
Here is my .env
# Database settings
NODE_ENV=dev
DB_USER=user
DB_PASS=password
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}#postgres/chimere?schema=public"
Dockerfile:
FROM node:17-alpine3.14 as base
WORKDIR /user/src/app
COPY package*.json /user/src/app/
EXPOSE 5000
FROM base as dev
ENV NODE_ENV=development
RUN npm install -g nodemon && npm install
COPY . /user/src/app/
RUN npx prisma generate
CMD ["nodemon", "src/index.js"]
FROM base as production
ENV NODE_ENV=production
RUN npm ci
COPY . /user/src/app/
RUN npx prisma generate
CMD ["node", "src/index.js"]
docker-compose.yml:
version: '3.8'
services:
  postgres:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  web:
    build:
      context: ./
      target: dev
    restart: always
    volumes:
      - .:/usr/src/app
      - uploaded-files:/usr/src/app/public/media/files
      - uploaded-pictures:/usr/src/app/public/media/pictures
    command: npm run start:dev
    ports:
      - "5000:5000"
    environment:
      NODE_ENV: development
      DEBUG: nodejs-docker-express:*
volumes:
  postgres:
  uploaded-files:
  uploaded-pictures:
and the Prisma schema:
generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "linux-musl"]
}
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
As you can see, I'm pretty new to Docker and almost everything is an adjusted copy-paste from Google (:
How can I get my app to work AND get my commands to work as well?
Thanks!
I was facing the same issue, but with a MySQL container. Basically, I had a problem with the DATABASE_URL in my .env file. It looked something like this:
DATABASE_URL="mysql://${DB_USER}:${DB_PASS}@localhost:3306/project_name"
The problem was the localhost. Apparently, when running inside a container, you have to use the container's name instead of localhost. I changed my docker-compose to specify the container's name:
version: '3.1'
services:
  db:
    image: mysql
    container_name: mysql
    ports:
      - 3306:3306
Notice the container_name property. After that, I changed my .env to:
DATABASE_URL="mysql://${DB_USER}:${DB_PASS}#mysql:3306/project_name"
I would suggest you try something similar. Maybe something along these lines:
version: '3.8'
services:
  postgres:
    image: postgres
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
As for your .env, you can leave it the way it is now, unless you chose a different container name, in which case you would substitute the name you chose inside the curly braces (and remember to remove the curly braces):
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@{container_name}/chimere?schema=public"
Check this GitHub issue for more information as well:
https://github.com/prisma/prisma/issues/1385
You need to act from inside the container.
First, create an interactive shell in the container using docker exec:
docker exec -it <name of your container> sh
Note: the -i flag keeps input open to the container, and the -t flag creates a pseudo-terminal that the shell can attach to.
Then, once inside the container, execute the commands you need:
npx prisma migrate dev --name <name of your migration>
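If the container is managed by Compose, you can also run a single command without opening a shell first. A sketch, assuming the service is named web as in the compose file above (the migration name init is just a placeholder):
docker-compose exec web npx prisma migrate dev --name init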

Docker run exits with code 0 - Node.js & Postgres

I've built a small Node.js app with a Postgres database. I wanted to Dockerize it, and everything worked fine when I was using docker-compose. Now I'm trying to put it on AWS EC2 using docker run, and it doesn't work.
Docker Run command:
docker run -d -p 80:4000 --name appname hubname/appname
The container starts and stops immediately.
I'm fairly new to Docker so I guess I'm missing something.
Please find my Dockerfile:
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
FROM base as production
ENV NODE_PATH=./build
RUN npm start
My docker-compose:
version: '3.7'
services:
  appname-db:
    image: postgres
    container_name: appname-db
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - '5432:5432'
    volumes:
      - appname-db:/var/lib/postgresql/data
  ts-node-docker:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    volumes:
      - ./src:/home/node/app/src
      - ./nodemon.json:/home/node/app/nodemon.json
    container_name: ts-node-docker
    environment:
      DB_HOST: ${DB_HOST}
      DB_SCHEMA: ${DB_SCHEMA}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
    depends_on:
      - appname-db:
          condition: service_healthy
    expose:
      - '4000'
    ports:
      - '80:4000'
    stdin_open: true
    command: npm run start
volumes:
  appname-db:
When I do docker ps -a, I see that the container started and exited right after.
I can't find any logs whatsoever; it's very frustrating.
Any help/pointer would be really appreciated.
Edit 1 - Some additional information:
When I'm using docker-compose, I can see the two containers running; with docker run, the container has already exited (screenshots omitted).
You should use CMD and not RUN to execute the npm run start command. CMD runs every time a container is started, while RUN is executed only once, at build time.
As written, the container starts, but nothing else is started inside of it, so it exits because there is nothing more to execute. RUN npm start was already executed the last time you built the image.
Keep RUN for npm install, otherwise it would try to reinstall the node_modules you already have every time. But for the start command, do this instead: CMD ["npm", "run", "start"]
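Applied to the Dockerfile from the question, the production stage would then end like this (a sketch; only the last instruction changes):
FROM node:14 as base
WORKDIR /home/node/app
COPY package*.json ./
RUN npm install
COPY . .
FROM base as production
ENV NODE_PATH=./build
# CMD runs at container start; RUN would execute at build time instead
CMD ["npm", "run", "start"]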
In case it helps, here are my working docker-compose.yml and Dockerfile for an Express API with a Postgres DB.
This is tested and works both on an Ubuntu server and on a dev machine (with docker-compose up for both environments). Note that there are two Dockerfiles (dev/prod).
You don't need to add password/user settings in your docker-compose; you can just include your env_file, and Docker/docker-compose sets everything up correctly for you.
version: "3.8"
services:
# D A T A B A S E #
db:
env_file:
- ./.env
image: postgres
container_name: "db"
restart: always
ports:
- "${POSTGRES_PORT}:${POSTGRES_PORT}"
volumes:
- ./db-data/:/var/lib/postgresql/data/
# A P I #
api:
container_name: "api"
env_file:
- ./.env
build:
context: .
dockerfile: "DOCKERFILE.${NODE_ENV}"
ports:
- ${API_PORT}:${API_PORT}
depends_on:
- db
restart: always
volumes:
- /app/node_modules
- .:/usr/src/service
working_dir: /usr/src/service
# A D M I N E R #
adminer:
env_file:
- ./.env
container_name: "adminer"
image: adminer:latest
restart: always
ports:
- ${ADMINER_PORT}:${ADMINER_PORT}
depends_on:
- db
Dockerfile.dev:
FROM node:14.17.3
ENV NODE_ENV=dev
WORKDIR /
COPY ./package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "run", "start"]
Dockerfile.prod is pretty much the same; just change the environment to prod.
One thing I see is that you include the full path from your home directory when copying; try without that, and just copy relative to the location of your Dockerfile.
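For reference, a hypothetical .env with the variables this compose file and the postgres image read (all values here are placeholders; adjust them to your setup):
# placeholder values only
NODE_ENV=dev
POSTGRES_PORT=5432
POSTGRES_USER=myuser
POSTGRES_PASSWORD=secret
API_PORT=8080
ADMINER_PORT=8081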

How to solve docker compose cannot find module '/dockerize'

I have a docker-compose setup with three containers: a MySQL db, a serverless app, and a test container (to test the serverless app with Mocha). On executing docker-compose up --build, it seems like the order of execution is messed up and not in the correct sequence. Both the MySQL db and the serverless app need to be in a working state for test to run correctly (depends_on barely works).
Hence I tried using dockerize to set a timeout and wait on TCP ports 3306 and 4200 before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to Docker, so any help would be welcome.
Dockerfile:
FROM node:12
# Create app directory
RUN mkdir -p /app
WORKDIR /app
# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless#2.41.2
RUN npm install
# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
  mysql-coredb:
    image: mysql:5.6
    container_name: mysql-coredb
    expose:
      - "3306"
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    image: serverless-app
    container_name: serverless-app
    depends_on:
      - mysql-coredb
    command: sls offline --stage local --host 0.0.0.0
    expose:
      - "4200"
    ports:
      - "4200:4200"
    links:
      - mysql-coredb
    volumes:
      - /app/node_modules
      - .:/app
  test:
    image: node:12
    working_dir: /app
    container_name: test
    volumes:
      - /app/node_modules
      - .:/app
    command: dockerize
      -wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
      bash -c "npm test"
    depends_on:
      - mysql-coredb
      - serverless-app
    links:
      - serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as for the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the docker-compose.yml. You only need to specify image: together with build: if you're planning to push the image to a registry; you don't need to override container_name:; the default command: should come from the Dockerfile CMD; expose: and links: are related to first-generation Docker networking and aren't necessary; and overwriting the image code with volumes: means ignoring the image contents entirely and getting host-specific behavior (just using Node on the host will be much easier than trying to convince Docker to act like a local Node development environment). So I'd trim this down to:
version: '3.8'
services:
  mysql-coredb:
    image: mysql:5.6
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    depends_on:
      - mysql-coredb
    ports:
      - "4200:4200"
  test:
    build: .
    command: >-
      dockerize
      -wait tcp://serverless-app:4200
      -wait tcp://mysql-coredb:3306
      -timeout 15s
      npm test
    depends_on:
      - mysql-coredb
      - serverless-app

Avoid restarting docker when code is changed

I'm using Docker for the first time, so this could be a stupid question for you.
I configured the Dockerfile and docker-compose.
Every time I change my code (Node.js code), I need to run the following command if I want to see the changes:
docker-compose up --build
I want to know if there is a way to update my code without having to run that command every time.
Thanks.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV NODE_PATH=/app/node_modules
EXPOSE 1337
CMD ["npm", "start"]
docker-compose:
version: '3'
services:
  node_api:
    container_name: app
    restart: always
    build: .
    ports:
      - "1337:1337"
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo:latest
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
      - MONGO_INITDB_DATABASE=test
    ports:
      - "27017:27017"
    restart: always
You can mount the directory containing your source code in the container and use a tool such as nodemon, which will watch files and restart the application on changes.
See the article Docker Tips: Development With Nodemon for details.
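As a sketch, assuming nodemon is in your devDependencies and server.js stands in for your actual entry point, the node_api service could mount the source and override the command:
services:
  node_api:
    build: .
    command: npx nodemon server.js
    volumes:
      - .:/usr/src/app              # bind-mount your source over the image copy
      - /usr/src/app/node_modules   # keep the node_modules installed in the image
    ports:
      - "1337:1337"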

Npm install error during Docker container build

I created a very simple Dockerfile for my Node.js web application:
FROM node:8.11.4
FROM mysql:latest
WORKDIR /ess-explorer
COPY . .
RUN npm install
RUN cd config && cp config.json.example config.json && cp database.json.example database.json && cd ../
RUN npm run migrate
EXPOSE 3000
CMD ["npm", "dev"]
And docker.yml
version: '3'
services:
  essblockexplorer:
    container_name: ess-explorer
    build: .
    depends_on:
      - db
    privileged: true
    ports:
      - 3000:3000
      - 3010:3010
  db:
    container_name: mysql
    image: mysql
    restart: always
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: '123'
volumes:
  db-data:
After the command docker-compose -f docker.yml build, I get an error every time:
Step 5/9 : RUN npm install
---> Running in d3644d792807
/bin/sh: 1: npm: not found
ERROR: Service 'essblockexplorer' failed to build: The command '/bin/sh -c npm install' returned a non-zero code: 127
What am I doing wrong? I found similar issues, but I didn't find a real solution to this problem.
You shouldn't need the mysql image in your Dockerfile at all; ideally your app container (essblockexplorer) accesses the db container (db) via a Node.js client. Each FROM line starts a new build stage, so your RUN npm install currently executes in the final mysql:latest stage, which has no npm installed; that is where the npm: not found error comes from. All you need to do is:
Remove the FROM mysql:latest line from your Dockerfile.
Access the MySQL database via a Node.js client, using db as the hostname (this is automatically set up as an alias inside your container).
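A sketch of the trimmed Dockerfile, identical to the one in the question except that the mysql base image is gone (note that npm dev is not a valid npm command, so the sketch assumes npm run dev was intended):
FROM node:8.11.4
WORKDIR /ess-explorer
COPY . .
RUN npm install
RUN cd config && cp config.json.example config.json && cp database.json.example database.json
RUN npm run migrate
EXPOSE 3000
CMD ["npm", "run", "dev"]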
