Docker Compose can't find named Dockerfile - Linux

My project structure is laid out like this (the names are just examples):
- docker-compose.yml
- env_files
  - foo.env
- dockerfiles
  - service_1
    - foo.Dockerfile
    - requirements.txt
- code_folder_1
  - ...
- code_folder_2
  - ...
In my docker-compose:
some_service:
  container_name: foo_name
  build:
    context: .
    dockerfile: ./dockerfiles/service_1/foo.Dockerfile
  ports:
    - 80:80
  env_file:
    - ./env_files/foo.env
Dockerfile:
FROM python:3.8-slim
WORKDIR /some_work_dir
COPY ./dockerfiles/intermediary/requirements.txt .
RUN pip3 install --upgrade pip==21.3.1 && \
pip3 install -r requirements.txt
COPY . .
EXPOSE 80
After I run docker compose build in the directory where the compose file is located, I get this error:
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount4158855397/Dockerfile: no such file or directory
I really do not understand why this is happening. I need to set context: . because I have multiple folders that I need to COPY inside foo.Dockerfile.
The same error was reproduced on both macOS Monterey 12.5.1 and Ubuntu 18.04.4 LTS (Bionic Beaver).

I solved a similar issue by writing a script like the following:
#!/bin/bash
SCRIPT_PATH=$(readlink -f "$(dirname "$0")")
SCRIPT_PATH=${SCRIPT_PATH} docker-compose -f "${SCRIPT_PATH}/docker-compose.yaml" up -d --build "$@"
And changing the YAML to:
some_service:
  container_name: foo_name
  build:
    context: .
    dockerfile: ${SCRIPT_PATH}/dockerfiles/service_1/foo.Dockerfile
  ports:
    - 80:80
  env_file:
    - ${SCRIPT_PATH}/env_files/foo.env
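A minimal way to invoke the wrapper, assuming it is saved as build.sh next to the compose file (the filename is my assumption, not something from the original post):
chmod +x ./build.sh       # make the wrapper executable once
./build.sh                # build and start every service
./build.sh some_service   # or restrict it to a single service
Because the script resolves SCRIPT_PATH itself, it can be run from any working directory.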

Related

Error response from daemon: failed to create shim when installing Node.js

I am trying to create a Docker image with Node.js version 16, but I have failed to find a solution to this problem despite searching Stack Overflow and other platforms.
So basically I am using docker compose up, and this is what my docker-compose.yml looks like:
version: '3.1'
services:
  redis:
    image: 'redis:alpine'
    ports:
      - "6379:6379"
  webserver:
    image: 'nginx:alpine'
    working_dir: /application
    volumes:
      - '.:/application'
      - './docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf'
    ports:
      - '3000:80'
  php-fpm:
    build: docker/php-fpm
    working_dir: /application
    volumes:
      - '.:/application'
      - './docker/php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini'
  node:
    build: docker/node
    working_dir: /application
    volumes:
      - './:/application'
    ports:
      - '8080:8080'
The files live in the docker folder, which contains three nested folders.
node/Dockerfile:
FROM node:16
WORKDIR /application
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
The npm start command runs npm install, plus a postinstall script that simply executes npm run dev.
Unfortunately, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh -c 'cd /application && npm install'": stat /bin/sh -c 'cd /application && npm install': no such file or directory: unknown
I also want to add that I am running this under WSL2.

How to solve docker compose cannot find module '/dockerize'

I have a docker-compose setup with three containers - a MySQL db, a serverless app, and test (to test the serverless app with Mocha). On executing docker-compose up --build, it seems like the order of execution is messed up and not in the correct sequence. Both the MySQL db and the serverless app need to be in a working state for test to run correctly (depends_on barely works).
Hence I tried using the dockerize tool to set a timeout and wait on TCP ports 3306 and 4200 before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to Docker, so any help would be welcome.
Dockerfile.yml
FROM node:12
# Create app directory
RUN mkdir -p /app
WORKDIR /app
# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless@2.41.2
RUN npm install
# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
  mysql-coredb:
    image: mysql:5.6
    container_name: mysql-coredb
    expose:
      - "3306"
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    image: serverless-app
    container_name: serverless-app
    depends_on:
      - mysql-coredb
    command: sls offline --stage local --host 0.0.0.0
    expose:
      - "4200"
    ports:
      - "4200:4200"
    links:
      - mysql-coredb
    volumes:
      - /app/node_modules
      - .:/app
  test:
    image: node:12
    working_dir: /app
    container_name: test
    volumes:
      - /app/node_modules
      - .:/app
    command: dockerize
      -wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
      bash -c "npm test"
    depends_on:
      - mysql-coredb
      - serverless-app
    links:
      - serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the docker-compose.yml:
- You only need to specify image: together with build: if you're planning to push the image to a registry.
- You don't need to override container_name:.
- The default command: should come from the Dockerfile CMD.
- expose: and links: are related to first-generation Docker networking and aren't necessary.
- Overwriting the image code with volumes: means ignoring the image contents entirely and getting host-specific behavior (just using Node on the host will be much easier than trying to convince Docker to act like a local Node development environment).
So I'd trim this down to:
version: '3.8'
services:
  mysql-coredb:
    image: mysql:5.6
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    depends_on:
      - mysql-coredb
    ports:
      - "4200:4200"
  test:
    build: .
    command: >-
      dockerize
      -wait tcp://serverless-app:4200
      -wait tcp://mysql-coredb:3306
      -timeout 15s
      npm test
    depends_on:
      - mysql-coredb
      - serverless-app
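As a usage note (my addition, not part of the original answer): with this trimmed file the whole stack can be exercised in one command, and the test container's exit status is propagated back to the shell:
docker-compose up --build --exit-code-from test
The --exit-code-from flag implies --abort-on-container-exit, so the db and app containers are stopped once npm test finishes.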

Unable to start Docker container from docker-compose - "unknown flag: --iidfile"

When I try to run docker-compose up, the command prompt throws the error below:
$ docker-compose up
Building tomcat
unknown flag: --iidfile
See 'docker build --help'.
ERROR: Service 'tomcat' failed to build
Below are the Dockerfile and docker-compose.yml files:
$ cat Dockerfile.dev
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"] `
$ cat docker-compose.yml
version: '3'
services:
  tomcat:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
  tests:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - .:/app
    command: ["npm", "run", "test"]
Is there something wrong in my Dockerfile or docker-compose.yml configuration?
I faced the same problem when using docker-compose 1.29.1. Downgrading to docker-compose 1.26.2 resolved this problem.
Reinstall docker-compose 1.26.2:
rm -f /usr/local/bin/docker-compose
curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
I faced the same issue. Removing docker and installing docker-ce solved it for me.
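If it helps to sanity-check which versions are in play after either fix (generic commands, not part of the original answers):
docker-compose version   # should now report the reinstalled version, e.g. 1.26.2
docker version           # the Docker engine/CLI that Compose drives for builds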

Docker can't find file to start app after build

I am using TypeScript here with node:latest in Docker, and I am using docker-compose as well.
It always fails when I run it with docker-compose, but when I run docker run (manually) it works well.
Here is my Dockerfile:
FROM node:latest
RUN mkdir -p /home/myapp
WORKDIR /home/myapp
RUN npm i -g prisma2
ENV PATH /home/myapp/node_modules/.bin:$PATH
COPY package.json /home/myapp/
RUN npm install
COPY . /home/myapp
RUN prisma2 lift save --name 'init'
RUN prisma2 lift up
EXPOSE 8100
RUN npm run build
RUN pwd
RUN ls
RUN ls dist
CMD node dist/server.js
and my docker-compose.yml:
version: "3"
services:
app:
environment:
DB_URI: postgres://myuser:password#postgres:5555/prod
NODE_ENV: production
build:
context: .
dockerfile: Dockerfile
depends_on:
- postgres
volumes:
- ./home/edupro/:/home/myapp/
- ./node_modules:/home/myapp/node_modules
ports:
- "8100:8100"
postgres:
container_name: postgres
image: postgres:10-alpine
ports:
- "5555:5555"
environment:
POSTGRES_USER: myuser
POSTGRES_PASSWORD: password
POSTGRES_DB: prod
When it reaches CMD node dist/server.js (dist is the folder I build into, because I am using TypeScript),
it gets an error like this:
Cannot find module '/home/edupro/dist/server.js'
I have also tried changing the volumes in docker-compose.yml, like this:
- /home/myapp/node_modules:/home/myapp/node_modules
or
- ./:/home/myapp/node_modules
but it is still the same. Am I missing something, or did I mount it wrong?
What is the correct way to resolve this?
You need to remove the volumes section from your compose file, since it overwrites all the files you build in your Dockerfile, so delete this:
volumes:
  - ./home/edupro/:/home/myapp/
  - ./node_modules:/home/myapp/node_modules
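For illustration, the app service would then look roughly like this (a sketch of the same compose file from the question with only the volumes removed; nothing else is changed):
app:
  environment:
    DB_URI: postgres://myuser:password@postgres:5555/prod
    NODE_ENV: production
  build:
    context: .
    dockerfile: Dockerfile
  depends_on:
    - postgres
  ports:
    - "8100:8100"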

npm install error during Docker container build

I created a very simple Dockerfile for my Node.js web application:
FROM node:8.11.4
FROM mysql:latest
WORKDIR /ess-explorer
COPY . .
RUN npm install
RUN cd config && cp config.json.example config.json && cp database.json.example database.json && cd ../
RUN npm run migrate
EXPOSE 3000
CMD ["npm", "dev"]
And docker.yml
version: '3'
services:
  essblockexplorer:
    container_name: ess-explorer
    build: .
    depends_on:
      - db
    privileged: true
    ports:
      - 3000:3000
      - 3010:3010
  db:
    container_name: mysql
    image: mysql
    restart: always
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: '123'
volumes:
  db-data:
After running docker-compose -f docker.yml build, every time I get this error:
Step 5/9 : RUN npm install
---> Running in d3644d792807
/bin/sh: 1: npm: not found
ERROR: Service 'essblockexplorer' failed to build: The command '/bin/sh -c npm install' returned a non-zero code: 127
What am I doing wrong? I found similar issues but I didn't find a real solution to this problem.
You shouldn't need the mysql image in your Dockerfile at all; ideally your app container (essblockexplorer) accesses the db container (db) via a NodeJS client. All you need to do is:
- Remove the FROM mysql:latest line from your Dockerfile.
- Access the MySQL database via a NodeJS client, using db as the hostname (this is automatically loaded as an alias into your container), as sketched below.
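A sketch of that second step (the environment variable names are my assumptions, not something from the question): point the app at the db service in docker.yml and have the Node.js MySQL client connect to host db on port 3306:
essblockexplorer:
  container_name: ess-explorer
  build: .
  depends_on:
    - db
  privileged: true
  ports:
    - 3000:3000
    - 3010:3010
  environment:
    # hypothetical variable names; use whatever the app actually reads for its DB connection
    DB_HOST: db
    DB_PORT: '3306'
    DB_USER: root
    DB_PASSWORD: '123'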
