Error response from daemon: failed to create shim when installing Node.js

I am trying to create a Docker image with Node.js version 16. However, I have failed to find a solution to this problem despite searching Stack Overflow and other platforms.
Basically, I am using docker compose up, and this is what my docker-compose.yml looks like:
version: '3.1'
services:
  redis:
    image: 'redis:alpine'
    ports:
      - "6379:6379"
  webserver:
    image: 'nginx:alpine'
    working_dir: /application
    volumes:
      - '.:/application'
      - './docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf'
    ports:
      - '3000:80'
  php-fpm:
    build: docker/php-fpm
    working_dir: /application
    volumes:
      - '.:/application'
      - './docker/php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini'
  node:
    build: docker/node
    working_dir: /application
    volumes:
      - './:/application'
    ports:
      - '8080:8080'
The files live in the docker folder, which contains three nested folders.
node/Dockerfile:
FROM node:16
WORKDIR /application
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
The npm start command consists of npm install plus a postinstall step that simply executes npm run dev.
Unfortunately, I get the following error:
Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh -c 'cd /application && npm install'": stat /bin/sh -c 'cd /application && npm install': no such file or directory: unknown
I should also add that I am running this under WSL2.
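For context, this kind of error usually means an entire shell command line was passed as a single exec-form argument, so the runtime looks for a binary literally named /bin/sh -c '...'. A minimal illustration of the difference (hypothetical Dockerfile lines, not the asker's exact file):

# Broken: the whole string is one argv element, treated as an executable path
CMD ["/bin/sh -c 'cd /application && npm install'"]
# Works: shell form; Docker wraps the line in /bin/sh -c itself
CMD cd /application && npm install
# Works: exec form, with each argument as a separate array element
CMD ["/bin/sh", "-c", "cd /application && npm install"]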

Related

What should the behaviour of running `docker-compose exec <container_name> npm install` be on the host?

I am setting up Docker for development purposes, with a separate Dockerfile.dev for a NextJS front-end and an ExpressJS-powered back-end. For the NextJS application, the volume mounting is done such that the application runs perfectly fine in the dev server, in the container. However, because of the empty node_modules directory on the host, I enter the following command in a new terminal instance:
docker-compose exec frontend npm install
This gives me the modules I need, so that I get normal linting while developing.
However, on starting up the Express back-end, I had issues with installing mongodb from npm (module not found when running the container). So I resorted to the same strategy by running docker-compose exec backend npm install, and everything works. However, the node_modules directory on the host is still empty, which is not the same behaviour as when I had done it with the frontend.
Why is this the case?
#Dockerfile.dev (frontend)
FROM node:19-alpine
WORKDIR "/app"
COPY ./package*.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
#Dockerfile.dev (backend)
FROM node:19-alpine
WORKDIR "/app"
RUN npm install -g nodemon
COPY ./package*.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
docker-compose.yml:
version: '3'
services:
  server:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    ports:
      - "5000:5000"
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - "3000:3000"
  mongodb:
    image: mongo:latest
    volumes:
      - /home/panhaboth/Apps/home-server/data:/data/db
    ports:
      - "27017:27017"

How to solve docker compose cannot find module '/dockerize'

I have a docker-compose setup with three containers: a MySQL db, a serverless app, and a test container (to test the serverless app with Mocha). On executing docker-compose up --build, the containers start in the wrong order: both the MySQL db and the serverless app need to be in a working state for test to run correctly (depends_on barely helps here).
Hence I tried using the dockerize tool to set a timeout and wait on TCP ports 3306 and 4200 before starting the test container. But I'm getting an error saying Cannot find module '/dockerize'.
Is there anything wrong with the way my docker-compose.yml and Dockerfile are set up? I'm very new to Docker, so any help would be welcome.
Dockerfile
FROM node:12
# Create app directory
RUN mkdir -p /app
WORKDIR /app
# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget --no-check-certificate https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
# Install app dependencies
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install -g serverless@2.41.2
RUN npm install
# Bundle app source
COPY . /app
docker-compose.yml
version: '3'
services:
  mysql-coredb:
    image: mysql:5.6
    container_name: mysql-coredb
    expose:
      - "3306"
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    image: serverless-app
    container_name: serverless-app
    depends_on:
      - mysql-coredb
    command: sls offline --stage local --host 0.0.0.0
    expose:
      - "4200"
    ports:
      - "4200:4200"
    links:
      - mysql-coredb
    volumes:
      - /app/node_modules
      - .:/app
  test:
    image: node:12
    working_dir: /app
    container_name: test
    volumes:
      - /app/node_modules
      - .:/app
    command: dockerize
      -wait tcp://serverless-app:4200 -wait tcp://mysql-coredb:3306 -timeout 15s
      bash -c "npm test"
    depends_on:
      - mysql-coredb
      - serverless-app
    links:
      - serverless-app
Your final test container is running a bare node image. This doesn't see or use any of the packages you install in the Dockerfile. You can set that container to also build: .; Compose will run through the build sequence a second time, but since all of the inputs are the same as for the main serverless-app container, Docker will use the build cache for everything, the build will run very quickly, and you'll have two names for the same physical image.
You can also safely remove most of the options you have in the docker-compose.yml. You only need to specify image: with build: if you're planning to push the image to a registry; you don't need to override container_name:; the default command: should come from the Dockerfile CMD; expose: and links: are related to first-generation Docker networking and aren't necessary; and overwriting the image's code with volumes: ignores the image contents entirely and gives host-specific behavior (just using Node on the host will be much easier than trying to convince Docker to act like a local Node development environment). So I'd trim this down to:
version: '3.8'
services:
  mysql-coredb:
    image: mysql:5.6
    ports:
      - "3306:3306"
  serverless-app:
    build: .
    depends_on:
      - mysql-coredb
    ports:
      - "4200:4200"
  test:
    build: .
    command: >-
      dockerize
      -wait tcp://serverless-app:4200
      -wait tcp://mysql-coredb:3306
      -timeout 15s
      npm test
    depends_on:
      - mysql-coredb
      - serverless-app
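With that trimmed file, the whole stack, including the one-off test run, can be exercised with something like:

docker-compose up --build test

The --build flag rebuilds both serverless-app and test; as noted above, the second build should resolve almost entirely from cache.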

How to solve docker-entrypoint.sh executable file not found in $PATH: unknown?

I'm having trouble trying to start the backend for my application on Docker. I get the following error when I select Compose Up in VS Code. I read that I need to add RUN chmod +x docker-entrypoint.sh to my Dockerfile, but it didn't solve the problem.
Successfully built some_id_abc123
mongodb is up-to-date
cache is up-to-date
Recreating some_project_server ... error
ERROR: for some_project_server Cannot start service server: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "docker-entrypoint.sh": executable file not found in $PATH: unknown
ERROR: for server Cannot start service server: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "docker-entrypoint.sh": executable file not found in $PATH: unknown
ERROR: Encountered errors while bringing up the project.
The terminal process "/bin/zsh '-c', 'docker-compose -f "docker-compose.yml" up -d --build'" terminated with exit code: 1.
Dockerfile:
FROM node
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run dev
docker-compose.yml:
version: "3"
services:
database:
image: mongo
container_name: mongodb
restart: always
ports:
- "27017:27017"
environment:
- MONGO_INITDB_DATABASE=some_db
- MONGO_INITDB_ROOT_USERNAME=some_user
- MONGO_INITDB_ROOT_PASSWORD=some_password
volumes:
- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- ./mongo-volume:/data/db
networks:
- server-network
redis:
image: redis
container_name: cache
command: redis-server --requirepass some_password
restart: always
volumes:
- ./redis-data:/var/lib/redis
ports:
- "6379:6379"
networks:
- server-network
server:
build: .
depends_on:
- database
- redis
environment:
- ENV_VARIABLES=some_env_variables
ports:
- "3000:3000"
volumes:
- .:/usr/app
networks:
- server-network
networks:
server-network:
driver: bridge
Finally, here's the file structure of the project:
project
- src
- test
- .dockerignore
- docker-compose.yml
- Dockerfile
- mongo-init.js
- package-lock.json
- package.json
What is the origin of this problem, and how do I solve it? I'm just getting started with Docker, and this is the first professional project I'm working on where Docker is necessary.
Try
docker compose build
and then
docker compose up
Found it HERE. Worked for me.
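If a plain rebuild isn't enough, forcing a clean rebuild is a reasonable next step (a generic sketch, not specific to this project; --no-cache discards any stale cached layers):

docker compose build --no-cache
docker compose up -d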

“docker-compose up” error: Cannot start service fetcher: OCI runtime create failed: container_linux.go:344

I'm running Docker on an Ubuntu server. For a while everything was working well; then Docker stopped, and when I tried to bring it up again I got this error:
ERROR: Service 'back' failed to build: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
Here is my docker-compose.yml
version: "2"
services:
back:
build: ./crypto_feed_back
restart: always
command: node server.js
volumes:
- .:/usr/app/
front:
build: ./crypto_feed_front
restart: always
command: npm start
volumes:
- .:/usr/app/
depends_on:
- back
fetcher:
build: ./crypto_feed_fetch
restart: always
command: /usr/sbin/crond -l 2 -f
depends_on:
- postgres
postgres:
image: postgres:12-alpine
restart: always
environment:
POSTGRES_USER: crypto_feed
POSTGRES_DB: crypto_feed
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
caddy:
image: elswork/arm-caddy:1.0.0
restart: always
environment:
ACME_AGREE: "true"
volumes:
- ./caddy/Caddyfile:/etc/Caddyfile
- ./caddy/certs:/root/.caddy
ports:
- 80:80/tcp
- 443:443/tcp
depends_on:
- front
And back's Dockerfile:
FROM node:alpine
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install --production
# If you are building your code for production
# RUN npm ci --only=production
COPY ./api/ ./api/
COPY ./server.js .
EXPOSE 3001
CMD npm start
If someone has an idea of how I can solve this, that would be really nice!
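One hedged way to narrow this down: the error means the runtime could not stat /bin/sh inside the image, which usually points at a damaged image or one built for the wrong CPU architecture (plausible on this stack, given the ARM Caddy image). Assuming docker images shows the fetcher image's name (the <image> placeholder below is hypothetical), you could check:

docker-compose build --no-cache fetcher              # force a clean rebuild
docker run --rm --entrypoint ls <image> -l /bin/sh   # confirm the shell exists in the image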

npm install error during Docker container build

I created a very simple Dockerfile for my Node.js web application:
FROM node:8.11.4
FROM mysql:latest
WORKDIR /ess-explorer
COPY . .
RUN npm install
RUN cd config && cp config.json.example config.json && cp database.json.example database.json && cd ../
RUN npm run migrate
EXPOSE 3000
CMD ["npm", "dev"]
And docker.yml
version: '3'
services:
  essblockexplorer:
    container_name: ess-explorer
    build: .
    depends_on:
      - db
    privileged: true
    ports:
      - 3000:3000
      - 3010:3010
  db:
    container_name: mysql
    image: mysql
    restart: always
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: '123'
volumes:
  db-data:
Every time I run docker-compose -f docker.yml build, I get this error:
Step 5/9 : RUN npm install
---> Running in d3644d792807
/bin/sh: 1: npm: not found
ERROR: Service 'essblockexplorer' failed to build: The command '/bin/sh -c npm install' returned a non-zero code: 127
What am I doing wrong? I found similar issues, but I didn't find a real solution to this problem.
You shouldn't need the mysql image in your Dockerfile at all; ideally your app container (essblockexplorer) accesses the db container (db) via a NodeJS client. With two FROM lines, everything after FROM mysql:latest is based on the MySQL image, which contains no npm; hence the npm: not found error. All you need to do is:
Remove the FROM mysql:latest line from your Dockerfile.
Access the MySQL database via a NodeJS client, using db as the hostname (this is automatically made available as an alias inside your container).
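For illustration, the single-stage Dockerfile might then look like the sketch below (assumptions: CMD ["npm", "dev"] is not a valid npm invocation, so npm run dev is used instead; and since RUN npm run migrate executes at build time, when the db container is not reachable, the migration may need to move into the container's start command):

FROM node:8.11.4
WORKDIR /ess-explorer
COPY . .
RUN npm install
RUN cd config && cp config.json.example config.json && cp database.json.example database.json
# Caution: at build time there is no network path to the db service
RUN npm run migrate
EXPOSE 3000
CMD ["npm", "run", "dev"]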
