docker: Command "/bin/sh" not found - node.js

ENV PORT=3000
ENV NODE_ENV=production
EXPOSE $PORT
WORKDIR $APP_DIR
COPY yarn.lock package.json $APP_DIR/
RUN ["/usr/local/bin/yarn"]
COPY . $APP_DIR
ENTRYPOINT ["/usr/local/bin/yarn", "run"]
CMD ['dev']
I was running this with the following command:
docker run --rm -p 3000:3000 my-app:latest
And the console outputs:
yarn run v0.17.9
error Command "/bin/sh" not found.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I expect /usr/local/bin/yarn run dev to be executed inside Docker. Am I missing something?

Try switching to double quotes; single quotes aren't valid in a JSON string:
CMD ["dev"]
Because ['dev'] doesn't parse as a JSON array, Docker falls back to the shell form and turns the CMD into /bin/sh -c "['dev']", which is appended to your ENTRYPOINT, so yarn run is asked to run a script named "/bin/sh". That is exactly the error you are seeing.

Related

React app is not loading from a Docker image locally

My Docker file
# FROM node:16.14.2
FROM node:alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install
COPY . .
CMD [ "npm", "start"]
Command to run image: docker run -it -d -p 4001:4001 react-app:test2
[screenshot: project structure]
[screenshot: output after docker run]
Based on this context, a likely mistake is that you are not copying the rest of the source code correctly.
Try to be more consistent in the Dockerfile, and also have a look at multi-stage Docker builds (within the same file) to optimise the image; a sketch follows the corrected file below.
Anyway, your file should be something like:
FROM node:16-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install
COPY . ./
CMD [ "npm", "start"]
Based on the code in the repo, I managed to spot the following problem. It's neither the Dockerfile nor the code itself, although it does throw some warnings.
Implicitly, the application is supposed to run on port 3000 unless that is changed manually somewhere (this project uses only the default settings). So the application starts correctly on port 3000. However, you expose 4001:4001, and according to this Dockerfile nothing is running on that port.
Try using port 3000 instead and it should work just fine:
docker run -it -d -p 3000:3000 <image-name>:<image-tag>
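Alternatively, keep host port 4001 and map it to the container's port 3000, since -p is host-port:container-port:
docker run -it -d -p 4001:3000 <image-name>:<image-tag>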

npm install package through running node container

I've followed the steps in the node.js documentation for creating a Dockerfile. I'm trying to run the command docker exec -it mynodeapp /bin/bash in order to go inside the container and install a new package via npm, but I get the following error
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
Any ideas what I'm doing wrong?
For reference, this is what my docker-compose.yml and Dockerfile look like:
FROM node:latest
RUN mkdir /app
WORKDIR /app
RUN npm install -g nodemon
COPY package.json package.json
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
and
version: '3'
services:
  nodejs:
    container_name: mynodeapp
    build: .
    command: nodemon --inspect server.js
    ports:
      - '5000:8080'
    volumes:
      - '.:/app'
    networks:
      - appnet
networks:
  appnet:
    driver: 'bridge'
Change docker exec mynodeapp -it /bin/bash to docker exec -it mynodeapp /bin/sh
According to docker documentation the correct syntax is the following:
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
-i and -t are the OPTIONS
mynodeapp is the CONTAINER
/bin/bash is the COMMAND to run inside the container
The other problem is that there is no bash shell inside the container, so use the sh shell instead.
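Putting it together, installing a package from inside the running container could look like this (express is just an example package name):
docker exec -it mynodeapp /bin/sh
/app # npm install express --save
Because the compose file mounts . into /app, the updated package.json and node_modules also end up in your project directory on the host.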

Dockerized React App failed to bind to $PORT on Heroku

I'm trying to deploy a Dockerized React app to Heroku, but I keep getting the "R10: Failed to bind to $PORT" error on Heroku.
The Dockerized app runs perfectly fine when I docker run it locally.
My Dockerfile looks like the following:
FROM node:10.15.3
RUN mkdir -p /app
WORKDIR /app
COPY . .
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install --verbose
RUN npm install serve -g -silent
# start app
RUN npm run build
CMD ["serve", "-l", "tcp://0.0.0.0:${PORT}", "-s", "/app/build"]
I followed the online solution and changed the "listening" port in serve to $PORT from Heroku. Now the application is served on Heroku's port according to the logs, but I still get the "Failed to bind to $PORT" error.
Please help!
Variable substitution does not happen in the exec form of CMD; that is why ${PORT} is treated as literal text instead of being replaced with its value.
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
(Dockerfile reference: CMD)
Change CMD to
CMD ["sh", "-c", "serve -l tcp://0.0.0.0:${PORT} -s /app/build"]

Docker Container exits upon running with "sh -c"

I am trying to run a webserver (right now still locally) out of a docker container. I am currently going step by step to understand the different parts.
Dockerfile:
FROM node:12.2.0-alpine as build
ENV environment development
WORKDIR /app
COPY . /app
RUN cd /app/client && yarn && yarn build
RUN cd /app/server && yarn
EXPOSE 5000
CMD ["sh", "-c","NODE_ENV=${environment}", "node", "server/server.js"]
Explanation:
I have the "sh", "-c" part in the CMD instruction because without it I was getting this error:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:346: starting container process caused "exec:
\"NODE_ENV=${environment}\": executable file not found in $PATH":
unknown.
Building the container:
Building the container works just fine with:
docker build -t auth_example .
It takes a little while since the build context is (even after excluding all the node_modules) roughly 37MB, but that's okay.
Running the container:
Running the container and the app inside works like a charm if I do:
MyZSH: docker run -it -p 5000:5000 auth_example /bin/sh
/app # NODE_ENV=development node server/server.js
However, when running the container via the CMD command like this:
MyZSH: docker run -p 5000:5000 auth_example
Nothing happens: no errors, nothing at all. The logs are empty, and docker ps -a reveals that the container exited right upon start. I did some googling and tried different combinations of -t, -i and -d, but that didn't solve it either.
Can anybody shed some light on this or point me in the right direction?
The problem is you're passing three arguments to sh -c, whereas you'd usually pass one (sh -c "... ... ..."). sh -c executes only its first argument as the script (the remaining ones become positional parameters), so your container merely sets NODE_ENV and exits immediately, which is exactly the silent exit you're seeing.
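If you want to keep sh -c in the exec form, collapse the whole command into a single string so the shell runs all of it:
CMD ["sh", "-c", "NODE_ENV=${environment} node server/server.js"]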
It's likely you don't need the sh -c invocation at all; use /usr/bin/env to set that environment variable instead (or just directly set NODE_ENV rather than environment):
FROM node:12.2.0-alpine as build
ENV environment development
WORKDIR /app
COPY . /app
RUN cd /app/client && yarn && yarn build
RUN cd /app/server && yarn
EXPOSE 5000
CMD /usr/bin/env NODE_ENV=${environment} node server/server.js
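Since the shell form expands ${environment} when the container starts, you can also override it at run time without rebuilding:
docker run -e environment=production -p 5000:5000 auth_example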

Passing NODE_ENV to docker to run package.json scripts

This is my Dockerfile:
FROM node:6-onbuild
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 80
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
and in package.json I have this:
"scripts": {
"start": "node start.js",
"stagestart": "NODE_ENV=content-staging node start.js"
}
The start script is for production; now I want a way to run the staging script from the Dockerfile. Is there a way to read NODE_ENV inside the Dockerfile, so I can have one Dockerfile which handles both staging and production?
Here are two possible implementations.
FYI: you don't need to mention NODE_ENV in package.json if you already set NODE_ENV at the system level, or set it at build time or run time in Docker.
First approach: set NODE_ENV at build time via a build argument. The Dockerfile is the same as yours, but I used the alpine base image:
FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 3000
ARG DOCKER_ENV
ENV NODE_ENV=${DOCKER_ENV}
RUN if [ "$DOCKER_ENV" = "stag" ] ; then echo your NODE_ENV for stage is $NODE_ENV; \
else echo your NODE_ENV for dev is $NODE_ENV; \
fi
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
When you build this Dockerfile with this command:
docker build --build-arg DOCKER_ENV=stag -t test-node .
you will see this in the build output:
---> Running in a6231eca4d0b your NODE_ENV for stage is stag
When you run this container and echo the variable, the output will be:
/usr/src/app # echo $NODE_ENV
stag
The second and simplest approach: use the same image, but set the environment variable at run time.
Your Dockerfile
FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 3000
EXPOSE ${PORT}
CMD [ "npm","run", "start" ]
Build and run this image with these commands:
docker build -t test-node .
docker run --name test -e NODE_ENV=content-staging -p 3000:3000 --rm -it test-node ash
Then when you run the same command in this container, you will see:
/usr/src/app # echo $NODE_ENV
content-staging
So this is how you can start your Node application with NODE_ENV without setting the environment variable in package.json. If your Node.js configuration is keyed on NODE_ENV, it will pick up the configuration matching NODE_ENV.
You can use the ENV instruction to expose the value as an environment variable inside the container. Then have an entrypoint script that injects that environment variable (perhaps with something as simple as sed) in place of a placeholder variable name in your package.json file, and then starts your Node application. Obviously this will require a few changes to your Dockerfile with regard to the entrypoint script etc.
That is how I have achieved such things in the past.
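A minimal sketch of that idea; the __NODE_ENV__ placeholder, the script name and the paths are illustrative, not something the project above actually contains:
#!/bin/sh
# entrypoint.sh: substitute the placeholder in package.json with the
# runtime NODE_ENV, then hand control over to npm
sed -i "s/__NODE_ENV__/${NODE_ENV}/g" package.json
exec npm run start
with the corresponding Dockerfile changes:
COPY entrypoint.sh ./
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]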
