I have a simple node app with the following Dockerfile:
FROM node:8-alpine
WORKDIR /home/my-app
COPY package.json .
COPY ./app ./app
COPY ./server.js ./
RUN rm -rf node_modules
RUN npm install \
npm run build
EXPOSE 3000
When I build the image with docker build -t my-app:latest . and then try to run the app, it complains that some modules are missing.
When I go into the container via docker run -i -t my-app:latest /bin/sh I can see that the packages have not been installed. After manually running npm install in the container, it seems to work.
I can only conclude from this that RUN npm install is not being executed correctly inside the container.
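Judging from the Dockerfile above, one likely culprit is the trailing backslash: RUN npm install \ joins with the next line into the single shell command npm install npm run build, which asks npm to install packages literally named npm, run, and build instead of installing the project's dependencies. A minimal sketch of the fix, assuming two separate steps were intended:
# install dependencies, then build, as separate steps
RUN npm install
RUN npm run build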
How does Docker build containers? I can't figure it out. I want to:
build a container
pass a local folder into it
install the npm packages in the container (using the Dockerfile) in the volume folder (so I can see them on my local drive)
run a command from my yaml config file
I've tried to list the contents of the folders with the ls command, but /src/ is always empty (it only prints: src).
My docker-compose.yml:
version: '3'
services:
  node:
    build:
      context: .
      dockerfile: Dockerfile.node
    volumes:
      - ./src:/src
    command: run develop
    tty: true
My Dockerfile.node:
FROM node:12
WORKDIR /src
COPY ./src/package*.json ./src/
RUN ls
RUN cd ./src
RUN ls
RUN npm install
RUN ls
On the RUN npm install command I got this error:
npm WARN saveError ENOENT: no such file or directory, open '/src/package.json'
I start the project with the command docker-compose up --build.
My folder structure is:
/
├── src
│   └── package.json
├── docker-compose.yml
└── Dockerfile.node
Please help, thank you in advance.
cd ./src is only effective within the current RUN command: in a Dockerfile, each command runs in a separate shell. So by the time npm install runs, your working directory is back to the WORKDIR, which is /src, not the /src/src you were expecting after cd ./src.
RUN pwd
# /src
RUN cd ./src   # here the working directory is /src/src
RUN ls
# /src <-- back to WORKDIR, while you were expecting /src/src
RUN npm install
In short, a Dockerfile has WORKDIR for changing directories persistently; there is no lasting cd.
You have two options: change the command
RUN cd ./src && npm i
or change the COPY command and leave the rest as it is:
COPY ./src/package*.json .
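Putting it together, a minimal sketch of a corrected Dockerfile.node under the second option (assuming everything should live directly under /src, as the WORKDIR suggests):
FROM node:12
WORKDIR /src
# copy the manifests into the WORKDIR itself, not into /src/src
COPY ./src/package*.json ./
RUN npm install
One caveat: the volumes: - ./src:/src bind mount in the compose file replaces the container's /src with the host folder at run time, hiding anything installed there during the build, so node_modules will only appear on the local drive if the install happens inside the running container.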
I am trying to build an Angular application in Docker and run it as a container locally using Node.js.
I have built the image using the Dockerfile below, but I am not sure what I am missing when running it. Can someone point it out?
Dockerfile:
FROM node:10.15.3
ENV HOME=/home
WORKDIR $HOME
RUN npm config set strict-ssl false \
&& npm config set proxy http://proxy.xxxxxx.com:8080
COPY package.json .
RUN npm install
The image was created successfully with the command below:
docker build -t example .
I am trying to run the image using the command below, but it is not working:
docker run -p 4201:4200 example
Your Dockerfile does not run/serve your application. In order to do that, you have to:
install angular/cli
copy the app
run/serve the app
FROM node:10.15.3
RUN npm config set strict-ssl false \
&& npm config set proxy http://proxy.xxxxxx.com:8080
# get the app
WORKDIR /src
COPY . .
# install packages
RUN npm ci
RUN npm install -g @angular/cli
# start app
CMD ng serve --host 0.0.0.0
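With that in place, rebuild and run, publishing ng serve's default port 4200 (the 4201 host port is carried over from the question):
docker build -t example .
docker run -p 4201:4200 example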
hope this helps.
A container needs a foreground process running; as long as one runs, the container will not exit. Otherwise, the container exits immediately.
For your case, you need to COPY your Node.js project into the container during docker build, and also start the project in CMD, like CMD [ "npm", "start" ]. As the web server does not exit, your container will not exit either.
There is a good article here for your reference on how to dockerize a Node.js web app.
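A minimal sketch of that idea, assuming package.json defines a start script that launches the server:
FROM node:10.15.3
WORKDIR /app
COPY package.json .
RUN npm install
# bake the app source into the image at build time
COPY . .
# the web server stays in the foreground, so the container keeps running
CMD [ "npm", "start" ]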
Just update your Dockerfile as below to achieve your goal; for more options, see here:
# base image
FROM node:12.2.0
RUN npm config set strict-ssl false \
&& npm config set proxy http://proxy.xxxxxx.com:8080
# install chrome for protractor tests
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install -yq google-chrome-stable
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install
RUN npm install -g @angular/cli@7.3.9
# add app
COPY . /app
# start app
CMD ng serve --host 0.0.0.0
Give the following Dockerfile a shot as well!
FROM node:alpine
# get the app
WORKDIR /src
# copy the manifests in before installing packages
COPY package*.json ./
RUN npm install
RUN npm install -g @angular/cli
COPY . .
# start app, binding to all interfaces so it is reachable from the host
CMD ["ng", "serve", "--host", "0.0.0.0"]
I have made a Docker image for a Node.js app and it runs perfectly locally, but in production I have to configure it with Nginx (which I installed on the host machine). We normally do it like this:
location /location_of_app_folder {
proxy_pass http://api.prv:51967/info;
}
How do I configure this in Nginx for the Docker image, and how do I run the image? We use pm2 in Node.js, which I added in the Dockerfile, but it only runs until I press Ctrl+C.
FROM keymetrics/pm2:latest-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json ./
COPY .npmrc ./
RUN npm config set registry http://private.repo/:_authToken=authtoken.
RUN npm install utilities#0.1.9
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*
RUN set NODE_ENV=production
RUN npm config set registry https://registry.npmjs.org/
RUN npm install
COPY . /app
RUN ls -al -R
EXPOSE 51967
CMD [ "pm2-runtime", "start", "pm2.json" ]
I am running the container with the command:
sudo docker run -it --network=host docker_repo_name
Publish the container's port and use the same Nginx configuration, e.g.:
sudo docker run -it -p 51967:51967 docker_repo_name
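With the port published, the host's Nginx can keep proxying over localhost. A minimal sketch, reusing the location block from the question (the /info path is carried over from there):
location /location_of_app_folder {
    proxy_pass http://127.0.0.1:51967/info;
}
And to keep the container running after you close the terminal, start it detached with -d instead of -it:
sudo docker run -d -p 51967:51967 docker_repo_name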
I have just started learning docker-compose and I am using a Node.js image. I want to install gulp to create some tasks and have one of them running in the background.
When I run: docker-compose run --rm -d server gulp watch-less
I get this error: ERROR: oci runtime error: container_linux.go:247: starting container process caused "exec: \"gulp\": executable file not found in $PATH"
Here are my files:
# Dockerfile
FROM node:6.10.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install --quiet
COPY . /usr/src/app
CMD ["npm", "start"]
# docker-compose.yml
version: "2"
services:
server:
build: .
ports:
- "5000:5000"
volumes:
- ./:/usr/src/app
I also have a .dockerignore to ignore the node_modules folder and the npm-debug.log
EDIT:
When I run docker-compose run --rm server npm install package-name I don't have any problem and the package is installed.
Try adding a global gulp install to the Dockerfile:
# Dockerfile
FROM node:6.10.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g gulp
COPY package.json /usr/src/app
RUN npm install --quiet
COPY . /usr/src/app
CMD ["npm", "start"]
I have found a solution that works, but maybe it is not the best one. I have created a script in package.json that runs the gulp task, and if I run:
docker-compose run --rm server npm run gulp_task
it works and does what it has to do.
I find that just referencing gulp via
./node_modules/.bin/gulp
instead of directly works fine. ./node_modules/.bin isn't in the PATH by default. Another option would be to add that dir to the PATH, as sketched below.
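A minimal sketch of that PATH approach as a Dockerfile line (the path assumes the /usr/src/app WORKDIR used above):
# make locally installed binaries such as gulp resolvable by name
ENV PATH /usr/src/app/node_modules/.bin:$PATH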
I'd like to run thumbd as a service inside a node Docker image! At the moment I'm just running it before I start my app, which is no use to me! Is there a way I could setup my Dockerfile to run it as an init.d service on startup without blocking any of my other docker commands?
My Dockerfile goes as follows:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Thumbd
RUN npm install -g thumbd
RUN mkdir -p /var/log/
RUN echo "" > /var/log/thumbd.log
RUN thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD npm run build && npm start
It's probably easiest to run thumbd in its own container, since it works without direct links to your application. Docker also likes to push the idea of a single process per container.
FROM node:6.2.0
# Thumbd
RUN set -uex; \
npm install -g thumbd; \
mkdir -p /var/log/; \
touch /var/log/thumbd.log
CMD thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
You can use Docker Compose to orchestrate running multiple containers in your project.
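A minimal compose sketch of that two-container layout (the Dockerfile.thumbd filename is an assumption, standing in for a file containing the thumbd Dockerfile above):
version: "2"
services:
  app:
    build: .
    ports:
      - "8080:8080"
  thumbd:
    build:
      context: .
      dockerfile: Dockerfile.thumbd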
If you really want to run multiple processes in a container, use an init system like s6 or possibly supervisord.
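For completeness, a rough sketch of the supervisord route (paths and program names are illustrative, not a drop-in config):
# Dockerfile additions
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
with a supervisord.conf along these lines:
[program:app]
command=npm start
directory=/usr/src/app

[program:thumbd]
command=thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read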