How should I run thumbd as a service inside a Docker container?

I'd like to run thumbd as a service inside a node Docker image! At the moment I'm just running it before I start my app, which is no use to me! Is there a way I could set up my Dockerfile to run it as an init.d service on startup without blocking any of my other docker commands?
My Dockerfile goes as follows:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Thumbd
RUN npm install -g thumbd
RUN mkdir -p /var/log/
RUN echo "" > /var/log/thumbd.log
RUN thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD npm run build && npm start

It's probably easiest to run thumbd in its own container, since it works without direct links to your application. Docker encourages a single process per container, too.
FROM node:6.2.0
# Thumbd
RUN set -uex; \
npm install -g thumbd; \
mkdir -p /var/log/; \
touch /var/log/thumbd.log
CMD thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
You can use Docker Compose to orchestrate running multiple containers in your project.
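A minimal docker-compose.yml sketch for that setup might look like this (assuming the thumbd Dockerfile above is saved as Dockerfile.thumbd next to your app's Dockerfile; the service names are illustrative):
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
  thumbd:
    build:
      context: .
      dockerfile: Dockerfile.thumbd
docker-compose up then starts both containers, so thumbd no longer blocks your app.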
If you really want to run multiple processes in a container, use an init system like s6 or possibly supervisord.

Related

Create Node.js app using Docker without installing node on host machine

I'm new to Docker. I wanted to create a simple Node.js app using Docker on my Ubuntu OS. But all the tutorials/YouTube videos first install Node on the host machine, then run and test the app, and then dockerize it. In other words, all the tutorials simply teach "how to dockerize an existing app".
If anyone can provide a little guidance on this then it would be really helpful.
First create a directory for your source code and create something like the starter app from here in app.js. You have to make one change though: hostname has to be 0.0.0.0 rather than 127.0.0.1.
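For reference, the standard Node.js starter app with that change applied looks roughly like this (app.js):
const http = require('http');

// Bind to 0.0.0.0 so the server is reachable from outside the container.
const hostname = '0.0.0.0';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});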
Then run an interactive node container using this command
docker run -it --rm -v $(pwd):/app -p 3000:3000 node /bin/bash
This container has your host directory mapped to /app, so if you do
cd /app
ls -al
you should see your app.js file.
Now you can run it using
node app.js
If you then switch back to the host, open a browser, and go to http://localhost:3000/, you should get a 'Hello world' response.
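You can also check it from a host terminal with curl:
curl http://localhost:3000/
# prints: Hello World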
Follow this (it appears to be the Dockerfile from the official Node.js "Dockerizing a Node.js web app" guide); note that it needs a FROM line first, e.g. a recent LTS image:
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
docker build . -t <your username>/node-web-app
docker run -p 49160:8080 -d <your username>/node-web-app
You can write the package.json (and package-lock.json) files manually if you don't want to install node/npm on your local machine.
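A minimal hand-written package.json might look like this (the name and entry point are just examples matching the guide's server.js):
{
  "name": "node-web-app",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}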

How to run multiple npm scripts in the Docker CMD command

I created a Dockerfile for a nodejs project. It contains a package.json with many scripts.
"scripts": {
"start": "npm run",
"server": "cd server && npm start",
"generator": "cd generator && npm start",
...
},
I need to run server and generator in my docker image. How can I achieve this?
I tried:
CMD ls;npm run server ; npm run generator — this won't find the package.json, because the shell form seems to run within /bin/sh -c.
CMD ["npm","run","server"] is also not working, and it is missing the second command.
The ls in the first try showed me that all files are in place (incl. the package.json).
For the sake of completeness: the project in question is https://github.com/seekwhencer/node-bilder-brause (not mine).
The current Dockerfile:
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# Bundle app source
COPY ./* ./
RUN npm install
EXPOSE 3050
EXPOSE 3055
CMD ls ; npm run server ; npm run generator
The typical way to run multiple commands in a CMD or ENTRYPOINT is to write all of the commands to a file and then run that file. This is demonstrated in the Dockerfile below.
This Dockerfile also installs imagemagick, which is a dependency of the package the OP is trying to use. I also changed the base image to node:14-alpine because it is much smaller than node:14 but works just as well for this purpose.
FROM node:14-alpine
# Install system dependencies for this package.
RUN apk add --no-cache imagemagick
# Create app directory
WORKDIR /usr/src/app
# Bundle app source
COPY . .
RUN npm install \
# Install server and generator.
&& npm run postinstall \
# Write entrypoint.
&& printf "ls\nnpm run server\nnpm run generator\n" > entrypoint.sh
EXPOSE 3050
EXPOSE 3055
CMD ["/bin/sh", "entrypoint.sh"]
docker build --tag nodebilderbrause .
docker run --rm -it nodebilderbrause
The contents of entrypoint.sh are written in the Dockerfile. Here is what the file would look like.
ls
npm run server
npm run generator
I found another way; adding it for the sake of completeness and as a reference for myself:
https://www.npmjs.com/package/concurrently
adding
RUN npm install -g concurrently
enables:
CMD ["concurrently","npm:server", "npm:generator"]
If you need to run two separate processes, the typical approach is to run two separate containers. You can run both containers off the same image; it's very straightforward to override the command part of a container when you start it.
You need to pick something to be the default CMD. Given the package.json you show, for example, you can specify
CMD npm start
When you actually go to run the container, you can specify an alternate command: anything after the image name is taken as the command. (If you're using Docker Compose, specify command: — see the Compose sketch after these commands.) A typical docker run setup might look like:
docker build -t bilder-brause .
docker network create photos
docker run \
--name server \
--net photos \
-d \
-p 3050:3050 \
bilder-brause \
npm run server
docker run \
--name generator \
--net photos \
-d \
-p 3055:3055 \
bilder-brause \
npm run generator
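The equivalent Compose file uses command: to override the default, as mentioned above (a sketch; the service names are illustrative):
version: "3"
services:
  server:
    build: .
    command: npm run server
    ports:
      - "3050:3050"
  generator:
    build: .
    command: npm run generator
    ports:
      - "3055:3055"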
You could build separate images for the different components, with separate EXPOSE and CMD directives, if you really wanted to:
FROM bilder-brause
EXPOSE 3050
CMD npm run server
Building these is a minor hassle: there is no way in Compose to specify that one local image is built FROM another, so the build ordering might not turn out correctly, for example.
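If you go that route, you would have to enforce the build order yourself outside Compose, for example (the extra Dockerfile names are hypothetical):
# The base image must exist before the derived ones are built
docker build -t bilder-brause .
docker build -t bilder-brause-server -f Dockerfile.server .
docker build -t bilder-brause-generator -f Dockerfile.generator .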

npm install not being run when building the container

I have a simple node app with the following Dockerfile:
FROM node:8-alpine
WORKDIR /home/my-app
COPY package.json .
COPY ./app ./app
COPY ./server.js ./
RUN rm -rf node_modules
RUN npm install \
npm run build
EXPOSE 3000
When I build the image with docker build -t my-app:latest . and then attempt to run the app, it complains that some modules are missing.
When I go into the container via docker run -i -t my-app:latest /bin/sh I can see that the packages have not been installed. After manually running npm install in the container, it seems to work.
I can only conclude from this that RUN npm install is not being executed correctly inside the container.
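The likely culprit is the backslash continuation in that Dockerfile: RUN npm install \ followed by npm run build is joined into the single command npm install npm run build, which tries to install packages named npm, run, and build rather than installing dependencies and then building. Chaining the two commands with && should fix it:
RUN npm install && npm run build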

How to run a Docker image and configure it with nginx

I have made a Docker image for a Node.js app. It runs perfectly locally, but in production I have to configure it with nginx (which I installed on the host machine). We normally do it like this:
location /location_of_app_folder {
proxy_pass http://api.prv:51967/info;
}
How do I configure this in nginx for the Docker image, and how should I run the image? We use pm2 in Node.js, which I added in the Dockerfile, but the container only runs until I press Ctrl+C.
FROM keymetrics/pm2:latest-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json ./
COPY .npmrc ./
RUN npm config set registry http://private.repo/:_authToken=authtoken.
RUN npm install utilities#0.1.9
RUN apk update && apk add yarn python g++ make && rm -rf /var/cache/apk/*
RUN set NODE_ENV=production
RUN npm config set registry https://registry.npmjs.org/
RUN npm install
COPY . /app
RUN ls -al -R
EXPOSE 51967
CMD [ "pm2-runtime", "start", "pm2.json" ]
I am running the container with the command:
sudo docker run -it --network=host docker_repo_name
Publish the container's port and use the same nginx configuration, e.g.:
sudo docker run -it -p 51967:51967 docker_repo_name
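With the port published, the nginx config on the host stays almost the same; it just proxies to localhost instead of the internal hostname (a sketch, assuming nginx runs on the same host as the container). Running with -d instead of -it also keeps the container in the background, so it no longer stops when you press Ctrl+C.
location /location_of_app_folder {
    proxy_pass http://localhost:51967/info;
}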

Nodemon inside docker container

I'm trying to use nodemon inside docker container:
Dockerfile
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "nodemon" ]
Build/Run command
docker build -t tag/apt .
docker run -p 49160:8080 -v /local/path/to/apt:/usr/src/app -d tag/apt
Attaching a local volume to the container to watch for changes in the code results in an override, and nodemon complains that it can't find the node modules (any of them). How can I solve this?
In your Dockerfile, you run npm install after copying your package*.json files. A node_modules directory gets correctly created in /usr/src/app and you're good to go.
When you mount your local directory on /usr/src/app, though, the contents of that directory inside the container are overridden by your local version of the node project, which apparently lacks the node_modules directory, causing the error you are experiencing.
You need to run npm install on the running container after you mounted your directory. For example you could run something like:
docker exec -ti <containername> npm install
Please note that you'll have to temporarily change your CMD instruction to something like:
CMD ["sleep", "3600"]
in order to be able to enter the container.
This will cause a node_modules directory to be created in your local directory and your container should run nodemon correctly (after switching back to your current CMD).
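Putting it together, the temporary workflow might look like this (the container name apt-dev is illustrative):
# Rebuild with the temporary CMD ["sleep", "3600"] in place
docker build -t tag/apt .
# Start the container with the volume mounted
docker run -d -p 49160:8080 -v /local/path/to/apt:/usr/src/app --name apt-dev tag/apt
# Install dependencies into the mounted directory
docker exec -ti apt-dev npm install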
TL;DR: npm install in a sub-folder, while moving the node_modules folder to the root.
Try this config; it should help you:
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install && mv /usr/src/app/node_modules /node_modules
COPY . /usr/src/app
EXPOSE 8080
CMD [ "nodemon" ]
As the other answer said, even if you have run npm install in your WORKDIR, when you mount the volume the contents of the WORKDIR are temporarily replaced by your mounted folder, in which npm install never ran.
Since Node searches for required packages in several locations, a workaround is to move the 'installed' node_modules folder to the root, which is one of the paths require() checks.
Doing so, you can still update code; only when you require a new package does the image need another build.
I referenced the Dockerfile from this Docker sample project.
In a JavaScript or Node.js application, when we bind-mount source files into a Docker container (either with the docker command or docker-compose), we end up overriding the node_modules folder. To overcome this, use an anonymous volume. With an anonymous volume we provide only the destination folder path, whereas with a bind volume we specify a source:destination folder pair.
General syntax
--volume <container file system directory absolute path>[:<read-write access>]
An example docker run command
docker container run \
--rm \
--detach \
--publish 3000:3000 \
--name hello-dock-dev \
--volume $(pwd):/home/node/app \
--volume /home/node/app/node_modules \
hello-dock:dev
For further reference, check this handbook by Farhan Hasin Chowdhury.
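The docker-compose equivalent of the run command above would be roughly (a sketch reusing the same paths and image tag):
version: "3"
services:
  hello-dock:
    image: hello-dock:dev
    ports:
      - "3000:3000"
    volumes:
      - ./:/home/node/app
      - /home/node/app/node_modules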
Maybe there's no need to mount the whole project. In this case, I would only mount the directory where I put all the source files, e.g. src/.
This way you won't have any problem with the node_modules/ directory.
Also, if you are using Windows, you may need to add the -L (--legacy-watch) option to the nodemon command, as you can see in this issue. So it would be nodemon -L.
