Simple node.js workflow with Docker

I'm using Docker on Windows for development purposes and I'm trying to create a simple workflow for a Node.js project.
I followed this tutorial (https://nodejs.org/en/docs/guides/nodejs-docker-webapp/), so my Dockerfile looks like this:
FROM node:boron
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json .
# For npm@5 or later, copy package-lock.json as well
# COPY package.json package-lock.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
My "workflow" for each change would look like this
FIRST BUILD
docker build -t thomas/myApp DockerProjects/myApp ; docker run --name app -p 49160:8080 -d thomas/myApp
AFTER EACH CHANGE
docker build -t thomas/myApp DockerProjects/myApp ; docker stop app ; docker rm app ; docker run --name app -p 49160:8080 -d thomas/myApp
I don't want to end up with hundreds of containers after each change in the project; that's why I delete the old one before creating another.
I see several problems:
Each time there is a change and a new image is built, a new <none>:<none> image is created. These images appear to have the same size as the original one. How can I avoid that?
Can I use nodemon somehow ?
Can I launch this process automatically each time I change something in the code ?
Docker is quite new to me and I'm still experimenting with it.
Thanks

You can use nodemon in your project to restart your app automatically, with your source directory mounted as a volume.
For instance, with this directory structure (which uses Grunt from package.json to run nodemon):
app/
├── Dockerfile
├── package.json
├── Gruntfile.js
├── src/
│   └── app.js
└── docker-compose.yml
You can use docker-compose, a tool for defining and running multi-container applications. This can be useful if you want to add a database container your app would talk to, or any additional services interacting with your app.
The following docker-compose config will mount the src folder at /usr/src/app/src in the container. With nodemon watching for changes inside src, you can edit files on your machine and the app will restart automatically inside the container.
To use this you would do:
cd app
docker-compose up
The command above will build the image from the Dockerfile and start the containers defined in docker-compose.yml.
docker-compose.yml:
version: '2'
services:
  your-app:
    build: .
    ports:
      - "8080:8080"
    restart: always
    container_name: app_container
    volumes:
      - ./src:/usr/src/app/src
    environment:
      - SERVER_PORT=8080
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json .
COPY Gruntfile.js .
RUN npm install
CMD ["npm","start"]
Gruntfile.js:
var path = require('path');

module.exports = function (grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    concurrent: {
      dev: {
        tasks: ['nodemon'],
        options: {
          logConcurrentOutput: true
        }
      }
    },
    nodemon: {
      dev: {
        script: 'src/app.js',
        options: {
          ignore: [
            'node_modules/**'
          ],
          ext: 'js'
        }
      }
    },
    clean: {}
  });

  grunt.loadNpmTasks('grunt-concurrent');
  grunt.loadNpmTasks('grunt-nodemon');

  grunt.registerTask('default', ['concurrent']);
};
package.json:
{
  "name": "your-app",
  "version": "1.0.0",
  "description": "service",
  "scripts": {
    "start": "grunt"
  },
  "author": "someone",
  "license": "MIT",
  "dependencies": {
    "express": "^4.14.0"
  },
  "devDependencies": {
    "grunt": "1.x.x",
    "grunt-cli": "1.x.x",
    "grunt-concurrent": "2.x.x",
    "grunt-nodemon": "0.4.x"
  }
}
Sample app.js:
'use strict';

const express = require('express');
const port = process.env.SERVER_PORT;

var app = express();

app.get('/', function(req, res) {
  res.send('Hello World');
});

app.listen(port, function() {
  console.log('listening on port ' + port);
});
To rebuild the image, you would run docker-compose build.
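Note that only src is bind-mounted, so if you change package.json you need to rebuild the image for npm install to run again. A typical loop with the compose file above would be:
docker-compose build
docker-compose up -d
or, on reasonably recent versions of docker-compose, in one step:
docker-compose up -d --build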

Each time there is a change and a new image is built, a new <none>:<none> image is created. These images appear to have the same size as the original one. How can I avoid that?
You can't avoid it. That <none>:<none> image is your previous image, left dangling when the new build took over the tag. It also takes less disk space than it appears to, since unchanged layers are shared with the new image. Just delete the dangling images periodically: docker image prune
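For example (docker image prune exists on Docker 1.13 and later; on older versions you can remove the dangling images by ID):
docker images -f dangling=true    # list the <none>:<none> images
docker image prune                # remove them (Docker 1.13+)
docker rmi $(docker images -f dangling=true -q)    # older Docker versions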
Can I use nodemon somehow?
I'm not familiar with it, but it looks like it only restarts your server and doesn't do an npm install.
Can I launch this process automatically each time I change something in the code?
I would use Jenkins and automatically build your new Docker image on each git commit.
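For illustration only, a minimal declarative Jenkinsfile along those lines might look like this; it is a sketch, not your setup, and note that Docker repository names must be lowercase, so myApp becomes myapp here:
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        // rebuild the image on every commit, tagged with the Jenkins build number
        sh 'docker build -t thomas/myapp:$BUILD_NUMBER DockerProjects/myApp'
      }
    }
    stage('Deploy') {
      steps {
        // replace the running container with one from the fresh image
        sh 'docker stop app || true'
        sh 'docker rm app || true'
        sh 'docker run --name app -p 49160:8080 -d thomas/myapp:$BUILD_NUMBER'
      }
    }
  }
}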

Related

Unable to Dockerize Vite React-Typescript Project

I am trying to dockerize a Vite React-TypeScript boilerplate setup, but I am unable to connect to the container.
I installed the vite-react-typescript boilerplate:
npm init vite@latest vite-docker-demo -- --template react-ts
Dockerfile
# Declare the base image
FROM node:lts-alpine3.14
# Build step
# 1. copy package.json and package-lock.json to /app dir
RUN mkdir /app
COPY package*.json /app
# 2. Change working directory to newly created app dir
WORKDIR /app
# 3. Install dependencies
RUN npm ci
# 4. Copy the source code to /app dir
COPY . .
# 5. Expose port 3000 on the container
EXPOSE 3000
# 6. Run the app
CMD ["npm", "run", "dev"]
Command to run docker container in detached mode and open local dev port 3000 on host:
docker run -d -p 3000:3000 vite
The vite instance seems to be running just fine within the container (docker logs output):
> vite-docker@0.0.0 dev /app
> vite
Pre-bundling dependencies:
react
react-dom
(this will be run only when your dependencies or config have changed)
vite v2.4.4 dev server running at:
> Local: http://localhost:3000/
> Network: use `--host` to expose
ready in 244ms.
However, when I navigate to http://localhost:3000/ in Chrome, I see an error indicating "The connection was reset".
Any help resolving this issue would be greatly appreciated!
It turns out that I needed to configure host to something other than localhost.
Within vite.config.ts:
import { defineConfig } from 'vite'
import reactRefresh from '@vitejs/plugin-react-refresh'

// https://vitejs.dev/config/
export default defineConfig({
  server: {
    host: '0.0.0.0',
    port: 3000,
  },
  plugins: [reactRefresh()],
})
This resolves the issue.
In package.json, use the script:
"dev": "vite --host"
example:
"scripts": {
"dev": "vite --host",
"build": "tsc && vite build",
"serve": "vite preview"
},
Or run it directly with vite --host.
To use hot reload, so the changes actually get reflected:
import { defineConfig } from 'vite'

// ENV_VARIABLES stands in for however you load your own configuration
// (a constants module or process.env); it is not a vite built-in.
export default (conf: any) => {
  return defineConfig({
    server: {
      host: "0.0.0.0",
      hmr: {
        // the port the browser connects to on the host
        clientPort: ENV_VARIABLES.OUTER_PORT_FRONTEND,
      },
      // the port vite listens on inside the container
      port: ENV_VARIABLES.INNER_PORT_FRONTEND_DEV,
      watch: {
        // file events often don't cross bind mounts; poll instead
        usePolling: true,
      },
    },
  });
};
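To make the two ports concrete, here is an illustrative mapping (my example, not from the config above), assuming vite listens on 3000 inside the container and is published as 8080 on the host:
services:
  frontend:
    ports:
      - "8080:3000"   # OUTER_PORT_FRONTEND : INNER_PORT_FRONTEND_DEV
The browser's HMR websocket connects to the host-side port (8080), which is why clientPort is set to the outer port while port stays the inner one; usePolling helps because file-change events often don't propagate across a bind mount.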

docker run puts me into a node repl

When I go to run docker run -p 8080:3000 --name cabinfeverInstance -t something/cabinfever I get put into a Node.js REPL, when I expect to see "Listening on port: 3000". Going to localhost:8080 results in a "didn't send any data. ERR_EMPTY_RESPONSE". I have found no other instances of this occurring (maybe I am not searching Google with the right words/phrases). I did find this issue which seems somewhat related. Their solution was to "make the containerized app expect requests from the windows host machine IP (not container IP)", but I am not sure how to implement that, and their solution may not apply to my situation anyway.
What I have tried:
Cleaning/purging the data.
Running docker run -p 8080:3000/tcp -p 8080:3000/udp --name cabinfeverInstance -t something/cabinfever
Not specifying the specific port (8080).
Specifying 0.0.0.0.
Several additional ideas.
None have worked and I still get the wonderful Node REPL.
If anyone has any suggestions, I would appreciate them.
Here are the relevant files:
index.js
var express = require('express');
var app = express();
var port = 3000;

app.get('/', function(req, res) {
  console.log(new Date().toLocaleString());
  res.send(new Date().toLocaleString());
});

app.listen(port, () => {
  console.log(`Listening at http://localhost:${port}`);
});
package.json:
{
  "name": "docnode",
  "version": "1.0.0",
  "description": "barebones node on docker",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node index.js"
  },
  "author": "my.email.address@gmail.com",
  "license": "MIT",
  "dependencies": {
    "express": "^4.15.5"
  }
}
docker-compose.yml:
version: "3"
services:
node:
build: .
command: node index.js
ports:
- "3000:3000"
Dockerfile:
FROM node:slim
LABEL author="my.email.address@gmail.com"
WORKDIR /app
# copy code, install npm dependencies
COPY index.js /app/index.js
COPY package.json /app/package.json
RUN npm install
You should specify the command to run in the Dockerfile, not the docker-compose.yml file:
CMD ["node", "index.js"]
When you docker run an image, the only options it considers are those on the docker run command line. docker run doesn't know about docker-compose.yml, so if you specify things like command: there it won't be honored. Since the CMD isn't in the Dockerfile and it isn't on the docker run command line (after the image name), you fall back to the base image's CMD, which in the case of node is to run the REPL.
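After adding the CMD, rebuild the image before running it again; for example, with the names from the question:
docker build -t something/cabinfever .
docker run -p 8080:3000 --name cabinfeverInstance -t something/cabinfever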
With this change you don't need to override command: in the docker-compose.yml file and you can delete that line. Running
docker-compose up -d
will start the container(s) in the docker-compose.yml file with the options specified there (note, the ports: mapping and the docker run -p ports are slightly different).
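For example, with the files above the compose file publishes the app on host port 3000, while the docker run command in the question published it on host port 8080:
docker run -p 8080:3000 ...    # app answers at http://localhost:8080
# vs. ports: - "3000:3000"     # app answers at http://localhost:3000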

Node.js Error: Cannot find module (local file)

I am following Node.js's "Intro to Docker" tutorial. When I run npm start, the project works, but when I run docker run (with options), the build is generated and then I find the error below in the logs. The project is bare-bones, simple, and straightforward; I'm not sure what I'm missing here. I got a very similar error deploying to production earlier (to Heroku, without Docker), where local runs looked good and live deploys failed the same way.
I'm not sure if I'm using something outdated; I updated npm and Docker, and I'm not sure what else could be out of date.
Any help is appreciated!
Error:
internal/modules/cjs/loader.js:969
throw err;
^
Error: Cannot find module '/usr/src/app/server.js'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:966:15)
at Function.Module._load (internal/modules/cjs/loader.js:842:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
package.json:
{
  "name": "SampleProject",
  "version": "1.0.0",
  "description": "Node.js on Docker",
  "author": "First Last <first.last@example.com>",
  "main": "server.js",
  "scripts": {
    "start": "node server.js",
    "dock": "docker run -p 1234:1234 -d <My>/<Info>"
  },
  "dependencies": {
    "core-util-is": "^1.0.2",
    "express": "^4.17.1"
  }
}
Dockerfile
# I'm using Node.js -> Import its image
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Run on port 1234
EXPOSE 1234
CMD [ "node", "server.js" ]
server.js
'use strict';

const express = require('express');

// Constants
const PORT = 1234;
const HOST = '0.0.0.0';

// App
const app = express();

app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
Run:
npm run dock
Note:
I've also cleaned out the project by running the following:
rm -rf node_modules package-lock.json && npm install && npm start
RESOLVED:
Dockerfile
# I'm using Node.js -> Import its image
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# * BEGIN SOLUTION *
# Bundle app source
COPY . .
# * END SOLUTION *
# Run on port 1234
EXPOSE 1234
CMD [ "node", "server.js" ]
I think you are missing the following important part, which should be placed after RUN npm install:
To bundle your app's source code inside the docker image, use the COPY instruction:
# Bundle app source
COPY . .
And to force re-execution of each step in the Dockerfile (ignoring the build cache):
docker build --no-cache .

How do I run a webpack build from a docker container?

The app I'm making is written in ES6 and other goodies, and is transpiled by webpack inside a Docker container. At the moment everything works, from creating the inner directory to installing dependencies and creating the compiled bundle file.
When running the container, though, it says that dist/bundle.js does not exist. But if I create the bundle file in the host directory, it works.
I've tried creating a volume for the dist directory and it works the first time, but after making changes and rebuilding it does not pick up the new changes.
What I'm trying to achieve is having the container build and run the compiled bundle. I'm not sure if the webpack part should be in the Dockerfile as a build step or happen at runtime, since CMD ["yarn", "start"] crashes but RUN ["yarn", "start"] works.
Any suggestions and help are appreciated. Thanks in advance.
|_ src
|  |_ index.js
|_ dist
|  |_ bundle.js
|_ Dockerfile
|_ .dockerignore
|_ docker-compose.yml
|_ webpack.config.js
|_ package.json
|_ yarn.lock
docker-compose.yml
version: "3.3"
services:
server:
build: .
image: selina-server
volumes:
- ./:/usr/app/selina-server
- /usr/app/selina-server/node_modules
# - /usr/app/selina-server/dist
ports:
- 3000:3000
Dockerfile
FROM node:latest
LABEL version="1.0"
LABEL description="This is the Selina server Docker image."
LABEL maintainer="AJ alvaroo@selina.com"
WORKDIR "/tmp"
COPY ["package.json", "yarn.lock*", "./"]
RUN ["yarn"]
WORKDIR "/usr/app/selina-server"
RUN ["ln", "-s", "/tmp/node_modules"]
COPY [".", "./"]
RUN ["yarn", "run", "build"]
EXPOSE 3000
CMD ["yarn", "start"]
.dockerignore
.git
.gitignore
node_modules
npm-debug.log
dist
package.json
{
  "scripts": {
    "build": "webpack",
    "start": "node dist/bundle.js"
  }
}
I was able to get the dockerized service reachable in the browser with webpack by adding the following lines to webpack.config.js:
module.exports = {
  //...
  devServer: {
    host: '0.0.0.0',
    port: 3000
  },
};
Docker seems to want the internal container address to be 0.0.0.0 and not localhost, which is webpack's default. Changing the webpack.config.js specification and copying it into the container at build time allowed the correct port to be reached at http://localhost:3000 on the host machine. It worked for my project; hope it works for yours.
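If it helps, the build-and-run loop with that config baked in would be something like this (the image name is a placeholder):
docker build -t my-webpack-app .
docker run -p 3000:3000 my-webpack-app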
I haven't included my src tree structure, but it's basically identical to yours. I use the following Docker setup to get it running, and it's how we dev every day.
In package.json we have:
"scripts": {
  "start": "npm run lint-ts && npm run lint-scss && webpack-dev-server --inline --progress --port 6868"
}
Dockerfile
FROM node:8.11.3-alpine
WORKDIR /usr/app
COPY package.json .npmrc ./
RUN mkdir -p /home/node/.cache/yarn && \
    chmod -R 0755 /home/node/.cache && \
    chown -R node:node /home/node && \
    apk --no-cache add \
    g++ gcc libgcc libstdc++ make python
COPY . .
EXPOSE 6868
ENTRYPOINT [ "/bin/ash" ]
docker-compose.yml
version: "3"
volumes:
yarn:
services:
web:
user: "1000:1000"
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
container_name: "some-app"
command: -c "npm config set proxy=$http_proxy && npm run start"
volumes:
- .:/usr/app/
ports:
- "6868:6868"
Please note this Dockerfile is not suitable for production; it's for a dev environment, as it's running stuff as root.
With this Dockerfile there is a gotcha.
Because Alpine is on musl and the host is on glibc, node modules with compiled natives installed on the host won't work inside the Docker container. Once the container is up, if you get such an error, we run this to fix it (it's a bit of a sticking plaster right now):
docker-compose exec container_name_goes_here /bin/ash -c "npm rebuild node-sass --force"
Icky, but it works.
Try changing your start script in package.json to perform the build first (doing this, you won't need the RUN instruction performing the build in your Dockerfile):
{
  "scripts": {
    "build": "webpack",
    "start": "webpack && node dist/bundle.js"
  }
}

Docker /bin/bash: nodemon: command not found

I am trying to mount my working node code from my host into a docker container and run it with nodemon, using docker-compose.
But the container doesn't seem to be able to find nodemon.
Note: My host machine does not have node or npm installed on it.
Here are the files in the root folder of my project (test). (This is only a rough draft)
Dockerfile
FROM surenderthakran/nodejs:v4
ADD . /test
WORKDIR /test
RUN make install
CMD make run
Makefile
SHELL:=/bin/bash
PWD:=$(shell pwd)
export PATH:=$(PWD)/node_modules/.bin:$(PWD)/bin:$(PATH)
DOCKER:=$(shell grep docker /proc/1/cgroup)

install:
	@echo Running make install......
	@npm config set unsafe-perm true
	@npm install

run:
	@echo Running make run......
# Check if we are inside a docker container
ifdef DOCKER
	@echo We are dockerized!! :D
	@nodemon index.js
else
	@nodemon index.js
endif
.PHONY: install run
docker-compose.yml
app:
  build: .
  command: make run
  volumes:
    - .:/test
  environment:
    NODE_ENV: dev
  ports:
    - "17883:17883"
    - "17884:17884"
package.json
{
  "name": "test",
  "version": "1.0.0",
  "description": "test",
  "main": "index.js",
  "dependencies": {
    "express": "^4.13.3",
    "nodemon": "^1.8.0"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "api",
    "nodejs",
    "express"
  ],
  "author": "test",
  "license": "ISC"
}
index.js
'use strict';
var express = require('express');
I build my image using docker-compose build. It finishes successfully.
But when I try to run it using docker-compose up, I get:
Creating test_app_1...
Attaching to test_app_1
app_1 | Running make run......
app_1 | We are dockerized!! :D
app_1 | /bin/bash: nodemon: command not found
app_1 | make: *** [run] Error 127
test_app_1 exited with code 2
Gracefully stopping... (press Ctrl+C again to force)
Can anyone please advise?
Note: The Dockerfile for my base image surenderthakran/nodejs:v4 can be found here: https://github.com/surenderthakran/dockerfile_nodejs/blob/master/Dockerfile
The issue has been resolved. It boiled down to me not having node_modules in the mounted volume.
Basically, during docker-compose build the image was built correctly, with the actual code added to the image and the node_modules folder created in the project root by npm install. But with docker-compose up, the host directory was mounted over the project root, hiding the code added earlier, including the freshly created node_modules folder.
So as a solution I compromised: I installed Node.js on my host and ran npm install there. That way, when the code is mounted, I still get my node_modules folder in the project root, because it is mounted from the host as well.
Not a very elegant solution, but since it is a development setup I am ready for the compromise. In production I would build with docker build and run with docker run, and wouldn't be using nodemon anyway.
If anyone can suggest a better solution I will be grateful.
Thanks!!
I believe you should use a preinstall script in your package.json.
So, in the scripts section, just add:
"scripts": {
  "preinstall": "npm i nodemon -g",
  "start": "nodemon app.js"
}
And you should be good to go :)
Pretty late for an answer, but you could use something called named volumes to keep your node_modules in Docker's volume space. That way it would hide your bind mount at that path.
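A sketch of that approach (it assumes the version 2+ compose file format, unlike the v1 file above; the volume name node_modules is arbitrary):
version: '2'
services:
  app:
    build: .
    command: make run
    volumes:
      - .:/test
      - node_modules:/test/node_modules
volumes:
  node_modules: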
You need to mount node_modules as a volume in the docker container; the extra - /test/node_modules entry creates an anonymous volume at that path, so the node_modules installed into the image isn't hidden by the .:/test bind mount.
e.g.
docker-compose.yml
app:
  build: .
  command: make run
  volumes:
    - .:/test
    - /test/node_modules
  environment:
    NODE_ENV: dev
  ports:
    - "17883:17883"
    - "17884:17884"
I've figured out how to do this without a Dockerfile, in case that's useful to anyone...
You can run multiple commands in the docker-compose.yml command line by using sh -c.
my-server:
  image: node:18-alpine
  build: .
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules
  working_dir: /usr/src/app
  ports:
    - "3100:3100"
  command: sh -c "npm install -g nodemon && npm install && npm run dev"
  environment:
    NODE_ENV: development
    PORT: 3100
