When I run docker run -p 8080:3000 --name cabinfeverInstance -t something/cabinfever I get dropped into a Node.js REPL, when I expect to see "Listening on port: 3000". Going to localhost:8080 results in a "didn't send any data. ERR_EMPTY_RESPONSE" error.
I have found no other reports of this happening (maybe I am not searching Google for the right words/phrases). I did find this issue, which seems somewhat related. Their solution was to "make the containerized app expect requests from the windows host machine IP (not container IP)", but I am not sure how to implement that, and their solution may not apply to my situation anyway.
What I have tried:
Cleaning/purging the Docker data.
Running docker run -p 8080:3000/tcp -p 8080:3000/udp --name cabinfeverInstance -t something/cabinfever
Not specifying the specific port (8080).
Specifying 0.0.0.0.
Several additional ideas.
None have worked and I still get the wonderful Node REPL.
If anyone has any suggestions, I would appreciate them.
Here are the relevant files:
index.js:
var express = require('express');
var app = express();
var port = 3000;

app.get('/', function (req, res) {
  console.log(new Date().toLocaleString());
  res.send(new Date().toLocaleString());
});

app.listen(port, () => {
  console.log(`Listening at http://localhost:${port}`);
});
package.json:
{
  "name": "docnode",
  "version": "1.0.0",
  "description": "barebones node on docker",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "index.js"
  },
  "author": "my.email.address@gmail.com",
  "license": "MIT",
  "dependencies": {
    "express": "^4.15.5"
  }
}
docker-compose.yml:
version: "3"
services:
  node:
    build: .
    command: node index.js
    ports:
      - "3000:3000"
Dockerfile:
FROM node:slim
LABEL author="my.email.address@gmail.com"
WORKDIR /app
# copy code, install npm dependencies
COPY index.js /app/index.js
COPY package.json /app/package.json
RUN npm install
You should specify the command to run in the Dockerfile, not the docker-compose.yml file:
CMD ["node", "index.js"]
When you docker run an image, the only options it considers are those on the docker run command line. docker run doesn't know about docker-compose.yml, so if you specify things like command: there it won't be honored. Since the CMD isn't in the Dockerfile and it isn't on the docker run command line (after the image name), you fall back to the base image's CMD, which in the case of node is to run the REPL.
With this change you don't need to override command: in the docker-compose.yml file and you can delete that line. Running
docker-compose up -d
will start the container(s) in the docker-compose.yml file with the options specified there. (Note that the ports: mapping and the docker run -p option are slightly different: the compose file publishes host port 3000, while your docker run command published host port 8080.)
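For reference, this is the question's Dockerfile with the CMD added at the end. After rebuilding, docker run -p 8080:3000 --name cabinfeverInstance -t something/cabinfever should print the listening message and serve the app at http://localhost:8080.
FROM node:slim
LABEL author="my.email.address@gmail.com"
WORKDIR /app
# copy code, install npm dependencies
COPY index.js /app/index.js
COPY package.json /app/package.json
RUN npm install
# start the server instead of falling back to the base image's REPL
CMD ["node", "index.js"]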
Related
I'm trying to build a Docker image of my Node backend for deployment, but when I run it in a container and open it in the browser I get a "This site can't be reached" error and the following log in dev tools:
crbug/1173575, non-JS module files deprecated
My backend is based on GraphQL Apollo Server. The Dockerfile is as follows:
FROM node:16
WORKDIR /app
COPY ./package*.json ./
RUN npm ci --only=production
# RUN npm install
COPY . .
# RUN npm run build
EXPOSE 4000
CMD [ "node", "dist/main.js" ]
I've also tried to use the commented code, with no result.
The image builds without a problem and after running the container I get 🚀 Server ready at localhost:4000 in the docker logs, so I'd expect it to work properly.
"scripts": {
"build": "tsc",
"start": "node dist/main.js",
"dev": "concurrently \"tsc -w\" \"nodemon dist/main.js\""
},
That's the scripts part of my package.json. I've also tried CMD ["npm", "start"] in the Dockerfile, but that doesn't work either. When I run the backend from the terminal using npm start I can access the GraphQL playground at localhost:4000 - I assume that should be the same with Docker?
I'm still new to Docker, so I'd be grateful for any hints. Thanks.
EDIT:
I run the container with the following command:
docker run --rm -d -p 4000:80 image-name:latest
Seemingly it's running on 0.0.0.0:4000, as that's what it says under 'PORTS' when I execute docker ps.
Run the docker inspect command on the container to get its IP address, then open that IP in the browser.
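For example, something along these lines should print the container's IP address (assuming the default bridge network; container-name is a placeholder for the name or ID shown by docker ps):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container-name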
When I try to deploy a container using docker-compose, I get the following error:
testing |
testing | > test#1.0.0 start
testing | > npm-run-all --parallel start:server
testing |
testing |
testing | ERROR: "start:server" exited with 243.
testing exited with code 1
This only happens with node:18.4.0 images, and I have to use that Node version.
My Dockerfile:
FROM node:18.4.0-alpine3.16
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
RUN npm install
COPY . /app
EXPOSE 80
CMD npm start
My docker-compose.yml:
version: '2'
services:
  testing:
    container_name: testing
    build:
      context: .
    volumes:
      - '.:/app'
    ports:
      - 80
      - 9009:9009
My app (index.js):
const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})
My package.json:
{
  "name": "test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "npm-run-all --parallel start:server",
    "start:server": "nodemon .",
    "start:web": "echo web starting"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.18.1",
    "nodemon": "^2.0.18"
  },
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}
OS: Ubuntu 20.04.4 LTS
docker-compose: version 1.29.2
docker: Docker version 20.10.12, build 20.10.12-0ubuntu2~20.04.1
Maybe you have already found the problem, but this might help someone: I was having the same issue with the node:bullseye image, which comes with npm v8.13.0, so I updated npm to the latest version (v8.15.1, in my case) and that solved it.
So, to keep using this image with the latest npm version, I put this in the Dockerfile:
RUN npm install -g npm@latest
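If you want to apply the same idea to the question's image, the upgrade would presumably go right after the FROM line, for example:
FROM node:18.4.0-alpine3.16
# upgrade the bundled npm before installing dependencies
RUN npm install -g npm@latest
# ... rest of the Dockerfile unchanged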
You have a volume mapping onto the /app path. That hides everything in that path in the image, including the npm packages you install when building your Dockerfile.
Also, your port mappings don't match your app. Your app listens on port 3000, so you should map port 3000 to a host port.
If you use this docker-compose file, it'll work.
version: '2'
services:
  testing:
    container_name: testing
    build:
      context: .
    ports:
      - 3000:3000
Then you can go to http://localhost:3000/ and you'll get your "Hello World!" message.
First of all, in your index.js you listen on port 3000 (const port = 3000), but in your docker-compose you exposed ports 80 and 9009:
ports:
  - 80
  - 9009:9009
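So to reach the app from the host, the mapping would need to publish port 3000 instead, for example:
ports:
  - 3000:3000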
A tip: you don't have to COPY ./package.json separately, since you copy the entire folder into the container anyway:
WORKDIR /app
COPY . /app
RUN npm install
EXPOSE 80
CMD npm start
Hi, I'm trying to dockerize an app I'm currently working on. It uses Node.js and MariaDB. I'm having some difficulties figuring out how to make nodemon work.
I tried using --legacy-watch, or -L which is the short form, but it didn't change the result.
npm installs all dependencies correctly, and I even get the nodemon startup text, but it doesn't restart the server when I make changes.
Would be glad if anyone could help.
package.json:
{
  "name": "nodejs_mariadb_docker_test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node src/index.js",
    "dev": "nodemon -L src/index.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.2",
    "mariadb": "^2.5.5",
    "nodemon": "^2.0.15"
  }
}
Dockerfile for nodejs:
# Specifies the image of your engine
FROM node:16.13.2
# The working directory inside your container
WORKDIR /app
# Get the package.json first to install dependencies
COPY package.json /app
# This will install those dependencies
RUN npm install
# Copy the rest of the app to the working directory
COPY . /app
# Command to run when the container starts
CMD ["npm", "run", "dev"]
and the docker compose file:
version: "3"
services:
node:
build: .
container_name: express-api
ports:
- "80:8000"
depends_on:
- mysql
mysql:
image: mariadb:latest
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: "password"
volumes:
- mysqldata:/var/lib/mysql
- ./mysql-dump:/docker-entrypoint-initdb.d
volumes:
mysqldata: {}
So the obvious problem is that you do not mount your code into the container. That is why nodemon cannot see any changes and react to them.
Additionally, it may be more straightforward to develop the application locally and only use Docker as a means to package/ship it.
If you still want to go down this route, I would suggest something like this.
services:
  express-api:
    build: ./
    # overwrite the prod command
    command: npm run dev
    ports:
      - "80:8000"
    volumes:
      # mount your code folder into the app folder
      - .:/app
  # mysql stuff ...
In your Dockerfile you can set the command to the production one, since in development compose will override it.
FROM node:16.13.2
WORKDIR /app
COPY package.json package-lock.json ./
# use ci to install from the lock file,
# to avoid surprises in prod
RUN npm ci
COPY . ./
# use the prod command
CMD ["npm", "run", "start"]
This will do a bit of redundant work in development, like copying the code, but it should be OK.
Additionally, you may want to use a .dockerignore to ignore the mysqldump for example. Otherwise, it will be copied into the image, which is probably not desirable.
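A minimal .dockerignore for this setup might look like this (assuming the folder names used in the question):
node_modules
mysql-dump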
Also note that with npm ci your dependencies are locked and won't update automatically. It will also throw errors if your lock file is not in sync with package.json. This is what you want for production. If you develop locally, you can run npm install locally or via docker exec to bump the dependencies if required. Then you can check that nothing is broken, and be confident your prod image will be fine, since it installs from the lock file again.
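For example, to bump a dependency inside the running container (assuming the service name express-api from the compose sketch above; some-package is just a placeholder):
docker-compose exec express-api npm install some-package@latest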
I am trying to mount my working node code from my host into a docker container and run it using nodemon using docker-compose.
But container doesn't seems to be able to find nodemon.
Note: My host machine does not have node or npm installed on it.
Here are the files in the root folder of my project (test). (This is only a rough draft)
Dockerfile
FROM surenderthakran/nodejs:v4
ADD . /test
WORKDIR /test
RUN make install
CMD make run
Makefile
SHELL := /bin/bash
PWD := $(shell pwd)
export PATH := $(PWD)/node_modules/.bin:$(PWD)/bin:$(PATH)
DOCKER := $(shell grep docker /proc/1/cgroup)

install:
	@echo Running make install......
	@npm config set unsafe-perm true
	@npm install

run:
	@echo Running make run......
	# Check if we are inside docker container
ifdef DOCKER
	@echo We are dockerized!! :D
	@nodemon index.js
else
	@nodemon index.js
endif

.PHONY: install run
docker-compose.yml
app:
  build: .
  command: make run
  volumes:
    - .:/test
  environment:
    NODE_ENV: dev
  ports:
    - "17883:17883"
    - "17884:17884"
package.json
{
  "name": "test",
  "version": "1.0.0",
  "description": "test",
  "main": "index.js",
  "dependencies": {
    "express": "^4.13.3",
    "nodemon": "^1.8.0"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "api",
    "nodejs",
    "express"
  ],
  "author": "test",
  "license": "ISC"
}
index.js
'use strict';
var express = require('express');
I build my image using docker-compose build. It finishes successfully.
But when I try to run it using docker-compose up, I get:
Creating test_app_1...
Attaching to test_app_1
app_1 | Running make run......
app_1 | We are dockerized!! :D
app_1 | /bin/bash: nodemon: command not found
app_1 | make: *** [run] Error 127
test_app_1 exited with code 2
Gracefully stopping... (press Ctrl+C again to force)
Can anyone please advise?
Note: The Dockerfile for my base image surenderthakran/nodejs:v4 can be found here: https://github.com/surenderthakran/dockerfile_nodejs/blob/master/Dockerfile
The issue has been resolved. It boiled down to me not having node_modules in the mounted volume.
Basically, during docker-compose build the image was built correctly, with the actual code added to the image and the node_modules folder created in the project root by npm install. But with docker-compose up the host directory was being mounted over the project root, overriding the code added earlier, including the newly created node_modules folder.
So as a solution I compromised: I installed Node.js on my host and ran npm install there. That way, when the code is mounted I still get a node_modules folder in my project root, because it is mounted from my host as well.
Not a very elegant solution, but since it is a development setup I am ready for the compromise. In production I would set things up using docker build and docker run and wouldn't be using nodemon anyway.
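For reference, that production flow might look something like this (the image tag test-api is just an example; the published ports come from the compose file above):
docker build -t test-api .
docker run -d -p 17883:17883 -p 17884:17884 test-api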
If anyone can suggest a better solution I will be grateful.
Thanks!!
I believe you should use a preinstall script in your package.json.
So, in the scripts section, just add:
"scripts": {
  "preinstall": "npm i nodemon -g",
  "start": "nodemon app.js"
}
And you should be good to go :)
Pretty late for an answer, but you could use what are called named volumes to keep your node_modules in Docker's volume space. That way the node_modules path is not shadowed by your bind mount.
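A sketch of that idea, adapting the compose file from the question to the version 2 format (the volume name node_modules_data is just for illustration):
version: "2"
services:
  app:
    build: .
    command: make run
    volumes:
      # bind mount the code, but keep node_modules in a named volume
      - .:/test
      - node_modules_data:/test/node_modules
    environment:
      NODE_ENV: dev
    ports:
      - "17883:17883"
      - "17884:17884"
volumes:
  node_modules_data: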
You need to set up node_modules as a mounted volume in the Docker container, e.g.:
docker-compose.yml
app:
  build: .
  command: make run
  volumes:
    - .:/test
    - /test/node_modules
  environment:
    NODE_ENV: dev
  ports:
    - "17883:17883"
    - "17884:17884"
I've figured out how to do this without a Dockerfile, in case that's useful to anyone...
You can run multiple commands in the docker-compose.yml command line by using sh -c.
my-server:
  image: node:18-alpine
  build: .
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules
  working_dir: /usr/src/app
  ports:
    - "3100:3100"
  command: sh -c "npm install -g nodemon && npm install && npm run dev"
  environment:
    NODE_ENV: development
    PORT: 3100
I'm working with Docker and I'm wondering how I can get the command npm start to locate the app.js file without me doing it via the command line.
My package.json (located in /srv/www) looks like so:
{
  "name": "dist",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "start": "forever start -c \"nodemon --harmony\" app.js --exitcrash"
  },
  "author": "",
  "license": "ISC"
}
I'm currently invoking my docker image as so:
docker run -d -v /srv/docker/instantynode/srv:/srv -p 80:8080 myimg ???
I am hoping to replace the ??? with a command that will start up Node and invoke npm start in the correct directory. Any ideas?
I was thinking maybe of making a little startup script to fix this, but I was wondering if npm can handle this on its own?
For a Node.js application, you can start from an ONBUILD base image in your Dockerfile, so you can try something like this:
$ cat Dockerfile
FROM node:0.10-onbuild
# (or 0.12, depending on your requirements)
EXPOSE 8000
The node:0.10-onbuild image already contains the following instructions:
WORKDIR /usr/src/app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD ["npm", "start"]
With ONBUILD, those COPY and npm install steps are not executed when the base image itself is built; they are triggered when you build your own image FROM it. At that point your code is copied to /usr/src/app, npm install runs, and the resulting image starts npm start as its default command.
So in your case you should be able to run your application directly, without mounting the volume every time.
# I guess file package.json is under /srv/docker/instantynode/srv
$ cd /srv/docker/instantynode/srv
$ cat Dockerfile
FROM node:0.10-onbuild
EXPOSE 8000
$ docker build -t myimg .
$ docker run -d -p 8000:8000 myimg
That's all, you should be fine to access your application via port 8000 now.