Wanted Behavior
When my dockerized nodejs server is launched, I can access http://localhost:3030 from my local machine.
The Docker console should then print "Hello World".
Problem Description
I have a nodejs server contained in a Docker container. I can't access http://localhost:3030/ from my browser.
server.js File
const port = require('./configuration/serverConfiguration').port
const express = require('express')
const app = express()
app.get('/', function (req, res) {
res.send('Hello World')
})
app.listen(port)
The Dockerfile exposes port 3000, which is the port used by server.js.
DockerFile
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
I use a docker-compose.yml file because I am linking my container with a mongodb service.
docker-compose.yml File
version: '3'
services:
  node_server:
    build: .
    volumes:
      - "./app:/src/app"
    ports:
      - "3030:3000"
    links:
      - "mongo:mongo"
  mongo:
    image: mongo
    ports:
      - "27017:27017"
The file publishes the container's port 3000 to the host's port 3030.
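For reference, once the stack is up (docker-compose up), the mapping can be checked from the host with:

curl http://localhost:3030/
# expected response: Hello World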
New Info
Tried running it on OSX and it worked, so it seems to be a Windows-specific problem.
Solved: I replaced localhost with the Docker machine's IP, since I was using Docker Toolbox. My bad for not reading the documentation in depth.
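For context, Docker Toolbox runs containers inside a VirtualBox VM, so published ports are reachable on that machine's IP rather than on localhost. The IP can be looked up with (assuming the default machine name):

docker-machine ip default
# typically prints 192.168.99.100, so the app would be at http://192.168.99.100:3030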
Related
I have a react-express app that connects to MongoDB Atlas and is deployed to Google App Engine. Currently, the client folder is stored inside the backend folder. I manually build the React client app with npm run build, then use gcloud app deploy to push the entire backend app and the built React files to GAE, where it is connected to a custom domain.

I am looking to dockerize the app, but am running into problems with getting my backend up and running. Before Docker, I was using Express static middleware to serve the React files, and had a setupProxy.js file in the client directory that I used to redirect API requests.
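A setupProxy.js along those lines might look like this (a hypothetical sketch; the /api prefix, the port 5000 target, and http-proxy-middleware itself are assumptions, not stated in the post):

// client/src/setupProxy.js — hypothetical sketch, specifics assumed
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  // Forward API requests from the CRA dev server to the Express backend
  app.use('/api', createProxyMiddleware({ target: 'http://localhost:5000' }));
};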
My express app middleware:
const path = require('path');

// Serve the built React client in production
if (process.env.NODE_ENV === 'production') {
  app.use(express.static(path.join(__dirname, '/../../client/build')));
}

// Fall back to index.html so client-side routes work in production
if (process.env.NODE_ENV === 'production') {
  app.get('/*', (req, res) => {
    res.sendFile(path.join(__dirname, '/../../client/build/index.html'));
  });
}
My client Dockerfile:
FROM node:14-slim
WORKDIR /usr/src/app
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "npm", "start"]
My backend Dockerfile:
FROM node:14-slim
WORKDIR /usr/src/app
COPY ./package.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD [ "npm", "start"]
docker-compose.yml
version: '3.4'
services:
  backend:
    image: backend
    build:
      context: backend
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: production
    ports:
      - 5000:5000
    networks:
      - mern-app
  client:
    image: client
    build:
      context: client
      dockerfile: ./Dockerfile
    stdin_open: true
    environment:
      NODE_ENV: production
    ports:
      - 3000:3000
    networks:
      - mern-app
    depends_on:
      - backend
networks:
  mern-app:
    driver: bridge
I'm hoping that someone can provide some insight/assistance regarding how to effectively connect my React and Express apps inside a container using Docker Compose so that I can make calls to my API server. Thanks in advance!
I have an app that contains a NodeJS server and a ReactJS client. The project's structure is as follows:
client
  Dockerfile
  package.json
  .
  .
  .
server
  Dockerfile
  package.json
  .
  .
  .
docker-compose.yml
.gitignore
To run both of these, I am using a docker-compose.yml:
version: "3"
services:
server:
build: ./server
expose:
- 8000
environment:
API_HOST: "http://localhost:3000/"
APP_SERVER_PORT: 8000
MYSQL_HOST_IP: mysql
ports:
- 8000:8000
volumes:
- ./server:/app
command: yarn start
client:
build: ./client
environment:
REACT_APP_PORT: 3000
NODE_PATH: src
expose:
- 3000
ports:
- 3000:3000
volumes:
- ./client/src:/app/src
- ./client/public:/app/public
links:
- server
command: yarn start
And inside each of the client and server folders I have a Dockerfile (the same for each):
FROM node:10-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
COPY yarn.lock /app
COPY . /app
RUN yarn install
CMD ["yarn", "start"]
EXPOSE 80
Where the client's start script is simply react-scripts start, and the server's is nodemon index.js. The client's package.json has a proxy that is supposed to allow it to communicate with the server:
"proxy": "http://server:8000",
The react app would call a component that looks like this:
import React from 'react';
import axios from 'axios';

function callServer() {
  axios.get('http://localhost:8000/test', {
    params: {
      table: 'sample',
    },
  }).then((response) => {
    console.log(response.data);
  });
}

export function SampleComponent() {
  return (
    <div>
      This is a sample component
      {callServer()}
    </div>
  );
}
Which would call the /test path in the node server, as defined in the index.js file:
const cors = require('cors');
const express = require('express');
const app = express();

app.use(cors());

app.listen(process.env.APP_SERVER_PORT, () => {
  console.log(`App server now listening on port ${process.env.APP_SERVER_PORT}`);
});

app.get('/test', (req, res) => {
  res.send('hi');
});
Now, this code works like a charm when I run it on my Linux Mint machine, but when I run it on Windows 10 I get the following error (I run both on Chrome):
GET http://localhost:8000/test?table=sample net::ERR_CONNECTION_REFUSED
Is it that I would need to run these on a different port or IP address? I read somewhere that the connections may be using a windows network instead of the network created by docker-compose, but I'm not even sure how to start diagnosing this. Please let me know if you have any ideas that could help.
EDIT: Here are the results of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1824c61bbe99 react-node-mysql-docker-boilerplate_client "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:3000->3000/tcp react-node-mysql-docker-boilerplate_client_1
And here is docker ps -a. For some reason, it seems the server container stops by itself as soon as it starts:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1824c61bbe99 react-node-mysql-docker-boilerplate_client "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:3000->3000/tcp react-node-mysql-docker-boilerplate_client_1
5c26276e37d1 react-node-mysql-docker-boilerplate_server "docker-entrypoint.s…" 3 minutes ago Exited (127) 3 minutes ago react-node-mysql-docker-boilerplate_server_1
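A first diagnostic step here would be to check the exited container's logs (exit code 127 conventionally means "command not found" inside the container):

docker logs react-node-mysql-docker-boilerplate_server_1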
I have a simple Node.js/Express app:
const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
It works fine when I start it like: node src/app.js
Now I'm trying to run it in a Docker container. Dockerfile is:
FROM node:8
WORKDIR /app
ADD src/. /app/src
ADD package.json package-lock.json /app/
RUN npm install
COPY . /app
EXPOSE 3000
CMD [ "node", "src/app.js" ]
It starts fine: docker run <my image>:
Listening on port 3000
But now I cannot access it in my browser:
http://localhost:3000
This site can’t be reached localhost refused to connect.
The same happens if I try to run it within docker-compose:
version: '3.4'
services:
  service1:
    image: xxxxxx
    ports:
      - 8080:8080
    volumes:
      - xxxxxxxx
  myapp:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - private
    ports:
      - 3000
    command:
      node src/app.js
I'm not sure if I'm handling the ports correctly in both Docker files.
When you work with Docker you must bind your app to 0.0.0.0 instead of localhost.
For your Express application you can set the host in the app.listen call.
Check the documentation:
app.listen([port[, host[, backlog]]][, callback])
Your express code should be updated to:
const express = require('express')
const app = express()
const port = 3000
const host = '0.0.0.0'

app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, host, () => console.log(`Example app listening on port ${port}!`))
It's also important to publish the Docker ports:
Running docker: docker run -p 3000:3000 <my image>
Running docker-compose:
services:
  myapp:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - private
    ports:
      - 3000:3000
    command:
      node src/app.js
Try this:

services:
  myapp:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - private
    ports:
      - 3000:3000 # THIS IS THE CHANGE: you need to map the host port to the container port
    command:
      node src/app.js
You need to publish the ports:

docker run -p 3000:3000 <my image>

-p stands for publish.
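Note that EXPOSE in the Dockerfile only documents the port; on its own it does not make the container reachable from the host. The long form of the same command:

docker run --publish 3000:3000 <my image>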
I am trying to set up a Docker network with simple nodejs and mongodb services by following this guide. However, when the nodejs service starts, it fails because it can't connect to mongodb.
docker-compose.yml
version: "3"
services:
nodejs:
container_name: nodejs # How the container will appear when listing containers from the CLI
image: node:10 # The <container-name>:<tag-version> of the container, in this case the tag version aligns with the version of node
user: node # The user to run as in the container
working_dir: "/app" # Where to container will assume it should run commands and where you will start out if you go inside the container
networks:
- app # Networking can get complex, but for all intents and purposes just know that containers on the same network can speak to each other
ports:
- "3000:3000" # <host-port>:<container-port> to listen to, so anything running on port 3000 of the container will map to port 3000 on our localhost
volumes:
- ./:/app # <host-directory>:<container-directory> this says map the current directory from your system to the /app directory in the docker container
command: # The command docker will execute when starting the container, this command is not allowed to exit, if it does your container will stop
- ./wait-for.sh
- --timeout=15
- mongodb:27017
- --
- bash
- -c
- npm install && npm start
env_file: ".env"
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=mongodb
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
depends_on:
- mongodb
mongodb:
image: mongo:4.1.8-xenial
container_name: mongodb
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app
networks:
app:
driver: bridge
volumes:
dbdata:
app.js
const express = require('express');
const bodyParser = require('body-parser');
// getting-started.js
const mongoose = require('mongoose');

const server = express();

mongoose.connect('mongodb://simpleUser:123456@mongodb:27017/simpleDb', { useNewUrlParser: true });

server.listen(3000, function () {
  console.log('Example app listening on port 3000');
});
Here is the common wait-for.sh script that I was using. https://github.com/eficode/wait-for/blob/master/wait-for
docker logs -f nodejs gives:
Operation timed out
Thanks for your help!
In this case I believe the issue is that you are using the wait-for.sh script, which makes use of the netcat command (see https://github.com/eficode/wait-for/blob/master/wait-for#L24), but the node:10 image does not have netcat installed.
I would suggest either creating a custom image based on node:10 that adds netcat, or using a different approach (preferably a Node.js-based solution, sketched after the Dockerfile example below) for checking whether mongodb is accessible.
A sample Dockerfile for creating your own custom image would look something like this
FROM node:10
RUN apt update && apt install -y netcat
Then you can build this image by replacing image: node:10 with
build:
  dockerfile: Dockerfile
  context: .
and you should be fine
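For the Node.js-based alternative mentioned above, a minimal sketch could poll the port with the built-in net module (the file name, retry count, and interval are all assumptions, not from the answer):

// wait-for-mongo.js — hypothetical sketch of a Node.js-based readiness check
const net = require('net');

function waitFor(host, port, retriesLeft) {
  const socket = net.connect(port, host, () => {
    socket.end();
    process.exit(0); // the port is accepting connections
  });
  socket.on('error', () => {
    socket.destroy();
    if (retriesLeft <= 0) {
      console.error(`Timed out waiting for ${host}:${port}`);
      process.exit(1);
    }
    setTimeout(() => waitFor(host, port, retriesLeft - 1), 1000);
  });
}

// Same service name and port as in the compose file above
waitFor('mongodb', 27017, 15);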
I found the problem: the node:10 image doesn't have the nc command installed, so the wait script was failing. I switched to the node:10-alpine image and it worked.
I am running a SpringBoot application in a docker container and another VueJS application in another docker container using docker-compose.yml as follows:
version: '3'
services:
  backend:
    container_name: backend
    build: ./backend
    ports:
      - "28080:8080"
  frontend:
    container_name: frontend
    build: ./frontend
    ports:
      - "5000:80"
    depends_on:
      - backend
I am trying to invoke the SpringBoot REST API from my VueJS application using http://backend:8080/hello, and it fails with GET http://backend:8080/hello net::ERR_NAME_NOT_RESOLVED.
Interestingly, if I go into the frontend container and ping backend, it is able to resolve the hostname backend, and I can even get the response using wget http://backend:8080/hello.
Even more interestingly, I added another SpringBoot application to docker-compose, and from that application I am able to invoke http://backend:8080/hello using RestTemplate!
My frontend/Dockerfile:
FROM node:9.3.0-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && yarn install
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN npm run build
ENV PORT=80
EXPOSE 80
CMD [ "npm", "start" ]
In my package.json I mapped script "start": "node server.js" and my server.js is:
const express = require('express')
const app = express()
const port = process.env.PORT || 3003
const router = express.Router()

app.use(express.static(`${__dirname}/dist`)) // set the static files location for the static html
app.engine('.html', require('ejs').renderFile)
app.set('views', `${__dirname}/dist`)

router.get('/*', (req, res, next) => {
  res.sendFile(`${__dirname}/dist/index.html`)
})

app.use('/', router)

app.listen(port)
console.log('App running on port', port)
Why is it not able to resolve hostname from the application but can resolve from the terminal? Am I missing any docker or NodeJS configuration?
Finally figured it out. Actually, there is no issue. When I run my frontend VueJS application in a Docker container and access it from the browser, the HTML and JS files are downloaded to my browser's machine, which is my host, so the REST API calls are made from the host machine. From my host, the Docker container hostname (backend) cannot be resolved.
The solution: instead of using the internal Docker hostname and port (backend:8080), I need to use my host's name and the mapped port (localhost:28080) when making REST calls.
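In other words, the browser-side call targets the published host port, something like this (a sketch; axios here is an assumption):

// The compose service name "backend" only resolves inside the Docker network,
// so the browser must use the host-published port from "28080:8080"
import axios from 'axios';

axios.get('http://localhost:28080/hello')
  .then((response) => console.log(response.data));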
I would suggest:
docker ps to get the names/IDs of the running containers
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' BACKEND_CONTAINER_NAME to get backend container's IP address from the host.
Now put this IP in the frontend app and it should be able to connect to your backend.