Dockerized NodeJS application is unable to invoke another dockerized SpringBoot API

I am running a Spring Boot application in one Docker container and a VueJS application in another, using the following docker-compose.yml:
version: '3'
services:
  backend:
    container_name: backend
    build: ./backend
    ports:
      - "28080:8080"
  frontend:
    container_name: frontend
    build: ./frontend
    ports:
      - "5000:80"
    depends_on:
      - backend
I am trying to invoke the Spring Boot REST API from my VueJS application using http://backend:8080/hello, and it fails with GET http://backend:8080/hello net::ERR_NAME_NOT_RESOLVED.
Interestingly, if I go into the frontend container, I can ping backend, and I can even get the response using wget http://backend:8080/hello.
Even more interestingly, I added another Spring Boot application to the docker-compose file, and from that application I am able to invoke http://backend:8080/hello using RestTemplate!
My frontend/Dockerfile:
FROM node:9.3.0-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && yarn install
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN npm run build
ENV PORT=80
EXPOSE 80
CMD [ "npm", "start" ]
In my package.json I mapped the script "start": "node server.js", and my server.js is:
const express = require('express')
const app = express()
const port = process.env.PORT || 3003
const router = express.Router()

app.use(express.static(`${__dirname}/dist`)) // set the static files location for the static html
app.engine('.html', require('ejs').renderFile)
app.set('views', `${__dirname}/dist`)

// catch-all route: serve the SPA entry point for any path
router.get('/*', (req, res, next) => {
  res.sendFile(`${__dirname}/dist/index.html`)
})

app.use('/', router)
app.listen(port)
console.log('App running on port', port)
Why can the hostname be resolved from the terminal but not from the application? Am I missing some Docker or NodeJS configuration?

Finally figured it out. Actually, there is no issue. When I run my frontend VueJS application in a Docker container and access it from the browser, the HTML and JS files are downloaded to the browser's machine, which is my host, so the REST API calls are made from the host machine. From my host, the Docker container hostname (backend) cannot be resolved.
The solution: instead of using the internal Docker hostname and port (backend:8080), I need to use my host's name and the mapped port (localhost:28080) when making REST calls.
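For illustration, a minimal sketch of what the browser-side call looks like after that change, assuming an axios-style HTTP client (the exact Vue code is not shown in the question; only the endpoint and ports are taken from it):

// Runs in the browser on the host machine, so it must use the host-mapped
// port 28080 rather than the container-internal address backend:8080.
import axios from 'axios';

axios.get('http://localhost:28080/hello')
  .then((response) => console.log(response.data))
  .catch((error) => console.error(error));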

I would suggest:
docker ps to get the names/IDs of the running containers
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' BACKEND_CONTAINER_NAME to get the backend container's IP address from the host.
Now put this IP in the frontend app and it should be able to connect to your backend.
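A hedged sketch of the resulting call from the frontend; the address 172.17.0.2 is only a placeholder for whatever docker inspect reports, and this only works where the host can route to the Docker bridge network (typically Linux):

// Hypothetical example: replace 172.17.0.2 with the IP that
// `docker inspect` reports for your backend container.
import axios from 'axios';

axios.get('http://172.17.0.2:8080/hello')
  .then((response) => console.log(response.data));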

Related

How to host docker-compose app on Azure App Service?

I have a very basic web app that I am trying to host on Azure App Service.
The only error log I am seeing so far is "stopping site because it failed during startup".
It is a very basic Express app. I was able to get the WordPress tutorial to work, but I am not sure what I am doing wrong with this app or what is different about it.
I deployed it from the Azure CLI with the config file pointed at the docker-compose.yml, and I set the multicontainer-config-type to "compose".
I tried to reproduce the same scenario in my environment to host a docker-compose app on Azure App Service. Follow the steps below.
I developed the source code on my local machine with the VM configuration below and wrote a Dockerfile to produce an image from the source code.
VM Configuration:
Ubuntu: 20.04
Docker version: 23.0.1
Docker Compose version: 1.29.2
I created a simple NodeJS hello-world application. Make sure the application is exposed on port 8080; below is my server.js file:
'use strict';
const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';
const app = express();

app.get('/', (req, res) => {
  res.send('<h2 style="color: purple">Welcome to Docker Compose on Azure web app v2.0</h2>');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
I used the Dockerfile and docker-compose.yaml below.
Dockerfile:
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
docker-compose.yaml:
version: '3.8'
services:
  app:
    build: .
    container_name: testnodewebapp
    restart: always
    ports:
      - '8080:8080'
    volumes:
      - .:/app
      - /app/node_modules
This "build ." command will be used to find the docker file in the current working directory(represented by ".").
the resulting container will be named "testnodewebapp" as we have mentioned the container name in "docker-composed.yaml" file.
If the docker file is not in the referenced path in the docker-compse.yaml it will throw an error while executing the docker-composed.yaml file as below.
If the file is in the current directory, you are able to execute docker-composed .yaml file without any issues as below.
I created the Azure Web App as follows.
Note: Make sure that you select the Docker Compose (Preview) option while creating the web app.
I then created an Azure Container Registry and enabled the Admin user.
Note: Make sure that the Admin user is enabled.
Push the images to the Azure Container Registry using the commands below.
docker tag <image-name>:<tag> <acr-login-server>/<image-name>:<tag>
docker login <acr-login-server>
docker push <acr-login-server>/<image-name>:<tag>
Once you push the images to ACR, verify them in the portal.
Change the Web App's deployment settings as below.
Make sure that you add the Config file in the Azure Web App:
version: '3.8'
services:
  app:
    image: acrvijay123.azurecr.io/nodewebapp:latest
    ports:
      - "8000:8080"
    restart: always
The Docker Compose app is now running successfully on App Service.

How to configure port of React app fetch when deploying to ECS fargate cluster with two tasks

I have two docker images that communicate fine when deployed locally, but I'm not sure how to set up my app to correctly make fetch() calls from the React app to the correct port on the other app when they're both deployed as tasks on the same ECS cluster.
My react app uses a simple fetch('/fooService/' + barStr) type call, and my other app exposes a /fooService/{barStr} endpoint on port 8081.
For local deployment and local docker, I used setupProxy.js to specify a proxy:
const { createProxyMiddleware } = require("http-proxy-middleware");

module.exports = function(app) {
  app.use(createProxyMiddleware('/fooService', {
    target: 'http://fooImage:8081',
    changeOrigin: true
  }));
};
In ECS this seems to do nothing, though. I see the setupProxy run when the image starts up, but the requests from my react app just go directly to {sameIPaddress}/fooService/{barStr}, ignoring the proxy specification entirely. I can see in the browser that the requests are being made over port 80. If these requests are made on port 8081 manually, they complete just fine, so the port is available and the service is running.
I've exposed port 8081 on the other task, and I can access it externally with no problem; I am just unclear on how to design my React app to point to it, since I won't necessarily know what IP address I will be assigned until the task launches. If I use a relative path, I cannot specify the port.
What's the idiomatic way to specify the destination of my fetch requests in this context?
Edit: If it is relevant, here is how the docker images are configured. They are built automatically on dockerhub and work fine if I deploy them in my local docker instance.
docker-compose.yaml
version: "3.8"
services:
fooImage:
image: myname/foo-image:0.1
build: ./
container_name: server
ports:
- '8081:8081'
barImage:
image: myname/bar-image:0.1
build: ./bar
container_name: front
ports:
- '80:80'
stdin_open: true
tty: true
Dockerfile - foo image
#
# Build stage
#
FROM maven:3.8.5-openjdk-18-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:18-alpine
COPY --from=build /home/app/target/*.jar /usr/local/lib/app.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "/usr/local/lib/app.jar"]
Dockerfile - bar image
FROM node:17-alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
(Screenshots of the ECS Foo and Bar task port mappings omitted.)
The solution to this issue was to change the proxy target to "http://localhost:8081". Per Amazon support:
To resolve your issue quickly, you can try changing your proxy
configuration from "http://server:8081" to "http://localhost:8081", and
the proxy should work.
That's because when using Fargate with awsvpc network mode, containers running in a Task share the same network namespace, which means containers can communicate with each other through localhost (e.g. when the back-end container listens on port 8081, the front-end container can access it via localhost:8081). When using docker-compose [2], you can use the hostname of another container specified in the same docker-compose file to communicate with it. So proxying back-end traffic with "http://server:8081" in Fargate won't work and should be changed to "http://localhost:8081".
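Applying that guidance, the setupProxy.js from the question would become something like this sketch (identical except for the target):

const { createProxyMiddleware } = require("http-proxy-middleware");

module.exports = function(app) {
  // In Fargate's awsvpc mode, containers in the same task share a network
  // namespace, so the back end is reachable via localhost rather than via
  // the compose service name.
  app.use(createProxyMiddleware('/fooService', {
    target: 'http://localhost:8081',
    changeOrigin: true
  }));
};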

node js docker is not running on heroku

My NodeJS project in a Docker container is not running on Heroku.
Here is the source code.
Dockerfile
FROM node:14
WORKDIR /home/tor/Desktop/work/docker/speech-analysis/build
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
server.js
'use strict';
const express = require('express');
const PORT = process.env.port||8080;
const app = express();
app.get('/', (req, res) => {
res.send('Hello World');
});
app.listen(PORT);
console.log("Running on http://:${PORT}");
You don't need to expose anything when building a container for Heroku; it takes care of that automatically. If you are running the same Docker image locally, you can do:
docker build -t myapp:latest .
docker run -e PORT=8080 -p 8080:8080 -t myapp:latest
Environment variables are case-sensitive on Linux systems, so you need to change
const PORT = process.env.port||8080;
... to:
const PORT = process.env.PORT||8080;
... as Heroku sets an environment variable named PORT (not port).
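Putting the fix together, a minimal corrected server.js might look like the sketch below (the fallback port and the log format are assumptions, not part of the original answer):

'use strict';
const express = require('express');

// Heroku injects the port via the PORT environment variable (upper case).
const PORT = process.env.PORT || 8080;
const app = express();

app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(PORT, () => {
  console.log(`Running on port ${PORT}`);
});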
According to this answer, you just need to use port 80 in your EXPOSE or inside of NodeJS:
app.listen(80)
At run time, Heroku generates a random port and binds it to 80:
docker run ... -p 46574:80 ...
So if your NodeJS app is listening on port 80 inside the container, everything will be fine.

ERR_CONNECTION_REFUSED when attempting to connect to NodeJS server running on a Docker Container in Windows 10

I have an app that's contains a NodeJS server and a ReactJS client. The project's structure is as follows:
client
    Dockerfile
    package.json
    ...
server
    Dockerfile
    package.json
    ...
docker-compose.yml
.gitignore
To run both of these, I am using a docker-compose.yml:
version: "3"
services:
server:
build: ./server
expose:
- 8000
environment:
API_HOST: "http://localhost:3000/"
APP_SERVER_PORT: 8000
MYSQL_HOST_IP: mysql
ports:
- 8000:8000
volumes:
- ./server:/app
command: yarn start
client:
build: ./client
environment:
REACT_APP_PORT: 3000
NODE_PATH: src
expose:
- 3000
ports:
- 3000:3000
volumes:
- ./client/src:/app/src
- ./client/public:/app/public
links:
- server
command: yarn start
And inside each of the client and server folders I have a Dockerfile (the same for both):
FROM node:10-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
COPY yarn.lock /app
COPY . /app
RUN yarn install
CMD ["yarn", "start"]
EXPOSE 80
Where the client's start script is simply react-scripts start, and the server's is nodemon index.js. The client's package.json has a proxy entry that is supposed to let it communicate with the server:
"proxy": "http://server:8000",
The react app would call a component that looks like this:
import React from 'react';
import axios from 'axios';

function callServer() {
  axios.get('http://localhost:8000/test', {
    params: {
      table: 'sample',
    },
  }).then((response) => {
    console.log(response.data);
  });
}

export function SampleComponent() {
  return (
    <div>
      This is a sample component
      {callServer()}
    </div>
  );
}
Which would call the /test path in the node server, as defined in the index.js file:
const cors = require('cors');
const express = require('express');
const app = express();

app.use(cors());

app.listen(process.env.APP_SERVER_PORT, () => {
  console.log(`App server now listening on port ${process.env.APP_SERVER_PORT}`);
});

app.get('/test', (req, res) => {
  res.send('hi');
});
Now, this code works like a charm when I run it on my Linux Mint machine, but when I run it on Windows 10 I get the following error (I run both on Chrome):
GET http://localhost:8000/test?table=sample net::ERR_CONNECTION_REFUSED
Is it that I would need to run these on a different port or IP address? I read somewhere that the connections may be using a Windows network instead of the network created by docker-compose, but I'm not even sure how to start diagnosing this. Please let me know if you have any ideas that could help.
EDIT: Here are the results of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1824c61bbe99 react-node-mysql-docker-boilerplate_client "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:3000->3000/tcp react-node-mysql-docker-boilerplate_client_1
and here is docker ps -a. For some reason, the server container exits on its own as soon as it starts:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1824c61bbe99 react-node-mysql-docker-boilerplate_client "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:3000->3000/tcp react-node-mysql-docker-boilerplate_client_1
5c26276e37d1 react-node-mysql-docker-boilerplate_server "docker-entrypoint.s…" 3 minutes ago Exited (127) 3 minutes ago react-node-mysql-docker-boilerplate_server_1

Can't Access to localhost:3030 - NodeJS Docker

Wanted Behavior
When my dockerized NodeJS server is launched, I should be able to access http://localhost:3030 from my local machine.
The Docker console should then print "Hello World".
Problem Description
I have a NodeJS server contained in a Docker container. I can't access http://localhost:3030/ from my browser.
server.js File
const port = require('./configuration/serverConfiguration').port
const express = require('express')
const app = express()

app.get('/', function (req, res) {
  res.send('Hello World')
})

app.listen(port)
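The configuration module is not included in the question; a hypothetical minimal version, consistent with the port the Dockerfile exposes, might be:

// configuration/serverConfiguration.js -- a guess at the missing module;
// it presumably exports the port (3000) that the server listens on.
module.exports = {
  port: process.env.PORT || 3000
}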
The Dockerfile exposes port 3000, which is the port used by the server.js file.
DockerFile
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
I use a docker-compose.yml file because I am linking my container with a MongoDB service.
docker-compose.yml File
version: '3'
services:
  node_server:
    build: .
    volumes:
      - "./app:/src/app"
    ports:
      - "3030:3000"
    links:
      - "mongo:mongo"
  mongo:
    image: mongo
    ports:
      - "27017:27017"
The file publishes the container's port 3000 to port 3030 on my host.
New Info
I tried executing it on OSX and it worked; it seems to be a Windows-specific problem.
I changed localhost to the Docker machine's IP, since I was using Docker Toolbox. My bad for not reading the documentation in depth.
