Connection Refused Error on React using env Variables from Docker - node.js

I'm trying to define some environment variables in my Docker files, in order to use them in my React application.
The following is my Dockerfile on the Node server side:
FROM node:lts-slim
RUN mkdir -p /app
WORKDIR /app
# install node_modules
ADD package.json /app/package.json
RUN npm install --loglevel verbose
# copy codebase to docker codebase
ADD . /app
EXPOSE 8081
# You can change this
CMD [ "nodemon", "serverApp.js" ]
This is my docker-compose file:
version: "3"
services:
frontend:
stdin_open: true
container_name: firestore_manager
build:
context: ./client/firestore-app
dockerfile: DockerFile
image: rasilvap/firestore_manager
ports:
- "3000:3000"
volumes:
- ./client/firestore-app:/app
environment:
- BACKEND_HOST=backend
- BACKEND_PORT=8081
depends_on:
- backend
backend:
container_name: firestore_manager_server
build:
context: ./server
dockerfile: Dockerfile
image: rasilvap/firestore_manager_server
ports:
- "8081:8081"
volumes:
- ./server:/app
This is how I'm using it in the React code:
axios.delete(`http://backend:8081/firestore/`, request).then((res) => {....
But I'm getting a connection refused error. I'm new to React and not quite sure how I can achieve this.
Any ideas?

Your way of requesting the service looks fine: http://foo-service:port.
I think your issue is a security issue.
Because the two applications are not considered to be on the same origin (origin = domain + protocol + port), you run into a CORS (Cross-Origin Resource Sharing) requirement.
In that scenario, your browser will not perform your AJAX query unless the backend returns its agreement, in response to the preflight CORS request, to share its resources with that other "origin".
So enable CORS in the backend to solve the issue (each API/framework has its own way).
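For example, if the backend is an Express app (an assumption; the question only shows serverApp.js being run with nodemon), a minimal sketch using the cors middleware package could look like this:

// serverApp.js - a minimal sketch, assuming the backend uses Express
const express = require('express');
const cors = require('cors'); // npm install cors

const app = express();

// Answer the preflight request and allow the React app's origin;
// the origin value here is an assumption based on the compose port mapping
app.use(cors({ origin: 'http://localhost:3000' }));

app.delete('/firestore/', (req, res) => {
  // ... existing delete logic ...
  res.sendStatus(204);
});

app.listen(8081);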
That post may also help you with Firebase.

Related

How to Build Docker-compose file for multiple containers

I have two different containers (1. frontend app, 2. web server), the first dependent on the second.
Currently both containers are built using separate Dockerfiles and run perfectly fine in a localhost environment. The application is built using Node.js and Angular.
I am using Docker Desktop on Windows Server 2019.
I am using a user-defined network for both containers to communicate with each other:
docker network create --driver bridge dev_network
I need to build a Docker Compose file for both of them but don't have enough knowledge of how to build a working compose file. I'd be glad if anyone could help me with this.
Thanks for your time!
Frontend Dockerfile:
FROM node:latest as build
WORKDIR /usr/local/app
COPY ./ /usr/local/app/
RUN npm install
FROM nginx:latest
COPY --from=build /usr/local/app/dist/ClientPortal /usr/share/nginx/html
EXPOSE 80
Web server Dockerfile:
FROM node:14-alpine
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000 8600
CMD ["node", "server.js"]
I tried with the following compose file.
I defined the volume on a Windows drive, since the server's folders and Dockerfile are located in that directory.
The issue is that docker-compose build builds both images, but when I fire the up command the server container fails to load and exits with exit code 1.
Error message: Error: Cannot find module '/usr/src/app/server.js'
Docker-Compose file:
version: '3.4'
services:
  clientportal:
    image: clientportal
    container_name: cspfrontend
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: production
    networks:
      - dev_network
    ports:
      - 80:80
  clientportalserver:
    image: clientportalserver
    container_name: cspserver
    build:
      context: .
      dockerfile: E:\Work\ClientPortalServer/Dockerfile
    volumes:
      - E:\Work\ClientPortalServer
    networks:
      - dev_network
    ports:
      - 3000:3000
networks:
  dev_network:
    driver: bridge
I would go with something like this in a docker-compose.yml in the root folder:
version: "3"
services:
frontend:
build: ./frontend
ports:
- 80:80
networks:
- dev_network
depends_on:
- backend
backend:
build: ./backend
ports:
- 3000:3000
expose:
- "3000"
- "8600"
networks:
- dev_network
networks:
dev_network:
driver: bridge
This requires you to have a frontend and a backend folder for your projects, and in those folders you have to have the Dockerfiles that you showed. I don't know what port you are using in your backend project, so I guessed port 3000.
project root
│ docker-compose.yml
│
└───frontend
│ │ Dockerfile
│ │ ...
│
└───backend
│ Dockerfile
│ ...
You should start by reading the documentation on the website.
https://docs.docker.com/compose/
The first page shows a working example of a simple compose file with a web service and redis service. If you managed to get the containers running, then the options found in the Compose file reference will be very familiar to you.
You may find this link of particular interest in regards to using an existing Docker network:
https://docs.docker.com/compose/networking/#use-a-pre-existing-network
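For example, to have the compose file join the dev_network you already created with docker network create (a sketch; the key detail is marking the network as external so Compose doesn't try to create it itself):

networks:
  dev_network:
    external: true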

How to copy build files from one container to another or host on docker

I am trying to dockerize an application with a PHP backend and a Vue.js frontend. The backend is working as I expect; however, after running npm run build within the frontend container, I need to copy the build files from the dist folder to the nginx container, or to the host and then use a volume to bring those files into the nginx container.
I tried to use a named volume:
services:
  frontend:
    .....
    volumes:
      - static:/var/www/frontend/dist
  nginx:
    .....
    volumes:
      - static:/var/www/frontend/dist
volumes:
  static:
I also tried the following, as suggested on here, to bring the dist folder back to the host:
services:
  frontend:
    .....
    volumes:
      - ./frontend/dist:/var/www/frontend/dist
However, none of the above options is working for me. Below are my docker-compose.yml file and frontend Dockerfile.
version: "3"
services:
database:
image: mysql:5.7.22
.....
backend:
build:
context: ./docker/php
dockerfile: Dockerfile
.....
frontend:
build:
context: .
dockerfile: docker/node/Dockerfile
target: 'build-stage'
container_name: frontend
stdin_open: true
tty: true
volumes:
- ./frontend:/var/www/frontend
nginx:
build:
context: ./docker/nginx
dockerfile: Dockerfile
container_name: nginx
restart: unless-stopped
ports:
- 80:80
volumes:
- ./backend/public:/var/www/backend/public:ro
- ./frontend/dist:/var/www/frontend/dist:ro
depends_on:
- backend
- frontend
Frontend Dockerfile
# Develop stage
FROM node:lts-alpine as develop-stage
WORKDIR /var/wwww/frontend
COPY /frontend/package*.json ./
RUN npm install
COPY /frontend .
# Build stage
FROM develop-stage as build-stage
RUN npm run build
You can combine the frontend image and the Nginx image into a single multi-stage build. This basically just involves copying your docker/node/Dockerfile as-is into the start of docker/nginx/Dockerfile, and then COPY --from=build-stage into the final image. You will also need to adjust some paths since you'll need to make the build context be the root of your project.
# Essentially what you had in the question
FROM node:lts AS frontend
WORKDIR /frontend
COPY frontend/package*.json .
RUN npm install
COPY frontend .
RUN npm run build
# And then assemble the Nginx image with content
FROM nginx
COPY --from=frontend /frontend/dist /var/www/html
Once you've done this, you can completely delete the frontend container from your docker-compose.yml file. Note that it never did anything – the image didn't declare a CMD, and the docker-compose.yml didn't provide a command: to run either – and so this shouldn't really change your application.
You can use a similar technique to copy the static files from your PHP application into the Nginx proxy. When all is said and done, this leaves you with a simpler docker-compose.yml file:
version: "3.8"
services:
database:
image: mysql:5.7.22
.....
backend:
build: ./docker/php # don't need to explicitly name Dockerfile
# note: will not need volumes: to export files
.....
# no frontend container any more
nginx:
build:
context: . # because you're COPYing files from other components
dockerfile: docker/nginx/Dockerfile
restart: unless-stopped
ports:
- 80:80
# no volumes:, everything is built into the image
depends_on:
- backend
(There are two practical problems with trying to share content using Docker named volumes. The first is that volumes never update their content once they're created, and in fact hide the content in their original image, so using volumes here actually causes changes in your source code to be ignored in favor of arbitrarily old content. In environments like Kubernetes, the cluster doesn't even provide the copy-on-first-use that Docker has, and there are significant limitations on how files can be shared between containers. This works better if you build self-contained images and don't try to share files.)
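As a sketch of the "similar technique" for the PHP side (the backend/public path is an assumption taken from the original volumes: entries), the end of docker/nginx/Dockerfile would just gain one more COPY:

FROM nginx
COPY --from=frontend /frontend/dist /var/www/html
# same idea for the PHP app's static assets
COPY backend/public /var/www/backend/public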

Docker-compose builds but app does not serve on localhost

Docker newbie here. The docker-compose file builds without any issues, but when I try to reach my app on localhost:4200 I get a message: "localhost didn't send any data" in Chrome, and "the server unexpectedly dropped the connection" in Safari. I am working on macOS Catalina. Here is my yml file:
version: '3.0'
services:
  my-portal:
    build: .
    ports:
      - "4200:4200"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "3000:3000"
    environment:
      POSTGRES_HOST: host.docker.internal
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
      POSTGRES_HOST: host.docker.internal
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./docker/db/data:/var/lib/postgresql/data
Log for Angular:
/docker-entrypoint.sh: Configuration complete; ready for start up
Log for Node: db connected
Log for Postgres: database system is ready to accept connections
Below are my Angular and Node Dockerfiles:
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
EXPOSE 4200
# Stage 2
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/
Node:
FROM node:12
WORKDIR /backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
When I created the Angular image by itself and ran my app on localhost:4200, it worked fine. Please let me know if I am missing anything.
Your Angular container is built FROM nginx, and you use the default Nginx configuration from the Docker Hub nginx image. That listens on port 80, so that's the port number you need to use in the ports: directive:
services:
  quickcoms-portal:
    build: .
    ports:
      - "4200:80" # <-- second port must match nginx image's port
    depends_on:
      - backend
The EXPOSE directive in the first stage is completely ignored and you can delete it. The FROM nginx line causes docker build to basically completely start over from a new base image, so your final image is stock Nginx plus the files you COPY --from=builder.
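If in doubt, you can check which port the stock nginx image listens on by inspecting its default config:

docker run --rm nginx grep listen /etc/nginx/conf.d/default.conf

which should print the listen 80; line from the default server block.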

React app set proxy not working with docker compose

I am trying to use Docker Compose to build two containers:
A React app
A Flask server powered by gunicorn
I docker-composed them up and both of them started. When I visited the React app, it was supposed to proxy requests from the React app on port 3000 to the Flask server on port 5000. But I encountered this:
frontend_1 | Proxy error: Could not proxy request /loadData/ from localhost:3000 to http://backend:5000.
frontend_1 | See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNREFUSED).
which I figure means it still does not know the actual IP address of the backend container.
Here are some configurations:
docker-compose.yml
version: "3"
services:
backend:
build: ./
expose:
- "5000"
ports:
- "5000:5000"
volumes:
- .:/app
command: gunicorn server:app -c ./gunicorn.conf.py
networks:
- app-test
frontend:
build: ./frontend
expose:
- "3000"
ports:
- "3000:3000"
volumes:
- ./frontend:/app
networks:
- app-test
depends_on:
- backend
links:
- backend
command: yarn start
networks:
app-test:
driver: bridge
backend Dockerfile
FROM python:3.7.3
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
frontend Dockerfile
FROM node:latest
WORKDIR /app
COPY . /app
gunicorn.conf.py
workers = 5
worker_class = "gevent"
bind = "127.0.0.1:5000"
frontend package.json
{
  "proxy": "http://backend:5000",
}
I tried nearly everything said online, and it just does not proxy the request.
Some information I already know:
Both containers work.
I can ping the internal IP from the frontend container to the backend, and it responds, so no network issues.
When localhost:3000 is requested, my system calls Axios to send a POST request (/loadData) to the backend, where the proxy should do its work so that the request becomes somebackendip:5000/loadData/.
Could anyone help me?
Thanks in advance!
Try changing bind to bind = "0.0.0.0:5000" in the gunicorn conf, and accordingly change ports in the backend service of your compose file to "127.0.0.1:5000:5000" (the last part is optional). Bound to 127.0.0.1, gunicorn only accepts connections originating inside its own container, which is why the proxied request from the frontend container is refused.
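In compose terms, the optional part would look like this (a sketch; prefixing the host side with 127.0.0.1 just keeps the published port from being reachable by other machines on your network):

services:
  backend:
    ports:
      - "127.0.0.1:5000:5000"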
Changing gunicorn.conf.py to
bind = "0.0.0.0:5000"
fixed my problem.

Connect docker compose containers without links

https://docs.docker.com/compose/networking/
In the official Docker document above, I found this part:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
I understood from this paragraph that I can connect Docker containers to each other without links: or networks: explicitly, because the docker-compose.yml snippet above has no links: or networks: part, and the document says web's application code could connect to the URL postgres://db:5432.
So I tried to test a simple docker-compose setup using this approach, with a Node.js Express app and MongoDB together. I thought I could connect to MongoDB in the Express app with just mongodb://mongo:27017/myapp, but I cannot connect to MongoDB from the Express container. I think I followed Docker's official manual, but I don't know why it's not working. Of course I can connect to MongoDB using links: or networks:, but I heard links is deprecated, and I cannot find the proper way to use networks:.
I think I might have misunderstood something; please correct me.
Below is my docker-compose.yml:
version: '3'
services:
  app:
    container_name: node
    restart: always
    build: .
    ports:
      - '3000:3000'
  mongo:
    image: mongo
    ports:
      - '27017:27017'
In the Express app, I connect to MongoDB with:
mongoose.connect('mongodb://mongo:27017/myapp', {
  useMongoClient: true
});
// also doesn't work with mongodb://mongo/myapp
Plus, my Dockerfile:
FROM node:10.17-alpine3.9
ENV NODE_ENV development
WORKDIR /usr/src/app
COPY ["package*.json", "npm-shrinkwrap.json*", "./"]
RUN rm -rf node_modules
RUN apk --no-cache --virtual build-dependencies add \
    python \
    make \
    g++ \
    && npm install \
    && apk del build-dependencies
COPY . .
EXPOSE 3000
CMD npm start
If you want to connect to a Mongo instance running locally on the host, then you have to select host network mode.
docker-compose.yml file content:
version: '2.1'
services:
  z2padmin_docker:
    image: z2padmin_docker
    build: .
    environment:
      NODE_ENV: production
    volumes: [/home/ankit/Z2PDATAHUB/uploads:/mnt/Z2PDATAHUB/uploads]
    ports:
      - 5000:5000
    network_mode: host
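With network_mode: host (Linux only), the container shares the host's network stack, so a MongoDB server running directly on the host machine would be reachable at localhost. A sketch of the connection string in that setup:

// with host networking, localhost inside the container is the Docker host
mongoose.connect('mongodb://localhost:27017/myapp');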
