React app set proxy not working with docker compose - node.js

I am trying to use Docker Compose to build two containers:
A React app
A Flask server powered by Gunicorn
I ran docker compose up and both containers started. When I visit the React app, it is supposed to proxy requests from the React app on port 3000 to the Flask server on port 5000, but instead I get this:
frontend_1 | Proxy error: Could not proxy request /loadData/ from localhost:3000 to http://backend:5000.
frontend_1 | See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNREFUSED).
which I figure means it still does not know the actual IP address of the backend container.
Here are some configurations:
docker-compose.yml
version: "3"
services:
backend:
build: ./
expose:
- "5000"
ports:
- "5000:5000"
volumes:
- .:/app
command: gunicorn server:app -c ./gunicorn.conf.py
networks:
- app-test
frontend:
build: ./frontend
expose:
- "3000"
ports:
- "3000:3000"
volumes:
- ./frontend:/app
networks:
- app-test
depends_on:
- backend
links:
- backend
command: yarn start
networks:
app-test:
driver: bridge
backend Dockerfile
FROM python:3.7.3
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
frontend Dockerfile
FROM node:latest
WORKDIR /app
COPY . /app
gunicorn.conf.py
workers = 5
worker_class = "gevent"
bind = "127.0.0.1:5000"
frontend package.json
{
  "proxy": "http://backend:5000",
}
I tried nearly everything I found online, and it just does not proxy the request.
Some information I already know:
Both containers are running.
I can ping the backend's internal IP from the frontend container and it responds, so there is no network issue.
When localhost:3000 is requested, the app uses Axios to send a POST request (/loadData) to the backend; the proxy should do the work there, so the request should become somebackendip:5000/loadData/ (a sketch of that call is just below this list).
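For reference, this is roughly what a call the development proxy can intercept looks like; the actual component code is not shown in the question, so the request body and handler here are made up:
import axios from "axios";

axios.post("/loadData/", { /* request body */ })   // relative URL, so the dev server's "proxy" setting applies
  .then((res) => {
    // ... handle the Flask backend's response ...
  });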
Could anyone help me?
Thanks in advance!

Try changing bind to bind = "0.0.0.0:5000" in the gunicorn conf, and change ports for the backend service in your compose file accordingly to "127.0.0.1:5000:5000" (the last part is optional).
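Binding to 127.0.0.1 keeps Gunicorn on the container's own loopback interface, so neither the published port nor the frontend container can reach it, while 0.0.0.0 listens on all interfaces. A minimal sketch of the two changes (the 127.0.0.1 prefix in ports: is the optional part):
gunicorn.conf.py
workers = 5
worker_class = "gevent"
bind = "0.0.0.0:5000"    # listen on all interfaces inside the container
docker-compose.yml (backend service)
ports:
  - "127.0.0.1:5000:5000"    # optional: publish only to the host's loopback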

Changing gunicorn.conf.py to
bind = "0.0.0.0:5000"
fixed my problem.

Related

React app URLs are undefined when running in Nginx in Docker

I am deploying my React app (after building it) to an Nginx server in Docker.
This React app connects to a Node.js server running on localhost:3000.
The React app runs on localhost:3005.
When the React app is deployed to Nginx + Docker, the API URLs referring to the Node.js server show up as undefined:
POST http://undefined/api/auth/login net::ERR_NAME_NOT_RESOLVED
It should be: http://localhost:3000/api/auth/login
This issue does not seem to come from React but from Nginx or Docker.
The React app works perfectly fine using serve /build -p 3005, basically testing it without Nginx + Docker on a basic local server.
Also, I am not using any environment variables; all URLs are hard-coded.
I have not added any configuration for Nginx; I am using the default Docker image and just copying my React app into it.
Here is the relevant part of my Docker configuration.
Dockerfile.dev (react app)
FROM nginx:1.23
WORKDIR /react
COPY ./build/ /usr/share/nginx/html
EXPOSE 3005
dockerfile.dev (nodejs server)
FROM node:16.15-alpine3.15
WORKDIR /usr/src/server
COPY ./package.json .
RUN npm install
COPY . .
ENV NODE_ENV=development
EXPOSE 3000
CMD ["npm", "run", "app"]
Docker compose:
version: "3"
services:
client:
build:
context: ./react
dockerfile: Dockerfile.dev
ports:
- "3005:80"
volumes:
- /react/node_modules
- ./react:/react
deploy:
restart_policy:
condition: always
node-server:
network_mode: "host"
build:
context: ./server
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
deploy:
restart_policy:
condition: always

WSL2, Docker & Node : Unable to request Node

I created a JS app with Docker Compose, with a frontend, a backend, and a common component shared via Yarn Workspaces. It works on Linux, but I am out of ideas for making it work on WSL.
The Docker Compose file:
# Use postgres/example user/password credentials
version: '3.1'
services:
  postgres:
    image: postgres:latest
    # restart: always
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: caddie_app
    ports:
      - '5432:5432'
  backend:
    image: node:16
    volumes:
      - '.:/app'
    ports:
      - '3001:3001' # Nest
    depends_on:
      - postgres
    working_dir: /app
    command: ["yarn", "workspace", "@caddie/backend", "start:dev"]
    environment:
      # with docker we connect to the postgres service over the compose network, but it is reachable at localhost on our host
      DATABASE_URL: postgresql://postgres:password@postgres:5432/caddie_app?schema=public
  frontend:
    image: node:16
    volumes:
      - '.:/app'
    ports:
      - '3000:3000' # React
    depends_on:
      - backend
    working_dir: /app
    command: ["yarn", "workspace", "@caddie/frontend", "start"]
I can reach the database with DBeaver, and I can fetch the React JS scripts on localhost:3000, but I cannot reach the NestJS server on localhost:3001.
The NestJS server is listening on 0.0.0.0:
await app.listen(3001, '0.0.0.0');
I allowed ports 3000 and 3001 in the firewall. I also tried to request the Node.js server directly through the WSL IP found in ipconfig, but the problem remains. I can't figure out what's wrong.
Thanks!

Docker-compose builds but app does not serve on localhost

Docker newbie here. The docker-compose file builds without any issues, but when I try to open my app on localhost:4200 I get "localhost didn't send any data" in Chrome and "the server unexpectedly dropped the connection" in Safari. I am working on macOS Catalina. Here is my yml file:
version: '3.0'
services:
  my-portal:
    build: .
    ports:
      - "4200:4200"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "3000:3000"
    environment:
      POSTGRES_HOST: host.docker.internal
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
      POSTGRES_HOST: host.docker.internal
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./docker/db/data:/var/lib/postgresql/data
Log for Angular:
/docker-entrypoint.sh: Configuration complete; ready for start up
Log for Node: db connected
Log for Postgres: database system is ready to accept connections
Below are my Angular and Node Docker files:
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
EXPOSE 4200
# Stage 2
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/
Node:
FROM node:12
WORKDIR /backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
When I built the Angular image on its own and ran my app on localhost:4200, it worked fine. Please let me know if I am missing anything.
Your Angular container is built FROM nginx, and you use the default Nginx configuration from the Docker Hub nginx image. That listens on port 80, so that's the port number you need to use in the ports: directive:
services:
  my-portal:
    build: .
    ports:
      - "4200:80" # <-- second port must match nginx image's port
    depends_on:
      - backend
The EXPOSE directive in the first stage is completely ignored and you can delete it. The FROM nginx line causes docker build to essentially start over from a new base image, so your final image is stock Nginx plus the files you COPY --from=builder.
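Putting those two points together, the Angular image would look roughly like this (a sketch of the same Dockerfile with the unused EXPOSE dropped; the build output path is taken from the original):
# Stage 1: build the Angular bundle
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
# Stage 2: stock Nginx serving the compiled assets on port 80
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/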

Connection Refused Error on React using env Variables from Docker

I'm trying to define some environment variables in my Docker files, in order to use them in my React application.
This is my Dockerfile on the Node server side:
FROM node:lts-slim
RUN mkdir -p /app
WORKDIR /app
# install node_modules
ADD package.json /app/package.json
RUN npm install --loglevel verbose
# copy codebase to docker codebase
ADD . /app
EXPOSE 8081
# You can change this
CMD [ "nodemon", "serverApp.js" ]
This is my docker-compose file:
version: "3"
services:
frontend:
stdin_open: true
container_name: firestore_manager
build:
context: ./client/firestore-app
dockerfile: DockerFile
image: rasilvap/firestore_manager
ports:
- "3000:3000"
volumes:
- ./client/firestore-app:/app
environment:
- BACKEND_HOST=backend
- BACKEND_PORT=8081
depends_on:
- backend
backend:
container_name: firestore_manager_server
build:
context: ./server
dockerfile: Dockerfile
image: rasilvap/firestore_manager_server
ports:
- "8081:8081"
volumes:
- ./server:/app
This is the way in which I'm using it in the react code:
axios.delete(`http://backend:8081/firestore/`, request).then((res) => {....
But I'm getting a connection refused error. I'm new to React and not sure how to achieve this.
Any ideas?
Your way of requesting the service looks fine: http://foo-service:port.
I think your issue is a security one.
Because the two applications are not considered to be on the same origin (origin = domain + protocol + port), you run into a CORS (Cross-Origin Resource Sharing) requirement.
In that scenario, your browser will refuse to perform the Ajax query unless the backend agrees, in response to the preflight CORS request, to share its resources with that other "origin".
So enable CORS in the backend to solve the issue (each API/framework has its own way).
That post may also help you with Firebase.
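If the backend happens to be an Express app, a minimal sketch of enabling CORS with the cors middleware might look like this (the route, port, and allowed origin below are assumptions based on the question, not code taken from it):
// serverApp.js - sketch only, assuming Express and `npm install cors`
const express = require("express");
const cors = require("cors");

const app = express();

// Allow the origin the React app is actually served from.
app.use(cors({ origin: "http://localhost:3000" }));

app.delete("/firestore/", (req, res) => {
  // ... existing delete logic ...
  res.sendStatus(204);
});

app.listen(8081);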

Docker Compose with Docker Toolbox: Node, Mongo, React. React app not showing at the said address

I am trying to run an Express server and a React app through Docker containers.
The Express server runs correctly at the given address (the one in the Kitematic GUI).
However, I am unable to open the React application through the given address; I get "site cannot be reached".
Running Windows 10 Home with Docker Toolbox.
React app dockerfile:
FROM node:10
# Set the working directory to /client
WORKDIR /frontend
# copy package.json into the container at /client
COPY package*.json ./
# install dependencies
RUN npm install
# Copy the current directory contents into the container at /client
COPY . .
# Make port 3001 available to the world outside this container
EXPOSE 3001
# Run the app when the container launches
CMD ["npm", "run", "start"]
Node/Express dockerfile:
# Use a lighter version of Node as a parent image
FROM node:10
# Set the working directory to /api
WORKDIR /backend
# copy package.json into the container at /api
COPY package*.json ./
# install dependencies
RUN npm install
# Copy the current directory contents into the container at /api
COPY . .
# Make port 3000 available to the world outside this container
EXPOSE 3000
# Run the app when the container launches
CMD ["npm", "start"]
Docker compose file:
version: '3'
services:
  client:
    container_name: hydrahr-client
    build: .\frontend
    restart: always
    environment:
      - REACT_APP_BASEURL=${REACT_APP_BASEURL}
    expose:
      - ${REACT_PORT}
    ports:
      - "3001:3001"
    links:
      - api
  api:
    container_name: hydrahr-api
    build: ./backend
    restart: always
    expose:
      - ${SERVER_PORT}
    environment: [
      'API_HOST=${API_HOST}',
      'MONGO_DB=${MONGO_DB}',
      'JWT_KEY=${JWT_KEY}',
      'JWT_HOURS_DURATION=${JWT_HOURS_DURATION}',
      'IMAP_EMAIL_LISTENER=${IMAP_EMAIL_LISTENER}',
      'IMAP_USER=${IMAP_USER}',
      'IMAP_PASSWORD=${IMAP_PASSWORD}',
      'IMAP_HOST=${IMAP_HOST}',
      'IMAP_PORT=${IMAP_PORT}',
      'IMAP_TLS=${IMAP_TLS}',
      'SMTP_EMAIL=${SMTP_EMAIL}',
      'SMTP_PASSWORD=${SMTP_PASSWORD}',
      'SMTP_HOST=${SMTP_HOST}',
      'SMTP_PORT=${SMTP_PORT}',
      'SMTP_TLS=${SMTP_TLS}',
      'DEFAULT_SYSTEM_PASSWORD=${DEFAULT_SYSTEM_PASSWORD}',
      'DEFAULT_SYSTEM_EMAIL=${DEFAULT_SYSTEM_EMAIL}',
      'DEFAULT_SYSTEM_NAME=${DEFAULT_SYSTEM_NAME}',
      'SERVER_PORT=${SERVER_PORT}'
    ]
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo
    restart: always
    container_name: mongo
    ports:
      - "27017:27017"
Running with docker-compose up -d
UPDATE 1:
I am able to run the react application using docker run -p 3000:3000 hydra-client-test after building that image.
Since running the container with -p 3000:3000 works, the client is actually probably listening on port 3000. Try setting:
ports:
  - "3001:3000"
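Alternatively, if the container is really meant to listen on 3001 (to match EXPOSE 3001 in the client Dockerfile), Create React App's dev server reads its port from a PORT environment variable, so something like this in the client service should also line the numbers up (a sketch that assumes the frontend was bootstrapped with Create React App):
environment:
  - REACT_APP_BASEURL=${REACT_APP_BASEURL}
  - PORT=3001
ports:
  - "3001:3001"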
