HTTP requests from a React app to Express inside Docker containers - node.js

I am trying to send an HTTP request with Axios from a React app (built into an Nginx container) to an Express app (built into a Node container).
When I use the container name in the HTTP request to communicate between different backend containers, everything works,
but when I try it from the React app it fails with a CORS error. It only works if I explicitly type the Express container's IP address.
I am using docker-compose to build everything, so I don't know the address in advance.
Is there a way to get around this issue?
cinema-svc:
  build: ../cinema_web-service/cinema-svc
  container_name: cinema-svc
  depends_on:
    - auth-svc
    - subs-svc
    - users-svc
  networks:
    - cinemanet
    - frontnet
  restart: unless-stopped
react-client:
  build: ../react_client/client
  container_name: react-client
  depends_on:
    - cinema-svc
  networks:
    - frontnet
  ports:
    - "80:80"
  restart: unless-stopped
Update
This is what solved my issue:
cinema-svc:
  build: ../cinema_web-service/cinema-svc
  container_name: cinema-svc
  depends_on:
    - auth-svc
    - subs-svc
    - users-svc
  networks:
    cinemanet:
    frontnet:
      ipv4_address: '172.25.0.20'
  restart: unless-stopped
react-client:
  build: ../react_client/client
  container_name: react-client
  depends_on:
    - cinema-svc
  networks:
    - frontnet
  ports:
    - "80:80"
  restart: unless-stopped
networks:
  frontnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/24
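With frontnet pinned to the 172.25.0.0/24 subnet, cinema-svc now always comes up at 172.25.0.20, so the React app can target that address. A minimal sketch of the browser-side call, assuming Axios; the port (8080) and the /api/movies path are hypothetical placeholders for whatever cinema-svc actually exposes:

import axios from "axios";

// 172.25.0.20 is the static address assigned to cinema-svc above.
// The port and path are placeholders, not taken from the question.
const api = axios.create({ baseURL: "http://172.25.0.20:8080" });

export async function getMovies() {
  const res = await api.get("/api/movies");
  return res.data;
}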

Related

ngrok: Django and Node.js: how to add multiple ports

How can I add multiple ports?
I have a Node.js and a Django application:
webapp:
  image: "python-node-buster"
  ports:
    - "8001:8000"
  command:
    - python manage.py runserver --noreload 0.0.0.0:8000
  networks:
    - django_network
node:
  image: "python-node-buster"
  ports:
    - "3000:3000"
  stdin_open: true
  networks:
    - node_network
ngrok:
  image: wernight/ngrok:latest
  ports:
    - 4040:4040
  environment:
    NGROK_PROTOCOL: http
    NGROK_PORT: node:3000
    NGROK_AUTH: ""
  depends_on:
    - node
    - webapp
  networks:
    - node_network
    - django_network
networks:
  django_network:
    driver: bridge
  node_network:
    driver: bridge
Assuming I get the URL for node at http://localhost:4040/inspect/http as http://xxxx.ngrok.io, in my Node.js application I want to access http://xxxx.ngrok.io:8001 for the APIs. How do I configure this?

Backend can't query dockerized postgresql

I'm running my containers via a docker compose file. They are on the same network and I can ping my database container from my backend container. I use the database name as the hostname in the connection string, and it doesn't give any error about not finding the host. Instead, it just hangs and times out.
I have a test endpoint which is just supposed to test the connection. When you use that endpoint, the database container logs "invalid packet length", and on the frontend nothing happens, then it times out. I have no idea what's wrong. Any help?
version: '3.2'
services:
  server:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    depends_on:
      - backend
      - frontend
      - database
    ports:
      - '5000:80'
    networks:
      - app_network
  database:
    image: postgres:latest
    container_name: database
    ports:
      - "5432:5432"
    restart: always
    hostname: database
    environment:
      POSTGRES_PASSWORD: 1234
      POSTGRES_USER: postgres
  backend:
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    image: kalendae:backend
    hostname: backend
    container_name: backend
    environment:
      - WAIT_HOSTS=database:5432
      - DATABASE_HOST=database
      - DATABASE_PORT=5432
      - PORT=5051
  frontend:
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    image: kalendae:frontend
    hostname: frontend
    container_name: frontend
    environment:
      - WAIT_HOSTS=backend:5051
      - REACT_APP_BACKEND_HOST=localhost
      - REACT_APP_BACKEND_PORT=5051
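For reference, a minimal sketch of how the backend's connection could be built from the environment above; the node-postgres (pg) client is an assumption, while the host, port, user and password come straight from the compose file:

const { Client } = require("pg");

// DATABASE_HOST=database and DATABASE_PORT=5432 come from the backend
// service's environment; the credentials come from the postgres service.
const client = new Client({
  host: process.env.DATABASE_HOST,
  port: Number(process.env.DATABASE_PORT),
  user: "postgres",
  password: "1234",
});

async function testConnection() {
  await client.connect();
  const res = await client.query("SELECT 1");
  console.log(res.rows);
  await client.end();
}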

Can't get docker-compose networking to work

I'm trying to make requests to my server container from my client app in another container. The Docker Compose docs state that the network is set up automatically, so shouldn't all ports be accessible from all containers? When I make a curl request to port 4000 from outside of the container (in a fresh terminal), it works. However, when I enter the client container (selektor-client) and try the same request, it fails.
curl --request POST http://localhost:4000/api/music
What am I doing wrong?
docker-compose.yaml:
version: "3"
services:
client:
container_name: selektor-client
restart: always
build: ./client
ports:
- "3000:3000"
volumes:
- ./client/:/client/
- /client/node_modules/
command: ["yarn", "start"]
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
mongo:
container_name: selektor-mongo
command: mongod --noauth
build: .
restart: always
volumes:
# - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- data-volume:/data/db
ports:
- "27017:27017"
volumes:
data-volume:
When containers of the same stack call each other, the hostname is, by default, the service name, and the port is the internal port: <servicename>:<internal_port>. So, based on this part of your example:
version: "3"
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
The URL your client has to use to reach the server is http://server:4000.
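For example, a minimal sketch of the same POST made from inside the client container, using the service name as the hostname (axios is assumed here; the /api/music endpoint comes from the question):

const axios = require("axios");

// "server" resolves through Compose's embedded DNS; 4000 is the internal port.
axios.post("http://server:4000/api/music")
  .then((res) => console.log(res.status))
  .catch((err) => console.error(err.message));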
I had the same problem. My solution was to define a network with a subnet mask and assign a static IP to each container:
version: "3"
networks:
my_network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
services:
mongodb:
image: mongo:4.2.1
container_name: mongo
command: mongod --auth
hostname: mongo
networks:
my_network:
ipv4_address: 172.20.0.5
volumes:
- /data/db/mongo:/data/db
ports:
- "27017:27017"
rabitmq:
hostname: rabbitmq
container_name: rabbitmq
image: rabbitmq:latest
networks:
my_network:
ipv4_address: 172.20.0.3
volumes:
- /var/lib/rabbitmq:/data/db
ports:
- "5672:5672"
- "15672:15672"
restart: always
You can then access the containers through their IP addresses.
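For example, a minimal sketch of connecting to the statically addressed MongoDB container from Node; the official mongodb driver and the omission of credentials (mongod --auth would normally require them) are assumptions for illustration:

const { MongoClient } = require("mongodb");

// 172.20.0.5 is the address pinned to the mongodb service above.
const client = new MongoClient("mongodb://172.20.0.5:27017");

async function main() {
  await client.connect();
  console.log("connected to mongo at 172.20.0.5");
  await client.close();
}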

Define IP of docker-compose file

I'm kind of new to Docker, so sorry if my terminology is a little wrong. I'm in the process of getting my app to run in Docker. Everything starts up and runs correctly, but I'm unable to set the IP address that the services run on. I need to do so since I'm making API calls that previously referenced a static variable in my JS code. The Spark service especially is important to have at a knowable IP; as of now it's randomly assigned.
docker-compose.yml
version: '3.0' # specify docker-compose version
services:
  vue:
    build: client
    ports:
      - "80:80" # specify port mapping
  spark:
    build: accubrew-spark
    ports:
      - "8080:8080"
  express:
    build: server
    ports:
      - "3000:3000"
    links:
      - database
  database:
    image: mongo
    ports:
      - "27017:27017"
When you run containers using docker-compose, it creates a user-defined network for you, and Docker provides an embedded DNS server; each container gets a record that is resolvable only from the other containers on that network.
This makes it easy to contact each service: just call it by the name you specified in your docker-compose.yml.
You can try this:
version: '3.0' # specify docker-compose version
services:
  vue:
    build: client
    ports:
      - "80:80" # specify port mapping
  spark:
    build: accubrew-spark
    ports:
      - "8080:8080"
    networks:
      my_net:
        ipv4_address: 172.26.0.3
  express:
    build: server
    ports:
      - "3000:3000"
    links:
      - database
  database:
    image: mongo
    ports:
      - "27017:27017"
networks:
  my_net:
    ipam:
      driver: default
      config:
        - subnet: 172.26.0.0/16
But your Spark port is mapped to localhost:8080. If you need to expose another port on the 172.26.0.3 address, you can add - "7077", or also map it to localhost with - "7077:7077". This is an example with port 7077 exposed:
version: '3.0' # specify docker-compose version
services:
  vue:
    build: client
    ports:
      - "80:80" # specify port mapping
  spark:
    build: accubrew-spark
    ports:
      - "8080:8080"
      - "7077"
    networks:
      my_net:
        ipv4_address: 172.26.0.3
  express:
    build: server
    ports:
      - "3000:3000"
    links:
      - database
  database:
    image: mongo
    ports:
      - "27017:27017"
networks:
  my_net:
    ipam:
      driver: default
      config:
        - subnet: 172.26.0.0/16
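With spark pinned to 172.26.0.3, the API calls that previously referenced a static variable can point at that address. A minimal sketch; the /api/status path is a hypothetical placeholder, and the address is only reachable where the Docker bridge network is routable (for example, on the Docker host itself):

// The address and port come from the compose file above; the endpoint is hypothetical.
const SPARK_BASE_URL = "http://172.26.0.3:8080";

fetch(`${SPARK_BASE_URL}/api/status`)
  .then((res) => res.json())
  .then((data) => console.log(data))
  .catch((err) => console.error(err.message));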

Calling a service from one Docker container to another container causes net::ERR_NAME_NOT_RESOLVED

I am running an application with multiple containers as below:
feeder - a simple Node.js container from the node:alpine image
api - a Node.js container with Express.js from the node:alpine image
ui-app - a React app container from the node:alpine image
I am trying to call the api service from ui-app and I am getting the error below.
(screenshot of the browser console log)
I am not sure what is causing the problem.
If I access the services as http://192.168.99.100/ping it works (that is my docker machine default IP)...
but if I use the container name, like http://api:3200/ping, it is not working...? Please help.
Below is my docker-compose.
version: '3'
services:
  feeder:
    build: ./feeder
    container_name: feeder
    tty: true
    depends_on:
      - redis
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3100:3100"
    networks:
      - hmdanet
  api:
    build: ./api
    container_name: api
    tty: true
    depends_on:
      - feeder
    links:
      - redis
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3200:3200"
    networks:
      hmdanet:
        aliases:
          - "hmda-api"
  ui-app:
    build: ./ui-app
    container_name: ui-app
    tty: true
    depends_on:
      - api
    links:
      - api
    environment:
      - IS_FROM_DOCKER=true
    ports:
      - "3000:3000"
    networks:
      - hmdanet
  redis:
    image: redis:latest
    ports:
      - '6379:6379'
    networks:
      - hmdanet
networks:
  hmdanet:
    driver: bridge
You can only use a service name as a domain name when you are inside a container. In your case it's your browser making the call, and it does not know what api is. In your web app, you should have an env variable, such as a base URL, set to the IP of your Docker machine or to localhost.
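A minimal sketch of that approach, assuming a Create React App style build-time variable; the REACT_APP_API_URL name is an assumption, and the fallback uses the docker machine IP and the api port from the question:

// Base URL injected at build time; falls back to the docker machine address.
const BASE_URL = process.env.REACT_APP_API_URL || "http://192.168.99.100:3200";

fetch(`${BASE_URL}/ping`)
  .then((res) => res.text())
  .then((body) => console.log(body))
  .catch((err) => console.error(err.message));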