How to access a Node container from another (Oracle JET) container? - node.js

I want my UI to get data through my Node server, where each runs in its own container. How do I connect them using a docker-compose file?
I have three containers running MongoDB, a Node server, and Oracle JET for the UI.
I want to reach the Node APIs from the Oracle JET UI, and MongoDB from the Node server, so I defined the docker-compose file below.
The link between Mongo and Node works.
version: "2"
services:
  jet:
    container_name: sam-jet
    image: amazus/sam-ui
    ports:
      - "8000:8000"
    links:
      - app
    depends_on:
      - app
  app:
    container_name: sam-node
    restart: always
    image: amazus/sam-apis
    ports:
      - "3000:3000"
    links:
      - mongo
    depends_on:
      - mongo
  mongo:
    container_name: sam-mongo
    image: amazus/sam-data
    ports:
      - "27017:27017"
In my Oracle JET app I defined the URI as "http://localhost:3000/sam", but it didn't work. I also tried "http://app:3000/sam" after reading accessing a docker container from another container, but that didn't work either.

OJET just needs a working endpoint to fetch data from; it really does not matter where, or on what stack, it is hosted. Once the Node.js service is working and its URL is reachable from a browser, it should work from the JET app as well. There may also be CORS-related headers that you need to set up on the server side.
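For the server-side part, a minimal sketch of enabling CORS in an Express-based API might look like the following (the express and cors packages and the /sam route are assumptions; the question does not show the server code):

// server.js - minimal sketch, assuming an Express API with the cors package installed
const express = require('express');
const cors = require('cors');

const app = express();

// Allow the JET UI, served from port 8000 on the host, to call this API from the browser
app.use(cors({ origin: 'http://localhost:8000' }));

// Hypothetical /sam endpoint matching the URI used in the question
app.get('/sam', (req, res) => {
  res.json({ status: 'ok' });
});

app.listen(3000, () => console.log('API listening on port 3000'));

Note that the JET UI code runs in the browser, not inside the jet container, so the browser-facing URL would normally be the host-published port (http://localhost:3000/sam); a hostname like app only resolves for traffic between containers.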

Related

Unable to connect to the Postgres database in Docker Compose

I am building a Node.js web application, using PostgreSQL and Sequelize for the database.
My docker-compose.yaml has the following content.
version: '3.8'
services:
  web:
    container_name: web-server
    build: .
    ports:
      - 4000:4000
    restart: always
    depends_on:
      - postgres
  postgres:
    container_name: app-db
    image: postgres:11-alpine
    ports:
      - '5431:5431'
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=babe_manager
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    build:
      context: .
      dockerfile: ./pgconfig/Dockerfile
volumes:
  app-db-data:
I am using the following credentials to connect to the database.
DB_DIALECT=postgres
DB_HOST=postgres
DB_NAME=babe_manager
DB_USERNAME=docker
DB_PORT=5431
DB_PASSWORD=secret
When the application tries to connect to the database, I am getting the following error.
getaddrinfo ENOTFOUND postgres
I tried using this too.
DB_DIALECT=postgres
DB_HOST=localhost
DB_NAME=babe_manager
DB_USERNAME=docker
DB_PORT=5431
DB_PASSWORD=secret
This time I am getting the following error.
connect ECONNREFUSED 127.0.0.1:5431
My app server is not within the docker-compose.yaml. I just start it using "nodemon server.js". How can I fix it?
You need to make sure your Postgres container and your Node app are on the same network. Once they reside in a common network, you can use Docker's DNS to resolve the database from the Node application; in your case either postgres (the service name) or app-db (the container name) should work as the hostname. More on container communication over docker networks.
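As an illustration only, if the Node app were also run as a service on that network, a Sequelize connection could use the service name as the host (the credentials below mirror the question's compose file; 5432 is Postgres's default in-container port):

// db.js - sketch, assuming the Node app runs as a Compose service on the same network
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize(
  process.env.DB_NAME || 'babe_manager',
  process.env.DB_USERNAME || 'docker',
  process.env.DB_PASSWORD || 'secret',
  {
    host: process.env.DB_HOST || 'postgres',   // service name, resolved by Docker's DNS
    port: Number(process.env.DB_PORT) || 5432, // container-to-container traffic uses the in-container port
    dialect: 'postgres',
  }
);

// Quick connectivity check
sequelize.authenticate()
  .then(() => console.log('Database connection established'))
  .catch((err) => console.error('Unable to connect:', err));

module.exports = sequelize;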

I cannot establish connection between two containers in Heroku

I have a web application built with Node.js and MongoDB. I containerized the app using Docker and it works fine locally, but once I tried to deploy it to production I couldn't establish a connection between the backend and the MongoDB container. For some reason the environment variables are always undefined.
Here is my docker-compose.yml:
version: "3.7"
services:
  food-delivery-db:
    image: mongo:4.4.10
    restart: always
    container_name: food-delivery-db
    ports:
      - "27018:27018"
    environment:
      MONGO_INITDB_DATABASE: food-delivery-db
    volumes:
      - food-delivery-db:/data/db
    networks:
      - food-delivery-network
  food-delivery-app:
    image: thisk8brd/food-delivery-app:prod
    build:
      context: .
      target: prod
    container_name: food-delivery-app
    restart: always
    volumes:
      - .:/app
    ports:
      - "3000:5000"
    depends_on:
      - food-delivery-db
    environment:
      - MONGODB_URI=mongodb://food-delivery-db/food-delivery-db
    networks:
      - food-delivery-network
volumes:
  food-delivery-db:
    name: food-delivery-db
networks:
  food-delivery-network:
    name: food-delivery-network
    driver: bridge
This is expected behaviour:
Docker images run in dynos the same way that slugs do, and under the same constraints:
…
Network linking of dynos is not supported.
Your MongoDB container is great for local development, but you can't use it in production on Heroku. Instead, you can select and provision an addon for your app and connect to it from your web container.
For example, ObjectRocket for MongoDB sets an environment variable ORMONGO_RS_URL. Your application would connect to the database via that environment variable instead of MONGODB_URI.
If you'd prefer to host your database elsewhere, that's fine too. I believe MongoDB Atlas is the official offering.
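As a sketch of what the application side could look like with mongoose (which the question implies but does not show), the code can read whichever variable the add-on provides and fall back to the local URI for development:

// db.js - sketch: prefer the add-on's connection string, fall back to the local MONGODB_URI
const mongoose = require('mongoose');

const uri = process.env.ORMONGO_RS_URL || process.env.MONGODB_URI;

mongoose.connect(uri)
  .then(() => console.log('Connected to MongoDB'))
  .catch((err) => console.error('MongoDB connection failed:', err));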

NodeJS 14 in a Docker container can't connect to Postgres DB (in/out docker)

I'm making a React Native app using a REST API (Node.js, Express) and PostgreSQL.
Everything works fine when everything is hosted on my local machine.
Everything works fine when the API is hosted on my machine and PostgreSQL runs in a Docker container.
But when both the backend and the frontend are in Docker, the database is reachable from everywhere on my machine except from the backend.
I'm using docker-compose.
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - "8080:8080"
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
volumes:
  db-data:
.env_docker and .env have the same parameters (only the names differ).
Here are my Dockerfiles:
Backend
FROM node:14.1
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Database
FROM postgres:alpine
COPY ./wallnerdb.sql /docker-entrypoint-initdb.d/
I tried changing the hostname in the Postgres connection URL to the Docker container name, my host IP address, and localhost, but with no results.
It's also the same .env file (in my Node repo, with db_name, password, etc.) that I use locally to connect my backend to the DB.
Since you are using Node.js 14 in the Docker container, make sure that you have the latest pg dependency installed:
https://github.com/brianc/node-postgres/issues/2180
Alternatively: Downgrade to Node 12.
Also make sure that both the database and the "backend" are on the same network, and that the backend "depends on" the database.
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - '8080:8080'
    networks:
      - default
    depends_on:
      - wallnerdatabase
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
    networks:
      - default
volumes:
  db-data:
networks:
  default:
This should not be necessary in your case, as pointed out in the comments, since Docker Compose already creates a default network.
The service name "wallnerdatabase" is the hostname of your database, if not configured otherwise.
I expect the issue to be in the database connection URL since you did not share it.
Containers in the same network in a docker-compose.yml can reach each other using the service name. In your case the service name of the database is wallnerdatabase so this is the hostname that you should use in the database connection URL.
The database connection URL that you should use in your backend service should be similar to this:
postgres://user:password@wallnerdatabase:5432/dbname
Also make sure that the backend code is calling the database using the hostname wallnerdatabase as it is defined in the docker-compose.yml file.
Here is the reference on Networking in Docker Compose.
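For illustration, a node-postgres connection using that hostname might look like the sketch below (user, password and database name are placeholders, not values from the question):

// db.js - sketch: reach the wallnerdatabase service by its Compose service name
const { Pool } = require('pg');

const pool = new Pool({
  host: 'wallnerdatabase', // service name from docker-compose.yml
  port: 5432,
  user: 'user',            // placeholder credentials
  password: 'password',
  database: 'dbname',
});

pool.query('SELECT NOW()')
  .then((res) => console.log('Connected at', res.rows[0].now))
  .catch((err) => console.error('Connection failed:', err));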
You should access your DB using service name as hostname. Here is my working example - https://gitlab.com/gintsgints/vue-fullstack/-/blob/master/docker-compose.yml

Docker - How to wait for container running

I want to launch three containers for my web application.
The containers are: frontend, backend and a Mongo database.
To do this I wrote the following docker-compose.yml
version: '3.7'
services:
  web:
    image: node
    container_name: web
    ports:
      - "3000:3000"
    working_dir: /node/client
    volumes:
      - ./client:/node/client
    links:
      - api
    depends_on:
      - api
    command: npm start
  api:
    image: node
    container_name: api
    ports:
      - "3001:3001"
    working_dir: /node/api
    volumes:
      - ./server:/node/api
    links:
      - mongodb
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    restart: always
    image: mongo
    container_name: mongodb
    ports:
      - "27017:27017"
    volumes:
      - ./database/data:/data/db
      - ./database/config:/data/configdb
and updated the connection string in my .env file:
MONGO_URI = 'mongodb://mongodb:27017/test'
I run it with docker-compose up -d and everything starts.
The problem is when I run docker logs api -f to monitor the backend status: I get MongoNetworkError: failed to connect to server [mongodb:27017] on first connect, because my mongodb container is up but not yet accepting connections (it only starts waiting for connections after the backend has already tried to connect).
How can I check that mongodb is waiting for connections before running the api container?
Thanks in advance
Several possible solutions in order of preference:
Configure your application to retry after a short delay and eventually timeout after too many connection failures. This is an ideal solution for portability and can also be used to handle the database restarting after your application is already running and connected.
Use an entrypoint that waits for mongo to become available. You can attempt a full mongo client connect + login, or a simple tcp port check with a script like wait-for-it. Once that check finishes (or times out and fails) you can continue the entrypoint to launching your application.
Configure docker to retry starting your application with a restart policy, or deploy it with orchestration that automatically recovers when the application crashes. This is a less than ideal solution, but extremely easy to implement.
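As a minimal sketch of option 1 using mongoose (the retry count and delay are arbitrary choices, not values from the question):

// connect.js - sketch of option 1: retry the initial connection before giving up
const mongoose = require('mongoose');

async function connectWithRetry(uri, retries = 10, delayMs = 3000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      await mongoose.connect(uri);
      console.log('Connected to MongoDB');
      return;
    } catch (err) {
      console.log(`Attempt ${attempt}/${retries} failed, retrying in ${delayMs} ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('Could not connect to MongoDB after retries');
}

connectWithRetry(process.env.MONGO_URI);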
Here's an example of option 3:
api:
  image: node
  deploy:
    restart_policy:
      condition: unless-stopped
Note: looking at your compose file, you have a mix of v2 and v3 syntax, and many options like depends_on, links, and container_name are not valid with swarm mode. You are also defining settings like working_dir, which should really be done in your Dockerfile instead.

How to make a second container wait until the first one has started in docker-compose?

I am trying to run two containers with Docker Compose using "docker-compose up", but I get an error.
I have two containers, one for my Node.js app and the other for my MongoDB database, and they are connected to each other.
version: "2"
services:
  app:
    container_name: sam-node
    restart: always
    image: amazus/sam-apis
    ports:
      - "3000:3000"
    links:
      - mongo
  mongo:
    container_name: sam-mongo
    image: amazus/sam-data
    ports:
      - "27017:27017"
The error that I get is: failed to connect to server [mongo:27017] on first connect
Add the depends_on parameter:
app:
  container_name: sam-node
  restart: always
  image: amazus/sam-apis
  ports:
    - "3000:3000"
  depends_on:
    - mongo
  links:
    - mongo
Also, if you'd like to start the dependent container only after the parent one is fully initialised, you should add a command parameter that checks whether the parent container has finished initialising.
Read more here
Why would you ever need this?
Because some services, such as databases, can take a while to initialise, and if you have logic that requires an immediate response, it's better to start the dependent container only after the parent one has completed its initialisation.
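One way to do that check from the Node side is a small wait script that pings MongoDB before the server starts; this is a sketch under the assumption that the official mongodb driver is available, not code from the question:

// wait-for-mongo.js - sketch: block startup until MongoDB accepts connections
const { MongoClient } = require('mongodb');

async function waitForMongo(uri, retries = 20, delayMs = 2000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const client = await MongoClient.connect(uri);
      await client.close();
      console.log('MongoDB is ready');
      return;
    } catch (err) {
      console.log(`MongoDB not ready (attempt ${attempt}/${retries}), waiting...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  process.exit(1); // give up so a restart policy can retry the whole container
}

waitForMongo('mongodb://mongo:27017/test');

The app container's command could then be something like node wait-for-mongo.js && npm start, so the application only boots once the check succeeds.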
