How to connect to Node API docker container from Angular Nginx container - node.js

I am currently working on an Angular app using a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app can be hosted on an external port. I am able to access the Angular frontend using https://server-external-ip:80, but when trying to log in, Nginx is not connecting to the Node API. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - my-network
  backend: # name of the second service
    image: myId/mynodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    networks:
      - my-network
    command: bash -c "sleep 20 && node server.js"
  myapp:
    image: myId/myangularapp
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - my-network
networks:
  my-network:
I am not sure what the apiUrl should be. I have tried the following, and nothing worked:
apiUrl: "http://backend:3000/api"
apiUrl: "http://server-external-ip:3000/api"
apiUrl: "http://server-internal-ip:3000/api"
apiUrl: "http://localhost:3000/api"

I think you should use the docker-compose service names as DNS hosts. Your docker-compose file defines the following hosts/ports:
db:5432
http://backend:3000
http://myapp
Make sure to use db as the POSTGRES_HOST in the environment section of the backend service.
Take a look at my repo; I think it is a good way to learn how a similar project works and how to run several apps behind nginx. You can also check my docker-compose.yml: it runs several services, proxied through nginx, that work together.
At that link you'll find an nginx/default.conf file containing several nginx upstream configurations; note how the docker-compose service names are used as hosts there.
Inside the client/ directory, I also have another nginx instance acting as the web server for a React.js project.
The server/ directory holds a Node.js API that connects to Redis and a PostgreSQL database, also built from the docker-compose.yml.
If you need to route or redirect traffic to /api, you can use an nginx config like the sketch below.
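For example, a minimal default.conf sketch, assuming the Angular image's nginx serves the static build and the API service is named backend on port 3000, as in the compose file above:

  upstream api {
      # "backend" resolves through Docker's embedded DNS to the backend service
      server backend:3000;
  }

  server {
      listen 80;

      # Serve the built Angular app
      location / {
          root /usr/share/nginx/html;
          try_files $uri $uri/ /index.html;
      }

      # Proxy API calls to the Node.js container
      location /api {
          proxy_pass http://api;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
      }
  }

With something like this in place, the Angular app can use a relative apiUrl such as "/api": the browser calls the same origin it loaded the app from, and nginx forwards the request inside the Docker network.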
I think this use case can be useful for you and other users!

Related

I cannot establish connection between two containers in Heroku

I have a web application built using Node.js and MongoDB. I have containerized the app using Docker, and it was working fine locally, but once I tried to deploy it to production I couldn't establish a connection between the backend and the MongoDB container. For some reason the environment variables are always undefined.
Here is my docker-compose.yml:
version: "3.7"
services:
food-delivery-db:
image: mongo:4.4.10
restart: always
container_name: food-delivery-db
ports:
- "27018:27018"
environment:
MONGO_INITDB_DATABASE: food-delivery-db
volumes:
- food-delivery-db:/data/db
networks:
- food-delivery-network
food-delivery-app:
image: thisk8brd/food-delivery-app:prod
build:
context: .
target: prod
container_name: food-delivery-app
restart: always
volumes:
- .:/app
ports:
- "3000:5000"
depends_on:
- food-delivery-db
environment:
- MONGODB_URI=mongodb://food-delivery-db/food-delivery-db
networks:
- food-delivery-network
volumes:
food-delivery-db:
name: food-delivery-db
networks:
food-delivery-network:
name: food-delivery-network
driver: bridge
This is expected behaviour:
Docker images run in dynos the same way that slugs do, and under the same constraints:
…
Network linking of dynos is not supported.
Your MongoDB container is great for local development, but you can't use it in production on Heroku. Instead, you can select and provision an addon for your app and connect to it from your web container.
For example, ObjectRocket for MongoDB sets an environment variable ORMONGO_RS_URL. Your application would connect to the database via that environment variable instead of MONGODB_URI.
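For instance, a minimal Node.js sketch of picking up whichever variable is set (ORMONGO_RS_URL is the add-on's variable; the local fallback URI here is just an assumption for development):

  const { MongoClient } = require('mongodb'); // official mongodb driver

  // Prefer the add-on's variable in production, fall back to a local URI.
  const uri =
    process.env.ORMONGO_RS_URL ||
    process.env.MONGODB_URI ||
    'mongodb://localhost:27017/food-delivery-db';

  async function connect() {
    const client = new MongoClient(uri);
    await client.connect();
    return client.db(); // database name is taken from the URI
  }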
If you'd prefer to host your database elsewhere, that's fine too. I believe MongoDB Atlas is the official offering.

Can't access MongoDB container from NodeJS App

I'm running an instance of a web application in my Docker container and am also running a MongoDB container so when I launch the web app I can easily connect to the DB on the app's connection page.
The issue is that I'm not sure how to reach the Mongo container from my web app and am not sure if my host/port connection info is correct.
My Docker Setup
Both the mongo and web app services are up and running without errors.
I built the two through this docker-compose.yml:
version: "3.3"
services:
web:
image: grafana-asw-v3
container_name: grafana-asw-v3
restart: always
build: .
ports:
- "13000:3000"
volumes:
- grafana-storage:/var/lib/grafana
stdin_open: true
tty: true
db:
container_name: mongo
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
volumes:
- grafana-mongo-db:/var/lib/mongo
ports:
- "27018:27017"
volumes:
grafana-mongo-db: {}
grafana-storage: {}
Issue
With everything up and running I'm attempting to connect through the web app, but I seem to be using the wrong connection info...
I assumed I should use "hostMachine:port" (roxane:27018), but it's not connecting. Is there something I overlooked here?
There were two changes I had to make to fix this issue:
Modify the bind_ip in mongod.conf by making this change to my docker-compose file:
db:
  container_name: mongo
  image: mongo
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: example
  volumes:
    - grafana-mongo-db:/var/lib/mongo
  ports:
    - "27018:27017"
  command: mongod --bind_ip 0.0.0.0
I needed to refer to the IP address instead of the hostname in the CLI in my web application. (Thanks to this answer for help with this one)
Short answer
The db service is in the same network as the web service, not in the host network.
Since you named the container via container_name, you should be able to use the connection string mongodb://mongo:27017, as sketched below.
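For instance, a minimal sketch of that connection using the official mongodb driver and the root credentials from the compose file (the authSource parameter is an assumption based on Mongo's default root user setup):

  const { MongoClient } = require('mongodb');

  // "mongo" is the container_name, resolvable from the web container;
  // 27017 is the port inside the Docker network (27018 is only on the host).
  const uri = 'mongodb://root:example@mongo:27017/?authSource=admin';

  async function main() {
    const client = new MongoClient(uri);
    await client.connect();
    console.log('connected:', await client.db('admin').command({ ping: 1 }));
    await client.close();
  }

  main().catch(console.error);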
Explanation
By default, Docker containers run on a bridge network, which lets them communicate with each other without seeing your host network.
When you use ports in a compose file, you map an internal container port to a host port:
"27018:27017" => expose container port 27017 as host port 27018.
As a result, you could expose your web frontend without exposing your mongo service:
version: "3.3"
services:
web:
image: grafana-asw-v3
container_name: grafana-asw-v3
restart: always
build: .
ports:
- "13000:3000"
volumes:
- grafana-storage:/var/lib/grafana
stdin_open: true
tty: true
db:
container_name: mongo
image: mongo
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
volumes:
- grafana-mongo-db:/var/lib/mongo
volumes:
grafana-mongo-db: {}
grafana-storage: {}

Access forbidden to Django resource when accessing through Node.js frontend

I cloned a Django+Node.js open-source project, the goal of which is to upload and annotate text documents, and save the annotations in a Postgres db. This project has stack files for docker-compose, both for Django dev and production setups. Both these stack files work completely fine out of the box, with a Postgres database.
Now I would like to upload this project to Google Cloud - as my first ever containerized application. As a first step, I simply want to move the persistent storage to Cloud SQL instead of the included Postgres image in the stack file. My stack file (Django dev) looks as follows:
version: "3.7"
services:
backend:
image: python:3.6
volumes:
- .:/src
- venv:/src/venv
command: ["/src/app/tools/dev-django.sh", "0.0.0.0:8000"]
environment:
ADMIN_USERNAME: "admin"
ADMIN_PASSWORD: "${DJANGO_ADMIN_PASSWORD}"
ADMIN_EMAIL: "admin#example.com"
# DATABASE_URL: "postgres://doccano:doccano#postgres:5432/doccano?sslmode=disable"
DATABASE_URL: "postgres://${CLOUDSQL_USER}:${CLOUDSQL_PASSWORD}#sql_proxy:5432/postgres?sslmode=disable"
ALLOW_SIGNUP: "False"
DEBUG: "True"
ports:
- 8000:8000
depends_on:
- sql_proxy
networks:
- network-overall
frontend:
image: node:13.7.0
command: ["/src/frontend/dev-nuxt.sh"]
volumes:
- .:/src
- node_modules:/src/frontend/node_modules
ports:
- 3000:3000
depends_on:
- backend
networks:
- network-overall
sql_proxy:
image: gcr.io/cloudsql-docker/gce-proxy:1.16
command:
- "/cloud_sql_proxy"
- "-dir=/cloudsql"
- "-instances=${CLOUDSQL_CONNECTION_NAME}=tcp:0.0.0.0:5432"
- "-credential_file=/root/keys/keyfile.json"
volumes:
- ${GCP_KEY_PATH}:/root/keys/keyfile.json:ro
- cloudsql:/cloudsql
networks:
- network-overall
volumes:
node_modules:
venv:
cloudsql:
networks:
network-overall:
I have a bunch of models, e.g. project, in the Django backend, which I can view, modify, add and delete using the Django admin interface, but while trying to access them through Node.js views I get a 403 Forbidden error. This is the case for all my Django models.
For reference, in the above stack file I have listed the only difference from the originally cloned docker-compose stack file, where the DATABASE_URL used to point to a local Postgres Docker image, as follows:
postgres:
  image: postgres:12.0-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  environment:
    POSTGRES_USER: "doccano"
    POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    POSTGRES_DB: "doccano"
  networks:
    - network-backend
To check if my GCP keys are correct, I tried to deploy the Cloud SQL Proxy container alone and interact with it (add, remove and update rows in included tables), and that was possible. However, the fact that I can use the Django admin interface successfully in the deployed Docker-compose stack should already prove that things are ok with the Cloud SQL proxy.
I'm not an experienced Node.js developer by any means, and I have only a little experience with Django and Django admin. My intention behind using a docker-compose setup was that I would not have to bother with the intricacies of the JS views and could deal only with the Python business logic.

Node can't reach postgres server in docker compose

I'm running a NodeJS app and its related services (Redis, Postgres) through docker-compose. My NodeJS app can reach Redis just fine using its name & port from my docker-compose file, but for some reason I can't seem to reach Postgres:
Error: getaddrinfo EAI_AGAIN postgres
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:66:26)
My docker-compose file:
services:
  api:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:11.1
    ports:
      - "5432:5432"
    expose:
      - "5432"
    hostname: postgres
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test
    restart: on-failure
    networks:
      - integration-tests
  redis:
    image: 'docker.io/bitnami/redis:6.0-debian-10'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    hostname: redis
    volumes:
      - 'redis_data:/bitnami/redis/data'
I've tried both normal lts and lts-alpine base images for my NodeJS app. I'm using knex, which delegates connecting to the pg library... Anybody have any idea why it won't even connect? I've tried both running directly through docker-compose and through tilt.
By adding:
networks:
  - integration-tests
only to postgres, you put postgres on a separate network of its own.
By default, docker-compose creates one network for all the containers in the same file, named <project-name>_default. That's why, when using docker-compose, all the containers in the same file can reach each other by service name.
By specifying a network for postgres, you ask docker-compose not to use the default network for it.
You have two solutions:
- Remove the networks instruction to fall back to the default network
- Add the networks instruction to all the other containers in your project, or only to those that need it (as sketched after this list)
Note: By default, docker-compose prefixes all your objects (containers, networks, volumes) with the project name. The default project name is the name of the current directory.
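A sketch of the second option, assuming the compose file from the question (only the networks-related parts shown; everything elided stays unchanged, and the network must also be declared at the top level):

  services:
    api:
      # ... build, ports, depends_on unchanged ...
      networks:
        - integration-tests   # join the same network as postgres
    postgres:
      # ... unchanged ...
      networks:
        - integration-tests
    redis:
      # ... unchanged ...
      networks:
        - integration-tests
  networks:
    integration-tests:        # top-level declaration of the shared network

With this, the api container can reach the database at postgres:5432 again, since both sit on the integration-tests network.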

NodeJS 14 in a Docker container can't connect to Postgres DB (in/out docker)

I'm making a React-Native app using a REST API (NodeJS, Express) and PostgreSQL.
Everything works fine when hosted on my local machine.
Everything works fine when the API is hosted on my machine and PostgreSQL is in a Docker container.
But when the backend and frontend are both in Docker, the database is reachable from every computer on my local network, but not by the backend.
I'm using docker-compose.
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - "8080:8080"
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
volumes:
  db-data:
.env_docker and .env have the same parameters (only the name changes).
Here are my dockerfiles:
Backend
FROM node:14.1
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Database
FROM postgres:alpine
COPY ./wallnerdb.sql /docker-entrypoint-initdb.d/
I tried changing the hostname in the Postgres connection URL to the Docker container's name, to my host IP address, and to localhost, but with no results.
It's also the same .env file (the file in my Node repo with db_name, passwd, etc.) that I use locally to connect my backend to the db.
Since you are using NodeJS 14 in the Docker container, make sure that you have an up-to-date pg dependency installed (see https://github.com/brianc/node-postgres/issues/2180); alternatively, downgrade to Node 12.
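For example, a minimal sketch of the upgrade (the linked issue reports the Node 14 connection fix landing in the pg 8.x line):

  npm install pg@latest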
Also make sure that both the database and the "backend" are in the same network; the backend should also "depend_on" the database:
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - '8080:8080'
    networks:
      - default
    depends_on:
      - wallnerdatabase
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
    networks:
      - default
volumes:
  db-data:
networks:
  default:
This should not be necessary in your case - as pointed out in the comments - since Docker Compose already creates a default network.
The container name "wallnerdatabase" is the hostname of your database - if not configured otherwise.
I expect the issue to be in the database connection URL since you did not share it.
Containers in the same network in a docker-compose.yml can reach each other using the service name. In your case the service name of the database is wallnerdatabase so this is the hostname that you should use in the database connection URL.
The database connection URL that you should use in your backend service should be similar to this:
postgres://user:password@wallnerdatabase:5432/dbname
Also make sure that the backend code is calling the database using the hostname wallnerdatabase as it is defined in the docker-compose.yml file.
Here is the reference on Networking in Docker Compose.
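A minimal sketch of that connection using the pg library (user, password and dbname are placeholders carried over from the URL above; DATABASE_URL is an assumed environment variable name):

  // server.js - assuming the pg library and the compose file above
  const { Pool } = require('pg');

  const pool = new Pool({
    // "wallnerdatabase" is the compose service name, resolved by Docker's DNS
    connectionString:
      process.env.DATABASE_URL ||
      'postgres://user:password@wallnerdatabase:5432/dbname',
  });

  pool.query('SELECT NOW()')
    .then((res) => console.log('connected at', res.rows[0].now))
    .catch((err) => console.error('connection failed:', err));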
You should access your DB using the service name as the hostname. Here is my working example: https://gitlab.com/gintsgints/vue-fullstack/-/blob/master/docker-compose.yml
