Access forbidden to Django resource when accessing through Node.js frontend

I cloned a Django+Node.js open-source project whose goal is to upload and annotate text documents and save the annotations in a Postgres DB. The project ships docker-compose stack files for both Django dev and production setups. Both stack files work completely fine out of the box with a Postgres database.
Now I would like to deploy this project to Google Cloud - my first ever containerized application. As a first step, I simply want to move the persistent storage to Cloud SQL instead of the Postgres image included in the stack file. My stack file (Django dev) looks as follows:
version: "3.7"
services:
  backend:
    image: python:3.6
    volumes:
      - .:/src
      - venv:/src/venv
    command: ["/src/app/tools/dev-django.sh", "0.0.0.0:8000"]
    environment:
      ADMIN_USERNAME: "admin"
      ADMIN_PASSWORD: "${DJANGO_ADMIN_PASSWORD}"
      ADMIN_EMAIL: "admin@example.com"
      # DATABASE_URL: "postgres://doccano:doccano@postgres:5432/doccano?sslmode=disable"
      DATABASE_URL: "postgres://${CLOUDSQL_USER}:${CLOUDSQL_PASSWORD}@sql_proxy:5432/postgres?sslmode=disable"
      ALLOW_SIGNUP: "False"
      DEBUG: "True"
    ports:
      - 8000:8000
    depends_on:
      - sql_proxy
    networks:
      - network-overall
  frontend:
    image: node:13.7.0
    command: ["/src/frontend/dev-nuxt.sh"]
    volumes:
      - .:/src
      - node_modules:/src/frontend/node_modules
    ports:
      - 3000:3000
    depends_on:
      - backend
    networks:
      - network-overall
  sql_proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    command:
      - "/cloud_sql_proxy"
      - "-dir=/cloudsql"
      - "-instances=${CLOUDSQL_CONNECTION_NAME}=tcp:0.0.0.0:5432"
      - "-credential_file=/root/keys/keyfile.json"
    volumes:
      - ${GCP_KEY_PATH}:/root/keys/keyfile.json:ro
      - cloudsql:/cloudsql
    networks:
      - network-overall
volumes:
  node_modules:
  venv:
  cloudsql:
networks:
  network-overall:
I have a bunch of models, e.g. project, in the Django backend, which I can view, modify, add to and delete using the Django admin interface, but when trying to access them through the Node.js views I get a 403 Forbidden error. This is the case for all my Django models.
For reference, the only difference from the originally cloned docker-compose stack file is the DATABASE_URL, which used to point to a local Postgres Docker image defined as follows:
  postgres:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: "doccano"
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
      POSTGRES_DB: "doccano"
    networks:
      - network-backend
To check whether my GCP keys are correct, I tried deploying the Cloud SQL Proxy container alone and interacting with it (adding, removing and updating rows in the included tables), and that worked. In any case, the fact that I can use the Django admin interface successfully in the deployed docker-compose stack should already prove that the Cloud SQL proxy is fine.
I'm not an experienced Node.js developer by any means, and I have only a little experience with Django and the Django admin. My intention behind using a docker-compose setup was that I would not have to bother with the intricacies of the JS views and could deal only with the Python business logic.
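For illustration only: as far as I can tell, the failing calls are plain REST requests from the Node.js views to the Django backend. A minimal sketch of the kind of request I mean, assuming axios and a hypothetical /v1/projects endpoint (neither is taken verbatim from the project):

// Hypothetical sketch: reproduce the failing call outside the Nuxt views.
// The endpoint path and cookie handling are assumptions, not project code.
import axios from "axios";

async function listProjects(): Promise<void> {
  const response = await axios.get("http://localhost:8000/v1/projects", {
    // Django's session authentication needs the session cookie; without it
    // (or without a valid CSRF token on unsafe methods) it answers 403.
    withCredentials: true,
  });
  console.log(response.status, response.data);
}

listProjects().catch((err) => {
  // A 403 here points at Django permissions/CSRF, not at the Cloud SQL proxy.
  console.error(err.response?.status, err.response?.data);
});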

Related

Building a Docker Compose stack with Azure Container Instance

I'm using Docker Compose with the Azure Container Instances service. Their docs/guides say I should be able to build custom images with that service using docker compose up -d, but the service forces me to include a pre-built image in my compose.yml. How can I deploy a web app from a Compose file so that Azure builds it too?
Here's what my desired compose file looks like. Note that I use a generic Redis image, but that the DB and API both rely on other Dockerfiles being built. I'd like to use some Azure service (I think Container Instances is the only one that supports interacting with Compose files) to deploy with a single command, so that my local code is pushed to some build server (or pulled via git by the build server) to build any images needed by my compose.yml. Is this possible?
version: "3.9"
services:
  db:
    build:
      context: docker/db
      dockerfile: db.Dockerfile
    restart: always
    ports:
      - "3306:3306"
  api:
    build:
      context: docker/api
      dockerfile: api.Dockerfile
    restart: always
    environment:
      - MYSQL_HOST=db
      - MYSQL_HOST_REPLICA=db
      - REDIS_HOSTNAME=redis
    ports:
      - "8000:8000"
    depends_on:
      - db
      - redis
  redis:
    image: redis:alpine
    restart: always
    ports:
      - "6379:6379"

I cannot establish a connection between two containers on Heroku

I have a web application built with Node.js and MongoDB. I containerized the app with Docker and it was working fine locally, but once I tried to deploy it to production I couldn't establish a connection between the backend and the MongoDB container. For some reason the environment variables are always undefined.
Here is my docker-compose.yml:
version: "3.7"
services:
  food-delivery-db:
    image: mongo:4.4.10
    restart: always
    container_name: food-delivery-db
    ports:
      - "27018:27018"
    environment:
      MONGO_INITDB_DATABASE: food-delivery-db
    volumes:
      - food-delivery-db:/data/db
    networks:
      - food-delivery-network
  food-delivery-app:
    image: thisk8brd/food-delivery-app:prod
    build:
      context: .
      target: prod
    container_name: food-delivery-app
    restart: always
    volumes:
      - .:/app
    ports:
      - "3000:5000"
    depends_on:
      - food-delivery-db
    environment:
      - MONGODB_URI=mongodb://food-delivery-db/food-delivery-db
    networks:
      - food-delivery-network
volumes:
  food-delivery-db:
    name: food-delivery-db
networks:
  food-delivery-network:
    name: food-delivery-network
    driver: bridge
This is expected behaviour:
Docker images run in dynos the same way that slugs do, and under the same constraints:
…
Network linking of dynos is not supported.
Your MongoDB container is great for local development, but you can't use it in production on Heroku. Instead, you can select and provision an addon for your app and connect to it from your web container.
For example, ObjectRocket for MongoDB sets an environment variable ORMONGO_RS_URL. Your application would connect to the database via that environment variable instead of MONGODB_URI.
If you'd prefer to host your database elsewhere, that's fine too. I believe MongoDB Atlas is the official offering.
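For illustration, the application code then simply reads whichever variable is set. A minimal connection sketch, assuming mongoose (only the two variable names come from the answer above; the rest is an assumption):

// Minimal sketch, assuming mongoose: ORMONGO_RS_URL comes from the addon,
// MONGODB_URI from docker-compose for local development.
import mongoose from "mongoose";

const uri = process.env.ORMONGO_RS_URL ?? process.env.MONGODB_URI;
if (!uri) {
  throw new Error("No MongoDB connection string configured");
}

mongoose
  .connect(uri)
  .then(() => console.log("Connected to MongoDB"))
  .catch((err) => console.error("MongoDB connection failed", err));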

How to connect to Node API docker container from Angular Nginx container

I am currently working on an Angular app that uses a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app could be hosted on an external port. I am able to access the Angular frontend at https://server-external-ip:80, but when trying to log in, Nginx is not connecting to the Node API. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - my-network
  backend: # name of the second service
    image: myId/mynodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    networks:
      - my-network
    command: bash -c "sleep 20 && node server.js"
  myapp:
    image: myId/myangularapp
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - my-network
networks:
  my-network:
I am not sure what the apiUrl should be. I have tried the following, and nothing worked:
apiUrl: "http://backend:3000/api"
apiUrl: "http://server-external-ip:3000/api"
apiUrl: "http://server-internal-ip:3000/api"
apiUrl: "http://localhost:3000/api"
I think you should use the docker-compose service names as DNS hostnames. It seems you have several Docker hosts/ports available; your docker-compose structure defines the following:
db:5432
http://backend:3000
http://myapp
Make sure to use db as the POSTGRES_HOST in the environment section of the backend service.
Take a look at my repo; I think it is a good way to learn how a similar project works and how to build several apps behind nginx. You can also check my docker-compose.yml: it defines several services that are proxied through nginx and work together.
At that link you'll find an nginx/default.conf file containing several nginx upstream configurations; note how I use the docker-compose service names as hosts there.
Inside the client/ directory there is another nginx instance acting as the web server for a React.js project.
The server/ directory contains a Node.js API; it connects to Redis and a PostgreSQL database, also built from the docker-compose.yml.
If you need to route or redirect traffic to /api, you can use an nginx config like this:
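(A sketch only, not taken from the linked repo: the upstream name is arbitrary, and the backend:3000 address comes from the compose file in the question.)

# Sketch: proxy /api requests to the Node backend service defined in docker-compose.
upstream node_api {
    server backend:3000;
}

server {
    listen 80;

    location /api {
        proxy_pass http://node_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}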
I think this use case can be useful for you and other users!

Using Docker Compose and volumes to persist an uploaded-pictures directory

I'm working on an e-commerce site, and I want to be able to upload product photos from the client and save them in a directory on the server.
I implemented this feature, but then I realized that since we use Docker for our deployment, the directory in which I save the pictures won't persist. From what I've read, I understand that I should use volumes and map that directory in Docker Compose. I'm a complete novice at backend development (I work on the frontend), so I'm not really sure what I should do.
Here is the compose file:
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
If I want to store my photos in ../static/images (relative to the root of my project), what should I do, and how should I refer to this path in my backend code?
The backend is in Node.js (NestJS).
You have to create a volume and tell docker-compose/docker stack to mount it inside the container at the path you want. See the volumes section at the very end of the file and the volumes option on the nodejs service.
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
      - static-files:/home/node/static/images
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
volumes:
  static-files: {}
Doing this, an empty volume will be created to persist your data, and every time a new container mounts this path it will see the data stored in it. I would suggest using the same approach with MySQL instead of saving its data on the host.
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
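On the Node side you then treat the mounted directory like any other path. A minimal sketch, assuming the /home/node/static/images mount point from the compose file above; the function name and the Buffer-based handling are assumptions for illustration, not NestJS specifics:

// Minimal sketch: write an uploaded image into the directory backed by the
// "static-files" volume. Only the mount point matches the compose file above;
// everything else is an assumption for illustration.
import { promises as fs } from "fs";
import * as path from "path";

const IMAGE_DIR = "/home/node/static/images";

export async function saveProductImage(
  fileName: string,
  data: Buffer,
): Promise<string> {
  await fs.mkdir(IMAGE_DIR, { recursive: true });
  const target = path.join(IMAGE_DIR, path.basename(fileName));
  await fs.writeFile(target, data);
  return target; // store this path (or just the file name) in the database
}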

How to access a Node container from another Oracle JET container?

I want my UI to get data through my Node server, where each runs as an independent container. How do I connect them using a docker-compose file?
I have three containers that run MongoDB, a Node server, and Oracle JET for the UI.
I want to access the Node APIs from the Oracle JET UI, and MongoDB from the Node server, so I defined the docker-compose file below.
The link between Mongo and Node works.
version: "2"
services:
  jet:
    container_name: sam-jet
    image: amazus/sam-ui
    ports:
      - "8000:8000"
    links:
      - app
    depends_on:
      - app
  app:
    container_name: sam-node
    restart: always
    image: amazus/sam-apis
    ports:
      - "3000:3000"
    links:
      - mongo
    depends_on:
      - mongo
  mongo:
    container_name: sam-mongo
    image: amazus/sam-data
    ports:
      - "27017:27017"
In my Oracle JET app I defined the URI as "http://localhost:3000/sam", but it didn't work. I also tried "http://app:3000/sam" after reading accessing a docker container from another container, but that doesn't work either.
OJET just needs a working endpoint to fetch data; it really does not matter where it is hosted or on what stack. Once you are able to get the Node.js service working and the URL responds from a browser, it should work from the JET app as well. There may be CORS-related headers that you need to set up on the server side as well.
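If it turns out to be CORS, a minimal sketch of allowing the JET origin on a Node API, assuming Express and the cors middleware package (the route and origin here are assumptions based on the ports in the compose file, not the actual sam-apis code):

// Minimal sketch, assuming an Express app and the "cors" middleware package.
// The allowed origin matches the JET container's published port above.
import express from "express";
import cors from "cors";

const app = express();
app.use(
  cors({
    origin: "http://localhost:8000", // where the OJET UI is served
  }),
);

app.get("/sam", (_req, res) => {
  res.json({ ok: true });
});

app.listen(3000, () => console.log("API listening on port 3000"));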
