Building a Docker Compose stack with Azure Container Instances

I'm using Docker Compose with the Azure Container Instances service. Their docs/guides say I should be able to build custom images with that service using docker compose up -d, but the service forces me to reference a pre-built image in my compose.yml. How can I deploy a web app from a Compose file so that Azure builds it too?
Here's what my desired compose file looks like. Note that I use a generic Redis image, but that the db and api services both rely on their own Dockerfiles being built. I'd like to use some Azure service (I think Container Instances is the only one that supports interacting with Compose files) to deploy with a single command, so that my local code is pushed to some build server (or pulled by the build server using git) to build any images my compose.yml needs. Is this possible? (A workaround sketch follows the compose file.)
version: "3.9"
services:
db:
build:
context: docker/db
dockerfile: db.Dockerfile
restart: always
ports:
- "3306:3306"
api:
build:
context: docker/api
dockerfile: api.Dockerfile
restart: always
environment:
- MYSQL_HOST=db
- MYSQL_HOST_REPLICA=db
- REDIS_HOSTNAME=redis
ports:
- "8000:8000"
depends_on:
- db
- redis
redis:
image: redis:alpine
restart: always
ports:
- "6379:6379"

Related

How to deploy a multi-container docker with GitHub actions to Azure web application?

I want to deploy a web application, which is built with Docker, into an Azure web app.
There are a lot of tutorials and documentation about how to easily deploy a single Docker image into Azure. But how do you deploy multiple images into Azure?
I want to achieve this:
Local development with Docker-Compose. Works.
Versioning with GitHub. Works.
GitHub Actions > Building the Docker images and pushing them to Docker Hub (maybe not necessary if the images are registered in Azure). Works.
Deploy everything to Azure and run the web application. (A workflow sketch follows the compose file below.)
There is a similar question here: How to deploy a multi-container app to Azure with a Github action?
But I want to avoid the manual step mentioned in the answer.
My docker-compose.yml:
version: '3.8'
services:
  server-app:
    image: tensorflow/serving
    command:
      - --model_config_file=/models/models.config
    ports:
      - 8501:8501
    container_name: TF_serving
    tty: true
    volumes:
      - type: bind
        source: ./content
        target: /models/
  client-app:
    build:
      context: ./client-app
      dockerfile: dockerfile
    image: user1234/client-app:latest
    restart: unless-stopped
    ports:
      - 7862:80
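One way the missing deployment step could look, as referenced above (a sketch only: the app name is a placeholder, the secrets must exist in the repository, and azure/webapps-deploy is assumed to accept the compose file through its configuration-file input):

name: Deploy multi-container app
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and push client-app to Docker Hub
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u user1234 --password-stdin
          docker compose build client-app
          docker push user1234/client-app:latest
      - name: Deploy the compose file to the Web App
        uses: azure/webapps-deploy@v2
        with:
          app-name: my-multicontainer-app   # placeholder
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
          configuration-file: docker-compose.yml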

I cannot establish connection between two containers in Heroku

I have a web application built using Node.js and MongoDB. I have containerized the app using Docker and it was working fine locally, but once I tried to deploy it to production I couldn't establish a connection between the backend and the MongoDB container. For some reason the environment variables are always undefined.
Here is my docker-compose.yml:
version: "3.7"
services:
food-delivery-db:
image: mongo:4.4.10
restart: always
container_name: food-delivery-db
ports:
- "27018:27018"
environment:
MONGO_INITDB_DATABASE: food-delivery-db
volumes:
- food-delivery-db:/data/db
networks:
- food-delivery-network
food-delivery-app:
image: thisk8brd/food-delivery-app:prod
build:
context: .
target: prod
container_name: food-delivery-app
restart: always
volumes:
- .:/app
ports:
- "3000:5000"
depends_on:
- food-delivery-db
environment:
- MONGODB_URI=mongodb://food-delivery-db/food-delivery-db
networks:
- food-delivery-network
volumes:
food-delivery-db:
name: food-delivery-db
networks:
food-delivery-network:
name: food-delivery-network
driver: bridge
This is expected behaviour:
Docker images run in dynos the same way that slugs do, and under the same constraints:
…
Network linking of dynos is not supported.
Your MongoDB container is great for local development, but you can't use it in production on Heroku. Instead, you can select and provision an addon for your app and connect to it from your web container.
For example, ObjectRocket for MongoDB sets an environment variable ORMONGO_RS_URL. Your application would connect to the database via that environment variable instead of MONGODB_URI.
If you'd prefer to host your database elsewhere, that's fine too. I believe MongoDB Atlas is the official offering.
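If the database is hosted elsewhere (Atlas or an addon), the swap amounts to a config var plus reading it in the app. A sketch with placeholder credentials:

# Point the dyno at a hosted MongoDB instead of the compose service:
heroku config:set MONGODB_URI="mongodb+srv://user:password@cluster0.example.mongodb.net/food-delivery-db"

# Verify what the app will see at runtime:
heroku config:get MONGODB_URI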

Unable to use volume in docker compose yml in Azure

I have an Azure App Service which uses a docker-compose.yml file, as it is a multi-container Docker app. Its docker-compose.yml file is given below:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
The app works perfectly with no issues; I can open it in the browser.
Now here starts the problem. I am trying to add an SSL certificate to this app. I want to use a volume and map it into the container, so I changed the docker-compose.yml file to:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - SSL:/https
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
Here only the following lines are added, to force the app to use aspnetapp.pfx as the SSL certificate. This certificate should be mounted into the /https folder of the container.
environment:
  - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
  - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
volumes:
  - SSL:/https/aspnetapp.pfx:ro
Also note that the SSL name in the volume refers to an Azure File Share. I have created an Azure Storage account (called 'myazurestorage1') and inside it an Azure file share (called 'myfileshare1'). In this file share I created a folder named ssl and uploaded my certificate into it.
Then, in the Azure App Service that runs the app with Docker Compose, I added a path mapping for this file share so that it can be used with the app.
I also tried the following docker-compose.yml file, as given in the official docs, but to no avail:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ./mydata/ssl/aspnetapp.pfx:/https/aspnetapp.pfx:ro
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare1
      storage_account_name: myazurestorage1
      storageAccountKey: j74O20KrxwX+vo3cv31boJPb+cpo/pWbSy72BdSDxp/d7hXVgEoR56FVA7B+L6D/CnmdpIqHOhiEKqbuttLZAw==
But this does not work; the app starts throwing errors. What is wrong and how can I solve it?
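One thing worth checking (an assumption about the cause, not a confirmed fix): App Service path mappings mount directories, not single files, so a mapping like SSL:/https/aspnetapp.pfx:ro is unlikely to work. A sketch that mounts the share as a directory and points Kestrel at the file inside it, assuming the path mapping is named SSL and the certificate sits in the share's ssl folder:

version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      # The ssl folder is the one created inside the file share.
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/ssl/aspnetapp.pfx
    volumes:
      - SSL:/https   # SSL must match the path-mapping name in App Service
  multiapi:
    image: yogyogi/apps:i2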

Using Docker compose and volumes to persist uploaded pictures directory

I'm working on an e-commerce site. I want the ability to upload product photos from the client and save them in a directory on the server.
I implemented this feature, but then I understood that since we use Docker for our deployment, the directory in which I save the pictures won't persist. As I searched, I realized that I should use volumes and map that directory in Docker Compose. I'm a complete novice backend developer (I work on frontend), so I'm not really sure what I should do.
Here is the compose file:
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
If I want to store my photos in ../static/images (relative to the root of my project), what should I do, and how should I refer to this path in my backend code?
The backend is in Node.js (NestJS).
You have to create a volume and tell docker-compose/docker stack to mount it inside the container at the path you want. See the volumes section at the very end of the file and the volumes option on the nodejs service.
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
      - static-files:/home/node/static/images
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
volumes:
  static-files: {}
Doing this, an empty volume will be created to persist your data, and every time a new container mounts this path it sees the data stored there; the quick check after the reference link below demonstrates this. In your backend code, write the uploaded files to the mount point's absolute container path, /home/node/static/images. I would also suggest using the same approach for mysql instead of saving its data on the host.
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
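A quick way to verify the persistence described above (assumes the Compose v2 CLI; the paths come from the compose file):

# Write a file into the mounted directory from inside the running service...
docker compose exec nodejs sh -c 'echo hello > /home/node/static/images/probe.txt'

# ...recreate the containers (a plain `down` keeps named volumes)...
docker compose down && docker compose up -d

# ...and the file is still there:
docker compose exec nodejs cat /home/node/static/images/probe.txt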

NodeJS 14 in a Docker container can't connect to Postgres DB (in/out docker)

I'm making a React Native app using a REST API (Node.js, Express) and PostgreSQL.
Everything works fine when hosted on my local machine.
Everything works fine when the API is hosted on my machine and PostgreSQL runs in a Docker container.
But when the backend and the database are both in Docker, the database is reachable from anywhere on my local machine, but not by the backend.
I'm using docker-compose.
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - "8080:8080"
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
volumes:
  db-data:
.env_docker and .env have the same parameters (only the name changes).
Here are my Dockerfiles:
Backend
FROM node:14.1
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
Database
FROM postgres:alpine
COPY ./wallnerdb.sql /docker-entrypoint-initdb.d/
I tried changing the hostname in the Postgres connection URL to the Docker service name, my host IP address, and localhost, but with no results.
It's also the same .env (a file in my Node repo with db_name, password, etc.) that I use locally to connect my backend to the db.
Since you are using Node.js 14 in the Docker container, make sure that you have the latest pg dependency installed:
https://github.com/brianc/node-postgres/issues/2180
Alternatively: downgrade to Node 12.
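A sketch of the upgrade path (the linked issue reports that pg versions predating Node 14 support fail with exactly this kind of connection problem):

# In the backend project, pull in a pg version that supports Node 14:
npm install pg@latest

# Rebuild the image so the container picks up the new dependency:
docker-compose build wallnerbackend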
Also make sure that both the database and the "backend" are in the same network, and that the backend depends on the database.
version: '3'
services:
  wallnerbackend:
    build:
      context: ./backend/
      dockerfile: ../Dockerfiles/server.dockerfile
    ports:
      - '8080:8080'
    networks:
      - default
    depends_on:
      - wallnerdatabase
  wallnerdatabase:
    build:
      context: .
      dockerfile: ./Dockerfiles/postgresql.dockerfile
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file: .env_docker
    networks:
      - default
volumes:
  db-data:
networks:
  default:
This should not be necessary in your case, as pointed out in the comments, since Docker Compose already creates a default network.
The service name "wallnerdatabase" is the hostname of your database, if not configured otherwise.
I expect the issue to be in the database connection URL since you did not share it.
Containers in the same network in a docker-compose.yml can reach each other using the service name. In your case the service name of the database is wallnerdatabase so this is the hostname that you should use in the database connection URL.
The database connection URL that you should use in your backend service should be similar to this:
postgres://user:password@wallnerdatabase:5432/dbname
Also make sure that the backend code is calling the database using the hostname wallnerdatabase as it is defined in the docker-compose.yml file.
Here is the reference on Networking in Docker Compose.
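To rule out a DNS problem independently of Node, the service name can be resolved from inside the backend container (a diagnostic sketch, assuming the stack is running):

# Should print the internal IP of the database service:
docker-compose exec wallnerbackend getent hosts wallnerdatabase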
You should access your DB using the service name as the hostname. Here is my working example: https://gitlab.com/gintsgints/vue-fullstack/-/blob/master/docker-compose.yml
