Azure WebApps + Azure File share with multicontainers - azure

I have been trying to set up my PostgreSQL db with my API. My API is on Azure Web Apps and the db is on an Azure file share.
The following is my docker-compose file:
version: '3.3'
services:
  db:
    image: postgres
    volumes:
      - db_path:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_USER: dbuser
      POSTGRES_PASSWORD: dbuser's_password
      POSTGRES_DB: db_1
  api:
    depends_on:
      - db
    image: <my registry>.azurecr.io/api:latest
    ports:
      - "80:80"
    restart: always
My WebApp -> Configuration -> Path mappings is below.
It did get deployed, but I couldn't find my database files in my Azure file share's location. Now, as I am redeploying, I can see the following in the Log Stream.
Can somebody show me where I went wrong? Why is my DB not in my Azure file share?
Thanks in advance
Edit:
Along with what Charles suggested below:
I have also updated my docker-compose.yml file as below. The changes I made are: I kept my volume name the same as the mapping name and added driver_opts:
version: '3.8'
services:
  db:
    image: mysql
    volumes:
      - mysql:/var/lib/mysql
    environment:
      - MYSQL_DATABASE=dbName
      - MYSQL_ROOT_PASSWORD=MyRootPassword!
      - MYSQL_USER=dbUser_1
      - MYSQL_PASSWORD=dbUser_1'sPassword
    restart: always
  api:
    depends_on:
      - db
    entrypoint: ["./wait_for.sh", "db:3306", "-t", "3600", "--", "execute", "api"] # wait long enough for the db server to come up
    image: <my registry's url>/api:latest
    ports:
      - "80:80"
    restart: always
volumes:
  mysql:
    driver: azure_file
    driver_opts:
      share_name: Azure_Share_Name
      storage_account_name: Azure_Storage_Account_Name
      storageaccountkey: Azure_Key

This is a known issue. When you mount a persistent volume backed by an Azure File Share, the mount path ends up with root as owner and group, and you can't change it. So if the application needs the path to be owned by a special user, as in this issue, where PostgreSQL needs the mount path /var/lib/postgresql/data to be owned by postgres, it can't be achieved.
Note that the screenshot you provided of the path mapping shows a mount path that doesn't match the configuration in your YAML file. It may be a mistake.
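For reference, a rough sketch of how the compose side is expected to reference the Path mappings entry (an assumption based on the edit above, where the volume name is kept identical to the mapping name; db_path here stands in for that mapping name):
services:
  db:
    image: postgres
    volumes:
      - db_path:/var/lib/postgresql/data   # db_path = name of the Path mappings entry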

Related

Is it possible to mount my existing Mongo DataBase with docker-compose?

After some months of development I got to a point where it is better to dockerize my MERN application. I managed to create a .yaml file and everything is working OK, but the problem is that I already have a big amount of collected data. I want to be able to mount this data into the container, but I don't know how to do it. I have read a lot of stuff, but my data still does not appear after composing the application. Here is what my docker-compose.yaml file looks like:
version: '3.9'
services:
  # MongoDB Service
  mongo_db:
    container_name: db_container
    image: mongo:latest
    restart: always
    ports:
      - 2717:27017
    volumes:
      - /mnt/c/temp/mongo/db:/data/db
  # Node API Service
  api:
    build: .
    ports:
      - 4001:4001
    environment:
      PORT: 4001
      MONGODB_URI: mongodb://db_container:27017
      DB_NAME: project-system
    depends_on:
      - mongo_db
volumes:
  mongo_db:
As you can see in this row:
volumes:
  - /mnt/c/temp/mongo/db:/data/db
I am trying to point to the path on my C:\ drive, but this doesn't work. I also tried the same row in:
volumes:
  mongo_db:
(at the bottom of the file) but again without success. Basically, my existing DB is in
C:\data\db
How can I point to this as the source for the MongoDB service?
First, you need to create a dump from your local MongoDB and restore it into the Docker MongoDB. You can use these commands:
mongodump --uri 'mongodb://localhost:27017/yourdatabase' --archive=<your file> --gzip
mongorestore --uri 'mongodb://remotehost:27017/yourdatabase' --archive=<your file> --gzip
You should be able to access the Docker MongoDB from your local host (in your compose file it is published on port 2717).
Note: refer to this answer if you don't get it right.
You can also change the path you are mounting to make the data persistent. Create a new folder, C:/data/docker_mongo, and mount it:
version: '3.9'
services:
  # MongoDB Service
  mongo_db:
    container_name: db_container
    image: mongo:latest
    restart: always
    ports:
      - 2717:27017
    volumes:
      - C:/data/docker_mongo:/data/db
  # Node API Service
  api:
    build: .
    ports:
      - 4001:4001
    environment:
      PORT: 4001
      MONGODB_URI: mongodb://db_container:27017
      DB_NAME: project-system
    depends_on:
      - mongo_db
volumes:
  mongo_db:
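If you specifically want to reuse the existing files in C:\data\db rather than restoring a dump, you could try bind-mounting that folder directly. This is only a sketch: whether it works depends on how Docker Desktop shares the C: drive, and MongoDB can be sensitive to the filesystem backing /data/db, which is why the dump/restore route above is usually safer.
services:
  mongo_db:
    volumes:
      - C:/data/db:/data/db   # reuse the existing database files on the host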

How to mount a docker volume in Azure with docker compose YAML

I'm attempting to deploy a PostgreSQL Docker container in Azure. To that end, I created a storage account and a file share in Azure to store a Docker volume.
Also, I created the Docker Azure context and set it as default.
To create the volume, I run:
docker volume create volpostgres --storage-account mystorageaccount
I can verify that the volume was created with docker volume ls.
ID                             DESCRIPTION
mystorageaccount/volpostgres   Fileshare volpostgres in mystorageaccount storage account
But when I try to deploy with docker compose up, I get
could not find volume source "volpostgres"
This is the YAML file that does not work. How can I fix it? How do I point to the volume correctly?
version: '3.7'
services:
  postgres:
    image: postgres:13.1
    container_name: cont_postgres
    networks:
      db:
        ipv4_address: 22.225.124.121
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: xxxxx
    volumes:
      - volpostgres:/var/lib/postgresql/data
networks:
  db:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 22.225.124.121/24
volumes:
  volpostgres:
    name: mystorageaccount/volpostgres
You can follow the steps here. The volumes part of the docker-compose file needs to be changed to this:
volumes:
  volpostgres:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
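Put together with the service from the question, the relevant pieces would look roughly like this (a sketch, assuming the Azure/ACI Docker context is the active one and the file share myfileshare already exists in mystorageaccount):
services:
  postgres:
    image: postgres:13.1
    environment:
      POSTGRES_PASSWORD: xxxxx
    volumes:
      - volpostgres:/var/lib/postgresql/data
volumes:
  volpostgres:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount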

Mount azure storage account in docker-compose

How can I mount an Azure storage account as a volume in docker-compose?
I checked this driver, but it is deprecated, and the link provided there is inactive.
docker-compose.yml
version: '3.3'
services:
  web:
    image: web:74
    ports:
      - "3000:3000"
volumes:
  logvolume01: {}
You can just pass the path on the web app's storage, e.g.:
volumes:
  - ${WEBAPP_STORAGE_HOME}/zoo1/data:/data
here is an example
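Applied to the compose file from the question, that would look roughly like this (a sketch; zoo1/data and /data are placeholder paths carried over from the line above, and ${WEBAPP_STORAGE_HOME} is only resolved by Azure App Service, not by a local docker-compose run):
version: '3.3'
services:
  web:
    image: web:74
    ports:
      - "3000:3000"
    volumes:
      - ${WEBAPP_STORAGE_HOME}/zoo1/data:/data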

Using Docker compose and volumes to persist uploaded pictures directory

I'm working on an e-commerce site. I want to have the ability to upload product photos from the client and save them in a directory on the server.
I implemented this feature, but then I understood that since we use Docker for our deployment, the directory in which I save the pictures won't persist. As I searched, I realized that I should use volumes and map that directory in docker-compose. I'm a complete novice backend developer (I work on frontend), so I'm not really sure what I should do.
Here is the compose file:
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
If I want to store my photos in ../static/images (relative to the root of my project), what should I do, and how should I refer to this path in my backend code?
The backend is in Node.js (NestJS).
You have to create a volume and tell docker-compose/docker stack to mount it in the container at the path you want. See the volumes section at the very end of the file and the volumes option on the nodejs service.
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
      - static-files:/home/node/static/images
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
volumes:
  static-files: {}
Doing this, an empty volume will be created to persist your data, and every time a new container mounts this path it will see the data stored in it. I would suggest using the same approach for mysql instead of saving its data on the host.
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
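If you instead want the uploaded pictures to end up in ../static/images on the host (relative to the compose file), a bind mount is an alternative to the named volume. A sketch, assuming the container path /home/node/static/images matches wherever the NestJS code writes the files:
services:
  nodejs:
    volumes:
      - ./:/home/node/app
      - ../static/images:/home/node/static/images   # host folder instead of a named volume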

How to run docker-compose in Azure Container Service and deploy to agent rather than master?

I followed this article (https://blogs.msdn.microsoft.com/jcorioland/2016/04/25/create-a-docker-swarm-cluster-using-azure-container-service/#comment-1015) to set up a Swarm Docker host cluster. There is 1 master and there are 2 agents. The good point of this article is the use of "-H 172.16.0.5:2375", which creates new containers on an "agent" rather than on the "master".
My question is: if I want to make docker-compose.yml work with that, how can I do it? I have tried a command like:
docker-compose -H 172.16.0.5:2375 up
But it doesn't work. If I just use:
docker-compose up
then the containers are created on the master host and I can't even use the public DNS to visit the website.
Here is the yml file I use for 1 Magento and 1 MariaDB container:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '3306:3306'
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - MAGENTO_HOST=172.16.0.5
      - MARIADB_HOST=172.16.0.5
    ports:
      - '80:80'
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
This section is my guess based on that article:
environment:
  - MAGENTO_HOST=172.16.0.5
  - MARIADB_HOST=172.16.0.5
but the yml doesn't like a port appended, e.g.:
environment:
  - MAGENTO_HOST=172.16.0.5:2375
  - MARIADB_HOST=172.16.0.5:2375
Thanks a lot!
