I have a docker compose file that I use for local development, but I need to deploy this on Azure Containers. The docker compose file I use locally is this:
version: "3.4"
services:
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "8888:8888"
      - "8889:8889"
      - "4317:4317"
    depends_on:
      - zipkin-all-in-one
  seq:
    image: datalust/seq:latest
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "80:80"
      - "5341:5341"
And this one works fine. I was actually able to get Zipkin and Seq working on Azure; the problem is OpenTelemetry. It needs a configuration file to work, so I did the following:
Created an Azure file storage.
Added the OpenTelemetry YAML file to this storage.
Changed the Docker compose file as follows to point to this volume:
version: "3.4"
services:
  # zipkin here
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - mydata:/mounts/testvolumes
    ports:
      - "8888:8888"
      - "8889:8889"
      - "4317:4317"
    depends_on:
      - zipkin-all-in-one
  # seq here
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: testvolume
      storage_account_name: storageqwricc
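One mismatch worth noting in the file above: the share is mounted at /mounts/testvolumes, but --config still points to /etc/otel-collector-config.yaml, a path that no longer exists in the container. A sketch of a consistent service definition (assuming the file uploaded to the share is named otel-collector-config.yaml):

```yaml
  otel-collector:
    image: otel/opentelemetry-collector:latest
    # --config must point inside the mounted share, not the old local path
    command: ["--config=/mounts/testvolumes/otel-collector-config.yaml"]
    volumes:
      - mydata:/mounts/testvolumes
```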
You can see in this image that everything is running except otel.
I'm almost sure the problem is that it can't find the otel configuration file. The error that appears in the logs:
Error: Failed to start container otel-collector, Error response: to create containerd task: failed to create shim task: failed to create container ddf9fc55eee4e72cc78f2b7857ff735f7bc506763b8a7ce62bd9415580d86d07: guest RPC failure: failed to create container: failed to run runc create/exec call for container ddf9fc55eee4e72cc78f2b7857ff735f7bc506763b8a7ce62bd9415580d86d07 with exit status 1: container_linux.go:380: starting container process caused: exec: stat no such file or directory: unknown
And this is my Azure file storage.
I've tried different paths, checked the file permissions, and confirmed that running without OTel works as expected.
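For reference, a minimal collector config of the kind the image expects looks roughly like this (a sketch, not the exact file from the share; the Zipkin endpoint assumes the service name from the local compose setup):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  zipkin:
    endpoint: http://zipkin-all-in-one:9411/api/v2/spans

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
```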
I also tried this configuration from another thread:
volumes:
  - name: mydata
    azureFile:
      share_name: testvolume
      readOnly: false
      storageAccountName: storageqwricc
      storageAccountKey: mysecretkey
I have been trying to set up my PostgreSQL db with my API. My API is on Azure Web Apps and the db is on an Azure file share.
The following is my docker compose file
version: '3.3'
services:
  db:
    image: postgres
    volumes:
      - db_path:/var/lib/postgresql/data
    restart: always
    environment:
      POSTGRES_USER: dbuser
      POSTGRES_PASSWORD: dbuser's_password
      POSTGRES_DB: db_1
  api:
    depends_on:
      - db
    image: <my registry>.azurecr.io/api:latest
    ports:
      - "80:80"
    restart: always
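One detail worth checking in the file above: the db service mounts a named volume db_path, but there is no top-level volumes: section declaring it, which compose normally requires. A minimal sketch of the missing declaration:

```yaml
# top-level declaration for the named volume referenced by the db service;
# for App Service, the path mapping name must match this volume name
volumes:
  db_path:
```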
My WebApp -> Configuration -> Path mappings is below
It did get deployed, but I couldn't find my database files in my Azure file share's location. Now, as I am redeploying, I can see the following in the Log Stream.
Can somebody show me where I went wrong? Why is my DB not in my Azure file share?
Thanks in advance.
Edit:
Along with what Charles suggested below:
I also updated my docker-compose.yml file as below. The changes I made: I kept my volume name the same as the path mapping name and added driver_opts:
version: '3.8'
services:
  db:
    image: mysql
    volumes:
      - mysql:/var/lib/mysql
    environment:
      - MYSQL_DATABASE=dbName
      - MYSQL_ROOT_PASSWORD=MyRootPassword!
      - MYSQL_USER=dbUser_1
      - MYSQL_PASSWORD=dbUser_1'sPassword
    restart: always
  api:
    depends_on:
      - db
    entrypoint: ["./wait_for.sh", "db:3306", "-t", "3600", "--", "execute", "api"] # wait long enough for the db server to be up and running
    image: <my registry's url>/api:latest
    ports:
      - "80:80"
    restart: always
volumes:
  mysql:
    driver: azure_file
    driver_opts:
      share_name: Azure_Share_Name
      storage_account_name: Azure_Storage_Account_Name
      storageaccountkey: Azure_Key
This is a known issue. When you mount a persistent volume with Azure File Share, the mount path gets root as owner and group, and you can't change that. So if the application needs the path to be owned by a special user, as in this case where PostgreSQL needs the mount path /var/lib/postgresql/data owned by the postgres user, it can't be achieved.
Note that the screenshot you provided of the path mapping shows a mount path that doesn't match the configuration in your YAML file. It may be a mistake.
I have an Azure App Service app that uses a docker-compose.yml file, as it is a multi-container Docker app. Its docker-compose.yml file is given below:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
The app works perfectly with no issues; I can open it in the browser.
Now here starts the problem. I am trying to add an SSL certificate for this app. I want to use a volume and map it into the container, so I changed the docker-compose.yml file to:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - SSL:/https
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
Here only the following lines are added, to force the app to use aspnetapp.pfx as the SSL certificate. The certificate should be mounted into the /https folder of the container.
environment:
  - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
  - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
volumes:
  - SSL:/https/aspnetapp.pfx:ro
Also note that SSL in the volume definition refers to an Azure file share. I have created an Azure Storage account (called 'myazurestorage1') and inside it an Azure file share (called 'myfileshare1'). In this file share I created a folder by the name of ssl and uploaded my certificate into that folder.
Then, in the Azure App Service that runs the app with docker compose, I added path mappings for this Azure file share so that it can be used by my app. See the screenshot below:
I also tried the following docker-compose.yml file, as given in the official docs, but no use:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ./mydata/ssl/aspnetapp.pfx:/https/aspnetapp.pfx:ro
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare1
      storage_account_name: myazurestorage1
      storageAccountKey: j74O20KrxwX+vo3cv31boJPb+cpo/pWbSy72BdSDxp/d7hXVgEoR56FVA7B+L6D/CnmdpIqHOhiEKqbuttLZAw==
But this does not work; the app starts getting errors. What is wrong and how do I solve it?
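For what it's worth, the last file mixes two styles: the top-level volumes: section declares a named volume mydata, but the service mounts ./mydata/ssl/aspnetapp.pfx, which is a host bind rather than a reference to that named volume. A sketch of the service fragment using the named volume instead (assuming the certificate sits in the ssl folder of the share):

```yaml
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      # the share's ssl folder appears under the mount point
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/ssl/aspnetapp.pfx
    volumes:
      # mount the named Azure file share volume at a directory, not a single file
      - mydata:/https
```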
I'm attempting to deploy a PostgreSQL Docker container in Azure. To that end, I created in Azure a storage account and a file share to store a Docker volume.
Also, I created a Docker Azure context and set it as the default.
To create the volume, I ran:
docker volume create volpostgres --storage-account mystorageaccount
I can verify that the volume was created with docker volume ls:
ID                             DESCRIPTION
mystorageaccount/volpostgres   Fileshare volpostgres in mystorageaccount storage account
But when I try to deploy with docker compose up, I get
could not find volume source "volpostgres"
This is the YAML file that does not work. How do I fix it? How do I point to the volume correctly?
version: '3.7'
services:
  postgres:
    image: postgres:13.1
    container_name: cont_postgres
    networks:
      db:
        ipv4_address: 22.225.124.121
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: xxxxx
    volumes:
      - volpostgres:/var/lib/postgresql/data
networks:
  db:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 22.225.124.121/24
volumes:
  volpostgres:
    name: mystorageaccount/volpostgres
You can follow the steps here. And the volumes part of the docker-compose file needs to be changed to this:
volumes:
  volpostgres:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
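Putting that together with the original file, a minimal ACI-friendly sketch might look like this (the custom network block is omitted here on the assumption that it is not needed to reproduce the volume fix):

```yaml
version: '3.7'
services:
  postgres:
    image: postgres:13.1
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: xxxxx
    volumes:
      - volpostgres:/var/lib/postgresql/data
volumes:
  volpostgres:
    driver: azure_file
    driver_opts:
      # `docker volume ls` showed the file share carries the volume's name
      share_name: volpostgres
      storage_account_name: mystorageaccount
```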
I'm trying to run a KrakenD image in Azure App Service.
KrakenD requires a JSON config file, krakend.json, to be put into /etc/krakend/ (the KrakenD image is based on Alpine Linux).
I created a Web App for Containers with the following docker-compose file:
version: "3"
services:
  krakend:
    image: devopsfaith/krakend:latest
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/krakend:/etc/krakend
    ports:
      - "8080:8080"
    restart: always
Added a storage account with a blob container where I uploaded a sample krakend.json file.
In the app configuration I added a path mapping like this:
But it looks like the volume was not mounted correctly:
2019-11-15 12:46:29.368 ERROR - Container create failed for krakend_krakend_0_3032a936 with System.AggregateException, One or more errors occurred. (Docker API responded with status code=InternalServerError, response={"message":"invalid volume specification: ':/etc/krakend'"} ) (Docker API responded with status code=InternalServerError, response={"message":"invalid volume specification: ':/etc/krakend'"} ) InnerException: Docker.DotNet.DockerApiException, Docker API responded with status code=InternalServerError, response={"message":"invalid volume specification: ':/etc/krakend'"}
2019-11-15 12:46:29.369 ERROR - multi-container unit was not started successfully
Additional questions:
What does Mount path mean in Storage mounting? I put the value /krakend there.
The volume definition starts with ${WEBAPP_STORAGE_HOME}; in the docs they specified it as
volumes:
  - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
So I did it by analogy and tried all three possible paths:
${WEBAPP_STORAGE_HOME}/site/wwwroot/krakend
${WEBAPP_STORAGE_HOME}/site/krakend
${WEBAPP_STORAGE_HOME}/krakend
but no luck; still getting the error:
ERROR parsing the configuration file: '/etc/krakend/krakend.json' (open): no such file or directory
I finally resolved it with the following docker-compose file:
version: "3"
services:
  krakend:
    image: devopsfaith/krakend:latest
    volumes:
      - volume1:/etc/krakend
    environment:
      WEBSITES_ENABLE_APP_SERVICE_STORAGE: TRUE
    ports:
      - "8080:8080"
    restart: always
where volume1 is blob storage mounted as follows:
This did not work for me. I was getting "Bind mount must start with ${WEBAPP_STORAGE_HOME}".
This worked. docker-compose.yml:
version: "3"
services:
  backend:
    image: xxxx
    ports:
      - 80:80
    volumes:
      - vol1:/var/www/html/public
volumes:
  vol1:
    driver: local
Volume definition:
  Name: vol1
  Config: basic
  Storage accounts: ...
  Storage container: ...
  Mount path: /var/www/html/public
I followed this article (https://blogs.msdn.microsoft.com/jcorioland/2016/04/25/create-a-docker-swarm-cluster-using-azure-container-service/#comment-1015) to set up a swarm Docker host cluster. There is 1 master and 2 agents. The good point of this article is using "-H 172.16.0.5:2375", which creates new containers on an agent rather than the master.
My question is: if I want to make docker-compose.yml work with that, how can I do it? I have tried commands like:
docker-compose -H 172.16.0.5:2375 up
But it doesn't work. If I just use:
docker-compose up
then the containers are created on the master host, and I can't even use the public DNS to visit the website.
Here is the yml file I use for 1 Magento & 1 MariaDB container:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '3306:3306'
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - MAGENTO_HOST=172.16.0.5
      - MARIADB_HOST=172.16.0.5
    ports:
      - '80:80'
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
And this section is my guess based on that article:
environment:
  - MAGENTO_HOST=172.16.0.5
  - MARIADB_HOST=172.16.0.5
but the YAML doesn't like a port appended, e.g.:
environment:
  - MAGENTO_HOST=172.16.0.5:2375
  - MARIADB_HOST=172.16.0.5:2375
Thanks a lot!
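For what it's worth, services in one compose file can normally reach each other by service name on the compose network, so the database host does not need a hard-coded host IP or port at all. A sketch of the environment section under that assumption (the MAGENTO_HOST value is a hypothetical placeholder for the externally visible hostname):

```yaml
  magento:
    environment:
      # the mariadb service is reachable by its compose service name
      - MARIADB_HOST=mariadb
      # hypothetical placeholder: the hostname users will reach Magento on
      - MAGENTO_HOST=<public-dns-of-agents>
```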