How to mount a Docker volume in Azure Web App for Containers?

I'm trying to run the KrakenD image in Azure App Service.
KrakenD requires a JSON config file, krakend.json, to be placed in /etc/krakend/ (the KrakenD image is based on Alpine Linux).
I created a Web App for Containers with the following docker-compose file:
version: "3"
services:
  krakend:
    image: devopsfaith/krakend:latest
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/krakend:/etc/krakend
    ports:
      - "8080:8080"
    restart: always
I added a storage account with a blob container and uploaded a sample krakend.json file to it.
In the app configuration I added a path mapping for it.
But it looks like the volume was not mounted correctly:
2019-11-15 12:46:29.368 ERROR - Container create failed for
krakend_krakend_0_3032a936 with System.AggregateException, One or
more errors occurred. (Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"} ) (Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"} ) InnerException:
Docker.DotNet.DockerApiException, Docker API responded with status
code=InternalServerError, response={"message":"invalid volume
specification: ':/etc/krakend'"}
2019-11-15 12:46:29.369 ERROR - multi-container unit was not started
successfully
Additional questions
What does "Mount path" mean in the storage mounting configuration? I put the value /krankend there.
The volume definition starts with ${WEBAPP_STORAGE_HOME}; in the docs they specify it as
volumes:
  - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
so I did it by analogy and tried all 3 possible paths:
${WEBAPP_STORAGE_HOME}/site/wwwroot/krakend
${WEBAPP_STORAGE_HOME}/site/krakend
${WEBAPP_STORAGE_HOME}/krakend
but no luck, I am still getting the error:
ERROR parsing the configuration file: '/etc/krakend/krakend.json'
(open): no such file or directory

Finally I resolved it with the following docker-compose file:
version: "3"
services:
  krakend:
    image: devopsfaith/krakend:latest
    volumes:
      - volume1:/etc/krakend
    environment:
      WEBSITES_ENABLE_APP_SERVICE_STORAGE: TRUE
    ports:
      - "8080:8080"
    restart: always
where volume1 is a blob storage container mounted as follows:
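The screenshot of that mount isn't reproduced here. Going by the portal fields listed in the answer below, the path mapping was presumably along these lines (the storage account and container names are placeholders; only the name volume1 and the /etc/krakend target come from the compose file above):

Name: volume1
Config: basic
Storage account: <your storage account>
Storage container: <container holding krakend.json>
Mount path: /etc/krakend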

This did not work for me. I was getting the error Bind mount must start with ${WEBAPP_STORAGE_HOME}.
This worked instead. docker-compose.yml:
version: "3"
services:
  backend:
    image: xxxx
    ports:
      - 80:80
    volumes:
      - vol1:/var/www/html/public
volumes:
  vol1:
    driver: local
Volume definition:
Name: vol1
Config: basic
Storage accounts: ...
Storage container: ...
Mount path: /var/www/html/public
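Presumably the two halves line up by name and path: vol1 in the compose file matches the path mapping's Name, and the container-side path in the service (/var/www/html/public) matches the mapping's Mount path. A commented copy of the relevant compose lines, just to make that correspondence explicit:

services:
  backend:
    volumes:
      # "vol1" matches the path mapping Name; the target matches its Mount path
      - vol1:/var/www/html/public
volumes:
  vol1:
    driver: local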

Related

How to set configuration files using docker compose with Azure Container Instance

I have a docker compose file that I use for local development, but I need to deploy this on Azure Containers. The docker compose file I use locally is this:
version: "3.4"
services:
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    ports:
      - "9411:9411"
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "8888:8888"
      - "8889:8889"
      - "4317:4317"
    depends_on:
      - zipkin-all-in-one
  seq:
    image: datalust/seq:latest
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "80:80"
      - "5341:5341"
And this one is working fine. I could actually make Zipkin and Seq work with Azure; the problem is OpenTelemetry. It needs a configuration file to work, so I did the following:
Created an Azure file storage
Added the OpenTelemetry yaml file to this storage
Changed the docker compose file as follows to point to this volume
version: "3.4"
services:
  # zipkin here
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - mydata:/mounts/testvolumes
    ports:
      - "8888:8888"
      - "8889:8889"
      - "4317:4317"
    depends_on:
      - zipkin-all-in-one
  # seq here
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: testvolume
      storage_account_name: storageqwricc
You can see in this image that everything is running except otel.
I'm almost sure the problem is that it can't find the otel configuration file. The error that appears in the logs:
Error: Failed to start container otel-collector, Error response: to create containerd task: failed to create shim task: failed to create container ddf9fc55eee4e72cc78f2b7857ff735f7bc506763b8a7ce62bd9415580d86d07: guest RPC failure: failed to create container: failed to run runc create/exec call for container ddf9fc55eee4e72cc78f2b7857ff735f7bc506763b8a7ce62bd9415580d86d07 with exit status 1: container_linux.go:380: starting container process caused: exec: stat no such file or directory: unknown
And this is my Azure file storage.
I've tried different paths and checked the file permissions; running without OTEL works as expected.
I also tried this configuration from another thread:
volumes:
  - name: mydata
    azureFile:
      share_name: testvolume
      readOnly: false
      storageAccountName: storageqwricc
      storageAccountKey: mysecretkey
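One thing worth checking, given that error: the compose file still passes --config=/etc/otel-collector-config.yaml, while the Azure Files share is mounted at /mounts/testvolumes. A minimal sketch of the collector service (assuming otel-collector-config.yaml sits at the root of the testvolume share, and leaving everything else unchanged) that points the command at the mounted path instead:

otel-collector:
  image: otel/opentelemetry-collector:latest
  # the config file now lives inside the mounted share
  command: ["--config=/mounts/testvolumes/otel-collector-config.yaml"]
  volumes:
    - mydata:/mounts/testvolumes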

:( Application Error when trying to deploy a docker-compose app in Azure App Service

I am trying to deploy a Django application on Azure App Service using docker-compose. I already pushed two images to ACR, and when I executed the docker-compose.yml on my computer everything worked fine, but when I deploy the app and visit the URL it only shows the following message:
":( Application Error
If you are the application administrator, you can access the diagnostic resources."
I also searched the deployment center logs for any kind of info, but there is only this message:
"Failed to load container logs: Resource containerlog of type text not found"
Any idea how I can fix it?
This is my docker-compose.yml:
services:
  web:
    image: zaito.azurecr.io/backgroundtasks:latest
    command: uwsgi --http "0.0.0.0:8000" --protocol uwsgi --module zaitoTasksApi.wsgi --master --processes 4 --threads 2
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - "8000"
  nginx:
    image: zaito.azurecr.io/nginx:django
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - "80:80"
    depends_on:
      - web
volumes:
  media_volume:
  static_volume:

Azure: client principal name is missing with docker compose

I'm trying to get the X-Ms-Client-Principal-Name value from the request header, but it no longer arrives after I changed the container settings from Single Container to Docker Compose.
Any idea why this is happening?
This is the docker-compose file that I'm using:
version: "3.8"
services:
  web:
    image: webimage:v1.0.0
    ports:
      - "8000:80"
  redis:
    image: redis:alpine
Both images are correctly downloaded from the repository and launched.

Docker volume binds file as directory on remote linux server

I have Docker installed on Linux on a remote server. I use SSH to connect to Docker from my host machine running Windows 10. To deploy my app to Docker I use docker-compose v3, but I have a serious problem: when I try to mount a volume with a file from my host machine, the file turns into a directory in my container.
Here's my docker-compose.yml:
version: "3.8"
services:
  zeebe:
    container_name: zeebe
    image: camunda/zeebe
    environment:
      - ZEEBE_LOG_LEVEL=debug
    ports:
      - "26500:26500"
      - "9600:9600"
      - "5701:5701"
    volumes:
      - zeebe_data:/usr/local/zeebe/data
      - type: bind
        source: ./ZeebeServer/lib/application.yml
        target: /usr/local/zeebe/config/application.yaml
        read_only: true
    depends_on:
      - elasticsearch
    networks:
      - backend
  operate:
    container_name: operate
    image: camunda/operate
    ports:
      - "8080:8080"
    depends_on:
      - zeebe
      - elasticsearch
    volumes:
      - operate_data:/usr/local/operate
      - type: bind
        source: C:/Users/Aset/IdeaProjects/ABC-Store/ZeebeServer/operate/application.yml
        target: /usr/local/operate/config/application.yml
        read_only: true
    networks:
      - backend
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.7.1
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node
      - cluster.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - zeebe_elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - backend
And this is my error after running docker-compose up -d:
ERROR: for zeebe Cannot start service zeebe: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/User/IdeaProjects/ABC-Store/ZeebeServer/lib/application.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/fec3c1f3ad8748e2bf3aa5fdf30558434c2474ecac8b7fdbad2fdbb27df24415/merged\\\" at \\\"/var/lib/docker/overlay2/fec3c1f3ad8748e2bf3aa5fdf30558434c2474ecac8b7fdbad2fdbb27df24415/merged/usr/local/zeebe/config/application.yaml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I understand that you are trying to mount a local folder from your Windows host (C:/Users/Aset/IdeaProjects/...) into a docker container.
But this is not possible (that easily), since Docker can only mount folders local to the Docker host, i.e. the remote server you're connecting to via SSH.
So in order to mount a folder from your local Windows host into the container, you'd first have to make that folder available on the Docker host/remote server somehow (e.g. with a Samba share), or copy the contents of that folder to the remote server and change your volume source to the path of the folder on the Docker host.
P.S. in case you're wondering why this doesn't work but docker build with files from your Windows host does: docker build uploads the build context i.e. the current directory, to the docker host making its content available on the server.
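As a rough sketch of that copy-to-the-server option, assuming application.yml has been copied to a hypothetical /opt/abc-store/zeebe/ directory on the remote server, the zeebe bind mount would then look like:

volumes:
  - zeebe_data:/usr/local/zeebe/data
  - type: bind
    # path on the remote Docker host, not on the Windows machine
    source: /opt/abc-store/zeebe/application.yml
    target: /usr/local/zeebe/config/application.yaml
    read_only: true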

Mount azure storage account in docker-compose

How can I mount an Azure storage account as a volume in docker-compose?
I checked this driver, but it's deprecated and the link provided there is inactive.
docker-compose.yml
version: '3.3'
services:
  web:
    image: web:74
    ports:
      - "3000:3000"
volumes:
  logvolume01: {}
You can just pass the URL of the blob path:
volumes:
  - ${WEBAPP_STORAGE_HOME}/zoo1/data:/data
Here is an example.
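The example referenced above isn't included in this text. As a minimal sketch, folding the answer's one-liner into the compose file from the question (assuming the host-side folder zoo1/data exists under the App Service storage home):

version: '3.3'
services:
  web:
    image: web:74
    ports:
      - "3000:3000"
    volumes:
      # mounts <storage home>/zoo1/data into the container at /data
      - ${WEBAPP_STORAGE_HOME}/zoo1/data:/data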
