When specifying a Docker volume in a Dockerfile, Docker creates the volume when the container is started. As expected, the defined directories are present at the expected paths when I log into the container.
However, when I use the same image in Azure Web App for Linux and log into the container, the expected paths and files are not there.
Does Azure Web App not support this, or is there some setting that I'm missing and need to enable?
I'm not talking about mounting a filesystem/storage account as documented here.
That won't work. The alternative is to enable Persistent Storage and create a Docker Compose file where you specify a volume even if you have a single container. You then use the multi-container option and upload the YAML file.
Enable persistent storage by setting the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE = TRUE. You can do this from the portal or by using the CLI.
az webapp config appsettings set -g myResourceGroup -n <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
The ${WEBAPP_STORAGE_HOME} environment variable will point to the /home folder of the VM running your container. Use it to map the folder with a volume.
version: '3.3'
services:
  wordpress:
    image: myimage
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "80:80"
    restart: always
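To upload the YAML with the multi-container option from the CLI, something like the following should work (a sketch, assuming an existing App Service plan; the resource names are placeholders and the multi-container feature is in preview):

# Create the web app from the compose file above
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --multicontainer-config-type compose --multicontainer-config-file docker-compose.yml

# Or update an existing web app with a new compose file
az webapp config container set --resource-group myResourceGroup --name <app-name> --multicontainer-config-type compose --multicontainer-config-file docker-compose.yml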
My simple docker-compose.yaml file:
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - ./hi:/var/www/html
    ports:
      - 8000:80
In the folder hi/ I have just an index.php with a hello-world print in it. (Do I need to have a Dockerfile here as well?)
Now I just want to run this container with docker compose up:
$ docker compose up
host path ("/Users/xy/project/TEST/hi") not allowed as volume source, you need to reference an Azure File Share defined in the 'volumes' section
What has "docker compose" up to do with Azure? - I don't want to use Azure File Share at this moment, and I never mentioned or configured anything with Azure. I logged out of azure with: $az logout but got still this strange error on my macbook.
I've encountered the same issue as you, but in my case I was trying to use an init-mongo.js script for a MongoDB instance in ACI. I assume you were working in an Azure Docker context at some point; I can't speak to that logout issue, but I can speak to volumes on Azure.
If you are trying to use volumes in Azure, at least in my experience (regardless of whether you want to use a file share or not), you'll need to reference an Azure file share and not your host path.
Learn more about Azure file share: Mount an Azure file share in Azure Container Instances
Also according to the Compose file docs:
The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.
So the docker-compose file should look something like this:
docker-compose.yml
version: '3'
services:
  website:
    image: php:7.4-cli
    container_name: php72
    volumes:
      - hi:/var/www/html
    ports:
      - 8000:80
volumes:
  hi:
    driver: azure_file
    driver_opts:
      share_name: <name of share>
      storage_account_name: <name of storage account>
Then just place the file/folder you want to use in the file share that backs the volume beforehand. Again, I'm not sure why you are encountering that error if you've never used Azure, but if you do end up using volumes with Azure, this is the way to go.
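For reference, creating the share and uploading a file beforehand can also be done with the Azure CLI, roughly like this (a sketch with the same placeholder names as above; depending on how you're authenticated you may also need --account-key or a connection string):

# Create the file share and upload the index.php from the hi/ folder
az storage share create --name <name of share> --account-name <name of storage account>
az storage file upload --share-name <name of share> --account-name <name of storage account> --source ./hi/index.php --path index.php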
Let me know if this helps!
I was testing a Docker Compose deployment on Azure and ran into the same problem as you.
Then I tried to run docker images, and that gave me the clue:
It says: image command not available in current context, try to use default
So I found the command "docker context use default"
and it worked!
So Azure somehow changed the Docker context, and you need to change it back:
https://docs.docker.com/engine/context/working-with-contexts/
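A quick way to check which context is active and to switch back (standard Docker CLI commands):

# the context marked with * is the active one
docker context ls
# switch back to the local Docker engine
docker context use default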
I am trying to mount an Azure file share to a Web App for Containers (Linux) service. This is a .NET Core 3 web API app with an Angular front end. The app container runs perfectly locally when I mount a local drive to load the exact same files as in the file share.
According to the Docker docs for Azure file shares, I should set my docker-compose file to the following:
version: '3.4'
services:
  webui:
    image: ${DOCKER_REGISTRY-}webui
    build:
      context: .
      dockerfile: src/WebUI/Dockerfile
    environment:
      - "UseInMemoryDatabase=false"
      - "ASPNETCORE_ENVIRONMENT=Production"
      - "ConnectionStrings__DefaultConnection=Server="
      - "ASPNETCORE_Kestrel__Certificates__Default__Path=/security/mycertfile.pfx"
      - "ASPNETCORE_Kestrel__Certificates__Default__Password=Your_password123"
    ports:
      - "5000:5000"
      - "5001:5001"
    volumes:
      - mcpdata:"/security:/security"
    restart: always
volumes:
  mcpdata:
    driver: azure_file
    driver_opts:
      share_name: sharename
      storage_account_name: storageaccountname
In the configuration for my web app I have created the following file mount:
I can confirm that the file share contains the file referenced in the environment variables: mcpdata/security/mycertfile.pfx
PROBLEM:
When the container is run by the service it gives an error:
System.InvalidOperationException: There was an error loading the certificate. The file '/security/mycert.pfx' was not found.
WHAT I TRIED:
Because the container fails, I cannot SSH into it to check for the files. So I pull the image from Azure Container Registry locally and then do a docker export to dump.tar. I then extract the files, and the security folder is not there.
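(For reference, the workflow I mean is roughly the following; the image name is a placeholder. docker export works on a container rather than an image, so I create a stopped container first. Note that an export only captures the container's filesystem from the image; contents of volumes mounted at runtime are not included.)

# pull the image and create (but don't start) a container from it
docker pull <registry>.azurecr.io/webui:latest
docker create --name inspect <registry>.azurecr.io/webui:latest
# export the container filesystem and look for the security folder
docker export inspect -o dump.tar
tar -tf dump.tar | grep security
docker rm inspect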
I also tried referencing the named file share directly in the docker-compose file by removing the top-level volume definition. The removed code is shown below:
volumes:
  mcpdata:
    driver: azure_file
    driver_opts:
      share_name: sharename
      storage_account_name: storageaccountname
QUESTION:
Can someone help me connect an Azure file share to my container, or help me confirm where the files are mounted when the container fails?
EDIT 1:
Attempt to add the file share mount with the Azure CLI. I used the following command to add the file share mount to my web app:
az webapp config storage-account add --resource-group "rgname" --name "appname" --slot development --custom-id fsmount001 --storage-type AzureFiles --share-name "sname" --account-name "aname" --access-key "key" --mount-path /
This command works and creates the file mount; however, I still get the error that it cannot find the cert file in the /security/ folder.
If I bash into the app via Kudu (not the container itself), I can see that the file mount exists and is named security in the root of the web app.
EDIT 2: SOLUTION
Set up the file mount with the following command:
az webapp config storage-account add --resource-group "rgname" --name "appname" --slot development --custom-id fsmount001 --storage-type AzureFiles --share-name "sname" --account-name "aname" --access-key "key" --mount-path /security/security/
In docker compose I use:
volumes:
  - fsmount001: /security:/security
In appsettings.Production.json:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "/security/mycert.pfx",
"Password": "password"
}
}
This is what my file mount settings look like in the azure portal under configuration -> path mappings:
Inside the file mount is a folder called security which contains the cert file.
Thanks to Charles for the help, and I hope this helps someone else!
The steps you have followed are for ACI, not for the Web App. To mount an Azure File Share to an Azure Web App for Containers, you just need to follow the steps in Link storage to your app.
You also need to change the volumes section of the docker-compose file:
From:
volumes:
  - mcpdata:"/security:/security"
Into:
volumes:
  - custom-id:/security/security/
The custom-id is the value you used for --custom-id in the CLI command.
I'm trying to build a multi-container app on Azure and I'm struggling with accessing persistent storage. In my docker-compose file I want to add the config file for my RabbitMQ container. I've mounted a file share to the directory /home/fileshare, which contains the def.json. In the cloud it doesn't seem to create the volume, as RabbitMQ can't find the file on startup. If I do this locally and just save the file somewhere, it works.
Docker Compose File:
version: '3.8'
services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    volumes:
      - /home/fileshare/def.json:/opt/rabbitmq-conf/def.json
    expose:
      - 5672
      - 15672
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_management load_definitions "/opt/rabbitmq-conf/def.json"
    networks:
      - cloudnet
networks:
  cloudnet:
    driver: bridge
You need to use the WEBAPP_STORAGE_HOME env variable that is mapped to persistent storage at /home.
${WEBAPP_STORAGE_HOME}/fileshare/def.json:/opt/rabbitmq-conf/def.json
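In context, the service definition might look like this (a sketch, assuming persistent storage is enabled with WEBSITES_ENABLE_APP_SERVICE_STORAGE=true and that def.json was placed under /home/fileshare on the App Service host, e.g. via FTP or Kudu):

services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    volumes:
      # /home on the App Service host is exposed to the compose file as ${WEBAPP_STORAGE_HOME}
      - ${WEBAPP_STORAGE_HOME}/fileshare/def.json:/opt/rabbitmq-conf/def.json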
As I understand it, you want to mount the Azure File Share to the rabbitmq container and upload the def.json file to the file share so that you can access def.json inside the container.
Follow the steps here to mount the file share to the container. Note that it only supports mounting the file share itself, not a single file directly.
The solution to this problem seems to be to use FTP to access the web app and save the definitions file. Docker Compose support is in preview (since 2018), and a lot of the options are actually not supported. I tried mounting the storage to a single-container app and used SSH to connect to it, and the file is exactly where one would expect it to be. With a multi-container app this doesn't work.
I feel like the docker-compose feature is not yet fully supported.
I am relatively new to Docker and am currently building a multi-container Dockerized Azure web app (in Flask). However, I am having some difficulty with secret management. I had successfully built a version that stored app secrets in environment variables, but based on some recent reading it has come to my attention that this is not a good idea. I've been attempting to update my app to use Docker secrets but have had no luck.
I have successfully created the secrets based on this post:
how do you manage secret values with docker-compose v3.1?
I have deployed the stack and verified that the secrets are available in both containers in /run/secrets/. However, when I run the app in Azure I get an error.
Here are the steps I've taken to launch the app in Azure:
docker swarm init --advertise-addr XXXXXX
$ echo "This is an external secret" | docker secret create my_external_secret
docker-compose build
docker push
docker stack deploy -c *path-to*/docker-compose.yml webapp
Next, I restart the Azure web app to pull the latest images.
The basic structure of the docker-compose file is below.
version: '3.1'
services:
  webapp:
    build: .
    secrets:
      - my_external_secret
    image: some_azure_registry/flask_site:latest
  celery:
    build: .
    command: celery worker -A tasks.celery --loglevel=INFO -P gevent
    secrets:
      - my_external_secret
    image: some_azure_registry.azurecr.io/flask_site_celery:latest
secrets: # top-level secrets block
  my_external_secret:
    external: true
However, when I run the app in Azure I get:
No such file or directory: '/run/secrets/my_external_secret'
I can attach a shell to the container and successfully run:
python
open('/run/secrets/my_external_secret', 'r').read().strip()
But when the above line is executed by the webapp it fails with the no file or directory error. Any help would be greatly appreciated.
Unfortunately, the top-level secrets option of docker-compose is not supported in Azure Web App for Containers. Take a look at the supported and unsupported options below:
Supported options
command
entrypoint
environment
image
ports
restart
services
volumes
Unsupported options
build (not allowed)
depends_on (ignored)
networks (ignored)
secrets (ignored)
ports other than 80 and 8080 (ignored)
For more details, see Docker Compose options.
I want to set up a reverse proxy with authentication for my web app; here's my compose file:
version: '3'
services:
  nginx:
    image: registry.azurecr.io/nginx:latest
    container_name: nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/.htpasswd:/etc/nginx/.htpasswd
    ports:
      - 80:80
  myapp:
    image: registry.azurecr.io/myapp:latest
    container_name: myapp
    expose:
      - 8080
As you can see, I rely on two files to configure nginx. The problem is that I want to deploy this application to Azure App Service, but Azure does not allow specifying external configuration files (as far as I know).
So, is there a way to specify a couple of username/password pairs in this same compose file?
For your requirement, you can use persistent storage for the web app or use path mapping, and then put the files there. If you just want to embed the files in the compose file itself, that does not appear to be possible.
For persistent storage, mount the persistent volume to the path inside the container in the compose file, then enable persistent storage by setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings. For more details, see Use persistent storage in Docker Compose.
For path mapping, you need to create a storage account and use a blob container or file share; see the document on containerized apps. Set the volume like this:
wordpress:
  image: wordpress:latest
  volumes:
    - <custom-id>:<path_in_container>
For more details about steps, see Serve content from Azure Storage in App Service on Linux.
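Applied to the nginx service from the question, the compose file might end up looking roughly like this (a sketch: nginx-config is a hypothetical custom ID for a path mapping whose file share holds the complete nginx configuration tree, i.e. nginx.conf, mime.types and .htpasswd, since the mount replaces /etc/nginx):

version: '3'
services:
  nginx:
    image: registry.azurecr.io/nginx:latest
    volumes:
      # nginx-config is the custom ID of the path mapping created under path mappings
      - nginx-config:/etc/nginx
    ports:
      - 80:80
  myapp:
    image: registry.azurecr.io/myapp:latest
    expose:
      - 8080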