I've deployed a Web App on Azure and use a Docker container from the public registry (my own image) to host my website. Users can upload pictures, and data is stored in JSON files on the server. Naturally, I want to write these files to a mounted volume outside the container, so that I can redeploy an updated version of my website without losing data.
Is that possible with Web Apps, or do I need to move to an Ubuntu VM with Docker on Azure? What I like about Web Apps is that I don't have to worry about managing a VM and only have to care about my container.
This blog post is a great start for understanding Azure's strategy regarding volume mounting (ASL = App Service on Linux; ASW = App Service on Windows):
... However, in this case, we would like to leverage the regular App Service Filesystem, so we can interact with the application using FTP. When a container is deployed, ASL mounts the equivalent of D:\home path on ASW to /home (using volume mount in Docker). Now when that happens, it is up to your container to map the corresponding paths into the application. In order to understand how this works more closely, take a look at the official Dockerfile used in PHP7 container on ASL.
https://hajekj.net/2016/12/25/building-custom-docker-images-for-use-in-app-service-on-linux/
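For the original question (persisting uploads and JSON files across redeploys), the practical consequence of that /home mount is that persistence for a custom container is toggled by an app setting. A minimal az CLI sketch, assuming the app writes its uploads and JSON files somewhere under /home (app and resource group names are placeholders):

# Hypothetical names; replace <app_name> and <resource_group> with your own.
# When this setting is true, /home inside the container is backed by the
# App Service file share and survives container restarts and redeploys.
az webapp config appsettings set \
  --name <app_name> \
  --resource-group <resource_group> \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true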
Related
Docker has a run command that accepts a --read-only argument for mounting a container with a read-only filesystem. Is there a way to set up an Azure App Service slot to run a container from an Azure Container Registry with a read-only filesystem? I haven't been able to find any documentation or setting in the web console for configuring this.
My current setup is to use a GitHub Actions workflow to build and deploy the container with docker/build-push-action and azure/webapps-deploy Actions. My app is a Python Django app and as part of a security assessment, I've been instructed to make the app run in a read-only environment to prevent runtime modification of the app's code. I've already ensured that no part of my app needs to be able to write to the Docker container's filesystem, so now all I need to do is to ensure that the filesystem cannot be modified.
Docker containers on Azure App Service cannot be run in read-only mode (see "Mounting a Host's root File System in Read-Only Mode"). For what is configurable, see the supported Azure CLI commands for Azure App Service with Docker.
You can run your app in Azure App Service directly from a ZIP package; the ZIP package itself is mounted as the read-only wwwroot directory.
Running directly from a package has multiple benefits:
Eliminates file-lock conflicts between deployment and runtime.
Ensures only fully deployed apps are running at any time.
https://learn.microsoft.com/en-us/azure/app-service/deploy-run-package
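A rough az CLI sketch of that approach (app, resource group, and ZIP names are placeholders). Note that run-from-package is a code-based deployment model rather than a custom container, so it may or may not fit the Docker setup described in the question:

# Hypothetical names; replace them with your own app, resource group, and package.
# With this setting, wwwroot is mounted read-only from the uploaded ZIP package.
az webapp config appsettings set \
  --name <app_name> \
  --resource-group <resource_group> \
  --settings WEBSITE_RUN_FROM_PACKAGE="1"

# Deploy the ZIP package itself.
az webapp deployment source config-zip \
  --name <app_name> \
  --resource-group <resource_group> \
  --src app.zip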
Unfortunately, you can't change the docker run command that Azure App Service uses to start the containers; there is nothing you can do with that command. All the containers are temporary, so if you only want to run them and do not need to persist data, there is nothing to do: filesystem changes are discarded once the containers are deleted.
Azure Files volume mounting is not supported in Windows containers.
I'm aware I can use AzCopy with Azure Files, but I was wondering if there is a simpler way that doesn't involve creating an Azure Storage resource, because that adds the work of maintaining the creation and teardown of that storage.
Ideally, I would like the host agent (which runs the container create) to simply copy the files directly into the container instances, so that the files are tied to the execution of the hosting agent.
As far as I know, there is no way to copy files to a Windows-based Azure Container Instance other than through a command, and the AzCopy command is fine for that. What you want to do on the host agent is not possible; you have no access to the ACI host agent. Additionally, ACI is better suited to quickly testing and running images.
If you want to copy files and have more control over the containers, I recommend AKS. You can run Windows-based containers in AKS with Windows nodes, and Azure Files volumes are also available for Windows containers. See the information here.
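As a rough sketch of that direction (every name is a placeholder, and the cluster is assumed to already exist with the Azure CNI network plugin, which Windows node pools require), you would add a Windows node pool and then mount Azure Files from Windows pods through the cluster's azurefile storage class:

# Hypothetical names; replace the resource group and cluster with your own.
az aks nodepool add \
  --resource-group <resource_group> \
  --cluster-name <aks_cluster> \
  --name npwin \
  --os-type Windows \
  --node-count 1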
B"H
I'd like to use Azure Web Apps to host staging servers for WordPress.
The best way to ensure that staging is as close as possible to live seems to be Docker.
I use docker-compose on my dev machine and it works great. I would like to replicate that setup on Azure.
My docker-compose.yml file sets up three containers: 1) mysql, 2) phpMyAdmin, 3) my-wordpress-container.
I mount three volumes:
- ./data/mysql:/var/lib/mysql
- ./data/init/:/docker-entrypoint-initdb.d
in the db container and
- ./site/wp-content/uploads:/var/www/html/wp-content/uploads
in the wordpress container.
Most important is the /docker-entrypoint-initdb.d directory for bootstrapping the db with test data.
How would I accomplish that in Azure Web Apps?
Before answering: please do not explain how to deploy multiple containers on App Service using docker-compose, or post links to tutorials on hosting WordPress in that environment. That is both simple and not quite enough to be useful here.
Also, please do not post links about mounting File Shares in web apps. That is also quite simple.
The question is how to mount file shares into a multi-container setup, and specifically how I might mount /docker-entrypoint-initdb.d to bootstrap the db with test data.
This is totally doable in Container Instances; however, Container Instances has other limitations that make it unusable for this solution.
I think this documentation is what you are looking for:
https://learn.microsoft.com/en-us/azure/app-service/containers/how-to-serve-content-from-azure-storage#use-custom-storage-in-docker-compose
From the docs:
Azure Storage can be mounted with multi-container apps using the custom-id. To view the custom-id name, run az webapp config storage-account list --name <app_name> --resource-group <resource_group>.
In your docker-compose.yml file, map the volumes option to custom-id. For example:
wordpress:
  image: wordpress:latest
  volumes:
    - <custom-id>:<path_in_container>
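The custom-id in that snippet refers to a storage mount that has to be registered on the web app first. A rough az CLI sketch for the /docker-entrypoint-initdb.d case from the question (the custom-id, share, key, and all other names are placeholders):

# Hypothetical names; replace them with your own resources.
# Register an Azure Files share on the web app under the custom-id "initdata".
az webapp config storage-account add \
  --resource-group <resource_group> \
  --name <app_name> \
  --custom-id initdata \
  --storage-type AzureFiles \
  --account-name <storage_account> \
  --share-name <file_share> \
  --access-key <storage_key>

# The db service in docker-compose.yml can then reference it:
#   volumes:
#     - initdata:/docker-entrypoint-initdb.d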
There is a way to mount disks from Azure Web Apps; just google "mount disk from azure web app". There is a good blog post from Tom Kerkhove.
However, I would advise against this; currently, the Azure Storage REST API appears more stable: https://learn.microsoft.com/en-us/rest/api/storageservices/
Azure Web Apps supports multi-container apps, currently in preview. Web Apps does come with MySQL as part of the instance, but that is not recommended when you want to scale your web application, so it is best to host the database in a managed MySQL service.
The link below walks you through hosting WordPress and Redis in an Azure Web App with multiple containers, using the managed MySQL service for the database and persistent storage.
https://learn.microsoft.com/en-gb/azure/app-service/containers/tutorial-multi-container-app
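Following that tutorial, the multi-container app itself is created from a compose file along these lines (a sketch only; the plan, app, and compose file names are placeholders):

# Hypothetical names; the App Service plan is assumed to already exist.
az webapp create \
  --resource-group <resource_group> \
  --plan <app_service_plan> \
  --name <app_name> \
  --multicontainer-config-type compose \
  --multicontainer-config-file docker-compose-wordpress.yml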
If you still want to run MySQL in Docker, you may want to consider using Azure Kubernetes Service to host all the containers instead.
I hope this helps.
I have the Matomo Docker image from https://github.com/bitnami/bitnami-docker-matomo that I run in a Web App for Containers on Azure with my own Azure Container Registry (ACR).
Also, I have an Azure Storage Account with a File Share available.
What I would like to achieve is to mount persistent storage (a file share from the Azure Storage account) to it so I don't lose the Matomo config and installed plugins.
I tried using the Mount Storage (Preview), but I couldn't get it to work.
Name: matomo_data
Storage Type: Azure Files
Mount path: /bitnami
As described in: https://github.com/bitnami/bitnami-docker-matomo#persisting-your-application
This didn't work.
I also tried the setting WEBSITES_ENABLE_APP_SERVICE_STORAGE = true on the Web App for Containers, but that doesn't seem to do anything either.
I would appreciate any hints here, as otherwise I would have to build a custom Docker image and push it to the registry with a custom docker-compose file, which I would like to avoid.
Thanks a lot in advance for any hints on this!
Mounting an Azure file share into a Web App for Containers is, as I understand it, not simple persistent storage but a share action. See the caution below:
Linking an existing directory in a web app to a storage account will delete the directory contents. If you are migrating files for an existing app, make a backup of your app and its content before you begin.
So if you want to mount the file share to the web app to persist storage, you need to upload all the needed files to the file share first. The steps for mounting an Azure file share to a web app are here; they are shown for Windows, but it works the same way for Linux.
But I would suggest you use the built-in persistent storage instead, following the steps here. That way the persistent storage exists from the beginning and the directory contents are not deleted.
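If you do go the file-share route, a rough az CLI sketch of the "upload first, then mount" order (all names are placeholders, and /bitnami is simply the path from the question):

# Hypothetical names; replace them with your own storage account, share, and app.
# 1. Copy the existing Matomo content into the file share first, because
#    linking the directory would otherwise remove its contents.
az storage file upload-batch \
  --account-name <storage_account> \
  --destination <file_share> \
  --source ./bitnami-backup

# 2. Mount the share into the container at the path the image expects.
az webapp config storage-account add \
  --resource-group <resource_group> \
  --name <app_name> \
  --custom-id matomo_data \
  --storage-type AzureFiles \
  --account-name <storage_account> \
  --share-name <file_share> \
  --access-key <storage_key> \
  --mount-path /bitnami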
I am trying to build two different services that will run on Azure Web Apps for Containers. I am creating Docker images and storing them in Azure Container Registry. I want to share a single persistent storage between these two services. I understood from blogs that you can mount the /home directory, but that it cannot be shared between two services.
There is a Docker plugin, Cloudstor: I can create the volume, but I'm not sure how to use the generated volume in Web Apps for Containers. App Service runs the docker command itself, so does anybody know how we can use a volume created with this plugin?
In my opinion, Web Apps for Containers should not be used here. I think it is better to get a Docker host machine as a VM and then work with the normal Docker features. This is also the way Microsoft describes in their docs for multi-Docker scenarios: https://learn.microsoft.com/de-de/azure/virtual-machines/linux/docker-compose-quickstart
Things Microsoft should do:
give Kudu a proper Docker CLI
map storage to Docker volumes via the Azure portal / Azure CLI
Create a storage account and mount a file share into the Docker image somewhere under /home.
This will be easiest if the two service instances are in the same resource group as the storage account.
What is your reason for sharing a single storage instance?
Without experimenting, I can't guarantee that the same storage container can be shared between two App Services; it depends on your needs. I expect two containers in the same storage account can be mounted into your two Docker images.
Without knowing a little more, this is the most I can contribute. All the best.
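For what it's worth, a rough az CLI sketch of mounting one Azure Files share into both web apps (every name, and the /shared mount path, is a placeholder; whether both apps can safely write to the same share at once depends on the workload):

# Hypothetical names; replace them with your own resources.
STORAGE_KEY=$(az storage account keys list \
  --resource-group <resource_group> \
  --account-name <storage_account> \
  --query "[0].value" -o tsv)

# Register the same file share on both web apps so the two containers
# see one shared directory.
for APP in <app_one> <app_two>; do
  az webapp config storage-account add \
    --resource-group <resource_group> \
    --name "$APP" \
    --custom-id shareddata \
    --storage-type AzureFiles \
    --account-name <storage_account> \
    --share-name <file_share> \
    --access-key "$STORAGE_KEY" \
    --mount-path /shared
done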