Persistent storage in an Azure container - azure

I've been struggling for a couple of days now with how to set up persistent storage in a custom Docker container deployed on Azure.
For simplicity, I've used the official WordPress image in my container and provided the database credentials through environment variables; so far so good. The application is stateless and the data is stored in a separate MySQL service in Azure.
But how do I handle content files like server logs or uploaded images? These are placed in /var/www/html/wp-content/upload and will be removed if the container is removed or if a backup snapshot is restored. Is it possible to mount this directory to a host location? Is it possible to mount this directory so it will be accessible through FTP to the App Service?

OK, I realized that it's not possible to mount volumes to a single-container app. To mount a volume you must use Docker Compose and mount the volume as in the example below.
Also, make sure you set the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to TRUE.
version: '3.3'
services:
  wordpress:
    image: wordpress
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8000:80"
    restart: always
With this, your uploaded files will be persisted and also included in the snapshot backups.

Yes, you can do this. You should read about PVs (persistent volumes) and PVCs (persistent volume claims), which allow mounting volumes onto your cluster.
In your case, you can mount either of the following (a PVC sketch for the first follows below):
Azure Files - basically a managed SMB file share mounted on the k8s cluster
Azure Disks - basically managed disk volumes mounted on the k8s cluster
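For example, here is a minimal PVC sketch for dynamically provisioning an Azure Files share on AKS. The claim name and size are placeholders, and the azurefile storage class is assumed to exist on the cluster (AKS ships one by default):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-uploads         # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany               # Azure Files supports shared read/write access
  storageClassName: azurefile     # assumed built-in AKS storage class
  resources:
    requests:
      storage: 5Gi
The claim can then be referenced under volumes in your pod or deployment spec and mounted at the path your app writes to.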

Related

Wrong path for File Share in Azure App Service

I'm deploying my multi-container application on Azure App Service (Web App for Containers) and the time has come to add persistent storage.
I made it work, but the result is not what I expected.
So I have a Storage Account with a File Share (Transaction optimized).
In that File Share, I created a directory called media.
In my App Service, under Settings > Configuration > Path mappings, I created a Mount storage record as below:
In my docker-compose.yml file, I have the following:
backend:
  container_name: my_container_name
  image: my_image
  volumes:
    - filestore:/whatever/media
  # ...
volumes:
  filestore:
The backend of my application stores files in the /whatever/media folder, and it works: my files are in Azure, but not at the correct path. In Azure, I'd like to have everything under the media directory that I created. Instead, files and directories are created at the same level as my Azure media directory, not inside it:
You can see the result in the screenshot:
helpdesk
media
Whereas I'd like to have only
media
with media/helpdesk/... inside it.
How can I achieve that? What am I missing?
Thanks in advance for your help.

How do I deploy Grafana on an Azure Web App with a Docker Compose file?

I want to deploy monitoring dashboards using Grafana as a web app on Azure and share them with my team members.
But I ran into a problem:
(1) In Docker Compose, Grafana needs volumes to store data.
(2) So I created an Azure Storage account and a File Share, and mapped this storage to a path in the web app.
The storage mount is as follows:
name: namename
mapping path: /var/lib/grafana
format: AzureFiles
(3) And this is my docker-compose.yml:
services:
  grafana:
    image: grafana/grafana
    ports:
      - 3001:3000
    volumes:
      - namename:/var/lib/grafana
(4) After I built it, my web app was down and showed me an error screen.
The error log is this:
service init failed: migration failed: database is locked
Logging is not enabled for this container.
I don't know what the problem is or how to fix it.
Also, I want to attach the storage and check what's inside it. How do I do that?
When you mount the Azure File Share to the container, the mounted path will have root as its owner and group. But the image runs as the user grafana, which therefore does not have permission to migrate the files.
The solution is to mount to a new path that does not exist in the image, for example /var/lib/grafana-data; then it will work. You then need to copy the data yourself from the path /var/lib/grafana to the path /var/lib/grafana-data to persist it.
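A minimal sketch of the adjusted compose file under that assumption (the storage mount's mapping path would also need to change to /var/lib/grafana-data):
services:
  grafana:
    image: grafana/grafana
    ports:
      - 3001:3000
    volumes:
      # mount the share at a path that does not exist in the image, so the
      # root-owned mount no longer shadows Grafana's own /var/lib/grafana
      - namename:/var/lib/grafana-data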

Azure Web App for Containers: can't attach Azure Blob storage account to container

I am running a multi-layered container app in Azure using an App Service called "Web App for Containers".
I have a Blob storage account where I want my data to be saved,
and I am trying to attach it to my app container.
I configured the "path mapping" for my container with the following parameters:
{ name: AppDataVol, mount path: /var/appdata, type: azureBlob, AccountName: "*****", share name: "test-container" }
And... it seems to be ignored: data does not persist, it doesn't reach my storage volume,
and on restart everything is gone...
I don't know what I am doing wrong; I have been at it for almost a week!
Below is my docker-compose file; to keep it simple I removed the other services.
Please help me :(
version: '3'
services:
  app:
    container_name: best-app-ever
    image: 'someunknowenregistry.azurecr.io/test/goodapp:latest'
    restart: always
    build: .
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=production
    volumes:
      - AppDataVol:/appData
volumes:
  AppDataVol:
As my answer to your previous question explained, mounting Azure Blob storage to your container will work for you. But you need to be aware of this caution:
Linking an existing directory in a web app to a storage account will delete the directory contents. If you are migrating files for an existing app, make a backup of your app and its content before you begin.
This means that if the directory in the image already has files, the mount action will delete those existing files rather than persist them. So to mount Blob Storage into the container, you should pick a directory that is empty in the image and only receives files at runtime, and mount the blob storage there.
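As a minimal sketch under that assumption, the compose volume would target a directory that is empty in the image (here the path mapping's mount path /var/appdata, assumed not to exist in the image, instead of /appData):
version: '3'
services:
  app:
    image: 'someunknowenregistry.azurecr.io/test/goodapp:latest'
    volumes:
      # AppDataVol matches the path-mapping name; /var/appdata is assumed
      # to be empty in the image so the mount does not delete existing files
      - AppDataVol:/var/appdata
volumes:
  AppDataVol: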

Azure Container Instances with blobfuse or Azure Storage Blobs

I'm deploying to Azure Container Instances from the Azure Container Registry (Azure CLI and/or portal). Azure blobfuse (on Ubuntu 18) is giving me the following error:
device not found, try 'modprobe fuse' first.
The solution to this would be to use the --cap-add=SYS_ADMIN --device /dev/fuse flags when starting the container (docker run):
can't open fuse device in a docker container when mounting a davfs2 volume
However, the --cap-add flag is not supported by ACI:
https://social.msdn.microsoft.com/Forums/azure/en-US/20b5c3f8-f849-4da2-92d9-374a37e6f446/addremove-linux-capabilities-for-docker-container?forum=WAVirtualMachinesforWindows
Azure Files is too expensive for our scenario.
Any suggestions on how to use blobfuse or Azure Blob Storage (quasi-natively from Node.js) from a Linux Docker container in ACI?
Unfortunately, it seems it's impossible to mount blobfuse or Azure Blob Storage to an Azure Container Instance. There are just four types of volume that can be mounted. You can take a look at the Azure template for Azure Container Instances; it shows all the properties of an ACI, and you can see all the volume objects there (an example using one of them is sketched below).
Maybe other volume types that we can mount to a Docker container will be supported in the future. Hope this helps.
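For reference, a minimal sketch of an ACI YAML deployment using one of the four supported volume types (azureFile); the group name, image, share, and key are all placeholders:
apiVersion: '2019-12-01'
location: westeurope
name: demo-group                                 # placeholder container group
properties:
  containers:
  - name: app
    properties:
      image: myregistry.azurecr.io/app:latest    # placeholder image
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - name: files
        mountPath: /mnt/files
  osType: Linux
  volumes:
  - name: files
    azureFile:
      shareName: myshare
      storageAccountName: mystorageaccount
      storageAccountKey: '<storage-account-key>'
type: Microsoft.ContainerInstance/containerGroups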

Why is the OpenEBS mount path still available on the host machine even after deletion of the PVC?

I have deleted the PVC, but the OpenEBS mount folder is still present with the PVC folder name and data. This is not releasing space on my host machine. Why is that?
OpenEBS volumes of the jiva storage engine (cas type) use replica pods configured with a hostPath to save the data. Upon deletion of an OpenEBS volume, the target and replica pods are deleted, but not the associated hostPath and the contents within it. The following issue tracks the implementation of a feature to clean up the contents of the jiva replica hostPath: https://github.com/openebs/openebs/issues/1246
Starting with OpenEBS 0.7, OpenEBS volumes of cas type cStor are supported. In the case of cStor volumes, the associated data of the volume is deleted when the corresponding PV/PVC is deleted.
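For illustration, a minimal StorageClass sketch selecting the cStor engine on OpenEBS 0.7+ (the class and pool claim names are placeholders):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-demo       # placeholder class name
  annotations:
    openebs.io/cas-type: cstor   # select the cStor engine
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-pool-demo"
provisioner: openebs.io/provisioner-iscsi
PVCs bound through such a class have their data removed when the PV/PVC is deleted.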
