I want to deploy Grafana monitoring dashboards as a web app on Azure and share them with my team members.
But I ran into a problem:
(1) In docker-compose, Grafana needs a volume to store its data.
(2) So I created an Azure Storage account with a File Share, and mapped this storage to the web app.
The storage mount is as follows:
name: namename
mapping path: /var/lib/grafana
format: AzureFiles
(3) And this is my docker-compose.yml
services:
  grafana:
    image: grafana/grafana
    ports:
      - 3001:3000
    volumes:
      - namename:/var/lib/grafana
(4) After I deployed it, my web app went down and showed an error screen.
The error log is this:
service init failed: migration failed: database is locked
Logging is not enabled for this container.
I don't know what the problem is or how to fix it.
Also, I want to browse the contents of the attached storage.
How do I do that?
When you mount an Azure File Share into the container, the mounted path is owned by root. But the image runs as the grafana user, so it does not have permission to migrate the database files.
The solution is to mount the share at a new path that does not exist in the image, for example /var/lib/grafana-data. Then it will work. You then need to copy the data yourself from /var/lib/grafana to /var/lib/grafana-data to persist it.
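A minimal compose sketch of this approach (the mount name namename comes from the question; GF_PATHS_DATA is Grafana's standard environment variable for relocating its data directory, which avoids having to copy files afterwards):

```yaml
services:
  grafana:
    image: grafana/grafana
    ports:
      - 3001:3000
    environment:
      # Point Grafana's data directory at the mounted path so the
      # database is created on the Azure File Share from the start.
      - GF_PATHS_DATA=/var/lib/grafana-data
    volumes:
      # Mount the Azure Files storage mount at a path that does not
      # already exist in the image, avoiding the root-owned overlay
      # on top of Grafana's own files.
      - namename:/var/lib/grafana-data
```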
Related
I'm deploying my multi-container application on Azure App Service (Web App for Containers) and time has come to add persistent storage.
I made it work, but the result is not what I expected.
So I have a Storage Account with a File Share (Transaction optimized).
In that File Share, I created a directory called media.
In my App Service, under Settings > Configuration > Path mappings, I created a storage mount record as below:
In my docker-compose.yml file, I have the following:
backend:
  container_name: my_container_name
  image: my_image
  volumes:
    - filestore:/whatever/media
  # ...
volumes:
  filestore:
The backend of my application stores files in the /whatever/media folder, and it works: my files end up in Azure, but not at the correct path. In Azure, I'd like to have everything under the media directory that I created. Instead, files and directories are created at the same level as my Azure media directory, not inside it. The File Share listing shows:
helpdesk
media
whereas I'd like to have only
media
with media/helpdesk/... inside it.
How can I achieve that?
What am I missing?
Thanks in advance for your help.
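One possible workaround, assuming the App Service storage mount maps the root of the File Share: mount the share one level higher and let the application keep writing into its media subdirectory, so the paths line up with the directory already created in the share (note this hides anything the image itself ships under /whatever):

```yaml
backend:
  container_name: my_container_name
  image: my_image
  volumes:
    # Mount the share root at /whatever; the app's writes to
    # /whatever/media then land inside the media directory of the share.
    - filestore:/whatever
volumes:
  filestore:
```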
In this project I am using Azure App Service with Linux containers. There is a certain file that must persist and be shared across all the instances. However, when the container starts it fails with the following error:
Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
Using FileZilla and the App Service FTPS credentials, I am able to upload the file I need to the default shared storage, ending up with this folder structure:
-/
|_ASP.NET
|_LogFiles
|_site
|_thefileineed.txt
As you can see, it is a C# project, so it has an appsettings.json file in which the path to this file is declared:
{
  "PathToFile": "/home/thefileineed.txt"
}
Because it uses containers, I assume that I must mount the shared storage inside the container with the compose file, and following the documentation I use the following setup:
...
volumes:
  - ${WEBAPP_STORAGE_HOME}:/home
What am I missing? Or how am I supposed to access the file?
Your setting should be like below: create a storage mount named MyExternalStorage. In the docker-compose configuration, set:
volumes:
  - MyExternalStorage:/var/www/html/contao
For more details, please read related posts.
1. Azure Web App usage of WEBAPP_STORAGE_HOME variable in docker-compose
2. [ Q&A ] Web App Docker Compose Persistent Storage
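Put together for the question above, a sketch of the named-mount approach (the mount name filestore and path /home/data are assumptions; the mount must be configured under Path mappings with the same name):

```yaml
services:
  backend:
    image: my-image
    volumes:
      # Reference the App Service storage mount by its configured name
      # instead of ${WEBAPP_STORAGE_HOME}.
      - filestore:/home/data
```

appsettings.json would then point into that directory, e.g. "PathToFile": "/home/data/thefileineed.txt".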
How to configure an Azure Blob Storage container in YAML
- name: scripts-file-share
  azureFile:
    secretName: dev-blobstorage-secret
    shareName: logs
    readOnly: false
The above is how the logs file share is configured in the YAML.
But what if I need to mount a blob container? How do I configure that?
Instead of azureFile, do I need to use azureBlob?
And what configuration do I need under azureBlob? Please help.
After the responses I got to the above post, and after going through articles online, I see there is no option to mount Azure Blob storage on Azure AKS for my problem, given the limitations of my environment, except to use azcopy or REST API integration.
So, after a bit of research and taking references from the articles below, I was able to create a Docker image.
1.) Created the Docker image following the reference article. I also needed to run a bash script, since I run the azcopy command from a bash file, so I copied the azcopy tool to /usr/bin.
2.) Created SAS tokens for the Azure File Share and the Azure Blob container. (Make sure you grant only the required access permissions.)
3.) Created a bash file that runs the command below.
azcopy copy <FileShareSASTokenConnectionUrl> <BlobSASTokenConnectionUrl> --recursive=true
4.) Created a deployment YAML that runs on AKS, and added the command to run the bash file there.
This gave me the ability to copy the files from Azure File Share folders to an Azure Blob container.
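A sketch of such a deployment manifest for step 4 (a one-shot Job fits a copy task; the image name, script path, and Job name are assumptions, not the author's actual values):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: azcopy-fileshare-to-blob
spec:
  template:
    spec:
      containers:
        - name: azcopy
          # Image built in step 1, with azcopy copied to /usr/bin
          image: myregistry.azurecr.io/azcopy-runner:latest
          # Runs the bash file from step 3 containing the azcopy command
          command: ["/bin/bash", "/scripts/copy.sh"]
      restartPolicy: Never
```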
References:
1.) https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#obtain-a-static-download-link
2.) https://github.com/Azure/azure-storage-azcopy/issues/423
I am running a multi-layered container app in Azure using an App Service "Web App for Containers".
I have a Blob storage account where I want my data to be saved, and I am trying to attach it to my app container.
I configured the path mapping for my container with the following parameters:
{ name: AppDataVol, mount path: /var/appdata, type: azureBlob, AccountName: "*****", share name: "test-container" }
and... it seems to be ignored: data does not persist, it never reaches my storage volume, and on restart everything is gone...
I don't know what I am doing wrong; I have been at it for almost a week!
Below is my docker-compose file; to keep it simple I removed the other services.
Please help me :(
version: '3'
services:
  app:
    container_name: best-app-ever
    image: 'someunknowenregistry.azurecr.io/test/goodapp:latest'
    restart: always
    build: .
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=production
    volumes:
      - AppDataVol:/appData
volumes:
  AppDataVol:
As my answer to your previous question explained, mounting Azure Blob storage to your container should work for you. But you need to understand the caution:
Linking an existing directory in a web app to a storage account will
delete the directory contents. If you are migrating files for an
existing app, make a backup of your app and its content before you
begin.
It means that if the directory in the image already contains files, the mount action will delete those existing files rather than persist them. So when you mount the Blob storage to the container, you should choose a directory that is empty in the image and only receives files at runtime, and mount the Blob storage there.
I've been struggling for a couple of days now with how to set up persistent storage in a custom Docker container deployed on Azure.
Just for ease, I've used the official WordPress image in my container and provided the database credentials through environment variables; so far so good. The application is stateless and the data is stored in a separate MySQL service in Azure.
How do I handle content files like server logs or uploaded images? Those are placed in /var/www/html/wp-content/uploads and will be removed if the container gets removed or if a backup snapshot is restored. Is it possible to mount this directory to a host location? Is it possible to mount this directory so it will be accessible through FTP to the App Service?
OK, I realized that it's not possible to mount volumes in a single-container app. To mount a volume you must use Docker Compose and mount the volume as in the example below.
Also, make sure you set the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to TRUE.
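That app setting can also be set from the Azure CLI (the web app and resource group names here are placeholders):

```shell
az webapp config appsettings set \
  --name my-webapp \
  --resource-group my-rg \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
```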
version: '3.3'
services:
  wordpress:
    image: wordpress
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8000:80"
    restart: always
With this, your uploaded files will be persisted and also included in the snapshot backups.
Yes, you can do this, and you should read about PVs (persistent volumes) and PVCs (persistent volume claims), which allow mounting volumes onto your cluster.
In your case, you can mount:
Azure Files - a managed SMB file share mounted on the k8s cluster
Azure Disks - managed disk volumes mounted on the k8s cluster
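A minimal PVC sketch using AKS's built-in azurefile storage class (the claim name and size are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    # Azure Files supports many pods mounting the same share read/write,
    # unlike Azure Disks, which are single-node (ReadWriteOnce).
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
```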