I'm deploying my multi-container application on Azure App Service (Web App for Containers) and the time has come to add persistent storage.
I made it work, but the result is not what I expected.
So I have a Storage Account with a File Share (Transaction optimized).
In that File Share, I created a directory called media.
In my App Service, under Settings > Configuration > Path mappings, I created a Mount storage record pointing at that file share.
In my docker-compose.yml file, I have the following:
backend:
  container_name: my_container_name
  image: my_image
  volumes:
    - filestore:/whatever/media
  # ...

volumes:
  filestore:
The backend of my application stores files in the /whatever/media folder, and it works: my files end up in Azure, but not at the correct path. In Azure, I'd like to have everything under the media directory that I created. Instead, files and directories are created at the same level as my Azure media directory, not inside it:
The result in the file share is:

helpdesk
media

whereas I'd like to have only

media

with media/helpdesk/... inside it.
How can I achieve that? What am I missing?
Thanks in advance for your help.
Related
In this project I am using Azure App Service with Linux containers. There is a certain file that must persist and be shared across all the instances. However, when the container starts, it fails with the following error:
Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
Using FileZilla and the App Service FTPS credentials, I am able to upload the file I need to the default shared storage, ending up with this folder structure:
-/
|_ASP.NET
|_LogFiles
|_site
|_thefileineed.txt
As you can see, this is a C# project, so it has an appsettings.json file in which the path to this file is declared:
{
  "PathToFile": "/home/thefileineed.txt"
}
Because it uses containers, I assume that I must mount the shared storage inside the container with the docker-compose.yml file. Following the documentation, I use the following setup:
...
volumes:
  - ${WEBAPP_STORAGE_HOME}:/home
What am I missing? How is the app supposed to access the file?
Your setting should be like below, with the storage mount named MyExternalStorage. In the docker-compose configuration, I have to set:

volumes:
  - MyExternalStorage:/var/www/html/contao
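As a rough sketch (the service name and image below are placeholders, not from the original setup), the complete compose file would look something like this, with the volume name matching the storage mount name configured under Path mappings:

version: '3.3'
services:
  web:                                         # placeholder service name
    image: myregistry.azurecr.io/myapp:latest  # placeholder image
    volumes:
      # MyExternalStorage must match the storage mount name in the portal
      - MyExternalStorage:/var/www/html/contao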
For more details, please read these related posts:
1. Azure Web App usage of WEBAPP_STORAGE_HOME variable in docker-compose
2. [ Q&A ] Web App Docker Compose Persistent Storage
I want to deploy monitoring dashboards using Grafana as web apps on Azure and share them with my team members.
But I ran into a problem:
(1) In docker-compose, Grafana needs a volume to store its data.
(2) So I created an Azure Storage account and a File Share, and mapped this storage to the web app.
The storage mount is as follows:
name: namename
mapping path: /var/lib/grafana
format: AzureFiles
(3) And this is my docker-compose.yml
services:
  grafana:
    image: grafana/grafana
    ports:
      - 3001:3000
    volumes:
      - namename:/var/lib/grafana
(4) After I built it, my web app went down and showed me an error screen.
The error log is this:
service init failed: migration failed: database is locked
Logging is not enabled for this container.
I don't know what the problem is or how to fix it.
Also, I want to attach the storage and inspect its contents.
How do I do that?
When you mount the Azure File Share to the container, the mounted path will have root as its owner and group. But the image runs as the grafana user, which therefore does not have permission to migrate files.
The solution is to mount to a new path that does not exist in the image, for example /var/lib/grafana-data; then it will work well. You then need to copy the data yourself from the path /var/lib/grafana to the new path to persist it.
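A minimal sketch of that workaround, assuming the storage mount is still named namename (GF_PATHS_DATA is Grafana's documented environment variable for relocating its data directory):

services:
  grafana:
    image: grafana/grafana
    ports:
      - 3001:3000
    environment:
      # point Grafana's data directory at the mounted path
      - GF_PATHS_DATA=/var/lib/grafana-data
    volumes:
      # mount to a path that does not already exist in the image
      - namename:/var/lib/grafana-data

This avoids mounting over the image's own /var/lib/grafana directory.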
I am using the App Engine standard environment for Python 3.7.
When running the app deploy command, container images are uploaded to Google Cloud Storage in the bucket eu.artifacts.<project>.appspot.com.
This message is printed during app deploy:
Beginning deployment of service [default]...
#============================================================#
#= Uploading 827 files to Google Cloud Storage =#
#============================================================#
File upload done.
Updating service [default]...
The files are uploaded to a multi-region (eu); how do I change this to upload to a single region?
I'm guessing there is a configuration file that should be added to the repository to instruct App Engine, Cloud Build, or Cloud Storage that the files should be uploaded to a single region.
Is the eu.artifacts.<project>.appspot.com bucket required, or could all files be ignored using the .gcloudignore file?
The issue is similar to this one: How can I specify a region for the Cloud Storage buckets used by Cloud Build for a Cloud Run deployment?, but for App Engine.
I'm triggering the Cloud Build using a service account.
I tried to implement the changes from the solution in the link above, but wasn't able to get rid of the multi-region bucket.
substitutions:
  _BUCKET: unused
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', '--promote', '--stop-previous-version']
artifacts:
  objects:
    location: 'gs://${_BUCKET}/artifacts'
    paths: ['*']
The command: gcloud builds submit --gcs-log-dir="gs://$BUCKET/logs" --gcs-source-staging-dir="gs://$BUCKET/source" --substitutions=_BUCKET="$BUCKET"
I delete the whole bucket after deploying, which prevents further billing:

gsutil -m rm -r gs://us.artifacts.<project-id>.appspot.com

-m: multi-threading/multi-processing (instead of deleting object by object, this flag deletes objects in parallel)
rm: the command to remove objects
-r: recursive
https://cloud.google.com/storage/docs/gsutil/commands/rm
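For example, as the final step of a hypothetical deploy script:

# deploy, then clean up the auto-created artifacts bucket
gcloud app deploy --quiet
gsutil -m rm -r gs://us.artifacts.<project-id>.appspot.com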
After investigating a little bit more, I want to mention that this kind of bucket is created by the Container Registry product when you deploy a new container (that is, when you deploy your App Engine application). When you push an image to a registry with a new hostname, Container Registry creates a storage bucket in the specified multi-regional location. This bucket is the underlying storage for the registry; within a project, all registries with the same hostname share one storage bucket.
Based on this, it is not accessible by default, and it contains the container images written when you deploy a new container. It's not recommended to modify it, because the artifacts bucket is meant to contain deployment images, and changing them may affect your app.
Finally, something curious that I found: when you create a default bucket (as is the case with the aforementioned bucket), you also get a staging bucket with the same name except that it begins with staging. You can use this staging bucket for temporary files used for staging and test purposes; it also has a 5 GB limit, but it is automatically emptied on a weekly basis.
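If deleting the bucket outright feels too aggressive, a gentler alternative is a lifecycle rule that expires old objects automatically (the 7-day age below is an arbitrary choice):

lifecycle.json:

{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 7}}
  ]
}

Apply it to the artifacts bucket:

gsutil lifecycle set lifecycle.json gs://eu.artifacts.<project>.appspot.com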
I am running a multi-layered container app in Azure using an App Service called "Web App for Containers".
I have a Blob storage account where I want my data to be saved,
and I am trying to attach it to my app container.
I configured the path mapping for my container with the following parameters:

name: AppDataVol
mount path: /var/appdata
type: azureBlob
account name: "*****"
share name: "test-container"
And... it seems to be ignored: data does not persist, it never reaches my storage volume,
and on restart everything is gone.
I don't know what I am doing wrong; I have been at it for almost a week!
Below is my docker-compose file; to keep it simple, I removed the other services.
Please help me :(
version: '3'
services:
  app:
    container_name: best-app-ever
    image: 'someunknowenregistry.azurecr.io/test/goodapp:latest'
    restart: always
    build: .
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=production
    volumes:
      - AppDataVol:/appData

volumes:
  AppDataVol:
As my answer to your previous question explained, mounting Azure Blob storage to your container would work for you. But you need to be aware of this caution:

Linking an existing directory in a web app to a storage account will delete the directory contents. If you are migrating files for an existing app, make a backup of your app and its content before you begin.

It means that if the directory in the image already has files, the mount action will delete them rather than persist them. So to mount the Blob storage to the container, you should pick a directory that is empty in the image and only receives files once the container is running, and mount the blob storage there, as sketched below.
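A minimal sketch of that idea, reusing the volume from the question but mounting it at the path-mapping target /var/appdata (assumed to be empty in the image) instead of a populated directory:

services:
  app:
    image: 'someunknowenregistry.azurecr.io/test/goodapp:latest'
    volumes:
      # AppDataVol is the storage mount name from path mappings;
      # /var/appdata is assumed not to exist or be empty in the image
      - AppDataVol:/var/appdata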
I've been struggling for a couple of days now with how to set up persistent storage in a custom Docker container deployed on Azure.
Just for ease, I've used the official WordPress image in my container and provided the database credentials through environment variables; so far so good. The application is stateless and the data is stored in a separate MySQL service in Azure.
But how do I handle content files like server logs or uploaded images? Those are placed in /var/www/html/wp-content/uploads and will be removed if the container gets removed or if a backup snapshot is restored. Is it possible to mount this directory to a host location? Is it possible to mount this directory so it will be accessible through FTP to the App Service?
OK, I realized that it's not possible to mount volumes to a single-container app. To mount a volume you must use Docker Compose and mount the volume as in the example below.
Also, make sure you set the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to TRUE.
version: '3.3'
services:
  wordpress:
    image: wordpress
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8000:80"
    restart: always
With this, your uploaded files will be persisted and also included in the snapshot backups.
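If you prefer setting it from the CLI, the app setting can be applied like this (the resource group and app name are placeholders):

az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE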
Yes, you can do this, and you should read about PVs (persistent volumes) and PVCs (persistent volume claims), which allow mounting volumes onto your cluster.
In your case, you can mount:
Azure Files - basically a managed file-share endpoint mounted on the k8s cluster
Azure Disks - basically managed disk volumes mounted on the k8s cluster
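As a small illustrative sketch (azurefile is the storage class name commonly available on AKS; the claim name and size are placeholders), a persistent volume claim for an Azure Files volume looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-files          # placeholder name
spec:
  accessModes:
    - ReadWriteMany          # Azure Files can be shared across pods
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi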