Why is the OpenEBS mount path still available on the host machine even after deletion of the PVC?

I have deleted the PVC, but the OpenEBS mount folder is still present on the host, with the PVC folder name and data inside. This is not releasing space on my host machine. Why is that?

OpenEBS volumes of the Jiva storage engine (cas type jiva) use replica pods configured with a hostPath to save the data. Upon deletion of an OpenEBS volume, the target and replica pods are deleted, but not the associated hostPath and the contents within it. The following issue tracks the implementation of a feature to clean up the contents of the Jiva replica hostPath: https://github.com/openebs/openebs/issues/1246
Starting with OpenEBS 0.7, volumes of cas type cStor are supported. For cStor volumes, the associated data is deleted when the corresponding PV/PVC is deleted.
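For illustration, here is a minimal PVC sketch for a cStor volume, assuming a cStor StorageClass named openebs-cstor-sparse exists in your cluster (StorageClass names vary by OpenEBS release; check kubectl get sc). Deleting this claim also removes the cStor volume data, unlike the Jiva hostPath case above.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-cstor-claim                    # hypothetical name
spec:
  storageClassName: openebs-cstor-sparse    # assumed cStor StorageClass name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi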

Related

Azure kubernetes change pod disk type

I need to change my Azure Kubernetes pod disk type from Premium SSD to Standard SSD, but the disk contains some data. Can I change the type directly, or do I need to migrate the data first? Thanks.
I want the disk type changed with the old data still intact.
To change the pod disk type from Premium to Standard, you can create a new pod with a Standard SSD disk and then migrate/transfer the data. Finally, delete the old Premium SSD pod.
See more details regarding Azure disk volumes at:
https://learn.microsoft.com/en-us/azure/aks/concepts-storage#persistent-volumes
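As a rough sketch of the migration target, assuming the Azure Disk CSI driver (disk.csi.azure.com) is installed, a Standard SSD StorageClass and a new PVC could look like the following; the class and claim names are hypothetical. You would copy the data from the old Premium disk into the new claim before deleting the old pod and disk.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-standard-ssd          # hypothetical name
provisioner: disk.csi.azure.com       # assumes the Azure Disk CSI driver
parameters:
  skuName: StandardSSD_LRS            # Standard SSD instead of Premium_LRS
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-standard-ssd             # hypothetical name
spec:
  storageClassName: managed-standard-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Gi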

default snapshot location for velero

I want to use Velero with my Azure Kubernetes cluster to back up cluster data and persistent volumes.
As the docs say, I have annotated the pods, and the backup job even shows 4 snapshots as successful.
I managed to take the backup of the cluster and I can see it in my Azure storage account. The problem is that I see only .gz files and one JSON file in my storage account's Velero-designated container. Shouldn't I see a file equivalent in size to my PVs (which are about 10 GB)?
This is in fact the correct setup. You should see only JSON files and gzipped files in the backup folder within the Velero container.
These files hold pointers to the actual snapshots in Azure. Look for the snapshots within the resource group you specified during backup; there should be snapshots corresponding to the PVC size.
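For reference, a backup that snapshots PVs is normally created with the velero CLI, which in turn creates a Backup resource roughly like the sketch below; the backup name is hypothetical and the volume snapshot location is assumed to be the default Azure one configured at install time.

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: cluster-backup                # hypothetical name
  namespace: velero
spec:
  includedNamespaces:
    - '*'
  snapshotVolumes: true               # take native Azure disk snapshots for PVs
  volumeSnapshotLocations:
    - default                         # assumes the default Azure snapshot location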

Persistent storage for an Azure container

I've been struggling for a couple of days now with how to set up persistent storage in a custom Docker container deployed on Azure.
Just for ease, I've used the official WordPress image in my container and provided the database credentials through environment variables, so far so good. The application is stateless and the data is stored in a separate MySQL service in Azure.
How do I handle content files like server logs or uploaded images? Those are placed in /var/www/html/wp-content/upload and will be removed if the container gets removed or if a backup snapshot is restored. Is it possible to mount this directory to a host location? Is it possible to mount this directory so it will be accessible through FTP to the App Service?
OK, I realized that it's not possible to mount volumes to a single-container app. To mount a volume you must use Docker Compose and mount the volume as in the example below.
Also, make sure you set the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to TRUE.
version: '3.3'
services:
  wordpress:
    image: wordpress
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8000:80"
    restart: always
With this, your uploaded files will be persisted and also included in the snapshot backups.
Yes, you can do this. You should read about PVs (persistent volumes) and PVCs (persistent volume claims), which allow mounting volumes into your cluster.
In your case, you can mount:
Azure Files - basically a managed SMB file share mounted on the k8s cluster
Azure Disks - basically managed disk volumes mounted on the k8s cluster
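For the Kubernetes route, here is a minimal sketch of an Azure Files claim, assuming the built-in azurefile StorageClass on AKS (the claim name is hypothetical); WordPress pods could mount it at /var/www/html/wp-content so uploads survive pod restarts.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-content                    # hypothetical name
spec:
  storageClassName: azurefile         # assumes the AKS built-in azurefile class
  accessModes:
    - ReadWriteMany                   # Azure Files supports shared read/write
  resources:
    requests:
      storage: 5Gi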

Mount PVC in cronjob and statefulset

I have two services that I would like to access a PersistentVolumeClaim.
One is a StatefulSet that reads from the volume (and serves content to end users); the other is a Kubernetes CronJob that periodically updates the contents of the data obtained through the PVC.
Right now I'm running into the issue that my PVC is backed by a PV (not NFS, Ceph, or the like), and one service grabs the volume, preventing the other from starting.
How can I make it so both of these services have access to the volume?
And is there a way to add a CronJob to my StatefulSet the same way I add more containers?
Have you checked the accessModes of your PV and PVC?
If you want more than one pod to be able to mount the volume, you'll need to use ReadOnlyMany or ReadWriteMany.
Persistent Volume Docs
As for your second question, no, there's no way to "add a CronJob to [a] StatefulSet". They are separate and distinct API objects.
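As a sketch, assuming the claim is named shared-data and its PV supports ReadWriteMany (for example Azure Files or NFS), a CronJob can mount the same PVC the StatefulSet uses; the names, image, and command below are placeholders.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: content-refresher             # hypothetical name
spec:
  schedule: "0 * * * *"               # hourly; adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: refresher
              image: busybox          # placeholder image
              command: ["sh", "-c", "date > /data/last-refresh"]
              volumeMounts:
                - name: shared
                  mountPath: /data
          volumes:
            - name: shared
              persistentVolumeClaim:
                claimName: shared-data   # hypothetical existing PVC name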

Dynamically created volumes from Kubernetes not being auto-deleted on Azure

I have a question about Kubernetes and the default reclaim behavior of dynamically provisioned volumes. The reclaim policy is "delete" for dynamically created volumes in Azure, but after the persistent volume claim and persistent volume have been deleted using kubectl, the page blob for the VHD still exists and is not going away.
This is an issue because every time I restart the cluster, I get a new 1 GiB page blob that I now have to pay for, and the old, unused one does not go away. They show up as unleased in the portal, and I am able to manually delete them in the storage account; however, they will not delete themselves. According to "kubectl get pv" and "kubectl get pvc", they do not exist.
According to all the documentation I can find, they should go away upon deletion using "kubectl":
http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
Any help on this issue would be much appreciated.
EDIT: I have found that this issue appears only when you delete the persistent volume before you delete the persistent volume claim. I know that is not the intended order of operations, but it should either be handled or throw an error.
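For reference, here is a sketch of a StorageClass using the in-tree kubernetes.io/azure-disk provisioner with the Delete reclaim policy (the class name is hypothetical); when the PVC is deleted first, before the PV, the bound PV and its backing page blob should be cleaned up automatically by the provisioner.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium-delete        # hypothetical name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
reclaimPolicy: Delete                 # dynamically provisioned disks are removed with the PV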
