Persistent Volume is not available in Volume Attachment - Azure

I am trying to expose Azure Blob Storage to Kubernetes pods. During testing, I got the error below:
Unable to attach or mount volumes: unmounted volumes=[blob-secret],
unattached volumes=[secrets-store]: error processing PVC
namespace/pvc-blob-claim: PVC is being deleted
Then I executed the following commands for debugging:
kubectl get pv
pv-blob 10Gi RWX Retain
kubectl get volumeattachment
This returned nothing (pv-blob is not listed here).
May I know why the PV is not listed under VolumeAttachment? Any suggestions on debugging the scenario are much appreciated.
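For debugging this sort of state, a minimal sketch of commands that could help; the PVC/PV names are the ones from the question, the namespace is a placeholder, and the last command assumes the Azure Blob CSI driver is in use:
kubectl describe pvc pvc-blob-claim -n <namespace>          # look for a DeletionTimestamp and the pvc-protection finalizer
kubectl describe pv pv-blob                                  # check Status and ClaimRef
kubectl get events -n <namespace> --sort-by=.lastTimestamp   # recent mount/attach events for the pod
kubectl get csidriver blob.csi.azure.com -o yaml             # check the attachRequired field
Note that a VolumeAttachment object is only created for drivers that require an attach step; if the driver registers with attachRequired: false (which mount-only drivers such as blobfuse typically do), an empty kubectl get volumeattachment is expected for that PV, and the "PVC is being deleted" condition is the more likely problem.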

Related

AKS File Share Persistent Mounting with Managed Identity - Having issue after key Rotation

I mounted Azure File shares in AKS deployments using the cluster's user-assigned managed identity (UAMI) with the Reader and Storage Account Key Operator Service roles. The share was mounted successfully in all the pod replicas, and I was able to create and list files on the Azure file share from a pod. However, it stopped working after key rotation. I also tried creating a new deployment, storage class, and PVC, but I am still facing permission issues while the pods are being created.
Stage 1: (First Time Process)
Created the AKS cluster, storage file share, and user-assigned managed identity.
Assigned the UAMI to the cluster and granted it the Reader and Storage Account Key Operator Service roles on the new storage scope.
Created a new custom storage class, PVC, and deployments.
Result: All functionalities were working as expected.
Stage 2: (Failure Process)
Created a new deployment after key rotation, as the existing pods were unable to access the Azure file share (permission issue).
Then created a new storage class/PVC/deployment - still the same permission issue.
Error:
default 13s Warning FailedMount pod/myapp-deploymentkey1-67465fb9df-9xcrz MountVolume.SetUp failed for volume "xx" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o file_mode=0777,dir_mode=0777,vers=3.0,actimeo=30,mfsymlinks,<masked> //{StorageName}.file.core.windows.net/sample1 /var/lib/kubelet/pods/xx8/volumes/kubernetes.io~azure-file/pvc-cxx
Output: mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
default 13s Warning FailedMount pod/myapp-deploymentkey1-67465fb9df-jwmcc MountVolume.SetUp failed for volume "xx" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o file_mode=0777,dir_mode=0777,vers=3.0,actimeo=30,mfsymlinks,<masked> //{StorageName}.file.core.windows.net/sample1 /var/lib/kubelet/pods/xxx/volumes/kubernetes.io~azure-file/pvc-xx
Output: mount error(13): Permission denied
• The error you are encountering while mounting the file share on the Kubernetes pod indicates a communication protocol issue, i.e., the channel used to connect to the Azure file share and mount it on the pod after key rotation is unencrypted, and the connection attempt was made from a different Azure datacenter location than the one where the file share resides.
• Also, check whether the 'Secure transfer required' property is enabled on the storage account, because if it is enabled, any request originating from an insecure connection is rejected. Microsoft recommends that you always require secure transfer for all your storage accounts.
• So, for this issue, you can try disabling the 'Secure transfer required' property on the storage account hosting the file share. The share is used by all the existing pods, so when a new pod deployment picks up a rotated key through the user-assigned managed identity, the existing pods may not be compatible with the newly assigned keys or may not have been updated with them (a CLI sketch for checking and changing this property follows the links below).
• You can also check the version of SMB encryption used by the existing pods and the newly deployed ones. Please refer to the links below for more information:
https://learn.microsoft.com/en-us/answers/questions/560362/aks-file-share-persistent-mounting-with-managed-id.html
https://learn.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-linux-file-connection-problems#mount-error13-permission-denied-when-you-mount-an-azure-file-share
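If it helps, here is a minimal Azure CLI sketch for checking and, if you accept the security trade-off, disabling the secure-transfer requirement; <storage-account> and <resource-group> are placeholders:
az storage account show --name <storage-account> --resource-group <resource-group> --query enableHttpsTrafficOnly
az storage account update --name <storage-account> --resource-group <resource-group> --https-only false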

"timeout expired" error when mouning PVC in Azure Kubernetes AKS

After a high-load problem that caused my pod in the deployment to be evicted, and even after deleting the deployment and creating it again, I am getting the following problem:
Warning FailedMount 15s kubelet Unable to mount volumes for pod "XXX(YYY)": timeout expired waiting for volumes to attach or mount for pod "qa"/"XXX". list of unmounted volumes=[ZZZ-volume]. list of unattached volumes=[shared dockersocket ZZZ-volume default-token-kks6d]
The PV is in RWO mode, so it can only be attached to one pod at a time. I guess the system still considers the PV attached to the evicted pod (which I have deleted), so it does not allow it to be attached to a new pod.
How can I "free" my PV/PVC so it can be attached to the new pod?
Edit: I added get PV and get PVC outputs as requested:
kubectl get pvc:
XXX-pvc-default Bound pvc-XXX-7d98-11ea-91c2-XXX 5Gi RWO default 469d
kubectl get pv:
pvc-XXX-7d98-11ea-91c2-XXX 5Gi RWO Delete Bound qa/XXX-pvc-default default 469d
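One hedged way to check whether the controller still considers the volume attached to the old node, and to force it to reconcile; object names below are placeholders, and deleting a VolumeAttachment should be a last resort:
kubectl get volumeattachment | grep pvc-XXX-7d98-11ea-91c2-XXX   # find the attachment that references this PV
kubectl describe volumeattachment <attachment-name>              # shows the node it is attached to and any detach errors
kubectl delete pod <evicted-pod> --grace-period=0 --force        # only if the evicted pod object still lingers
kubectl delete volumeattachment <attachment-name>                # last resort: lets the attach/detach controller re-attach on the new node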

Azure kubernetes - Secure pod access to resources

I come from a Windows OS background and have limited knowledge of Linux.
As per the Microsoft documentation
For your applications to run correctly, pods should run as a defined user or group and not as root. The securityContext for a pod or container lets you define settings such as runAsUser or fsGroup to assume the appropriate permissions. Only assign the required user or group permissions, and don't use the security context as a means to assume additional permissions. The runAsUser, privilege escalation, and other Linux capabilities settings are only available on Linux nodes and pods.
and they provided the sample below:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    fsGroup: 2000
  containers:
    - name: security-context-demo
      image: nginx:1.15.5
      securityContext:
        runAsUser: 1000
        allowPrivilegeEscalation: false
        capabilities:
          add: ["NET_ADMIN", "SYS_TIME"]
Based on the given description, I understood that the pod is not given root access and is granted only limited capabilities for network and time administration.
What do runAsUser: 1000 and fsGroup: 2000 mean? How do I set up the fsGroup?
The runAsUser: 1000 field specifies that all processes in any container of the Pod run with user ID 1000.
fsGroup: 2000 specifies that all processes of the container are also part of the supplementary group ID 2000. This group ID is associated with the emptyDir volume mounted in the pod and with any files created in that volume. Remember that only certain volume types allow the kubelet to change the ownership of a volume so that it is owned by the pod. If the volume type allows it (as the emptyDir volume type does), the owning group ID will be the fsGroup.
One thing to remember with fsGroup is what happens when you mount a folder from your host. Because Docker mounts the host volume preserving the UID and GID from the host, permission issues in the mounted volume are possible: the user running the container may not have the appropriate privileges to write to the volume.
Possible solutions are running the container with the same UID and GID as the host, or changing the permissions of the host folder before mounting it into the container.
For more details on fsGroup, refer to the docs.
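To see fsGroup in action, here is a minimal sketch; the pod name, image, and mount path are illustrative and not taken from the question:
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
    - name: scratch
      emptyDir: {}
  containers:
    - name: demo
      image: busybox:1.36
      command: ["sh", "-c", "id && touch /data/hello && ls -ln /data && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data
kubectl logs fsgroup-demo should show gid 2000 among the process's groups and group 2000 as the owner of /data/hello, confirming that the kubelet applied the fsGroup to the emptyDir volume.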

Persistent storage Azure container

I've been struggling for a couple of days now with how to set up persistent storage in a custom Docker container deployed on Azure.
Just for ease, I've used the official WordPress image in my container and provided the database credentials through environment variables; so far, so good. The application is stateless and the data is stored in a separate MySQL service in Azure.
How do I handle content files like server logs or uploaded images? Those are placed in /var/www/html/wp-content/upload and will be removed if the container gets removed or if a backup snapshot is restored. Is it possible to mount this directory to a host location? Is it possible to mount this directory so it will be accessible through FTP to the App Service?
OK, I realized that it's not possible to mount volumes to a single-container app. To mount a volume, you must use Docker Compose and mount the volume as in the example below.
Also, make sure you set the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to TRUE (a CLI sketch for this follows the compose example).
version: '3.3'
services:
  wordpress:
    image: wordpress
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8000:80"
    restart: always
With this, your uploaded files will be persisted and also included in the snapshot backups.
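If you prefer setting that application setting from the CLI instead of the portal, a sketch (the app and resource group names are placeholders):
az webapp config appsettings set --name <app-name> --resource-group <resource-group> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE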
Yes, you can do this, and you should read about PVs (persistent volumes) and PVCs (persistent volume claims), which allow mounting volumes onto your cluster.
In your case, you can mount:
Azure Files - basically a managed file share (SMB, optionally NFS) mounted on the k8s cluster
Azure Disks - basically managed disk volumes mounted on the k8s cluster
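As a rough sketch of what such claims look like on AKS (the storage class names azurefile and managed-csi are common defaults, but verify with kubectl get storageclass on your cluster):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: files-claim
spec:
  accessModes: ["ReadWriteMany"]   # Azure Files supports shared access
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-claim
spec:
  accessModes: ["ReadWriteOnce"]   # Azure Disks attach to a single node
  storageClassName: managed-csi
  resources:
    requests:
      storage: 5Gi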

Mount PVC in cronjob and statefulset

I have two services that I would like to access a PersistentVolumeClaim.
One is a StatefulSet that reads from the volume (and serves content to end users); the other is a Kubernetes CronJob that periodically updates the data in the volume obtained through the PVC.
Right now I'm running into the issue that my PVC is backed by a PV (not NFS, Ceph, or the like), and one service grabs the volume, preventing the other from starting.
How can I make it so both of these services have access to the volume?
And is there a way to add a CronJob to my StatefulSet the same way I add more containers?
Have you checked the accessModes of your PV and PVC?
If you want more than one pod to be able to mount the volume, you'll need to use ReadOnlyMany or ReadWriteMany (see the sketch at the end of this answer).
Persistent Volume Docs
As for your second question, no, there's no way to "add a CronJob to [a] StatefulSet". They are separate and distinct API objects.
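To illustrate the first point, here is a hedged sketch of a ReadWriteMany claim (azurefile is shown only as an example of an RWX-capable class on AKS; any NFS/CephFS-style class works the same way):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-content
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: azurefile
  resources:
    requests:
      storage: 10Gi
Both the StatefulSet's pod template and the CronJob's jobTemplate would then declare a volume with persistentVolumeClaim.claimName: shared-content and mount it wherever each workload needs it.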
