Use Release.Name in pv.yaml as value of shareName field - azure

I have this pv.yaml file:
{{ if .Values.useSystemFilesPV }}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mono-pv-{{ .Release.Name }}
spec:
  capacity:
    storage: {{ .Values.pvStorage }}
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: mono
    secretNamespace: {{ .Release.Name }}
    shareName: mono-sua-tst-man-yv/idit-core-pen
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
{{ end }}
I need to create the folder idit-core-pen under the existing file share mono-sua-tst-man-yv.
Regarding the shareName field: its value is mono-sua-tst-man-yv/idit-core-pen ('mono-sua-tst-man-yv' is my file share, and 'idit-core-pen' is a folder inside it).
It works well like that! It creates the folder idit-core-pen under the existing file share.
But when I change the value to mono-sua-tst-man-yv/{{ .Release.Name }} it doesn't work (it doesn't create the folder under the file share) and it throws this error message:
MountVolume.MountDevice failed for volume "mono-pv-idit-core-pen" : rpc error: code = Internal
desc = volume(#mono#mono-sua-tst-man-yv/idit-core-pen#mono-pv-idit-core-pen#idit-core-pen)
mount //euaidtcoretrunkaks.file.core.windows.net/mono-sua-tst-man-yv/idit-core-pen on
/var/lib/kubelet/plugins/kubernetes.io/csi/pv/mono-pv-idit-core-pen/globalmount failed with
mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t cifs -o dir_mode=0777,file_mode=0777,uid=1000,gid=1000,mfsymlinks,nobrl,actimeo=30,<masked> //euaidtcoretrunkaks.file.core.windows.net/mono-sua-tst-man-yv/idit-core-pen /var/lib/kubelet/plugins/kubernetes.io/csi/pv/mono-pv-idit-core-pen/globalmount
Output: mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
I tried playing with the quotation marks (for example: "mono-sua-tst-man-yv/{{ .Release.Name }}") and with the curly brackets (for example: mono-sua-tst-man-yv/{{{{ .Release.Name }}}}), but neither helped; the folder was still not created.
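To rule out a templating problem, the chart can also be rendered locally; a quick sketch, where the release name and the chart path (.) are assumptions:

# Render the chart and inspect what shareName becomes
# (release name "idit-core-pen" and chart path "." are assumptions)
helm template idit-core-pen . --set useSystemFilesPV=true | grep shareName

In fact, the error message above already shows the fully rendered remote path (//euaidtcoretrunkaks.file.core.windows.net/mono-sua-tst-man-yv/idit-core-pen), so the template does resolve; the failure happens at mount time (mount error(2): No such file or directory), not during templating.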
Does anyone have a better idea?
Thanks a lot.

Related

Is it possible to specify a folder path from a docker compose volume?

I have some Docker containers that I host on Windows Azure, plus a storage account that I use for persistent data. Inside the storage I created this folder structure: acishare/haproxy/enduser and acishare/haproxy/promoter.
In the code below, I want to mount acishare/haproxy/enduser as a volume in the Docker Compose configuration:
loadbalancer:
  image: haproxytech/haproxy-ubuntu:2.5
  ...
  volumes:
    - haproxy:/usr/local/etc/haproxy

volumes:
  haproxy:
    driver: azure_file
    driver_opts:
      share_name: acishare
      storage_account_name: tctstorage2
      storage_account_key: <account_key>
This configuration mounts only the root of the acishare share. However, I need to mount acishare/haproxy/enduser. Does anyone know how to do that?
This is currently not possible with Docker, though it has been requested as a feature for quite some time now:
https://github.com/moby/moby/issues/32582
As SidShetye commented on the GitHub issue, as of 15 April 2020 it was:
- the 2nd oldest issue (2/3500)
- the 8th most commented issue (8/3500)
- the 7th most thumbs-up'd issue (7/3500)
If this functionality is something you critically need, you may want to look at Kubernetes, which supports subpaths for volumes:
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
    - name: mysql
      image: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: "rootpasswd"
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: site-data
          subPath: mysql
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html
          name: site-data
          subPath: html
  volumes:
    - name: site-data
      persistentVolumeClaim:
        claimName: my-lamp-site-data
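With this layout both containers share a single PersistentVolumeClaim, and each subPath scopes its mount to one sub-directory of the same volume (mysql and html respectively).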

Kubernetes - "Mount Volume Failed" when trying to deploy

I deployed my first container and got this message:
deployment.apps/frontarena-ads-deployment created
but then I saw that container creation is stuck in the Waiting status.
I inspected the pod events using kubectl describe pod frontarena-ads-deployment-5b475667dd-gzmlp and saw a MountVolume error which I cannot figure out:
Warning  FailedMount  9m24s  kubelet  MountVolume.SetUp failed for volume "ads-filesharevolume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume --scope -- mount -t cifs -o username=frontarenastorage,password=mypassword,file_mode=0777,dir_mode=0777,vers=3.0 //frontarenastorage.file.core.windows.net/azurecontainershare /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume
Output: Running scope as unit run-rf54d5b5f84854777956ae0e25810bb94.scope.
mount error(115): Operation now in progress
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Before running the deployment I created a Kubernetes secret for the already existing Azure file share, and I reference it within the YAML:
$AKS_PERS_STORAGE_ACCOUNT_NAME="frontarenastorage"
$STORAGE_KEY="mypassword"
kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
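As a sanity check, the stored values can be inspected afterwards (they are displayed base64-encoded):

# Inspect what the secret actually contains
kubectl get secret fa-fileshare-secret -o yaml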
In that file share I have folders and files which I need to mount, and I reference the share azurecontainershare in the YAML below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment
  labels:
    app: frontarena-ads-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      containers:
        - name: frontarena-ads-aks-test
          image: faselect-docker.dev/frontarena/ads:test1
          imagePullPolicy: Always
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: ads-filesharevolume
              mountPath: /opt/front/arena/host
      volumes:
        - name: ads-filesharevolume
          azureFile:
            secretName: fa-fileshare-secret
            shareName: azurecontainershare
            readOnly: false
      imagePullSecrets:
        - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
The issue was caused by the AKS cluster and the Azure file share being deployed in different Azure regions. If they are in the same region, you will not hit this error.
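A quick way to compare the two locations is the Azure CLI; a sketch, where the resource group and cluster names are placeholders:

# Region of the AKS cluster (resource group and cluster name are placeholders)
az aks show --resource-group myResourceGroup --name myAKSCluster --query location -o tsv
# Region of the storage account
az storage account show --name frontarenastorage --query location -o tsv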

Copy file from cron job's pod to local directory in AKS

I have created a cron job which runs every 60 minutes. In the job's container I have mounted an emptyDir volume named detailed-logs, and my container writes a CSV file to the path detailed-logs\logs.csv.
I am trying to copy this file from the pod to my local machine using kubectl cp podname:detailed-logs\logs.csv \k8slogs\logs.csv, but it throws the error:
path "detailed-logs\logs.csv" not found (no such file or directory).
Once the job runs successfully, the pod created by the job goes to the Completed state; could that be the issue?
The file you are referring to is not going to persist once your pod completes running. What you can do is back up the file while the cron job is running. The two solutions I can suggest are to attach a persistent volume to the job pod, or to upload the file somewhere while the job runs.
USE A PERSISTENT VOLUME
Here you can create a PV through a quick ReadWriteOnce PersistentVolumeClaim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Then you can mount it onto the pod using the following:
...
volumeMounts:
  - name: persistent-storage
    mountPath: /detailed-logs
volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: my-pvc
...
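Because the file now lands on the PersistentVolume, it survives the job pod's completion and can be copied from any pod that mounts the same PVC; for example (the pod name inspector-pod is a placeholder):

# Copy the CSV out of a pod that mounts my-pvc at /detailed-logs
kubectl cp inspector-pod:/detailed-logs/logs.csv ./logs.csv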
UPLOAD FILE
The way I do it is to run the job in a container that has the aws-cli installed and then store the file on AWS S3; you can choose another platform:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-sh
data:
  backup.sh: |-
    #!/bin/bash
    aws s3 cp /myText.txt s3://bucketName/
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: s3-backup
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: aws-kubectl
              image: expert360/kubectl-awscli:v1.11.2
              env:
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: s3-creds
                      key: access-key-id
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: s3-creds
                      key: secret-access-key
              command:
                - /bin/sh
                - -c
              args: ["sh /backup.sh"]
              volumeMounts:
                - name: backup-sh
                  mountPath: /backup.sh
                  readOnly: true
                  subPath: backup.sh
          volumes:
            - name: backup-sh
              configMap:
                name: backup-sh
          restartPolicy: Never
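Note that the script is projected from the ConfigMap with subPath: backup.sh, so it appears inside the container as a single file at /backup.sh rather than shadowing a whole directory.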

Mount Azure Files' folder into Kubernetes?

The Azure docs show how to mount an Azure Files storage account to a Kubernetes pod, either as a direct mount or through a persistent volume claim. This mounts the entire share at the mount path. How do I instead mount a folder within the Azure Files share to Kubernetes?
On Azure Files, I have the following:
AzureFiles
|- folder1
   |- file1
|- folder2
   |- file2
When I mount the Azure Files storage account to Kubernetes (to /mnt/azure) I see this:
/mnt
|- azure
   |- folder1
      |- file1
   |- folder2
      |- file2
Instead I'd like to see this when I mount Azure Files' path folder1:
/mnt
|- azure
   |- file1
How do I change my Pod definition to specify this path:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - # ... snip ...
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: aksshare
        readOnly: false
        # TODO: how to specify path in aksshare
Edit
After searching for several days, I figured out how to mount a sub-folder of the Azure file share to the AKS pod. You can set the YAML file like this:
volumes:
  - name: azure
    azureFile:
      secretName: azure-secret
      shareName: share/subfolder
      readOnly: false
Just set the share name together with the directory; take care not to append a trailing /.
For more details, see Naming and Referencing Shares, Directories, Files, and Metadata.
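One caveat that ties back to the first question above: if the sub-directory does not already exist in the share, the CIFS mount can fail with mount error(2): No such file or directory. It can be created ahead of time, for example with the Azure CLI (the storage account name and key here are placeholders):

# Pre-create the sub-directory inside the existing share
az storage directory create --share-name share --name subfolder --account-name mystorageaccount --account-key <account_key>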
I believe you can use subPath. So something like this:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - # ... snip ...
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
          subPath: folder1
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: aksshare
        readOnly: false

Mount Error for Block Storage on Azure kubernetes

I have been trying to mount a file share on a Kubernetes pod hosted on AKS in Azure. So far I have:
1. Successfully created a secret by base64 encoding the name and the key
2. Created a YAML with the correct configuration
3. Applied it using kubectl apply -f azure-file-pod.yaml, which gives me the following error:
Output: mount error: could not resolve address for demo.file.core.windows.net: Unknown error
I have an Azure File Share by the name of demo.
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: azure-files-pod
spec:
  containers:
    - image: microsoft/sample-aks-helloworld
      name: azure
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: demo
        readOnly: false
How can this possibly be resolved?
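The error is a name-resolution failure, and the hostname is derived from the storage account name (not the share name), so a first check is whether demo.file.core.windows.net resolves at all; a minimal probe:

# If this fails as well, either no storage account exists under that name
# or DNS from the node is broken
nslookup demo.file.core.windows.net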
