Non-ASCII filenames on GCP PersistentVolume not displaying correctly - Linux

I am moving our .NET Core (2.1) API from IIS to Linux containers in Kubernetes on GCP, and am having some trouble with files retrieved from the mounted fileserver when the filenames contain non-ASCII characters. Similar to this GitHub issue, but these are standard UTF-8 characters: åäö.
var di = new DirectoryInfo(directory);
if (di.Exists)
{
    var dFiles = di.GetFiles("*.pdf", SearchOption.TopDirectoryOnly);
    Log.Information($"Files found in {di.Name} : {string.Join(",", dFiles.Select(f => f.Name))}");
    //Log.Information($"Length of files found in {di.Name} : {string.Join(",", dFiles.Select(f => f.Length))}");
    files.AddRange(dFiles);
}
bost�der.pdf is returned, but it should be bostäder.pdf.
In all other cases characters are handled correctly, such as when reading from the database; it is only when reading filenames that there is a problem. I get a FileNotFoundException when attempting to read the length of these files, and LastWriteTime returns 1601-01-01.
The file is in GCP Filestore and mounted into the Kubernetes cluster using a PersistentVolume and PersistentVolumeClaim, and from there mounted into the containers.
I have tried altering the current thread's culture when running this method, but with no luck. The same code works fine on the Windows/IIS version, which uses symlinks to point to the fileshare.
If I give the path of the file directly using new FileInfo, rather than DirectoryInfo.GetFiles, then the resulting FileInfo object returns the correctly encoded filename, although the FileInfo.Length property still causes a FileNotFoundException.
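As a quick sanity check, the lossy decode can be confirmed programmatically. This is a diagnostic sketch, not a fix (the mount path matches the deployment below):
using System;
using System.IO;
using System.Text;

class FilenameProbe
{
    static void Main()
    {
        // /mnt/fileserver is the mountPath used in the deployment below
        foreach (var path in Directory.EnumerateFiles("/mnt/fileserver", "*.pdf"))
        {
            var name = Path.GetFileName(path);
            // On Linux, .NET decodes filename bytes as UTF-8. Bytes that are not
            // valid UTF-8 (e.g. Latin-1 0xE4 for 'ä') come back as U+FFFD, which
            // re-encodes as EF-BF-BD below. Once that happens the name no longer
            // round-trips to the on-disk bytes, which would explain the
            // FileNotFoundException from Length.
            Console.WriteLine($"{name} -> {BitConverter.ToString(Encoding.UTF8.GetBytes(name))}");
        }
    }
}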
I have tried multiple Docker images:
mcr.microsoft.com/dotnet/core/aspnet:2.1-bionic
mcr.microsoft.com/dotnet/core/aspnet:2.1-stretch-slim
I have also attempted to set
ENV LC_ALL=sv_SE.UTF-8 \
    LANG=sv_SE.UTF-8
in the Dockerfile, with no luck. Any ideas what else I can try?
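One thing worth verifying: the Debian-based images above ship with no generated locales, so LC_ALL alone points at a locale that does not exist inside the container. A sketch of generating it first (untested against this exact setup):
FROM mcr.microsoft.com/dotnet/core/aspnet:2.1-stretch-slim
# The slim images contain no locale data; install and generate sv_SE.UTF-8
# before pointing LC_ALL/LANG at it.
RUN apt-get update \
    && apt-get install -y --no-install-recommends locales \
    && rm -rf /var/lib/apt/lists/* \
    && sed -i 's/^# *sv_SE.UTF-8/sv_SE.UTF-8/' /etc/locale.gen \
    && locale-gen
ENV LC_ALL=sv_SE.UTF-8 \
    LANG=sv_SE.UTF-8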
Edit:
I have now investigated further, and the problem is not with .NET but with the mounted drive on the pods. Local files are shown correctly using ls, but the files on the mount are not, so it appears to be something to do with the way the drive is mounted.
I have a persistentvolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 1T
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mymount
    server: 1.2.3.4
and a persistentvolumeclaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1T
which is mounted to my pods using
volumeMounts:
  - mountPath: /mnt/fileserver
    name: pvc
and
volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: fileserver-claim
      readOnly: false
in my deployment.yaml
Edit 2:
This seems to be a problem with Cloud Filestore only supporting NFSv3. Is there any way I can read these files correctly over NFSv3?
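Note that NFSv3 has no filename-encoding negotiation; names travel as raw bytes, so client and server simply have to agree on the encoding. If a client-side mount option turns out to help, Kubernetes can pass it through on the PersistentVolume via spec.mountOptions. A sketch (the options shown are illustrative, not a confirmed fix):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 1T
  accessModes:
    - ReadWriteMany
  # mountOptions are handed to the node's mount command verbatim
  mountOptions:
    - nfsvers=3
    - nolock
  nfs:
    path: /mymount
    server: 1.2.3.4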

Related

Azure Kubernetes Service - Increase Windows disk space

I'd like to host Windows containers, which act as build agents, on an Azure Kubernetes Service instance. Unfortunately I can't increase the default 20GB pod disk space, and I need more disk space for running build jobs on the pods.
The pod is deployed using an ADO pipeline by applying YAML which describes the workload.
Attaching to the pod and probing the disk space results in the following:
PS C:\> Get-PSDrive C

Name Used (GB) Free (GB) Provider   Root
---- --------- --------- --------   ----
C         0.31     19.57 FileSystem C:\
Does anybody know how to increase the disk space?
On our on-premises cluster this is possible by adding
--storage-opt 50G
as a parameter to the modified Docker service.
But how does it work on AKS?
Thank you a lot in advance!
We can increase the pod disk size in AKS by provisioning the disk manually through a persistent volume. By default the disk size will be small (4 GiB); for me it was 30 GiB, and I increased it to 50 GiB. To increase the disk size, please follow the steps below.
First, I created a StorageClass for the disk:
vi sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuredisk-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
To deploy the StorageClass, use the command below:
kubectl apply -f sc.yaml
To check whether the StorageClass was created:
kubectl get sc
Next, I created a PersistentVolumeClaim so the disk is provisioned manually:
vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azuredisk-premium-retain
  resources:
    requests:
      storage: 50Gi
In the PVC file I am increasing the storage request to 50 Gi (note that the valid Kubernetes suffix is Gi, not GiB).
To deploy the PVC and check it, use the commands below:
kubectl apply -f pvc.yaml
kubectl get pvc
Then I created a pod that mounts the volume:
vi pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: newpod   # pod name
spec:
  containers:
    - name: newpod
      image: nginx:latest
      volumeMounts:
        - mountPath: "/mnt/azure"   # mounting the volume
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: azure-managed-disk-pvc
To deploy the pod:
kubectl apply -f pod.yaml
kubectl get pods
After deploying the PVC, go to the Azure portal > Disks and search for the PVC name you created; the disk will have been created at 50 Gi. Previously it was 30 Gi; now it has increased to 50 Gi.
NOTE: we cannot decrease the disk size once it has been increased.
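Since the StorageClass above sets allowVolumeExpansion: true, a later resize can also be requested in place instead of recreating anything. A sketch (the 100Gi target is just an example; depending on the Kubernetes version, the pod may need to be restarted before the filesystem grows):
kubectl patch pvc azure-managed-disk-pvc --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
kubectl get pvc azure-managed-disk-pvc   # watch for the new capacity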
Reference:
MS-DOC

How to mount local volume hostPath with AKS?

I am trying to create a Kubernetes pod and mount a volume from a local hostPath. I am using an Azure Kubernetes cluster. The following is my YAML for creating the pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /opt/myfolder
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /Users/kkadam/minikube/myfolder
        # this field is optional
I have a few files under myfolder which I want to use inside the container. The files are present in the local volume but not inside the container.
What could be the issue?
You cannot mount a local path into a container running on AKS. You have to place the files on the specific node where the pod is scheduled.
If the pod and the files are on the same node, you can mount the files as a volume into the container and use them. However, if your pod is scheduled onto another node, you will not be able to access the files inside the container.
Also, if the node is restarted or deleted during auto-scaling, you might lose the data.
Judging by what you said in your comment and your config, especially the path /Users/kkadam/minikube/myfolder, which is typically a macOS path, it seems that you're trying to mount your local volume (probably your Mac) in a pod deployed on AKS.
That's the problem.
In order to make it work, you need to put the files you're trying to mount on the node running your pod (which is in AKS).
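If the files really must come from a node's disk, the pod also has to be pinned to that node, otherwise the hostPath silently resolves on whichever machine the scheduler picks. A sketch (the node name is hypothetical; list yours with kubectl get nodes):
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  nodeSelector:
    kubernetes.io/hostname: aks-nodepool1-12345678-0   # hypothetical node name
  containers:
    - name: nginx
      image: nginx:1.7.9
      volumeMounts:
        - mountPath: /opt/myfolder
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        path: /data/myfolder   # a path that exists on that AKS node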

Accessing Azure File Share from local Kubernetes cluster

OS: Windows 10
Kubernetes version: 1.14.8
Helm version: 3
Docker Desktop version: 2.1.0.5
We are trying to deploy a Kubernetes cluster using a Helm chart that contains a pod connecting to a statically provisioned Azure File Share.
Deploying to an Azure Kubernetes cluster works, but when we try to deploy the cluster locally on docker-desktop, we get the following error when it tries to mount the share:
Unable to mount volumes for pod "": timeout expired waiting
for volumes to attach or mount for pod "". list of unmounted
volumes=[servicecatalog-persistent-storage]. list of unattached
volumes=[interactor-properties servicecatalog-persistent-storage
default-token-9fp7j]
Mounting arguments: -t cifs -o
username=,password=,file_mode=0777,dir_mode=0777,vers=3.0
//.file.core.windows.net/spps
/var/lib/kubelet/pods/44a70ebf-1b26-11ea-ab13-00155d0a4406/volumes/kubernetes.io~azure-file/servicecatalog-spp-pv
Output: mount error(11): Resource temporarily unavailable
Helm charts (removed redundant information):
Deployment:
apiVersion: apps/v1
kind: Deployment
spec:
  spec:
    containers:
      - name: {{ .Release.Name }}-{{ .Chart.Name }}
        volumeMounts:
          - name: servicecatalog-persistent-storage
            mountPath: /data/sppstore
    volumes:
      - name: servicecatalog-persistent-storage
        persistentVolumeClaim:
          claimName: servicecatalog-pv-claim
Persistent Storage / Claims:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: servicecatalog-spp-pv
  labels:
    usage: servicecatalog-spp-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azurefile-secret
    shareName: spps
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: servicecatalog-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      usage: servicecatalog-spp-pv
Secret:
apiVersion: v1
kind: Secret
metadata:
  name: azurefile-secret
type: Opaque
data:
  azurestorageaccountname: <acc name>
  azurestorageaccountkey: <acc key>
We have tried:
- using the Azure File Diagnostics to ensure ports are open and that we are able to connect from our machine (link)
- connecting using Azure Storage Explorer (works)
Microsoft says that connecting to an Azure File Share locally requires SMB 3.0 for security reasons, which Windows 10 supports. Kubernetes, however, seems to use CIFS (which is a dialect of SMB?), and we can't figure out whether that is supported for access to an Azure File Share. Any ideas?
The recommended way to mount an Azure file share on Linux is using SMB 3.0. By default, Azure Files requires encryption in transit, which is only supported by SMB 3.0. Azure Files also supports SMB 2.1, which does not support encryption in transit, but you may not mount Azure file shares with SMB 2.1 from another Azure region or on-premises for security reasons.
https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-linux
So if you are using SMB 2.1, you can only mount the file share from inside the same Azure region, not from a local workstation or from another region.
Since your CIFS mount mentions vers=3.0, I would assume this should work in your case. Check the storage account's network access restrictions, or your own network restrictions (say, port 445), or the other concerns mentioned in the linked article.
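To rule out the port 445 concern quickly, one option is a throwaway pod inside the local cluster, so the probe runs from roughly the same network path the kubelet mounts from. A sketch (substitute your storage account name for <account>):
kubectl run smbtest --rm -it --image=bash --restart=Never -- \
  bash -c 'timeout 5 bash -c "</dev/tcp/<account>.file.core.windows.net/445" \
    && echo "445 reachable" || echo "445 blocked"'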

How to upload a file to kubernetes cluster for my Apps to access it?

Let's say we have an application which accesses a file. This app is a jar which is packaged into an image and pushed to a registry for Kubernetes to run. When we create the pod, we need to configure a volume in it as well. When we specify a volume we give a path, but how do we place the file in that volume from, let's say, our virtual machine?
Please help me understand this with an explanation. Also, should we create storage so that it is accessible from the Kubernetes cluster? Please explain the relevant topics as well.
Note: we are using the Azure CLI.
I think the best approach would be to create a ConfigMap with the data you want your application to use. Then you just need to mount the ConfigMap as a volume in the Pods that need the data (explained here).
You can easily create a ConfigMap from a file like
kubectl create configmap your-configmap-name --from-file=/some/path/to/file
And then mount it in your Pod
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: your-configmap-name
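If the mount worked, the pod's one-shot ls will have printed the file; a quick check (assuming the manifest above is saved as pod.yaml):
kubectl apply -f pod.yaml
kubectl logs dapi-test-pod   # should list the file(s) loaded into the ConfigMap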

Kubernetes on Windows Persistent Volume

Does minikube on Windows support a persistent volume with a hostPath? If so, what is the syntax?
I tried:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kbmongo002
  labels:
    type: local
spec:
  storageClassName: mongostorageclass
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/temp/mongo"
  persistentVolumeReclaimPolicy: Retain
---
This passed validation and created the PV and a PVC claimed it, but nothing was written to my expected location of C:\temp\mongo
I also tried:
hostPath:
  path: "c:/temp/mongo"
persistentVolumeReclaimPolicy: Retain
---
That resulted in:
Error: Error response from daemon: Invalid bind mount spec "c:/temp/mongo:/data/db": invalid mode: /data/db
Error syncing pod
If you use VirtualBox on Windows, only C:\Users is mapped into the VM (as /c/Users), and that is the only host path the Kubernetes system can access. This is a feature of VirtualBox. Minikube uses the VM to simulate the Kubernetes node, and it provides a mount feature as well, though it is not so user-friendly for persistence.
You can try one of the solutions below (see the sketch after this list):
- use folders under /c/Users in your YAML file
- map extra folders into the VirtualBox VM, like C:\Users
- use minikube mount, see host folder mount
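Following the first option, this is the PV from the question rewritten so the hostPath lives under the VirtualBox-mapped tree. A sketch (yourname is a placeholder; the subfolder under /c/Users is up to you):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kbmongo002
  labels:
    type: local
spec:
  storageClassName: mongostorageclass
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # /c/Users is the only host tree the VirtualBox VM maps by default,
    # so this resolves to C:\Users\yourname\mongo on the Windows host
    path: /c/Users/yourname/mongo
  persistentVolumeReclaimPolicy: Retain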
I have tried k8s hostPath on Windows, and it works well.
You should use a drive letter in the pod mount path; see this example: https://github.com/andyzhangx/Demo/blob/master/windows/azuredisk/aspnet-pod-azuredisk.yaml#L14
Because there is a Docker bug on Windows related to mount paths, you need to use a drive letter as the mount path in the pod; see this issue: https://github.com/moby/moby/issues/34729
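The gist of that workaround, paraphrased from the linked demo (a sketch; the volume definition is omitted here):
volumeMounts:
  - name: azure
    mountPath: "D:"   # bare drive letter, not a sub-directory, due to the moby issue above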
