Kubernetes persistent volume resize - gitlab

What would be a good approach to fixing storage issues when your services run out of persistent volume free space?
For example, I have a GitLab service running on Kubernetes, installed with the Helm chart.
I used the default settings, but now I have run out of free space for GitLab.
What would be the ideal approach to fix this issue?
Is there any way I can increase the PV in size?
Should I somehow back up the GitLab data and recreate it with more storage?
Can I somehow back up and restore data from PVs so there is no data loss?
I am open to any suggestions on how to deal with a PersistentVolume that is getting full!
Thank you for your answers,
Bence Pjatacsuk

Is there any way I can increase the PV in size?
There is currently no official way to increase PV size in Kubernetes; actually, I don't think this is Kubernetes' responsibility. Here's the related issue.
But you can increase it manually in two steps:
1. Increase the PV size in the backend storage, e.g. resize the GCE persistent disk.
2. Change the PV size definition in the Kubernetes cluster, e.g. kubectl edit pv <pv_id>.
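For example, assuming a GCE persistent disk backs the PV (the disk name, zone, and sizes below are placeholders), the two steps might look roughly like this:

    # 1. Resize the backing disk in the storage backend (GCE example; names are placeholders)
    gcloud compute disks resize my-gitlab-disk --size=200GB --zone=us-central1-a

    # 2. Update the PV object so the cluster reflects the new size
    kubectl patch pv <pv_id> -p '{"spec":{"capacity":{"storage":"200Gi"}}}'

Depending on the filesystem, you may also need to grow it (e.g. with resize2fs) from a node or pod that has the disk attached.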
As for data backup and restore, it depends on your backend storage. You can back up your PV (e.g. create a snapshot), create a new, larger volume based on it, and then create a new pod with the same definition but bind the larger PV to it.
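A minimal sketch of that snapshot-based path on GCE (again, all names are placeholders):

    # Snapshot the existing disk, then create a larger disk from that snapshot
    gcloud compute disks snapshot my-gitlab-disk --snapshot-names=gitlab-backup --zone=us-central1-a
    gcloud compute disks create my-gitlab-disk-v2 --source-snapshot=gitlab-backup --size=200GB --zone=us-central1-a
    # Then create a new PV/PVC pointing at my-gitlab-disk-v2 and bind it to the pod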

Related

How to get Kubernetes to utilize a mounted disk?

I have an Ubuntu machine that runs a Kubernetes cluster.
I constantly get "disk pressure" issues in various pods in that cluster.
To combat this issue, I've attached a volume/disk to that machine, formatted it, and mounted it in /media/whatever.
Unfortunately, it seems that the Kubernetes cluster is not utilizing the new disk space from the mounted volume.
My question is: how to get the Kubernetes cluster to utilize the new volume?
I don't mean to attach any volumes to individual pods, but to allow Kubernetes to use any available disk space freely.
I am aware that this question is a bit general and stems from a gap in my overall understanding of Kubernetes, but I hope you will still be kind enough to help me.
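One common approach (not from the original thread, so treat it as a hedged sketch) is to move the kubelet's data directory onto the new disk with a bind mount; the paths and service name below are assumptions:

    # Assumes the new disk is mounted at /media/whatever and kubelet's default data dir is /var/lib/kubelet
    sudo systemctl stop kubelet
    sudo rsync -a /var/lib/kubelet/ /media/whatever/kubelet/
    sudo mount --bind /media/whatever/kubelet /var/lib/kubelet
    echo '/media/whatever/kubelet /var/lib/kubelet none bind 0 0' | sudo tee -a /etc/fstab
    sudo systemctl start kubelet

The container runtime's directory (e.g. /var/lib/docker or /var/lib/containerd) is often the bigger consumer of disk and can be relocated the same way.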

Is there a way to retrieve kubernetes container's ephemeral-storage usage details?

I create some pods with containers for which I set an ephemeral-storage request and limit, like this (here 10 GB):
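For reference, such a request/limit is typically declared in the pod spec like this (pod and image names are illustrative placeholders, not the original manifest):

    apiVersion: v1
    kind: Pod
    metadata:
      name: ephemeral-demo          # placeholder name
    spec:
      containers:
      - name: app
        image: nginx                # placeholder image
        resources:
          requests:
            ephemeral-storage: "10Gi"
          limits:
            ephemeral-storage: "10Gi"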
Unfortunately, for some containers, the ephemeral-storage gets completely filled for unknown reasons. I would like to understand which directories/files are responsible for filling it, but I have not found a way to do so.
I tried df -h, but unfortunately it gives stats for the whole node and not only for the particular pod/container.
Is there a way to retrieve the kubernetes container's ephemeral-storage usage details?
Pods use ephemeral local storage for scratch space, caching, and for logs. The kubelet can provide scratch space to Pods using local ephemeral storage to mount emptyDir volumes into containers.
Depending on your Kubernetes platform, you may not be able to easily determine where these files are being written; any filesystem can fill up, but rest assured that disk is being consumed somewhere (or worse, memory, depending on the specific configuration of your emptyDir and/or Kubernetes platform).
Refer to this SO link for more details on how, by default, allocatable ephemeral-storage in a standard Kubernetes environment is sourced from the node filesystem (mounted at /var/lib/kubelet).
Also refer to the Kubernetes documentation on how ephemeral storage can be managed and how ephemeral-storage consumption management works.
I am assuming you're a GCP user; you can get a sense of your ephemeral-storage usage this way:
Menu>Monitoring>Metrics Explorer>
Resource type: kubernetes node & Metric: Ephemeral Storage
Try the commands below to get a Kubernetes pod/container's ephemeral-storage usage details:
Try du -sh / (run inside a container): du -sh gives the space consumed by your container's files. It simply returns the amount of disk space the current directory and everything in it are using as a whole, e.g. 2.4G.
You can also check the size of a specific directory with du -h someDir.
Inspecting container filesystems: you can use /bin/df to monitor ephemeral storage usage on the volumes where ephemeral container data is located, typically /var/lib/kubelet and /var/lib/containers.
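If you have API access to the node, the kubelet summary endpoint also reports per-pod ephemeral-storage usage; a hedged example (the node name is a placeholder, and jq is assumed to be installed):

    # Per-pod ephemeral-storage usage as reported by the kubelet
    kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" \
      | jq '.pods[] | {pod: .podRef.name, usedBytes: .["ephemeral-storage"].usedBytes}'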

Too many connected disks to AKS node

I read that there is a limit to the number of data disks that can be bound to a node in a cluster. Right now I'm using a small node which can only hold up to 4 data disks. If I exceed this amount I get this error: 0/1 nodes are available: 1 node(s) exceed max volume count.
The question I mainly have is how to handle this. I have some apps that just need a small amount of persistent storage in my cluster; however, I can only attach a few data disks. If I bind 4 data disks of 100m I have already reached the max limit.
Could someone advise me on how to handle these scenarios? I can easily scale up the machines, which gives me more power and more disks, but the ratio of disks to server power is completely off at that point.
Best
Pim
You should look at using Azure Files instead of Azure Disks. With Azure Files, you can do ReadWriteMany, hence having a single mount on the VM (node) that allows multiple pods to access the mounted volume.
https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-file
https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
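A minimal sketch of a ReadWriteMany claim, assuming the cluster exposes the built-in azurefile storage class (the claim name and size are placeholders):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data             # placeholder name
    spec:
      accessModes:
      - ReadWriteMany               # Azure Files supports RWX, unlike Azure Disk
      storageClassName: azurefile
      resources:
        requests:
          storage: 100Mi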
4 PVs per node
30 pods per node
Those are the limits on AKS nodes right now.
You can handle it by adding more nodes (and more money), or by finding a provider with different limits.
On one such provider, as an example, the limits are 127 volumes and 110 pods for the same node size.

Increase the base device size of docker container with Kubernetes

I have a Kubernetes cluster set up with 3 worker nodes and a master node, using Docker images to generate pods and altering the running containers. I have been working on one image and have altered it heavily to make the servers within it work. The problem I am facing is:
There is no space left on the device.
On further investigation, I found the Docker container is set to a 10G size limit, which I would now definitely want to change. How can I change that without losing all my changes in the container and without needing to store the changes as a separate image altogether?
Changing the limit without a restart is impossible.
To prevent data loss during the container restart, you can use Volumes and store the data there instead of in the root image.
P.S. It is not possible to mount a Volume dynamically to a container without a restart.
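As a hedged illustration, a pod spec with a PVC mounted at the path the server writes to might look like this (all names and paths are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: server-pod              # placeholder name
    spec:
      containers:
      - name: server
        image: my-server-image      # placeholder image
        volumeMounts:
        - name: data
          mountPath: /srv/data      # placeholder path where the app writes
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: server-data    # placeholder PVC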

Cassandra Snapshot running on kubernetes

I'm using Kubernetes (via minikube) to deploy my Lagom services and my Cassandra DB.
After a lot of work, I succeeded in deploying my service and my DB on Kubernetes.
Now I'm about to manage my data and I need to generate a backup each day.
Is there any solution to generate and restore a snapshot (Backup) for Cassandra running on Kubernetes:
cassandra statefulset image:
gcr.io/google-samples/cassandra:v12
Cassandra node:
svc/cassandra ClusterIP 10.97.86.33 <none> 9042/TCP 1d
Any help, please?
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsBackupRestore.html
That link contains all the information you need. Basically, you use the nodetool snapshot command to create hard links of your SSTables. Then it's up to you to decide what to do with the snapshots.
I would define a new disk in the statefulset and mount it to a folder, e.g. /var/backup/cassandra; the backup disk is network storage. Then I would create a simple script that:
Runs nodetool snapshot
Gets the snapshot id from the command's output
Copies all files in the snapshot folder to /var/backup/cassandra
Deletes the snapshot folder
Now all I have to do is make sure the backups are also stored somewhere else on my network drive for long-term retention.
Disclaimer: I haven't actually done this, so there might be a step missing, but this is the first thing I would try based on the Datastax documentation. A rough sketch of the script is below.
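The sketch assumes the default Cassandra data directory /var/lib/cassandra/data and the /var/backup/cassandra mount from above; the snapshot tag and path handling are illustrative:

    #!/bin/bash
    # Sketch: snapshot Cassandra data and copy it to the mounted backup disk.
    # Paths and tag name are illustrative; adjust to your statefulset layout.
    set -euo pipefail

    TAG="backup-$(date +%Y%m%d)"
    DATA_DIR=/var/lib/cassandra/data
    BACKUP_DIR=/var/backup/cassandra/"$TAG"

    nodetool snapshot -t "$TAG"                      # hard-links current SSTables
    mkdir -p "$BACKUP_DIR"

    # Copy every snapshot directory matching the tag to the backup disk
    find "$DATA_DIR" -type d -path "*/snapshots/$TAG" | while read -r dir; do
      keyspace_table=$(echo "$dir" | awk -F/ '{print $(NF-3)"/"$(NF-2)}')
      mkdir -p "$BACKUP_DIR/$keyspace_table"
      cp -a "$dir"/. "$BACKUP_DIR/$keyspace_table/"
    done

    nodetool clearsnapshot -t "$TAG"                 # remove the local snapshot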
