Running kubectl version gives the following output:
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
I have used kubectl to edit the PersistentVolume from 8Gi to 30Gi as follows:
However, when I exec into the pod and run df -h, I see the following:
I have deleted the pods, but it shows the same thing again. If I cd into /dev, I don't see the disk or a vda1 entry there either. I think I actually want the bitnami/influxdb volume to be 30Gi. Please guide me, and let me know if more info is needed.
This is a community wiki answer posted for better visibility. Feel free to expand it.
Based on the comments provided here, there could be several reasons for this behavior.
According to the Kubernetes documentation, manually changing the PersistentVolume size will not resize the backing volume:
Warning: Directly editing the size of a PersistentVolume can prevent
an automatic resize of that volume. If you edit the capacity of a
PersistentVolume, and then edit the .spec of a matching
PersistentVolumeClaim to make the size of the PersistentVolumeClaim
match the PersistentVolume, then no storage resize happens. The
Kubernetes control plane will see that the desired state of both
resources matches, conclude that the backing volume size has been
manually increased and that no resize is necessary.
It also depends on how Kubernetes is running and whether the allowVolumeExpansion feature is supported. From DigitalOcean:
are you running one of DigitalOcean's managed clusters, or a DIY
cluster running on DigitalOcean infrastructure? In case of the latter,
which version of our CSI driver do you use? (You need v1.2.0 or later
for volume expansion to be supported.)
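In practice, the supported path is to leave the PersistentVolume alone and resize only the PersistentVolumeClaim, provided the StorageClass allows expansion. A minimal sketch of what that could look like (the claim and StorageClass names below are placeholders, not taken from your setup):

```
# Check whether the StorageClass backing the claim allows expansion
kubectl get storageclass
kubectl get storageclass standard -o jsonpath='{.allowVolumeExpansion}'

# Grow the claim itself, not the PV (claim name is a placeholder)
kubectl patch pvc data-influxdb-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'

# Watch the resize conditions; depending on the CSI driver, the filesystem
# may only be expanded once the pod is restarted
kubectl describe pvc data-influxdb-0
```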
Related
I'm using the Bitnami Helm chart for Cassandra in order to deploy it with Terraform. I'm completely new to all of this, and I'm struggling to change one config value, namely commitlog_segment_size_in_mb. I want to set it before I run the terraform commands, but I failed to find any mention of it in the Helm chart itself.
I know I can change it in the cassandra.yaml file after the Terraform deployment, but I would like to keep this value under control, so that another Terraform update will not overwrite the file.
What would be the best approach to change values of Cassandra config?
Can I modify it in Terraform if it's not in the Helm Chart?
Can I export parts of the configuration to a different file, so that I know my next Terraform installations will not overwrite them?
This isn't a direct answer to your question, but in case you weren't aware of it already, K8ssandra.io is a ready-made platform for running Apache Cassandra in Kubernetes. It uses Helm charts to deploy Cassandra with the DataStax Cassandra Operator (cass-operator) under the hood, with all the tools built in:
Reaper for automated repairs
Medusa for backups and restores
Metrics Collector for monitoring with Prometheus + Grafana
Traefik templates for k8s cluster ingress
Stargate.io - a data gateway for connecting to Cassandra using REST API, GraphQL API and JSON/Doc API
K8ssandra and all components are fully open-source and free to use, improve and enjoy. Cheers!
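Coming back to the original question about commitlog_segment_size_in_mb: whichever chart you end up with, it is worth checking first whether it exposes a values hook for overriding cassandra.yaml, because any such key can then be set from Terraform's helm_release values instead of patching the file after deployment. A quick way to look (assuming the chart is bitnami/cassandra and the repo is already added locally):

```
# List the chart's default values and look for a config override hook;
# the key name varies between charts, so this grep is only a starting point
helm show values bitnami/cassandra | grep -n -i -B1 -A4 'config'
```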
I am installing hawkbit using Helm charts.
But after installation, all the pods are stuck in the Pending state, and they show an issue related to the PVC.
I created a PVC and a PV, but I still get the same output:
There should be no need to create a PVC and/or PV manually. Which version of the RabbitMQ chart are you using? I'm asking because there was a recent version upgrade.
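If you want to dig into why the pods stay Pending in the meantime, the usual first steps look something like this (the namespace and claim name are placeholders for whatever the hawkbit chart created):

```
# List the claims created by the chart and check their status
kubectl get pvc -n hawkbit

# The Events section usually explains why a claim is stuck in Pending
kubectl describe pvc data-hawkbit-rabbitmq-0 -n hawkbit

# A missing default StorageClass is a common cause on bare clusters
kubectl get storageclass
```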
I'm using the Helm Chart to deploy Spark Operator to GKE. Then I define a SparkApplication specification in a YAML file. But after reading the User Guide I still don't understand:
Where should I store SparkApplication YAML files: on the Kubernetes cluster or in Google Storage?
Is it ok/possible to deploy them along with the Spark Operator Helm chart to the Spark Master container?
Is it a good approach to upload the SparkApplication configurations to Google Storage and then run kubectl apply -f <YAML GS file path>?
What are the best practices for storing SparkApplication configurations on a Kubernetes cluster or in GS that I may be missing?
To address your questions:
There are a lot of possibilities for storing your YAML files. You can store them locally on your PC or laptop, or you can store them in the cloud. Going further on that topic, syncing your YAML files to a version control system (for example Git) would be one of the better options, because you will have a full history of the changes, with the ability to check what you changed and roll back if something fails. The main thing is that kubectl will need access to these files.
There is no such thing as a master container in Kubernetes. There is a master node. A master node is a machine which controls and manages a set of worker nodes (the workload runtime).
Please check the official documentation about Kubernetes components.
You can put your YAML files in a Google Storage bucket. But you would not be able to run a command like kubectl apply -f FILE against it directly: kubectl will not be able to properly interpret a file location like gs://NAME_OF_THE_BUCKET/magical-deployment.yaml.
One way to run kubectl apply -f FILE_NAME.yaml would be to have the file stored locally and synced to external storage.
You can access the data inside a bucket through gsutil. You could try to tinker with gsutil cat gs://NAME_OF_THE_BUCKET/magical-deployment.yaml and pipe it into kubectl, but I would not recommend that approach.
Please refer to the gsutil tool documentation in this case and be aware of the following:
The gsutil cat command does not compute a checksum of the downloaded data. Therefore, we recommend that users either perform their own validation of the output of gsutil cat or use gsutil cp or rsync (both of which perform integrity checking automatically).
-- https://cloud.google.com/storage/docs/gsutil/commands/cat
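For completeness, the two variants would look roughly like this, using the same bucket path as above:

```
# Not recommended: stream the manifest straight from the bucket into kubectl
gsutil cat gs://NAME_OF_THE_BUCKET/magical-deployment.yaml | kubectl apply -f -

# Safer: copy it locally first (gsutil cp performs integrity checking), then apply
gsutil cp gs://NAME_OF_THE_BUCKET/magical-deployment.yaml .
kubectl apply -f magical-deployment.yaml
```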
Let me know if you have any questions about this.
I'm using Kubernetes (via minikube) to deploy my Lagom services and my Cassandra DB.
After a lot of work, I succeeded in deploying my service and my DB on Kubernetes.
Now I need to manage my data and generate a backup each day.
Is there any solution for generating and restoring a snapshot (backup) of Cassandra running on Kubernetes?
cassandra statefulset image:
gcr.io/google-samples/cassandra:v12
Cassandra node:
svc/cassandra ClusterIP 10.97.86.33 <none> 9042/TCP 1d
Any help would be appreciated.
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsBackupRestore.html
That link contains all the information you need. Basically, you use the nodetool snapshot command to create hard links of your SSTables. Then it's up to you to decide what to do with the snapshots.
I would define a new disk in the StatefulSet and mount it to a folder, e.g. /var/backup/cassandra. The backup disk should be network storage. Then I would create a simple script (a rough sketch follows at the end of this answer) that:
Runs nodetool snapshot
Gets the snapshot id from the output of the command
Copies all files in the snapshot folder to /var/backup/cassandra
Deletes the snapshot folder
Now all I have to do is make sure the backups on my network drive are also stored somewhere else for long-term retention.
Disclaimer: I haven't actually done this, so there might be a step missing, but this is the first thing I would try based on the DataStax documentation.
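Fleshed out, the script could look roughly like this. It is an untested sketch: the data directory, backup mount and tag naming are assumptions, and it sets an explicit snapshot tag with -t instead of parsing the id from the command output.

```
#!/bin/sh
set -e

TAG="backup-$(date +%Y%m%d)"
DATA_DIR="/var/lib/cassandra/data"   # assumed Cassandra data directory
BACKUP_DIR="/var/backup/cassandra"   # network-backed mount from the statefulset

# 1. Take the snapshot (hard links of the current SSTables)
nodetool snapshot -t "$TAG"

# 2. Copy each table's snapshot folder to the backup disk,
#    preserving the keyspace/table directory structure
find "$DATA_DIR" -type d -path "*/snapshots/$TAG" | while read -r snap; do
  dest="$BACKUP_DIR/$TAG/${snap#$DATA_DIR/}"
  mkdir -p "$dest"
  cp -r "$snap/." "$dest/"
done

# 3. Remove the snapshot from the data disk
nodetool clearsnapshot -t "$TAG"
```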
What would be a good approach to fixing storage issues when your services run out of persistent volume free space?
For example, I have a GitLab service running on Kubernetes, installed with a Helm chart.
I used the default settings, but now I have run out of free space for GitLab.
What would be the ideal approach to fix this issue?
Is there any way I can increase the PV in size?
Should I somehow back up the GitLab data and recreate it with more storage?
Can I somehow back up and restore data from PVs so there is no data loss?
I am open to any suggestions about how to deal with the issue of a PersistentVolume getting full!
Thank you for your answers,
Bence Pjatacsuk
Is there any way I can increase the PV in size?
There is no official way to increase PV size in Kubernetes for now; actually, I don't think this is Kubernetes' responsibility. Here's the related issue.
But you can increase it manually in two steps:
Increase the PV size in the backend storage, e.g. resize the GCE PD
Change that PV's size definition in the Kubernetes cluster, e.g. kubectl edit pv <pv_id>
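Roughly, for a GCE persistent disk those two steps could look like this (the disk name, zone and target size are placeholders):

```
# Step 1: grow the disk in the backend storage
gcloud compute disks resize my-pv-disk --size=100GB --zone=us-central1-a

# Step 2: update the PV object so the cluster's definition matches the new size
kubectl patch pv <pv_id> -p '{"spec":{"capacity":{"storage":"100Gi"}}}'
```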
As for data backup and restore, it depends on your backend storage. You can back up your PV (e.g. create a snapshot) -> create a new one based on it -> increase the size -> create a new pod with the same definition but bind the larger PV to it.