Hawkbit: all pods are showing Pending - eclipse-hawkbit

I am installing hawkBit using the Helm charts,
but after installation all pods are stuck in the Pending state, and the events point to an issue with the PVCs.
I created the PVC and PV manually, but the result is the same:

There should be no need to create the PVC and/or PV manually. Which version of the RabbitMQ chart are you using? I'm asking because there was a recent version upgrade.
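To narrow down why the pods stay Pending, it usually helps to look at the pod events and the PVC binding status before creating anything by hand. A minimal sketch (the hawkbit namespace and the placeholder resource names are assumptions, adjust to your install):

    # show why the scheduler keeps the pods Pending
    kubectl get pods -n hawkbit
    kubectl describe pod <pending-pod-name> -n hawkbit

    # check whether the PVCs created by the chart are Bound or still Pending
    kubectl get pvc -n hawkbit
    kubectl describe pvc <pvc-name> -n hawkbit

    # verify that the cluster has a default StorageClass to satisfy the claims
    kubectl get storageclass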

Related

Upgrade virtual-node-aci-linux in Azure Kubernetes Cluster

Does anyone have any links or know how to upgrade the virtual-node-aci-linux node on Azure?
I am currently on version v1.19.10-vk-azure-aci-v1.4.1, but my other node pools are now on v1.22.11 after upgrading Kubernetes.
I am getting some odd behaviour since the upgrade: it seems I now have to specify a single instance in my VMSS for the virtual-node-aci-linux node to be Ready. I don't remember having to do this before.
    NAME                              STATUS   ROLES   AGE    VERSION
    aks-control-13294507-vmss000006   Ready    agent   86s    v1.22.11
    virtual-node-aci-linux            Ready    agent   164d   v1.19.10-vk-azure-aci-v1.4.1
Also, I am fairly sure that previously only my virtual-node-aci-linux node was visible in the node list.
Any help would be appreciated.
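For what it's worth, a couple of read-only checks can help compare the versions involved here. A sketch; the resource group, cluster name and the assumption that the ACI connector runs as a pod in kube-system are mine, not from the question:

    # list node versions, including the virtual node
    kubectl get nodes -o wide

    # the virtual node's version is surfaced by the ACI connector add-on,
    # which (assumption) runs as a pod in the kube-system namespace
    kubectl get pods -n kube-system | grep -i aci

    # inspect the add-on profiles configured on the cluster
    az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles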

How to patch GKE Managed Instance Groups (Node Pools) for package security updates?

I have a GKE cluster running multiple nodes across two zones. My goal is to have a job scheduled to run once a week that runs sudo apt-get upgrade to update the system packages. Doing some research, I found that GCP provides a tool called "OS patch management" that does exactly that. I tried to use it, but the Patch Job execution raised an error:
    Failure reason: Instance is part of a Managed Instance Group.
I also noticed that during the creation of the GKE node pool there is an option for enabling "Auto upgrade", but according to its description it will only upgrade the Kubernetes version.
According to the blog post Exploring container security: the shared responsibility model in GKE:
For GKE, at a high level, we are responsible for protecting:
The nodes’ operating system, such as Container-Optimized OS (COS) or Ubuntu. GKE promptly makes any patches to these images available. If you have auto-upgrade enabled, these are automatically deployed. This is the base layer of your container—it’s not the same as the operating system running in your containers.
Conversely, you are responsible for protecting:
The nodes that run your workloads. You are responsible for any extra software installed on the nodes, or configuration changes made to the default. You are also responsible for keeping your nodes updated. We provide hardened VM images and configurations by default, manage the containers that are necessary to run GKE, and provide patches for your OS—you’re just responsible for upgrading. If you use node auto-upgrade, it moves the responsibility of upgrading these nodes back to us.
The node auto-upgrade feature DOES patch the OS of your nodes; it does not just upgrade the Kubernetes version.
OS Patch Management only works for GCE VMs, not for GKE.
You should refrain from doing OS-level upgrades in GKE yourself; they could cause unexpected behavior (for example, a package gets upgraded and changes something that breaks the GKE configuration).
You should let GKE auto-upgrade the OS and Kubernetes. Auto-upgrade will upgrade the OS, as GKE releases are intertwined with the OS releases.
One easy option is to sign your clusters up for release channels; this way they get upgraded as often as you want (depending on the channel) and your OS will be patched regularly.
You can also follow the GKE hardening guide, which provides steps to make sure your GKE clusters are as secure as possible.
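As an illustration, enrolling a cluster in a release channel and making sure auto-upgrade is on for a node pool can be done from the CLI (a sketch; cluster, pool and zone names are placeholders):

    # enroll the cluster in a release channel (rapid, regular or stable)
    gcloud container clusters update my-cluster --zone us-central1-a --release-channel regular

    # ensure auto-upgrade (and auto-repair) is enabled on an existing node pool
    gcloud container node-pools update my-pool --cluster my-cluster --zone us-central1-a \
        --enable-autoupgrade --enable-autorepair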

How to modify Cassandra config values when using helm chart in Terraform

I'm using the Bitnami Helm chart for Cassandra in order to deploy it with Terraform. I'm fairly new to all of this, and I'm struggling to change one config value, namely commitlog_segment_size_in_mb. I want to set it before I run the Terraform commands, but I failed to find any mention of it in the Helm chart itself.
I know I can change it in the cassandra.yaml file after the Terraform deployment, but I would like this value to be managed declaratively, so that another Terraform update will not overwrite the file.
What would be the best approach to change values of Cassandra config?
Can I modify it in Terraform if it's not in the Helm Chart?
Can I export parts of the configuration to a different file, so that I know my next Terraform installations will not overwrite them?
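Before reaching for workarounds, it can be worth checking whether the chart already exposes the setting through its values; if it does, the same key/value pair you would pass to helm on the CLI maps to a set block (or a values entry) in Terraform's helm_release resource. A sketch of the inspection step, assuming the usual Bitnami repo alias and chart name:

    # pull the chart's default values and look for commitlog-related settings
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm show values bitnami/cassandra | grep -i -n commitlog

    # if the chart exposes a matching value, it can be set at install time, e.g.:
    #   helm install my-cassandra bitnami/cassandra --set <path.to.value>=64
    # (<path.to.value> is a placeholder; check the chart's values.yaml for the real key)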
This isn't a direct answer to your question, but in case you weren't aware of it already, K8ssandra.io is a ready-made platform for running Apache Cassandra in Kubernetes. It uses Helm charts to deploy Cassandra with the DataStax Cassandra Operator (cass-operator) under the hood, with all the tools built in:
Reaper for automated repairs
Medusa for backups and restores
Metrics Collector for monitoring with Prometheus + Grafana
Traefik templates for k8s cluster ingress
Stargate.io - a data gateway for connecting to Cassandra using REST API, GraphQL API and JSON/Doc API
K8ssandra and all components are fully open-source and free to use, improve and enjoy. Cheers!

How to get root access to a pod in OpenShift 4.0

We have OpenShift v4.0 deployed and running. We are using the Open Data Hub framework within OpenShift, where we have JupyterHub along with Spark.
The goal is to read a bunch of CSV files with Spark and load them into MySQL. The error I was getting is described in this thread: How to set up JDBC driver for MySQL in Jupyter notebook for pyspark?.
One of the solutions is to copy the JAR file to the Spark master node, but I don't have access to the pod as the root user.
How can I get root access within a pod in OpenShift?
@roar S, your answer is correct; however, it is preferable to create your own SCC identical to the "anyuid" SCC (call it "my-anyuid") and link the new SCC to the service account.
(Also, your link points to OCP v3.2, whereas the question is about OCP v4.x.)
We had a bad experience with this in the past: the upgrade from OCP v4.2 to v4.3 failed because we did what you proposed. In fact, "add-scc-to-user" modifies the target SCC, and the upgrade process didn't like it.
To create an SCC similar to anyuid, just extract the anyuid manifest (oc get scc anyuid -o yaml), save it, remove all linked service accounts from the manifest, change the name, and create the new one.
https://docs.okd.io/latest/authentication/managing-security-context-constraints.html
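A minimal sketch of that procedure (the SCC name, service account and namespace are placeholders):

    # export the existing anyuid SCC as a starting point
    oc get scc anyuid -o yaml > my-anyuid.yaml

    # edit my-anyuid.yaml: change metadata.name to "my-anyuid" and
    # remove the users:/groups: entries so no SA is linked yet
    oc create -f my-anyuid.yaml

    # link the new SCC to the service account used by the pod
    oc adm policy add-scc-to-user my-anyuid -z my-serviceaccount -n my-namespace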

Azure Kubernetes Service node pool upgrades & patches

I have some confusion about AKS node pool upgrades and patching. Could you please clarify the following?
I have one AKS node pool with 4 nodes, and I want to upgrade the Kubernetes version on only two nodes of the pool. Is that possible?
If it is possible to upgrade only two nodes, how can we upgrade the remaining two nodes later? And how can we find out which two nodes are on the old Kubernetes version instead of the latest one?
During the upgrade process, will it create new nodes with the latest Kubernetes version and then delete the old nodes in the node pool?
Azure automatically applies patches to nodes, but will it create new nodes with the new patches and delete the old ones?
1. According to the docs, you can upgrade a specific node pool.
So the approach with an additional node pool mentioned by 4c74356b41 will work (see the sketch after the list of upgrade actions below).
Additional info:
Node upgrades
There is an additional process in AKS that lets you upgrade a cluster. An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates.
An AKS upgrade performs the following actions:
A new node is deployed with the latest security updates and Kubernetes version applied.
An old node is cordoned and drained.
Pods are scheduled on the new node.
The old node is deleted.
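A sketch of upgrading a single node pool, as referenced above (resource group, cluster, pool and version are placeholders):

    # check the current Kubernetes version of every node
    kubectl get nodes

    # upgrade only the selected node pool to the target version
    az aks nodepool upgrade \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name mynodepool \
        --kubernetes-version 1.24.9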
2. By default, AKS uses one additional node to configure upgrades.
You can control this process by increasing the --max-surge parameter:
To speed up the node image upgrade process, you can upgrade your node images using a customizable node surge value.
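For example, the surge value can be changed per node pool (names and the percentage here are placeholders):

    # allow up to 33% extra nodes to be created during an upgrade of this pool
    az aks nodepool update \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name mynodepool \
        --max-surge 33%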
3. Security and kernel updates to Linux nodes:
In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu image, with the OS configured to automatically check for updates every night. If security or kernel updates are available, they are automatically downloaded and installed.
Some security updates, such as kernel updates, require a node reboot to finalize the process. A Linux node that requires a reboot creates a file named /var/run/reboot-required. This reboot process doesn't happen automatically.
This tutorial summarizes the process of Cluster Maintenance and Other Tasks.
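To see whether a Linux node is waiting for such a reboot, one option is a debug pod on the node (a sketch; requires kubectl debug support, and the node name is a placeholder):

    # start an ephemeral debug pod on the node; the host filesystem is mounted at /host
    # if the file does not exist, no reboot is currently pending
    kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=busybox -- \
        ls -l /host/var/run/reboot-required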
No; create another pool with 2 nodes and test your application there, or create another cluster (a sketch of adding such a pool follows below). You can find the node version with kubectl get nodes.
It gradually updates the nodes one by one (by default); you can change this behavior. Spot instances cannot be upgraded.
Yes, the latest patch version image will be used.
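A sketch of adding such a test pool (names and version are placeholders):

    # add a second node pool with 2 nodes on the newer Kubernetes version
    az aks nodepool add \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name testpool \
        --node-count 2 \
        --kubernetes-version 1.24.9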
