How to have multiple Kubernetes configs and change quickly between them - Azure

I am working with multiple Kubernetes clusters on Azure, so I need to switch quickly from one cluster to another without keeping several files in C:\Users\username\.kube, because right now I have to rename or replace the file whenever I want to switch to another cluster.

I suggest that you use the following tools and tricks:
Use asdf to manage multiple kubectl versions
Set the KUBECONFIG env var to switch between multiple kubeconfig files (a short sketch follows below)
Use kube-ps1 to keep track of your current context/namespace
Use kubectx and kubens to switch quickly between clusters/namespaces
Use aliases to combine them all together
Take a look at this article, it explains how to accomplish this: Using different kubectl versions with multiple Kubernetes clusters
I also recommend this read: Mastering the KUBECONFIG file
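As a quick sketch of the KUBECONFIG tip, in PowerShell (the per-environment file names config-dev, config-qa, and config-prod are hypothetical):
# Point kubectl at several kubeconfig files at once (';' is the list separator on Windows, ':' on Linux/macOS)
$Env:KUBECONFIG = "$HOME\.kube\config-dev;$HOME\.kube\config-qa;$HOME\.kube\config-prod"
# Optionally merge them into a single flattened file
kubectl config view --flatten > "$HOME\.kube\merged-config"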

I recommend you check out this tool: kubectxwin
This is the Windows version of the kubectx tool which is the go-to for many to quickly change between clusters and namespaces within clusters.
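Assuming it behaves like the original kubectx/kubens, switching is then a one-liner (the names below are the placeholders used in the next answer):
# switch to another cluster context
kubectx some_cluster_name_02
# switch to another namespace in the current cluster
kubens kube-system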

You need to have all your Kubernetes config files at hand.
1.- Create a config file in the path C:\Users\username\.kube
2.- Gather the data from every config file. For instance, with 3 files, one per environment (dev, qa, prod), merge them into one.
Your merged file should look like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: some_authority_01
    server: some_server_url_01
  name: some_cluster_name_01
- cluster:
    certificate-authority-data: some_authority_02
    server: some_server_url_02
  name: some_cluster_name_02
- cluster:
    certificate-authority-data: some_authority_03
    server: some_server_url_03
  name: some_cluster_name_03
contexts:
- context:
    cluster: some_cluster_name_01
    user: some_user_01
  name: some_cluster_name_01
- context:
    cluster: some_cluster_name_02
    user: some_user_02
  name: some_cluster_name_02
- context:
    cluster: some_cluster_name_03
    user: some_user_03
  name: some_cluster_name_03
current-context: some_cluster_name_01
kind: Config
preferences: {}
users:
- name: some_user_01
  user:
    client-certificate-data: some_certificate_01
    client-key-data: some_key_01
- name: some_user_02
  user:
    client-certificate-data: some_certificate_02
    client-key-data: some_key_02
- name: some_user_03
  user:
    client-certificate-data: some_certificate_03
    client-key-data: some_key_03
Note: the value of current-context may vary; it does not have to be the first cluster.
Adding the Shortcuts
3.- Add shortcuts on Windows 10 for changing the Kubernetes context quickly
3.1.- Create a file called Microsoft.PowerShell_profile.ps1 in the path C:\Users\username\Documents\WindowsPowerShell
3.2.- Copy the following into the newly created file:
function See-Contexts { kubectl config get-contexts }
Set-Alias -Name seec -Value See-Contexts
function change-context-01 { kubectl config use-context some_cluster_name_01 }
Set-Alias -Name ctx01 -Value change-context-01
function change-context-02 { kubectl config use-context some_cluster_name_02 }
Set-Alias -Name ctx02 -Value change-context-02
function change-context-03 { kubectl config use-context some_cluster_name_03 }
Set-Alias -Name ctx03 -Value change-context-03
3.3.- Search for PowerShell in the Windows search bar, choose the option Run ISE as Administrator, then open the file Microsoft.PowerShell_profile.ps1 and run it.
With this solution you can easily switch between Kubernetes clusters using a shortcut. For example, if you want to switch to the cluster some_cluster_name_01, you only need to type ctx01. This is useful when you have multiple Kubernetes clusters.
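To confirm a switch worked, you can check the active context with the standard kubectl commands:
# show all contexts; the active one is marked with '*'
kubectl config get-contexts
# print only the name of the active context
kubectl config current-context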

Related

Installing a custom grafana datasource through helm / terraform

I would like to install the alertmanager datasource (https://grafana.com/grafana/plugins/camptocamp-prometheus-alertmanager-datasource/) to my kube-prometheus-stack installation which is being built using terraform and the helm provider. I cannot work out how to get the plugin files to the node running grafana though.
Using a modified values.yaml and feeding it to helm with -f values.yaml (please ignore the values):
additionalDataSources:
  - name: Alertmanager
    editable: false
    type: camptocamp-prometheus-alertmanager-datasource
    url: http://localhost:9093
    version: 1
    access: default
    # optionally
    basicAuth: false
    basicAuthUser:
    basicAuthPassword:
I can see the datasource in grafana but the plugin files do not exist.
(Screenshot: Alertmanager is visible in the list of datasources.)
However, clicking on the datasource I see
Plugin not found, no installed plugin with that ID
Please note that the grafana pod seems to require a restart to pick up datasource changes as well, which I think should be fixed at a higher level.
It's actually quite simple to get the files there and I cannot believe I overlooked such a simple solution. Posting this here in the hope others find it useful.
In the kube-prometheus-stack, values.yaml file, just override the grafana section as follows:
grafana:
  # ...
  plugins:
    - camptocamp-prometheus-alertmanager-datasource
    - grafana-googlesheets-datasource
    - doitintl-bigquery-datasource
    - redis-datasource
    - xginn8-pagerduty-datasource
    - marcusolsson-json-datasource
    - grafana-kubernetes-app
    - yesoreyeram-boomtable-panel
    - savantly-heatmap-panel
    - bessler-pictureit-panel
    - grafana-polystat-panel
    - dalvany-image-panel
    - michaeldmoore-multistat-panel
  additionalDataSources:
    - name: Alertmanager
      editable: false
      type: camptocamp-prometheus-alertmanager-datasource
      url: http://prometheus-kube-prometheus-alertmanager.monitoring:9093
      version: 1
      access: default
      # optionally
      basicAuth: false
      basicAuthUser:
      basicAuthPassword:
where the name/type of the plugin can be found in the installation instructions on the Grafana Plugins page.
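If you apply the chart from the command line rather than through the Terraform helm provider, re-deploying with the overridden values could look like this (the release name and repo alias are assumptions, adjust to your setup):
# assumed release name 'prometheus' and repo alias 'prometheus-community'
$ helm upgrade --install prometheus prometheus-community/kube-prometheus-stack -f values.yaml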
I made some progress by discovering I could get onto the pod running grafana using:
kubectl exec -it --container grafana prometheus-grafana-5d844b67c6-5p46b -- /bin/sh
The one listed in kubectl get pods was the sidecar.
Then I could run:
kubectl exec -it --container grafana prometheus-grafana-5d844b67c6-5p46b -- grafana-cli plugins install camptocamp-prometheus-alertmanager-datasource
which installed the required files. After deleting and recreating the pod, there is progress.
Keen to hear any comments on the approach or better ideas!

Dapr - vaultTokenMountPath Issue

I am trying to run Dapr secret management using Vault in a k8s environment.
https://github.com/dapr/quickstarts/tree/master/secretstore
I applied the following component YAML for Vault:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: vault
spec:
  type: secretstores.hashicorp.vault
  version: v1
  metadata:
  - name: vaultAddr
    value: vault:8270 # Optional. Default: "https://127.0.0.1:8200"
  - name: skipVerify # Optional. Default: false
    value: true
  - name: vaultTokenMountPath # Required. Path to token file.
    value: root/tmp/
The token file is created under the root/tmp path, and when I try to run the service I get a Permission denied error (even though I have given all the read/write permissions). I tried applying permissions to the file but still cannot access it. Can anyone please provide a solution?
Your YAML did not format well, but it looks like your value for vaultTokenMountPath is incomplete. It needs to point to the file, not just the folder root/tmp/. I created a file called vault.txt and copied my root token into it, so the path in your case would be root/tmp/vault.txt.
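For reference, a sketch of what that metadata entry could look like with the fix applied (the vault.txt file name is just the example from above):
  - name: vaultTokenMountPath # Required. Point to the token file itself, not the folder.
    value: root/tmp/vault.txt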
I was able to make it work in WSL2 by pointing to a file (/tmp/token in my case).
I was unable to make it work in Kubernetes, as I did not find any way to inject the file into the Dapr sidecar; I opened an issue on GitHub for this: https://github.com/dapr/components-contrib/issues/794

Azure Kubernetes - No connection to Server

When I execute the following PowerShell command:
.\kubectl get nodes
I get no nodes in response. I noticed that the config file from kubectl is empty too:
apiVersion: v1
clusters:
- cluster:
    server: ""
  name: cl-kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
When I enter the server address in the config file, I get the message that the connection was refused. I suspect this is due to missing certificates. During another installation the following information was (apparently) created automatically, and it is now missing:
certificate-authority-data,
contexts - cluster,
contexts - user,
current context,
users - name,
client-certificate-data,
client-key-data,
token,
Could that be it? If so, where do I get this information?
Many thanks for the help
You need to use the Azure CLI first to get the credentials. Run
az aks get-credentials
https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials
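For example (the resource group and cluster names are placeholders):
# fetches the cluster credentials and merges them into your kubeconfig
az aks get-credentials --resource-group <my-resource-group> --name <my-aks-cluster>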

How can I remove or ignore unwanted .snapshot in mounted volume?

I am running a kubernetes cluster with NFS NAStorage, and when I mount volumes they get a .snapshot directory created at the mountpoint. This causes problems for example when using Helm Charts, as these don't expect an unknown Read Only directory in certain paths (e.g. chown ... <dir> can fail, crashing the container).
When installing the Graylog Helm Chart, I noticed the initContainer for the graylog pod crashing due to chown: ... Read-only file system after running the following chown line:
chown -R 1100:1100 /usr/share/graylog/data/
where the following volume is mounted:
...
  volumeMounts:
    - mountPath: /usr/share/graylog/data/journal
      name: journal
...
I tried working around this by modifying the command to fail "quietly" by making it run : upon failure:
chown -fR 1100:1100 /usr/share/graylog/data/ || :
This made the initContainer succeed, but now the main container crashes instead, this time due to the mere presence of the .snapshot dir.
...
kafka.common.KafkaException: Found directory /usr/share/graylog/data/journal/.snapshot, '.snapshot' is not in the form of topic-partition
If a directory does not contain Kafka topic data it should not exist in Kafka's log directory
...
I have tried modifying the mount point of the volume, too, moving it up one level, but this causes new issues:
...
  volumeMounts:
    - mountPath: /usr/share/graylog/data
      name: data-journal
...
com.github.joschi.jadconfig.ValidationException: Parent path /usr/share/graylog/data/journal for Node ID file at /usr/share/graylog/data/journal/node-id is not a directory
I would have expected there to be some way of disabling the creation of the .snapshot directory, ideally a way to unmount it or never mount it in the first place. That, or some way to have the container properly ignore the directory entirely, so it does not interfere with the processes in the container, since it seems its very presence can seriously disrupt them. However, I have yet to find anything of the sort, and I can't seem to find anyone having had a similar issue (the introduction of Volume Snapshots in Kubernetes has not made the searching easier, to say the least).
Edit 1
I tried (semi successfully, I get the Parent path ... is not a directory error above) to implement subPath: journal, thus circumventing the .snapshot directory (or so I believe), but this still means potentially editing every chart that is used in my cluster. Hopefully an alternative at a higher level can be found.
  volumeMounts:
    - mountPath: /usr/share/graylog/data/journal
      name: data-journal
      subPath: journal
Edit 2
I am running a bare-metal cluster, with MetalLB and Nginx as loadbalancer+ingress controller.
The storage solution is provided by a third party provider, and it is from their backup solution that the .snapshot directory is made.
My imagined workaround
Since this will mainly be a problem when using Helm Charts or other deployments where volume mounts will be more or less out of our control, I will look into applying a "kustomization" that adds a single line to each volumeMount, adding
...
subPath: mount
or something like that. By doing that, I should be separating the actual mount point in the volume from the directory that actually gets mounted in the container by one level, keeping the .snapshot directory hidden in the abstract volume object. I will post my findings and the potential kustomization that may come of it, in case anyone else runs into a similar problem.
If someone thinks of a more streamlined solution, it is still very welcome - I'm sure it is possible to improve upon this one.
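As a rough illustration of that idea, the extra line could be injected with a kustomize JSON patch instead of editing each chart by hand. This is an untested sketch; the StatefulSet name and the volumeMount index are assumptions for illustration, and it assumes a reasonably recent kustomize/kubectl:
# kustomization.yaml - sketch only; target name and indices are assumptions
resources:
  - graylog-template.yaml
patches:
  - target:
      kind: StatefulSet
      name: graylog
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts/0/subPath
        value: mount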
We finally got this fixed by the storage service provider, after they figured out which configuration needed to be applied. If anyone has run into the same problem and needs to know which configuration, please reach out and I will ask our service provider.
The workaround that worked before we got the configuration fixed was as follows:
(Including --namespace is optional)
Install mongodb-replicaset and elasticsearch (v 6.8.1)
$ helm install --name mongodb-replicaset --namespace graylog stable/mongodb-replicaset
# We add the elastic repo since the 'stable' repo will be deprecated further on
$ helm repo add elastic https://helm.elastic.co
# We run elasticsearch version 6.8.1 since Graylog v3 currently is incompatible with later versions.
$ helm install elastic/elasticsearch --name elasticsearch --namespace graylog --set imageTag=6.8.1
# Wait for deployments to complete, then you can test to see all went well
$ helm test mongodb-replicaset
$ helm test elasticsearch
Extract the Graylog deployment template
$ helm fetch --untar stable/graylog
$ helm dependency update graylog
$ helm template graylog -n graylog -f graylog-values.yaml > graylog-template.yaml
# graylog-values.yaml
tags:
  install-mongodb: false
  install-elasticsearch: false
graylog:
  mongodb:
    uri: "mongodb://mongodb-replicaset-0.mongodb-replicaset.graylog.svc.cluster.local:27017/graylog?replicaSet=rs0"
  elasticsearch:
    hosts: "http://elasticsearch-client.graylog.svc.cluster.local:9200"
  # + any further values
Add namespace: graylog to all objects in graylog-template.yaml
Add subPath: mount to all volumeMounts where a persistent volume is used (in this case name: journal) in graylog-template.yaml
...
  volumeMounts:
    - mountPath: /usr/share/graylog/data/journal
      name: journal
+     subPath: mount
...
  volumeMounts:
    - mountPath: /usr/share/graylog/data/journal
      name: journal
+     subPath: mount
...
  volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: journal
This can be done quickly in vim by typing :g/name: <volume-name>/norm osubPath: mount. Please note the lack of a space between "o" and "subPath", and note that this will add the line to the volumeClaimTemplate as well, which needs to be removed. "mount" can also be called something else.
Deploy
$ kubectl apply -f graylog-template.yaml

Azure Module in Ansible

I am trying to create a resource group in Azure using Ansible. However, I am getting the following error:
ERROR! no action detected in task
The error appears to have been in '/home/alam/azure/rg.yml': line 6, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- azure_rm_resourcegroup:
^ here
Here is my YAML playbook:
- name: Test the inventory script
  hosts: azure
  connection: local
  gather_facts: no
  tasks:
    - name: "Create a resource group"
      azure_rm_resourcegroup:
        location: westus
        name: Testing
        state: present
        tags:
          delete: never
          testing: testing
Command:
ansible-playbook -i ./ansible/contrib/inventory/azure_rm.py rg.yml
Upgrade Ansible to at least version 2.1 (better yet to the latest one). The docs are clear on that requirement:
azure_rm_resourcegroup - Manage Azure resource groups.
New in version 2.1.
If you use an older version, the module name will not be recognised and Ansible will throw an error: "no action detected in task."
Upgrading to 2.2 resolved the issue. However, to create the resources, "hosts" should not be "azure"; change it to "localhost".
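For reference, a sketch of the playbook from the question with that change applied (values otherwise unchanged):
- name: Test the inventory script
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: "Create a resource group"
      azure_rm_resourcegroup:
        location: westus
        name: Testing
        state: present
        tags:
          delete: never
          testing: testing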
