Azure Kubernetes - No connection to Server

When I execute the following PowerShell command:
.\kubectl get nodes
I get no nodes in response. I noticed that the kubectl config file is empty too:
apiVersion: v1
clusters:
- cluster:
    server: ""
  name: cl-kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
When I enter the server address into the config file, I get the message that the connection was refused. I suspect this is due to missing certificates. During another installation, the following information was apparently created automatically, and it is now missing:
certificate-authority-data,
contexts - cluster,
contexts - user,
current context,
users - name,
client-certificate-data,
client-key-data,
token,
Could that be it? If so, where do I get this information?
Many thanks for the help

You need to use the Azure CLI first to get the credentials. Run
az aks get-credentials
https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials
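In practice you pass the resource group and cluster name, something like this (my-resource-group is a placeholder; cl-kubernetes is the cluster name from the question's config):
az aks get-credentials --resource-group my-resource-group --name cl-kubernetes
.\kubectl get nodes
This writes the server address and the certificate and credential fields listed in the question into your kubeconfig.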

Related

Node container unable to locate Hashicorp Vault secrets file on startup on AWS EKS 1.24

We have a small collection of Kubernetes pods which run react/next.js UIs in a node 16 alpine container (node:16.18.1-alpine3.15 to be precise). All of this runs in AWS EKS 1.23. We make use of annotations on these pods in order to inject secrets from Hashicorp Vault at startup. The annotations pull the desired secrets from Vault and write them to a file on the pod. An example of these annotations is below:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-pre-populate-only: "true"
vault.hashicorp.com/role: "onejourney-ui"
vault.hashicorp.com/agent-inject-secret-config: "secret/data/onejourney-ui"
vault.hashicorp.com/agent-inject-template-config: |
  {{- with secret "secret/data/onejourney-ui" -}}
  export AUTH0_CLIENT_ID="{{ .Data.data.auth0_client_id }}"
  export SENTRY_DSN="{{ .Data.data.sentry_admin_dsn }}"
  {{- end }}
When the pod starts up, we source this file (which is created by default at /vault/secrets/config) to set environment variables and then delete the file. We do that with the following pod arguments in our Helm chart:
node:
  args:
    - /bin/sh
    - -c
    - source /vault/secrets/config; rm -rf /vault/secrets/config; yarn start-admin;
We recently upgraded some of our AWS EKS clusters from 1.23 to 1.24. After doing so, we noted that our node applications were failing to start and entering a crash loop. Looking in the logs of these containers, the problem seemed to be that the pod was unable to locate the secrets file anymore.
Interestingly, the Vault init container completed successfully and shows that the file was successfully created...
Out of curiosity, I removed the node args that source the file, which allowed the container to start successfully. When execing into the pod, I found the file WAS in fact present and had the content I was expecting. The file also had the correct owner and permissions, as we see in a good working instance on EKS 1.23.
We have other containers (php-fpm) which consume secrets in the same manner; however, these were not affected on 1.24, only node containers were affected. I saw no namespace, pod, or deployment annotations added that could have been a possible cause. After rolling the cluster back down to EKS 1.23, the deployment worked as expected.
I'm left scratching my head as to why the pod is unable to source that file on 1.24. Any suggestions on what to check or a possible cause would be greatly appreciated.

resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system"

While I try to add my k8s cluster from an Azure VM, it shows an error like:
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
Here is the output of the command I executed:
root@kubeadm-master:~# curl --insecure -sfL https://104.211.32.151:8443/v3/import/lqkbhj6gwg9xcb5j8pnqcmxhtdg6928wmb7fj2n9zv95dbxsjq8vn9.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
secret/cattle-credentials-e558be7 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/cattle-cluster-agent created
daemonset.apps/cattle-node-agent created
error: resource mapping not found for name: "cattle-admin-binding" namespace: "cattle-system" from "STDIN": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
ensure CRDs are installed first
I was also facing the same issue, so I changed the API version for the cattle-admin-binding from beta to stable as below:
Old value:
apiVersion: rbac.authorization.k8s.io/v1beta1
Changed to:
apiVersion: rbac.authorization.k8s.io/v1
Though I ran into some other issues later, the above error was gone.
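If you cannot easily edit the manifest by hand (it is piped straight from curl into kubectl), one option is to rewrite the apiVersion on the fly; a sketch reusing the import URL from the question:
curl --insecure -sfL https://104.211.32.151:8443/v3/import/lqkbhj6gwg9xcb5j8pnqcmxhtdg6928wmb7fj2n9zv95dbxsjq8vn9.yaml \
  | sed 's|rbac.authorization.k8s.io/v1beta1|rbac.authorization.k8s.io/v1|g' \
  | kubectl apply -f -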

Dapr - vaultTokenMountPath Issue

I am trying to use the Dapr secret management with Vault in a k8s environment.
https://github.com/dapr/quickstarts/tree/master/secretstore
I applied the following component YAML for Vault:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: vault
spec:
  type: secretstores.hashicorp.vault
  version: v1
  metadata:
  - name: vaultAddr
    value: vault:8270 # Optional. Default: "https://127.0.0.1:8200"
  - name: skipVerify # Optional. Default: false
    value: true
  - name: vaultTokenMountPath # Required. Path to token file.
    value: root/tmp/
The token file is created under the root/tmp path, and I tried to execute the service. I am getting the following error:
Permission denied (even though I have given all the read/write permissions).
I tried applying permissions to the file, but it still cannot be accessed. Can anyone please provide a solution?
Your YAML did not format well, but it looks like your value for vaultTokenMountPath is incomplete. It needs to point to the file, not just the folder root/tmp/. I created a file called vault.txt and copied my root token into it, so the path would be root/tmp/vault.txt in your case.
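In the component metadata that would look something like this (vault.txt is just the file name used in this answer, not a Dapr default):
  - name: vaultTokenMountPath # Required. Must point to the token file itself, not the folder.
    value: root/tmp/vault.txt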
I was able to make it work in WSL2 by pointing to a file (/tmp/token in my case).
I was unable to make it work in kubernetes as I did not find any way to inject file in the DAPR sidecar, opened issue on github for this: https://github.com/dapr/components-contrib/issues/794

Ansible Lookup with azure_keyvault_secret Invalid Credentials

I'm attempting to retrieve a secret stored in Azure Key Vault with Ansible. I found and installed the azure.azure_preview_modules role using ansible-galaxy. I've also updated the ansible.cfg to point to the lookup_plugins directory from the role. When running the following playbook I get the error:
- hosts: localhost
  connection: local
  roles:
    - { role: azure.azure_preview_modules }
  tasks:
    - name: Look up secret when ansible host is general VM
      vars:
        url: 'https://myVault.vault.azure.net/'
        secretname: 'SecretPassword'
        client_id: 'ServicePrincipalIDHere'
        secret: 'ServicePrinipcalPassHere'
        tenant: 'TenantIDHere'
      debug: msg="the value of this secret is {{lookup('azure_keyvault_secret',secretname,vault_url=url, cliend_id=client_id, secret=secret, tenant_id=tenant)}}"
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'azure_keyvault_secret'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Invalid credentials provided."}
Using the same information I can connect to Azure with Azure PowerShell and the Azure CLI and retrieve the Key Vault secrets at the command line. However, those same credentials do not work within this task for the playbook using the lookup plug-in.
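For reference, this is the kind of CLI check that works with the same service principal (names taken from the playbook above; a sketch, not part of the original question):
az login --service-principal -u ServicePrincipalIDHere -p ServicePrincipalPassHere --tenant TenantIDHere
az keyvault secret show --vault-name myVault --name SecretPassword --query value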
I had a similar error when using the Python SDK (which Ansible is built on top of). Try changing the url to this:
url: 'https://myVault.vault.azure.net' # so remove the trailing slash
the error text is 101% misleading
After much toil I figured out the issue! The argument client_id is misspelled in the example and I didn't catch it, which resulted in the error: cliend_id=client_id,
https://github.com/Azure/azure_preview_modules/blob/master/lookup_plugins/azure_keyvault_secret.py#L49
Corrected example below.
- name: Look up secret when ansible host is general VM
  vars:
    url: 'https://valueName.vault.azure.net'
    secretname: 'secretName/version'
    client_id: 'ServicePrincipalID'
    secret: 'P#ssw0rd'
    tenant: 'tenantID'
  debug: msg="the value of this secret is {{lookup('azure_keyvault_secret',secretname,vault_url=url, client_id=client_id, secret=secret, tenant_id=tenant)}}"

How to have multiple kubernetes configs and change quickly between them

I am working with multiple Kubernetes clusters on Azure, so I need to change quickly from one cluster to another without keeping various files at my path C:\Users\username\.kube, because I have to rename or replace the file whenever I wish to change to another one.
I suggest that you use the following tools and tricks:
Use asdf to manage multiple kubectl versions
Set the KUBECONFIG env var to change between multiple kubeconfig files (see the PowerShell sketch after this answer)
Use kube-ps1 to keep track of your current context/namespace
Use kubectx and kubens to change fast between clusters/namespaces
Use aliases to combine them all together
Take a look at this article, it explains how to accomplish this: Using different kubectl versions with multiple Kubernetes clusters
I also recommend this read: Mastering the KUBECONFIG file
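A minimal PowerShell sketch of the KUBECONFIG approach (the per-environment file names are made up; on Windows the path list separator is a semicolon):
$env:KUBECONFIG = "$HOME\.kube\config-dev;$HOME\.kube\config-qa;$HOME\.kube\config-prod"
kubectl config get-contexts
kubectl config use-context some-dev-cluster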
I recommend you check out this tool: kubectxwin
This is the Windows version of the kubectx tool which is the go-to for many to quickly change between clusters and namespaces within clusters.
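Typical usage, assuming kubectxwin mirrors the kubectx CLI (the context name is an example):
kubectx                       # list all contexts
kubectx some_cluster_name_02  # switch to that cluster
kubectx -                     # switch back to the previous context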
You need to have all your kubernetes config files.
1.- Create a config file in the path C:\Users\username\.kube
2.- Get the data from every config file. For instance, with 3 files, one per environment (dev, qa, prod), let's merge them into one.
Your file must look like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: some_authority_01
    server: some_server_url_01
  name: some_cluster_name_01
- cluster:
    certificate-authority-data: some_authority_02
    server: some_server_url_02
  name: some_cluster_name_02
- cluster:
    certificate-authority-data: some_authority_03
    server: some_server_url_03
  name: some_cluster_name_03
contexts:
- context:
    cluster: some_cluster_name_01
    user: some_user_01
  name: some_cluster_name_01
- context:
    cluster: some_cluster_name_02
    user: some_user_02
  name: some_cluster_name_02
- context:
    cluster: some_cluster_name_03
    user: some_user_03
  name: some_cluster_name_03
current-context: some_cluster_name_01
kind: Config
preferences: {}
users:
- name: some_user_01
  user:
    client-certificate-data: some_certificate_01
    client-key-data: some_key_01
- name: some_user_02
  user:
    client-certificate-data: some_certificate_02
    client-key-data: some_key_02
- name: some_user_03
  user:
    client-certificate-data: some_certificate_03
    client-key-data: some_key_03
Note: the value of current-context may vary; it doesn't have to be the first cluster.
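If you prefer not to assemble the merged file by hand, kubectl can flatten it for you; a PowerShell sketch, assuming the three per-environment files exist under .kube:
$env:KUBECONFIG = "$HOME\.kube\config-dev;$HOME\.kube\config-qa;$HOME\.kube\config-prod"
kubectl config view --flatten | Out-File -Encoding ascii $HOME\.kube\config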
Adding the Shortcuts
3.- Add shortcuts for Windows 10 for changing the kubernetes context quickly
3.1.- Create a file called Microsoft.PowerShell_profile.ps1 in the path C:\Users\username\Documents\WindowsPowerShell
3.2.- Copy this data into the file that was recently created:
function See-Contexts { kubectl config get-contexts }
Set-Alias -Name seec -Value See-Contexts
function change-context-01 { kubectl config use-context some_cluster_name_01 }
Set-Alias -Name ctx01 -Value change-context-01
function change-context-02 { kubectl config use-context some_cluster_name_02 }
Set-Alias -Name ctx02 -Value change-context-02
function change-context-03 { kubectl config use-context some_cluster_name_03 }
Set-Alias -Name ctx03 -Value change-context-03
3.3.- Search for PowerShell in the Windows search bar, open the option Run ISE as Administrator, open the file Microsoft.PowerShell_profile.ps1 and run it.
With this solution you can easily change between kubernetes clusters using a shortcut. For example, if you want to change to the cluster some_cluster_name_01 you only need to type ctx01. This is useful when we have multiple kubernetes clusters.
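A hypothetical session once the profile has been loaded:
PS C:\> seec    # list the available contexts
PS C:\> ctx02   # switch to some_cluster_name_02
PS C:\> ctx01   # switch back to some_cluster_name_01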
