How to replace default certificates on a cloud2edge instance?

I deployed a cloud2edge instance and now I want to replace the default certificates with ones generated by the create_certs.sh script. According to the Hono documentation I can specify the configuration (including the certificate paths) in values.yaml, but I am not sure how to do that with the cloud2edge package.
Where should i take a look in order to achieve my goal?
Is there any way to set the certificate paths without re-installing the package?

This is what I did to replace the key and certificate for the MQTT adapter:
1. Create a secret containing the key and the certificate:
kubectl create secret generic mqtt-key-cert --from-file=certs/mqtt-adapter-cert.pem --from-file=certs/mqtt-adapter-key.pem -n $NS
2. Mount the secret into the adapter's container filesystem:
helm upgrade -n $NS --set hono.adapters.mqtt.extraSecretMounts.tls.secretName="mqtt-key-cert" --set hono.adapters.mqtt.extraSecretMounts.tls.mountPath="/etc/tls" --reuse-values $RELEASE eclipse-iot/cloud2edge
3. Set the corresponding environment variables in the MQTT adapter deployment:
kubectl edit deployments c2e-adapter-mqtt-vertx -n $NS
YAML:
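The YAML for that edit is not reproduced above. As a minimal sketch of the same step, assuming the HONO_MQTT_KEY_PATH and HONO_MQTT_CERT_PATH variables documented for the Hono MQTT adapter (names can vary between Hono versions) and the /etc/tls mount path from step 2, the variables can also be set non-interactively:

# Point the adapter at the mounted key and certificate
# (variable names assume Hono's documented defaults)
kubectl set env deployment/c2e-adapter-mqtt-vertx -n $NS \
  HONO_MQTT_KEY_PATH=/etc/tls/mqtt-adapter-key.pem \
  HONO_MQTT_CERT_PATH=/etc/tls/mqtt-adapter-cert.pem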

Related

Connect with SSH to AKS cluster nodes

I am trying to connect with SSH to a scale set-based AKS cluster node for maintenance purposes. I am following the instructions in this article:
https://learn.microsoft.com/en-us/azure/aks/ssh
However, when I run:
az vmss extension set --name VMAccessForLinux --protected-settings '{\"username\":\"azureuser\", \"ssh_key\":\"$(cat ~/.ssh/id_rsa.pub)\"}' --publisher Microsoft.OSTCExtensions --resource-group $RG_NAME --version 1.4 --vmss-name $NODE_NAME
I get the following error:
VM has reported a failure when processing extension 'VMAccessForLinux'. Error message: "Enable failed: Failed to generate public key file.
My SSH key pair is located in C:\Users\username\.ssh and is readable. I have tried generating multiple pairs, but the issue does not seem to be there. For generating the keys I used: ssh-keygen -m PEM -t rsa -b 4096
Any idea where I can find more information about this error or how can I troubleshoot it in more detail? Thank you.
The reason is that you need to use double quotes to set the value of the `--protected-settings` parameter, like this:
--protected-settings "{\"username\":\"azureuser\", \"ssh_key\":\"$(cat ~/.ssh/id_rsa.pub)\"}"
The escaped quotes and the $(cat ~/.ssh/id_rsa.pub) command substitution are only processed inside double quotes; with single quotes the shell passes them through literally, as the documentation notes. Also make sure the SSH public key is in the right format.
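A quick way to see the difference in bash (the JSON payload here is just an illustration):

echo '{\"u\":\"$(whoami)\"}'    # single quotes: backslashes and $(...) printed literally
echo "{\"u\":\"$(whoami)\"}"    # double quotes: \" is unescaped and $(whoami) is expanded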

CIS benchmark issue for Kubernetes cluster

I'm running the CIS kube-bench tool on the master node and trying to resolve this error:
[FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated).
I understand that I need to update the API server manifest YAML file with the --kubelet-certificate-authority flag pointing to the right CA file; however, I'm not sure which one is the right CA certificate for the kubelet.
These are my files in the PKI directory:
apiserver-etcd-client.crt
apiserver-etcd-client.key
apiserver-kubelet-client.crt
apiserver-kubelet-client.key
apiserver.crt
apiserver.key
ca.crt
ca.key
etcd
front-proxy-ca.crt
front-proxy-ca.key
front-proxy-client.crt
front-proxy-client.key
sa.key
sa.pub
There are three very similar discussions on this topic. I won't repeat all the steps, because they are well covered in the documentation and in the related questions on Stack Overflow; this is only a high-level overview.
How Do I Properly Set --kubelet-certificate-authority apiserver parameter?
Kubernetes kubelet-certificate-authority on premise with kubespray causes certificate validation error for master node
Your actions:
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and the kubelets.
These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and unsafe to run over untrusted and/or public networks.
Enable Kubelet authentication and Kubelet authorization
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
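A quick check that the flag actually made it into the manifest (kubeadm's default path, as above):

grep kubelet-certificate-authority /etc/kubernetes/manifests/kube-apiserver.yaml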
From @Matt's answer:
Use /etc/kubernetes/ssl/ca.crt to sign a new certificate for the kubelet with valid IP SANs.
Set --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.crt (the valid CA).
In /var/lib/kubelet/config.yaml (the kubelet config file), set tlsCertFile and tlsPrivateKeyFile to point to the newly created kubelet crt and key files.
And from the clarifications:
Yes, you have to generate certificates for the kubelets and sign them with the certificate authority located on the master at /etc/kubernetes/ssl/ca.crt.
By default, Kubernetes has three different parent CAs (kubernetes-ca, etcd-ca, and kubernetes-front-proxy-ca). You are looking for kubernetes-ca, because the kubelet uses it (see the documentation). The default path of kubernetes-ca is /etc/kubernetes/pki/ca.crt, but you can also verify it via the kubelet ConfigMap with the command below:
kubectl get configmap -n kube-system $(kubectl get configmaps -n kube-system | grep kubelet | awk '{print $1}') -o yaml | grep -i clientca
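For the certificate generation step itself, a minimal openssl sketch (the node name, IP, and output paths are placeholders, not values from the question):

# Create a serving key and CSR for the kubelet
openssl genrsa -out kubelet-serving.key 2048
openssl req -new -key kubelet-serving.key -subj "/CN=system:node:<node-name>" -out kubelet-serving.csr
# Sign it with the kubernetes-ca, adding the node's DNS name and IP as SANs
openssl x509 -req -in kubelet-serving.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial \
  -extfile <(printf "subjectAltName=DNS:<node-name>,IP:<node-ip>") \
  -days 365 -out kubelet-serving.crt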

What is the best way to provide a private SSH key in Azure DevOps Pipeline?

I am building an Azure DevOps pipeline using Terraform. The pipeline creates a Linux server and then logs into the Linux server to update packages and install Apache.
I am currently storing the private key in my BitBucket repo (I know, this is not best practice); it is pulled down onto the build agent, and then I log in to the new server with the following command:
ssh -f -q -o BatchMode=yes -o StrictHostKeyChecking=no -i ../private_key.pem ubuntu@$ip sudo apt update -y
What is the best way to store and then retrieve the private key within Azure DevOps?
Two options I can think of:
1) Create an SSH service connection in Azure DevOps and reference the service connection in your pipeline. https://medium.com/@sibeeshvenu/ssh-deployment-task-in-azure-pipelines-b0e2923bd7b4
2) Store the SSH key as an Azure Key Vault secret and then download the secret using the Azure CLI during the build.
az keyvault secret download --name mysshkey --vault-name mykeyvault --file ~/.ssh/id_rsa
Authenticate the Azure CLI using a service principal, and supply the credentials to the pipeline using a variable group.
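A minimal sketch of option 2 as a pipeline script step, assuming the service-principal credentials come from a variable group (the variable names here are made up for the example):

# Sign in with the service principal
az login --service-principal -u "$SP_APP_ID" -p "$SP_PASSWORD" --tenant "$SP_TENANT"
# Download the key and restrict its permissions so ssh will accept it
az keyvault secret download --name mysshkey --vault-name mykeyvault --file ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh -o BatchMode=yes -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa ubuntu@$ip sudo apt update -y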
I found that Azure DevOps provides a feature to upload secret files as part of the build. You can see more information here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/library/secure-files?view=azure-devops

"Incorrect padding" when trying to create managed Kubernetes cluster on Azure with AKS

I am working through the instructions outlined here to try and set up a Couchbase cluster on Azure Container Service (AKS). That tutorial is using terminal/Mac, and I'm using Powershell/Windows.
I'm getting an error before I even get to the Couchbase part. I successfully created a resource group (which I called "cb_ask_spike", and yes it does appear on the Portal) from the command line, but then I try to create an AKS cluster:
az aks create --resource-group cb_aks_spike --name cbakscluster
I also tried:
az aks create --resource-group cb_aks_spike --name cbakscluster --generate-ssh-keys
In both cases, I get an error:
az aks create: error: Incorrect padding
I don't know what this error message means, and I can't seem to find any reference to it in the documentation or anywhere. What am I doing wrong?
I'm using azure-cli v2.0.31.
I am fairly confident that I have figured out why I'm getting this error, and I've updated issue 6142 on azure-cli. At this time I believe this is an unfixed bug, but there is a workaround.
First, it's important to note that --generate-ssh-keys generates a new SSH key in ~/.ssh.
I had a hunch that, since ~ for me is "C:\Users\Matthew Groves", the space in the path was causing the problem. Sure enough, I created a new account called "mgroves". ~ is now "C:\Users\mgroves", and voila, I don't get the "Incorrect padding" error message anymore.
So the workaround is either to use a new account (a huge pain) or to rename the folder (which is what I have done; it's also a huge pain, and I'm still finding little problems here and there all throughout my system because of it).
In addition to the accepted answer, there is a solution that doesn't require you to change any directory or account name and is also easy to implement.
As correctly stated in the other answers, the Azure CLI cannot handle the location where the generated SSH keys are stored if there is a space in the path, e.g. C:\Users\Admin Account\.ssh\.
When using the az aks create command you can either use --generate-ssh-keys to let the Azure CLI handle it, OR you can specify an already existing SSH key with --ssh-key-value.
I used Git Bash to generate a new SSH key pair in the C:\Users\Admin Account\.ssh\ directory:
ssh-keygen -f ~/.ssh/aks-ssh
Now create the Azure AKS cluster while pointing to this new SSH key with:
az aks create \
--resource-group YourResourceGroup \
--name YourClusterName \
--node-count 3 \
--kubernetes-version 1.16.8 \
--ssh-key-value ~/.ssh/aks-ssh.pub
And you are good to go!
Just verified today using the az CLI version 2.0.31 in PowerShell. You might need to first run az group create and then the az aks create command.

Azure Kubernetes Private Key Location

I've used this command to deploy a Kubernetes cluster in Azure:
az acs create -n acs-cluster -g acsrg1 -d applink789 --generate-ssh-keys
Everything is working: I can connect to the cluster with kubectl. Now I want to define an SSH step in a Continuous Delivery pipeline. The documentation indicates that this command created a public/private key pair. Where is the private key stored? I've looked in the .ssh, .kube, and .azure folders in my home directory, but I can't tell whether any of those files is the private key.
Figured it out: the documentation says the keys will only be generated if they are missing. If an id_rsa key pair is already present in the hidden .ssh directory, it is used. I connected with PuTTY using the default azureuser account.
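To double-check which key pair was picked up and that it works, a quick sketch (the master FQDN is a placeholder):

ls -l ~/.ssh/id_rsa ~/.ssh/id_rsa.pub    # the pair az acs create reuses when present
ssh -i ~/.ssh/id_rsa azureuser@<master-fqdn>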
