I am trying to deploy the Consul client on a k8s cluster (the Consul server is on a Docker Swarm cluster). I want to use a config.yaml (mentioned in https://www.consul.io/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes) to set up the configuration. I found a Helm Chart Configuration page (https://www.consul.io/docs/k8s/helm#client) and a Configuration page (https://www.consul.io/docs/agent/options#config_key_reference). What is the difference between them? It seems that I need to refer to the Helm Chart Configuration page since I am working on k8s. However, on the Helm Chart Configuration page, I cannot find how to set up things like node_name, data_dir, client_addr, bind_addr, and advertise_addr. Besides, I also need to set verify_incoming, encrypt, verify_outgoing, and verify_server_hostname. For ca_file, cert_file, and key_file, I assume that cert_file is for caCert (in the Helm Chart Configuration) and key_file is for caKey, and I am not sure what ca_file stands for.
k8s version:
Server Version: v1.22.4
There are four servers in the cluster.
Any help would be appreciated.
Thanks
I can not find how to set up something like node_name, data_dir, client_addr, bind_addr, advertise_addr. Besides, I also need to set verify_incoming, encrypt, verify_outgoing, verify_server_hostname. For ca_file, cert_file, and key_file, I assume that cert_file is for caCert
Most of these parameters are specified by the Helm chart when deploying the Consul client or server pods. Is there a particular reason that you need to override these particular settings?
With that said, you can use server.extraConfig and client.extraConfig to provide additional configuration parameters to the server and client agents. Normally you would only specify parameters that are not already specified by deployments created by the Helm chart.
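For illustration, here is a minimal sketch of a Helm values file that passes extra agent options through client.extraConfig, assuming the official consul Helm chart; the TLS flags shown are only an example of the syntax, not a recommended configuration:

client:
  enabled: true
  extraConfig: |
    {
      "verify_incoming": true,
      "verify_outgoing": true,
      "verify_server_hostname": true
    }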
Related
We're using Terraform to deploy AKS clusters to an environment behind a proxy over VPN. Deployment of the cluster works correctly when off-network without the proxy, but errors out on Helm deployment creation on-network.
We are able to connect to the cluster after it's up while on the network using the following command after retrieving the cluster context.
kubectl config set-cluster <cluster name> --certificate-authority=<path to organization's root certificate in PEM format>
The Helm deployments are also created with Terraform after the creation of the cluster. It seems that these require the certificate-authority data to deploy and we haven't been able to find a way to automate this at the right step in the process. Consequently, the apply fails with the error:
x509: certificate signed by unknown authority
Any idea how we can get the certificate-authority data in the right place so the Helm deployments stop failing? Or is there a way to get the cluster to implicitly trust that root certificate? We've tried a few different things:
We researched whether you could automatically have that data in there when retrieving the cluster context (i.e. az aks get-credentials --name <cluster name> --resource-group <cluster RG>). We couldn't find an easy way to accomplish this.
We started to consider adding the root cert info as part of the kubeconfig that's generated during deployment (rather than the one you create when retrieving the context). The idea is that it can be passed in to the kubernetes/helm providers and also leveraged when running kubectl commands via local-exec blocks. We know that works, but we couldn't find a way to automate it via Terraform.
We've tried providing the root certificate to the different fields of the provider config, shown below. We've specifically tried a few different things with cluster_ca_certificate, namely providing the PEM-style cert of the root CA.
provider "kubernetes" {
host = module.aks.kube_config.0.host
client_certificate = base64decode(module.aks.kube_config.0.client_certificate)
client_key = base64decode(module.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(module.aks.kube_config.0.cluster_ca_certificate)
}
provider "helm" {
version = ">= 1.2.4"
kubernetes {
host = module.aks.kube_config.0.host
client_certificate = base64decode(module.aks.kube_config.0.client_certificate)
client_key = base64decode(module.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(module.aks.kube_config.0.cluster_ca_certificate)
}
}
Thanks in advance for the help! Let me know if you need any additional info. I'm still new to the project so I may not have explained everything correctly.
In case anyone finds this later, we ultimately ended up just breaking the project up into two parts: cluster creation and bootstrap. This let us add a local-exec block in the middle to run the kubectl config set-cluster... command. So the order of operations is now:
1. Deploy AKS cluster (which copies Kube config locally as one of the Terraform outputs)
2. Run the command
3. Deploy microservices
Because we're using Terragrunt, we can just use its apply-all function to execute both operations, setting the dependencies described here.
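In rough terms, the sequence between the two applies looks like this (the placeholder names are the same as above; adjust to your module layout):

# after the cluster module has applied and copied the kubeconfig locally
az aks get-credentials --name <cluster name> --resource-group <cluster RG>
kubectl config set-cluster <cluster name> --certificate-authority=<path to organization's root certificate in PEM format>
# then apply the bootstrap module that creates the Helm releases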
I'm using Minikube for development and I need to build a k8s app that pulls all images from ACR; all the images are already stored on ACR.
To pull images from Azure, what I need to do is create a secret with the user & password of the Azure account and pass this secret to every image that I want to pull, using imagePullSecrets (documentation here).
Is there a way to add this registry as a global setting for the namespace, or the project?
I don't understand why every image needs to get the secret explicitly in the spec.
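For context, this is the per-pod wiring the documentation describes; a minimal sketch with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: myregistry.azurecr.io/my-app:v1
  imagePullSecrets:
    - name: acr-secret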
Edit:
Thanks for the comments, I'll check them later; for now I resolved this problem at the Minikube level. There is a way to set a private registry in Minikube (doc here).
In my version this bug exists, and this answer resolves the problem.
As far as I know, if you do not use K8s in Azure, I mean the Azure Kubernetes Service, then there are two ways I know of to pull images from ACR. One is the way you already know, using secrets. The other is to use a service account, but you also need to configure it in each deployment or pod, the same way as with secrets.
If you use the Azure Kubernetes Service, then you just need to assign the AcrPull role to the service principal of the AKS cluster, and then you need to set nothing for each image.
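If you are on AKS, a hedged sketch of that role assignment with the Azure CLI (names and IDs are placeholders):

# attach the registry to the cluster; this grants AcrPull to the cluster's identity
az aks update --name <kubernetes-cluster-name> --resource-group <resource-group-name> --attach-acr <acr-name>

# or assign the role explicitly to the service principal
az role assignment create --assignee <service-principal-id> --role AcrPull --scope <acr-resource-id>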
You can add imagePullSecrets to a service account (e.g. to the default serviceaccount).
It will automatically add imagePullSecrets to the pod spec of any pod that uses this specific (e.g. default) serviceaccount, so you don't have to do it explicitly.
You can do it running:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
You can verify it with:
$ kubectl run nginx --image=nginx --restart=Never
$ kubectl get pod nginx -o=jsonpath='{.spec.imagePullSecrets[0].name}{"\n"}'
myregistrykey
Also checkout the k8s docs add-image-pull-secret-to-service-account.
In my case, I had a local Minikube installed in order to test my charts and my code locally. I tried most of the solutions suggested here and in other Stack Overflow posts, and the following are the options I found:
Move the image from the local Docker registry to Minikube's registry and set the pullPolicy to Never or IfNotPresent in your chart.
docker build . -t my-docker-image:v1
minikube image load my-docker-image:v1
$ minikube image list
rscoreacr.azurecr.io/decibel:0.0.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
...
##Now edit your chart and change the `pullPolicy`.
helm install my_name chart/ ## should work.
I think that the main disadvantage of this option is that you need to change your chart and remember to change the values back to their previous values.
Create a secret that holds the credentials to the ACR.
First, log in to the ACR via:
az acr login --name my-registry.azurecr.io --expose-token
The output of the command should show you a user and an access token.
Now you should create a Kubernetes secret (make sure that you are on the right Kubernetes context - Minikube):
kubectl create secret docker-registry my-azure-secret --docker-server=my-registry.azurecr.io --docker-username=<my-user> --docker-password=<access-token>
Now, if your chart uses the default service account (when you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace), you should edit the service account via the following command:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-azure-secret"}]}'
I didn't like this option because if I have a different secret provider for every Helm chart, I need to overwrite the YAML with the imagePullSecrets.
Another alternative you have is using Minikube's registry-creds addon.
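If you go that route, the addon is configured interactively, roughly like this (it will prompt for your ACR credentials):

minikube addons configure registry-creds
minikube addons enable registry-creds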
Personally, the solution I went for is the first solution with a tweak: instead of adding the pullPolicy in the YAML itself, I override it when I install the chart:
$ helm install --set image.pullPolicy=IfNotPresent <name> charts/
I'm using a CI/CD pipeline to deploy a Node.js app on my Kubernetes cluster. Now, we use different sensitive environment variables locally, and we would like to deploy them as environment variables within the cluster to be used by the different containers...
Which strategy should I go with?
TIA
There are many tools created in order to let you inject secrets into Kubernetes safely.
Natively you can use the "Secrets" object: https://kubernetes.io/docs/concepts/configuration/secret/
And mount the secret as an env var.
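A minimal sketch of that native approach, with placeholder names and keys:

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DATABASE_URL: "postgres://user:pass@db:5432/app"

and then reference it from the container spec of your Deployment:

containers:
  - name: my-app
    image: my-app:v1
    envFrom:
      - secretRef:
          name: app-secrets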
Alternatively, you can use some open source tools that make this process more secure by encrypting the secrets; here are some I recommend:
https://learnk8s.io/kubernetes-secrets-in-git
https://www.vaultproject.io/docs/platform/k8s
I set up a prototype cluster in Azure Kubernetes Service to test the ability to configure HTTPS ingress with cert-manager. I was able to make everything work; now I'm ready to set up my production environment.
The problem is I used the sub domain name I needed (sub.domain.com) on the prototype and now I can't seem to make Let's Encrypt give a certificate to the production cluster.
I'm still very new to Kubernetes and I can't seem to find a way to export or move the certificate from one to the other.
Update:
It appears that the solution provided below would have worked, but it came down to needing to suspend/turn off the prototype's virtual machine. Within a couple of minutes the production environment picked up the certificate.
You can just do something like:
kubectl get secret -o yaml
and just copy/paste your certificate secret to a new cluster, or use something like Heptio Ark to do backup/restore.
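Concretely, a hedged sketch of that copy, with placeholder context and secret names:

kubectl --context <prototype-context> get secret <tls-secret-name> -o yaml > tls-secret.yaml
# remove cluster-specific metadata (resourceVersion, uid, creationTimestamp) before re-applying
kubectl --context <production-context> apply -f tls-secret.yaml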
PS: I don't know why it wouldn't let you create a new cert; at worst you would need to wait 7 days for your rate limit to refresh.
Our previous GitLab based CI/CD utilized an authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service; if you use something similar for your Kubernetes based deployment, this question is for you.
More Background
We run a production site / app (Ghost blog based) on an Azure AKS Cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with Kubectl.
That being said we previously used Docker Cloud for our orchestration and fully integrated re-deploying our production / staging services using GitLab-Ci.
That GitLab-Ci integration is the goal, and the 'Why' behind this question.
My Question
Since we previously used Docker Cloud (doh, should have gone K8s from the start), how should we handle the fact that GitLab-Ci was able to make use of Secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our Nodes (i.e. re-deploy with new containers etc.)?
While I believe we can build a container (to be used by our GitLab-Ci runner) that contains Kubectl and the Azure CLI, I know that Kubernetes also has a REST API similar to Docker Cloud's, which can be found here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster); specifically, the section that talks about connecting without kubectl appears to be relevant (as does the piece about the HTTP REST API).
My Question to anyone who is connecting to an Azure (or potentially other managed Kubernetes service):
How does your Ci/CD server authenticate with your Kubernetes service provider's Management Server, and then how do you currently trigger an update / redeployment of an updated container / service?
If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!
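For reference, my reading of those docs suggests a redeploy without kubectl would look roughly like this; the endpoint, token handling, and names are placeholders, not something we have tested:

APISERVER=https://<aks-api-server-address>
TOKEN=<service-account-bearer-token>

# trigger a rollout by patching the deployment's image
curl --cacert ca.crt -X PATCH \
  --header "Authorization: Bearer ${TOKEN}" \
  --header "Content-Type: application/strategic-merge-patch+json" \
  --data '{"spec":{"template":{"spec":{"containers":[{"name":"<container-name>","image":"<new-image>"}]}}}}' \
  ${APISERVER}/apis/apps/v1/namespaces/default/deployments/<deployment-name>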
Kubernetes Resources I am Reviewing
How should I manage deployments with kubernetes
Kubernetes Deployments
Will update as I work through the process.
Creating the integration
I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.
How to integrate them:
Inside GitLab, go to the "Operations" > "Kubernetes" menu.
Click on the "Add Kubernetes cluster" button at the top of the page
You will have to fill in some form fields. To get the content that you have to put into these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then execute this other command to get the Kubernetes cluster credentials: az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>
The previous command will create a ~/.kube/config file. Open this file; the content of the fields that you have to fill in the GitLab "Add Kubernetes cluster" form is all inside this .kube/config file.
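For orientation, the relevant parts of that .kube/config file look roughly like this (values truncated, names are placeholders):

apiVersion: v1
kind: Config
clusters:
  - name: <kubernetes-cluster-name>
    cluster:
      server: https://<kubernetes-cluster-name>.hcp.<region>.azmk8s.io:443
      certificate-authority-data: <base64-encoded CA certificate>
users:
  - name: clusterUser_<resource-group-name>_<kubernetes-cluster-name>
    user:
      token: <hexadecimal token>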
These are the fields:
Kubernetes cluster name: It's the name of your cluster on Azure, it's in the .kube/config file too.
API URL: It's the URL in the field server of the .kube/config file.
CA Certificate: It's the field certificate-authority-data of the .kube/config file, but you will have to base64 decode it.
After you decode it, it must be something like this:
-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
Token: It's the string of hexadecimal chars in the field token of the .kube/config file (it might also need to be base64-decoded). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it for authenticating and installing stuff on the cluster. The easiest way to achieve this is by creating a new account for GitLab: create a YAML file with the service account definition (an example can be seen here under Create a gitlab service account in the default namespace; a sketch of such a definition is also shown after these steps) and apply it to your cluster by means of kubectl apply -f serviceaccount.yml.
Project namespace (optional, unique): I left it empty; I don't know yet what this namespace can be used for.
Click "Save" and it's done. Your GitLab project should now be connected to your Kubernetes cluster.
Deploy
In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command. Here is a list of all the available variables:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables
To have these variables injected in your deploy job, there are some conditions:
You must have correctly added the Kubernetes cluster to your GitLab project, via the "Operations" > "Kubernetes" menu and the steps that I described above
Your job must be a "deployment job". In GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in the "Operations" > "Environments" menu.
Here is an example of a .gitlab-ci.yml with three stages (a sketch is shown after this list):
Build: it builds a Docker image and pushes it to the GitLab private registry
Test: it doesn't do anything yet; I just put an exit 0 there to change it later
Deploy: it downloads a stable version of kubectl, copies the .kube/config file to be able to run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working. In my project I didn't finish writing my deploy script to really execute a deploy, but this kubectl cluster-info command runs fine.
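A hedged sketch of what that .gitlab-ci.yml could look like; the images, the kubectl version, and the runner's Docker-in-Docker setup are assumptions on my side:

stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind        # the runner must allow Docker-in-Docker
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test:
  stage: test
  script:
    - exit 0

deploy:
  stage: deploy
  image: alpine:3.15
  environment:
    name: production     # must match a name under "Operations" > "Environments"
  script:
    - apk add --no-cache curl
    - curl -LO https://dl.k8s.io/release/v1.22.4/bin/linux/amd64/kubectl
    - chmod +x kubectl && mv kubectl /usr/local/bin/
    # the cluster integration injects KUBECONFIG into deployment jobs
    - kubectl cluster-info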
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view, GitLab CI doesn't) you can execute the command env in the script of your deploy stage. It helps a lot to debug a job.
I logged into our GitLab-Ci backend today and saw a 'Kubernetes' button — along with an offer to save $500 at GCP.
GitLab Kubernetes
The URL for your repo's Kubernetes GitLab page is:
https://gitlab.com/^your-repo^/clusters
As I work through the integration process I will update this answer (but input is also welcome!).
Official GitLab Kubernetes Integration Docs
https://docs.gitlab.com/ee/user/project/clusters/index.html