Terraform - How to Create a GKE Cluster and Install Helm Charts?

Goal
I have a specific workflow to set up a fresh Kubernetes cluster on Google Cloud, and I want to automate the process with Terraform. These are the steps:
1. Create cluster
gcloud beta container --project "my-google-project" clusters create "cluster-name" --zone "europe-west3-b"
2. Set up Helm repos
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo add jetstack https://charts.jetstack.io/
helm repo update
3. Install NGINX Ingress
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
helm install nginx-ingress stable/nginx-ingress
4. Install Cert-Manager
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.13.0/deploy/manifests/00-crds.yaml
kubectl create namespace cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager
Ideas
The first step will probably look like this:
resource "google_container_cluster" "primary" {
name = "cluster-name"
location = "europe-west3-b"
initial_node_count = 3
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
metadata = {
disable-legacy-endpoints = "true"
}
}
}
But I have no idea how to approach steps 2 - 4.

While Terraform makes sense for building and provisioning cloud infrastructure for things like Kubernetes to run on, it doesn't necessarily make sense to use it to configure that infrastructure after deployment. Most infrastructure designs would consider applications deployed onto a provisioned cluster to be configuration of that cluster. The semantics here are admittedly a bit nuanced, but I maintain that a tool like Ansible is better suited to deploying applications to your cluster after provisioning.
So my advice would be to define a handful of Ansible Roles. Perhaps:
create_cluster
deploy_helm
install_nginx_ingress
install_cert_manager
Within each respective role, define the required tasks and variables as per the Galaxy role schema. Lastly, define a playbook that Ansible uses to include or import these roles. This would allow you to provision your infrastructure and deploy all of the required applications to it in a single command:
ansible-playbook playbook.yml
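That said, if you would rather keep everything inside Terraform, steps 2-4 can also be expressed with the hashicorp/helm and hashicorp/kubernetes providers, where a helm_release roughly corresponds to a helm install. The following is only a rough sketch under a few assumptions: the providers authenticate against the cluster resource from the question via google_client_config, the chart and repository names are taken from the question, the user in the ClusterRoleBinding is a placeholder for your gcloud account, and installCRDs only exists in newer cert-manager charts (the v0.13 chart would still need the separate CRD manifest applied, for example via a null_resource):

data "google_client_config" "default" {}

provider "helm" {
  kubernetes {
    host                   = "https://${google_container_cluster.primary.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
  }
}

provider "kubernetes" {
  host                   = "https://${google_container_cluster.primary.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}

# Step 3 (RBAC part): cluster-admin binding for the current user.
resource "kubernetes_cluster_role_binding" "cluster_admin_binding" {
  metadata {
    name = "cluster-admin-binding"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "User"
    name      = "you@example.com" # placeholder: output of `gcloud config get-value account`
    api_group = "rbac.authorization.k8s.io"
  }
}

# Steps 2 and 3: repositories are declared per release, no `helm repo add` needed.
resource "helm_release" "nginx_ingress" {
  name       = "nginx-ingress"
  repository = "https://kubernetes-charts.storage.googleapis.com/"
  chart      = "nginx-ingress"
}

# Step 4: cert-manager in its own namespace.
resource "kubernetes_namespace" "cert_manager" {
  metadata {
    name = "cert-manager"
  }
}

resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = kubernetes_namespace.cert_manager.metadata[0].name

  # Only available in newer chart versions; for v0.13 apply 00-crds.yaml separately.
  set {
    name  = "installCRDs"
    value = "true"
  }
}

The trade-off, as argued above, is that application deployment then lives inside your infrastructure code rather than in a separate configuration layer.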

Related

Capturing kubectl set command in terraform

We have a case where we need to update the AWS EKS CNI config on the daemon set, but the only solution offered is through a kubectl command. How do we update an existing DaemonSet with specific values through Terraform code? The requirement is that the solution has to be IaC. The equivalent kubectl command given is
kubectl set env daemonset -n kube-system aws-node WARM_IP_TARGET=2,MINIMUM_IP_TARGET=12
The numeric values shown are planned to be variables in Terraform.
What you are asking for doesn't exist. Here is the open Terraform GitHub issue tracking it:
https://github.com/hashicorp/terraform-provider-kubernetes/issues/723
Even if it did exist, I wouldn't consider that IaC, as it's not declarative (you might as well just run a bash script).
In my opinion, the real solution is for AWS to allow the provisioning of bare clusters so that "addons" can be managed completely through IaC tools. But that also does not exist:
https://github.com/aws/containers-roadmap/issues/923
The closest you're going to get will be to use a null_resource to execute the patch. Here's an example in that GitHub issue:
https://github.com/hashicorp/terraform-provider-kubernetes/issues/723#issuecomment-679423792
So your final result will look similar to this:
resource "null_resource" "patch_aws_cni" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = <<EOF
# do all those commands to get kubectl and auth info, then run:
kubectl set env daemonset -n kube-system aws-node WARM_IP_TARGET=2,MINIMUM_IP_TARGET=12
EOF
}
}
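Since the question says the two numbers are planned to be Terraform variables, the hardcoded values can be lifted into variable blocks and interpolated into the command. A small sketch of the same null_resource with that change (the variable names are my own, not from the question):

variable "warm_ip_target" {
  type    = number
  default = 2
}

variable "minimum_ip_target" {
  type    = number
  default = 12
}

resource "null_resource" "patch_aws_cni" {
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = <<-EOF
      # do all those commands to get kubectl and auth info, then run:
      kubectl set env daemonset -n kube-system aws-node WARM_IP_TARGET=${var.warm_ip_target},MINIMUM_IP_TARGET=${var.minimum_ip_target}
    EOF
  }
}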

Argo CD stuck in deadlock when deleting its installation via Helm with Terraform

I am installing Argo CD in a cluster using a Helm release via Terraform. It installs the Argo CD CRDs and automatically adds a custom project and an initial application. This application monitors a path with other applications (app of apps).
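A minimal sketch of such a release (illustrative only; the actual chart version and the values that add the project and root application are omitted here):

resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true

  # The custom AppProject and the root "app of apps" Application are
  # injected through chart values during the install.
}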
If I want to remove Argo CD, all of its apps, and the argocd namespace from the cluster using terraform destroy, it usually gets stuck in a deadlock while deleting the apps.
I understand that having the finalizers below will make the deletion of an app cascade and delete all of its resources when using kubectl delete app ..., which is good:
metadata:
  finalizers:
    - resources-finalizer.argocd.argoproj.io
However, in my app definitions I also have the sync policy with prune and selfHeal set to true, so that if an app is accidentally deleted it is recreated immediately:
syncPolicy:
  automated:
    prune: true
    selfHeal: true
Am I right to assume that these two definitions conflict and cause the deadlock when destroying Argo CD with Terraform? The destroy command tries to delete the app, but it gets stuck in a loop because the app is recreated every time.
If that's the case, how can I make Argo CD recreate an app that is deleted accidentally (via the UI, kubectl, or the argocd CLI), but have everything pruned completely with terraform destroy?

Use GitHub Actions and Helm to deploy to AKS

I have set up an Azure Kubernetes Service cluster and manually deployed multiple Helm charts to it successfully.
I now want to set up a CD pipeline using GitHub Actions and Helm to deploy (that is, install and upgrade) a Helm chart whenever the Action is triggered.
So far I have only found Actions that use kubectl for deployment, which I don't want to use, because the manifests contain secrets that I don't want to check into version control. Hence the decision for Helm, which can fill in these secrets with values provided as environment variables when running the helm install command:
# without Helm
...
clientId: secretValue
# with Helm
...
clientId: {{ .Values.clientId }}
The "secret" would be provided like this: helm install --set clientId=secretValue.
Now the question is how can I achieve this using GitHub Actions? Are there any "ready-to-use" solutions available that I just haven't found or do I have to approach this in a completely different way?
It seems I made things more complicated than I needed to.
I ended up writing a simple GitHub Action based on the alpine/helm Docker image and was able to successfully set up the CD pipeline into AKS.

Helm - Spark operator examples/spark-pi.yaml does not exist

I've deployed Spark Operator to GKE using the Helm Chart to a custom namespace:
helm install --name sparkoperator incubator/sparkoperator --namespace custom-ns --set sparkJobNamespace=custom-ns
and confirmed the operator is running in the cluster with helm status sparkoperator.
However when I'm trying to run the Spark Pi example kubectl apply -f examples/spark-pi.yaml I'm getting the following error:
the path "examples/spark-pi.yaml" does not exist
There are a few things that I probably still don't get:
Where is examples/spark-pi.yaml actually located after deploying the operator?
What else should I check and what other steps should I take to make the example work?
Please find the spark-pi.yaml file here.
You should copy it to your filesystem, customize it if needed, and provide a valid path to it with kubectl apply -f path/to/spark-pi.yaml.
kubectl apply needs a YAML file that is either available locally on the system where you run the kubectl command, or hosted at an http/https endpoint.

Installing Istio in Kubernetes with automatic sidecar injection: istio-initializer.yaml Validation Failure

I'm trying to install Istio with automatic sidecar injection into Kubernetes. My environment consists of three masters and two nodes and was built on Azure using the Azure Container Service marketplace product.
Following the documentation located here, I have so far enabled RBAC and DynamicAdmissionControl. I accomplished this by modifying /etc/kubernetes/istio-initializer.yaml on the Kubernetes master, adding the content outlined in red, and then restarting the master with the Unix command reboot.
The next step in the documentation is to apply the YAML using kubectl. I assume the documentation intends for the user to clone the Istio repository and cd into it before this step, but that is not mentioned.
git clone https://github.com/istio/istio.git
cd istio
kubectl apply -f install/kubernetes/istio-initializer.yaml
After which the following error occurs:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
error: error validating "install/kubernetes/istio-initializer.yaml": error validating data: found invalid field initializers for v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
If I attempt to execute kubectl apply with the mentioned flag, --validate=false, then this error is generated instead:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml --validate=false
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
deployment "istio-initializer" configured
error: unable to recognize "install/kubernetes/istio-initializer.yaml": no matches for admissionregistration.k8s.io/, Kind=InitializerConfiguration
I'm not sure where to go from here. The problem appears to be related to the admissionregistration.k8s.io/v1alpha1 block in the yaml but I'm unsure what specifically is incorrect in this block.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: istio-sidecar
initializers:
  - name: sidecar.initializer.istio.io
    rules:
      - apiGroups:
          - "*"
        apiVersions:
          - "*"
        resources:
          - deployments
          - statefulsets
          - jobs
          - daemonsets
Installed version of Kubernetes:
user@hostname:~/istio$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
I suspect this is a versioning mismatch. As a follow-up question, is it possible to deploy a version of Kubernetes >= 1.7.4 to Azure using ACS?
I'm fairly new to working with Kubernetes so if anyone could help I would greatly appreciate it. Thank you for your time.
This seems to be a versioning problem, as the alpha feature is only supported for Kubernetes 1.7 and later, as mentioned here (https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers):
1.7 introduces two alpha features, Initializers and External Admission Webhooks, that address these limitations. These features allow admission controllers to be developed out-of-tree and configured at runtime.
And it is possible to deploy a version of Kubernetes >= 1.7.4 to Azure. I'm not sure about the version deployed through the portal, but if you use acs-engine to generate the ARM template, it is possible to deploy a cluster with version 1.7.5.
You can refer to https://github.com/Azure/acs-engine for the procedure. Basically it involves three steps. First, create the JSON file by referring to the clusterDefinition section; to use version 1.7.5, set the attribute "orchestratorRelease" to "1.7" and enable RBAC by setting "enableRbac" to true. Second, use acs-engine (version >= 0.6.0) to generate the ARM templates from the JSON file (azuredeploy.json and azuredeploy.parameters.json will be created). Lastly, use the "New-AzureRmResourceGroupDeployment" command in PowerShell to deploy the cluster to Azure.
Hope this helps :)
