I have set up an Azure Kubernetes Service cluster and have manually deployed multiple Helm charts successfully.
I now want to set up a CD pipeline using GitHub Actions and Helm to deploy (that is, install and upgrade) a Helm chart whenever the Action is triggered.
Up until now I have only found Actions that use kubectl for deployment, which I don't want to use, because some secrets are provided in the manifests that I don't want to check into version control. Hence the decision for Helm, which can fill in these secrets with values provided as environment variables when running the helm install command:
# without Helm
...
clientId: secretValue
# with Helm
...
clientId: {{ .Values.clientId }}
The "secret" would be provided like this: helm install --set clientId=secretValue.
Now the question is how can I achieve this using GitHub Actions? Are there any "ready-to-use" solutions available that I just haven't found or do I have to approach this in a completely different way?
Seems like I made things more complicated than I needed to.
I ended up writing a simple GitHub Action based on the alpine/helm Docker image and was able to successfully set up the CD pipeline into AKS.
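For anyone looking for a starting point, the workflow has roughly this shape. This is a minimal sketch rather than my exact custom action: it assumes the stock azure/login, azure/aks-set-context and azure/setup-helm actions, and the secret names, resource group, cluster, chart path and release name are placeholders.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/aks-set-context@v3
        with:
          resource-group: my-resource-group
          cluster-name: my-aks-cluster
      - uses: azure/setup-helm@v3
      - name: Install or upgrade the chart
        run: |
          # the secret is injected at deploy time and never checked into version control
          helm upgrade --install my-release ./chart \
            --set clientId=${{ secrets.CLIENT_ID }}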
We have a small collection of Kubernetes pods which run React/Next.js UIs in a Node 16 Alpine container (node:16.18.1-alpine3.15 to be precise). All of this runs in AWS EKS 1.23. We make use of annotations on these pods in order to inject secrets from HashiCorp Vault at startup. The annotations pull the desired secrets from Vault and write them to a file on the pod. An example of said annotations is below:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-pre-populate-only: "true"
vault.hashicorp.com/role: "onejourney-ui"
vault.hashicorp.com/agent-inject-secret-config: "secret/data/onejourney-ui"
vault.hashicorp.com/agent-inject-template-config: |
  {{- with secret "secret/data/onejourney-ui" -}}
  export AUTH0_CLIENT_ID="{{ .Data.data.auth0_client_id }}"
  export SENTRY_DSN="{{ .Data.data.sentry_admin_dsn }}"
  {{- end }}
When the pod starts up, we source this file (which is created by default at /vault/secrets/config) to set environment variables and then delete the file. We do that with the following pod arguments in our Helm chart:
node:
  args:
    - /bin/sh
    - -c
    - source /vault/secrets/config; rm -rf /vault/secrets/config; yarn start-admin;
We recently upgraded some of our AWS EKS clusters from 1.23 to 1.24. After doing so, we noted that our Node applications were failing to start and entering a crash loop. Looking in the logs of these containers, the problem seemed to be that the pod was unable to locate the secrets file anymore.
Interestingly, the Vault init container completed successfully and shows that the file was successfully created...
Out of curiosity, I removed the node args that source the file, which allowed the container to start successfully, but I found when exec'ing into the pod that the file WAS in fact present and had the content I was expecting. The file also had the correct owner and permissions, as we see in a good working instance on EKS 1.23.
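For completeness, the check amounted to something like this (pod name and namespace are placeholders):
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh
# inside the container:
ls -l /vault/secrets/
cat /vault/secrets/config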
We have other containers (php-fpm) which consume secrets in the same manner; however, these were not affected on 1.24, only the Node containers were. There were no namespace, pod, or deployment annotations added that I could see which would have been a possible cause. After rolling the cluster back down to EKS 1.23, the deployment worked as expected.
I'm left scratching my head as to why the pod is unable to source that file on 1.24. Any suggestions on what to check or a possible cause would be greatly appreciated.
Issue itself
I have an Azure Container Registry serving as both image and chart storage. Assume it is myacr.azurecr.io with 8 different charts pushed. As far as I have read, Azure ACR is capable of storing charts and is compatible with Helm 3 (version 3.5.2).
The following steps to reproduce are simple.
helm repo add myacr https://myacr.azurecr.io/helm/v1/repo --username myusername --password admin123 - repo added. OK.
helm chart save ./my-chart/ myacr.azurecr.io/helm/my-chart:1.0.0 - chart saved. OK
helm chart push myacr.azurecr.io/helm/my-chart:1.0.0 - pushed. Available in Azure portal. OK.
helm repo update - what could go wrong here? As expected. OK
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "myacr" chart repository
Update Complete. ⎈Happy Helming!⎈
helm search repo -l - I see everything from ingress-nginx and jetstack but nothing from myacr in the list.
Yet if I do a pull and export, everything works fine - the chart is in place.
What I tried
renaming the repo to helm/{app} according to some theories on the web - fail
reconfiguring the chart with full descriptions etc., modeled on ingress-nginx - fail
executing helm search repo -l --devel to see all possible chart versions - no luck
"Switching it off and on again" - removing and adding the repo again with different combinations - fail
explicit slang language on every attempt - warms up a bit but doesn't solve the issue
The questions are
Is Azure ACR fully compatible with Helm 3?
Is there any specific workaround to make it compatible with Helm 3?
Does search functionality have any requirements to chart structure or version?
Is Azure ACR fully compatible with Helm 3?
Yes, it's fully compatible with Helm 3.
Is there any specific workaround to make it compatible with Helm 3?
Nothing needs to be done, since the answer to the first question is yes.
Does search functionality have any requirements to chart structure or version?
You first need to add the repo to your local Helm with the command az acr helm repo add --name myacr or helm repo add myacr https://myacr.azurecr.io/helm/v1/repo --username xxxxx --password xxxxxx; after that, running helm search repo -l lists the charts from myacr alongside the other repositories.
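For illustration, the whole sequence is (credentials are placeholders):
az acr helm repo add --name myacr
# or equivalently:
helm repo add myacr https://myacr.azurecr.io/helm/v1/repo --username <username> --password <password>
helm repo update
helm search repo myacr -l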
Goal
I have a specific workflow to set up a fresh Kubernetes cluster on Google Cloud, and I want to automate the process with Terraform. These are the steps:
Create cluster
gcloud beta container --project "my-google-project" clusters create "cluster-name" --zone "europe-west3-b"
Setup Helm repos
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo add jetstack https://charts.jetstack.io/
helm repo update
Install NGINX Ingress
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
helm install nginx-ingress stable/nginx-ingress
Install Cert-Manager
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.13.0/deploy/manifests/00-crds.yaml
kubectl create namespace cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager
Ideas
The first step will probably look like this:
resource "google_container_cluster" "primary" {
name = "cluster-name"
location = "europe-west3-b"
initial_node_count = 3
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
metadata = {
disable-legacy-endpoints = "true"
}
}
}
But I have no idea how to approach steps 2 - 4.
While Terraform makes sense for building and provisioning cloud infrastructure for things like Kubernetes to run on, it doesn't necessarily make sense for configuring that infrastructure after deployment. Most infrastructure designs would consider applications deployed onto a provisioned cluster to be configuration of that cluster. The semantics here are admittedly nuanced, but I maintain that a tool like Ansible is better suited to deploying applications to your cluster after provisioning.
So my advice would be to define a handful of Ansible Roles. Perhaps:
create_cluster
deploy_helm
install_nginx_ingress
install_cert_manager
Within each respective role, define the tasks and variables that are required to be used as per the Galaxy schema. Lastly, define a Playbook that Ansible uses to include or import these roles. This would allow you to provision your infrastructure and deploy all of the required applications to it in a single command:
ansible-playbook playbook.yml
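A minimal playbook.yml for those roles might look like this (running against localhost with a local connection is an assumption; adjust the inventory to your setup):
- hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - create_cluster
    - deploy_helm
    - install_nginx_ingress
    - install_cert_manager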
I've deployed Spark Operator to GKE using the Helm Chart to a custom namespace:
helm install --name sparkoperator incubator/sparkoperator --namespace custom-ns --set sparkJobNamespace=custom-ns
and confirmed the operator running in the cluster with helm status sparkoperator.
However when I'm trying to run the Spark Pi example kubectl apply -f examples/spark-pi.yaml I'm getting the following error:
the path "examples/spark-pi.yaml" does not exist
There are a few things that I probably still don't get:
Where exactly is examples/spark-pi.yaml located after deploying the operator?
What else should I check and what other steps should I take to make the example work?
You can find the spark-pi.yaml file in the operator's GitHub repository; installing the Helm chart does not place it anywhere on your cluster or local filesystem.
You should copy it to your filesystem, customize it if needed, and provide a valid path to it with kubectl apply -f path/to/spark-pi.yaml.
kubectl apply needs a YAML file either locally on the system where you are running the kubectl command, or reachable via an http/https endpoint hosting the file.
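For example, cloning the operator's repository (which is where the example lives) and applying it from there; the repository URL and namespace details are assumptions to adapt to your setup:
git clone https://github.com/GoogleCloudPlatform/spark-on-k8s-operator.git
cd spark-on-k8s-operator
# adjust metadata.namespace and the service account in the file to match
# the namespace the operator watches (custom-ns in this case)
kubectl apply -f examples/spark-pi.yaml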
I'm trying to install Istio with automatic sidecar injection into Kubernetes. My environment consists of three masters and two nodes and was built on Azure using the Azure Container Service marketplace product.
Following the documentation located here, I have so far enabled RBAC and DynamicAdmissionControl. I accomplished this by modifying /etc/kubernetes/istio-initializer.yaml on the Kubernetes master, adding the content outlined in red, and then restarting the master with the Unix command reboot.
The next step in the documentation is to apply the YAML using kubectl. I assume that the documentation intends for the user to clone the Istio repository and cd into it before this step, but that is not mentioned.
git clone https://github.com/istio/istio.git
cd istio
kubectl apply -f install/kubernetes/istio-initializer.yaml
After which the following error occurs:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
error: error validating "install/kubernetes/istio-initializer.yaml": error validating data: found invalid field initializers for v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
If I attempt to execute kubectl apply with the mentioned flag, --validate=false, then this error is generated instead:
user@hostname:~/istio$ kubectl apply -f install/kubernetes/istio-initializer.yaml --validate=false
configmap "istio-inject" configured
serviceaccount "istio-initializer-service-account" configured
deployment "istio-initializer" configured
error: unable to recognize "install/kubernetes/istio-initializer.yaml": no matches for admissionregistration.k8s.io/, Kind=InitializerConfiguration
I'm not sure where to go from here. The problem appears to be related to the admissionregistration.k8s.io/v1alpha1 block in the yaml but I'm unsure what specifically is incorrect in this block.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: istio-sidecar
initializers:
  - name: sidecar.initializer.istio.io
    rules:
      - apiGroups:
          - "*"
        apiVersions:
          - "*"
        resources:
          - deployments
          - statefulsets
          - jobs
          - daemonsets
Installed version of Kubernetes:
user@hostname:~/istio$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
I suspect this is a versioning mismatch. As a follow-up question, is it possible to deploy a version of Kubernetes >= 1.7.4 to Azure using ACS?
I'm fairly new to working with Kubernetes so if anyone could help I would greatly appreciate it. Thank you for your time.
This seems to be a versioning problem, as the alpha feature is only supported for k8s versions 1.7 and above, as mentioned here (https://kubernetes.io/docs/admin/extensible-admission-controllers/#what-are-initializers):
1.7 introduces two alpha features, Initializers and External Admission
Webhooks, that address these limitations. These features allow admission
controllers to be developed out-of-tree and configured at runtime.
And it is possible to deploy a version of Kubernetes >= 1.7.4 to Azure. I'm not sure about the version deployed when using the portal, but if you use acs-engine to generate the ARM template, it is possible to deploy a cluster with version 1.7.5.
You can refer to https://github.com/Azure/acs-engine for the procedure. Basically it involves three steps. First, create the JSON file by referring to the clusterDefinition section. To use version 1.7.5, specify the attribute "orchestratorRelease" as "1.7" and also enable RBAC by setting the attribute "enableRbac" to true. Second, use acs-engine (version >= 0.6.0) to convert the JSON file to an ARM template (azuredeploy.json and azuredeploy.parameters.json should be created). Lastly, use the command New-AzureRmResourceGroupDeployment in PowerShell to deploy the cluster to Azure.
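A minimal cluster definition along those lines might look like this (a sketch only; the DNS prefix, VM sizes, node counts, and credentials are placeholders):
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7",
      "kubernetesConfig": {
        "enableRbac": true
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "mycluster",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 2,
        "vmSize": "Standard_D2_v2"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "<ssh-public-key>"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "<client-id>",
      "secret": "<client-secret>"
    }
  }
}
Running acs-engine generate against this file produces the azuredeploy.json and azuredeploy.parameters.json mentioned above.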
Hope this helps :)