Passing arguments to Docker containers via Kubernetes Deployment YML - node.js

I'm using a CI/CD pipeline to deploy a Node.js app on my Kubernetes cluster. Now, we use different sensitive environment variables locally, and we would like to deploy them as environment variables within the cluster to be used by the different containers...
Which strategy should I go with?
TIA

There are many tools designed to let you inject secrets into Kubernetes safely.
Natively, you can use the Secret object (https://kubernetes.io/docs/concepts/configuration/secret/) and expose the secret to your containers as environment variables.
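As a minimal sketch (the app-secrets name, the DATABASE_URL key, and the Deployment below are illustrative, not from the question):

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  # Plain-text values; Kubernetes stores them base64-encoded.
  DATABASE_URL: "postgres://user:pass@db:5432/app"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: registry.example.com/node-app:latest
          # Every key in the Secret becomes an environment variable in the container.
          envFrom:
            - secretRef:
                name: app-secrets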
Alternatively, you can use some open-source tools that make this process more secure by encrypting the secrets; here are some I recommend:
https://learnk8s.io/kubernetes-secrets-in-git
https://www.vaultproject.io/docs/platform/k8s

Related

How do I pass resources that were created by Terraform to Kustomize

I am using a combination of these tools:
Terraform - to deploy the application-specific AWS resources I need (for instance, a secret)
Skaffold - to help with the inner development loop surrounding the deployment of K8s resources to local and remote clusters
Kustomize - to help with templating different configurations for different environments
My GitHub Actions steps are as follows:
Terraform to create the AWS resources. At this point it creates an AWS secret ARN.
Skaffold to deploy the K8s manifests. Skaffold in turn delegates K8s manifest generation to Kustomize. Within the Kustomize overlay files I need to be able to access the secret ARN that was created earlier; this ARN needs to be injected into the container that is being deployed. How do I achieve this?
Rephrasing the question: how do I pass resources that were created by Terraform so that they can be consumed by something like Kustomize (which is used by Skaffold)?
(P.S. I really like my choice of tools thus far, as each one excels at one thing. I realize that Terraform could possibly do all of it, but that is a choice I don't want to make unless there are no easier options.)
Here is what I have learnt:
I don't think there are any industry standards for how to share this data between the tools across different steps within GitHub Actions. That being said, here are some of the options:
Have Terraform store the secret ARN in a parameter store, then retrieve the ARN from the parameter store in later steps. This means the steps have to share a static key.
Have Terraform update the Kustomize files directly (or use kustomize_overlays as a data source).
There could be other similar approaches (for instance, reading the Terraform output directly in a later step, as sketched below), but none of these tools has a native way of passing/sharing data.
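For illustration, a rough sketch of that last approach as GitHub Actions steps; the directory layout, the app_secret_arn output name, the app-config ConfigMap, and the dev profile are all assumptions:

- name: Terraform apply
  run: terraform -chdir=infra apply -auto-approve
- name: Read the secret ARN from the Terraform outputs
  id: tf
  run: echo "secret_arn=$(terraform -chdir=infra output -raw app_secret_arn)" >> "$GITHUB_OUTPUT"
- name: Inject the ARN into the Kustomize overlay
  run: |
    cd k8s/overlays/dev
    # The Deployment in the overlay would read SECRET_ARN from this generated ConfigMap.
    kustomize edit add configmap app-config --from-literal=SECRET_ARN="${{ steps.tf.outputs.secret_arn }}"
- name: Deploy with Skaffold
  run: skaffold run -p dev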

GitHub Actions for Terraform - How to provide "terraform.tfvars" file with aws credentials

I am trying to set up GitHub Actions to execute a Terraform template.
My confusion is: how do I provide the *.tfvars file, which has AWS credentials? (I can't check these files in.)
What's the best practice for sharing the variable values expected by Terraform commands like plan or apply, where they need aws_access_key and aws_secret_key?
Here is my GitHub project - https://github.com/samtiku/terraform-ec2
Any guidance here...
You don't need to provide all variables through a *.tfvars file. Apart from the -var-file option, the terraform command also provides the -var parameter, which you can use for passing secrets.
In general, secrets are passed to scripts through environment variables. CI tools give you an option to define environment variables in project configuration. It's a manual step, because as you have already noticed, secrets cannot be stored in the repository.
I haven't used GitHub Actions in particular, but after setting environment variables, all you need to do is run Terraform with the secrets read from them:
$ terraform apply -var-file=some.tfvars -var "aws-secret=${AWS_SECRET_ENVIRONMENT_VARIABLE}"
This way no secrets are ever stored in the repository code. If you'd like to run Terraform locally, you'll first need to export these variables in your shell:
$ export AWS_SECRET_ENVIRONMENT_VARIABLE="..."
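In GitHub Actions, that usually means defining the secret in the repository settings and exposing it to the step as an environment variable. A minimal sketch, assuming a repository secret named AWS_SECRET:

- name: Terraform apply
  env:
    AWS_SECRET_ENVIRONMENT_VARIABLE: ${{ secrets.AWS_SECRET }}
  run: terraform apply -var-file=some.tfvars -var "aws-secret=${AWS_SECRET_ENVIRONMENT_VARIABLE}"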
Although Terraform allows providing credentials to some providers via their configuration arguments for flexibility in complex situations, the recommended way to pass credentials to providers is via some method that is standard for the vendor in question.
For AWS in particular, the main standard mechanisms are either a credentials file or environment variables. If you configure the action to follow what is described in one of those guides then Terraform's AWS provider will automatically find those credentials and use them in the same way that the AWS CLI does.
It sounds like environment variables will be the easier way to go within GitHub Actions, in which case you can just set the necessary environment variables directly and the AWS provider should use them automatically. If you are using the S3 state storage backend then it will also automatically use the standard AWS environment variables.
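A minimal sketch of such a workflow job, assuming repository secrets named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and a region of us-east-1:

jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      # The AWS provider and the S3 state backend both pick these up automatically.
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: us-east-1
    steps:
      - uses: actions/checkout@v4
      - run: terraform init
      - run: terraform apply -auto-approve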
If your system includes multiple AWS accounts then you may wish to review the Terraform documentation guide Multi-account AWS Architecture for some ideas on how to model that. In summary, that guide recommends having a special account set aside only for your AWS users and their associated credentials, and configuring your other accounts to allow cross-account access via roles. You can then use a single set of credentials to run Terraform, but configure each instance of the AWS provider to assume the appropriate role for whatever account that provider instance should interact with.

Azure Webapps for Containers connection string in environment variables

My app running in a docker container on Azure Webapps for Containers tries to access a connection string through an environment variable. I've added it to the Application Settings in the Azure UI but I can't access it through my code, specifically my ASP.NET Core application is returning null.
I know that the logs won't show it being added as a -e connstring=myconnstring argument in the docker run command, but it should nevertheless be present in the container.
It turns out, as shown by the Advanced Tools -> Environment page of the Kudu service in Azure, that the connection string environment variable names were being prefixed with SQLAZURECONNSTR_.
I know it is a convention to have these kind of prefixes on environment variables when reading them with the .NET Core environment variable configuration provider as described here, but quite why Azure adds these prefixes automatically, apparently without documenting this behaviour anywhere, is unclear to me.

How to Integrate GitLab-Ci w/ Azure Kubernetes + Kubectl + ACR for Deployments?

Our previous GitLab-based CI/CD utilized an authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service; if you use something similar for your Kubernetes-based deployment, this question is for you.
More Background
We run a production site / app (Ghost blog based) on an Azure AKS Cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with Kubectl.
That being said we previously used Docker Cloud for our orchestration and fully integrated re-deploying our production / staging services using GitLab-Ci.
That GitLab-Ci integration is the goal, and the 'Why' behind this question.
My Question
Since we previously used Docker Cloud (doh, should have gone with K8s from the start), how should we handle the fact that GitLab-Ci was able to make use of secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our nodes (i.e. re-deploy with new containers, etc.)?
While I believe we can build a container (to be used by our GitLab-Ci runner) that contains Kubectl and the Azure CLI, I know that Kubernetes also has a REST API similar to Docker Cloud's, which can be found here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster); specifically, the section that talks about connecting WITHOUT Kubectl appears to be relevant (as does the piece about the HTTP REST API).
My Question to anyone who is connecting to an Azure (or potentially other managed Kubernetes service):
How does your Ci/CD server authenticate with your Kubernetes service provider's Management Server, and then how do you currently trigger an update / redeployment of an updated container / service?
If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!
Kubernetes Resources I am Reviewing
How should I manage deployments with kubernetes
Kubernetes Deployments
Will update as I work through the process.
Creating the integration
I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.
How to integrate them:
Inside GitLab, go to "Operations" > "Kubernetes" menu.
Click on the "Add Kubernetes cluster" button on the top of the page
You will have to fill in some form fields. To get the content that you have to put into these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then execute this other command to get the Kubernetes cluster credentials: az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>
The previous command will create a ~/.kube/config file. Open this file; the content of the fields that you have to fill in the GitLab "Add Kubernetes cluster" form is all inside this .kube/config file.
These are the fields:
Kubernetes cluster name: It's the name of your cluster on Azure, it's in the .kube/config file too.
API URL: It's the URL in the field server of the .kube/config file.
CA Certificate: It's the field certificate-authority-data of the .kube/config file, but you will have to base64-decode it.
After you decode it, it must be something like this:
-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
Token: It's the string of hexadecimal chars in the field token of the .kube/config file (it might also need to be base64-decoded?). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it to authenticate and install things on the cluster. The easiest way to achieve this is to create a new account for GitLab: create a YAML file with the service account definition (an example can be seen here under "Create a gitlab service account in the default namespace"; a sketch is also shown after these steps) and apply it to your cluster with kubectl apply -f serviceaccount.yml.
Project namespace (optional, unique): I left it empty; I don't know yet what this namespace is used for.
Click "Save" and you're done. Your GitLab project should now be connected to your Kubernetes cluster.
Deploy
In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command; here is a list of all the available variables:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables
To have these variables injected into your deploy job, there are some conditions:
You must have correctly added the Kubernetes cluster to your GitLab project, via the "Operations" > "Kubernetes" menu and the steps I described above.
Your job must be a "deployment job". In GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in the "Operations" > "Environments" menu.
Here is an example of a .gitlab-ci.yml with three stages:
Build: builds a Docker image and pushes it to the GitLab private registry
Test: doesn't do anything yet; it just runs exit 0, to be changed later
Deploy: downloads a stable version of kubectl, copies the .kube/config file to be able to run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working. In my project I haven't finished writing the deploy script to actually execute a deploy, but this kubectl cluster-info command runs fine. A sketch is shown below.
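A rough sketch of what such a .gitlab-ci.yml could look like; the images, the kubectl version and download URL, and the production environment name are assumptions, and the deploy stage only verifies cluster access as described above:

stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

test:
  stage: test
  script:
    - exit 0

deploy:
  stage: deploy
  image: alpine:latest
  # The environment key is what marks this as a deployment job, so the
  # Kubernetes integration variables (including KUBECONFIG) get injected.
  environment:
    name: production
  script:
    - apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
    - chmod +x kubectl && mv kubectl /usr/local/bin/kubectl
    - kubectl cluster-info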
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view, GitLab CI doesn't), you can execute the env command in the script of your deploy stage. It helps a lot when debugging a job.
I logged into our GitLab-Ci backend today and saw a 'Kubernetes' button, along with an offer to save $500 at GCP.
GitLab Kubernetes
The URL for your repo's Kubernetes GitLab page is:
https://gitlab.com/^your-repo^/clusters
As I work through the integration process I will update this answer (contributions are also welcome!).
Official GitLab Kubernetes Integration Docs
https://docs.gitlab.com/ee/user/project/clusters/index.html

Where does Kubernetes' kubelet create service environment variables?

I'm creating a Kubernetes cluster, and in it I have several services. I know, based on https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#discovering-services, that I have two options:
use the environment variables set by the kubelet
use SkyDNS
I want to try to use the environment variables first before I go adding another dependency into the mix. However, I'm unsure where the environment variables are for each service. I haven't found them when doing env or sudo env on the kubelet. Are they within a certain container and/or pod? If so do I have to link the other pods to that one to get its environment variables for services?
I have several Node.js services in containers, so I'm wondering whether talking to each service would require something like this to get the IP, once I have the environment variable question sorted out:
process.env['SERVICE_X_PUBLIC_IPV4']
Not as important, but related: how does this all work across multiple nodes?
The environment variables for a given service are put in every container that is started after the service was created.
For example, if you create a pod foo and then later a service bar, the pod's containers won't have any environment variables for bar.
If you instead create service bar and then a pod foo, the pod's containers should have environment variables something like:
BAR_PORT=tcp://10.167.240.1:80
BAR_SERVICE_HOST=10.167.240.1
You can test this out by attaching a terminal to one of your containers, as explained here.
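A minimal sketch illustrating the ordering (the bar and foo names and the busybox image are illustrative only):

# Create the Service first...
apiVersion: v1
kind: Service
metadata:
  name: bar
spec:
  selector:
    app: bar
  ports:
    - port: 80
---
# ...then the Pod: its container starts after the Service exists,
# so it receives BAR_SERVICE_HOST, BAR_PORT, and related variables.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "env | grep BAR_; sleep 3600"]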
