Kubeconfig in Azure

I have an Azure cloud where I created a Kubernetes cluster, and I also have Jenkins running in my environment for the pipeline. I need to create a container with a React front end in it, and I need some kubectl commands with a kubeconfig to get access to the Kubernetes cluster in my Azure cloud. The lines below are from the Jenkins Groovy file:
sh "helm template $podPath -f $destPath --set namespace=$namespace > helm_chart_${env}.yaml" sh "kubectl config set-context jenkins-react#react --kubeconfig=/root/.kube/sa_new_kubeconfig" sh "kubectl delete -f helm_chart_${env}.yaml
--kubeconfig=/root/.kube/sa_new_kubeconfig || true" sh "sleep 10"
I would like to know whether there is an alternative way to use the kubeconfig, apart from passing it explicitly in the Jenkins Groovy code. If so, which way is more convenient?

You can use the KUBECONFIG environment variable with the path to your Kubernetes config file; kubectl reads it automatically, so you no longer need to pass --kubeconfig on every call.
Then, it depends on how you configure your Jenkins and your Jenkins pipeline, but you may:
Add this variable to your Jenkins agent configuration
Add this variable to your Jenkinsfile pipeline (see the sketch below)
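For instance, a minimal sketch of the Jenkinsfile option, reusing the path and commands from the question (the declarative pipeline skeleton around them is assumed):
pipeline {
    agent any
    environment {
        // kubectl (and helm) honor KUBECONFIG, so --kubeconfig can be dropped from each call
        KUBECONFIG = '/root/.kube/sa_new_kubeconfig'
    }
    stages {
        stage('Deploy') {
            steps {
                sh "helm template $podPath -f $destPath --set namespace=$namespace > helm_chart_${env}.yaml"
                sh "kubectl config set-context jenkins-react#react"
                sh "kubectl delete -f helm_chart_${env}.yaml || true"
            }
        }
    }
}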

Related

Capturing kubectl set command in terraform

We have a case where we need to update the AWS EKS CNI config on the daemon set, but the solution is only available through a kubectl command. How do we update an existing daemonset with specific values through Terraform code? The requirement is that the solution has to be IaC. The equivalent kubectl command given is
kubectl set env daemonset -n kube-system aws-node WARM_IP_TARGET=2,MINIMUM_IP_TARGET=12
The numeric values shown are planned to be Terraform variables.
What you are asking for doesn't exist. Here is the open Terraform GitHub issue tracking it:
https://github.com/hashicorp/terraform-provider-kubernetes/issues/723
Even if that did exist, I wouldn't consider that IaC as it's not declarative (might as well just run a bash script).
In my opinion, the real solution is for AWS to allow the provisioning of bare clusters so that "addons" can be managed completely through IaC tools. But that also does not exist:
https://github.com/aws/containers-roadmap/issues/923
The closest you're going to get is to use a null_resource to execute the patch. Here's an example from that GitHub issue:
https://github.com/hashicorp/terraform-provider-kubernetes/issues/723#issuecomment-679423792
So your final result will look similar to this:
resource "null_resource" "patch_aws_cni" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = <<EOF
# do all those commands to get kubectl and auth info, then run:
kubectl set env daemonset -n kube-system aws-node WARM_IP_TARGET=2,MINIMUM_IP_TARGET=12
EOF
}
}
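Since the question mentions turning the numbers into Terraform variables, here is a hedged sketch of one way to wire that in (the variable names warm_ip_target and minimum_ip_target are my own, not from any provider):
variable "warm_ip_target" {
  type    = number
  default = 2
}

variable "minimum_ip_target" {
  type    = number
  default = 12
}

resource "null_resource" "patch_aws_cni" {
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    # interpolate the variables into the kubectl invocation
    command = "kubectl set env daemonset -n kube-system aws-node WARM_IP_TARGET=${var.warm_ip_target},MINIMUM_IP_TARGET=${var.minimum_ip_target}"
  }
}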

Nextjs App reading configuration from Azure App Service

We have a Next.js project which is built by Docker and deployed into Azure App Service (container). We also set up configuration values within App Service and tried to access them, however it's not working as expected.
A few things we tried:
Restarting the App Service after adding new configuration
Removing the .env file while building the Docker image
Including the .env file while building the Docker image
Here's how we try to read the environment variables within the App Service:
const env = process.env.NEXT_PUBLIC_ENV;
const A = process.env.NEXT_PUBLIC_AS_VALUE;
Wondering if this can actually be done? Just thinking out loud below:
Since we're deploying the Docker image within App Service's container (Linux), does that mean the container can't pull the value from this environment variable?
The Docker image has already performed npm run build; would that mean the image is static (build time) and will never read from App Service configuration (runtime)?
After a day or two, I came up with an alternative solution: passing the environment values in the Dockerfile while building the project.
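The second guess above is essentially right: Next.js inlines NEXT_PUBLIC_ variables into the JavaScript bundle at build time, which is why the values have to be present when npm run build runs. Roughly:
// in the source, before `npm run build`:
const env = process.env.NEXT_PUBLIC_ENV;
// in the built bundle, the reference has been replaced by the literal
// value that existed at build time, e.g.:
const env = "production";
// so changing App Service settings afterwards never reaches the bundle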
TLDR
Pass your env values within the Dockerfile.
Set all your env (dev, staging, prod, etc.) variable values as pipeline variables.
Also set a "settable" variable (e.g. buildEnv) in the pipeline variables, so you can choose which environment to build when triggering your pipeline.
Set up a bash script to perform the variable text change (e.g. from firebaseApiKey to DEVfirebaseApiKey) according to the env received from buildEnv.
Use the "Replace Tokens" task from Azure Pipelines to replace the values inside the Dockerfile.
Build your Docker image.
Voila, now you get your environment-specific build.
Details
Within your Dockerfile you can place your env variables like this:
RUN NEXT_PUBLIC_ENV=#{env}# \
NEXT_PUBLIC_FIREBASE_API_KEY=#{firebaseApiKey}# \
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=#{firebaseAuthDomain}# \
NEXT_PUBLIC_FIREBASE_PROJECT_ID=#{firebaseProjectId}# \
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=#{firebaseStorageBucket}# \
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=#{firebaseMessagingSenderId}# \
NEXT_PUBLIC_FIREBASE_APP_ID=#{firebaseAppId}# \
NEXT_PUBLIC_FIREBASE_MEASUREMENT_ID=#{firebaseMeasurementId}# \
NEXT_PUBLIC_BASE_URL=#{baseURL}# \
npm run build
The values set here (e.g. baseURL, firebaseMeasurementId, etc.) are only placeholders, because we change them later with a bash script according to the buildEnv we receive. (buildEnv is settable when you trigger a build.)
A sample bash script is below. It looks within your Dockerfile for the word env and changes it to DEVenv / UATenv / PRODenv based on what you pass to buildEnv:
#!/bin/bash
# $(buildEnv) is Azure Pipelines macro syntax; the agent expands it before the script runs
case $(buildEnv) in
  dev)
    sed -i -e 's/env/DEVenv/g' ./Dockerfile
    ;;
  uat)
    sed -i -e 's/env/UATenv/g' ./Dockerfile
    ;;
  prod)
    sed -i -e 's/env/PRODenv/g' ./Dockerfile
    ;;
  *)
    echo -n "unknown"
    ;;
esac
When this is complete, your environment-specific Dockerfile is effectively created. Now we'll make use of the "Replace Tokens" task from Azure Pipelines to replace the values inside the Dockerfile. Make sure you have all your values set up as pipeline variables!
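A hedged sketch of what that step might look like in Azure Pipelines YAML, assuming the commonly used qetza replacetokens task (verify the task version and input names against its documentation):
- task: replacetokens@3
  inputs:
    targetFiles: 'Dockerfile'
    tokenPrefix: '#{'
    tokenSuffix: '}#'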
Lastly, build your Docker image and deploy. :)

How Terraform local-exec works on Concourse?

I used a null_resource and passed an AWS CLI command to local-exec to update a Step Function:
resource "null_resource" "enable_step_function_xray" {
triggers = {
state_machine_arn = xxxxxxx
}
provisioner "local-exec" {
command = "aws stepfunctions update-state-machine --state-machine-arn ${self.triggers.state_machine_arn} --tracing-configuration enabled=true"
}
}
This works fine when I test it via local Terraform; my question is whether it will also work when I apply Terraform on Concourse.
It depends entirely on whether you have the Concourse job configured to use a container image that has the AWS CLI installed. If the AWS CLI is installed and on the path, then the local-exec should succeed. If not, it will obviously fail.
My assumption is that on your local machine you've already set up the required credentials, so if you simply try it on Concourse CI it will fail with an authentication error.
To set it up in Concourse -
AWS Console - Create a new IAM user cicd with programmatic access only and the relevant permissions. For testing purposes you can use the AdministratorAccess policy, but make sure to make it least-privileged later on.
AWS Console - Create AWS security credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) for the cicd user (save them in a safe place)
Concourse CI - Create the secrets AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
Concourse CI - Add ((AWS_ACCESS_KEY_ID)) and ((AWS_SECRET_ACCESS_KEY)) environment variables to your Concourse CI task
I'm sure there are many tutorials on this subject, but the steps above will probably appear in most of them. Concourse CI should now be able to apply changes to your AWS account.
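A minimal sketch of how this might look in a Concourse pipeline, assuming the secrets above exist and that your-registry/terraform-awscli is a hypothetical image bundling both terraform and the AWS CLI (which the local-exec needs):
jobs:
- name: terraform
  plan:
  - get: repo            # resource holding the Terraform code (definition omitted)
  - task: terraform-apply
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: your-registry/terraform-awscli
      inputs:
      - name: repo
      params:
        AWS_ACCESS_KEY_ID: ((AWS_ACCESS_KEY_ID))
        AWS_SECRET_ACCESS_KEY: ((AWS_SECRET_ACCESS_KEY))
      run:
        path: sh
        dir: repo
        args: ["-exc", "terraform init && terraform apply -auto-approve"]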

Azure Container Instance | Environment Variables from an Environment Variables File

How can I create an Azure container instance and configure it with an environment variables file?
Something that'd be equivalent to Docker's --env-file flag for the run command. I couldn't find a way to do that but I'm new to both Azure and Docker.
So it'd look something like: az container create <...> --env-file myEnvFile where myEnvFile is stored somewhere on Azure so I could grab it, like how Docker can grab such a file locally.
You can find what you want here https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-create
i.e.
az container create -g MyResourceGroup --name myapp --image myimage:latest --environment-variables key1=value1 key2=value2
Apologies, I realised you want it from a file. If running in a script, could you not have the file set local environment variables, or parse the file to set them, and then run the command above?
I'm fairly sure there is no parameter to set the environment variables of an Azure container instance from a file in a single command.
You can take a look at the parameter --environment-variables in the command az container create:
A list of environment variables for the container. Space-separated
values in 'key=value' format.
It requires a list as its value, so you can read the file to build a list and then use that list as the value of --environment-variables in the create command, as sketched below.
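For example, a rough bash sketch, assuming myEnvFile holds one key=value pair per line with no spaces inside the values:
# join the key=value lines (skipping comments) into one space-separated list
ENV_VARS=$(grep -v '^#' myEnvFile | xargs)
# leave $ENV_VARS unquoted so each key=value becomes its own argument
az container create -g MyResourceGroup --name myapp --image myimage:latest --environment-variables $ENV_VARS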
As far as I'm aware, from answers and my research, this is currently not supported.

Helm - Spark operator examples/spark-pi.yaml does not exist

I've deployed Spark Operator to GKE using the Helm Chart to a custom namespace:
helm install --name sparkoperator incubator/sparkoperator --namespace custom-ns --set sparkJobNamespace=custom-ns
and confirmed the operator running in the cluster with helm status sparkoperator.
However, when I try to run the Spark Pi example with kubectl apply -f examples/spark-pi.yaml, I get the following error:
the path "examples/spark-pi.yaml" does not exist
There are a few things that I probably still don't get:
Where is examples/spark-pi.yaml actually located after deploying the operator?
What else should I check and what other steps should I take to make the example work?
Please find the spark-pi.yaml file here.
You should copy it to your filesystem, customize it if needed, and provide a valid path to it with kubectl apply -f path/to/spark-pi.yaml.
kubectl apply needs a YAML file that is either local to the system where you run the kubectl command, or hosted at an http/https endpoint.
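For example (the URL is an assumption based on the upstream spark-on-k8s-operator repository and may have moved):
# download the example so it can be customized first
curl -LO https://raw.githubusercontent.com/GoogleCloudPlatform/spark-on-k8s-operator/master/examples/spark-pi.yaml
# adjust the namespace/serviceAccount in the file to match custom-ns if needed, then:
kubectl apply -f spark-pi.yaml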
