Does Kustomize require you to specify an entire resource to change one value?

I'd understood that Kustomize would be the solution to my Kubernetes configuration management needs where, for example, if I want maxReplicas for a resource to be 3 on my dev and test environments but 7 in production, I could do that easily. I imagined there'd be a base file, and then an overlay file that would respecify just the values needing changing. My goal is that if I have multiple configurations (configurations? nodes? clusters? I'm still having trouble with K8s terminology. I mean dev, test, prod.), any time a value common to all of them needed changing, I could change it in one place, the base configuration, instead of in three different config files. Essentially, just as in programming, I want to factor out what's common to all the environments.
But I'm looking at https://www.densify.com/kubernetes-tools/kustomize/ and getting a different impression. Here, the dev version of an hpa.yml file is only meant to change the values of maxReplicas and averageUtilization. So I'd thought the overlay would look as follows, in the same way that, in a .NET Core application, appsettings.dev.json only needs to specify the settings from appsettings.json that it's overriding:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-deployment-hpa
spec:
  maxReplicas: 2
  target:
    averageUtilization: 90
instead of the whole definition that's in the example given. So, if I started with all instances having minReplicas = 1 and I wanted to change it to 3 for all of them, I'd have to make that change in every overlay instead of just in the base.
Is this correct? If so, is there a tool that will allow configuration management to work as I'm looking to have it work?

Is this correct?
No.
Kustomize does not require you to specify the entire resource in order to change a single value; the entire point of Kustomize is its ability to transform manifests through patches and other mechanisms to produce the desired output.
For example, assume the following layout:
.
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   └── kustomization.yaml
    └── prod
        └── kustomization.yaml
In base/deployment.yaml we have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: web
          image: docker.io/traefik/whoami:latest
And in base/kustomization.yaml we have:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: kustomize-demo
resources:
  - deployment.yaml
If in my dev environment I want to keep replicas: 1, I would create overlays/dev/kustomization.yaml like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
This simply includes the resources from base verbatim.
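To check what an overlay renders to without applying it, you can build it locally; a minimal sketch, assuming a kubectl version with built-in kustomize support (the standalone kustomize binary works the same way):
kubectl kustomize overlays/dev
# or, with the standalone binary:
kustomize build overlays/dev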
On the other hand, if in my production environment I want to run with three replicas, I would patch the Deployment resource in overlays/prod/kustomization.yaml like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: demo
      spec:
        replicas: 3
This type of patch is called a "strategic merge patch"; you can also apply changes using "JSONPatch" patches. That might look like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: demo
    patch: |
      - op: replace
        path: /spec/replicas
        value: 3
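Either overlay can then be applied directly with kubectl's -k flag; for example, from the repository root:
kubectl apply -k overlays/dev    # dev keeps replicas: 1
kubectl apply -k overlays/prod   # prod is patched to replicas: 3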
The kustomize documentation has a variety of examples of patching and other transformations. For example, the commonLabels directive I show in base/kustomization.yaml uses the labels transformer to produce this output:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kustomize-demo
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kustomize-demo
  template:
    metadata:
      labels:
        app: kustomize-demo
    spec:
      containers:
        - image: docker.io/traefik/whoami:latest
          name: web
Notice how the labels defined in commonLabels have been applied to:
The top-level /metadata/labels element
The /spec/selector/matchLabels element
The /spec/template/metadata/labels element
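As an aside, newer kustomize releases also expose this behavior through an explicit labels transformer field (commonLabels has been marked deprecated in recent versions); a hedged sketch of an equivalent base/kustomization.yaml, assuming a reasonably recent kustomize:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
labels:
  # applies app: kustomize-demo to metadata and, because includeSelectors is true,
  # also to selectors and template labels, matching the commonLabels behavior above
  - pairs:
      app: kustomize-demo
    includeSelectors: true
resources:
  - deployment.yaml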

Related

How to deploy .NET core web and worker projects to Kubernetes in single deployment?

I am relatively new to Docker and Kubernetes technologies. My requirement is to deploy one web and one worker (.Net background service) project in a single deployment.
This is what my deployment.yml file looks like:
apiVersion : apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
          #image: xxxxx.azurecr.io/web
          imagePullPolicy: Always
          #ports:
          #- containerPort: 80
apiVersion : apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: xxxxx.azurecr.io/web:#{Build.BuildId}#
          #image: xxxxx.azurecr.io/web
          imagePullPolicy: Always
          ports:
            - containerPort: 80
This is what my service.yml file looks like:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: web
What I have found is that if I keep both in the service.yml file, then it only deploys one of them in Kubernetes, and if I comment one out and apply them one by one, then it deploys to Kubernetes.
Is there any rule that we can't have both in a single file? Any reason why it's not working together but works individually?
One more question: is there any way to look inside the worker service pod, something like taking a remote session into it, to see what exactly is going on there? Even if it's a console application, is there any way to read what it's printing to the console after deployment?
This issue was resolved in the comments section and I decided to provide a Community Wiki answer just for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes (“---”).
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
        - image: nginx
          name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created
$ kubectl get deploy,svc
NAME                    READY   UP-TO-DATE
deployment.apps/app-1   1/1     1

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.14.179   <none>        80/TCP

How to start docker container with dynamic url

My requirement is as follows:
1. A developer creates a branch in Jenkins. Let's say the branch name is "mystory-101".
2. The developer pushes code to this branch.
3. A Jenkins job starts as soon as a commit is pushed to the branch "mystory-101" and creates a new Docker image for this branch if one does not exist already.
4. My application is a Node.js based app, so the Docker container starts with Node.js and deploys the code from the branch "mystory-101".
5. After the code is deployed and Node.js is running, I would also like this Node.js app to be accessible via the URL: https://mystory-101.mycompany.com
For this purpose I was reading this https://medium.com/swlh/ci-cd-pipeline-using-jenkins-dynamic-nodes-86ea854ff7a7
but I am not sure how to achieve step #5. Can you please advise how to achieve this automatically?
Re-formatting answers from the comments: given a Jenkins installation and a Kubernetes cluster, you may automate your deployments using a Jenkins plugin such as oc or kubernetes, or you may prefer using the kubectl client directly, assuming your agents have that binary.
Not going through the RBAC specifics, you would probably need a ServiceAccount for Jenkins, and use a token (can be found in a Secret named after your ServiceAccount). That ServiceAccount should have enough privileges to create resources in the namespaces you intend to deploy stuff into -- usually the edit ClusterRole, with a namespace-scoped RoleBinding:
kubectl create sa jenkins -n my-namespace
kubectl create rolebinding jenkins-edit \
    --clusterrole=edit \
    --serviceaccount=my-namespace:jenkins \
    --namespace=my-namespace
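If you need the actual token value for Jenkins to authenticate with, one hedged way to retrieve it, assuming the cluster still auto-creates token Secrets for ServiceAccounts (clusters running Kubernetes 1.24+ may need kubectl create token jenkins -n my-namespace instead):
# name of the auto-created token Secret for the ServiceAccount (pre-1.24 behaviour)
SECRET=$(kubectl -n my-namespace get sa jenkins -o jsonpath='{.secrets[0].name}')
# decode the bearer token that Jenkins can use against the API server
kubectl -n my-namespace get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d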
Once Jenkins is done building your image, you would deploy it to Kubernetes, most likely creating a Deployment, a Service, and an Ingress, substituting resource names, namespaces and your ingress requested FQDN to match your requirements.
Prepare your deployment yaml, something like:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-BRANCH
spec:
  selector:
    matchLabels:
      name: app-BRANCH
  template:
    spec:
      containers:
        - image: my-registry/path/to/image:BRANCH
          [...]
---
apiVersion: v1
kind: Service
metadata:
  name: app-BRANCH
spec:
  selector:
    name: app-BRANCH
  ports:
    [...]
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-BRANCH
spec:
  rules:
    - host: app-BRANCH.my-base-domain.com
      http:
        paths:
          - backend:
              serviceName: app-BRANCH
Then, have your Jenkins agent apply that configuration, substituting values properly:
sed "s|BRANCH|$BRANCH|" deploy.yaml | kubectl apply -n my-namespace -f-
kubectl wait -n my-namespace deploy/app-$BRANCH --for=condition=Available
kubectl logs -n my-namespace deploy/app-$BRANCH --tail=200

YAML - Validation error during deployment using Yaml config file

I'm following this Microsoft tutorial to create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure CLI. In the Run the Application section of this tutorial, I get the following error when running the following command to deploy the application using a YAML config file:
kubectl apply -f sample.yaml
error: error validating "sample.yaml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
Question: As shown in the following sample.yaml file, the apiVersion is already set. So what is this error about and how can we fix the issue?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  labels:
    app: sample
spec:
  replicas: 1
  template:
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
        - name: sample
          image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
          resources:
            limits:
              cpu: 1
              memory: 800M
            requests:
              cpu: .1
              memory: 300M
          ports:
            - containerPort: 80
  selector:
    matchLabels:
      app: sample
---
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
  selector:
    app: sample
Issue resolved. The issue was related to copy/pasting into Azure Cloud Shell. When you copy/paste content into the vi editor in Azure Cloud Shell and the content's first letter happens to be "a", the following may happen:
If vi was opened in command mode, pasting makes that first "a" switch the editor into insert mode instead of being inserted as text. So, in my case the content was pasted as follows (I'm only showing the first few lines here for brevity); notice that the "a" is missing from the first line, which should read apiVersion: apps/v1:
sample.yaml file:
piVersion: apps/v1
kind: Deployment
metadata:
…..
...
This happens when you use an outdated kubectl. Can you try updating to 1.2.5 or 1.3.0 and running it again?
I fixed it in my case! For more context, feel free to visit here.
Summary:
If you are applying all of the YAML configs at once, like this:
kubectl apply -f .
then change that to the following:
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
Basically, apply configs separately with each file.

Azure Kubernetes Container Env Variables

So in Docker, I can do a docker run -e to pass in environment variables.
But how does one do that for Azure Kubernetes Pods? They aren't username/password kinds of variables but rather URL segments we would want to use during startup.
For example http://webapi/august, where august is what we would want to pass in; then in September, we would want to pass in september.
These aren't the best examples, but they show what I'm looking for.
Thanks.
There is a clear example in the Kubernetes documentation for this: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Short example from there:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          env:
            - name: DEMO_GREETING
              value: "Hello from the environment"
            - name: DEMO_FAREWELL
              value: "Such a sweet sorrow"
Take note of the env section.
Later, if you want to change the variables on the fly, you can use the kubectl set env command (see kubectl set env -h).
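For instance, a minimal sketch against the Deployment above (nginx-deployment is the name from that example; adjust to your own resource):
# update the variable in place; the Deployment rolls out new Pods with the new value
kubectl set env deployment/nginx-deployment DEMO_GREETING='Hello from the updated environment'
# list the environment variables currently set on the Deployment
kubectl set env deployment/nginx-deployment --list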

How to force restart pod when there is change in container environment variable

I am trying to deploy an image which has some changes to its environment variables, but when I do so I am getting the error below:
The Pod "envar-demo" is invalid: spec: Forbidden: pod updates may not
change fields other than spec.containers[*].image,
spec.initContainers[*].image, spec.activeDeadlineSeconds or
spec.tolerations (only additions to existing tolerations)
{"Volumes":[{"Name":"default-token-9dgzr","HostPath":null,"EmptyDir":null,"GCEPersistentDisk":null,"AWSElasticBlockStore":null,"GitRepo":null,"Secret":{"SecretName":"default-token-9dgzr","Items":null,"DefaultMode":420,"Optional":null},"NFS":null,"ISCSI":null,"Glusterfs":null,"PersistentVolumeClaim":null,"RBD":null,"Quobyte":null,"FlexVolume":null,"Cinder":null,"CephFS":null,"Flocker":null,"DownwardAPI":null,"FC":null,"AzureFile":null,"ConfigMap":null,"VsphereVolume":null,"AzureDisk":null,"PhotonPersistentDisk":null,"Projected":null,"PortworxVolume":null,"ScaleIO":null,"StorageOS":null}],"InitContainers":null,"Containers":[{"Name":"envar-demo-container","Image":"gcr.io/google-samples/node-hello:1.0","Command":null,"Args":null,"WorkingDir":"","Ports":null,"EnvFrom":null,"Env":[{"Name":"DEMO_GREETING","Value":"Hello
from the environment
My YAML:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars-new
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment-change value"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Why am I not able to deploy when there is a change to my container's environment variables?
My pod is in a running state, but I still need to change my environment variables and restart the pod.
Actually, you are better off using Deployments for this use case.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello
  labels:
    app: node-hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-hello
  template:
    metadata:
      labels:
        app: node-hello
    spec:
      containers:
        - name: node-hello
          image: gcr.io/google-samples/node-hello:1.0
          ports:
            - containerPort: 80
          env:
            - name: DEMO_GREETING
              value: "Hello from the environment-change value"
            - name: DEMO_FAREWELL
              value: "Such a sweet sorrow"
This way you will be able to change environment variables, and the Pods will be restarted with the new environment variables.
For this kind of requirement, a ReplicaSet or (preferably) a Deployment can be used.
You may also try to read the ENV values from outside; if there is a change, something (i.e. a script, job, or scheduler) can restart a new Pod with the new ENV values.
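A brief sketch of both approaches, assuming the node-hello Deployment above has already been applied (the manifest filename here is hypothetical):
# edit the env values in the manifest, then re-apply; the Deployment rolls out new Pods automatically
kubectl apply -f node-hello-deployment.yaml
# or force a restart without changing the manifest at all
kubectl rollout restart deployment/node-hello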
