Azure Kubernetes Container Env Variables

So in Docker, I can do docker run -e to pass in environment variables. But how does one do that for Azure Kubernetes pods? They aren't username/password kinds of variables, but rather URL segments we would want to use during startup.
For example: http://webapi/august, where august is what we would want to pass in; then in September, we would want to pass in september.
These aren't the best examples, but they show what I'm looking for.
Thanks.

There is a clear example of this in the Kubernetes documentation: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
A short example from there:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Take note of the env section.
Later, if you want to change the variable on the fly, you can use the kubectl set env command (run kubectl set env -h for usage).
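For example, when September rolls around you can switch the value on the Deployment and let it roll the pods for you, then verify the result (deployment and variable names taken from the example above; the exec form with deploy/ works on recent kubectl versions):

$ kubectl set env deployment/nginx-deployment DEMO_GREETING="Hello from september"
$ kubectl exec deploy/nginx-deployment -- printenv DEMO_GREETING

Because this changes the pod template, the Deployment performs a rolling update and the new pods start with the updated value.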

Related

How to deploy .NET core web and worker projects to Kubernetes in single deployment?

I am relatively new to Docker and Kubernetes. My requirement is to deploy one web and one worker (.NET background service) project in a single deployment.
This is how my deployment.yml file looks:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        #ports:
        #- containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: xxxxx.azurecr.io/web:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        ports:
        - containerPort: 80
This is how my service.yml file looks:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: web
What I have found is that if I keep both in the service.yml file, it only deploys one to Kubernetes, but if I comment one out and execute them one by one, both get deployed.
Is there any rule that we can't have both in a single file? Any reason why it doesn't work together but works individually?
One more ask: is there any way to look inside the worker service pod, something like taking a remote session to it, to see what exactly is going on there? Even if it's a console application, is there a way to read what it's printing to the console after deployment?
This issue was resolved in the comments section and I decided to provide a Community Wiki answer just for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes (“---”).
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - image: nginx
        name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created
$ kubectl get deploy,svc
NAME                    READY   UP-TO-DATE
deployment.apps/app-1   1/1     1

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.14.179   <none>        80/TCP
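As for the second part of the question (looking inside the worker pod): kubectl logs streams whatever the container writes to stdout/stderr, and kubectl exec opens a shell inside it. The pod name below is a placeholder; look up the real one first:

$ kubectl get pods -l app=worker
$ kubectl logs -f <worker-pod-name>
$ kubectl exec -it <worker-pod-name> -- /bin/sh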

Azure Kubernetes Service load balancer external IP not accessible

I am new to the world of Kubernetes and was testing a sample Django "Hello world" app deployment. Using docker-compose I was able to access the hello world page in a browser, but I need to use Kubernetes. So I tested two options and neither of them worked.
1) I created an Azure CI/CD pipeline to build and push the image to ACR using the following Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /hello_world
WORKDIR /hello_world
COPY . /hello_world/
RUN pip install -r requirements.txt
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
The pipeline completes successfully and uploads the image to the repository.
Now I use kubectl to deploy using the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: acrshgpdev1.azurecr.io/django-helloworld:194
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: django-helloworld
The deployment and service are created, but when I try to access the external IP of the LB service through a browser, the page is inaccessible. I used external-ip:port and it didn't work.
Any thoughts on why this would be happening?
2) I used the same Dockerfile but a different deployment file (changed the image to the locally created image and removed the LB service) to deploy the app to my local Kubernetes. The deployment file was as follows:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  selector:
    app: django-helloworld
  ports:
  - protocol: TCP
    port: 80
    targetPort: 30800
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-helloworld
  template:
    metadata:
      labels:
        app: django-helloworld
    spec:
      containers:
      - name: django-helloworld
        image: django-helloworld:1.0
        #imagePullPolicy: Always
        ports:
        - containerPort: 8000
It creates the deployment and service but doesn't assign an external IP to the NodePort service, so I can't figure out which address to use to test whether the app works. I know I can't choose an LB locally; I'd need to deploy to a cloud service for that.
Just configure your service to be of type LoadBalancer and do a proper port mapping:
apiVersion: v1
kind: Service
metadata:
  name: django-helloworld-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: django-helloworld
https://kubernetes.io/docs/concepts/services-networking/service/
Make sure the deployment has associated healthy pods too (they show as Running and with 1/1 next to their name). If there aren't any, make sure your cluster can successfully pull from the acrshgpdev1.azurecr.io registry; you can integrate an AKS cluster directly with an ACR registry following this article:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr acrshgpdev1.azurecr.io
or by adding the SP of the AKS cluster manually to the Reader role on the ACR.
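A quick way to check both the pods and the service, using the label and service name from the manifests in the question:

$ kubectl get pods -l app=django-helloworld          # all should show Running and 1/1
$ kubectl describe pod <pod-name>                    # check the Events section for image pull errors
$ kubectl get service django-helloworld-service      # EXTERNAL-IP should not stay <pending>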

How to force a pod restart when a container environment variable changes

I am trying to deploy an image with a change to its environment variables, but when I do so I get the error below:
The Pod "envar-demo" is invalid: spec: Forbidden: pod updates may not
change fields other than spec.containers[*].image,
spec.initContainers[*].image, spec.activeDeadlineSeconds or
spec.tolerations (only additions to existing tolerations)
{"Volumes":[{"Name":"default-token-9dgzr","HostPath":null,"EmptyDir":null,"GCEPersistentDisk":null,"AWSElasticBlockStore":null,"GitRepo":null,"Secret":{"SecretName":"default-token-9dgzr","Items":null,"DefaultMode":420,"Optional":null},"NFS":null,"ISCSI":null,"Glusterfs":null,"PersistentVolumeClaim":null,"RBD":null,"Quobyte":null,"FlexVolume":null,"Cinder":null,"CephFS":null,"Flocker":null,"DownwardAPI":null,"FC":null,"AzureFile":null,"ConfigMap":null,"VsphereVolume":null,"AzureDisk":null,"PhotonPersistentDisk":null,"Projected":null,"PortworxVolume":null,"ScaleIO":null,"StorageOS":null}],"InitContainers":null,"Containers":[{"Name":"envar-demo-container","Image":"gcr.io/google-samples/node-hello:1.0","Command":null,"Args":null,"WorkingDir":"","Ports":null,"EnvFrom":null,"Env":[{"Name":"DEMO_GREETING","Value":"Hello
from the environment
My YAML:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars-new
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment-change value"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Why am I not able to deploy when there is a change to my container's environment variables?
My pod is in the Running state, but I still need to change an environment variable and restart it.
Actually, you are better off using Deployments for this use case:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello
  labels:
    app: node-hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-hello
  template:
    metadata:
      labels:
        app: node-hello
    spec:
      containers:
      - name: node-hello
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 80
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment-change value"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
This way you can change the environment variables, and the pods will be restarted with the new values.
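For instance, once the Deployment above exists, either of these standard commands rolls out pods with the new value (rollout restart requires kubectl 1.15+):

$ kubectl set env deployment/node-hello DEMO_GREETING="Hello from the environment-change value"
$ kubectl rollout restart deployment/node-hello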
For this kind of requirement, a ReplicaSet or (preferably) a Deployment can be used.
You may also read the ENV values from outside; if there is a change, something (a script, job, or scheduler) can start a new Pod with the new ENV values.

Getting a validation error when trying to apply a YAML file in AKS

I'm following along with this tutorial. I'm at the stage where I deploy using the command:
kubectl apply -f azure-vote-all-in-one-redis.yaml
The YAML file looks like this:
version: '3'
services:
  azure-vote-back:
    image: redis
    container_name: azure-vote-back
    ports:
      - "6379:6379"
  azure-vote-front:
    build: ./azure-vote
    image: azure-vote-front
    container_name: azure-vote-front
    environment:
      REDIS: azure-vote-back
    ports:
      - "8080:80"
However, I'm getting the error:
error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
If I add an apiVersion and a Kind, like this:
apiVersion: v1
kind: Pod
Then I get the error:
error validating data: ValidationError(Pod): unknown field "services" in io.k8s.api.core.v1.Pod
Am I missing something here?
It looks like you're trying to apply a Docker Swarm/Compose YAML file to your Kubernetes cluster. This will not work directly without a conversion.
Using a tool like Kompose to convert your Compose YAML into k8s YAML is a useful step in migrating from one to the other.
For more information see https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
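A sketch of that workflow (the -o flag picks an output directory so the generated manifests don't get mixed in with the Compose file, which is what tripped up kubectl apply here):

$ kompose convert -f docker-compose.yml -o k8s/
$ kubectl apply -f k8s/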
So first of all, every YAML definition should follow the AKMS order: apiVersion, kind, metadata, spec. Also, you should avoid bare pods and use deployments, because deployments handle pods on their own.
Here's a sample vote-back/front definition:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 60%
      maxUnavailable: 60%
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: aksrg.azurecr.io/azure-vote-front:voting-dev
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
      imagePullSecrets:
      - name: k8s
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
In my case, I am deploying my project on GKE via Travis. In my Travis file, I call a shell script (deploy.sh).
In the deploy.sh file, I have written all the steps to create the Kubernetes resources:
### Deploy
# Apply k8s config
kubectl apply -f .
So here, I replaced kubectl apply -f . with the individual file names as follows:
### Deploy
# Apply k8s config
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
And then the error was fixed!
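Equivalently, the files can be passed to one invocation; kubectl should apply -f arguments in the order they are listed, so the namespace is still created first (same file names as above):

kubectl apply -f namespace.yaml -f deployment.yaml -f service.yaml -f ingress.yaml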

How to deploy a Node.js app with Redis on Kubernetes?

I have a very simple Node.js application (an HTTP service) which "talks" to redis. I want to create a deployment and run it with minikube.
From my understanding, I need a Kubernetes Pod for my app, based on the Docker image. Here's my Dockerfile:
FROM node:8.9.1
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
I build the docker image with docker build -t my-app .
Next, I created a Pod definition for my app's Pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
So far, so good. But from now on, I have no clear idea how to proceed with redis:
Should redis be another Pod, or a Service (in terms of Kubernetes kind)?
How do I reference redis from inside my app? Based on whether redis is defined as a Pod or a Service, how do I obtain a connection URL and port? I read about environment variables being created by Kubernetes, but I am not sure whether these work for Pods or Services.
How do I aggregate both (my app & redis) under a single configuration? How do I make sure that redis starts first, then my app (which requires a running redis instance), and how do I expose my HTTP endpoints to the "outside world"? I read about Deployments, but I am not sure how to connect these pieces together.
Ideally, I would like to have all configurations inside YAML files, so that at the end of the day the whole infrastructure could be started with a single command.
I think I figured out a solution (using a Deployment and a Service).
For my deployment, I used two containers (webapp + redis) within one Pod, since it doesn't make sense for the webapp to run without an active redis instance, and additionally it connects to redis upon application start. I could be wrong in this reasoning, so feel free to correct me if you think otherwise.
Here's my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  selector:
    matchLabels:
      app: my-app-deployment
  template:
    metadata:
      labels:
        app: my-app-deployment
    spec:
      containers:
      - name: redis
        image: redis:latest
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /srv/www
          name: redis-storage
      - name: my-app
        image: my-app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
      volumes:
      - name: redis-storage
        emptyDir: {}
And here's the Service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  ports:
  - port: 8080
    protocol: TCP
  type: NodePort
  selector:
    app: my-app-deployment
I create the deployment with:
kubectl create -f deployment.yaml
Then, I create the service with kubectl create -f service.yaml
I read the IP with minikube ip and extract the port from the output of kubectl describe service my-app-service.
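As a shortcut, minikube can print the reachable URL directly, combining the node IP and the assigned NodePort in one step:

$ minikube service my-app-service --url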
I agree with all of the previous answers. I'm just trying to make things simpler by executing a single command.
First, create the necessary manifests for redis in a file, say redis.yaml, along with a service to expose it:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: node-redis
spec:
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
  type: NodePort
  selector:
    app: node-redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: node-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: node-redis
    spec:
      containers:
      - name: redis
        image: redis:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        # data volume where redis writes data
        volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: redis-data
---
# data volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
  labels:
    app: node-redis
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Next, put the manifests for your app in another file, say my-app.yaml. Here I added the volume fields so that your app can use the data stored by redis:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: node-redis
spec:
  containers:
  - name: my-app
    image: my-app:latest
    ports:
    - containerPort: 8080
    # data volume from where my-app reads data written by redis
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: false
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: redis-data
Now we can use the following bash file, my-app.sh:
#!/bin/bash
kubectl create -f redis.yaml
# pick the redis pod created by the Deployment above (label app=node-redis)
pod_name=$(kubectl get po -l app=node-redis --no-headers | grep '^redis' | awk '{print $1}')
# check whether the redis server is ready or not
while true; do
  pong=$(kubectl exec -it "$pod_name" -c redis -- redis-cli ping)
  if [[ "$pong" == *"PONG"* ]]; then
    echo ok
    break
  fi
  sleep 1
done
kubectl create -f my-app.yaml
Just run chmod +x my-app.sh; ./my-app.sh to deploy. To get the URL, run minikube service redis --url. You can similarly get the URL for your app; the only thing is that you need a NodePort-type service for your app to access it from outside the cluster.
So, everything is in your hands now.
I would run redis in a separate pod (so your web app doesn't take down the redis server if it crashes).
Here are your redis deployment & service:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
      - name: host-sys
        hostPath:
          path: /sys
      initContainers:
      - name: disable-thp
        image: redis:4.0-alpine
        volumeMounts:
        - name: host-sys
          mountPath: /host-sys
        command: ["sh", "-c", "echo never > /host-sys/kernel/mm/transparent_hugepage/enabled"]
      containers:
      - name: redis
        image: redis:4.0-alpine
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 350m
            memory: 1024Mi
        ports:
        - containerPort: 6379
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    name: redis
  selector:
    app: redis
Since we've exposed a Kubernetes Service, you can then access your redis instance by hostname, or its "service name", which is redis.
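To verify the DNS name from inside the cluster, a throwaway pod works well (redis-test is an arbitrary name; the image matches the deployment above):

$ kubectl run -it --rm redis-test --image=redis:4.0-alpine -- redis-cli -h redis ping
# should print: PONG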
You can check out my kubernetes redis repository at https://github.com/mateothegreat/k8-byexamples-redis. You can simply run make install if you want the easier route.
Good luck and if you're still stuck please reach out!
Yes, you need a separate deployment and service for redis.
Use Kubernetes service discovery; it's built in (kube-dns / CoreDNS).
Use readiness and liveness probes (a minimal sketch follows below).
Yes, you can write a single big YAML file describing all the deployments and services, then:
kubectl apply -f yourfile.yml
Or you can place the YAML in separate files and then do:
kubectl apply -f dir/
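For the probes, a minimal sketch of what could go under the redis container spec above (the tcpSocket check and the timings are illustrative, not tuned):

        readinessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          exec:
            command: ["redis-cli", "ping"]
          initialDelaySeconds: 15
          periodSeconds: 20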
I recommend reading the k8s docs further, but in general, regarding the questions raised above:
Yes, another pod (with the relevant configuration) plus an additional service, depending on your use case; check this great example: https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/
Use Services; read more here: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
There are several ways to manage dependencies (search for "deployment dependencies"), but in general you can put them in the same file with a readiness endpoint and expose them using a Service (see the initContainer sketch below); read more in the link in bullet 2.
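For the dependency point in the last bullet: one common pattern is an initContainer on the app pod that blocks until redis answers, for example (a sketch; the busybox image and the service name redis are assumptions based on the manifests above):

      initContainers:
      - name: wait-for-redis
        image: busybox:1.28
        command: ['sh', '-c', 'until nc -z redis 6379; do echo waiting for redis; sleep 2; done']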
