Accessing Kubernetes worker node labels from the containers/pods - Azure

How can I access Kubernetes worker node labels from a container/pod running in the cluster?
Labels are set on the worker node, as the YAML output of this kubectl command run against an Azure AKS worker node shows:
$ kubectl get nodes aks-agentpool-39829229-vmss000000 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2021-10-15T16:09:20Z"
  labels:
    agentpool: agentpool
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: Standard_DS2_v2
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: eastus
    failure-domain.beta.kubernetes.io/zone: eastus-1
    kubernetes.azure.com/agentpool: agentpool
    kubernetes.azure.com/cluster: xxxx
    kubernetes.azure.com/mode: system
    kubernetes.azure.com/node-image-version: AKSUbuntu-1804gen2containerd-2021.10.02
    kubernetes.azure.com/os-sku: Ubuntu
    kubernetes.azure.com/role: agent
    kubernetes.azure.com/storageprofile: managed
    kubernetes.azure.com/storagetier: Premium_LRS
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: aks-agentpool-39829229-vmss000000
    kubernetes.io/os: linux
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    node.kubernetes.io/instance-type: Standard_DS2_v2
    storageprofile: managed
    storagetier: Premium_LRS
    topology.kubernetes.io/region: eastus
    topology.kubernetes.io/zone: eastus-1
  name: aks-agentpool-39829229-vmss000000
  resourceVersion: "233717"
  selfLink: /api/v1/nodes/aks-agentpool-39829229-vmss000000
  uid: 0241eb22-4d1b-4d65-870f-fcc51dac1c70
Note: The pod/container that I have runs with non-root access and does not have a privileged user.
Is there a way to access these worker node labels from within the pod?

In the AKS cluster,
Create a namespace like:
kubectl create ns get-labels
Create a Service Account in the namespace like:
kubectl create sa get-labels -n get-labels
Create a ClusterRole like:
kubectl create clusterrole get-labels-clusterrole --resource=nodes --verb=get,list
Create a ClusterRoleBinding (nodes are cluster-scoped, so a namespaced RoleBinding would not grant access to them) like:
kubectl create clusterrolebinding get-labels-clusterrolebinding --clusterrole get-labels-clusterrole --serviceaccount get-labels:get-labels
Run a pod in the namespace you created like:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: get-labels
  namespace: get-labels
spec:
  serviceAccountName: get-labels
  containers:
  - image: centos:7
    name: get-labels
    command:
    - /bin/bash
    - -c
    - tail -f /dev/null
EOF
Execute a shell in the running container like:
kubectl exec -it get-labels -n get-labels -- bash
Install the jq tool in the container:
yum install epel-release -y && yum update -y && yum install jq -y
Set up shell variables:
# API Server Address
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
If you want to get a list of all nodes and their corresponding labels, then use the following command:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/nodes | jq '.items[].metadata | {name,labels}'
Otherwise, if you want the labels of one particular node, use:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/nodes/<nodename> | jq '.metadata.labels'
Replace <nodename> with the name of the intended node.
N.B. You could instead install the jq tool in the Dockerfile from which your container image is built, and use environment variables for the shell variables. We have done neither in this answer in order to show how the method works step by step.
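If you only need the labels of the node a pod is running on, you can also inject that node's name into the pod with the downward API and use it in the curl above. A minimal sketch based on the same pod (the NODE_NAME variable name is arbitrary):
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: get-labels
  namespace: get-labels
spec:
  serviceAccountName: get-labels
  containers:
  - image: centos:7
    name: get-labels
    command:
    - /bin/bash
    - -c
    - tail -f /dev/null
    env:
    # Downward API: expose the name of the node this pod is scheduled on
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
EOF
Inside the container you can then query ${APISERVER}/api/v1/nodes/${NODE_NAME} instead of hard-coding a node name.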

Related

run docker inside docker container in AKS [duplicate]

We have been tasked with setting up a container-based Jenkins deployment, and there is strong pressure to do this in AKS. Our Jenkins needs to be able to build other containers. Normally I'd handle this with a docker-in-docker approach by mounting /var/run/docker.sock & /usr/bin/docker into my running container.
I do not know if this is possible in AKS or not. Some forum posts on GitHub suggest that host-mounting is possible but broken in the latest AKS release. My limited experimentation with a Helm chart was met with this error:
Error: release jenkins4 failed: Deployment.apps "jenkins" is invalid:
[spec.template.spec.initContainers[0].volumeMounts[0].name: Required
value, spec.template.spec.initContainers[0].volumeMounts[0].name: Not
found: ""]
The change I made was to update the volumeMounts: section of jenkins-master-deployment.yaml and include the following:
- type: HostPath
  hostPath: /var/run/docker.sock
  mountPath: /var/run/docker.sock
Is what I'm trying to do even possible based on AKS security settings, or did I just mess up my chart?
If it's not possible to mount the docker socket into a container in AKS, that's fine, I just need a definitive answer.
Thanks,
Well, we did this a while back for VSTS (cloud TFS, now called Azure DevOps) build agents, so it should be possible. The way we did it was also by mounting docker.sock.
The relevant part for us was:
... container spec ...
  volumeMounts:
  - mountPath: /var/run/docker.sock
    name: docker-volume
volumes:
- name: docker-volume
  hostPath:
    path: /var/run/docker.sock
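Pieced together, a minimal pod sketch with that mount could look like the following (the pod and image names are placeholders; the image must contain the docker CLI, and the container user must be able to access the socket):
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent   # placeholder name
spec:
  containers:
  - name: jenkins-agent
    image: jenkins/inbound-agent   # example image; yours must ship the docker CLI
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-volume
  volumes:
  - name: docker-volume
    hostPath:
      path: /var/run/docker.sock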
I have achieved the requirement using the following manifests.
Our k8s manifest file carries this securityContext under the pod definition:
securityContext:
  privileged: true
In our Dockerfile we install Docker-inside-Docker like this:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install curl wget -y
RUN apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release -y
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
# final lines of the Dockerfile
COPY ./agent_startup.sh .
RUN chmod +x /agent_startup.sh
# Note: only the last CMD takes effect, so the container starts agent_startup.sh
CMD ["/usr/sbin/init"]
CMD ["./agent_startup.sh"]
Content of the agent_startup.sh file:
#!/bin/bash
echo "DOCKER STARTS HERE"
service --status-all
service docker start
service docker start
docker version
docker ps
echo "DOCKER ENDS HERE"
sleep 100000
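Once the Dockerfile and agent_startup.sh are in place, a typical build-and-push flow to the registry used in the deployment below might look like this (registry and image names are just the examples from this answer):
az acr login --name myecr-repo
docker build -t myecr-repo.azurecr.io/buildagent .
docker push myecr-repo.azurecr.io/buildagent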
Sample k8s file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: build-agent
  labels:
    app: build-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: build-agent
  template:
    metadata:
      labels:
        app: build-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: build-agent
        image: myecr-repo.azurecr.io/buildagent
        securityContext:
          privileged: true
Once the Dockerized agent pool was up, the Docker daemon was running inside the container.
My kubectl version:
PS D:\Temp\temp> kubectl.exe version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.22.6
WARNING: version difference between client (1.25) and server (1.22) exceeds the supported minor version skew of +/-1
pod shell output:
root@**********-bcd967987-52wrv:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Disclaimer: Our Kubernetes cluster version is 1.22 and the base image is Ubuntu 18.04; this was tested only to check that Docker-inside-Docker runs, and the agent is not registered with Azure DevOps. You can modify the startup script according to your needs.

Certificate and service token in GitLab pipeline for Kubernetes service

I am a neophyte, and I'm trying to configure my project on GitLab to integrate it with a Kubernetes cluster infrastructure pipeline.
During configuration, GitLab asked for a certificate and a token. Since Kubernetes is deployed on Azure, how can I create/retrieve the required certificate and token?
Which user/secret in the Kubernetes service does it refer to?
You can get the default value of the CA certificate using the steps below:
CA Certificate:
The CA certificate is simply the Kubernetes cluster certificate that we use in the kubeconfig file for authenticating to the cluster.
Connect to the AKS cluster: az aks get-credentials --resource-group <RG> --name <KubeName>
Run kubectl get secrets; the output will include a default token secret whose name you can copy.
Run kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode to get the certificate, which you can copy and use when setting up the runner.
Token:
The token is that of a service account with cluster-admin permissions, which GitLab will use to access the AKS cluster. If such an account was not created earlier, you can create a new admin service account with the steps below:
Create a YAML file with the contents below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin
  namespace: kube-system
Run kubectl apply -f <filename>.yaml to apply the file and bind the service account to the cluster.
Run kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}') to get the token for the gitlab-admin service account created and bound in the previous step. You can copy the token value and use it in the runner settings.
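Note: on Kubernetes v1.24 and newer, a token secret is no longer created automatically for a service account, so the grep above may return nothing. In that case (assuming a recent enough kubectl) you can request a token directly, for example:
kubectl -n kube-system create token gitlab-admin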

Create kubernetes env var secrets from .env file

I have a nodejs application which stores variables in environment variables.
I'm using the dotenv module, so I have a .env file that looks like:
VAR1=value1
VAR2=something_else
I'm currently setting up a BitBucket Pipeline to auto deploy this to a Kubernetes cluster.
I'm not very familiar with kubernetes secrets, though I'm reading up on them.
I'm wondering:
Is there an easy way to pass all of the environment variables I have defined in my .env file to a Docker container / Kubernetes deployment so they are available in the pods my app is running in?
I'm hoping for an example secrets.yml file or similar which takes everything from .env and turns it into environment variables in the container. But it could also be done at the BitBucket pipeline level, or at the Docker container level... I'm not sure.
Step 1: Create a k8s secret with your .env file:
# kubectl create secret generic <secret-name> --from-env-file=<path-to-env-file>
$ kubectl create secret generic my-env-list --from-env-file=.env
secret/my-env-list created
Step 2: Verify the secret:
$ kubectl get secret my-env-list -o yaml
apiVersion: v1
data:
  VAR1: dmFsdWUx
  VAR2: c29tZXRoaW5nX2Vsc2U=
kind: Secret
metadata:
  name: my-env-list
  namespace: default
type: Opaque
Step 3: Add env to your pod's container:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - secretRef:
        name: my-env-list # <---- here
  restartPolicy: Never
Step 4: Run the pod and check whether the env vars exist:
$ kubectl apply -f pod.yaml
pod/demo-pod created
$ kubectl logs -f demo-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=demo-pod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
VAR1=value1 # <------------------------------------------------------here
VAR2=something_else # <-----------------------------------------------here
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
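If you only need a few specific variables instead of the whole file, you can also reference individual keys of the same secret; here is a sketch of the relevant container fragment (reusing the VAR1 key from above):
    env:
    - name: VAR1
      valueFrom:
        secretKeyRef:
          name: my-env-list
          key: VAR1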
You can also use kustomize to generate the secret from the file, as follows:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: kust-example
generatorOptions:
  # Prevents adding a hash at the end of the secret name
  disableNameSuffixHash: true
secretGenerator:
- name: your-secret
  namespace: default
  envs:
  - path/secret.env
Then you just have to run kubectl apply -k <dir>, where <dir> is the directory containing the kustomization.yaml.
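For example, assuming a directory layout like this (names are illustrative), running kubectl apply -k my-app creates the secret:
my-app/
  kustomization.yaml    # the Kustomization shown above
  path/
    secret.env          # your .env file (VAR1=value1, VAR2=something_else)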
You can also use this tool to achieve the same result as the Kustomization, with more control for automating the job:
https://github.com/juliosmelo/dotenv2k8s

Azure Kubernetes - Istio accessing grafana, prometheus, jaeger, kiali & envoy externally?

I have used the following configuration to set up Istio:
cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  # Use the default profile as the base
  # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
  profile: default
  # Enable the addons that we will want to use
  addonComponents:
    grafana:
      enabled: true
    prometheus:
      enabled: true
    tracing:
      enabled: true
    kiali:
      enabled: true
  values:
    global:
      # Ensure that the Istio pods are only scheduled to run on Linux nodes
      defaultNodeSelector:
        beta.kubernetes.io/os: linux
    kiali:
      dashboard:
        auth:
          strategy: anonymous
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF
I want to access services like Grafana, Prometheus, Jaeger, Kiali & Envoy externally, e.g. https://grafana.mycompany.com. How can I do it?
Update:
I have tried the following, however it doesn't work:
kubectl expose service prometheus --type=LoadBalancer --name=prometheus --namespace istio-system
kubectl get svc prometheus-svc -n istio-system -o json
export PROMETHEUS_URL=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):$(kubectl get svc prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')
echo http://${PROMETHEUS_URL}
curl http://${PROMETHEUS_URL}
I got it working as shown below:
kubectl expose service prometheus --type=LoadBalancer --name=prometheus --namespace istio-system
export PROMETHEUS_URL=$(kubectl get svc prometheus-svc -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):$(kubectl get svc prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].port}')
echo http://${PROMETHEUS_URL}
curl http://${PROMETHEUS_URL}
I would assume that this may not be the right way of exposing the services. Instead:
Create an Istio Gateway for https://grafana.mycompany.com
Create an Istio VirtualService to route the request to the corresponding internal service, as sketched below.
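A minimal sketch of that approach for Grafana, assuming the addon Service is named grafana in istio-system and listens on port 3000, and that the default istio-ingressgateway is used (TLS/HTTPS setup omitted):
cat << EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - grafana.mycompany.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: istio-system
spec:
  hosts:
  - grafana.mycompany.com
  gateways:
  - grafana-gateway
  http:
  - route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000
EOF
The same pattern can be repeated for Kiali, Jaeger and Prometheus with their respective service names and ports, plus DNS records pointing the hostnames at the istio-ingressgateway's external IP.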

Kubernetes cluster on Azure Container Service routes to 404, while my Docker image works fine locally?

I've created a Docker image based on CentOS with systemd services enabled and built the image. I created a docker-compose.yml file, ran docker-compose up -d, the image gets built, and I can hit my application at localhost:8080/my/app.
I was using this tutorial - https://carlos.mendible.com/2017/12/01/deploy-your-first-service-to-azure-container-services-aks/.
So after I was done with my Docker image, I pushed it to Azure Container Registry and then created an Azure Container Service (AKS) cluster. When I deploy that same working image to the AKS cluster, I get 404 page not found when trying to access the load balancer's public IP. I got into the Kubernetes pod and tried to curl localhost:8080/my/app, still 404.
I can see my services are up and running without any issue inside the Kubernetes pod, and the configuration is pretty much the same as in my Docker container.
Here is my Dockerfile:
#Dockerfile based on latest CentOS 7 image
FROM c7-systemd-httpd-local
RUN yum install -y epel-release # for nginx
RUN yum install -y initscripts # for old "service"
ENV container docker
RUN yum install -y bind bind-utils
RUN systemctl enable named.service
# webserver service
RUN yum install -y nginx
RUN systemctl enable nginx.service
# Without this, init won't start the enabled services and exec'ing and starting
# them reports "Failed to get D-Bus connection: Operation not permitted".
VOLUME /run /tmp
# Don't know if it's possible to run services without starting this
ENTRYPOINT [ "/usr/sbin/init" ]
VOLUME ["/sys/fs/cgroup"]
RUN mkdir -p /myappfolder
COPY . myappfolder
WORKDIR ./myappfolder
RUN sh ./setup.sh
WORKDIR /
EXPOSE 8080
CMD ["/bin/startServices.sh"]
Here is my docker-compose.yml:
version: '3'
services:
  myapp:
    build: ./myappfolder
    container_name: myapp
    environment:
      - container=docker
    ports:
      - "8080:8080"
    privileged: true
    cap_add:
      - SYS_ADMIN
    security_opt:
      - seccomp:unconfined
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    command: "bash -c /usr/sbin/init"
Here is my Kubernetes YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - args:
        - bash
        - -c
        - /usr/sbin/init
        env:
        - name: container
          value: docker
        name: myapp
        image: myapp.azurecr.io/newinstalled_app:v1
        ports:
        - containerPort: 8080
        args: ["--allow-privileged=true"]
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
          privileged: true
        #command: ["bash", "-c", "/usr/sbin/init"]
      imagePullSecrets:
      - name: myapp-test
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: myapp
I used these commands -
1. az group create --name resource group --location eastus
2. az aks create --resource-group rename --name kubname --node-count 1 --generate-ssh-keys
3. az aks get-credentials --resource-group rename --name kubname
4. kubectl get cs
5. kubectl cluster-info
6. kubectl create -f yamlfile.yml
7. kubectl get po --watch
8. kubectl get svc --watch
9. kubectl get pods
10. kubectl exec -it myapp-66678f7645-2r58w -- bash
entered the pod - it's 404.
11. kubectl get svc -> External IP - 104.43.XX.XXX:8080/my/app -> goes to 404.
But my local docker-compose up -d -> goes into my application.
Am I missing anything?
Figured it out. I needed the load balancer to listen on port 80 and the destination (target) port to be 8080.
That's the only change I made and things started working fine; a sketch of the corrected Service follows below.
Thanks!
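For reference, a sketch of the corrected Service based on that fix (name and selector taken from the manifest in the question):
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - port: 80          # port exposed by the load balancer
    targetPort: 8080  # port the container listens on
  selector:
    app: myapp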
