AKS Azure DevOps Build Agent

I'm trying to build an Azure DevOps Linux build agent in Azure Kubernetes Service.
I created the YAML file and created the secrets to use inside of it.
I applied the file, and my pod is stuck in a "waiting" state with "CreateContainerConfigError".
I ran the command
"kubectl get pod <pod name> -o yaml"
and it states that the secret "vsts" could not be found.
I find this weird because I used "kubectl get secrets" and I see the secrets "vsts-account" and "vsts-token" listed.

The error means the pod references a secret that does not exist: the manifest reads both environment variables from a single secret named "vsts", while you created two secrets named "vsts-account" and "vsts-token". You may check your Kubernetes configuration, which is supposed to look like below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: vsts-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vsts-agent
        version: "0.1"
    spec:
      containers:
      - name: vsts-agent
        image: microsoft/vsts-agent:ubuntu-16.04-docker-18.06.1-ce-standard
        env:
        - name: VSTS_ACCOUNT
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_ACCOUNT
        - name: VSTS_TOKEN
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_TOKEN
        - name: VSTS_POOL
          value: dockerized-vsts-agents
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
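If you prefer to keep the manifest as-is, you can create the single "vsts" secret it expects instead of the two separate ones. A minimal sketch (the account name and PAT values are placeholders you fill in):

kubectl create secret generic vsts \
  --from-literal=VSTS_ACCOUNT=<your-account-name> \
  --from-literal=VSTS_TOKEN=<your-personal-access-token>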
You may follow the blog below to see whether it helps you:
https://mohitgoyal.co/2019/01/10/run-azure-devops-private-agents-in-kubernetes-clusters/


Azure Kubernetes : Azure Disks or Azure Files as data volumes?

I have an Azure Kubernetes cluster and I need to mount a data volume for an application, as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-db-password
              key: db-password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: usermanagement-dbcreation-script
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: azure-managed-disk-pvc
      - name: usermanagement-dbcreation-script
        configMap:
          name: usermanagement-dbcreation-script
I see that there are two options to create the Persistent Volume: based either on Azure Disks or on Azure Files.
I want to know what the difference is between Azure Disks and Azure Files with respect to Persistent Volumes in Azure Kubernetes, and when I should use Azure Disks vs Azure Files.
For something like MySQL (which needs exclusive access to its files) you are better off using Azure Disks. That is pretty much a regular disk attached to the pod, whereas Azure Files is mostly meant for workloads that need ReadWriteMany access rather than ReadWriteOnce.
https://learn.microsoft.com/en-us/azure/aks/concepts-storage
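For example, a minimal sketch of a disk-backed claim that would satisfy the claimName in your Deployment, assuming the built-in managed-csi storage class that AKS provides:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce          # an Azure Disk can only be attached to one node at a time
  storageClassName: managed-csi
  resources:
    requests:
      storage: 5Gi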

Pod CrashLoopBackOff error with automountServiceAccountToken set to false

Hi Kubernetes Experts,
I was using the following ServiceAccount creation config:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
and the following Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
Things were working fine. Now I want to make my pod more secure by setting automountServiceAccountToken to false.
I changed my ServiceAccount creation and Deployment configs:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
automountServiceAccountToken: false
Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
      automountServiceAccountToken: false
After setting this, my scheduler pod does not come up and goes into CrashLoopBackOff.
Error:
I0325 17:37:50.304810 1 flags.go:33] FLAG: --write-config-to=""
I0325 17:37:50.891504 1 serving.go:319] Generated self-signed cert in-memory
W0325 17:37:51.168023 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:37:51.168064 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0325 17:37:51.168072 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0325 17:37:51.168089 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:37:51.168102 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
W0325 17:37:51.168111 1 options.go:298] Neither --kubeconfig nor --master was specified. Using default API client. This might not work.
invalid configuration: no configuration has been provided
I believe we need to configure something more along with automountServiceAccountToken: false.
Can someone point me to the additional configurations needed to use automountServiceAccountToken: false?
Configure Service Accounts for Pods
You can access the API from inside a pod using automatically mounted service account credentials.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account or for a particular pod.
So, when you are creating a ServiceAccount and a Deployment like in your example yaml files, credentials for accessing the Kubernetes API are not automatically mounted to the Pod. But your k8s Deployment 'my-scheduler' requires them to access the API.
You can test your ServiceAccount with a dummy Deployment of nginx, for example (see the sketch below); it will work without the mounted credentials.
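A minimal sketch of such a test Deployment using the same ServiceAccount; the name and image are placeholders for a workload that never calls the API:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-sa-test
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-sa-test
  template:
    metadata:
      labels:
        app: nginx-sa-test
    spec:
      serviceAccountName: my-scheduler
      automountServiceAccountToken: false   # no token is mounted, and nginx never needs one
      containers:
      - name: nginx
        image: nginx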
Also, if you create a ServiceAccount like in your example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
automountServiceAccountToken: false
You can manually mount the API credentials like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler
  namespace: kube-system
  ...
spec:
  ...
  template:
    metadata:
      labels:
        app: my-scheduler
    spec:
      containers:
      - image: <YOUR_IMAGE>
        imagePullPolicy: Always
        name: my-scheduler
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access
          readOnly: true
      serviceAccountName: my-scheduler
      volumes:
      - name: kube-api-access
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
See the Managing Service Accounts link for more information.

Finally got Key Vault integrated with AKS... but not clear what I need to do if anything after that to read into env vars

The documentation is a bit confusing; there are two sets:
https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes
https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/pod-identity-mode/
At any rate, I'm able to do the following to see that secrets are in the Pod:
kubectl exec -it nginx-secrets-store-inline -- ls /mnt/secrets-store/
kubectl exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/secret1
This is basically where the documentation and tutorials I've seen end.
Cool... but what needs to be done to get them into the environment variables of the application running in the Pod?
For example, this is how my API deployment was set up back when I was doing kubectl create secret generic app-secrets --from-literal=PGUSER=$pguser...:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment-dev
  namespace: production
spec:
  replicas: 3
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
      - name: api
        image: api
        ports:
        - containerPort: 5000
        env:
        - name: PGDATABASE
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGDATABASE
        - name: PGHOST
          value: postgres-cluster-ip-service-dev
        - name: PGPORT
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGPORT
        - name: PGUSER
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGUSER
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGPASSWORD
        volumeMounts:
        - mountPath: /mnt/file-storage
          name: file-storage-dev
          subPath: file-storage
      volumes:
      - name: file-storage-dev
        persistentVolumeClaim:
          claimName: file-storage-dev
---
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service-dev
  namespace: development
spec:
  type: ClusterIP
  selector:
    component: api
  ports:
  - port: 5000
    targetPort: 5000
What needs to be done now with all of these?
env:
- name: PGDATABASE
  valueFrom:
    secretKeyRef:
      name: k8stut-dev-secrets
      key: PGDATABASE
The CSI secret store driver is a container storage interface driver - it can only mount to files.
For Postgres specifically, you can use the Docker image's file-based secret environment variables to point to the path where you mount the secret, and it will read the value from the file instead. This works by appending _FILE to the variable name.
Per that document: Currently, this is only supported for POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.
- name: POSTGRES_DB_FILE
  value: /mnt/secrets-store/db-secret
In the general case, if you need the secrets in environment variables, I would typically use a startup script in the container to read the CSI-mounted secrets and export them. If it's a custom container this is usually easy enough to add; if it's a standard container you may be able to override the command with a small set of shell commands that export the appropriate variables by reading the files before calling whatever the normal ENTRYPOINT of the container would have been (see the sketch below).
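A minimal sketch of that command override, assuming the secrets are mounted under /mnt/secrets-store; the file names and the final exec line are placeholders for your image's actual secrets and entrypoint:

containers:
- name: api
  image: api
  command: ["/bin/sh", "-c"]
  args:
    - |
      # read the CSI-mounted files into environment variables
      export PGUSER="$(cat /mnt/secrets-store/pguser)"
      export PGPASSWORD="$(cat /mnt/secrets-store/pgpassword)"
      # hand off to the image's original entrypoint (placeholder)
      exec node server.js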
The answer above by Patrick helped, but is not fully correct. AKS also supports "syncing" Key Vault secrets into Kubernetes Secrets, which can then be used as environment variables.
See the Microsoft docs on how to set up syncing of a secret into Kubernetes:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#sync-mounted-content-with-a-kubernetes-secret
And this article shows how you can reference the secret into an environment variable:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#set-an-environment-variable-to-reference-kubernetes-secrets
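Roughly, per those docs, you declare the Kubernetes Secret to create in the SecretProviderClass via secretObjects and then keep referencing it from env exactly as in your Deployment above. A minimal sketch with placeholder vault, tenant, and object names (note the Secret only exists while at least one pod mounts the CSI volume):

apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-sync
spec:
  provider: azure
  secretObjects:                    # mirror mounted objects into a Kubernetes Secret
    - secretName: k8stut-dev-secrets
      type: Opaque
      data:
        - objectName: pguser        # must match an objectName under parameters.objects
          key: PGUSER
  parameters:
    keyvaultName: <your-keyvault>
    tenantId: <your-tenant-id>
    objects: |
      array:
        - |
          objectName: pguser
          objectType: secret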

Data from volumes as kubernetes secrets

I have an application that starts with docker-compose up. Some SSH credentials are provided via a JSON file, in a volume on the host machine. I want to run the app in Kubernetes; how can I provide the credentials using Kubernetes secrets? My JSON file looks like:
{
  "HOST_USERNAME": "myname",
  "HOST_PASSWORD": "mypass",
  "HOST_IP": "myip"
}
I created a file named mysecret.yml with the base64-encoded values and applied it in Kubernetes:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  HOST_USERNAME: c2gaQ=
  HOST_PASSWORD: czMxMDIsdaf0NjcoKik=
  HOST_IP: MTcyLjIeexLjAuMQ==
How do I have to write the volumes in deployment.yml in order to use the secret properly?
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
This is an example of using a secret as a volume; you can use the same approach in a Deployment's pod template.
Please refer to official kubernetes documentation for further info:
https://kubernetes.io/docs/concepts/configuration/secret/
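With that mount, each key in the secret becomes a file under /etc/foo, which you can verify from the running pod (names taken from the example above):

kubectl exec mypod -- ls /etc/foo
kubectl exec mypod -- cat /etc/foo/HOST_USERNAME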

Copy file from cron job's pod to local directory in AKS

I have created a cron job which runs every 60 minutes. In the job's container I have mounted an emptyDir volume named detailed-logs. In my container I am writing a CSV file at path detailed-logs\logs.csv.
I am trying to copy this file from the pod to my local machine using kubectl cp podname:detailed-logs\logs.csv \k8slogs\logs.csv, but it throws the error:
path "detailed-logs\logs.csv" not found (no such file or directory).
Once the job runs successfully, the pod created by the job goes to the Completed state; can this be the issue?
The file you are referring to is not going to persist once your pod completes. What you can do is make a backup of the file while the cron job is running. The two solutions I can suggest are to either attach a persistent volume to the job pod or upload the file somewhere while the job runs.
USE A PERSISTENT VOLUME
Here you can create a PV through a quick ReadWriteOnce PersistentVolumeClaim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Then you can mount it onto the pod using the following:
...
volumeMounts:
- name: persistent-storage
  mountPath: /detailed-logs
volumes:
- name: persistent-storage
  persistentVolumeClaim:
    claimName: my-pvc
...
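Once the job has written logs.csv onto that volume, you can copy it out through any pod that mounts the same PVC, for example (the pod name and local path are placeholders):

kubectl cp <pod-name>:/detailed-logs/logs.csv ./k8slogs/logs.csv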
UPLOAD FILE
The way I do it is to run the job in a container that has the aws-cli installed and then store my file on AWS S3; you can choose another platform:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-sh
data:
  backup.sh: |-
    #!/bin/bash
    aws s3 cp /myText.txt s3://bucketName/
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: s3-backup
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: aws-kubectl
            image: expert360/kubectl-awscli:v1.11.2
            env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: s3-creds
                  key: access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: s3-creds
                  key: secret-access-key
            command:
            - /bin/sh
            - -c
            args: ["sh /backup.sh"]
            volumeMounts:
            - name: backup-sh
              mountPath: /backup.sh
              readOnly: true
              subPath: backup.sh
          volumes:
          - name: backup-sh
            configMap:
              name: backup-sh
          restartPolicy: Never
