Data from volumes as kubernetes secrets - node.js

I have an application that starts with docker-compose up. Some SSH credentials are provided via a JSON file, in a volume, on the host machine. I want to run the app in Kubernetes; how can I provide the credentials using Kubernetes secrets? My JSON file looks like:
{
  "HOST_USERNAME": "myname",
  "HOST_PASSWORD": "mypass",
  "HOST_IP": "myip"
}
I created a file named mysecret.yml with base64-encoded values and applied it in Kubernetes:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  HOST_USERNAME: c2gaQ=
  HOST_PASSWORD: czMxMDIsdaf0NjcoKik=
  HOST_IP: MTcyLjIeexLjAuMQ==
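For reference, the base64 values in a Secret's data field can be generated from the plain values like this (shown with the sample username from the JSON above; printf avoids encoding a trailing newline, which would corrupt the value):

```shell
# Encode a value for the Secret's data field.
printf '%s' 'myname' | base64            # -> bXluYW1l

# Verify by decoding it back:
printf '%s' 'bXluYW1l' | base64 --decode # -> myname
```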
How do I have to write the volumes in deployment.yml in order to use the secret properly?

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
The above is an example of using a Secret as a volume. You can use the same approach in a Deployment's pod template.
Please refer to official kubernetes documentation for further info:
https://kubernetes.io/docs/concepts/configuration/secret/
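With the volume above, each key in mysecret becomes its own file under /etc/foo, so the app reads /etc/foo/HOST_USERNAME instead of parsing the original JSON. A local simulation of what the container sees (paths and values are the sample ones from the question):

```shell
# Simulate the mounted secret volume: one file per key,
# file content = the decoded secret value.
mkdir -p /tmp/etc-foo
printf '%s' 'myname' > /tmp/etc-foo/HOST_USERNAME
printf '%s' 'mypass' > /tmp/etc-foo/HOST_PASSWORD
printf '%s' 'myip'   > /tmp/etc-foo/HOST_IP

# The application then reads each credential from its file:
HOST_USERNAME=$(cat /tmp/etc-foo/HOST_USERNAME)
echo "$HOST_USERNAME"    # -> myname
```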

Related

Reading in values from /mnt/secrets-store/ after integration AKV with AKS using CSI Driver

I have AKV integrated with AKS using the CSI driver (documentation).
I can access them in the Pod by doing something like:
## show secrets held in secrets-store
kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
## print a test secret 'ExampleSecret' held in secrets-store
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret
I have it working with my PostgreSQL deployment doing the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-prod
  namespace: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
        aadpodidbinding: aks-akv-identity
    spec:
      containers:
      - name: postgres
        image: postgres:13-alpine
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB_FILE
          value: /mnt/secrets-store/PG-DATABASE
        - name: POSTGRES_USER_FILE
          value: /mnt/secrets-store/PG-USER
        - name: POSTGRES_PASSWORD_FILE
          value: /mnt/secrets-store/PG-PASSWORD
        - name: POSTGRES_INITDB_ARGS
          value: "-A md5"
        - name: PGDATA
          value: /var/postgresql/data
        volumeMounts:
        - name: postgres-storage-prod
          mountPath: /var/postgresql
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
      volumes:
      - name: postgres-storage-prod
        persistentVolumeClaim:
          claimName: postgres-storage-prod
      - name: file-storage-prod
        persistentVolumeClaim:
          claimName: file-storage-prod
      - name: secrets-store01-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: aks-akv-secret-provider
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
  - port: 5432
    targetPort: 5432
Which works fine.
I figured all I'd need to do is swap out stuff like the following:
- name: PGPASSWORD
  valueFrom:
    secretKeyRef:
      name: app-prod-secrets
      key: PGPASSWORD
For:
- name: POSTGRES_PASSWORD
  value: /mnt/secrets-store/PG-PASSWORD
# or
- name: POSTGRES_PASSWORD_FILE
  value: /mnt/secrets-store/PG-PASSWORD
And I'd be golden, but that does not turn out to be the case.
In the Pods the value is read in as a literal string (the path), which leaves me confused about two things:
Why does this work for the PostgreSQL deployment but not my Django API, for example?
Is there a way to add them in env: without turning them into Secrets and using secretKeyRef?
The CSI driver injects the secrets into the pod by placing them as files on the file system. There will be one file per secret, where:
The filename is the name of the secret (or the alias specified in the SecretProviderClass)
The content of the file is the value of the secret.
The CSI driver does not create environment variables from the secrets. The recommended way to expose secrets as environment variables is to let the CSI driver create a Kubernetes Secret and then use the native secretKeyRef construct.
Why does this work for the PostgreSQL deployment but not my Django API, for example?
In your Django API app you set an environment variable POSTGRES_PASSWORD
to the value /mnt/secrets-store/PG-PASSWORD, i.e. you simply say that a certain variable should contain a certain value, nothing more. Thus the variable will contain the path, not the secret value itself.
The same is true for the Postgres deployment: it is just a path in an environment variable. The difference lies in how the Postgres image interprets the value. When an environment variable ending in _FILE is used, Postgres does not expect the variable itself to contain the secret, but rather a path to a file that does. From the docs of the Postgres image:
As an alternative to passing sensitive information via environment
variables, _FILE may be appended to some of the previously listed
environment variables, causing the initialization script to load the
values for those variables from files present in the container. In
particular, this can be used to load passwords from Docker secrets
stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres
Currently, this is only supported for POSTGRES_INITDB_ARGS,
POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.
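What the image's init script effectively does for each _FILE variable can be sketched like this (a simulation; the temp file and the value s3cret are stand-ins, not part of the real image):

```shell
# Simulate a mounted secret file and the _FILE convention:
# if VAR_FILE is set, VAR's value is read from that file.
tmp=$(mktemp)
printf '%s' 's3cret' > "$tmp"
POSTGRES_PASSWORD_FILE="$tmp"

if [ -n "$POSTGRES_PASSWORD_FILE" ]; then
  POSTGRES_PASSWORD=$(cat "$POSTGRES_PASSWORD_FILE")
fi
echo "$POSTGRES_PASSWORD"    # -> s3cret
```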
Is there a way to add them in env: without turning them into Secrets and using secretKeyRef?
No, not out of the box. What you could do is have an entrypoint script in your image that reads all the files in your secrets folder and sets them as environment variables (the variable names being the filenames and the values the file contents) before it starts the main application. That way the application can access the secrets as environment variables.
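A minimal sketch of such an entrypoint (an assumption, not a standard script; it assumes the secrets are mounted under /mnt/secrets-store and that the filenames are valid shell variable names):

```shell
#!/bin/sh
# Hypothetical entrypoint: export every file in the secrets folder as an
# environment variable (name = filename, value = file content), then hand
# over to the real application.
SECRETS_DIR="${SECRETS_DIR:-/mnt/secrets-store}"

for f in "$SECRETS_DIR"/*; do
  [ -f "$f" ] || continue
  name=$(basename "$f")
  export "$name=$(cat "$f")"
done

# Replace the shell with the original application command.
exec "$@"
```

In the image you would set this script as the ENTRYPOINT and keep the original command as CMD, so `"$@"` receives it.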

Sync Azure Key Vault Secret with Kubernetes dockerconfigjson Secret

I am trying to sync an Azure Key Vault Secret with a Kubernetes Secret of type dockerconfigjson by applying the following yaml manifest with the 4 objects Pod, SecretProviderClass, AzureIdentity and AzureIdentityBinding.
All configuration around key vault access and managed identity RBAC rules have been done and proven to work, as I have access to the Azure Key Vault secret from within the running Pod.
But, when applying this manifest, and according to the documentation here, I expect the Kubernetes secret regcred to reflect the Azure Key Vault secret once I create the Pod with the mounted secret volume; however, the Kubernetes secret remains unchanged. I have also tried recreating the Pod in an attempt to trigger the sync, but in vain.
Since this is a very declarative way of configuring this functionality, I am also confused where to look at logs for troubleshooting.
Can someone point me to what I may be doing wrong?
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    aadpodidbinding: webapp
spec:
  containers:
  - name: demo
    image: mcr.microsoft.com/oss/azure/aad-pod-identity/demo:v1.6.3
    volumeMounts:
    - name: web-app-secret
      mountPath: "/mnt/secrets"
      readOnly: true
  nodeSelector:
    kubernetes.io/os: linux
  volumes:
  - name: web-app-secret
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: web-app-secret-provide
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: web-app-secret-provide
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"
    keyvaultName: <key-vault-name>
    objects: |
      array:
        - |
          objectName: registryPassword
          objectType: secret
    tenantId: <tenant-id>
  secretObjects:
  - data:
    - key: .dockerconfigjson
      objectName: registryPassword
    secretName: regcred
    type: kubernetes.io/dockerconfigjson
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: kv-managed-identity
spec:
  type: 0
  resourceID: <resource-id>
  clientID: <client-id>
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: kv-managed-binding
spec:
  azureIdentity: kv-managed-identity
  selector: web-app

Finally got Key Vault integrated with AKS... but not clear what I need to do if anything after that to read into env vars

The documentation is a bit confusing; there are two sets:
https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes
https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/pod-identity-mode/
At any rate, I'm able to do the following to see that secrets are in the Pod:
kubectl exec -it nginx-secrets-store-inline -- ls /mnt/secrets-store/
kubectl exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/secret1
This is basically where the documentation and tutorials I've seen end.
Cool... but what needs to be done to get them into the environment variables of the application running in the Pod?
For example, this is how my API deployment is setup from when I was doing kubectl create secret generic app-secrets --from-literal=PGUSER=$pguser...:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment-dev
  namespace: production
spec:
  replicas: 3
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
      - name: api
        image: api
        ports:
        - containerPort: 5000
        env:
        - name: PGDATABASE
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGDATABASE
        - name: PGHOST
          value: postgres-cluster-ip-service-dev
        - name: PGPORT
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGPORT
        - name: PGUSER
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGUSER
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGPASSWORD
        volumeMounts:
        - mountPath: /mnt/file-storage
          name: file-storage-dev
          subPath: file-storage
      volumes:
      - name: file-storage-dev
        persistentVolumeClaim:
          claimName: file-storage-dev
---
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service-dev
  namespace: development
spec:
  type: ClusterIP
  selector:
    component: api
  ports:
  - port: 5000
    targetPort: 5000
What needs to be done now with all of these?
env:
- name: PGDATABASE
  valueFrom:
    secretKeyRef:
      name: k8stut-dev-secrets
      key: PGDATABASE
The CSI secret store driver is a Container Storage Interface driver: it can only mount secrets as files.
For Postgres specifically, you can use the image's Docker-secrets-style environment variables to point to the path where the secret is mounted, and it will read the value from the file instead. This works by appending _FILE to the variable name.
Per that document: Currently, this is only supported for POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.
- name: POSTGRES_DB_FILE
  value: /mnt/secrets-store/db-secret
In the general case, if you need the secrets in environment variables, I would typically use a startup script in the container to read the CSI mounted secrets and export them. If it's a custom container this is usually easy enough to add; if it's a standard container you may be able to override the command with a small set of shell commands that can export the appropriate variables by reading the files before calling whatever the normal ENTRYPOINT of the container would have been.
The answer above by Patrick helped, but is not fully correct. AKS also supports syncing Key Vault secrets into Kubernetes Secrets, which can then be used as env variables.
See the Microsoft docs on how to set up syncing of a secret into Kubernetes:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#sync-mounted-content-with-a-kubernetes-secret
And this article shows how you can reference the secret in an environment variable:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#set-an-environment-variable-to-reference-kubernetes-secrets

AKS Azure DevOps Build Agent

I'm trying to build an Azure DevOps Linux build agent in Azure Kubernetes Service.
I created the yaml file and created the secrets to use inside of the file.
I applied the file, and my pod is stuck in a "waiting" state with "CreateContainerConfigError". When I run
kubectl get pod <pod name> -o yaml
it states that the secret "vsts" could not be found.
I find this weird because when I run kubectl get secrets I see the secrets "vsts-account" and "vsts-token" listed.
You may check your Kubernetes configuration: the manifest below references a single secret named vsts that contains both the VSTS_ACCOUNT and VSTS_TOKEN keys. Since you created two separate secrets named vsts-account and vsts-token, the secretKeyRef lookups for a secret named vsts fail with exactly this error; either create one secret named vsts with both keys, or point each secretKeyRef at the existing secret names. The configuration is supposed to be like below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: vsts-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vsts-agent
        version: "0.1"
    spec:
      containers:
      - name: vsts-agent
        image: microsoft/vsts-agent:ubuntu-16.04-docker-18.06.1-ce-standard
        env:
        - name: VSTS_ACCOUNT
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_ACCOUNT
        - name: VSTS_TOKEN
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_TOKEN
        - name: VSTS_POOL
          value: dockerized-vsts-agents
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
You may follow the blog below to see whether it helps you:
https://mohitgoyal.co/2019/01/10/run-azure-devops-private-agents-in-kubernetes-clusters/

Mount Error for Block Storage on Azure kubernetes

I have been trying to mount a file share on a Kubernetes pod hosted on AKS in Azure. So far, I have:
1. Successfully created a secret by base64-encoding the storage account name and key
2. Created a yaml specifying the correct configuration
3. Applied it using kubectl apply -f azure-file-pod.yaml, which gives me the following error:
Output: mount error: could not resolve address for
demo.file.core.windows.net: Unknown error
I have an Azure File Share by the name of demo.
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: azure-files-pod
spec:
  containers:
  - image: microsoft/sample-aks-helloworld
    name: azure
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureFile:
      secretName: azure-secret
      shareName: demo
      readOnly: false
How can this possibly be resolved?
