We have a Spring MVC application deployed in AKS using a Tomcat image.
How can we get values from Secrets mounted as volumes?
Most of the examples point to Spring Boot only.
I am mounting values from the secret store:
kind: Pod
apiVersion: v1
metadata:
  name: nginx
  namespace: default
  labels:
    aadpodidbinding: pod-mi
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/mnt/secrets"
      readOnly: true
  volumes:
  - name: foo
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: spc
I can see the secrets get mounted correctly:
kubectl -n default exec -it nginx -- bash
root@nginx:/# ls /mnt/secrets
service-one-secret
service-two-secret
cat service-one-secret doesn't return anything.
Can anyone suggest a way to read these values from a Spring MVC application?
When you mount a secret as a volume in the container, the secret's data is exposed as files at that path. For example, you create a secret with the command:
kubectl create secret generic basic-secret \
--from-literal=username="jsmith" \
--from-literal=password="mysupersecurepassword"
Then you mount the secret as a volume:
...
spec:
  volumes:
  - name: vol-secret
    secret:
      secretName: basic-secret
  containers:
  ...
    volumeMounts:
    - name: vol-secret
      mountPath: /etc/app/secrets
Then you can see files named username and password in the path /etc/app/secrets, and their values look like this:
/ # ls /etc/app/secrets
password  username
/ # cat /etc/app/secrets/password
mysupersecurepassword
/ # cat /etc/app/secrets/username
jsmith
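If the application would rather read these values as environment variables than open the files, the same Secret can also be referenced with secretKeyRef in the pod spec. A minimal sketch, assuming the basic-secret created above (the container name, image and variable names are placeholders):
containers:
- name: app
  image: my-app-image
  env:
  - name: APP_USERNAME
    valueFrom:
      secretKeyRef:
        name: basic-secret
        key: username
  - name: APP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: basic-secret
        key: password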
Related
I have AKV integrated with AKS using the CSI driver (documentation).
I can access them in the Pod by doing something like:
## show secrets held in secrets-store
kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
## print a test secret 'ExampleSecret' held in secrets-store
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret
I have it working with my PostgreSQL deployment doing the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-prod
  namespace: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
        aadpodidbinding: aks-akv-identity
    spec:
      containers:
      - name: postgres
        image: postgres:13-alpine
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB_FILE
          value: /mnt/secrets-store/PG-DATABASE
        - name: POSTGRES_USER_FILE
          value: /mnt/secrets-store/PG-USER
        - name: POSTGRES_PASSWORD_FILE
          value: /mnt/secrets-store/PG-PASSWORD
        - name: POSTGRES_INITDB_ARGS
          value: "-A md5"
        - name: PGDATA
          value: /var/postgresql/data
        volumeMounts:
        - name: postgres-storage-prod
          mountPath: /var/postgresql
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
      volumes:
      - name: postgres-storage-prod
        persistentVolumeClaim:
          claimName: postgres-storage-prod
      - name: file-storage-prod
        persistentVolumeClaim:
          claimName: file-storage-prod
      - name: secrets-store01-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: aks-akv-secret-provider
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
  - port: 5432
    targetPort: 5432
Which works fine.
Figured all I'd need to do is swap out stuff like the following:
- name: PGPASSWORD
  valueFrom:
    secretKeyRef:
      name: app-prod-secrets
      key: PGPASSWORD
For:
- name: POSTGRES_PASSWORD
  value: /mnt/secrets-store/PG-PASSWORD
# or
- name: POSTGRES_PASSWORD_FILE
  value: /mnt/secrets-store/PG-PASSWORD
And I'd be golden, but that does not turn out to be the case.
In the Pods the value is read in as a plain string (the path), which leaves me confused about two things:
Why does this work for the PostgreSQL deployment but not my Django API, for example?
Is there a way to add them in env: without turning them into Secrets and using secretKeyRef?
The CSI driver injects the secrets into the pod by placing them as files on the file system. There will be one file per secret, where:
- The filename is the name of the secret (or the alias specified in the SecretProviderClass)
- The content of the file is the value of the secret
The CSI driver does not create environment variables from the secrets. The recommended way to expose secrets as environment variables is to let the driver create a Kubernetes Secret and then use the native secretKeyRef construct.
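A minimal sketch of what that can look like with the Azure provider, assuming pod identity is used for access (the Key Vault name, tenant ID, object name and the synced Secret name are placeholders): the secretObjects section tells the driver to mirror the mounted object into a regular Kubernetes Secret while a pod mounts the volume.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aks-akv-secret-provider
spec:
  provider: azure
  secretObjects:
  - secretName: pg-credentials       # hypothetical name of the synced Kubernetes Secret
    type: Opaque
    data:
    - objectName: PG-PASSWORD        # must match an objectName (or objectAlias) below
      key: PG-PASSWORD               # key inside the synced Secret
  parameters:
    usePodIdentity: "true"
    keyvaultName: "<your-key-vault>"   # placeholder
    tenantId: "<your-tenant-id>"       # placeholder
    objects: |
      array:
        - |
          objectName: PG-PASSWORD
          objectType: secret
The synced Secret (pg-credentials in this sketch) can then be referenced from env: with secretKeyRef exactly like the app-prod-secrets example in the question, as long as the CSI volume itself stays mounted in the pod.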
Why does this work for the PostgreSQL deployment but not my Django API, for example?
In your Django API app you set an environment variable POSTGRES_PASSWORD
to the value /mnt/secrets-store/PG-PASSWORD, i.e. you simply say that a certain variable should contain a certain value, nothing more. Thus the variable will contain the path, not the secret value itself.
The same is true for the Postgres deployment: it is just a path in an environment variable. The difference lies in how the Postgres image interprets the value. When an environment variable ending in _FILE is used, Postgres does not expect the variable itself to contain the secret, but rather a path to a file that does. From the docs of the Postgres image:
As an alternative to passing sensitive information via environment
variables, _FILE may be appended to some of the previously listed
environment variables, causing the initialization script to load the
values for those variables from files present in the container. In
particular, this can be used to load passwords from Docker secrets
stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres
Currently, this is only supported for POSTGRES_INITDB_ARGS,
POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.
Is there a way to add them in env: without turning them in secrets and using secretKeyRef?
No, not out of the box. What you could do is have an entrypoint script in your image that reads all the files in your secret folder and sets them as environment variables (the names of the variables being the filenames and the values the file contents) before it starts the main application. That way the application can access the secrets as environment variables.
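A rough sketch of such an entrypoint script, assuming the CSI volume is mounted at /mnt/secrets-store (adjust the path to your mount; dashes in file names are turned into underscores so the variable names are valid):
#!/bin/sh
# Export each mounted secret file as an environment variable named after the
# file, then hand over to the image's original command.
set -e
for f in /mnt/secrets-store/*; do
  [ -f "$f" ] || continue
  name="$(basename "$f" | tr '-' '_')"
  export "$name=$(cat "$f")"
done
exec "$@"
In the Dockerfile this script would be set as the ENTRYPOINT, with the application's original start command as CMD so that exec "$@" runs it.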
I deployed my first container and got this output:
deployment.apps/frontarena-ads-deployment created
but then I saw that container creation was stuck in Waiting status.
I then checked the Pod events using kubectl describe pod frontarena-ads-deployment-5b475667dd-gzmlp and saw a MountVolume error which I cannot figure out why it is thrown:
Warning  FailedMount  9m24s  kubelet  MountVolume.SetUp failed for volume "ads-filesharevolume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume --scope -- mount -t cifs -o username=frontarenastorage,password=mypassword,file_mode=0777,dir_mode=0777,vers=3.0 //frontarenastorage.file.core.windows.net/azurecontainershare /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume
Output: Running scope as unit run-rf54d5b5f84854777956ae0e25810bb94.scope.
mount error(115): Operation now in progress
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Before running the deployment I created a secret for the already created Azure file share, which I referenced within the YAML:
$AKS_PERS_STORAGE_ACCOUNT_NAME="frontarenastorage"
$STORAGE_KEY="mypassword"
kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
In that file share I have folders and files which I need to mount, and I reference azurecontainershare in the YAML.
My YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment
  labels:
    app: frontarena-ads-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      containers:
      - name: frontarena-ads-aks-test
        image: faselect-docker.dev/frontarena/ads:test1
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: ads-filesharevolume
          mountPath: /opt/front/arena/host
      volumes:
      - name: ads-filesharevolume
        azureFile:
          secretName: fa-fileshare-secret
          shareName: azurecontainershare
          readOnly: false
      imagePullSecrets:
      - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
The issue was caused by the AKS cluster and the Azure File Share being deployed in different Azure regions. If they are in the same region you will not have this issue.
The documentation is a bit confusing; there are two sets:
https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes
https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/pod-identity-mode/
At any rate, I'm able to do the following to see that secrets are in the Pod:
kubectl exec -it nginx-secrets-store-inline -- ls /mnt/secrets-store/
kubectl exec -it nginx-secrets-store-inline -- cat /mnt/secrets-store/secret1
This is basically where the documentation and tutorials I've seen end.
Cool... but what needs to be done to get them into environment variables in the application running in the Pod?
For example, this is how my API deployment is set up from when I was doing kubectl create secret generic app-secrets --from-literal=PGUSER=$pguser...:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment-dev
  namespace: production
spec:
  replicas: 3
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      component: api
  template:
    metadata:
      labels:
        component: api
    spec:
      containers:
      - name: api
        image: api
        ports:
        - containerPort: 5000
        env:
        - name: PGDATABASE
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGDATABASE
        - name: PGHOST
          value: postgres-cluster-ip-service-dev
        - name: PGPORT
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGPORT
        - name: PGUSER
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGUSER
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: k8stut-dev-secrets
              key: PGPASSWORD
        volumeMounts:
        - mountPath: /mnt/file-storage
          name: file-storage-dev
          subPath: file-storage
      volumes:
      - name: file-storage-dev
        persistentVolumeClaim:
          claimName: file-storage-dev
---
apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service-dev
  namespace: development
spec:
  type: ClusterIP
  selector:
    component: api
  ports:
  - port: 5000
    targetPort: 5000
What needs to be done now with all of these?
env:
- name: PGDATABASE
  valueFrom:
    secretKeyRef:
      name: k8stut-dev-secrets
      key: PGDATABASE
The CSI secret store driver is a container storage interface driver - it can only mount to files.
For Postgres specifically, you can use the Docker secrets convention of environment variables that point to the path where you're mounting the secret, and it will read the value from the file instead. This works by appending _FILE to the variable name.
Per that document: Currently, this is only supported for POSTGRES_INITDB_ARGS, POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.
- name: POSTGRES_DB_FILE
  value: /mnt/secrets-store/db-secret
In the general case, if you need the secrets in environment variables, I would typically use a startup script in the container to read the CSI mounted secrets and export them. If it's a custom container this is usually easy enough to add; if it's a standard container you may be able to override the command with a small set of shell commands that can export the appropriate variables by reading the files before calling whatever the normal ENTRYPOINT of the container would have been.
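A sketch of what that command override could look like in the Deployment from the question (the entrypoint path and the CSI volume name are placeholders, and the image must contain a shell):
containers:
- name: api
  image: api
  command: ["/bin/sh", "-c"]
  args:
    - |
      export PGPASSWORD="$(cat /mnt/secrets-store/PG-PASSWORD)"
      exec /usr/local/bin/start-api.sh   # placeholder for the image's normal entrypoint
  volumeMounts:
  - name: secrets-store-inline           # hypothetical CSI volume defined under volumes:
    mountPath: /mnt/secrets-store
    readOnly: true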
The answer above by Patrick helped, but is not fully correct. AKS also provides support to "sync" Key Vault secrets into Kubernetes Secrets, which can then be used as environment variables.
See the Microsoft docs on how to set up syncing of a secret into Kubernetes:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#sync-mounted-content-with-a-kubernetes-secret
And this article shows how you can reference the secret in an environment variable:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#set-an-environment-variable-to-reference-kubernetes-secrets
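In short, once the SecretProviderClass contains a secretObjects section that creates a Kubernetes Secret (called, say, akv-synced-secrets; the name and key here are placeholders), the container references it like any other Secret:
env:
- name: PGPASSWORD
  valueFrom:
    secretKeyRef:
      name: akv-synced-secrets
      key: PG-PASSWORD
Note that the driver only creates the synced Secret while at least one pod mounts the corresponding CSI volume, so the volume mount should stay in place.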
I have an application that starts with docker-compose up. Some SSH credentials are provided via a JSON file, in a volume on the host machine. I want to run the app in Kubernetes; how can I provide the credentials using Kubernetes secrets? My JSON file looks like this:
{
"HOST_USERNAME"="myname",
"HOST_PASSWORD"="mypass",
"HOST_IP"="myip"
}
I created a file named mysecret.yml with base64-encoded values and applied it in Kubernetes:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  HOST_USERNAME: c2gaQ=
  HOST_PASSWORD: czMxMDIsdaf0NjcoKik=
  HOST_IP: MTcyLjIeexLjAuMQ==
How do I have to write the volumes in deployment.yml in order to use the secret properly?
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
The above is an example of using a secret as a volume in a Pod. You can use the same approach to define a Deployment.
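For example, a minimal Deployment sketch (the name, labels and image are placeholders) that mounts the same mysecret volume into its pod template:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest            # placeholder image
        volumeMounts:
        - name: foo
          mountPath: "/etc/foo"        # keys appear as files: HOST_USERNAME, HOST_PASSWORD, HOST_IP
          readOnly: true
      volumes:
      - name: foo
        secret:
          secretName: mysecret
If the application expects environment variables rather than a file, the same Secret can instead be exposed with envFrom or secretKeyRef, as described in the documentation linked below.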
Please refer to official kubernetes documentation for further info:
https://kubernetes.io/docs/concepts/configuration/secret/
I have been trying to mount a file share on a Kubernetes pod hosted on AKS in Azure. So far, I have:
1. Successfully created a secret by base64 encoding the name and the key
2. Created a YAML file specifying the correct configuration
3. Applied it using kubectl apply -f azure-file-pod.yaml, which gives me the following error:
Output: mount error: could not resolve address for demo.file.core.windows.net: Unknown error
I have an Azure File Share by the name of demo.
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: azure-files-pod
spec:
  containers:
  - image: microsoft/sample-aks-helloworld
    name: azure
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureFile:
      secretName: azure-secret
      shareName: demo
      readOnly: false
How can this possibly be resolved?