I have a Node.js application which keeps its configuration in environment variables.
I'm using the dotenv module, so I have a .env file that looks like:
VAR1=value1
VAR2=something_else
I'm currently setting up a BitBucket Pipeline to auto deploy this to a Kubernetes cluster.
I'm not very familiar with kubernetes secrets, though I'm reading up on them.
I'm wondering:
Is there an easy way to pass all of the environment variables defined in my .env file to the Docker container / Kubernetes deployment so they are available in the pods my app is running in?
I'm hoping for an example secrets.yml file or similar which takes everything from .env and makes it into environment variables in the container. But it could also be done at the BitBucket pipeline level, or at the Docker container level. I'm not sure...
Step 1: Create a k8s secret with your .env file:
# kubectl create secret generic <secret-name> --from-env-file=<path-to-env-file>
$ kubectl create secret generic my-env-list --from-env-file=.env
secret/my-env-list created
Step 2: Verify the secret:
$ kubectl get secret my-env-list -o yaml
apiVersion: v1
data:
  VAR1: dmFsdWUx
  VAR2: c29tZXRoaW5nX2Vsc2U=
kind: Secret
metadata:
  name: my-env-list
  namespace: default
type: Opaque
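To sanity-check the round-trip, you can decode one of the base64-encoded values and confirm it matches your .env file:

$ kubectl get secret my-env-list -o jsonpath='{.data.VAR1}' | base64 --decode
value1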
Step 3: Add env to your pod's container:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - secretRef:
        name: my-env-list # <---- here
  restartPolicy: Never
Step 4: Run the pod and check whether the env vars exist:
$ kubectl apply -f pod.yaml
pod/demo-pod created
$ kubectl logs -f demo-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=demo-pod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
VAR1=value1 # <------------------------------------------------------here
VAR2=something_else # <-----------------------------------------------here
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
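For the BitBucket Pipelines side, one common pattern (a sketch, assuming kubectl is already authenticated in the pipeline step and the .env file is present in the build) is to recreate the secret idempotently on every deploy, so changes to .env are picked up:

$ kubectl create secret generic my-env-list --from-env-file=.env \
    --dry-run=client -o yaml | kubectl apply -f -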
You can also use kustomize to create a secret from a file as follows:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: kust-example
generatorOptions:
  # Prevents adding hash at the end of the secret name
  disableNameSuffixHash: true
secretGenerator:
- name: your-secret
  namespace: default
  envs:
  - path/secret.env
Then you just have to run `kubectl apply -k <dir>`, where <dir> is the directory containing the kustomization.yaml, as sketched below.
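For reference, the file referenced under envs: uses the same KEY=value format as your .env file, so you can point the generator straight at it (paths here are illustrative):

# path/secret.env
VAR1=value1
VAR2=something_else

$ kubectl apply -k .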
You can also use the following tool to achieve the same result as the Kustomization approach, but with more control for automating the job:
https://github.com/juliosmelo/dotenv2k8s
EDIT: Found the issue. I hadn't installed the addon for the secrets driver. Once I installed it, I was able to make it work.
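(For reference, a sketch of enabling that addon on an existing AKS cluster with the Azure CLI; the cluster and resource group names are placeholders:)

az aks enable-addons --addons azure-keyvault-secrets-provider \
    --name <cluster-name> --resource-group <resource-group>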
I am facing an issue here and I have no idea what else I can try to figure it out.
I have an AKS cluster running a single pod with a basic to-do list web app. Nothing too fancy or complicated. What I am trying to do here is give the AKS cluster permission to access a Key Vault and GET a secret to pass to the pod. The secret is just ASPNETCORE_ENVIRONMENT: Development.
Following the documentation, I used Helm to install the chart:
helm repo add csi-secrets-store-provider-azure https://azure.github.io/secrets-store-csi-driver-provider-azure/charts
helm install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure
I created a Service Principal in Azure:
SERVICE_PRINCIPAL_CLIENT_SECRET=$(az ad sp create-for-rbac --skip-assignment --name sp-aks-keyvault)
I queried the clientId and secret and passed them to my cluster as follows:
kubectl create secret generic secrets-store-creds --from-literal clientid="ClientID" --from-literal clientsecret="Password"
Once everything was set, I applied the following manifests.
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  namespace: default
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: dockerimage-acr
        ports:
        - containerPort: 80
        env:
        - name: ASPNETCORE_ENVIRONMENT
          valueFrom:
            secretKeyRef:
              name: aspenet-environment
              key: environment
        securityContext:
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: secrets-mount
          mountPath: "/mnt/secrets-store"
          readOnly: true
      restartPolicy: Always
      volumes:
      - name: secrets-mount
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "kv-name"
          nodePublishSecretRef: # Only required when using service principal mode
            name: secrets-store-creds
And my secretProvider.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: keyvault-secret-class
  namespace: default
spec:
  provider: azure
  secretObjects:
  - secretName: aspenet-environment
    type: Opaque
    data:
    - objectName: aspnetcoreenvironment
      key: environment
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    userAssignedIdentityID: ""
    keyvaultName: "mykeyvault-name"
    objects: |
      array:
        - |
          objectName: aspnetcoreenvironment
          objectType: secret
          objectVersion: ""
    tenantId: "<Tenant-Id>"
In my Key Vault I gave an access policy to the Service Principal I created, assigned it the GET secret permission, and created a secret called:
Name: aspnetcoreenvironment
value: Development
So far everything went OK, but when I run the deployment and use the command kubectl describe pod <podname> I see the error that prevents the container from starting:
Warning Failed 8s (x3 over 21s) kubelet Error: secret "aspenet-environment" not found
I tried different solutions but nothing works.
If I run the command kubectl get secretproviderclass I get back the provider I created.
As far as I understand, if no workload is requesting a specific secret, I won't be able to find the secret I want to create when I run kubectl get secret.
And this is correct, I guess, because my pod is not starting.
Any help or enlightenment about what I am doing wrong or how to fix it?
Thank you so much guys
EDIT:
After some extra debugging I came across the fact that the volume mount is still required, so I added the volume to the deployment. But this is still giving an error.
The issue, as I realized, is that when I run the command kubectl apply -f secretProviderClass.yml, no secret gets created at all, which is why it is failing.
So I think something is wrong here. Shouldn't applying the SecretProviderClass automatically create a secret?
I have AKV integrated with AKS using CSI driver (documentation).
I can access them in the Pod by doing something like:
## show secrets held in secrets-store
kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
## print a test secret 'ExampleSecret' held in secrets-store
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret
I have it working with my PostgreSQL deployment doing the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-prod
  namespace: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
        aadpodidbinding: aks-akv-identity
    spec:
      containers:
      - name: postgres
        image: postgres:13-alpine
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB_FILE
          value: /mnt/secrets-store/PG-DATABASE
        - name: POSTGRES_USER_FILE
          value: /mnt/secrets-store/PG-USER
        - name: POSTGRES_PASSWORD_FILE
          value: /mnt/secrets-store/PG-PASSWORD
        - name: POSTGRES_INITDB_ARGS
          value: "-A md5"
        - name: PGDATA
          value: /var/postgresql/data
        volumeMounts:
        - name: postgres-storage-prod
          mountPath: /var/postgresql
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
      volumes:
      - name: postgres-storage-prod
        persistentVolumeClaim:
          claimName: postgres-storage-prod
      - name: file-storage-prod
        persistentVolumeClaim:
          claimName: file-storage-prod
      - name: secrets-store01-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: aks-akv-secret-provider
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-prod
  namespace: prod
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
  - port: 5432
    targetPort: 5432
Which works fine.
Figured all I'd need to do is swap out stuff like the following:
- name: PGPASSWORD
  valueFrom:
    secretKeyRef:
      name: app-prod-secrets
      key: PGPASSWORD
For:
- name: POSTGRES_PASSWORD
  value: /mnt/secrets-store/PG-PASSWORD
# or
- name: POSTGRES_PASSWORD_FILE
  value: /mnt/secrets-store/PG-PASSWORD
And I'd be golden, but that does not turn out to be the case.
In the Pods the value is read in as a plain string, which leaves me confused about two things:
Why does this work for the PostgreSQL deployment but not my Django API, for example?
Is there a way to add them in env: without turning them into secrets and using secretKeyRef?
The CSI driver injects the secrets into the pod by placing them as files on the file system. There will be one file per secret, where:
The filename is the name of the secret (or the alias specified in the secret provider class)
The content of the file is the value of the secret.
The CSI driver does not create environment variables from the secrets. The recommended way to add secrets as environment variables is to let the CSI driver create a Kubernetes Secret (via secretObjects) and then use the native secretKeyRef construct.
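A minimal sketch of that wiring, using illustrative names rather than anything taken from your manifests; note that the synced Kubernetes Secret only exists while at least one pod mounts the corresponding CSI volume:

# Excerpt from the SecretProviderClass: ask the driver to also sync a Kubernetes Secret
  secretObjects:
  - secretName: pg-credentials        # name of the synced Kubernetes Secret (illustrative)
    type: Opaque
    data:
    - objectName: PG-PASSWORD         # must match an objectName (or alias) in the objects list
      key: PG-PASSWORD

# Excerpt from the Deployment's container spec: consume the synced Secret as an env var
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: pg-credentials
              key: PG-PASSWORD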
Why does this work for the PostgreSQL deployment but not my Django API, for example?
In your Django API app you set an environment variable POSTGRES_PASSWORD
to the value /mnt/secrets-store/PG-PASSWORD, i.e. you simply say that a certain variable should contain a certain value, nothing more. Thus the variable will contain the path, not the secret value itself.
The same is true for the Postgres deployment: it is just a path in an environment variable. The difference lies in how the Postgres image interprets the value. When an environment variable ending in _FILE is used, Postgres does not expect the variable itself to contain the secret, but rather a path to a file that does. From the docs of the Postgres image:
As an alternative to passing sensitive information via environment
variables, _FILE may be appended to some of the previously listed
environment variables, causing the initialization script to load the
values for those variables from files present in the container. In
particular, this can be used to load passwords from Docker secrets
stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres
Currently, this is only supported for POSTGRES_INITDB_ARGS,
POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.
Is there a way to add them in env: without turning them into secrets and using secretKeyRef?
No, not out of the box. What you could do is have an entrypoint script in your image that reads all the files in your secrets folder and sets them as environment variables (the names of the variables being the filenames and the values the file contents) before it starts the main application. That way the application can access the secrets as environment variables.
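A minimal sketch of such an entrypoint, assuming the secrets are mounted at /mnt/secrets-store and the file names are valid shell variable names (i.e. no dashes):

#!/bin/sh
# Export every mounted secret file as NAME=content, then start the real command.
for f in /mnt/secrets-store/*; do
  [ -f "$f" ] || continue
  export "$(basename "$f")=$(cat "$f")"
done
exec "$@"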
I'm trying to use a key file in my Kubernetes application and I can't seem to find an example of this anywhere. I want to use Firebase authentication in my Node.js backend. When running my application locally I was using the following:
admin.initializeApp({
  credential: admin.credential.cert(SERVICE_ACCOUNT_KEY_PATH),
});
My initial thought was to create a secret from a key file like
$ gcloud container clusters get-credentials my-cluster --zone us-central1-c --project my-project
$ kubectl create secret generic service-account-key \
--from-file=${SERVICE_ACCOUNT_KEY_PATH}
However, since I am creating a secret, there is no file path for me to point SERVICE_ACCOUNT_KEY_PATH at when running my application in a Kubernetes container. What is the correct method for doing this in Kubernetes?
You can save the service account key file inside a secret and mount that secret as a volume in your deployment,
so the key file will be available on the pod's filesystem and your code can read it from there.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
You can check out:
https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys
another example : https://kubernetes.io/docs/concepts/configuration/secret/#use-case-dotfiles-in-a-secret-volume
So the basic idea is to mount the secret as a volume in the deployment so that the key file can be read by the code.
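A minimal sketch tying this back to the question (the mount path, file name, and image are assumptions; the secret name matches the kubectl create secret command above, and the key inside the secret defaults to the basename of the file passed to --from-file):

apiVersion: v1
kind: Pod
metadata:
  name: my-node-app
spec:
  containers:
  - name: my-node-app
    image: my-registry/my-node-app:latest       # placeholder image
    env:
    - name: SERVICE_ACCOUNT_KEY_PATH            # the path your code passes to admin.credential.cert()
      value: /etc/firebase/service-account-key.json   # file name must match the key stored in the secret
    volumeMounts:
    - name: firebase-key
      mountPath: /etc/firebase
      readOnly: true
  volumes:
  - name: firebase-key
    secret:
      secretName: service-account-key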
My requirement is as follows:
A developer creates a branch in Jenkins. Let's say the branch name is "mystory-101".
Now the developer pushes code to this branch.
A Jenkins job starts as soon as a commit is pushed to the branch "mystory-101" and creates a new Docker image for this branch if not created already.
My application is a Node.js based app, so the Docker container starts Node.js and deploys the code from the branch "mystory-101".
After the code is deployed and Node.js is running, I would also like this Node.js app to be accessible via the URL: https://mystory-101.mycompany.com
For this purpose I was reading this https://medium.com/swlh/ci-cd-pipeline-using-jenkins-dynamic-nodes-86ea854ff7a7
but I am not sure how to achieve step #5. Can you please advise how to achieve this automatically?
Reformatting answers from the comments: having a Jenkins installation and a Kubernetes cluster, you may automate your deployments using a Jenkins plugin such as oc or kubernetes, or you may prefer using the kubectl client directly, assuming your agents have that binary.
Not going through the RBAC specifics, you would probably need a ServiceAccount for Jenkins, and use a token (can be found in a Secret named after your ServiceAccount). That ServiceAccount should have enough privileges to create resources in the namespaces you intend to deploy stuff into -- usually the edit ClusterRole, with a namespace-scoped RoleBinding:
kubectl create sa jenkins -n my-namespace
kubectl create rolebinding jenkins-edit \
  --clusterrole=edit \
  --serviceaccount=my-namespace:jenkins \
  --namespace=my-namespace
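To retrieve a token for that ServiceAccount, on recent Kubernetes versions you can use kubectl create token (on older clusters the token lives in an auto-created Secret instead):

kubectl create token jenkins -n my-namespace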
Once Jenkins is done building your image, you would deploy it to Kubernetes, most likely creating a Deployment, a Service, and an Ingress, substituting resource names, namespaces and your ingress requested FQDN to match your requirements.
Prepare your deployment yaml, something like:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-BRANCH
spec:
  selector:
    matchLabels:
      name: app-BRANCH
  template:
    metadata:
      labels:
        name: app-BRANCH
    spec:
      containers:
      - image: my-registry/path/to/image:BRANCH
        [...]
---
apiVersion: v1
kind: Service
metadata:
  name: app-BRANCH
spec:
  selector:
    name: app-BRANCH
  ports:
  [...]
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-BRANCH
spec:
  rules:
  - host: app-BRANCH.my-base-domain.com
    http:
      paths:
      - backend:
          serviceName: app-BRANCH
Then, have your Jenkins agent apply that configuration, substituting values properly:
sed "s|BRANCH|$BRANCH|" deploy.yaml | kubectl apply -n my-namespace -f-
kubectl wait -n my-namespace deploy/app-$BRANCH --for=condition=Available
kubectl logs -n my-namespace deploy/app-$BRANCH --tail=200
I am reading the configuration details from the configmap as shown below
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: akv2k8s-test
  name: akv2k8s-test
data:
  Log_Level: Debug
  Alert_Email: Test#demo.com
pod definition
apiVersion: v1
kind: Pod
metadata:
  name: akv2k8s-test
  namespace: akv2k8s-test
spec:
  containers:
  - name: akv2k8s-env-test
    image: kavija/python-env-variable:latest
    envFrom:
    - configMapRef:
        name: akv2k8s-test
and reading it with Python:
import os
loglevel = os.environ['Log_Level']
alertemail = os.environ['Alert_Email']
print("Running with Log Level: %s, Alert Email:%s" % (loglevel,alertemail))
I want to update the configuration values while deploying; the below command fails:
kubectl apply -f deploy.yaml --env="Log_Level=error"
How do I pass the environment variable while deploying?
Since you want to update the ConfigMap while deploying, the easiest way would be to get the file content using cat and then update it as you wish:
cat configmap.yaml |sed -e 's|Log_Level: Debug|Log_Level: error|' | kubectl apply -f -
In case you want to update the existing ConfigMap, use the kubectl patch command:
kubectl patch configmap/akv2k8s-test -n akv2k8s-test --type merge -p '{"data":{"Log_Level":"error"}}'
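Note that environment variables injected via envFrom are only read when the pod starts, so after changing the ConfigMap you also need to restart the pod for the new values to show up. An alternative sketch is to regenerate the ConfigMap from literals at deploy time (values mirror the manifest above) and then re-create the pod:

kubectl create configmap akv2k8s-test -n akv2k8s-test \
    --from-literal=Log_Level=error \
    --from-literal=Alert_Email=Test#demo.com \
    --dry-run=client -o yaml | kubectl apply -f -
kubectl delete pod akv2k8s-test -n akv2k8s-test --ignore-not-found
kubectl apply -f deploy.yaml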