Use Key File with Application Running on Kubernetes Cluster - node.js

I'm trying to use a key file in my Kubernetes application and I can't seem to find an example of this anywhere. I want to use Firebase authentication in my NodeJS backend. When running my application locally I was using the following
admin.initializeApp({
  credential: admin.credential.cert(SERVICE_ACCOUNT_KEY_PATH),
});
My initial thought was to create a secret from the key file, like:
$ gcloud container clusters get-credentials my-cluster --zone us-central1-c --project my-project
$ kubectl create secret generic service-account-key \
--from-file=${SERVICE_ACCOUNT_KEY_PATH}
However, since I am creating a secret, there is no file path I can set SERVICE_ACCOUNT_KEY_PATH to when my application runs in a Kubernetes container. What is the correct way to do this in Kubernetes?

You can store the service account key file in a Secret and mount that Secret as a volume in the pod spec (the pod template of your Deployment). The Secret's contents then become available to your pod as files at the mount path.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: nginx
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
You can check out:
https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys
Another example: https://kubernetes.io/docs/concepts/configuration/secret/#use-case-dotfiles-in-a-secret-volume
So the basic idea is to mount the Secret as a volume in the deployment's pod template, and the code reads the key file from the mount path.
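As a minimal sketch of how the pieces fit together (the key file name service-account-key.json and the secret/volume names below are placeholders matching the examples above): each key in the Secret becomes a file under the mount path, and the application only needs the path to that file.
# create the Secret from the local key file (the file name becomes the key)
kubectl create secret generic mysecret --from-file=service-account-key.json

# once the pod is running, the file is visible inside the container
kubectl exec mypod -- ls /etc/foo
# service-account-key.json

# point the app at the mounted file, e.g. keep using the same env var
# SERVICE_ACCOUNT_KEY_PATH=/etc/foo/service-account-key.json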

Related

AKS SecretProviderClass secret not found

EDIT: Found the issue. I hadn't installed the add-on for the secret driver. Once I installed it, I was able to make it work.
I am facing an issue here and I have no idea what else I can try to figure it out.
I have an AKS cluster running a single pod with a basic to-do list web app. Nothing too fancy or complicated. What I am trying to do here is give the AKS cluster permission to access a Key Vault and GET a secret to pass to the pod. The secret is just ASPNETCORE_ENVIRONMENT: Development.
Following the documentation, I used Helm to install the chart:
helm repo add csi-secrets-store-provider-azure https://azure.github.io/secrets-store-csi-driver-provider-azure/charts
helm install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure
I created a Service Principal in Azure:
SERVICE_PRINCIPLE_CLIENT_SECRET=$(az ad sp create-for-rbac --skip-assignment --name sp-aks-keyvault)
I queried the clientId and secret and passed them to my cluster as follows:
kubectl create secret generic secrets-store-creds --from-literal clientid="ClientID" --from-literal clientsecret="Password"
Once everything was set, I applied these manifests.
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  namespace: default
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dockerimage-acr
          ports:
            - containerPort: 80
          env:
            - name: ASPNETCORE_ENVIRONMENT
              valueFrom:
                secretKeyRef:
                  name: aspenet-environment
                  key: environment
          securityContext:
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: secrets-mount
              mountPath: "/mnt/secrets-store"
              readOnly: true
      restartPolicy: Always
      volumes:
        - name: secrets-mount
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "kv-name"
            nodePublishSecretRef: # Only required when using service principal mode
              name: secrets-store-creds
And my secretProvider.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: keyvault-secret-class
  namespace: default
spec:
  provider: azure
  secretObjects:
    - secretName: aspenet-environment
      type: Opaque
      data:
        - objectName: aspnetcoreenvironment
          key: environment
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    userAssignedIdentityID: ""
    keyvaultName: "mykeyvault-name"
    objects: |
      array:
        - |
          objectName: aspnetcoreenvironment
          objectType: secret
          objectVersion: ""
    tenantId: "<Tenant-Id>"
In my Key Vault I added an access policy for the Service Principal I created, assigned the secret permission GET, and created a secret called:
Name: aspnetcoreenvironment
value: Development
So far everything went OK, but when I run the deployment and use the command kubectl describe pod <podname>, I see the error that prevents the container from starting:
Warning Failed 8s (x3 over 21s) kubelet Error: secret "aspenet-environment" not found
I tried different solutions but nothing worked.
If I run the command kubectl get secretproviderclass, I get back the provider class I created.
As far as I understand, if nothing is consuming a specific secret, I won't find that secret when I run the command kubectl get secret.
And this seems to be the case, I guess, because my pod is not starting.
Any help or enlightenment about what I am doing wrong, or how to fix it?
Thank you so much guys
EDIT:
With some extra debugging I came across the fact that the volume mount is still required, so I added the volume to the deployment. But it is still giving an error.
The issue, as I realized, is that when I run kubectl apply -f secretProviderClass.yml, no secret gets created at all, which is why it is failing.
So I think something is wrong here. Shouldn't applying the SecretProviderClass automatically create a secret?
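(Per the edit at the top of this question, the root cause was the missing secrets-store driver add-on. A hedged sketch of how to check for the driver and enable the AKS-managed add-on; the label selector, cluster name, and resource group below are assumptions.)
# check whether the Secrets Store CSI driver pods are running (label may vary by chart version)
kubectl get pods -n kube-system -l app=secrets-store-csi-driver

# enable the AKS-managed Key Vault provider add-on
az aks enable-addons \
  --addons azure-keyvault-secrets-provider \
  --name my-aks-cluster \
  --resource-group my-resource-group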

Secret is not being created in AKS after fetching it with the CSI driver

Using this document as a reference, https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-nginx-tls, I'm trying to fetch TLS secrets from AKV into AKS pods.
Initially I created and configured the CSI driver using a User Assigned Managed Identity.
I have performed the following steps:
Created an AKS cluster with 1 nodepool.
Created an AKV (Azure Key Vault).
Created a user assigned managed identity and assigned it to the nodepool, i.e. to the VMSS created for AKS.
Installed the CSI driver Helm chart in the AKS "kube-system" namespace and completed all the requirements for this operation.
Created the TLS certificate and key.
Using the TLS certificate and key, created a .pfx file.
Uploaded that .pfx file to the AKV certificates, named "ingresscert".
Created a new namespace in AKS named "ingress-test".
Deployed the SecretProviderClass in that namespace, as follows:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-tls
spec:
  provider: azure
  secretObjects: # secretObjects defines the desired state of synced K8s secret objects
    - secretName: ingress-tls-csi
      type: kubernetes.io/tls
      data:
        - objectName: ingresscert
          key: tls.key
        - objectName: ingresscert
          key: tls.crt
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "7*******-****-****-****-***********1"
    keyvaultName: "*****-*****-kv" # the name of the AKV instance
    objects: |
      array:
        - |
          objectName: ingresscert
          objectType: secret
    tenantId: "e*******-****-****-****-***********f" # the tenant ID of the AKV instance
Deployed the nginx-ingress-controller Helm chart in the same namespace, where the certificates are bound to the application.
Deployed the BusyBox deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-one
  labels:
    app: busybox-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-one
  template:
    metadata:
      labels:
        app: busybox-one
    spec:
      containers:
        - name: busybox
          image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
          command:
            - "/bin/sleep"
            - "10000"
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-tls"
---
apiVersion: v1
kind: Service
metadata:
  name: busybox-one
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: busybox-one
Checked whether the secret was created by using the command:
kubectl get secret -n <namespaceName>
One thing to note here: if I attach a shell to the BusyBox pod and go to the mount path I provided for the secrets, I can see that the secrets are successfully fetched there. But these secrets do not show up in the cluster's secret list.
I have troubleshooted AKS, the Key Vault, and the manifest files but found nothing.
If there is anything I have missed, or anyone has a solution for this, please let me know.
Thanks in advance..!!!
I added this as a new answer because the formatting was bad in the comments:
As you are using the Helm chart, you have to activate the secret sync in the values.yaml of the Helm chart:
secrets-store-csi-driver:
  syncSecret:
    enabled: true
I would still recommend using csi-secrets-store-provider-azure as an AKS add-on instead of the Helm chart.
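A hedged sketch of both routes (release, cluster, and resource-group names are placeholders; the --set path mirrors the values.yaml snippet above):
# enable secret syncing when installing/upgrading the Helm chart
helm upgrade --install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure \
  --set secrets-store-csi-driver.syncSecret.enabled=true

# or use the AKS-managed add-on instead of the Helm chart
az aks enable-addons \
  --addons azure-keyvault-secrets-provider \
  --name my-aks-cluster \
  --resource-group my-resource-group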
Your config looks good to me. One thing to consider: the User Assigned Managed Identity should not be the one you created for AKS; it should be the managed identity of your nodepool (kubelet), and it also needs permissions on the AKV.
I had the same issue while using the wrong managed identity.
userAssignedIdentityID = kubelet client ID (nodepool managed identity)
AZ CLI
export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export AKV_ID=$(az keyvault show -g <resource group> -n <akv name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "Key Vault Secrets Officer" --scope $AKV_ID
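If you also need the value for userAssignedIdentityID (the kubelet identity's client ID rather than its object ID), a small sketch using the same pattern as above:
# client ID of the nodepool (kubelet) managed identity, used for userAssignedIdentityID
az aks show -g <resource group> -n <aks cluster name> \
  --query identityProfile.kubeletidentity.clientId -o tsv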

Create kubernetes env var secrets from .env file

I have a Node.js application which keeps its configuration in environment variables.
I'm using the dotenv module, so I have a .env file that looks like:
VAR1=value1
VAR2=something_else
I'm currently setting up a BitBucket Pipeline to auto deploy this to a Kubernetes cluster.
I'm not very familiar with kubernetes secrets, though I'm reading up on them.
I'm wondering:
Is there an easy way to pass all of the environment variables defined in my .env file to a Docker container / Kubernetes deployment, so they are available in the pods my app runs in?
I'm hoping for an example secrets.yml file or similar that takes everything from .env and turns it into environment variables in the container. But it could also be done at the BitBucket pipeline level, or at the Docker container level... I'm not sure.
Step 1: Create a k8s secret with your .env file:
# kubectl create secret generic <secret-name> --from-env-file=<path-to-env-file>
$ kubectl create secret generic my-env-list --from-env-file=.env
secret/my-env-list created
Step 2: Verify the secret:
$ kubectl get secret my-env-list -o yaml
apiVersion: v1
data:
  VAR1: dmFsdWUx
  VAR2: c29tZXRoaW5nX2Vsc2U=
kind: Secret
metadata:
  name: my-env-list
  namespace: default
type: Opaque
Step 3: Add env to your pod's container:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - secretRef:
            name: my-env-list # <---- here
  restartPolicy: Never
Step 4: Run the pod and check whether the env vars exist:
$ kubectl apply -f pod.yaml
pod/demo-pod created
$ kubectl logs -f demo-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=demo-pod
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
VAR1=value1 # <------------------------------------------------------here
VAR2=something_else # <-----------------------------------------------here
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
You can also use Kustomize to generate a secret from the file, as follows:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
  name: kust-example
generatorOptions:
  # Prevents adding hash at the end of the secret name
  disableNameSuffixHash: true
secretGenerator:
  - name: your-secret
    namespace: default
    envs:
      - path/secret.env
Then you just have to run kubectl apply -k <dir>.
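A small sketch of the layout this assumes (directory and file names are placeholders; the Kustomization above must be saved as kustomization.yaml next to the env file it references):
# dir/kustomization.yaml   <- the Kustomization shown above
# dir/path/secret.env      <- VAR1=value1, VAR2=something_else
kubectl apply -k dir
# secret/your-secret created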
You can also use this tool to achieve the same result as Kustomize, with more control for automating the job:
https://github.com/juliosmelo/dotenv2k8s

azure kubernetes service - self signed cert on private registry

I have a tunnel created between my Azure subscription and my on-prem servers. On-prem we have an Artifactory server housing all of our Docker images. For all internal servers we have a company-wide CA trust, and all certs are generated from it.
However, when I try to deploy something to AKS and reference this Docker registry, I get a cert error because the nodes themselves do not trust the "in house" self-signed cert.
Is there any way to get the root CA chain added to the nodes? Or a way to tell the Docker daemon on the AKS nodes that this is an insecure registry?
Not one hundred percent sure, but you can try using your Docker config to create an image pull secret, with a command like this:
cat ~/.docker/config.json | base64
Then create the secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
Use this secret in your deployment or pod as the value of imagePullSecrets. For more details, see Using a private Docker Registry with Kubernetes.
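kubectl can also build an equivalent pull secret for you; a hedged sketch, with the registry URL and credentials as placeholders, plus how it is referenced from a pod spec:
# create a docker-registry secret directly (equivalent to the manifest above)
kubectl create secret docker-registry registrypullsecret \
  --docker-server=artifactory.example.com \
  --docker-username=<user> \
  --docker-password=<password>

# reference it in the pod spec:
#   spec:
#     imagePullSecrets:
#       - name: registrypullsecret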
To start, I would recommend using curl to check the connection between your Azure cluster and the on-prem server.
Run both curl and curl -k (-k allows connections to SSL sites without verifying the certificate). I assume plain curl will fail, which means the on-prem CA certs are not present on the Azure cluster nodes.
If plain curl fails but curl -k works, you need to copy the on-prem CA certs to the Azure cluster nodes.
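A quick sketch of that check (the registry hostname is a placeholder; run it from a node or a debug pod inside the cluster):
curl https://artifactory.example.com/v2/      # fails if the node does not trust the in-house CA
curl -k https://artifactory.example.com/v2/   # skips certificate verification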
These links should help with copying and adding the certs:
https://docs.docker.com/ee/enable-client-certificate-authentication/
https://askubuntu.com/questions/73287/how-do-i-install-a-root-certificate
And I found some information about doing that with the Docker daemon:
https://docs.docker.com/registry/insecure/
I hope it will help you. Let me know if you have any more questions.
It looks like you are having the same problem described here: https://github.com/kubernetes/kubernetes/issues/43924.
This solution should probably work for you:
As far as I remember this was a docker issue, not a kubernetes one.
Docker does not use linux's ca certs. Nobody knows why.
You have to install those certs manually (on every node that could
spawn those pods) so that docker can use them:
/etc/docker/certs.d/mydomain.com:1234/ca.crt
This is a highly annoying issue as you have to butcher your nodes
after bootstrapping to get those certs in there. And kubernetes spawns
nodes all the time. How this issue has not been solved yet is a
mystery to me. It's a complete showstopper IMO.
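Concretely, the CA cert goes into a per-registry directory on each node (the hostname and port below are placeholders taken from the quoted path):
# run on each node, e.g. over SSH or via a privileged DaemonSet as described below
sudo mkdir -p /etc/docker/certs.d/mydomain.com:1234
sudo cp ca.crt /etc/docker/certs.d/mydomain.com:1234/ca.crt
# Docker reads certs.d on each pull, so a daemon restart is normally not needed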
Then it's just a question of how to run this for every node. You could do that with a DaemonSet which runs a script from a ConfigMap, as described here: https://cloud.google.com/solutions/automatically-bootstrapping-gke-nodes-with-daemonsets. That article refers to a GitHub project https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial.
The magic is in the DaemonSet.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-initializer
  labels:
    app: default-init
spec:
  selector:
    matchLabels:
      app: default-init
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: node-initializer
        app: default-init
    spec:
      volumes:
        - name: root-mount
          hostPath:
            path: /
        - name: entrypoint
          configMap:
            name: entrypoint
            defaultMode: 0744
      initContainers:
        - image: ubuntu:18.04
          name: node-initializer
          command: ["/scripts/entrypoint.sh"]
          env:
            - name: ROOT_MOUNT_DIR
              value: /root
          securityContext:
            privileged: true
          volumeMounts:
            - name: root-mount
              mountPath: /root
            - name: entrypoint
              mountPath: /scripts
      containers:
        - image: "gcr.io/google-containers/pause:2.0"
          name: pause
You could modify the script that is in the ConfigMap to pull your cert and put it in the correct directory.
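As a minimal sketch, assuming the CA cert is shipped in the same ConfigMap as the script (registry host/port and file names are placeholders; the init container above mounts the host filesystem at ROOT_MOUNT_DIR):
#!/usr/bin/env bash
# entrypoint.sh - install the in-house registry CA cert on the node
set -euo pipefail
REGISTRY="mydomain.com:1234"                                # placeholder host:port
CERT_DIR="${ROOT_MOUNT_DIR}/etc/docker/certs.d/${REGISTRY}"
mkdir -p "${CERT_DIR}"
cp /scripts/ca.crt "${CERT_DIR}/ca.crt"                     # ca.crt from the same ConfigMap
echo "Installed registry CA cert in ${CERT_DIR}"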

Data from volumes as kubernetes secrets

I have an application that starts with docker-compose up. Some SSH credentials are provided in a JSON file, in a volume on the host machine. I want to run the app in Kubernetes; how can I provide the credentials using Kubernetes secrets? My JSON file looks like:
{
  "HOST_USERNAME": "myname",
  "HOST_PASSWORD": "mypass",
  "HOST_IP": "myip"
}
I created a file named mysecret.yml with base64-encoded values and applied it in Kubernetes:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  HOST_USERNAME: c2gaQ=
  HOST_PASSWORD: czMxMDIsdaf0NjcoKik=
  HOST_IP: MTcyLjIeexLjAuMQ==
How do I have to write the volumes in deployment.yml in order to use the secret properly?
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
The above is an example of using a Secret as a volume. You can use the same pattern in a Deployment's pod template.
Please refer to the official Kubernetes documentation for further info:
https://kubernetes.io/docs/concepts/configuration/secret/
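Since the application expects a JSON file rather than individual values, another option (a hedged sketch; the file name is a placeholder) is to create the Secret straight from the file, so the whole file shows up under the mount path:
# create the secret from the JSON file itself
kubectl create secret generic mysecret --from-file=credentials.json

# with the volume mount above, the file is available inside the container at:
#   /etc/foo/credentials.json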
