Updating certificates to connect to external services automatically in containerized environments - security

We have multiple microservices, and a couple of them use external APIs. The certificates our services use to connect to those external APIs are rotated periodically. How can we update our services to use the new public certs of the external APIs without significant disruption or outage on our end? We use Kubernetes and Docker images.

You can use a ConfigMap to store your certificate and mount it into your Deployment.
A ConfigMap mounted as a volume is auto-updated inside a running Pod without restarting it, so you only have to mount the ConfigMap once and changes will be propagated automatically to all replicas of the Deployment without any restart or disruption.
Read my article: Update configmap without restarting POD
Store your cert in a ConfigMap and mount it into the Pod.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
data:
  cert: <data>
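A minimal sketch of mounting that ConfigMap into a Pod spec (the container name, image, volume name, and mount path here are illustrative, not from the question):

spec:
  containers:
  - name: app
    image: my-app:latest              # illustrative image
    volumeMounts:
    - name: external-api-cert         # illustrative volume name
      mountPath: /etc/ssl/external    # the app reads the rotated cert from here
      readOnly: true
  volumes:
  - name: external-api-cert
    configMap:
      name: test-config               # the ConfigMap shown above

Keep in mind that only volume mounts are refreshed in place; a key consumed via subPath or as an environment variable is not updated automatically.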
Note: a ConfigMap is considered an insecure option for storing sensitive data; anyone with access to your cluster can view or watch your certificates. It is meant for storing configuration only, but if that exposure is not an issue in your case, it will work like a charm.

Related

Vaults secrets injected by vault sidecar container inside the pod are visible to kubernetes cluster users/admin

I have integrated an external Vault into the Kubernetes cluster. Vault injects the secrets into the shared volume /vault/secrets inside the pod, where they can be consumed by the application container. So far everything looks good.
But I see a security risk in injecting the secrets into a shared volume in plain text, because anyone who has access to the Kubernetes cluster can read the application secrets.
Example: secrets are injected into the shared volume at /vault/secrets/config.
Now, if a Kubernetes cluster admin logs in, he can access the pod along with the credentials available in the shared volume in plain-text format.
A kubectl exec -it <pod> command can be used to enter the pod.
In this case, my concern is that the cluster admin can access the application secrets (e.g. database passwords), which is a security risk. In my scenario the Vault admin and the Kubernetes cluster admin are different people.
Having a shared volume available to all pods in a cluster where all the secrets are stored in plain text doesn't sound too secure, to be honest. You could improve the security a little bit (only a little bit) by setting the use limit (the num_uses token attribute) to 1 and alerting whenever the legitimate application (the one the secret was intended for) gets a "token invalid" error message.
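For illustration, with Vault's Kubernetes auth method the use limit can be set on the role that issues the application's token (the role, service account, namespace, and policy names below are hypothetical):

# Hypothetical names; assumes the Kubernetes auth method is enabled in Vault.
vault write auth/kubernetes/role/myapp \
    bound_service_account_names=myapp \
    bound_service_account_namespaces=default \
    token_policies=myapp-read \
    token_num_uses=1 \
    token_ttl=1h

With a use limit of 1, a token stolen after the application has already used it is worthless, and a second use attempt is a signal worth alerting on.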
I'm a K8s noob, but how about this guide:
https://cloud.redhat.com/blog/integrating-hashicorp-vault-in-openshift-4
I know it's for RH OSE, but maybe the concept sparks an idea.

Can I insert secrets into deployment.yml using Azure Key Vault?

I have integrated Azure Key Vault using Terraform. I can SSH into the container and view the secrets.
My question is: is it possible to somehow reference that secret value inside the deployment.yml file that I use for deploying my pods in the Kubernetes cluster?
I am using the following deployment file. Normally I access Kubernetes secrets using valueFrom and then reference the secret name and key there. How would that work if I want to insert the value of a secret from Key Vault here?
spec:
  containers:
  - name: test-container
    image: test.azurecr.io/test-image:latest
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
    env:
    - name: testSecret
      valueFrom:
        secretKeyRef:
          name: testSecret
          key: testSecretPassword
Thanks
You will need a Terraform data source. Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. Use of data sources allows a Terraform configuration to make use of information defined outside of Terraform, or defined by another separate Terraform configuration.
data "azurerm_key_vault_secret" "test" {
name = "secret-sauce"
key_vault_id = "${data.azurerm_key_vault.existing.id}"
}
output "secret_value" {
value = "${data.azurerm_key_vault_secret.test.value}"
}
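To get that value into the deployment's secretKeyRef, one option (a sketch, assuming the Terraform kubernetes provider is configured against your cluster; the resource and secret names are illustrative) is to materialize it as a Kubernetes Secret:

# Sketch: write the Key Vault value into a Kubernetes Secret.
# Kubernetes secret names must be lowercase, so the deployment's
# secretKeyRef would reference "test-secret" rather than "testSecret".
resource "kubernetes_secret" "test" {
  metadata {
    name = "test-secret"
  }

  data = {
    testSecretPassword = data.azurerm_key_vault_secret.test.value
  }
}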
You can look at Key Vault FlexVolume to integrate Key Vault into K8s. Secrets, keys, and certificates in a key management system become a volume accessible to pods. Once the volume is mounted, its data is available directly in the container filesystem for your application.
I will be honest, I have not tried this solution and don't know if it will work outside of our AKS offering.
https://www.terraform.io/docs/providers/azurerm/d/key_vault_secret.html
https://blog.azureandbeyond.com/2019/01/29/terraform-azure-keyvault-secrets/

How does Kubernetes work from within a Docker container

How does kubectl (e.g. kubectl get no) work from within a Docker container?
I know that it has to talk to the API server, but nowhere can I find a config file containing the details for this (like the .kube/config file found under my user).
I've run env to check which variables are set.
I've gone to the home directory, which has a .kube directory but no config file.
As per the documentation:
The recommended way to authenticate to the apiserver is with a service account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token
So when kubectl connects to the API using a service account, the token it uses is the one placed at /var/run/secrets/kubernetes.io/serviceaccount/token.
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace.
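For illustration, kubectl inside the container can be pointed at those in-cluster credentials explicitly (the paths are the standard service account mount locations; what you are allowed to get is still governed by the service account's RBAC permissions):

# Run inside the container; uses the mounted service account credentials.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl --server=https://kubernetes.default.svc \
        --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
        --token="$TOKEN" \
        get pods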
When you perform a config operation with kubectl, for example:
kubectl config set-context test
a .kube/config file will be created automatically.
You can also set a different serviceAccountName on your pod and auto-mount its token, like:
spec:
  serviceAccountName: <your_service_account>
  automountServiceAccountToken: true
You can find more information in Configure Service Accounts for Pods here.
Hope this helps.

How to get all running Pods on a Kubernetes cluster

This simple Node.js program works fine locally because it pulls the Kubernetes config from my local /root/.kube/config file:
const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;

const client = new Client({ config: Config.fromKubeconfig(), version: '1.13' });

async function listPods() {
  const pods = await client.api.v1.namespaces('xxxxx').pods.get({ qs: { labelSelector: 'application=test' } });
  console.log('Pods: ', JSON.stringify(pods));
}
Now I want to run it as a Docker container on the cluster and get all of the cluster's currently running pods (for the same/current namespace). Of course it now fails:
Error: { Error: ENOENT: no such file or directory, open '/root/.kube/config'
So how do I make it work when deployed as a Docker container to the cluster?
This little service needs to scan all running pods... Assume it doesn't need to pull config data since it's already deployed; it just needs to access the pods on the current cluster.
A couple of concepts to get your head around first:
Service account
Role
Role binding
To reach your end goal (which, if I understand correctly, is to containerize the Node.js application):
Step 1: Put the application in a container.
Step 2: Create a Deployment/StatefulSet/DaemonSet, as per your requirement, using the container created in step 1.
Explanation:
In step 2 above, if you do not explicitly specify a (custom) service account, the default service account is used, and its credentials are mounted inside the container by default, here:
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  name: default-token-xxxx
  readOnly: true
which can be verified with this command after (successful) pod creation:
kubectl get pod POD_NAME -n {yournamespace (by default: default)} -o yaml
Now (gotcha!) if you cannot access the cluster with those credentials, it comes down to which service account you are using and what rights that service account has. For example, if you are using an abc service account that has no role binding attached, you will not be able to view the cluster. In that case you first need to create a Role (to read pods) and a RoleBinding (for that role) to the service account, as sketched below.
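A minimal sketch of such RBAC objects (the names, namespace, and the use of the default service account here are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods           # illustrative name
  namespace: default
subjects:
- kind: ServiceAccount
  name: default             # the service account your pod runs as
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io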
UPDATE: The problem got resolved by changing Config.fromKubeconfig() to Config.getInCluster(). Ref
Clarification: the fromKubeconfig() function is fine if you are running your application on a node that is part of the Kubernetes cluster and has a cluster access token saved at /$USER/.kube/config, but if you want to run the Node.js application in a container in a pod, then you need Config.getInCluster() to load the token.
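For reference, a sketch of the in-cluster variant, using the same library as in the question:

const Client = require('kubernetes-client').Client;
const Config = require('kubernetes-client/backends/request').config;

// In-cluster: reads the service account token mounted at
// /var/run/secrets/kubernetes.io/serviceaccount instead of ~/.kube/config.
const client = new Client({ config: Config.getInCluster(), version: '1.13' });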
If you are nosy enough, check the comments on this answer! :P
Note: the Node.js library under discussion here is this

How does PersistentVolume work with hostPath?

I have deployed GitLab to my Azure Kubernetes cluster with persistent storage defined the following way:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: gitlab-data
  namespace: gitlab
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/tmp/gitlab-data"
That worked fine for some days. Then suddenly all my data stored in GitLab was gone and I don't know why. I was assuming that a hostPath-backed PersistentVolume is really persistent, because it is saved on a node and somehow replicated to all existing nodes. But my data is now lost and I cannot figure out why. I looked up the uptime of each node and there was no restart. I logged in to the nodes and checked the path, and as far as I can see the data is gone.
So how do PersistentVolume mounts work in Kubernetes? Is the data really persisted on the nodes? How do multiple nodes share the data if a deployment is spread across multiple nodes? Is hostPath reliable persistent storage?
hostPath doesn't share or replicate data between nodes, and once your pod starts on another node the data will be lost. You should consider using some external shared storage instead; see the sketch after the docs quote below.
Here's the related quote from the official docs:
HostPath (single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
from kubernetes.io/docs/user-guide/persistent-volumes/
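On AKS, for example, one option is a PersistentVolumeClaim backed by the built-in azurefile StorageClass, which supports ReadWriteMany across nodes. A sketch (the claim name mirrors the question; the StorageClass name assumes the AKS default "azurefile" class is present):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-data
  namespace: gitlab
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile   # assumes the default AKS "azurefile" class exists
  resources:
    requests:
      storage: 8Gi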
