kubernetes: Values from secret yaml are broken in node js container after gpg decryption - node.js

I am new to Kubernetes. I have a Kubernetes secret yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  API_KEY: 123409uhttt
  SECRET_KEY: yu676jfjehfuehfu02
which I have encrypted with gpg:
gpg -a --symmetric --cipher-algo AES256 -o "secrets.yaml.gpg" "secrets.yaml"
and I decrypt it in a GitHub Actions workflow like this:
gpg -q --batch --yes --decrypt --passphrase=$GPG_SECRET my/location/to/secrets.yaml.gpg | kubectl apply -n $NAMESPACE -f -
When I run:
kubectl get secret my-secret -n my-namespace -o yaml
I get YAML showing the correct values for API_KEY and SECRET_KEY, like this:
apiVersion: v1
data:
  API_KEY: 123409uhttt
  SECRET_KEY: yu676jfjehfuehfu02
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"API_KEY":"123409uhttt","SECRET_KEY":"yu676jfjehfuehfu02"},"kind":"Secret","metadata":{"annotations":{},"name":"my-secret","namespace":"my-namespace"},"type":"Opaque"}
  creationTimestamp: "2021-07-12T23:28:56Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:API_KEY: {}
        f:SECRET_KEY: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-07-10T23:28:56Z"
  name: my-secret
  namespace: my-namespace
  resourceVersion: "29813715"
  uid: 89a34b6d-914eded509
type: Opaque
But when the application reads SECRET_KEY and API_KEY, the values come through with broken encoding. This is what I get when I log them:
Api_Key - ᶹ��4yַӭ�ӯu�ï¿8
Secret_Key - �V�s��Û[ï¶×¿zoï½9s��{�ï¿
When I don't take Api_Key and Secret_Key from secrets.yaml and instead hardcode the values in the application, everything works as expected.
I need help accessing the secret data (Api_Key and Secret_Key) with the correct values inside the container running the Node.js application.

It appears the values of your secrets are not base64 encoded.
Either change the field from data to stringData, which does not need to be base64 encoded, or base64-encode the values of your secrets first,
e.g. echo -n "$SECRET_KEY" | base64, and use that value in your Secret.
The problem you describe happens because the values of a Secret are base64-decoded when they are injected into your pods.
However, when you try to decode the value you supplied with
echo "123409uhttt" | base64 -d
you get the following output: �m��ۡ��base64: invalid input
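A minimal sketch of the fix, using the sample values from the question. Either base64-encode the values (with -n so no trailing newline gets encoded) and keep the data field:

echo -n "123409uhttt" | base64          # MTIzNDA5dWh0dHQ=
echo -n "yu676jfjehfuehfu02" | base64   # eXU2NzZqZmplaGZ1ZWhmdTAy

or skip the manual encoding entirely by writing secrets.yaml with stringData (the API server stores it base64-encoded for you):

cat > secrets.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  API_KEY: "123409uhttt"
  SECRET_KEY: "yu676jfjehfuehfu02"
EOF

After re-encrypting and re-applying, the container should see the original plain-text values.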

Related

KEDAScalerFailed : no azure identity found for request clientID

I have tried various methods but am not able to access the Azure Storage Queues via Pod Identity. The resource group and client ID already exist.
The steps:
kubectl create namespace keda
helm install keda kedacore/keda --set podIdentity.activeDirectory.identity= --namespace keda
kubectl create namespace myapp
The first few sections of myapp.yaml:
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: <idvalue>
  namespace: myapp
spec:
  clientID: "<clientId>"
  resourceID: "<resourceId>"
  type: 0
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: <idvalue>-binding
  namespace: myapp
spec:
  azureIdentity: <idvalue>
  selector: <idvalue> # keeping same as identity
---
The rest of the file is the deployment section, so I am not pasting it here.
Then I ran Helm to deploy myapp.yaml via the myappInt.values.yaml file (which contains the cluster name, role, etc.):
helm install -f C:\MyApp\myappInt.values.yaml
The myappInt.values.yaml file:
image:
  registry: <registryname>
deployment:
  environment: INT
  clusterName: <clustername>
  clusterRole: <clusterrole>
  region: <region>
  processingRegion: <processingregion>
  azureIdentityClientId: "<clientId>"
  azureIdentityResourceId: "<resourceId>"
Then the scaler:
kubectl apply -f c:\MyApp\kedascaling.yaml --namespace myapp
The kedascaling.yaml:
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-pod-identity-auth
spec:
  podIdentity:
    provider: azure
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: myapp-scaledobject
  namespace: myapp
spec:
  scaleTargetRef:
    name: myapp # Corresponds with Deployment Name
  minReplicaCount: 2
  maxReplicaCount: 3
  triggers:
  - type: azure-queue
    metadata:
      queueName: myappqueue # Required
      accountName: myappstorage # Required when pod identity is used
      queueLength: "1" # Required
    authenticationRef:
      name: keda-pod-identity-auth # AuthenticationRef would need pod identity
Finally it gives the error below:
kind: Event
apiVersion: v1
metadata:
  name: myapp-scaledobject.16def024b939fdf2
  namespace: myappnamespace
  uid: someuid
  resourceVersion: '186302648'
  creationTimestamp: '2022-03-23T06:55:54Z'
  managedFields:
  - manager: keda
    operation: Update
    apiVersion: v1
    time: '2022-03-23T06:55:54Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:count: {}
      f:firstTimestamp: {}
      f:involvedObject:
        f:apiVersion: {}
        f:kind: {}
        f:name: {}
        f:namespace: {}
        f:resourceVersion: {}
        f:uid: {}
      f:lastTimestamp: {}
      f:message: {}
      f:reason: {}
      f:source:
        f:component: {}
      f:type: {}
involvedObject:
  kind: ScaledObject
  namespace: myapp
  name: myapp-scaledobject
  uid: <some id>
  apiVersion: keda.sh/v1alpha1
  resourceVersion: '<some version>'
**reason: KEDAScalerFailed
message: |
  no azure identity found for request clientID**
source:
  component: keda-operator
firstTimestamp: '2022-03-23T06:55:54Z'
lastTimestamp: '2022-03-23T07:30:54Z'
count: 71
type: Warning
eventTime: null
reportingComponent: ''
reportingInstance: ''
Any idea what I am doing wrong here? Any help would be greatly appreciated. I asked at the KEDA repo but got no response.
I had a similar error recently... I needed to make sure that the AAD Pod Identity was in the same namespace as the KEDA operator service.
Whatever identity you assigned to KEDA when creating KEDA with HELM, ensure that it's within the same namespace (which in your instance is "keda").
For example after running:
helm install keda kedacore/keda --set podIdentity.activeDirectory.identity=my-keda-identity --namespace keda
if my-keda-identity is not in namespace "keda" then the KEDA operator will not be able to bind AAD because it can't find it. If you need to update the AAD reference you can simply run:
helm upgrade keda kedacore/keda --set podIdentity.activeDirectory.identity=my-second-app-reference --namespace keda
Next, recreate the KEDA operator pod (I like to do this to test things out in a clean manner) and then run the following command to see if binding worked:
kubectl logs -n keda <keda-operator-pod-name> -c keda-operator
You should see the error go away (as long as the identity has access to retrieve queue messages from Azure Storage via RBAC).
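For reference, this is roughly what the identity pair in the keda namespace could look like; a sketch using the hypothetical my-keda-identity name from above and the question's <resourceId>/<clientId> placeholders:

kubectl apply -f - <<EOF
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: my-keda-identity
  namespace: keda                  # same namespace as the KEDA operator
spec:
  type: 0                          # 0 = user-assigned managed identity
  resourceID: "<resourceId>"
  clientID: "<clientId>"
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: my-keda-identity-binding
  namespace: keda
spec:
  azureIdentity: my-keda-identity
  selector: my-keda-identity       # must match podIdentity.activeDirectory.identity from the Helm install
EOF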

Issue with Cert-manager ClusterIssuer in AKS

I am getting this error in clusterissuer (cert-manager version 1.7.1):
"Error getting keypair for CA issuer: error decoding certificate PEM block"
I have the ca.crt, tls.crt and tls.key stored in a Key Vault in Azure.
kubectl describe clusterissuer ca-issuer
Ca:
  Secret Name:  cert-manager-secret
Status:
  Conditions:
    Last Transition Time:  2022-02-25T11:40:49Z
    Message:               Error getting keypair for CA issuer: error decoding certificate PEM block
    Observed Generation:   1
    Reason:                ErrGetKeyPair
    Status:                False
    Type:                  Ready
Events:
  Type     Reason         Age                  From          Message
  ----     ------         ----                 ----          -------
  Warning  ErrGetKeyPair  3m1s (x17 over 58m)  cert-manager  Error getting keypair for CA issuer: error decoding certificate PEM block
  Warning  ErrInitIssuer  3m1s (x17 over 58m)  cert-manager  Error initializing issuer: error decoding certificate PEM block
kubectl get clusterissuer
NAME        READY   AGE
ca-issuer   False   69m
This is the clusterissuer yaml file:
ca-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
  namespace: cert-manager
spec:
  ca:
    secretName: cert-manager-secret
This is the KeyVault yaml file to retrieve the ca.crt, tls.crt and tls.key
keyvauls.yaml
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akscacrt
  namespace: cert-manager
spec:
  vault:
    name: kv-xx # name of key vault
    object:
      name: akscacrt # name of the akv object
      type: secret # akv object type
  output:
    secret:
      name: cert-manager-secret # kubernetes secret name
      dataKey: ca.crt # key to store object value in kubernetes secret
---
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akstlscrt
  namespace: cert-manager
spec:
  vault:
    name: kv-xx # name of key vault
    object:
      name: akstlscrt # name of the akv object
      type: secret # akv object type
  output:
    secret:
      name: cert-manager-secret # kubernetes secret name
      dataKey: tls.crt # key to store object value in kubernetes secret
---
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akstlskey
  namespace: cert-manager
spec:
  vault:
    name: kv-xx # name of key vault
    object:
      name: akstlskey # name of the akv object
      type: secret # akv object type
  output:
    secret:
      name: cert-manager-secret # kubernetes secret name
      dataKey: tls.key # key to store object value in kubernetes secret
---
and these are the certificates used:
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: argocd-xx
  namespace: argocd
spec:
  secretName: argocd-xx
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: "argocd.xx"
  dnsNames:
    - "argocd.xx"
  privateKey:
    size: 4096
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: sonarqube-xx
  namespace: sonarqube
spec:
  secretName: "sonarqube-xx"
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: "sonarqube.xx"
  dnsNames:
    - "sonarqube.xx"
  privateKey:
    size: 4096
I can see that I can retrieve the secrets for the certificate from the Key Vault:
kubectl get secret -n cert-manager cert-manager-secret -o yaml
apiVersion: v1
data:
  ca.crt: XXX
  tls.crt: XXX
  tls.key: XXX
Another strange thing is that I am getting other secrets in the sonarqube/argocd namespaces which I deployed previously but which are no longer in my deployment file. I cannot delete them; when I try, they are re-created automatically, as if they were stored in some kind of cache. I also tried deleting the akv2k8s/cert-manager namespaces and the cert-manager/akv2k8s controllers and re-installing them, but I get the same issue after re-installing and applying the deployment...
kubectl get secret -n sonarqube
NAME                  TYPE                                  DATA   AGE
cert-manager-secret   Opaque                                3      155m
default-token-c8b86   kubernetes.io/service-account-token   3      2d1h
sonarqube-xx-7v7dh    Opaque                                1      107m
sql-db-secret         Opaque                                2      170m
kubectl get secret -n argocd
NAME                       TYPE                                  DATA   AGE
argocd-xx-7b5kb            Opaque                                1      107m
cert-manager-secret-argo   Opaque                                3      157m
default-token-pjb4z        kubernetes.io/service-account-token   3      3d15h
kubectl describe certificate sonarqube-xxx -n sonarqube
Status:
  Conditions:
    Last Transition Time:  2022-02-25T11:04:08Z
    Message:               Issuing certificate as Secret does not exist
    Observed Generation:   1
    Reason:                DoesNotExist
    Status:                False
    Type:                  Ready
    Last Transition Time:  2022-02-25T11:04:08Z
    Message:               Issuing certificate as Secret does not exist
    Observed Generation:   1
    Reason:                DoesNotExist
    Status:                True
    Type:                  Issuing
  Next Private Key Secret Name:  sonarqube-xxx-7v7dh
Events:                          <none>
Any idea?
Thanks.
I figured it out: upload the certificate data (ca.crt, tls.crt and tls.key) to the Key Vault secrets in Azure as plain text, without base64 encoding.
When akv2k8s retrieves the secrets from the Key Vault and stores them in Kubernetes, they are base64-encoded automatically.
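If you want to double-check, one quick way (assuming openssl is available) is to confirm that the stored value decodes to a readable PEM certificate rather than a doubly-encoded blob:

kubectl get secret cert-manager-secret -n cert-manager -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -dates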
Regards,

kustomize with azure secret provider class

I have a SecretProviderClass resource defined for my Azure Kubernetes Service deployment, which allows me to create secrets from Azure Key Vault. I'd like to use Kustomize with it in order to unify my deployments across multiple environments. Here is my manifest:
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kvname
spec:
  provider: azure
  secretObjects:
  - data:
    - key: dbuser
      objectName: db-user
    - key: dbpassword
      objectName: db-pass
    - key: admin
      objectName: admin-user
    - key: adminpass
      objectName: admin-password
    secretName: secret
    type: Opaque
  parameters:
    usePodIdentity: "true"
    keyvaultName: "dev-keyvault"
    cloudName: ""
    objects: |
      array:
        - |
          objectName: db-user
          objectType: secret
          objectVersion: ""
        - |
          objectName: db-pass
          objectType: secret
          objectVersion: ""
        - |
          objectName: admin-user
          objectType: secret
          objectVersion: ""
        - |
          objectName: admin-password
          objectType: secret
          objectVersion: ""
    tenantId: "XXXXXXXXXXXX"
This is the manifest that I use as a base. I'd like to use an overlay on it and apply values depending on the environment I am deploying to. Specifically, I'd like to modify the objectName properties. I tried applying this Json6902 patch:
- op: replace
  path: /spec/parameters/objects/array/0/objectName
  value: "dev-db-user"
- op: replace
  path: /spec/parameters/objects/array/1/objectName
  value: "dev-db-password"
- op: replace
  path: /spec/parameters/objects/array/2/objectName
  value: "dev-admin-user"
- op: replace
  path: /spec/parameters/objects/array/3/objectName
  value: "dev-admin-password"
Unfortunately, it's not working and it is not replacing the values. Is it possible with Kustomize?
Unfortunately, the value you're trying to access is not another nested YAML array: the pipe symbol at the end of a line in YAML means that any indented text that follows is interpreted as a single multi-line scalar (string) value.
With Kustomize you would need to replace the whole /spec/parameters/objects value.
If you haven't adopted Kustomize for good yet, you might instead consider a templating engine like Helm, which lets you replace values inside that string,
or you can use a combination of Helm for templating and Kustomize for resource management, patches for specific configuration, and overlays.
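As a rough sketch of the first option (replacing the whole string), the overlay could carry a JSON6902 patch like the one below; the file names and the ../../base path are only assumptions about your layout:

cat > overlays/dev/objects-patch.yaml <<'EOF'
- op: replace
  path: /spec/parameters/objects
  value: |
    array:
      - |
        objectName: dev-db-user
        objectType: secret
        objectVersion: ""
      - |
        objectName: dev-db-password
        objectType: secret
        objectVersion: ""
      - |
        objectName: dev-admin-user
        objectType: secret
        objectVersion: ""
      - |
        objectName: dev-admin-password
        objectType: secret
        objectVersion: ""
EOF

cat > overlays/dev/kustomization.yaml <<'EOF'
resources:
  - ../../base
patches:
  - target:
      kind: SecretProviderClass
      name: azure-kvname
    path: objects-patch.yaml
EOF

The secretObjects section, by contrast, is real YAML, so per-element Json6902 paths like the ones in the question do work there.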

Create kubeconfig with restricted permission

I need to create a kubeconfig with restricted access. I want to be able to grant permission to update a configmap in a specific namespace. How can I create such a kubeconfig with the following permissions:
for a specific namespace (myns)
update only a configmap (mycm)
Is there a simple way to create it?
The tricky part is that some program needs access to cluster X to modify only this configMap. How would I do that from an outside process without handing over the full kubeconfig file, which would be problematic for security reasons?
To make it clear: I own the cluster, I just want to give some program restricted permissions.
This is not straightforward, but it is possible.
Create the namespace myns if it does not exist.
$ kubectl create ns myns
namespace/myns created
Create a service account cm-user in myns namespace. It'll create a secret token as well.
$ kubectl create sa cm-user -n myns
serviceaccount/cm-user created
$ kubectl get sa cm-user -n myns
NAME      SECRETS   AGE
cm-user   1         18s
$ kubectl get secrets -n myns
NAME                  TYPE                                  DATA   AGE
cm-user-token-kv5j5   kubernetes.io/service-account-token   3      63s
default-token-m7j9v   kubernetes.io/service-account-token   3      96s
Get the token and ca.crt from cm-user-token-kv5j5 secret.
$ kubectl get secrets cm-user-token-kv5j5 -n myns -oyaml
Base64 decode the value of token from cm-user-token-kv5j5.
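For example, with the secret name from the output above (yours will differ):

token=$(kubectl get secret cm-user-token-kv5j5 -n myns -o jsonpath='{.data.token}' | base64 --decode)
ca=$(kubectl get secret cm-user-token-kv5j5 -n myns -o jsonpath='{.data.ca\.crt}')   # keep this base64-encoded for the kubeconfig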
Now create a user using the decoded token.
$ kubectl config set-credentials cm-user --token=<decoded token value>
User "cm-user" set.
Now generate a kubeconfig file kubeconfig-cm.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <ca.crt value from cm-user-token-kv5j5 secret>
    server: <kubernetes server>
  name: <cluster>
contexts:
- context:
    cluster: <cluster>
    namespace: myns
    user: cm-user
  name: cm-user
current-context: cm-user
users:
- name: cm-user
  user:
    token: <decoded token>
Now create a role and rolebinding for sa cm-user.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myns
  name: cm-user-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-user-rb
  namespace: myns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-user-role
subjects:
- namespace: myns
  kind: ServiceAccount
  name: cm-user
We are done. Now using this kubeconfig file you can update the mycm configmap. It doesn't have any other privileges.
$ kubectl get cm -n myns --kubeconfig kubeconfig-cm
NAME   DATA   AGE
mycm   0      8s
$ kubectl delete cm mycm -n myns --kubeconfig kubeconfig-cm
Error from server (Forbidden): configmaps "mycm" is forbidden: User "system:serviceaccount:myns:cm-user" cannot delete resource "configmaps" in API group "" in the namespace "myns"
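If the program really must be limited to the single configmap mycm, the Role above can be narrowed further with resourceNames (this works for get/update/delete, but not for list or create). A sketch:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myns
  name: cm-user-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["mycm"]
  verbs: ["get", "update"]
EOF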
You need to use RBAC: define a Role and then bind that Role to a user or ServiceAccount using a RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["configmaps"]
  verbs: ["update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read config maps in the "default" namespace.
# You need to already have a Role named "configmap-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-configmap
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: configmap-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
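Whichever subject you bind, kubectl auth can-i with impersonation is a quick way to verify the binding before handing out the kubeconfig:

kubectl auth can-i update configmaps -n default --as=jane    # yes
kubectl auth can-i delete configmaps -n default --as=jane    # no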

Azure AKS: Create a Kubeconfig from service account [duplicate]

I have a kubernetes cluster on Azure and I created 2 namespaces and 2 service accounts because I have two teams deploying on the cluster.
I want to give each team their own kubeconfig file for the serviceaccount I created.
I am pretty new to Kubernetes and haven't been able to find clear instructions on the Kubernetes website. How do I create a kubeconfig file for a serviceaccount?
Hopefully someone can help me out :), I'd rather not give the default kubeconfig file to the teams.
With kind regards,
Bram
# your server name goes here
server=https://localhost:8443
# the name of the secret containing the service account token goes here
name=default-token-sg96k
ca=$(kubectl get secret/$name -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl get secret/$name -o jsonpath='{.data.namespace}' | base64 --decode)
echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
" > sa.kubeconfig
I cleaned up Jordan Liggitt's script a little.
Unfortunately I am not yet allowed to comment so this is an extra answer:
Be aware that starting with Kubernetes 1.24 you will need to create the Secret with the token yourself and reference that
# The script returns a kubeconfig for the ServiceAccount given
# you need to have kubectl on PATH with the context set to the cluster you want to create the config for
# Cosmetics for the created config
clusterName='some-cluster'
# your server address goes here get it via `kubectl cluster-info`
server='https://157.90.17.72:6443'
# the Namespace and ServiceAccount name that is used for the config
namespace='kube-system'
serviceAccount='developer'
# The following automation does not work from Kubernetes 1.24 and up.
# You might need to
# define a Secret, reference the ServiceAccount there and set the secretName by hand!
# See https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-long-lived-api-token-for-a-serviceaccount for details
secretName=$(kubectl --namespace="$namespace" get serviceAccount "$serviceAccount" -o=jsonpath='{.secrets[0].name}')
######################
# actual script starts
set -o errexit
ca=$(kubectl --namespace="$namespace" get secret/"$secretName" -o=jsonpath='{.data.ca\.crt}')
token=$(kubectl --namespace="$namespace" get secret/"$secretName" -o=jsonpath='{.data.token}' | base64 --decode)
echo "
---
apiVersion: v1
kind: Config
clusters:
- name: ${clusterName}
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: ${serviceAccount}#${clusterName}
  context:
    cluster: ${clusterName}
    namespace: ${namespace}
    user: ${serviceAccount}
users:
- name: ${serviceAccount}
  user:
    token: ${token}
current-context: ${serviceAccount}#${clusterName}
"
Look to https://github.com/superbrothers/kubectl-view-serviceaccount-kubeconfig-plugin
This plugin helps to get service account config via
kubectl view-serviceaccount-kubeconfig <service_account> -n <namespace>
Kubectl can be initialized to use a cluster account. To do so, get the cluster url, cluster certificate and account token.
KUBE_API_EP='URL+PORT'
KUBE_API_TOKEN='TOKEN'
KUBE_CERT='REDACTED'
echo $KUBE_CERT >deploy.crt
kubectl config set-cluster k8s --server=https://$KUBE_API_EP \
--certificate-authority=deploy.crt \
--embed-certs=true
kubectl config set-credentials gitlab-deployer --token=$KUBE_API_TOKEN
kubectl config set-context k8s --cluster k8s --user gitlab-deployer
kubectl config use-context k8s
The cluster file is stored under: ~/.kube/config. Now the cluster can be accessed using:
kubectl --context=k8s get pods -n test-namespace
add this flag --insecure-skip-tls-verify if you are using self signed certificate.
Revisiting this as I was looking for a way to create a serviceaccount from the command line instead of repetitive point/click tasks through the Lens IDE. I came across this thread, took the original author's ideas, and expanded on the capabilities, as well as supporting serviceaccount creation for Kubernetes 1.24+.
#!/bin/sh
# This shell script is intended for Kubernetes clusters running 1.24+ as secrets are no longer auto-generated with serviceaccount creations
# The script does a few things: creates a serviceaccount, creates a secret for that serviceaccount (and annotates accordingly), creates a clusterrolebinding or rolebinding
# provides a kubeconfig output to the screen as well as writing to a file that can be included in the KUBECONFIG or PATH
# Feed variables to kubectl commands (modify as needed). crb and rb can not both be true
# ------------------------------------------- #
clustername=some_cluster
name=some_user
ns=some_ns # namespace
server=https://some.server.com:6443
crb=false # clusterrolebinding
crb_name=some_binding # clusterrolebindingname_name
rb=true # rolebinding
rb_name=some_binding # rolebinding_name
# ------------------------------------------- #
# Check for existing serviceaccount first
sa_precheck=$(kubectl get sa $name -o jsonpath='{.metadata.name}' -n $ns 2>/dev/null)
if [ -z "$sa_precheck" ]
then
    kubectl create serviceaccount $name -n $ns
else
    echo "serviceaccount/"$sa_precheck" already exists"
fi
sa_name=$(kubectl get sa $name -o jsonpath='{.metadata.name}' -n $ns)
sa_uid=$(kubectl get sa $name -o jsonpath='{.metadata.uid}' -n $ns)
# Check for existing secret/service-account-token, if one does not exist create one but do not output to external file
secret_precheck=$(kubectl get secret $sa_name-token-$sa_uid -o jsonpath='{.metadata.name}' -n $ns 2>/dev/null)
if [ -z "$secret_precheck" ]
then
    kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: $sa_name-token-$sa_uid
  namespace: $ns
  annotations:
    kubernetes.io/service-account.name: $sa_name
EOF
else
    echo "secret/"$secret_precheck" already exists"
fi
# Check for adding clusterrolebinding or rolebinding (both can not be true)
if [ "$crb" = "true" ] && [ "$rb" = "true" ]
then
echo "Both clusterrolebinding and rolebinding can not be true, please fix"
exit
elif [ "$crb" = "true" ]
then
crb_test=$(kubectl get clusterrolebinding $crb_name -o jsonpath='{.metadata.name}') > /dev/null 2>&1
if [ "$crb_name" = "$crb_test" ]
then
kubectl patch clusterrolebinding $crb_name --type='json' -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": '$sa_name', "namespace": '$ns' } }]'
else
echo "clusterrolebinding/"$crb_name" does not exist, please fix"
exit
fi
elif [ "$rb" = "true" ]
then
rb_test=$(kubectl get rolebinding $rb_name -n $ns -o jsonpath='{.metadata.name}' -n $ns) > /dev/null 2>&1
if [ "$rb_name" = "$rb_test" ]
then
kubectl patch rolebinding $rb_name -n $ns --type='json' -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": '$sa_name', "namespace": '$ns' } }]'
else
echo "rolebinding/"$rb_name" does not exist in "$ns" namespace, please fix"
exit
fi
fi
# Create Kube Config and output to config file
ca=$(kubectl get secret $sa_name-token-$sa_uid -o jsonpath='{.data.ca\.crt}' -n $ns)
token=$(kubectl get secret $sa_name-token-$sa_uid -o jsonpath='{.data.token}' -n $ns | base64 --decode)
echo "
apiVersion: v1
kind: Config
clusters:
- name: ${clustername}
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: ${sa_name}#${clustername}
  context:
    cluster: ${clustername}
    namespace: ${ns}
    user: ${sa_name}
users:
- name: ${sa_name}
  user:
    token: ${token}
current-context: ${sa_name}#${clustername}
" | tee $sa_name#${clustername}
