Create a kubeconfig with restricted permissions - Azure

I need to create a kubeconfig with restricted access: I want to be able to grant permission to update a ConfigMap in a specific namespace. How can I create a kubeconfig with the following permissions?
for a specific namespace (myns)
update only one ConfigMap (mycm)
Is there a simple way to create it?
The tricky part is that some program needs access to cluster X and must be able to modify only this ConfigMap. How would I do that from an outside process without providing the full kubeconfig file, which would be problematic for security reasons?
To make it clear: I own the cluster, I just want to give some program restricted permissions.

This is not straightforward, but it is still possible.
Create the namespace myns if it does not exist.
$ kubectl create ns myns
namespace/myns created
Create a service account cm-user in the myns namespace. It will create a secret token as well.
$ kubectl create sa cm-user -n myns
serviceaccount/cm-user created
$ kubectl get sa cm-user -n myns
NAME      SECRETS   AGE
cm-user   1         18s
$ kubectl get secrets -n myns
NAME                  TYPE                                  DATA   AGE
cm-user-token-kv5j5   kubernetes.io/service-account-token   3      63s
default-token-m7j9v   kubernetes.io/service-account-token   3      96s
Get the token and ca.crt from the cm-user-token-kv5j5 secret.
$ kubectl get secrets cm-user-token-kv5j5 -n myns -oyaml
Base64 decode the value of token from cm-user-token-kv5j5.
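For reference, a quick way to pull those two values out of the secret is with jsonpath (the data keys below are the standard ones for a service-account token secret). Note that on Kubernetes 1.24+ a token secret is no longer created automatically, so you may need kubectl create token cm-user -n myns or create the secret yourself:
$ kubectl get secret cm-user-token-kv5j5 -n myns -o jsonpath='{.data.token}' | base64 -d
$ kubectl get secret cm-user-token-kv5j5 -n myns -o jsonpath='{.data.ca\.crt}'   # already base64 encoded, paste as-is into certificate-authority-data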
Now create a user using the decoded token.
$ kubectl config set-credentials cm-user --token=<decoded token value>
User "cm-user" set.
Now generate a kubeconfig file named kubeconfig-cm:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <ca.crt value from cm-user-token-kv5j5 secret>
    server: <kubernetes server>
  name: <cluster>
contexts:
- context:
    cluster: <cluster>
    namespace: myns
    user: cm-user
  name: cm-user
current-context: cm-user
users:
- name: cm-user
  user:
    token: <decoded token>
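Equivalently, instead of hand-writing the file, the same kubeconfig can be assembled with kubectl config commands against a fresh file (the server URL and cluster name are placeholders, and ca.crt here is assumed to be the decoded certificate saved to a local file):
$ kubectl config --kubeconfig=kubeconfig-cm set-cluster <cluster> --server=<kubernetes server> --certificate-authority=ca.crt --embed-certs=true
$ kubectl config --kubeconfig=kubeconfig-cm set-credentials cm-user --token=<decoded token value>
$ kubectl config --kubeconfig=kubeconfig-cm set-context cm-user --cluster=<cluster> --namespace=myns --user=cm-user
$ kubectl config --kubeconfig=kubeconfig-cm use-context cm-user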
Now create a Role and RoleBinding for the cm-user service account.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myns
  name: cm-user-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-user-rb
  namespace: myns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-user-role
subjects:
- namespace: myns
  kind: ServiceAccount
  name: cm-user
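If you want to tighten this further to just the mycm ConfigMap (as asked), RBAC rules support resourceNames. Note that list (and watch/create) cannot be restricted per object name, so a sketch of a stricter rule would drop it:
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["mycm"]   # only this ConfigMap can be read or updated
  verbs: ["update", "get"]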
We are done. Using this kubeconfig file you can get, list, and update ConfigMaps in the myns namespace (including mycm); it has no other privileges.
$ kubectl get cm -n myns --kubeconfig kubeconfig-cm
NAME   DATA   AGE
mycm   0      8s
$ kubectl delete cm mycm -n myns --kubeconfig kubeconfig-cm
Error from server (Forbidden): configmaps "mycm" is forbidden: User "system:serviceaccount:myns:cm-user" cannot delete resource "configmaps" in API group "" in the namespace "myns"
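You can also verify the effective permissions without touching anything, for example:
$ kubectl auth can-i update configmaps -n myns --kubeconfig kubeconfig-cm   # expect: yes
$ kubectl auth can-i delete configmaps -n myns --kubeconfig kubeconfig-cm   # expect: no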

You need to use RBAC: define a Role and then bind that Role to a user or service account using a RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["configmaps"]
  verbs: ["update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read and update config maps in the "default" namespace.
# You need to already have a Role named "configmap-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-configmap
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: configmap-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
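Once applied, a quick way to check the binding is impersonation:
kubectl auth can-i update configmaps -n default --as=jane   # expect: yes
kubectl auth can-i delete configmaps -n default --as=jane   # expect: no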

Related

Deploying influxDB 2 in Azure AKS cluster with provisioned storage account

I'm having trouble deploying InfluxDB 2 into my Azure AKS cluster. I'm using a simple storage account as storage. Looking at the influxdb pod logs:
ts=2021-11-26T00:43:44.126091Z lvl=error msg="Failed to apply SQL migrations" log_id=0Y2Q~wH0000 error="database is locked"
Error: database is locked
I changed my PVC to use CSI:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-influxdb
  namespace: #{NAMESPACE}#
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  storageAccount: #{STORAGE_ACCOUNT_NAME}#
  location: #{STORAGE_ACCOUNT_LOCATION}#
  # Check driver parameters here:
  # https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=0
- gid=0
- mfsymlinks
- cache=strict # https://linux.die.net/man/8/mount.cifs
- nosharesock # reduce probability of reconnect race
- actimeo=30 # reduce latency for metadata-heavy workload
---
# Create a Secret to hold the name and key of the Storage Account
# Remember: values are base64 encoded
apiVersion: v1
kind: Secret
metadata:
  name: #{STORAGE_ACCOUNT_NAME}#
  namespace: #{NAMESPACE}#
type: Opaque
data:
  azurestorageaccountname: #{STORAGE_ACCOUNT_NAME_B64}#
  azurestorageaccountkey: #{STORAGE_ACCOUNT_KEY_B64}#
---
# Create a persistent volume, with the corresponding StorageClass and the reference to the Azure File secret.
# Remember: Create the share in the storage account, otherwise the pods will fail with a "No such file or directory"
apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdb-pv
spec:
  capacity:
    storage: 5Ti
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: sc-influxdb
  claimRef:
    name: influxdb-pvc
    namespace: #{NAMESPACE}#
  azureFile:
    secretName: #{STORAGE_ACCOUNT_NAME}#
    secretNamespace: #{NAMESPACE}#
    shareName: influxdb
    readOnly: false
  mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - nosharesock
  - nobrl
---
# Create a PersistentVolumeClaim referencing the StorageClass and the volume
# Remember: this is a static scenario. The volume was created in the previous step.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-pvc
  namespace: #{NAMESPACE}#
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Ti
  storageClassName: sc-influxdb
  volumeName: influxdb-pv
In my values.yml I defined my persistence as:
## Persist data to a persistent volume
##
persistence:
  enabled: true
  ## If true will use an existing PVC instead of creating one
  useExisting: true
  ## Name of existing PVC to be used in the influx deployment
  name: influxdb-pvc
  ## influxdb data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  # storageClass: sc-influxdb
  size: 5Ti
To install I ran:
helm upgrade --install influxdb influxdata/influxdb2 -n influxdb -f values.yml
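For what it's worth, when debugging a static binding like this, a reasonable first check is that the claim really bound to the pre-created volume and that the CIFS mount options (nobrl in particular) made it onto the PV. A sketch, assuming the influxdb namespace used in the helm command above:
kubectl get pvc influxdb-pvc -n influxdb                        # STATUS should be Bound, VOLUME should be influxdb-pv
kubectl get pv influxdb-pv -o jsonpath='{.spec.mountOptions}'   # confirm nobrl and friends are listed
kubectl describe pvc influxdb-pvc -n influxdb                   # events will show binding or mount errors, if any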

KEDAScalerFailed : no azure identity found for request clientID

I tried various methods but was not able to access the Azure Storage Queues via pod identity. The resource group and client ID already exist.
The steps:
kubectl create namespace keda
helm install keda kedacore/keda --set podIdentity.activeDirectory.identity= --namespace keda
kubectl create namespace myapp
The first few sections of myapp.yaml:
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: <idvalue>
  namespace: myapp
spec:
  clientID: "<clientId>"
  resourceID: "<resourceId>"
  type: 0
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: <idvalue>-binding
  namespace: myapp
spec:
  azureIdentity: <idvalue>
  selector: <idvalue> # keeping same as identity
---
The rest of the file is the deployment section, so I'm not pasting it here.
Then I ran Helm to deploy myapp.yaml via the myappInt.values.yaml file:
helm install -f C:\MyApp\myappInt.values.yaml (this file contains the clustername, role etc.)
The myappInt.values.yaml file:
image:
  registry: <registryname>
deployment:
  environment: INT
  clusterName: <clustername>
  clusterRole: <clusterrole>
  region: <region>
  processingRegion: <processingregion>
azureIdentityClientId: "<clientId>"
azureIdentityResourceId: "<resourceId>"
Then the scaler:
kubectl apply -f c:\MyApp\kedascaling.yaml --namespace myapp
The kedascaling.yaml:
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-pod-identity-auth
spec:
  podIdentity:
    provider: azure
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: myapp-scaledobject
  namespace: myapp
spec:
  scaleTargetRef:
    name: myapp # Corresponds with Deployment Name
  minReplicaCount: 2
  maxReplicaCount: 3
  triggers:
  - type: azure-queue
    metadata:
      queueName: myappqueue # Required
      accountName: myappstorage # Required when pod identity is used
      queueLength: "1" # Required
    authenticationRef:
      name: keda-pod-identity-auth # AuthenticationRef would need pod identity
Finally it gives the error below:
kind: Event
apiVersion: v1
metadata:
  name: myapp-scaledobject.16def024b939fdf2
  namespace: myappnamespace
  uid: someuid
  resourceVersion: '186302648'
  creationTimestamp: '2022-03-23T06:55:54Z'
  managedFields:
  - manager: keda
    operation: Update
    apiVersion: v1
    time: '2022-03-23T06:55:54Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:count: {}
      f:firstTimestamp: {}
      f:involvedObject:
        f:apiVersion: {}
        f:kind: {}
        f:name: {}
        f:namespace: {}
        f:resourceVersion: {}
        f:uid: {}
      f:lastTimestamp: {}
      f:message: {}
      f:reason: {}
      f:source:
        f:component: {}
      f:type: {}
involvedObject:
  kind: ScaledObject
  namespace: myapp
  name: myapp-scaledobject
  uid: <some id>
  apiVersion: keda.sh/v1alpha1
  resourceVersion: '<some version>'
reason: KEDAScalerFailed
message: |
  no azure identity found for request clientID
source:
  component: keda-operator
firstTimestamp: '2022-03-23T06:55:54Z'
lastTimestamp: '2022-03-23T07:30:54Z'
count: 71
type: Warning
eventTime: null
reportingComponent: ''
reportingInstance: ''
Any idea what I am doing wrong here? Any help would be greatly appreciated. I asked at the KEDA repo but got no response.
I had a similar error recently... I needed to make sure that the AAD Pod Identity was in the same namespace as the KEDA operator service.
Whatever identity you assigned to KEDA when installing it with Helm, ensure that it is in the same namespace (which in your case is "keda").
For example after running:
helm install keda kedacore/keda --set podIdentity.activeDirectory.identity=my-keda-identity --namespace keda
if my-keda-identity is not in namespace "keda" then the KEDA operator will not be able to bind AAD because it can't find it. If you need to update the AAD reference you can simply run:
helm upgrade keda kedacore/keda --set podIdentity.activeDirectory.identity=my-second-app-reference --namespace keda
Next, recreate the KEDA operator pod (I like to do this to test things out in a clean manner) and then run the following command to see if the binding worked:
kubectl logs -n keda <keda-operator-pod-name> -c keda-operator
You should see the error go away (as long as the identity has access to retrieve queue messages from Azure Storage via RBAC).
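You can also confirm that the identity KEDA should use actually exists in the keda namespace and that its binding selector matches, assuming the aad-pod-identity CRDs are installed:
kubectl get azureidentity,azureidentitybinding -n keda
kubectl get azureidentitybinding -n keda -o yaml   # check spec.azureIdentity and spec.selector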

How to configure a deployment file for Azure Key Vault + SecretProviderClass + imagePullSecrets + private Docker repository

How do I configure the deployment file for the combination of Azure Key Vault + SecretProviderClass + imagePullSecrets + a private Docker repository?
We have a private Docker repository for maintaining images. Now we have a requirement to keep the credentials of that Docker repository in Azure Key Vault, import them into AKS using a SecretProviderClass, and use that secret under 'imagePullSecrets'.
# This is a SecretProviderClass example using system-assigned identity to access your key vault
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-system-harbor
spec:
  provider: azure
  secretObjects:
  - secretName: harborcredentialvault
    data:
    - key: harborcredentialvaultkey
      objectName: harborcredentialvault
    type: kubernetes.io/dockerconfigjson
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true" # Set to true for using managed identity
    userAssignedIdentityID: "" # If empty, then defaults to use the system assigned identity on the VM
    keyvaultName: "<Keyvault name>"
    cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
    objects: |
      array:
        - |
          objectName: harborcredentialvault
          objectType: secret # object types: secret, key, or cert
          objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
    tenantId: "<tenant ID>" # The tenant ID of the key vault
- name: harborcredentialvault
  valueFrom:
    secretKeyRef:
      name: keyvault-secret
      key: harborcredentialvaultkey
imagePullSecrets:
- name: ${harborcredentialvault}
volumeMounts:
- mountPath: "/mnt/secrets-store"
  name: secrets-store01-inline
  readOnly: true
- name: secrets-store01-inline
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: "azure-kvname-system-harbor"
As you have not provided a concrete question or an error, I will be a bit general:
For the AKS/Key Vault integration it is important to understand that you are accessing the Key Vault with the kubelet identity of the node pool and not with the managed identity of the AKS cluster, as described here. So if you are using a managed identity, userAssignedIdentityID should not be empty.
So we need to give the Kubelet Identity access to the Key Vault, for example like this:
export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export AKV_ID=$(az keyvault show -g <resource group> -n <akv name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "Key Vault Secrets Officer" --scope $AKV_ID
The value of $KUBE_ID also needs to be added to the SecretProviderClass:
userAssignedIdentityID: "RESULT"
From this official example here your SecretProviderClass looks good for this use case.
This would be the pod config:
spec:
  containers:
  - name: demo
    image: demo
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  imagePullSecrets:
  - name: harborcredentialvault
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "azure-kvname-system-harbor"
This should sync the Key Vault secret to a Kubernetes secret. Here is also the documentation.
One thing you should consider: the secrets only sync once you start a pod that mounts them, so relying solely on the sync-to-Kubernetes-secret feature does not work.
That being said, you may need another pod with a public image to sync your private pull secrets for the cluster, because your own pod would not start, as it cannot pull its image from your private registry.
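For illustration, a minimal "syncer" pod of that kind could look like the sketch below; the pod name and the pause image are assumptions, and its only job is to mount the CSI volume so the driver creates the synced secret:
apiVersion: v1
kind: Pod
metadata:
  name: secret-syncer   # hypothetical name
spec:
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9   # any public image works; it only needs to mount the volume
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "azure-kvname-system-harbor"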
Philip Welz's answer helped me to find the solution below.
SecretProviderClass sample yaml
# This is a SecretProviderClass example using system-assigned identity to access your key vault
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-system-harbor
spec:
  provider: azure
  secretObjects:
  - secretName: dockerconfig
    type: kubernetes.io/dockerconfigjson
    data:
    - objectName: harborcredentialvault
      key: .dockerconfigjson
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true" # Set to true for using managed identity
    userAssignedIdentityID: "" # If empty, then defaults to use the system assigned identity on the VM
    keyvaultName: "<Keyvault name>"
    cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
    objects: |
      array:
        - |
          objectName: harborcredentialvault
          objectType: secret # object types: secret, key, or cert
          objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
    tenantId: "<tenant ID>" # The tenant ID of the key vault
Deployment sample yaml file
spec:
  containers:
  - name: demo
    image: demo
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  imagePullSecrets:
  - name: dockerconfig
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "azure-kvname-system-harbor"
Create the secret in Key Vault and make sure the value is in the JSON format below:
Key: harborcredentialvault
Value:
{
  "auths": {
    "dockerwebsite.com": {
      "username": "username",
      "password": "password"
    }
  }
}
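If you create that secret from the CLI, something like this should work (the vault name and the file path are placeholders):
az keyvault secret set --vault-name <Keyvault name> --name harborcredentialvault --file harbor-dockerconfig.json
where harbor-dockerconfig.json contains the JSON shown above.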

Issue with Cert-manager ClusterIssuer in AKS

I am getting this error in clusterissuer (cert-manager version 1.7.1):
"Error getting keypair for CA issuer: error decoding certificate PEM block"
I have the ca.crt, tls.crt and tls.key stored in a Key Vault in Azure.
kubectl describe clusterissuer ca-issuer
Ca:
  Secret Name:  cert-manager-secret
Status:
  Conditions:
    Last Transition Time:  2022-02-25T11:40:49Z
    Message:               Error getting keypair for CA issuer: error decoding certificate PEM block
    Observed Generation:   1
    Reason:                ErrGetKeyPair
    Status:                False
    Type:                  Ready
Events:
  Type     Reason         Age                  From          Message
  ----     ------         ----                 ----          -------
  Warning  ErrGetKeyPair  3m1s (x17 over 58m)  cert-manager  Error getting keypair for CA issuer: error decoding certificate PEM block
  Warning  ErrInitIssuer  3m1s (x17 over 58m)  cert-manager  Error initializing issuer: error decoding certificate PEM block
kubectl get clusterissuer
NAME        READY   AGE
ca-issuer   False   69m
This is the clusterissuer yaml file:
ca-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
  namespace: cert-manager
spec:
  ca:
    secretName: cert-manager-secret
This is the KeyVault yaml file to retrieve the ca.crt, tls.crt and tls.key
keyvauls.yaml
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akscacrt
  namespace: cert-manager
spec:
  vault:
    name: kv-xx # name of key vault
    object:
      name: akscacrt # name of the akv object
      type: secret # akv object type
  output:
    secret:
      name: cert-manager-secret # kubernetes secret name
      dataKey: ca.crt # key to store object value in kubernetes secret
---
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akstlscrt
  namespace: cert-manager
spec:
  vault:
    name: kv-xx # name of key vault
    object:
      name: akstlscrt # name of the akv object
      type: secret # akv object type
  output:
    secret:
      name: cert-manager-secret # kubernetes secret name
      dataKey: tls.crt # key to store object value in kubernetes secret
---
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akstlskey
  namespace: cert-manager
spec:
  vault:
    name: kv-xx # name of key vault
    object:
      name: akstlskey # name of the akv object
      type: secret # akv object type
  output:
    secret:
      name: cert-manager-secret # kubernetes secret name
      dataKey: tls.key # key to store object value in kubernetes secret
---
and these are the certificates used:
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: argocd-xx
  namespace: argocd
spec:
  secretName: argocd-xx
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: "argocd.xx"
  dnsNames:
  - "argocd.xx"
  privateKey:
    size: 4096
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: sonarqube-xx
  namespace: sonarqube
spec:
  secretName: "sonarqube-xx"
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: "sonarqube.xx"
  dnsNames:
  - "sonarqube.xx"
  privateKey:
    size: 4096
I can see that I can retrieve the secrets for the certificate from the Key Vault:
kubectl get secret -n cert-manager cert-manager-secret -o yaml
apiVersion: v1
data:
  ca.crt: XXX
  tls.crt: XXX
  tls.key: XXX
Another strange thing is that I see other secrets in the sonarqube/argocd namespaces which I deployed previously but which are no longer in my deployment file. I cannot delete them; when I try, they are re-created automatically, as if they were stored in some kind of cache. I also tried deleting the akv2k8s/cert-manager namespaces and removing the cert-manager/akv2k8s controllers and re-installing them, but I get the same issue after re-installing and applying the deployment...
kubectl get secret -n sonarqube
NAME                  TYPE                                  DATA   AGE
cert-manager-secret   Opaque                                3      155m
default-token-c8b86   kubernetes.io/service-account-token   3      2d1h
sonarqube-xx-7v7dh    Opaque                                1      107m
sql-db-secret         Opaque                                2      170m
kubectl get secret -n argocd
NAME                       TYPE                                  DATA   AGE
argocd-xx-7b5kb            Opaque                                1      107m
cert-manager-secret-argo   Opaque                                3      157m
default-token-pjb4z        kubernetes.io/service-account-token   3      3d15h
kubectl describe certificate sonarqube-xxx -n sonarqube
Status:
  Conditions:
    Last Transition Time:  2022-02-25T11:04:08Z
    Message:               Issuing certificate as Secret does not exist
    Observed Generation:   1
    Reason:                DoesNotExist
    Status:                False
    Type:                  Ready
    Last Transition Time:  2022-02-25T11:04:08Z
    Message:               Issuing certificate as Secret does not exist
    Observed Generation:   1
    Reason:                DoesNotExist
    Status:                True
    Type:                  Issuing
  Next Private Key Secret Name:  sonarqube-xxx-7v7dh
Events:                          <none>
Any idea?
Thanks.
I figured it out by uploading the certificate material (ca.crt, tls.crt and tls.key) as plain text, without base64 encoding, to the Key Vault secrets in Azure.
When akv2k8s retrieves the secrets from the Key Vault and stores them in Kubernetes, they are automatically base64 encoded.
Regards,
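One way to sanity-check the result, assuming the secret and namespace names above, is to decode what cert-manager will actually read and confirm it is a PEM block:
kubectl get secret cert-manager-secret -n cert-manager -o jsonpath='{.data.ca\.crt}' | base64 -d | head -1   # should print -----BEGIN CERTIFICATE-----
kubectl get secret cert-manager-secret -n cert-manager -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject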

jobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list resource "jobs" in API group "batch" in the namespace "default"

I am using the Kubernetes JavaScript client with in-cluster configuration to interact with the cluster.
I am trying to get the list of jobs.
app.js (Node)
app.get("/", (req, res) => {
k8sApi2
.listNamespacedJob("default")
.then((res) => {
console.log(res.body);
res.send(res.body);
})
.catch((err) => console.log(err));
});
But this is the log I am getting from the pod.
Here are my deployment
Service
Also, I created a Role and a RoleBinding, but I still have no idea what causes this issue.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-apis
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-apis
rules:
- apiGroups:
  - ""
  - "apps"
  - "batch"
  resources:
  - endpoints
  - deployments
  - pods
  - jobs
  verbs:
  - get
  - list
  - watch
  - create
  - delete
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-apis
  namespace: default
subjects:
- kind: ServiceAccount
  name: node-apis
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: node-apis
I am new to Kubernetes, any help?
You need to use the service account by specifying it in the spec section of the pod. Since you are not doing that, the pod runs with the default service account, which does not have a Role and RoleBinding permitting the operation, leading to the forbidden error.
spec:
  serviceAccountName: node-apis
  containers:
  ...
Alternatively, you can give the permission to the default service account in the RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-apis
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: node-apis
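Either way, you can confirm the binding with kubectl auth can-i, impersonating the service account the pod actually runs as:
kubectl auth can-i list jobs.batch -n default --as=system:serviceaccount:default:node-apis
kubectl auth can-i list jobs.batch -n default --as=system:serviceaccount:default:default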
