I have a Kubernetes cluster running on Azure (AKS) with an NGINX Ingress in front.
I'm a little confused about how to separate access to the different resources for multiple users.
The users should work on the apps, so it is fine if they can read logs, describe resources and exec commands inside the pods. But they should never be able to modify Ingress resources.
Microsoft provides a very good tutorial on how to handle cases like that on AKS: https://learn.microsoft.com/en-us/azure/aks/azure-ad-rbac
It contains an example of how to grant a group permissions on a whole namespace.
My question now is: how can I grant a group permissions on specific resources inside a namespace?
For example, I have the following resources:
resource            type      namespace
ingress-controller  pod       nginx-ingress
ingress-service     service   nginx-ingress
ingress-nginx       ingress   nginx-ingress
app1-service        service   nginx-ingress
app1                pod       nginx-ingress
app2-service        service   nginx-ingress
app2                pod       nginx-ingress
From my understanding, I need to deploy all of them in the same namespace, otherwise the Ingress can't forward the requests.
But how can I grant read, write and exec permissions to group1 for app1 and app1-service, and read-only access to the rest?
You can restrict RBAC rules to specific resources by name with the resourceNames field.
Create a Role that allows full access to app1 and app1-service:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app1-admin
  namespace: nginx-ingress
rules:
- apiGroups:
  - ""
  resourceNames:
  - app1
  - app1-service
  resources:
  - pods
  - pods/exec   # the create verb on pods/exec is what kubectl exec needs
  - services
  verbs:
  - get
  - list
  - watch
  - create
Create a Role that allows read access to all Pods and Services in the namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-all
  namespace: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - get
  - list
  - watch
Create two RoleBindings that bind both of these Roles to the group1 group:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app1-admin-group1
  namespace: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app1-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: group1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-all-group1
  namespace: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-all
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: group1
Now, members of group1 should have full access to the Pod app1 and the Service app1-service, but only read access to the other Pods and Services in the namespace.
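To sanity-check the bindings you can use kubectl auth can-i with impersonation (this assumes the manifests above are applied, that dev-user is a placeholder for some member of group1, and that your own account is allowed to impersonate users and groups):

# expected "yes": group1 may read the app1 Pod
kubectl auth can-i get pods/app1 -n nginx-ingress --as dev-user --as-group group1

# expected "no": neither Role grants delete on app2
kubectl auth can-i delete pods/app2 -n nginx-ingress --as dev-user --as-group group1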
I don't think that's true at all; you can have Ingress resources in different namespaces. Also, you can actually refer to resource names in RBAC:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources
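To illustrate the namespace point: the NGINX Ingress controller can stay in its own namespace and still serve Ingress objects that live next to the apps. A minimal sketch, assuming a made-up namespace app1-ns that already contains the Service app1-service (the hostname is illustrative too):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1
  namespace: app1-ns          # lives with the app, not with the controller
spec:
  ingressClassName: nginx
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service   # must be in the same namespace as the Ingress
            port:
              number: 80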
Related
For AKS to use an Azure disk for persistent storage, we can define a PersistentVolumeClaim as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 5Gi
Is it possible to use an Azure disk from another tenant B as a PersistentVolume for an AKS cluster in tenant A?
I don't think that this is possible.
I guess you will have to migrate your disks into the subscription the AKS cluster is running in. You can then use the existing disk as described here.
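For reference, once the disk sits in the same tenant and subscription as the cluster, a statically provisioned PersistentVolume can point at it by its resource ID. A rough sketch using the in-tree azureDisk volume source; the disk name, resource group and subscription ID are placeholders (on newer clusters the Azure Disk CSI driver is used instead, with the same disk URI as the volume handle):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-azure-disk
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  azureDisk:
    kind: Managed
    diskName: myAKSDisk
    diskURI: /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/myAKSDisk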
We have an AKS cluster set up with a node pool spread across multiple availability zones. Using the default storage class, if a Pod needs to move to another node and the only available node is in a different zone, the Pod can't start up because the storage is stuck in the original zone. Do any of the other built-in storage classes support relocating workloads across multi-zone pools?
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  namespace: $NAMESPACE
  labels:
    service: db
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 4Gi
Yes, you can use the configuration below, depending on your needs.
Example StorageClass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_ZRS
  location: eastus2
  storageAccount: azure_storage_account_name
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - eastus2-1
    - eastus2-2
    - eastus2-3
According to that, the following skuName values are available in Azure:
Standard_LRS — standard locally redundant storage (LRS)
Standard_GRS — standard geo-redundant storage (GRS)
Standard_ZRS — standard zone redundant storage (ZRS)
Standard_RAGRS — standard read-access geo-redundant storage (RA-GRS)
Premium_LRS — premium locally redundant storage (LRS)
Premium_ZRS — premium zone redundant storage (ZRS)
References: K8s Allowed Topologies, AKS - Availability Zones, AKS - StorageClasses
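A claim against that class is then just a normal PersistentVolumeClaim that references it by name; for example (the claim name and size are illustrative, and Azure Files allows ReadWriteMany):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 4Gi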
I have created an Azure Kubernetes cluster with RBAC enabled.
My thinking is that if any pod wants to access any resource in the cluster, it should be associated with a service account, and that service account should have a specific role assigned in order to access the resource.
But in my case I am able to access resources (list pods, list namespaces) from a pod that is associated with a service account that does not have any role assigned.
Please help me understand whether my understanding of RBAC is wrong or I am doing something wrong here.
Your understanding is right. I'm not exactly sure about the permissions granted to the default service account, but if you create your own role and assign it to the service account, you can control permissions. Sample:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myserviceaccount
  namespace: mynamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: orleans-cluster
subjects:
- kind: ServiceAccount
  name: myserviceaccount
  namespace: mynamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: orleans-cluster
  namespace: mynamespace
rules:
- apiGroups:
  - orleans.dot.net
  resources:
  - clusterversions
  - silos
  verbs:
  - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myserviceaccount
  namespace: mynamespace
If you assign myserviceaccount to the pod, it will only allow the pod to do whatever is defined in the role. So you need to create a role and a service account, and use a RoleBinding (or ClusterRoleBinding for cluster-wide permissions) to bind the role to the service account, as sketched below.
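For completeness, attaching the service account to a Pod is done with spec.serviceAccountName; a minimal sketch (the Pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: orleans-app
  namespace: mynamespace
spec:
  serviceAccountName: myserviceaccount   # the pod now acts only with the permissions bound above
  containers:
  - name: app
    image: myregistry/orleans-app:latest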
Currently I am trying to deploy applications to an AKS (Azure Kubernetes Service) cluster.
For the deployment pipeline I would like to use a service account which is managed through Azure Active Directory (e.g. a service principal).
I have already created a service principal through the Azure CLI.
What is the right way to make this service principal known as a service account inside the AKS cluster?
The reason I need a service account and not a user account is that I want to use it from my DevOps pipeline without requiring a login, but still be able to manage it through Active Directory.
Currently I'm using the default service account to deploy my containers inside a namespace; this works, but the account is only known inside the namespace and not centrally managed.
# This binding enables a cluster account to deploy on kubernetes
# You can confirm this with
# kubectl --as="${USER}" auth can-i create deployments
# See also: https://github.com/honestbee/drone-kubernetes/issues/8
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: default-deploy
rules:
- apiGroups: ["extensions"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-default-deploy
  namespace: default
roleRef:
  kind: Role
  name: default-deploy
  apiGroup: rbac.authorization.k8s.io
subjects:
# working, the default account configured with deploy permissions
- name: default
  kind: ServiceAccount
  namespace: default
# works, if the service principal is configured as a User
- name: "111111-0000-1111-0000-********"
  apiGroup: rbac.authorization.k8s.io
  kind: User
# this does not work, the service principal is configured as a Service Account
- name: "111111-0000-1111-0000-********"
  apiGroup: rbac.authorization.k8s.io
  kind: ServiceAccount
I would expect to be able to configure the service account through RBAC as well; however, I get the following error:
The RoleBinding "role-default-deploy" is invalid:
subjects[1].apiGroup: Unsupported value:
"rbac.authorization.k8s.io": supported values: ""
Does anyone know how I can see my AKS persisted volume (azurefile) data in Azure Storage Explorer or in the portal?
The persisted volume is working, but somehow I can't see the raw files.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: trstorage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql
spec:
  capacity:
    storage: 1Gi
  hostPath:
    path: "/data/mysql"
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 500Mi
P.S. I know it's a bad idea to use azurefile for a database, so ignore that for now.
When I look in the storage account I don't see any files; that's what I don't understand.
For your issue, I understand that you set up a persisted volume in Azure Storage for your MySQL database in Azure Kubernetes Service (AKS).
First, if your mount path is right, MySQL will automatically create files in that path itself, and you will then see those files in Azure Storage Explorer or in the portal under the file share.
Second, you can check whether the Azure Storage file share was mounted correctly at the mount point with the command kubectl describe pod podName.
Or check it in the browser with the command az aks browse --resource-group resourceGroupName --name AKSClusterName.
Third, you can check the path by connecting to the AKS node. For that, you can follow the document SSH into Azure Kubernetes Service (AKS) cluster nodes.
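Concretely, the checks mentioned above look roughly like this (the pod, resource group and cluster names are placeholders):

# inspect the pod's volume mounts and events
kubectl describe pod mysql-0 -n default

# open the cluster's Kubernetes dashboard in a browser
az aks browse --resource-group myResourceGroup --name myAKSCluster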
I did the test myself: the persisted volume shows up both in the portal and in Microsoft Azure Storage Explorer (screenshots omitted here).
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000
- gid=1000
parameters:
  skuName: Standard_LRS
  storageAccount: gdkstore
Note: the storageAccount parameter is what's currently missing from the MS docs.
Change gdkstore to your own storage account name in the correct resource group.
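To double-check from the CLI which file shares actually ended up in that storage account, something like the following should work (the account key handling is illustrative; you can also look this up under "File shares" on the storage account in the portal):

az storage share list \
  --account-name gdkstore \
  --account-key "<storage-account-key>" \
  --output table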