I've been working with EKS and GKE, and both have a formal, streamlined way of ensuring a pod comes up with the necessary credentials for a "role" in the cloud's RBAC system (an IAM role for AWS, a service account in GCP) via Kubernetes' native service accounts. This makes integration with the host cloud's other services quite easy. I'm now trying to do the same in AKS, but I don't see any such integration with its RBAC. Am I missing something badly? How do you do it?
By default, when creating an AKS cluster, a service principal is created for that cluster.
That service principal can then be set at the level of some other Azure resource (a VM?) so that the two can establish a network connection and communicate (apart from the general network settings, of course).
I am really not sure when this is required and when it is not. For example, if I have a database on a VM, do I need to grant the AKS service principal access to that VM for the cluster to communicate with it over the network, or not?
Can someone provide me some guidance on this, rather than general documentation? When is this required to be set at the level of those other Azure resources, and when is it not?
I cannot find a proper explanation for this.
Thank you
Regarding your question about the DB: you do not need to give the service principal any access to that VM. The database runs outside of Kubernetes, so the cluster does not need to manage that VM in any way. The database could even be in a different data center, or hosted on another cloud provider entirely; applications running inside Kubernetes will still be able to communicate with it as long as the traffic is allowed by firewalls, etc.
I know you did not ask for generic documentation, but the documentation on AKS service principals puts it well:
To interact with Azure APIs, an AKS cluster requires either an Azure Active Directory (AD) service principal or a managed identity. A service principal or managed identity is needed to dynamically create and manage other Azure resources such as an Azure load balancer or container registry (ACR).
In other words, the service principal is the identity that the Kubernetes cluster authenticates with when it interacts with other Azure resources, such as:
Azure Container Registry: The images that containers are created from must come from somewhere. If you are storing your custom images in a private registry, the cluster must be authorized to pull images from it. If the private registry is an Azure Container Registry, the service principal must be authorized for those operations (see the sketch after this list).
Networking: Kubernetes must be able to dynamically configure route tables and to register services' external IPs with a load balancer. Again, the service principal is used as the identity, so it must be authorized.
Storage: To access disk resources and mount them into pods.
Azure Container Instances: In case you are using the virtual kubelet to dynamically add compute resources to your cluster, Kubernetes must be allowed to manage containers on ACI.
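For the container registry case specifically, newer Azure CLI versions offer a one-step shorthand that performs the required role assignment for you; a minimal sketch, assuming hypothetical resource names:

# Grant the cluster's identity pull access to the registry in one step.
az aks update --resource-group myResourceGroup --name myAKSCluster --attach-acr myRegistry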
To delegate access to other Azure resources, you can use the Azure CLI to assign a role to an assignee at a certain scope:
az role assignment create --assignee <appId> --scope <resourceScope> --role Contributor
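For example, to explicitly authorize the cluster to pull from a private Azure Container Registry (all names below are hypothetical placeholders, and AcrPull is used instead of the broader Contributor role):

# Service principal (client) ID of the cluster.
APP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query servicePrincipalProfile.clientId --output tsv)

# Resource ID of the registry, used as the scope of the assignment.
ACR_ID=$(az acr show --resource-group myResourceGroup --name myRegistry --query id --output tsv)

# AcrPull is sufficient for pulling images.
az role assignment create --assignee "$APP_ID" --scope "$ACR_ID" --role AcrPull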
Here is a detailed list of all cluster identity permissions in use.
I know that with Azure Kubernetes Service we can use managed identities to access Azure resources like key vaults. But I'm trying to learn whether the same procedure can be applied to a Kubernetes cluster that is self-hosted on Azure. My aim is to have a Kubernetes cluster in Azure with 2 worker and 2 controller nodes, but the pods residing on those nodes should access Azure Key Vault via the managed identity method, similar to AKS. Is there any way we can do it without coding in the application?
I understand the scope of this question is big, but it would be really helpful if somebody could provide some high-level steps.
Thanks,
Santosh
That's totally possible. AAD Pod Identities rely on AAD (Azure Active Directory) and its permissions.
In the end, AKS is just infrastructure behind the scenes. So if you plan not to use AKS but to install a cluster yourself, for example with AKS Engine, you can still use AAD Pod Identities / managed identities.
All you need is for those machines to reside in Azure and rely on what is called the Azure Instance Metadata Service (IMDS). You can even enroll machine instances coming from outside Azure with the Azure Arc project. That said, I cannot speak to how Arc behaves with managed identities since I have not used it, but it should follow a similar pattern.
Here is a good article that explains AAD Pod Identities:
https://itnext.io/the-right-way-of-accessing-azure-services-from-inside-your-azure-kubernetes-cluster-14a335767680
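As a minimal sketch of the Azure-side setup (all names are placeholders; the AAD Pod Identity components themselves are installed into the cluster separately, and the identity still has to be bound to pods through the project's CRDs):

# Create a user-assigned managed identity for the pods to use.
az identity create --resource-group myResourceGroup --name myPodIdentity

# Object (principal) ID of the new identity.
IDENTITY_OID=$(az identity show --resource-group myResourceGroup --name myPodIdentity --query principalId --output tsv)

# Allow the identity to read secrets from a Key Vault.
az keyvault set-policy --name myKeyVault --object-id "$IDENTITY_OID" --secret-permissions get list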
I would like to know whether it is always recommended to use managed identities in Azure (mostly system-assigned) or a service principal.
When should service principals be used in Azure rather than a managed identity, and what is the advantage of one over the other?
Any help would be appreciated.
Internally, managed identities are service principals of a special type, which are locked to only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed. Also, when a User-Assigned or System-Assigned Identity is created, the Managed Identity Resource Provider (MSRP) issues a certificate internally to that identity.
Source: What are managed identities for Azure resources?
and
So what’s the difference?
Put simply, the difference between a managed identity and a service principal is that a managed identity manages the creation and automatic renewal of a service principal on your behalf.
Source: What’s an Azure Service Principal and Managed Identity?
A managed identity is a type of service principal.
A service principal can be one of three types: application, managed identity, and legacy. The division into types is based on the circumstances of their usage, so their specific handling also differs by type.
rickvdbosch provided a link to an article that talks about the specifics of the managed identity type of service principal.
For those who would like to learn about the concept of the service principal object and its types, here is a link to a different article:
Application and service principal objects in Azure Active Directory.
An Azure service principal is like an application whose tokens can be used by other Azure resources to authenticate and gain access to Azure resources.
Managed identities are service principals of a special type, which are locked to only be used with Azure resources.
The main difference between the two is that with a managed identity you don't need to specify any credentials in your code, whereas with a service principal you need to supply the application (client) ID, client secret, etc. to generate a token to access an Azure resource. Ideally, you should opt for a service principal only if the service you use doesn't support managed identities.
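To illustrate the difference: creating a service principal hands back credentials that your application must then store and present. A sketch with a hypothetical name and scope:

# The output contains an appId, a password (client secret) and a tenant ID -
# exactly the credentials a managed identity spares you from handling.
az ad sp create-for-rbac --name myAppSp --role Reader --scopes /subscriptions/<subscription-id>/resourceGroups/myResourceGroup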
Service Principal
We can say the most relevant part of the service principal is the Enterprise Applications section under Azure Active Directory. This is basically an application that will allow your user apps to authenticate and access Azure resources, based on RBAC.
It is essentially the ID of an application that needs to access Azure resources. In layman's terms: just as you might use a colleague's email ID to grant them the access they need to Azure resources and authenticate them, a service principal serves that purpose for an application.
Managed Identity
We can say that managed identities are actually service principals, and they are identical in the functionality and purpose they serve.
The only difference is that a managed identity is always linked to an Azure resource, unlike the application or 3rd-party connector mentioned above. They are created for you automatically, credentials included; the big benefit here is that no one knows the credentials.
There are two types of managed identities:
1.) System-assigned: in this scenario, the identity is linked to a single Azure resource, e.g. a virtual machine, a web app, a function, … so almost anything. These identities also "live" with the Azure resource, which means they get deleted when the Azure resource gets deleted.
2.) User-assigned: here you first have to create the identity as a stand-alone Azure resource by itself, after which it can be linked to multiple Azure resources. An example could be an integration with Key Vault, where different workload services belonging to the same application stack need to read information from Key Vault. In that case, one could create a single "read KV" managed identity and link it to the web app, storage account, function, logic app, … all belonging to the same application architecture.
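A minimal CLI sketch of the two flavors, with hypothetical resource names:

# System-assigned: enabled on, and deleted together with, a single resource.
az vm identity assign --resource-group myResourceGroup --name myVM

# User-assigned: created as a stand-alone resource first...
az identity create --resource-group myResourceGroup --name readKvIdentity

# ...then linked to as many resources as needed, e.g. a web app.
az webapp identity assign --resource-group myResourceGroup --name myWebApp --identities /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/readKvIdentity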
Managed identities are tied to a resource (VM, Logic App, etc.). To give that resource grants and permissions to access (CRUD) other resources, you use managed identities.
Service principals do not have to be tied to a resource; they live under the tenant and above the subscription, and, more importantly, they have credentials and auth tokens that must be stored somewhere (e.g. Key Vault). A service principal is like a fake user with its own credentials and tokens.
A service principal could be looked at as similar to a service account in a more traditional on-premises application or service scenario. Managed identities are used for "linking" a service principal security object to an Azure resource like a virtual machine, web app, logic app, or similar.
Is it possible to temporarily disable Azure Active Directory RBAC in Azure Kubernetes Service? The reason I ask is that we are unable to set up automated tasks (like continuous integration) because authenticating against kubectl now requires human intervention to complete device code auth; I have another post here regarding that. Perhaps even just disabling Kubernetes RBAC would bypass the need to authenticate with AD? I would do this until a solution to the issue is available.
Although there is no document that says outright that you cannot disable RBAC on an existing AKS cluster, the documentation does state that enabling role-based access control (RBAC) on existing clusters isn't supported at this time. In my opinion, that also means you cannot disable RBAC on an existing AKS cluster, and there seems to be no way to achieve it, whether through the Azure CLI, PowerShell, or the REST API.
I think RBAC is a setting that applies to the whole AKS cluster and cannot be changed after the cluster is created, at least for now. We can expect that this may change in the future. Hope this helps.
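As an aside, and assuming cluster-admin credentials are acceptable for your pipeline, one common workaround for the device-code prompt is to fetch the admin kubeconfig, which does not go through AAD:

# --admin bypasses the AAD device-code flow (hypothetical names; note this
# grants cluster-admin and works only while local accounts are enabled).
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster --admin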