I am planning to assign a pod identity to one of my applications. However, I don't understand what happens to the assigned pod identity when the pod restarts or dies on its own.
Does the pod get assigned a new identity on its own?
I'm not sure about your end-to-end setup, but if you are using it with a service account annotated for workload identity, the mapping will stay in place even if the pod restarts.
The AZURE_AUTHORITY_HOST environment variable and the azure-identity-token volume get auto-injected again when the pod restarts. Instead of a bare pod you can also use a deployment and attach the service account to it.
As mentioned in the official docs, it is a service-account-to-AAD mapping, so as long as your service account is referenced by the pod or deployment, it will get the token and other values.
Azure AD Workload Identity supports the following mappings:
one-to-one (a service account referencing an AAD object);
many-to-one (multiple service accounts referencing the same AAD object);
one-to-many (a service account referencing multiple AAD objects by changing the client ID annotation).
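As a sketch, the mapping is just an annotation on the Kubernetes service account, which is why it survives pod restarts. All names and IDs below are placeholders:

```shell
# Hypothetical namespace, service account name and client ID for illustration.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa
  namespace: my-app
  annotations:
    # The AAD application (client) ID this service account maps to.
    # Changing this annotation is how you point the same service account
    # at a different AAD object (the "one-to-many" mapping above).
    azure.workload.identity/client-id: 00000000-0000-0000-0000-000000000000
EOF
```

Any pod (or deployment pod template) that sets `serviceAccountName: workload-identity-sa` and carries the `azure.workload.identity/use: "true"` label gets the token volume and environment variables re-injected on every restart.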
From my point of view you have 2 options (maybe more, but let's focus on these 2):
Use Azure AD Workload Identity together with a federated identity credential linked to your service principal. Basically you configure trust between your AKS cluster (OIDC issuer), the Kubernetes service account for your pod, and the Azure service principal, so the pod can access resources with an Azure AD token. Here you have to adapt the code running inside your container to leverage workload identity with the issued Azure AD access token.
Use the Azure Key Vault Provider for Secrets Store CSI Driver. You configure the kubelet identity of your AKS cluster to read the secrets from the Key Vault and mount the service principal's client ID and client secret (saved as Key Vault secrets) as a volume into your pod during startup. Here you have to adapt the code running inside your container to read that information (client ID and secret) from the filesystem inside the pod. P.S.: You can also use workload identity, a system-assigned identity, or a service principal instead of a managed identity to access the Key Vault.
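For the first option, the trust setup can be sketched with the Azure CLI. Resource names, the application object ID, and the service account subject below are placeholders:

```shell
# Get the OIDC issuer URL of the AKS cluster (the issuer must be enabled).
ISSUER=$(az aks show -g my-rg -n my-aks \
  --query oidcIssuerProfile.issuerUrl -o tsv)

# Create a federated identity credential on the AAD application backing
# the service principal, trusting the Kubernetes service account.
az ad app federated-credential create \
  --id <application-object-id> \
  --parameters '{
    "name": "aks-federated-credential",
    "issuer": "'"$ISSUER"'",
    "subject": "system:serviceaccount:my-app:workload-identity-sa",
    "audiences": ["api://AzureADTokenExchange"]
  }'
```

The `subject` must match the namespace and name of the service account your pod uses; no secret is stored anywhere, which is the point of the federation.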
I'm trying to find a way to give an entire AKS cluster to Azure Key vault. I have temporarily got this working by following the below process:
Go to the VMSS of the cluster -> Identity -> Set System Assigned Status to 'On'
Add this Managed identity as an access policy to Key Vault.
This works, however whenever I stop and start the cluster, I have to re-create this managed identity and re-add it to Key Vault. I have tried using the User Assigned Identities for the vmss as well but that does not seem to work.
I also cannot use the azure pod identities/CSI features for other reasons so I'm just looking for a simple way to give my cluster permanent access to key Vault.
Thanks in advance
A pod is the smallest deployable unit in Kubernetes: a group of one or more containers that are deployed together on the same host (node).
A pod runs on a node, which is controlled by the control plane (master).
Pods use OS-level virtualization and consume VMSS resources while they run, based on their resource requests.
When you stop the cluster/nodes, the pods lose those resources and are terminated, so there will be no pods on the VMSS until you start it again. When you restart your cluster/node, new pods are created with different names and different IP addresses.
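You can observe this yourself with kubectl; a deployment's replacement pod comes back with a new name and usually a new IP (the pod name is a placeholder):

```shell
kubectl get pods -o wide        # note the current pod name and IP
kubectl delete pod <pod-name>   # simulate the pod dying
kubectl get pods -o wide        # the replacement pod has a new name and IP
```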
From this GitHub discussion, I found that the MIC (Managed Identity Controller) removes the identity from the underlying VMSS when no pods are configured to use that identity. So you have to recreate the managed identity for the VMSS.
You can refer to this link for a better understanding of how to access Key Vault from Azure AKS.
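One way to get access that survives cluster stop/start, without the system-assigned VMSS identity, is to use the cluster's kubelet identity, which is a user-assigned managed identity that AKS keeps across restarts. A sketch, with placeholder resource names:

```shell
# Look up the kubelet identity (user-assigned, persists across stop/start,
# unlike a system-assigned identity you enable on the VMSS yourself).
KUBELET_ID=$(az aks show -g my-rg -n my-aks \
  --query identityProfile.kubeletidentity.clientId -o tsv)

# Grant it an access policy on the vault once; no need to redo this
# after stopping and starting the cluster.
az keyvault set-policy -n my-vault \
  --secret-permissions get list \
  --spn "$KUBELET_ID"
```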
I am writing a script that logs into Azure, but I don't want to use my password explicitly. Therefore I switched on a system-assigned managed identity:
And now in a shell script I do this:
az login --identity --username xxx
'xxx' is the Object (principal) ID shown on the screenshot.
When I execute the command, I get this (IP and ID replaced with 'xxx'):
Failed to connect to MSI. Please make sure MSI is configured correctly and check the network connection.
Error detail: HTTPConnectionPool(host='XXX.XXX.XXX.XXX', port=XX): Max retries exceeded with url: /metadata/identity/oauth2/token?resource=https%3A%2F%2Fmanagement.core.windows.net%2F&api-version=2018-02-01&client_id=xxx (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x04B7DB08>:
Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
EDIT: it works fine when I run this command in Cloud Shell on the Azure portal.
Why can't I login? Am I missing something?
A system-assigned managed identity cannot be used to log in like that. It is explicitly tied to the resource you created it for, is not meant for re-use, and tokens can only be requested from within that resource, which is why the call to the MSI endpoint fails from your machine.
System-assigned. Some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity, an identity is created in Azure AD. The identity is tied to the lifecycle of that service instance. When the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Azure AD.
The most important part of that quote is the last sentence:
By design, only that Azure resource can use this identity to request tokens from Azure AD.
More information: What are managed identities for Azure resources?.
Also:
Can’t be shared.
It can only be associated with a single Azure resource.
EDIT:
Based on your question and the comment below you might be looking for a Service Principal. A managed identity, either system assigned or user assigned, is for use with an Azure resource.
Managed identities provide an identity for applications to use when connecting to resources that support Azure AD authentication.
An Azure service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources.
For more information on Service principals, see Create an Azure service principal with the Azure CLI.
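A minimal sketch of that approach, with placeholders for all IDs and the secret:

```shell
# Create a service principal scoped to one resource group and note the
# appId/password/tenant it prints.
az ad sp create-for-rbac --name my-script-sp \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>

# Log in with the service principal instead of a managed identity.
az login --service-principal \
  --username <appId> --password <password> --tenant <tenant-id>
```

Unlike a managed identity, this login works from anywhere, at the cost of having a secret to protect and rotate.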
I would like to know under whose authority AKS is creating the resource.
I'm trying to create an Internal Loadbalancer in AKS, but it fails without permissions.
However, I don't know who to give that privilege to.
The account that connected to AKS or the managed identity of AKS ? Or something else ?
Is the account that connected to AKS in the first place the same as the account that creates the AKS resources ?
It would be great if you could tell me the source of the information as well, as I need the documentation to explain it to my boss.
Best regards.
I'm trying to create an Internal Loadbalancer in AKS, but it fails
without permissions. However, I don't know who to give that privilege
to. The account that connected to AKS or the managed identity of AKS ?
Or something else ?
You will have to provide the required permissions to the managed identity of the AKS cluster. So, for your requirement to create an ILB in AKS, you need to give the Network Contributor role to that identity.
You can refer to this Microsoft documentation on how to delegate access for AKS to other Azure resources.
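A sketch of that role assignment with the Azure CLI; cluster, resource group, and subscription names are placeholders:

```shell
# Look up the AKS cluster's managed identity.
PRINCIPAL_ID=$(az aks show -g my-rg -n my-aks \
  --query identity.principalId -o tsv)

# Grant it Network Contributor on the resource group that contains the
# subnet the internal load balancer will be placed in.
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Network Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<network-resource-group>
```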
Is the account that connected to AKS in the first place the same as
the account that creates the AKS resources ?
The account connected to AKS is the same as the account that created the AKS resources from the Azure portal (user account), but it is different when Azure resources are accessed from inside AKS (managed identity / service principal).
For more information you can refer to this Microsoft documentation.
We are building a solution in Azure Government and we will be using Terraform to deploy the solution. Seems the preferred method is to create a Service Principal for Terraform with the Service Principal having the Contributor role scoped to the subscription.
The one issue with this we are looking to solve is that this gives the Service Principal management plane access to the Key Vault...since it is in the subscription. With Contributor role, the service principal is not able to create new access policies (assign itself or others permissions to the data plane) but we are looking for a way that can remove the service principal from having any management plane permissions.
We have tried putting a ReadOnly lock on the Key Vault before creating the Service Principal but the lock does not stop the service principal from getting the Contributor permissions on the Key Vault.
Outside of creating a new role that has Contributor for everything EXCEPT for Key Vault, does anyone have any creative ideas that might help achieve this?
Yes, the root cause of all the security issues is that the service principal's Contributor role assignment is at the subscription level/scope, which enables it to do quite a lot of damage, especially if multiple applications are deployed to the same subscription (e.g. it can delete any resource group).
One approach would be:
Provision one resource group for the Azure Key Vault, specific to the application and region (the latter in the case of geo-distributed applications).
Provision the Azure Key Vault in the resource group created in the previous step.
In our case, the Security Office was responsible for the first 2 steps, and they had monitoring in place (e.g. email, text messages, etc.) for any change in the Azure Key Vault (e.g. keys/secrets/certificates added/deleted/changed, permission changes, etc.).
Provision a second resource group, which will serve as a container for the application components (e.g. Azure Function, Azure SQL Server/Database, Azure Service Bus Namespace/Queue, etc.).
Create the Service Principal and assign the Contributor role to the application resource group only, for example:
scope = /subscriptions/{Subscription Id}/resourceGroups/{Resource Group Name}
Find a sample PS script to provision a Service Principal with custom scope at https://github.com/evandropaula/Azure/blob/master/ServicePrincipal/PS/Create-ServicePrincipal.ps1.
Give appropriate permissions to the Service Principal in the Azure Key Vault. In our case, we decided to have separate Service Principal accounts for deployment (read-write permissions on keys/secrets/certificates) and runtime (read-only permissions on keys/secrets/certificates).
Find a sample PS script to set Service Principal permission on an Azure Key Vault at https://github.com/evandropaula/Azure/blob/master/KeyVault/PS/Set-ServicePrincipalPermissions.ps1.
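The same two steps can also be sketched with the Azure CLI instead of PowerShell; all names and IDs below are placeholders:

```shell
# Service principal with Contributor scoped to the application resource
# group only, not the whole subscription.
az ad sp create-for-rbac --name my-app-deploy-sp \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<app-resource-group>"

# Data-plane permissions on the Key Vault, granted without giving the
# service principal any management-plane role on the vault itself.
az keyvault set-policy -n <vault-name> \
  --spn <appId-of-the-sp> \
  --secret-permissions get list \
  --key-permissions get list \
  --certificate-permissions get list
```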
Having that said, there are lots of inconveniences with this approach, such as:
The process (e.g. via runbook) to provision the Azure Key Vault (including its resource group) and the application resource group will be outside of the main Terraform template responsible for the application components, which requires coordination with different teams/processes/tools/etc.
Live-site incidents involving connectivity often require coordination among multiple teams to ensure RTO and MTTM (Mean Time To Mitigate) goals are achieved.
The Service Principal will be able to delete the application-specific resource group when terraform destroy is executed, but it will fail to recreate it when running terraform apply afterwards, due to lack of permission at the subscription level/scope. Here is the error:
provider.azurerm: Unable to list provider registration status, it is possible that this is due to invalid credentials or the service principal does not have permission to use the Resource Manager API, Azure error: resources.ProvidersClient#List: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '' with object id '' does not have authorization to perform action 'Microsoft.Resources/subscriptions/providers/read' over scope '/subscriptions/{Subscription Id}'.".
Yeah, I know, this is a long answer, but the topic usually requires lots of cross-team discussions/brainstorming to make sure the security controls established by the Security Office are met, developer productivity is not affected to the point that it impacts release schedules, and RTO/MTTM goals are achieved. I hope this helps a bit!