connect AAD to existing AKS that has rbac enabled - azure

Working with Azure, we started with AKS last year. When creating the AKS clusters we use, we checked what needed to be done up front to enable RBAC at a later moment, and at the time we thought that setting 'rbac' to 'enabled' was the only thing we needed. This results in the following:
Now we're trying to implement RBAC integration of AKS with AAD, but I have read some seemingly conflicting prerequisites. Some say that in order to integrate AAD and AKS, you need RBAC enabled at cluster creation. I believe we have set that correctly, looking at the picture above.
But then in the Azure docs, it is mentioned that you need to create a cluster and add some AAD-integration keys for the client and server applications.
My question is actually two-fold:
1. When people say you need RBAC enabled in your AKS cluster during creation, do they actually mean you should select the 'rbac: enabled' box AND make sure you create the AAD-related applications up front, and also configure these during cluster creation?
2. Is there a way to set up the AKS-AAD RBAC connection on a cluster that has rbac: enabled but is missing the aadProfile configuration?
I believe we do indeed need to re-create all our clusters, but I want to know for sure by asking here, as it's not 100% clear to me from what I've read online (also here at Stack Exchange) and it's going to be an awful lot of work.

For all of your requirements, you only need to make sure RBAC is enabled for your AKS cluster, and it can only be enabled at creation time. Then you can update the AAD profile credentials of the existing AKS cluster like this:
Before update:
CLI update command:
az aks update-credentials -g yourResourceGroup -n yourAKSCluster --reset-aad --aad-server-app-id appId --aad-server-app-secret appSecret --aad-client-app-id clientId --aad-tenant-id tenantId
After update:

1. Yes, that is correct.
2. No, there is no way of doing that; you need to recreate.
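If you do end up recreating, the legacy AAD (v1) integration is configured at creation time by passing the AAD server/client application details to az aks create. A minimal sketch, assuming the AAD applications already exist and using hypothetical names and IDs (flags may differ slightly between Azure CLI versions):
# create the cluster with the legacy AAD integration in one go
az aks create \
  --resource-group yourResourceGroup \
  --name yourAKSCluster \
  --generate-ssh-keys \
  --aad-server-app-id <serverAppId> \
  --aad-server-app-secret <serverAppSecret> \
  --aad-client-app-id <clientAppId> \
  --aad-tenant-id <tenantId>
Recent CLI versions enable RBAC by default; on older versions you may need to pass --enable-rbac explicitly.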

Related

How to create Azure databricks cluster using Service Principal

I have an Azure Databricks workspace and I added a service principal to that workspace using the Databricks CLI. I have been trying to create a cluster using the service principal but am not able to figure it out. Can anyone help me?
I am able to create a cluster using my account, but I want to create it using the Service Principal and have it be the owner of the cluster, not me.
Also, is there a way I can transfer ownership of my cluster to the Service Principal?
First, answering the second question - no, you can't change the owner of the cluster.
To create a cluster that will have the Service Principal as its owner, you need to execute the creation operation under its identity. To do this you need to perform the following steps (a combined sketch follows the list):
Prepare a JSON file with the cluster definition as described in the documentation
Set the DATABRICKS_HOST environment variable to the address of your workspace:
export DATABRICKS_HOST=https://adb-....azuredatabricks.net
Generate an AAD token for the Service Principal as described in the documentation and assign its value to the DATABRICKS_TOKEN or DATABRICKS_AAD_TOKEN environment variable (see docs).
Create the Databricks cluster using databricks-cli, providing the name of the JSON file with the cluster specification (docs):
databricks clusters create --json-file create-cluster.json
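Put together, a rough end-to-end sketch of those steps might look like this (the workspace URL, service principal IDs and cluster settings are hypothetical placeholders; 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d is the well-known Azure Databricks resource ID):
# log in as the service principal so the token (and thus cluster ownership) is issued to it
az login --service-principal -u <appId> -p <clientSecret> --tenant <tenantId>

export DATABRICKS_HOST=https://adb-1234567890123456.7.azuredatabricks.net
export DATABRICKS_AAD_TOKEN=$(az account get-access-token \
  --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d \
  --query accessToken -o tsv)

# minimal cluster spec; adjust Spark version, node type and size to your needs
cat > create-cluster.json <<'EOF'
{
  "cluster_name": "sp-owned-cluster",
  "spark_version": "11.3.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "num_workers": 1
}
EOF

databricks clusters create --json-file create-cluster.json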
P.S. Another approach (really recommended) is to use the Databricks Terraform provider to script your Databricks infrastructure - it's used by a significant number of Databricks customers, and it is much easier to use compared with the command-line tools.

Storage account connectivity method for AKS

I'm setting up a Storage Account so I can dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS). I'm doing this to:
Have a PV and PVC for the database
Have a place to store the application files
AKS does create a storage account in the MC_<resource-group>_<aks-name>_<region> resource group that is automatically created. However, that storage account is destroyed if the node size/VM is changed (not node count), so it shouldn't be used since you'll lose your files and database if you need a node size/VM with more resources.
Neither this documentation, nor any other I've really come across, says what the best practice is for the connectivity method:
Public endpoint (all networks)
Public endpoint (selected networks)
Private endpoint
The first option sounds like a bad idea.
The second option allows me to select a virtual network, and there are two choices:
MC_<resource-group>_<aks-name>_<region>... again, doesn't seem like a good idea because if the node size/VM is changed, the connection will be broken.
aks-vnet-<number>... not sure what this is, but it looks like it is part of the previous resource group, so it will also be destroyed in the previously mentioned scenario.
The third option contains a number of choices, some of which are included in the second option.
So how should I securely set this up for AKS to share files with the application and persist database files?
EDIT
Looking at both the "Firewalls and virtual networks" and "Private endpoint connections" settings for the storage account that comes with the AKS node, it looks like it is just set up for "All networks"... so maybe having that be where my actual PV and PVC will be stored isn't such an issue...? Could use some clarity on the topic.
Not sure where the problem lies. All the assets generated by AKS are tied to the AKS lifecycle; if you delete AKS it will delete the MC_* resource group (and that is 100% right). Not sure what you mean about the storage account being destroyed; it wouldn't get destroyed unless you remove the PVC and the reclaim policy is set to delete.
Reading: https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
As for the networking part, selected networks, selecting the AKS nodes' network, should be the way to go. You can figure that network out by looking at the AKS nodes or the AKS agent pool definition(s). I don't think this is configurable using only Kubernetes primitives, so that would be a manual/scripted action after the storage account is created.
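For reference, one way to script the "selected networks" option with the Azure CLI (a sketch only; the resource group, vnet, subnet and storage account names below are hypothetical, and for a managed AKS vnet they typically live in the MC_* resource group):
# enable the Microsoft.Storage service endpoint on the AKS subnet
az network vnet subnet update \
  --resource-group MC_myRG_myAKS_westeurope \
  --vnet-name aks-vnet-12345678 \
  --name aks-subnet \
  --service-endpoints Microsoft.Storage

# allow that subnet on the storage account, then deny everything else
SUBNET_ID=$(az network vnet subnet show \
  --resource-group MC_myRG_myAKS_westeurope \
  --vnet-name aks-vnet-12345678 \
  --name aks-subnet --query id -o tsv)
az storage account network-rule add \
  --resource-group myStorageRG \
  --account-name mystorageacct \
  --subnet $SUBNET_ID
az storage account update \
  --resource-group myStorageRG \
  --name mystorageacct \
  --default-action Deny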

Work on kubernetes with two accounts in PowerShell with Azure

I have two separate Azure accounts.
One for each project I am involved in. These accounts are totally independent; that is, they do not share any type of resource and they do not have the same domain. They are from two totally different companies.
I find that both accounts respond when I log in from PowerShell, and I can access their resources.
Both work with Kubernetes (kubectl), but only one of the two accounts is shown. Whatever I do, kubectl always shows the clusters of one account and never the other.
I have Azure CLI v2.0.76 and PowerShell version 5.1.
Does anyone know how I can do this?
EDIT with pictures - Although the account is set as default, I am not able to access the Kubernetes cluster of that account. (Screenshots: PS default account; services from the other, non-default cluster.)
I just found the solution.
When we log in from PS with az login and select the account, it allows us to access all the resources of that account (the one that is set as default).
What I have done is basically run:
kubectl config view
This returns all the clusters it finds, together with their contexts. The next thing we have to do is tell kubectl which CONTEXT we want to work with, in the following way:
kubectl config use-context "CONTEXT NAME"
And that's it.
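For completeness, a sketch of how both clusters end up in kubeconfig in the first place (the subscription, resource group and cluster names are hypothetical):
# first account / company
az login
az account set --subscription "Company-A-Subscription"
az aks get-credentials --resource-group rgA --name aksA   # merges the context into ~/.kube/config

# second account / company
az login
az account set --subscription "Company-B-Subscription"
az aks get-credentials --resource-group rgB --name aksB

kubectl config get-contexts      # list every merged context
kubectl config use-context aksB  # point kubectl at the other cluster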

Accessing Azure Storage Blob from an AKS cluster

A little context: I'm having to migrate a project from AWS, where I'm currently using ECS, to Azure, where I'll be using AKS since their ACS (ECS equivalent) is deprecated.
This is a regular Django app, with its configuration variables being fetched from a server-config.json hosted on a private S3 bucket; the EC2 instance has the correct role with S3FullAccess.
I've been looking into reproducing that same behavior but with Azure Blob Storage instead, having achieved no success whatsoever :-(.
I tried using the Service Principal concept and adding it to the AKS cluster with Storage Blob Data Owner roles, but that doesn't seem to work. Overall it's been quite the frustrating experience - maybe I'm just having a hard time grasping the right way to use the permissions/scopes. The fact that the AKS cluster creates its own resource group is something unfathomable - but I've attempted attaching the policies to it as well, to no avail. I then moved on to a solution indicated by Microsoft.
I managed to bind my AKS pods with the correct User Managed Identity through their indicated solution aad-pod-identity, but I feel like I'm missing something. I assigned Storage Blob Data Owner/Contributor to the identity, but still, when I enter the pods and try to access a Blob (using the python sdk), I get a resource not found message.
Is what I'm trying to achieve possible at all? Or will I have to change to a solution using Azure Keyvault/something along those lines?
First of all, you can use AKS Engine, which is more or less ACS for Kubernetes now.
As for access to the blob storage, you don't have to use Managed Service Identity; you can just use the account name/key (which is a bit less secure, but a lot less error prone, and more examples exist). The fact that you are getting a "resource not found" error most likely means your auth part is fine; you just don't have access to the resource. According to this, Storage Blob Data Contributor should be fine if you assigned it at the proper scope. For this to work 100%, just give your identity Contributor access at the subscription level; that way it is guaranteed to work.
I've found an example of using Python with MSI (here). You should start with that (and grant your identity Contributor access) and verify you can list resource groups. When that works, making blob reads work should be trivial.
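To rule out scope problems first, it can also help to assign the role explicitly at the storage account scope (a sketch with hypothetical resource group, identity and storage account names):
# client ID of the user-assigned identity bound via aad-pod-identity
IDENTITY_CLIENT_ID=$(az identity show \
  --resource-group myRG --name my-pod-identity --query clientId -o tsv)

# full resource ID of the storage account that holds the blobs
STORAGE_ID=$(az storage account show \
  --resource-group myRG --name mystorageacct --query id -o tsv)

az role assignment create \
  --assignee $IDENTITY_CLIENT_ID \
  --role "Storage Blob Data Contributor" \
  --scope $STORAGE_ID
Keep in mind that role assignments can take a few minutes to propagate before the pods can actually read blobs.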

azure container with RBAC

I am trying to understand how I can run a Python application inside an Azure container that can spin up various resources like VMs and route tables inside an Azure resource group.
http://azure-sdk-for-python.readthedocs.io/en/latest/
It looks like the examples provided in the Azure Python SDK set up authentication details like credentials, subscription ID, etc.
Is there a way to avoid setting up this information by using Role-Based Access Control? What if I give the container Contributor access to the resource group; in that case, would I be able to avoid setting up authentication variables and still create resources, or at least read/access resources in the resource group?
No, how do you think that would work? In order to be able to create/read/modify resources you need permissions (else anyone would be able to do that). There is no way to do that without some sort of auth.
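That said, if the goal is only to avoid hard-coding credentials in the application, one common pattern (a sketch with hypothetical names, not something the answer above prescribes) is to give the container a managed identity and grant that identity a role on the resource group; the Python SDK then authenticates against the identity endpoint instead of stored secrets:
# create the container instance with a system-assigned managed identity
az container create \
  --resource-group myRG \
  --name my-python-app \
  --image myregistry.azurecr.io/my-python-app:latest \
  --assign-identity

# grant that identity Contributor on the resource group it should manage
PRINCIPAL_ID=$(az container show \
  --resource-group myRG --name my-python-app \
  --query identity.principalId -o tsv)
az role assignment create \
  --assignee $PRINCIPAL_ID \
  --role Contributor \
  --resource-group myRG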
