A little context: I'm having to migrate a project from AWS, where I'm currently using ECS, to Azure, where I'll be using AKS since their ACS (ECS equivalent) is deprecated.
This is a regular Django app whose configuration variables are fetched from a server-config.json hosted in a private S3 bucket; the EC2 instance has the correct role with S3FullAccess.
I've been trying to reproduce that same behavior with Azure Blob Storage instead, with no success whatsoever :-(.
I tried using the Service Principal approach, adding it to the AKS cluster with the Storage Blob Data Owner role, but that doesn't seem to work. Overall it's been quite a frustrating experience; maybe I'm just having a hard time grasping the right way to use the permissions/scopes. The fact that the AKS cluster creates its own resource group is baffling to me, but I've attempted assigning the roles to it as well, to no avail. I then moved on to a solution indicated by Microsoft.
I managed to bind my AKS pods to the correct user-assigned managed identity through their indicated solution, aad-pod-identity, but I feel like I'm missing something. I assigned Storage Blob Data Owner/Contributor to the identity, but still, when I exec into the pods and try to access a blob (using the Python SDK), I get a "resource not found" message.
Is what I'm trying to achieve possible at all, or will I have to switch to a solution using Azure Key Vault or something along those lines?
First of all, you can use AKS Engine, which is more or less what ACS for Kubernetes is now.
As for access to Blob Storage, you don't have to use Managed Service Identity; you can just use the account name/key (which is a bit less secure, but a lot less error prone, and more examples exist; see the sketch below). The fact that you are getting a "resource not found" error most likely means your auth part is fine and you just don't have access to the resource. According to this, Storage Blob Data Contributor should be fine if you assigned it at the proper scope. For this to work 100%, just give your identity Contributor access at the subscription level; that way it's guaranteed to work.
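For illustration, here is a minimal sketch of the account name/key route, assuming the v12 azure-storage-blob package; the account name, key, and container name are placeholders for your setup.

from azure.storage.blob import BlobServiceClient

# Account key auth: no Azure AD involved, just the storage account name and one of its keys.
account_url = "https://<account-name>.blob.core.windows.net"
service = BlobServiceClient(account_url=account_url, credential="<account-key>")

# Placeholder container name; the blob name matches the config file from the question.
blob = service.get_blob_client(container="<container-name>", blob="server-config.json")
config_bytes = blob.download_blob().readall()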
I've found an example of using Python with MSI (here). You should start with that (and grant your identity Contributor access) and verify you can list resource groups. When that works, getting blob reads to work should be trivial.
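In the same spirit, here is a hedged sketch of the MSI path using the current azure-identity package; it assumes aad-pod-identity is already binding the user-assigned identity to the pod, and the client ID, subscription ID, container, and storage account names are placeholders.

from azure.identity import ManagedIdentityCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.storage.blob import BlobServiceClient

# Use the client ID of the user-assigned identity bound to the pod by aad-pod-identity.
credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")

# Step 1: verify the identity works at all by listing resource groups
# (needs Reader/Contributor at the subscription or resource group scope).
resources = ResourceManagementClient(credential, "<subscription-id>")
for rg in resources.resource_groups.list():
    print(rg.name)

# Step 2: once that works, read the blob
# (needs Storage Blob Data Reader/Contributor on the storage account).
service = BlobServiceClient("https://<account-name>.blob.core.windows.net", credential=credential)
print(service.get_blob_client("<container-name>", "server-config.json").download_blob().readall())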
I have two storage accounts, stinboundclient1 and stinboundclient2, and the prefix "stinbound" is common to both. Inside each storage account there are containers for each environment (dev, test, prod). I have a dev virtual machine (DevVM) that needs access to only the "dev" container of both storage accounts. What is the best way to provide read/contributor access to the VM: Azure Policy, a custom role, or some other approach?
Please do not suggest manually assigning RBAC permissions to the VM container by container, because that is a tedious task; eventually there will be 30-40 client storage accounts.
Storage Account & Containers:
stinboundclient1/dev
stinboundclient1/test
stinboundclient1/prod
stinboundclient2/dev
stinboundclient2/test
stinboundclient2/prod
DevVM needs access to stinbound*/dev
Similarly, the test and prod VMs need access to their respective containers:
TestVM needs access to stinbound*/test
ProdVM needs access to stinbound*/prod
It seems to me that what you are looking for is actually what Microsoft calls Attribute-based Access Control (ABAC).
That way, you can grant access at a scope and add a condition for that access to apply only when, for example, the container has a particular name or a certain tag is present.
This feature is still in Preview though.
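For illustration, a rough Python sketch of what such a conditional role assignment could look like. It assumes a recent azure-mgmt-authorization package that exposes the condition/condition_version fields on RoleAssignmentCreateParameters; the role definition ID, principal ID, and scope are placeholders for your environment.

import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# ABAC condition: the assignment only applies to blob reads in containers named 'dev'.
condition = (
    "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR "
    "(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'dev'))"
)

# One assignment at a broad scope (e.g. the resource group holding the stinbound*
# accounts) covers current and future storage accounts; the condition narrows it
# down to the dev containers.
scope = "/subscriptions/<subscription-id>/resourceGroups/<clients-resource-group>"

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/roleDefinitions/<storage-blob-data-reader-role-id>",
        principal_id="<DevVM-managed-identity-object-id>",
        principal_type="ServicePrincipal",
        condition=condition,
        condition_version="2.0",
    ),
)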
I'm setting up a storage account so I can dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS). I'm doing this to:
Have a PV and PVC for the database
A place to store the application files
AKS does create a storage account in the automatically created MC_<resource-group>_<aks-name>_<region> resource group. However, that storage account is destroyed if the node size/VM is changed (not the node count), so it shouldn't be used, since you'll lose your files and database if you ever need a node size/VM with more resources.
Neither this documentation nor any other I've come across says what the best practice is for the connectivity method:
Public endpoint (all networks)
Public endpoint (selected networks)
Private endpoint
The first option sounds like a bad idea.
The second option allows me to select a virtual network, and there are two choices:
MC_<resource-group>_<aks-name>_<region>... again, this doesn't seem like a good idea, because if the node size/VM is changed, the connection will be broken.
aks-vnet-<number>... not sure what this is, but it looks like it is part of the previous resource group, so it will also be destroyed in the previously mentioned scenario.
The third option contains a number of choices, some of which are included in the second option.
So how should I securely set this up for AKS to share files with the application and persist database files?
EDIT
Looking at both the "Firewalls and virtual networks" and "Private endpoint connections" settings for the storage account that comes with the AKS nodes, it looks like it is just set up for "All networks"... so maybe having that be where my actual PV and PVC are stored isn't such an issue...? I could use some clarity on the topic.
Not sure where the problem lies. All the assets generated by AKS are tied to the AKS lifecycle: if you delete AKS, it will delete the MC_* resource group (and that is 100% right). Not sure what you mean about the storage account being destroyed; it wouldn't get destroyed unless you remove the PVC and the reclaim policy is set to delete the underlying storage.
Reading: https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
As for the networking part, "selected networks" with the AKS nodes' network selected should be the way to go. You can figure out which network that is by looking at the AKS nodes or the AKS agent pool definition(s). I don't think this is configurable using only Kubernetes primitives, so it would be a manual/scripted action after the storage account is created.
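For what it's worth, a hedged sketch of what that scripted step could look like with the Python management SDK (azure-mgmt-storage plus azure-identity). Every name and ID below is a placeholder, and the AKS subnet needs the Microsoft.Storage service endpoint enabled for the rule to take effect.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    NetworkRuleSet,
    StorageAccountUpdateParameters,
    VirtualNetworkRule,
)

subscription_id = "<subscription-id>"
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Resource ID of the subnet used by the AKS agent pool, typically found on the
# aks-vnet-<number> virtual network in the MC_* node resource group.
aks_subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/MC_<resource-group>_<aks-name>_<region>"
    "/providers/Microsoft.Network/virtualNetworks/aks-vnet-<number>/subnets/<subnet-name>"
)

# Deny everything by default, then allow only the AKS node subnet.
client.storage_accounts.update(
    "<storage-resource-group>",
    "<storage-account-name>",
    StorageAccountUpdateParameters(
        network_rule_set=NetworkRuleSet(
            default_action="Deny",
            virtual_network_rules=[VirtualNetworkRule(virtual_network_resource_id=aks_subnet_id)],
        )
    ),
)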
I have two separate Azure accounts.
One for each project I am involved in. These accounts are totally independent, that is, they do not share any resources and they do not have the same domain. They belong to two totally different companies.
Both accounts respond when I log in from PowerShell, and I can access their resources.
Both work with Kubernetes (kubectl), but only one of the two accounts shows up; whatever I do, kubectl always shows the contents of one and never the other.
I have Azure CLI v2.0.76 and PowerShell 5.1.
Does anyone know how I can fix this?
EDIT with pictures: although that account is the default, I am not able to access its Kubernetes cluster.
[Screenshots: the PowerShell default account, and the services shown for the other, non-default account's cluster.]
I just found the solution.
When we log in from PowerShell with az login and select the account, it lets us access all the resources of that account (the one set as default).
What I did was basically run:
kubectl config view
This returns all the clusters it finds along with their contexts. The next thing we have to do is tell kubectl which CONTEXT we want to work with, as follows:
kubectl config use-context "CONTEXT NAME"
And that's it.
I realize that Terraform supports Azure, and I've actually been able to get Terraform working with Azure by doing the following:
Created a storage account
Created a blob container
Plugged in the access key
Created a file named backend.tfvars with resource_group_name, storage_account_name, container_name, access_key, and key values.
Added the following to main.tf:
terraform {
  backend "azurerm" {
  }
}
I ran terraform init -backend-config="backend.tfvars"
When I look in the blob container, I see the myapp.tfstate file, which means that I've been successful, right?
What exactly does this get me? I understand that my state file is now saved in Azure, but... how does that help me? I've looked around for documentation explaining this, but for some reason I haven't been able to find anything.
Charles is right that nothing changes apart from the state being stored in another place, but he is wrong that there is no difference. There is a difference, and the main one appears when you have a team of people working with Terraform.
You see, the backend is not only used to store state, but also to signal that an operation is currently in progress; this is called locking (the azurerm backend takes a lease on the state blob). With centralized storage, none of your teammates can accidentally try to change resources while somebody else is already doing so.
Actually, storing the Terraform state in an Azure Storage Account is, I think, not that different from storing it locally; it just changes where the state lives. But according to the description in the documentation:
By default, data stored in an Azure Blob is encrypted before being persisted to the storage infrastructure. When Terraform needs state, it is retrieved from the backend and stored in memory on your development system. In this configuration, the state is secured in Azure Storage and not written to your local disk.
So it seems there is still some effect on the security of the data.
I am trying to understand how I can run a Python application inside an Azure container that can spin up various resources, like VMs and route tables, inside an Azure resource group.
http://azure-sdk-for-python.readthedocs.io/en/latest/
It looks like the examples provided in the Azure Python SDK set up authentication details like credentials, subscription ID, etc.
Is there a way to avoid setting up this information by using role-based access control? What if I give the container Contributor access to the resource group; in that case, would I be able to avoid setting up authentication variables and still create resources, or at least read/access the resources in the resource group?
No, how do you think that would work? In order to be able to create/read/modify resources you need permissions (else anyone would be able to do that). There is no way to do that without some sort of auth.
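That said, "some sort of auth" does not have to mean credentials pasted into the code. A hedged sketch, assuming the container runs with a managed identity that has Contributor on the resource group: DefaultAzureCredential picks the identity up from the environment, so no client ID/secret variables are needed, although you still pass the subscription ID and authentication still happens under the hood.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# No explicit secrets: DefaultAzureCredential falls back to the managed identity in
# Azure (or environment variables / az login locally). The subscription ID and
# resource group below are placeholders.
credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")

for vm in compute.virtual_machines.list("<resource-group>"):
    print(vm.name)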