Provide Azure VM access to multiple Storage Accounts

I have two storage accounts, stinboundclient1 and stinboundclient2; the prefix "stinbound" is common to both. Inside each storage account there are containers for each environment (dev, test, prod). I have a dev virtual machine (DevVM) that needs access only to the "dev" container of both storage accounts. What is the best way to provide read/contributor access to the VM using Azure Policy, a custom role, or any other approach?
Please do not suggest the manual way of assigning RBAC permissions to the VM for each container, because that is a tedious task; eventually there will be 30-40 client storage accounts.
Storage Account & Containers:
stinboundclient1/dev
stinboundclient1/test
stinboundclient1/prod
stinboundclient2/dev
stinboundclient2/test
stinboundclient2/prod
DevVM needs access to stinbound*/dev
Similarly, the Test and Prod VMs need access to their respective containers:
TestVM needs access to stinbound*/test
ProdVM needs access to stinbound*/prod

It seems to me that what you are looking for is actually what Microsoft calls Attribute-based Access Control (ABAC).
That way, you can grant access at a scope and add a condition so that the access is only effective when, for example, the container name matches or a particular tag is present.
This feature is still in Preview though.
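For illustration, here is a rough sketch of what such a conditional ("dev" containers only) role assignment could look like with the Python SDK. The resource IDs and names are placeholders, and it assumes a version of azure-mgmt-authorization whose RoleAssignmentCreateParameters exposes the condition/condition_version fields:

# pip install azure-identity azure-mgmt-authorization
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Assign broadly (e.g. at the resource group holding all stinbound* accounts)
# and let the ABAC condition narrow access down to containers named "dev".
scope = f"/subscriptions/{subscription_id}/resourceGroups/<clients-resource-group>"
condition = (
    "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) "
    "OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'dev'))"
)

auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # the role assignment name must be a GUID
    RoleAssignmentCreateParameters(
        role_definition_id=f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/roleDefinitions/<storage-blob-data-reader-role-id>",
        principal_id="<DevVM-managed-identity-object-id>",
        principal_type="ServicePrincipal",
        condition=condition,
        condition_version="2.0",
    ),
)

The condition only permits blob reads when the container is named "dev"; the Test and Prod VMs would get the same assignment with 'test' or 'prod' in the condition instead, so no per-container assignment is needed as new client storage accounts are added.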

Related

Azure - Blob Storage - Mixing Custom Domain with SAS

I have an Azure Storage account that hosts a static web site as explained here. This means the static web site "lives" in a storage container named $web. This web site is accessible via a custom domain. This is currently working as desired. However, there is one file that I want to restrict access to.
There is one file in the $web storage container that I only want individuals to access if a) they have a key and b) it's during a specific time window. My thinking was that I could accomplish this with a Shared Access Signature (SAS). However, while testing this approach, it doesn't seem to work. It seems that everything in the $web storage container is publicly visible whether a SAS has been generated or not. Is this correct?
Is there a way to require that a file in the $web storage container have an SAS? Or, do I need to "host" the file in a separate storage container (thus removing it from my custom domain)?
Thank you.
When you visit files stored in the $web container via the primary static website endpoint (for example, https://contosoblobaccount.z22.web.core.windows.net/index.html), the files are always accessible, whether the container is public or private. So it doesn't matter whether a SAS token is specified or not.
The SAS token only takes effect if the $web container has private access and the file is visited via the primary blob service endpoint (for example, https://contosoblobaccount.blob.core.windows.net/$web/index.html).
Please refer to this official doc for more details.
So for your purpose, you should put the file in another container with private access.
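For illustration, a minimal sketch of generating a time-limited, read-only SAS for a file in such a private container with the Python SDK (azure-storage-blob); the container and file names below are hypothetical:

# pip install azure-storage-blob
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

account_name = "contosoblobaccount"   # from the example endpoint above
container_name = "protected"          # hypothetical private container (not $web)
blob_name = "restricted-file.html"    # hypothetical file to protect

sas = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),
    start=datetime.now(timezone.utc),
    expiry=datetime.now(timezone.utc) + timedelta(hours=2),  # the "specific time window"
)

print(f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas}")

Only people holding the full URL (including the SAS query string) can read the file while the signature is valid; requests without it are refused as long as the container's public access level is private.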

Storage account connectivity method for AKS

I'm setting up a Storage Account so I can Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS). Doing this to:
Have a PV and PVC for the database
A place to store the application files
AKS does create a storage account in the MC_<resource-group>_<aks-name>_<region> resource group that is automatically created. However, that storage account is destroyed if the node size/VM is changed (not node count), so it shouldn't be used since you'll lose your files and database if you need a node size/VM with more resources.
Neither this documentation nor any other I've come across says what the best practice is for the connectivity method:
Public endpoint (all networks)
Public endpoint (selected networks)
Private endpoint
The first option sounds like a bad idea.
The second option allows me to select a virtual network, and there are two choices:
MC_<resource-group>_<aks-name>_<region>... again, doesn't seem like a good idea, because if the node size/VM is changed, the connection will be broken.
aks-vnet-<number>... not sure what this is, but looks like it is part of the previous resource group so will also be destroyed in the previously mentioned scenario.
The third option contains a number of options, some of which are also included in the second option.
So how should I securely set this up for AKS to share files with the application and persist database files?
EDIT
Looking at both the "Firewalls and virtual networks" and "Private endpoint connections" settings for the storage account that comes with the AKS node, it looks like it is just set up for "All networks"... so maybe having that be where my actual PV and PVC are stored isn't such an issue...? Could use some clarity on the topic.
Not sure where the problem lies. All the assets generated by AKS are tied to the AKS lifecycle: if you delete AKS, it will delete the MC_* resource group (and that is 100% right). Not sure what you mean about the storage account being destroyed; it wouldn't get destroyed unless you remove the PVC and its reclaim policy is set to delete.
Reading: https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
As for the networking part, "selected networks" with the AKS nodes' network selected should be the way to go. You can figure that network out by looking at the AKS nodes or the AKS agent pool definition(s). I don't think this is configurable using only Kubernetes primitives, so it would be a manual/scripted action after the storage account is created.
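As a rough sketch of scripting that restriction with the azure-mgmt-storage Python SDK (names, IDs, and the subnet path are placeholders; it also assumes the AKS node subnet already has the Microsoft.Storage service endpoint enabled):

# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    StorageAccountUpdateParameters, NetworkRuleSet, VirtualNetworkRule,
)

subscription_id = "<subscription-id>"
storage_client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Resource ID of the subnet the AKS nodes live in (inside the aks-vnet-<number> VNet).
aks_subnet_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<node-resource-group>"
    "/providers/Microsoft.Network/virtualNetworks/aks-vnet-<number>/subnets/<node-subnet>"
)

storage_client.storage_accounts.update(
    "<storage-resource-group>",
    "<storageaccountname>",
    StorageAccountUpdateParameters(
        network_rule_set=NetworkRuleSet(
            default_action="Deny",  # block all networks by default...
            virtual_network_rules=[  # ...except traffic from the AKS node subnet
                VirtualNetworkRule(virtual_network_resource_id=aks_subnet_id)
            ],
        )
    ),
)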

Work on kubernetes with two accounts in PowerShell with Azure

I have two separate Azure accounts.
One for each project in which I am involved. These accounts are totally independent; that is, they do not share any resources and do not have the same domain. They are from two totally different companies.
I find that both accounts respond when I log in from PowerShell, and I can access their resources.
Both work with Kubernetes (kubectl), but only one of the two accounts is shown. Whatever I do, it always shows the contents of one account's cluster and never the other's.
I have the Azure CLI (v.2.0.76) and the PS version is (5.1)
Does someone know how I can do this?
EDIT with pictures:
Although the account is the default, I am not able to access its Kubernetes cluster.
[Screenshot: PS default account]
[Screenshot: services from the other account, not the default one]
I just found the solution.
When we log in from PowerShell with az login and select an account, it gives us access to all the resources of that account (the default one).
What I did was basically run:
kubectl config view
This returns all the clusters it finds, along with their contexts. The next thing to do is tell kubectl which context we want to work with, like this:
kubectl config use-context "CONTEXT NAME"
And that's it.

Accessing Azure Storage Blob from an AKS cluster

A little context: I'm having to migrate a project from AWS, where I'm currently using ECS, to Azure, where I'll be using AKS since their ACS (ECS equivalent) is deprecated.
This is a regular Django app, with its configuration variables fetched from a server-config.json hosted in a private S3 bucket; the EC2 instance has the correct role with S3FullAccess.
I've been looking into reproducing that same behavior with Azure Blob Storage instead, having achieved no success whatsoever :-(.
I tried using the Service Principal concept and adding it to the AKS cluster with the Storage Blob Data Owner role, but that doesn't seem to work. Overall it's been quite the frustrating experience; maybe I'm just having a hard time grasping the right way to use the permissions/scopes. The fact that the AKS cluster creates its own resource group is something unfathomable, but I've attempted attaching the policies to it as well, to no avail. I then moved on to a solution indicated by Microsoft.
I managed to bind my AKS pods to the correct user-assigned managed identity through their indicated solution, aad-pod-identity, but I feel like I'm missing something. I assigned Storage Blob Data Owner/Contributor to the identity, but still, when I enter the pods and try to access a blob (using the Python SDK), I get a "resource not found" message.
Is what I'm trying to achieve possible at all? Or will I have to change to a solution using Azure Keyvault/something along those lines?
First of all, you can use AKS Engine, which is more or less ACS for Kubernetes now.
As for access to the blob storage, you don't have to use Managed Service Identity; you can just use the account name/key (which is a bit less secure, but a lot less error prone, and more examples exist). The fact that you are getting a "resource not found" error most likely means your auth part is fine; you just don't have access to the resource. According to this, Storage Blob Data Contributor should be fine if you assigned it at the proper scope. For this to work 100%, just give your identity Contributor access at the subscription level; that way it's guaranteed to work.
I've found an example of using Python with MSI (here). You should start with that (and grant your identity Contributor access) and verify you can list resource groups. When that works, getting blob reads to work should be trivial.
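For illustration, once the aad-pod-identity binding and role assignment are in place, reading the config blob from inside a pod could look roughly like this (the account, container, and identity values are placeholders; server-config.json is the file name from the question):

# pip install azure-identity azure-storage-blob
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

# client_id of the user-assigned identity bound to the pod via aad-pod-identity
credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")

service = BlobServiceClient(
    account_url="https://<storageaccount>.blob.core.windows.net",
    credential=credential,
)
blob = service.get_container_client("<config-container>").download_blob("server-config.json")
print(blob.readall().decode("utf-8"))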

azure container with RBAC

I am trying to understand how I can run a Python application inside an Azure container that can spin up various resources, like VMs and route tables, inside an Azure resource group.
http://azure-sdk-for-python.readthedocs.io/en/latest/
It looks like the examples provided in the Azure Python SDK set up authentication details like credentials, subscription ID, etc.
Is there a way to avoid setting up this information by using Role-Based Access Control? What if I give the container Contributor access to the resource group; in that case, would I be able to avoid setting up authentication variables and still create resources, or at least read/access resources in the resource group?
No. How do you think that would work? In order to be able to create/read/modify resources you need permissions (otherwise anyone would be able to do that). There is no way to do that without some sort of auth.
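For illustration, a minimal sketch with the azure-identity / azure-mgmt-resource packages: even when the container's identity has Contributor on the resource group, the code still builds a credential and passes a subscription ID; a managed identity just means no secret has to be hard-coded (the subscription ID below is a placeholder):

# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# DefaultAzureCredential tries a managed identity, environment variables
# (AZURE_CLIENT_ID / AZURE_TENANT_ID / AZURE_CLIENT_SECRET), or an az CLI login.
credential = DefaultAzureCredential()
resource_client = ResourceManagementClient(credential, "<subscription-id>")

for rg in resource_client.resource_groups.list():
    print(rg.name)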
