Currently, I am trying to dynamically provision Azure Blob storage for Kubernetes using the Container Storage Interface (CSI) plugin. The Azure documentation is quite confusing. The GitHub docs say to just create a storage class and continue creating the StatefulSet, and the integration is complete.
The official Azure doc, on the other hand, says to create a PVC and a pod, followed by a StatefulSet. This still seems like an incomplete doc, and it is pretty unclear to me. Any leads on this will be much appreciated.
How does this work exactly? My understanding is: create a storage class, then create a PVC and a StatefulSet, and it should work. If anyone has implemented this in a project, please shed some light.
I got an understanding of how this works. We can create a PersistentVolumeClaim in two ways. The first way is to create a separate resource file whose kind is PersistentVolumeClaim; the other is to rely on the StatefulSet's volumeClaimTemplates. It's up to the user to decide whether to let the StatefulSet create the claims or to rely on a standalone PVC file.
To dynamically provision Azure Blob storage, install the CSI driver in the target cluster and create a storage class. Then create a PersistentVolumeClaim (PVC) resource that references that storage class by name. Whenever the PVC is deployed, the driver automatically provisions the required volume through the storage class. Quite straightforward and simple.
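For reference, here is a minimal sketch of those two resources, assuming the Azure Blob CSI driver (blob.csi.azure.com) is installed in the cluster; the names and the skuName parameter are just illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blob-storage               # illustrative name
provisioner: blob.csi.azure.com    # Azure Blob CSI driver
parameters:
  skuName: Standard_LRS            # SKU of the storage account to provision
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blob-pvc                   # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: blob-storage   # ties the claim to the storage class above
  resources:
    requests:
      storage: 10Gi

A StatefulSet can consume the same storage class through its volumeClaimTemplates instead of a standalone PVC, which is the second option described above.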
I would like to know if there is any way I can create Azure Databricks mount points using the Azure Databricks Resource Provider. Various Azure service principals are used to give access to various mount points in ADLS Gen2.
So can these mount points be created inside Databricks with the right service principal access? Can this be done using Terraform, or what is the best way to do this?
Thanks
You can't do it with the azurerm provider, as it works only with Azure-level objects, and a DBFS mount is specific to Databricks. But the Databricks Terraform provider has a databricks_mount resource that is designed for exactly that task. Just take into account that, because there is no such thing as a "mount API", mounting is performed by spinning up a small cluster and running a dbutils.fs.mount command inside it.
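For illustration, a hedged sketch of what that resource can look like for an ADLS Gen2 container mounted with a service principal; the account, container, secret scope and IDs below are placeholders, so check the provider docs for the exact arguments:

terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}

resource "databricks_mount" "raw" {
  name = "raw"   # exposed as /mnt/raw in DBFS

  abfs {
    client_id              = "00000000-0000-0000-0000-000000000000"  # service principal client ID (placeholder)
    tenant_id              = "00000000-0000-0000-0000-000000000000"  # Azure AD tenant ID (placeholder)
    client_secret_scope    = "terraform"   # Databricks secret scope holding the SP secret
    client_secret_key      = "sp-secret"   # key of that secret within the scope
    storage_account_name   = "mystorageaccount"
    container_name         = "raw"
    initialize_file_system = false
  }
}

Each mount point can use a different service principal by pointing client_id and client_secret_key at different credentials, which matches the per-mount access separation described in the question.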
P.S. Mounts are really not recommended anymore, because all users of the workspace will have access to the mount's content with the permissions of the service principal that was used for mounting.
I want to have Terraform's backend in Azure Storage Account. I'm following this article by Microsoft.
And, I quote from the article
Public access is allowed to Azure storage account for storing Terraform state.
But wouldn't that make the state downloadable publicly, hence exposing our infrastructure?
What's the best practice here? Thanks..
You are correct: having your storage account publicly available is a bad idea. Best practice is to keep your backend state file in a blob container that is locked down (usually with a firewall and with the container's public access level set to private). You can then use any of the methods described here to authenticate to Azure. I personally use a service principal, as it is easy to set up and avoids using user credentials and access keys.
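As a hedged sketch of that kind of locked-down setup with the azurerm provider (all names, locations and IP ranges below are placeholders):

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "tfstate" {
  name     = "rg-tfstate"
  location = "westeurope"
}

resource "azurerm_storage_account" "tfstate" {
  name                     = "sttfstatedemo"   # placeholder, must be globally unique
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  network_rules {
    default_action = "Deny"                 # storage firewall: deny everything by default
    bypass         = ["AzureServices"]
    ip_rules       = ["203.0.113.0/24"]     # e.g. the CI/CD agent IP range
  }
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"         # no anonymous access to the blobs
}

The pipelines then point their azurerm backend at that container and authenticate as the service principal through the ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID and ARM_SUBSCRIPTION_ID environment variables, so no keys or user credentials end up in the configuration.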
We have an Azure Storage account (general-purpose v2) enabled with the Data Lake Gen2 feature, and we need to create blobs inside the containers as requirements come up and assign a set of access control lists for a set of service principals with different levels of access. I am looking for a solution with Terraform and couldn't find any helpful article on this.
The requirement is as below:
Read the existing storage account information (which already has some blobs created with an access policy).
Create new blobs inside that storage account and assign a set of access controls for a list of service principals with different kinds of access, such as read and write.
Should also be able to modify the existing access control lists on the existing blobs.
Any help is highly appreciated.
The storage account is enabled with the Data Lake Gen2 feature, and the requirement is to create and manage the access control lists of the blob containers inside it. I modified the question above with the same information. Will Terraform help with the above? If not, can ARM?
It is not possible to set or get ACLs with Terraform or an ARM template. You can use the Azure SDKs mentioned in this Microsoft documentation, as per your requirement.
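For example, a hedged sketch with the Python SDK (the azure-storage-file-datalake and azure-identity packages); the account, container, path and service principal object ID below are placeholders:

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

credential = DefaultAzureCredential()

# Connect to the ADLS Gen2 (dfs) endpoint of the storage account.
service = DataLakeServiceClient(
    account_url="https://mystorageaccount.dfs.core.windows.net",
    credential=credential,
)

filesystem = service.get_file_system_client("mycontainer")
directory = filesystem.get_directory_client("raw/sales")

# Set the ACL, including a read+execute entry for a service principal
# identified by its object ID. Note that set_access_control replaces the
# whole ACL, so include the owning user/group/mask/other entries too.
directory.set_access_control(
    acl="user::rwx,"
        "user:00000000-0000-0000-0000-000000000000:r-x,"
        "group::r-x,mask::r-x,other::---"
)

# Read the current ACL back to verify.
print(directory.get_access_control()["acl"])

If you also need to change ACLs on existing content underneath a directory, the same client exposes recursive variants (e.g. update_access_control_recursive).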
As part of the IaC workflow we are implementing through Terraform, we want to create a centralized remote state store for some of the common resources we provision for users. We are using the Azure cloud, so the default choice is to use Azure Blob storage. We were initially thinking of creating one storage container per pipeline and storing the state there. But then there was another thought: create one container, create a directory structure per pipeline inside it, and store the state there. I understand blob storage is by default a flat file system, but Azure Storage also gives an option to enable a hierarchical file structure with ADLS Gen2. Did anyone attempt to store Terraform state by enabling the hierarchical file system structure in Azure? Is that a valid option at all? Also, can anyone suggest what the recommended approach would be in my scenario?
Thanks
Tintu
I have never tried ADLS Gen2 with its hierarchical feature. But since your requirement is to save the state files in the same container but within different folders, you can try specifying a different folder structure while configuring the backend in backend.tf:
terraform init -backend-config="key=$somePath/<tfstate-file-name>.tfstate"
And pass a different somePath value from a different backend.tfvars file for each pipeline.
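For instance (the key paths and file name below are just examples), with the key left out of backend.tf, each pipeline can initialise its own state path:

# Pipeline A: pass the key directly
terraform init -backend-config="key=pipelineA/terraform.tfstate"

# Pipeline B: or keep the key in a small per-pipeline config file
terraform init -backend-config=backend-pipelineB.tfvars   # contains: key = "pipelineB/terraform.tfstate"

The / in the key simply shows up as a virtual folder in the blob container, so this works on a plain flat-namespace storage account without enabling ADLS Gen2.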
I hope this answers your question!
I'm coming from an AWS background and trying to get something relatively simple to work in Azure, but I'm currently having a rough time parsing through all the documentation and Microsoft-specific jargon to find what I'm looking for.
I'm trying to download a single file I have in Azure Blob storage (which, from what I can gather, is the closest equivalent to storing an object in S3) onto a Linux VM with the CLI. From what I've read, the command I need to run is:
az storage copy -s https://myaccount.blob.core.windows.net/mycontainer/myfile -d .
A couple of questions for automation purposes, however. Is there an equivalent to IAM roles for VMs in Azure? That way, I won't have to keep credential file(s) on the VM itself. If not, what type of credentials should I generate for best practice? I ask because there seem to be about half a dozen different choices in Azure, and all I'm really looking for is something basic. Essentially, I just need what amounts to a "programmatic-access only" user in AWS terms, so that I can also lock down its permissions to a very specific set of resources and/or actions.
As always, thanks in advance!
Is there an equivalent to IAM roles for VMs in Azure?
What you're looking for is Managed Identity. Basically, the way it works is that you assign an identity to your Azure resource (a Linux VM in your case) so that the resource behaves like any other user in your Azure AD, and then you assign the appropriate role/access to that identity.
You can learn more about Managed Identities in Azure here: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview.
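As a rough sketch of that flow with the CLI (the resource group, VM, scope and account names are placeholders, and the role shown is just one that is sufficient for downloads):

# One-time setup: give the VM a system-assigned identity and grant it
# data-plane read access on the storage account.
az vm identity assign --resource-group myrg --name myvm
az role assignment create \
  --assignee <principal-id-from-previous-command> \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myrg/providers/Microsoft.Storage/storageAccounts/myaccount"

# On the VM: sign in as the managed identity and download with Azure AD auth,
# so no keys or SAS tokens need to be stored on the machine.
az login --identity
az storage blob download \
  --auth-mode login \
  --account-name myaccount \
  --container-name mycontainer \
  --name myfile \
  --file ./myfile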