Use an ARM template to create directories in Azure Storage containers?

I can't find documentation saying this can be done, but none saying it can't be done either, so I'm wondering if anyone else has had this idea/query in the past.
I have an ARM template to generate our new data lake, and it creates the storage account and container. It would have been great to create the folder structure with the template as well. Or maybe directories don't count as a 'resource'?

The ARM API does not support that: https://learn.microsoft.com/en-us/rest/api/storagerp/blobcontainers/create
You can only work on containers, file shares, tables, and queues; you cannot create objects inside those. So it's not possible with ARM templates, which can only operate against ARM APIs.
You can use the deploymentScript resource as a workaround.
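The core of such a deploymentScript is a single Azure CLI call against the data plane. A minimal sketch of the script it would run, assuming the account has the hierarchical namespace (Data Lake Gen2) enabled and the script's identity holds the Storage Blob Data Contributor role; all names below are placeholders:
# Create a nested directory inside an existing container (file system).
# Without a hierarchical namespace, 'folders' are only prefixes in blob
# names and come into existence when the first blob is uploaded.
az storage fs directory create \
    --name raw/landing \
    --file-system mydatalakecontainer \
    --account-name mydatalakeacct \
    --auth-mode login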

Related

Is there any way to automate ACL creation inside an existing storage account blob?

We have an Azure Storage account with the Data Lake Gen2 feature enabled, and we need to create blobs inside the containers as requirements arise and assign sets of access control lists to a set of service principals with different levels of access. We are looking for a solution with Terraform and couldn't find any helpful article on this.
The requirement is as follows:
Read the existing storage account information (which already has some blobs created with some access policy)
Create new blobs inside that storage account and assign access control for a list of service principals with different kinds of access, like read and write
Be able to modify the existing access control lists on the existing blobs as well
Any help is highly appreciated.
The storage account has the Data Lake Gen2 feature enabled, and the requirement is to create and manage the access control lists of the blob containers inside it. I modified the question above with the same information. Will Terraform help with the above? If not, can ARM?
It is not possible to set or get ACLs with Terraform or an ARM template. You can use the Azure SDKs mentioned in this Microsoft documentation, as per your requirement.
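If a scripted approach is acceptable, the same data-plane operations the SDKs expose are also available through the Azure CLI. A hedged sketch; the account, container, path, and service principal object ID are all placeholders:
# Replace the ACL on a directory. 'set' overwrites the whole list, so the
# base owner/group/other entries (and a mask) must accompany the new entry.
az storage fs access set \
    --acl "user::rwx,group::r-x,mask::r-x,other::---,user:<sp-object-id>:r-x" \
    --path raw \
    --file-system mycontainer \
    --account-name mydatalakeacct \
    --auth-mode login
# Merge the same entry into the ACLs of everything already under the path.
az storage fs access update-recursive \
    --acl "user:<sp-object-id>:r-x" \
    --path raw \
    --file-system mycontainer \
    --account-name mydatalakeacct \
    --auth-mode login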

Efficient way to manage Terraform state with an Azure storage container per pipeline

As part of the IaC workflow we are implementing through Terraform, we want to create a centralized remote state store for some of the common resources we provision for users. We are using the Azure cloud, so the default choice is Azure Blob Storage. We were initially thinking of creating one storage container per pipeline and storing the state there. But then there was another thought: create one container, with a directory structure per pipeline, and store the state there. I understand blob storage is a flat file system by default, but Azure Storage also offers the option to enable a hierarchical file structure with ADLS Gen2. Has anyone attempted to store Terraform state with the hierarchical file system enabled in Azure? Is that a valid option at all? Also, can anyone suggest the recommended approach for my scenario?
Thanks
Tintu
I've never tried ADLS Gen2's hierarchical feature for this. But since your requirement is to save the state files in the same container but within different folders, you can try specifying a different folder structure while configuring the backend in backend.tf:
terraform init -backend-config="key=$somePath/<tfstate-file-name>.tfstate"
And pass a different somePath value from per-pipeline backend config files (backend.tfvars).
I hope this answers your question!
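For instance, with the azurerm backend the per-pipeline path can be driven entirely from the init command. A sketch; the resource group, storage account, and container names are illustrative:
# All pipelines share one container; the slash-separated key creates a
# virtual folder per pipeline (no ADLS Gen2 hierarchy is needed for this).
terraform init \
    -backend-config="resource_group_name=rg-tfstate" \
    -backend-config="storage_account_name=mystatestore" \
    -backend-config="container_name=tfstate" \
    -backend-config="key=pipeline-a/terraform.tfstate"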

Automatically adding data to Cosmos DB through an ARM template

I made an ARM template, run through an Azure DevOps pipeline, to create a new Cosmos DB instance and put two collections inside it. I'd like to put some data inside the collections (fixed values, the same every time). Everything is created in the standard way; e.g. the collections use
"type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers"
I think these are the relevant docs.
I haven't found much mention of automatically adding data, but it's such an obviously useful thing that I'm sure it must have been added. If I need another step in my pipeline to add the data, that's an option too.
ARM templates cannot insert data into Cosmos DB, or into any service with a data plane, for many of the reasons listed in the comments and more.
If you need to both provision a Cosmos DB resource and then insert data into it, you may want to consider creating another ARM template to deploy an Azure Data Factory resource, then invoking the ADF pipeline using PowerShell to copy the data from Blob Storage into the Cosmos DB collection. Based upon the ARM doc you referenced above, it sounds as though you are creating a MongoDB collection resource; ADF supports MongoDB, so this should work very well.
You can find the ADF ARM template docs here, and the ADF PowerShell docs here. If you're new to using ARM to create ADF resources, I recommend first creating one in the Azure Portal, then exporting the template and examining the properties you will need to drive with parameters or variables during deployment.
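The answer suggests PowerShell; if the pipeline is shell-based, an equivalent trigger via the Azure CLI's datafactory extension looks roughly like this (the resource group, factory, and pipeline names are hypothetical):
# One-time setup: the Data Factory commands live in a CLI extension.
az extension add --name datafactory
# Kick off the copy pipeline that loads the seed data into Cosmos DB.
az datafactory pipeline create-run \
    --resource-group my-rg \
    --factory-name my-adf \
    --name CopySeedDataToCosmos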
PS: I'm not sure why, but the container resource path (below) that you pointed to in your question should not be used, as it breaks a few things in ARM; namely, you cannot put a resource lock on it or use Azure Policy. Please use the latest api-version, which as of this writing is 2021-04-15, and the current resource type (for a MongoDB collection, Microsoft.DocumentDB/databaseAccounts/mongodbDatabases/collections) rather than the legacy path:
"type": "Microsoft.DocumentDb/databaseAccounts/apis/databases/containers"

Is it possible to create a CosmosDB with ARM templates?

I'm trying to create an ARM template that provisions a Cosmos DB instance; however, the only documentation I've found is for DocumentDB, which I'm aware is what Cosmos DB used to be called. If I provision a DocumentDB cluster, will that create a Cosmos DB instance? If not, does anybody have any experience provisioning a Cosmos DB with an ARM template, and if so, is there any reference material I can read?
Yeah, those are the same; the resource provider is still named Microsoft.DocumentDB. Just use this examples repo; you are looking for any examples with documentdb or cosmosdb in the name.
Alternatively, you can create a Cosmos DB instance in the portal and look at the template that was used to create it (under Deployments in the resource group where you deployed it).
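Either route reduces to a couple of CLI calls. A sketch, assuming an existing resource group and a template file taken from the examples repo; the names and the parameter are placeholders:
# Deploy a Cosmos DB (DocumentDB) template into an existing resource group.
az deployment group create \
    --resource-group my-rg \
    --template-file azuredeploy.json \
    --parameters databaseAccountName=mycosmosacct
# Or, after creating an account in the portal, export the resource group's
# template and study what the portal generated.
az group export --name my-rg > exported-template.json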

What is using my Azure Blob Accounts

After using Azure for a while, we have accumulated a bunch of storage accounts. There doesn't seem to be a way to figure out whether those storage accounts are in use, or what they are used by. It looks like even spinning up a VM creates a storage account.
Is there a way (without PowerShell) to see what is being used and delete the unused storage accounts?
As others have said, it's not possible to give you an accurate answer, but you can iterate over the storage accounts, and within that loop iterate over the containers to see which ones have blobs. I would approach this from within Visual Studio by creating a new project, then using NuGet to add a reference to the WindowsAzure.Storage client library; it will make iterating those collections easier, as it is essentially a wrapper around the Azure Storage REST API. There is likely a way to do it with PowerShell as well.
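If you'd rather not write C#, the same enumeration is a few lines of Azure CLI. A sketch that flags containers holding at least one blob; it assumes your login has data-plane read access (e.g. Storage Blob Data Reader) on each account:
# For every storage account in the subscription, list its containers and
# report whether each one holds any blobs (fetching at most one per check).
for acct in $(az storage account list --query "[].name" -o tsv); do
    echo "== $acct"
    for cont in $(az storage container list --account-name "$acct" \
                  --auth-mode login --query "[].name" -o tsv); do
        n=$(az storage blob list --account-name "$acct" --container-name "$cont" \
                --auth-mode login --num-results 1 --query "length(@)" -o tsv)
        if [ "$n" -gt 0 ]; then echo "   $cont: in use"; else echo "   $cont: empty"; fi
    done
done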
