Tag a Managed Resource Group for Azure Databricks

Given that managed resource groups are mandatory for creating an Azure Databricks cluster, is there any way that I can tag the resource group in order to comply with the tagging policy on my subscription?
I am using the template here to deploy my resources.

Based on a post I found:
Since Feb 10, 2020, the Databricks resource propagates any tags applied to it to the managed resources it creates.
I think the solution is simpler now: just tag the Databricks resource directly.
Hope this answer (by others) works for everyone seeing this post.

Any tags that you use when creating the Databricks workspace will be applied to the managed resource group as well, so just make sure you add the needed tags to the workspace when creating it. I know this works when creating the workspace from the Azure Portal, but I can't see any reason it wouldn't work when using ARM templates (or Terraform, for that matter).
I'm also pretty sure that future tag changes will be propagated to that resource group.
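For example, with the Azure CLI this could look roughly like the sketch below (the resource names and tag values are placeholders, and it assumes the databricks CLI extension is installed and accepts the usual --tags parameter):

    # Create the workspace with tags; per the answers above, these should be
    # propagated to the managed resource group it creates.
    az databricks workspace create \
      --resource-group my-rg \
      --name my-workspace \
      --location westeurope \
      --sku standard \
      --tags costcenter=1234 env=prod

    # Tags can also be merged onto an existing workspace afterwards:
    az tag update \
      --resource-id "$(az databricks workspace show -g my-rg -n my-workspace --query id -o tsv)" \
      --operation Merge \
      --tags costcenter=1234 env=prod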

Related

Azure Tag Policy automation using Azure DevOps pipelines

We have a requirement to enforce certain tags on our Azure resources. Some tags need to be enforced at the subscription level (e.g. st1=st1, st2=st2, ...), some at the resource group level (rt1=rt1, rt2=rt2, ...), and others on specific resource types (AKS, App Service, storage accounts).
Going through the Microsoft docs I found that this can be achieved using Azure Policy. So the plan is to create Azure Policy assignments with the allowed tags and tag values that need to be enforced:
1) at the subscription level
2) at the resource group level
3) per resource type
4) and for the remaining app-specific tags on individual resources, use the "az tag update" command.
We need an automated solution for all of this using Azure Pipelines and shell commands or scripts, as we have only Linux machines.
For the 4th point I already have a working pipeline solution that adds the app-specific tags at the resource level.
But for requirements 1 to 3, is there any ARM template or script already available that we can integrate into our Azure Pipelines?
Any automated solution or suggestion you have already tried for this?
Update on 12/12
As the docs on using tags to organize your Azure resources explain, you can use the REST API or an SDK to add tags to your resources.
And for looping over resources to apply the tags from their resource groups, you could look into this potential workaround for reference: Bash Script
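A minimal sketch with the Azure CLI, which could run from a bash task in Azure Pipelines (assumptions: the built-in policy display name below exists unchanged in your tenant, jq is available on the agent, and the scopes and tag names are placeholders):

    # 1) and 2): assign the built-in "Require a tag and its value on resources"
    # policy, once at subscription scope and once at resource group scope.
    defId=$(az policy definition list \
      --query "[?displayName=='Require a tag and its value on resources'].id" -o tsv)

    az policy assignment create \
      --name require-st1 \
      --scope "/subscriptions/$SUBSCRIPTION_ID" \
      --policy "$defId" \
      --params '{"tagName":{"value":"st1"},"tagValue":{"value":"st1"}}'

    az policy assignment create \
      --name require-rt1 \
      --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/my-rg" \
      --policy "$defId" \
      --params '{"tagName":{"value":"rt1"},"tagValue":{"value":"rt1"}}'

    # Loop variant: merge each resource group's tags down onto its resources
    # (tag values containing spaces are not handled in this sketch)
    for rg in $(az group list --query "[].name" -o tsv); do
      tags=$(az group show -n "$rg" --query tags -o json |
        jq -r 'if . == null then "" else (to_entries | map("\(.key)=\(.value)") | join(" ")) end')
      [ -z "$tags" ] && continue
      for id in $(az resource list -g "$rg" --query "[].id" -o tsv); do
        az tag update --resource-id "$id" --operation Merge --tags $tags
      done
    done

For requirement 3 (per resource type) you would typically author a custom policy definition whose rule also conditions on the resource type; that is not shown here.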

Azure resource group - How can I tell which ones are "related"?

Recently I was provisioning a new Azure Synapse resource, which ended up creating 2 different resource groups.
I understand a resource group is basically a container for related items, fair enough, but thinking about it more I am actually confused: why would Azure decide to create two separate resource groups instead of just putting it all into one?
The bigger burning question I have is: after creating a number of resources, let's say each one spawns multiple resource groups.
How can I tell which resource group is a "child" or a "parent" of another?
@rodneyc8063 Thanks for clarifying the concern in your question. Posting your discussion from the comments as an answer to help other community members.
As Daniel Mann said, you are getting the additional resource group because your Synapse workspace creates a managed resource group.
A managed resource group is a container that holds the resources required by your resource. It is created by default when your workspace is created.
You can name it if you want; otherwise its name is generated automatically.
When you delete the main resource group of your resource, the resources inside it are also deleted.
The managed resource group is also deleted when you delete the resource (the managed application).
That's why the second resource group is deleted when you delete the first one.
As far as I know, apart from this there is no other relationship between them.
References:
Overview of managed applications - Azure Managed Applications | Microsoft Docs
Blog from DataSimAntics about managed resource group.
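A concrete way to see the link from the CLI (a sketch, assuming Reader access; the group name is a placeholder): a managed resource group carries a managedBy property that points back at the resource which created it.

    # Shows the ID of the resource managing this group (empty for ordinary groups)
    az group show --name managed-rg-example --query managedBy -o tsv

    # List all resource groups together with what, if anything, manages them
    az group list --query "[].{name:name, managedBy:managedBy}" -o table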

How to import a remote resource while performing an apply in Terraform?

I'm using Terraform to create some resources. One of the side effects of creating the resource is the creation of another resource (let's call this B). The issue is that I can't access B to edit it in Terraform because Terraform considers it "outside of the state". I also can't import B into the state before terraform apply starts, because B does not exist yet.
Is there any solution to add (import) a remote resource to the state while running the apply command?
I'm asking this as a general question; if there is no general solution, I can also share the details of the resources I'm creating.
More details:
When I create a "Storage Account" on Azure using Terraform and enable static_website, Azure automatically creates a storage_container named $web. I need to edit one of the attributes of the $web container but Terraform tells me it is not in the current state and needs to be imported. Storage Account is A, Container is B
Unfortunately I do not have an answer to your specific question of importing a resource during an apply. The fundamental premise of Terraform is that it manages resources from creation. Therefore, you need to have a resource (in this case, azurerm_storage_container) declared before you can import the current state of that resource into your state.
In an ideal world you would be able to explicitly create the container first and specify that the storage account uses it, but a quick look at the docs does not suggest that is an option (and I think it is something you have already tried). If it is not exposed in Terraform, that is likely because it is not exposed by the Azure API (disclaimer: I'm not an Azure user).
The only (bad) answer I can think to suggest is that you define an azurerm_storage_container data resource in your code, dependent on the azurerm_storage_account resource, that will be able to pull back the details of the created container. You could then potentially have a null_resource that calls a local-exec provisioner to fire a CLI command, using the params taken from the data resource, so that the Azure CLI tools edit the container for you.
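The command such a local-exec provisioner fires might look something like the sketch below (assumptions: the attribute you want to change is container metadata, and the account name and metadata key are placeholders):

    # Update metadata on the automatically created $web container via the Azure CLI.
    # Note the single quotes around '$web' so the shell does not expand it.
    az storage container metadata update \
      --name '$web' \
      --account-name mystorageaccount \
      --auth-mode login \
      --metadata owner=web-team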
I really hope someone else can come along with a better answer tho :|

Apply NSG/ASG by default on new subnets (Azure)

We manage an Azure subscription operated by several countries. Each of them is quite independent in what they can do (create/edit/remove resources). A guide of good practices has been sent to them, but we (the security team) would like to ensure a set of NSGs is systematically applied to every new subnet/vnet created.
Looking at Azure triggers, I am not sure that subnet creation is among the auditable events. I was also told to look at Azure Policy, but again I am not sure it will match our expectation, which is: for every new vnet/subnet, automatically apply a set of predefined NSGs.
Do you have any idea of a solution for our need?
I have done work like this in the past (not this exact issue) and the way I solved it was with an Azure Function that walked the subscription and looked for these kinds of issues. You could have the code run as a Managed Identity with Reader rights on the subscription to report issues, or as a Contributor to update the setting. Here's some code that shows how you could do this with PowerShell https://github.com/Azure/azure-policy/tree/master/samples/Network/enforce-nsg-on-subnet
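As a rough idea of what such a walk could look like as a plain script rather than a Function (a read-only sketch that only reports; it assumes the identity running it has Reader on the subscription):

    # Report every subnet in the subscription that has no NSG attached
    for vnetId in $(az network vnet list --query "[].id" -o tsv); do
      rg=$(echo "$vnetId" | cut -d'/' -f5)
      vnet=$(echo "$vnetId" | cut -d'/' -f9)
      az network vnet subnet list -g "$rg" --vnet-name "$vnet" \
        --query '[?networkSecurityGroup==`null`].name' -o tsv |
        while read -r subnet; do
          echo "Subnet without NSG: $rg/$vnet/$subnet"
        done
    done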
You could consider using a Policy that has a DeployIfNotExists Action, to deploy an ARM template that contains all the data for the NSG. https://learn.microsoft.com/en-us/azure/governance/policy/samples/pattern-deploy-resources
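If you go the Policy route, the assignment needs a managed identity so the deployIfNotExists effect can deploy the NSG and its association on your behalf. A hedged sketch (the policy definition name deploy-default-nsg-on-subnets is a placeholder for a custom definition you would author following the linked pattern):

    # Assign the custom deployIfNotExists policy and grant its managed identity
    # the rights it needs to create/attach the NSG (names are illustrative)
    az policy assignment create \
      --name enforce-default-nsg \
      --scope "/subscriptions/$SUBSCRIPTION_ID" \
      --policy deploy-default-nsg-on-subnets \
      --mi-system-assigned \
      --location westeurope \
      --role "Network Contributor" \
      --identity-scope "/subscriptions/$SUBSCRIPTION_ID"

    # Remediate subnets that already exist and are non-compliant
    az policy remediation create \
      --name remediate-default-nsg \
      --policy-assignment enforce-default-nsg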
You can get the ARM template by creating the NSG and then exporting its template:
[Screenshot: GettingNSGTemplate]
Note also that creating a subnet is audited; you can see it in the Activity Log for the VNet. See the screenshot.
[Screenshot: AddingASubnet]
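For example, you could pull those subnet write events out of the Activity Log with the CLI (a sketch; the resource group name and time window are placeholders):

    # List subnet create/update operations recorded in the last 7 days
    az monitor activity-log list \
      --resource-group my-network-rg \
      --offset 7d \
      --query "[?operationName.value=='Microsoft.Network/virtualNetworks/subnets/write'].{when:eventTimestamp, who:caller, subnet:resourceId}" \
      -o table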

Update WadCfg "only" of existing Azure Service Fabric cluster?

I want to monitor performance metrics of an existing Service Fabric cluster.
Here is the link for performance metrics -
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-event-generation-perf
I went through this Microsoft documentation -
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-perf-wad
My problem is that the ARM template I downloaded when the Service Fabric cluster was created is quite big, contains a lot of parameters, and I don't have the template parameters file. I think it is possible to rebuild the parameters file, but it would be time consuming.
Is it possible to download the template and template parameters file of an existing Service Fabric cluster?
If not, is it possible to just update the "WadCfg" section to add new performance counters?
You can export your entire resource group with all definitions and parameters; there you will find all parameters (as default values) for the resources deployed in the resource group. I've never done this for a Service Fabric cluster, but from a quick look at an existing resource group of mine, I could see the cluster definition included.
This link explains how: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-export-template
In summary:
1) Find the resource group where your cluster is deployed
2) Open the resource group and navigate to 'Automation Scripts'
3) Click 'Download' on the top bar
4) Open the ARM template with all definitions
5) Make the modifications and save
6) Publish the updates
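If you'd rather script it than click through the portal, roughly the same flow with the Azure CLI might look like this (a sketch; the resource group and file names are placeholders, and the export limitations quoted below apply here too):

    # Export the current template of the resource group holding the cluster
    az group export --name my-sf-cluster-rg > template.json

    # ...edit the WadCfg section of the diagnostics extension in template.json...

    # Redeploy the modified template incrementally
    az deployment group create \
      --resource-group my-sf-cluster-rg \
      --template-file template.json \
      --mode Incremental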
You could also add it to a library and deploy from there, as guided in the link above.
From the docs: Not all resource types support the export template function. To resolve this issue, manually add the missing resources back into your template.
To be honest, I've never deployed this way except in test environments, so I am not sure if it is safe for production.
