I'm working with an Azure SQL DW scaled to DW6000 and I want to put a user in the 'SloDWGroupC04' workload group.
However, DW6000 only provides the default smallrc, mediumrc, largerc, and xlargerc resource classes, which appear to map to C00, C05, C06, and C07 respectively according to the documentation.
Usually I can run EXEC sp_addrolemember 'largerc', 'user' (which would put 'user' in C05), but the workload group C04 doesn't have a role yet.
Do I need to create a role first? How do I go about leveraging the other available workload groups beyond the default roles?
These SloDW* workload groups are for internal use only. This generic set of workload groups is mapped to the resource classes (i.e. mediumrc, largerc, etc.) depending on the DWU setting. For example, the article you have referenced shows the mapping for DW500; in that case C04 is used for xlargerc.
Unfortunately you cannot alter the mappings yourself at this time. The mappings are fixed. If you would like to see specific improvements in this area I would encourage you to put your suggestions on the SQLDW feedback page.
I am using the link below to create/assign standard Azure policies for my Management Groups. The problem is that my org has already created Management Groups and Subscriptions manually using the Azure Portal. Can I still create/apply policies using the ESLZ TF code and apply them to these manually created Management Groups using TF code?
https://github.com/Azure/terraform-azurerm-caf-enterprise-scale
When I look at the code, the archetype (policy) seems very tightly coupled to MG creation?
locals.management_groups.tf:
"${local.root_id}-landing-zones" = {
archetype_id = "es_landing_zones"
parameters = local.empty_map
access_control = local.empty_map
}
archetype_id is the policy.
The enterprise scale example follows what they call a "supermodule approach".
The example from locals.management_groups.tf that you posted there is just the configuration layer of the module. If you want CAF to adopt an existing management group instead of creating a new one, you have to terraform import it. That requires you to know the resource address, which you can derive from the actual resource definition.
So you're looking for resource "azurerm_management_group" and that's in resources.management_groups.tf. That file declares multiple management group resources level_1 to level_6, and they're using for_each to generate the resources from the module's configuration model.
So after you've somehow traced your way through the module's configuration model and determined where you want your existing management group to end up in the CAF model, you can run an import as described in terraform's docs and the azurerm provider's docs like this:
terraform import 'azurerm_management_group.level_3["my-root-demo-corp"]' /providers/Microsoft.Management/managementGroups/group1
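On Terraform 1.5 or later the same adoption can also be sketched declaratively with an import block. This is only a rough illustration, not the module's documented workflow: it assumes the CAF module is instantiated as module "enterprise_scale", that "my-root-demo-corp" is the key you traced through the configuration model, and that group1 is your existing management group ID. Substitute your own values.
# Rough sketch (Terraform >= 1.5 import block); the module instance name,
# map key and management group ID are assumptions to be replaced.
import {
  to = module.enterprise_scale.azurerm_management_group.level_3["my-root-demo-corp"]
  id = "/providers/Microsoft.Management/managementGroups/group1"
}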
However, even though all of that is technically possible, I'm not sure this is a good approach. The enterprise scale supermodule configuration model is super idiosyncratic to trace through - its stated goal is to replace "infrastructure as code" with "infrastructure as data". You may find one of the approaches described in Azure Landing Zone for existing Azure infrastructure more practical for your purposes.
You could import the deployed environment into the TF state and proceed from there.
However, do you see an absolute necessity? IMO, setting up the LZ is a one-time activity, though one can argue that ensuring compliance across active & DR LZs is a good use case for a TF-based LZ deployment.
Currently, for our Azure disaster recovery plan, we replicate workloads from a primary site/region to a secondary site, where we mirror the source VM config and create the required or associated resource groups, storage accounts, virtual networks, etc.
We are looking into an alternate method that wouldn't require a second resource group. This would require:
Use one, already existing resource group; i.e. testGroup-rg in East-US
Deploy new IaC components into the same RG but in Central-US
So in the singular resource group, if we wanted a function app, we would have two sets of components. testFuncApp in East-US and testFuncApp in Central-US.
This way we would only ever have one set of IaC created. Of course we would need to automate how to route traffic etc. into a particular region if both exist. A rough sketch of what we mean is below.
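Roughly, in Terraform terms, the layout we have in mind would look something like this (using storage accounts as a stand-in for the function app's components; all names are placeholders):
# Already existing resource group in East US (placeholder name).
data "azurerm_resource_group" "existing" {
  name = "testGroup-rg"
}

# "Primary" copy of a component, deployed in East US.
resource "azurerm_storage_account" "primary" {
  name                     = "testfuncappeastus"
  resource_group_name      = data.azurerm_resource_group.existing.name
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# "Secondary" copy in the SAME resource group, but located in Central US.
resource "azurerm_storage_account" "secondary" {
  name                     = "testfuncappcentralus"
  resource_group_name      = data.azurerm_resource_group.existing.name
  location                 = "centralus"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}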
Is this a possibility? If it is, is it even necessary/worth it?
Unfortunately there is no way to use the same RG. You need to have a resource group in the target region; if you don't, Site Recovery creates a new resource group in the target region with an "asr" suffix.
I am trying to understand the difference between google_service_account_iam_binding and google_service_account_iam_member in the GCP terraform provider at https://www.terraform.io/docs/providers/google/r/google_service_account_iam.html.
I understand that google_service_account_iam_binding is for granting a role to a list of members whereas google_service_account_iam_member is for granting a role to a single member, however I'm not clear on what is meant by "Authoritative" and "Non-Authoritative" in these definitions:
google_service_account_iam_binding: Authoritative for a given role. Updates the IAM policy to grant a role to a list of members. Other roles within the IAM policy for the service account are preserved.
google_service_account_iam_member: Non-authoritative. Updates the IAM policy to grant a role to a new member. Other members for the role for the service account are preserved.
Can anyone elaborate for me please?
"Authoritative" means to change all related privileges, on the other hand, "non-authoritative" means not to change related privileges, only to change ones you specified.
Otherwise, you can interpret authoritative as the single source of truth, and non-authoritative as a piece of truth.
This link helps a lot.
Basically it means: if a role is bound to a set of IAM identities and you want to add more identities, the authoritative one will require you to specify all the old identities again plus the new identities you want to add; otherwise, any old identities you didn't specify will be unbound from the role.
It is quite close to the idea of a force push in git, because it will overwrite any existing state; in our case, the identities.
Non-authoritative is the opposite:
You only need to care about the identity you are updating.
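As a rough illustration (the service account, role, and user emails below are made up, and you would not normally mix both resources for the same role; they are shown side by side only to contrast the behaviour):
# Example service account, declared only for illustration.
resource "google_service_account" "example" {
  account_id   = "example-sa"
  display_name = "Example service account"
}

# Authoritative: after apply, roles/iam.serviceAccountUser on this service
# account is held by EXACTLY this list; any member not listed here is removed.
resource "google_service_account_iam_binding" "token_users" {
  service_account_id = google_service_account.example.name
  role               = "roles/iam.serviceAccountUser"
  members = [
    "user:alice@example.com",
    "user:bob@example.com",
  ]
}

# Non-authoritative: adds this single member to the role and leaves any other
# existing members of roles/iam.serviceAccountUser on the account untouched.
resource "google_service_account_iam_member" "token_user_carol" {
  service_account_id = google_service_account.example.name
  role               = "roles/iam.serviceAccountUser"
  member             = "user:carol@example.com"
}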
Authoritative may remove existing configurations and destroy your project, while Non-Authoritative does not.
The consequence of using the Authoritative resources can be severely destructive. You may regret using them. Do not use them unless you are 100% confident that you must use Authoritative resources.
Usability improvements for *_iam_policy and *_iam_binding resources #8354
I'm sure you know by now there is a decent amount of care required when using the *_iam_policy and *_iam_binding versions of IAM resources. There are a number of "be careful!" and "note" warnings in the resources that outline some of the potential pitfalls, but there are hidden dangers as well. For example, using the google_project_iam_policy resource may inadvertently remove Google's service agents' (https://cloud.google.com/iam/docs/service-agents) IAM roles from the project. Or, the dangers of using google_storage_bucket_iam_policy and google_storage_bucket_iam_binding, which may remove the default IAM roles granted to projectViewers:, projectEditors:, and projectOwners: of the containing project.
The largest issue I encounter with people running into the above situations is that the initial terraform plan does not show that anything is being removed. While the documentation for google_project_iam_policy notes that it's best to terraform import the resource beforehand, this is in fact applicable to all *_iam_policy and *_iam_binding resources. Unfortunately this is tedious, potentially forgotten, and not something that you can abstract away in a Terraform module.
See "terraform/gcp - In what use cases we have no choice but to use authoritative resources?" and the reported issues.
A simple example: if you run this script, what do you think will happen? Do you think you can continue using your GCP project?
resource "google_service_account" "last_editor_standing" {
account_id = "last_editor_standing"
display_name = "last editor you will have after running this terraform"
}
resource "google_project_iam_binding" "last_editor_standing" {
project = "ToBeDemised"
members = [
"serviceAccount:${google_service_account.last_editor_standing.email}"
]
role = "roles/editor"
}
This will, at a minimum, strip the editor role from the Google APIs Service Agent, which is essential to your project.
If you still think this is the type of resource to use, use it at your own risk.
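For contrast, if the intent is only to grant the role to that one service account, a non-authoritative sketch along these lines (same placeholder project as above) leaves every other existing editor, including Google's service agents, in place:
# Non-authoritative: grants roles/editor to this one member only and leaves
# all other existing editors (e.g. the Google APIs Service Agent) untouched.
resource "google_project_iam_member" "additional_editor" {
  project = "to-be-demised" # placeholder project ID
  role    = "roles/editor"
  member  = "serviceAccount:${google_service_account.last_editor_standing.email}"
}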
Is there any way to disable or stop a particular resource group temporarily? I know we can delete the resource group or stop certain services under the resource group, but I am unable to find a way to just shut down the resource group, or all of its resources at once, temporarily.
Please let me know if I can provide few more details about this.
Thanks.
This does not seem to be possible at the moment, but a request has been made here; however, there has been no response from Microsoft on its status.
In general, if there are features that are not available in e.g. Azure, use the feedback site to suggest and vote on new features.
However, if you only have one specific type of resource in your resource group, for example virtual machines, then you can stop them all with one PowerShell command like this:
Get-AzureRmVM -ResourceGroupName <group name> | Stop-AzureRmVM -Force
Note: this approach is highly dependent on the type of resource and not a generic solution like the one requested.
A resource group is just a bounding-box, serving as a grouping mechanism and a security boundary. You cannot "stop" a resource group, as a resource group is never running. Yes, you can delete a resource group (along with everything in it), but that's a one-shot operation. It's not a fine-grained resource-management operation.
As for the services inside a resource group: some can be stopped, some cannot. For instance, you cannot stop a storage account. Others have very different behaviors when stopped: A VM simply sleeps/hibernates until restarted with everything preserved, while an HDInsight cluster, when stopped, deletes everything.
TL;DR there is currently no way to point to a resource group and have it stop all of its services, given the variability of behavior (and the fact there's no such supported API). You'll need to manage your resource starts/stops.
I just had a new "MSDN account" hit its budget limit and that made me realize this SHOULD be achievable!
When this happened Microsoft "disabled" my subscription.
In my case, I'm actually fine with having to "fence the resources" within a subscription if I had to. But at the moment, I haven't found a way to easily stop/start it in this manner. Anyone a guru with Azure budgets? It looks like they can be applied at a resource group level as well.
Can you "enable/disable" resource groups or subscriptions this way?
Simply want to create something. Pay for it, of course. Pay for storage, sure. But 'disable' it, until I need to run it. Then, Enable it. Simple. :)
I've been upvoting and watching this "Feature Request" thread for some time:
https://feedback.azure.com/forums/217313-networking/suggestions/17670613-hibernate-pause-a-resource-group-or-subscription
I would like to keep each VM in a separate resource group for ease of lifecycle management. I have a cluster containing n VMs.
So I create one resource group for common things like the public IP and load balancer, and I put the availabilitySet declaration into it because it also must be shared between the VMs.
Then I create each VM in a separate resource group and reference the availabilitySet with
"availabilitySet": {
"id": "[resourceId('Microsoft.Compute/availabilitySets',variables('availabilitySetName'))]"
},
Of course, 'availabilitySetName' is defined.
When I deploy my template I get an error saying
{"error":{"code":"BadRequest","message":"Entity resourceGroupName in resource reference id /subscriptions/a719381f-1fa0-4b06-8e29-ad6ea7d3c90b/resourceGroups/TB_PIP_OPSRV_UAT/providers/Microsoft.Compute/availabilitySets/tb_avlbs_opsrv_uat is invalid."}}
I double-checked that the resource group and availability set names are specified correctly.
Does it mean that I can't put a set in separate resource group from VM?
Unfortunately, having a VM use an availabilitySet in a different resource group is not supported :(.
First of all, let me ask why you want different resource groups. I strongly believe that you're overthinking it with multiple resource groups. A resource group is basically your "Entire System", and within the boundaries of one solution you should only have one resource group for production, one for beta/staging, etc., but never mix them.
If you're selling SaaS to your customers it would make sense to have one resource group for each of your customers.
And as you know, a resource group is simply a way for you to link together and manage all of the assets in your solution (VMs, storage, databases, etc.) under one common name. I am very doubtful as to why one would want to consider multiple resource groups in a single solution; however, I am always willing to learn :)
Availability sets
Now, availability sets are a different thing. This has to do with "Update Domains" and "Fault Domains" for your VM instances. Because Azure does not keep 3 separate VMs for you, as it does with most of its PaaS services, you have to manage these yourself to ensure full uptime. Basically, when you're adding two or more VMs to an Availability Set, you're ensured that during planned or unplanned events, at least one of the VMs will be available to meet the SLA.
Trying to combine the two in an effort to prevent downtime may sound like a good idea, but it is not solving any problems that I'm aware of. Like the old saying goes: if it ain't broke, don't fix it :)