I want to look up a roleDefinition ID in Azure so that I can use a role name rather than an ID, as the name is more user friendly.
This is what I have tried:
param roleDefinitionName string = ''

resource existingRoleDefinition 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
  name: 'Storage Blob Data Reader'
}

output test string = 'ID is ${existingRoleDefinition.id}'
The output returned, however, is:
/subscriptions/xxx/resourceGroups/rg-my-ab-h-uks-01/providers/Microsoft.Authorization/roleDefinitions/Storage Blob Data Reader
rather than 2a2b9908-6ea1-4ae2-8e65-a410df84e7d1
Can anyone help?
Currently there's no way to do this in the template language - you can provide your own mapping for built-in roles (which some users do), but that adds a bit of overhead/maintenance, so whether it's worth it really depends on the problem you're trying to solve...
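For what it's worth, here's a minimal sketch of that mapping approach. The GUID for Storage Blob Data Reader is the one from your question; verify any other entries against the Azure built-in roles documentation.

// Minimal sketch: map the friendly names of the built-in roles you actually
// use to their well-known GUIDs, then look the definition up by GUID
// (the resource name for an existing roleDefinitions resource must be the GUID).
param roleDefinitionName string = 'Storage Blob Data Reader'

var builtInRoleIds = {
  'Storage Blob Data Reader': '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1'
}

resource roleDefinition 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
  name: builtInRoleIds[roleDefinitionName]
}

output roleDefinitionId string = roleDefinition.id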
Trying to create a simple conditional access policy report:
$Policies = Get-AzureADMSConditionalAccessPolicy
$Policies[1].Conditions.Users.IncludeGroups
These group object IDs can be resolved using:
Get-AzureADObjectByObjectId -ObjectIds xxxx-xxxxx
But how to resolve, for example, locations?
$Policies[1].Conditions.Locations.ExcludeLocations
I could not find the location IDs using Get-AzureADObjectByObjectId.
Any ideas appreciated...
For the location IDs, you need to query the namedLocations API rather than the directory itself. For this, use Get-AzureADMSNamedLocationPolicy:
$Policies[1].Conditions.Locations.ExcludeLocations | ForEach-Object {
    Get-AzureADMSNamedLocationPolicy -PolicyId $_
}
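If you only need the names for the report, project the DisplayName of each returned named location. A small sketch; note that ExcludeLocations can also contain special values such as "All" or "AllTrusted", which are not named-location IDs and would need separate handling:

$Policies[1].Conditions.Locations.ExcludeLocations | ForEach-Object {
    # Resolve each excluded-location ID to its display name
    (Get-AzureADMSNamedLocationPolicy -PolicyId $_).DisplayName
}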
Context: I'm developing a TF provider (here's the official guide from HashiCorp).
I run into the following situation:
# main.tf
resource "foo" "example" {
  id       = "foo-123"
  name     = "foo-name"
  lastname = "foo-lastname"
}

resource "bar" "example" {
  id              = "bar-123"
  parent_id       = foo.example.id
  parent_name     = foo.example.name
  parent_lastname = foo.example.lastname
}
where I have to declare parent_name and parent_lastname explicitly (effectively duplicating them) to be able to read the values needed for the read / create requests for the Bar resource.
Is it possible to use a fancy trick with d *schema.ResourceData in
func resourceBarRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
to avoid the duplication in my TF config, i.e. have just:
resource "foo" "example" {
id = "foo-123"
name = "foo-name"
lastname = "foo-lastname"
}
resource "bar" "example" {
id = "bar-123"
parent_id = foo.example.id
}
and infer foo.example.name and foo.example.lastname based on foo.example.id in resourceBarRead() somehow, so I won't have to duplicate those fields in both resources?
Obviously, this is a minimal example; let's assume I need both foo.example.name and foo.example.lastname to send a read / create request for the Bar resource. In other words, can I iterate through another resource in the TF state / main.tf file based on a target ID to find its other attributes? It seems like a useful feature, however it's not mentioned in HashiCorp's guide, so I guess it's undesirable and I have to duplicate those fields.
In Terraform's provider/resource model, each resource block is independent of all others unless the user explicitly connects them using references like you showed.
However, in most providers it's sufficient to pass only the id (or similar unique identifier) attribute downstream to create a relationship like this, because the other information about that object is already known to the remote system.
Without knowledge about the particular remote system you are interacting with, I would expect that you'd be able to use the value given as parent_id either directly in an API call (and thus have the remote system connect it with the existing object), or to make an additional read request to the remote API to look up the object using that ID and obtain the name and lastname values that were saved earlier.
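For illustration, here's a minimal sketch of that second approach using the Plugin SDK. The apiClient type and its GetFoo method are hypothetical stand-ins for whatever client your provider configures; the schema/diag plumbing is the real SDK API:

import (
    "context"

    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// resourceBarRead looks the parent object up by its ID via the remote API
// and stores the retrieved name/lastname into bar's attributes, so the
// user only has to set parent_id in configuration.
func resourceBarRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
    client := m.(*apiClient) // hypothetical client set up in the provider's ConfigureContextFunc

    parent, err := client.GetFoo(ctx, d.Get("parent_id").(string)) // hypothetical extra read request
    if err != nil {
        return diag.FromErr(err)
    }

    if err := d.Set("parent_name", parent.Name); err != nil {
        return diag.FromErr(err)
    }
    if err := d.Set("parent_lastname", parent.Lastname); err != nil {
        return diag.FromErr(err)
    }
    return nil
}

For this to work, parent_name and parent_lastname would be declared Computed (rather than Required) in bar's schema, so that the provider, not the user, is responsible for populating them.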
If those values only exist in the provider's context and not in the remote API then I don't think there will be any alternative but to have the user pass them in again, since that is the only way that local values (as opposed to values persisted in the remote API) can travel between resources.
I'm pretty new to Terraform. I'm trying to use the sops provider plugin to read secrets from an encrypted yaml file:
Sops Provider
I need to create a Terraform user object for a later provisioning stage like this example:
users = [{
  name     = "user123"
  password = "password12"
}]
I've prepared a secrets.values.enc.yaml file for storing my secret data:
yaml_users:
  - name: user123
    password: password12
I've encrypted the file using the sops command. I can decrypt the file successfully for testing purposes.
Now I try to use the encrypted file in Terraform for creating the user object:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
# user data decryption
users = yamldecode(data.sops_file.test-secret.raw).yaml_users
Unfortunately I cannot debug the data or the structure of "users" as Terraform doesn't display sensitive data. When I try to use that users variable for the later provisioning stage, it doesn't seem to be what is needed:
Cannot use a set of map of string value in for_each. An iterable
collection is required.
When I do the same thing with the unencrypted yaml file everything seems to be working fine:
users = yamldecode(file("secrets.values.dec.yaml")).yaml_users
It looks like the sops provider decryption doesn't create an array or that "iterable collection" that I need.
Does anyone know how to use the terraform sops provider for decrypting an array of key-value pairs? A single value like "adminpassword" is working fine.
I think the "set of map of string" part of this error message is the important part: for_each requires either a map directly (in which case the map keys become the instance identifiers) or a set of individual strings (in which case those strings become the instance identifiers).
Your example YAML file shows yaml_users being defined as a YAML sequence of maps, which corresponds to a tuple of objects on conversion with yamldecode.
To use that data structure with for_each you'll need to first project it into a map whose keys will serve as the unique identifier for each instance of the resource. Assuming that the name values are suitably unique, you could project it so that those values are the keys:
data "sops_file" "test-secret" {
source_file = "secrets.values.enc.yaml"
}
locals {
users = tomap({
for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
u.name => u
})
}
The result being a sensitive value adds an extra wrinkle here, because Terraform won't allow using a sensitive value as the identifier for an instance of a resource -- to do so would make it impossible to show the resource instance address in the UI, and impossible to describe the instance on the command line for commands that need that.
However, this does seem like exactly the use-case shown in the example of the nonsensitive function at the time I'm writing this: you have a collection that is currently wholly marked as sensitive, but you know that only parts of it are actually sensitive and so you can use nonsensitive to explain to Terraform how to separate the nonsensitive parts from the sensitive parts. Here's an updated version of the locals block in my previous example using that function:
locals {
  users = tomap({
    for u in yamldecode(data.sops_file.test-secret.raw).yaml_users :
    nonsensitive(u.name) => u
  })
}
If I'm making a correct assumption that it's only the passwords that are sensitive and that the usernames are okay to disclose, the above will produce a suitable data structure where the usernames are visible in the keys but the individual element values will still be marked as sensitive.
local.users then meets all of the expectations of resource for_each, and so you should be able to use it with whichever other resources you need to repeat systematically for each user.
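For example, here's a sketch of that final step, where "some_user" stands in for whichever real resource type you're repeating per user:

resource "some_user" "all" {
  for_each = local.users

  # Each instance is keyed by the (nonsensitive) username; the element
  # values, including the password, remain marked as sensitive.
  name     = each.value.name
  password = each.value.password
}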
Please note that Terraform's tracking of sensitive values is for UI purposes only and will not prevent these passwords from being saved in the state as part of whichever resources make use of them. If you use Terraform to manage sensitive data then you should treat the resulting state snapshots as sensitive artifacts in their own right, and be careful about where and how you store them.
I have the following use-case: I'm using a combination of Azure DevOps pipelines and Terraform to synchronize our TAP for Grafana (v7.4). The intention is that we can tweak and tune our dashboards on Test, and push the changes to Acceptance (and Production) via the pipelines.
I've got one pipeline that pulls in the state of the Test environment and writes it to a set of json files (for the dashboards) and a single json array (for the folders).
The second pipeline should use these resources to synchronize the Acceptance environment.
This works flawlessly for the dashboards, but I'm hitting a snag putting the dashboards in the right folder dynamically. Here's my latest working code:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
}
The folder resource pushes the folders based on a list of names that I pass in via a variable. This generates the folders correctly.
The dashboard resource pushes the dashboards correctly, based on all dashboard files in the specified folder.
But now I'd like to make sure the dashboards end up in the right folder. The provider specifies that I need to do this based on the folder UID, which is generated when the folder is created. So I'd like to take the output from the grafana_folder resource and use it in the grafana_dashboard resource. I'm trying the following:
resource "grafana_folder" "folders" {
for_each = toset(var.grafana_folders)
title = each.key
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset(path.module, "../dashboards/*.json")
config_json = file("${path.module}/${each.key}")
folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")
depends_on = [grafana_folder.folders]
}
If I read the Grafana Provider github correctly, the grafana_folder resource should output a map of [uid, title]. So I figured if I transpose that map, and (by way of test) lookup a folder title that I know exists, I can test the concept.
This gives the following error:
on main.tf line 38, in resource "grafana_dashboard" "dashboards":
  38: folder = lookup(transpose(grafana_folder.folders), "Station_Details", "Station_Details")

Invalid value for "default" parameter: the default value must have the same type as the map elements.
Both Uid and Title should be strings, so I'm obviously overlooking something.
Does anyone have an inkling where I'm going wrong and/or have suggestions on how I can do this (better)?
I think the problem this error is trying to report is that grafana_folder.folders is a map of objects, so passing it to transpose doesn't really make sense. It seems to succeed only because Terraform found some clever way to automatically convert types and produce a result, but that result (due to the signature of transpose) is a map of lists rather than a map of strings, and so "Station_Details" (a string, rather than a list) isn't a valid fallback value for that lookup.
My limited familiarity with folders in Grafana leaves me unsure as to what to suggest instead, but I expect the final expression will look something like the following:
folder = grafana_folder.folders[SOMETHING].id
SOMETHING here will be an expression that allows you to know for a given dashboard which folder key it ought to belong to. I'm not seeing an answer to that from what you shared in your question, but just as a placeholder to make this a complete answer I'll suggest that one option would be to make a local map from dashboard filename to folder name:
locals {
  # A local value probably isn't actually the right answer here, but I'm
  # just showing it as a placeholder for one possible way to map from
  # dashboard filename to folder name. These names should all be elements
  # of var.grafana_folders in order for this to work.
  dashboard_folders = {
    "example1.json" = "example-folder"
    "example2.json" = "example-folder"
    "example3.json" = "another-folder"
  }
}
resource "grafana_dashboard" "dashboards" {
for_each = fileset("${path.module}/dashboards", "*.json")
config_json = file("${path.module}/dashboards/${each.key}")
folder = grafana_folder.folders[local.dashboard_folders[each.key]].id
}
I am trying to write a plugin that will trigger when an account is created. If there is an originating lead, I want to fetch the company name from the lead and put it in the account name field. What I'm not sure how to do is obtain that information from the lead entity.
I have the following code (I'll keep updating this)...
// Retrieve the originating lead and read its company name.
Entity member = service.Retrieve("lead",
    ((EntityReference)account["originatingleadid"]).Id, new ColumnSet(true));

if (member.Attributes.Contains("companyname"))
{
    companyName = member.Attributes["companyname"].ToString();
}

// Try to write the name back via the post image of the account.
if (context.PostEntityImages.Contains("AccountPostImage") &&
    context.PostEntityImages["AccountPostImage"] is Entity)
{
    accountPostImage = (Entity)context.PostEntityImages["AccountPostImage"];
    companyName = "This is a test"; // test value (overwrites the retrieved company name)
    if (companyName != String.Empty)
    {
        accountPostImage.Attributes["name"] = companyName;
        service.Update(account);
    }
}
I'm not going to spoil the fun for you just yet, but the general idea is to:
Catch the message of Create.
Extract the guid from your Entity (that's your created account).
Obtain the guid from its EntityReference (that's your lead).
Read the appropriate field from it.
Update the name field in your account.
Store the information.
Which of the steps is giving you issues? :)
As always, I recommend using query expressions before fetchXML. YMMV
Is the lead connected to the account? Just use the IOrganizationService.Retrieve method to retrieve the correct lead (assuming you have the lead id from the account entity).
Create the organizationService in the execute method of your plugin.
http://msdn.microsoft.com/en-us/library/gg334504.aspx
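To illustrate the wiring, here's a minimal sketch, assuming the plugin is registered on the post-operation Create step of account and that the default originatingleadid / companyname / name attribute names are in play:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class SetAccountNameFromLead : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider
            .GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // The created account arrives as the Target input parameter.
        var account = (Entity)context.InputParameters["Target"];
        if (!account.Contains("originatingleadid"))
        {
            return; // no originating lead, nothing to copy
        }

        // Follow the EntityReference to the lead and read only the field we need.
        var leadRef = (EntityReference)account["originatingleadid"];
        Entity lead = service.Retrieve("lead", leadRef.Id, new ColumnSet("companyname"));

        if (lead.Contains("companyname"))
        {
            // Update only the name field of the newly created account.
            var update = new Entity("account") { Id = account.Id };
            update["name"] = lead.GetAttributeValue<string>("companyname");
            service.Update(update);
        }
    }
}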
Also, here is a nice example of writing your first plugin:
http://mscrmkb.blogspot.co.il/2010/11/develop-your-first-plugin-in-crm-2011.html?m=1