I have a problem accessing an Azure Automation account variable when the encrypted flag is true: the value comes back empty. Here are the steps:
Step 1:
resource "azurerm_automation_variable_string" "db-password" {
name = "test-database-password"
resource_group_name = var.rgr-initial
automation_account_name = var.aut-acc-name
value = "bhdc3tSLZjZUcVj8"
encrypted = true
}
Step 2:
data "azurerm_automation_variable_string" "database-password-var" {
name = "test-database-password"
resource_group_name = var.rgr-initial
automation_account_name = var.aut-acc-name
}
Step 3:
password = data.azurerm_automation_variable_string.database-password-var.value
If the encrypted flag is false, I am able to get the value. If it is true and the value is encrypted, it comes back empty.
Is there something I am doing wrong?
This is because you can't read an encrypted value by design. From the Azure docs:
You can't use this [Get-AzAutomationVariable] cmdlet to retrieve the value of an encrypted variable. The only way to do this is by using the internal Get-AutomationVariable cmdlet in a runbook or DSC configuration. For example, to see the value of an encrypted variable, you might create a runbook to get the variable and then write it to the output stream:
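A minimal runbook along those lines might look like this (a sketch, reusing the variable name from your configuration):
# Runs inside an Automation runbook, where the internal cmdlet is available.
$dbPassword = Get-AutomationVariable -Name 'test-database-password'
Write-Output "The value of the variable is: $dbPassword"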
Using the Azure Python SDK, I want to get the family details of an existing Azure VM, e.g. whether it belongs to the N-series family or something else.
Below is the code I am using.
import json

from azure.mgmt.compute import ComputeManagementClient
from azure.common.credentials import ServicePrincipalCredentials

client_id = "sp appId"
secret = "sp password"
tenant = "sp tenant"

credentials = ServicePrincipalCredentials(
    client_id=client_id,
    secret=secret,
    tenant=tenant
)

subscription_id = ''
compute_client = ComputeManagementClient(credentials, subscription_id)

resource_group_name = 'Networking-WebApp-AppGW-V1-E2ESSL'
virtual_machine_scale_set_name = 'VMSS'

results = compute_client.resource_skus.list(raw=True)
resource_skus_list = [result.as_dict() for result in results]
r = json.dumps(resource_skus_list)
print(r)
But the above code gives me details of all available resource SKUs, and I want the resource SKU for a given VM name so I can check which family a particular VM belongs to.
Could anyone help me achieve this?
Starting with azure-mgmt-storage==16.0.0, the SkuOperations class contains a list() method. You can use a set to get rid of duplicate SKU names, since the method returns numerous SKUs for each region and storage type it supports.
from azure.mgmt.storage import StorageManagementClient
from azure.identity import DefaultAzureCredential

storage_client = StorageManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
)

skus = {sku.name for sku in storage_client.skus.list()}
Iterate over the set for readable output:
for sku in skus:
    print(sku)
Which outputs the SKUs on newlines like so:
Standard_LRS
Standard_RAGRS
Standard_RAGZRS
Standard_ZRS
Premium_LRS
Standard_GRS
Standard_GZRS
Premium_ZRS
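For the compute side of your question, a possible sketch (assuming azure-identity credentials; the resource group and VM names are placeholders) is to read the VM's size and match it against the resource SKU catalogue, which exposes a family field:
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
)

# Look up the size of the existing VM, e.g. "Standard_D2s_v3".
vm = compute_client.virtual_machines.get("my-resource-group", "my-vm")
vm_size = vm.hardware_profile.vm_size

# resource_skus.list() yields one entry per region, so the first matching
# virtualMachines SKU is enough to read the family name.
family = next(
    (sku.family for sku in compute_client.resource_skus.list()
     if sku.resource_type == "virtualMachines" and sku.name == vm_size),
    None,
)
print(f"{vm_size} belongs to family: {family}")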
I am trying to create a single GCP Workflow using Terraform (Terraform Workflows documentation here). To create a workflow, I have defined the desired steps and order of execution using the Workflows syntax in YAML (it can also be JSON).
I have around 20 different jobs, and each of these jobs is in a different .yaml file under the same folder, workflows/. I just want to loop over the workflows/ folder and end up with a single .yaml file to be able to create my resource. What would be the best way to achieve this using Terraform? I read about for_each, but it is primarily used to loop over something to create multiple resources rather than a single resource.
workflows/job-1.yaml
- getCurrentTime:
    call: http.get
    args:
      url: https://us-central1-workflowsample.cloudfunctions.net/datetime
    result: currentDateTime
workflows/job-2.yaml
- readWikipedia:
    call: http.get
    args:
      url: https://en.wikipedia.org/w/api.php
      query:
        action: opensearch
        search: ${currentDateTime.body.dayOfTheWeek}
    result: wikiResult
main.tf
resource "google_workflows_workflow" "example" {
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = YAML FILE HERE
Terraform has a function fileset which allows a configuration to react to files available on disk alongside its definition. You can use this as a starting point for constructing a suitable expression for for_each:
locals {
  workflow_files = fileset("${path.module}/workflows", "*.yaml")
}
It looks like you'd also need to specify a separate name for each workflow, due to the design of the remote system, and so perhaps you'd decide to set the name to be the same as the filename but with the .yaml suffix removed, like this:
locals {
  workflows = tomap({
    for fn in local.workflow_files :
    substr(fn, 0, length(fn) - 5) => "${path.module}/workflows/${fn}"
  })
}
This uses a for expression to project the set of filenames into a map from workflow name (trimmed filename) to the path to the specific file. The result then would look something like this:
{
  job-1 = "./module/workflows/job-1.yaml"
  job-2 = "./module/workflows/job-2.yaml"
}
This now meets the requirements for for_each, so you can refer to it directly as the for_each expression:
resource "google_workflows_workflow" "example" {
for_each = local.workflows
name = each.key
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
source_contents = file(each.value)
}
Your question didn't include any definition for how to populate the description argument, so I've left it set to hard-coded "Magic" as in your example. In order to populate that with something reasonable you'd need to have an additional data source for that, since what I wrote above is already making full use of the information we get just from scanning the content of the directory.
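If you truly need a single workflow resource built from all of the files, a minimal sketch, assuming each file contains a top-level YAML list of steps (as in your examples) and that the filenames sort in the order the steps should run, is to concatenate the file contents into one source_contents:
locals {
  # Concatenating top-level YAML lists yields one combined list of steps.
  single_source = join("\n", [
    for fn in sort(fileset("${path.module}/workflows", "*.yaml")) :
    file("${path.module}/workflows/${fn}")
  ])
}

resource "google_workflows_workflow" "single" {
  name            = "workflow"
  region          = "us-central1"
  description     = "Magic"
  service_account = google_service_account.test_account.id
  source_contents = local.single_source
}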
resource "google_workflows_workflow" "example" {
# count for total iterations
count = 20
name = "workflow"
region = "us-central1"
description = "Magic"
service_account = google_service_account.test_account.id
# refer to file using index, index starts from 0
source_contents = file("${path.module}/workflows/job-${each.index}.yaml")
}
I have written a Terraform script to create a few Azure Virtual Machines.
The number of VMs created is based upon a variable called type in my .tfvars file:
type = [ "Master-1", "Master-2", "Master-3", "Slave-1", "Slave-2", "Slave-3" ]
My variables.tf file contains the following local:
count_of_types = "${length(var.type)}"
And my resources.tf file contains the code required to actually create the relevant number of VMs from this information:
resource "azurerm_virtual_machine" "vm" {
count = "${local.count_of_types}"
name = "${replace(local.prefix_specific,"##TYPE##",var.type[count.index])}-VM"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_interface_ids = ["${azurerm_network_interface.main.*.id[count.index]}"]
vm_size = "Standard_B2ms"
tags = "${local.tags}"
Finally, in my output.tf file, I output the IP address of each server:
output "public_ip_address" {
value = ["${azurerm_public_ip.main.*.ip_address}"]
}
I am creating a Kubernetes cluster with 1x Master and 1x Slave VM. For this purpose, the script works fine - the first IP output is the Master and the second IP output is the Slave.
However, when I move to 8+ VMs in total, I'd like to know which IP refers to which VM.
Is there a way of amending my output to include the type local, or just the server's hostname alongside the Public IP?
E.g. 54.10.31.100 // Master-1.
Take a look at formatlist (one of the string manipulation functions), which can be used to iterate over the instance attributes and list tags and other attributes of interest.
output "ip-address-hostname" {
value = "${
formatlist(
"%s:%s",
azurerm_public_ip.resource_name.*.fqdn,
azurerm_public_ip.resource_name.*.ip_address
)
}"
}
Note that this is just draft pseudo-code. You may have to tweak it and create additional data sources in your TF file to enumerate the attributes you need.
More reading is available at https://www.terraform.io/docs/configuration/functions/formatlist.html
Raunak Jhawar's answer pointed me in the right direction, and therefore got the green tick.
For reference, here's the exact code I used in the end:
output "public_ip_address" {
value = "${formatlist("%s: %s", azurerm_virtual_machine.vm.*.name, azurerm_public_ip.main.*.ip_address)}"
}
This resulted in output listing each VM's name alongside its public IP, in the format requested above.
I'm relatively new to Terraform. I have a module set up as below; the issue I'm having is with the outputs when the module count is '0' while running a terraform plan. The output PW works fine now that I've used the element(concat(...)) workaround, but with the output DCPWUn I get the following error:
Error: Error refreshing state: 1 error(s) occurred:
* module.PrimaryDC.output.DCPWUn: At column 21, line 1: rsadecrypt: argument 1 should be type string, got type list in:
${element(concat("${rsadecrypt(aws_spot_instance_request.PrimaryDC.*.password_data,file("${var.PATH_TO_PRIVATE_KEY}"))}", list("")), 0)}
Code:
resource "aws_spot_instance_request" "PrimaryDC" {
wait_for_fulfillment = true
provisioner "local-exec" {
command = "aws ec2 create-tags --resources ${self.spot_instance_id} --tags Key=Name,Value=${var.ServerName}0${count.index +01}"
}
ami = "ami-629a7405"
spot_price = "0.01"
instance_type = "t2.micro"
count = "${var.count}"
key_name = "${var.KeyPair}"
subnet_id = "${var.Subnet}"
vpc_security_group_ids = ["${var.SecurityGroup}"]
get_password_data = "true"
user_data = <<EOF
<powershell>
Rename-computer -NewName "${var.ServerName}0${count.index +01}"
</powershell>
EOF
tags {
Name = "${var.ServerName}0${count.index +01}"
}
}
output "PW" {
value = "${element(concat("${aws_spot_instance_request.PrimaryDC.*.password_data}", list("")), 0)}"
}
output "DCPWUn" {
value = "${element(concat("${rsadecrypt(aws_spot_instance_request.PrimaryDC.*.password_data,file("${var.PATH_TO_PRIVATE_KEY}"))}", list("")), 0)}"
}
As the error says, rsadecrypt has an argument that is of type list, not string as it should be. If you want to ensure that the argument is a string, you need to invert your function call nesting to make sure that rsadecrypt gets a string:
output "DCPWUn" {
value = "${rsadecrypt(element(concat(aws_spot_instance_request.PrimaryDC.*.password_data, list("")), 0),file("${var.PATH_TO_PRIVATE_KEY}"))}"
}
The problem lies within this line
${element(concat("${rsadecrypt(aws_spot_instance_request.PrimaryDC.*.password_data,file("${var.PATH_TO_PRIVATE_KEY}"))}", list("")), 0)}
What are you trying to achieve? Let's break it down a little
element(…, 0): Get the first element of the following list.
concat(…,list("")): Concatenate the following list of strings and then append the concatenation of a list containing the empty string (Note that the second part is not useful, since you are appending an empty string).
rsadecrypt(…,file("${var.PATH_TO_PRIVATE_KEY}")): decrypt the following expression with the private key (Error: The following thing needs to be a string, you will be supplying a list)
aws_spot_instance_request.PrimaryDC.*.password_data This is a list of all password data (and not a string).
I don't know what your desired output should look like, but with the above list, you may be able to mix-and-match the functions to suit your needs.
edit: Fixed a mistake thanks to the comment by rahuljain1311.
I am not able to target a single aws_volume_attachment with its corresponding aws_instance via -target.
The problem is that the aws_instance is taken from a list by using count.index, which forces terraform to refresh all aws_instance resources from that list.
In my concrete case I am trying to manage a consul cluster with terraform.
The goal is to be able to reinit a single aws_instance resource via the -target flag, so I can upgrade/change the whole cluster node by node without downtime.
I have the following tf code:
### IP suffixes
variable "subnet_cidr" { "10.10.0.0/16" }
// I want nodes with addresses 10.10.1.100, 10.10.1.101, 10.10.1.102
variable "consul_private_ips_suffix" {
default = {
"0" = "100"
"1" = "101"
"2" = "102"
}
}
###########
# EBS
#
// Get existing data EBS via Name Tag
data "aws_ebs_volume" "consul-data" {
count = "${length(keys(var.consul_private_ips_suffix))}"
filter {
name = "volume-type"
values = ["gp2"]
}
filter {
name = "tag:Name"
values = ["${var.platform_type}.${var.platform_id}.consul.data.${count.index}"]
}
}
#########
# EC2
#
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
}
resource "aws_volume_attachment" "consul-data" {
count = "${length(keys(var.consul_private_ips_suffix))}"
device_name = "/dev/sdh"
volume_id = "${element(data.aws_ebs_volume.consul-data.*.id, count.index)}"
instance_id = "${element(aws_instance.consul.*.id, count.index)}"
}
This works perfectly fine for initializing the cluster.
Now I make a change in my user_data init script of the consul nodes and want to rollout node by node.
I run terraform plan -target=aws_volume_attachment.consul-data[0] to reinit node 0.
This is when I run into the above-mentioned problem: terraform refreshes all aws_instance resources because of instance_id = "${element(aws_instance.consul.*.id, count.index)}".
Is there a way to "force" tf to target a single aws_volume_attachment with only its corresponding aws_instance resource?
At the time of writing this sort of usage is not possible due to the fact that, as you've seen, an expression like aws_instance.consul.*.id creates a dependency on all the instances, before the element function is applied.
The -target option is not intended for routine use and is instead provided only for exceptional circumstances such as recovering carefully from an unintended change.
For this specific situation it may work better to use the ignore_changes lifecycle setting to prevent automatic replacement of the instances when user_data changes, like this:
resource "aws_instance" "consul" {
count = "${length(keys(var.consul_private_ips_suffix))}"
...
private_ip = "${cidrhost(aws_subnet.private-b.cidr_block, lookup(var.consul_private_ips_suffix, count.index))}"
lifecycle {
ignore_changes = ["user_data"]
}
}
With this set, Terraform will detect but ignore changes to the user_data attribute. You can then get the gradual replacement behavior you want by manually tainting the resources one at a time:
$ terraform taint aws_instance.consul[0]
On the next plan, Terraform will then see that this resource instance is tainted and produce a plan to replace it. This gives you direct control over when the resources are replaced, so you can therefore ensure that e.g. the consul leave step gets a chance to run first, or whatever other cleanup you need to do.
This workflow is recommended over -target because it makes the replacement step explicit. -target can be confusing in a collaborative environment because there is no evidence of its use, and thus no clear explanation of how the current state was reached. taint, on the other hand, explicitly marks your intention in the state where other team members can see it, and then replaces the resource via the normal plan/apply steps.
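Assuming the three nodes from the variables above, the node-by-node rollout would then look something like this, waiting for each replacement to rejoin the cluster before moving on:
$ terraform taint aws_instance.consul[0]
$ terraform apply
# wait for the new node 0 to rejoin the cluster, then continue:
$ terraform taint aws_instance.consul[1]
$ terraform apply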