This may seem silly, but I've been looking for instructions or a tutorial on how to automatically tear down and bring back up Amazon instances (built from an AMI) on a schedule. This is because we have non-production servers used for development that don't need to run 24/7. Any chance someone can assist or point me in the proper direction?
Here is how I do it:
resource "aws_autoscaling_schedule" "asg_morning" {
  count                  = "${var.schedule_enabled}"
  scheduled_action_name  = "${upper(var.environment)}-${var.app}-AM-Schedule"
  min_size               = 1
  max_size               = 1
  desired_capacity       = 1
  recurrence             = "${var.schedule_am}"
  autoscaling_group_name = "${aws_autoscaling_group.app.name}"
}

resource "aws_autoscaling_schedule" "asg_evening" {
  count                  = "${var.schedule_enabled}"
  scheduled_action_name  = "${upper(var.environment)}-${var.app}-PM-Schedule"
  min_size               = 0
  max_size               = 0
  desired_capacity       = 0
  recurrence             = "${var.schedule_pm}"
  autoscaling_group_name = "${aws_autoscaling_group.app.name}"
}
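For completeness, here is a sketch of the variables these schedules assume. The variable names come from the snippet above; the default values are illustrative examples, not from the original (recurrence takes a standard cron expression, evaluated in UTC):

variable "schedule_enabled" {
  default = 1 # set to 0 to disable both schedules
}

variable "schedule_am" {
  default = "0 7 * * 1-5" # example: scale up at 07:00 UTC, Mon-Fri
}

variable "schedule_pm" {
  default = "0 19 * * 1-5" # example: scale to zero at 19:00 UTC, Mon-Fri
}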
I want to exempt certain policies for an Azure VM. I have the following Terraform code to exempt the policies. It uses locals to identify the scope at which policies should be exempted.
locals {
  exemption_scope = try({
    mg       = length(regexall("(\\/managementGroups\\/)", var.scope)) > 0 ? 1 : 0,
    sub      = length(split("/", var.scope)) == 3 ? 1 : 0,
    rg       = length(regexall("(\\/managementGroups\\/)", var.scope)) < 1 ? length(split("/", var.scope)) == 5 ? 1 : 0 : 0,
    resource = length(split("/", var.scope)) >= 6 ? 1 : 0,
  })

  expires_on = var.expires_on != null ? "${var.expires_on}T23:00:00Z" : null
  metadata   = var.metadata != null ? jsonencode(var.metadata) : null

  # generate reference Ids when unknown, assumes the set was created with the initiative module
  policy_definition_reference_ids = length(var.member_definition_names) > 0 ? [for name in var.member_definition_names :
    replace(substr(title(replace(name, "/-|_|\\s/", " ")), 0, 64), "/\\s/", "")
  ] : var.policy_definition_reference_ids

  exemption_id = try(
    azurerm_management_group_policy_exemption.management_group_exemption[0].id,
    azurerm_subscription_policy_exemption.subscription_exemption[0].id,
    azurerm_resource_group_policy_exemption.resource_group_exemption[0].id,
    azurerm_resource_policy_exemption.resource_exemption[0].id,
  "")
}
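To illustrate the detection logic (the scope IDs below are made-up examples), the classification hinges on whether the ID contains /managementGroups/ and on how many /-separated segments it has:

# "/subscriptions/00000000-0000-0000-0000-000000000000"
#     3 segments after split("/")                          -> sub = 1
# "/subscriptions/<sub-id>/resourceGroups/my-rg"
#     5 segments, no "/managementGroups/"                  -> rg = 1
# "/providers/Microsoft.Management/managementGroups/my-mg"
#     contains "/managementGroups/"                        -> mg = 1
# a full resource ID (such as a VM) has 6+ segments        -> resource = 1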
The above locals are used as shown below:
resource "azurerm_management_group_policy_exemption" "management_group_exemption" {
  count                           = local.exemption_scope.mg
  name                            = var.name
  display_name                    = var.display_name
  description                     = var.description
  management_group_id             = var.scope
  policy_assignment_id            = var.policy_assignment_id
  exemption_category              = var.exemption_category
  expires_on                      = local.expires_on
  policy_definition_reference_ids = local.policy_definition_reference_ids
  metadata                        = local.metadata
}
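The locals reference three more exemption resources that are not shown in the question; presumably they follow the same pattern for the other scopes. A sketch of the resource-scope variant (an assumption, mirroring the management group one above):

resource "azurerm_resource_policy_exemption" "resource_exemption" {
  count                           = local.exemption_scope.resource
  name                            = var.name
  display_name                    = var.display_name
  description                     = var.description
  resource_id                     = var.scope # here the scope is a full resource ID
  policy_assignment_id            = var.policy_assignment_id
  exemption_category              = var.exemption_category
  expires_on                      = local.expires_on
  policy_definition_reference_ids = local.policy_definition_reference_ids
  metadata                        = local.metadata
}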
Both the locals block and the azurerm_management_group_policy_exemption resource are part of the same module file. The policy exemption is applied as shown below:
module "exemption_jumpbox_sql_vulnerability_assessment" {
  count  = var.enable_jumpbox == true ? 1 : 0
  source = "../policy_exemption"

  name                            = "Exemption - SQL servers on machines should have vulnerability"
  display_name                    = "Exemption - SQL servers on machines should have vulnerability"
  description                     = "Not required for Jumpbox"
  scope                           = module.create_jumbox_vm[0].virtual_machine_id
  policy_assignment_id            = module.security_center.azurerm_subscription_policy_assignment_id
  policy_definition_reference_ids = var.exemption_policy_definition_ids
  exemption_category              = "Waiver"

  depends_on = [module.create_jumbox_vm, module.security_center]
}
It works for an existing Azure VM. However, it throws the following error when trying to provision the Azure VM and apply the policy exemption to it in the same run. Ideally, module.exemption_jumpbox_sql_vulnerability_assessment should be executed only after module.create_jumbox_vm, since it is declared as a dependency, but I am not sure why it throws this error:
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
I tried to reproduce the scenario in my environment.
resource "azurerm_management_group_policy_exemption" "management_group_exemption" {
  count                           = local.exemption_scope.mg
  name                            = var.name
  display_name                    = var.display_name
  description                     = var.description
  management_group_id             = var.scope
  policy_assignment_id            = var.policy_assignment_id
  exemption_category              = var.exemption_category
  expires_on                      = local.expires_on
  policy_definition_reference_ids = local.policy_definition_reference_ids
  metadata                        = local.metadata
}

locals {
  exemption_scope = try({
    ...
  })
}
Received the same error:
│ The "count" value depends on resource attributes that cannot be determined
│ until apply, so Terraform cannot predict how many instances will be
│ created. To work around this, use the -target argument to first apply only
│ the resources that the count depends on.
Referring to local values, the values will only be known at apply time, not at plan time. So if the local were not dependent on other resources, it would exempt the policies; but here it depends on the VM, which may still be in the process of being created.
So first target only the resource that the exemption depends on: only once the VM is created can the exemption policy be assigned to it.
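Per the error's own suggestion, that means a two-step apply along these lines (the module address is taken from the question):

terraform apply -target="module.create_jumbox_vm"
terraform apply

The first pass creates the VM; the second can then compute the exemption module's count.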
See the Terraform documentation on using expressions in count (HashiCorp Developer).
Also note that when using the Terraform count argument with Azure virtual machines, a NIC resource also has to be created for each virtual machine:
resource "azurerm_network_interface" "nic" {
  count               = var.vm_count
  name                = "${var.vm_name_pfx}-${count.index}-nic"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  //tags              = var.tags

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
  }
}
Reference: terraform-azurerm-policy-exemptions/examples/count at main · AnsumanBal-MT/terraform-azurerm-policy-exemptions · GitHub
I have a child module for a Windows virtual machine. Then I have a root module (main.tf), where I use that child module:
module "vm-win-resource" {
  source = "./Modules/ServerWindows"
  count  = 2

  vm-name       = "vm-win-${random_string.rnd.result}" # OR "vm-win-${module.rnd-num.rnd-result}"
  vm-rg         = module.rg-resouce.rg-name
  vm-location   = module.rg-resouce.rg-location
  nic-name      = "vm-win-${random_string.rnd.result}-nic1" # OR "vm-win-${module.rnd-num.rnd-result}-nic1"
  nic-rg        = module.rg-resouce.rg-name
  nic-location  = module.rg-resouce.rg-location
  nic-ip-subnet = "HERE IS SUBNET ID"
}
In the same main.tf file, it makes no difference whether I use the random_string resource directly:
resource "random_string" "rnd" {
  length      = 4
  min_numeric = 4
  special     = false
  lower       = true
}
or create a module for the random number and use it in the virtual machine module; the result is the same:
module "rnd-num" {
  source = "./Modules/RandomNumber"
}
I get the same name (the same generated number for both):
+ vm-win-name = [
+ [
+ "vm-win-6286",
+ "vm-win-6286",
],
]
So in both cases, the value is generated only once. The question is: how can I generate a new random number for every iteration of the virtual machine module?
Thank you for any help!
UPDATE
As a workaround, I have placed the resource that generates the random number inside the virtual machine resource/module specification:
resource "azurerm_windows_virtual_machine" "vm-resource" {
  name                = "${var.vm-name}-${random_string.rnd.result}"
  resource_group_name = var.vm-rg
  location            = var.vm-location
  size                = var.vm-size
  admin_username      = var.vm-admin
  admin_password      = var.vm-adminpwd
  network_interface_ids = [
    azurerm_network_interface.nic-resource.id,
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = var.vm-os-disk-type
  }

  source_image_reference {
    publisher = var.vm-os-image.publisher
    offer     = var.vm-os-image.offer
    sku       = var.vm-os-image.sku
    version   = var.vm-os-image.version
  }

  tags = var.resource-tags
}

resource "random_string" "rnd" {
  length      = 4
  min_numeric = 4
  special     = false
  lower       = true
}
It does the job, but I would prefer to have it in the main.tf file rather than directly in the resource/module specification, if that is possible.
A few words about how Terraform random_string works:
random_string generates a random string from a given set of characters. The string is generated once; referencing its result attribute in multiple places gives you the same output. random_string.rnd.result does not act as a function call, so it provides the same value in every place it is referenced.
The result of a random_string will not change across consecutive applies. This is to be expected: if it did change, using random_string would be dangerous, since every apply would re-provision the resources that reference it.
If we want to have multiple different random strings, we have to define multiple random_string resources. For example:
resource "random_string" "rnd" {
  count       = 2
  length      = 4
  min_numeric = 4
  special     = false
  lower       = true
}

module "vm-win-resource" {
  source = "./Modules/ServerWindows"
  count  = 2

  vm-name       = "vm-win-${random_string.rnd[count.index].result}"
  vm-rg         = module.rg-resouce.rg-name
  vm-location   = module.rg-resouce.rg-location
  nic-name      = "vm-win-${random_string.rnd[count.index].result}-nic1"
  nic-rg        = module.rg-resouce.rg-name
  nic-location  = module.rg-resouce.rg-location
  nic-ip-subnet = "HERE IS SUBNET ID"
}
Please note that we use count on the random_string resource as well, so random_string.rnd[0] and random_string.rnd[1] are two independent strings.
Let's say I have an Auto Scaling group that I manage via Terraform, and I want that Auto Scaling group to scale up and scale down based on our business hours.
The Terraform template for managing the ASG:
resource "aws_autoscaling_group" "foobar" {
  availability_zones        = ["us-west-2a"]
  name                      = "terraform-test-foobar5"
  max_size                  = 1
  min_size                  = 1
  health_check_grace_period = 300
  health_check_type         = "ELB"
  force_delete              = true
  termination_policies      = ["OldestInstance"]
}

resource "aws_autoscaling_schedule" "foobar" {
  scheduled_action_name  = "foobar"
  min_size               = 0
  max_size               = 1
  desired_capacity       = 0
  start_time             = "2016-12-11T18:00:00Z"
  end_time               = "2016-12-12T06:00:00Z"
  autoscaling_group_name = aws_autoscaling_group.foobar.name
}
As we can see, here I have to set a particular date and time for the action.
What I want is to scale down by 10% of my current capacity on Saturday night at 9 PM, and then scale back up by 10% on Monday morning at 6 AM.
How can I achieve this? Any help is highly appreciated.
The solution is not straightforward, but it is doable. The required steps are:
create a Lambda function that scales the ASG down (e.g. with Boto3 and Python)
assign it an IAM role with the right permissions
create a cron trigger for "every Saturday 9 PM" with aws_cloudwatch_event_rule
create an aws_cloudwatch_event_target with the previously created cron trigger and Lambda function
repeat for scaling up
This module will probably fit your needs; you just have to write the Lambda and use the module to trigger it on a schedule.
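For orientation, here is a minimal Terraform sketch of the cron trigger and target steps. The Lambda (aws_lambda_function.scale_down) and its IAM role are assumed to be defined elsewhere, and the cron expression is an example for Saturday 21:00 UTC:

resource "aws_cloudwatch_event_rule" "scale_down" {
  name = "asg-scale-down-saturday"
  # EventBridge cron: minute hour day-of-month month day-of-week year
  schedule_expression = "cron(0 21 ? * SAT *)"
}

resource "aws_cloudwatch_event_target" "scale_down" {
  rule = aws_cloudwatch_event_rule.scale_down.name
  arn  = aws_lambda_function.scale_down.arn # assumed Lambda, defined elsewhere
}

# Allow EventBridge to invoke the Lambda
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromEvents"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.scale_down.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.scale_down.arn
}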
When I try to add a PTR record in DNS, I get an Invalid index error. I am uncertain how to resolve it.
resource "openstack_compute_instance_v2" "app-stage" {
  count             = length(var.datacenter)
  name              = "app-stage-${var.datacenter[count.index]}.example.com"
  flavor_name       = var.flavor["app-stage"]
  availability_zone = element(var.datacenter, count.index)
  key_pair          = var.key_pair
  image_id          = var.os_image
  config_drive      = true
  user_data         = data.template_file.app-stage[count.index].rendered

  scheduler_hints {
    group = openstack_compute_servergroup_v2.app_sg.id
  }

  network {
    port = openstack_networking_port_v2.app-stage[count.index].id
  }
}
resource "dns_aaaa_record_set" "app-stage-dns" {
  count     = length(var.datacenter)
  zone      = format("%s.", var.dns_zone)
  name      = "app-stage-${var.datacenter[count.index]}.example"
  addresses = [replace(openstack_compute_instance_v2.app-stage[count.index].access_ip_v6, "/\\[|\\]/", "")]
  ttl       = 300
}
resource "dns_ptr_record" "app-stage-dns-ptr" {
  count = length(var.datacenter)
  zone  = format("%s.", var.dns_ptr_zone)
  ptr   = "app-stage-${var.datacenter[count.index]}.example"
  name  = tolist(dns_aaaa_record_set.app-stage-dns)[count.index].addresses[0]
  ttl   = 300
}
This is the error message I get when running terraform apply:
Error: Invalid index
on app-stage.tf line 94, in resource "dns_ptr_record" "app-stage-dns-ptr":
94: name = tolist(dns_aaaa_record_set.app-stage-dns)[count.index].addresses[0]
|----------------
| count.index is 1
| dns_aaaa_record_set.app-stage-dns is tuple with 2 elements
This value does not have any indices.
This is repeated twice, since I am creating 2 machines / 2 records.
Based on the comments.
It should be:
name = tolist(dns_aaaa_record_set.app-stage-dns[count.index].addresses)[0]
not (note the different placement of the closing parenthesis):
name = tolist(dns_aaaa_record_set.app-stage-dns)[count.index].addresses[0]
The addresses attribute is a set of strings, and sets cannot be indexed. The correct version converts addresses itself to a list before taking element [0]; wrapping the whole resource tuple in tolist() leaves addresses as a set, hence the "This value does not have any indices" error.
I am not able to create multiple Glue jobs through Terraform. I am using count for the job name, but when I try to do the same for the job's S3 script path, it complains that only a single string is allowed:
command.0.script_location must be a single value, not a list
I tried playing around with the order of count, but it looks like it creates 2 paths for every count name:
resource "aws_glue_job" "glue_ETL_jobs" {
  count = "${length(var.jobnames)}"
  count = "${length(var.script_location)}"

  name     = "${var.jobnames[count.index]}_glueETLjob"
  role_arn = "${var.ETLjob_glue_role}"

  command {
    script_location = ["${var.script_location[count.index]}"]
  }

  default_arguments = {
    "--job-language"                     = "${var.job_language}"
    "--job-bookmark-option"              = "${var.job_bookmark_option}"
    "--TempDir"                          = "${var.tempdirectory}"
    "--enable-continuous-cloudwatch-log" = "${var.cloud_watch_logging}"
    "--enable-continuous-log-filter"     = "${var.continuous_log_filter}"
    "--max-capacity"                     = "${var.max-capacity}"
  }
}
As given above:
name            = "${var.jobnames[count.index]}_glueETLjob"
script_location = ["${var.script_location[count.index]}"]
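For what it's worth, a sketch of how this is usually resolved, assuming var.jobnames and var.script_location are parallel lists of the same length: a resource can only have one count argument, and script_location must be a plain string rather than a list.

resource "aws_glue_job" "glue_ETL_jobs" {
  # one count only; it drives both the job name and the script path
  count    = "${length(var.jobnames)}"
  name     = "${var.jobnames[count.index]}_glueETLjob"
  role_arn = "${var.ETLjob_glue_role}"

  command {
    # a single string, not a list
    script_location = "${var.script_location[count.index]}"
  }

  # remaining default_arguments as in the question
  default_arguments = {
    "--job-language"        = "${var.job_language}"
    "--job-bookmark-option" = "${var.job_bookmark_option}"
    "--TempDir"             = "${var.tempdirectory}"
  }
}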