Step Scaling ASG policy via Terraform

Below is the example of a target tracking ASG policy from the TF docs:
resource "aws_autoscaling_policy" "example" {
# ... other configuration ...
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = 40.0
}
target_tracking_configuration {
customized_metric_specification {
metric_dimension {
name = "fuga"
value = "fuga"
}
metric_name = "hoge"
namespace = "hoge"
statistic = "Average"
}
target_value = 40.0
}
}
I want to create a step scaling policy that also defines a custom metric like in this block. I am using the code below, but I am getting an error saying THIS BLOCK DOES NOT EXIST.
resource "aws_autoscaling_policy" "contentworker_inbound_step_scaling_policy" {
name = "${var.host_group}-${var.stack}-step-scaling-policy"
policy_type = "StepScaling"
autoscaling_group_name = aws_autoscaling_group.contentworker_inbound_asg.name
estimated_instance_warmup = 300
step_configuration {
customized_metric_specification {
metric_dimension {
name = "test"
value = “Size”
}
metric_name = "anything"
namespace = "test"
statistic = "Average"
unit = "None"
}
step_adjustment {
adjustment_type = "PercentChangeInCapacity"
scaling_adjustment = 10
metric_interval_lower_bound = 10
metric_interval_upper_bound = 25
}
}
}
I have the custom metric working fine with the target tracking policy, but not with step scaling.
Any suggestions on how I can set up a step scaling policy for my custom metric?
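As far as I can tell from the AWS provider docs, a StepScaling policy does not accept a metric specification at all (hence the "block does not exist" error for step_configuration): the step_adjustment blocks sit directly on the aws_autoscaling_policy resource, and the custom metric is wired up through a separate aws_cloudwatch_metric_alarm whose alarm_actions points at the policy ARN. A minimal sketch of that layout, reusing the names from the question; the alarm name, threshold and evaluation settings are placeholders:

resource "aws_autoscaling_policy" "contentworker_inbound_step_scaling_policy" {
  name                      = "${var.host_group}-${var.stack}-step-scaling-policy"
  policy_type               = "StepScaling"
  adjustment_type           = "PercentChangeInCapacity"
  autoscaling_group_name    = aws_autoscaling_group.contentworker_inbound_asg.name
  estimated_instance_warmup = 300

  # Step adjustments go directly on the policy; there is no nested metric block here.
  step_adjustment {
    scaling_adjustment          = 10
    metric_interval_lower_bound = 10
    metric_interval_upper_bound = 25
  }
}

# The custom metric lives in a CloudWatch alarm that triggers the policy.
resource "aws_cloudwatch_metric_alarm" "contentworker_inbound_step_scaling_alarm" {
  alarm_name          = "${var.host_group}-${var.stack}-step-scaling-alarm" # placeholder name
  namespace           = "test"
  metric_name         = "anything"
  statistic           = "Average"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  threshold           = 50 # placeholder; the interval bounds above are relative to this
  period              = 60
  evaluation_periods  = 1

  dimensions = {
    test = "Size"
  }

  alarm_actions = [aws_autoscaling_policy.contentworker_inbound_step_scaling_policy.arn]
}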

Related

auto scale azure spring app URI with terraform

I need to enable autoscale for a Spring app hosted by Azure Spring Apps. I used the Terraform code below:
resource "azurerm_monitor_autoscale_setting" "spring_apps_app_carrier_events" {
name = "default_auto_scale"
enabled = true
resource_group_name = module.rg.resource_group_name
location = module.rg.resource_group_location
target_resource_id = module.spring_apps_app_carrier_events.app_identities[0].principal_id
profile {
name = "defaultProfile"
capacity {
default = 1
minimum = 1
maximum = 2
}
It returns this error:
Error: Can not parse "target_resource_id" as a resource id: Cannot parse Azure ID: parse "290dc6bd-1895-4e52-bac2-a34e63a138a9": invalid URI for request
It seems it needs a URI. Do you know how I can get the URI of a Spring app?
Thanks in advance.
I tried to reproduce the same in my environment and received the same error:
│ Error: Can not parse "target_resource_id" as a resource id: Cannot parse Azure ID: parse "xxxxx": invalid URI for request
│ with azurerm_monitor_autoscale_setting.spring_apps_app_carrier_events,
The target_resource_id should not be just a numeric ID.
It has to be something like /subscriptions/xxxxxc/resourceGroups/<myrg>/providers/Microsoft.xxx/xx/sxx
In your case,
target_resource_id = module.spring_apps_app_carrier_events.app_identities[0].principal_id
gives the principal ID, which is in "23434354544466" format and is not correct.
I tried the code below with the target ID being the deployment's resource ID: /subscriptions/xxx/resourceGroups/<myrg>/providers/Microsoft.AppPlatform/spring/springcloudappkavya/apps/kaexamplspringcloudapp/deployments/kavyadeploy1
Code:
resource "azurerm_spring_cloud_service" "example" {
name = "springcloudappkavya"
location =data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
sku_name = "S0"
config_server_git_setting {
uri = "https://github.com/Azure-Samples/piggymetrics"
label = "config"
search_paths = ["dir1", "dir2"]
}
trace {
connection_string = azurerm_application_insights.example.connection_string
sample_rate = 10.0
}
tags = {
Env = "staging"
}
}
resource "azurerm_spring_cloud_app" "example" {
name = "kaexamplspringcloudapp"
resource_group_name = data.azurerm_resource_group.example.name
service_name = azurerm_spring_cloud_service.example.name
identity {
type = "SystemAssigned"
}
}
resource "azurerm_spring_cloud_java_deployment" "test" {
name = "kavyadeploy1"
spring_cloud_app_id = azurerm_spring_cloud_app.example.id
instance_count = 2
jvm_options = "-XX:+PrintGC"
quota {
cpu = "2"
memory = "4Gi"
}
runtime_version = "Java_11"
environment_variables = {
"Foo" : "Bar"
"Env" : "Staging"
}
}
resource "azurerm_monitor_autoscale_setting" "spring_apps_app_carrier_events" {
name = "default_auto_scale"
enabled = true
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
target_resource_id = azurerm_spring_cloud_java_deployment.test.id
// target_resource_id = .spring_apps_app_carrier_events.app_identities[0].principal_id
// target_resource_id = "18xxxxxe2"
profile {
name = "metricRules"
capacity {
default = 1
minimum = 1
maximum = 2
}
rule {
metric_trigger {
dimensions {
name = "AppName"
operator = "Equals"
values = [azurerm_spring_cloud_app.example.name]
}
dimensions {
name = "Deployment"
operator = "Equals"
values = [azurerm_spring_cloud_java_deployment.test.name]
}
metric_name = "AppCpuUsage"
metric_namespace = "microsoft.appplatform/spring"
metric_resource_id = azurerm_spring_cloud_service.example.id
time_grain = "PT1M"
statistic = "Average"
time_window = "PT5M"
time_aggregation = "Average"
operator = "GreaterThan"
threshold = 75
}
scale_action {
direction = "Increase"
type = "ChangeCount"
value = 1
cooldown = "PT1M"
}
}
}
}
It executed without errors.
Portal view of the Autoscale settings for Spring Apps:
Reference: An Azure Spring Cloud Update: Managed Virtual Network and Autoscale are now generally available in Azure Spring Cloud

Azure Storage (Blob, Queue, Table) Logging in Terraform with for_each and locals

I am writing Terraform code to enable logging on the Azure Storage Blob, Queue and Table types. With my current code, I need to fetch data for each storage type, say for example Blob, and use it to get its log and metrics details.
Is there any way I could use for_each and locals to avoid repeating the same block of code for each storage type? Below is what the code looks like now for the Blob type:
data "azurerm_monitor_diagnostic_categories" "storage_blob" {
resource_id = "${azurerm_storage_account.stamp.id}/blobServices/default/"
}
resource "azurerm_monitor_diagnostic_setting" "storage_blob" {
name = "storageblobladiagnostics"
target_resource_id = "${azurerm_storage_account.stamp.id}/blobServices/default/"
log_analytics_workspace_id = azurerm_log_analytics_workspace.stamp.id
dynamic "log" {
iterator = entry
for_each = data.azurerm_monitor_diagnostic_categories.storage_blob.logs
content {
category = entry.value
enabled = true
retention_policy {
enabled = true
days = 30
}
}
}
dynamic "metric" {
iterator = entry
for_each = data.azurerm_monitor_diagnostic_categories.storage_blob.metrics
content {
category = entry.value
enabled = true
retention_policy {
enabled = true
days = 30
}
}
}
}
The implementation below doesn't seem to work, as the data block is not able to handle the for_each expression in the dynamic block:
locals {
  storage = ["blobServices", "tableServices", "queueServices"]
}

data "azurerm_monitor_diagnostic_categories" "storage_blob" {
  resource_id = "${azurerm_storage_account.stamp.id}/${each.key}/default/"
}

resource "azurerm_monitor_diagnostic_setting" "storage_blob" {
  for_each                   = toset(local.storage)
  name                       = "storageblobladiagnostics"
  target_resource_id         = "${azurerm_storage_account.stamp.id}/${each.key}/default/"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.stamp.id

  dynamic "log" {
    iterator = entry
    for_each = data.azurerm_monitor_diagnostic_categories.storage_blob.logs

    content {
      category = entry.value
      enabled  = true

      retention_policy {
        enabled = true
        days    = 30
      }
    }
  }

  dynamic "metric" {
    iterator = entry
    for_each = data.azurerm_monitor_diagnostic_categories.storage_blob.metrics

    content {
      category = entry.value
      enabled  = true

      retention_policy {
        enabled = true
        days    = 30
      }
    }
  }
}
In order for this to work, you have to adjust the code slightly. In your example, the data source is not using for_each, so it cannot be referenced the way you want. The adjustment is as follows:
locals {
  storage = ["blobServices", "tableServices", "queueServices"]
}

data "azurerm_monitor_diagnostic_categories" "storage_blob" {
  for_each    = toset(local.storage)
  resource_id = "${azurerm_storage_account.stamp.id}/${each.key}/default/"
}

resource "azurerm_monitor_diagnostic_setting" "storage_blob" {
  for_each                   = toset(local.storage)
  name                       = "storageblobladiagnostics"
  target_resource_id         = "${azurerm_storage_account.stamp.id}/${each.key}/default/"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.stamp.id

  dynamic "log" {
    iterator = entry
    for_each = data.azurerm_monitor_diagnostic_categories.storage_blob[each.key].logs

    content {
      category = entry.value
      enabled  = true

      retention_policy {
        enabled = true
        days    = 30
      }
    }
  }

  dynamic "metric" {
    iterator = entry
    for_each = data.azurerm_monitor_diagnostic_categories.storage_blob[each.key].metrics

    content {
      category = entry.value
      enabled  = true

      retention_policy {
        enabled = true
        days    = 30
      }
    }
  }
}
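Since the data source is now keyed by the same set, each instance can be addressed with the service key from local.storage wherever it is needed. A quick sanity check, for example, is to expose one entry as an output (the output name here is just illustrative):

output "blob_log_categories" {
  value = data.azurerm_monitor_diagnostic_categories.storage_blob["blobServices"].logs
}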

if condition in terraform in count

I am adding autoscale settings to our Azure Cosmos DB databases. My problem is that not all our databases require autoscale; only a selection of them do, and the rest are manual. I cannot specify both the autoscale block and the throughput in the same resource, as the two conflict, so I thought of using count, but then I am not able to run the resource block for only one of the DBs. Take the example below.
Variable:
variable "databases" {
description = "The list of Cosmos DB SQL Databases."
type = list(object({
name = string
throughput = number
autoscale = bool
max_throughput = number
}))
default = [
{
name = "testcoll1"
throughput = 400
autoscale = false
max_throughput = 0
},
{
name = "testcoll2"
throughput = 400
autoscale = true
max_throughput = 1000
}
]
}
For the first one I don't need autoscale and for the next one I do. My main.tf code:
resource "azurerm_cosmosdb_mongo_database" "database_manual" {
count = length(var.databases)
name = var.databases[count.index].name
resource_group_name = azurerm_cosmosdb_account.cosmosdb.resource_group_name
account_name = local.account_name
throughput = var.databases[count.index].throughput
}
resource "azurerm_cosmosdb_mongo_database" "database_autoscale" {
count = length(var.databases)
name = var.databases[count.index].name
resource_group_name = azurerm_cosmosdb_account.cosmosdb.resource_group_name
account_name = local.account_name
autoscale_settings {
max_throughput = var.databases[count.index].max_throughput
}
}
First I thought of running two blocks, one with autoscale and one without, but I cannot proceed because that requires the count number
count = var.autoscale_required == true ? length(var.databases) : 0
at the start, and in my case I will only know it at the time of iteration. I have tried to use a dynamic block within the resource, but it errored out.
Update:
I have switched to for_each and am able to run the condition, but it still requires two blocks:
resource "azurerm_cosmosdb_mongo_database" "database_autoscale"
resource "azurerm_cosmosdb_mongo_database" "database_manual"
resource "azurerm_cosmosdb_mongo_database" "database_autoscale" {
for_each = {
for key, value in var.databases : key => value
if value.autoscale_required == true }
name = each.value.name
resource_group_name = azurerm_cosmosdb_account.cosmosdb.resource_group_name
account_name = local.account_name
autoscale_settings {
max_throughput = each.value.max_throughput
}
}
If I understand correctly, I think you could do what you want using the following:
resource "azurerm_cosmosdb_mongo_database" "database_autoscale" {
count = length(var.databases)
name = var.databases[count.index].name
resource_group_name = azurerm_cosmosdb_account.cosmosdb.resource_group_name
account_name = local.account_name
throughput = var.databases[count.index].autoscale == false ? var.databases[count.index].throughput : null
dynamic "autoscale_settings" {
for_each = var.databases[count.index].autoscale == false ? [] : [1]
content {
max_throughput = var.databases[count.index].max_throughput
}
}
}
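Since the update in the question already moved to for_each, the same conditional trick carries over there as well, which collapses everything into a single resource block. A sketch along those lines, keyed by database name and using the autoscale, throughput and max_throughput attributes from the variable above:

resource "azurerm_cosmosdb_mongo_database" "database" {
  for_each            = { for db in var.databases : db.name => db }
  name                = each.value.name
  resource_group_name = azurerm_cosmosdb_account.cosmosdb.resource_group_name
  account_name        = local.account_name

  # Throughput only applies to manual databases; autoscale databases get null here.
  throughput = each.value.autoscale ? null : each.value.throughput

  dynamic "autoscale_settings" {
    for_each = each.value.autoscale ? [1] : []

    content {
      max_throughput = each.value.max_throughput
    }
  }
}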

Terraform ignore sub-block changes in AWS metric_query

I am in the process of trying to configure IOPS alerting on EBS volumes as we move them to GP3. The plan is to configure the alarms in TF but to shift the setting of the target to a Lambda that can keep the alarm up to date based on lifecycle changes to the ASG. For GP2 volumes I was able to get this configured cleanly and have ignore_changes on the dimensions block of each alert, but now that I have moved to several metric_query blocks I cannot seem to find a way to address the nested dimension config.
resource "aws_cloudwatch_metric_alarm" "foobar" {
count = length(data.aws_availability_zones.available.names)
alarm_name = "${local.env_short}_app_volume_IOPS_${data.aws_availability_zones.available.names[count.index]}"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "5"
threshold = "2700"
alarm_description = "IOPS in breach of 90% of provisioned"
insufficient_data_actions = []
actions_enabled = "true"
datapoints_to_alarm = "5"
alarm_actions = [aws_sns_topic.app_alert.arn]
ok_actions = [aws_sns_topic.app_alert.arn]
metric_query {
id = "e1"
expression = "(m1+m2)/PERIOD(m1)"
label = "IOPSCalc"
return_data = "true"
}
metric_query {
id = "m1"
metric {
metric_name = "VolumeWriteOps"
namespace = "AWS/EBS"
period = "60"
stat = "Sum"
dimensions = {}
}
}
metric_query {
id = "m2"
metric {
metric_name = "VolumeReadOps"
namespace = "AWS/EBS"
period = "60"
stat = "Sum"
dimensions = {}
}
}
lifecycle {
ignore_changes = [metric_query.1.metric.dimensions]
}
}
I have tried various iterations of the ignore_changes block and so far have only succeeded if I set the value to [metric_query], but that ignores the whole thing, whereas I am trying to target just the metric_query.metric.dimensions piece. Anyone have any clever ideas for addressing this block?

How to attach a scheduler policy to a persistent volume claim in Gcloud with terraform

I created a webserver infrastructure with Terraform (v0.12.21) in Gcloud to deploy a lot of websites.
I created a persistent volume claim for each deploy (1 GB each), using this code:
resource "kubernetes_persistent_volume_claim" "wordpress_volumeclaim" {
for_each = var.wordpress_site
metadata {
name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
namespace = "default"
}
spec {
access_modes = ["ReadWriteOnce"]
resources {
requests = {
storage = each.value.disk
resource_policies = google_compute_resource_policy.policy.name
}
}
}
}
resource "kubernetes_deployment" "wordpress" {
for_each = var.wordpress_site
metadata {
name = each.value.name
labels = { app = each.value.name }
}
spec {
replicas = 1
selector {
match_labels = { app = each.value.name }
}
template {
metadata {
labels = { app = each.value.name }
}
spec {
volume {
name = "wordpress-persistent-storage-${terraform.workspace}-${each.value.name}"
persistent_volume_claim {
claim_name = "wordpress-volumeclaim-${terraform.workspace}-${each.value.name}"
}
}
[...]
But now I need to back up all these disks, and my best idea is to use the Gcloud snapshot functionality. It must be dynamic, as the creation of these disks is dynamic.
First of all, I created a snapshot policy:
resource "google_compute_resource_policy" "policy" {
name = "my-resource-policy"
region = "zone-region-here"
project = var.project
snapshot_schedule_policy {
schedule {
daily_schedule {
days_in_cycle = 1
start_time = "04:00"
}
}
retention_policy {
max_retention_days = 7
on_source_disk_delete = "KEEP_AUTO_SNAPSHOTS"
}
}
}
And now I want to add it to my persistent volume claim, but I don't know how, because this line is not working at all:
resource_policies = google_compute_resource_policy.policy.name
All my tries resulted in errors. Could you help me here?
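As far as I know, the PVC spec is not the place for this: resources.requests only accepts storage quantities, which is why that line fails. One possible direction, assuming you can obtain the name of the persistent disk that GKE provisions for each claim (the disk name and zone below are placeholders, not values from the original code), is to attach the snapshot schedule to the disk itself with google_compute_disk_resource_policy_attachment:

resource "google_compute_disk_resource_policy_attachment" "wordpress_backup" {
  for_each = var.wordpress_site

  # Placeholder: the real name of the dynamically provisioned disk has to be
  # looked up, e.g. from the bound PersistentVolume's volume handle.
  disk    = "pvc-disk-name-for-${each.value.name}"
  name    = google_compute_resource_policy.policy.name
  zone    = "your-zone-here"
  project = var.project
}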
