I use the AWS Redshift Terraform module (https://github.com/terraform-aws-modules/terraform-aws-redshift) to provision Redshift. Per the documentation, final_snapshot_identifier is not required, but I got this error:
Error: only alphanumeric characters and hyphens allowed in "final_snapshot_identifier".
The documentation says: "final_snapshot_identifier: (Optional) The identifier of the final snapshot that is to be created immediately before deleting the cluster. If this parameter is provided, 'skip_final_snapshot' must be false". I can work around the problem by adding:
final_snapshot_identifier = var.final_snapshot_identifier
skip_final_snapshot = true
But why?
module "redshift" {
  source  = "terraform-aws-modules/redshift/aws"
  version = "2.7.0"

  #redshift_subnet_group_name = var.redshift_subnet_group_name
  subnets = data.terraform_remote_state.vpc.outputs.redshift_subnets
  #parameter_group_name = var.parameter_group_name
  cluster_identifier      = var.cluster_identifier
  cluster_database_name   = var.cluster_database_name
  encrypted               = false
  cluster_master_password = var.cluster_master_password
  cluster_master_username = var.cluster_master_username
  cluster_node_type       = var.cluster_node_type
  cluster_number_of_nodes = var.cluster_number_of_nodes
  enhanced_vpc_routing    = false
  publicly_accessible     = true
  vpc_security_group_ids  = [module.sg.this_security_group_id]

  final_snapshot_identifier = var.final_snapshot_identifier
  skip_final_snapshot       = true
}
If you are providing a value for final_snapshot_identifier, skip_final_snapshot should be false, but you have set it to true:
final_snapshot_identifier = var.final_snapshot_identifier
skip_final_snapshot = true
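In other words, only two combinations are consistent; a sketch (the identifier value here is a placeholder, not from the module):

# Either skip the final snapshot entirely; the identifier is then irrelevant:
skip_final_snapshot = true

# Or take a final snapshot, with an identifier containing only
# alphanumeric characters and hyphens:
skip_final_snapshot       = false
final_snapshot_identifier = "my-cluster-final-snapshot"

The validation error appears when the identifier value being passed (for example an empty or unset variable) contains characters outside that allowed set.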
I'm trying to set an optional block called "sensitive_labels", but I can't get it to work as an optional one.
My code:
variables.tf:
variable "notification_channels" {
  type = any
}

variable "project_id" {
  type = string
}
main.tf:
resource "google_monitoring_notification_channel" "channels" {
  project  = var.project_id
  for_each = { for k, v in var.notification_channels : k => v }

  type         = each.value.type
  display_name = each.value.display_name
  description  = each.value.description
  labels       = each.value.labels
  enabled      = each.value.enabled

  dynamic "sensitive_labels" {
    for_each = each.value.sensitive_labels != {} ? [each.value.sensitive_labels] : []
    content {
      auth_token = lookup(sensitive_labels.value, "auth_token", null)
    }
  }
}
dev.tfvars:
notification_channels = [
  {
    type         = "email"
    display_name = "a channel to send emails"
    description  = "a nice channel"
    labels = {
      email_address = "HeyThere#something.com"
    }
    enabled          = true
    sensitive_labels = {} // this one doesn't give any errors.
  },
  {
    type         = "email"
    display_name = "HeyThere Email"
    description  = "a channel to send emails"
    labels = {
      email_address = "HeyThere2#something.com"
    }
    enabled = true
  }
]
Getting:
Error: Unsupported attribute
on notification_channels.tf line 11, in resource "google_monitoring_notification_channel" "channels":
11: for_each = each.value.sensitive_labels != {} ? [each.value.sensitive_labels] : []
│ ├────────────────
each.value is object with 5 attributes
This object does not have an attribute named "sensitive_labels".
How can I make setting sensitive_labels an optional attribute here?
EDIT:
This seems to work but feels a bit off:
resource "google_monitoring_notification_channel" "channels" {
  project  = var.project_id
  for_each = { for k, v in var.notification_channels : k => v }

  type         = each.value.type
  display_name = each.value.display_name
  description  = each.value.description
  labels       = each.value.labels
  enabled      = each.value.enabled

  dynamic "sensitive_labels" {
    for_each = lookup(each.value, "sensitive_labels", {})
    content {
      auth_token = lookup(sensitive_labels.value, "auth_token", null)
    }
  }
}
Is there a better way that doesn't feel hacky?
A good place to start is to properly define a type constraint for your input variable, so that Terraform can understand better what data structure is expected and help ensure that the given value matches that data structure.
type = any is not there so you can skip defining a type constraint; it exists for the very rare situation where a module just passes a data structure verbatim to a provider without interpreting it at all. Since your module clearly expects this input variable to be a map of objects (based on how you've used it), you should tell Terraform what object type you are expecting to receive:
variable "notification_channels" {
  type = map(object({
    type         = string
    display_name = string
    description  = string
    labels       = map(string)
    enabled      = bool
    sensitive_labels = object({
      auth_token  = string
      password    = string
      service_key = string
    })
  }))
}
From your example it seems like you want sensitive_labels to be optional, so that the caller of the module can omit it. In that case you can use the optional modifier when you declare that particular attribute, and also the three attributes inside it:
sensitive_labels = optional(object({
  auth_token  = optional(string)
  password    = optional(string)
  service_key = optional(string)
}))
An attribute that's marked as optional can be omitted by the caller, and in that case Terraform will automatically set it to null inside your module to represent that it wasn't set.
Now you can use this variable elsewhere in your module and safely assume that it will always have exactly the type defined in the variable block:
resource "google_monitoring_notification_channel" "channels" {
  for_each = var.notification_channels

  project      = var.project_id
  type         = each.value.type
  display_name = each.value.display_name
  description  = each.value.description
  labels       = each.value.labels
  enabled      = each.value.enabled

  dynamic "sensitive_labels" {
    for_each = each.value.sensitive_labels[*]
    content {
      auth_token  = sensitive_labels.value.auth_token
      password    = sensitive_labels.value.password
      service_key = sensitive_labels.value.service_key
    }
  }
}
The each.value.sensitive_labels[*] expression is a splat expression using the single values as lists feature, which concisely transforms the given value into either a one-element list or a zero-element list depending on whether the value is null. That effectively means that there will be one sensitive_labels block if each.value.sensitive_labels is set, and zero blocks of that type if that attribute is unset (null).
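A small sketch of that behavior, with hypothetical values:

locals {
  absent  = null
  present = { auth_token = "abc" }
}

# local.absent[*]  evaluates to []               -> zero sensitive_labels blocks
# local.present[*] evaluates to [local.present]  -> one sensitive_labels block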
The attributes inside those blocks can also just be assigned directly without any special logic, because Terraform will have automatically set them to null if not specified by the caller and setting a resource argument to null is always the same as not setting it at all.
If you take the time to actually describe the types of variables you expect then it tends to make logic elsewhere in the module much simpler, because you no longer need to deal with all of the ways in which the caller might pass you an incorrect value: Terraform will either convert the value automatically to the expected type if possible, or will report an error to the caller explaining why the value they provided isn't acceptable.
Currently, we are using the following code to create a topic:
resource "kafka_topic" "topic" {
  count = length(var.topics)
  name  = "${var.environment}${lookup(var.topics[count.index], "name")}"

  max_message_bytes = lookup(var.topics[count.index], "max_message_bytes") != "-1" ? lookup(var.topics[count.index], "max_message_bytes") : 1000012
}
and passing the topics as a list like this:
locals {
  iddn_news_cms_kafka_topics = [
    {
      name                      = "topic1"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
    },
    {
      name                      = "topic2"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
    },
  ]
}
But when we remove a topic from the middle of the list, Terraform destroys and recreates every topic that comes after it in the sequence, because count tracks resources by their index in the list.
I have also tried this:
for_each = {
  for index, topic in var.topics :
  topic.name => topic
}

name = "${var.environment}${each.value["name"]}${each.value["version"]}"
But terraform plan then shows changes that should not happen, since the topics are already created.
Is there any other alternative that makes this change without impacting the existing topics?
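One possible approach, sketched under the assumption that topic names are unique: keep the for_each keyed by name, and do a one-time state migration so Terraform maps the existing index-based entries to the new name-based addresses instead of planning to destroy and recreate them.

resource "kafka_topic" "topic" {
  for_each = { for topic in var.topics : topic.name => topic }

  name = "${var.environment}${each.value["name"]}${each.value["version"]}"
}

# One-time migration of the existing state entries, one command per topic, e.g.:
#   terraform state mv 'kafka_topic.topic[0]' 'kafka_topic.topic["topic1"]'
#   terraform state mv 'kafka_topic.topic[1]' 'kafka_topic.topic["topic2"]'

Note that a plan will still show changes for any topic whose name attribute actually changes under the new format; the state moves only prevent recreation caused by index shifting.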
I have some Kinesis Firehose Delivery Stream resources created via Terraform. Due to a known bug (https://github.com/hashicorp/terraform-provider-aws/issues/9827), when the Lambda transform parameters are left at their defaults, Terraform does not write them to the state file, and every plan/apply tries to create them again. Because of this issue, I'm trying to add a lifecycle ignore_changes block for them.
This is one of my resources;
resource "aws_kinesis_firehose_delivery_stream" "some_stream" {
  name        = "some_name"
  destination = ""

  s3_configuration {
    role_arn           = "some_name"
    bucket_arn         = "arn:aws:s3:::somebucket"
    prefix             = "some/prefix/"
    buffer_size        = 64
    buffer_interval    = 60
    compression_format = "GZIP"

    cloudwatch_logging_options {
      enabled         = true
      log_group_name  = aws_cloudwatch_log_group.some_log_group.name
      log_stream_name = aws_cloudwatch_log_stream.some_log_stream.name
    }
  }

  elasticsearch_configuration {
    domain_arn            = "arn:aws:es:some-es-domain"
    role_arn              = "arn:aws:iam::some-role"
    index_name            = "some-index"
    index_rotation_period = "OneDay"
    buffering_interval    = 60
    buffering_size        = 64
    retry_duration        = 300
    s3_backup_mode        = "AllDocuments"

    cloudwatch_logging_options {
      enabled         = true
      log_group_name  = aws_cloudwatch_log_group.some_log_group.name
      log_stream_name = aws_cloudwatch_log_stream.some_log_stream.name
    }

    processing_configuration {
      enabled = "true"
      processors {
        type = "Lambda"
        parameters {
          parameter_name  = "LambdaArn"
          parameter_value = "arn:aws:lambda:some-lambda"
        }
        parameters {
          parameter_name  = "BufferSizeInMBs"
          parameter_value = "3"
        }
        parameters {
          parameter_name  = "BufferIntervalInSeconds"
          parameter_value = "60"
        }
      }
    }
  }
}
In the resource above, BufferSizeInMBs and BufferIntervalInSeconds are constantly changing. I'm trying to ignore those two without touching LambdaArn, but since all three use the same parameters structure below, I couldn't figure out how to do that, or whether it's even possible.
parameters {
  parameter_name  = ""
  parameter_value = ""
}
I tried this;
lifecycle {
  ignore_changes = [elasticsearch_configuration.0.processing_configuration.0.processors]
}
But this doesn't exclude parameter_name = "LambdaArn" from being ignored.
To go further,
I tried something like;
lifecycle {
  ignore_changes = [
    elasticsearch_configuration.0.processing_configuration.0.processors[1],
    elasticsearch_configuration.0.processing_configuration.0.processors[2],
  ]
}
But it didn't work: it didn't give an error, but it didn't ignore the changes either. My Terraform version is 1.1.6 and the AWS provider version is ~> 3.0 (3.75.1 to be exact).
Any help will be highly appreciated. Thank you very much, best regards.
I have a variable that defines the query parameters in my API Gateway resource. Every resource has a base set of default query parameters. Some resources have the base set plus an additional parameter.
# base set of query parameters that apply to all resources
variable "parameters_default" {
  default = {
    "method.request.querystring.brokerage"      = false
    "method.request.querystring.account_alias"  = false
    "method.request.querystring.start_date"     = false
    "method.request.querystring.end_date"       = false
    "method.request.querystring.valuation_date" = false
  }
}
# additional query parameter that applies to only one resource
variable "parameters_special_resource" {
  default = {
    "method.request.querystring.brokerage"      = false
    "method.request.querystring.account_alias"  = false
    "method.request.querystring.start_date"     = false
    "method.request.querystring.end_date"       = false
    "method.request.querystring.valuation_date" = false
    "method.request.querystring.top"            = false
  }
}
Instead of having to redefine all the baseline query parameters, I want to use the baseline to compose the second one. Something like this
# compose the parameters_special_resource variable using parameters_default
variable "parameters_special_resource" {
  # baseline parameters come from parameters_default, plus:
  default = {
    "method.request.querystring.top" = false
  }
}
How is this done?
It's not possible to create dynamic variables. Instead you should use locals and merge:
# base set of query parameters that apply to all resources
variable "parameters_default" {
  default = {
    "method.request.querystring.brokerage"      = false
    "method.request.querystring.account_alias"  = false
    "method.request.querystring.start_date"     = false
    "method.request.querystring.end_date"       = false
    "method.request.querystring.valuation_date" = false
  }
}

# additional query parameter that applies to only one resource
variable "parameters_special_resource" {
  default = {
    "method.request.querystring.top" = false
  }
}
locals {
  # merge base and special parameters
  parameters_special_resource = merge(var.parameters_default, var.parameters_special_resource)
}
Then you use local.parameters_special_resource in your code.
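Note that merge() gives later arguments precedence on duplicate keys, so the special map could even override a base value if needed; a small illustration with hypothetical keys:

locals {
  example = merge({ a = 1, b = 2 }, { b = 3 })
  # local.example is { a = 1, b = 3 }
}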
I provisioned Elasticsearch. I got URL outputs for "domain_endpoint", "domain_hostname", "kibana_endpoint" and "kibana_hostname", but I cannot reach any of these URLs; I get "This site can’t be reached". What am I missing? Below is the code:
main.tf:
module "elasticsearch" {
  source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"

  security_groups                = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  vpc_id                         = data.terraform_remote_state.vpc.outputs.vpc_id
  zone_awareness_enabled         = var.zone_awareness_enabled
  subnet_ids                     = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
  elasticsearch_version          = var.elasticsearch_version
  instance_type                  = var.instance_type
  instance_count                 = var.instance_count
  encrypt_at_rest_enabled        = var.encrypt_at_rest_enabled
  dedicated_master_enabled       = var.dedicated_master_enabled
  create_iam_service_linked_role = var.create_iam_service_linked_role
  kibana_subdomain_name          = var.kibana_subdomain_name
  ebs_volume_size                = var.ebs_volume_size
  dns_zone_id                    = var.dns_zone_id
  kibana_hostname_enabled        = var.kibana_hostname_enabled
  domain_hostname_enabled        = var.domain_hostname_enabled

  advanced_options = {
    "rest.action.multi.allow_explicit_index" = "true"
  }

  context = module.this.context
}
terraform.tfvars:
enabled = true
region = "us-west-2"
namespace = "dev"
stage = "abcd"
name = "abcd"
instance_type = "m5.xlarge.elasticsearch"
elasticsearch_version = "7.7"
instance_count = 2
zone_awareness_enabled = true
encrypt_at_rest_enabled = false
dedicated_master_enabled = false
elasticsearch_subdomain_name = "abcd"
kibana_subdomain_name = "abcd"
ebs_volume_size = 250
create_iam_service_linked_role = false
dns_zone_id = "Z08006012KJUIEOPDLIUQ"
kibana_hostname_enabled = true
domain_hostname_enabled = true
You are placing your ES domain in a VPC, in private subnets. It does not matter whether the subnet is public or private; public access does not apply here. From the AWS docs:
To perform even basic GET requests, your computer must be able to connect to the VPC. This connection often takes the form of a VPN, managed network, or proxy server.
Even if you place it in a public subnet, it will not be accessible over the internet. A popular solution to this issue is an SSH tunnel, which is also described in the AWS docs for ES:
Testing VPC Domains