How do I escape true/false in terraform?

I need to pass the word true or false to a template file in Terraform. However, if I try to provide the value, it comes out as 0 or 1 due to interpolation syntax. I tried doing \\true\\ as recommended in https://www.terraform.io/docs/configuration/interpolation.html, however that results in \true\, which obviously isn't right. Same with \\false\\, which results in \false\.
To complicate matters, I also have a scenario where I need to pass it the value of a variable, which can either equal true or false.
Any ideas?
# control whether to enable REST API and set other port defaults
data "template_file" "master_spark_defaults" {
  template = "${file("${path.module}/templates/spark/spark-defaults.conf")}"

  vars = {
    spark_server_port   = "${var.application_port}"
    spark_driver_port   = "${var.spark_driver_port}"
    rest_port           = "${var.spark_master_rest_port}"
    history_server_port = "${var.history_server_port}"
    enable_rest         = "${var.spark_master_enable_rest}"
  }
}
var.spark_master_enable_rest can be either true or false. I tried setting the variable as "\\${var.spark_master_enable_rest}\\", but again this resulted in either \true\ or \false\.
Edit 1:
Here is the relevant portion of conf file in question:
spark.ui.port ${spark_server_port}
# set to default worker random number.
spark.driver.port ${spark_driver_port}
spark.history.fs.logDirectory /var/log/spark
spark.history.ui.port ${history_server_port}
spark.worker.cleanup.enabled true
spark.worker.cleanup.appDataTtl 86400
spark.master.rest.enabled ${enable_rest}
spark.master.rest.port ${rest_port}

I think you must be overthinking this. If I set my variable value as
spark_master_enable_rest = "true"
then I get:
spark.worker.cleanup.enabled true
spark.worker.cleanup.appDataTtl 86400
spark.master.rest.enabled true
in my result when I apply.
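The key point is that the variable needs to be declared as a string rather than a bare boolean, so Terraform 0.11's interpolation never normalizes it to 0/1. A minimal sketch of what that declaration might look like (the string type and the default are my assumption, not from the question):

variable "spark_master_enable_rest" {
  # Keeping the value a string means interpolation passes
  # "true"/"false" through to the template verbatim.
  type    = "string"
  default = "true"
}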

I ended up creating a cloud-config script to find/replace the 0/1 in the file:
part {
  content_type = "text/x-shellscript"
  content      = <<SCRIPT
#!/bin/sh
sed -i.bak -e '/spark.master.rest.enabled/s/0/false/' -e '/spark.master.rest.enabled/s/1/true/' /opt/spark/conf/spark-defaults.conf
SCRIPT
}

Related

How can I pass a value to a local variable in terraform based on a flag? The challenge is there are multiple flags

Can I get some help on this please? I want to assign the local variable based on the flags. The challenge is there are multiple flags, and based on whichever flag is set to true I should have my local variable set accordingly.
For example, I have the following flags:
docker_enabled = true
maven_enabled = false
nugget_enabled = false
In my locals I should have a condition to set the package_type as "docker" if the docker_enabled flag is true, and similarly package_type should be "maven" if maven_enabled is true.
Something like this:
package_type = local.docker_enabled ? local.docker_package : null || local.generic_enabled ? local.generic_package : null || local.maven_enabled ? local.maven_package : null
dev_repo = formatlist("%s-dev-%s-local", local.tla, local.package_type)
uat_repo = formatlist("%s-uat-%s-local", local.tla, local.package_type)
prod_repo = formatlist("%s-prod-%s-local", local.tla, local.package_type)
virtual_repo = formatlist("%s-%s-virtual", local.tla, local.package_type)
permission_repos = concat(local.dev_repo, local.uat_repo, local.prod_repo)
The issue I am facing is that when I make all three (docker, maven and nugget) "false", it complains package_type can't be null.
Any help much appreciated here.. thank you
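One way around the null failure is to chain the conditionals and end with an explicit fallback, so package_type always gets a value even when every flag is false. A minimal sketch, assuming Terraform 0.12+; the coalesce() call and the "generic" fallback are my additions, not from the question:

locals {
  # coalesce returns the first non-null argument, so the final
  # "generic" string acts as the default when no flag is true.
  package_type = coalesce(
    local.docker_enabled ? local.docker_package : null,
    local.maven_enabled ? local.maven_package : null,
    local.nugget_enabled ? local.nugget_package : null,
    "generic"
  )
}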

Terraform - How to use conditionally created resource's output in conditional operator?

I have a case where I have to create an aws_vpc resource if the user does not provide a VPC id. After that I am supposed to create resources within that VPC.
Now, I am applying conditionals while creating an aws_vpc resource. For example, only create VPC if existing_vpc is false:
count = "${var.existing_vpc ? 0 : 1}"
Next, for example, I have to create nodes in the VPC. If the existing_vpc is true, use the var.vpc_id, else use the computed VPC ID from aws_vpc resource.
But the issue is, if existing_vpc is true, aws_vpc will not create a new resource, yet the ternary condition still references the aws_vpc resource. If it doesn't get created, terraform errors out.
An example of the error when using conditional operator on aws_subnet:
Resource 'aws_subnet.xyz-subnet' not found for variable 'aws_subnet.xyz-subnet.id'
The code resulting in the error is:
subnet_id = "${var.existing_vpc ? var.subnet_id : aws_subnet.xyz-subnet.id}"
If both things are dependent on each other, how can we create conditional resources and assign values to other configuration based on them?
You can access dynamically created modules and resources as follows:
output "vpc_id" {
  value = length(module.vpc) > 0 ? module.vpc[*].id : null
}
If count = 0, the output is null.
If count > 0, the output is a list of VPC ids.
If count = 1 and you want to receive a single VPC id, you can specify:
output "vpc_id" {
  value = length(module.vpc) > 0 ? one(module.vpc).id : null
}
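Applying the same pattern to the subnet example from the question, the ternary can fall back to the user-supplied id whenever the resource wasn't created. A sketch, assuming a Terraform version new enough to have one() (v0.15+) and that aws_subnet.xyz-subnet uses count:

# With count = 0 the splat is an empty list and one() returns null,
# so this never dereferences a resource that doesn't exist.
subnet_id = var.existing_vpc ? var.subnet_id : one(aws_subnet.xyz-subnet[*].id)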
The following example shows how to optionally specify whether a resource is created (using the conditional operator), and shows how to handle returning output when a resource is not created. This happens to be done using a module, and uses an object variable's element as a flag to indicate whether the resource should be created or not.
But to specifically answer your question, you can use the conditional operator as follows:
output "module_id" {
value = var.module_config.skip == true ? null : format("%v",null_resource.null.*.id)
}
And access the output in the calling main.tf:
module "use_conditionals" {
source = "../../scratch/conditionals-modules/m2" # << Change to your directory
a = module.skipped_module.module_id # Doesn't exist, so might need to handle that.
b = module.notskipped_module.module_id
c = module.default_module.module_id
}
Full example follows. NOTE: this is using terraform v0.14.2
# root/main.tf
provider "null" {}

module "skipped_module" {
  source = "../../scratch/conditionals-modules/m1" # << Change to your directory

  module_config = {
    skip = true # explicitly skip this module.
    name = "skipped"
  }
}

module "notskipped_module" {
  source = "../../scratch/conditionals-modules/m1" # << Change to your directory

  module_config = {
    skip = false # explicitly don't skip this module.
    name = "notskipped"
  }
}

module "default_module" {
  source = "../../scratch/conditionals-modules/m1" # << Change to your directory
  # The default position is, don't skip. see m1/variables.tf
}

module "use_conditionals" {
  source = "../../scratch/conditionals-modules/m2" # << Change to your directory

  a = module.skipped_module.module_id
  b = module.notskipped_module.module_id
  c = module.default_module.module_id
}
# root/outputs.tf
output "skipped_module_name_and_id" {
  value = module.skipped_module.module_name_and_id
}

output "notskipped_module_name_and_id" {
  value = module.notskipped_module.module_name_and_id
}

output "default_module_name_and_id" {
  value = module.default_module.module_name_and_id
}
The module:
# m1/main.tf
resource "null_resource" "null" {
  count = var.module_config.skip ? 0 : 1 # If skip == true, then don't create the resource.

  provisioner "local-exec" {
    command = <<EOT
#!/usr/bin/env bash
echo "null resource, var.module_config.name: ${var.module_config.name}"
EOT
  }
}
# m1/variables.tf
variable "module_config" {
  type = object({
    skip = bool,
    name = string
  })
  default = {
    skip = false
    name = "<NAME>"
  }
}
# m1/outputs.tf
output "module_name_and_id" {
  value = var.module_config.skip == true ? "SKIPPED" : format(
    "%s id:%v",
    var.module_config.name,
    null_resource.null.*.id
  )
}

output "module_id" {
  value = var.module_config.skip == true ? null : format("%v", null_resource.null.*.id)
}
The current answers here are helpful when you are working with more modern versions of terraform, but as noted by the OP they do not work with terraform < 0.12. (If you're like me and still dealing with these older versions, I am sorry, I feel your pain.)
See the relevant issue from the terraform project for more info on why the approach below is necessary with the older versions.
To avoid link rot, here is the OP's example subnet_id argument rewritten using the answers in the github issue:
subnet_id = "${element(compact(concat(aws_subnet.xyz-subnet.*.id, list(var.subnet_id))),0)}"
From the inside out:
concat will join the splat output list to list(var.subnet_id) -- per the background link, 'When count = 0, the "splat syntax" expands to an empty list'.
compact will remove the empty item.
element will return your var.subnet_id only when compact receives the empty splat output.
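To make the mechanics concrete, here is how the expression evaluates in both cases. This assumes var.subnet_id defaults to an empty string when the user doesn't supply one (that default is my assumption; the subnet ids are placeholders):

# Case 1: new VPC created (count = 1), var.subnet_id = ""
#   concat(["subnet-aaa"], [""])  => ["subnet-aaa", ""]
#   compact(...)                  => ["subnet-aaa"]
#   element(..., 0)               => "subnet-aaa"
# Case 2: existing VPC (count = 0), var.subnet_id = "subnet-bbb"
#   concat([], ["subnet-bbb"])    => ["subnet-bbb"]
#   compact(...)                  => ["subnet-bbb"]
#   element(..., 0)               => "subnet-bbb"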

How to read different block settings from a kinto.ini file

I created a different block in my kinto.ini file and I want to use those settings in my program.
#kinto.ini
[mysetting]
name = json
username = jsonmellow
password = *********
[app:main]
use = egg:kinto
kinto.storage_url = postgre//
If we use kinto's 'config.get_setting' function, it gives me the settings of the default block "app:main" only. So how can I get the other settings from the "mysetting" block?
You can use a prefix for your settings, like:
[app:main]
...
mysetting.name = json
mysetting.username = jsonmellow
But if you still need some extra section in the ini file, see: How can I access a custom section in a Pyramid .ini file?

Parameter aliasing

When implementing Origen::Parameters, I understood the importance of defining a 'default' set. But, in essence, my real default is named something different. So I implemented a hack of a parameter alias:
Origen.top_level.define_params :default do |params|
  params.tconds.override = 1
  params.tconds.override_lev_equ_set = 1
  params.tconds.override_lev_spec_set = 1
  params.tconds.override_levset = 1
  params.tconds.override_seqlbl = 'my_pattern'
  params.tconds.override_testf = 'tm_3'
  params.tconds.override_tim_spec_set = 'bist_xxMhz'
  params.tconds.override_timset = '1,1,1,1,1,1,1,1'
  params.tconds.site_control = 'parallel:'
  params.tconds.site_match = 2
end

Origen.top_level.define_params :cpu_mbist_hr, inherit: :default do |params|
  # way of aliasing parameter names
end
Is there a proper method of parameter aliasing that is just not documented?
There is no other way to do this currently, though I would be open to a PR to enable something like:
default_params = :cpu_mbist_hr
If you don't want them to be called :default in this case though, then maybe you don't really want them to be the default anyway.
e.g. adding this immediately after you define them would effectively give you an alternative default and would do pretty much the same job as the proposed API above:
# self is required here to help Ruby know that you are calling the params= API
# and not defining a local variable called params
self.params = :cpu_mbist_hr

Include monotonically increasing value in logstash field?

I know there's no built-in "line count" functionality while processing files through logstash (for various, understandable and documented reasons). But there should be a mechanism, within any given logstash instance, to have a monotonically increasing variable / count for every parsed line.
I don't want to go the metrics route since it's a continuous polling mechanism (every n-seconds). Alternatives include pre-processing of log files which given my particular use case - is unacceptable.
Again, let me reiterate - I need the ability to generate/read a monotonically increasing variable that I can store and increment in a logstash filter.
Thoughts?
There's nothing built into logstash to do it.
You can build a filter to do it pretty easily.
Just drop something like this into lib/logstash/filters/seq.rb:
# encoding: utf-8
require "logstash/filters/base"
require "logstash/namespace"
require "set"
#
# This filter adds a sequence number to a log entry
#
# The config looks like this:
#
#     filter {
#       seq {
#         field => "seq"
#       }
#     }
#
# The `field` is the field you want added to the event.
class LogStash::Filters::Seq < LogStash::Filters::Base

  config_name "seq"
  milestone 1

  config :field, :validate => :string, :required => false, :default => "seq"

  public
  def register
    # Nothing
  end # def register

  public
  def initialize(config = {})
    super
    @threadsafe = false

    # This filter needs to keep state.
    @seq = 1
  end # def initialize

  public
  def filter(event)
    return unless filter?(event)
    event[@field] = @seq
    @seq = @seq + 1
    filter_matched(event)
  end # def filter
end # class LogStash::Filters::Seq
This will start at 1 every time Logstash is restarted, but for most situations, this would be ok. If you need something that is persistent across restarts, you need to do a bit more work to persist it somewhere
For anyone finding this in 2018+: logstash now has a ruby filter that makes this much simpler. Put the following in a file somewhere:
# encoding: utf-8
def register(params)
  @seq = 1
end

def filter(event)
  event.set("seq", @seq)
  @seq += 1
  return [event]
end
And then configure it like this in your logstash.conf (substitute in the filename you used):
ruby {
  path => "/usr/local/lib/logstash/seq.rb"
}
It would be pretty easy to make the field name configurable from logstash.conf, but I'll leave that as an exercise for the reader.
I suspect this isn't thread-safe, so I'm running only a single logstash worker.
This is another way to solve the problem; it works for me. Thanks to the previous answer for the note about thread safety. I use the seq field to sort in descending order.
This is my configuration:
logstash.conf
filter {
  ruby {
    code => 'event.set("seq", Time.now.strftime("%N").to_i)'
  }
}
logstash.yml
pipeline.batch.size: 200
pipeline.batch.delay: 60
pipeline.workers: 1
pipeline.output.workers: 1
