If I use the following yamldecode expression, I get the following output:
output "product" {
value = distinct(yamldecode(file("resource/lab/abc.yaml"))["account_list"]["resource_tags"][*]["TAG:product"])
}
Output:
+ product = [
+ "fargate",
+ "CRM",
]
I want fargate removed from my output, so the expected output is this:
+ product = [
+ "CRM"
]
Please let me know how I can do this.
output "product" {
value = compact([for x in distinct(yamldecode(file("resource/lab/abc.yaml"))["account_list"]["resource_tags"][*]["TAG:product"]) : x == "fargate" ? "" : x])
}
I get this output:
test = [
"enter",
]
The compact function solved the problem.
The Terraform language is based on functional principles rather than imperative principles and so it does not support directly modifying an existing data structure. Instead, the best way to think about goals like this is how to define a new value which differs from your existing value in some particular way.
For collection and structural types, the most common way to derive a new value with different elements is to use a for expression, which describes a rule for creating a new collection based on the elements of an existing collection. For each element in the source collection we can decide whether to use it in the result at all, and then if we do decide to use it we can also describe how to calculate a new element value based on the input element value.
In your case you seem to want to create a new list with fewer elements than an existing list. (Your question title mentions maps, but the question text shows a tuple which is behaving as a list.)
To produce a new collection with fewer elements we use the optional if clause of a for expression. For example:
[for x in var.example : x if x != "fargate"]
In the above expression, x refers to each element of var.example in turn. Terraform first evaluates the if clause, and expects it to return true if the element should be used. If so, it will then evaluate the expression immediately after the colon, which in this case is just x and so the result element is identical to the input element.
Your example expression also includes some unrelated work to decode a YAML string and then traverse to the list inside it. That expression replaces var.example in the above expression, as follows:
output "products" {
value = toset([
for x in yamldecode(file("${path.module}/resource/lab/abc.yaml"))["account_list"]["resource_tags"][*]["TAG:product"]: x
if x != "fargate"
])
}
I made some other small changes to the above compared to your example:
I named the output value "products" instead of "product", because it's returning a collection and it's a Terraform language idiom to use plural names for collection values.
I wrapped the expression in toset instead of distinct, because from your description it seems like this is an unordered collection of unique strings rather than an ordered sequence of strings (see the short illustration after this list). Returning a set rather than a list communicates to the caller of the module that they may not rely on the order of these items, and therefore allows the order to change in future without that being a breaking change to your module.
I added path.module to the front of the file path so that Terraform will look for the file in the current module's directory. If your output value is currently in a root module then this doesn't really matter, because the module directory will always be the current working directory, but it's good practice to include it so that it's clear that the file belongs to the module.
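As a rough illustration of the distinct/toset distinction mentioned above (made-up values, results shown as comments):

distinct(["CRM", "ERP", "CRM"])   # a list: ["CRM", "ERP"], duplicates removed, order preserved
toset(["CRM", "ERP", "CRM"])      # a set of the same two strings, but with no guaranteed element order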
I recently found the following line of terraform in our code (some values sanitized):
subnet_ids = [ "${split(",", var.xxx_lb ? join(",", data.yyy_ids.private.ids) : join(",", concat(data.yyy_ids.public.ids, list(""))))}" ]
I'm trying to understand why code would be written this way. More specifically, what is the final join doing? Pulling it out for clarity:
join(",", concat(data.yyy_ids.public.ids, list("")))
It seems that someone (no longer at the company) was trying to ensure that a non-empty list is returned. We definitely don't want the empty ("") item in the list.
So, the questions here are:
1. What logically is going on in this statement?
2. Is there a better way?
3. If there is not a better way, how can we remove the empty entry from the resulting list?
Update for others who may run into this sort of code:
In Terraform versions lower than 0.12, conditionals don't work with lists, so join/split is used to turn the lists into strings and then back into lists:
https://github.com/hashicorp/terraform/issues/12453
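In other words, the general shape of the pre-0.12 workaround is something like this (var.flag, var.list_a, and var.list_b are placeholder names for the example):

locals {
  # Conditionals can't return lists in pre-0.12 Terraform, so each branch is
  # joined into a string and the chosen string is split back into a list.
  chosen = "${split(",", var.flag ? join(",", var.list_a) : join(",", var.list_b))}"
}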
What logically is going on in this statement?
The original author is attempting to build the list of subnet IDs.
a) The outer split takes a string (in this case the joined subnet IDs) and returns it as a list, splitting on the delimiter ,.
var.xxx_lb ? clause_if_true : clause_if_false
b) Next, Terraform evaluates var.xxx_lb as a boolean and, depending on the result, you get either the private or the public subnet IDs, via the ternary operator syntax shown above.
join(",", data.yyy_ids.private.ids)
c) If the boolean value is true, Terraform evaluates this part. join returns a single string made by joining the items of the list with the delimiter ,. I assume the author joins them into a string to stay consistent with the outer split described in a).
join(",", concat(data.yyy_ids.public.ids, list("")))
d) If the boolean value in b) evaluates to false, Terraform evaluates this part instead. The concat function takes lists as input and combines them into a single list; here it appends a one-element list containing an empty string. join then applies the same logic as in c).
The list function is deprecated; tolist should be used instead.
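For reference, the list("") above would be written in current Terraform as:

tolist([""])   # or, in most contexts, simply [""]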
Is there a better way?
I would take a more straightforward approach: check the boolean value; if it is true, use the list of private IDs, and if false, the public ones.
subnet_ids = var.xxx_lb ? data.yyy_ids.private.ids : data.yyy_ids.public.ids
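To address the third question directly: if you have to stay on a pre-0.12 version and keep the join/split structure, you can wrap the split in compact, which removes empty string elements from a list. A sketch based on the original line:

subnet_ids = [ "${compact(split(",", var.xxx_lb ? join(",", data.yyy_ids.private.ids) : join(",", concat(data.yyy_ids.public.ids, list("")))))}" ]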
I'm creating a data resource to build a policy document that allows users to access RDS, but I'm stuck on how to use format to pass the account_id and the RDS instance's resource_id.
Code:
data "aws_iam_policy_document" "iam_authentication_doc" {
depends_on = [aws_db_instance.name]
statement {
effect = "Allow"
actions = [
"rds-db:connect"
]
resources = flatten([format("arn:aws:rds-db:us-east-1:${var.account_id}:dbuser:${aws_db_instance.name.resource_id}/%s", var.usernames)])
}
}
Error:
resources = flatten([format("arn:aws:rds-db:us-east-1:${var.account_id}:dbuser:${aws_db_instance.pgauth.resource_id}/%s", var.usernames)])
|----------------
| aws_db_instance.pgauth.resource_id is "db-xxxxxxxxxxxxxxxx"
| var.account_id is 8.12345678901e+11
| var.usernames is list of string with 12 elements
Call to function "format" failed: unsupported value for "%s" at 75: string
required.
I tried passing
[formatlist("arn:aws:rds-db:us-east-1:%s:dbuser:%s/%s", var.account_id, aws_db_instance.pgauth.resource_id, var.amp_usernames)]
and got this error:
22: resources = [formatlist("arn:aws:rds-db:us-east-1:%s:dbuser:%s/%s", var.account_id, aws_db_instance.name.resource_id, var.usernames)]
|----------------
| aws_db_instance.name.resource_id is "db-xxxxxxxxxxxxxxx"
| var.account_id is "123456789012"
| var.usernames is list of string with 12 elements
Inappropriate value for attribute "resources": element 0: string required.
I want the resources to look like:
arn:aws:rds-db:us-east-1:1234567890:dbuser:db-xxxxxxxxxxxxxx/foo,
arn:aws:rds-db:us-east-1:1234567890:dbuser:db-xxxxxxxxxxxxxx/bar,
arn:aws:rds-db:us-east-1:1234567890:dbuser:db-xxxxxxxxxxxxxx/tim
The first example with format did not work because format expects all of its arguments to be single values and it produces a single value.
As you've seen, the formatlist function is one way to solve your problem: it produces a list as its result, and if any of its arguments are lists then it repeats the formatting process once for each set of elements with the same index across the lists.
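To see the repetition concretely, here is a small illustration with placeholder values (results shown as comments):

format("user/%s", "foo")                # a single string: "user/foo"
formatlist("user/%s", ["foo", "bar"])   # one string per element: ["user/foo", "user/bar"]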
Your second example didn't work because you wrapped the call to formatlist in [ ... ], which constructs a list. Because formatlist returns a list itself, the result was therefore a list of lists of strings rather than just a list of strings.
We can get it working by removing the redundant brackets:
resources = formatlist("arn:aws:rds-db:us-east-1:%s:dbuser:%s/%s", var.account_id, aws_db_instance.name.resource_id, var.usernames)
Another way to write this is using a for expression, which will perhaps make the repetition more explicit in your configuration:
resources = [for u in var.usernames : "arn:aws:rds-db:us-east-1:${var.account_id}:dbuser:${aws_db_instance.name.resource_id}/${u}"]
Which one is easier to understand is of course subjective: the formatlist approach shows the format string up front but it leaves it implied that we're repeating based on elements of var.usernames. The for expression approach pushes the template to the end of the line, but it makes the repetition based on var.usernames more explicit.
resources = flatten(formatlist("arn:aws:rds-db:us-east-1:%s:dbuser:%s/%s", var.account_id, aws_db_instance.pgauth.resource_id, var.usernames))
I did not specify the type for account_id, which is why it showed up as a number in the first error.
I am attempting to write an algorithm that selects a specific reference standard (vector) as a function of temperature. The temperature values are stored in a structure ( procspectra(i).temperature ). My reference standards are stored in another structure ( standards.interp.zeroed.ClOxxx ) where xxx are numbers such as 200, 210, 220, etc. I have built the rounding construct and pasted it below.
for i = 1:length(procspectra)
    % if mod(-) > mod(+) round down, else round up
    if mod(-procspectra(i).temperature,10) > mod(procspectra(i).temperature,10)
        tempvector(i) = procspectra(i).temperature - mod(procspectra(i).temperature,10);
    else
        tempvector(i) = procspectra(i).temperature + mod(-procspectra(i).temperature,10);
    end
    clostd = strcat('standards.interp.zeroed.ClO', num2str(tempvector(i)));
end
This construct works well. Now, I have built a string which is identical to the name of the vector I want to invoke, but I'm uncertain how to actually call the vector given that this is encoded as a string. Ideally I want to do something within the for-loop like:
parameters(i).standards.ClOstandard = clostd
where I am actually assigning that field of the parameters structure to be the vector saved in the standards structure I generated previously (and not just the string).
Could anyone help out?
Don't construct clostd like that (containing the full variable name); make it contain only the last field name instead:
clostd = ['ClO' num2str(tempvector(i))];
parameters(i).standards.ClOstandard = standards.interp.zeroed.(clostd);
This is the syntax of accessing a structure's field dynamically, using a string. So the following three are equivalent:
struc.Cl0123
struc.('Cl0123')
fieldn='Cl0123'; struc.(fieldn)
I have an elasticsearch index that contains various member documents. Each member document contains a membership object, along with various fields associated with / describing individual membership. For example:
{"membership": {"join_date": "2015-01-01", "status": "A"}}
Membership status can be 'A' (active) or 'I' (inactive); both are Unicode string values. I'm interested in providing a slight boost to the score of documents that contain active membership status.
In my groovy script, along with other custom boosters on various numeric fields, I have added the following:
String status = doc['membership.status'].value;
float status_boost = 0.0;
if (status=='A') {status_boost = 2.0} else {status_boost=0.0};
return _score + status_boost
For some reason related to how strings behave in Groovy, the check (status=='A') does not work. I've attempted (status.toString()=='A'), (status.toString()=="A"), (status.equals('A')), plus a number of other variations.
How should I go about troubleshooting this (in a productive, efficient manner)? I don't have a stand-alone installation of Groovy, but when I pull the response data in Python the status is definitely either a Unicode 'A' or 'I', with no additional spacing or characters.
@VineetMohan is most likely right about the value being 'a' rather than 'A'.
You can check how the values are indexed by spitting them back out as script fields:
$ curl -XGET localhost:9200/test/_search -d '
{
"script_fields": {
"status": {
"script": "doc[\"membership.status\"].values"
}
}
}
'
From there, you'll see what you're actually working with. More than likely, based on the name and your usage, you will want to reindex (recreate) your data so that membership.status is mapped as a not_analyzed string. If you do that, then you won't need to worry about lowercasing of anything.
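For example, a sketch of such a mapping for the status field (the index name test and type name member here are assumptions, not taken from your setup):

$ curl -XPUT localhost:9200/test -d '
{
  "mappings": {
    "member": {
      "properties": {
        "membership": {
          "properties": {
            "status": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}
'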
In the meantime, you can probably get by with:
return _score + (doc['membership.status'].value == 'a' ? 2 : 0)
As a big aside, you should not be using dynamic scripting. Use stored scripts in production to avoid security issues.
I'm trying to use get() to access a list element in R, but am getting an error.
example.list <- list()
example.list$attribute <- c("test")
get("example.list") # Works just fine
get("example.list$attribute") # breaks
## Error in get("example.list$attribute") :
## object 'example.list$attribute' not found
Any tips? I am looping over a vector of strings which identify the list names, and this would be really useful.
Here's the incantation that you are probably looking for:
get("attribute", example.list)
# [1] "test"
Or perhaps, for your situation, this:
get("attribute", eval(as.symbol("example.list")))
# [1] "test"
# Applied to your situation, as I understand it...
example.list2 <- example.list
listNames <- c("example.list", "example.list2")
sapply(listNames, function(X) get("attribute", eval(as.symbol(X))))
# example.list example.list2
# "test" "test"
Why not simply:
example.list <- list(attribute="test")
listName <- "example.list"
get(listName)$attribute
# or, if both the list name and the element name are given as arguments:
elementName <- "attribute"
get(listName)[[elementName]]
If your strings contain more than just object names, e.g. operators as they do here, you can evaluate them as expressions as follows:
> string <- "example.list$attribute"
> eval(parse(text = string))
[1] "test"
If your strings are all of the type "object$attribute", you could also parse them into object/attribute, so you can still get the object, then extract the attribute with [[:
> parsed <- unlist(strsplit(string, "\\$"))
> get(parsed[1])[[parsed[2]]]
[1] "test"
flodel's answer worked for my application, so I'm going to post what I built on it, even though this is pretty uninspired. You can access each list element with a for loop, like so:
# ============ List with five elements of non-uniform length ============
example.list <- list(letters[1:5], letters[6:10], letters[11:15],
                     letters[16:20], letters[21:26])

# ==== for loop that names and concatenates each consecutive element ====
derp <- c()
for (i in 1:length(example.list)) {
  derp <- append(derp, eval(parse(text = example.list[i])))
}
derp  # Not a particularly useful application here, but it proves the point.
I'm using code like this for a function that pulls certain sets of columns from a data frame by their column names. The user passes in a list whose elements each name a different set of columns (each set being a group of items belonging to one measure), along with the big data frame containing all of those columns. The for loop takes each consecutive list element as the set of column names for an internal function* applied only to those columns of the big data frame. Each pass of the loop fills one column of an output matrix with the result for the corresponding subset of the data frame, and after the loop the function returns that matrix.
Not sure if you're looking to do something similar with your list elements, but I'm happy I picked up this trick. Thanks to everyone for the ideas!
"Second example" / tangential info regarding application in graded response model factor scoring:
Here's the function I described above, just in case anyone wants to calculate graded response model factor scores* in large batches... Each column of the output matrix corresponds to an element of the list (i.e., a latent trait with ordinal indicator items specified by column name in the list element), and the rows correspond to the rows of the data frame used as input. Each row should presumably contain mutually dependent observations, as from a given individual, to whom the factor scores in the same row of the output matrix belong. Also, I feel I should add that if all the items in a given list element use the exact same Likert scale rating options, the graded response model may be less appropriate for factor scoring than a rating scale model (cf. http://www.rasch.org/rmt/rmt143k.htm).
grmscores <- function(ColumnNameList, DataFrame) {
  require(ltm)  # (Rizopoulos, 2006)
  x <- matrix(NA, nrow = nrow(DataFrame), ncol = length(ColumnNameList))
  for (i in 1:length(ColumnNameList)) {  # flodel's magic featured below!
    x[, i] <- factor.scores(grm(DataFrame[, eval(parse(text = ColumnNameList[i]))]),
                            resp.patterns = DataFrame[, eval(parse(text = ColumnNameList[i]))])$score.dat$z1
  }
  x
}
Reference
*Rizopoulos, D. (2006). ltm: An R package for latent variable modelling and item response theory analyses, Journal of Statistical Software, 17(5), 1-25. URL: http://www.jstatsoft.org/v17/i05/