How to force test kitchen attributes to be passed as integers?

I'm trying to set some attributes for some cookbooks that I imported through my kitchen.yml file.
kitchen.yml:
---
...
attributes:
  some_cookbook:
    key: 1
The cookbook that I'm importing seems to require that the attribute node['some_cookbook']['key'] be an integer. I went onto my virtual machine to look at the dna.json file that gets generated and I can see the following:
dna.json:
{
  "some_cookbook": {
    "key": "1"
  },
  "run_list": ["recipe[some_cookbook::default]"]
}
So what I'm seeing is that Test Kitchen does not preserve the integer type when generating the file above. If I edit the file and remove the quotes surrounding the value 1, the recipe runs fine. Is there anything I can do to make Kitchen pass in the right type? Or is this better resolved by the cookbook maintainer validating the attributes more defensively?
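On the cookbook-maintainer side, a defensive coercion in the recipe would make either form work. A minimal sketch, using the placeholder names from the question:

# Hypothetical recipe code: accept both 1 and "1" for the attribute.
key = node['some_cookbook']['key'].to_i
raise 'some_cookbook.key must be a positive integer' unless key > 0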

Related

Custom library not recognized inside DSL file in groovyscript part

Context: I'm implementing a Jenkins Pipeline. To define the pipeline's parameters, I implemented a DSL file.
In the DSL file, I have a parameter of the ActiveChoiceParam type, called ORDERS. This parameter will allow me to choose one or more order numbers at the same time.
Problem: I want to set the values that get rendered for the ORDERS parameter from a custom library. Basically, I have a directory my_libraries with a file, orders.groovy. In this file there is an order class with a references list property that contains my values.
The code in the DSL file is as follows:
def order = new my_libraries.order()

pipelineJob("111_name_of_pipeline") {
    description("pipeline_description")
    keepDependencies(false)
    parameters {
        stringParam("BRANCH", "master", "The branch")
        activeChoiceParam("ORDERS") {
            description("orders references")
            choiceType('CHECKBOX')
            groovyScript {
                script("return order.references")
                fallbackScript("return ['error']")
            }
        }
    }
}
It's also worth mentioning that my custom library works well. For example, if I use a choiceParam as below, it works, but of course that's not the behaviour I want, since I need to select multiple choices:
choiceParam("ORDERS", order.references, "Orders references list but single-choice")
How can I make order.references available in the script part of groovyScript?
I've tried using a global instance of the order class and instantiating the class directly in the groovyScript, but with no positive result.
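One workaround sometimes used with the Job DSL (a sketch, not a confirmed fix, assuming order.references is a plain list of strings): the script(...) body is evaluated later by the Active Choices plugin, where your order instance no longer exists, so you can serialize the list into the script text while the seed job runs:

def order = new my_libraries.order()

pipelineJob("111_name_of_pipeline") {
    parameters {
        activeChoiceParam("ORDERS") {
            description("orders references")
            choiceType('CHECKBOX')
            groovyScript {
                // inspect() renders the list as a Groovy literal such as ['a', 'b'],
                // so the generated script carries the values and no longer needs `order`.
                script("return ${order.references.inspect()}")
                fallbackScript("return ['error']")
            }
        }
    }
}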

Extracting raw (non string) parameter values from terraform using terraform-config-inspect

I'm trying to generate json from terraform modules using terraform-config-inspect (https://github.com/hashicorp/terraform-config-inspect).
Note: I started with terraform-docs, but then found that the library it uses underneath is terraform-config-inspect.
The problem is that I want to go beyond what terraform-config-inspect provides out of the box at the moment.
As an example, I want to get the name of aws_ssm_parameter resource.
For example, I have resource like this:
resource "aws_ssm_parameter" "service_security_group_id" {
name = "/${var.deployment}/shared/${var.service_name}/security_group_id"
type = "String"
value = aws_security_group.service.id
overwrite = "true"
tags = var.tags
}
and I would like to extract the value of the name parameter, but by default the tool does not output it. I tried to hack the code by modifying the resource schema and other parts, but ended up getting an empty string instead of the name value, or an error because the value contains interpolations like ${var.deployment}.
When I set it to a plain string, my modified code returns what I expect:
"aws_ssm_parameter.service_security_group_id": {
"mode": "managed",
"type": "aws_ssm_parameter",
"name": "service_security_group_id",
"value": "/test-env/shared/my-service/security_group_id",
"provider": {
"name": "aws"
}
}
but in the normal case it fails with the following error:
{
  "severity": "error",
  "summary": "Unsuitable value type",
  "detail": "Unsuitable value: value must be known",
  ...
}
I know that I could build something totally custom for my specific use case, but I hope there is something that can be re-used :)
So the questions are:
Is it somehow possible to take the real raw value from terraform resource so I could get "/${var.deployment}/shared/${var.service_name}/security_group_id" in json output?
Maybe some other tool out there?
Thanks in advance!
Input Variables in Terraform are a planning option, so resolving them fully requires creating a Terraform plan. If you are able to create a plan, you can find the resolved values in its JSON serialization, using steps like the following:
terraform plan -out=tfplan (optionally include -var=... and -var-file=... if you need to set particular values for those variables).
terraform show -json tfplan to get a JSON representation of the plan.
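For example, to pull the resolved name out of the plan JSON (a sketch assuming the resource from the question lives in the root module; resources in child modules sit under child_modules instead):

terraform plan -out=tfplan
terraform show -json tfplan \
  | jq '.planned_values.root_module.resources[]
        | select(.address == "aws_ssm_parameter.service_security_group_id")
        | .values.name'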
Alternatively, if you've already applied the configuration you want to analyse then you can get similar information from the JSON representation of the latest state snapshot:
terraform show -json to get a JSON representation of the latest state snapshot.
As you've seen, terraform-config-inspect is only for static analysis of the top-level declarations and so it contains no functionality for evaluating expressions.
Properly evaluating expressions here without creating a Terraform plan or reading from a Terraform state snapshot would require reimplementing the Terraform Core runtime, at least to some extent. However, for this particular expression (which relies only on input variable values) you could potentially use the HCL API directly, with hard-coded placeholder values for those variables, to derive a value for that argument from whatever you set var.deployment and var.service_name to in the hcl.EvalContext you construct yourself.
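That last approach might look something like this. A rough Go sketch, not a drop-in tool: it assumes the resource from the question lives in main.tf, hard-codes placeholder values for var.deployment and var.service_name, and only evaluates the name attribute:

package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclparse"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	parser := hclparse.NewParser()
	f, diags := parser.ParseHCLFile("main.tf")
	if diags.HasErrors() {
		panic(diags.Error())
	}

	// Find resource blocks; the two labels are the resource type and name.
	content, _, diags := f.Body.PartialContent(&hcl.BodySchema{
		Blocks: []hcl.BlockHeaderSchema{
			{Type: "resource", LabelNames: []string{"type", "name"}},
		},
	})
	if diags.HasErrors() {
		panic(diags.Error())
	}

	// Placeholder values for the variables the expression references.
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"var": cty.ObjectVal(map[string]cty.Value{
				"deployment":   cty.StringVal("test-env"),
				"service_name": cty.StringVal("my-service"),
			}),
		},
	}

	for _, block := range content.Blocks {
		// Ignore diagnostics here: nested blocks (e.g. lifecycle) would
		// trigger errors from JustAttributes, but we only need "name".
		attrs, _ := block.Body.JustAttributes()
		if nameAttr, ok := attrs["name"]; ok {
			val, diags := nameAttr.Expr.Value(ctx)
			if !diags.HasErrors() {
				fmt.Printf("%s.%s name = %s\n", block.Labels[0], block.Labels[1], val.AsString())
			}
		}
	}
}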

Terraform aws_ssm_parameter null/empty with ignore_changes

I have a Terraform config that looks like this:
resource "random_string" "foo" {
length = 31
special = false
}
resource "aws_ssm_parameter" "bar" {
name = "baz"
type = "SecureString"
value = random_string.foo.result
lifecycle {
ignore_changes = [value]
}
}
The idea is that on the first terraform apply, the bar resource is stored in baz in SSM based on the value of foo, and on subsequent applies I can reference aws_ssm_parameter.bar.value. What I see instead is that it works on the first run, storing the newly created random value, but on subsequent runs aws_ssm_parameter.bar.value is empty.
If I create an aws_ssm_parameter data source, it can pull the value correctly, but that doesn't work on the first apply, when the parameter doesn't exist yet. How can I modify this config so that it both creates the value and lets me read what is stored in baz in SSM from the same config?
(Sorry not enough karma to comment)
To fix the chicken-and-egg problem, you could add depends_on = [aws_ssm_parameter.bar] to a data resource, but this introduces some awkwardness (especially if you need to call destroy often in your workflow). It's not particularly recommended (see here).
It doesn't really make sense that it's returning empty, though, so I wonder if you've hit a different bug. Does the value actually get posted to SSM (i.e. can you see it when you run aws ssm get-parameter ...)?
Edit- I just tested your example code above with:
output "bar" {
value = aws_ssm_parameter.bar.value
}
and it seems to work fine. Maybe you need to update tf or plugins?
Oh, I forgot about this question, but it turns out I did figure out the problem.
The issue was that I was creating the SSM parameter inside a module that was being used by another module. Because I didn't output anything related to the parameter, Terraform seemed to drop it from state on subsequent replans after it was created. Exposing it as an output on the module fixed the issue.
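A sketch of that fix, with hypothetical module names (the sensitive flag is optional but typical for SecureString values):

# modules/inner/outputs.tf - expose the parameter so the outer module uses it
output "bar_value" {
  value     = aws_ssm_parameter.bar.value
  sensitive = true
}

# modules/outer/outputs.tf - pass it through to the root module
output "bar_value" {
  value     = module.inner.bar_value
  sensitive = true
}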

Can I dynamically generate parameters in Azure templates?

AFAIK, all the parameters have to be defined in the parent template right from the start. Is it possible to generate parameters dynamically at all, such as looping n times to generate n name fields?
This shows how parameters are defined in templates. Note that none of the parameters are created dynamically.
You can use parameters that depend on parameters to emulate something like that:
"parameters": {
"first": {
"type": "string",
"defaultValue": "lol"
},
"second": {
"type": "string",
"defaultValue": "[concat('not_so_', parameters('first'))]"
}
}
would give you a value of not_so_lol for the second parameter.
Another option is to create variables that take values depending on a parameter:
"parameterOne": "defaultValue": x, - I'm lazy to type out proper definition in json.
...
"option-x": "something"
"option-y": "something-else"
"result": "[variables(concat('option-', parameters('parameterOne')))]"
So this is basically an if statement in an ARM template: the value of the result variable equals [variables('option-x')] or [variables('option-y')], depending on your input.
Another (a bit more complex) option is to use deployment outputs. For example, you create a deployment filled with the different outputs you need (basically a pool of constants), and after that you can reference that deployment's outputs in all of your templates (given they reside in the same subscription, though you can create such a deployment in each of your subscriptions).
"something": "[reference(concat('resourceGroupName', 'Microsoft.Resources/deployments/', parameters('deploymentName')),'2015-01-01').outputs]",
The last (most complex) option is to construct the needed values on the fly using nested templates. That's a bit too much to cover in an answer, but in short, you use nested templates as an aggregator/transformer, where you feed values in and get the desired output. This is pretty advanced stuff, but worth knowing. This would be a good example (for starters).
According to your description, you can use uniqueString() to generate parameter values dynamically. This function is helpful when you need to create a unique name for a resource.
For more information about uniqueString(), please refer to this link.
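For example, a minimal sketch (namePrefix and storageAccountName are illustrative names):

"parameters": {
  "namePrefix": { "type": "string" }
},
"variables": {
  "storageAccountName": "[concat(parameters('namePrefix'), uniqueString(resourceGroup().id))]"
}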

How to build a file based on defined-type instances in Puppet

I want my Puppet class to create a file resource with contents based on all instances of a particular defined type. I looked at this question with the idea of iterating over the instances to build the file, but apparently it's a "Bad Idea" per the one answer currently there.
Some background: I am building a monitor_service class in Puppet to deploy a custom monitoring application. The application reads a config file that tells it what to monitor, one item per line, along the lines of
ITEM: /var/things/thing-one (123)
ITEM: /var/things/thing-two (456)
I am also writing a defined type that deploys instances of the monitored items:
define my_thing::monitored_thing ( $port ) {
  file { "/var/things/${name}":
    ...
  }
}
On a given node, I set up several monitored_things like
my_thing::monitored_thing { "thing-one":
  port => 123,
}

my_thing::monitored_thing { "thing-two":
  port => 456,
}
What's the "right" Puppet idiom for building the monitoring service config file? I would prefer for this to work in such a way that the monitor_service class doesn't have to be told which monitored_thing instances it is watching -- just creating a monitored_thing instance should cause it to be added to the config file automatically.
There are several ways to modify/declare only part of a file within multiple defined types:
Use puppetlabs-stdlib's file_line. This lets you specify that a file should contain a specific line. Best when you do not care about the other file contents and just want to make sure a line is present or absent.
Use puppetlabs-concat if you want to make sure that the final file only includes the fragments you specify, or if the order of the fragments matters; a sketch based on your example follows after this list.
Use the augeas type if you need to edit/add configuration to a file with a more complicated structure, like xml, apache configurations, etc.
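As an illustration of the concat option, a minimal sketch (the target path /etc/monitor/monitor.conf and the fragment naming are assumptions, not part of your setup). Each monitored_thing registers itself, so the monitor_service class never needs a list of instances:

class monitor_service {
  # The assembled config file; fragments below are concatenated into it.
  concat { '/etc/monitor/monitor.conf':
    ensure => present,
  }
}

define my_thing::monitored_thing ( $port ) {
  file { "/var/things/${name}":
    # ...
  }

  # Each instance contributes one ITEM line to the shared config file.
  concat::fragment { "monitor_item_${name}":
    target  => '/etc/monitor/monitor.conf',
    content => "ITEM: /var/things/${name} (${port})\n",
  }
}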
