Make Map values sensitive in custom Terraform provider

I'm writing a custom Terraform provider, and I have a resource with an argument that is a map[string]string which may contain sensitive values. I want to make the values sensitive but not the keys. I tried setting the Sensitive attribute of the Elem in the map to true (see example below), but I still get the values printed to the console during the plan phase.
return &schema.Resource{
	// ...
	Schema: map[string]*schema.Schema{
		"sensitive_map": {
			Type:     schema.TypeMap,
			Optional: true,
			Elem: &schema.Schema{
				Type:      schema.TypeString,
				Sensitive: true,
			},
		},
	},
}
Example plan phase output:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # deploy_project.this will be created
  + resource "my_resource" "this" {
      + sensitive_map = {
          + "key" = "value"
        }
      + id            = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
How can I get the value to be marked as sensitive but not the key?

In the Terraform SDK's current model of sensitivity, there is no way to achieve this. Sensitivity is set for an entire attribute at a time, not for parts of an attribute.
Although the SDK model re-uses *schema.Schema as a possible type for Elem as a convenience, in practice only a small subset of the schema.Schema fields can work in that position, because a declaration like this is roughly the same as declaring a variable like the following in a Terraform module:
variable "sensitive_map" {
type = map(string)
sensitive = true
}
Notice that the "sensitive" concept applies to the variable as a whole. It isn't a part of the variable's type constraint, so there isn't any way to write down "map of sensitive strings" as a type constraint. Although provider arguments are not actually module variables, they do still participate in the same system of values and types that variables do, and so have a similar set of capabilities.
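If hiding the keys as well is acceptable in your case, the closest available option is to mark the whole attribute as sensitive, which redacts the entire map in the plan output; a minimal sketch:
"sensitive_map": {
	Type:     schema.TypeMap,
	Optional: true,
	// Sensitivity applies to the attribute as a whole, so both
	// keys and values are hidden in the plan output.
	Sensitive: true,
	Elem: &schema.Schema{
		Type: schema.TypeString,
	},
},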

I ended up settling on a solution using nested blocks rather than a simple map. The schema definition is more complex than a simple map, and it makes for a more verbose userland configuration, but it satisfies my initial requirements quite well.
"sensitive_map": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"key": {
Type: schema.TypeString,
Required: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"value": {
Type: schema.TypeString,
Required: true,
Sensitive: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
},
},
},
And it shows up in the plan phase as:
+ resource "my_resource" "this" {
+ sensitive_map {
+ key = "foo"
+ value = (sensitive value)
}
}
It changes the representation in Go from a map[string]string to a []interface{} in which each element is a map[string]interface{} holding the key and value attributes. In the Create hook of the resource, this is what the code looks like to parse the input config:
sensitiveMap := make(client.EnvVars)
tmp := d.Get("sensitive_map").([]interface{})
for _, v := range tmp {
	// Each list element is an object with the "key" and "value" attributes.
	keyval := v.(map[string]interface{})
	sensitiveMap[keyval["key"].(string)] = keyval["value"].(string)
}
I'm sure it could be optimized further but for now it works just fine!
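For completeness, the Read hook needs the reverse transformation to write the values back into state. Here is a minimal sketch along the same lines (the flattenSensitiveMap helper name is just illustrative, and it assumes the same client.EnvVars map type as above):
func flattenSensitiveMap(in client.EnvVars) []interface{} {
	// Flatten the client-side map back into the list-of-objects
	// shape that the schema expects. Go map iteration order is not
	// stable, so with schema.TypeList you may want to sort the keys
	// first to avoid spurious diffs.
	out := make([]interface{}, 0, len(in))
	for k, v := range in {
		out = append(out, map[string]interface{}{
			"key":   k,
			"value": v,
		})
	}
	return out
}
In the Read hook, the result is then stored with d.Set("sensitive_map", flattenSensitiveMap(sensitiveMap)).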

Related

Why does adding elements of schema.TypeSet force replacement in Terraform?

Context: we're building a new TF provider.
Our schema definition looks as follows:
"foo": {
Type: schema.TypeInt,
...
},
"bar": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"xxx": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validation.StringIsNotEmpty,
},
"yyy": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validation.StringIsNotEmpty,
},
"zzz": {
Type: schema.TypeInt,
Required: true,
ForceNew: true,
},
},
},
},
So there's no ForceNew: true on the bar attribute at the top level, but when I update my resource from
resource "aaa" "before" {
foo = 2
}
->
resource "aaa" "before" {
foo = 2
bar {
xxx = "aaa"
yyy = "bbb"
zzz = 3
}
}
I still see:
  + bar { # forces replacement
      + xxx = "aaa"
      + yyy = "bbb"
      + zzz = 3
    }
The SDKv2 ForceNew feature is a convenience helper for a common case where any change to a particular argument requires replacing an object.
It seems like in your case you need a more specific rule: from what you've described, you'd need to replace only if an item is being removed from the set, since adding new items and removing old items are the only two possible changes to a set.
Any time when the built-in helpers don't work you can typically implement a more specific rule using arbitrary code by implementing a CustomizeDiff function.
A CustomizeDiff function with similar behavior to ForceNew would have a structure like the following:
func exampleCustomizeDiff(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	old, new := d.GetChange("bar")
	if barChangeNeedsReplacement(old, new) {
		// ForceNew marks the attribute as requiring the resource
		// to be replaced; it returns an error for an unknown key.
		return d.ForceNew("bar")
	}
	return nil
}
You will need to provide a definition of barChangeNeedsReplacement that implements whatever logic decides that replacement is needed.
I believe an attribute of TypeSet will appear in old and new as *schema.Set values, and so if you type-assert to that type then you can use the methods of that type to perform standard set operations to make your calculation.
For example, if the rule were to require replacement only if there is an item in old that isn't in new then you could perhaps write it like this:
func barChangeNeedsReplacement(old, new any) bool {
	oldSet := old.(*schema.Set)
	newSet := new.(*schema.Set)
	removedSet := oldSet.Difference(newSet)
	return removedSet.Len() > 0
}
If that isn't quite the rule you wanted then hopefully you can see how to modify this example to implement the rule you need.
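For reference, here is a minimal sketch of wiring the function into the resource (resourceExample is a hypothetical constructor; SDKv2's helper/customdiff package also offers composable helpers such as customdiff.ForceNewIfChange, if you'd rather not write the function by hand):
func resourceExample() *schema.Resource {
	return &schema.Resource{
		// Create/Read/Update/Delete and Schema as usual; the nested
		// attributes of "bar" no longer need ForceNew: true, since
		// the CustomizeDiff function now makes that decision.
		CustomizeDiff: exampleCustomizeDiff,
	}
}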

Convert from tuple of strings to string in Terraform

I have an issue where I want to pass a list of vpc_ids to aws_route53_zone while getting the IDs from a couple of module calls and iterating over them from the state file.
The output format I am using is:
output "development_vpc_id" {
value = [for vpc in values(module.layout)[*] : vpc.id if vpc.environment == "development"]
description = "VPC id for development env"
}
where I get the output like:
"development_vpc_id": {
"value": [
"xxxx"
],
"type": [
"tuple",
[
"string"
]
]
},
instead I want to achieve below:
"developmemt_vpc_id": {
"value": "xxx",
"type": "string"
},
Can someone please help me with the same.
There isn't any automatic way to "convert" a sequence of strings into a single string, because you need to decide how you want to represent the multiple separate strings once you've reduced it into only a single string.
One solution would be to apply JSON encoding so that your output value is a string containing JSON array syntax:
output "development_vpc_id" {
value = jsonencode([
for vpc in values(module.layout)[*] : vpc.id
if vpc.environment == "development"
])
}
Another possibility is to concatenate all of the strings together with a particular character as a marker to separate each one, such as a comma:
output "development_vpc_id" {
value = join(",", [
for vpc in values(module.layout)[*] : vpc.id
if vpc.environment == "development"
])
}
If you expect that this list will always contain exactly one item -- that is, if each of your objects has a unique environment value -- then you could also tell Terraform about that assumption using the one function:
output "development_vpc_id" {
value = one([
for vpc in values(module.layout)[*] : vpc.id
if vpc.environment == "development"
])
}
In this case, Terraform will either return the one element of this sequence or will raise an error saying there are too many items in the sequence. The one function therefore acts as an assertion to help you detect if there's a bug which causes there to be more than one item in this list, rather than just silently discarding some of the items.

Use of TypeSet vs TypeList in Terraform when building a custom provider

I'm developing a Terraform provider by following this guide. However, I stumbled on the choice between TypeList and TypeSet:
TypeSet implements set behavior and is used to represent an unordered collection of items, meaning that their ordering specified does not need to be consistent, and the ordering itself has no impact on the behavior of the resource.
TypeList is used to represent an ordered collection of items, where the order the items are presented can impact the behavior of the resource being modeled. An example of ordered items would be network routing rules, where rules are examined in the order they are given until a match is found. The items are all of the same type defined by the Elem property.
My resource requires one of two blocks to be present, i.e.:
resource "foo" "example" {
name = "123"
# Only one of basketball / football are expected to be present
basketball {
nba_id = "23"
}
football {
nfl_id = "1"
}
}
and my schema looks like the following:
Schema: map[string]*schema.Schema{
	"name": {
		Type: schema.TypeString,
	},
	"basketball": basketballSchema(),
	"football":   footballSchema(),
},

func basketballSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"nba_id": {
					Type:     schema.TypeString,
					Required: true,
				},
			},
		},
		ExactlyOneOf: []string{"basketball", "football"},
	}
}

func footballSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"nfl_id": {
					Type:     schema.TypeString,
					Required: true,
				},
			},
		},
		ExactlyOneOf: []string{"basketball", "football"},
	}
}
Is it accurate that both TypeSet and TypeList will work in this scenario, where we restrict the number of elements to either 0 or 1?

This field cannot be set - Terraform custom provider

I am a newbie at creating custom providers for Terraform. I am trying to get some values from tf files, but I am getting some errors.
Error: "tags": this field cannot be set
Here is my sample code
main.tf
# This is required for Terraform 0.13+
terraform {
  required_providers {
    example = {
      version = "~> 1.0.0"
      source  = "example.com/sd/example"
    }
  }
}

resource "example_server" "my-server" {
  address = "1.2.3.4"

  sensitive_map {
    key   = "foo"
    value = "dddd"
  }

  tags = {
    env  = "development"
    name = "example tag"
  }
}
Here is my resource provider file.
func resourceServer() *schema.Resource {
	return &schema.Resource{
		Create: resourceServerCreate,
		Read:   resourceServerRead,
		Update: resourceServerUpdate,
		Delete: resourceServerDelete,
		Schema: map[string]*schema.Schema{
			"address": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
			"tags": {
				Type: schema.TypeMap,
				Elem: &schema.Schema{
					Type: schema.TypeString,
				},
			},
		},
	}
}

func resourceServerCreate(d *schema.ResourceData, m interface{}) error {
	logs.Info("Creating word")
	address := d.Get("address").(string)
	// tags := d.Get("tags").(interface{})
	// keyval := tags.(map[string]interface{})
	d.SetId(address)
	log.Printf("[WARN] No Server found: %s", d.Id())
	f, err := os.OpenFile("/home/sdfd/Desktop/123.txt", os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	tmps := d.Get("tags").(map[string]interface{})
	address += tmps["env"].(string)
	address += tmps["name"].(string)
	if _, err = f.WriteString(address); err != nil {
		panic(err)
	}
	return nil
}
I am not able to find the exact error. The logs are also not printing in the terminal. Could anyone help resolve this problem?
Thanks in advance
A schema.Schema object used to define an attribute or nested block must always have at least one of Required: true, Optional: true, or Computed: true set.
Required and Optional both indicate that the argument is one that can be set in the configuration. Since you've set neither of them for tags, the SDK is rejecting attempts to set tags in the configuration.
Computed indicates that the provider itself will be the one to decide the value. This can either be used alone or in conjunction with Optional. If you set both Optional and Computed then that means that the provider will provide its own value if (and only if) the user leaves it unset in the configuration.
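For illustration, an Optional + Computed attribute would look something like this sketch ("region" here is just a hypothetical attribute name):
"region": {
	Type:     schema.TypeString,
	Optional: true,
	// Computed lets the provider fill in its own value whenever
	// the user omits this argument from the configuration.
	Computed: true,
},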
Since it seems like your intent here is for tags to be set by the user in the configuration, I think the answer here would be to mark it as Optional:
"tags": {
Type: schema.TypeMap,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Optional: true,
},
The above means that it may be set in the configuration, but the user isn't required to set it. If the user doesn't set it then your provider code will see it as the built-in default placeholder value for a map, which is an empty map.
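On the provider side, it's also worth reading an optional map defensively rather than assuming particular keys exist; the unchecked assertions in the Create function above, such as tmps["env"].(string), will panic when a key is absent. A minimal sketch using d.GetOk:
// GetOk reports false for the zero value (here, an empty map), so
// this block is skipped entirely when the user leaves tags unset.
if raw, ok := d.GetOk("tags"); ok {
	tags := make(map[string]string)
	for k, v := range raw.(map[string]interface{}) {
		tags[k] = v.(string)
	}
	// ... use tags ...
}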

AWS Cloudformation: How to reuse bash script placed in user-data parameter when creating EC2?

In Cloudformation I have two stacks (one nested).
Nested stack "ec2-setup":
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Parameters" : {
    // (...) some parameters here
    "userData" : {
      "Description" : "user data to be passed to instance",
      "Type" : "String",
      "Default": ""
    }
  },
  "Resources" : {
    "EC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "UserData" : { "Ref" : "userData" },
        // (...) some other properties here
      }
    }
  },
  // (...)
}
Now in my main template I want to refer to the nested template presented above and pass a bash script using the userData parameter. Additionally, I do not want to inline the content of the user-data script, because I want to reuse it for a few EC2 instances (so I do not want to duplicate the script each time I declare an EC2 instance in my main template).
I tried to achieve this by setting the content of the script as a default value of a parameter:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters" : {
    "myUserData": {
      "Type": "String",
      "Default" : { "Fn::Base64" : { "Fn::Join" : ["", [
        "#!/bin/bash \n",
        "yum update -y \n",
        "# Install the files and packages from the metadata\n",
        "echo 'tralala' > /tmp/hahaha"
      ]]}}
    }
  },
  (...)
  "myEc2": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "s3://path/to/ec2-setup.json",
      "TimeoutInMinutes": "10",
      "Parameters": {
        // (...)
        "userData" : { "Ref" : "myUserData" }
      }
But I get the following error while trying to launch the stack:
"Template validation error: Template format error: Every Default member must be a string."
The error seems to be caused by the fact that the declaration { Fn::Base64 (...) } is an object, not a string (although evaluating it returns a base64-encoded string).
All works OK if I paste my script directly into the parameters section (as an inline script) when calling my nested template, instead of referring to a string set as a parameter:
"myEc2": {
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"TemplateURL": "s3://path/to/ec2-setup.json",
"TimeoutInMinutes": "10",
"Parameters": {
// (...)
"userData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash \n",
"yum update -y \n",
"# Install the files and packages from the metadata\n",
"echo 'tralala' > /tmp/hahaha"
]]}}
}
but I want to keep the content of the userData script in a parameter/variable to be able to reuse it.
Any chance to reuse such a bash script without a need to copy/paste it each time?
Here are a few options on how to reuse a bash script in user-data for multiple EC2 instances defined through CloudFormation:
1. Set default parameter as string
Your original attempted solution should work, with a minor tweak: you must declare the default parameter as a string, as follows (using YAML instead of JSON makes it possible/easier to declare a multi-line string inline):
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  myUserData:
    Type: String
    Default: |
      #!/bin/bash
      yum update -y
      # Install the files and packages from the metadata
      echo 'tralala' > /tmp/hahaha
# (...)
Resources:
  myEc2:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: "s3://path/to/ec2-setup.yml"
      TimeoutInMinutes: 10
      Parameters:
        # (...)
        userData: !Ref myUserData
Then, in your nested stack, apply any required intrinsic functions (Fn::Base64, as well as Fn::Sub which is quite helpful if you need to apply any Ref or Fn::GetAtt functions within your user-data script) within the EC2 instance's resource properties:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  # (...) some parameters here
  userData:
    Description: user data to be passed to instance
    Type: String
    Default: ""
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      UserData:
        "Fn::Base64":
          "Fn::Sub": !Ref userData
      # (...) some other properties here
# (...)
2. Upload script to S3
You can upload your single Bash script to an S3 bucket, then invoke the script by adding a minimal user-data script in each EC2 instance in your template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  # (...) some parameters here
  ScriptBucket:
    Description: S3 bucket containing user-data script
    Type: String
  ScriptKey:
    Description: S3 object key containing user-data script
    Type: String
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      UserData:
        "Fn::Base64":
          "Fn::Sub": |
            #!/bin/bash
            aws s3 cp s3://${ScriptBucket}/${ScriptKey} - | bash -s
      # (...) some other properties here
# (...)
3. Use preprocessor to inline script from single source
Finally, you can use a template-preprocessor tool like troposphere or your own to 'generate' verbose CloudFormation-executable templates from more compact/expressive source files. This approach will allow you to eliminate duplication in your source files - although the templates will contain 'duplicate' user-data scripts, this will only occur in the generated templates, so should not pose a problem.
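As a sketch of the preprocessing idea (not tied to troposphere; the file names and template placeholders here are hypothetical), a small Go program using the standard text/template package can inline one shared script into several generated templates. A real tool would also need to JSON-escape the script body before inlining it:
package main

import (
	"os"
	"text/template"
)

func main() {
	// Read the shared user-data script once (hypothetical file name).
	script, err := os.ReadFile("userdata.sh")
	if err != nil {
		panic(err)
	}
	// The source template references {{.Name}} and {{.UserData}}
	// placeholders (hypothetical template file).
	tmpl := template.Must(template.ParseFiles("ec2-setup.json.tmpl"))
	// Generate one template per instance; the user-data duplication
	// exists only in the generated output, never in the source.
	for _, name := range []string{"web", "worker"} {
		out, err := os.Create(name + ".json")
		if err != nil {
			panic(err)
		}
		data := map[string]string{"Name": name, "UserData": string(script)}
		if err := tmpl.Execute(out, data); err != nil {
			panic(err)
		}
		out.Close()
	}
}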
You'll have to look outside the template to provide the same user data to multiple templates. A common approach here would be to abstract your template one step further, or "template the template". Use the same method to create both templates, and you'll keep them both DRY.
I'm a huge fan of CloudFormation and use it to create most all my resources, especially for production-bound uses. But as powerful as it is, it isn't quite turn-key. In addition to creating the template, you'll also have to call the CloudFormation API to create the stack, and provide a stack name and parameters. Thus, automation around the use of CloudFormation is a necessary part of a complete solution. This automation can be simplistic (a bash script, for example) or sophisticated. I've taken to using Ansible's cloudformation module to automate "around" the template, be it creating a template for the template with Jinja, or just providing different sets of parameters to the same reusable template, or doing discovery before the stack is created; whatever ancillary operations are necessary. Some folks really like troposphere for this purpose; if you're a pythonic thinker you might find it to be a good fit. Once you have automation of any kind handling the stack creation, you'll find it's easy to add steps to make the template itself more dynamic, or assemble multiple stacks from reusable components.
At work we use CloudFormation quite a bit and are tending these days to prefer a compositional approach, where we define the shared components of the templates we use, and then compose the actual templates from components.
The other option would be to merge the two stacks, using conditionals to control the inclusion of the defined resources in any particular stack created from the template. This works OK in simple cases, but the combinatorial complexity of all those conditions tends to make this a difficult solution in the long run, unless the differences are really simple.
Actually, I found one more solution beyond those already mentioned. On the one hand this solution is a little "hackish", but on the other hand I found it to be really useful for the "bash script" use case (and for other parameters, too).
The idea is to create an extra stack, a "parameters stack", which will output the values. Since outputs of a stack are not limited to String (as default values are), we can define an entire base64-encoded script as a single output from a stack.
The drawback is that every stack needs to define at least one resource, so our parameters stack also needs to define at least one resource. The solution for this issue is either to define the parameters in another template which already defines an existing resource, or to create a "fake resource" which will never be created because of a Condition which will never be satisfied.
Here I present the solution with a fake resource. First we create our new parameters-stack.json as follows:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Outputs/returns parameter values",
  "Conditions" : {
    "alwaysFalseCondition" : {"Fn::Equals" : ["aaaaaaaaaa", "bbbbbbbbbb"]}
  },
  "Resources": {
    "FakeResource" : {
      "Type" : "AWS::EC2::EIPAssociation",
      "Condition" : "alwaysFalseCondition",
      "Properties" : {
        "AllocationId" : { "Ref": "AWS::NoValue" },
        "NetworkInterfaceId" : { "Ref": "AWS::NoValue" }
      }
    }
  },
  "Outputs": {
    "ec2InitScript": {
      "Value":
        { "Fn::Base64" : { "Fn::Join" : ["", [
          "#!/bin/bash \n",
          "yum update -y \n",
          "# Install the files and packages from the metadata\n",
          "echo 'tralala' > /tmp/hahaha"
        ]]}}
    }
  }
}
Now in the main template we first declare our parameters stack, and later we refer to the output from that parameters stack:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "myParameters": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/parameters-stack.json",
        "TimeoutInMinutes": "10"
      }
    },
    "myEc2": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/ec2-setup.json",
        "TimeoutInMinutes": "10",
        "Parameters": {
          // (...)
          "userData" : {"Fn::GetAtt": [ "myParameters", "Outputs.ec2InitScript" ]}
        }
      }
    }
  }
}
Please note that one can create up to 60 outputs in one stack file, so it is possible to define 60 variables/parameters per single stack file using this technique.
