"This field cannot be set" - Terraform custom provider

I am new to writing custom providers for Terraform. I am trying to read some values from my .tf files, but I am getting this error:
Error: "tags": this field cannot be set
Here is my sample code
main.tf
# This is required for Terraform 0.13+
terraform {
  required_providers {
    example = {
      version = "~> 1.0.0"
      source  = "example.com/sd/example"
    }
  }
}

resource "example_server" "my-server" {
  address = "1.2.3.4"

  sensitive_map {
    key   = "foo"
    value = "dddd"
  }

  tags = {
    env  = "development"
    name = "example tag"
  }
}
Here is my resource provider file.
func resourceServer() *schema.Resource {
	return &schema.Resource{
		Create: resourceServerCreate,
		Read:   resourceServerRead,
		Update: resourceServerUpdate,
		Delete: resourceServerDelete,
		Schema: map[string]*schema.Schema{
			"address": {
				Type:     schema.TypeString,
				Required: true,
			},
			"tags": {
				Type: schema.TypeMap,
				Elem: &schema.Schema{
					Type: schema.TypeString,
				},
			},
		},
	}
}
func resourceServerCreate(d *schema.ResourceData, m interface{}) error {
	logs.Info("Creating word")
	address := d.Get("address").(string)
	// tags := d.Get("tags").(interface{})
	// keyval := tags.(map[string]interface{})
	d.SetId(address)
	log.Printf("[WARN] No Server found: %s", d.Id())

	f, err := os.OpenFile("/home/sdfd/Desktop/123.txt", os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	tmps := d.Get("tags").(map[string]interface{})
	address += tmps["env"].(string)
	address += tmps["name"].(string)
	if _, err = f.WriteString(address); err != nil {
		panic(err)
	}
	return nil
}
I am not able to find the cause of this error, and my logs are not printing in the terminal either. Could anyone help me resolve this problem?
Thanks in advance.

A schema.Schema object used to define an attribute or nested block must always have at least one of Required: true, Optional: true, or Computed: true set.
Required and Optional both indicate that the argument is one that can be set in the configuration. Since you've set neither of them for tags, the SDK is rejecting attempts to set tags in the configuration.
Computed indicates that the provider itself will be the one to decide the value. This can either be used alone or in conjunction with Optional. If you set both Optional and Computed then that means that the provider will provide its own value if (and only if) the user leaves it unset in the configuration.
Since it seems like your intent here is for tags to be set by the user in the configuration, I think the answer here would be to mark it as Optional:
"tags": {
Type: schema.TypeMap,
Elem: &schema.Schema{
Type: schema.TypeString,
},
Optional: true,
},
The above means that it may be set in the configuration, but the user isn't required to set it. If the user doesn't set it then your provider code will see it as the built-in default placeholder value for a map, which is an empty map.
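As a standalone illustration of what the provider code then sees, here is a minimal runnable Go sketch of the type assertions involved. A plain map[string]interface{} stands in for the result of d.Get("tags"), and buildAddress is a hypothetical helper for this example, not part of the SDK:

```go
package main

import "fmt"

// buildAddress mimics the question's Create logic: it type-asserts each
// map value to string and appends it to the address. The comma-ok form
// avoids a panic when a key is absent.
func buildAddress(address string, tags map[string]interface{}) string {
	if env, ok := tags["env"].(string); ok {
		address += env
	}
	if name, ok := tags["name"].(string); ok {
		address += name
	}
	return address
}

func main() {
	// Stand-in for d.Get("tags"): the SDK hands a TypeMap value to
	// provider code as a map[string]interface{}.
	tags := map[string]interface{}{
		"env":  "development",
		"name": "example tag",
	}
	fmt.Println(buildAddress("1.2.3.4", tags))

	// If the user leaves tags unset, the provider sees an empty map,
	// so both lookups above are simply skipped.
	fmt.Println(buildAddress("1.2.3.4", map[string]interface{}{}))
}
```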

Related

Why does adding elements of schema.TypeSet forces replacement in Terraform?

Context: we're building a new TF provider.
Our schema definition looks as follows:
"foo": {
Type: schema.TypeInt,
...
},
"bar": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"xxx": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validation.StringIsNotEmpty,
},
"yyy": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validation.StringIsNotEmpty,
},
"zzz": {
Type: schema.TypeInt,
Required: true,
ForceNew: true,
},
},
},
},
So there's no ForceNew: true on the bar attribute at the top level, but when I update my resource from
resource "aaa" "before" {
foo = 2
}
->
resource "aaa" "before" {
foo = 2
bar {
xxx = "aaa"
yyy = "bbb"
zzz = 3
}
}
and yet I can see
+ bar { # forces replacement
    + xxx = "aaa"
    + yyy = "bbb"
    + zzz = 3
  }
The SDKv2 ForceNew feature is a convenience helper for a common case where any change to a particular argument requires replacing an object.
It seems like in your case you need a more specific rule. From what you've described it seems like you'd need to replace only if there's an item being removed from the set, because adding new items and removing old items are the only two possible changes to a set.
Any time when the built-in helpers don't work you can typically implement a more specific rule using arbitrary code by implementing a CustomizeDiff function.
A CustomizeDiff function with similar behavior to ForceNew would have a structure like the following:
func exampleCustomizeDiff(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	old, new := d.GetChange("bar")
	if barChangeNeedsReplacement(old, new) {
		return d.ForceNew("bar")
	}
	return nil
}
You will need to provide a definition of barChangeNeedsReplacement that implements whatever logic decides that replacement is needed.
I believe an attribute of TypeSet will appear in old and new as *schema.Set values, and so if you type-assert to that type then you can use the methods of that type to perform standard set operations to make your calculation.
For example, if the rule were to require replacement only if there is an item in old that isn't in new then you could perhaps write it like this:
func barChangeNeedsReplacement(old, new any) bool {
	oldSet := old.(*schema.Set)
	newSet := new.(*schema.Set)
	removedSet := oldSet.Difference(newSet)
	return removedSet.Len() > 0
}
If that isn't quite the rule you wanted then hopefully you can see how to modify this example to implement the rule you need.
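As a standalone illustration of that rule, here is a runnable Go sketch that mirrors the oldSet.Difference(newSet).Len() > 0 calculation using plain string slices in place of *schema.Set; needsReplacement is a hypothetical stand-in for barChangeNeedsReplacement:

```go
package main

import "fmt"

// needsReplacement reports whether any element of old is missing from new,
// mirroring oldSet.Difference(newSet).Len() > 0 on a *schema.Set: only the
// removal of an existing item triggers replacement, never a pure addition.
func needsReplacement(old, new []string) bool {
	newSet := make(map[string]bool, len(new))
	for _, v := range new {
		newSet[v] = true
	}
	for _, v := range old {
		if !newSet[v] {
			return true // an old item was removed
		}
	}
	return false
}

func main() {
	fmt.Println(needsReplacement(nil, []string{"a"}))                // adding only: false
	fmt.Println(needsReplacement([]string{"a"}, []string{"a", "b"})) // still false
	fmt.Println(needsReplacement([]string{"a", "b"}, []string{"b"})) // "a" removed: true
}
```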

Shall I make the read-only foo attribute of a Terraform resource ForceNew: true, or print out an error?

I'm developing a Terraform provider.
There's a resource with two required string attributes, foo and bar, where the underlying API supports updates to bar but not to foo.
The question is what to do when a user edits foo attribute in their TF configuration:
"foo": {
Type: schema.TypeString,
Required: true,
},
Shall I return an error like
func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) error {
	if d.HasChange("foo") {
		return fmt.Errorf("updates for foo are not supported")
	}
	return nil
}
which will print out an error during terraform apply or just add ForceNew: true to trigger resource recreation:
"foo": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},

Use of TypeSet vs TypeList in Terraform when building a custom provider

I'm developing a terraform provider by following this guide.
However I stumbled upon using TypeList vs TypeSet:
TypeSet implements set behavior and is used to represent an unordered collection of items, meaning that their ordering specified does not need to be consistent, and the ordering itself has no impact on the behavior of the resource.
TypeList is used to represent an ordered collection of items, where the order the items are presented can impact the behavior of the resource being modeled. An example of ordered items would be network routing rules, where rules are examined in the order they are given until a match is found. The items are all of the same type defined by the Elem property.
My resource requires one of two blocks to be present, i.e.:
resource "foo" "example" {
name = "123"
# Only one of basketball / football are expected to be present
basketball {
nba_id = "23"
}
football {
nfl_id = "1"
}
}
and my schema looks like the following:
Schema: map[string]*schema.Schema{
	"name": {
		Type: schema.TypeString,
	},
	"basketball": basketballSchema(),
	"football":   footballSchema(),
},

func basketballSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"nba_id": {
					Type:     schema.TypeString,
					Required: true,
				},
			},
		},
		ExactlyOneOf: []string{"basketball", "football"},
	}
}

func footballSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		MaxItems: 1,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"nfl_id": {
					Type:     schema.TypeString,
					Required: true,
				},
			},
		},
		ExactlyOneOf: []string{"basketball", "football"},
	}
}
Is that accurate that both TypeSet and TypeList will work in this scenario where we restrict the number of elements to either 0 or just 1?

Make Map values sensitive in custom Terraform provider

I'm writing a custom Terraform provider, and I have a resource with an argument that is a map[string]string which may contain sensitive values. I want to make the values sensitive but not the keys. I tried setting the Sensitive attribute of the map's Elem to true (see example below), but I still get the values printed out to the console during the plan phase.
return &schema.Resource{
	// ...
	Schema: map[string]*schema.Schema{
		"sensitive_map": {
			Type:     schema.TypeMap,
			Optional: true,
			Elem: &schema.Schema{
				Type: schema.TypeString,
				// Sensitive: true,
			},
		},
	},
}
Example plan phase output:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# deploy_project.this will be created
+ resource "my_resource" "this" {
+ sensitive_map = {
+ "key" = "value"
}
+ id = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
How can I get the value to be marked as sensitive but not the key?
In the Terraform SDK's current model of sensitivity, there is no way to achieve what you are aiming to achieve. Sensitivity is set for an entire attribute at a time, not for parts of an attribute.
Although the SDK model re-uses *schema.Schema as a possible type for Elem as a convenience, in practice only a small subset of the schema.Schema fields can work in that position, because a declaration like this is roughly the same as declaring a variable like the following in a Terraform module:
variable "sensitive_map" {
type = map(string)
sensitive = true
}
Notice that the "sensitive" concept applies to the variable as a whole. It isn't a part of the variable's type constraint, so there isn't any way to write down "map of sensitive strings" as a type constraint. Although provider arguments are not actually module variables, they do still participate in the same system of values and types that variables do, and so have a similar set of capabilities.
I ended up settling on a solution using nested blocks rather than a simple map. The schema definition is more complex than a simple map, and it makes the user-facing configuration more verbose, but it satisfies my initial requirements quite well.
"sensitive_map": {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"key": {
Type: schema.TypeString,
Required: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
"value": {
Type: schema.TypeString,
Required: true,
Sensitive: true,
Elem: &schema.Schema{Type: schema.TypeString},
},
},
},
},
And it shows up in the plan phase as:
+ resource "my_resource" "this" {
+ sensitive_map {
+ key = "foo"
+ value = (sensitive value)
}
}
It changes the representation in Go from a map[string]string to a []interface{}, where each element is a map[string]interface{} holding the key and value attributes of one block. In the Create hook of the resource, this is what the code looks like to parse the input config:
sensitiveMap := make(client.EnvVars)
tmp := d.Get("sensitive_map").([]interface{})
for _, v := range tmp {
	keyval := v.(map[string]interface{})
	sensitiveMap[keyval["key"].(string)] = keyval["value"].(string)
}
I'm sure it could be optimized further but for now it works just fine!
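For reference, here is the same flattening logic as a self-contained, runnable Go sketch; a literal []interface{} stands in for d.Get("sensitive_map") and a plain map[string]string stands in for client.EnvVars, and flattenKeyValueBlocks is a hypothetical helper name:

```go
package main

import "fmt"

// flattenKeyValueBlocks converts the SDK's representation of repeated
// key/value nested blocks ([]interface{} whose elements are each a
// map[string]interface{}) back into a flat map[string]string.
func flattenKeyValueBlocks(raw []interface{}) map[string]string {
	out := make(map[string]string, len(raw))
	for _, v := range raw {
		keyval := v.(map[string]interface{})
		out[keyval["key"].(string)] = keyval["value"].(string)
	}
	return out
}

func main() {
	// Stand-in for d.Get("sensitive_map").([]interface{}) with one block.
	raw := []interface{}{
		map[string]interface{}{"key": "foo", "value": "dddd"},
	}
	fmt.Println(flattenKeyValueBlocks(raw)["foo"])
}
```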

Custom mapping type in Elasticsearch 7.3.2 using nodejs

I am trying to create a custom mapping type in Elasticsearch 7.3.2 using the Node.js client, but it only creates the default _doc mapping type. When I provide a custom doc type, it throws an error saying I should set include_type_name=true, even though mapping types are deprecated in Elasticsearch 7.3.2.
I have already tried providing a custom mapping type with include_type_name=true.
var indexName = req.body.indexName;
var mappingType = req.body.mappingType;

const mapping = {
  properties: {
    name: {
      type: "text",
      analyzer: "autocomplete",
      search_analyzer: "standard"
    },
    createdate: {
      type: "date"
    }
  }
};

await esClient.indices.putMapping({
  index: indexName,
  type: mappingType,
  include_type_name: true,
  body: mapping
}, (err: any, resp: any, status: any) => {
  if (err) {
    return res.status(400).json({ status, err });
  }
  res.status(200).json({ resp, status });
});
}
Expected: I am trying to create a custom mapping type because when I ingest data from MongoDB using Transporter, it uses the collection name as the mapping type, while the mapping I am creating has the default _doc type.
Error: "Rejecting mapping update to [tag] as the final mapping would have more than 1 type: [_doc, tags]" (tags is the collection name).
Yes, that's expected behavior in ES 7 (onwards): only a single mapping type is supported, and it is called _doc by default.
As you've noticed, you can change that behavior in ES7 by adding include_type_name=true to your call. (That won't work in ES8 anymore, though).
The thing is that you can only do it at index creation time, i.e. you cannot modify the mapping type name after the index has been created. So, since you don't control the index creation from MongoDB, all you can do is to create an index template in ES that will kick in once your index will be created.
Run this in Kibana Dev Tools before the index creation:
PUT _template/tags?include_type_name=true
{
  "index_patterns": ["my-index*"],     <--- change this pattern to match yours
  "mappings": {
    "tags": {
      "properties": {
        "name": {
          "type": "text",
          "analyzer": "autocomplete",
          "search_analyzer": "standard"
        },
        "createdate": {
          "type": "date"
        }
      }
    }
  }
}
That way, when the my-index index gets created, it will automatically get a mapping type called tags instead of the default _doc one.