I was getting the following error while running terraform plan:
Error: Cycle: aws_sagemaker_notebook_instance.mlops_datapipeline_notebookinstance_main, aws_sagemaker_notebook_instance.mlops_datapipeline_notebookinstance_demo, data.aws_iam_policy_document.sagemaker_neptune-access, aws_iam_policy.sagemaker_execution_policy, aws_neptune_cluster.neptune_for_demo, aws_neptune_cluster.neptune_for_main, data.aws_iam_policy_document.neptune-access, aws_iam_policy.neptune_access_policy, aws_iam_role.Neptune_execution_role
I assume you are using AWS, since your filename contains "ec2", although you haven't shown enough code or details in your question to be sure.
The AWS Terraform provider expects tags to be a map, not a single string. You have enclosed the entire thing in double quotes, converting it into a string. Try this:
tags = merge(var.tags, { Name = format("%s-%d", var.name, count.index + 1) })
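In context, that line could look like this (the resource type and variable names are assumptions matching the snippet above, not code from the question):

```hcl
resource "aws_instance" "example" {
  count = var.instance_count
  # ... other arguments ...

  # merge() combines the shared tag map with a per-instance Name tag.
  tags = merge(
    var.tags,
    { Name = format("%s-%d", var.name, count.index + 1) },
  )
}
```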
I wanted to ask if there is a way to ignore whitespace changes when creating a terraform plan.
This question is related to this one; I created a new one because I wanted to give a new example of the issue:
Terraform shows unnecessary changes due to whitespace
For example, when running
terraform plan
I get the following change for a helm provider resource
  # helm_release.cert-manager will be updated in-place
  ~ resource "helm_release" "cert-manager" {
        id     = "cert-manager"
        name   = "cert-manager"
      ~ values = [
          - <<-EOT
                installCRDs: true
            EOT,
          + <<-EOT
                installCRDs: true
            EOT,
        ]
        # (27 unchanged attributes hidden)
    }
I found out that the change was due to line endings: the deployed value used CRLF, while my local source file used LF.
Is there an option to ignore whitespaces and/or line ending characters?
It's typically the responsibility of the provider itself to determine whether the prior value and the new value are equivalent despite not being exactly equal, so making this work automatically would require a change to the provider: it would need to notice that this argument is defined as YAML, and YAML ascribes no meaning to the choice between CRLF and LF. Ideally the provider would perform this check itself so you wouldn't need to worry about it, and I would suggest opening a feature request with the provider developer to see if they would be interested in handling that.
However, if a provider isn't performing that job correctly itself then you can potentially work around it by doing your own normalization of the value using Terraform language features, so that the value passed to the provider is always the same when the meaning is the same.
One straightforward way to achieve that in this case would be to round-trip the value through both yamldecode and yamlencode, thereby normalizing the input to be in the style that yamlencode produces:
values = [yamlencode(yamldecode(var.something))]
If you want to be more surgical about it and only normalize the line endings, you could use replace to remove the CR character from any CRLF pair:
values = [replace(var.something, "\r\n", "\n")]
The above solution assumes that the difference in whitespace is being caused by something in your module, such as if you're storing your Terraform configuration in a misconfigured Git repository that's rewriting LF to CRLF when you clone it on a Windows system. This config-based normalization can undo that sort of transformation so that the provider will always see the value in the same way.
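If a misconfigured Git setup is the source of the conversion, one common fix (assuming your configuration lives in Git; the file patterns below are illustrative) is to pin LF for the affected files in .gitattributes:

```
# .gitattributes — force LF line endings so Git never rewrites these to CRLF
*.tf    text eol=lf
*.yaml  text eol=lf
*.yml   text eol=lf
```

This addresses the problem at the source, so the normalization expressions above become unnecessary.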
This solution cannot address problems that are caused by the provider itself misbehaving. Unfortunately some providers have bugs where they will silently rewrite the stored values for some arguments during the "refresh" step, regardless of how you wrote it in the configuration. In that case the only recourse is to fix the provider, because that incorrect value is originating inside the provider itself and isn't under the control of the module author.
I would like to know if any of you have encountered the following issue.
While I'm trying to upgrade my EKS cluster to version 1.20 with the following variable:
eks_version = 1.20
Terraform converts 1.20 to 1.2. For some reason, it does not take the whole decimal number into account, resulting in an error:
Error: error updating EKS Cluster (stage) version: InvalidParameterException: unsupported Kubernetes version
P.S.
I tried to use the format function as well
eks_version = format("%.2s", 1.20)
With the same output.
Any ideas on how to make terraform take into account the whole decimal number?
Ervin's comment is correct.
The answer is to stop formatting it using this method.
The format spec %.2s says to truncate the input to a width of two characters.
If you want a specific version, remove the call to the format function.
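You can confirm this in terraform console: HCL parses the number literal before format ever runs (a sketch; prompt formatting may vary by Terraform version):

```hcl
# In `terraform console`:
#   > 1.20
#   1.2
#
# The trailing zero is gone before format() is applied, so no format
# spec can recover it; only a string value preserves it.
```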
Thank you guys for your comments!
Your comments helped me realize that I need to make this variable a string, not a number!
I had to change my variable definition to string:
variable "eks_version" {
  type    = string
  default = "1.20"
}
I have Terraform code that has to execute multiple times, meaning terraform init, plan, and apply run inside a for loop. One resource block has a count argument that is evaluated from a local variable. The first iteration works fine through terraform apply, but the second iteration fails at terraform plan with the following error.
The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.
The following block is where count is used
resource "null_resource" "test" {
  count = length(local.stacc)

  provisioner "local-exec" {
    command = "echo ${local.data[count.index]} >> myfile.txt"
  }
}
This local.stacc is built by some for-loop processing that results in a list, so length(local.stacc) is the number of items in that list.
What I don't understand is why the first iteration passes but the second iteration fails.
local.stacc can't depend on attributes of other resources that are only known after apply: its value must be known at plan time. As the error suggests, use -target to first apply and create the resources needed to evaluate local.stacc, and then apply again to create your null_resource "test".
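The two-phase workflow the error message describes could look like this (the -target address is hypothetical; point it at whatever resources local.stacc is derived from):

```shell
# Phase 1: create only the resources that local.stacc depends on.
terraform apply -target=azurerm_storage_account.source

# Phase 2: those attributes are now in state, so count can be evaluated.
terraform apply
```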
I've deployed my infra using Terraform and I noticed that I have some interesting information in Terraform's state file (terraform.tfstate) which I would like to extract. For example:
$ terraform state show 'packet_device.worker'
id = 6015bg2b-b8c4-4925-aad2-f0671d5d3b13
billing_cycle = hourly
created = 2015-12-17T00:06:56Z
facility = ewr1
...
which I would like to transform somehow to
$ terraform state show 'packet_device.worker.id'
6015bg2b-b8c4-4925-aad2-f0671d5d3b13
But adding the id at the end doesn't seem to work. Any suggestions how I can achieve this behaviour?
The terraform state show command retrieves all the attributes of a given resource; you can't fetch a single attribute with it, because its argument is a resource ADDRESS, which refers to the resource as a whole. This is documented at https://www.terraform.io/docs/internals/resource-addressing.html
What you can do is store the resource attribute in an output value and use the command:
terraform output {output-value-name}
Refer: https://www.terraform.io/docs/configuration/outputs.html
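For example, assuming the packet_device.worker resource from the question, you could declare in your root module:

```hcl
# Expose just the attribute you care about as an output value.
output "worker_id" {
  value = packet_device.worker.id
}
```

After the next terraform apply, terraform output worker_id prints only the id.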
You can utilize terraform show -json and jq to get a specific value out of a Terraform state file.
terraform show -json <state_file> | jq '.values.root_module.resources[] | select(.address=="<terraform_resource_name>") | .values.<property_name>'
Say you have a state file named terraform.tfstate, a Terraform resource packet_device.worker, and you want to get its id. Then it would be as follows:
terraform show -json terraform.tfstate | jq '.values.root_module.resources[] | select(.address=="packet_device.worker") | .values.id'
terraform.tfstate can also be omitted, since it is the default name for a state file.
The primary way to export information from a Terraform configuration is to declare Output Values in your root module. You can then access them using terraform output once the apply has completed. If you need that information in a machine-readable way, you can alternatively run terraform output -json from the consuming program and parse the output as JSON.
If you are in an unusual situation where you need programmatic access to all values in the state (for example, if you were implementing some sort of generic Terraform state visualization tool) then you can instead use terraform show -json, which will print out all of the data from the state in a JSON format.
If you are accessing only specific values, perhaps to integrate with some other system in an automation solution, I'd recommend using explicit Output Values because then it's explicit to future maintainers what the interface with the caller is, and so they are less likely to accidentally break the caller by e.g. refactoring the packet_device.worker resource into a child module, which would cause it to appear in a different place in the state. The usual assumption is that the resources inside a module are an implementation detail of that module and thus that you can safely refactor them as needed as long as the output values remain unchanged.
If you want to get the exact value and are willing to install jq, the other answers here are great!
If you're looking for a quick answer to manually copy/paste, etc., piping to grep does the trick.
ex:
terraform state show 'packet_device.worker' | grep "id"
which would show the relevant line(s), like:
id = 6015bg2b-b8c4-4925-aad2-f0671d5d3b13
Using Terraform to configure vSphere VMs, I'd like to be able to provide an IP address (and gateway and netmask) in the tfvars file, but have the VM default to using DHCP if the values are not provided. I know it will use DHCP if the 'vsphere_virtual_machine' resource's 'customize' block contains an empty 'network_interface' block. I was hoping that by giving a default value of "" to the settings in the variables.tf file I could set values if present and use DHCP if not, but I get an error stating:
Error: module.vm.vsphere_virtual_machine.node:
clone.0.customize.0.network_interface.0.ipv4_netmask: cannot parse ''
as int: strconv.ParseInt: parsing "": invalid syntax
So putting in a blank string won't parse, and it won't just leave the whole network_interface blank if the values are blank.
I can't use COUNT on a subresource, so the only thing I've come up with so far is to put two entire, nearly identical, 'vsphere_virtual_machine' resources into my module and then put COUNT statements on both so only one gets created, depending on whether the network settings are provided or not, but man, does that seem ugly...?
I think you are in luck. I've been waiting for this exact same problem to be solved for almost a year now.
Lo and behold, Terraform v0.12.0-alpha1:
They now support dynamic block definitions instead of just static ones
Enjoy, while I go throw away a couple hundred lines' worth of hacks just like the one you mentioned...
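For the DHCP-or-static pattern from the question, a dynamic block could look roughly like this (a sketch only: the variable names are assumptions, and the attribute names are from memory of the vsphere provider, so check them against its documentation):

```hcl
resource "vsphere_virtual_machine" "node" {
  # ... other arguments ...

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    customize {
      # When var.ipv4_address is empty, emit an empty network_interface
      # block, which the provider treats as "use DHCP".
      dynamic "network_interface" {
        for_each = var.ipv4_address == "" ? [1] : []
        content {}
      }

      # When an address is provided, emit a fully configured block instead.
      dynamic "network_interface" {
        for_each = var.ipv4_address != "" ? [var.ipv4_address] : []
        content {
          ipv4_address = network_interface.value
          ipv4_netmask = var.ipv4_netmask
        }
      }
    }
  }
}
```

Exactly one of the two dynamic blocks produces content, so you no longer need two near-identical resources toggled by count.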