For some reason Terraform doesn't show which resource is producing the error; it just outputs the errors. Is there a way I can make Terraform show which resource is producing the error?
I'm using Terraform v0.12.21.
Terraform plan doesn't produce any errors. The error is during the apply command.
All the resources are in different .tf files, so I have to go through them one by one to figure out which one didn't run and is producing the error.
In the output below, the lb_listener has completed, so I'm not sure which resource was next and could be producing the error.
module.Tester_vpc.aws_lb_target_group.nlb_tg_port_80[0]: Creating...
module.Tester_vpc.aws_lb_target_group.nlb_tg_port_80[0]: Creation complete after 1s [id=arn:aws:elasticloadbalancing:ap-south-1:123456:targetgroup/nlbPort80/123456]
module.Tester_vpc.aws_lb_listener.listener[0]: Creating...
module.Tester_vpc.aws_lb_listener.listener[0]: Creation complete after 1s [id=arn:aws:elasticloadbalancing:ap-south-1:123456:listener/net/myNLB/123456/8d51be081230319c]
Error: no matching SecurityGroup found
Error: Your query returned no results. Please change your search criteria and try again.
In this case, I'm fairly sure it's not a resource failing that is causing the error.
Error: Your query returned no results. Please change your search criteria and try again.
This is the error message that a data source gives when it fails. Do you have an aws_security_group data source that's failing?
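For example, a lookup along these lines (the filter value is made up) fails with that kind of message when nothing in the account matches:
data "aws_security_group" "selected" {
  filter {
    name   = "group-name"
    values = ["my-app-sg"]   # hypothetical name - fails if no security group matches
  }
}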
As for your actual question of how to troubleshoot these sorts of errors: I always reach for TF_LOG (see https://www.terraform.io/docs/internals/debugging.html).
You can set TF_LOG as an environment variable with the value DEBUG (or TRACE) to see detailed debugging information. Often this will include output showing exactly what is failing.
Here's an example:
$ TF_LOG=DEBUG terraform apply
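If the output is too long to read comfortably in the terminal, you can also write it to a file with TF_LOG_PATH:
$ TF_LOG=TRACE TF_LOG_PATH=./terraform-debug.log terraform apply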
Related
I'm using ClassSerializerInterceptor in my NestJS application to apply instanceToPlain to objects I return from incoming requests. I also use Firestore as my main database. Some of my entities contain a DocumentReference that I want to return directly, without applying @Transform to it every time. When I do so, I get the following error in my console:
ERROR [ExceptionsHandler] Value for argument "documentPath" is not a
valid resource path. Path must be a non-empty string. Error: Value for
argument "documentPath" is not a valid resource path. Path must be a
non-empty string.
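For reference, this is roughly the per-field workaround I'm trying to avoid repeating (a sketch; the entity and field names are placeholders, and the import path may differ if you use firebase-admin):
import { Transform } from 'class-transformer';
import { DocumentReference } from '@google-cloud/firestore';

export class OrderEntity {
  // serialize the reference as its document path instead of the raw object
  @Transform(({ value }) => (value instanceof DocumentReference ? value.path : value), {
    toPlainOnly: true,
  })
  customerRef: DocumentReference;
}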
I tried to fix this myself and found the package class-transformer-firestore, which seems to be a potential solution, but it uses a prototype and has no readme at all, so I have no idea how to use it. I tried just installing it, with no success.
Maybe someone has faced the same issue and has a solution.
Please check the original repo again; it seems the author has updated it 👌🏻
I am using terraform version 0.11.8
and tried this POC https://github.com/salizzar/terraform-aws-docker
When I do terraform init, it throws the following error.
I am a novice with Terraform. I googled a lot and tried referring to the Terraform module registry to get rid of this error, but in vain.
Can someone please run this POC and point what needs to be changed?
https://github.com/salizzar/terraform-aws-docker/blob/master/main.tf
ERROR:
[root@localhost test]# terraform init
There are some problems with the configuration, described below.
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
Error: Error loading /home/tottally/main.tf: Invalid dot index found: 'var.aws_security_group.sg_count'. Values in maps and lists can be referenced using square bracket indexing, like: 'var.mymap["key"]' or 'var.mylist[1]'. in:
${var.aws_security_group.sg_count}
I think Terraform is not happy with this line:
count = "${var.aws_security_group.sg_count}"
Instead of using that dot-index notation, try square-bracket indexing with a quoted key, as Terraform itself suggests in the error message:
count = "${var.aws_security_group["sg_count"]}"
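For completeness, a minimal sketch of how that fits together in 0.11 syntax (the map contents and the null_resource are illustrative assumptions, not taken from the repo):
variable "aws_security_group" {
  type = "map"

  default = {
    sg_count = 1
  }
}

resource "null_resource" "example" {
  # hypothetical resource - the point is the quoted map key lookup
  count = "${var.aws_security_group["sg_count"]}"
}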
By the way, this repo is really old; you can find better, up-to-date examples in the public Terraform Registry.
I am facing an issue that I cannot figure out how to resolve.
I created a test plan that needs to connect to a DB and count the results.
The problem is that JMeter does not perform any validation afterwards. I created a JSR223 element in the JDBC Request and just want to print the results, but JMeter does not print them.
I created another sampler to print the DB results, and JMeter still does not print them.
JMeter just skips these steps.
In the Results Tree I saw that it connects to the DB and fails in the assertion, but why does it skip the other steps and just move on to the Debug Sampler?
I cannot print the results and cannot do any debugging, since it is just a black box.
Can someone please advise?
In the attached screenshot you can see, highlighted in yellow, all the steps that JMeter did not perform; they simply do not appear in the Results Tree.
Get used to checking the jmeter.log file; it normally contains information about what went wrong, and you should be able to figure out the root cause by looking into it. If you cannot, update your question with the jmeter.log contents (at least the essential parts).
My expectation is that your ${Conv_sense} variable is not defined (or cannot be cast to an Integer). Double-check whether it is defined using the Debug Sampler and View Results Tree listener combination.
Also, don't refer to JMeter variables like ${Conv_sense} in the Groovy script body; use vars.get('Conv_sense') instead, otherwise it might conflict with Groovy's GString templates, resulting in undefined behavior.
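As a minimal JSR223 (Groovy) sketch, assuming Conv_sense is the variable you are checking (the rest is illustrative):
// read the JMeter variable without GString interpolation
def convSense = vars.get('Conv_sense')
log.info('Conv_sense = ' + convSense)   // written to jmeter.log
if (convSense == null || !convSense.isInteger()) {
    log.error('Conv_sense is not defined or is not a number')
} else {
    int expected = convSense.toInteger()
    // ... compare expected against the JDBC result here
}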
I have an experiment in Azure. When I launch the run I obtain:
If you look at the top right, you can see that there is an error, but no module shows it.
If I run the single module where (in this simple case) I know that the error has to be, I can highlight the specific error.
Is it a bug or am I doing something wrong?
I had a similar error once when, for some reason, a module (not created by me) was simply under another one. So the error was shown, but I couldn't see that module.
When we ran our deploy from Octopus Deploy, it failed on the last step. I will include the error message, but the main problem is that it seems to be deleting our transform files. We have the checkbox set to automatically run configuration transform files, but it results in them being deleted. Has anyone else run into this problem, or does anyone know how to fix it?
The error message is:
Set-Location : Cannot process argument because the value of argument "path" is null. Change the value of argument "path" to a non-null value.
When we went to check the files from there, we noticed they were being deleted.
Octopus deletes the config transforms after executing them by design. That is not causing your deployment to fail.
Your release is failing because you are calling the PowerShell cmdlet Set-Location and passing a null value to its Path parameter.
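A quick way to confirm which value is empty is to guard the call before it runs. A minimal sketch, assuming a hypothetical Octopus variable name; substitute whatever your script actually passes to Set-Location:
# 'MyApp.DeployPath' is a made-up variable name - use the one your step really defines
$deployPath = $OctopusParameters["MyApp.DeployPath"]
if ([string]::IsNullOrWhiteSpace($deployPath)) {
    throw "Deploy path is empty - Set-Location would receive a null Path value"
}
Set-Location -Path $deployPath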