Does Terraform perform interpolation in provider declarations? - terraform

I am trying to declare the following Terraform provider:
provider "mysql" {
  endpoint = "${aws_db_instance.main.endpoint}:3306"
  username = "root"
  password = "root"
}
I get the following error:
Error refreshing state: 1 error(s) occurred:
* dial tcp: lookup ${aws_db_instance.main.endpoint}: invalid domain name
It seems that Terraform is not performing interpolation on my endpoint string, yet I don't see anything in the documentation about this -- what gives?

Yes, it does. There's an example in the docs at https://www.terraform.io/docs/providers/mysql/
# Configure the MySQL provider based on the outcome of
# creating the aws_db_instance.
provider "mysql" {
  endpoint = "${aws_db_instance.default.endpoint}"
  username = "${aws_db_instance.default.username}"
  password = "${aws_db_instance.default.password}"
}

I ran into a similar set of error messages ("connect failed," "invalid domain lookup") and looked into this a bit. I hope this helps you or someone else working across cloud and database providers in Terraform.
This seems to come down to the MySQL provider attempting to establish a database connection as soon as it's initialized, which could be a problem if you're trying to build a database server and configure the database / grants on it as part of the same Terraform run. Providers get initialized based on Terraform finding a resource owned by that provider in your Terraform code, and since this connection attempt happens when the provider gets initialized, you can't work around this with -target=<SPECIFIC RESOURCE>.
The workarounds I can think of would be to have a codebase for setting up the database server and a different codebase for setting up the database grants and suchlike ... or to have Terraform kick off a script that does that work for you (with dynamic parameters, of course!). Either way, you're effectively removing mysql_* resources from your initial Terraform run and that's what fixes this.
There are a couple of code changes that probably need to happen here - the Terraform MySQL provider would need to delay connecting to the database until Terraform tells it to run an operation on a resource, and it may be necessary to look at how Terraform handles dependencies across providers. I tried hacking in deferred connection logic just for the mysql_database resource to see if that solved all my problems and Terraform still complained about a dependency loop in the graph.
You can track the MySQL provider issue here:
https://github.com/terraform-providers/terraform-provider-mysql/issues/2
And the comments from before providers were split into their own releasable codebases:
https://github.com/hashicorp/terraform/issues/5687
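As an added example (not code from the question or from either answer), here is the two-codebase workaround from the answer above written in Terraform 0.12+ syntax. The first root module creates the RDS instance and exposes its connection details as outputs; the second root module, applied only after the first one has state, reads those outputs through terraform_remote_state, so everything the mysql provider needs exists before it is initialised. The S3 bucket, state key, identifiers, host pattern and grant scope are assumptions. One caveat worth noting: aws_db_instance.endpoint already contains the port, so appending ":3306" to it as in the question yields host:port:3306; use the address attribute if you need to add a port yourself.

```hcl
# --- codebase 1: rds/main.tf (applied first) ---
terraform {
  backend "s3" {
    bucket = "example-tfstate" # assumed bucket
    key    = "rds/terraform.tfstate"
    region = "eu-west-1"
  }
}

variable "db_master_password" {
  type      = string
  sensitive = true # keep the master password out of plan output (Terraform 0.14+)
}

resource "aws_db_instance" "main" {
  identifier          = "app-db" # assumed name
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "admin"
  password            = var.db_master_password
  skip_final_snapshot = true
}

output "db_endpoint" {
  value = aws_db_instance.main.endpoint # already "host:port"; do not append ":3306"
}

output "db_master_username" {
  value = aws_db_instance.main.username
}

# --- codebase 2: mysql-grants/main.tf (applied after codebase 1 has state) ---
variable "db_master_password" {
  type      = string
  sensitive = true
}

variable "app_password" {
  type      = string
  sensitive = true
}

data "terraform_remote_state" "rds" {
  backend = "s3"
  config = {
    bucket = "example-tfstate" # same assumed bucket as codebase 1
    key    = "rds/terraform.tfstate"
    region = "eu-west-1"
  }
}

# The provider opens a connection as soon as it is initialised, so its
# arguments must come from state or variables, never from resources
# created in this same run.
provider "mysql" {
  endpoint = data.terraform_remote_state.rds.outputs.db_endpoint
  username = data.terraform_remote_state.rds.outputs.db_master_username
  password = var.db_master_password
}

resource "mysql_database" "app" {
  name = "app" # assumed database name
}

# Application account scoped to one database with only the privileges it
# needs, instead of handing the master credentials to the application.
resource "mysql_user" "app" {
  user               = "app"
  host               = "10.0.%" # assumed application subnet
  plaintext_password = var.app_password
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = mysql_database.app.name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
```

Applying the second module before the first has produced state fails during init with the same kind of dial error seen in the question; that is the expected behaviour of the provider, not a bug in the split.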

Related

could not build dsn for snowflake connection: no authentication method provided

I am following this terraforming Snowflake tutorial: https://quickstarts.snowflake.com/guide/terraforming_snowflake/index.html?index=..%2F..index#6
When I run the command terraform plan in my project folder, it says:
provider.snowflake.account
Enter a value:
and then
provider.snowflake.username
Enter a value: MYUSERNAME
Which value do I have to enter? I tried entering my snowflake instance link as the account value:
dc70490.eu-central-1.snowflakecomputing.com
as well as dc70490 as the account
and then my username MYUSERNAME as the username value.
However, it gives me an error that:
│ Error: could not build dsn for snowflake connection: no authentication method provided
│
│ with provider["registry.terraform.io/chanzuckerberg/snowflake"],
│ on <input-prompt> line 1:
│ (source code not available)
I also tried tf-snow as the username, since we exported this in a previous step of the tutorial.
The account name should be without snowflakecomputing.com and the username need not be in caps.
https://quickstarts.snowflake.com/guide/devops_dcm_terraform_github/index.html?index=..%2F..index#3
Edit:
This is the Terraform configuration I used to connect to Snowflake successfully.
provider "snowflake" {
  alias            = "sys_admin"
  role             = "SYSADMIN"
  region           = "EU-CENTRAL-1"
  account          = "abcd123"
  private_key_path = "<path to the key>"
  username         = "tf-snow"
}
I've been testing the Snowflake provider with version 0.25.10.
Going from 0.25.10 to 0.25.11, 0.25.11 was able to see resources the previous version (0.25.10) couldn't. The current version is 0.26.33.
I'm using Terraform 1.1.2. This is all important, because, along the way I've seen many strange errors depending on the combination.
If in doubt, try 0.25.10 first. I used:
provider "snowflake" {
  account          = "zx12345"
  username         = "A_SUITABLE_USER"
  region           = "eu-west-1"
  private_key_path = "./my_private_key.p8"
}
I created a Snowflake user with key-pair authentication. Look at that private key path (not for production, kids). When I put it in a suitable location:
~/.ssh/my_private_tf_key.p8
This was the error:
Terraform v1.1.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
Error: could not build dsn for snowflake connection: Private Key file could not be read: Could not read private key: open /home/terraform/keys/tf_london_admin_key.p8: no such file or directory
  with provider["registry.terraform.io/chanzuckerberg/snowflake"],
  on main.tf line 27, in provider "snowflake":
  27: provider "snowflake" {
Why highlight this? Because, I have no idea how it decided to use that dir, there's not even a /home/terraform/ dir on my system. Completely made up.
So let's just say, I'm not sure this provider is ready for prime time!
Day wasted, (YMMV).
I hope the Chan Zuckerberg combo keep supporting this going forward; I'll open a few issues on GitHub. I'm sure when all the issues are ironed out it'll be good, but as I said, probably not for production though.
There is a mistake in the Snowflake tutorial, the path of the ssh key should not be :
export SNOWFLAKE_PRIVATE_KEY_PATH="~/.ssh/snowflake_tf_snow_key"
but
export SNOWFLAKE_PRIVATE_KEY_PATH="~/.ssh/snowflake_tf_snow_key.p8"
Please note that you should run terraform plan to make it work and not sudo terraform plan, otherwise it will look for an SSH key in /root/.ssh/ instead of $HOME/.ssh/ and so the whole process won't work.
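As an added example (not from the tutorial or from any of the answers above), the following configuration resolves the key path before the provider sees it and fails at plan time with a readable message when the file is missing, is not a PEM private key, or lacks the .p8 suffix. That covers the wrong-suffix problem and the sudo/$HOME problem described in the last two answers, and the account-locator confusion from the question. The provider attributes are the ones already used in the answers (account, region, username, role, private_key_path); the variable names, defaults and the demo database name are assumptions, and the key is assumed to be PEM-encoded PKCS#8 as produced by the tutorial.

```hcl
terraform {
  required_providers {
    snowflake = {
      source  = "chanzuckerberg/snowflake"
      version = "~> 0.25" # the provider series the answers above were tested with
    }
  }
}

variable "snowflake_account" {
  type        = string
  description = "Account locator only, e.g. abcd123, not the *.snowflakecomputing.com host"

  validation {
    condition     = !can(regex("snowflakecomputing\\.com", var.snowflake_account))
    error_message = "Use the bare account locator (e.g. abcd123), not the snowflakecomputing.com hostname."
  }
}

variable "snowflake_region" {
  type    = string
  default = "eu-central-1" # assumed
}

variable "snowflake_username" {
  type    = string
  default = "tf-snow" # the user created in the tutorial
}

variable "snowflake_private_key_path" {
  type        = string
  description = "PEM-encoded PKCS#8 private key registered on the Snowflake user"
  default     = "~/.ssh/snowflake_tf_snow_key.p8" # assumed; override with TF_VAR_snowflake_private_key_path

  validation {
    # pathexpand() resolves "~" against $HOME of the user running terraform, so
    # under sudo it resolves to /root/.ssh and this check fails with a clear
    # message instead of the provider's "could not build dsn" error.
    # can() returns false if file() cannot read the path, so a missing file is
    # reported the same way as a file that is not a PEM private key.
    condition = (
      can(regex("\\.p8$", var.snowflake_private_key_path)) &&
      can(regex("-----BEGIN (ENCRYPTED )?PRIVATE KEY-----", file(pathexpand(var.snowflake_private_key_path))))
    )
    error_message = "Private key must be an existing PEM PKCS#8 file ending in .p8; check the path and do not run terraform under sudo (it changes $HOME)."
  }
}

provider "snowflake" {
  alias            = "sys_admin"
  role             = "SYSADMIN"
  account          = var.snowflake_account
  region           = var.snowflake_region
  username         = var.snowflake_username
  private_key_path = pathexpand(var.snowflake_private_key_path) # hand the provider an absolute path
}

# A resource that uses the aliased provider; the database name is an assumption.
resource "snowflake_database" "demo" {
  provider = snowflake.sys_admin
  name     = "DEMO_DB"
}
```

Running terraform plan with a missing or mis-named key now stops at variable validation rather than at provider configuration, which makes the difference between "wrong path", "wrong file" and "wrong user account ($HOME)" visible before any connection attempt.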

Terraform with AWS provider unable to create CodeBuild

I am trying to create a AWS CodeBuild using Terraform.
resource "aws_codebuild_project" "cicd_codebuild" {
  name         = "cicd-${var.profile}-build"
  description  = "cicd ${var.profile} CodeBuild"
  service_role = "${aws_iam_role.cicd_role.arn}"

  source {
    type            = "GITHUB_ENTERPRISE"
    location        = "https://git.xxx.com/yyy/zzz.git"
    git_clone_depth = 0
    buildspec       = "NO_SOURCE"
  }

  environment {
    compute_type                = "BUILD_GENERAL1_MEDIUM"
    image                       = "aws/codebuild/windows-base:2019-1.0"
    type                        = "WINDOWS_SERVER_2019_CONTAINER"
    image_pull_credentials_type = "CODEBUILD"
  }

  artifacts {
    type = "NO_ARTIFACTS"
  }
}
Upon terraform apply I get error:
Error: aws_codebuild_project.cicd_codebuild: expected environment.0.type to be one of [LINUX_CONTAINER LINUX_GPU_CONTAINER WINDOWS_CONTAINER ARM_CONTAINER], got WINDOWS_SERVER_2019_CONTAINER
And when I change value of environment.0.type = "WINDOWS_CONTAINER" I get below error:
Error: Error applying plan:
1 error occurred:
* aws_codebuild_project.cicd_codebuild: 1 error occurred:
* aws_codebuild_project.cicd_codebuild: Error creating CodeBuild project: InvalidInputException: The environment type WINDOWS_CONTAINER is deprecated for new projects or existing project environment updates. Please consider using Windows Server 2019 instead.
I found on GitHub that this issue has been addressed in next versions. So, I know upgrading provider version can solve this but do we have any workaround to fix this issue in same version of Terraform and Provider.
Thanks.
Terraform has plan time validation on many resource parameters that allows for catching where you are passing an invalid parameter before you get to the point of trying to apply it.
Normally this is beneficial, but if you are not able to keep up to date with the provider versions it means that the list of allowed values can get out of date with what is actually allowed by the backing service the provider is talking to.
In this specific case a pull request added the WINDOWS_SERVER_2019_CONTAINER as a plan time validation option after AWS added that functionality in July 2020.
Unfortunately for you, this work was merged and released as part of the v3.20.0 release of the AWS provider and the v3 releases only support Terraform 0.12 and up:
BREAKING CHANGES
provider: New versions of the provider can only be automatically installed on Terraform 0.12 and later (#14143)
If you want to be able to use Windows containers in CodeBuild you either need to upgrade to a more recent version of Terraform and the AWS provider or you need to use a different tool for creating the CodeBuild project.
One potential workaround here is to use CloudFormation to create the CodeBuild project which you could run via Terraform using the aws_cloudformation_stack resource.
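As an added example (not from the question or the answer), here is that CloudFormation workaround in Terraform 0.11 syntax, since the point of the answer is that the asker cannot move to the 0.12-only provider release. The environment type the provider's validation rejects is passed straight through to CloudFormation, which accepts it. The role and repository URL are the placeholders from the question; the stack name, the shallow clone depth and the buildspec path are assumptions (the question's buildspec value "NO_SOURCE" is a source type, not a buildspec, so a file path in the repository is used here instead).

```hcl
# Terraform 0.11 syntax. The CodeBuild project is created by CloudFormation, so the
# provider's out-of-date plan-time validation of environment.type never runs.
resource "aws_cloudformation_stack" "cicd_codebuild" {
  name = "cicd-${var.profile}-codebuild" # assumed stack name

  template_body = <<EOF
AWSTemplateFormatVersion: "2010-09-09"
Description: CodeBuild project on Windows Server 2019, created via CloudFormation
Resources:
  CicdCodeBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: cicd-${var.profile}-build
      Description: cicd ${var.profile} CodeBuild
      ServiceRole: ${aws_iam_role.cicd_role.arn}
      Source:
        Type: GITHUB_ENTERPRISE
        Location: https://git.xxx.com/yyy/zzz.git
        GitCloneDepth: 1              # assumed: shallow clone
        BuildSpec: buildspec.yml      # assumed: buildspec file in the repository
      Environment:
        ComputeType: BUILD_GENERAL1_MEDIUM
        Image: aws/codebuild/windows-base:2019-1.0
        Type: WINDOWS_SERVER_2019_CONTAINER
        ImagePullCredentialsType: CODEBUILD
      Artifacts:
        Type: NO_ARTIFACTS
Outputs:
  ProjectArn:
    Value: !GetAtt CicdCodeBuild.Arn
EOF
}

# Expose the project ARN so the rest of the Terraform code can reference it
# exactly as it would have referenced aws_codebuild_project.cicd_codebuild.arn.
output "codebuild_project_arn" {
  value = "${aws_cloudformation_stack.cicd_codebuild.outputs["ProjectArn"]}"
}
```

The trade-off is that drift detection and plan output for the project now happen at the CloudFormation level rather than attribute by attribute in Terraform, and the existing aws_codebuild_project resource has to be removed from the configuration (and from state, if it was ever created) before the stack is applied, to avoid two definitions of the same project name.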

Terraform backend empty state

I am experiencing a weird behaviour with terraform. I have been working on an infra. I have a backend state configured to store my state file in a storage account in Azure. Until yesterday everything was fine; this morning when I tried to update my infra, the output from terraform plan was weird, as it's trying to create all the resources as new, and when I checked my local tfstate it was empty.
I tried terraform pull and terraform refresh but nothing, still same result. I checked my remote state and I have all the resources still declared.
So I went for plan B: copy and paste my remote state into my local project and run terraform once again, but nothing; it seems that terraform is ignoring my local terraform state and doesn't want to pull the remote one.
EDIT:
this is the structure of my terraform backend:
terraform {
  backend "azurerm" {
    resource_group_name  = "<resource-group-name>"
    storage_account_name = "<storage-name>"
    container_name       = "<container-name>"
    key                  = "terraform.tfstate"
  }
}
The weird thing also is that I just used terraform to create 8 resources for another project, and it did create everything and updated my backend state without any issue. The problem is only with the old resources.
Any help please?
If you run terraform workspace show, are you in the default workspace?
If you have the tfstate locally but you're not on the correct workspace, terraform will ignore it: https://www.terraform.io/docs/language/state/workspaces.html#using-workspaces
Also, is it possible to see your backend file structure?
EDIT:
I don't know why it ignores your remote state, but I think that your problem is that when you run terraform refresh it ignores your local file because you have a remote config:
Usage: terraform refresh [options]
-state=path - Path to read and write the state file to. Defaults to "terraform.tfstate". Ignored when remote state is used.
-state-out=path - Path to write updated state file. By default, the -state path will be used. Ignored when remote state is used.
Is it possible to see the output of your terraform state pull?

Error: Failed to instantiate provider "aws" to obtain schema: timeout while waiting for plugin to start

I am getting this error when applying or planning terraform:
Error: Failed to instantiate provider "aws" to obtain schema: timeout while waiting for plugin to start
Terraform init works.
Last time I was able to fix it by restarting my computer but now, it didn't work.
I tried very simple code to test but didn't work.
I had the same problem and solved it by adding a new rule on the security group that allows connecting to my EKS on AWS. In my case my IP changed and I forgot to add it; for this reason I got the error.

Terraform Throttling Route53

Did anyone experience issues with Terraform being throttled when using it with AWS Route53 records and being VERY slow?
I have enabled DEBUG mode and getting this:
2018-11-30T14:35:08.467Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: 2018/11/30 14:35:08 [DEBUG] [aws-sdk-go] <?xml version="1.0"?>
2018-11-30T14:35:08.467Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"><Error><Type>Sender</Type><Code>Throttling</Code><Message>Rate exceeded</Message></Error><RequestId>REQUEST_ID</RequestId></ErrorResponse>
2018-11-30T14:35:08.518Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: 2018/11/30 14:35:08 [DEBUG] [aws-sdk-go] DEBUG: Validate Response route53/ListResourceRecordSets failed, will retry, error Throttling: Rate exceeded
Terraform takes >1h just to do simple Plan, something which normally takes <5 mins.
My infrastructure is organized like this:
alb.tf:
module "ALB" {
  source = "modules/alb"
}
modules/alb/alb.tf:
resource "aws_alb" "ALB" {
  name    = "alb"
  subnets = var.subnets
  ...
}
modules/alb/dns.tf:
resource "aws_route53_record" "r53" {
  count   = "${length(var.cnames_generic_dns)}"
  zone_id = "HOSTED_ZONE_ID"
  name    = "${element(var.cnames_generic_dns, count.index)}.${var.environment}.${var.domain}"
  type    = "A"

  alias {
    name                   = "dualstack.${aws_alb.ALB.dns_name}"
    zone_id                = "${aws_alb.ALB.zone_id}"
    evaluate_target_health = false
  }
}
modules/alb/variables.tf:
variable "cnames_generic_dns" {
  type = "list"
  default = [
    "hostname1",
    "hostname2",
    "hostname3",
    "hostname4",
    "hostname5",
    "hostname6",
    "hostname7",
    ...
    "hostname25"
  ]
}
So I am using modules to configure Terraform, and inside modules there are resources (ALB, DNS..).
However, it looks like Terraform is describing every single DNS resource (CNAME and A records, of which I have ~1000) in a hosted zone, which is causing it to throttle?
Terraform v0.10.7
Terraform AWS provider version = "~> 1.36.0"
That's a lot of DNS records! And partly the reason why the AWS API is throttling you.
First, I'd recommend upgrading your AWS provider. v1.36 is fairly old and there have been more than a few bug fixes since.
(Next, but not absolutely necessary, is to use TF v0.11.x if possible.)
In your AWS Provider block, increase max_retries to at least 10 and experiment with higher values.
Then, use Terraform's --parallelism flag to limit TF's concurrency rate. Try setting that to 5 for starters.
Last, enable Terraform's debug mode to see if it gives you any more useful info.
Hope this helps!
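As an added example (not the asker's module or the answer's code), this puts the answer's max_retries advice into a provider block and, going one step beyond the answer, switches the record set from count over a list to for_each over a set, so adding or removing a hostname only touches that one record instead of re-indexing every record after it. Written in Terraform 0.12+ syntax; variable and resource names follow the question, the region, the retry value and the default hostnames are assumptions, and aws_alb.ALB is the load balancer from the question's alb.tf.

```hcl
provider "aws" {
  region      = "eu-west-1" # assumed
  max_retries = 10          # the answer's advice: let the SDK back off and retry "Throttling: Rate exceeded"
}

variable "hosted_zone_id" {
  type    = string
  default = "HOSTED_ZONE_ID" # placeholder, as in the question
}

variable "environment" { type = string }
variable "domain"      { type = string }

variable "cnames_generic_dns" {
  type    = set(string) # a set, not a list: records are keyed by hostname, not by list position
  default = ["hostname1", "hostname2", "hostname3"] # assumed subset of the question's 25 names
}

# aws_alb.ALB is the load balancer defined in the question's modules/alb/alb.tf.
resource "aws_route53_record" "r53" {
  for_each = var.cnames_generic_dns

  zone_id = var.hosted_zone_id
  name    = "${each.value}.${var.environment}.${var.domain}"
  type    = "A"

  alias {
    name                   = "dualstack.${aws_alb.ALB.dns_name}"
    zone_id                = aws_alb.ALB.zone_id
    evaluate_target_health = false
  }
}
```

Two operational notes. Records already created with count live in state at addresses like aws_route53_record.r53[3]; moving them to the for_each addresses (aws_route53_record.r53["hostname4"]) needs one terraform state mv per record, otherwise the plan destroys and recreates all of them, which is the heavy API churn the asker eventually went through deliberately. And the concurrency flag the answer mentions is applied per run, e.g. terraform plan -parallelism=5, not in the configuration.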
The problem is solved by performing the following actions:
Since we restructured DNS records by adding one resource and then variables to iterate through them, this probably caused Terraform to constantly query all DNS records.
We decided to let Terraform finish the refresh (it took 4h with lots of throttling).
We manually deleted the DNS records from R53 for the workspace in which we were doing this.
We commented out the Terraform DNS resources to let it also delete them from the state files.
We uncommented the Terraform DNS resources and re-ran it so it created them again.
Running terraform plan went fine again.
Looks like throttling with Terraform AWS Route53 is completely resolved after upgrading to newer AWS provider. We have updated TF AWS provider to 1.54.0 like this in our init.tf :
version = "~> 1.54.0"
Here are more details about the issue and suggestions from Hashicorp engineers:
https://github.com/terraform-providers/terraform-provider-aws/issues/7056
