Terraform Key Vault delete protection not working as expected - Azure

When a configuration like this is used:
module "key_vault" {
  for_each     = var.STAGES
  resourceName = "kv-${each.value}-${local.resource_name_suffix}"
  ...
  purgeProtectionEnabled = true
}
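For context, a custom module like this presumably wraps the azurerm_key_vault resource. A minimal sketch of what it might map those inputs to; the resource arguments are real, but the variable names and retention value here are assumptions:
resource "azurerm_key_vault" "this" {
  name                       = var.resourceName
  location                   = var.location
  resource_group_name        = var.resource_group_name
  tenant_id                  = var.tenant_id
  sku_name                   = "standard"
  purge_protection_enabled   = var.purgeProtectionEnabled  # cannot be disabled once enabled
  soft_delete_retention_days = 7                           # deleted secrets stay recoverable for this window
}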
Then, when the secrets need to be deleted for some reason, recovering them fails with this error:
Error: keyvault.BaseClient#RecoverDeletedSecret: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="Conflict" Message="Secret AzureAiApiKey is currently being deleted." InnerError={"code":"ObjectIsBeingDeleted"}
So how can I make RecoverDeletedSecret wait, or trigger only once the delete has completed?

According to this GitHub issue, this should already be fixed in version 2.49.
provider "azurerm" {
version = "~> 2.49.0"
}
Please always try to use the latest azurerm provider. I've seen many errors I couldn't wrap my head around, and more often than not they were already fixed in a later version of the provider.
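If upgrading alone doesn't resolve it, the azurerm 2.x features block also exposes Key Vault soft-delete handling. A minimal sketch, assuming flags available in recent 2.x releases; whether you want these behaviours is a policy decision:
provider "azurerm" {
  version = "~> 2.49.0"
  features {
    key_vault {
      # recover soft-deleted vaults on create instead of failing with a conflict
      recover_soft_deleted_key_vaults = true
      # purge soft-deleted vaults on destroy so re-creates do not collide
      purge_soft_delete_on_destroy = true
    }
  }
}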

Related

GRPCProvider.GetProviderSchema: Error while dialing tcp 127.0.0.1:10000

While applying a Terraform plan, we get the error logs shown in the panic output box below. The same configuration had been working well across multiple terraform plan, apply, and destroy runs.
We are unable to make sense of this error. We have searched Stack Overflow and the GitHub issue forums, but nothing matched what we are facing, even approximately.
Terraform version: 1.1.8
Azure Provider Plugin Version: 3.3.0
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.3.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "abcd"
    storage_account_name = "tfstate30303"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    access_key           = "abcdsample....."
  }
}
provider "azurerm" {
  features {}
  skip_provider_registration = true
}
Error:
Failed to load plugin schemas
Error while loading schemas for plugin components: Failed to obtain provider schema: Could not load the schema for provider registry.terraform.io/hashicorp/azurerm: Plugin did not respond. The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).GetProviderSchema call. The plugin logs might contain more details.
TF_LOG=TRACE error log:
[ERROR] plugin.(*GRPCProvider).GetProviderSchema: error="rpc error: code = Unavailable desc = connection error: desc = "transport: error while dialing: dial tcp 127.0.0.1:10000: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
[WARN] plugin failed to exit gracefully
Note: The Azure region is Central India. This was working fine until yesterday. We are working in an air-gapped environment, so plugins are downloaded manually and placed in the plugin directory of the code folder.
Please let me know if there is any mistake on my end. I am unable to make sense of this error and have never faced it with Terraform before.
Thank you, CK5. You got it working by simply rebooting the system. Posting this as the solution, with an RCA of why you were getting the error.
RCA: You may earlier have installed a different plugin for the same Terraform configuration and then deleted it; now that you are using a specific version of the provider plugin, that leftover can cause errors when communicating with Azure, even though the plugin is installed.
So if you get this kind of error, make sure to reboot your system and run the VS Code editor as administrator, so that the plugin syncs properly and can communicate with Azure.
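As a side note for air-gapped environments like this one: instead of copying plugins into the code folder by hand, Terraform (0.13.2 and later) can be pointed at a local provider mirror from the CLI configuration file (~/.terraformrc on Linux, terraform.rc on Windows). A minimal sketch, where the mirror path is an assumption:
provider_installation {
  # serve provider plugins from a local directory instead of the public registry
  filesystem_mirror {
    path    = "/usr/share/terraform/providers"  # hypothetical mirror location
    include = ["registry.terraform.io/hashicorp/*"]
  }
}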

Terraform event monitor migration

I am trying to migrate event monitors according to the migration steps:
resource "datadog_monitor" "cache_event" {
name = "${var.cache_replication_group} :: Event"
type = "event-v2 alert"
message = join(" ", ["Cache event occured in ${var.cache_replication_group}.", join(" ", var.recipients.general)])
query = "events(\"source:elasticache tags:(replication_group:${var.cache_replication_group}) NOT(Finished)\").rollup(\"count\").last(\"15m\") > 0"
require_full_window = false
tags = concat(var.tags, ["cache:${var.cache_replication_group}", "monitor:stability"])
}
But when I try to apply the changes, Terraform returns the error Error: error creating monitor: event-v2 alert is not a valid MonitorType, even though according to the documentation event-v2 alert is a valid type. The Datadog provider version at the moment is 3.10.0, so it should support this type.
My bad. After changing the provider version, I didn't reinitialise the working directory.
Edit:
Before the monitor migration, I used an older Datadog provider version. After adjusting the query to be compliant with the new event monitor format, I bumped the Datadog provider version to one that supports the new v2 monitors, but instead of deleting the .terraform folder and running terraform init, I went straight to apply. I should have initialised first (terraform init), as sketched below.
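For reference, the bump plus re-initialisation might look like this; the exact version constraint is an assumption, pick whichever release supports event-v2 monitors:
terraform {
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 3.10.0"  # assumed minimum with event-v2 monitor support
    }
  }
}

# After changing the constraint, re-initialise so the new plugin is actually installed:
#   terraform init -upgrade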

Terraform with AWS provider unable to create CodeBuild

I am trying to create an AWS CodeBuild project using Terraform.
resource "aws_codebuild_project" "cicd_codebuild" {
name = "cicd-${var.profile}-build"
description = "cicd ${var.profile} CodeBuild"
service_role = "${aws_iam_role.cicd_role.arn}"
source {
type = "GITHUB_ENTERPRISE"
location = "https://git.xxx.com/yyy/zzz.git"
git_clone_depth = 0
buildspec = "NO_SOURCE"
}
environment {
compute_type = "BUILD_GENERAL1_MEDIUM"
image = "aws/codebuild/windows-base:2019-1.0"
type = "WINDOWS_SERVER_2019_CONTAINER"
image_pull_credentials_type = "CODEBUILD"
}
artifacts {
type = "NO_ARTIFACTS"
}
}
Upon terraform apply I get this error:
Error: aws_codebuild_project.cicd_codebuild: expected environment.0.type to be one of [LINUX_CONTAINER LINUX_GPU_CONTAINER WINDOWS_CONTAINER ARM_CONTAINER], got WINDOWS_SERVER_2019_CONTAINER
And when I change the value to environment.0.type = "WINDOWS_CONTAINER", I get the error below:
Error: Error applying plan:
1 error occurred:
* aws_codebuild_project.cicd_codebuild: 1 error occurred:
* aws_codebuild_project.cicd_codebuild: Error creating CodeBuild project: InvalidInputException: The environment type WINDOWS_CONTAINER is deprecated for new projects or existing project environment updates. Please consider using Windows Server 2019 instead.
I found on GitHub that this issue has been addressed in later versions. So I know upgrading the provider can solve this, but is there any workaround for the same versions of Terraform and the provider?
Thanks.
Terraform has plan-time validation on many resource parameters, which catches invalid parameter values before you get to the point of trying to apply them.
Normally this is beneficial, but if you are unable to keep up to date with provider versions, that list of allowed values can fall out of date with what the backing service actually accepts.
In this specific case, a pull request added WINDOWS_SERVER_2019_CONTAINER as a plan-time validation option after AWS added that functionality in July 2020.
Unfortunately for you, this work was merged and released as part of the v3.20.0 release of the AWS provider, and the v3 releases only support Terraform 0.12 and up:
BREAKING CHANGES
provider: New versions of the provider can only be automatically installed on Terraform 0.12 and later (#14143)
If you want to be able to use Windows containers in CodeBuild, you either need to upgrade to a more recent version of Terraform and the AWS provider, or you need to use a different tool to create the CodeBuild project.
One potential workaround here is to use CloudFormation to create the CodeBuild project, which you could run via Terraform using the aws_cloudformation_stack resource, as sketched below.
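A minimal sketch of that workaround, reusing the names from the question; the stack name and template are illustrative, not tested. Since CloudFormation validates Environment.Type server-side, the provider's stale plan-time allowlist never gets in the way:
resource "aws_cloudformation_stack" "cicd_codebuild" {
  name = "cicd-codebuild-workaround"  # hypothetical stack name

  template_body = <<TEMPLATE
{
  "Resources": {
    "CicdCodeBuild": {
      "Type": "AWS::CodeBuild::Project",
      "Properties": {
        "Name": "cicd-build",
        "ServiceRole": "${aws_iam_role.cicd_role.arn}",
        "Source": {
          "Type": "GITHUB_ENTERPRISE",
          "Location": "https://git.xxx.com/yyy/zzz.git"
        },
        "Artifacts": { "Type": "NO_ARTIFACTS" },
        "Environment": {
          "ComputeType": "BUILD_GENERAL1_MEDIUM",
          "Image": "aws/codebuild/windows-base:2019-1.0",
          "Type": "WINDOWS_SERVER_2019_CONTAINER",
          "ImagePullCredentialsType": "CODEBUILD"
        }
      }
    }
  }
}
TEMPLATE
}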

Terraform settings - remote state s3 - InvalidParameter validation error

Environment
Terraform v0.12.24
+ provider.aws v2.61.0
Running in an alpine container.
Background
I have a basic terraform script running ok, but now I'm extending it and am trying to configure a remote (S3) state.
terraform.tf:
terraform {
  backend "s3" {
    bucket         = "labs"
    key            = "com/company/labs"
    region         = "eu-west-2"
    dynamodb_table = "labs-tf-locks"
    encrypt        = true
  }
}
The bucket exists, and so does the table. I have created them both with terraform and have confirmed through the console.
Problem
When I run terraform init I get:
Error refreshing state: InvalidParameter: 2 validation error(s) found.
- minimum field size of 1, GetObjectInput.Bucket.
- minimum field size of 1, GetObjectInput.Key.
What I've tried
terraform fmt reports no errors and happily reformats my terraform.tf file. I tried moving the stanza into my main.tf too, just in case the terraform.tf file was being ignored for some reason. I got exactly the same results.
I've also tried running this without the alpine container, from an ubuntu ec2 instance in aws, but I get the same results.
I originally had the name of the terraform file in the key. I've removed that (thanks) but it hasn't helped resolve the problem.
Also, I've just tried running this in an older image: hashicorp/terraform:0.12.17 but I get a similar error:
Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListObjectsInput.Bucket.
I'm guessing that I've done something trivially stupid here, but I can't see what it is.
Solved!!!
I don't understand the problem, but I have a working solution now. I deleted the .terraform directory and reran terraform init. This is ok for me because I don't have an existing state. The insight came from reading the error from the 0.12.17 version of terraform, which complained about not being able to read the workspace.
Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, ListObjectsInput.Bucket.
This initially led me to believe there was a problem with an earlier version of tf reading a newer version's configuration. So I blew away .terraform and it worked with the older tf; I did it again and it worked with the newer tf too. Obviously, something had gotten itself into a bad state in terraform's local storage. I don't know how or why. But it works for me, so...
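In short, the fix boiled down to deleting the .terraform directory (rm -rf .terraform) and rerunning terraform init. If you do have local state you want to keep, a lighter-touch alternative is terraform init -reconfigure, which tells Terraform to ignore the previously saved backend configuration instead of deleting the directory.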
If you are facing this issue on the app side, there is a chance that you are sending the wrong payload, or that the payload expected by the backend has changed.
Before, I was doing this:
--> POST .../register
{"email":"FamilyKalwar#gmail.com","user":{"password":"123456#aA","username":"familykalwar"}}
--> END POST (92-byte body)
<-- 500 Internal Server Error .../register (282ms)
{"message":"InvalidParameter: 2 validation error(s) found.\n- minimum field size of 6, SignUpInput.Password.\n- minimum field size of 1, SignUpInput.Username.\n"}
Later I found that the payload had been updated to this:
{
"email": "tanishat1#gmail.com",
"username": "tanishat1",
"password": "123456#aA"
}
I removed the "user" wrapper object, updated the payload, and it worked!

Terraform Throttling Route53

Has anyone experienced issues with Terraform being throttled when using it with AWS Route53 records, and being VERY slow?
I have enabled DEBUG mode and I am getting this:
2018-11-30T14:35:08.467Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: 2018/11/30 14:35:08 [DEBUG] [aws-sdk-go] <?xml version="1.0"?>
2018-11-30T14:35:08.467Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"><Error><Type>Sender</Type><Code>Throttling</Code><Message>Rate exceeded</Message></Error><RequestId>REQUEST_ID</RequestId></ErrorResponse>
2018-11-30T14:35:08.518Z [DEBUG] plugin.terraform-provider-aws_v1.36.0_x4: 2018/11/30 14:35:08 [DEBUG] [aws-sdk-go] DEBUG: Validate Response route53/ListResourceRecordSets failed, will retry, error Throttling: Rate exceeded
Terraform takes >1h just to do a simple plan, something that normally takes <5 mins.
My infrastructure is organized like this:
alb.tf:
module "ALB"
{ source = "modules/alb" }
modules/alb/alb.tf:
resource "aws_alb" "ALB"
{ name = "alb"
subnets = var.subnets ...
}
modules/alb/dns.tf:
resource "aws_route53_record" "r53" {
  count   = "${length(var.cnames_generic)}"
  zone_id = "HOSTED_ZONE_ID"
  name    = "${element(var.cnames_generic_dns, count.index)}.${var.environment}.${var.domain}"
  type    = "A"

  alias {
    name                   = "dualstack.${aws_alb.ALB.dns_name}"
    zone_id                = "${aws_alb.ALB.zone_id}"
    evaluate_target_health = false
  }
}
modules/alb/variables.tf:
variable "cnames_generic_dns" {
type = "list"
default = [
"hostname1",
"hostname2",
"hostname3",
"hostname4",
"hostname5",
"hostname6",
"hostname7",
...
"hostname25"
]
}
So I am using modules to configure Terraform, and inside the modules there are resources (ALB, DNS, ...).
However, it looks like Terraform is describing every single DNS resource (CNAME and A records, of which I have ~1000) in the hosted zone, which is causing it to be throttled?
Terraform v0.10.7
Terraform AWS provider version = "~> 1.36.0"
That's a lot of DNS records! And it is partly the reason why the AWS API is throttling you.
First, I'd recommend upgrading your AWS provider. v1.36 is fairly old and there have been more than a few bug fixes since.
(Next, but not absolutely necessary, is to use TF v0.11.x if possible.)
In your AWS provider block, increase max_retries to at least 10 and experiment with higher values.
Then, use Terraform's -parallelism flag to limit TF's concurrency rate. Try setting it to 5 for starters. Both knobs are shown in the sketch below.
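A minimal sketch, with values that are starting points rather than tuned recommendations (the region is an assumption):
provider "aws" {
  region      = "us-east-1"  # use your own region
  max_retries = 10           # raise further if Route53 keeps returning Throttling errors
}

# and on the command line:
#   terraform plan -parallelism=5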
Last, enable Terraform's debug mode to see if it gives you any more useful info.
Hope this helps!
The problem was solved by performing the following actions:
Since we restructured the DNS records by adding one resource and then iterating over variables, this probably caused Terraform to constantly query all DNS records.
We decided to let Terraform finish its refresh (it took 4 hours and a lot of throttling).
We manually deleted the DNS records from Route53 for the workspace we were doing this in.
We commented out the Terraform DNS resources, letting Terraform also delete them from the state files.
We uncommented the Terraform DNS resources and re-ran it so they were created again.
After that, terraform plan ran fine again.
It looks like the throttling with Terraform and AWS Route53 was completely resolved after upgrading to a newer AWS provider. We updated the TF AWS provider to 1.54.0 like this in our init.tf:
version = "~> 1.54.0"
Here are more details about the issue and suggestions from Hashicorp engineers:
https://github.com/terraform-providers/terraform-provider-aws/issues/7056
