Syncing terraform resource creation with manual creation - terraform

I have a use case where we allow resource creation in two ways:
with the help of Terraform, or
by manual creation.
For example (this is just an example resource, not the actual one we are using):
resource "aws_codestarconnections_connection" "example" {
name = "example-connection"
provider_type = "GitHub"
}
The same connection can also be created through the UI.
If the resource already exists, I do not want Terraform to attempt the creation again.
If it was originally created by Terraform, it stays in the Terraform state, and Terraform won't create the resource again.
But if it was created manually, is there a way to avoid the creation?
Option tried:
Fetching the existing resources through a data source first, and then using count on the resource.
E.g.
resource "aws_codestarconnections_connection" "example" {
count = number_of_fetched_resources == 0 ? 1 : number_of_fetched_resources
name = "example-connection"
provider_type = "GitHub"
}
Now, if no resource exists yet, this works fine, because it creates one resource. But if the connection was already created manually, I want Terraform to skip the creation and leave the number of resources as it is. Instead, Terraform treats this as a request to create a new resource, because its state still records zero.
Also
count = number_of_fetched_resources == 0 ? 1 : 0
doesn't work either, because it would delete the existing resource in the case where Terraform created it earlier.
So is there a way to sync the state using Terraform code alone (I cannot use commands like terraform import, as commands are run in a different environment)?

Related

Diagnostic Settings - Master" already exists - to be managed via Terraform this resource needs to be imported into the State

I have a diagnostic setting configured on my master database, as shown below in my main.tf:
resource "azurerm_monitor_diagnostic_setting" "main" {
name = "Diagnostic Settings - Master"
target_resource_id = "${azurerm_mssql_server.main.id}/databases/master"
log_analytics_workspace_id = azurerm_log_analytics_workspace.main.id
log {
category = "SQLSecurityAuditEvents"
enabled = true
retention_policy {
enabled = false
}
}
metric {
category = "AllMetrics"
retention_policy {
enabled = false
}
}
lifecycle {
ignore_changes = [log, metric]
}
}
If I don't delete it from the resource group before I run Terraform, I get the error:
"Diagnostic Settings - Master" already exists - to be managed via Terraform this resource needs to be imported into the State
I know that if I delete the SQL Server the diagnostic setting remains - but I don't know why that is a problem with Terraform. I have also noticed that it is in my tfplan.
What could be the problem?
If you created the resource in Azure in a different way (i.e. Portal/Templates/CLI/PowerShell), Terraform is not aware that the resource already exists in Azure. So, during terraform plan, it shows you the plan of what will be created based on what you have written in main.tf. But when you run terraform apply, the azurerm provider checks the resource names against the existing resources of the same resource provider and returns an error that the resource already exists and needs to be imported to be managed by Terraform.
Also, if you have created everything with Terraform, then running terraform destroy deletes all the resources present in main.tf.
Well, it's in the .tfplan and also it's in main.tf - so it's imported, right?
Mentioning the resource and its details in main.tf and the .tfplan does not mean that you have imported the resource or that Terraform is aware of it. Terraform is only aware of the resources that are stored in the Terraform state file, i.e. .tfstate.
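To see exactly which resources Terraform is aware of, you can list the addresses recorded in the state file (the second command below assumes the resource address from the main.tf above):
terraform state list
terraform state show azurerm_monitor_diagnostic_setting.main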
So, to overcome the error you get without deleting the resource from the Portal, you will have to keep the resource in main.tf as you have already done, and then use the terraform import command to import the Azure resource into the Terraform state file, like below:
terraform import azurerm_monitor_diagnostic_setting.example "{resourceID}|{DiagnosticsSettingsName}"
So, for you it will be like:
terraform import azurerm_monitor_diagnostic_setting.main "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Sql/servers/<SQLServerName>/databases/master|Diagnostic Settings - Master"
After the import is done, any changes you make from Terraform to that resource will be reflected in the Portal as well, and you will also be able to destroy the resource from Terraform.
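Once the import has run, a fresh plan should no longer propose creating the diagnostic setting; a quick way to confirm:
terraform plan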

How to automatically import resource to Terraform state?

I want to develop a single Terraform module to deploy my resources, with the resources being stored in separate YAML files. For example:
# resource_group_a.yml
name: "ResourceGroupA"
location: "westus"
# resource_group_b.yml
name: "ResourceGroupB"
location: "norwayeast"
And the following Terraform module:
# deploy/main.tf
variable "source_file" {
  type = string # Path to a YAML file
}

locals {
  rg = yamldecode(file(var.source_file))
}

resource "azurerm_resource_group" "rg" {
  name     = local.rg.name
  location = local.rg.location
}
I can deploy the resource groups with:
terraform apply -var="source_file=resource_group_a.yml"
terraform apply -var="source_file=resource_group_b.yml"
But then I run into 2 problems, due to the state that Terraform keeps about my infrastructure:
If I deploy Resource Group A, it deletes Resource Group B and vice-versa.
If I manually remove the .tfstate file prior to running apply, and the resource group already exists, I get an error:
A resource with the ID "/..." already exists - to be managed via Terraform
this resource needs to be imported into the State.
with azurerm_resource_group.rg,
on main.tf line 8 in resource "azurerm_resource_group" "rg"
I can import the resource into my state with
terraform import azurerm_resource_group.rg "/..."
But it's a long file and there may be multiple resources that I need to import.
So my questions are:
How to keep the state separate between the two resource groups?
How to automatically import existing resources when I run terraform apply?
How to keep the state separate between the two resource groups?
I recommend using Terraform Workspaces for this, which will give you separate state files, each with an associated workspace name.
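For example, a possible workflow (just a sketch; the workspace names are made up):
terraform workspace new rg_a
terraform apply -var="source_file=resource_group_a.yml"
terraform workspace new rg_b
terraform apply -var="source_file=resource_group_b.yml"
terraform workspace select rg_a
Each workspace keeps its own state, so applying one resource group no longer deletes the other.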
How to automatically import existing resources when I run terraform apply?
That's not currently possible. There are some third-party tools out there like Terraformer for accomplishing automated imports, but in my experience they don't work very well, or they never support all the resource types you need. Even then they wouldn't import resources automatically every time you run terraform apply.

Error message while deleting google_kms_crypto_key resource

I am managing KMS keys and key rings with the GCP Terraform provider:
resource "google_kms_key_ring" "vault" {
name = "vault"
location = "global"
}
resource "google_kms_crypto_key" "vault_init" {
name = "vault"
key_ring = google_kms_key_ring.vault.self_link
rotation_period = "100000s" #
}
When I ran this for the first time, I was able to create the keys and key rings successfully, and running terraform destroy also completed without any errors.
The next time I do a terraform apply, I just use terraform import to import the resources from GCP, and the run works fine.
But after a while, key version 1 was destroyed. Now every time I do a terraform destroy, I get the error below:
module.cluster_vault.google_kms_crypto_key.vault_init: Destroying... [id=projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault]
Error: googleapi: Error 400: The request cannot be fulfilled. Resource projects/<MY-PROJECT>/locations/global/keyRings/vault/cryptoKeys/vault/cryptoKeyVersions/1 has value DESTROYED in field crypto_key_version.state., failedPrecondition
Is there a way to suppress this particular error? Key versions 1-3 are destroyed.
At present, Cloud KMS resources cannot be deleted. This is against Terraform's desired behavior to be able to completely destroy and re-create resources. You will need to use a different key name or key ring name to proceed.

How does Terraform know which resource it should run first to spin up the infrastructure?

I'm using Terraform to spin up AWS DMS. To spin up DMS, we need subnet groups, a DMS replication task, DMS endpoints, and a DMS replication instance. I've configured everything using the Terraform documentation. My question is: how will Terraform know which task has to be completed first in order to spin up the dependent ones?
Do we need to declare this somewhere in Terraform, or is Terraform intelligent enough to work out the order on its own?
Terraform uses references in the configuration to infer ordering.
Consider the following example:
resource "aws_s3_bucket" "example" {
bucket = "terraform-dependencies-example"
acl = "private"
}
resource "aws_s3_bucket_object" "example" {
bucket = aws_s3_bucket.example.bucket # reference to aws_s3_bucket.example
key = "example"
content = "example"
}
In the above example, the aws_s3_bucket_object.example resource contains an expression that refers to aws_s3_bucket.example.bucket, and so Terraform can infer that aws_s3_bucket.example must be created before aws_s3_bucket_object.example.
These implicit dependencies created by references are the primary way to create ordering in Terraform. In some rare circumstances we need to represent dependencies that cannot be inferred by expressions, and so for those exceptional circumstances only we can add additional explicit dependencies using the depends_on meta argument.
One situation where that can occur is with AWS IAM policies.
Due to AWS IAM's data model, we must first create a role and then assign a policy to it as a separate step, but the objects assuming that role (in this case, an AWS Lambda function just for example) only take a reference to the role itself, not to the policy. With the dependencies created implicitly by references then, the Lambda function could potentially be created before its role has the access it needs, causing errors if the function tries to take any actions before the policy is assigned.
To address this, we can use depends_on in the aws_lambda_function resource block to force that extra dependency and thus create the correct execution order:
resource "aws_iam_role" "example" {
# ...
}
resource "aws_iam_role_policy" "example" {
# ...
}
resource "aws_lambda_function" "exmaple" {
depends_on = [aws_iam_role_policy.example]
}
For more information on resource dependencies in Terraform, see Resource Dependencies in the Terraform documentation.
Terraform will automatically create the resources in an order that all dependencies can be fulfilled.
E.g.: if you set a security group ID in your DMS definition as "${aws_security_group.my_sg.id}", Terraform recognizes this dependency and creates the security group prior to the DMS resource.
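As a rough sketch of that kind of reference for DMS (the resource names and most arguments here are placeholders, not taken from an actual configuration):
resource "aws_security_group" "my_sg" {
  name   = "dms-sg"
  vpc_id = "${aws_vpc.main.id}" # assumes a VPC defined elsewhere
}

resource "aws_dms_replication_instance" "this" {
  replication_instance_id    = "example-dms-instance"
  replication_instance_class = "dms.t3.micro"

  # This reference is what lets Terraform infer that the security group
  # must be created before the replication instance.
  vpc_security_group_ids = ["${aws_security_group.my_sg.id}"]
}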

Adding tags to child resources created by Terraform

Terraform v0.11.9
+ provider.aws v1.41.0
I want to know if there is a way to update a resource that is not directly created in the plan, but is created by a resource in the plan. The example is creating a managed Active Directory by using aws_directory_service_directory. This process creates a security group, and I want to add tags to that security group. Here is the snippet I'm using to create the resource:
resource "aws_directory_service_directory" "NewDS" {
name = "${local.DSFQDN}"
password = "${var.ADPassword}"
size = "Large"
type = "MicrosoftAD"
short_name = "${local.DSShortName}"
vpc_settings {
vpc_id = "${aws_vpc.this.id}"
subnet_ids = ["${aws_subnet.private.0.id}",
"${aws_subnet.private.1.id}",
]
}
tags = "${merge(var.tags, var.ds_tags, map("Name", format("%s", local.VPCname)))}"
}
I can reference the newly created security group using
"${aws_directory_service_directory.NewDS.security_group_id}"
I can't use that to update the resource. I want to add all of the tags I have on the directory to the security group, as well as update the Name tag. I've tried using a local-exec provisioner, but the results have not been consistent, and getting the map of tags to the command without hard-coding it has not worked.
Thanks
I moved the local-exec provisioner out of the directory service resource and into a dummy resource:
resource "null_resource" "ManagedADTags"
{
provisioner "local-exec"
{
command = "aws --profile ${var.profile} --region ${var.region} ec2 create-tags --
resources ${aws_directory_service_directory.NewDS.security_group_id} --tags
Key=Name,Value=${format("${local.security_group_prefix}-%s","ManagedAD")}"
}
}
(The command = is a single line)
Using the format function allowed me to send the entire list of tags to the resource. Terraform doesn't "manage" the security group, but this does allow me to update it as part of the plan.
You can also leverage the aws_ec2_tag resource, which works on non-EC2 resources as well, in conjunction with the provider attribute ignore_tags. Please refer to another answer I made on the topic for more detail.
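A minimal sketch of that combination (assuming a much newer AWS provider than the 1.41.0 shown in the question; the resource name and tag value are made up for illustration):
provider "aws" {
  # Stop the provider from reporting drift on the Name tag managed separately below.
  ignore_tags {
    keys = ["Name"]
  }
}

resource "aws_ec2_tag" "managed_ad_sg_name" {
  # Tags the security group that the directory service created implicitly.
  resource_id = aws_directory_service_directory.NewDS.security_group_id
  key         = "Name"
  value       = "${local.security_group_prefix}-ManagedAD"
}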
AWS already exposes an API for this that can tag multiple resources at once, not just a single resource. Not sure why Terraform does not implement that.
Just hit this as well. It turns out the tags propagate from the directory service, so if you tag your directory appropriately, the Name tag from your directory service will be applied to the security group.
