I have created a module named workflow for an Azure Logic App.
Here is the module:
resource "azurerm_logic_app_workflow" "LogicApp" {
name = "${var.LogicAppName}"
location = "${var.LogicAppLocation}"
resource_group_name = "${var.rgName}"
workflow_schema = "${var.schema}"
}
In workflow_schema I'm specifying the path to the file that contains the Logic App configuration.
In the main config.tf I have the following setup:
module "workflow" {
source = "./modules/workflow/"
LogicAppName = "LaName"
LogicAppLocation = "${azurerm_resource_group.rg.location}"
rgName = "${azurerm_resource_group.rg.name}"
schema = "${file("./path/to/the/file/LaName")}"
}
So, when I run terraform init and terraform plan, everything works perfectly fine.
Since my Logic App was created earlier, I want to import it so that terraform apply won't overwrite it.
I am running the following command, and it returns this error:
terraform import module.workflow.azurerm_logic_app_workflow.LogicApp /subscriptions/mySubscriptionID/resourceGroups/myRgName/providers/Microsoft.Logic/workflows/LaName
Error: Import to non-existent module
module.workflow is not defined in the configuration. Please add configuration
for this module before importing into it.
I'm using the following versions of software:
Terraform v0.12.13
+ provider.azurerm v1.28.0
If anyone has any ideas why terraform import fails, please share them.
I see the issue in the naming.
Your module is named workflow, and in your configuration you name the resource workflow too; these should be different. As written, you are trying to import into the resource directly.
Example:
module "workflow-azure" {
source = "./modules/workflow/"
LogicAppName = "LaName"
LogicAppLocation = "${azurerm_resource_group.rg.location}"
rgName = "${azurerm_resource_group.rg.name}"
schema = "${file("./path/to/the/file/LaName")}"
}
and the import command should then be:
terraform import module.workflow-azure.azurerm_logic_app_workflow.LogicApp /subscriptions/mySubscriptionID/resourceGroups/myRgName/providers/Microsoft.Logic/workflows/LaName
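After a successful import, you can verify that the resource landed at the module address and that your configuration matches it, for example:
terraform state list
# should now include module.workflow-azure.azurerm_logic_app_workflow.LogicApp
terraform plan
# should report no changes if the configuration matches the imported resource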
I have created a storage account, and a container inside it, to store my AKS backup using Terraform. I created a child module for the storage account and the container, and I create both by calling the module from the root module's main.tf. I have two module blocks, module "aks_backup_storage" and module "aks_backup_container". The modules are created successfully after running terraform apply, but at the end it raises the errors shown below in the console.
A resource with the ID "/subscriptions/...../resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_storage_account" for more information.
failed creating container: failed creating container: containers.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ContainerAlreadyExists" Message="The specified container already exists.\nRequestId:f.........\nTime:2022-12-28T12:52:08.2075701Z"
Root module
module "aks_backup_storage" {
  source                         = "../modules/aks_pv_storage_container"
  rg_aks_backup_storage          = var.rg_aks_backup_storage
  aks_backup_storage_account    = var.aks_backup_storage_account
  aks_backup_container           = var.aks_backup_container
  rg_aks_backup_storage_location = var.rg_aks_backup_storage_location
  aks_backup_retention_days      = var.aks_backup_retention_days
}
Child module
resource "azurerm_resource_group" "rg_aksbackup" {
  name     = var.rg_aks_backup_storage
  location = var.rg_aks_backup_storage_location
}

resource "azurerm_storage_account" "aks_backup_storage" {
  name                            = var.aks_backup_storage_account
  resource_group_name             = var.rg_aks_backup_storage
  location                        = var.rg_aks_backup_storage_location
  account_kind                    = "StorageV2"
  account_tier                    = "Standard"
  account_replication_type        = "ZRS"
  access_tier                     = "Hot"
  enable_https_traffic_only       = true
  min_tls_version                 = "TLS1_2"
  #allow_blob_public_access       = false
  allow_nested_items_to_be_public = false
  is_hns_enabled                  = false

  blob_properties {
    container_delete_retention_policy {
      days = var.aks_backup_retention_days
    }
    delete_retention_policy {
      days = var.aks_backup_retention_days
    }
  }
}

# Different containers can be created for different backup levels, such as cluster, namespace, PV
resource "azurerm_storage_container" "aks_backup_container" {
  #name = "aks-backup-container"
  name = var.aks_backup_container
  #storage_account_name = azurerm_storage_account.aks_backup_storage.name
  storage_account_name = var.aks_backup_storage_account
}
I have also tried to import the resource using the command below:
terraform import ['azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage']
But it fails with a zsh "no matches found" error:
zsh: no matches found: [azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage/]
I had no issues when I created the resources using the same code without declaring any modules.
Now I have several modules called from the root module's main.tf file.
Here is my project directory structure: [screenshot not reproduced]
I really appreciate any suggestions. Thanks in advance.
variable.tf
variable "rg_aks_backup_storage" {
  type        = string
  description = "resource group name for the backup storage"
  default     = "rg-aks-backup-storage"
}

variable "aks_backup_storage_account" {
  type        = string
  description = "storage account name for the backup"
  default     = "aksbackupstorage"
}

variable "aks_backup_container" {
  type        = string
  description = "storage container name"
  #default    = "aks-storage-container"
  default     = "aksbackupstoragecontaine"
}

variable "rg_aks_backup_storage_location" {
  type    = string
  default = "westeurope"
}

variable "aks_backup_retention_days" {
  type    = number
  default = 90
}
The storage account name that you use must be unique across all of Azure (see the naming restrictions). I checked, and the default storage account name that you are using is already taken. Have you tried changing the name to something you know is unique?
A way to do this consistently would be to add a random suffix at the end of the name, e.g.:
resource "random_string" "random_suffix" {
length = 6
special = false
upper = false
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = join("", tolist([var.aks_backup_storage_account, random_string.random_suffix.result]))
...
}
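Storage account names must also be 3-24 characters of lowercase letters and digits, so with the 6-character suffix above the base name needs to stay at 18 characters or fewer. A sketch of guarding that with a validation block (requires Terraform 0.13 or later):
variable "aks_backup_storage_account" {
  type        = string
  description = "base name for the backup storage account (a random suffix is appended)"
  default     = "aksbackupstorage"

  validation {
    # 18 characters leaves room for the 6-character random suffix
    condition     = length(var.aks_backup_storage_account) <= 18 && can(regex("^[a-z0-9]+$", var.aks_backup_storage_account))
    error_message = "The base name must be lowercase alphanumeric and at most 18 characters."
  }
}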
I also received the same error when I tried to run terraform apply while creating a container registry.
It usually occurs when the local terraform state file does not match the resources actually deployed in the portal.
Even if a resource with the same name does not exist in the portal or resource group, it will still appear in the terraform state file if it was deployed previously. If you run into this type of issue, verify the state file against the portal; if the resource exists in the portal but is missing from state, use the following command to import it.
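To see what the local state currently tracks, you can list and inspect it first, for example:
terraform state list
terraform state show azurerm_container_registry.acr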
Note: Validate that the terraform state files are identical. Run terraform init and terraform apply once you are done with the changes.
To resolve this error, use terraform import.
Here I tried to import the container registry (as an example), and it imported successfully:
terraform import azurerm_container_registry.acr "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.ContainerRegistry/registries/xxxxcontainerRegistry1"
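If you need to look up the full resource ID, one option (assuming the Azure CLI is installed) is:
az acr show --name xxxxcontainerRegistry1 --query id --output tsv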
After that I ran terraform apply and deployed the resource successfully without any errors.
When I import an existing VPC into Terraform, I get the following error when running my Terraform script.
Error: error deleting EC2 VPC (vpc-0ea21234): DependencyViolation: The vpc 'vpc-0ea21234' has dependencies and cannot be deleted.
status code: 400, request id: 4630706a-5378-4e72-a3df-b58c8c7fd09b
Why is it trying to delete the VPC? How can I make it use the VPC? I'll post the main file I used to make the import and the import command below.
import command (this succeeds)
terraform import aws_vpc.main vpc-0ea21234
main file
provider "aws" {
  region  = "us-gov-west-1"
  profile = "int-pipe"
}

# terraform import aws_vpc.main vpc-0ea21234
resource "aws_vpc" "main" {
  name = "cred-int-pipeline-vpc"
  cidr = "10.25.0.0/25"
}

# terraform import aws_subnet.main subnet-030de2345
resource "aws_subnet" "main" {
  vpc_id = "vpc-0ea21234"
  name   = "cred-int-pipeline-subnet-d-az2"
  cidr   = "10.25.0.96/27"
}
You probably have differences between what is in your Terraform configuration file and the resource you imported.
Run terraform plan; it will show you exactly what the differences are and why the resource must be deleted and re-created.
After that, either manually change the resource in AWS or change your configuration file; once the existing resource and the configuration match, the delete and re-create won't be triggered.
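In this particular case, note that aws_vpc and aws_subnet accept no name or cidr arguments: the name belongs in tags and the CIDR argument is cidr_block. A sketch of a configuration the plan could converge on, reusing the values from the question:
resource "aws_vpc" "main" {
  cidr_block = "10.25.0.0/25"

  tags = {
    Name = "cred-int-pipeline-vpc"
  }
}

resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.25.0.96/27"

  tags = {
    Name = "cred-int-pipeline-subnet-d-az2"
  }
}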
I'm trying to replace the deprecated resource azurerm_sql_server with azurerm_mssql_server, and got an 'invalid index' error in the process.
A simplified demo of the situation (with Terraform v0.14.5 and v1.0.5):
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.49.0"
    }
  }
}

provider "azurerm" {
  features {}
}

locals {
  prefix = toset(["primary", "secondary"])
}

resource "azurerm_resource_group" "rg" {
  name     = "rgtest"
  location = "Canada Central"
}

resource "random_password" "sql_admin_password" {
  length      = 16
  special     = true
  number      = true
  upper       = true
  lower       = true
  min_special = 2
  min_numeric = 2
  min_upper   = 2
  min_lower   = 2
}

resource "azurerm_sql_server" "instance" {
  for_each                     = local.prefix
  name                         = "${each.value}-sqlsvr"
  location                     = azurerm_resource_group.rg.location
  resource_group_name          = azurerm_resource_group.rg.name
  version                      = "12.0"
  administrator_login          = "ssadmin"
  administrator_login_password = random_password.sql_admin_password.result
}

locals {
  primary_sql_srv   = azurerm_sql_server.instance["primary"].name
  secondary_sql_srv = azurerm_sql_server.instance["secondary"].name
}
# other TF resources using local.primary_sql_srv and local.secondary_sql_srv
The infrastructure has been deployed, and there is no intention to re-create the database servers, so we need to change the resource type and import the existing servers. According to the Terraform documentation, this can be done with the terraform state rm and terraform import commands.
So,
Change the configuration script
...
resource "azurerm_mssql_server" "instance" {
...
locals {
  primary_sql_srv   = azurerm_mssql_server.instance["primary"].name
  secondary_sql_srv = azurerm_mssql_server.instance["secondary"].name
}
# other TF resources using local.primary_sql_srv and local.secondary_sql_srv
Remove the azurerm_sql_server resources from the state file; both commands succeed
terraform.exe state rm azurerm_sql_server.instance[`\`"primary`\`"]
terraform.exe state rm azurerm_sql_server.instance[`\`"secondary`\`"]
Import the primary database server
> terraform.exe import azurerm_mssql_server.instance[`\`"primary`\`"] "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/primary-sqlsvr"
azurerm_mssql_server.instance["primary"]: Importing from ID "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/primary-sqlsvr"...
azurerm_mssql_server.instance["primary"]: Import prepared!
Prepared azurerm_mssql_server for import
azurerm_mssql_server.instance["primary"]: Refreshing state... [id=/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/primary-sqlsvr]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
Current state list
❯ terraform.exe state list
azurerm_mssql_server.instance["primary"]
azurerm_resource_group.rg
random_password.sql_admin_password
Import the secondary database server
> terraform.exe import azurerm_mssql_server.instance[`\`"secondary`\`"] "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/secondary-sqlsvr"
azurerm_mssql_server.instance["secondary"]: Importing from ID "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/secondary-sqlsvr"...
azurerm_mssql_server.instance["secondary"]: Import prepared!
Prepared azurerm_mssql_server for import
azurerm_mssql_server.instance["secondary"]: Refreshing state... [id=/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rgtest/providers/Microsoft.Sql/servers/secondary-sqlsvr]
Error: Invalid index
on C:\Work\Projects\2021\20210812RenameResource\t1env\main.tf line 49, in locals:
49: secondary_sql_srv = azurerm_mssql_server.instance["secondary"].name
|----------------
| azurerm_mssql_server.instance is object with 1 attribute "primary"
The given key does not identify an element in this collection value.
The state refresh for the second import hit the locals block and failed because there is no 'secondary' server resource in state.
So to me this is a deadlock: I cannot import the 'secondary' server resource because of the refresh error, and the refresh error is caused by the lack of the 'secondary' server resource.
Two ways I can think of:
Manually add the 'secondary' server resource to the state file, which is definitely not proper.
Remove the 'locals' block, which is OK in the demo but means lots of changes in the real code because of dependencies.
Any thoughts, please? Thank you.
This is a bug in terraform import that was introduced in version 0.13. During a terraform import execution, it attempts to validate the local variables in the config that reference the resource's namespace against state that does not exist yet. There are basically three workarounds for this:
Downgrade temporarily to Terraform 0.12, where this bug does not exist.
This is really not a great option, because the version is stored in the state, and you may be locked out of executing terraform CLI commands against state synced with a later version.
Manually modify the state to contain the resources.
Also really not a great option, because this could corrupt the state and/or cause other obvious issues with malformation.
Temporarily comment out the relevant locals and any code referencing the local variable values.
This is what I always ended up using. You can put a multiline comment in the /* ... */ style around the relevant locals that reference the exported resource attributes of the imported resource, and you will also need to do so in any other areas of the config that reference the local variables. You can then uncomment the code once the imports are complete, as sketched below.
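For the locals in this question, the temporary comment would look something like:
/*
locals {
  primary_sql_srv   = azurerm_mssql_server.instance["primary"].name
  secondary_sql_srv = azurerm_mssql_server.instance["secondary"].name
}
*/
Once both imports succeed, remove the comment markers and run terraform plan to confirm the state lines up.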
I had the same issue with Terraform 1.2.8; after updating to 1.3.0, the import was successful. It looks like this latest version resolves the issue.
Edit: As stated in the Terraform v1.3 changelog:
terraform import: Better handling of resources or modules that use for_each, and situations where data resources are needed to complete the operation. (#31283)
This description matches my situation 100% (use of for_each, data blocks & modules).
I am refactoring my Terraform code to use modules, but I am getting a lot of variable/resource not defined/not found errors.
I found that I need to move my variable [name] {} blocks into the module. Is it impossible for modules to reference a parent's or another module's variables? Sometimes I may have reused variables, e.g. NODE_ENV.
Next, after this, I find that it says missing required argument. I am just running terraform init because Terraform says I need to do it... I tried adding -var-file, but it does not seem to work for modules? How should I resolve this?
There are also a lot of
resource 'aws_ecs_service.xxx-ecs-service' config: unknown module referenced: ecs
errors ... it appears I cannot reference resources the usual way anymore?
# ecs/ecs.tf
resource "aws_ecs_task_definition" "xxx-ecs-task" {
  family                   = "${var.family}"
  container_definitions    = "${data.template_file.container_defination.rendered}"
  task_role_arn            = "${var.role_arn}"
  execution_role_arn       = "${var.role_arn}"
  network_mode             = "awsvpc"
  cpu                      = "${var.cpu}"
  memory                   = "${var.memory}"
  requires_compatibilities = ["FARGATE"]
  tags                     = "${var.tags}"
}

resource "aws_ecs_service" "xxx-ecs-service" {
  name            = "${var.service_name}"
  cluster         = "${var.ecs_cluster}"
  task_definition = "${module.ecs.aws_ecs_task_definition.xxx-ecs-task.arn}"
}
For the task_definition, I tried adding module.ecs since ecs is the name of my module:
module "ecs" {
source = "./ecs"
name = "ecs"
}
To avoid confusion, I would recommend complete directory separation:
module/ecs/ecs.tf
env/dev/myecs.tf
The below is for illustration purposes and has not been tested by myself.
Configurations
module/ecs/ecs.tf
resource "aws_ecs_task_definition" "xxx-ecs-task" {
family = "${var.family}"
container_definitions = "${data.template_file.container_defination.rendered}"
task_role_arn = "${var.role_arn}"
execution_role_arn = "${var.role_arn}"
network_mode = "awsvpc"
cpu = "${var.cpu}"
memory = "${var.memory}"
requires_compatibilities = ["FARGATE"]
tags = "${var.tags}"
}
resource "aws_ecs_service" "xxx-ecs-service" {
name = "${var.service_name}"
cluster = "${var.ecs_cluster}"
task_definition = "${aws_ecs_task_definition.xxx-ecs-task.arn}"
}
module/ecs/variables.tf
Define all the variables to be passed in from the module user's side (env/dev/myecs.tf).
variable "family" {
}
variable "role_arn" {
}
...
env/dev/myecs.tf
module "ecs" {
source = "../../module/ecs"
family = "value of family here"
role_arn = "value of IAM role arn"
...
}
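You would then run Terraform from the environment directory:
cd env/dev
terraform init
terraform plan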
module path
Be aware of ${path.module} (see Paths and Embedded Files) when specifying a path to a file within a module. Confusingly, both env/dev/ and module/ecs are modules in Terraform, where env/dev/ is the root module.
Creating Modules
Modules in Terraform are folders with Terraform files. In fact, when you run terraform apply, the current working directory holding the Terraform files you're applying comprises what is called the root module. This is itself a valid module.
When you specify a path, it is by default relative to the root module. Therefore, within a called module (module/ecs), prefix paths with ${path.module}/ so that the called module will not try to load files inside env/dev.
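For instance, inside module/ecs the template referenced in the question could be loaded like this (a sketch; the templates/ subdirectory and file name are assumptions, and the data source name matches the one referenced in the question):
# module/ecs/data.tf
data "template_file" "container_defination" {
  # path.module resolves to module/ecs rather than the root module env/dev
  template = "${file("${path.module}/templates/container_definition.json.tpl")}"
}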
Terraform Registry
I would recommend having a look at the Terraform Registry, for instance the Security Group module, to get used to how Terraform modules are used. Links to GitHub are there too.
We are trying to create Terraform modules for the below activities in AWS, so that we can use them wherever they are required:
VPC creation
Subnets creation
Instance creation etc.
But while creating these modules we have to define the provider in all of the modules listed above. So we decided to create one more module for the provider, so that we can call that provider module from the other modules (VPC, Subnet, etc.).
The issue with the above approach is that it does not pick up the provider value, and instead asks for user input for the region.
The Terraform configuration is as follows:
$HOME/modules/providers/main.tf
provider "aws" {
region = "${var.region}"
}
$HOME/modules/providers/variables.tf
variable "region" {}
$HOME/modules/vpc/main.tf
module "provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
}
$HOME/modules/vpc/variables.tf
variable "vpc_cidr" {}
variable "environment" {}
variable "region" {}
$HOME/main.tf
module "dev_vpc" {
source = "modules/vpc"
vpc_cidr = "${var.vpc_cidr}"
environment = "${var.environment}"
region = "${var.region}"
}
$HOME/variables.tf
variable "vpc_cidr" {
default = "192.168.0.0/16"
}
variable "environment" {
default = "dev"
}
variable "region" {
default = "ap-south-1"
}
Then, when running the terraform plan command at $HOME/, it does not pick up the provider value and instead asks for user input for the region.
I need help from the Terraform experts on what approach we should follow to address the below concerns:
Wrap the provider in a Terraform module
Handle the multiple-region use case using the provider module, or in some other way
I knew a long time back that it wasn't possible to do this, because Terraform built a graph that required a provider for any resource before it included any dependencies, and it didn't used to be possible to force a dependency on a module.
However, since Terraform 0.8 it is possible to set a dependency on modules with the following syntax:
module "network" {
# ...
}
resource "aws_instance" "foo" {
# ...
depends_on = ["module.network"]
}
However, if I try that with your setup by changing modules/vpc/main.tf to look something like this:
module "aws_provider" {
source = "../../modules/providers"
region = "${var.region}"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
"name" = "${var.environment}_McD_VPC"
}
depends_on = ["module.aws_provider"]
}
And running terraform graph | dot -Tpng > graph.png against it, it looks like the graph doesn't change at all from when the explicit dependency isn't there.
This seems like it might be a bug in Terraform's graph-building stage that should probably be raised as an issue, but I don't know the core code base well enough to spot where the change needs to be made.
For our usage, we use symlinks heavily in our Terraform code base; some of this is historic, from before Terraform supported other ways of doing things, but it could work for you here.
We simply define the provider in a single .tf file (such as environment.tf), along with any other generic config needed for every place you would ever run Terraform (i.e. not at a module level), and then symlink this file into each location. That allows us to define the provider in a single place, with overridable variables if necessary.
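For example, with environment.tf kept once at the repository root (the relative path depends on your layout):
# run inside each directory that you execute Terraform from
ln -s ../../environment.tf environment.tf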
Step 1
Add a provider alias per region in the main.tf file where you are going to execute terraform plan.
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block:
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Define "aws" as providers inside your module. ( source = ../../../terraform_modules/lambda")
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  # ...
}
Note: Terraform version v1.0.5 as of now.
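If the called module itself has to address more than one region, Terraform 0.15 and later also let the module declare the aliases it expects via configuration_aliases, a sketch:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
      # callers must then pass providers = { aws.useast1 = aws.useast1 }
      configuration_aliases = [aws.useast1]
    }
  }
}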