I have written the below backend configuration in Terraform:
terraform {
  backend "s3" {
    bucket = "${var.application_name}"
    region = "${var.AWS_REGION}"
    key    = "tf-scripts/${var.application_name}-tfstate"
  }
}
While running terraform init, I am getting the error message below:
terraform init
Initializing the backend...
╷
│ Error: Variables not allowed
│
│ on backend.tf line 4, in terraform:
│ 4: bucket = "${var.application_name}"
│
│ Variables may not be used here.
╵
╷
│ Error: Variables not allowed
│
│ on backend.tf line 5, in terraform:
│ 5: region = "${var.AWS_REGION}"
│
│ Variables may not be used here.
╵
╷
│ Error: Variables not allowed
│
│ on backend.tf line 6, in terraform:
│ 6: key = "tf-scripts/${var.application_name}-tfstate"
│
│ Variables may not be used here.
Can anyone assist me with achieving this?
If you want to pass variables you could do something like this:
echo "yes" | terraform init -backend-config="${backend_env}" -backend-config="key=global/state/${backend_config}/terraform.tfstate" -backend-config="region=us-east-1" -backend-config="encrypt=true" -backend-config="kms_key_id=${kmykeyid}" -backend-config="dynamodb_table=iha-test-platform-db-${backend_config}"
The trick here is that the values have to be supplied at the command line when you initialize. Terraform cannot interpolate variables in the backend block, as other community members have already mentioned; it's just the way it is. That said, you can modify the init command and pass the values through as environment variables on your host, or pull them in from another source.
In this example, I declared the variables in a container through AWS CodeBuild, but you can use any method as long as the variables are defined prior to initialization. Let me know if you need help with this; the documentation isn't very clear, and while the folks on Stack Overflow have been amazing at addressing this, it has been hard for beginners to understand how it all comes together.
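For reference, the same idea written as a partial backend configuration might look like this; the backend block stays free of variables and the concrete values live in a separate file passed to init. The file name and values below are placeholders, not taken from the question:
# backend.tf - no variables or interpolation allowed here
terraform {
  backend "s3" {}
}

# dev.s3.tfbackend - plain key = value settings for this environment
bucket = "my-application-name"
key    = "tf-scripts/my-application-name-tfstate"
region = "eu-west-1"

terraform init -backend-config=dev.s3.tfbackend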
I'm trying to use terragrunt to access a local module using the live/remote double-repo technique discussed in the Terragrunt documentation.
When I run terragrunt plan locally, all is well, and I get no errors.
But when I run it from a second repo, I get these errors:
Upgrading modules...
- common in
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../common: no such file or
│ directory
╵
╷
│ Error: Failed to read module directory
│
│ Module directory does not exist or cannot be read.
╵
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../common: no such file or
│ directory
╵
╷
│ Error: Failed to read module directory
│
│ Module directory does not exist or cannot be read.
╵
Here is the path hierarchy in the "remote" repo:
project
├── terragrunt.hcl
├── common
│   └── main.tf
└── my-stack
    ├── main.tf
    └── terragrunt.hcl
The code for project/my-stack/main.tf is:
module "common" {
source = "../common/"
}
...
For project/my-stack/terragrunt.hcl:
include "root" {
path = find_in_parent_folders()
}
And for project/terragrunt.hcl:
remote_state { ... }
In the above scenario, all is good and terragrunt runs without error. The errors occur only when running from the "live" repo.
Here is the path hierarchy in the "live" repo:
live
├── terragrunt.hcl
└── my-live-stack
    └── terragrunt.hcl
For live/my-live-stack/terragrunt.hcl:
terraform {
  source = "git::git@bitbucket.com:my-company/my-repo.git//my-project?ref=dev"
}
include "root" {
  path = find_in_parent_folders()
}
And for live/terragrunt.hcl:
remote_state{...}
I've tried adding terraform {} blocks and other recommended blocks, but none seem to work. There must be something simple that I am doing wrong here.
EDIT: It turns out the git commits had not been applied in the ../common directory, so no files existed there and the error message was correct.
Objective: I am trying to import module.resource, an already-created Azure resource, into my state file via Terraform.
What I tried:
terraform -chdir=Terraform import synapse_workspace_pe.azurerm_private_endpoint.syn_ws_pe_dev "/subscriptions/xxxx-xxxx-xxx--xx/resourceGroups/hub/providers/Microsoft.Network/privateEndpoints/pe-cb-ab-dev-we-dev"
Error that I get:
error: Invalid address
│
│ on <import-address> line 1:
│ 1: synapse_workspace_pe.azurerm_private_endpoint.syn_ws_pe_dev
│
│ Resource instance key must be given in square brackets
I referred to some Stack Overflow posts and the syntax is the same. Can someone tell me how to fix this?
When importing resources that belong to a declared module, the resource address must be prefixed with the literal module namespace:
terraform -chdir=Terraform import module.synapse_workspace_pe.azurerm_private_endpoint.syn_ws_pe_dev "/subscriptions/xxxx-xxxx-xxx--xx/resourceGroups/hub/providers/Microsoft.Network/privateEndpoints/pe-cb-ab-dev-we-dev"
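For context, this form assumes the root configuration declares a module block named synapse_workspace_pe that contains the private endpoint resource, roughly like the following (the module source path here is illustrative, not taken from the question):
module "synapse_workspace_pe" {
  source = "./modules/synapse_workspace_pe"  # illustrative path
  ...
}

# inside the module
resource "azurerm_private_endpoint" "syn_ws_pe_dev" {
  ...
}
The import address is then module.<module name>.<resource type>.<resource name>.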
I am trying to install the following provider: https://github.com/marqeta/terraform-provider-dfd
I followed the installation steps; the folder structure is attached.
But I am getting an error when trying to run terraform init.
Folder structure:
dfd-demo
├── main.tf
└── terraform-provider-dfd_v0.0.2
dfd-demo % terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/dfd...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/dfd: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/dfd
│
│ All modules should specify their required_providers so that external consumers will get the correct providers when using a module. To see which modules
│ are currently depending on hashicorp/dfd, run the following command:
│ terraform providers
╵
dfd-demo %
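The last paragraph of the error hints at the likely cause: without a required_providers entry, Terraform assumes the provider lives under the hashicorp/ namespace on the public registry. A minimal sketch of the kind of declaration it is asking for is below; the source address and version are assumptions, not taken from the provider's documentation:
terraform {
  required_providers {
    dfd = {
      source  = "marqeta/dfd"  # assumed address; use whatever the install steps specify
      version = "0.0.2"        # assumed to match terraform-provider-dfd_v0.0.2
    }
  }
}
For a locally built binary like terraform-provider-dfd_v0.0.2, the plugin also has to be installed somewhere Terraform can find it for that source address (for example a filesystem mirror or dev override in the CLI configuration); otherwise init will keep querying the public registry.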
I configure my Terraform using a GCS backend with a workspace. My CI environment has access to exactly the state file it requires for the workspace.
terraform {
  required_version = ">= 0.14"

  backend "gcs" {
    prefix      = "<my prefix>"
    bucket      = "<my bucket>"
    credentials = "credentials.json"
  }
}
I define the output of my terraform module inside output.tf:
output "base_api_url" {
description = "Base url for the deployed cloud run service"
value = google_cloud_run_service.api.status[0].url
}
My CI Server runs terraform apply -auto-approve -lock-timeout 15m. It succeeds and it shows me the output in the console logs:
Outputs:
base_api_url = "https://<my project url>.run.app"
When I call terraform output base_api_url, it gives me the following warning:
│ Warning: No outputs found
│
│ The state file either has no outputs defined, or all the defined outputs
│ are empty. Please define an output in your configuration with the `output`
│ keyword and run `terraform refresh` for it to become available. If you are
│ using interpolation, please verify the interpolated value is not empty. You
│ can use the `terraform console` command to assist.
I try calling terraform refresh like it mentions in the warning and it tells me:
╷
│ Warning: Empty or non-existent state
│
│ There are currently no remote objects tracked in the state, so there is
│ nothing to refresh.
╵
I'm not sure what to do. I'm calling terraform output RIGHT after I call apply, but it's still giving me no outputs. What am I doing wrong?
I had the exact same issue, and it was happening because I was running the terraform commands against a different path (via -chdir) than the one I was in.
terraform -chdir="another/path" apply
Running the output command afterwards would fail with that error unless you cd to that path first:
cd "another/path"
terraform output
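In other words, apply and output have to point at the same working directory so they read the same backend, workspace, and state; a minimal sketch of the consistent form (the path is a placeholder):
terraform -chdir="another/path" apply -auto-approve
terraform -chdir="another/path" output base_api_url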
I have two projects in one solution. The common-infra project creates the ECS cluster and common ECS services, like the nginx service used by all other services. The ecs-service1 project contains the resource definitions for creating ECS services. I reference resource ARNs created in common-infra from my ecs-service1 project.
I first go to common-infra and run terraform plan and apply. Now the cluster and nginx service are up and running. Next I go to ecs-service1 and run terraform plan. At this point it recognizes that I have linked to the common-infra module and shows that it will create the cluster and common services like nginx again.
Is there a way to arrange/reference the projects so that when I run terraform plan in ecs-service1 it knows that common-infra is already built, knows its state, and creates only the resources in ecs-service1, pulling in only the ARN references created in common-infra?
.
├── ecs-service1
│ ├── main.tf
│ ├── task-def
│ │ ├── adt-api-staging2-task-definition.json
│ │ └── adt-frontend-staging2-task-definition.json
│ ├── terraform.tfstate
│ ├── terraform.tfstate.backup
│ └── variables.tf
├── common-infra
│ ├── main.tf
│ ├── task-def
│ │ └── my-nginx-staging2-task-definition.json
│ ├── terraform.tfstate
│ ├── user-data.sh
│ └── variables.tf
└── script
└── get-taskdefinitions.sh
common-infra main.tf
output "splat_lb_listener_http_80_arn"{
value = aws_lb_listener.http_80.arn
}
output "splat_lb_listener_http_8080_arn"{
value = aws_lb_listener.http_8080.arn
}
output "splat_ecs_cluster_arn" {
value = aws_ecs_cluster.ecs_cluster.arn
}
ecs-service1 main.tf
module "splat_common" {
source = "../common-infa"
}
resource "aws_ecs_service" "frontend_webapp_service" {
name = var.frontend_services["service_name"]
cluster = module.splat_common.splat_ecs_cluster_arn
...
}
There are a few solutions, but first I'd like to say that your ecs-service should only call common-infra as a module if you want all of the resource creation to be handled at once (and not split apart as you describe).
Another solution would be to use terraform import to get the current state into your existing terraform. This is less than ideal, because now you have the same infrastructure being managed by 2 state files.
If you are including common-infra because it provides some outputs, you should look into using data lookups (https://www.terraform.io/docs/language/data-sources/index.html). You can even reference the outputs of another Terraform state (https://www.terraform.io/docs/language/state/remote-state-data.html) (although I've never actually tried this, it can be done).
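As a concrete illustration of that last option, here is a hedged sketch that matches the local state files shown in the question's directory tree (swap the backend and config if the state is later moved to S3 or similar):
# in ecs-service1/main.tf
data "terraform_remote_state" "common_infra" {
  backend = "local"
  config = {
    path = "../common-infra/terraform.tfstate"
  }
}

resource "aws_ecs_service" "frontend_webapp_service" {
  name    = var.frontend_services["service_name"]
  cluster = data.terraform_remote_state.common_infra.outputs.splat_ecs_cluster_arn
  ...
}
This way ecs-service1 only reads the ARNs that common-infra already exported as outputs, instead of planning to create those resources again.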