I'm trying to use Terragrunt to access a local module using the live/remote double-repo technique discussed in the Terragrunt documentation.
When I run terragrunt plan locally, all is well, and I get no errors.
But when I run it from a second repo, I get these errors:
Upgrading modules...
- common in
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../common: no such file or
│ directory
╵
╷
│ Error: Failed to read module directory
│
│ Module directory does not exist or cannot be read.
╵
╷
│ Error: Unreadable module directory
│
│ Unable to evaluate directory symlink: lstat ../common: no such file or
│ directory
╵
╷
│ Error: Failed to read module directory
│
│ Module directory does not exist or cannot be read.
╵
Here is the path hierarchy in the "Remote" repo:
project
├── terragrunt.hcl
├── common
│   └── main.tf
└── my-stack
    ├── main.tf
    └── terragrunt.hcl
The code for project/my-stack/main.tf is:
module "common" {
source = "../common/"
}
...
For project/my-stack/terragrunt.hcl:
include "root" {
path = find_in_parent_folders()
}
And for project/terragrunt.hcl:
remote_state{...}
In the above scenario, all is good and terragrunt runs without error. The errors occur only when running from the "live" repo.
Here is the path hierarchy in the "Live" repo:
live
├── terragrunt.hcl
└── my-live-stack
    └── terragrunt.hcl
For live/my-live-stack/terragrunt.hcl:
terraform {
  source = "git::git#bitbucket.com:my-company/my-repo.git//my-project?ref=dev"
}
include "root" {
  path = find_in_parent_folders()
}
And for live/terragrunt.hcl:
remote_state{...}
I've tried adding terraform {} blocks and other recommended configuration blocks, but none seem to work. There must be something simple that I'm doing wrong here.
EDIT: It turns out the git commits had not been applied to the ../common directory, so no files existed there and the error message was correct.
Related
I am trying to install the following provider: https://github.com/marqeta/terraform-provider-dfd
I followed the installation steps, but I get an error when trying to run terraform init.
Folder structure:
dfd-demo
├── main.tf
└── terraform-provider-dfd_v0.0.2
dfd-demo % terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/dfd...
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider hashicorp/dfd: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/dfd
│
│ All modules should specify their required_providers so that external consumers will get the correct providers when using a module. To see which modules
│ are currently depending on hashicorp/dfd, run the following command:
│ terraform providers
╵
dfd-demo %
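For context on where this usually goes wrong: with Terraform 0.13+ a provider binary sitting next to main.tf is not picked up, and without a required_providers block Terraform assumes the hashicorp/ namespace (hence the hashicorp/dfd lookup in the error). A minimal sketch of the usual fix, assuming the provider's docs use the source address marqeta/dfd and the implied local mirror under ~/.terraform.d/plugins; the address, version, and darwin_amd64 target shown here are assumptions to adjust:
terraform {
  required_providers {
    dfd = {
      # Assumed source address; the binary must be mirrored to match it, e.g.:
      # ~/.terraform.d/plugins/registry.terraform.io/marqeta/dfd/0.0.2/darwin_amd64/terraform-provider-dfd_v0.0.2
      source  = "marqeta/dfd"
      version = "0.0.2"
    }
  }
}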
I have been struggling to upload a bunch of CSS/HTML/JS files to a static website hosted on a storage container ($web) using Terraform. It fails even with a single index.html, throwing the error below.
Error: local-exec provisioner error
│
│ with null_resource.frontend_files,
│ on c08-02-website-storage-account.tf line 111, in resource "null_resource" "frontend_files":
│ 111: provisioner "local-exec" {
│
│ Error running command '
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://***********.blob.core.windows.net/web?sv=2018-11-09&sr=c&st=2022-01-01T00%3A00%3A00Z&se=2023-01-01T00%3A00%3A00Z&sp=racwl&spr=https&sig=*******************" --recursive
': exit status 1. Output: INFO: Scanning...
│ INFO: Any empty folders will not be processed, because source and/or
│ destination doesn't have full folder support
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 has started
│ Log file is located at:
│ /home/runner/.azcopy/718f9960-b7eb-7843-648a-6b57d14f5e27.log
│
│
100.0 %, 0 Done, 0 Failed, 0 Pending, 0 Skipped, 0 Total,
│
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 summary
│ Elapsed Time (Minutes): 0.0336
│ Number of File Transfers: 1
│ Number of Folder Property Transfers: 0
│ Total Number of Transfers: 1
│ Number of Transfers Completed: 0
│ Number of Transfers Failed: 1
│ Number of Transfers Skipped: 0
│ TotalBytesTransferred: 0
│ Final Job Status: Failed
│
The $web container is empty. So I placed a dummy index.html file before I executed the code to see if that would make this "empty folder" error go away. But still no luck.
I gave the complete set of permissions to the SAS key to rule out any access issues.
I suspect the azcopy command is unable to navigate to the source folder and read the contents to be uploaded, but I am not sure.
Excerpts from tf file:
resource "null_resource" "frontend_files"{
depends_on = [data.azurerm_storage_account_blob_container_sas.website_blob_container_sas,
azurerm_storage_account.resume_static_storage]
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://${azurerm_storage_account.resume_static_storage.name}.blob.core.windows.net/web${data.azurerm_storage_account_blob_container_sas.website_blob_container_sas.sas}" --recursive
EOT
}
}
Any help would be appreciated.
Per a solution listed here, we need to add an escape character (\) before $web. The following command (to copy all files and subfolders to the $web container) worked for me:
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/\$web/?<SAS token>" --recursive
Without the escape character, the command below was failing with the error "failed to perform copy command due to error: cannot transfer individual files/folders to the root of a service. Add a container or directory to the destination URL":
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/$web/?<SAS token>" --recursive
I configure my Terraform using a GCS backend, with a workspace. My CI environment has access to exactly the state file it requires for the workspace.
terraform {
  required_version = ">= 0.14"

  backend "gcs" {
    prefix      = "<my prefix>"
    bucket      = "<my bucket>"
    credentials = "credentials.json"
  }
}
I define the output of my terraform module inside output.tf:
output "base_api_url" {
description = "Base url for the deployed cloud run service"
value = google_cloud_run_service.api.status[0].url
}
My CI Server runs terraform apply -auto-approve -lock-timeout 15m. It succeeds and it shows me the output in the console logs:
Outputs:
base_api_url = "https://<my project url>.run.app"
But when I call terraform output base_api_url, it gives me the following error:
│ Warning: No outputs found
│
│ The state file either has no outputs defined, or all the defined outputs
│ are empty. Please define an output in your configuration with the `output`
│ keyword and run `terraform refresh` for it to become available. If you are
│ using interpolation, please verify the interpolated value is not empty. You
│ can use the `terraform console` command to assist.
I try calling terraform refresh as the warning suggests, and it tells me:
╷
│ Warning: Empty or non-existent state
│
│ There are currently no remote objects tracked in the state, so there is
│ nothing to refresh.
╵
I'm not sure what to do. I'm calling terraform output RIGHT after I call apply, but it's still giving me no outputs. What am I doing wrong?
I had the exact same issue, and it was happening because I was running terraform commands against a different path than the one I was in:
terraform -chdir="another/path" apply
Running the output command afterwards would then fail with that error, unless you cd to that path before running it:
cd "another/path"
terraform output
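Alternatively, pass the same -chdir flag to the output command so that both commands run against the same working directory:
terraform -chdir="another/path" output base_api_url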
I have written the backend configuration below in Terraform:
terraform {
  backend "s3" {
    bucket = "${var.application_name}"
    region = "${var.AWS_REGION}"
    key    = "tf-scripts/${var.application_name}-tfstate"
  }
}
While running terraform init, I get the error message below:
terraform init
Initializing the backend...
╷
│ Error: Variables not allowed
│
│ on backend.tf line 4, in terraform:
│ 4: bucket = "${var.application_name}"
│
│ Variables may not be used here.
╵
╷
│ Error: Variables not allowed
│
│ on backend.tf line 5, in terraform:
│ 5: region = "${var.AWS_REGION}"
│
│ Variables may not be used here.
╵
╷
│ Error: Variables not allowed
│
│ on backend.tf line 6, in terraform:
│ 6: key = "tf-scripts/${var.application_name}-tfstate"
│
│ Variables may not be used here.
Can anyone assist me with achieving this?
If you want to pass variables you could do something like this:
echo "yes" | terraform init -backend-config="${backend_env}" -backend-config="key=global/state/${backend_config}/terraform.tfstate" -backend-config="region=us-east-1" -backend-config="encrypt=true" -backend-config="kms_key_id=${kmykeyid}" -backend-config="dynamodb_table=iha-test-platform-db-${backend_config}"
The trick here is that the values have to be supplied at the command line when you initialize. Terraform cannot interpolate variables inside the backend block, as other community members have already mentioned; it's just the way it is. That said, you can modify the init command and pass the values in as environment variables on your host, or pull them in from another source.
In this example, I declared the variables in a container through AWS CodeBuild, but you can use any method as long as the variables are defined prior to initialization. Let me know if you need help with this; the documentation isn't very clear, and while the folks on Stack Overflow have been amazing at addressing it, it's been hard for beginners to understand how this all comes together.
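For completeness, the same idea maps onto Terraform's partial backend configuration: leave the variable arguments out of the backend block and supply them from a separate file at init time. A rough sketch with placeholder file names and values:
# backend.tf: no variables; anything constant can stay inline
terraform {
  backend "s3" {
    key = "tf-scripts/terraform.tfstate"
  }
}

# backend.hcl: plain argument assignments (placeholder values),
# supplied with: terraform init -backend-config=backend.hcl
#   bucket = "my-application-bucket"
#   region = "us-east-1"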
I have one solution with two projects. The common-infra project creates the ECS cluster and the common ECS services, such as nginx, that are used by all other services. The ecs-service1 project contains the resource definitions for creating ECS services. I reference resource ARNs created in the common-infra project from my ecs-service1 project.
I first go to common-infra, run terraform plan, and apply it. Now the cluster and nginx service are up and running. Next I go to ecs-service1 and run terraform plan. At this point it recognizes that I have linked to the common-infra module and shows that it will create the cluster and the common services like nginx again.
Is there a way to arrange/reference the projects so that when I run terraform plan in ecs-service1 it knows that common-infra is already built, knows its state, and creates only the resources in ecs-service1, pulling in just the ARN references created in common-infra?
.
├── ecs-service1
│ ├── main.tf
│ ├── task-def
│ │ ├── adt-api-staging2-task-definition.json
│ │ └── adt-frontend-staging2-task-definition.json
│ ├── terraform.tfstate
│ ├── terraform.tfstate.backup
│ └── variables.tf
├── common-infra
│ ├── main.tf
│ ├── task-def
│ │ └── my-nginx-staging2-task-definition.json
│ ├── terraform.tfstate
│ ├── user-data.sh
│ └── variables.tf
└── script
└── get-taskdefinitions.sh
common-infra main.tf
output "splat_lb_listener_http_80_arn"{
value = aws_lb_listener.http_80.arn
}
output "splat_lb_listener_http_8080_arn"{
value = aws_lb_listener.http_8080.arn
}
output "splat_ecs_cluster_arn" {
value = aws_ecs_cluster.ecs_cluster.arn
}
ecs-service1 main.tf
module "splat_common" {
source = "../common-infa"
}
resource "aws_ecs_service" "frontend_webapp_service" {
name = var.frontend_services["service_name"]
cluster = module.splat_common.splat_ecs_cluster_arn
...
}
There are a few solutions, but first I'd like to say that your ecs-service should be calling common-infra as a module only if you want all of the resource creation to be handled at once (and not split apart as you describe).
Another solution would be to use terraform import to get the current state into your existing terraform. This is less than ideal, because now you have the same infrastructure being managed by 2 state files.
If you are including common-infra because it provides some outputs, you should look into using data lookups (https://www.terraform.io/docs/language/data-sources/index.html). You can even reference the outputs of another Terraform state (https://www.terraform.io/docs/language/state/remote-state-data.html) (although I've never actually tried this, it can be done).
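A sketch of that remote-state lookup for this layout, assuming common-infra keeps its state in the local terraform.tfstate shown in the tree above (swap in the matching backend settings if that state moves to S3 or elsewhere):
data "terraform_remote_state" "common_infra" {
  backend = "local"
  config = {
    path = "../common-infra/terraform.tfstate"
  }
}

resource "aws_ecs_service" "frontend_webapp_service" {
  name    = var.frontend_services["service_name"]
  # read the ARN from common-infra's outputs instead of re-creating the cluster
  cluster = data.terraform_remote_state.common_infra.outputs.splat_ecs_cluster_arn
  # ...
}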