Trying to run an AWS CLI call as a Terraform external data source, but getting an error about unmarshalling a string.
Terraform:
data "external" "cognito" {
program = ["sh", "-c", "aws cognito-idp list-user-pool-clients --user-pool-id eu-west-xxx | jq '.UserPoolClients | .[] | select(.ClientName | contains(\"AmazonOpenSearchService\")) | .ClientId'"]
}
Running the command directly from bash returns:
"123456"
Via Terraform it errors with:
│ The data source received unexpected results after executing the program.
│
│ Program output must be a JSON encoded map of string keys and string values.
│
│ If the error is unclear, the output can be viewed by enabling Terraform's logging at TRACE level. Terraform documentation on logging: https://www.terraform.io/internals/debugging
│
│ Program: /bin/sh
│ Result Error: json: cannot unmarshal string into Go value of type map[string]string
When running an ad hoc script through the Terraform external data source, the program must print a valid JSON object whose keys and values are all strings (the docs describe it as a map of strings), which "123456" is not. You'd need output such as '{"clientId":"123456"}'.
If you want to avoid the script completely, what Mark B suggested in his comment above is the best way to go about it.
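If you do keep the script, one option (a sketch, not tested against your pool, and assuming exactly one client matches the filter) is to have jq build the map itself; the value is then read as data.external.cognito.result.clientId:
data "external" "cognito" {
  # jq wraps the ClientId in an object, so the program prints a JSON map of strings
  program = ["sh", "-c", "aws cognito-idp list-user-pool-clients --user-pool-id eu-west-xxx | jq '{clientId: (.UserPoolClients[] | select(.ClientName | contains(\"AmazonOpenSearchService\")) | .ClientId)}'"]
}

output "cognito_client_id" {
  value = data.external.cognito.result.clientId
}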
I am deploying Terraform code through a Bitbucket pipeline and am having a problem parsing a map-of-objects variable in the pipeline. Below is the variable:
variable "images" {
type = map(object({
port = number
}))
}
Below is how the variable value is defined in the Bitbucket Pipelines variables:
"{"image_one"={port=1000}"image_two"={port=2000}}"
When the pipeline runs, I am getting the following error:
Error: Extra characters after expression
│
│ on <value for var.images> line 1:
│ (source code not available)
│
│ An expression was successfully parsed, but extra characters were found
│ after it.
Below is the command in bitbucket-pipelines.yml showing how the variable is passed in the pipeline:
terraform apply -var images="$IMAGES" -auto-approve
Any advice on how to get the map of objects variable to work through the pipeline would be helpful.
Thank you.
Assuming this is your actual value, and not just a typo introduced while writing the question, you are missing a comma between the two map entries. It should be:
"{"image_one"={port=1000}, "image_two"={port=2000}}"
Objective: Trying to import an Azure resource that was already created via Terraform, addressed under a module, into my state file.
What I tried:
terraform -chdir=Terraform import synapse_workspace_pe.azurerm_private_endpoint.syn_ws_pe_dev "/subscriptions/xxxx-xxxx-xxx--xx/resourceGroups/hub/providers/Microsoft.Network/privateEndpoints/pe-cb-ab-dev-we-dev"
Error that I get:
error: Invalid address
│
│ on <import-address> line 1:
│ 1: synapse_workspace_pe.azurerm_private_endpoint.syn_ws_pe_dev
│
│ Resource instance key must be given in square brackets
I referred to some Stack Overflow posts and the syntax is the same. Can someone tell me how to fix this?
When importing resources that are declared inside a module, the resource address must be prefixed with the literal module:
terraform -chdir=Terraform import module.synapse_workspace_pe.azurerm_private_endpoint.syn_ws_pe_dev "/subscriptions/xxxx-xxxx-xxx--xx/resourceGroups/hub/providers/Microsoft.Network/privateEndpoints/pe-cb-ab-dev-we-dev"
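In general, the import address for a resource declared inside a module follows this shape (the placeholders are mine, not from your configuration):
terraform import module.<MODULE_NAME>.<RESOURCE_TYPE>.<RESOURCE_NAME> <RESOURCE_ID>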
I have been struggling to upload a bunch of CSS/HTML/JS files to a static website hosted in the $web storage container using Terraform. It fails even with a single index.html, throwing the error below.
Error: local-exec provisioner error
│
│ with null_resource.frontend_files,
│ on c08-02-website-storage-account.tf line 111, in resource "null_resource" "frontend_files":
│ 111: provisioner "local-exec" {
│
│ Error running command '
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://***********.blob.core.windows.net/web?sv=2018-11-09&sr=c&st=2022-01-01T00%3A00%3A00Z&se=2023-01-01T00%3A00%3A00Z&sp=racwl&spr=https&sig=*******************" --recursive
': exit status 1. Output: INFO: Scanning...
│ INFO: Any empty folders will not be processed, because source and/or
│ destination doesn't have full folder support
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 has started
│ Log file is located at:
│ /home/runner/.azcopy/718f9960-b7eb-7843-648a-6b57d14f5e27.log
│
│
100.0 %, 0 Done, 0 Failed, 0 Pending, 0 Skipped, 0 Total,
│
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 summary
│ Elapsed Time (Minutes): 0.0336
│ Number of File Transfers: 1
│ Number of Folder Property Transfers: 0
│ Total Number of Transfers: 1
│ Number of Transfers Completed: 0
│ Number of Transfers Failed: 1
│ Number of Transfers Skipped: 0
│ TotalBytesTransferred: 0
│ Final Job Status: Failed
│
The $web container is empty, so I placed a dummy index.html file in it before executing the code to see if that would make this "empty folders" message go away, but still no luck.
I gave the complete set of permissions to the SAS key to rule out any access issue.
I suspect the azcopy command is unable to navigate to the source folder and pick up the contents to be uploaded, but I am not sure.
Excerpts from tf file:
resource "null_resource" "frontend_files"{
depends_on = [data.azurerm_storage_account_blob_container_sas.website_blob_container_sas,
azurerm_storage_account.resume_static_storage]
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://${azurerm_storage_account.resume_static_storage.name}.blob.core.windows.net/web${data.azurerm_storage_account_blob_container_sas.website_blob_container_sas.sas}" --recursive
EOT
}
}
Any help would be appreciated.
Per a solution listed here, we need to add an escape character (\) before $web. The following command (to copy all files and subfolders to the $web container) worked for me:
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/\$web/?<SAS token>" --recursive
Without the escape character, i.e. with the command below, it was failing with the error: "failed to perform copy command due to error: cannot transfer individual files/folders to the root of a service. Add a container or directory to the destination URL"
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/$web/?<SAS token>" --recursive
I configure my Terraform using a GCS backend with a workspace. My CI environment has access to exactly the state file it requires for the workspace.
terraform {
  required_version = ">= 0.14"

  backend "gcs" {
    prefix      = "<my prefix>"
    bucket      = "<my bucket>"
    credentials = "credentials.json"
  }
}
I define the output of my Terraform module inside output.tf:
output "base_api_url" {
  description = "Base url for the deployed cloud run service"
  value       = google_cloud_run_service.api.status[0].url
}
My CI server runs terraform apply -auto-approve -lock-timeout 15m. It succeeds and shows me the output in the console logs:
Outputs:
base_api_url = "https://<my project url>.run.app"
When I call terraform output base_api_url, it gives me the following error:
│ Warning: No outputs found
│
│ The state file either has no outputs defined, or all the defined outputs
│ are empty. Please define an output in your configuration with the `output`
│ keyword and run `terraform refresh` for it to become available. If you are
│ using interpolation, please verify the interpolated value is not empty. You
│ can use the `terraform console` command to assist.
I tried calling terraform refresh as the warning suggests, and it tells me:
╷
│ Warning: Empty or non-existent state
│
│ There are currently no remote objects tracked in the state, so there is
│ nothing to refresh.
╵
I'm not sure what to do. I'm calling terraform output RIGHT after I call apply, but it's still giving me no outputs. What am I doing wrong?
I had the exact same issue, and it was happening because I was running Terraform commands against a different path than the one I was in:
terraform -chdir="another/path" apply
Running the output command then fails with that error, unless you cd to that path before running it:
cd "another/path"
terraform output
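Alternatively, if you prefer not to cd, the -chdir flag can be passed to the output command as well, so every command targets the same working directory:
terraform -chdir="another/path" output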
I have written the backend configuration below in Terraform:
terraform {
  backend "s3" {
    bucket = "${var.application_name}"
    region = "${var.AWS_REGION}"
    key    = "tf-scripts/${var.application_name}-tfstate"
  }
}
While running terraform init, I am getting the error message below:
terraform init
Initializing the backend...
╷
│ Error: Variables not allowed
│
│ on backend.tf line 4, in terraform:
│ 4: bucket = "${var.application_name}"
│
│ Variables may not be used here.
╵
╷
│ Error: Variables not allowed
│
│ on backend.tf line 5, in terraform:
│ 5: region = "${var.AWS_REGION}"
│
│ Variables may not be used here.
╵
╷
│ Error: Variables not allowed
│
│ on backend.tf line 6, in terraform:
│ 6: key = "tf-scripts/${var.application_name}-tfstate"
│
│ Variables may not be used here.
Can anyone advise how to achieve this?
If you want to pass in values at init time, you could do something like this:
echo "yes" | terraform init -backend-config="${backend_env}" -backend-config="key=global/state/${backend_config}/terraform.tfstate" -backend-config="region=us-east-1" -backend-config="encrypt=true" -backend-config="kms_key_id=${kmykeyid}" -backend-config="dynamodb_table=iha-test-platform-db-${backend_config}"
The trick here is that the backend values have to be supplied at the command line when you run terraform init. Terraform cannot interpolate variables inside the backend block, as other community members have already mentioned; it's just the way it is. That said, you can modify the init command and feed the values in through environment variables on your host, or pull them in from another source.
In this example I declared the variables in a container through AWS CodeBuild, but you can use any method as long as the variables are defined before initialization. Let me know if you need help with this; the documentation isn't very clear, and while the folks on Stack Overflow have been great at addressing it, it has been hard for beginners to understand how it all comes together.
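Applied to the configuration from the question, a minimal sketch (the APPLICATION_NAME and AWS_REGION environment variables are assumptions about your environment) keeps the backend block empty and supplies the values at init time via partial configuration:
# backend.tf - partial configuration; variables are not allowed here
terraform {
  backend "s3" {}
}

# CI shell step (hypothetical environment variables)
terraform init \
  -backend-config="bucket=${APPLICATION_NAME}" \
  -backend-config="region=${AWS_REGION}" \
  -backend-config="key=tf-scripts/${APPLICATION_NAME}-tfstate"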