How to set up the Terraform backend configuration using CLI arguments with the terraform init command
Whenever a configuration's backend changes, you must run terraform init again to validate and configure the backend before you can perform any plans or other operations.
terraform init [options] performs several different initialization steps. After initialization you can run other commands.
For backend configuration, you can pass key/value settings (or a configuration file) directly to the init command.
The settings are passed like this:
$ terraform init \
-backend-config="address=demo.consul.io" \
-backend-config="path=example_app/terraform_state" \
-backend-config="scheme=https"
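The same key/value pairs can instead live in a separate file that you pass to init once; the following is a sketch of that equivalent approach (backend.hcl is an assumed file name):

```shell
# Write the partial backend configuration to a file (assumed name: backend.hcl)
cat > backend.hcl <<'EOF'
address = "demo.consul.io"
path    = "example_app/terraform_state"
scheme  = "https"
EOF

# Then pass the file once instead of repeating -backend-config per key
# (requires terraform on PATH, so left commented here):
# terraform init -backend-config=backend.hcl
```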
I am trying to automate a Terraform deployment using GitHub Actions. The Terraform version is 0.11.13.
This is my folder structure:
.github
- workflows
- account
- tokyo
- networking
- backend.tfvars
terraform
- networking
I am trying to run terraform init in the networking folder, with the backend config file in the account/tokyo/networking folder:
terraform init --backend-config=/account/tokyo/networking
I also tried
terraform init --backend-config=/account/tokyo/networking/backend.tfvars
The GitHub Actions script runs on an Ubuntu system, and I get the following error for both of the above configurations:
terraform init --backend-config='/account/tokyo/networking'
Usage: terraform init [options] [DIR]
.
.
.
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.
What am I missing?
Try this command
terraform init --backend-config '/account/tokyo/networking'
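One more thing worth checking (an assumption based on the usage error shown): a leading / makes the path absolute, but on a GitHub Actions runner the checkout lives under the workspace directory, so a path relative to the directory where init runs is more likely to resolve. A minimal sketch, with stand-in content since the exact relative path depends on your layout:

```shell
# Recreate the expected layout with a stand-in tfvars file, then point init at
# a relative path -- no leading slash.
mkdir -p account/tokyo/networking
echo 'bucket = "example-bucket"' > account/tokyo/networking/backend.tfvars

ls account/tokyo/networking/backend.tfvars   # confirm the file resolves

# terraform init -backend-config=account/tokyo/networking/backend.tfvars
```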
I have looked on the Internet but did not find anything close to an answer.
I have the following main.tf:
terraform {
cloud {
organization = "my-organization"
workspaces {
tags = ["app:myapplication"]
}
}
}
I am using Terraform Cloud and I would like to use workspaces in automation.
In order to do so, I first need to run terraform init:
/my/path # terraform init
Initializing Terraform Cloud...
No workspaces found.
There are no workspaces with the configured tags
(app:myapplication) in your Terraform Cloud
organization. To finish initializing, Terraform needs at least one
workspace available.
Terraform can create a properly tagged workspace for you now. Please
enter a name to create a new Terraform Cloud workspace.
Enter a value:
I would like to do something of this kind:
terraform init -workspace=my-workspace
so that the workspace is created if it does not exist. But I have not found anything; the only way to create the first workspace seems to be manually.
How can I do that in automation with CI/CD?
[edit]
terraform workspace commands are not available before init:
/src/terraform # terraform workspace list
Error: Terraform Cloud initialization required: please run "terraform
init"
Reason: Initial configuration of Terraform Cloud.
Changes to the Terraform Cloud configuration block require
reinitialization, to discover any changes to the available workspaces.
To re-initialize, run: terraform init
Terraform has not yet made changes to your existing configuration or
state.
You would need to use the TF Cloud/TFE API. The calls that follow target TF Cloud; for TFE, modify the endpoint to target your own installation.
You first need to list the TF Cloud Workspaces:
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/workspaces
where my-organization is your TF Cloud organization. This returns the workspaces as JSON. You would then parse the JSON and iterate over the array of existing TF Cloud workspaces: in each element of data, the name key nested under attributes holds the workspace name. Gather the names and check them against the name of the workspace you want to exist. If the desired workspace is not in the list, create the TF Cloud workspace:
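A minimal sketch of that check step, using grep rather than a real JSON parser, with a hard-coded SAMPLE standing in for the actual API response:

```shell
# SAMPLE stands in for the JSON returned by the workspaces endpoint.
SAMPLE='{"data":[{"attributes":{"name":"existing-ws"}},{"attributes":{"name":"other-ws"}}]}'
WANTED="my-workspace"   # the workspace you want to exist

# Crude substring match on the attributes.name field; a real script would use jq.
if echo "$SAMPLE" | grep -q "\"name\":\"$WANTED\""; then
  echo "workspace exists"
else
  echo "workspace missing - create it with the POST call"
fi
```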
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/workspaces
again substituting your organization and your specific payload. You can then run terraform init successfully, with the backend specifying the TF Cloud workspace.
Note that if you are executing this in automation as you describe in the question, the build agent needs connectivity to TF Cloud.
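For reference, a plausible shape for payload.json (the name and the tag-names attribute here are assumptions; check the workspaces API documentation for the exact attributes your TFC/TFE version accepts):

```shell
# Write a hypothetical create-workspace payload; "tag-names" is meant to match
# the tags in the cloud block so terraform init can discover the workspace.
cat > payload.json <<'EOF'
{
  "data": {
    "type": "workspaces",
    "attributes": {
      "name": "my-workspace",
      "tag-names": ["app:myapplication"]
    }
  }
}
EOF
```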
I will not mark this as the answer, but I finally did this, which looks like a bad trick to me:
export TF_WORKSPACE=myWorkspace
if terraform init -input=false; then echo "already exists"; else (sleep 2; echo $TF_WORKSPACE) | terraform init; fi
terraform apply -auto-approve -var myvar=boo
While trying to migrate my backend config to use the new state storage with gitlab, I have run into this glorious problem: My state is locked.
I cannot force-unlock the state because the backend needs to be reinitialized
I cannot force-unlock -force the state unlock because the backend needs to be reinitialized
I cannot set up the backend with -lock=false because the same credentials that started this entire mess cannot seem to push things other than toxic lock tokens:
Error: Error copying state from the previous "local" backend to the newly configured
"http" backend:
Failed to upload state: POST http://internal.host/api/v4/projects/14/terraform/state/project-name giving up after 3 attempts
I'm at my patience's end. I did try to check whether the chatter in /var/log/gitlab/gitlab-rails/production_json.log delivers anything relevant, and came away no more sure and a little less sane for it.
Is there a sudo pretty-please-with-sugar-on-top-clean-the-fn-lock command that doesn't have any gatekeeping on it?
I ran into the same problem while migrating terraform state files from S3 to GitLab.
I caused the problem because I had a typo in the backend-config unlock_address, and I pressed Ctrl+C while init was still running.
terraform init did not ask me to migrate the state from S3 to GitLab, but the state got locked and force-unlock would not work in any way.
The solution I came up with:
Configure backend.tf to use the previously used lock_address as the unlock address, and re-initialize Terraform.
terraform plan should work fine now.
Reconfigure backend.tf to continue with the state migration: re-initialize the Terraform state URLs to the ones you want by migrating again.
For example, this is the terraform init I used, where the desired address was <TF_State_Name> but I had a typo, <TF_State_Name_B>.
I interrupted it with Ctrl+C:
terraform init \
-backend-config="address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name>" \
-backend-config="lock_address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name>/lock" \
-backend-config="unlock_address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name_B>/lock" \
-backend-config="username=<user>" \
-backend-config="password=<password>" \
-backend-config="lock_method=POST" \
-backend-config="unlock_method=DELETE" \
-backend-config="retry_wait_min=5"
And this is how I reconfigured terraform init in order to bypass the lock:
terraform init \
-backend-config="address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name_B>" \
-backend-config="lock_address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name_B>/lock" \
-backend-config="unlock_address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name_B>/lock" \
-backend-config="username=<user>" \
-backend-config="password=<password>" \
-backend-config="lock_method=POST" \
-backend-config="unlock_method=DELETE" \
-backend-config="retry_wait_min=5"
Finally, you should reconfigure to the desired address.
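That final step would look something like this (a sketch using the same placeholders as the commands above, with all three URLs pointing at the desired state name):

```shell
terraform init \
-backend-config="address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name>" \
-backend-config="lock_address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name>/lock" \
-backend-config="unlock_address=https://<gitlab_url>/api/v4/projects/<ProjectID>/terraform/state/<TF_State_Name>/lock" \
-backend-config="username=<user>" \
-backend-config="password=<password>" \
-backend-config="lock_method=POST" \
-backend-config="unlock_method=DELETE" \
-backend-config="retry_wait_min=5"
```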
Terraform version: 0.12.24
This is really weird because I have used the TF_VAR_ substitution syntax before and it has worked fine.
provider.tf
# Configure the AWS Provider
provider "aws" {
version = "~> 2.0"
region = "ap-southeast-2"
access_key = var.aws_access_key_id
secret_key = var.aws_secret_access_key
}
vars.tf
variable "aws_access_key_id" {
description = "Access Key for AWS IAM User"
}
variable "aws_secret_access_key" {
description = "Secret Access Key for AWS IAM User"
}
variable "terraform_cloud_token" {
description = "Token used to log into Terraform Cloud via the CLI"
}
backend.tf for terraform cloud
terraform {
backend "remote" {
organization = "xx"
workspaces {
name = "xx"
}
}
}
Build logs
---------------
TF_VAR_aws_secret_access_key=***
TF_VAR_aws_access_key_id=***
TF_VAR_terraform_cloud_token=***
---------------
It also fails locally when I try to run this in a local Docker container.
Dockerfile
FROM hashicorp/terraform:0.12.24
COPY . /app
COPY .terraformrc $HOME
ENV TF_VAR_aws_secret_access_key 'XX'
ENV TF_VAR_aws_access_key_id 'XX'
ENV TF_VAR_terraform_cloud_token 'XX'
WORKDIR /app
ENTRYPOINT ["/app/.github/actions/terraform-plan/entrypoint.sh"]
entrypoint.sh
#!/bin/sh -l
# move terraform cloud configuration file to user root as expected
# by the backend resource
mv ./.terraformrc ~/
terraform init
terraform plan
output from docker container run
$ docker run -it tf-test
---------------
TF_VAR_aws_secret_access_key=XX
TF_VAR_aws_access_key_id=XX
TF_VAR_terraform_cloud_token=XX
---------------
Initializing the backend...
Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.56.0...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
To view this run in a browser, visit:
https://app.terraform.io/app/XX/XX/runs/run-XX
Waiting for the plan to start...
Terraform v0.12.24
Configuring remote state backend...
Initializing Terraform configuration...
2020/04/03 01:43:04 [DEBUG] Using modified User-Agent: Terraform/0.12.24 TFC/05d5abc3eb
Error: No value for required variable
on vars.tf line 1:
1: variable "aws_access_key_id" {
The root module input variable "aws_access_key_id" is not set, and has no
default value. Use a -var or -var-file command line argument to provide a
value for this variable.
Error: No value for required variable
on vars.tf line 5:
5: variable "aws_secret_access_key" {
The root module input variable "aws_secret_access_key" is not set, and has no
default value. Use a -var or -var-file command line argument to provide a
value for this variable.
Error: No value for required variable
on vars.tf line 9:
9: variable "terraform_cloud_token" {
The root module input variable "terraform_cloud_token" is not set, and has no
default value. Use a -var or -var-file command line argument to provide a
value for this variable.
Okay... it is confusing because the logs generated in Terraform Cloud's VMs are streamed to your own terminal/run logs.
But this is what I found out: there are two options available to you when you use Terraform Cloud.
1. Use Terraform Cloud's VMs to run your terraform commands (remote execution).
2. Use your own (or your CI/CD platform's) infrastructure to run those terraform commands (local execution).
If you choose the first option (which is, annoyingly, the default), you must set your environment variables within the Terraform Cloud dashboard. This is because all terraform commands for this execution mode run in Terraform Cloud's VMs, and the environment variables in your local environment, for good security reasons, aren't passed through to Terraform Cloud.
If you have the remote option selected, then once you set the variables in the dashboard, it will work as expected.
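Alternatively, if you would rather keep using your own runner's environment variables, you can switch the workspace to local execution. A hedged sketch via the API (the execution-mode attribute and endpoint path are assumptions based on the TFC workspaces API; the curl call is left commented because it needs a real token and workspace):

```shell
# Hypothetical payload switching a workspace to local execution.
cat > execution-mode.json <<'EOF'
{"data": {"type": "workspaces", "attributes": {"execution-mode": "local"}}}
EOF

# curl --header "Authorization: Bearer $TOKEN" \
#      --header "Content-Type: application/vnd.api+json" \
#      --request PATCH \
#      --data @execution-mode.json \
#      https://app.terraform.io/api/v2/organizations/<org>/workspaces/<workspace>
```

The same setting can also be changed by hand in the workspace's settings in the Terraform Cloud UI.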
I have been looking at the Terraform docs and a Udemy course for the answer to this question, but cannot find it. I have a Jenkins pipeline that builds AWS infrastructure with Terraform. This uses a remote backend, which is configured via a
terraform {
backend "s3" {}
}
block. I want to override this for local development so that a local state file is generated with terraform init. I have tried running terraform init -backend=false, but I realize this is not what I want because it doesn't create a local backend either. I have seen terraform init -backend=<file> is an option, but if I use that then I don't know what to put in the file to indicate the default local backend config. I found this article on override files, but it doesn't lead me to believe that this functionality exists in Terraform for this particular use case. I want to make sure I do this in the correct way. How does one override the remote backend config with a default local backend config in Terraform? Thanks.
The Terraform override concept you mentioned does work for this use case.
You can create files whose names end in _override.tf, e.g. <my_existing_file>_override.tf; the elements in the override file take precedence over the originals.
You can use this to override your existing backend config so that you can init a local state file for testing/dev purposes.
I would create a script and alias it to the following:
echo "terraform {" > backend_override.tf
echo " backend \"local\" {" >> backend_override.tf
echo " path = \"./terraform.tfstate\"" >> backend_override.tf
echo " }" >> backend_override.tf
echo "}" >> backend_override.tf
Remember to add *_override.tf to your .gitignore so that you don't accidentally break your CI.