I just started learning terraform. I want to arrange my resources in separate directories like modules e.g.
.
├── alb/
├── networking/
├── variables.tf
└── servers/
When I run the terraform validate command from within any of the directories, there is an error saying that the variable(s) are not declared.
I don't want to keep variables in every directory.
I was wondering if anyone would be able to help.
You don't run terraform validate in each of those directories (unless you're doing this from your workstation only). You run it at the top-level directory, which passes the appropriate variables to the module blocks.
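For illustration, a minimal sketch of what the root configuration might look like; the networking and servers directories come from the layout above, while the vpc_cidr variable and the subnet_id output are just hypothetical placeholders:

variable "vpc_cidr" {
  type = string
}

module "networking" {
  source   = "./networking"
  vpc_cidr = var.vpc_cidr   # the root declares the variable once and passes the value down
}

module "servers" {
  source    = "./servers"
  subnet_id = module.networking.subnet_id   # hypothetical output wired from one module to another
}

Each child module still declares its own inputs in its directory, but the values are only set once here at the root, and terraform validate is run from this top-level directory, which validates the root together with the modules it calls.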
Generally speaking, the best practice is to publish your modules to their own repos (or some other module hosting, e.g. Terraform Enterprise) and set up a pipeline that runs terraform validate as a step/stage/job/whatever. Once it has completed a successful pipeline run, you can merge and tag it.
Then you can rest assured that your calling Terraform can call those modules (hopefully by specific versions), knowing there aren't any errors.
If you're just learning Terraform, then the chances you're doing this on your local workstation seem high - if I were you, I'd do something like validate.sh:
#!/bin/bash
for dir in */; do
  pushd "$dir"
  echo -e "\n\n$dir\n\n"
  terraform validate
  popd
done
echo "*****"
pwd
echo -e "\n"
terraform validate
I want to run the sub-directories together. My layout looks like this:
cand1 - main.tf
cand2 - main.tf
cand3 - main.tf
I want to run them in one step with something like terraform plan/apply and some option, but I don't know the option. How can I run them in one step?
Terraform works with a root directory as the main project location. Depending on your use case, you may want to:
a) rename all main.tf files to anything ending in .tf, for example cand1.tf, cand2.tf, cand3.tf; copy them all into one folder and run terraform in there
b) create a main.tf in the root folder and use cand1, cand2 and cand3 as modules (https://www.terraform.io/language/modules/syntax) - see the sketch after this list
c) check out consolidating tools like Terragrunt that can run multiple terraform projects as one.
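For option (b), a minimal sketch of such a root main.tf, assuming cand1, cand2 and cand3 sit next to it as sub-directories:

module "cand1" {
  source = "./cand1"   # local path to the sub-directory
}

module "cand2" {
  source = "./cand2"
}

module "cand3" {
  source = "./cand3"
}

A single terraform init followed by terraform plan or terraform apply in the root folder then covers all three in one step.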
We are using a private Github and Terraform Cloud for our projects. Everything is able to talk to each other so there is no issue there. However, I'm trying to create modules for a project I started. I was able to make it work as regular terraform files, but when I try to convert to the module system I am having issues with getting the state imported.
We have a separate repository called tf-modules. In this repository, my directory setup:
> root
>> mymodule
>>> lambda.tf
>>> eventbridge.tf
>>> bucket.tf
These files manage the software being deployed in our AWS environment. They are being used across multiple environments for each of our customers (each separated out by environment [qa, dev, prod]).
In my terraform files, I have:
> root
>> CUSTNAME
>>> mymodule
>>>> main.tf
Inside main.tf I have:
module "mymodule" {
source = "git::https://github.com/myprivaterepo/tf-modules.git"
}
In my dev environment, everything is set up so I need to import the state. However, it's not detecting the resources at all. In the .terraform directory, it is downloading the entire repository (the root with the readme.md and all)
I'm fairly new to Terraform. Am I approaching this wrong or misunderstanding?
I am using the latest version of Terraform.
Since there is a sub-directory "mymodule", you should specify the whole path.
module "mymodule" {
source = "git::https://github.com/myprivaterepo/tf-modules.git//mymodule"
}
Refer to module sources - sub directory
Example: git::https://example.com/network.git//modules/vpc
I have a repository with several separate configs which share some modules, and reference those modules using relative paths that look like ../../modules/rabbitmq. The directories are setup like this:
tf/
configs/
thing-1/
thing-2/
modules/
rabbitmq/
cluster/
...
The configs are setup with a remote backend to use TF Cloud for runs and state:
terraform {
backend "remote" {
hostname = "app.terraform.io"
organization = "my-org"
workspaces {
prefix = "config-1-"
}
}
}
Running terraform init works fine. When I try to run terraform plan locally, it gives me an error saying:
Initializing modules...
- rabbitmq in
Error: Unreadable module directory
Unable to evaluate directory symlink: lstat ../../modules: no such file or
directory
...as if the modules directory isn't being uploaded to TF Cloud or something. What gives?
It turns out the problem was (surprise, surprise!) that it was not uploading the modules directory to TF Cloud. This is because neither the config nor the TF Cloud workspace settings contained any indication that this config folder was part of a larger filesystem. The default is to upload just the directory from which you are running terraform (and all of its contents).
To fix this, I had to visit the "Settings > General" page for the given workspace in Terraform Cloud, and change the Terraform Working Directory setting to specify the path of the config, relative to the relevant root directory - in this case: tf/configs/config-1
After that, running terraform plan displays a message indicating which parent directory it will upload in order to convey the entire context relevant to the workspace. 🎉
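If you'd rather keep that setting in code than click through the UI, the same thing can be expressed with the hashicorp/tfe provider; this is only a sketch, and the workspace and organization names are placeholders:

provider "tfe" {
  # assumes credentials via the TFE_TOKEN environment variable or terraform login
}

resource "tfe_workspace" "config_1" {
  name              = "config-1-production"    # placeholder workspace name
  organization      = "my-org"                 # placeholder organization
  working_directory = "tf/configs/config-1"    # path of the config relative to the repo root
}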
An update to #mlsy's answer with a screenshot. Using Terraform Cloud with a free account, resolving the module source using the local file system.
terraform version
Terraform v1.1.7
on linux_amd64
Here is the thing that worked for me. I used required_version = ">= 0.11" and then put all the tf files that contain provider and module blocks in a subfolder, keeping the version.tf which has the required providers at the root level. I happened to use the same folder path where terraform.exe is present. Then I built the project instead of executing at the main.tf level or executing without building. It downloaded all providers and modules for me. I am yet to run it on GCP.
(screenshot: folder path on Windows)
(screenshot: IntelliJ structure)
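For reference, a root-level version.tf along those lines might look like this; the google provider entry is only an assumed example, since the post mentions GCP (note that a required_providers block with a source attribute needs Terraform 0.13 or later):

terraform {
  required_version = ">= 0.11"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 4.0"   # assumed version constraint
    }
  }
}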
Use this: source = "mhmdio/rabbitmq/aws"
I faced this problem when I started. Go to the HashiCorp Terraform site and search for the module/provider block; the code snippets there are written with the full path. Once you have the path, run terraform get -update and terraform init -upgrade,
which will download the modules and providers locally.
Note: on Terraform Cloud the modules are in the repo, but you still need to give the path if the repo path is not mapped by default.
I have a similar issue, which I think others might encounter.
In my project the application is hosted inside folder1/folder2. However, when I ran terraform plan inside folder2 there was an issue, because it tried to load every folder from the root of the repository.
% terraform plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
The remote workspace is configured to work with configuration at
infrastructure/prod relative to the target repository.
Terraform will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at /Users/yokulguy/Development/arepository/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
/Users/yokulguy/Development/arepository
╷
│ Error: Failed to upload configuration files: Failed to get symbolic link destination for "/Users/yokulguy/Development/arepository/docker/mysql/mysql.sock": lstat /private/var/run/mysqld: no such file or directory
│
│ The configured "remote" backend encountered an unexpected error. Sometimes this is caused by network connection problems, in which case you could retry the command. If the issue persists please open a support
│ ticket to get help resolving the problem.
╵
The solution is that sometimes I just need to remove the "bad" folder, which is docker/mysql, and then rerun terraform plan and it works.
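Alternatively (an assumption based on the hint in the error output above), you can keep the folder and exclude it from the upload with a .terraformignore file at the repository root, which uses .gitignore-style rules:

# .terraformignore at the root of the repository
# exclude the directory containing the dead mysql.sock symlink from the remote upload
docker/mysql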
I have been looking at the terraform docs and a udemy course for the answer to this question, but cannot find the answer. I have a jenkins pipeline that is building AWS infrastructure with terraform. This is using a remote backend which is configured via a
terraform {
backend "s3" {}
}
block. I want to override this for local development so that a local state file is generated with terraform init. I have tried running terraform init -backend=false, but I realize this is not what I want because it doesn't create a local backend either. I have seen that terraform init -backend-config=<file> is an option, but if I use that then I don't know what to put in the file to indicate a default local backend config. I found this article on override files, but it doesn't lead me to believe that this functionality exists in Terraform for this particular use case. I want to make sure I do this in the correct way. How does one override the remote backend config with a default local backend config in Terraform? Thanks.
The Terraform override concept you mentioned does work for this use case.
You can create files with _override.tf added to the end of the name, e.g. <my_existing_file>_override.tf; elements of the override file will take precedence over the original.
You can use this to override your existing backend config so that you can init a local state file for testing/dev purposes.
I would create a script (or shell alias) along the lines of the following:
echo "terraform {" > backend_override.tf
echo " backend \"local\" {" >> backend_override.tf
echo " path = \"./\"" >> backend_override.tf
echo " }" >> backend_override.tf
echo "}" >> backend_override.tf
Remember to add *_override.tf to your .gitignore files so that you don't accidentally break your CI.
I've been using Terraform's state environments (soon to be renamed as workspaces) as part of a CI system (Gitlab CI) to spin up dynamic environments for each branch for tests to run against.
This seems to be working fine, but as part of the tear-down of the environment after the branch is deleted I am also trying to use terraform env delete [ENVIRONMENT NAME]. When run locally this is fine, but my CI system is running in Docker and so has a clean workspace between creating the environment and the later build stage that destroys it. In this case it can't seem to see the environment.
If I try to delete it I see this error:
Environment "restrict-dev-websites-internally" doesn't exist!
You can create this environment with the "new" option.
terraform env list also doesn't show the environment.
I've also noticed that I'm unable to select it despite seeing it in S3 (where my remote state is stored). If I create a new environment with the same name, the environment from my remote state is used (it doesn't try to create another set of resources).
On top of this, when I'm using an environment created by the CI system I notice that sometimes I have an environment selected that terraform env list doesn't show:
$ terraform env list
default
$ cat .terraform/environment
[ENVIRONMENT NAME]
$ terraform env list
default
Note the missing * against the selected environment and that my environment isn't listed as would be expected by the example in the docs:
$ terraform env list
default
* development
mitchellh-test
I'm unsure as to how the state environments are meant to be working so I may have missed a trick here which is causing this odd corruption when working in Docker.
For completeness I'm managing the environments using some wrapper scripts:
env.sh
#!/bin/sh
set -e
if [ "$#" -ne 2 ]; then
echo "Usage: ./env.sh terraform_target env_name"
echo ""
echo "Example: ./env.sh test test-branch"
fi
TERRAFORM_TARGET_LOCATION=${1}
TERRAFORM_ENV=${2}
REPO_BASE=`git rev-parse --show-toplevel`
TERRAFORM_BASE="${REPO_BASE}"/terraform
. "${TERRAFORM_BASE}"/remote.sh "${TERRAFORM_BASE}"/"${TERRAFORM_TARGET_LOCATION}"
if ! terraform env select "${TERRAFORM_ENV}" 2> /dev/null; then
  terraform env new "${TERRAFORM_ENV}"
fi
env-delete.sh
#!/bin/sh
set -e
if [ "$#" -ne 2 ]; then
echo "Usage: ./env.sh terraform_target env_name"
echo ""
echo "Example: ./env.sh test test-branch"
fi
TERRAFORM_TARGET_LOCATION=${1}
TERRAFORM_ENV=${2}
REPO_BASE=`git rev-parse --show-toplevel`
TERRAFORM_BASE="${REPO_BASE}"/terraform
. "${TERRAFORM_BASE}"/remote.sh "${TERRAFORM_BASE}"/"${TERRAFORM_TARGET_LOCATION}"
if terraform env select "${TERRAFORM_ENV}" 2> /dev/null; then
  terraform env select default
  terraform env delete "${TERRAFORM_ENV}"
fi
The remote.sh script runs a terraform init with dynamic state file locations depending on the project and path in the project using S3 as a backend.
remote.sh
#!/bin/sh
set -e
terraform --version
TERRAFORM_TARGET_LOCATION="${1}"
cd "${TERRAFORM_TARGET_LOCATION}"
REPO_NAME="$(basename "`git config --get remote.origin.url`" .git)"
STATE_BUCKET="<BUCKET_NAME>"
STATE_KEY="$(git rev-parse --show-prefix | cut -d"/" -f2-)"
STATE_FILE="terraform.tfstate"
terraform init -backend-config="bucket=${STATE_BUCKET}" \
-backend-config="key=${STATE_KEY}/${STATE_FILE}"
terraform get -update=true
When running things locally I have very wide permissions which include full access to all of S3. My Gitlab CI instances use the following IAM privileges attached to an instance profile:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*",
        "s3:PutObject*"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>",
        "arn:aws:s3:::<BUCKET_NAME>/*"
      ]
    },
    {
      "Sid": "2",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject*"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>/env:*"
      ]
    }
  ]
}
For clarity, my builds can see and use the remote state for the environment fine, but they are forced to create the environment over and over and are then unable to delete the environment after destroying everything in the state file, because they can't select the environment.
I could always create the environment before deleting it so that I have it available in terraform env list but the point is that I'm not sure why the environment is not in the list when the environment was created on another machine or in another container.
You need the state to destroy the environment.
From the documentation as to why they need the state:
Terraform typically uses the configuration to determine dependency
order. However, when you delete a resource from a Terraform
configuration, Terraform must know how to delete that resource.
Terraform can see that a mapping exists for a resource not in your
configuration and plan to destroy. However, since the configuration no
longer exists, it no longer knows the proper destruction order.
You might also try to first import it which might be doable if there are not many instances involved.
I'd suggest considering running a consul container (or three, to make it stable; they are really small) to store the state in a different remote store from your default S3 store. This will make sure your CI environments do not show up in the remote store used by others. Consul has a web GUI that will let you clean up the K/V pairs stored there if it is ever needed. You can also interact with it through its API using curl or Ansible.
Alternatively, you can make the consul server part of the dev environment you set up, store the state there and read from it when destroying. In that case you would still keep everything else clean. I'd personally do it like this.
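For what it's worth, a minimal sketch of pointing a config at such a consul backend; the address and path here are assumptions, and backend blocks can't interpolate variables, so the path has to be static or passed via -backend-config:

terraform {
  backend "consul" {
    address = "consul.service.internal:8500"   # placeholder address of the consul container
    scheme  = "http"
    path    = "terraform/ci-environments"      # placeholder K/V path for the CI state
  }
}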
If a dev starts an environment on his local machine and you want to keep your remote state clean, he should be using local state. You can also use the solution above for the dev's local machine and have a consul server inside his local setup. Good for him to create/destroy, and you will keep your remote state clean, as you say.
As a disclaimer, I only started recently with Terraform, but I don't quite see the advantage of the environments. I'm using a git repo with subdirs for each environment. That way they are truly independent from each other, and I can set up dev locally and our staging/prod on our consul cluster protected with ACLs.
Have you tried the Volumes option in the runner configuration?
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/configuration/advanced-configuration.md#the-runners-docker-section
It could save the Terraform state between builds.