I'm new to Terragrunt, and I've run into an issue with how it handles caching.
This is what my file structure looks like.
├── monitor
│   └── files
│       └── graph
│           └── server
│               └── default
│                   └── foo.json
└── terraform
    ├── env
    │   └── stage
    │       └── cluster
    │           ├── provider.tf
    │           └── terragrunt.hcl
    ├── moduleConfig
    │   └── cluster
    │       ├── backend.tf
    │       ├── random.tf
    │       ├── locals.tf
    │       ├── outputs.tf
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    └── terragrunt.hcl
But when I run a terragrunt plan and look into the .terragrunt-cache folder, this is what I see.
.terragrunt-cache/
└── KdPWtxpAXZdCe4otk2N9TY1tuQU
    └── cwMVo-pYTWr47TeiHN8aORnD8g4
        ├── env
        │   └── stage
        │       └── cluster
        │           ├── provider.tf
        │           └── terragrunt.hcl
        ├── moduleConfig
        │   └── cluster
        │       ├── backend.tf
        │       ├── random.tf
        │       ├── locals.tf
        │       ├── outputs.tf
        │       ├── main.tf
        │       ├── outputs.tf
        │       ├── provider.tf
        │       ├── terragrunt.hcl
        │       └── variables.tf
        └── terragrunt.hcl
This results in an undesired plan output, because there are resources in the monitor directory that I need and they never make it into the cache.
For what it's worth, I'm running my terragrunt plan from inside the cluster directory:
├── env
│   └── stage
│       └── cluster
That might explain the issue.
Is there a way to get Terragrunt to include the monitor directory as well, so that the cache contains the full tree with all the files I need?
Thanks.
#######################################################
Updated to include the path and source blocks:

include {
  path = find_in_parent_folders()
}

terraform {
  source = "${path_relative_from_include()}//moduleConfig/cluster"
}
#######################################################
If you want the monitor directory to get pulled in as well, you could take the terragrunt.hcl file out of the root of the terraform directory and move it up one level, so that it sits alongside the monitor and terraform directories.
And then change
terraform {
  source = "${path_relative_from_include()}//moduleConfig/cluster"
}
to read
terraform {
  source = "${path_relative_from_include()}//terraform/moduleConfig/cluster"
}
This should get the entire structure into the .terragrunt-cache directory.
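Here is a sketch (not your exact config) of the child terragrunt.hcl after that move, to show why it works: everything before the double slash in source is what Terragrunt copies into .terragrunt-cache, and everything after it is the subfolder Terraform actually runs in.

# Hypothetical terraform/env/stage/cluster/terragrunt.hcl after relocating the root file.
include {
  # find_in_parent_folders() now resolves to the terragrunt.hcl sitting
  # next to the monitor and terraform directories.
  path = find_in_parent_folders()
}

terraform {
  # The part before "//" is copied into .terragrunt-cache (so monitor/ comes along);
  # the part after "//" is the working directory Terraform runs in.
  source = "${path_relative_from_include()}//terraform/moduleConfig/cluster"
}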
This might make a good read if you're curious how it works:
https://terragrunt.gruntwork.io/docs/reference/built-in-functions/#path_relative_from_include
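For completeness, nothing special is needed in the relocated root terragrunt.hcl itself; whatever it already contains simply moves up a level with it. A purely hypothetical sketch, in case it helps picture the layout (the backend values are placeholders, not taken from the question):

# Root terragrunt.hcl, now living next to monitor/ and terraform/.
remote_state {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"                                # placeholder
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"                                         # placeholder
  }
}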
Related
Say I have a folder structure that looks like this:
.
├── CODEOWNERS
├── Makefile
├── README.rst
├── __pycache__
│   └── conftest.cpython-37-pytest-6.2.2.pyc
├── airflow
│   ├── __pycache__
│   │   └── __init__.cpython-37.pyc
│   ├── dev
│   │   └── dags
│   └── <some_file_I_need>
How do I import the file I need from the local airflow package (not the third-party dependency named airflow)? Unfortunately I have a dependency called airflow, and that is what gets imported when I do import airflow.dev..., which errors out.
I have the following project structure:
.
├── database
│   ├── db_adapter.py
│   ├── db_layer
│   │   ├── __init__.py
│   │   └── mysql_adapter.py
│   ├── __init__.py
│   └── scripts
│       └── schema.sql
│
└── workflow
    ├── dags
    │   ├── dss_controller.py
    │   ├── __init__.py
    │
    ├── __init__.py
    ├── plugins
I want to import the db_adapter.py module inside the dss_controller module, but when I tried to do it, the following error was shown:
ModuleNotFoundError: No module named 'database'
How can I import these modules correctly?
I have made a Terraform script that can successfully spin up 2 DigitalOcean droplets as nodes and install a Kubernetes master on one and a worker on the other.
For this, it uses a bash shell environment variable that is defined as:
export DO_ACCESS_TOKEN="..."
export TF_VAR_DO_ACCESS_TOKEN=$DO_ACCESS_TOKEN
It can then be used in the script:
provider "digitalocean" {
  version = "~> 1.0"
  token   = "${var.DO_ACCESS_TOKEN}"
}
Now, having all these files in one directory is getting a bit messy, so I'm trying to implement this as modules.
I thus have a provider module offering access to my DigitalOcean account, a droplet module spinning up a droplet with a given name, a Kubernetes master module, and a Kubernetes worker module.
I can run the terraform init command.
But when I run the terraform plan command, it asks me for the provider token (which it rightly did not do before I implemented modules):
$ terraform plan
provider.digitalocean.token
The token key for API operations.
Enter a value:
It seems that it cannot find the token defined in the bash shell environment.
I have the following modules:
.
├── digitalocean
│   ├── droplet
│   │   ├── create-ssh-key-certificate.sh
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   └── provider
│       ├── main.tf
│       └── vars.tf
└── kubernetes
    ├── master
    │   ├── configure-cluster.sh
    │   ├── configure-user.sh
    │   ├── create-namespace.sh
    │   ├── create-role-binding-deployment-manager.yml
    │   ├── create-role-deployment-manager.yml
    │   ├── kubernetes-bootstrap.sh
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── vars.tf
    └── worker
        ├── kubernetes-bootstrap.sh
        ├── main.tf
        ├── outputs.tf
        └── vars.tf
In my project directory, I have a vars.tf file:
$ cat vars.tf
variable "DO_ACCESS_TOKEN" {}
variable "SSH_PUBLIC_KEY" {}
variable "SSH_PRIVATE_KEY" {}
variable "SSH_FINGERPRINT" {}
and I have a provider.tf file:
$ cat provider.tf
module "digitalocean" {
  source          = "/home/stephane/dev/terraform/modules/digitalocean/provider"
  DO_ACCESS_TOKEN = "${var.DO_ACCESS_TOKEN}"
}
And it calls the digitalocean provider module defined as:
$ cat digitalocean/provider/vars.tf
variable "DO_ACCESS_TOKEN" {}
$ cat digitalocean/provider/main.tf
provider "digitalocean" {
  version = "~> 1.0"
  token   = "${var.DO_ACCESS_TOKEN}"
}
UPDATE: The provided solution led me to organize my project like this:
.
├── env
│   ├── dev
│   │   ├── backend.tf -> /home/stephane/dev/terraform/utils/backend.tf
│   │   ├── digital-ocean.tf -> /home/stephane/dev/terraform/providers/digital-ocean.tf
│   │   ├── kubernetes-master.tf -> /home/stephane/dev/terraform/stacks/kubernetes-master.tf
│   │   ├── kubernetes-worker-1.tf -> /home/stephane/dev/terraform/stacks/kubernetes-worker-1.tf
│   │   ├── outputs.tf -> /home/stephane/dev/terraform/stacks/outputs.tf
│   │   ├── terraform.tfplan
│   │   ├── terraform.tfstate
│   │   ├── terraform.tfstate.backup
│   │   ├── terraform.tfvars
│   │   └── vars.tf -> /home/stephane/dev/terraform/utils/vars.tf
│   ├── production
│   └── staging
└── README.md
With a custom library of providers, stacks and modules, laid out like this:
.
├── modules
│   ├── digitalocean
│   │   └── droplet
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       ├── scripts
│   │       │   └── create-ssh-key-and-csr.sh
│   │       └── vars.tf
│   └── kubernetes
│       ├── master
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   ├── scripts
│       │   │   ├── configure-cluster.sh
│       │   │   ├── configure-user.sh
│       │   │   ├── create-namespace.sh
│       │   │   ├── create-role-binding-deployment-manager.yml
│       │   │   ├── create-role-deployment-manager.yml
│       │   │   ├── kubernetes-bootstrap.sh
│       │   │   └── sign-ssh-csr.sh
│       │   └── vars.tf
│       └── worker
│           ├── main.tf
│           ├── outputs.tf
│           ├── scripts
│           │   └── kubernetes-bootstrap.sh -> /home/stephane/dev/terraform/modules/kubernetes/master/scripts/kubernetes-bootstrap.sh
│           └── vars.tf
├── providers
│   └── digital-ocean.tf
├── stacks
│   ├── kubernetes-master.tf
│   ├── kubernetes-worker-1.tf
│   └── outputs.tf
└── utils
    ├── backend.tf
    └── vars.tf
The simplest option you have here is to not define the provider at all and just use the DIGITALOCEAN_TOKEN environment variable, as mentioned in the DigitalOcean provider docs.
This will always use the latest version of the DigitalOcean provider, but otherwise it is functionally the same as what you're currently doing.
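A minimal sketch of that first option, assuming the token is exported in the shell before running Terraform (the droplet resource below is purely illustrative, not taken from the question):

# No provider "digitalocean" block anywhere. Export the token first:
#
#   export DIGITALOCEAN_TOKEN="..."
#
# Terraform infers the provider from the resource types, and the provider
# reads the token from the DIGITALOCEAN_TOKEN environment variable.
resource "digitalocean_droplet" "node" {
  name   = "k8s-master"
  image  = "ubuntu-18-04-x64"
  region = "ams3"
  size   = "s-1vcpu-2gb"
}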
However, if you do want to define the provider block so that you can pin the provider version, define a partial state configuration, or set the required Terraform version, then you need to make sure the provider-defining files are either in the directory you are applying from or in a sourced module. If you use partial state configuration, they must be in the applying directory rather than in a module, because state configuration happens before modules are fetched.
I normally achieve this by simply symlinking my provider file everywhere that I want to apply my Terraform code (so everywhere that isn't just a module).
As an example you might have a directory structure that looks something like this:
.
├── modules
│   └── kubernetes
│       ├── master
│       │   ├── main.tf
│       │   ├── output.tf
│       │   └── variables.tf
│       └── worker
│           ├── main.tf
│           ├── output.tf
│           └── variables.tf
├── production
│   ├── digital-ocean.tf -> ../providers/digital-ocean.tf
│   ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
│   ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
│   └── terraform.tfvars
├── providers
│   └── digital-ocean.tf
├── stacks
│   ├── kubernetes-master.tf
│   └── kubernetes-worker.tf
└── staging
    ├── digital-ocean.tf -> ../providers/digital-ocean.tf
    ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
    ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
    └── terraform.tfvars
This layout has two "locations" where you would perform Terraform actions (e.g. plan/apply): staging and production (given as an example of keeping things as similar as possible with slight variations between environments). These directories contain only symlinked files, other than the terraform.tfvars file, which allows you to vary only a few constrained things, keeping your staging and production environments the same.
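For example, each environment's terraform.tfvars might differ in only a handful of values (the variable names here are hypothetical, just to show the idea):

# staging/terraform.tfvars (hypothetical variable names)
droplet_size = "s-1vcpu-2gb"
worker_count = 1

# production/terraform.tfvars might instead set:
# droplet_size = "s-2vcpu-4gb"
# worker_count = 3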
The symlinked provider file would contain any provider-specific configuration (for AWS this would normally include the region things should be created in; for DigitalOcean it is probably just pinning the version of the provider to use). It could also contain a partial Terraform state configuration, to minimise what you need to pass when running terraform init, or simply set the required Terraform version. An example might look something like this:
provider "digitalocean" {
  version = "~> 1.0"
}

terraform {
  required_version = "=0.11.10"

  backend "s3" {
    region         = "eu-west-1"
    encrypt        = true
    kms_key_id     = "alias/terraform-state"
    dynamodb_table = "terraform-locks"
  }
}
I downloaded mtd-utils 2.0 and I want to build it for a specified deployment path. If I launch:
./configure --bindir .../mtd-utils-81049e5/deploy/usr/sbin
and then I do:
make
I will get the output in the folder where I launched make. I want to have the executable files somewhere like bla/mtd-utils-2.0.../deploy/usr/sbin...
IIUC, you can do this like so:
./configure --prefix=/tmp/mtd-utils
make
make install
Finally, you get this:
$ tree /tmp/mtd-utils
/tmp/mtd-utils
├── sbin
│ ├── doc_loadbios
│ ├── docfdisk
│ ├── flash_erase
│ ├── flash_eraseall
│ ├── flash_lock
│ ├── flash_otp_dump
│ ├── flash_otp_info
│ ├── flash_otp_lock
│ ├── flash_otp_write
│ ├── flash_unlock
│ ├── flashcp
│ ├── ftl_check
│ ├── ftl_format
│ ├── jffs2dump
│ ├── jffs2reader
│ ├── mkfs.jffs2
│ ├── mkfs.ubifs
│ ├── mtd_debug
│ ├── mtdinfo
│ ├── mtdpart
│ ├── nanddump
│ ├── nandtest
│ ├── nandwrite
│ ├── nftl_format
│ ├── nftldump
│ ├── recv_image
│ ├── rfddump
│ ├── rfdformat
│ ├── serve_image
│ ├── sumtool
│ ├── ubiattach
│ ├── ubiblock
│ ├── ubicrc32
│ ├── ubidetach
│ ├── ubiformat
│ ├── ubimkvol
│ ├── ubinfo
│ ├── ubinize
│ ├── ubirename
│ ├── ubirmvol
│ ├── ubirsvol
│ └── ubiupdatevol
└── share
└── man
├── man1
│ └── mkfs.jffs2.1
└── man8
└── ubinize.8
5 directories, 44 files
I'm creating a Puppet configuration structure:
puppet
├── data
│   └── common.yaml
├── hiera.yaml
├── manifests
│   └── site.pp
├── modules
│   ├── accessories
│   │   └── manifests
│   │       └── init.pp
│   ├── nginx
│   │   ├── manifests
│   │   │   ├── config.pp
│   │   │   ├── init.pp
│   │   │   └── install.pp
│   │   └── templates
│   │       └── vhost_site.erb
│   ├── php
│   │   ├── manifests
│   │   │   ├── config.pp
│   │   │   ├── init.pp
│   │   │   └── install.pp
│   │   └── templates
│   │       ├── php.ini.erb
│   │       └── www.conf.erb
│   └── site
│       └── manifests
│           ├── database.pp
│           ├── init.pp
│           └── webserver.pp
└── Puppetfile
Now I have just one server, so I sometimes update it manually by running:
sudo puppet apply --hiera_config=hiera.yaml --modulepath=./modules manifests/site.pp
At this point I need to use some external modules, so for example I added a Puppetfile with the following lines:
forge "http://forge.puppetlabs.com"
mod 'puppetlabs-mysql', '3.10.0'
...and of course it didn't work.
I tried to find something to configure this in the command settings for 'apply' (Configuration Reference), but without success.
Is it possible to auto-configure Puppet in standalone mode using a Puppetfile, or is that only possible with 'puppet module install'?
Puppetfiles are not interpreted or read by the puppet server or client code. They're there to help other tools effectively deploy the proper puppet modules.
In your case, in order to take advantage of the Puppetfile you've written, you would need to install and configure r10k. The Puppet Enterprise documentation covers the basics, and the r10k GitHub page is another great resource.
Once installed and configured, r10k will read your Puppetfile and download and install the defined entries. In your case, it would install version 3.10.0 of puppetlabs-mysql into your modules directory, and then you can run Puppet and take advantage of the newly installed modules.
In summary, Puppetfiles are not used by the client, they're used by code deployment software (r10k) to download and build the proper modules for the puppet server or agent to consume. Your options are to configure r10k to provision the modules as defined in the Puppetfile, or download the modules manually and eliminate the need for the Puppetfile.