How to pass a token environment variable into a provider module? - terraform

I have made a Terraform script that can successfully spin up 2 DigitalOcean droplets as nodes and install a Kubernetes master on one and a worker on the other.
For this, it uses a bash shell environment variable that is defined as:
export DO_ACCESS_TOKEN="..."
export TF_VAR_DO_ACCESS_TOKEN=$DO_ACCESS_TOKEN
It can then be used in the script:
provider "digitalocean" {
  version = "~> 1.0"
  token   = "${var.DO_ACCESS_TOKEN}"
}
Now, having all these files in one directory is getting a bit messy. So I'm trying to implement this as modules.
I thus have a provider module offering access to my DigitalOcean account, a droplet module spinning up a droplet with a given name, a Kubernetes master module, and a Kubernetes worker module.
I can run the terraform init command.
But when running the terraform plan command, it asks me for the provider token (which it rightfully did not do before I implemented modules):
$ terraform plan
provider.digitalocean.token
  The token key for API operations.

  Enter a value:
It seems that it cannot find the token defined in the bash shell environment.
I have the following modules:
.
├── digitalocean
│   ├── droplet
│   │   ├── create-ssh-key-certificate.sh
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   └── provider
│       ├── main.tf
│       └── vars.tf
└── kubernetes
    ├── master
    │   ├── configure-cluster.sh
    │   ├── configure-user.sh
    │   ├── create-namespace.sh
    │   ├── create-role-binding-deployment-manager.yml
    │   ├── create-role-deployment-manager.yml
    │   ├── kubernetes-bootstrap.sh
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── vars.tf
    └── worker
        ├── kubernetes-bootstrap.sh
        ├── main.tf
        ├── outputs.tf
        └── vars.tf
In my project directory, I have a vars.tf file:
$ cat vars.tf
variable "DO_ACCESS_TOKEN" {}
variable "SSH_PUBLIC_KEY" {}
variable "SSH_PRIVATE_KEY" {}
variable "SSH_FINGERPRINT" {}
and I have a provider.tf file:
$ cat provider.tf
module "digitalocean" {
  source          = "/home/stephane/dev/terraform/modules/digitalocean/provider"
  DO_ACCESS_TOKEN = "${var.DO_ACCESS_TOKEN}"
}
And it calls the digitalocean provider module defined as:
$ cat digitalocean/provider/vars.tf
variable "DO_ACCESS_TOKEN" {}
$ cat digitalocean/provider/main.tf
provider "digitalocean" {
  version = "~> 1.0"
  token   = "${var.DO_ACCESS_TOKEN}"
}
UPDATE: The provided solution led me to organize my project like:
.
├── env
│   ├── dev
│   │   ├── backend.tf -> /home/stephane/dev/terraform/utils/backend.tf
│   │   ├── digital-ocean.tf -> /home/stephane/dev/terraform/providers/digital-ocean.tf
│   │   ├── kubernetes-master.tf -> /home/stephane/dev/terraform/stacks/kubernetes-master.tf
│   │   ├── kubernetes-worker-1.tf -> /home/stephane/dev/terraform/stacks/kubernetes-worker-1.tf
│   │   ├── outputs.tf -> /home/stephane/dev/terraform/stacks/outputs.tf
│   │   ├── terraform.tfplan
│   │   ├── terraform.tfstate
│   │   ├── terraform.tfstate.backup
│   │   ├── terraform.tfvars
│   │   └── vars.tf -> /home/stephane/dev/terraform/utils/vars.tf
│   ├── production
│   └── staging
└── README.md
With a custom library of providers, stacks and modules, layered like:
.
├── modules
│   ├── digitalocean
│   │   └── droplet
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       ├── scripts
│   │       │   └── create-ssh-key-and-csr.sh
│   │       └── vars.tf
│   └── kubernetes
│       ├── master
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   ├── scripts
│       │   │   ├── configure-cluster.sh
│       │   │   ├── configure-user.sh
│       │   │   ├── create-namespace.sh
│       │   │   ├── create-role-binding-deployment-manager.yml
│       │   │   ├── create-role-deployment-manager.yml
│       │   │   ├── kubernetes-bootstrap.sh
│       │   │   └── sign-ssh-csr.sh
│       │   └── vars.tf
│       └── worker
│           ├── main.tf
│           ├── outputs.tf
│           ├── scripts
│           │   └── kubernetes-bootstrap.sh -> /home/stephane/dev/terraform/modules/kubernetes/master/scripts/kubernetes-bootstrap.sh
│           └── vars.tf
├── providers
│   └── digital-ocean.tf
├── stacks
│   ├── kubernetes-master.tf
│   ├── kubernetes-worker-1.tf
│   └── outputs.tf
└── utils
    ├── backend.tf
    └── vars.tf

The simplest option here is to not define the provider at all and instead use the DIGITALOCEAN_TOKEN environment variable, as mentioned in the DigitalOcean provider docs.
This will always use the latest version of the Digital Ocean provider but otherwise will be functionally the same as what you're currently doing.
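As a minimal sketch (the token value is a placeholder), the whole setup then reduces to exporting one variable before running Terraform:

```shell
# DIGITALOCEAN_TOKEN is read by the provider itself, so no provider block
# and no TF_VAR_ indirection are needed (token value here is a placeholder)
export DIGITALOCEAN_TOKEN="example-token"
echo "$DIGITALOCEAN_TOKEN"
```

With this set, `terraform plan` no longer prompts for `provider.digitalocean.token`.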
However, if you do want to define the provider block (so you can pin the provider version, define a partial state configuration, or set the required Terraform version), you just need to make sure the provider-defining files are either in the directory you are applying from or in a sourced module. If you're using partial state configuration, they must be in the directory itself, not in a module, because state configuration happens before modules are fetched.
I normally achieve this by simply symlinking my provider file everywhere that I want to apply my Terraform code (so everywhere that isn't just a module).
As an example you might have a directory structure that looks something like this:
.
├── modules
│   └── kubernetes
│       ├── master
│       │   ├── main.tf
│       │   ├── output.tf
│       │   └── variables.tf
│       └── worker
│           ├── main.tf
│           ├── output.tf
│           └── variables.tf
├── production
│   ├── digital-ocean.tf -> ../providers/digital-ocean.tf
│   ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
│   ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
│   └── terraform.tfvars
├── providers
│   └── digital-ocean.tf
├── stacks
│   ├── kubernetes-master.tf
│   └── kubernetes-worker.tf
└── staging
    ├── digital-ocean.tf -> ../providers/digital-ocean.tf
    ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
    ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
    └── terraform.tfvars
This layout has two "locations" where you would perform Terraform actions (e.g. plan/apply): staging and production (given as an example of keeping things as similar as possible, with slight variations between environments). These directories contain only symlinked files other than the terraform.tfvars file, which lets you vary a few constrained things while keeping your staging and production environments the same.
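A minimal sketch of wiring up one of these environment directories with symlinks (file names taken from the example layout above):

```shell
# create the shared source files and one environment directory
mkdir -p providers stacks staging
touch providers/digital-ocean.tf stacks/kubernetes-master.tf

# symlink the shared files into the environment; paths are relative to staging/
ln -s ../providers/digital-ocean.tf staging/digital-ocean.tf
ln -s ../stacks/kubernetes-master.tf staging/kubernetes-master.tf
ls -l staging
```

You would then run `terraform init`/`plan`/`apply` from inside `staging/`.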
The symlinked provider file would contain any provider-specific configuration (for AWS this would normally include the region things should be created in; for DigitalOcean it is probably just pinning the version of the provider). It could also contain a partial Terraform state configuration, to minimise what you need to pass when running terraform init, and the required Terraform version. An example might look something like this:
provider "digitalocean" {
  version = "~> 1.0"
}

terraform {
  required_version = "=0.11.10"

  backend "s3" {
    region         = "eu-west-1"
    encrypt        = true
    kms_key_id     = "alias/terraform-state"
    dynamodb_table = "terraform-locks"
  }
}

Related

How to get Terragrunt to include top level directories

I'm new to Terragrunt, and I've come across a bit of a situation with how it carries out caching.
This is what my file structure looks like.
├── monitor
│   └── files
│       └── graph
│           └── server
│               └── default
│                   └── foo.json
└── terraform
    ├── env
    │   └── stage
    │       └── cluster
    │           ├── provider.tf
    │           └── terragrunt.hcl
    ├── moduleConfig
    │   └── cluster
    │       ├── backend.tf
    │       ├── random.tf
    │       ├── locals.tf
    │       ├── outputs.tf
    │       ├── main.tf
    │       └── variables.tf
    └── terragrunt.hcl
But when I run a terragrunt plan and look into the .terragrunt-cache folder, this is what I see.
.terragrunt-cache/
└── KdPWtxpAXZdCe4otk2N9TY1tuQU
    └── cwMVo-pYTWr47TeiHN8aORnD8g4
        ├── env
        │   └── stage
        │       └── cluster
        │           ├── provider.tf
        │           └── terragrunt.hcl
        ├── moduleConfig
        │   └── cluster
        │       ├── backend.tf
        │       ├── random.tf
        │       ├── locals.tf
        │       ├── outputs.tf
        │       ├── main.tf
        │       ├── provider.tf
        │       ├── terragrunt.hcl
        │       └── variables.tf
        └── terragrunt.hcl
This results in an undesired plan output, as there are resources in the monitor directory that I need.
This being said, I'm running my terragrunt plan from inside the cluster directory.
├── env
│   └── stage
│       └── cluster
which might explain the issue.
Is there a way to get Terragrunt to include the monitor directory as well, so that the cache contains the full tree with all the files I need?
Thanks.
#######################################################
Updated to include path and source blocks
include {
  path = find_in_parent_folders()
}

terraform {
  source = "${path_relative_from_include()}//moduleConfig/cluster"
}
#######################################################
If you want the monitor directory to get pulled in as well, you might want to take the terragrunt.hcl file out of the root of the terraform directory and place it at the same level as the monitor and terraform directories.
And then change
terraform {
  source = "${path_relative_from_include()}//moduleConfig/cluster"
}
to read
terraform {
  source = "${path_relative_from_include()}//terraform/moduleConfig/cluster"
}
This should get the entire structure into the .terragrunt-cache directory.
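Taken together, the child configuration might look like this sketch (paths assumed from the question, with the root terragrunt.hcl now sitting beside monitor and terraform):

```hcl
# env/stage/cluster/terragrunt.hcl (sketch)
include {
  path = find_in_parent_folders()
}

terraform {
  # everything above the double slash (the new root, including monitor/) is
  # copied into .terragrunt-cache; the part after it is the module path
  source = "${path_relative_from_include()}//terraform/moduleConfig/cluster"
}
```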
This might make a good read if you're curious to see how it works.
https://terragrunt.gruntwork.io/docs/reference/built-in-functions/#path_relative_from_include

Create localized django app and use the localization from another app

I have the following problem:
I created a Django app (app1) and then installed it in another one (app2). Now I'm trying to do the internationalization of the site, but I want to be able to use the installed app's translations, and I cannot even compile them.
Some useful information:
APP 1
.
├── MANIFEST.in
├── app1
│   ├── admin.py
│   ├── apps.py
│   ├── forms.py
│   ├── __init__.py
│   ├── locale/
│   │   ├── en-us
│   │   │   └── LC_MESSAGES
│   │   │       └── django.po
│   │   ├── es
│   │   │   └── LC_MESSAGES
│   │   │       └── django.po
│   │   └── pr
│   │       └── LC_MESSAGES
│   │           └── django.po
│   ├── migrations/
│   ├── models.py
│   ├── settings.py
│   ├── static/
│   ├── templates
│   ├── tests.py
│   ├── urls.py
│   ├── utils.py
│   └── views.py
└── setup.py
APP 2 (the one that has APP 1 installed)
├── app2/
│   ├── locale/
│   │   ├── en-us/
│   │   │   └── LC_MESSAGES
│   │   │       ├── django.mo
│   │   │       └── django.po
│   │   ├── es/
│   │   │   └── LC_MESSAGES
│   │   │       ├── django.mo
│   │   │       └── django.po
│   │   └── pr/
│   │       └── LC_MESSAGES
│   │           ├── django.mo
│   │           └── django.po
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'app1.apps.App1SiteConfig',
    'app2.apps.App2SiteConfig',
]

LANGUAGE_CODE = 'es'

LANGUAGES = (
    ('en-us', _('English')),
    ('es', _('Spanish')),
    ('pt', _('Portuguese'))
)

LOCALE_PATHS = (
    os.path.join(BASE_DIR, "app2", "locale"),
)
Basically, what I want to do is:
compile the .mo's from App1
use the App1 translations (it has its own templates and models, so ideally they would be used there)
What I don't want:
compile the App1 .mo's from the django-admin of App2 and then translate it there.
Thanks in advance!
Well, I finally managed to solve this on my own.
With the virtualenv activated, I moved to the App1 directory and executed django-admin compilemessages. The .mo's were created in the right path (app1/locale/<lang>/LC_MESSAGES/django.mo) and they can now be used from the site.
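This works because Django's translation loader also searches each installed app's own locale/ directory, in addition to LOCALE_PATHS. A trivial sketch of the catalog path it resolves for App1 (names taken from the question, purely illustrative):

```python
import os

# the app-directories loader looks for <app>/locale/<lang>/LC_MESSAGES/django.mo
app_dir = "app1"
lang = "es"
catalog = os.path.join(app_dir, "locale", lang, "LC_MESSAGES", "django.mo")
print(catalog)  # app1/locale/es/LC_MESSAGES/django.mo (on POSIX systems)
```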

How to make mtd-utils 2.0 for a specified deployment path

I downloaded mtd-utils 2.0 and I want to build it for a specified deployment path. If I launch:
./configure --bindir .../mtd-utils-81049e5/deploy/usr/sbin
and then I do:
make
I get the output in the folder where I launched make. I want to have the executable files somewhere like: bla/mtd-utils-2.0.../deploy/usr/sbin...
IIUC, you can do it like this:
./configure --prefix=/tmp/mtd-utils
make
make install
Finally, you get this:
$ tree /tmp/mtd-utils
/tmp/mtd-utils
├── sbin
│   ├── doc_loadbios
│   ├── docfdisk
│   ├── flash_erase
│   ├── flash_eraseall
│   ├── flash_lock
│   ├── flash_otp_dump
│   ├── flash_otp_info
│   ├── flash_otp_lock
│   ├── flash_otp_write
│   ├── flash_unlock
│   ├── flashcp
│   ├── ftl_check
│   ├── ftl_format
│   ├── jffs2dump
│   ├── jffs2reader
│   ├── mkfs.jffs2
│   ├── mkfs.ubifs
│   ├── mtd_debug
│   ├── mtdinfo
│   ├── mtdpart
│   ├── nanddump
│   ├── nandtest
│   ├── nandwrite
│   ├── nftl_format
│   ├── nftldump
│   ├── recv_image
│   ├── rfddump
│   ├── rfdformat
│   ├── serve_image
│   ├── sumtool
│   ├── ubiattach
│   ├── ubiblock
│   ├── ubicrc32
│   ├── ubidetach
│   ├── ubiformat
│   ├── ubimkvol
│   ├── ubinfo
│   ├── ubinize
│   ├── ubirename
│   ├── ubirmvol
│   ├── ubirsvol
│   └── ubiupdatevol
└── share
    └── man
        ├── man1
        │   └── mkfs.jffs2.1
        └── man8
            └── ubinize.8
5 directories, 44 files

Correct way of starting a NodeJs, React project with Express

I'm beginning with React, NodeJs and ExpressJs. I have seen many tutorials, but I'm not sure of the correct way to start a project.
I have seen two ways: the first being express <project_name> and the second being npm init.
Which is correct? And if there isn't a correct way, why would you initialize them differently when npm init eventually includes express (in the tutorials)?
Thanks
npm init is a good way to start; as you know, it creates a package.json file in your project directory where you can store your project dependencies.
After this you can run the following commands:
npm install --save-dev webpack
npm install --save-dev babel
npm install --save-dev babel-loader
npm install babel-core
npm install babel-preset-env
npm install babel-preset-react
or as a single line command use this:
npm install --save-dev webpack babel babel-loader babel-core babel-preset-env babel-preset-react
After the first command you create a webpack.config.js file by hand (installing webpack does not generate one for you). The remaining commands make Babel available in your project, with babel-loader letting webpack run your code through Babel.
Now it's time to create the project structure, which looks like:
projectFolder/
├── package.json
├── public
│   ├── favicon.ico
│   └── index.html
├── README.md
└── src
    ├── App.css
    ├── App.js
    ├── App.test.js
    ├── index.css
    ├── index.js
    └── logo.png
This is a very basic project structure. It doesn't have a server-side part.
The full structure looks like:
react/
├── CHANGELOG.md
├── CONTRIBUTING.md
├── docs
│   ├── data-fetching.md
│   ├── getting-started.md
│   ├── how-to-configure-text-editors.md
│   ├── react-style-guide.md
│   ├── README.md
│   └── recipes/
├── LICENSE.txt
├── node_modules/
├── package.json
├── README.md
├── src
│   ├── actions
│   ├── client.js
│   ├── components
│   │   ├── App
│   │   │   ├── App.js
│   │   │   ├── App.scss
│   │   │   ├── package.json
│   │   │   └── __tests__
│   │   │       └── App-test.js
│   │   ├── ContentPage
│   │   │   ├── ContentPage.js
│   │   │   ├── ContentPage.scss
│   │   │   └── package.json
│   │   ├── ErrorPage
│   │   │   ├── ErrorPage.js
│   │   │   ├── ErrorPage.scss
│   │   │   └── package.json
│   │   ├── Feedback
│   │   │   ├── Feedback.js
│   │   │   ├── Feedback.scss
│   │   │   └── package.json
│   │   ├── Footer
│   │   │   ├── Footer.js
│   │   │   ├── Footer.scss
│   │   │   └── package.json
│   │   ├── Header
│   │   │   ├── Header.js
│   │   │   ├── Header.scss
│   │   │   ├── logo-small#2x.png
│   │   │   ├── logo-small.png
│   │   │   └── package.json
│   │   ├── Link
│   │   │   ├── Link.js
│   │   │   └── package.json
│   │   ├── Navigation
│   │   │   ├── Navigation.js
│   │   │   ├── Navigation.scss
│   │   │   └── package.json
│   │   ├── NotFoundPage
│   │   │   ├── NotFoundPage.js
│   │   │   ├── NotFoundPage.scss
│   │   │   └── package.json
│   │   ├── TextBox
│   │   │   ├── package.json
│   │   │   ├── TextBox.js
│   │   │   └── TextBox.scss
│   │   ├── variables.scss
│   │   └── withViewport.js
│   ├── config.js
│   ├── constants
│   │   └── ActionTypes.js
│   ├── content
│   │   ├── about.jade
│   │   ├── index.jade
│   │   └── privacy.jade
│   ├── core
│   │   ├── db.js
│   │   ├── DOMUtils.js
│   │   ├── fetch
│   │   │   ├── fetch.client.js
│   │   │   ├── fetch.server.js
│   │   │   └── package.json
│   │   ├── Location.js
│   │   └── passport.js
│   ├── data
│   │   ├── queries
│   │   │   ├── content.js
│   │   │   ├── me.js
│   │   │   └── news.js
│   │   ├── schema.js
│   │   └── types
│   │       ├── ContentType.js
│   │       ├── NewsItemType.js
│   │       └── UserType.js
│   ├── public
│   │   ├── apple-touch-icon.png
│   │   ├── browserconfig.xml
│   │   ├── crossdomain.xml
│   │   ├── favicon.ico
│   │   ├── humans.txt
│   │   ├── robots.txt
│   │   ├── tile.png
│   │   └── tile-wide.png
│   ├── routes
│   │   ├── contact
│   │   │   ├── Contact.js
│   │   │   ├── Contact.scss
│   │   │   └── index.js
│   │   ├── home
│   │   │   ├── Home.js
│   │   │   ├── Home.scss
│   │   │   └── index.js
│   │   ├── login
│   │   │   ├── index.js
│   │   │   ├── Login.js
│   │   │   └── Login.scss
│   │   └── register
│   │       ├── index.js
│   │       ├── Register.js
│   │       └── Register.scss
│   ├── routes.js
│   ├── server.js
│   ├── stores
│   └── views
│       ├── error.jade
│       └── index.jade
├── test
│   └── stubs
│       └── SCSSStub.js
└── tools
    ├── build.js
    ├── bundle.js
    ├── clean.js
    ├── copy.js
    ├── deploy.js
    ├── lib
    │   ├── fetch.js
    │   └── fs.js
    ├── README.md
    ├── run.js
    ├── runServer.js
    ├── start.js
    └── webpack.config.js
Created using yeoman generator-react-fullstack
Alternatively, you can do all this by simply using the yeoman react generator, but be careful with generators; they can sometimes complicate your deployment process.

How to use a Puppetfile to configure a server in standalone mode

I created a Puppet configuration structure:
puppet
├── data
│   └── common.yaml
├── hiera.yaml
├── manifests
│   └── site.pp
├── modules
│   ├── accessories
│   │   └── manifests
│   │       └── init.pp
│   ├── nginx
│   │   ├── manifests
│   │   │   ├── config.pp
│   │   │   ├── init.pp
│   │   │   └── install.pp
│   │   └── templates
│   │       └── vhost_site.erb
│   ├── php
│   │   ├── manifests
│   │   │   ├── config.pp
│   │   │   ├── init.pp
│   │   │   └── install.pp
│   │   └── templates
│   │       ├── php.ini.erb
│   │       └── www.conf.erb
│   └── site
│       └── manifests
│           ├── database.pp
│           ├── init.pp
│           └── webserver.pp
└── Puppetfile
Now I have just one server, so I sometimes update it manually by running:
sudo puppet apply --hiera_config=hiera.yaml --modulepath=./modules manifests/site.pp
At this point I need to use some external modules, so I added a Puppetfile with the following lines:
forge "http://forge.puppetlabs.com"
mod 'puppetlabs-mysql', '3.10.0'
...and of course it didn't work.
I tried to find something in the command settings for 'apply' to configure this (Configuration Reference), but was unsuccessful.
Is it possible to auto-configure Puppet in standalone mode using a Puppetfile, or can this only be done with 'puppet module install'?
Puppetfiles are not interpreted or read by the puppet server or client code. They're there to help other tools effectively deploy the proper puppet modules.
In your case, in order to take advantage of the Puppetfile you've written, you would need to install and configure r10k. The basics are covered in the Puppet Enterprise documentation, and the r10k GitHub page is another great resource.
Once installed and configured, r10k will read your Puppetfile and download+install the defined entries. In your case, it would install version 3.10.0 of puppetlabs-mysql. This would be installed into your modules directory and then you can execute the puppet agent run and take advantage of the newly installed modules.
In summary, Puppetfiles are not used by the client; they're used by code deployment software (r10k) to download and build the proper modules for the puppet server or agent to consume. Your options are to configure r10k to provision the modules as defined in the Puppetfile, or to download the modules manually and eliminate the need for the Puppetfile.
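For reference, here is a sketch of how the Puppetfile fits into that workflow (the second module entry is purely hypothetical, shown only to illustrate the format):

```ruby
# Puppetfile — consumed by r10k (e.g. `r10k puppetfile install --moduledir ./modules`),
# not by `puppet apply` itself; after r10k runs, the modules are available on the
# modulepath already passed to puppet apply
forge "https://forge.puppetlabs.com"

mod 'puppetlabs-mysql', '3.10.0'   # pinned Forge module from the question
mod 'puppetlabs-stdlib', '4.25.0'  # hypothetical extra dependency
```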
