Variables across Terraform plans and modules?

What's the common pattern for not duplicating variable values across plans?
We have a standard set of tags we use in plans and modules which we wish to define once and use many times. For example, we set a CostType tag to values like compute, storage, etc. We can define it at plan level or at module level, but that means defining the same variable in multiple places, which isn't very DRY (don't repeat yourself).
Options
a non-infrastructure-changing module that defines these "global" variables; all modules/plans use it first so the rest of the actions can harvest the values from it (see the sketch after this list)
a non-infrastructure-changing plan whose remote state stores the variable values, which modules/plans then read
a tfvars file handled by the scripts that wrap the Terraform actions
DevOps elves magically handle this problem
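For illustration, a minimal sketch of the first option (the module path, tag keys, and values here are hypothetical):

# modules/globals/outputs.tf -- defines no infrastructure, only shared values
output "common_tags" {
  value = {
    CostType = "compute"
    Team     = "platform"
  }
}

# in each plan or module that needs the tags
module "globals" {
  source = "../modules/globals"
}

resource "aws_instance" "example" {
  ami           = "${var.ami}"    # placeholder
  instance_type = "t2.micro"
  tags          = "${module.globals.common_tags}"
}

Per-resource tags can still be layered on top with merge(module.globals.common_tags, map("Name", "example")).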
How do you solve this problem in your organization?

I have used symbolic links successfully to share the same variable file across multiple locations.
Symbolic links are well supported by Git and can be used on Windows too, with some care (see Git Symlinks in Windows).
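For example, with the shared definitions kept in a sibling globals directory (the paths here are illustrative):

ln -s ../globals/variables.tf variables.tf

Git records the link itself rather than a copy, so the variables stay defined in a single file.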

Related

Changing variable names after importing functions

I have a master .py file that collects all the functions I need for various applications. However, each application needs the variable names in those functions changed after import.
What is the best way to modify functions after import?
Thanks!

Perforce: how to create a branch spec with multiple sources and a single destination

According to the documentation
https://www.perforce.com/manuals/v15.1/dvcs/_specify_mappings.html
it seems that I can only specify a one-to-one mapping. Is there any way to specify a mapping with two sources and one destination?
For example:
//stream/main/... //depot/main/...
//stream/build/... //depot/main/...
Branch mappings are one-to-one. If you want to integrate multiple sources into one target, you need multiple branch mappings and multiple integrate commands. (I would recommend multiple submits as well; it is technically possible to squash multiple integrations into one submit, but doing so multiplies the complexity of the conflict resolution process.)
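For example, with two hypothetical branch specs named main_to_depot and build_to_depot, that workflow looks like:

p4 integrate -b main_to_depot
p4 resolve
p4 submit -d "Integrate //stream/main/... into //depot/main/..."
p4 integrate -b build_to_depot
p4 resolve
p4 submit -d "Integrate //stream/build/... into //depot/main/..."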
YMMV, but I am fairly sure that since 2004.1 you can use the + syntax to append rules instead of overwriting them, like:
//stream/main/... //depot/main/...
+//stream/build/... //depot/main/...
Here is the associated reference on Perforce views.

How to store and reuse a Terraform interpolation result within resources?

How do I store and reuse a Terraform interpolation result within resources that do not expose it as an output?
For example, in aws_ebs_volume I calculate my volume size using:
size = "${lookup(merge(var.default_ebs_vol_sizes,var.ebs_vol_sizes),
var.tag_disk_location[var.extra_ebs_volumes[count.index % length(var.extra_ebs_volumes)]])}"
Now I need to reuse the same size for calculating the cost tags in the same resource, as well as in the corresponding ec2 resource (in the same module). How do I do this without copy-pasting the entire formula?
PS: I have come across this use case in multiple scenarios, so the above is just one case where I need to reuse an interpolated result. Getting the interpolated value back via the corresponding data source is one way out here, but I am looking for a more straightforward solution.
This is now possible using local values, available from Terraform 0.10.3 onwards:
https://www.terraform.io/docs/configuration/locals.html
Local values assign a name to an expression, which can then be used multiple times within a module.
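A minimal sketch using the question's variable names (availability_zone and var.az are placeholders): the merged map is hoisted into a local once; since a local cannot reference count.index, only the index-dependent lookup stays inside each resource:

locals {
  # compute the merged size map once for the whole module
  ebs_vol_sizes = "${merge(var.default_ebs_vol_sizes, var.ebs_vol_sizes)}"
}

resource "aws_ebs_volume" "extra" {
  count             = "${length(var.extra_ebs_volumes)}"
  availability_zone = "${var.az}"

  # per-volume lookup against the shared local
  size = "${lookup(local.ebs_vol_sizes, var.tag_disk_location[var.extra_ebs_volumes[count.index % length(var.extra_ebs_volumes)]])}"
}

The cost tags and the corresponding ec2 resource can reference local.ebs_vol_sizes the same way, so the merge formula lives in exactly one place.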

Is it acceptable to require Node modules based on env vars or logic?

It might seem like an odd question, but I am building a module that abstracts out certain logic for different data storage options. The idea is that anyone using the module could use it with MongoDB, Redis, SQL, or (insert whatever option you want here).
I have a basic interface I follow in each of my implementations, exporting the same function names and signatures, just with different implementations for each of the various data storage options.
Right now I have something like helper = require(process.env.data_storage_helper)
Then the helper can be used the same way.
Is this bad practice, and if so, why? Is there a better or suggested way to accomplish this kind of abstraction?
This isn't technically bad practice, but I would add a level of indirection. Instead, store those options in configuration files that get picked based on NODE_ENV or another environment variable, then use the same key in the configuration object no matter what. A good example of a framework employing this is kraken.js, which auto-loads a configuration file based on NODE_ENV.
You can then grab a handle on the configuration object after Kraken has started up (it uses confit under the hood, so whatever you end up using, you can always use that library directly) and read the "data_storage_helper" key inside a storage module that does the decision making, to see what your store is backed by.
The big pro of this approach is that if you later want to change the data storage, or any other behavior of another module, you can just update a JSON file. :-)
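To illustrate the idea without Kraken, a minimal plain-Node sketch (the file names and implementation modules are hypothetical):

// config/development.json: { "data_storage_helper": "redis" }
// config/production.json:  { "data_storage_helper": "sql" }

// storage/index.js -- the decision-making module
const env = process.env.NODE_ENV || 'development';
const config = require('../config/' + env + '.json');

// map the configured key to concrete implementations sharing one interface
const implementations = {
  mongodb: './mongodb',
  redis: './redis',
  sql: './sql'
};

module.exports = require(implementations[config.data_storage_helper]);

Callers simply require('./storage') and never see the backing store; swapping stores becomes a JSON edit instead of a code change.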

MS-Tests: How to maintain test settings across different dev environments?

I am setting up a solution that is used by multiple developers/teams working on different features of the next release of an application.
In my IntegrationTest project I need to maintain the test settings for the different teams. Basically, each team uses a different test server/web services.
I can create a ".testsettings" file, but I cannot find the section where I could define simple key-value pairs. For the testing I only need a few key-value pairs (server_url, webservice_url, ...).
Furthermore: how do I access the ".testsettings" file at runtime from my unit tests?
