How to store and reuse terraform interpolation result within resources?

How do I store and reuse a Terraform interpolation result within resources that do not expose it as an output?
Example: in aws_ebs_volume, I am calculating my volume size using:
size = "${lookup(merge(var.default_ebs_vol_sizes,var.ebs_vol_sizes),
var.tag_disk_location[var.extra_ebs_volumes[count.index % length(var.extra_ebs_volumes)]])}"
Now I need to reuse the same size for calculating the cost tags in the same resource, as well as in the corresponding EC2 resource (in the same module). How do I do this without copy-pasting the entire formula?
PS: I have come across this use case in multiple scenarios, so the above is just one example where I need to reuse an interpolated result. Getting the interpolated result via the corresponding data source is one way out in this case, but I am looking for a more straightforward solution.

This is now possible using local values, available from Terraform 0.10.3 onwards.
https://www.terraform.io/docs/configuration/locals.html
Local values assign a name to an expression, which can then be used
multiple times within a module.
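A minimal sketch of how that could look for the question's example (variable and resource names not taken from the question are illustrative, and the count.index-dependent lookup is simplified to a single volume, since a local value cannot reference count.index):

locals {
  # Name the repeated expression once ...
  data_vol_size = "${lookup(merge(var.default_ebs_vol_sizes, var.ebs_vol_sizes), var.disk_location)}"
}

resource "aws_ebs_volume" "data" {
  availability_zone = "${var.az}"    # hypothetical variable
  size              = "${local.data_vol_size}"

  tags {
    # ... and reuse it wherever it is needed, e.g. for a cost tag.
    CostSizeGb = "${local.data_vol_size}"
  }
}

The same local.data_vol_size reference can then also be used in the corresponding EC2 resource in the same module.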

Related

How can I make it thread-safe to use a Fortran module to share variables between Abaqus subroutines?

I have an Abaqus/Explicit model which currently uses 3 subroutines: VEXTERNALDB, VUAMP, and VDLOAD. VEXTERNALDB is used to read an externally generated text file and save the values such that they can be read by the other two subroutines.
I would like to add additional complexity to the model, which requires that one of the imported values will now instead be determined internally and vary based on the state of the model in each increment.
I am planning to implement this capability using a module, as outlined here. However, due to my general unfamiliarity with Fortran and multithreading, I am concerned about thread-safety. My questions are as follows:
Is the same module variable global between all threads, or is it defined on a per thread basis?
If the variable is shared between threads is a MUTEX on any write command an acceptable solution?
Would it be better to define the variable as an array and only allow each thread to change a single value in the array?

Can I calculate time between local maxima with featuretools?

I would like to calculate time_since_previous, but not transaction after transaction; instead, only between transactions that exceed a maximum value.
Can I do that automatically, or do I need to slice the dataframe?
More specifically, I have a function to detect local maxima (using scipy.signal.find_peaks), which creates a boolean vector marking the local maxima. I could add that as a feature to the data set, and then I would like the time since previous for those local maxima.
Is that possible in a semi-automated way with featuretools?
If there is a resource doing that, that you could link to this question, that would be great!
Thanks a lot
Yes, a custom transform primitive can be made and then used by DFS to automatically calculate this feature. The built-in time_since_previous would only calculate between consecutive transactions, so the custom primitive would need to implement the time since the previous local maximum given the boolean vector from find_peaks. Here are guides for defining simple and advanced custom primitives. Let me know if this helps.
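For reference, a rough sketch of the underlying calculation using find_peaks and pandas (function and column names are made up; wrapping this as a custom transform primitive so DFS can use it is covered by the guides above):

import numpy as np
import pandas as pd
from scipy.signal import find_peaks

def time_between_peaks(values, times):
    # Time since the previous local maximum, reported only at local maxima.
    times = pd.Series(pd.to_datetime(times)).reset_index(drop=True)
    peak_idx, _ = find_peaks(np.asarray(values))   # positions of local maxima
    gaps = times.iloc[peak_idx].diff().dt.total_seconds()
    result = pd.Series(np.nan, index=times.index)  # NaN for non-peak rows
    result.iloc[peak_idx] = gaps.values
    return result

# e.g. df["time_since_prev_peak"] = time_between_peaks(df["amount"], df["transaction_time"])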

ARM Template - How to reference a copyIndex() deployment output?

I deploy 30 SQL databases via copyIndex() as sub-deployments of the main deployment, and I want to be able to reference the outputs of the dynamic deployments when kicking off another deployment. Once all the databases are deployed, I want to then add Azure Monitor metric rules to the DBs, and I need their resourceIds (the output of the DB deploy).
The answer here sounds exactly like what I'm trying to do, and I understand that each deployment is chained to have the output of the previous deploy. But then, if I want to use the chained-up "state" output, is it the very last element in the array that has the full chain? If so, is the best way to reference that to just build up the name of the deployment and append the length of the copyIndex array?
reference(concat('reference', length(variables('types')))).outputs.state.value
Like so?
Yes, you basically need to construct the name of the deployment:
referenceX
where X is the number of the last deployment; you can use the length() function for that, exactly as you suggest.
The above will only work if you gather the output from all the intermediate steps, obviously.
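For example, in the main template this might look like the following (a sketch only; the 'reference' name prefix and the types variable follow the chaining pattern from the linked answer, and the output name is made up):

"outputs": {
  "allDatabaseStates": {
    "type": "array",
    "value": "[reference(concat('reference', length(variables('types')))).outputs.state.value]"
  }
}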

Understanding of Kademlia k-bucket split

I am redesigning our system (a P2P application) that was built using a "flat model" of k-buckets - each distance has its own k-bucket. The distance is the length of the identifier minus the length of the shared prefix (XOR). Everything is clear here.
Now we want to use a binary tree to hold the buckets (as in the latest Kademlia docs).
The tree approach doesn't deal with distance "directly" when we "look for a bucket" to put a new contact in. This confuses me, since the paper says that a k-bucket should be split if the new node is closer to the local node than the K-closest node.
My question: how do I calculate distance in this case? It cannot be the prefix (path) of the bucket, since a bucket may contain nodes with different prefixes.
What is a convenient way to find the K-closest node?
Thanks in advance.
It cannot be the prefix (path) of the bucket, since a bucket may contain nodes with different prefixes.
In the tree layout each bucket does have a prefix, but it is not implicit in its position in the routing table data structure; it must be tracked explicitly during split and merge operations instead, e.g. as a base address plus prefix length, similar to CIDR notation.
An empty routing table starts out with a prefix covering the entire keyspace (i.e. 0x00/0); after some splitting, one of the buckets might cover the range 0x0CFA0 - 0x0CFBF, which would be the bucket prefix 0x0CFA/15.
See this answer on another question, which contains an example routing table layout.
Additionally, see this answer for a simple and a more advanced bucket-splitting algorithm.
How to find the matching bucket for a given ID depends on the data structure used. A sorted list will require a binary search; a Patricia trie with nearest-match lookups is another option. Even the brute-force approach can be adequate as long as you do not have to handle too many operations per second.
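As an illustration of the explicit prefix bookkeeping (a sketch only, not taken from the linked answers; the names and the brute-force lookup are just for demonstration):

KEY_BITS = 160  # Kademlia node IDs are 160 bits

class Bucket:
    def __init__(self, base, prefix_len):
        self.base = base              # key with the prefix bits set, rest zero
        self.prefix_len = prefix_len  # number of leading bits that are fixed
        self.contacts = []            # node IDs stored as integers

    def covers(self, node_id):
        # A bucket covers an ID when the leading prefix_len bits match.
        shift = KEY_BITS - self.prefix_len
        return (self.base >> shift) == (node_id >> shift)

def split(bucket):
    # Splitting replaces a bucket with two children whose prefixes are one bit longer.
    shift = KEY_BITS - bucket.prefix_len - 1
    low = Bucket(bucket.base, bucket.prefix_len + 1)                  # next bit 0
    high = Bucket(bucket.base | (1 << shift), bucket.prefix_len + 1)  # next bit 1
    for c in bucket.contacts:
        (low if low.covers(c) else high).contacts.append(c)
    return low, high

def find_bucket(buckets, node_id):
    # Brute force; a sorted list with binary search or a Patricia trie is faster.
    for b in buckets:
        if b.covers(node_id):
            return b
    raise KeyError("bucket prefixes must cover the whole keyspace")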

Variables across terraform plans and modules?

What's the common pattern for not duplicating variable values across plans?
We have a standard set of tags we use in plans and modules, which we wish to define once and use many times. For example: we set a CostType tag to values like compute, storage, etc. We can define it at the plan level or the module level, but that means defining a variable in multiple places, which isn't very DRY (don't repeat yourself).
Options
a non-infrastructure-changing module which defines these "global" variables; all modules/plans use it first so the rest of the actions can harvest the values from that plan
use a non-infrastructure-changing plan with remote state to store the variable values, and access them from modules/plans
use a tfvars file and handle it via the scripts that wrap terraform actions
devops elves magically handle this problem
How do you solve this problem in your organization?
I have successfully used symbolic links to link the same variable file into multiple locations.
Symbolic links are well supported by Git and can be used on Windows too (with some care; see Git Symlinks in Windows).
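For example (a sketch; the file names and paths are illustrative):

# common/tags.tf -- single source of truth for the shared tag variables
variable "standard_tags" {
  type = "map"

  default = {
    CostType = "compute"
  }
}

# in each plan/module directory, link the shared file in:
#   ln -s ../common/tags.tf tags.tf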
