I'm a bit curious about Terraform's modules. What exactly is their point? I keep seeing that they are supposed to save time and avoid copy-paste, but I think I might be missing something, because I don't see why I would use them.
I have only used them once, so my experience is limited. I have to create 2 different environments and I'm not sure why I would go for modules. Maybe I would have a different opinion if I had to do 10 environments.
The environments will be in different accounts, using different VPCs, different IAM, and so on, which led me to think I could basically create 2 folders and reference some variables.
Regarding the "no copy-paste": it seems to me you write a file which then refers to the modules and the variables. At some point you still need to write all the resources (int-sg and test-sg, for example), so why not write them directly in the right folder?
I hope this question makes sense; I would appreciate any constructive opinion.
Modules are intended to group resources that are required multiple times.
Let's say you have the three environments dev, staging and prod.
You would want to keep them as similar as possible, especially staging and prod.
So, if you want to create a server with an EIP, certain security groups and whatever it needs to be accessible, you can create a module to group these resources.
The module can then easily be included in your .tf files for the respective environments with different parameters for each env.
So you can keep the resources very generic, and you also have the advantage that Terraform will update all environments in the same way when you change resources in the module.
You thus have a single place to apply modifications, instead of editing the changed resources in every environment's files.
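For example, a minimal sketch using the AWS provider (module layout, names, AMI and VPC IDs are placeholders):

# modules/web_server/main.tf -- groups a server, a security group and an EIP
variable "name" {}
variable "instance_type" {}
variable "vpc_id" {}

resource "aws_security_group" "this" {
  name   = "${var.name}-sg"
  vpc_id = var.vpc_id
}

resource "aws_instance" "this" {
  ami                    = "ami-12345678" # placeholder AMI
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.this.id]
}

resource "aws_eip" "this" {
  instance = aws_instance.this.id
}

# environments/dev/main.tf -- each environment just calls the module with its own parameters
module "web_server" {
  source        = "../../modules/web_server"
  name          = "dev-web"
  instance_type = "t3.micro"
  vpc_id        = "vpc-aaaa1111" # placeholder
}

# environments/prod/main.tf
module "web_server" {
  source        = "../../modules/web_server"
  name          = "prod-web"
  instance_type = "t3.large"
  vpc_id        = "vpc-bbbb2222" # placeholder
}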
Now just imagine you have more than 2 or 3 environments (or use cases for the same resource groups) and you'll see the advantage of modules ;)
In Bitbucket Pipelines, for manually run pipelines (i.e. "custom pipelines") where you use the web UI to set variables, is it possible to insert any documentation? For example, so that the UI presents a description above or alongside the input form for a variable? (Or are you limited only to being able to name the pipeline and optionally give the variables each a default value and a set of allowed values?)
I don't want other users of the same pipeline (from the web UI) to misinterpret what a keyword expects, or indeed what the pipeline will do, and I doubt they will always refer to the source code itself to find comments.
Not that I know of.
The documentation only describes the name, default and allowed-values attributes: https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/#variables
But I think your issue boils down to a general programming problem: naming variables adequately. Using comments to make up for bad variable names is among the top 10 programming bad practices.
Variable names should always be unambiguous and self-explanatory for anyone reading them.
I would like to split up resource creation using different modules, but I have been unable to figure out how.
For this example I want to create a web app (I'm using the azurerm provider). I've done so by adding the following to a module:
resource "azurerm_app_service" "test" {
name = "${var.app_service_name}"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
app_service_plan_id = "${var.app_service_plan_id}"
app_settings {
SOME_SETTING = "${var.app_setting}"
}
}
It works, but I'd like to have a separate module for applying the app settings, so:
Module 1 creates the web app
Module 2 applies the app settings
Potential module 3 applies something else to the web app
Is this possible? I tried splitting it up (having two modules defining the web app, but only one of them containing the app settings), but I get an error stating that the web app already exists, so Terraform doesn't seem to understand that I am trying to manipulate the same resource.
Details on how it's going to be used:
I am going to provide a UI on which the end user can choose the stack of resources needed, tick a range of options desired for their project, and fill out the required parameters for the infrastructure.
Once submitted, the parameters are applied to a Terraform template. It is not feasible to have a template for each permutation of options, so it will have to include different modules depending on the chosen options.
For example: if a user ticks web app, Cosmos DB and application insights the Terraform template will include these modules (using the count trick to create conditions). In this example I'll need to pass the instrumentation key from application insights to the web app's application settings and this is where my issue is.
If the user didn't choose Application Insights I don't want that setting on the web app, and that is why I need to gradually build up a Terraform resource. Also, depending on the type of database the user chose, different settings will be added to the web app's settings.
So my idea is to create a module to apply certain application settings. I don't know if this is possible (or a better way exists), hence my question.
The best way for you to do this would be to wrap Terraform with a bash script or whatever scripting language you want (e.g. Python). Then create a template (in bash, or in Python with Jinja2) that generates the resource with whatever options the customer selected for the settings, render the template to produce your Terraform code, and then apply it.
I've done this with S3 buckets quite a bit. In Terraform 0.12 you can also do some of this templating natively (for example with the templatefile function).
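If you go the native route, a minimal hedged sketch (Terraform 0.12+; the template file name and variables are hypothetical):

variable "enable_insights" {
  type    = bool
  default = false
}

variable "db_type" {
  type    = string
  default = "cosmosdb"
}

locals {
  # startup.sh.tpl could contain lines guarded by %{ if enable_insights } ... %{ endif }
  startup_script = templatefile("${path.module}/startup.sh.tpl", {
    enable_insights = var.enable_insights
    db_type         = var.db_type
  })
}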
I know this is 5 months old, but I think part of the situation here is that the way you describe splitting it up does not exactly fit how modules are intended to be used.
For one thing, you cannot "build up a resource" dynamically, since Terraform is by design declarative. You can only define a resource and then dynamically provide its predefined inputs (including its count, to activate or deactivate it).
Secondly, modules are in no way necessary to turn portions of the stack on and off via configuration. Modules are simply a way of grouping sets of resources together for containment and reuse. Any dynamism available to you with a stack consisting of a complex hierarchy of modules would be equally available with essentially the same code in one giant monolithic blob; the problem with the monolith is just that it would be a mess, and you wouldn't be able to reuse pieces of it for other things.
Finally, using a module to provide settings is not really what modules are for. Yes, theoretically you could create a module with a "null_data_source" and use it purely as some kind of shim to provide settings, but that would likely be a convoluted, unnecessary approach to something better done by simply providing a variable the way you have already shown.
You probably will need to wrap this in some kind of bash (etc.) script at the top level, as mentioned in other answers, and this is not a limitation of Terraform. For example, once you have your modules, you would want to keep all currently applied stacks (one per customer, or whatever) in some kind of composition repository. How will you create the composition stacks for those customers after they fill out the setup form? You will have to do that with some file-creation automation, which is not what Terraform is there for. Terraform is there to execute stacks that exist. It's not a limitation of Terraform that you have to create the .tf files with an external text editor to begin with, and it's not a limitation that in a situation like this you would use some external scripting to dynamically create the composition stacks for the customer; it's just part of the way you would use automation to get things ready for Terraform to do its job of applying the stacks.
So, you cannot avoid this external script tooling, and you would probably use it to create the folders for the customer-specific composition stacks (that refer to your modules), populate the folders with default files, and create a .tfvars file based on the customer's input from the form. Then you could go about this in multiple ways:
You have the .tfvars file be the only difference between customer composition stacks. Whatever modules you do or don't want to use would be activated or deactivated by the "count trick" you mention (sketched after this list), driven by variables from .tfvars. This approach has the advantage of being easy to reason about, since all customer composition stacks are the same thing, just configured differently.
You could have the tooling actually insert the module definitions that you want into the files of the composition stacks. This will create more concise composition stacks, with fewer janky "count tricks", but every customer's stack would be its own weird snowflake.
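As a hedged illustration of the first approach (assuming Terraform 0.12; module, resource and variable names are hypothetical), a module guarded by an "enabled" flag might look like this, with resource-level count doing the activation since pre-0.13 Terraform does not allow count on module blocks:

# modules/app_insights/main.tf
variable "enabled" {
  default = false
}
variable "name" {}
variable "location" {}
variable "resource_group_name" {}

resource "azurerm_application_insights" "this" {
  count               = var.enabled ? 1 : 0
  name                = var.name
  location            = var.location
  resource_group_name = var.resource_group_name
  application_type    = "web"
}

# Empty string when the flag is off, so consumers can still reference the output safely.
output "instrumentation_key" {
  value = join("", azurerm_application_insights.this.*.instrumentation_key)
}

The composition stack then sets "enabled" from the flags in the customer's generated .tfvars file.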
As far as splitting up the modules, be aware that there is a whole art/science/philosophy about this. I refer you to this resource on IaC design patterns and this talk on the subject (the relevant bit starts at 19:15). Those are general context on the subject.
For your particular case, you would basically want to put all the smallest divisible functional chunks (those that might be turned on or off) into their own modules, each to be referenced by higher-level consuming modules. You mention that it is not feasible to have a module for every permutation; again, that is thinking about it wrong. You would aim for something more like a tree of modules and combinations. At the top level you would have (bash etc.) tooling that creates the new customer's composition folder and their .tfvars file, and drops in the same composition stack that will form the top of the "module tree". Each module that represents an optional part of the customer's stack will take a count. Those modules will either have functional resources inside, or will be intermediate modules, themselves instantiating a configurable set of alternative sub-modules containing resources.
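For instance, a hedged sketch of what that identical top-level composition file might look like (module paths and variable names are hypothetical; the per-customer .tfvars supplies the flags):

module "app_insights" {
  source              = "./modules/app_insights"
  enabled             = var.enable_app_insights
  name                = "${var.customer_name}-appinsights"
  location            = var.location
  resource_group_name = var.resource_group_name
}

module "web_app" {
  source              = "./modules/web_app"
  enabled             = var.enable_web_app
  name                = "${var.customer_name}-web"
  location            = var.location
  resource_group_name = var.resource_group_name
  app_service_plan_id = var.app_service_plan_id

  # Empty when Application Insights is disabled, so the web_app module can decide
  # whether to include the corresponding app setting.
  instrumentation_key = module.app_insights.instrumentation_key
}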
But it will be up to you to sit down and think through the design of a "decision tree", implemented as a hierarchy of modules, which covers all the permutations that are not feasible to create separate monolithic stacks for.
Terraform 0.12's dynamic nested blocks help specifically with the one aspect of having or not having a declared block like app_settings. Since you cannot "build up a resource" dynamically, the only alternative in a case like that used to be an intermediate module that declares the resource in multiple ways (with and without the app_settings block), with one of them chosen via the "count trick" or other input configuration. That sort of thing is just not necessary now that dynamic blocks exist.
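To make that concrete, here is a minimal sketch assuming Terraform 0.12+ and the azurerm provider; variable names and defaults are illustrative, and the remaining variables match the module shown in the question. Note that app_settings on azurerm_app_service is actually a plain map argument rather than a nested block, so optional entries can simply be merged into it, while connection_string is a real nested block and is shown with a dynamic block:

variable "app_settings" {
  type    = map(string)
  default = {}
}

variable "instrumentation_key" {
  type    = string
  default = "" # empty when the user did not pick Application Insights
}

variable "connection_strings" {
  type    = list(object({ name = string, type = string, value = string }))
  default = []
}

resource "azurerm_app_service" "example" {
  name                = var.app_service_name
  location            = var.location
  resource_group_name = var.resource_group_name
  app_service_plan_id = var.app_service_plan_id

  # Merge optional settings into the map and drop entries with empty values,
  # e.g. when no instrumentation key was supplied.
  app_settings = { for k, v in merge(var.app_settings, {
    APPINSIGHTS_INSTRUMENTATIONKEY = var.instrumentation_key
  }) : k => v if v != "" }

  # connection_string is a true nested block; "dynamic" emits zero or more of them.
  dynamic "connection_string" {
    for_each = var.connection_strings
    content {
      name  = connection_string.value.name
      type  = connection_string.value.type
      value = connection_string.value.value
    }
  }
}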
I have two variables in the scope of package main; they are the following:
var (
	app Application
	cfg Config
)
Now, since my application is starting to grow, I have decided to put each module of the website in its own package, each in its own subdirectory, like so:
/src/github.com/Adel92/Sophie
+ user/ (package user)
- register.go
- login.go
- password.go
+ topic/ (package topic)
- ... etc
- main.go (package main)
How would I go about accessing the app and cfg global variables from other packages? Is this the wrong way to go about it? I have a feeling it is.
In that case, how would I declare functions in their own namespace so I don't have to go crazy with names that are prefixed with user and topic all the time?
Capitalized variable names are exported for access in other packages, so App and Cfg would work. However, using sub-packages for name-spacing is generally not recommended; packages are intended for discrete, self-contained functionality so it is usually more trouble than it's worth to use them this way (for example, import cycles are strictly impossible, so if you have two sub-packages in this layout that need to talk to each other then you're out of luck).
If you're finding you need to prefix things with user and topic in order to avoid name collisions, then perhaps the underlying concept should be factored into its own package, and you can create one instance of it for user and one for topic?
I'm trying to find the least painful way of transporting config files between different environments, and I have found many things that can break the system after the transport. I have a script that keeps the attribute values correct for the ones that are dependent on the environment, but here is a list of a couple of things I'm not sure about. Maybe someone can shed some light on them.
What I want to do is simply transport the XML file based on the steps from the book for OpenAM 9 (simply export/import to an XML file using ssoadm), but by analyzing the file in depth I find many differences that might break the system, so any help is appreciated.
In every XML file we have sections for 'iplanet-am-auth-ldap-bind-passwd' with a hash value under it, but in one XML file we're missing one line with the hash. I was wondering: if we add that line with the correct hash value, will it break the system, or won't it matter as long as the hash matches the target environment?
Does the size of 'iplanet-am-logging-buffer-size' have to match what was originally set up in the target environment, or will it be OK if we overwrite the value from the source XML file?
For some reason we have different links in delegation-rules with the same name, for example:
# environment1 - sms://dc=test-domain,dc=net/sunEntitlementService/1.0/application/ws/1/entitlement/entitlements
# environment2 - sms://dc=test-domain,dc=net/sunEntitlementService/1.0/application/ws/1/entitlement/decision
# environment3 - sms://*dc=test-domain,dc=net/sunIdentityRepositoryService/1.0/application/agent
It could be due to the way the server was set up a long time ago, or due to development processes over time (I don't know), but my question is:
If the rule names are the same but some (or all) options/values are different between environments, and we overwrite them with the source file from a different environment, will this break things or won't it matter?
We are working on a project jointly with another consulting firm. For the most part we each have our own domains, but there is a little crossover.
Let's say we both modify an entity that has conflicting changes. Using the "last one in wins" rule, whichever solution is imported last will have its change implemented.
Is there a tool or some known methodology for identifying these conflicts before the import is done to help us manage this problem?
I have run into this numerous times and my approach has been to export the customizations and inspect the contents of the customization files (XML files) with a code comparison tool like WinDiff or Beyond Compare.
It's not strictly a "last one wins" scenario; there is a model that allows some coexistence, e.g. if you both add fields to the same form.
One thing to bear in mind is that you should both be doing all your customisations in an unmanaged solution linked to a unique publisher and that publisher should have a unique prefix, so you might use John_ for the prefix for all new entities, fields etc, and the other firm might use Acme_ or whatever suits them.
This helps to reduce direct conflicts, such as both adding a field with the same name but different types (they won't have the same schema name, because of the different prefixes).
Keep your form components in separate tabs and sections; if you both use managed solutions, the form customisations will be merged. Similarly, SiteMap & Ribbon customisations can both be developed independently; if you keep your changes grouped together you can let CRM merge the solutions for you.
Do not import the other consultancy's main customisation solutions into your development environment, to avoid creating cross-dependencies between them; you may reference the same entities, however. If some entities needed by both consultancies are custom, you will need to agree on what should be included in a "core" solution upfront; develop it, share it, and install it on all development environments as a prerequisite.
Depending on the project's complexity, you may find it useful to host an IFD staging environment with a shared solution, which both companies can use to resolve conflicts and which can serve as a testing environment.
Agree upfront how complaints & UAT issues should be reported, investigated & resolved, and clearly define the division of work.