Accessing variables across packages in Go - scope

I have two variables in the scope of package main:
var (
    app Application
    cfg Config
)
Now, since my application is starting to grow, I have decided to put each module of the website in its own package, each in its own subdirectory, like so:
/src/github.com/Adel92/Sophie
  + user/ (package user)
    - register.go
    - login.go
    - password.go
  + topic/ (package topic)
    - ... etc
  - main.go (package main)
How would I go about accessing the app and cfg global variables from other packages? Is this the wrong way to go about it? I have a feeling it is.
In that case, how would I declare functions in their own namespace so I don't have to go crazy with names that are prefixed with user and topic all the time?

Capitalized variable names are exported for access in other packages, so App and Cfg would work. However, using sub-packages for name-spacing is generally not recommended; packages are intended for discrete, self-contained functionality so it is usually more trouble than it's worth to use them this way (for example, import cycles are strictly impossible, so if you have two sub-packages in this layout that need to talk to each other then you're out of luck).
If you're finding you need to prefix things with user and topic in order to avoid name collisions, then perhaps the underlying concept should be factored into its own package, and you can create one instance of it for user and one for topic?
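For example, here is a minimal sketch of that refactoring (the config package and its fields are hypothetical): the shared state moves into its own package and is exported via capitalized names, and main, user and topic can all import it.

// config/config.go - hypothetical package owning the shared state
package config

type Application struct{ Name string }

type Config struct{ Addr string }

// App and Cfg are exported (capitalized), so any package that
// imports "github.com/Adel92/Sophie/config" can access them.
var (
    App Application
    Cfg Config
)

// user/register.go
package user

import "github.com/Adel92/Sophie/config"

func Register() {
    addr := config.Cfg.Addr // shared config, namespaced by the package name
    _ = addr
}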

Related

What's the point of Terraform's modules?

I'm a bit curious about Terraform's modules. What exactly is their point? I keep reading that they are supposed to save me time and avoid copy-paste, but I think I might be missing something, because I don't see why I would use them.
I have used them only once so my experience is limited. I have to create 2 different environments and I'm not sure why I would go for it. Maybe I would have a different opinion if I had to do 10 environments.
The environments will be in different accounts, using different VPCs, different IAM, ... which led me to think I could basically create 2 folders and reference some variables.
Regarding the "no copy-paste", it seems to me you do a file which then refers to the modules and the variables. At some point, you still need to write all the ressources (int-sg, test-sg for example). so why not write it directly into the right folder?
I hope this question makes sense, I would appreciate any constructive opinion
Modules are intended to group resources that are required multiple times.
Let's say you have the three environments dev, staging and prod.
You would want to keep them as equal as possible, especially staging and prod.
So, if you want to create a server with an EIP, certain security groups and whatever it needs to be accessible, you can create a module to group these resources.
The module can then easily be included in your .tf files for the respective environments with different parameters for each env.
So you can keep the resources very generic, and you also get the advantage that Terraform will automatically update all environments equally when you change resources in the module.
You'll thus have a single place to apply modifications, instead of editing every changed resource in each environment's files.
Now imagine you have more than 2 or 3 environments (or use cases for the same resource groups) and you'll see the advantage of modules ;)
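As a rough sketch (the module path, resource names and AMI are invented; Terraform 0.12+ syntax shown), the grouped resources live in the module, and each environment instantiates it with its own parameters:

# modules/web-server/main.tf - hypothetical module grouping the server pieces
variable "env" {}
variable "instance_type" {}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = var.instance_type

  tags = {
    Environment = var.env
  }
}

resource "aws_eip" "web" {
  instance = aws_instance.web.id
}

# environments/prod/main.tf - same module, prod-specific parameters
module "web_server" {
  source        = "../../modules/web-server"
  env           = "prod"
  instance_type = "m5.large"
}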

How can I split up resource creation into different modules with Terraform?

I would like to split up resource creation using different modules, but I have been unable to figure out how.
For this example I want to create a web app (I'm using the azurerm provider). I've done so by adding the following to a module:
resource "azurerm_app_service" "test" {
name = "${var.app_service_name}"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
app_service_plan_id = "${var.app_service_plan_id}"
app_settings {
SOME_SETTING = "${var.app_setting}"
}
}
It works, but I'd like to have a separate module for applying the app settings, so:
Module 1 creates the web app
Module 2 applies the app settings
Potential module 3 applies something else to the web app
Is this possible? I tried splitting it up (having two modules defining the web app, but only one of them containing the app settings), but I get an error stating that the web app already exists, so it doesn't seem to understand that I'm trying to manipulate the same resource.
Details on how it's going to be used:
I am going to provide a UI in which the end user can choose the stack of resources they need and tick a range of options for their project, along with filling out the required parameters for the infrastructure.
Once submitted, the parameters are applied to a Terraform template. It is not feasible to have a template for each permutation of options, so the template will have to include different modules depending on the chosen options.
For example: if a user ticks web app, Cosmos DB and Application Insights, the Terraform template will include these modules (using the count trick to create conditions). In this example I'll need to pass the instrumentation key from Application Insights to the web app's application settings, and this is where my issue is.
If the user didn't choose Application Insights, I don't want that setting on the web app, and that is why I need to gradually build up a Terraform resource. Also, depending on the type of database the user chose, different settings will be added to the web app's settings.
So my idea is to create a module that applies certain application settings. I don't know if this is possible (or whether a better way exists), hence my question.
The best way to do this would be to wrap Terraform in a script written in bash or whatever scripting language you want (Python, say). Then create a template, in bash or in Python (Jinja2), that generates the resource with whatever options the customer selected for the settings, run the template to generate your Terraform code, and then apply it.
I've done this with S3 buckets quite a bit. In Terraform 0.12, you can also render templates within Terraform itself.
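For instance (a sketch with invented file and variable names), 0.12's built-in templatefile function renders a template at plan time:

locals {
  # Render generated settings from a template shipped alongside the module
  app_settings_json = templatefile("${path.module}/settings.tpl.json", {
    instrumentation_key = var.instrumentation_key
  })
}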
I know this is 5 months old, but I think part of the situation here is that the way you describe splitting it up does not exactly fit how modules are intended to be used. For one thing, you cannot "build up a resource" dynamically, as Terraform is by design declarative. You can only define a resource and then dynamically provide its predefined inputs (including its count, to activate or deactivate it).
Secondly, modules are in no way necessary to turn portions of the stack on and off via configuration. Modules are simply a way of grouping sets of resources together for containment and reuse. Any dynamism available to you with a stack consisting of a complex hierarchy of modules would be equally available with essentially the same code all in one giant monolithic blob. The problem with the monolith is just that it would be a mess, and you wouldn't be able to reuse pieces of it for other things.
Finally, using a module to provide settings is not really what modules are for. Yes, theoretically you could create a module with a null_data_source and use it purely as a shim to provide settings, but that would likely be a convoluted, unnecessary approach to something better done by simply providing a variable the way you have already shown.
You probably will need to wrap this in some kind of bash (etc.) script at the top level, as mentioned in other answers, and this is not a limitation of Terraform. For example, once you have your modules, you would want to keep all currently applied stacks (one per customer, say) in some kind of composition repository. How will you create the composition stacks for those customers after they fill out the setup form? You will have to do that with some file-creation automation, which is not what Terraform is there for; Terraform is there to execute stacks that exist.
It's not a limitation of Terraform that you have to create the .tf files with an external text editor to begin with, and it's not a limitation that in a situation like this you would use some external scripting to dynamically create the composition stacks for each customer. It's just part of the way you would use automation to get things ready for Terraform to do its job of applying the stacks.
So, you cannot avoid this external script tooling, and you would probably use it to create the folders for the customer-specific composition stacks (which refer to your modules), to populate the folders with default files, and to create a .tfvars file based on the customer's input from the form. Then you could go about this in multiple ways:
You have the .tfvars file be the only difference between customer composition stacks. Whatever modules you do or don't want to use would be activated or deactivated by the "count trick" you mention, driven by variables from .tfvars (a sketch of this follows below). This approach has the advantage of being easy to reason about, as all customer composition stacks are the same thing, just configured differently.
You could have the tooling actually insert the module definitions you want into the files of the composition stacks. This creates more concise composition stacks with fewer janky "count tricks", but every customer's stack becomes its own weird snowflake.
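Here is a minimal sketch of the first option (the variable, module and resource names are hypothetical; Terraform 0.12+ syntax): the tooling writes a flag into the customer's .tfvars, the composition stack passes it into the module, and inside the module it drives count on the optional resources.

# customer-a.tfvars - written by the provisioning tooling
enable_app_insights = true

# composition stack - pass the flag down into the module
module "monitoring" {
  source              = "../modules/monitoring"
  enable_app_insights = var.enable_app_insights
  location            = var.location
  resource_group_name = var.resource_group_name
}

# inside modules/monitoring - the "count trick": zero copies when disabled
resource "azurerm_application_insights" "main" {
  count               = var.enable_app_insights ? 1 : 0
  name                = "appinsights-example" # placeholder
  location            = var.location
  resource_group_name = var.resource_group_name
  application_type    = "web"
}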
As far as splitting up the modules, be aware that there is a whole art/science/philosophy about this. I refer you to this resource on IAC design patterns and this talk on the subject (relevant bit starts at 19:15). Those are general context on the subject.
For your particular case, you would basically want to put all of the smallest divisible functional chunks (those that might be turned on or off) into their own modules, each to be referenced by higher-level consuming modules. You mention it not being feasible to have a module for every permutation; again, that is thinking about it wrong. You would aim for something more like a tree of modules and combinations. At the top level, your (bash etc.) tooling creates the new customer's composition folder and their .tfvars file, and drops in the same composition stack that forms the top of the "module tree". Each module that represents an optional part of the customer's stack takes a count flag. Those modules either contain functional resources or are intermediate modules, themselves instantiating a configurable set of alternative sub-modules containing resources.
But it will be up to you to sit down and think through the design of a "decision tree", implemented as a hierarchy of modules, which covers all the permutations that are not feasible to create separate monolithic stacks for.
TFv12 dynamic nested blocks would help you specifically with the one aspect of having or not having a declared block like app_settings. Since you cannot "build up a resource" dynamically, the only alternative in a case like that used to be an intermediate module that declares the resource in multiple ways (with and without the app_settings block), with one variant chosen via the "count trick" or other input configuration. That sort of thing is just not necessary now that dynamic blocks exist.
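Roughly like this (the resource and block names are generic placeholders, not a real provider schema; Terraform 0.12+):

variable "extra_settings" {
  type    = list(object({ name = string, value = string }))
  default = [] # empty list: no blocks are emitted at all
}

resource "example_service" "main" {
  name = "demo"

  # One nested "setting" block per element; with an empty list the
  # block disappears entirely, so no second module variant is needed.
  dynamic "setting" {
    for_each = var.extra_settings
    content {
      name  = setting.value.name
      value = setting.value.value
    }
  }
}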

Orchard Custom Module Return View From Different Module

I recently walked through the Advanced Orchard tutorial on Pluralsight and it really showed me a lot of things I can do to extend Orchard. That said, I was wondering if there is a way for one module to return a view from another module?
The scenario is this: I'm building custom modules for my clients with features that are proprietary, so I'd want to protect them with an API key, similar to how oForms works. The only difference between mine and theirs is that they allow functionality regardless of activation, whereas mine wouldn't work at all. So I'd like to have a base module that all of my custom modules derive from, where each one could do something like:
if (this.IsActivated())
    return View("ViewFromThisModule");
else
    return View("NotActivatedViewFromBaseModule");
The real purpose behind this would be so I don't have to copy the view(s) utilized in the base module to each custom one for things such as whether the module is activated or not.
Per Bertrand's suggestion, instead of going the multiple-module route I'll do a single module that breaks out its features. Then I don't need to share anything, because the whole thing is self-contained.
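For reference, a rough sketch of what that can look like in the module's manifest (all names hypothetical): Orchard lets a single Module.txt declare several independently activatable features.

Name: MyCompany.Suite
Author: MyCompany
Version: 1.0
OrchardVersion: 1.7
Description: Proprietary client features, gated by an API key.
Features:
    MyCompany.Suite:
        Description: Core feature (activation check, shared views).
        Category: Client
    MyCompany.Suite.Reports:
        Description: Reporting feature; only works when activated.
        Dependencies: MyCompany.Suite
        Category: Client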

RequireJS - optimize to more than one file and load on demand

We have a large single page application with approximately 200 modules.
When we use the optimizer, we end up with all the modules in one file, uglified etc.
Works beautifully.
But ours is a kind of multi-tenant application, where not every user needs all 200 modules.
We can broadly divide the modules into 50 common modules, 100 modules which are required if the user type is 'A', and 50 modules for user type 'B', etc.
Now if the user type is 'B', the single downloaded optimized file contains 100 modules which are never used. If we could somehow avoid those, the file size would be much smaller, which would really improve performance.
In short, I am looking for this: we have groups of modules; optimize each group of modules into its own file; download the corresponding file on demand, based on the user.
Is it possible to do this kind of optimization with requireJS?
Thanks, J.
You could build a base module with all the things you need to start the app and the stuff most users will see. From the docs:
The optimizer will only combine modules that are specified in arrays of string literals that are passed to top-level require and define calls, or the require('name') string literal calls in a simplified CommonJS wrapping. So, it will not find modules that are loaded via a variable name
...
This behavior allows dynamic loading of modules even after optimization. You can always explicitly add modules that are not found via the optimizer's static analysis by using the include option.
All the modules that you don't want to have in there have to be required like this in your modules:
define(["require"], function(require) {
return function(b) {
var lib = 'lib'
var a = require(lib);
a.doSomething(b);
}
}
);
so they will not be part of your initial load, but will be loaded when needed.
Sounds as though you'd be better off splitting your single page app into two single page apps sharing common code, as described in the require docs under "Optimizing a multi-page project".
This will mean that for anything shared between user types A and B there ought to be one HTTP request, and then for user type A or B another HTTP request with the modules specific to that user type.
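A sketch of such a build profile (layer names hypothetical): r.js emits one layer with the common modules and one layer per user type that excludes what common already contains.

// build.js - r.js build profile
({
    baseUrl: "js",
    dir: "build",
    modules: [
        { name: "common" },
        { name: "app-user-a", exclude: ["common"] },
        { name: "app-user-b", exclude: ["common"] }
    ]
})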

Merging 2 CRM 2011 unmanaged solutions

We are working on a project jointly with another consulting firm. For the most part we each have our own domains, but there is a little crossover.
Let's say we both modify an entity that has conflicting changes. Using the "last one in wins" rule, whichever solution is imported last will have its change implemented.
Is there a tool or some known methodology for identifying these conflicts before the import is done to help us manage this problem?
I have run into this numerous times, and my approach has been to export the customizations and inspect the contents of the customization files (XML files) with a code-comparison tool like WinDiff or BeyondCompare.
It's not strictly a "last one wins" scenario; there is a model that allows some coexistence, e.g. if you both add fields to the same form.
One thing to bear in mind is that you should both be doing all your customisations in an unmanaged solution linked to a unique publisher and that publisher should have a unique prefix, so you might use John_ for the prefix for all new entities, fields etc, and the other firm might use Acme_ or whatever suits them.
This helps to reduce direct conflicts, such as both adding a field with the same name but different types (they won't have the same schema name, because of the different prefixes).
Keep your form components in separate tabs and sections; if you both use managed solutions, the form customisations will be merged. Similarly, SiteMap & Ribbon customisations can be developed independently; if you keep your changes grouped together, you can let CRM merge the solutions for you.
Do not import the other consultancy's main customisation solutions into your development environment, to avoid creating cross-dependencies between them; you may reference the same entities, however. If some entities needed by both consultancies are custom, you will need to agree upfront on what should be included in a "core" solution, then develop it, share it, and install it on all development environments as a prerequisite.
Depending on the project's complexity, you may find it useful to host an IFD staging environment with a shared solution, which both companies can use to resolve conflicts and which serves as a testing environment.
Agree upfront how complaints & UAT issues will be reported, investigated and resolved, and clearly define the division of work.
