We have a large single-page application with approximately 200 modules.
When we use the optimizer, we end up with all the modules in one file, uglified etc.
Works beautifully.
But ours is a multi-tenant application where not every user needs all 200 modules.
We can broadly divide the modules into 50 common modules, 100 modules which are required only if the user type is 'A', and 50 modules only for user type 'B'.
Now if the user type is 'B', the single downloaded optimized file contains 100 modules which are never used. If we could somehow avoid those, the file size would be much smaller, which would really improve performance.
In short, I am looking for this: we have groups of modules - optimize each group into its own file, and download the corresponding file on demand based on the user.
Is it possible to do this kind of optimization with RequireJS?
Thanks, J.
You could build a base module with all the things you need to start the app and the stuff most users will see. From the docs:
The optimizer will only combine modules that are specified in arrays of string literals that are passed to top-level require and define calls, or the require('name') string literal calls in a simplified CommonJS wrapping. So, it will not find modules that are loaded via a variable name
...
This behavior allows dynamic loading of modules even after optimization. You can always explicitly add modules that are not found via the optimizer's static analysis by using the include option.
All modules that you don't want in that initial build have to be required like this in your modules:
define(["require"], function(require) {
return function(b) {
var lib = 'lib'
var a = require(lib);
a.doSomething(b);
}
}
);
so they will not be part of your initial load but will be loaded when needed.
Sounds as though you'd be better off splitting your single-page app into two single-page apps sharing common code, as described in the RequireJS docs under "Optimizing a multi-page project".
This means that anything shared by user types A and B comes in one HTTP request, and the modules specific to user type A or B come in a second HTTP request.
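For reference, a layered r.js build profile could look something like the sketch below; the layer names ("common", "app-user-a", "app-user-b") and paths are assumptions, not your actual module ids:

// build.js - hypothetical r.js build profile producing one layer per group
({
    baseUrl: "src",
    dir: "build",
    modules: [
        // Shared layer that every user type downloads first.
        { name: "common" },
        // Layer for user type 'A'; excludes modules already in "common".
        { name: "app-user-a", exclude: ["common"] },
        // Layer for user type 'B'.
        { name: "app-user-b", exclude: ["common"] }
    ]
})

At runtime you would load "common" on every page, then require the user-type layer based on who is logged in.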
I'm a bit curious about Terraform's modules. What exactly is their point? I keep seeing they are supposed to save time and avoid copy-paste, but I think I might be missing the point because I don't see why I would use them.
I have used them only once, so my experience is limited. I have to create 2 different environments and I'm not sure why I would go for modules. Maybe I would have a different opinion if I had to do 10 environments.
The environments will be in different accounts, using different VPCs, different IAM, ... which led me to think I could basically create 2 folders and reference some variables.
Regarding the "no copy-paste": it seems to me you write a file which then refers to the modules and the variables. At some point you still need to write all the resources (int-sg, test-sg for example), so why not write them directly into the right folder?
I hope this question makes sense; I would appreciate any constructive opinions.
Modules are intended to group resources that are required multiple times.
Let's say you have the three environments dev, staging and prod.
You would want to keep them as equal as possible, especially staging and prod.
So, if you want to create a server with an EIP, certain security groups and whatever it needs to be accessible, you can create a module to group these resources.
The module can then easily be included in your .tf files for the respective environments with different parameters for each env.
So you can keep the resources very generic, and you also have the advantage that Terraform will automatically update all environments equally when you change resources in the module.
You'll thus have a single place to apply modifications, instead of editing the changed resources in every environment's files.
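For illustration, including a hypothetical server module from each environment's .tf file could look like this (the module path and variables are assumptions):

# staging/main.tf
module "server" {
  source        = "../modules/server"
  environment   = "staging"
  instance_type = "t2.small"
}

# prod/main.tf - same module, different parameters
module "server" {
  source        = "../modules/server"
  environment   = "prod"
  instance_type = "m4.large"
}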
Now imagine you have more than 2 or 3 environments (or use cases for the same resource groups) and you'll see the advantage of modules ;)
I would like to split up resource creation using different modules, but I have been unable to figure out how.
For this example I want to create a web app (I'm using the azurerm provider). I've done so by adding the following to a module:
resource "azurerm_app_service" "test" {
name = "${var.app_service_name}"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
app_service_plan_id = "${var.app_service_plan_id}"
app_settings {
SOME_SETTING = "${var.app_setting}"
}
}
It works, but I'd like to have a separate module for applying the app settings, so:
Module 1 creates the web app
Module 2 applies the app settings
Potential module 3 applies something else to the web app
Is this possible? I tried splitting it up (having two modules define the web app, but only one of them containing the app settings), but I get an error stating that the web app already exists, so Terraform doesn't seem to understand that I'm trying to manipulate the same resource.
Details on how it's going to be used:
I am going to provide a UI for the end user on which he/she can choose a stack of resources needed and tick a range of options desired for that person's project, along with filling out required parameters for the infrastructure.
Once done and submitted, the parameters are applied to a Terraform template. It is not feasible to have a template for each permutation of options, so the template will have to include different modules depending on the chosen options.
For example: if a user ticks web app, Cosmos DB and Application Insights, the Terraform template will include those modules (using the count trick to create conditions). In this example I'll need to pass the instrumentation key from Application Insights to the web app's application settings, and this is where my issue is.
If the user didn't choose Application Insights, I don't want that setting on the web app, and that is why I need to gradually build up a Terraform resource. Also, depending on the type of database the user chose, different settings will be added to the web app's settings.
So my idea is to create a module to apply certain application settings. I don't know if this is possible (or a better way exists), hence my question.
The best way to do this would be to wrap Terraform with a script in bash or whatever scripting language you want (Python). Then create a template in bash or Python (Jinja2) that generates the resource with whatever options the customer selected for the settings, run the template to generate your Terraform code, and then apply it.
I've done this with S3 buckets quite a bit. In Terraform 0.12 you can generate templates within Terraform itself.
I know this is 5 months old, but I think part of the situation here is that the way you describe splitting it up does not exactly fit how modules are intended to be used.
For one thing, you cannot "build up a resource" dynamically, as Terraform is by design declarative. You can only define a resource and then dynamically provide its predefined inputs (including its count, to activate or deactivate it).
Secondly, modules are in no way necessary to turn portions of the stack on and off via configuration. Modules are simply a way of grouping sets of resources together for containment and reuse. Any dynamism available to you with a stack consisting of a complex hierarchy of modules would be equally available with essentially the same code in one giant monolithic blob. The problem with the monolith is just that it would be a mess, and you wouldn't be able to reuse pieces of it for other things.
Finally, using a module to provide settings is not really what modules are for. Yes, theoretically you could create a module with a null_data_source and use it purely as some kind of shim to provide settings, but that would likely be a convoluted, unnecessary approach to something better done by simply providing a variable the way you have already shown.
You will probably need to wrap this in some kind of bash (etc.) script at the top level, as mentioned in other answers, and this is not a limitation of Terraform. For example, once you have your modules, you would want to keep all currently applied stacks (for each customer or whatever) in some kind of composition repository. How will you create the composition stacks for those customers after they fill out the setup form? You will have to do that with some file-creation automation that is not what Terraform is there for. Terraform is there to execute stacks that exist. It's not a limitation of Terraform that you have to create the .tf files with an external text editor to begin with, and it's not a limitation that in a situation like this you would use some external scripting to dynamically create the composition stacks for the customer; it's just part of the way you would use automation to get things ready for Terraform to do its job of applying the stacks.
So you cannot avoid this external script tooling, and you would probably use it to create the folders for the customer-specific composition stacks (that refer to your modules), populate the folders with default files, and create a .tfvars file based on the customer's input from the form. Then you could go about this in multiple ways:
You could have the .tfvars file be the only difference between customer composition stacks. Whatever modules you do or don't want to use would be activated/deactivated by the "count trick" you mention, given variables from .tfvars (see the sketch after this list). This has the advantage of being easy to reason about, as all customer composition stacks are the same thing, just configured differently.
You could have the tooling actually insert the module definitions that you want into the files of the composition stacks. This will create more concise composition stacks, with fewer janky "count tricks", but every customer's stack would be its own weird snowflake.
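For reference, the "count trick" mentioned above looks roughly like this sketch inside a module; the resource, variable and output names are illustrative assumptions:

# The resource is created only when var.enabled is 1.
variable "enabled" {
  default = 0
}

resource "azurerm_application_insights" "this" {
  count               = "${var.enabled}"
  name                = "${var.name}-appinsights"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"
  application_type    = "web"
}

# Outputs must tolerate count = 0, hence the splat/concat pattern.
output "instrumentation_key" {
  value = "${element(concat(azurerm_application_insights.this.*.instrumentation_key, list("")), 0)}"
}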
As far as splitting up the modules goes, be aware that there is a whole art/science/philosophy about this. I refer you to this resource on IaC design patterns and this talk on the subject (the relevant bit starts at 19:15). Those are general context on the subject.
For your particular case, you would basically want to put all the smallest divisible functional chunks (that might be turned on/off) into their own modules, each referenced by higher-level consuming modules. You mention it not being feasible to have a module for every permutation; again, that is thinking about it wrong. You would aim for something that is more of a tree of modules and combinations. At the top level you would have (bash etc.) tooling that creates the new customer's composition folder and their .tfvars file, and drops in the same composition stack that will be the top of the "module tree". Each module that represents an optional part of the customer's stack will take a count. Those modules will either have functional resources inside or will be intermediate modules, themselves instantiating a configurable set of alternative sub-modules containing resources.
But it will be up to you to sit down and think through the design of a "decision tree", implemented as a hierarchy of modules, which covers all the permutations that are not feasible to create separate monolithic stacks for.
TF 0.12's dynamic nested blocks help specifically with the aspect of having or not having a declared block like app_settings. Since you cannot "build up a resource" dynamically, the only alternative in a case like that used to be an intermediate module that declares the resource in multiple ways (with and without the app_settings block), one of which is chosen via the "count trick" or other input configuration. That sort of thing is just not necessary now that dynamic blocks exist.
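As a sketch of the 0.12 approach (the resource and variable names here are assumptions, not the OP's code):

resource "azurerm_app_service" "example" {
  name                = var.app_service_name
  location            = var.location
  resource_group_name = var.resource_group_name
  app_service_plan_id = var.app_service_plan_id

  # app_settings is a map in 0.12, so optional settings can be merged in
  # conditionally instead of "building up" the resource.
  app_settings = merge(
    var.base_app_settings,
    var.instrumentation_key == "" ? {} : {
      "APPINSIGHTS_INSTRUMENTATIONKEY" = var.instrumentation_key
    }
  )

  # A dynamic nested block: one connection_string block is emitted per
  # element of var.connection_strings; an empty list emits none.
  dynamic "connection_string" {
    for_each = var.connection_strings
    content {
      name  = connection_string.value.name
      type  = connection_string.value.type
      value = connection_string.value.value
    }
  }
}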
I have two variables in the scope of package main, those would be these:
var (
app Application
cfg Config
)
Now, since the size of my application is starting to increase, I have decided to put each module of the website in its own package, i.e. its own subdirectory, like so:
/src/github.com/Adel92/Sophie
+ user/ (package user)
- register.go
- login.go
- password.go
+ topic/ (package topic)
- ... etc
- main.go (package main)
How would I go about accessing the app and cfg global variables from other packages? Is this the wrong way to go about it? I have a feeling it is.
In that case, how would I declare functions in their own namespace so I don't have to go crazy with names that are prefixed with user and topic all the time?
Capitalized variable names are exported for access in other packages, so App and Cfg would work. However, using sub-packages for name-spacing is generally not recommended; packages are intended for discrete, self-contained functionality so it is usually more trouble than it's worth to use them this way (for example, import cycles are strictly impossible, so if you have two sub-packages in this layout that need to talk to each other then you're out of luck).
If you're finding you need to prefix things with user and topic in order to avoid name collisions, then perhaps the underlying concept should be factored into its own package, and you can create one instance of it for user and one for topic?
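As a rough sketch of that last suggestion (the config package and its identifiers are assumptions, not your code), the shared state can live in its own package that both user and topic import:

// file: config/config.go
package config

// Config holds settings shared across the application's packages.
type Config struct {
    ListenAddr string
}

// Cfg is capitalized, i.e. exported, so any package importing
// "github.com/Adel92/Sophie/config" can access it.
var Cfg Config

// file: user/register.go
package user

import "github.com/Adel92/Sophie/config"

// Register reads the shared configuration; no import cycle arises
// because config itself imports nothing from user or topic.
func Register() {
    addr := config.Cfg.ListenAddr
    _ = addr // placeholder use
}

This also gives functions their own namespace: user.Register reads naturally without a user_ prefix.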
I recently walked through the Advanced Orchard tutorial on Pluralsight and it really showed me a lot of things I can do to extend Orchard. That said, I was wondering if there is a way for one module to return a view from another module?
The scenario for this is I'm building custom modules for my clients with features that are proprietary, so I'd want to protect them with an API key, similar to how oForms works. The only difference between mine and theirs is that they allow functionality regardless of activation, whereas mine wouldn't work at all. So I'd like to have a base module that all of my custom modules derive from, and each one could do something like:
if (this.IsActivated())
    return View("ViewFromThisModule");
else
    return View("NotActivatedViewFromBaseModule");
The real purpose behind this would be so I don't have to copy the view(s) utilized in the base module to each custom one for things such as whether the module is activated or not.
Per Bertrand's suggestion, instead of going the multiple-module route I'll do a single module that breaks out features instead. Then I don't need to share anything because the whole thing is self-contained.
I need to standardize on how I classify and handle errors/exceptions 'gracefully'.
I currently use a process by which I report errors to a function, passing an error-number, severity-code, location-info and an extra-info string. This function returns true if the error is fatal and the app should die, false otherwise. As part of its process, apart from giving visual feedback to the user, the function also logs errors above some severity level to a file.
Error-number indexes an array of strings describing the type of error, e.g. 'File access', 'User input', 'Thread creation', 'Network access', etc. Severity-code is a bitwise OR of 0, 1, 2 or 4: 0 = informative, 1 = user_retry, 2 = cannot_complete, 4 = cannot_continue. Location-info is module & function, and extra-info is parameter and local-variable values.
I want to make this into a standard way of error-handling that I can put in a library and re-use in all my apps. I mainly use C/C++ on Linux, but would want to use the resultant library with other languages/platforms as well.
An idea is to extend the error-type array to indicate some default behavior for a given severity level - but should that then become the action taken, giving no options to the user?
Or: should such an extension be a sub-array of options that the user needs to pick from? The problem with this is that the options would of necessity be generalized, programming-related options that may very well completely baffle an end user.
Or: should each app that uses the error-lib routine pass along its own array of either errors or default behaviors? But this would defeat the purpose of the library...
Or: should the severity levels be handled in each app?
Or: what do you suggest? How do you handle errors? How can I improve this?
How you handle the errors really depends on the application. A web application has a different error-catching mechanism than a desktop application, and both of those differ drastically from an asynchronous messaging system.
That being said, a common practice in error handling is to handle errors at the lowest possible level where they can be dealt with. This usually means the application layer or the GUI.
I like the severity levels. Perhaps you can have a pluggable error-collection library with different error-output providers and a severity-level provider.
Output providers could include things like a LoggingProvider and an IgnoreErrorsProvider.
Severity providers would probably be implemented by each project, since severity levels are usually determined by the type of project in which the error occurs. (For example, network-connection issues are more severe for a banking application than for a contact-management system.)
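As a rough C++ sketch of that idea (all names here are assumptions, not an existing library):

// error_lib.hpp - pluggable error reporting with severity flags
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

enum Severity { INFORMATIVE = 0, USER_RETRY = 1, CANNOT_COMPLETE = 2, CANNOT_CONTINUE = 4 };

struct ErrorReport {
    int         number;    // index into the app-supplied error-type table
    int         severity;  // bitwise OR of Severity flags
    std::string location;  // module & function
    std::string extra;     // parameter and local-variable values
};

// Output providers are pluggable: logging, ignoring, GUI feedback, ...
class OutputProvider {
public:
    virtual ~OutputProvider() = default;
    virtual void Report(const ErrorReport& e) = 0;
};

class LoggingProvider : public OutputProvider {
public:
    void Report(const ErrorReport& e) override {
        std::fprintf(stderr, "[sev %d] error #%d at %s: %s\n",
                     e.severity, e.number, e.location.c_str(), e.extra.c_str());
    }
};

class ErrorLib {
public:
    void AddProvider(std::unique_ptr<OutputProvider> p) {
        providers_.push_back(std::move(p));
    }
    // Returns true if the error is fatal and the app should die,
    // mirroring the boolean contract described in the question.
    bool Handle(const ErrorReport& e) {
        for (auto& p : providers_) p->Report(e);
        return (e.severity & CANNOT_CONTINUE) != 0;
    }
private:
    std::vector<std::unique_ptr<OutputProvider>> providers_;
};

Each application would then register its own providers (and its own error-type table), keeping the severity semantics project-specific as suggested.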