Global class in node? - node.js

Global class?
I'd like to use a module in 2 other modules, and use its defaults.
Then update the module after the application is initialized (and connected to the DB).
How can this be achieved?
Example use case:
The logger module starts with a default configuration. It will fetch a custom one from the database after the database is connected.
The database module uses that same logger (with the default configuration until it gets the configuration from that same database).
In many other languages I could create a class, then use instances of it, and finally update the class (not the instances) with the new configuration. The updated values would be shared across the instances.
Some ideas that came into my mind:
Maybe I am thinking about it the wrong way?
Can I use global variables?
I can use a local shared resource (a file, for example) to trigger the change after startup is completed and connections are established/configuration fetched.
Another problem: How to avoid strong coupling between the modules?

Maybe I am thinking about it the wrong way?
Right or wrong isn't really black and white here. It's more about the benefits of modularity.
Can I use global variables?
You can, but you probably shouldn't.
Modularity in nodejs offers all sorts of benefits. Using a global variable creates a global environment dependency that breaks some of the fundamental tenets of modularity.
Instead, it generally makes more sense to create a single module that encapsulates the shared instance that you wish to use. When that module is initialized, it creates the shared instance and stores it locally in its own module level variable. Then, when other modules require() or import this module, it exports that shared instance. In this way you both retain the modularity and all the benefits of it and you get a common, shared instance that everyone who wants to can use.
The only downside? Any module that wants to use the shared resource needs one line of code to import it. That one line of code lets you retain all the benefits of modularity while still getting access to the shared resource.
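For instance, here's a minimal sketch of that pattern in TypeScript (the file names, the LoggerConfig shape, and the method names are assumptions for illustration, not anything from the question):

// logger.ts: a minimal sketch of the shared-instance pattern described above.
interface LoggerConfig {
  level: string;
}

class Logger {
  // Defaults used until the real configuration is fetched from the database.
  private config: LoggerConfig = { level: "info" };

  configure(config: LoggerConfig): void {
    // Every importer holds this same instance, so this updates all of them.
    this.config = config;
  }

  log(message: string): void {
    console.log(`[${this.config.level}] ${message}`);
  }
}

// Created once; Node caches the module, so every import sees this same object.
export const logger = new Logger();

// database.ts (a consumer): the one import line is the only "cost" mentioned above.
// import { logger } from "./logger";
// logger.log("connecting with default settings...");
// ...after connecting, push the configuration fetched from the DB:
// logger.configure(configLoadedFromDb);

Since Node caches the module after its first import, every consumer shares the single logger instance, so one configure() call after the database connects updates the defaults everywhere.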
I can use a local shared resource (a file, for example) to trigger the change after startup is completed and connections are established/configuration fetched.
It isn't clear what you mean by this. Any modular, shared resource (without globals), as described above, can capture a configuration and preserve that configuration change.
How to avoid strong coupling between the modules?
This is indeed one of the reasons to avoid globals, since they create strong coupling. Any module that exports or shares (in any way) some resource creates some level of coupling. The code using the shared resource has to know the resource's interface, and that cannot be avoided if it is going to use it. You can often take advantage of existing interfaces (like EventEmitter) to avoid reinventing a lot of new interface, but the caller still needs to know which common interface is being used and how.
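As a rough sketch of that idea (the module name, the event name, and the fields below are made up for illustration), the shared module can extend EventEmitter so consumers only depend on a well-known event rather than on whichever module updates the configuration:

// config.ts: shared configuration holder that doubles as an EventEmitter.
import { EventEmitter } from "events";

interface ConfigValues {
  logLevel: string;
}

class SharedConfig extends EventEmitter {
  // Defaults used until real values arrive (e.g. from the database).
  private values: ConfigValues = { logLevel: "info" };

  get(): ConfigValues {
    return this.values;
  }

  update(newValues: ConfigValues): void {
    this.values = newValues;
    // Consumers react to the change without knowing who triggered it.
    this.emit("changed", this.values);
  }
}

export const sharedConfig = new SharedConfig();

// A consumer only needs to know the "changed" event, not the updater:
// sharedConfig.on("changed", (values) => logger.configure(values));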

Related

Advantages of using #Module NestJS

I'm making a new NestJS app, and after a lot of errors in the first one (because the multiple modules I created didn't have the correct imports, providers, exports, TypeOrmModule.forFeature, etc.), I started to wonder: what was the point?
Why not use only the app.module and just dump everything in it? All the controllers, services, entity types, and anything else that may come up?
From the documentation:
We want to emphasize that modules are strongly recommended as an effective way to organize your components
Is that the only reason? Organization?
Does dependency injection play a role of some kind?
Edit:
If organization is the main reason, why not just separate things into different folders, each with a controller and a service? Basically a module without the imports, providers, etc., doing the same thing with less boilerplate.
Why not use only the app.module and just dump everything in it?
Better yet, why use multiple files at all? Why not just have a couple thousand line index.js with no types, no organization, just raw JS all the way down?
The answer? Code organization and ease of re-use. By making these modules, you should be grouping similar logic together. All the code for a single feature should be available and usable just by importing FeatureModule. When it comes to library modules, this becomes pretty apparent: TypeOrmModule has forRoot/forRootAsync and forFeature, which exposes ways to inject repositories into your services. The JwtModule has register/registerAsync and exposes a JwtService, so you can configure the JwtService once and re-use the provider.
When dealing with entity features this may look messier, but technically it's all still possible: in theory you'd be able to take FeatureModule from Application A, drop it into Application B, and have everything still working with regards to FeatureModule, similar to how Pulumi has the idea of stacks and applications and lets you spin up new applications using the same group of components.
The module system, once you get the hang of it, makes it (in my opinion) very easy to recognize what a module will be working with, with regard to other features and how they're connected. It's just a matter of discipline and learning the features of the framework.
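As a hedged illustration of that grouping (CatsModule and its classes are invented names, not something from the question), a feature module wires up everything the feature needs and exports only what other modules are allowed to inject; a TypeOrmModule.forFeature([...]) call would slot into its imports array in the same way:

// cats.module.ts: everything the "cats" feature needs, in one importable unit.
import { Module } from "@nestjs/common";
import { CatsController } from "./cats.controller";
import { CatsService } from "./cats.service";

@Module({
  controllers: [CatsController], // routes owned by this feature
  providers: [CatsService],      // injectables private to this feature by default
  exports: [CatsService],        // the only part other modules may inject
})
export class CatsModule {}

// app.module.ts: the root module just composes feature modules.
import { Module } from "@nestjs/common";
import { CatsModule } from "./cats.module";

@Module({
  imports: [CatsModule],
})
export class AppModule {}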

Using Terraform modules within modules

We are working on creating various Terraform modules for the Azure cloud in our organization. I have a basic question about using these modules.
Let's say we have a module for creating resource groups. When we write a module for a storage container, would it be better to use the resource group module inside the storage module itself, or to let the user's Terraform script handle it by specifying multiple module blocks? E.g.,
module "resourcegroup" {
  …
}

module "storage" {
}
Thanks,
Hound
What you're considering here is a design tradeoff rather than a question with a universal answer. With that said, the Terraform documentation section Module Composition recommends that you use only one level of module nesting where possible, and then have the root module connect the outputs from one module into the inputs of another.
One situation where you might decide to go against that advice and create multiple levels of nesting is when you want to write a module which intentionally constrains or raises the level of abstraction of another module written by someone else. Modules shared on Terraform Registry are often very general in order to serve various different use-cases, but those modules might also encapsulate some design best-practices for the system in question and so you might choose to wrap one or more of those general modules in a more specific module that more directly meets your use-case, and hopefully in turn make your "wrapper module" easier to use.
However, it's always important to keep in mind that although Terraform modules can in some sense encapsulate complexity, in the case of Terraform they can't truly hide that complexity the way we might expect for libraries in general-purpose languages, because the maintainer of the root module is ultimately responsible for understanding the full consequences of applying a plan, which involves reviewing all of the proposed changes even to resources encapsulated in nested modules.

"doubleton" or how to have two global instances of a module

I am playing with node.js, porting a web API currently implemented in Delphi, and learning how to do things the Node way.
My application uses two instances of a database backend, each connected to a different DB. Service code can get a connection to either one from one of two global instances of a connection pool.
How would one do this in node.js?
I know how to write the pool as a module exporting a constructor, so I can create two instances, each configured with a different connection string. I can hold these in global variables in the main server file or in a module that holds references to various services and routes requests.
But how would the individual service modules, which want to get a connection, do something with it, and then return it to the pool, get access to those instances?
So far I have been happy with require()-ing such shared modules everywhere I use them, so they behave as singletons and their state is shared everywhere they are require()-ed. But how do I do that if I want two (or n) instances that are configured differently?
All I can think of so far is actively injecting a reference to them everywhere. That would work, but is it the proper solution?
P.S.: I did not know up to now that Doubletons are a thing.
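One way to keep the convenience of require()-ing a shared module while still getting two differently configured instances is to let a single module create and export both. This is only a sketch: the Pool constructor, its options, and the acquire()/release() method names stand in for whatever pool implementation is actually used.

// pools.ts: creates the two configured pool instances exactly once.
import { Pool } from "./pool"; // hypothetical pool constructor exported by the pool module

export const mainDbPool = new Pool({ connectionString: "..." });    // first backend
export const reportsDbPool = new Pool({ connectionString: "..." }); // second backend

// A service module imports just the instance it needs:
// import { mainDbPool } from "./pools";
// const conn = await mainDbPool.acquire();
// ...use conn, then hand it back...
// mainDbPool.release(conn);

Because the module is evaluated once and cached, every require()/import of pools.ts sees the same two instances, which is essentially the "doubleton" from the title; explicitly injecting the references, as considered above, remains a reasonable alternative when even that coupling is unwanted.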

Puppet modules and the self-contained design

From Puppet Best Practices:
The Puppet Labs documentation describes modules as self-contained bundles of code and data.
Ok it's clear.
A single module can easily manage a single application.
So, puppetlabs-apache manages Apache only, puppetlabs-mysql manages MySQL only.
So, my module my_company-mediawiki manages MediaWiki only (I suppose... with its database and virtual host... because a module is a self-contained bundle of code and data).
Modules are most effective when the serve a single purpose, limit dependencies, and concern themselves only with managing system state relating to their named purpose.
But my_company-mediawiki needs to depend on:
puppetlabs-mysql: to create the database;
puppetlabs-apache: to manage a virtual host.
And... from a quick search I understand that many modules refer to other modules.
But...
They provide complete functionality without creating dependencies on any other modules, and can be combined as needed to build different application stacks.
Ok, a good module is self-contained and has no dependencies.
So do I necessarily have to use the roles and profiles pattern to follow these best practices? Or am I confused...
The Puppet documentation's description of modules as self-contained is more aspirational than definitive. Don't read too much into it, or into others' echoes of it. Modules are quite simply Puppet's next level of code organization above classes and defined types, incorporating also plug-ins and owned data.
Plenty of low-level modules indeed have no cross-module dependencies, but such dependencies inescapably arise when you start forming aggregations at a level between that and whole node configurations. There is nothing inherently wrong with that. The Roles & Profiles pattern is a good way to structure such aggregations, but it is not the only way, and in any case it does not avoid cross-module dependencies because role and profile classes, like any other, should themselves belong to modules.

Configuring Distributed Objects Dynamically

I'm currently evaluating using Hazelcast for our software. Would be glad if you could help me elucidate the following.
I have one specific requirement: I want to be able to configure distributed objects (say maps, queues, etc.) dynamically. That is, I can't have all the configuration data at hand when I start the cluster. I want to be able to initialise (and dispose) services on-demand, and their configuration possibly to change in-between.
The version I'm evaluating is 3.6.2.
The documentation I have available (the Reference Manual, the Deployment Guide, as well as the "Mastering Hazelcast" e-book) is very skimpy on details w.r.t. this subject, and even partially contradictory.
So, to clarify an intended usage: I want to start the cluster; then, at some point, create, say, a distributed map structure, use it across the nodes; then dispose it and use a map with a different configuration (say, number of backups, eviction policy) for the same purposes.
The documentation mentions, and this is to be expected, that bad things will happen if nodes have different configurations for the same distributed object. That makes perfect sense and is fine; I can ensure that the configs will be consistent.
Looking at the code, it would seem to be possible to do what I intend: when creating a distributed object, if it doesn't already have a proxy, the HazelcastInstance will go look at its Config to create a new one and store it in its local list of proxies. When that object is destroyed, its proxy is removed from the list. On the next invocation, it would go reload from the Config. Furthermore, that config is writeable, so if it has been changed in-between, it should pick up those changes.
So this would seem like it should work, but given how silent the documentation is on the matter, I'd like some confirmation.
Is there any reason why the above shouldn't work?
If it should work, is there any reason not to do the above? For instance, are there plans to change the code in future releases in a way that would prevent this from working?
If so, is there any alternative?
Changing the configuration on the fly for an already created distributed object is not possible with the current version, though there is a plan to add this feature in a future release. Once created, the map configs stay at the node level, not at the cluster level.
As long as you are creating the distributed map fresh from the config, using it, and then destroying it, your approach should work without any issues.
