NestJS circular dependency: forwardRef() drawbacks

The official documentation on circular dependencies says:
A circular dependency occurs when two classes depend on each other. For example, class A needs class B, and class B also needs class A. Circular dependencies can arise in Nest between modules and between providers.
While circular dependencies should be avoided where possible, you can't always do so.
What are the reasons for not using forwardRef()?

Circular dependencies usually mean you have tightly coupled logic and a possibly unstable architecture that will not allow you to scale. If you really don't want to care about that, you can sprinkle forwardRef wherever you want, in constructors and services alike, but that can lead to strange, hard-to-resolve errors and is generally seen as bad practice in the Nest community.
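For reference, a minimal sketch of what the forwardRef() escape hatch looks like for two services that inject each other (the service names are illustrative, and in a real project each class would live in its own file):

import { Injectable, Inject, forwardRef } from '@nestjs/common';

@Injectable()
export class ServiceA {
  constructor(
    // forwardRef() defers the reference to ServiceB, which cannot be resolved
    // eagerly here because ServiceB in turn injects ServiceA.
    @Inject(forwardRef(() => ServiceB))
    private readonly serviceB: ServiceB,
  ) {}
}

@Injectable()
export class ServiceB {
  constructor(
    @Inject(forwardRef(() => ServiceA))
    private readonly serviceA: ServiceA,
  ) {}
}

The same pattern exists at the module level (imports: [forwardRef(() => SomeModule)]), and every such usage marks a cycle that, per the answer above, is usually better refactored away.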

Related

Languages with a NodeJS/CommonJS style module system

I really like the way NodeJS (and its browser-side counterparts) handles modules:
var $ = require('jquery');
var config = require('./config.json');
module.exports = function(){};
module.exports = {...}
I am actually rather disappointed by the ES2015 'import' spec, which is very similar to that of the majority of languages.
Out of curiosity, I decided to look for other languages which implement or even support a similar export/import style, but to no avail.
Perhaps I'm missing something, or more likely, my Google-fu isn't up to scratch, but it would be really interesting to see which other languages work in a similar way.
Has anyone come across similar systems?
Or maybe someone can even provide reasons that it isn't used all that often.
It is nearly impossible to properly compare these features. One can only compare their implementations in specific languages. I collected my experience mostly with Java and Node.js.
I observed these differences:
You can use require for more than just making other modules available to your module. For example, you can use it to parse a JSON file.
You can use require anywhere in your code, while import is only available at the top of a file (see the sketch below).
require actually executes the required module (if it was not yet executed), while import has a more declarative nature. This might not be true for all languages, but it is a tendency.
require can load private dependencies from subdirectories, while import often uses one global namespace for all the code. Again, this is not true in general, but merely a tendency.
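A small illustration of the second point; the driver modules are hypothetical:

// require is an ordinary function call, so it can run anywhere, even conditionally:
function loadDriver(useFast) {
  return useFast ? require('./fast-driver') : require('./slow-driver');
}

// An ES2015 import, by contrast, must appear at the top level of the file:
// import fastDriver from './fast-driver';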
Responsibilities
As you can see, the require method has multiple responsibilities: declaring module dependencies and reading data. This is better separated with the import approach, since import is supposed to handle only module dependencies. I guess what you like about being able to use require for reading JSON is that it provides a really easy interface to the programmer. I agree that it is nice to have this kind of easy JSON-reading interface; however, there is no need to mix it with the module dependency mechanism. There could simply be another method, for example readJson(). This would separate the concerns, so the require method would only be needed for declaring module dependencies.
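A hedged sketch of that separation; readJson() is a made-up helper, not part of Node's API:

var readFileSync = require('fs').readFileSync;

// Reading data is its own concern, handled by an ordinary function...
function readJson(path) {
  return JSON.parse(readFileSync(path, 'utf8'));
}

// ...so require is left with a single job: declaring module dependencies.
// var config = readJson('./config.json');  // instead of require('./config.json')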
Location in the Code
Now that we only use require for module dependencies, it is bad practice to use it anywhere other than at the top of your module. It just makes it hard to see the module dependencies when you use it throughout your code. This is why the import statement can only be used at the top of a file.
I don't see the point about import creating a global variable. It merely creates a consistent identifier for each dependency, which is limited to the current file. As I said above, I recommend doing the same with the require method by using it only at the top of the file. It really helps to increase the readability of the code.
How it works
Executing code when loading a module can also be a problem, especially in big programs. You might run into a loop where one module transitively requires itself. This can be really hard to resolve. To my knowledge, nodejs handles this situation like so: When A requires B and B requires A and you start by requiring A, then:
the module system remembers that it is currently loading A
it executes the code in A
it remembers that it is currently loading B
it executes the code in B
it tries to load A, but A is already loading
A is not yet finished loading
it returns the half-loaded A to B
B does not expect A to be half-loaded
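A minimal sketch of that sequence, with two hypothetical CommonJS files:

// a.js
exports.name = 'A';
var b = require('./b');   // loading A pauses here and starts loading B
exports.fromB = b.name;

// b.js
var a = require('./a');   // A is still loading, so B receives its partial exports
console.log(a.name);      // 'A'        -- assigned before B was required
console.log(a.fromB);     // undefined  -- B sees the half-loaded A
exports.name = 'B';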
This might be a problem. Now, one can argue that cyclic dependencies should really be avoided and I agree with this. However, cyclic dependencies should only be avoided between separate components of a program. Classes in a component often have cyclic dependencies. Now, the module system can be used for both abstraction layers: Classes and Components. This might be an issue.
Next, the require approach often leads to singleton modules, which cannot be used multiple times in the same program, because they store global state. However, this is not really the fault of the system but of the programmer who uses the system in the wrong way. Still, my observation is that the require approach misleads especially new programmers into doing this.
Dependency Management
The dependency management that underlies the different approaches is indeed an interesting point. For example, Java still lacks a proper module system in its current version. Again, it is announced for the next version, but who knows whether that will ever come true. Currently, you can only get modules using OSGi, which is far from easy to use.
The dependency management underlying Node.js is very powerful. However, it is also not perfect. For example, non-private dependencies, which are dependencies that are exposed via the module's API, are always a problem. However, this is a common problem for dependency management, so it is not limited to Node.js.
Conclusion
I guess both are not that bad, since each is used successfully. However, in my opinion, import has some objective advantages over require, like the separation of responsibilities. It follows that import can be restricted to the top of the code, which means there is only one place to search for module dependencies. Also, import might be a better fit for compiled languages, since these do not need to execute code to load code.

How to stop two Orchard modules conflicting

I have two Orchard Modules.
Both have implementations of IAppSettings, which is defined in an external DLL and referenced in the modules via a NuGet package (so I cannot use IDependency).
I wire these up using an Autofac Module class in each module.
Unfortunately this leads to "last registration wins" and both modules will use the last registered implementation, even though the "expected" result would be that each uses their own.
To be clear, each module is developed by a separate team, who don't co-ordinate with each other, but do use the same guidelines for module creation. The example above is just one instance of this occurring, but it is fair to assume there would be more.
How might I go about ensuring that each team can register their own dependencies for their modules, without constantly having to check with the authors of other modules?
There is one Autofac container per tenant, not per (Orchard) module. You see the implications of this.
However, this could hardly be done differently, since interaction between modules would be seriously hindered if dependencies were scoped to extensions.
Also, one of the points of DI is that you can override the implementation; this is also desired here, since if you implement a dependency in Module A and then also in Module B (where Module B depends on Module A), Module B can override the default implementation. This is a good thing.
Instead of requiring specific implementations for your interfaces, which somewhat defeats the point of DI, you could implement the strategy pattern, for example. But if you share more details, I could help more.

Over-use of require() in node.js, mongoose

I'm new to Node.js, but quite like the module system and require().
That being said, coming from a C background, it makes me uneasy seeing the same module being require()'d everywhere. All in all, it leads me to some design choices that deviate from how things are done in C. For example:
Should I require() mongoose in every file that defines a mongoose model? Or inject a mongoose instance into each file that defines a model.
Should I require() my mongoose models in every module that needs them? Or have a model provider that is passed around and used to provide these models.
Etc. For someone who uses dependency injection a lot, my gut C feeling tells me to require() a module only once and pass it around as needed. However, after looking at some open-source projects, this doesn't seem to be the Node way of doing things. require() does make things super easy...
Does it hurt to overuse this mechanism?
require() caches modules when you use it. When you see the same file or module required everywhere, it is only being loaded once, and the stored module.exports is passed around instead. This means that you can use require everywhere and not worry about performance or memory issues.
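A quick way to see that caching in action (the files are hypothetical):

// counter.js -- this body runs only once, no matter how often it is required
var calls = 0;
module.exports = function () { return ++calls; };

// main.js
var a = require('./counter');
var b = require('./counter');
console.log(a === b);   // true -- both names point at the same cached module.exports
console.log(a(), b());  // 1 2  -- shared state, because the module body ran only once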
As cptroot states, requiring a module everywhere you need it instead of passing it around as an argument is safe to do and is also much easier. However, you should view any require call as a hardcoded dependency which you can't change easily. E.g. if you want to mock a module for testing, these hardcoded dependencies will hurt.
So passing a module instance around as an argument instead of just requiring it again and again reduces the number of hardcoded dependencies, because you now inject this dependency. E.g. in your tests you will benefit from easily injecting a mock instead.
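As a hedged sketch of what that might look like for the mongoose case from the question (the model, file names, and fake object are illustrative):

// user-model.js -- the mongoose instance is injected instead of being required here
module.exports = function defineUserModel(mongoose) {
  var schema = new mongoose.Schema({ name: String, email: String });
  return mongoose.model('User', schema);
};

// app.js -- production code still requires mongoose, but in exactly one place
// var mongoose = require('mongoose');
// var User = require('./user-model')(mongoose);

// user-model.test.js -- a test can inject a minimal fake instead of the real module
// var fake = { Schema: function (def) { this.def = def; }, model: function (n, s) { return { n: n, s: s }; } };
// var User = require('./user-model')(fake);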
If you go down this road, you will want to use a dependency injection container that helps you inject all your dependencies and get rid of the hardcoded require calls. To choose a dependency injection container appropriate for your project, you should read this excellent article. Also check out Fire Up!, which I implemented.

Create orphan instances or add spurious dependencies?

I'm working on updating my ReadArgs package. I had a request to add Arguable instances for Data.Text and FileSystem.Path.FilePath. The former is no big deal, since it's in the base package, but the latter requires system-filepath.
So I could release a ReadArgs-ext package, chock full of orphan instances, or I could update the ReadArgs package with an additional external dependency. Which option makes more sense?
My usual rule of thumb is to tend towards adding the instances for packages that are in the Haskell Platform, but don't involve less portable elements such as graphics. This covers both filepath and text. Since you are already dealing with the outside world for command line arguments, neither one of those seems like a particularly egregious addition.
Orphans can lead to pretty terrible problems.
I don't use them in 95% of my packages, and I go out of my way to avoid packages that use them.
The two exceptions I have at this point are a few missing monoids in reducers and a package full of vector-instances I picked up because I wasn't willing to make my entire hierarchy of packages depend on vector, downgrading everything from Safe to Trustworthy.
I find when I'm tempted to add an orphan instance, I can usually work around it by providing some kind of WrappedMonad-like newtype wrapper for lifting or lowering another class.

Where do QuickCheck instances belong in a cabal package?

I have a cabal package that exports a type NBT which might be useful for other developers. I've gone through the trouble of defining an Arbitrary instance for my type, and it would be a shame to not offer it to other developers for testing their code that integrates my work.
However, I want to avoid situations where my instance might get in the way. Perhaps the other developer has a different idea for what the Arbitrary instance should be. Perhaps my package's dependency on a particular version of QuickCheck might interfere with or be unwanted in the dependencies of the client project.
My ideas, in no particular order, are:
Leave the Arbitrary instance next to the definition of the type, and let clients deal with shadowing the instance or overriding the QuickCheck version number.
Make the Arbitrary instance an orphan instance in a separate module within the same package, say Data.NBT.Arbitrary. The dependency on QuickCheck for the overall package remains.
Offer the Arbitrary instance in a totally separate package, so that it can be listed as a separate test dependency for client projects.
Conditionally include both the Arbitrary instance and the QuickCheck dependency in the main package, but only if a flag like -ftest is set.
I've seen combinations of all of these used in other libraries, but haven't found any consensus on which works best. I want to try and get it right before uploading to Hackage.
On the basis of not much specific experience, but a general desire for robustness, the guiding principle for package dependencies should perhaps be
From each according to their ability; to each according to their need.
It's good to keep the dependencies of a package to the minimum needed for its essential functionality. That suggests option 3 or option 4 to me. Of course, it's a pain to chop the package up so much. If options are capable of expressing the conditionality involved, then option 4 sounds like a sensible approach, based on using language effectively to say what you mean.
It would be really good if a consensus emerged about which one switch we need to flick to get the testing kit as well as the basic functionality.
It's also clear that there's room for refinement here. It's amazing that Cabal works as well as it does, but it could allow for more sophisticated notions of "package", perhaps after the manner of the SML module system. Translating dependencies into function types, we basically get to write
simplePackage :: (Dependency1, .., Dependencyn) -> Deliverable
but one could imagine more elaborate combinations of products and functions, like
fancyPackage :: BasicDependency -> (BasicDeliverable, HelpfulExtras -> Gravy)
Until then, pick the option that most accurately reflects the actual deal. And tell us about it, so we can build that consensus.
The problem comes down to: how likely is it that someone using your library will be wanting to run QuickCheck tests using your NBT type?
If it is likely, and the Arbitrary instance is detailed (and thus not likely to change for different people), it would probably be best to ship it with your package, especially if you're going to make sure you keep updating the package (as for using a flag or not, that comes down to a bit of personal preference). If the instance is relatively simple however (and thus more likely that people would want to customise it), then it might be an idea to just provide a sample instance in the documentation.
If the type is primarily internal in nature and not likely to be used by others wanting to run tests, then using a flag to conditionally bring in QuickCheck is probably the best way to go to avoid unnecessary dependencies (i.e. the test suite is there just so you can test the package).
I'm not a fan of having QuickCheck-only packages in general, though it might be useful in some situations.

Resources