On some of my projects, I use the double dispatch mechanism to provide at run time a "view" of my infrastructure module to my domain module (see Strengthening your domain: The double dispatch pattern). What I call 'modules' above are simply separate jar files, where the dependency service.jar -> domain.jar is enforced at compile time only. Will I be able to keep this working on Java 9 if I define my service and domain as 'true' Java 9 modules?
module domain
└─ Fee.java
   └─ Payment recordPayment(double, BalanceCalculator)
└─ BalanceCalculator.java
module service
└─ BalanceCalculatorImpl.java    // implements BalanceCalculator
   └─ double calculate(Fee fee)  // calls fee.recordPayment(amount, this)
Yes, that is possible. Here are some things to consider:
The module domain needs to export the package containing Fee, possibly to everyone but at least to service.
The module service will have to require domain, as BalanceCalculatorImpl must access BalanceCalculator in order to implement it.
It looks like clients of service need to know about domain as well, which is a textbook case for implied readability.
In a simple setup, either service or some third module will have to instantiate BalanceCalculatorImpl and pass it to Fee; this cannot happen in domain, or it would create a cyclic dependency.
A more advanced solution would be services, where all code that can access BalanceCalculator, even code inside domain, can get hold of all its implementations.
Taking all of these into account, this is what the two module declarations might look like:
module com.example.domain {
    // likely some requires clauses
    // export the packages containing Fee and BalanceCalculator
    exports com.example.domain.fee;
    exports com.example.domain.balance;
}
module com.example.service {
    // 'requires transitive' (spelled 'requires public' in early Jigsaw
    // prototypes) gives clients of service implied readability of domain
    requires transitive com.example.domain;
    // likely some more requires clauses

    // expose BalanceCalculatorImpl as a service, which makes it
    // unnecessary to export the containing package
    provides com.example.domain.balance.BalanceCalculator
        with com.example.service.balance.BalanceCalculatorImpl;
}
Then every module that wants to use BalanceCalculator can declare that in its module declaration with uses com.example.domain.balance.BalanceCalculator and obtain instances via Java's ServiceLoader.
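As a minimal sketch of the consuming side (the module name com.example.client and the helper class are assumptions for illustration, not part of the question):

// In the consumer's module-info.java:
// module com.example.client {
//     requires com.example.domain;
//     uses com.example.domain.balance.BalanceCalculator;
// }
import java.util.ServiceLoader;

import com.example.domain.balance.BalanceCalculator;

public class BalanceCalculators {

    // Returns the first BalanceCalculator implementation found on the module path.
    public static BalanceCalculator load() {
        return ServiceLoader.load(BalanceCalculator.class)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException(
                        "no BalanceCalculator implementation found"));
    }
}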
You can find more practical applications of the module system (particularly for services) in a demo I created.
I'm fairly new to TypeScript.
I have two microservices, let's call them ManagerMs and HandlerMs.
They communicate through RabbitMQ.
Each of their public methods becomes a queue on RabbitMQ when the service starts.
ManagerMs needs to perform an RPC call to a function called 'handle' that belongs to HandlerMs.
Ideally, I want ManagerMs to be able to import just the declarations of HandlerMs so that it can do something like this (inside the ManagerMs class):
import HandlerMsApi from '<path-to-declaration-file?>'
...
class ManagerMs {
    ...
    this.rpcClient.call(HandlerMsApi.handle.name) // instead of: this.rpcClient.call('HandlerMsApi.handle')
}
The point is that a certain service will have access to the declaration of another service and not the implementation.
Currently, both services can't import each other because of the way the project is structured.
So I thought of creating a shared library which would hold just the declaration files of the different modules, but that means the .d.ts files wouldn't be located next to their corresponding implementation files (.ts).
My questions are:
Is this a good idea?
How can I achieve such behaviour?
When I tried to do so, I ended up with the following .d.ts file (in a different folder than the implementation):
declare class HandlerMsApi {
    handle(req: string): Promise<any>;
}
export = HandlerMsApi;
But when I try to compile (tsc) my code I get the following error:
"....handler.d.ts' is not a module"
Any help?
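For what it's worth, here is a minimal sketch of the shared-declarations idea (the package name and file layout are assumptions, not a verified fix). The declaration file uses ES module export syntax so that tsc treats it as a module:

// shared-declarations/handler-ms-api.d.ts
export declare class HandlerMsApi {
    handle(req: string): Promise<any>;
}

A consumer could then write import type { HandlerMsApi } from 'shared-declarations/handler-ms-api' (TypeScript 3.8+). Note that type-only imports are erased at compile time, so a runtime expression such as HandlerMsApi.handle.name would still need an actual value at run time.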
How does the framework manage the lifetime of DynamicModules?
The NestJs documentation on Modules states that:
In Nest, modules are singletons by default, and thus you can share the same instance of any provider between multiple modules effortlessly.
How can you share multiple dynamic module instances between modules?
The NestJs documentation on DynamicModules states that:
In fact, what our register() method will return is a DynamicModule. A dynamic module is nothing more than a module created at run-time, with the same exact properties as a static module, plus one additional property called module.
How can you manage/change the scope of DynamicModules? For example, changing them from transient to singleton behavior, defining their injection token, retrieving them on demand, etc.
How does the framework manage the lifetime of DynamicModules?
Generally speaking, like it does any other module. A dynamic module is just a special name for a module configured by a function and represented by an object. The end result is usually something like
{
    module: SomeModuleClass,
    imports: [Imports, For, The, Module],
    providers: [SomeProviderToken, SomeProviderService, ExtraProvidersNeeded],
    exports: [SomeProviderService],
}
Pretty much the same kind of thing you'd see in an @Module() decorator, but configured via a function that possibly uses DI instead of being written out directly.
How can you share multiple dynamic module instances between modules?
I might need a bit of clarification here, but I'll be happy to edit my answer with more detail once I know what the goal is or what you're trying to do.
How can you manage/change the scope of DynamicModules? For example, changing them from transient to singleton behavior, defining their injection token, retrieving them on demand, etc.
The easiest option for sharing your configuration (besides making the module @Global()) is to make a wrapper module that re-exports the dynamic module once it has been configured.
Example: let's say we have a dynamic FooModule to which we want to pass an application option designating the name of the application, and we want to re-use that configured module in several other places.
@Module({
    imports: [
        FooModule.forRoot({
            application: 'StackOverflow Dynamic Module Scope Question',
        }),
    ],
    exports: [FooModule],
})
export class FooWrapperModule {}
Now instead of importing FooModule.forRoot() again in multiple places, we just import FooWrapperModule and get the same instance of FooService with the configuration originally passed.
I do want to mention that, by convention, DynamicModule.forRoot/Async() usually implies one-time registration in the root module and usually has a @Global() or isGlobal: true config attached to it somewhere. This isn't always the case, but it holds relatively true.
DynamicModule.register/Async(), on the other hand, usually means that we are configuring a dynamic module for this module only, and it can be reconfigured elsewhere with its own separate config. This can lead to cool setups where you can have multiple JwtService instances that have different secret values for signing (such as separate access- and refresh-token signing services).
Then there's DynamicModule.forFeature(), which is like register() in that it works on a per-module basis, but it usually uses config from the forRoot/Async() call that was already made. The @nestjs/typeorm, @mikro-orm/nestjs, and @ogma/nestjs-module packages are three examples I can think of that follow this pattern. This is a great way to allow general configuration at the root level (application name, database connection options, etc.) and then scoped configuration at the module level (which entities will be injected, logger context, etc.).
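As a rough sketch of those conventions (the module, service, and token names here are assumptions, not any real library's API):

import { DynamicModule, Inject, Injectable, Module } from '@nestjs/common';

export interface FooOptions {
    application: string;
}

export const FOO_OPTIONS = 'FOO_OPTIONS';

@Injectable()
export class FooService {
    constructor(@Inject(FOO_OPTIONS) private readonly options: FooOptions) {}
}

@Module({})
export class FooModule {
    // forRoot(): configured once, usually imported by the root module
    static forRoot(options: FooOptions): DynamicModule {
        return {
            module: FooModule,
            providers: [{ provide: FOO_OPTIONS, useValue: options }, FooService],
            exports: [FooService],
        };
    }

    // register(): per-importing-module configuration; each importing
    // module can pass its own options and gets providers built from them
    static register(options: FooOptions): DynamicModule {
        return FooModule.forRoot(options);
    }
}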
I have read a lot about ordering Puppet classes with containment (I am using Puppet 6), but it still does not work for me in one case. Maybe my English is not good enough and I am missing something. Maybe somebody knows what I am doing wrong.
I have a profile for installing a puppetserver (profile::puppetserver). This profile has three sub-classes, which I contain within profile::puppetserver:
class profile::puppetserver (
) {
  contain profile::puppetserver::install
  contain profile::puppetserver::config
  contain profile::puppetserver::firewall
}
That works fine for me. Now I want to expand this profile and install PuppetDB. For this, I use the puppetdb module from Puppet Forge.
So what I do is add profile::puppetserver::puppetdb and the corresponding contain to profile::puppetserver:
class profile::puppetserver::puppetdb (
) {
  # Configure puppetdb and its underlying database
  class { 'puppetdb': }

  # Configure the Puppet master to use puppetdb
  class { 'puppetdb::master::config': }
}
When I provision my puppetserver first and add profile::puppetserver::puppetdb afterwards, PuppetDB installs and everything works fine.
If I add it directly with contain and provision everything at once, it crashes. That is because the puppetdb module's resources (the PostgreSQL server and so on) are applied in an arbitrary order relative to my master server installation. The result is that my puppetserver is not running, so PuppetDB generates no local SSL certificates and its service doesn't come up.
What I tried first:
I installed the puppetdb package directly in my profile::puppetserver::puppetdb and used the require metaparameter. That works when I provision everything at once:
class profile::puppetserver::puppetdb (
) {
  package { 'puppetdb':
    ensure  => installed,
    require => Class['profile::puppetserver::config'],
  }
}
So I thought I could do the same in the code above:
class profile::puppetserver::puppetdb (
) {
  # Configure puppetdb and its underlying database
  class { 'puppetdb':
    require => Class['profile::puppetserver::config'],
  }

  # Configure the Puppet master to use puppetdb
  class { 'puppetdb::master::config':
    require => Class['profile::puppetserver::config'],
  }
}
But this does not work...
So I read about Puppet class containment and ordering with chaining arrows, and did this in my profile::puppetserver:
class profile::puppetserver (
) {
  contain profile::puppetserver::install
  contain profile::puppetserver::config
  contain profile::puppetserver::firewall
  contain profile::puppetserver::puppetdb

  Class['profile::puppetserver::install']
  -> Class['profile::puppetserver::config']
  -> Class['profile::puppetserver::firewall']
  -> Class['profile::puppetserver::puppetdb']
}
But it still does not have any effect: it still starts to install PostgreSQL and the puppetdb package during the install, config, and firewall steps of my puppetserver provisioning.
How must I write the ordering so that everything from the puppetdb module that I call in profile::puppetserver::puppetdb only starts once the rest of the provisioning steps are finished?
I really don't understand it. I think it may have something to do with the fact that I declare classes from the puppetdb module inside profile::puppetserver::puppetdb rather than declaring resource types directly, because when I use the package resource type with the require metaparameter, it seems to work. But I really don't know how to handle this. I think there must be a way, no?
I think it may have something to do with the fact that I declare
classes from the puppetdb module inside profile::puppetserver::puppetdb
rather than declaring resource types directly, because when I use the
package resource type with the require metaparameter, it seems to work.
Exactly so.
Resources are ordered with the class or defined-type instance that directly declares them, as well as according to ordering parameters and instructions applying to them directly.
Because classes can be declared multiple times, in different places, ordering is more complicated for them. Resource-like class declarations such as you demonstrate (and which you really ought to avoid as much as possible) do not imply any particular ordering of the declared class. Neither do declarations via the include function.
Class declarations via the require function place a single-ended ordering constraint on the declared class relative to the declaring class or defined type, and declarations via the contain function place a double-ended ordering constraint similar to that applying to all resource declarations. The chaining arrows and ordering metaparameters can place additional ordering constraints on classes.
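Schematically (a sketch, not tied to the question's classes):

class profile::example {
  include foo   # declares foo; implies no ordering relative to this class
  require bar   # declares bar and orders all of it before this class
  contain baz   # declares baz and anchors it within this class's own order
}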
But I really don't know how to handle this. I think there must be a way, no?
Your last example shows a viable way to enforce ordering at the level of profile::puppetserver, but its effectiveness is contingent on each of its contained classes taking the same approach for any classes they themselves declare, at least where those third-level classes must be constrained by the order of the second-level classes. This appears to be where you are falling down.
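For example, profile::puppetserver::puppetdb could contain the third-party classes instead of declaring them resource-like (a sketch, assuming the puppetdb classes need no parameters here):

class profile::puppetserver::puppetdb (
) {
  # contain ties both classes into this class's ordering, so the chain
  # in profile::puppetserver also constrains everything they declare
  contain puppetdb
  contain puppetdb::master::config

  Class['puppetdb'] -> Class['puppetdb::master::config']
}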
Note also that although there is definitely a need to order some things relative to others, it is neither necessary nor particularly useful to try to enforce an explicit total order over all resources. Work with the lightest hand possible, placing only those ordering constraints that serve a good purpose.
I encountered some problems reading the Rust documentation:
In this example, we have three modules again: client, network, and network::client. Following the same steps we did earlier for extracting modules into files, we would create src/client.rs for the client module. For the network module, we would create src/network.rs. But we wouldn’t be able to extract the network::client module into a src/client.rs file because that already exists for the top-level client module! If we could put the code for both the client and network::client modules in the src/client.rs file, Rust wouldn’t have any way to know whether the code was for client or for network::client
Why does Rust need to know whether the code in client.rs belongs to client or to network::client? Can't it belong to both?
The compiler has rules about where the source file for an external module can be. Those rules ensure that there aren't two modules that use the same source file.
If you really want to, you can override the rules with a #[path] attribute:
mod client; // defaults to client.rs relative to the current file

mod network {
    #[path = "client.rs"] // reads the same source as the outer `mod client;`
    mod client;
}
However, doing so would lead to duplicate code, i.e. the code in client.rs would be compiled twice, and everything that's defined in client.rs would be defined twice, in two separate modules. It's as if you made network/client.rs an exact copy of client.rs and didn't write the #[path] attribute.
Another thing you can do is provide an alias for a module by reexporting it elsewhere. This can be useful when building a library: it enables you to present an external module hierarchy that is different from the internal module hierarchy.
mod client; // not accessible externally

pub mod network {
    // Rust 2015 path syntax; in the 2018+ editions this would be `pub use crate::client;`
    pub use client; // network::client::* will refer to the same definitions as client::*
}
For example, with the above, the client module is defined in client.rs, but clients use it through my_crate::network::client.
Let's say I want to define a set of resources that have dependencies on each other, where the dependent resources should reuse parameters from their ancestors. Something like this:
server { 'my_server':
  path => '/path/to/server/root',
  ...
}

server_module { 'my_module':
  server => Server['my_server'],
  ...
}
The server_module resource both depends on my_server and wants to reuse its configuration, in this case the path where the server is installed. stdlib has functions for doing this, specifically getparam().
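With stdlib, that looks roughly like this inside the dependent resource (a sketch; the variable name is assumed):

# Reuse my_server's path parameter via stdlib's getparam()
$server_root = getparam(Server['my_server'], 'path')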
Is this the "puppet" way to handle this, or is there a better way to have this kind of dependency?
I don't think there's a standard "puppet way" to do this. If you can get it done using the stdlib and you're happy with it, then by all means do it that way.
Personally, if I have a couple of defined resources that both need the same data, I'll do one of the following:
1) Have a manifest that creates both resources and passes the data both need via parameters. The manifest will have access to all data both resources need, whether shared or not.
2) Have both defined resources look up the data they need in Hiera.
I've been leaning more towards #2 lately.
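A rough sketch of option 2, with all names assumed:

# Both defined types read the shared path from Hiera instead of having
# one resource pass it to the other.
define server_module () {
  $server_path = lookup('profiles::server::path')

  file { "${server_path}/modules/${title}":
    ensure => directory,
  }
}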
Dependency is only a matter of declaring it, so your server_module resource would have a require => Server['my_server'] parameter, or the server resource would have before => Server_module['my_module'].