How to use modules in modules - haxe

I'd like to have a module which can be developed in an isolated environment but still remains a module which can be plugged into another project.
The idea: currently I have a state-machine-driven modular project where every module is defined by DSL, so the main project has its context, command mappings, and state machine. Now one of the modules will become essentially the same thing: it will have its own context, its own child modules, and its own DSL definition, which will be separated from the main context.
Is this possible?
Is there some best practice for automatically forwarding events from the main context through the module to the module context?
Is there a way to map the module's private dispatcher as the dispatcher for the isolated context?

It seems to be completely possible.
Since I didn't find any documentation or example regarding this use case, I think there is no best practice.
As far as I understand, it's possible to create your own application context class which would expose the possibility to override the context dispatcher. But it won't solve much, because command triggers can only listen to modules and not to the whole context.
So I assume the best way to solve this is to create a separate communication module which is mapped inside the "child" DSL; the "parent" module then locates it through the core factory of the "child" assembler and triggers events through it. This also makes the communication more testable, because it channels the entire communication through a single point where it can be easily tested/mocked/observed, and it abstracts away the implementation and events of the "parent" application.

HexMachina only supports one context per application (parent-child contexts should be supported in the future). I'm not certain I understand exactly what you want, but let's start with a few things.
Communication between modules.
Modules have two dispatchers, one internal for all internal communication with FrontController, and one public for all external communication.
To communicate between modules, one module has to subscribe to the other. In the DSL, it is defined like this:
<chat id="chat" type="hex.ioc.parser.xml.mock.MockChatModule">
    <listen ref="translation"/>
</chat>

<translation id="translation" type="hex.ioc.parser.xml.mock.MockTranslationModule">
    <listen ref="chat">
        <event static-ref="hex.ioc.parser.xml.mock.MockChatModule.TEXT_INPUT" method="onSomethingToTranslate"/>
    </listen>
</translation>
In this example, when the chat module calls dispatchPublicMessage(MockChatModule.TEXT_INPUT, ["data"]), the onSomethingToTranslate(textToTranslate : String) method of the translation module is executed.
Split DSL in many files
You can use context inclusion and conditional attributes to organize your DSL files by “component”, and define what you want to use at compile time.
<root name="applicationContext">
    <include if="useModuleA" file="context/ModuleA.xml"/>
</root>
The conditional attribute value is defined by compiler flags (-D useModuleA=1) or directly in code (check this link).
Driving many modules with the state machine
If you want to drive many modules on one state change, you can use a command to manage that.
<state id="assemblingEnd" ref="applicationContext.state.ASSEMBLING_END">
    <enter command-class="hex.ioc.parser.xml.assembler.mock.MockStateCommand" fire-once="true"/>
</state>
I hope this can help you. Let me know if you want more detail.

Related

Best practice for naming Event Types in Event Sourcing

When building an event store, the typical approach is to serialize the event and then persist the type of the event, the body of the event (the serialized event itself), an identifier and the time it occurred.
When it comes to the event type, are there any best practices as to how these should be stored and referenced? Examples I see store the fully qualified path of the class, i.e.
com.company.project.package.XXXXEvent
What effort is then required though if you decide to refactor your project structure?
After years running event-sourced applications in production, we avoid using fully qualified class names or any other platform-specific identifiers for event types.
An event type is just a string that should allow any kind of reader to understand how the event should be deserialized. You are also absolutely right about the issue with refactoring the application structure that might lead to changes in the class name.
Therefore, we use a pre-configured map that allows resolving an object type to a string and resolving the string back to an event type. By doing so, we detach the event-type metadata from the actual class and get the freedom to read and write events using different languages and stacks, and we can freely move classes around if needed.
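A minimal sketch in Java of such a pre-configured map, assuming hypothetical event classes MoneyDeposited and MoneyWithdrawn (the post does not name any concrete events): the persisted type name lives in the map, not in the package structure, so classes can be moved or renamed without touching stored events.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical event classes, for illustration only.
class MoneyDeposited {}
class MoneyWithdrawn {}

// Bidirectional map between event classes and their stored type names.
public class EventTypeMap {
    private final Map<Class<?>, String> typeToName = new HashMap<>();
    private final Map<String, Class<?>> nameToType = new HashMap<>();

    public void register(Class<?> type, String name) {
        typeToName.put(type, name);
        nameToType.put(name, type);
    }

    // Name to persist alongside the serialized event body.
    public String nameFor(Class<?> type) { return typeToName.get(type); }

    // Class to deserialize into when reading the store.
    public Class<?> typeFor(String name) { return nameToType.get(name); }
}
```

Registration happens once at startup, e.g. `map.register(MoneyDeposited.class, "MoneyDeposited")`; after a package refactoring only this registration line changes, never the stored data.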
What effort is then required though if you decide to refactor your project structure?
Not a lot of effort, but some discipline.
Events are messages, and the long-term viability of messages depends on having a schema, where the schema is deliberately designed to support forward and backward compatibility.
So something like "event type" would be a field name that can be any of an open set of identifiers which would each have an official spelling and semantics.
The spelling conventions that you use don't matter - you can use something that looks like a name in a hierarchical namespace, or you can use a URI, or even just a number like a surrogate key.
The identifiers, whatever convention you use, are coupled to the specification -- not to the class hierarchy that implements them.
In other words, there's no particular reason that the event type identifier org.example.events.Stopped necessarily implies the existence of a class named org.example.events.Stopped.
Your "factories" are supposed to create instances of the correct classes/data structures from the messages. While the naive mapping from schema identifier to class identifier works, they can take that shortcut; but when you decide to refactor your packages, you have to change the implementation of the factory so that the old identifiers from the message schema map to the new class implementations.
In other words, using something like Class.forName is a shortcut, which you abandon in favor of doing the translation explicitly when the shortcut no longer works.
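A sketch of such an explicit factory in Java, using the hypothetical identifier and event class from above: the schema identifier is translated to a constructor through a lookup table rather than Class.forName, so repackaging the code means updating only this one table.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical event class; the name is for illustration only.
class StoppedEvent {
    final String payload;
    StoppedEvent(String payload) { this.payload = payload; }
}

// Maps schema identifiers (from the message) to event constructors.
public class EventFactory {
    private final Map<String, Function<String, Object>> builders = new HashMap<>();

    public EventFactory() {
        // The schema identifier is stable even if StoppedEvent moves packages.
        builders.put("org.example.events.Stopped", StoppedEvent::new);
    }

    public Object build(String schemaIdentifier, String body) {
        Function<String, Object> builder = builders.get(schemaIdentifier);
        if (builder == null) {
            throw new IllegalArgumentException("Unknown event type: " + schemaIdentifier);
        }
        return builder.apply(body);
    }
}
```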
Since event sourcing is about storing Domain Events, we prefer to avoid package-names or other technical properties in the events. Especially when it comes to naming them, since the name should be part of ubiquitous language. Domain Experts and other people don't lean on package names when making conversation about the domain. Package names are a language construct that also ties the storage of the Domain Events with the use of them within your software, which is another reason to avoid this solution.
We sometimes use the short class name (such as Class.forName in Java) to make mapping to code simpler and more automatic, but the class names should in that case be carefully chosen to match the ubiquitous language so that it still is not too implementation-specific.
Additionally, adding a prefix opens up the possibility of having multiple event types with the same name but different prefixes. Domain Events are part of the context of the Aggregate they are emitted from, so the Aggregate type can be useful to embed in the event. It will scope your events so you don't have to make up synthetic prefixes.
If you store events from multiple bounded contexts in one store, use BoundedContext.EventThatHappened. Use past tense for events, and keep event names unique within a bounded context. As your events' implementations will change over time, there is no direct connection to a class name.

Model data flow from (flow) port to UML activity diagram

One of the projects I am working on uses flow ports to model data flow between classes. We now started to model dynamic behavior using activity diagrams and state charts and are looking for a way to express that the data used in an activity diagram has been received on a specific port. Basically, we want to create a connector between a flow port and e.g. an activity parameter node.
I think modelling data flow with ports is quite common, especially in Systems Engineering, and there should be ways to link the data to activities. I can think of some ways:
Connect the port to a property (or part) and use a ReadStructuralFeatureAction to get the value
Connect the port to a property (or part) and add an operation to the class which is called with a CallOperation
Create an attribute with the same name as the port and provide an operation that is called with a CallOperation action
The first option would be OK, but our modelling tool, Rhapsody 8.1, does not seem to support ReadStructuralFeatureActions. The other two approaches have the drawback that there is no direct connector between the port and the activity in the model, so the link is not visually obvious; I would like to have a better alternative.
I am wondering if anybody knows of better approaches to achieve this, e.g. using SysML (1.3).
The connections between the static and the dynamic views in UML and SysML are "hidden" in the less visible part of the model. I guess the reason is that the designers of UML wanted to separate these, so there are no graphical or otherwise very explicit connections.
Instead, the connections are quite natural, so you can just use them. Examples are the guards, triggers, or actions on transitions in state charts or activity diagrams. The ReadStructuralFeatureAction is implemented implicitly by using the static element directly: you can model it right there, next to the edge that represents the state transition or control flow. A further way is to use Receive Actions and set the property of the reception to an event or a triggered Operation. By using Send Actions you can trigger events in the same structural element or in others. When doing so in Rhapsody, you need to specify the target port and the target part.
Neither UML/SysML nor Rhapsody foresees that you want to know via which port a call came or an attribute was changed when you offer the interface of the Class/Block. But you can realize this by using full ports and implementing the intended behaviour (which should be distinctive; otherwise there would be no need to know the source). Each full port then has a state chart or activity and passes internal signals or events to the object of your class. For calling operations from actions there are two ways: the more hidden one, by just calling from the actions (or on state entry or exit), and the more visible one, by using call operations.
The visibility of these connections has changed in recent UML and SysML versions, so this will change significantly when updating to later Rhapsody versions. I would really recommend updating to the latest Rhapsody version, as it brings better SysML support, far fewer bugs, a few new features, and better usability.

Are there any creational design patterns that instantiate newly written classes?

Are there any creational design patterns that allow for completely new objects (as in newly written) to be instantiated without having to add a new statement somewhere in existing code?
The main problem to solve is how to identify the class or classes to instantiate. I know of and have used three general patterns for discovering classes, which I'll call registration, self-registration and discovery by type. I'm not aware of them having been written up in a formal pattern description.
Registration: Each class that wants to be discovered is registered somewhere where a framework can find it:
the class name is put in an environment variable or Java system property, in a file, etc.
some code adds the class or its name to a singleton list early in program execution
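The registration variants above can be sketched in Java; the registry class and property name here are hypothetical, not from any framework named in the post. The framework never mentions the concrete class in its own source: it either receives an instance explicitly or reads a class name from configuration and instantiates it reflectively.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal singleton registry the framework consults for handlers.
public class HandlerRegistry {
    private static final List<Object> HANDLERS = new ArrayList<>();

    // Variant 1: code registers an instance early in program execution.
    public static void register(Object handler) {
        HANDLERS.add(handler);
    }

    // Variant 2: the class name comes from a system property (or a file,
    // environment variable, etc.) and is instantiated reflectively.
    public static void registerFromProperty(String propertyName) {
        String className = System.getProperty(propertyName);
        if (className == null) {
            return; // nothing configured
        }
        try {
            HANDLERS.add(Class.forName(className).getDeclaredConstructor().newInstance());
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot instantiate " + className, e);
        }
    }

    public static List<Object> handlers() {
        return HANDLERS;
    }
}
```

A newly written class is then picked up by launching with e.g. `-Dapp.handler=com.example.NewHandler`, without any `new` appearing in existing code.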
Self-registration: Each class that wants to be discovered registers itself, probably in a singleton list. The trick is how the class knows when to do that.
The class might have to be explicitly referred to by some code early in the program (e.g. the old way of choosing a JDBC driver).
In Ruby, the code that defines a class can register the class somewhere (or do anything else) when the class is defined. It suffices to require the file that contains the class.
Discovery by type: Each class that wants to be discovered extends or implements a type defined by the framework, or is named in a particular way. Spring autowiring class annotations are another version of this pattern.
There are several ways to find classes that descend from a given type in Java (here's one SO question, here's another) and Ruby. As with self-registration, in languages like those where classes are dynamically loaded, something has to be done to be sure the class is loaded before asking the runtime what classes are available.
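In Java, discovery by type is directly supported by java.util.ServiceLoader. A sketch, assuming a hypothetical framework-defined Plugin interface: any newly written implementation listed in a META-INF/services/ provider file is found and instantiated without a `new` statement in existing code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Hypothetical framework-defined type that discoverable classes implement.
interface Plugin {
    void run();
}

public class PluginLoader {
    // Collects every implementation declared in a
    // META-INF/services/Plugin provider-configuration file on the classpath.
    public static List<Plugin> discover() {
        List<Plugin> plugins = new ArrayList<>();
        for (Plugin plugin : ServiceLoader.load(Plugin.class)) {
            plugins.add(plugin); // instantiated by ServiceLoader, not by us
        }
        return plugins;
    }
}
```

As the answer notes, the class still has to be on the classpath (and, here, declared in the provider file) before the runtime can report it as available.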
One thing to note here is that someone needs to do a new on your newly written class. If you are not doing that, then some framework needs to do it.
I remember doing something similar in one of my side projects using Java Spring. I had an interface and multiple implementations of it. My project required iterating over all the implementations and doing some processing. For this, I was looking for a solution that would not require me to manually instantiate each particular class or explicitly wire it. Spring let me do that through the @Autowired annotation, which injected all the implementations of that interface on the fly. E.g.:
@Autowired
private List<IMyClass> myClassImplementations;
Now in the above example I can simply iterate over the list of injected implementations, and I don't have to instantiate my new implementation every time I write a new one.
But I think in most cases it would be difficult to use this approach (even though it fits my case). I would rather go with a Factory pattern in the general case and use that for creating new instances. I know that would require new, but in my view engineering it so that objects are automatically created and injected is a bit of extra overhead.
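The factory approach mentioned above might look like this sketch, reusing the IMyClass interface name from the answer (the factory class and keys are hypothetical): the `new` is confined to registration lambdas in one place, so adding a newly written implementation means adding a single registration line rather than touching call sites.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Interface name taken from the answer above; method is illustrative.
interface IMyClass {
    String process();
}

// A registry-based factory: creation logic lives in one place.
public class MyClassFactory {
    private final Map<String, Supplier<IMyClass>> creators = new HashMap<>();

    public void register(String key, Supplier<IMyClass> creator) {
        creators.put(key, creator);
    }

    public IMyClass create(String key) {
        Supplier<IMyClass> creator = creators.get(key);
        if (creator == null) {
            throw new IllegalArgumentException("No implementation registered for " + key);
        }
        return creator.get(); // the only place where instantiation happens
    }
}
```

A new implementation is wired in with one line, e.g. `factory.register("csv", CsvProcessor::new)`, where CsvProcessor is whatever class was just written.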

Why are Puppet classes called class?

I'm quite new to Puppet, and so far I worked through the official tutorial, at least the introduction and part 1. So I guess I should have a basic understanding of how Puppet works and its terminology.
Anyway, there is one thing I don't get: In Puppet you have classes, which basically are nothing but outsourced manifests. So far, so good.
But why are they called class?
I have an OOP and CSS background, and I can also imagine the general concept of a class as a container for objects that have something in common. But none of these definitions match the concept of a class in Puppet. At least I don't get the analogy.
Can anybody bring some light into this?
PS: I know that there is no objective answer to this question, but primarily opinion-based, but I don't know where to ask elsewhere, so I hope for some advice :-)
You can think of classes in Puppet as a division or separation of a manifest based on role.
Classes are named blocks of Puppet code which are not applied unless they are invoked by name.
For modularity and understanding of the manifest, it is recommended that code for different roles live in different files; these files are called classes.
The name of a class should denote its role, for example user (to create a user) or params (to pass parameters).
I believe it's recommended by Puppet (I'm fairly sure the docs say this somewhere) that you make a class for each of your nodes' roles, each containing requires for all the other classes needed by that role, with the same requires appearing in multiple role classes if need be (which is allowed when you use require rather than include). This way you can have self-contained classes such as 'fileserver' or 'webserver' which just need to be included in the node resources.
More on overlapping role classes:
http://docs.puppetlabs.com/puppet/2.7/reference/lang_classes.html#aside-history-the-future-and-best-practices

Inter bounded context events - what module/project to store message contracts

Each of our bounded contexts has an event message processor which pulls messages off the inter-context-bus and dispatches it locally via an in-memory bus (Reactive Extensions, or https://github.com/flq/MemBus).
In DDD books I have read it talks about putting messages in modules within the project such as mycompany.accounts.infrastructure.messages and mycompany.ordering.infrastructure.messages .
The problem for me is that with multiple contexts, referencing these messages would lead to circular references.
How best to organise the different bounded contexts' messaging contracts:
Would each bounded context have a separate project that contained all of the possible messages for that context so that other bounded contexts could reference?
Or is it better to have separate shared library for all messages that will go over the inter-context-bus?
I solve similar problems by building (at least) two assemblies for each bounded context:
One for the contracts (events, exceptions, shared identifiers and so on...)
One for the implementation of entities.
This way, different bounded contexts' implementations can reference the same contracts without any cycle.
edit
As for naming conventions, I usually name assemblies after the "conventional name" of the bounded context, for example:
BankName.FinancialAdvisory for the contracts
BankName.FinancialAdvisory.POCO for the implementations
BankName.FinancialAdvisory.ORMOrOtherTechnologicalCouplingName when I need to specialize some classes for use in a specific technological environment.
However, inside the POCO assembly the root namespace is the same as the contracts' one (e.g. BankName.FinancialAdvisory): this is because the POCOs, which express the business rules in code without any technological concern, have the same development lifecycle as the contracts. By contrast, the assembly containing technological specializations uses a root namespace equal to the assembly name (such as BankName.FinancialAdvisory.ORMOrOtherTechnologicalCouplingName).
Nevertheless, all the assemblies related to the domain share the same namespace structure: for example, if a namespace Funds exists under BankName.FinancialAdvisory, it also exists in both POCO and ORMOrOtherTechnologicalCouplingName (if it contains any class, of course).
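The contracts/implementation split described above can be sketched in Java (the answer uses .NET assemblies; packages play the same role here, and all type names are illustrative): other bounded contexts reference only the contract types, never the entity implementations, so no cycle can form.

```java
// --- contracts "assembly" (e.g. BankName.FinancialAdvisory) ---
// Events and shared identifiers: immutable, technology-free types that
// every bounded context may reference.
final class FundId {
    final String value;
    FundId(String value) { this.value = value; }
}

final class FundPurchased {
    final FundId fund;
    final long amountInCents;
    FundPurchased(FundId fund, long amountInCents) {
        this.fund = fund;
        this.amountInCents = amountInCents;
    }
}

// --- implementation "assembly" (e.g. BankName.FinancialAdvisory.POCO) ---
// Entities depend on the contracts; other bounded contexts depend on the
// contracts only, never on this class.
class FundPortfolio {
    FundPurchased purchase(FundId fund, long amountInCents) {
        // Business rules would live here; the emitted event is a contract type.
        return new FundPurchased(fund, amountInCents);
    }
}
```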
