Maybe it is my lack of knowledge, but I'm seeing behavior where a header value updated in SubFlow1 of a publish-subscribe channel is NOT reflected in the other SubFlow2, even though it is executed on the same main thread.
.publishSubscribeChannel(pubSubSpec -> pubSubSpec
        .subscribe(flow1())
        .subscribe(flow2()))
.get();
The Message is immutable by definition. So, in those two sub-flows you really deal with different messages.
Imagine you have two Map objects which contain the same key. Modifying one of them definitely does not affect the other instance.
If you still think that you need to share the same object between sub-flows, then consider using a mutable object as a header. For example, an AtomicReference will do the trick: it can be shared as a header value between different message instances, and modifying the value it holds will make the change visible from one sub-flow to the other.
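For illustration, a minimal sketch of that approach (the flow wiring and the "shared" header name are made up for this example):

    import java.util.concurrent.atomic.AtomicReference;

    import org.springframework.context.annotation.Bean;
    import org.springframework.integration.dsl.IntegrationFlow;

    public class SharedHeaderConfig {

        @Bean
        public IntegrationFlow sharedHeaderFlow() {
            return f -> f
                    // put one mutable holder into the header up front
                    .enrichHeaders(h -> h.headerFunction("shared",
                            m -> new AtomicReference<>("initial")))
                    .publishSubscribeChannel(pubSub -> pubSub
                            .subscribe(sub -> sub.handle((payload, headers) -> {
                                // SubFlow1 mutates the holder's content...
                                headers.get("shared", AtomicReference.class).set("updated");
                                return null; // end of this sub-flow
                            }))
                            .subscribe(sub -> sub.handle((payload, headers) -> {
                                // ...and SubFlow2 sees the change, although it received
                                // a different (immutable) Message instance
                                System.out.println(headers.get("shared", AtomicReference.class).get());
                                return null;
                            })));
        }
    }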
In general it is better to think only in immutable objects, since there is no guarantee how your messages are going to travel through a distributed, asynchronous solution.
I usually try to encapsulate my code, meaning each object works on its data via public methods. The problem is that many stakeholders/other objects of an object lead to many requirements and thus many methods in this object, each of which is then used by only one stakeholder. In such a scenario, should I rather pass the raw data to the stakeholder, so that it can process the data itself?
When should an object process its own data for other objects (offering methods) and when should it simply return its data (via getters) to let the caller do the processing itself?
I think it depends on many factors, such as:
the purpose of the class (the real-world concept it models)
the data the object has (the description of this object)
the desired behavior (the actions this object performs)
For example, we have two objects:
a big bucket with water
a kettle.
Then we can ask ourselves:
can the big bucket have a method boil(), or should the big bucket just have a method giveWater()? It can be concluded that the bucket of water should just expose a getter that hands out water to other users, such as the kettle.
By using this technique you can decide when an object should process its own data for other objects (offering methods) and when it should simply return its data (via getters).
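A small sketch of that idea (names invented for the example): the bucket only hands out its water, while boiling stays the kettle's behavior.

    class WaterBucket {

        private double liters;

        WaterBucket(double liters) {
            this.liters = liters;
        }

        // the bucket hands out its water rather than knowing how to boil it
        double giveWater(double requested) {
            double given = Math.min(requested, liters);
            liters -= given;
            return given;
        }
    }

    class Kettle {

        private double liters;

        // the kettle pulls raw data from the bucket and applies its own behavior
        void fillFrom(WaterBucket bucket, double amount) {
            liters += bucket.giveWater(amount);
        }

        void boil() {
            System.out.println("Boiling " + liters + " liters");
        }
    }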
In addition, you should apply the Single Responsibility Principle of SOLID in your classes. As Robert C. Martin says:
Martin defines a responsibility as a reason to change, and concludes
that a class or module should have one, and only one, reason to be
changed (e.g. rewritten).
So SRP defines a responsibility of a class as a reason to change, and states that a class should have only one reason to change.
So if someone edits logging code in the class Radio, that violates the SRP; the logging code belongs in, and should be edited in, a Log class. These classes can communicate through references or composition.
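A hypothetical illustration: Radio delegates logging to a Log class via composition, so logging changes happen only in Log.

    class Log {

        void write(String message) {
            System.out.println("[LOG] " + message);
        }
    }

    class Radio {

        private final Log log; // composition: logging lives in its own class

        Radio(Log log) {
            this.log = log;
        }

        void tune(double frequencyMhz) {
            // Radio's single responsibility is radio behavior; any change
            // to how logging works happens only inside Log
            log.write("Tuned to " + frequencyMhz + " MHz");
        }
    }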
I'm trying to follow DDD principles for my current project. Unfortunately I have to use RFCs due to technical constraints, so no OData and no REST. It's quite a long question; I hope it's OK to ask this on Stack Overflow.
In any case, I have an entity class WorkOrder with a list of Operation objects.
I have a WorkOrderRepository class with a SAVE method that only receives a WorkOrder object and is able to save everything (header data, address, etc.) in one go, no matter if it's a creation, update or deletion. The repository hides the BAPI calls from the rest.
Now I want to implement the logic to add/update/remove Operations on the work order object, and I'm not even sure the names I give to the methods are correct. Maybe they should be insert/edit/delete... I'm quite confused by this, since every place I look uses different names.
But most important are my two specific doubts:
Should I have only one RFC that receives all the updates to a WorkOrder entity, including header and operations? Or should I create one RFC per operation that handles only one operation at a time? Bear in mind that the UI mockup expects the user to be able to add/delete multiple operations before clicking a SAVE button, that an RFC has an implicit commit, and that to my knowledge a DDD entity should always be updated in one call.
Option 1:
FUNCTION ZWORKORDER_HDR_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_WORKORDER_HDR_CHG) TYPE ZWORKORDER_HDR_CHG
    VALUE(I_WORKORDER_HDR_UPD) TYPE ZWORKORDER_HDR_UPD "X structure for the BAPI
    VALUE(I_OPERATIONS_CHG) TYPE ZOPERATIONS_CHG
    VALUE(I_OPERATIONS_UPD) TYPE ZOPERATIONS_UPD
    VALUE(I_OPERATIONS_DEL) TYPE ZOPERATIONS_DEL
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.
Option 2:
FUNCTION ZWORKORDER_OPERATION_CRT
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION) TYPE ZOPERATION_CHG
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_CHG) TYPE ZOPERATION_CHG
    VALUE(I_OPERATION_UPD) TYPE ZOPERATION_UPD
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_DEL
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_ID) TYPE ZOPERATION_ID
  EXPORTING
    VALUE(E_ERRORS) TYPE BAPIRET2_T.
How should my WorkOrder methods look to handle this? I'm especially confused by the update method, since I'm not sure if I should first get the existing operation and then update it, or let the parent class do it. But maybe my approaches are completely wrong from the root.
Option 1:
workorder->add_operation( i_operation ). "Pass flat structure from RFC? Or first create object?
workorder->update_operation( i_operation_chg
                             i_operation_upd ).
workorder->delete_operation( i_operation_id ).
Option 2:
workorder->add_operation( ).
operation = workorder->get_operation( i_operation_chg->get_id( ) ).
operation->update( i_operation_chg
                   i_operation_upd ).
operation->delete_operation( i_operation_id ).
The simplest solution is always the best (KISS and YAGNI principles). It doesn't really matter if you create 1 or 3 RFC-enabled function modules, so if you can achieve your goal with one function module, then do it with one.
I think you need two RFC-enabled function modules: one to validate the maintained operations (doing the validations as far as possible) without saving anything to the database, and another one, called after the user clicks the SAVE button, to save the whole WorkOrder including the maintained operations (at which point the complete validation also runs).
If you don't need to define an "operation" class for something else right now, then keep it simple; there is no need to instantiate an object. Note that you may create an "operation" class with private static methods that is a friend of the "workorder" class (so only that class can use the operation class), just to organize your code better.
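A loose Java rendering of that idea (ABAP "friends" approximated with package-private visibility; all names are invented):

    public class WorkOrder {

        public void addOperation(String operationId, String description) {
            OperationHelper.validate(operationId, description);
            // ... store the flat operation data on the work order
        }
    }

    // not public: only classes in this package (i.e. WorkOrder) can use it,
    // roughly mirroring "a friend of the workorder class"
    class OperationHelper {

        private OperationHelper() {
        }

        static void validate(String operationId, String description) {
            if (operationId == null || operationId.isEmpty()) {
                throw new IllegalArgumentException("Operation id is required");
            }
            if (description == null || description.isEmpty()) {
                throw new IllegalArgumentException("Description is required");
            }
        }
    }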
PS: although I don't know what "Domain-Driven Design" is, I don't see how your question is related to it, because it just looks like plain program design.
We also work with DDD, though luckily not with RFCs but with Gateway/OData. Since REST by definition is stateless, we always commit in our application service layer.
What we do is have three "DDD" building blocks: app_service, domain_service and repository (and also some data containers like aggregates), where the app_service in your case would expose the methods create, update and delete, and possibly also validate.
Then I would write four thin RFCs (CRUD + validate) that basically pass the data to the app service.
As for being able to handle multiple updates in each save: we always model our OData (your RFC) after the UI requirements; it is then the app_service's task to make sense of the data from a system point of view.
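A rough sketch of that layering, with Java stand-ins for the ABAP artifacts (every name below is an assumption):

    import java.util.List;

    class WorkOrderData { // UI-shaped payload, modeled after the screen
        String id;
        List<String> operations;
    }

    class WorkOrderAppService {

        List<String> validate(WorkOrderData data) {
            // run every check that is possible without saving; return messages
            return List.of();
        }

        void save(WorkOrderData data) {
            // translate the UI-shaped data into domain calls and persist in one commit
        }
    }

    // thin RFC-style entry points: delegation only, no logic of their own
    class WorkOrderRfc {

        private final WorkOrderAppService appService = new WorkOrderAppService();

        List<String> rfcValidate(WorkOrderData data) {
            return appService.validate(data);
        }

        void rfcSave(WorkOrderData data) {
            appService.save(data);
        }
    }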
We make heavy use of Clean ABAP (it's even part of our ATC check), and it clearly states that you should have separate methods, which is also OO best practice:
https://github.com/SAP/styleguides/blob/master/clean-abap/CleanABAP.md#split-methods-instead-of-adding-optional-parameters
We're building a microservices system with Axon Framework 4.1. In our domain, we have a label concept where we can attach labels to other entities. While labels are normally created and managed by the user, some of these labels are "special" and need to be hard-coded, but they need to be present in the event stream as well.
We have a bunch of aggregates that represent entities that can be labeled with these labels. Some of these aggregates will be used frequently, while others might be used infrequently or are even abandoned by the user.
Sometimes we come up with new special labels. We add them to the code, and then we also need to add them to the event stream. What is a good way to do that?
We can create a special command that we need to send when the updated service is started for the first time. It goes through all the labels and adds the ones that aren't in the event stream yet. This has two disadvantages. First, we need to actually send that command, which either requires us to not forget it, or to add some infrastructure for it outside of the code (e.g., in our build pipeline). Also, other services could have booted up faster with the new labels and started sending commands before we fired our special command. The other disadvantage is that this command will target all aggregates, including the abandoned ones, which could be wasteful of resources and be confusing to end users who might see activity in a document they thought was abandoned.
Ideally, we would like to be able to send the command when Axon has just loaded the aggregate. That way we would be certain that the labels are only introduced in aggregates that are actually used. Also, we could wire this up in code and it wouldn't require us to add infrastructure outside of the application and/or remember to do it manually.
Unfortunately, this feature doesn't seem to exist in Axon (yet) 😉.
Are there other (better) ways to achieve this?
I've got an idea which might help you out on this.
If I understand the use case correctly, the "Label" in your system, which users can introduce themselves but of which a couple of hard-coded versions also exist, is an Aggregate.
Based on that assumption, I suggest being smart with the Aggregate Identifier you are using.
The sole thing that Axon expects from you is that the Aggregate Identifier is (or can be made into) a String. Typically a UUID is used for Aggregate Identifiers, which is a reasonable first start.
You can however wrap this UUID in a typed-id object. Taking your "Label" Aggregate, that would call for a LabelId.
That said, let's first go back to verifying whether a given "Label" Aggregate exists within the Event Stream.
The concern you have is rather valid, I think; reading the entire Event Stream to figure out whether a given Aggregate instance exists is too big of a hassle.
However, the EventStore can be queried through two mechanisms:
The Event Stream from a given point in time (e.g. what the TrackingToken mechanism does).
The Event Stream for a given Aggregate instance, based on the Aggregate Identifier.
It's the second option which is far better suited to your scenario.
Just query the EventStore for a given "Label" Aggregate's Identifier. If you receive a non-empty Event Stream, you know it already exists.
Vice versa, if no Events are found, you are certain it's a new "Label" that needs to be introduced.
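A minimal sketch of that existence check (the surrounding wiring is assumed; reading events per Aggregate Identifier is part of Axon's EventStore API):

    import org.axonframework.eventsourcing.eventstore.EventStore;

    class LabelExistenceCheck {

        // a non-empty stream means Events were already published for this Aggregate
        static boolean labelExists(EventStore eventStore, String labelId) {
            return eventStore.readEvents(labelId).hasNext();
        }
    }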
The crux here is in knowing the "Label's" Aggregate Identifier up front, which circles back to the String storage approach for Aggregate Identifiers using a typed LabelId. What you could do is differentiate in the LabelId object between a custom "Label" (I'd opt for a UUID here) and a hard-coded "Label".
For the latter, you could for example have the label-name, plus a UUID/counter if desired.
Doing so will ensure that all the Events published by a hard-coded "Label" have an Aggregate Identifier you can anticipate during start-up.
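A hedged sketch of such a typed id (the naming scheme is just one possibility):

    import java.util.UUID;

    public final class LabelId {

        private final String value;

        private LabelId(String value) {
            this.value = value;
        }

        // user-created labels: random identifier
        public static LabelId newUserLabel() {
            return new LabelId(UUID.randomUUID().toString());
        }

        // hard-coded labels: deterministic identifier derived from the label name,
        // so it can be anticipated at start-up and queried in the Event Store
        public static LabelId forBuiltIn(String labelName) {
            return new LabelId("built-in-" + labelName);
        }

        @Override
        public String toString() {
            return value;
        }
    }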
Hope this is clear and all, if not, please comment on my response below.
I'm developing a distributed system, which must maintain a distributed object graph, where objects can point to other objects, some of which may be on remote computers. It must also be possible to move objects from one computer to another.
This raises two related problems:
How do we know when it is safe to garbage-collect objects?
If we move an object, how do we robustly ensure that all references to it are updated?
An initial thought is to maintain an exhaustive list of all inbound references.
Of course, this alone isn't sufficient for garbage collection, as it won't catch cyclic references (the same problem as with reference-counting garbage collectors).
However this list can be used to ensure that any remote computer with references to the object can be notified if the object is moved.
Another thought is that if an object is moved, it leaves behind a "breadcrumb" on its original computer. If a computer receives a message intended for an object that is no-longer present, it can be forwarded to its new location.
However, it is not scalable to keep such references around indefinitely; how do we know when it's safe to delete them?
Can anyone provide pointers to existing solutions to this problem, particularly those that deal with replication and concurrency issues too?
The best-known solution in Java is implemented as part of the RMI spec.
See Garbage Collection of Remote Objects & Distributed Garbage Collection
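For reference, RMI's distributed GC is lease-based: remote holders of a reference must periodically renew a lease, and an expired lease stops pinning the object. A hedged sketch of that idea (not the actual java.rmi.dgc API), which also suggests an answer for the breadcrumbs, since a forwarder only needs to outlive the longest outstanding lease:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class LeaseTable {

        private static final long LEASE_MILLIS = 10 * 60 * 1000; // e.g. ten minutes

        private final Map<String, Long> expiryByClient = new ConcurrentHashMap<>();

        // called when a remote client acquires or renews a reference
        void renew(String clientId) {
            expiryByClient.put(clientId, System.currentTimeMillis() + LEASE_MILLIS);
        }

        // the object is collectable once every lease has lapsed; the same rule
        // bounds how long a "breadcrumb" forwarder has to be kept around
        boolean hasLiveReferences() {
            long now = System.currentTimeMillis();
            expiryByClient.values().removeIf(expiry -> expiry < now);
            return !expiryByClient.isEmpty();
        }
    }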
Let's say we have CQRS-inspired architecture, with components such as Commands, Domain Model, Domain Events, Read Model DTOs.
Of course, we can use Value Objects in our Domain Model. My question is, should they also be used in:
Commands
Events
DTOs
I haven't seen any examples where Value Objects (VOs) are used in the components mentioned above. Instead, primitive types are used. Maybe it's just the simplistic examples. After all, my understanding of VO use in DDD is that they act as glue for the whole application.
My motivation:
Commands.
Let's say the user submits a form which contains address fields. We have an Address Value Object to represent this concept. When constructing the command in the client, we should validate the user input anyway, and when it is well-formed, we can create the Address object right there and initialize the Command with it. I see no need to delegate the creation of the Address object to the command handler.
Domain Events.
The Domain Model already operates in terms of Value Objects, so by publishing events with VOs instead of converting them to primitive types, we can avoid some mapping code. I'm pretty sure it's alright to use VOs in this case.
DTOs.
If our query-side DTOs can contain Value Objects, this allows for some more flexibility. E.g., if we have a Money object, we can choose whether to display it in EUR or USD with no need to change the Read Model.
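For concreteness, a small sketch of that Money idea (where the conversion rate comes from is left out):

    import java.math.BigDecimal;

    final class Money {

        private final BigDecimal amount;
        private final String currency;

        Money(BigDecimal amount, String currency) {
            this.amount = amount;
            this.currency = currency;
        }

        // the read side can pick the display currency without touching the model
        Money convertTo(String targetCurrency, BigDecimal rate) {
            return new Money(amount.multiply(rate), targetCurrency);
        }

        @Override
        public String toString() {
            return amount + " " + currency;
        }
    }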
OK, I've changed my mind. I have been trying to deal with VOs a lot lately, and watching this talk clarified a couple of things for me: http://www.infoq.com/presentations/Value-Objects-Dan-Bergh-Johnsson
Commands and Events are messages (and not objects; objects are data + behavior). In some respects they are much like DTOs: they communicate data about an event, and they themselves encapsulate no behavior.
Value Objects are not like DTOs at all. They are a domain representation and they are, generally speaking, rich in behavior, like all other domain representations.
Commands and Events communicate information into and out of the domain, respectively, but they themselves do not encapsulate any behavior. From that perspective it seems wrong, and possibly a violation of context boundaries, to pass VOs inside them.
To paraphrase Oren (though he was referring to NHibernate and WCF): "Don't send your domain across the wire".
http://ayende.com/Blog/archive/2009/05/14/the-stripper-pattern.aspx
If you want to communicate a value object, then I suggest passing the attributes needed to re-construct the VO instead.
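A sketch of that suggestion (all names invented): the command carries raw attributes, and the domain re-constructs the VO.

    final class ChangeAddressCommand { // message: data only, no behavior

        final String customerId;
        final String street;
        final String city;
        final String zip;

        ChangeAddressCommand(String customerId, String street, String city, String zip) {
            this.customerId = customerId;
            this.street = street;
            this.city = city;
            this.zip = zip;
        }
    }

    final class Address { // Value Object: stays inside the domain

        final String street;
        final String city;
        final String zip;

        Address(String street, String city, String zip) {
            if (street == null || city == null || zip == null) {
                throw new IllegalArgumentException("incomplete address");
            }
            this.street = street;
            this.city = city;
            this.zip = zip;
        }
    }

    class ChangeAddressHandler {

        void handle(ChangeAddressCommand cmd) {
            // re-construct the VO from the message's raw attributes, inside the domain
            Address address = new Address(cmd.street, cmd.city, cmd.zip);
            // ... load the aggregate for cmd.customerId and apply the new address
        }
    }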
Original Text (for posterity):
If you are asking if Value Objects can be passed by the domain model to events or passed in by commands, I don't really see a huge problem with the former, though the latter may violate some of the rules of the aggregate root being the "owner" of values.
That said, a value object represents a concept like, for example, a color. You don't have green; you are green or not. There seems to be nothing intrinsically wrong with a command telling you that you are green by passing one.
Reading the chapter from DDD on the Aggregate Root pattern explains Entities and Value Objects quite well and is worth reading over a few times.
I say it's a bad idea.
There's a reason we don't do the same with entities - to avoid coupling other parts of the system to the domain (in the wrong places). The same is true for Value Objects, the only difference between value objects and entities is lifetime and ownership - these differences do not affect how we should and should not couple to them.
Imagine that you make an event contain a VO. A change in your domain requires you to change that VO. You've now boxed yourself into a corner where your event is also forced to change, and ditto for any Commands or DTOs it's a part of.
This is to be avoided.
Use DTOs and/or primitives. Map them (AutoMapper makes it a one-line deal).
Similar to other answers, in SOA this would break encapsulation of the service as the domain is now leaking out.
According to Clean Code, your DTOs are data structures (just to add another term), while value objects are objects. The difference is that objects can have behavior. Mixing data structures with objects is generally a very bad idea, because the hybrid you get will be hard to maintain.
Putting value objects into DTOs doesn't feel right to me from an architecture perspective either. Value objects live inside the domain model, while the DTOs you mentioned define the interface of the model. We usually build an interface to decouple the outside world from the inside of something. So in the current case we added DTOs to decouple the outside world from the value objects (and other model-related stuff). Given that, adding value objects to the interface is crazy.
So the reason you haven't met this solution yet is that it is an anti-pattern.
Value Objects are, or at least should be, immutable. Once instantiated with a value, that value will never change throughout the lifetime of the object. Thus, passing VOs as data inside DTOs (such as Events) should not be an issue, since all you can do with them is get their value, at most in a different representation, such as toString() as opposed to the original getValue(), which might return an integer or whatever the value is.