What should I do when I'm done with a contextual reference acquired programmatically?

Suppose I acquire a contextual reference of a bean programmatically, using BeanManager#getBeans(Type, Annotation...) and BeanManager#resolve(Set) and BeanManager#getReference(Bean, Type, CreationalContext). (This is standard stuff.)
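For concreteness, the lookup sequence I have in mind looks roughly like this (MyService is just a placeholder type; the SPI types are from javax.enterprise.inject.spi and javax.enterprise.context.spi):
// Minimal sketch of the programmatic lookup described above.
BeanManager beanManager = CDI.current().getBeanManager(); // or an injected BeanManager
Set<Bean<?>> beans = beanManager.getBeans(MyService.class);
Bean<?> bean = beanManager.resolve(beans);
CreationalContext<?> cc = beanManager.createCreationalContext(bean);
MyService reference = (MyService) beanManager.getReference(bean, MyService.class, cc);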
Suppose further that the bean in question is in @Singleton scope (it could be any scope other than @Dependent; I've picked @Singleton just (a) to make this concrete and (b) to get client proxies out of the picture to keep it simple). (Note that strictly speaking I should be unaware of what scope applies to the bean I'm working with, but my question may be directly related to this.)
Suppose further that the bean in question has a @Dependent-scoped contextual reference "inside" it. (This helps to highlight the necessary pairing of bean destruction with CreationalContext#release(); see below.)
Suppose further that the reference I get back is the first such reference. That is, my acquiring this reference is what causes the underlying contextual instance singleton to be created. All fine and simple and good so far. Yay; I have my object.
I do whatever I'm going to do with this contextual reference. Now I have just created something and don't want to leak memory, so…what should I do?
Should I simply try to destroy the existing contextual instance underlying the reference, perhaps using AlterableContext#destroy(Contextual)? (A sketch of this option appears after these questions.) But don't I now have to know intimate details about the scope and whether it is suitable to destroy any existing instance? (And of course if I do this I'd better call CreationalContext#release() too.)
Should I simply release the CreationalContext used for construction (without explicitly destroying the singleton instance)? But this will destroy the singleton's dependent references. That seems like something that would result in inconsistent state.
Should I do nothing? In the case of @Singleton this does no harm, since a singleton lasts for as long as the application, but, again, now I have to know intimate details about the scope. Is this by design?
Is the proper answer really that I need to know lots of things about the particular underlying scope (which, if the scope is user-supplied, may be impossible), and need to tailor my destruction/releasing actions to the semantics of the scope?
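To make the destroy-then-release option concrete, here is a sketch of what I mean, continuing from the lookup snippet above (it assumes the context for the bean's scope actually implements AlterableContext, which is not guaranteed, especially for user-supplied scopes):
// Sketch only: bean, cc and beanManager come from the lookup snippet above.
Context context = beanManager.getContext(bean.getScope());
if (context instanceof AlterableContext) {
    ((AlterableContext) context).destroy(bean); // destroys the underlying contextual instance, if any
}
cc.release(); // destroys the @Dependent instances created while building the reference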

Related

ABAP DDD: how to correctly implement add/update/remove of a child from an aggregate via RFC?

I'm trying to follow DDD principles for my current project. Unfortunately I have to use RFCs due to technical constraints, so no OData and no REST. It's quite a long question; I hope it's OK to ask this on Stack Overflow.
In any case, I have an entity class WorkOrder with a list of Operation objects.
I have a WorkOrderRepository class with a SAVE method that only receives a WorkOrder object and is able to save everything (header data, address, etc.) in one go, no matter whether it's a creation, an update or a deletion. The repository hides the BAPI calls from the rest.
Now I want to implement the logic to add/update/remove Operations on the work order object, and I'm not even sure whether the names I give to the methods are correct. Maybe they should be insert/edit/delete... I'm quite confused by this since every place I look uses different names.
But most important are my two specific questions:
Should I have only one RFC that receives all the updates to a WorkOrder entity, including header and operations? Or should I create one RFC per operation that only handles one operation at a time? Bear in mind that the UI mockup expects the user to be able to add/delete multiple operations before clicking a SAVE button, that an RFC has an implicit commit, and that to my knowledge a DDD entity should always be updated in one call.
Option 1:
FUNCTION ZWORKORDER_HDR_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID)      TYPE AUFNR
    VALUE(I_WORKORDER_HDR_CHG) TYPE ZWORKORDER_HDR_CHG
    VALUE(I_WORKORDER_HDR_UPD) TYPE ZWORKORDER_HDR_UPD  "X structure for the BAPI
    VALUE(I_OPERATIONS_CHG)    TYPE ZOPERATIONS_CHG
    VALUE(I_OPERATIONS_UPD)    TYPE ZOPERATIONS_UPD
    VALUE(I_OPERATIONS_DEL)    TYPE ZOPERATIONS_DEL
  EXPORTING
    VALUE(E_ERRORS)            TYPE BAPIRET2_T.
Option 2:
FUNCTION ZWORKORDER_OPERATION_CRT
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION)    TYPE ZOPERATION_CHG
  EXPORTING
    VALUE(E_ERRORS)       TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID)  TYPE AUFNR
    VALUE(I_OPERATION_CHG) TYPE ZOPERATION_CHG
    VALUE(I_OPERATION_UPD) TYPE ZOPERATION_UPD
  EXPORTING
    VALUE(E_ERRORS)        TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_DEL
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_ID) TYPE ZOPERATION_ID
  EXPORTING
    VALUE(E_ERRORS)       TYPE BAPIRET2_T.
How should my WorkOrder methods look to handle this? I'm especially confused by the update method, since I'm not sure whether I should first get the existing operation and then update it, or let the parent class do it. But maybe my approaches are completely wrong from the root.
Option 1:
workorder->add_operation( i_operation ). "Pass flat structure from RFC? Or first create object?
workorder->update_operation( i_operation_chg
i_operation_upd ).
workorder->delete_operation( i_operation_id ).
Option 2:
workorder->add_operation( ).
operation = workorder->get_operation( i_operation_chg->get_id( ) ).
operation->update( i_operation_chg
                   i_operation_upd ).
operation->delete_operation( i_operation_id ).
The simplest solution is always the best (KISS and YAGNI principles). It doesn't really matter whether you create one or three RFC-enabled function modules, so if you can achieve your goal with one function module, then do it with one.
I think you need to have two RFC-enabled function modules: one to validate the maintained operations (doing as much validation as possible) without saving anything to the database, and another one, called after the user clicks the SAVE button, to save the whole "WorkOrder" including the maintained operations (at which point the complete validation runs again).
If you don't need to define an "operation" class for something else right now, then keep it simple; there is no need to instantiate an object. Note that you may create an "operation" class with private static methods that is a friend of the "workorder" class (so only that class can use the operation class), just to organize your code better.
PS: although I don't know what "Domain-Driven Design" is, I don't see how your question is related to it, because it just looks like simple program design.
We also work with DDD, although luckily not with RFCs but with Gateway/OData. Since REST is by definition stateless, we always commit in our application service layer.
What we do is have three "DDD" entities:
app_service, domain_service and repository (and also some data containers like aggregates), where the app_service in your case would expose the methods create, update and delete, and possibly also validate.
Then I would write four thin RFCs (CRUD + validate) that basically pass the data to the app service.
As for being able to handle multiple updates in each save, we always model our OData (your RFC) after the UI requirements; it is then the app_service's task to make sense of the data from a system point of view.
We make heavy use of Clean ABAP (it's even part of our ATC checks), and it clearly states that you should have separate methods, which is also OO best practice:
https://github.com/SAP/styleguides/blob/master/clean-abap/CleanABAP.md#split-methods-instead-of-adding-optional-parameters
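To sketch what "separate methods instead of optional parameters" can look like for such an app_service (shown in Java purely for brevity; the names are illustrative, not our actual code):
// Illustrative placeholders for the data containers mentioned above.
class WorkOrderChange { /* header + operation changes coming from the RFC */ }
class OperationData { }
class OperationChange { }
class ValidationResult { }

// One explicit method per use case, mirroring the four thin RFCs
// (create / update / delete / validate), instead of one method with optional parameters.
interface WorkOrderAppService {
    ValidationResult validate(WorkOrderChange change);
    void createOperation(String workOrderId, OperationData operation);
    void updateOperation(String workOrderId, OperationChange change);
    void deleteOperation(String workOrderId, String operationId);
}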

Dynamically building independent object trees with CDI

I'm wondering how to use CDI to build multiple independent objects trees representing the same type of data. Here is an example:
I have a Car, in which I want to inject GearShift and Engine.
I also want to inject Engine into GearShift.
This Car + GearShift + Engine is my tree.
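In code, the tree I have in mind looks roughly like this (scopes deliberately left out, since choosing them is exactly my question; @Inject is javax.inject.Inject):
// Rough sketch of one tree; whether Car and GearShift end up sharing the same
// Engine instance within a tree is part of what the scoping question is about.
class Engine { }

class GearShift {
    @Inject Engine engine;
}

class Car {
    @Inject GearShift gearShift;
    @Inject Engine engine;
}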
If I want to have several cars at the same time, what would be the best way to do this using CDI?
I would expect to be able to define a kind of scope or a qualifier for each tree.
But CDI scopes and qualifiers are defined statically, while the number of cars is dynamic.
As an additional requirement, I would like to inject another dependency that would be shared between cars.
For example, all cars would share the same Road for their whole lifetime (couldn't find something else that makes more sense).
Thanks in advance
I am not sure if I understand you properly, but I think you can at least use the session scope, because you could create several sessions with different IDs. For every session you will have its own session map.
In that way you can manage a separate set of Car objects per session.
If you use Weld as your CDI implementation, you can use a BoundSessionContext, which can be bound to any Map you supply: you associate the context with the map and activate it, and later invalidate and deactivate it when that "session" is done.
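A rough sketch of how that could look (the method names come from Weld's org.jboss.weld.context.bound API as I understand it, so please verify them against your Weld version; Car, its drive() method and its scope are placeholders):
import java.util.HashMap;
import java.util.Map;
import javax.inject.Inject;
import org.jboss.weld.context.bound.BoundSessionContext;

public class CarTreeRunner {

    @Inject
    BoundSessionContext sessionContext; // Weld-specific built-in bean (assumption)

    @Inject
    Car car; // assumed @SessionScoped; the proxy resolves against whichever map is active

    public void runOneCarTree() {
        Map<String, Object> store = new HashMap<>(); // backing storage for one independent "tree"
        sessionContext.associate(store);
        sessionContext.activate();
        try {
            car.drive(); // Car + GearShift + Engine are created inside this bound session
        } finally {
            sessionContext.invalidate();   // mark the instances for destruction
            sessionContext.deactivate();
            sessionContext.dissociate(store);
        }
    }
}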

UML and Implementation: Associating Classes through IDs

I was recently studying an online course. It was recommended that, to reduce coupling, we could simply pass the ID from the Customer object to the Order object; that way the Order did not have to hold a full reference to the Customer class.
The idea certainly seems simple and why pass a whole object if you don't need all its attributes?
1) What do you think of this idea?
2) How would I express the relationship between the Customer class and the Order class in UML if only an ID is passed? This isn't just an example of aggregation, is it? Don't composition and aggregation require more than just passing a value?
Thanks!
First of all you need to be clear about what UML actually is. On the one hand you have an idea, and on the other side there is some code running on hardware. Ideally the latter supports the former in a way that brings added value to a user of the idea. Now, there are many possible ways to describe the path from idea to code, and UML is one of them. It is possible to describe each step along this way, but for pragmatic reasons UML stops at the border of code, namely programming languages.
Now for your concrete question: any object can be seen as an instance, that is, some concrete memory partition with a fixed address. Programming languages realize instances by allocating memory and using the start address as a reference. And since this reference does not change, the object can be identified by its address. Clearly then, an association will just be a pointer, and an association class will hold two (or more) such pointers.
Honestly, the very first time I started with OO I was also confused and thought that it was a waste of resources to pass those large objects around. But since it's just a pointer, it's really easy going.
Again, things can get more difficult if you need to persist objects. In that case you need an artificial key that you can save along with the object, and you will likely need tables to map the artificial key to the concrete instance address.
The answer to this question depends on a number of factors, which I started listing in a comment attached to your question. I will assume that you are either using UML to create a Domain Model, or you are describing an implementation done using a statically typed language.
If you are using UML to create a Domain Model, you are obfuscating the semantics when you use an ID to "link" classes. Just draw and annotate the association and you're done.
If you are describing an implementation done using a statically typed language, remember that types exist for a reason. Using generic IDs to link things means that the information the system needs becomes more indirect, and therefore more opaque (which is bad). In your case, the Order object still must acquire a typed reference to a Customer object to do anything with it.
For example, the Order may acquire a reference to the Customer by invoking a lookup by the ID, but it must cast the reference to an appropriate type to invoke anything on the Customer object. So you haven't reduced the coupling from the Order to the Customer; you've just buried it somewhere else.
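A small illustration of that last point (all names here are hypothetical):
// Hypothetical sketch: holding only an ID pushes the coupling into a lookup and a cast;
// it does not remove it, because Order still needs a typed Customer to do anything useful.
class Customer {
    String billingAddress() { return "221B Baker Street"; }
}

interface CustomerLookup {
    Object findById(String id); // generic, untyped lookup
}

class Order {
    private final String customerId;
    private final CustomerLookup lookup;

    Order(String customerId, CustomerLookup lookup) {
        this.customerId = customerId;
        this.lookup = lookup;
    }

    String shippingLabel() {
        // The cast re-introduces the dependency on Customer, just less visibly.
        Customer customer = (Customer) lookup.findById(customerId);
        return customer.billingAddress();
    }
}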

Automatic actions when creating a new NSManagedObject

I have two entities - A and B. A includes a set of Bs. Every time I create a B, I want to make sure I add it to a special instance of A.
Looking at the NSManagedObject class reference, it is very clear that I should NOT be overriding the init method. So where is the best place to "catch" the creation of B? The only way I can see is to use validateForInsert, but I'm concerned that that's not really what it's meant for, and thus may cause headaches down the road. Is there a better option?
From the NSManagedObject documentation:
awakeFromInsert
Invoked automatically by the Core Data framework when the receiver is first inserted into a managed object context.
Override this method in your entity's class.

Using object's setter to trigger data updates, best practices

I have an object that gets instantiated in a LINQ to SQL method. When the object fields are being assigned, I want to check a date field and, if it is an old date, retrieve data from another table and perform calculations before continuing with assigning this object.
Is there anything wrong with triggering such an event through the property setter? Or should I independently check the date through some service and make the changes if necessary at some point afterwards?
There's nothing wrong with doing some logic from within your setters, but you should be careful about just how much logic you put within your setters. One of the fundamental problems of setters is that since they act like attributes, but have backing code, it's easy to forget that there are potentially some non-trivial actions going on behind the scenes.
This sort of thing can cause problems if you have accessors which use accessors which use accessors; you can rapidly end up causing unexpected performance problems. Generally, it's a good idea to keep the actions of setters (or getters, for that matter) to a relatively small set of actions. For example, validation can work perfectly fine in a setter, but I'd generally advise against doing validation against external resources, because of two things: first, resource delays can cause problems with expected access speed, and secondly, the number of external resource accesses can destroy your performance.
Generally, the rule is this: keep it simple. It's not unreasonable to do complicated things in a setter, but if you do, it's really important to understand the consequences of all of the actions you'll be causing, and it's EXTREMELY important to document what it does extremely well, so the next guy (or girl) to use the code doesn't just try to naively use the accessor and end up causing massive resource contention issues unexpectedly.
Half the point of using setters instead of, say, public fields, is to be able to trigger events associated with setting certain data.
Keyword: associated. If you're just using the setter as a "convenient" time to do some other stuff because it happens to work, you're doing it wrong. If setting this value requires other work to be done, then by all means, use the setter to do it.
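As a rough illustration of work that really is associated with the value being set (hypothetical names, sketched in Java rather than C# for brevity):
import java.time.LocalDate;

// Hypothetical sketch: the setter reacts to the date being assigned, but it only flags
// stale data instead of hitting another table from inside the setter.
class Measurement {
    private LocalDate sampleDate;
    private boolean needsRecalculation;

    void setSampleDate(LocalDate newDate) {
        this.sampleDate = newDate;
        // Old date: mark for a refresh that a service can perform afterwards.
        this.needsRecalculation = newDate.isBefore(LocalDate.now().minusDays(30));
    }

    LocalDate getSampleDate() {
        return sampleDate;
    }

    boolean needsRecalculation() {
        return needsRecalculation;
    }
}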
