Let's imagine I've written a custom actions managed class library that I am planning to use in a WiX setup project. The class library contains a few classes that have "Install" methods. I am planning to launch those methods from my setup package as custom actions, so I mark all of them with the CustomActionAttribute. What will happen then? Will only one method be launched, will all of them be launched, or will the compilation of the setup project fail? Is this considered good practice at all?
A better practice would be to:
1) Eliminate CAs where possible (don't reinvent the wheel)
2) Make CAs generic and declarative (table-data driven)
3) Make CAs transactional whenever possible (support rollback)
4) Don't use InstallUtil; use WiX DTF instead (see the sketch after this list)
5) Understand custom action context / scheduling concerns
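For point 4, here is a minimal sketch of what a DTF-based custom action class can look like (the class name, method names and property name below are illustrative, not taken from the question):

using Microsoft.Deployment.WindowsInstaller;

public class SetupActions
{
    // Each exported custom action is a public static method that takes a Session
    // and returns an ActionResult. The export name defaults to the method name,
    // so it must be unique within the assembly; an explicit name can be passed
    // to the attribute to avoid collisions.
    [CustomAction]
    public static ActionResult ConfigureDatabase(Session session)
    {
        session.Log("Begin ConfigureDatabase");
        // Immediate CAs read MSI properties like this; deferred CAs would use session.CustomActionData.
        string connection = session["DATABASE_CONNECTION"];
        // ... do the actual work here ...
        return ActionResult.Success;
    }

    [CustomAction("RegisterServiceAction")]
    public static ActionResult RegisterService(Session session)
    {
        session.Log("Begin RegisterService");
        return ActionResult.Success;
    }
}

Marking several methods with CustomActionAttribute simply exports each of them as a separate entry point in the packaged DLL; only the ones you actually reference and schedule in the MSI will run.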
You should never really be installing things with custom actions, since that's what MSI itself is for.
If you do want to do this though, make sure you schedule your actions in the InstallExecuteSequence table, otherwise they won't run. Additionally, make sure your DLL is included in the Binary table and that your custom actions reference that binary.
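For reference, the corresponding WiX v3 authoring looks roughly like the following (the Id, SourceFile and DllEntry values are placeholders matching the sketch above, not anything from the question):

<Binary Id="CustomActionsBinary" SourceFile="MyCustomActions.CA.dll" />

<CustomAction Id="ConfigureDatabase"
              BinaryKey="CustomActionsBinary"
              DllEntry="ConfigureDatabase"
              Execute="deferred"
              Return="check" />

<InstallExecuteSequence>
  <Custom Action="ConfigureDatabase" After="InstallFiles">NOT Installed</Custom>
</InstallExecuteSequence>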
I've simulated the problem and got the following error while trying to compile the custom actions class library:
An item with the same key has already been added.
So that means methods with the same name cannot be used in the class library, or at the very least we shouldn't do that.
Related
I'm trying to follow DDD principles for my current project. Unfortunately I have to use RFCs due to technical constraints, so no OData and no REST. It's quite a long question; I hope it's OK to ask this on Stack Overflow.
In any case, I have an entity class WorkOrder with a list of Operation objects.
I have a WorkOrderRepository class with a SAVE method that only receives a WorkOrder object and is able to save everything (header data, address, etc.) in one go, no matter whether it's a creation, an update or a deletion. The repository hides the BAPI calls from the rest of the code.
Now I want to implement the logic to add/update/remove Operations on the work order object, and I'm not even sure the names I give to the methods are correct. Maybe they should be insert/edit/delete... I'm quite confused about this, since every place I look uses different names.
But most important are my two specific doubts:
Should I have only one RFC that receives all the updates to a WorkOrder entity (header, operations, etc.), or should I create one RFC per operation that handles only one operation at a time? Bear in mind that the UI mockup expects the user to be able to add/delete multiple operations before clicking a SAVE button, that an RFC has an implicit commit, and that, to my knowledge, a DDD entity should always be updated in one call.
Option 1:
FUNCTION ZWORKORDER_HDR_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID)      TYPE AUFNR
    VALUE(I_WORKORDER_HDR_CHG) TYPE ZWORKORDER_HDR_CHG
    VALUE(I_WORKORDER_HDR_UPD) TYPE ZWORKORDER_HDR_UPD "X structure for the BAPI
    VALUE(I_OPERATIONS_CHG)    TYPE ZOPERATIONS_CHG
    VALUE(I_OPERATIONS_UPD)    TYPE ZOPERATIONS_UPD
    VALUE(I_OPERATIONS_DEL)    TYPE ZOPERATIONS_DEL
  EXPORTING
    VALUE(E_ERRORS)            TYPE BAPIRET2_T.
Option 2:
FUNCTION ZWORKORDER_OPERATION_CRT
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION)    TYPE ZOPERATION_CHG
  EXPORTING
    VALUE(E_ERRORS)       TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID)  TYPE AUFNR
    VALUE(I_OPERATION_CHG) TYPE ZOPERATION_CHG
    VALUE(I_OPERATION_UPD) TYPE ZOPERATION_UPD
  EXPORTING
    VALUE(E_ERRORS)        TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_DEL
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_ID) TYPE ZOPERATION_ID
  EXPORTING
    VALUE(E_ERRORS)       TYPE BAPIRET2_T.
How should my WorkOrder methods look to handle this? I'm especially confused by the update method, since I'm not sure whether I should first get the existing operation and then update it, or let the parent class do it. But maybe my approaches are completely wrong from the root.
Option 1:
workorder->add_operation( i_operation ). "Pass flat structure from RFC? Or first create object?
workorder->update_operation( i_operation_chg
                             i_operation_upd ).
workorder->delete_operation( i_operation_id ).
Option 2:
workorder->add_operation( ).
operation = workorder->get_operation( i_operation_chg->get_id( ) ).
operation->update( i_operation_chg
                   i_operation_upd ).
operation->delete_operation( i_operation_id ).
The simplest solution is always the best (KISS and YAGNI principles). It doesn't really matter whether you create 1 or 3 RFC-enabled function modules, so if you can achieve your goal with one function module, then do it with one.
I think you need two RFC-enabled function modules: one to validate the maintained operations (doing as much validation as possible) without saving anything to the database, and another one, called after the user clicks the SAVE button, to save the whole "WorkOrder" including the maintained operations (the complete validation happens at this point too).
If you don't need to define an "operation" class for something else right now, then keep it simple; there is no need to instantiate an object. Note that you may create an "operation" class with private static methods and make it a friend of the "workorder" class (so only this class can use the operation class), just to organize your code better.
PS: although I don't know what "Domain-Driven Design" is, I don't see how your question is related to it, because it just looks like simple program design.
We also work with DDD, although luckily not with RFCs but with Gateway/OData instead. Since REST by definition is stateless, we always commit in our application service layer.
What we do is have three "DDD" entities:
app_service, domain_service and repository (and also some data containers like aggregates), where the app_service in your case would expose the methods create, update and delete, and possibly also validate.
Then I would write four thin RFCs (CRUD + validate) that basically just pass the data to the app service.
As for being able to handle multiple updates per save: we always model our OData (your RFC) after the UI requirements, and then it's the app_service's task to make sense of the data from a system point of view.
We make heavy use of Clean ABAP (it's even part of our ATC checks), and it clearly states that you should have separate methods, which is also OO best practice.
https://github.com/SAP/styleguides/blob/master/clean-abap/CleanABAP.md#split-methods-instead-of-adding-optional-parameters
I am developing a plugin for schedulability analysis for mbeddr. In order to run an external tool, I want to transform the mbeddr model to the external tool's model and use the TextGen aspect to create an input file for the external tool. The analysis will be started from an action defined in a plugin solution (is this the smartest way?), so how do I trigger the M2M transformation and the TextGen from a plugin action?
Thanks
You can have a look at the MakeActionImpl class from MPS. It basically does this by creating a MakeSession: it takes a list of models to generate, and you can await the Future that is returned. If you need fine-grained control over the generators invoked and the facets involved, have a look at the MakeUtils from com.mbeddr.core.runconfiguration.pluginSolution. It's a good blueprint if you want to get deeper integration with the MPS internals, and it also contains an easy API for the simple rebuild/make tasks.
After reading this SO question, I noticed that the link in the question made a reference to Microsoft.Xrm.Client.CodeGeneration.CodeCustomization,Microsoft.Xrm.Client.CodeGeneration.
What advantages does it have over the standard code gen? According to LameCoder it changes all the entities to inherit from Microsoft.Xrm.Client.CrmEntity rather than Microsoft.Xrm.Sdk.Entity. What difference does that make, and what other changes does it introduce?
Here is the best site I could currently find on what it does:
CrmSvcUtil & OrganizationServiceContext enhancements such as lazy loading
Simplified Connection Management with Connection Dialog UI
Client Side caching extensions
Utility Extension functions for common tasks to speed up client development
Organization Service Message utility functions to make it easy to call common messages such as BulkDelete, Add Member to Team etc.
Objects to support the Microsoft.Xrm.Portal extensions
The only real downside I can see to inheriting from CrmEntity is that it requires the Microsoft.Xrm.Client dll to either be GAC'd on the server or ILMerged into the entities dll.
Besides that one downside, here are the features I see it adding:
Moves INotifyPropertyChanging and INotifyPropertyChanged into the base class, making the resulting code smaller (see the sketch after this list)
Defines additional class Attributes
System.Data.Services.Common.DataServiceKeyAttribute
System.Data.Services.IgnorePropertiesAttribute (I'm assuming this one sends less data over the wire?)
Microsoft.Xrm.Client.Metadata.EntityAttribute (I believe this is used to support lazy loading)
Option set properties are changed to nullable ints
Money properties are now nullable decimals
Setting a property value to the value it already is, will not trigger a property changing/changed event
SetPrimaryIdAttributeValue results in smaller code.
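To make the first point concrete, the difference looks roughly like this. This is a simplified, hypothetical sketch of generated property code (the Account class, the Name property and the "name" attribute are illustrative), not the literal CrmSvcUtil output:

using System.ComponentModel;
using Microsoft.Xrm.Sdk;

// Sketch of a property as generated against Microsoft.Xrm.Sdk.Entity:
// the INotifyPropertyChanging/INotifyPropertyChanged plumbing is repeated
// in every generated entity class.
public partial class Account : Entity, INotifyPropertyChanging, INotifyPropertyChanged
{
    public Account() : base("account") { }

    public event PropertyChangingEventHandler PropertyChanging;
    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return GetAttributeValue<string>("name"); }
        set
        {
            if (PropertyChanging != null) PropertyChanging(this, new PropertyChangingEventArgs("Name"));
            SetAttributeValue("name", value);
            if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

// With a CrmEntity-style base class, the events (and the "only raise the event when
// the value actually changes" check) live once in the base, so each generated
// property body shrinks to little more than the Get/SetAttributeValue calls.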
Over the years, I've investigated a lot of ways to use code generators and MDD. I've always felt that something is lacking: Patching and changes to the model at runtime.
Patching: If you have a code generator, all your classes should look the same. Now you have a single exception. All code generators so far would require that I modify the template or the template engine to make this work.
Wouldn't it be better if I could apply patches to the result of the code generation step to fix the exceptions?
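For instance, in .NET one common way to approximate this without touching the template is to have the generator emit partial classes and put the hand-written deviation in a separate file that regeneration never touches. A hedged sketch (the Customer class and its members are made up):

// Generated file: overwritten on every run of the code generator.
public partial class Customer
{
    public string Name { get; set; }
}

// Hand-written "patch" file: kept next to the generated one and never regenerated,
// so the single exception to the generated pattern survives a re-run of the generator.
public partial class Customer
{
    public string DisplayName
    {
        get { return string.IsNullOrEmpty(Name) ? "(unnamed)" : Name; }
    }
}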
Well, it depends on how you build your model. In fact, it depends on what code generator you are using, its approach, and what it lets you do.
Creating an exception to a rule (model) is more or less against the nature of MDD, unless the applied modeling approach allows you to add exceptions as modeling entities.
I think ABSE is the only modeling approach that accepts "custom code" as a first-class entity, just like a text or an integer. If you create a template that contains a "CustomCode" parameter, you can later add your exception code only when necessary, without breaking your model rules. This can be used to add or replace code. You just need to specify it in your template.
AtomWeaver is a free implementation of the ABSE modeling methodology.
MDD doesn't work because it is based on a view of the domain and not the entire domain. I mean that MDD usually takes as input an XMI file coming from a UML diagram. The problem is that this diagram is only a view of the domain, so you have many alternatives, and the real world is a lot more complex, especially at the deployment stage.
The only company that has provided me real value in my project is Omondo with EclipseUML. EclipseUML doesn't try to do MDD; it creates UML at the diagram level, live-synchronized with the code. Deployment is handled using stereotypes which are added as Java annotations in the code. I can therefore model, and if I add deployment stereotypes my application can be deployed immediately. If I manually change my code, my model is refactored and all my views are updated. If I want to add documentation, I just add notes in the metamodel. These notes are available live when I click on each element. No more printed documentation is needed, thanks to live navigation, dynamic view creation, etc.
My EclipseUML model is always up to date, and I can deploy it immediately because the Java annotations are live-synchronized between the model, the metamodel, the diagrams and the code. Really cool :-)
So, meta-programming: the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object and, bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
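To make the "injected finder" idea concrete outside of Grails, here is a hedged C# sketch (all class, property and method names are invented) that resolves findByName-style calls at runtime by intercepting method invocations instead of declaring the methods up front:

using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;

public class User { public string Name { get; set; } }

// A tiny dynamic wrapper: calls such as users.FindByName("Ada") are not declared
// anywhere; they are resolved at runtime by pattern-matching the method name,
// similar in spirit to the finders Grails injects.
public class DynamicFinder<T> : DynamicObject
{
    private readonly List<T> items;
    public DynamicFinder(IEnumerable<T> items) { this.items = items.ToList(); }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        if (binder.Name.StartsWith("FindBy") && args.Length == 1)
        {
            var property = typeof(T).GetProperty(binder.Name.Substring("FindBy".Length));
            if (property != null)
            {
                result = items.Where(i => Equals(property.GetValue(i, null), args[0])).ToList();
                return true; // the "injected" finder handled the call
            }
        }
        return base.TryInvokeMember(binder, args, out result);
    }
}

// Usage:
// dynamic users = new DynamicFinder<User>(new[] { new User { Name = "Ada" } });
// var hits = users.FindByName("Ada");   // no FindByName method exists; it is resolved at runtime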
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters, as mentioned by @Sjoerd:
Both getters and setters follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach that I know that natively supports custom code directly as a template parameter.
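A trivial way to picture such a template and its variability parameter is a function that emits the member plus its accessors, with an optional block of custom code spliced into the setter. This is a toy sketch, not any particular MDSD tool's syntax; AccessorTemplate, Expand and the parameter names are invented:

using System;
using System.Text;

// Toy "template": generates a private field plus getter/setter for a given
// type/name, with an optional block of custom code injected into the setter.
public static class AccessorTemplate
{
    public static string Expand(string type, string name, string customSetterCode = "")
    {
        var field = char.ToLower(name[0]) + name.Substring(1);
        var sb = new StringBuilder();
        sb.AppendLine($"private {type} {field};");
        sb.AppendLine($"public {type} Get{name}() {{ return {field}; }}");
        sb.AppendLine($"public void Set{name}({type} value)");
        sb.AppendLine("{");
        if (!string.IsNullOrEmpty(customSetterCode))
            sb.AppendLine("    " + customSetterCode);   // the variability hook
        sb.AppendLine($"    {field} = value;");
        sb.AppendLine("}");
        return sb.ToString();
    }
}

// Usage: the same pattern, applied twice with different variability.
// Console.WriteLine(AccessorTemplate.Expand("string", "Name"));
// Console.WriteLine(AccessorTemplate.Expand("int", "Age", "if (value < 0) throw new ArgumentOutOfRangeException();"));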
Meta-programming is not only adding methods at runtime; it can also be automatically creating code at compile time, i.e. code that generates code.
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be made automatically for most properties.