Working on a BTDF deployment package (starting with the schemas), I'm getting lots of warnings that a previously deployed schema "" has the same target namespace "http://schemas.microsoft.com/2003/10/Serialization/".
These track back to the various copies of ..._schemas_microsoft_com_2003_10_Serialization.xsd, presumably added by referencing multiple web services.
Are these all needed, especially as the content is identical?
Is even one needed?
Similarly, there are multiple copies of the ...Serialization_Arrays.xsd
No, multiple copies of these schemas do not have to be deployed. But...
These schemas are included in projects when you generate schemas for WCF adapter services, though with different .NET type names and namespaces. The only thing they contain is element and type definitions used by the referring schemas.
But not every definition is used, and yes, they do tend to clutter the Schemas list in BizTalk Admin.
So, there are some ways to mitigate this:
Have one "master" Serialization.xsd, and reference it from any other WCF schema. One per Solution is an option too, whichever is most practical.
Remove the reference completely by changing to native XML types. Most of the definitions are redeclarations of native types and likely aren't used. Checking a couple of schemas I have at hand, I see that only the "guid" type is used, and everything still works fine if I change it to xs:string and remove ...Serialization.xsd from the Includes list.
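For example, the change in the referring schema is typically just the type reference (the element name and target namespace below are made up; only the Serialization namespace is the real one):

<!-- Before: the referring schema imports the generated Serialization.xsd -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:ser="http://schemas.microsoft.com/2003/10/Serialization/"
           targetNamespace="http://example.org/MyService"
           elementFormDefault="qualified">
  <xs:import namespace="http://schemas.microsoft.com/2003/10/Serialization/" />
  <xs:element name="RequestId" type="ser:guid" />
</xs:schema>

<!-- After: native type, so the import (and the copied Serialization.xsd) can be dropped -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.org/MyService"
           elementFormDefault="qualified">
  <xs:element name="RequestId" type="xs:string" />
</xs:schema>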
I'm trying to follow DDD principles for my current project. Unfortunately, I have to use RFCs due to technical constraints, so no OData and no REST. It's quite a long question; I hope it's OK to ask it here on Stack Overflow.
In any case, I have an entity class WorkOrder with a list of Operation objects.
I have a WorkOrderRepository class with a SAVE method that receives only a WorkOrder object and is able to save everything (header data, address, etc.) in one go, whether it is a creation, an update or a deletion. The repository hides the BAPI calls from the rest of the code.
Now I want to implement the logic to add/update/remove Operations on the work order object, and I'm not even sure the names I give the methods are correct. Maybe they should be insert/edit/delete... I'm quite confused, since every place I look uses different names.
But most important are my two specific doubts:
Should I have only one RFC that receives all the updates to a WorkOrder entity (header, operations, etc.), or should I create one RFC per operation that handles a single operation at a time? Bear in mind that the UI mockup expects the user to be able to add/delete multiple operations before clicking a SAVE button, that an RFC has an implicit commit, and that, to my knowledge, a DDD entity should always be updated in one call.
Option 1:
FUNCTION ZWORKORDER_HDR_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID)      TYPE AUFNR
    VALUE(I_WORKORDER_HDR_CHG) TYPE ZWORKORDER_HDR_CHG
    VALUE(I_WORKORDER_HDR_UPD) TYPE ZWORKORDER_HDR_UPD "X structure for the BAPI
    VALUE(I_OPERATIONS_CHG)    TYPE ZOPERATIONS_CHG
    VALUE(I_OPERATIONS_UPD)    TYPE ZOPERATIONS_UPD
    VALUE(I_OPERATIONS_DEL)    TYPE ZOPERATIONS_DEL
  EXPORTING
    VALUE(E_ERRORS)            TYPE BAPIRET2_T.
Option 2:
FUNCTION ZWORKORDER_OPERATION_CRT
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION)    TYPE ZOPERATION_CHG
  EXPORTING
    VALUE(E_ERRORS)       TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_UPD
  IMPORTING
    VALUE(I_WORKORDER_ID)  TYPE AUFNR
    VALUE(I_OPERATION_CHG) TYPE ZOPERATION_CHG
    VALUE(I_OPERATION_UPD) TYPE ZOPERATION_UPD
  EXPORTING
    VALUE(E_ERRORS)        TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_OPERATION_DEL
  IMPORTING
    VALUE(I_WORKORDER_ID) TYPE AUFNR
    VALUE(I_OPERATION_ID) TYPE ZOPERATION_ID
  EXPORTING
    VALUE(E_ERRORS)       TYPE BAPIRET2_T.
How should my WorkOrder methods look to handle this? I'm especially confused about the update method, since I'm not sure whether I should first get the existing operation and then update it, or let the parent class do it. But maybe my approaches are completely wrong from the start.
Option 1:
workorder->add_operation( i_operation ). "Pass flat structure from RFC? Or first create an object?
workorder->update_operation( i_operation_chg = i_operation_chg
                             i_operation_upd = i_operation_upd ).
workorder->delete_operation( i_operation_id ).
Option 2:
workorder->add_operation( ).
operation = workorder->get_operation( i_operation_chg->get_id( ) ).
operation->update( i_operation_chg = i_operation_chg
                   i_operation_upd = i_operation_upd ).
operation->delete_operation( i_operation_id ).
The simplest solution is always the best (KISS and YAGNI principles). It doesn't really matter whether you create one or three RFC-enabled function modules, so if you can achieve your goal with one function module, then do it with one.
I think you need two RFC-enabled function modules: one to validate the maintained operations (doing as much validation as possible) without saving anything to the database, and another one, called after the user clicks the SAVE button, to save the whole WorkOrder including the maintained operations (with the complete validation happening again at that point).
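A minimal sketch of what those two interfaces could look like, following the naming style of the question (ZWORKORDER_FULL is a hypothetical structure holding the header plus the operations):

FUNCTION ZWORKORDER_VALIDATE
  IMPORTING
    VALUE(I_WORKORDER) TYPE ZWORKORDER_FULL "header + operations, nothing written to the database
  EXPORTING
    VALUE(E_ERRORS)    TYPE BAPIRET2_T.

FUNCTION ZWORKORDER_SAVE
  IMPORTING
    VALUE(I_WORKORDER) TYPE ZWORKORDER_FULL "the whole aggregate; validated again, then saved and committed
  EXPORTING
    VALUE(E_ERRORS)    TYPE BAPIRET2_T.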
If you don't need an "operation" class for anything else right now, then keep it simple: there is no need to instantiate an object. Note that you could still create an "operation" class with private static methods that makes the "workorder" class its friend (so only that class can use the operation class), just to organize your code better.
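A rough sketch of that organization (all names are hypothetical): the operation class grants friendship to the work order class, so only the work order can reach its private helpers.

CLASS lcl_workorder DEFINITION DEFERRED.

CLASS lcl_operation DEFINITION CREATE PRIVATE FRIENDS lcl_workorder.
  PRIVATE SECTION.
    "Only lcl_workorder can call this; no instances are ever created.
    CLASS-METHODS update_operation
      IMPORTING i_operation_chg  TYPE zoperation_chg
                i_operation_upd  TYPE zoperation_upd
      RETURNING VALUE(rt_errors) TYPE bapiret2_t.
ENDCLASS.

CLASS lcl_operation IMPLEMENTATION.
  METHOD update_operation.
    "map the change/update structures to the BAPI parameters here
  ENDMETHOD.
ENDCLASS.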
PS: although I don't know what "Domain-Driven Design" is, I don't see how your question is related to it; it just looks like simple program design.
We also work with DDD, although luckily not with RFCs but with Gateway/OData. Since REST by definition is stateless, we always commit in our application service layer.
What we do is have three "DDD" building blocks: app_service, domain_service and repository (plus some data containers like aggregates). The app_service in your case would expose the methods create, update and delete, and possibly also validate.
Then I would write four thin RFCs (CRUD + validate) that basically pass the data to the app service.
As for being able to handle multiple updates in each save: we always model our OData (your RFC) after the UI requirements, and then it's the app_service's task to make sense of the data from a system point of view.
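Roughly, a "thin" RFC in this setup does little more than delegate; all names below are hypothetical, and the importing/exporting parameters are assumed to be defined in the function module interface:

FUNCTION ZWORKORDER_UPDATE.
  "No business logic here, just hand the UI payload to the application service.
  DATA(lo_app_service) = NEW zcl_workorder_app_service( ).

  lo_app_service->update(
    EXPORTING
      i_workorder_id   = i_workorder_id
      i_operations_chg = i_operations_chg
    IMPORTING
      e_errors         = e_errors ).
ENDFUNCTION.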
We make heavy use of ABAP Clean Code (it's even part of our ATC checks), and it clearly states that you should have separate methods, which is also an OO best practice.
https://github.com/SAP/styleguides/blob/master/clean-abap/CleanABAP.md#split-methods-instead-of-adding-optional-parameters
In my app I have two instances of two different classes fetching the same Core Data object and grabbing a pointer to it in their respective properties. The two instances then each modify a different attribute of the Core Data object and save the context at different moments. Is this going to create inconsistencies or merge conflicts in my app, or will Core Data manage this process well? I've read about uniquing in the Core Data Programming Guide, but I'm not sure I can extrapolate their example to my case.
Thank you.
This isn't a merge, it's just an update. There is only one instance of the managed object and multiple different pointers to it. If you check the address of the managed object instance you can verify this. So no, there are no issues (so long as you do all this on the same thread that owns the instance of course).
The uniquing you refer to relates to inserting multiple instances that have the same unique identifier, which is a different situation from the one you describe.
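If it helps, here is a small Swift sketch of how you could verify that claim yourself (the "Person" entity and attribute names are placeholders for whatever is in your model):

import CoreData

// Assumes both fetches go through the same context that owns the object.
func verifySingleInstance(in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Person")
    request.fetchLimit = 1

    // Two separate fetches from the same context...
    guard let first = try context.fetch(request).first,
          let second = try context.fetch(request).first else { return }

    // ...hand back the very same managed object instance, not two copies.
    print(first === second)  // true

    // Setting different attributes through different pointers therefore just
    // updates that one instance; saving persists both changes.
    first.setValue("Alice", forKey: "name")
    second.setValue(42, forKey: "age")
    try context.save()
}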
I have several schemas that inherit one or more elements from a collection of 'common' schemas. In this particular instance, I'm importing one of these schemas to make use of a single complex type defined in it.
When I generate the Java objects from the schema, I get my schema types and the element I referenced, as expected; however, I also get classes generated for the 30+ other types from the common schema.
I want to use the common schema because I want to rely on automated builds to update my schema when the common schema changes, but I do not want the extra Java classes generated.
Suggestions?
There's no out-of-the-box approach to achieve what you want. The reason I am offering an opinion here is rather to point out (maybe for others) some issues one needs to take into account no matter which route one goes.
The 'extra' label is not always straightforward. Substitution group members are interesting. In Java, think about a class (A) using an interface (I), and a class (B:I) implementing (I). Some may say there's no dependency between A and B, while others would require B in the distribution. If you replace (I) with a concrete class, things become even less clear - consider that the substitution group head doesn't need to be abstract; or if the type of the substitution group head is anyType (Object in Java).
Moreover, if the XML processing was designed to accommodate xsi:type, it is even harder to tell (by looking at the schema alone) what is expected to work where.
Tools such as QTAssistant (I am associated with it) have a default setting that pulls in all strict dependencies (A and I above), and then either everything that might work (B above) or nothing else. For anything in between, the user has to define manually what goes in the release. This is called automatic XSD refactoring and could easily be used in your scenario.
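To make the A/I/B analogy concrete in schema terms, here is a minimal, hypothetical example (names and namespace invented): the abstract element plays the role of the interface I, and the substitution group member plays the role of B. Nothing else in the schema points at the member explicitly, yet instance documents may still use it wherever the head is allowed, which is why "extra" is hard to pin down.

<!-- common.xsd (hypothetical) -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:common"
           xmlns="urn:example:common"
           elementFormDefault="qualified">

  <!-- Substitution group head: the "interface" I -->
  <xs:element name="item" type="xs:anyType" abstract="true"/>

  <!-- Substitution group member: the "implementation" B -->
  <xs:element name="book" substitutionGroup="item">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>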
I'm planning my first architecture that uses DTOs. I'm now exploring how to map the modified client-side domain objects back to the DTOs that were originally retrieved from the data service. I must map back to the original object graph, instead of instantiating a new one, in order to use WCF Data Services Client Library's change tracking feature.
To put it in general terms, I need a tool that maps instances and (recursively) their sub-instances (collectively called the "source graph") to existing instances and (recursively) sub-instances (collectively called the "target graph") in a manner that is (nearly) 100% convention, rather than configuration, based.
The specific required functionality that I can think of is:
1. Replace single-valued properties within the target graph with their corresponding values from the source graph.
2. Synchronize collection pairs: elements that were added to a collection within the source graph should be added to the corresponding collection within the target graph, and elements removed from a collection within the source graph should be removed from the corresponding collection within the target graph.
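To make requirement 2 concrete, this is roughly the behavior I mean, written out by hand (all names are made up, and in reality the element types would differ between the source and target graphs; the point is that I want a convention-based tool to do this for me, recursively):

using System;
using System.Collections.Generic;
using System.Linq;

static class GraphSync
{
    // Naive, single-type version of "synchronize collection pairs":
    // remove what disappeared from the source, add what is new in it.
    public static void SyncCollection<T>(
        ICollection<T> target,
        IEnumerable<T> source,
        Func<T, object> key)
    {
        var sourceKeys = new HashSet<object>(source.Select(key));

        // Elements removed from the source collection get removed from the target.
        foreach (var removed in target.Where(t => !sourceKeys.Contains(key(t))).ToList())
            target.Remove(removed);

        // Elements added to the source collection get added to the target.
        var targetKeys = new HashSet<object>(target.Select(key));
        foreach (var added in source.Where(s => !targetKeys.Contains(key(s))))
            target.Add(added);
    }
}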
When it comes to mapping DTOs, it seems many people use AutoMapper, so I had assumed this task would be easy with that tool. Looking at the details, though, I have doubts it will fit my requirements: from what I can tell, AutoMapper won't handle #1 very well, and it doesn't look like it will help much with #2 either.
I don't want to try bending AutoMapper to my purposes if it will lead to a lot of configuration code. That would defeat the purpose of using a convention-based tool in the first place. So I'm wondering: what's a better tool for the job?
I'm developing a rather large XML Schema to integrate multiple disparate systems. Each system has messages that consist of multiple fields. I've written the schemas so that each field is represented as a type (either simple or complex), while the entire message is represented as an element with a sequence of multiple types.
When defining types using XSD, what is the convention in terms of how you break up your files?
I've been going down the path of having a separate XSD file for each type that I am defining, then a single XSD file that defines the message and imports the required XSD files.
Here's a simplified example showing the structure (can't post images, no reputation).
Type schemas (fields) have a namespace ending in "types", while message schemas have a namespace ending in "messages".
My reasoning was that it would be easier to compose new types using existing types if they were defined in their own file, and that versioning types might be easier.
Is there a better approach? I'm relatively new to XML and Schemas. Thanks in advance.
It makes no sense to have one schema per type. One schema per domain area, yes, but not one per type.
For instance, you might have one schema to define types related to customers, and another schema to define types related to products. You might have a separate schema to define common types like people and addresses, which might be used by the customer schema and also by the product schema (product sales rep, vendor address).
You might combine these in a schema related to order processing, since it involves orders made by customers for products.
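As a rough sketch of that layout (namespaces and type names invented for illustration), a shared "common" schema imported by a customer schema might look like this:

<!-- common.xsd: domain-neutral types such as addresses -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:common"
           xmlns="urn:example:common"
           elementFormDefault="qualified">
  <xs:complexType name="AddressType">
    <xs:sequence>
      <xs:element name="Street" type="xs:string"/>
      <xs:element name="City"   type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>

<!-- customer.xsd: the customer domain area, reusing the common types -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:c="urn:example:common"
           targetNamespace="urn:example:customer"
           xmlns="urn:example:customer"
           elementFormDefault="qualified">
  <xs:import namespace="urn:example:common" schemaLocation="common.xsd"/>
  <xs:complexType name="CustomerType">
    <xs:sequence>
      <xs:element name="Name"           type="xs:string"/>
      <xs:element name="BillingAddress" type="c:AddressType"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>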
Are the separate XSD files you created completely unrelated to each other? For example, if one XSD file handles the sales domain and another handles the payments domain, you should give each schema a unique, domain-specific targetNamespace. This approach is called "Heterogeneous namespace design", and it is what you are using now. I believe it is best suited to your requirement: since you have multiple disparate systems, it is better to maintain them in separate XSD files with different targetNamespaces. This enhances reusability, reduces redundancy of types/fields, and prevents namespace collisions. However, if you have a large number of imports, they become a challenge to manage, and there will be a dependency on the granularity of the types defined in the imported schemas.
If they are related to each other you should go for "Homogeneous namespace design".
There are two other approaches to XML Schema design, Homogeneous Namespace Design and Chameleon Namespace Design, which may not best suit your specific requirement. You can find further details on the benefits and drawbacks of all three schema design approaches at http://technologyandleadership.com/three-schema-design-approaches/