What mappings is AutoMapper applying?

I have a problem where a mapping works when applied as the child of one object but not another and I am clueless how to debug this.
I map a complex object such as:
ParentType1
-> ChildType1
-> GrandchildType1
PropertyA
which is transformed to
ParentType2
-> ChildType2
-> GrandchildType2
PropertyB
PropertyB is populated through a tonne of mapping files (I inherited this code), and it works fine.
I have another object
ParentType3
-> ChildType3
-> GrandchildType1
PropertyA
which I am trying to map to another object
ParentType4
-> ChildType4
-> GrandchildType2
PropertyB
Here PropertyB is empty; the mapping doesn't work. In both cases GrandchildType1 is being converted to GrandchildType2, but the transform from GrandchildType1 to GrandchildType2 is not being applied in the second scenario. How can I tell which mappings are being applied in the first case so I can compare with the second? The application has thousands of lines of code, so I am not about to ditch AutoMapper, and if I post the mappings here they will take up hundreds of lines.
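One way to inspect this, assuming a reasonably recent AutoMapper version: the configuration can dump the execution plan it compiled for a given type pair, which shows exactly which member maps are wired in. A minimal sketch (ToReadableString comes from the separate AgileObjects.ReadableExpressions package):
// Dump the compiled mapping plan for each pair, then diff the two outputs.
var plan1 = mapper.ConfigurationProvider
    .BuildExecutionPlan(typeof(ParentType1), typeof(ParentType2));
var plan2 = mapper.ConfigurationProvider
    .BuildExecutionPlan(typeof(ParentType3), typeof(ParentType4));
Console.WriteLine(plan1.ToReadableString());
Console.WriteLine(plan2.ToReadableString());

// Also worth running once at startup: fails fast on unmapped members.
mapper.ConfigurationProvider.AssertConfigurationIsValid();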

Related

Use JOOQ Multiset with custom RecordMapper - How to create Field<List<String>>?

Suppose I have two tables, USER_GROUP and USER_GROUP_DATASOURCE. I have a classic relation where one userGroup can have multiple dataSources, and a DataSource is simply a String.
For various reasons (mainly compatibility with the rest of the codebase and being explicit about what's happening), I have a custom RecordMapper creating a Java UserGroup POJO. This mapper sometimes creates POJOs containing data only from the USER_GROUP table, and sometimes also the left-joined dataSources.
Currently, I am trying to write the Multiset query along with the custom record mapper. My query thus far looks like this:
List<UserGroup> userGroups = ctx
    .select(
        asterisk(),
        multiset(select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
            .from(USER_GROUP_DATASOURCE)
            .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID))
        ).as("datasources").convertFrom(r -> r.map(Record1::value1))
    )
    .from(USER_GROUP)
    .where(condition)
    .fetch(new UserGroupMapper());
Now my question is: How to create the UserGroupMapper? I am stuck right here:
public class UserGroupMapper implements RecordMapper<Record, UserGroup> {
    @Override
    public UserGroup map(Record rec) {
        UserGroup grp = new UserGroup(rec.getValue(USER_GROUP.ID),
            rec.getValue(USER_GROUP.NAME),
            rec.getValue(USER_GROUP.DESCRIPTION),
            javaParseTags(rec.getValue(USER_GROUP.TAGS))
        );
        // Convention: if we have an additional field "datasources", we assume
        // it to be a list of dataSources to be filled in
        if (rec.indexOf("datasources") >= 0) {
            // How to make `rec.getValue` return my List<String>????
            List<String> dataSources = ?????
            grp.dataSources.addAll(dataSources);
        }
        return grp;
    }
}
My guess is to have something like List<String> dataSources = rec.getValue(..) where I pass in a Field<List<String>>, but I have no clue how I could create such a Field<List<String>> with something like DSL.field().
How to get a type-safe reference to your field from your RecordMapper
There are mostly two ways to do this:
1) Keep a reference to your multiset() field definition somewhere, and reuse that. Keep in mind that every jOOQ query is a dynamic SQL query, so you can use this feature of jOOQ to assign arbitrary query fragments to local variables (or return them from methods), in order to improve code reuse. A sketch of this follows below.
2) Just raw-type-cast the value, and don't care about type safety. It's always an option, even if not the cleanest one.
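A minimal sketch of the first option, reusing the multiset projection from the question (the variable name is an assumption):
// Keep the multiset projection in a variable so both the query and the
// mapper can reference the same type-safe field.
Field<List<String>> datasourcesField =
    multiset(select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
        .from(USER_GROUP_DATASOURCE)
        .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID)))
    .as("datasources")
    .convertFrom(r -> r.map(Record1::value1));

// In UserGroupMapper.map(Record rec):
List<String> dataSources = rec.get(datasourcesField);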
How to improve your query
Unless you're reusing that RecordMapper several times for different types of queries, why not use Java's type inference instead? The main reason you're not getting type information in your output is your asterisk() usage. But what if you did this instead:
List<UserGroup> userGroups = ctx
    .select(
        USER_GROUP, // Instead of asterisk()
        multiset(
            select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
            .from(USER_GROUP_DATASOURCE)
            .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID))
        ).as("datasources").convertFrom(r -> r.map(Record1::value1))
    )
    .from(USER_GROUP)
    .where(condition)
    .fetch(r -> {
        UserGroupRecord ug = r.value1();
        List<String> list = r.value2(); // Type information available now
        // ...
    });
The above uses jOOQ 3.17+'s support for Table as SelectField; there are other ways, e.g. in jOOQ 3.16 you can use row(USER_GROUP.fields()) instead.
The important part is that you avoid the asterisk() expression, which removes type safety. You could even convert the USER_GROUP projection to your UserGroup type using USER_GROUP.convertFrom(r -> ...) right where you project it:
List<UserGroup> userGroups = ctx
    .select(
        USER_GROUP.convertFrom(r -> ...),
        // ...
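Put together, a sketch of what that could look like (the UserGroup constructor, the generated record getters, and the public dataSources field are assumptions carried over from the question):
List<UserGroup> userGroups = ctx
    .select(
        USER_GROUP.convertFrom(ug -> new UserGroup(ug.getId(),
            ug.getName(), ug.getDescription(), javaParseTags(ug.getTags()))),
        multiset(select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
            .from(USER_GROUP_DATASOURCE)
            .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID))
        ).as("datasources").convertFrom(r -> r.map(Record1::value1))
    )
    .from(USER_GROUP)
    .where(condition)
    .fetch(r -> {
        UserGroup grp = r.value1();         // already a UserGroup
        grp.dataSources.addAll(r.value2()); // attach the multiset result
        return grp;
    });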

How can I generate the list of `EntityField`s for a `PersistEntity`?

I was trying to implement some form of repsertBy: a repsert where the key is a provided Unique, along the lines of getBy, upsertBy, etc.
My approach: implement it on top of upsertBy. Now, upsertBy takes a unique constraint, a record, and a list of changes to apply in case of a unique collision. To implement repsertBy, I'd like that list of changes to be “assign all fields to the new value”.
repsertBy :: (MonadIO m, PersistRecordBackend record backend)
=> Unique record -> record
-> ReaderT backend m (Entity record)
repsertBy unique record = upsertBy unique record [ entityField =. value | … ]
And there I'm stuck.
I can generate the list of values by calling toPersistValue on the record's toPersistFields. But where can I get the EntityFields from?
I'd have expected them to be available somewhere from the entity definition at entityDef, but I haven't found any as of now. I've tried comparing with actual backends' implementations of replace and upsert, but only found SQL-level string banging.
I'm currently spelling out the fields by hand, but at some point I'm going to add a field to the entity and forget to update it in repsertBy. Is there any way to access the EntityFields?
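For reference, a minimal sketch of the hand-spelled variant mentioned above, for a hypothetical User entity with fields name and age (entity and field names are invented here):
-- The per-entity version: every field listed by hand, which is exactly
-- what a generic repsertBy should make unnecessary.
repsertByUser :: MonadIO m => Unique User -> User -> ReaderT SqlBackend m (Entity User)
repsertByUser unique user =
  upsertBy unique user
    [ UserName =. userName user
    , UserAge  =. userAge user
    ]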

Spring Integration aggregator's release strategy based on last modified

I'm trying to implement the following scenario:
I get a bunch of files that share a common file name pattern, e.g. doc0001_page0001, doc0001_page0002, doc0001_page0003, doc0002_page0001 (where doc0001 is one document consisting of 3 pages that I need to merge, and doc0002 has only 1 page).
I want to aggregate them in such a way that a group is released only once all of the files for a specific document have been gathered (doc0001 after 3 files were picked up, doc0002 after 1 file).
My idea was to read the files in alphabetical order and release a group 2 seconds after it was last modified (g.getLastModified() is smaller than the current time minus 2 seconds).
I've tried the following without success:
return IntegrationFlows.from(Files.inboundAdapter(tmpDir.getRoot())
        .patternFilter("*.json")
        .useWatchService(true)
        .watchEvents(FileReadingMessageSource.WatchEventType.CREATE,
                FileReadingMessageSource.WatchEventType.MODIFY),
        e -> e.poller(Pollers.fixedDelay(100)
                .errorChannel("filePollingErrorChannel")))
    .enrichHeaders(h -> h.headerExpression("CORRELATION_PATTERN",
            "headers[" + FileHeaders.FILENAME + "].substring(0,7)")) // docxxxx.length()
    .aggregate(a -> a.correlationExpression("headers['CORRELATION_PATTERN']")
            .releaseStrategy(g -> g.getLastModified() < System.currentTimeMillis() - 2000))
    .channel(MessageChannels.queue("fileReadingResultChannel"))
    .get();
Changing the release strategy to the following also didn't work (note that, as originally written, the Stream was consumed twice — once by count() and once by skip() — so the messages are collected into a list first here):
.aggregate(a -> a.correlationExpression("headers['CORRELATION_PATTERN']")
    .releaseStrategy(g -> {
        List<Message<?>> messages = new ArrayList<>(g.getMessages());
        Long timestamp = (Long) messages.get(messages.size() - 1)
            .getHeaders()
            .get(MessageHeaders.TIMESTAMP);
        System.out.println("Timestamp: " + timestamp);
        return timestamp.longValue() < System.currentTimeMillis() - 2000;
    }))
Am I misunderstanding the release strategy concept?
Also, is it possible to print something out from the releaseStrategy block? I wanted to compare the timestamp (see System.out.println("Timestamp: " + timestamp);)
Right: since you don't know the whole sequence for the message group, you don't have any choice other than to use a groupTimeout. The regular releaseStrategy is consulted only when a message arrives at the aggregator. Since at the point of one message you don't have enough info to release the group, it is going to sit in the group store forever.
The groupTimeout option was introduced on the aggregator especially for this kind of use case, when we definitely would like to release a group without enough messages to group normally.
You may consider using a groupTimeoutExpression instead of a constant-based groupTimeout. The MessageGroup is the root evaluation context object for the SpEL expression, so you will be able to access the mentioned lastModified through it; an illustrative sketch follows below.
The .sendPartialResultOnExpiry(true) is the right option to deal with here.
See more info in the docs: https://docs.spring.io/spring-integration/reference/html/#agg-and-group-to
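A sketch of that idea (the exact SpEL expression here is an assumption; the expression result is taken as the number of milliseconds to wait before expiring the group):
.aggregate(a -> a.correlationExpression("headers['CORRELATION_PATTERN']")
    // release 2s after the group was last modified, even if incomplete
    .groupTimeoutExpression("lastModified + 2000 - T(System).currentTimeMillis()")
    .sendPartialResultOnExpiry(true))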
I found a solution with a different approach. I still don't understand why the one above wasn't working.
I've also found a cleaner way of defining the correlation function.
IntegrationFlows.from(Files.inboundAdapter(tmpDir.getRoot())
        .patternFilter("*.json")
        .useWatchService(true)
        .watchEvents(FileReadingMessageSource.WatchEventType.CREATE,
                FileReadingMessageSource.WatchEventType.MODIFY),
        e -> e.poller(Pollers.fixedDelay(100)))
    .enrichHeaders(h -> h.headerFunction(IntegrationMessageHeaderAccessor.CORRELATION_ID,
            m -> ((String) m.getHeaders().get(FileHeaders.FILENAME)).substring(0, 17)))
    .aggregate(a -> a.groupTimeout(2000)
            .sendPartialResultOnExpiry(true))
    .channel(MessageChannels.queue("fileReadingResultChannel"))
    .get();

GXT re-arrange data array index in TreeStore

Currently, I'm using GXT 3.0.6.
I have a TreeStore, let's call it "treeStore", with model data "ParentDto".
private TreeStore<ParentDto> treeStore;
treeStore = new TreeStore<ParentDto>(new ModelKeyProvider<ParentDto>() {
    @Override
    public String getKey(ParentDto item) {
        return String.valueOf(item.getParentId());
    }
});
Inside ParentDto there is a list of ChildDto. If a ParentDto has a list of ChildDto, I want to show it in a tree grid. I use the basic tree grid from this link:
https://www.sencha.com/examples/#ExamplePlace:basictreegrid
Using that reference, adding 1 ParentDto works fine; the problem is when I add many ParentDto objects.
Here is my code for adding data into the treeStore
public void fillTreeStore(List<ParentDto> listParent) {
    treeStore.clear();
    for (ParentDto parentDto : listParent) {
        treeStore.add(parentDto);
        if (parentDto.getListChild().size() > 0) {
            for (ChildDto childDto : parentDto.getListChild()) {
                treeStore.add(parentDto, childDto);
            }
        }
    }
}
In my case, I only need a 1-level parent-child tree, so this code is enough.
I tried to debug my code using this expression:
treeStore.getAll().get(index);
When I add 1 ParentDto (parentA) which has 1 child (childA), the result will be:
treeStore.getAll().get(0) -> contain parentA
treeStore.getAll().get(1) -> contain childA
But if I add 2 ParentDto objects (parentA, parentB), each with 1 child (childA, childB), the result will be:
treeStore.getAll().get(0) -> contain parentA
treeStore.getAll().get(1) -> contain parentB
treeStore.getAll().get(2) -> contain childA
treeStore.getAll().get(3) -> contain childB
But in the grid, the data is shown perfectly fine:
row 1 : parentA (this row can expand)
row 2 : childA (the expanded row from parentA)
row 3 : parentB (this row can expand)
row 4 : childB (the expanded row from parentB)
I need to render an icon if the data is a "parent", so I use this code:
(icon_variable).addBeforeRenderIconCellEventHandler(new BeforeRenderIconCellEventHandler() {
    @Override
    public void onBeforeRenderIconCell(BeforeRenderIconCellEvent event) {
        if (treeStore.getParent(treeStore.get(event.getSelectedRowIndex())) == null) {
            // render icon here
        }
    }
});
The problem is at this code
treeStore.get(event.getSelectedRowIndex())
When parentB is added, it triggers the addBeforeRenderIconCellEventHandler method. event.getSelectedRowIndex() returns the row index from the "grid's perspective". At the second row, which from the grid's perspective is childA, event.getSelectedRowIndex() returns 1. But from the "treeStore's perspective", index 1 is parentB, so my icon rendering is messed up.
That's why the result I need in the treeStore is like this:
treeStore.getAll().get(0) -> contain parentA
treeStore.getAll().get(1) -> contain childA
treeStore.getAll().get(2) -> contain parentB
treeStore.getAll().get(3) -> contain childB
My solution:
To solve this problem, for now, I use 2 stores: the first one is the TreeStore and the second one is a ListStore. Each time a parent and child are added, I insert them into both the TreeStore and the ListStore. In the ListStore, I keep the parent's and child's indexes always matching the grid's perspective, so that whenever addBeforeRenderIconCellEventHandler is triggered, I use the ListStore to get the data; a rough sketch follows below.
In my opinion this solution is not good enough, but because in my case at most 50 items can be added to the store, it's enough.
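A sketch of that workaround (the listStore variable and its wiring are assumptions; the point is that the ListStore mirrors the grid's row order):
// Fill both stores so the ListStore index matches the grid row order.
treeStore.add(parentDto);
listStore.add(parentDto);
for (ChildDto childDto : parentDto.getListChild()) {
    treeStore.add(parentDto, childDto);
    listStore.add(childDto); // child lands right after its parent
}

// In the render handler, resolve the row against the grid-ordered ListStore:
if (treeStore.getParent(listStore.get(event.getSelectedRowIndex())) == null) {
    // render parent icon here
}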
It looks like this is the default behavior. You didn't say exactly what you are trying to do, but my guess is you can do it with the methods they provide. I'm guessing you are trying to traverse the tree by looking at the parent and then all of its children before moving on to the next parent. Something like this would do it:
for (ParentDto parent : treeStore.getRootItems()) {
    for (ChildDto child : treeStore.getChildren(parent)) {
        // parent and all of its children are visited in grid order here
    }
}

Camel custom component: perform two different actions

I just want to know whether I can do the following with a custom component.
1) I created a sample component:
someComponent://foo ---> what does this foo refer to? Can I have any string there? What does it denote?
2) Consider the route below:
from("some blah")
    .to("someCustomComponent://action1")
    .to("someCustomComponent://action2");
Idea: I want to perform two different actions on the above, kind of like two different methods.
Is the above possible?
The notation for your custom component in Apache Camel can be described as follows:
someComponent://instance?parm1=foo&parm2=bar
The instance part can be pretty much anything you want to uniquely identify the endpoint.
You can derive from DefaultComponent and implement its methods. The signature of the createEndpoint method looks like this:
protected Endpoint createEndpoint(final String uri, String remaining,
Map<String, Object> parameters) throws Exception
So for the endpoint someComponent://instance?parm1=foo&parm2=bar:
uri = someComponent://instance?parm1=foo&parm2=bar
remaining = instance
parameters = (Map) parm1 -> foo, parm2 -> bar
Therefore, yes! You can easily denote the action you want, for example as a parameter such as:
someComponent://instance?action=something
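A minimal sketch of a component along these lines (SomeEndpoint and setAction are invented names; only createEndpoint and setProperties come from the Camel API). Here the remaining part of the URI is kept as the action, so action1 and action2 in the route above end up on two distinct endpoints:
public class SomeComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(final String uri, String remaining,
            Map<String, Object> parameters) throws Exception {
        SomeEndpoint endpoint = new SomeEndpoint(uri, this);
        endpoint.setAction(remaining);       // e.g. "action1" or "action2"
        setProperties(endpoint, parameters); // bind ?key=value parameters
        return endpoint;
    }
}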
