AutoMapper suggestion required

I have the classes below:
class Contact
{
    string FirstName;
    string LastName;
    List<Phone> ContactNumbers;
}

class Phone
{
    string Number;
    PhoneType Type;
}

enum PhoneType
{
    Home, Work, Fax
}

class Source
{
    Contact Agent;
    Contact Customer;
}
class Destination
{
    string AgentFirstName;
    string AgentLastName;
    string AgentPhoneNumber1;
    string AgentPhoneNumber2;
    string AgentPhoneNumber3;
    PhoneType AgentPhoneType1;
    PhoneType AgentPhoneType2;
    PhoneType AgentPhoneType3;
    string CustomerFirstName;
    string CustomerLastName;
    string CustomerPhoneNumber1;
    string CustomerPhoneNumber2;
    string CustomerPhoneNumber3;
    PhoneType CustomerPhoneType1;
    PhoneType CustomerPhoneType2;
    PhoneType CustomerPhoneType3;
}
I want to map from Source to Destination using AutoMapper. The challenge I see is converting the list of contact numbers into the individual fields on the destination class. Can anyone suggest a way to do this? Thanks in advance.

It is probably easiest to do a custom mapping function, which keeps things simple and readable:
CreateMap<Contact, Destination>().ConvertUsing(c => MapContactToDestination(c));

Destination MapContactToDestination(Contact c)
{
    // logic here for handling conversion
}
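For the Source-to-Destination case in the question, the conversion body might look something like the sketch below. It assumes the members shown above are exposed as public properties, uses a hypothetical MapSourceToDestination helper, and simply drops any contact numbers beyond the third:

CreateMap<Source, Destination>().ConvertUsing(s => MapSourceToDestination(s));

Destination MapSourceToDestination(Source s)
{
    var d = new Destination
    {
        AgentFirstName = s.Agent.FirstName,
        AgentLastName = s.Agent.LastName,
        CustomerFirstName = s.Customer.FirstName,
        CustomerLastName = s.Customer.LastName
    };

    // Spread each contact's phone list over the numbered fields.
    var agent = s.Agent.ContactNumbers;
    if (agent.Count > 0) { d.AgentPhoneNumber1 = agent[0].Number; d.AgentPhoneType1 = agent[0].Type; }
    if (agent.Count > 1) { d.AgentPhoneNumber2 = agent[1].Number; d.AgentPhoneType2 = agent[1].Type; }
    if (agent.Count > 2) { d.AgentPhoneNumber3 = agent[2].Number; d.AgentPhoneType3 = agent[2].Type; }

    var customer = s.Customer.ContactNumbers;
    if (customer.Count > 0) { d.CustomerPhoneNumber1 = customer[0].Number; d.CustomerPhoneType1 = customer[0].Type; }
    if (customer.Count > 1) { d.CustomerPhoneNumber2 = customer[1].Number; d.CustomerPhoneType2 = customer[1].Type; }
    if (customer.Count > 2) { d.CustomerPhoneNumber3 = customer[2].Number; d.CustomerPhoneType3 = customer[2].Type; }

    return d;
}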

Related

C# Expression Bodied Constructor - Is there a way to combine with null checking?

Is there a recommended or preferred way to combine C# 7 "Expression Bodied Constructors" with other activities, such as null checking and exception throwing?
When writing a C# class, I am accustomed to being able to test arguments for nulls (and throw exceptions if needed) in my constructor, like this:-
class Person
{
    private string _name;
    private SomeClass _someClass;

    public Person(string name, SomeClass someClass)
    {
        _name = name ?? throw new ArgumentNullException(nameof(name));
        _someClass = someClass ?? throw new ArgumentNullException(nameof(someClass));
    }
}
I have just learned about Expression Bodied Constructors, where a simple version of my code could look like this:-
class Person
{
    private string _name;
    private SomeClass _someClass;

    public Person(string name, SomeClass someClass)
        => (_name, _someClass) = (name, someClass);
}
The above initially seems appealing because of the potential to reduce the amount of boilerplate code needed to assign arguments to member variables.
However, I seem to have lost the opportunity to include activities such as the aforementioned null checking, as there is no longer a constructor body.
As far as I can tell, the only way around this is to inline code, such as my null-coalesce check, like this :-
...
public Person(string name, SomeClass someClass)
    => (_name, _someClass) =
    (
        name ?? throw new ArgumentNullException(nameof(name)),
        someClass ?? throw new ArgumentNullException(nameof(someClass))
    );
Even though I have attempted to improve legibility through spacing, in my opinion, the above is not as easy to read as the original example - and has not really saved any typing.
Q: Is there a better way that I could approach this, or am I defeating the purpose of this new style of constructor? (i.e. should I just stick with the original approach)
I'm not going to criticize you for trying. 😊
The fact is that both versions generate pretty much the same code.
class Person
{
    private string _name;
    private string _someClass;

    public Person(string name, string someClass)
    {
        _name = name ?? throw new ArgumentNullException(nameof(name));
        _someClass = someClass ?? throw new ArgumentNullException(nameof(someClass));
    }
}
becomes
class Person
{
    private string _name;
    private string _someClass;

    public Person(string name, string someClass)
    {
        if (name == null)
        {
            throw new ArgumentNullException("name");
        }
        _name = name;
        if (someClass == null)
        {
            throw new ArgumentNullException("someClass");
        }
        _someClass = someClass;
    }
}
see it on sharplab.io
and
class Person
{
    private string _name;
    private string _someClass;

    public Person(string name, string someClass)
        => (_name, _someClass) =
        (
            name ?? throw new ArgumentNullException(nameof(name)),
            someClass ?? throw new ArgumentNullException(nameof(someClass))
        );
}
becomes
class Person
{
    private string _name;
    private string _someClass;

    public Person(string name, string someClass)
    {
        if (name == null)
        {
            throw new ArgumentNullException("name");
        }
        if (someClass == null)
        {
            throw new ArgumentNullException("someClass");
        }
        _name = name;
        _someClass = someClass;
    }
}
see it on sharplab.io
I see the "Tuple assignment" style as a nice brief way to say "my class is initialised from these members and nothing else is going on". I can glance at it and immediately know that's what the author intended.
As soon as one more thing is happening (validation in your example), then you should use a full blown constructor. As you say, you can do it, but it reads a lot worse.
Really, we're only using this style because we don't have record types yet. When we have those, I don't think we'll see this cropping up in code so much.
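For comparison, a positional record (available since C# 9) gets the same "initialised from these members and nothing else" guarantee directly from the compiler, which generates the constructor, init-only properties, and value equality; a minimal sketch:

// The compiler emits the constructor and properties; validation would still
// require an explicit constructor body or property declaration.
public record Person(string Name, SomeClass SomeClass);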

ModelMapper Matching Strategy: Standard with Reference Types

Entity Classes:
class User {
    private Name name;
    private int age;
    private String email;
    private Date dob;
    private Address address;
    // No Arguments Constructor, All Arguments Constructor, Setters, Getters and toString
}

class Name {
    private String firstName;
    private String lastName;
    // No Arguments Constructor, All Arguments Constructor, Setters, Getters and toString
}

class Address {
    private String houseNo;
    private String street;
    private String city;
    private Integer pincode;
    // No Arguments Constructor, All Arguments Constructor, Setters, Getters and toString
}
DTO:
class UserDTO {
    private String firstName;
    private String lastName;
    private int age;
    private String email;
    private Date dob;
    private String houseNo;
    private String street;
    private String city;
    private Integer pincode;
    // No Arguments Constructor, All Arguments Constructor, Setters, Getters and toString
}
Code to convert Entity to DTO:
public class ReferenceTypePropertiesMapper {

    @Test
    public void shouldPopulateAllSimpleProperties() {
        User user = createUser();
        ModelMapper modelMapper = new ModelMapper();
        UserDTO userDTO = modelMapper.map(user, UserDTO.class);
        System.out.println("Source : " + user);
        System.out.println("Destination : " + userDTO);
    }

    private User createUser() {
        Name name = new Name("Siva", "Prasad");
        Address address = new Address("1-93", "ABC", "HYD", 123456);
        return new User(name, 29, "Siva@gmail.com", new Date(), address);
    }
}
Output:
Source : User(name=Name(firstName=Siva, lastName=Prasad), age=29, email=Siva@gmail.com, dob=Tue Sep 26 14:38:45 IST 2017, address=Address(houseNo=1-93, street=ABC, city=HYD, pincode=123456))
Destination : UserDTO(firstName=Siva, lastName=Prasad, age=29, email=Siva@gmail.com, dob=Tue Sep 26 14:38:45 IST 2017, houseNo=null, street=null, city=null, pincode=null)
I am using two reference types, Name and Address, in User.java.
While creating the User object, I pass both the Name and Address details. When I try to map the User object to UserDTO, the Name details are mapped successfully, but the Address details are not.
Can anybody help me understand why this is happening, or am I missing anything?
With MatchingStrategies.LOOSE everything works well.
The Loose matching strategy allows for source properties to be loosely matched to destination properties by requiring that only the last destination property in a hierarchy be matched. The following rules apply:
Tokens can be matched in any order
The last destination property name must have all tokens matched
The last source property name must have at least one token matched
The loose matching strategy is ideal to use for source and destination object models with property hierarchies that are very dissimilar. It may result in a higher level of ambiguous matches being detected, but for well-known object models it can be a quick alternative to defining mappings.
This way, it is only necessary to add one line:
@Test
public void shouldPopulateAllSimpleProperties() {
    User user = createUser();
    ModelMapper modelMapper = new ModelMapper();
    modelMapper.getConfiguration().setMatchingStrategy(MatchingStrategies.LOOSE);
    UserDTO userDTO = modelMapper.map(user, UserDTO.class);
    System.out.println("Source : " + user);
    System.out.println("Destination : " + userDTO);
}
Output:
Source : User{name=Name{firstName='Siva', lastName='Prasad'}, age=29, email='Siva@gmail.com', dob=Wed Oct 18 23:44:25 MSK 2017, address=Address{houseNo='1-93', street='ABC', city='HYD', pincode=123456}}
Destination : UserDTO{firstName='Siva', lastName='Prasad', age=29, email='Siva@gmail.com', dob=Wed Oct 18 23:44:25 MSK 2017, houseNo='1-93', street='ABC', city='HYD', pincode=123456}

Are POCO objects just "persistent ignorant" or something more?

RPM1984, in this question, says that POCOs are "persistent ignorant" objects. But he doesn't say how much logic they can hold. For example:
class Person {
    public string FirstName { get; set; }
}
Or this:
class Person {
    private string firstName = string.Empty;

    public string Firstname {
        get
        {
            return this.firstName;
        }
        set
        {
            if (value.Length > 26)
            {
                throw new System.ComponentModel.DataAnnotations.ValidationException("Firstname is too long");
            }
            this.firstName = value;
        }
    }
}
Both are "persistent ignorant". The first one is for sure a POCO class. But is the second one also a valid POCO? It has some logic, but it could be persisted without problems, and its logic is nothing more than validation. Can it be considered a POCO?
Thanks
Yes, the second one is a valid POCO, because it doesn't use any persistence-specific detail. The whole point of POCOs is to say that a certain object doesn't depend on a db access library. If, for example, you decorated Person with an EF-specific attribute, you would have to reference EF everywhere you used that class.
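For illustration only, a hypothetical persistence-dependent version might look like the sketch below; the attributes are exactly the kind of EF-specific detail that disqualifies a class as a POCO:

using System.ComponentModel.DataAnnotations.Schema;

[Table("People")]               // ties the class to a table name
public class Person
{
    [Column("first_name")]      // ties the property to a database column
    public string FirstName { get; set; }
}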

Removing properties with null values

In my DocumentDb documents, I don't want to include properties with NULL values. For example, I have the following POCO class.
public class Person
{
    [JsonProperty(PropertyName = "id")]
    public int PersonId { get; set; }

    [JsonProperty(PropertyName = "firstName")]
    public string FirstName { get; set; }

    [JsonProperty(PropertyName = "middleName")]
    public string MiddleName { get; set; }

    [JsonProperty(PropertyName = "lastName")]
    public string LastName { get; set; }
}
Some people don't have middle names and when I save a person's document in my collection, I don't want the middle name to be included. Currently, a person without a middle name is saved as:
{
    "id": 1234,
    "firstName": "John",
    "middleName": null,
    "lastName": "Smith"
}
Is this normal behavior? If not, how do I NOT include the middle name property with a NULL value in my document?
P.S. All serialization/deserialization is handled by JSON.NET
You can do that when you initialize the Cosmos client; there's a serialization option similar to the JSON.NET one:
CosmosClient client = new CosmosClient(yourConnectionString, new CosmosClientOptions()
{
    SerializerOptions = new CosmosSerializationOptions()
    {
        IgnoreNullValues = true,
    }
});
I think I found the answer. Looks like I can tell JSON.NET to ignore properties with NULL values using
NullValueHandling = NullValueHandling.Ignore
Here's the documentation:
http://james.newtonking.com/archive/2009/10/23/efficient-json-with-json-net-reducing-serialized-json-size
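For example (a sketch), the setting can be applied per property or globally, depending on how much of the serializer configuration you control:

// Per property:
[JsonProperty(PropertyName = "middleName", NullValueHandling = NullValueHandling.Ignore)]
public string MiddleName { get; set; }

// Or globally, when you call JSON.NET yourself:
var settings = new JsonSerializerSettings { NullValueHandling = NullValueHandling.Ignore };
string json = JsonConvert.SerializeObject(person, settings);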

How to execute ranged query in cassandra with astyanax and composite column

I am developing a blog using cassandra and astyanax. It is only an exercise of course.
I have modelled the CF_POST_INFO column family in this way:
private static class PostAttribute {
    @Component(ordinal = 0)
    UUID postId;
    @Component(ordinal = 1)
    String category;
    @Component(ordinal = 2)
    String name;

    public PostAttribute() {}

    private PostAttribute(UUID postId, String category, String name) {
        this.postId = postId;
        this.category = category;
        this.name = name;
    }

    public static PostAttribute of(UUID postId, String category, String name) {
        return new PostAttribute(postId, category, name);
    }
}
private static AnnotatedCompositeSerializer<PostAttribute> postSerializer = new AnnotatedCompositeSerializer<>(PostAttribute.class);

private static final ColumnFamily<String, PostAttribute> CF_POST_INFO =
    ColumnFamily.newColumnFamily("post_info", StringSerializer.get(), postSerializer);
And a post is saved in this way:
MutationBatch m = keyspace().prepareMutationBatch();
ColumnListMutation<PostAttribute> clm = m.withRow(CF_POST_INFO, "posts")
    .putColumn(PostAttribute.of(post.getId(), "author", "id"), post.getAuthor().getId().get())
    .putColumn(PostAttribute.of(post.getId(), "author", "name"), post.getAuthor().getName())
    .putColumn(PostAttribute.of(post.getId(), "meta", "title"), post.getTitle())
    .putColumn(PostAttribute.of(post.getId(), "meta", "pubDate"), post.getPublishingDate().toDate());

for (String tag : post.getTags()) {
    clm.putColumn(PostAttribute.of(post.getId(), "tags", tag), (String) null);
}

for (String category : post.getCategories()) {
    clm.putColumn(PostAttribute.of(post.getId(), "categories", category), (String) null);
}
The idea is to use rows as time buckets (one row per month or year, for example).
Now, if I want to get the last 5 posts, how can I do a range query for that? I can execute a range query based on the post id (UUID), but I don't know the available post ids without doing another query to get them. What are the Cassandra best practices here?
Any suggestion about the data model is welcome, of course; I'm very new to Cassandra.
If your use case works the way I think it works, you could modify your PostAttribute so that the first component is a TimeUUID. That way you can store the posts as time-series data and easily pull the oldest 5 or newest 5 using the standard techniques. Anyway, here's a sample of what it would look like to me; you don't really need to make multiple columns if you're already using composites.
public class PostInfo {
    @Component(ordinal = 0)
    protected UUID timeUuid;
    @Component(ordinal = 1)
    protected UUID postId;
    @Component(ordinal = 2)
    protected String category;
    @Component(ordinal = 3)
    protected String name;
    @Component(ordinal = 4)
    protected UUID authorId;
    @Component(ordinal = 5)
    protected String authorName;
    @Component(ordinal = 6)
    protected String title;
    @Component(ordinal = 7)
    protected Date published;

    public PostInfo() {}

    private PostInfo(final UUID postId, final String category, final String name, final UUID authorId, final String authorName, final String title, final Date published) {
        this.timeUuid = TimeUUIDUtils.getUniqueTimeUUIDinMillis();
        this.postId = postId;
        this.category = category;
        this.name = name;
        this.authorId = authorId;
        this.authorName = authorName;
        this.title = title;
        this.published = published;
    }

    public static PostInfo of(final UUID postId, final String category, final String name, final UUID authorId, final String authorName, final String title, final Date published) {
        return new PostInfo(postId, category, name, authorId, authorName, title, published);
    }
}
private static AnnotatedCompositeSerializer<PostInfo> postInfoSerializer = new AnnotatedCompositeSerializer<>(PostInfo.class);

private static final ColumnFamily<String, PostInfo> CF_POSTS_TIMELINE =
    ColumnFamily.newColumnFamily("post_info", StringSerializer.get(), postInfoSerializer);
You should save it like this:
MutationBatch m = keyspace().prepareMutationBatch();
ColumnListMutation<PostInfo> clm = m.withRow(CF_POSTS_TIMELINE, "all" /* or whatever makes sense for you, such as year or month */)
    .putColumn(PostInfo.of(post.getId(), post.getCategory(), post.getName(), post.getAuthor().getId(), post.getAuthor().getName(), post.getTitle(), post.getPublishedOn()), new byte[0] /* the composite column name carries the data, so the value can stay empty */);
m.execute();
Then you could query like this:
OperationResult<ColumnList<PostInfo>> result = getKeyspace()
.prepareQuery(CF_POSTS_TIMELINE)
.getKey("all" /* or whatever makes sense like month, year, etc */)
.withColumnRange(new RangeBuilder()
.setLimit(5)
.setReversed(true)
.build())
.execute();
ColumnList<PostInfo> columns = result.getResult();
for (Column<PostInfo> column : columns) {
// do what you need here
}

Resources