Just wondering if this is possible:
Say I have a ViewModel for comparing an old and new entity.
public class FooComparison
{
    public string Name { get; set; }
    public string OldName { get; set; }
    public int Age { get; set; }
    public int OldAge { get; set; }
    // ...
}
I want to load 2 instances of Foo from the database and populate the FooComparison with the details from both instances.
For now I have Mapper.CreateMap<Foo, FooComparison>(), which populates Name, Age, etc. from the first entity. Is there an easy way to populate the Oldxxx properties from the second entity without looping and manually updating them?
My suggestion would be to define a mapping from Tuple<Foo, Foo> to FooComparison:
Mapper.CreateMap<Tuple<Foo, Foo>, FooComparison>()
    .ConvertUsing(x => new FooComparison
    {
        Name = x.Item2.Name,
        OldName = x.Item1.Name,
        Age = x.Item2.Age,
        OldAge = x.Item1.Age,
        // ...
    });
Then use it like this:
Foo oldFoo = ...;
Foo newFoo = ...;
FooComparison comparison = Mapper.Map<FooComparison>(Tuple.Create(oldFoo, newFoo));
I appreciate that this loses the "auto" part of AutoMapper, but really the big benefit you get from AutoMapper is that this mapping is defined in just one place in your software; the auto part matters less.
I have personally found this way of mapping via Tuple to work very well.
We have the entity below, inherited from TableEntity:
public class LinkEntity : TableEntity
{
    public string LinkKey { get; set; }
    public string LinkName { get; set; }
    public int LinkValue { get; set; }
    public string LinkId
    {
        get { return PartitionKey; }
        set { PartitionKey = value; }
    }

    // Parameterless constructor, required by the Table SDK to deserialize entities.
    public LinkEntity() { }

    public LinkEntity(Link link)
    {
        PartitionKey = link.LinkId;
        RowKey = link.LinkKey;
        LinkValue = link.Value;
        LinkName = link.LinkName;
    }
}
I have an API that adds the above entity using a POST with the steps below. The linkValue of Link is null.
var cloudTable = cloudTableClient.GetTableReference(LinkTable);
await cloudTable.CreateIfNotExistsAsync();
var postOperation = TableOperation.Insert(new LinkEntity(link));
await cloudTable.ExecuteAsync(postOperation);
But when I do a GET on the above, I receive linkValue as null again.
Hence, I don't want to add this value (or a column/property for the entity) in table storage when the value is null.
I cannot get rid of the linkValue property completely because it is used by another API, where it is a required field. Any advice would be appreciated.
I think there should be some way to add required fields and ignore or remove columns completely from entities, since tables in table storage are schemaless.
TL;DR;
Please make the LinkValue property nullable. That should solve the problem. So your entity definition would be:
public class LinkEntity : TableEntity
{
    public string LinkKey { get; set; }
    public string LinkName { get; set; }
    public int? LinkValue { get; set; }
    public string LinkId
    {
        get { return PartitionKey; }
        set { PartitionKey = value; }
    }

    // Parameterless constructor, required by the Table SDK to deserialize entities.
    public LinkEntity() { }

    public LinkEntity(Link link)
    {
        PartitionKey = link.LinkId;
        RowKey = link.LinkKey;
        LinkValue = link.Value;
        LinkName = link.LinkName;
    }
}
Longer Version (Somewhat) :)
As you rightly mentioned, Azure Tables are schemaless. Another important thing to understand is that there's no concept of null values in an entity in Azure Tables: either an attribute is present in an entity or it is not.
By keeping int as the data type (which has a default value of 0) for your LinkValue attribute, even if you don't provide any value, this attribute will be initialized with the default value, and that gets stored.
By making the data type a nullable int, if you don't provide any value for this attribute, it won't get initialized and will be ignored by the SDK when the entity gets serialized.
However you will need to ensure that the application which consumes this entity (i.e. the receiving end) does not assume that the value will always be present in this attribute and should be prepared to handle null values.
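On the receiving end, a minimal sketch of such a guard (the LinkDto class and helper here are hypothetical, standing in for whatever the consuming application deserializes into):

```csharp
// Hypothetical DTO on the consuming side; LinkValue may be absent in the stored entity.
public class LinkDto
{
    public int? LinkValue { get; set; }
}

public static class LinkDtoExtensions
{
    // Fall back to a default when the attribute was never stored.
    public static int EffectiveLinkValue(this LinkDto dto, int fallback = 0)
    {
        return dto.LinkValue ?? fallback;
    }
}
```

A consumer then calls `dto.EffectiveLinkValue()` instead of assuming the value is always present.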
I am using multiple aggregate roots inside a DDD bounded context.
For example
public class OrderAggregate
{
public int ID {get;set;}
public string Order_Name {get;set;}
public int Created_By_UserID {get;set;}
}
public class UserAggregate
{
public int ID {get;set;}
public string Username {get;set;}
public string First_Name {get;set;}
public string Last_Name {get;set;}
}
I am using a relational SQL database to persist the domain objects. Each aggregate root matches one repository.
In case I would like to find an order that was created by John Doe (a search across multiple aggregates), what would be the DDD way to go?
Add First_Name and Last_Name into OrderAggregate in order to add a FindByUserFirstLastName method to OrderRepository, but that could raise data consistency issues between the two aggregate roots.
Create a raw SQL query and access the DB directly in order to span the search across "repositories".
Use "finders" in order to join entities directly from the DB.
Replicate the data necessary for the query into a new aggregate root, such as:
public class QueryOrderAggregate
{
public int ID { get; set; }
public string Order_Name { get; set; }
public int Created_By_UserID { get; set; }
public string First_Name { get; set; }
public string Last_Name { get; set; }
}
In case I would like to find an order that was created by John Doe (a search across multiple aggregates), what would be the DDD way to go?
Almost the same way that it goes with accessing an aggregate...
You create a Repository that provides a view/report (whatever the name for it is in your domain). It probably uses the UserId as the key to identify the report. The implementation of the repository can do whatever makes sense; a SQL join is a reasonable starting point.
The View/Report is basically a value type; it is immutable and can provide data, but doesn't have any methods or any direct access to the aggregate roots. For example, the view might include the OrderId, but to actually get at the order aggregate root you would have to invoke a method on the repository.
A view that spans multiple aggregates is perfectly acceptable, because you can't actually modify anything using the view. Changes to the underlying state still go through the aggregate roots, which provide the consistency guarantees.
The view is a representation of a stale snapshot of your data. Consumers should not be expecting it to magically update -- if you want something more recent, go back to the repository for a new copy.
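As a sketch, the view and its finder might look like this (all the names here are illustrative, not prescribed by DDD):

```csharp
using System.Collections.Generic;

// An immutable value type representing the cross-aggregate view.
public sealed class OrderByUserView
{
    public int OrderId { get; }
    public string Order_Name { get; }
    public string First_Name { get; }
    public string Last_Name { get; }

    public OrderByUserView(int orderId, string orderName, string firstName, string lastName)
    {
        OrderId = orderId;
        Order_Name = orderName;
        First_Name = firstName;
        Last_Name = lastName;
    }
}

// The finder/repository for the view; its implementation is free to run
// a SQL join across the Orders and Users tables.
public interface IOrderByUserFinder
{
    IReadOnlyList<OrderByUserView> FindByUserName(string firstName, string lastName);
}
```

Changes still go through OrderRepository and UserRepository; the finder only ever hands out these read-only snapshots.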
I have two domain entities:
class Foo {
int Id {get;}
string Name {get;}
}
class FooBar : Foo {
RuleEnum Rule {get;}
string NewName {get;}
int OrderId {get;}
}
And single persistence model for them:
class FooInDb {
int Id {get;set;}
int Rule {get;set;}
string Name {get;set;}
string NewName {get;set;}
int? OrderId {get;set;}
}
I have an application service converting API-binding models to domain models (from IEnumerable<Api.Foo> to IEnumerable<FooBar>). Validation of some business rules occurs in the FooBar ctor. What I need next is to load all FooInDbs from the db and update their fields, according to:
void Update(FooBar fooBar, FooInDb fooInDb)
{
fooInDb.Rule = fooBar.Rule;
if (fooBar.Rule == RuleEnum.New){
fooInDb.NewName = fooBar.NewName;
fooInDb.OrderId = null;
}
else {
fooInDb.NewName = null;
fooInDb.OrderId = fooBar.OrderId;
}
}
Should this decision be placed in some domain service? (Personally I do not want the domain-service project to reference the persistence-models project.) If not, how appropriate would it be to place it in the repository and call it from the application service?
Your domain model seems quite CRUD-oriented and anemic, but that is another story.
The persistence layer shouldn't be making decisions about the model's state. If the state should change based on the Rule, then the model is responsible for reflecting that.
The only thing the persistence layer should know here is how to persist the various representations of the same kind of entity, by mapping the model state as-is to the persistence model.
E.g. where persistenceFramework is assumed to be anything that handles the persistence of your persistence model. Also note that I'm not a C# programmer.
public void Save(Foo foo) {
FooInDb dbFoo = persistenceFramework.FindById(foo.Id);
Map(dbFoo, foo as dynamic);
persistenceFramework.Save(dbFoo);
}
private void Map(FooInDb dbFoo, Foo foo) {
//Foo mapping logic
}
private void Map(FooInDb dbFoo, FooBar foo) {
//FooBar mapping logic
}
I'm using AutoMapper to flatten an object coming from a WS. A simplified model would be as follows:
public abstract class AOrder {
public Product Product {get;set;}
public decimal Amount {get;set;}
//number of other properties
}
public abstract class Product {
//product properties
}
public class RatedProduct : Product {
public int Rate { get;set;}
}
public class MarketOrder : AOrder {
//some specific market order properties
}
Using automapper I'm trying to flatten this into:
public class OrderEntity {
public decimal Amount {get;set;}
public int ProductRate {get;set;}
}
with the following mapping:
CreateMap<RatedProduct, OrderEntity>();
CreateMap<MarketOrder, OrderEntity>();
The above mapping will not map ProductRate.
At the moment I've just used AfterMap:
CreateMap<MarketOrder, OrderEntity>()
.AfterMap((s,d) => {
var prod = s.Product as RatedProduct;
if (prod != null)
{
//map fields
}
});
which works pretty well, but I thought that if I could reuse AutoMapper's flattening capabilities (i.e. matching by name), I wouldn't need to apply the AfterMap in quite so many places.
Note: I can't change the WS and this is just a tiny part from object hierarchy.
Advice appreciated.
Mapping Rate to ProductRate is fairly straightforward with ForMember.
The one where you have to cast to the specific type to see if it is that type is a little trickier, but I think the approach you took is what you might have to do. However, I don't think you need AfterMap: I thought all your destination members had to be mapped, or marked as ignored, or the mapping will fail.
Another thing you could do is change OrderEntity.ProductRate to OrderEntity.Rate. Then AutoMapper would find it and map it for you, except where it is hidden because Product doesn't have a Rate (but RatedProduct does).
public class OrderEntity {
    public decimal Amount { get; set; }
    public int Rate { get; set; } // renamed from ProductRate to just Rate
}
Mapper.CreateMap<Product, OrderEntity>()
    .Include<RatedProduct, OrderEntity>();
Mapper.CreateMap<RatedProduct, OrderEntity>();
SEE: Polymorphic element types in collections
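For completeness, the ForMember route mentioned above might look like this. This is a sketch against the classic static AutoMapper API used elsewhere in this question; the type test guards against products that aren't RatedProduct, falling back to 0 (my choice of default, adjust as needed):

```csharp
Mapper.CreateMap<MarketOrder, OrderEntity>()
    .ForMember(d => d.ProductRate,
               opt => opt.MapFrom(s => s.Product is RatedProduct
                   ? ((RatedProduct)s.Product).Rate
                   : 0));
```

This keeps the cast in one place, inside the mapping configuration, instead of repeating an AfterMap everywhere.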
I have a WCF service that calls a stored procedure and returns a DataTable. I would like to transform the DataRows into custom objects before sending them to the consumer, but can't figure out how to do so. Let's say I retrieved a customer from a stored procedure. To keep things short, here is the customer via DataRow:
string lastName = dt.Rows[0]["LastName"].ToString();
string firstName = dt.Rows[0]["FirstName"].ToString();
int age = System.Convert.ToInt32(dt.Rows[0]["Age"].ToString());
I can retrieve the customer easily. Now, I created a Customer object like so:
public class Customer
{
public string LastName {get;set;}
public string FirstName {get;set;}
public int Age {get;set;}
}
I loaded AutoMapper from the package manager console. I then put a public static method on my customer like so:
public static Customer Get(DataRow dr)
{
Mapper.CreateMap<DataRow, Customer>();
return Mapper.Map<DataRow, Customer>(dr);
}
When I step through my code, every property in the customer returned from Get() is null. What am I missing here? Do I have to add a custom extension to map from a DataRow? Here is a thread that seems related, but I thought AutoMapper would support this out of the box, especially since the property names are identical to the column names. Thanks in advance.
This works!!
public static Customer GetSingle(DataTable dt)
{
    if (dt.Rows.Count == 0) return null;
    List<Customer> c = AutoMapper.Mapper.DynamicMap<IDataReader, List<Customer>>(dt.CreateDataReader());
    return c[0];
}