When to use DTOs and Models for ServiceStack?

I've seen quite a few examples of ServiceStack services and I don't seem to understand when to use a DTO and when to use a Model. As I understand it, the DTO is there to keep everything as separate as possible, as it's the contract of your service. That would allow you to change a lot in your code while keeping the DTOs unchanged. But if you have a Model as one of the properties or as the return value (which is what I see in a lot of examples), the dependency on the Model is there anyway, so why not simply wrap the Model in the DTO for the request as well?
[Route("/events", "POST")]
public class CreateEvent : IReturn<Event>
{
public string Name { get; set; }
public DateTime StartDate { get; set; }
}
From: Recommended ServiceStack API Structure
/// <summary>
/// Define your ServiceStack web service response (i.e. Response DTO).
/// </summary>
public class MovieResponse
{
    /// <summary>
    /// Gets or sets the movie.
    /// </summary>
    public Movie Movie { get; set; }
}
From: https://github.com/ServiceStack/ServiceStack.Examples/blob/master/src/ServiceStack.MovieRest/Web/MovieService.cs

You would use distinct DTOs for Data Models that make poor substitutes for serializable DTOs, e.g. when they:

- Have cyclical dependencies or overuse inheritance and interfaces. This is less of an issue when using code-first ORMs like OrmLite which, as they encourage the use of clean POCOs, already make good candidates for re-use as DTOs. Ideally DTOs should be self-describing and non-hierarchical (i.e. flat) and should not rely on serializer-specific features, which inhibit re-usability and reduce interoperability with different formats and serializers.

- Don't match the shape of the Contract that you want to expose. Your data models might make use of internal codes (e.g. int values) which don't make sense to external users outside your database, in which case you may want to project them into self-describing DTOs that expose more user-friendly labels.

You can use Auto Mapping to reduce the effort whenever you need to re-project between models, as sketched below.
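For instance, ServiceStack's built-in Auto Mapping can handle much of that projection by name convention; a minimal sketch (the Movie/MovieDto shapes here are illustrative, not from the answer):

using ServiceStack;

public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
    public int RatingCode { get; set; } // internal code kept out of the DTO
}

public class MovieDto
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class MovieMappings
{
    // ConvertTo<T> copies matching properties by name (ServiceStack's AutoMappingUtils)
    public static MovieDto ToDto(this Movie movie) => movie.ConvertTo<MovieDto>();
}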

Related

How to manage separation of concerns when using ServiceStack AutoQuery

I am having some issues with how to organise my AutoQuery code. My project structure currently looks like:
/Project
  /Project.ServiceInterface
    Service.cs
  /Project.Logic
    Manager.cs
    /Types
      DbModel.cs
  /Project.ServiceModel
    Request.cs
    /Types
      DtoModel.cs
With this setup, the ServiceModel has no knowledge of the Logic models. Because of this, I can't make a request query like QueryDb<DbModel, DtoModel> without essentially duplicating all my DbModel objects in my ServiceModel or adding a dependency to Logic in ServiceModel. I also have custom AutoQuery service implementations and inside those I want to be able to leverage code that has been written using my DbModels elsewhere.
Does anyone have any recommendations or relevant examples? I feel like I'm approaching this problem incorrectly and making it more complex than need be. Thanks.
Auto Query lets you create Services by defining a Request DTO, so all the Types it references must also be in the ServiceModel Assembly. You'd either need to move the Data Models your AutoQuery Services reference into your ServiceModel project, or annotate your DTO so that it can be used by OrmLite to query your RDBMS table, using the [Alias] attribute where names differ and the [Ignore*] attributes depending on whether the property should exist in OrmLite or in Serialization, e.g:
[Alias("MyTable")]
public class MyDto
{
[Alias("DbName")]
public string DtoName { get; set; }
[Ignore]
public string IgnoredInOrmLite { get; set; }
[IgnoreDataMember]
public string IgnoredInSerialization { get; set; }
}
Otherwise you're not going to be able to use Auto Query and would need to create Custom Services whose internal implementation makes use of your Data Models, where they're hidden from your public Services Contract.
Personally I'd recommend moving the Data Models you need into your ServiceModel Assembly (which can continue to use the same namespace as your other Data Models), as OrmLite data models are POCOs that, like DTOs, typically don't need any additional references other than the impl-free ServiceStack.Interfaces. An AutoQuery Request DTO for the relocated model is sketched below.
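If you do move the annotated model, the AutoQuery Service itself is then just a Request DTO in the ServiceModel project; a minimal sketch (the route and the QueryMyDtos name are assumed for illustration):

using ServiceStack;

// AutoQuery generates the Service implementation: it queries the MyTable RDBMS
// table (via MyDto's [Alias] annotations) and returns MyDto results.
[Route("/mydtos")]
public class QueryMyDtos : QueryDb<MyDto> { }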

Domain driven design - How to check uniqueness of one property in domain object

I'm developing an application using domain driven design. One of the patterns I've been using is the Repository pattern. For the sake of simplicity, let's say I have the following classes and interfaces.
Car - a domain class representing the car domain concept.
public class Car
{
    public int Id { get; private set; }
    public string SomeUniqueCode { get; private set; }
}
ICarRepository - interface for adding, deleting or saving changes to Car objects.
public interface ICarRepository
{
    Car AddCar(Car c);
    void DeleteCar(Car c);
}
My problem is: how do I check the uniqueness of the SomeUniqueCode property among all Car objects in the database? That property is changed by the user (not auto-generated) at any time during the object's life-cycle. Of course, one solution would be to put a unique key in the database, but that is not the DDD way. I've seen the Specification pattern used to validate single objects. How would that pattern be applied to a set of Car objects?
Is it legitimate for a Specification class (let's call it CheckUniqueCarSpecification) to access ICarRepository?
A repository mimics an in-memory collection. What I have used before is a Contains method as opposed to a Find method; I guess you could have either. A query layer could also be used for this: just as you have a CarRepository you could have a CarQuery. Trying to check for uniqueness in the domain is somewhat pesky. I would do the check for the sake of convenience but still rely on the DB to raise the exception, since you have to handle that case anyway. Using the Specification pattern for this may be more effort than it is worth.
Since a repository is a 'collection', I wouldn't have Commit and Rollback on it.
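A minimal sketch of that Contains-style check (the ContainsCode member is an assumption for illustration, not from the original answer):

public interface ICarRepository
{
    Car AddCar(Car c);
    void DeleteCar(Car c);

    // Convenience uniqueness probe; the DB's unique index remains the final authority
    bool ContainsCode(string someUniqueCode);
}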
Use a Domain Service, e.g. ICarCodesLibrary:
public class Car
{
    public Car(string someUniqueCode, ICarCodesLibrary codes)
    {
        // the check: reject a code the domain service reports as invalid/taken
        if (!codes.IsValidCode(someUniqueCode))
            throw new ArgumentException("Code is not unique", nameof(someUniqueCode));

        SomeUniqueCode = someUniqueCode;
    }

    public int Id { get; private set; }
    public string SomeUniqueCode { get; private set; }
}
Implement the interface in the place where you create the Car object and inject it. Also, get rid of the properties and use fields; the Id is OK to remain a property.
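The domain service itself could be as small as this, a sketch assuming it is implemented over the repository or a query layer and injected wherever Car objects are created:

// Hypothetical shape of the domain service; an implementation would typically
// query the database (or ICarRepository) for existing codes.
public interface ICarCodesLibrary
{
    bool IsValidCode(string someUniqueCode);
}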

Designing DTOs and a DO service layer

I'm designing a new large-scale application which needs to be as flexible as possible.
I chose to design mainly with DDD.
My question is about transferring DTO objects back to DO (domain) objects in my service layer. For example, this is my domain object, mapped to the DB (using an ORM):
public class Cat
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string BloodType { get; set; }
    public string Color { get; set; }

    public void Run() {...}
    public void Purr() {...}
}
The methods and some of the properties are only needed for server-side actions.
That's why I designed a separate data transfer object for the cat type:
public class CatDTO
{
    public int ID { get; set; }
    public string Name { get; set; }
}
In the middle, I'll set up an object mapper to translate my DOs to DTOs (and vice versa).
When a client wants to update a cat's name, he will call a service like this:
public void UpdateCat(CatDTO cat)
{
    // What will happen here?
    Cat serverCat = Mapper.GetCat(cat);
    CatDao.SaveOrUpdate(serverCat);
}
When the mapper translates the DTO back to a DO, it will have to hit the DB in order to fill in the missing properties of the Cat object (blood type, etc.).
Needless to say this action is absurd, but without filling in the empty properties the rest of the server side cannot work with the Cat object, because it relies on those missing properties (even if I just try to update the data in the DB, my ORM will update the BloodType field as an empty string!).
I searched for this problem and couldn't find any explanation on the web (or at least anyone who is bothered by the issue as much as I am).
Am I designing it the wrong way? Maybe I missed something in my DDD?
Thanks, Pavel.
The usual workflow for this use case is: retrieve the mapped domain object by ID, apply the updates specified by the DTO, and commit the unit of work. What you refer to as the DAO is normally called a repository in DDD. The code should look more like:
public void UpdateCat(CatDTO catDto)
{
    Cat cat = this.catRepository.Get(catDto.ID);
    cat.Name = catDto.Name;
    this.catRepository.Commit();
}
The Commit step can come in a variety of ways. It can either be an explicit save, or the unit of work can be committed outside of the UpdateCat method. This workflow applies to all related scenarios as well. Generally, domain behavior involves retrieving the appropriate entity, invoking some behavior on that entity and then committing the resulting changes to the database.
Also, DTOs shouldn't directly map back into existing entities. Instead, it is better to think of them as representing changes to be applied to existing entities and the code should reflect this. This is in part because an existing entity is "owned" by the repository and the repository is responsible for reconstitution, not a DTO mapper.

Need some advice on POCO design and Lazy<> usage for new architecture

I'm faced with architecture decisions which are quite tricky to solve. I'm sorry in advance for the long post, but before you can help me I have to give you some context.
My company is trying to integrate a new architecture (based around the ASP.NET MVC framework) into the existing ASP.NET legacy code. The catch is that we can't get rid of all the business code which is encapsulated in the legacy assemblies; we have to wrap it into the new ones in order to make a sort of "anticorruption" layer and to decouple it. We're good until now, but here comes one constraint (the main one) that makes the decoupling a bit trickier:
In the legacy ASP.NET web site, they used a kind of persistence-aware model in order to enable lazy loading from views (I know it's crap). There is a reason for it. The code is so convoluted and difficult to deploy that they wanted to give designers the possibility to modify only views without deploying binaries. A God class is pushed into the view, with properties that are lazy loaded if needed. So for example if a designer wants to modify a view to display another property, he can just pick whatever he wants from the God class and it just works. We have to keep the same behavior in the new MVC architecture.
Let's take an example. Let's say we have an Item legacy persistence-aware DTO:
public class Item
{
    public string Title { get; set; }
    public List<UserComment> Comments { get; set; } // lazy loaded in the background in legacy code
}
So the designer, in the view, can call Item.Title and deploy; afterwards he can add Item.Comments (in a foreach) if he wants, and deploy just the view without needing to deploy all the binaries.
The following architecture has been proposed. New POCOs for the MVC web site are located in the Domain assembly, which doesn't reference any other assembly. In this assembly we also have the definitions of repository interfaces like IItemRepository. Entities are used directly in the ASP.NET MVC ViewModels and obtained through the Repository implementations that are wired up by the underlying DI container.
We have another assembly where the implementations of the IRepositories live. That assembly references all the legacy code that is used to populate the entities from the Domain.
Until now all is great. But here's the tricky part: the entities from the Domain assembly should comply with the constraint from point 1 (lazy load in the views).
So our POCOs declare all the collection properties as Lazy<>, which allows us, in the implementation assembly (the one that references all the legacy code), to define delegates that populate these properties when they are accessed. Here's the new implementation of the POCOs for MVC:
public class Item
{
    public string Title { get; set; }
    public Lazy<List<UserComment>> Comments { get; set; } // lazy loaded in the background in legacy code
}
And here is an ItemRepository implementation:
public class ItemRepository : IItemRepository
{
    public Item GetItem(int id)
    {
        Item i = new Item() { Id = id };
        i.Comments = new Lazy<List<UserComment>>(() => CommentsService.GetUserComments(id));
        return i;
    }
}
We don't like it very much, and here's why:
The POCOs are not really POCOs, because there is a leaky abstraction of lifetime management in the entities. This is not a big deal, as Lazy<> is a type in the .NET Framework; on the other hand, if you're using an ORM like NHibernate you have to bend your POCOs in order to allow NHibernate to load the data (special constructor, virtual members, etc.).
The other reason we don't like it is that it hides the underlying service composition in the repository implementations. For example, if Item.Comments is used, the underlying implementation of ItemRepository will call out to another legacy service (CommentsService.GetUserComments(id) in this case).
Question 1: Do you see other drawbacks in using Lazy<> in the POCO implementation? Or other flaws in the design that we should consider?
Question 2: How can we avoid legacy service composition in the repository implementations in order to have a clear "vision" of which legacy services are used? Besides, do we really need this "vision" at the implementation level, as we could consider the repository implementations as a "façade" to the legacy services?
Question 3: Are there any other alternatives that respect constraint 1 (lazy load)?
Question 4: Are there any guidelines related to composition for the lazy members?
I'm sorry for the long questions but would really appreciate your help.
Thomas
Do the lazy-loading properties have to be List<T>? If not, you could define them as IList<T> or ICollection<T>, like this:
public class Item
{
    public string Title { get; set; }
    public IList<UserComment> Comments { get; set; }
}
Now, since the property is an instance of an interface, you could create a Virtual Proxy out of it.
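A minimal sketch of such a virtual proxy, assuming the property is typed as ICollection<UserComment> and the repository supplies a loader delegate (the LazyCollection name is illustrative, not part of the original answer):

using System;
using System.Collections;
using System.Collections.Generic;

// Hypothetical virtual proxy: looks like a plain ICollection<T> to the view,
// but defers the actual load to the supplied delegate until first access.
public class LazyCollection<T> : ICollection<T>
{
    private readonly Func<ICollection<T>> loader;
    private ICollection<T> inner;

    public LazyCollection(Func<ICollection<T>> loader) { this.loader = loader; }

    // Materializes the underlying collection on first use
    private ICollection<T> Inner => inner ??= loader();

    public int Count => Inner.Count;
    public bool IsReadOnly => Inner.IsReadOnly;
    public void Add(T item) => Inner.Add(item);
    public void Clear() => Inner.Clear();
    public bool Contains(T item) => Inner.Contains(item);
    public void CopyTo(T[] array, int arrayIndex) => Inner.CopyTo(array, arrayIndex);
    public bool Remove(T item) => Inner.Remove(item);
    public IEnumerator<T> GetEnumerator() => Inner.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

The repository could then assign item.Comments = new LazyCollection<UserComment>(() => CommentsService.GetUserComments(id)), and the view would trigger the legacy call only when it first enumerates the collection, keeping Lazy<> out of the POCO's public shape.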

Domain Driven Development: Detecting changes (.NET)

I've just started with Domain Driven Design and am trying to apply it to my current project.
I've started with a pure domain model and am now stuck on my Data Access layer. I have a completely home-made data access layer, so none of the well-known ORM tools can be applied here.
I cannot figure out how to deal with updates. Let's say I have the following objects:
public class Document : Entity
{
    public IPropertiesCollection Properties { get; set; }
    public IContents Contents { get; set; }
}

public class PostalDocumentsPackage : Entity
{
    public String Name { get; set; }
    public DateTime DeliverDate { get; set; }
    public ICollection<Document> Documents { get; set; }
}
I have corresponding repositories IDocumentsRepository and IPostalDocumentPackagesRepository for retrieving objects (for now).
The problem I'm dealing with now is the situation where I want to add a new document to the Documents collection of a PostalDocumentsPackage. Basically I see two possible approaches here:
1) Implement a collection that tracks changes to the original collection and holds lists of items that were updated/removed.
2) Implement separate methods in the repository for adding documents to the package.
Are these approaches OK, or can they cause problems in the future? Or are there other alternatives?
Typically change tracking would be handled by an ORM such as NHibernate. In your case you may be able to do the following:
Select new documents based on the value of the identity property.
Issue a SQL delete statement before re-inserting into the table.
A problem with either approach is that the documents collection may be big, such that loading all documents for each PostalDocumentsPackage may be a bottleneck. Also you must consider whether you need change tracking on the Document entity in addition to the documents collection. If so, then you would need to implement change tracking for the Document class as well. Given that you're not using an ORM I would suggest solution #2 since solution #1 will lead you down a path of re-implementing change tracking, which among other things would pollute your domain classes. You may also consider a CQRS/Event Sourcing architecture in which change tracking is made explicit.
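A minimal sketch of solution #2, with the repository name taken from the question and the member names assumed for illustration:

// Adding a document to a package is an explicit repository operation, so the
// repository can issue the corresponding INSERT itself and no collection
// change tracking is required.
public interface IPostalDocumentPackagesRepository
{
    PostalDocumentsPackage Get(int id);
    void AddDocument(PostalDocumentsPackage package, Document document);
}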
