Naming Convention for ServiceStack DTOs

I know this has been asked to some degree already - and is a fairly subjective question. I'm trying to figure out the best naming convention for a set of services that we are porting over to ServiceStack from WCF. I have read through a lot of the ServiceStack documentation and examples, and I feel I have a pretty good understanding of the overall structure. What I am trying to work out is the best naming convention for my request and response DTOs.
Let me give some examples. Here is my request DTO as I have currently named it:
[Route("/blast/emailblast", "POST")]
public class CreateEmailBlast : IReturn<CreateCreateEmailBlastResponse>
{
public Guid SenderProfileId { get; set; }
public Guid TemplateId { get; set; }
public string CallListName { get; set; }
public string CallListCategory { get; set; }
}
public class CreateEmailBlastResponse : ICreateEmailBlastResponse
{
public string ResponseMessage { get; set; }
}
So the naming convention I have adopted is to prefix the DTO with "Create" for POSTs, "Get" for GETs, etc. Would it be wiser to just use EmailBlast and EmailBlastResponse? Just wondering if anyone has some input on these two different naming approaches.

I think it would be wiser to use EmailBlast only, as the HTTP verbs are there to identify what is going to happen.
If you are looking at how to design a good API (general purpose, not specifically with ServiceStack), there is a wonderful video on InfoQ; I can't remember the name off the top of my head. But the gist of that presentation was: try to think of the API as a user-facing HTML page. What do you need at the user's end? And I would go a little further: define the model after deciding on the page. To do that, you need metadata.
Meaning: there is an EmailBlast model, so how do I express "create"? Instead of a CreateEmailBlast model, I will go with the HTTP POST verb. I am deliberately taking a simple example here, as sketched below.
Personally I try to avoid Hungarian notation for variables as well as for models wherever I can.
Please let me know if I've understood your question correctly, and if you need any further details from my end.
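A minimal sketch of that approach in ServiceStack (the service class, the GET route and the Id property are my assumptions, not from the question): one noun-named DTO, with the action expressed by the verb the service method handles.
using ServiceStack;

[Route("/blast/emailblast", "POST")]
[Route("/blast/emailblast/{Id}", "GET")]
public class EmailBlast : IReturn<EmailBlastResponse>
{
    public Guid Id { get; set; }
    public Guid SenderProfileId { get; set; }
    public Guid TemplateId { get; set; }
}

public class EmailBlastResponse
{
    public string ResponseMessage { get; set; }
}

public class EmailBlastService : Service
{
    // POST /blast/emailblast creates the blast
    public object Post(EmailBlast request) { /* create */ return new EmailBlastResponse(); }

    // GET /blast/emailblast/{Id} fetches it
    public object Get(EmailBlast request) { /* fetch */ return new EmailBlastResponse(); }
}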

Related

How to manage separation of concerns when using ServiceStack AutoQuery

I am having some issues with how to organise my AutoQuery code. My project structure currently looks like:
/Project
  /Project.ServiceInterface
    Service.cs
  /Project.Logic
    Manager.cs
    /Types
      DbModel.cs
  /Project.ServiceModel
    Request.cs
    /Types
      DtoModel.cs
With this setup, the ServiceModel has no knowledge of the Logic models. Because of this, I can't make a request query like QueryDb<DbModel, DtoModel> without essentially duplicating all my DbModel objects in my ServiceModel, or adding a dependency on Logic in ServiceModel. I also have custom AutoQuery service implementations, and inside those I want to be able to leverage code that has been written using my DbModels elsewhere.
Does anyone have any recommendations or relevant examples? I feel like I'm approaching this problem incorrectly and making it more complex than it needs to be. Thanks.
AutoQuery lets you create Services by defining a Request DTO, so all Types it references must also be in the ServiceModel assembly. You'd either need to move the Data Models your AutoQuery Services reference into your ServiceModel project, or annotate your DTO so that it can be used by OrmLite to query your RDBMS table, using the [Alias] attribute where names differ and the [Ignore*] attributes depending on whether the property should exist in OrmLite or in Serialization, e.g.:
[Alias("MyTable")]
public class MyDto
{
[Alias("DbName")]
public string DtoName { get; set; }
[Ignore]
public string IgnoredInOrmLite { get; set; }
[IgnoreDataMember]
public string IgnoredInSerialization { get; set; }
}
Otherwise you're not going to be able to use AutoQuery, and would need to create Custom Services whose internal implementation makes use of your Data Models, where they're hidden from your public Services Contract.
Personally I'd recommend moving the Data Models you need into your ServiceModel assembly (continuing to use the same namespace as your other Data Models), as OrmLite data models are POCOs that, like DTOs, typically don't need any additional references other than the impl-free ServiceStack.Interfaces.
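For completeness, a minimal sketch (my own example, not from the answer) of what an AutoQuery Service over the annotated MyDto above could look like once it lives in the ServiceModel assembly, assuming the AutoQueryFeature plugin is registered:
using ServiceStack;

[Route("/mydtos")]
public class QueryMyDtos : QueryDb<MyDto> { }

// AutoQuery auto-generates the implementation; e.g. GET /mydtos?DtoName=foo
// queries the DbName column on MyTable via the [Alias] mappings above.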

Need some advice on POCO design and Lazy<> usage for new architecture

I'm faced with architecture decisions which are quite tricky to solve. I'm sorry in advance for a long post, but before you can help me I have to give you some context.
My company is trying to integrate a new architecture (based around the ASP.NET MVC framework) into the existing ASP.NET legacy code. The catch is that we can't get rid of all the business code which is encapsulated in the legacy assemblies; we have to wrap it in the new ones in order to make a sort of "anticorruption" layer and to decouple it. We're good until now, but here comes the one (main) constraint that makes the decoupling a bit trickier:
In the legacy ASP.NET web site, they used a kind of persistence-aware model in order to enable lazy loading from views (I know it's crap). There is a reason for it. The code is so convoluted and difficult to deploy that they wanted to give designers the possibility to modify only views without deploying binaries. A God class is pushed into the view, with properties that are lazy loaded if needed. So for example if a designer wants to modify a view to display another property, he can just pick whatever he wants from the God class and it just works. We have to keep the same behavior in the new MVC architecture.
Let's take an example. Let's say we have an Item legacy persistence-aware DTO:
public class Item
{
    public string Title { get; set; }
    public List<UserComment> Comments { get; set; } // lazy loaded in the background in legacy code
}
So in the view the designer can call Item.Title and deploy; afterwards he can add Item.Comments (in a foreach, say) and deploy just the view, without the need to deploy all the binaries.
The following architecture has been proposed. New POCOs for the MVC web site are located in the Domain assembly, which doesn't reference any other assembly. In this assembly we also have the definitions of the repository interfaces, like IItemRepository. Entities are used directly in the ASP.NET MVC ViewModels and obtained through the repository implementations, which are wired up by the underlying DI container.
We have another assembly where the implementations of the IRepositories live. That assembly references all the legacy code that is used to populate the entities from the Domain.
Until now all is great. But here's the tricky part. The entities from the Domain assembly should comply with the constraint from point 1 (lazy load in the views).
So our POCOs declare all such properties as Lazy<>, which allows us, in the implementation assembly (the one that references all the legacy code), to define delegates that populate these properties when they are accessed. Here's the new implementation of the POCOs for MVC:
public class Item
{
    public int Id { get; set; } // needed by the repository below
    public string Title { get; set; }
    public Lazy<List<UserComment>> Comments { get; set; } // lazy loaded in the background in legacy code
}
And here is an ItemRepository implementation:
public class ItemRepository : IItemRepository
{
    public Item GetItem(int id)
    {
        Item i = new Item() { Id = id };
        // The delegate only runs when Comments.Value is first accessed
        i.Comments = new System.Lazy<List<UserComment>>(() => CommentsService.GetUserComments(id));
        return i;
    }
}
We don't like it very much, and here's why :
The POCOs are not really POCOs, because there is a leaky abstraction of lifetime management in the entities. This is not a big deal, since Lazy<> is a type in the .NET Framework; on the other hand, if you're using an ORM like NHibernate you have to bend your POCOs in order to allow NHibernate to load the data (special constructor, virtual members, etc.).
The other reason we don't like it is that it hides the underlying service composition in the repository implementations. For example, if Item.Comments is used, the underlying implementation of ItemRepository will call another legacy service (CommentsService.GetUserComments(id) in this case).
Question 1: Do you see other drawbacks in using Lazy<> in the POCO implementation? Or other flaws in the design that we should consider?
Question 2: How can we avoid legacy service composition in the repository implementations, in order to have a clear "vision" of which legacy services are used? Besides, do we really need this "vision" at the implementation level, as we could consider the repository implementations as "façades" to the legacy services?
Question 3: Are there any other alternatives with respect to constraint 1 (lazy load)?
Question 4: Are there any guidelines related to composition for the lazy members?
I'm sorry for the long questions but would really appreciate your help.
Thomas
Do the lazy-loading properties have to be List<T>? If not, you could define them as IList<T> or ICollection<T>, like this:
public class Item
{
    public string Title { get; set; }
    public IList<UserComment> Comments { get; set; }
}
Now, since the property is an instance of an interface, you can create a Virtual Proxy for it, along the lines of the sketch below.
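A minimal sketch of such a proxy (the LazyList name is my own, and it assumes the legacy call can be wrapped in a Func<IList<T>>): the collection satisfies IList<T>, but defers the legacy service call until a member is first touched.
using System;
using System.Collections;
using System.Collections.Generic;

public class LazyList<T> : IList<T>
{
    private readonly Func<IList<T>> loader;
    private IList<T> inner;

    public LazyList(Func<IList<T>> loader) { this.loader = loader; }

    // The legacy service is only hit the first time Inner is needed
    private IList<T> Inner => inner ?? (inner = loader());

    public T this[int index] { get => Inner[index]; set => Inner[index] = value; }
    public int Count => Inner.Count;
    public bool IsReadOnly => Inner.IsReadOnly;
    public void Add(T item) => Inner.Add(item);
    public void Clear() => Inner.Clear();
    public bool Contains(T item) => Inner.Contains(item);
    public void CopyTo(T[] array, int arrayIndex) => Inner.CopyTo(array, arrayIndex);
    public int IndexOf(T item) => Inner.IndexOf(item);
    public void Insert(int index, T item) => Inner.Insert(index, item);
    public bool Remove(T item) => Inner.Remove(item);
    public void RemoveAt(int index) => Inner.RemoveAt(index);
    public IEnumerator<T> GetEnumerator() => Inner.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
The repository then keeps the same shape as before, without Lazy<> leaking into the POCO:
i.Comments = new LazyList<UserComment>(() => CommentsService.GetUserComments(id));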

Domain Driven Development: Detecting changes (.NET)

I've just started with Domain-Driven Design and am trying to apply it to my current project.
I've started with a pure domain model and am now stuck on my Data Access layer. I have a completely home-made data access layer, so none of the well-known ORM tools can be applied here.
I cannot figure out how to deal with updates. Let's say I have the following objects:
public class Document : Entity
{
    public IPropertiesCollection Properties { get; set; }
    public IContents Contents { get; set; }
}

public class PostalDocumentsPackage : Entity
{
    public string Name { get; set; }
    public DateTime DeliverDate { get; set; }
    public ICollection<Document> Documents { get; set; }
}
I have corresponding repositories IDocumentsRepository and IPostalDocumentPackagesRepository for retrieving objects (for now).
The problem I'm dealing with now is the situation where I want to add a new document to the Documents collection of a PostalDocumentsPackage. Basically I see two possible approaches here:
1) Implement a collection that tracks changes to the original collection and holds lists of items that were updated/removed.
2) Implement separate methods in the repository for adding documents to the package.
Are these approaches OK, or can they cause problems in the future? Or are there other alternatives?
Typically change tracking would be handled by an ORM such as NHibernate. In your case you may be able to do the following:
Select new documents based on the value of the identity property.
Issue a SQL delete statement before re-inserting into the table.
A problem with either approach is that the documents collection may be big, such that loading all documents for each PostalDocumentsPackage may be a bottleneck. Also you must consider whether you need change tracking on the Document entity in addition to the documents collection. If so, then you would need to implement change tracking for the Document class as well. Given that you're not using an ORM I would suggest solution #2 since solution #1 will lead you down a path of re-implementing change tracking, which among other things would pollute your domain classes. You may also consider a CQRS/Event Sourcing architecture in which change tracking is made explicit.
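As a concrete sketch of solution #2 (the member names here are my own assumption): the repository exposes an explicit operation, so no change tracking needs to live in the domain classes.
public interface IPostalDocumentPackagesRepository
{
    PostalDocumentsPackage Get(int id);

    // Persists the new document and its association with the package,
    // without requiring the Documents collection to track changes
    void AddDocument(PostalDocumentsPackage package, Document document);
}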

Domain Driven Design - Entities VO's and Class Hierarchy

The shorter version of the question: "Is it OK to have a superclass with 2 subclasses, one an entity and the other a Value Object?"
The longer version:
I have a Team superclass. A Team has a Master, Helpers and a Code.
Then I have the DefaultTeam, a subclass of Team, which is an entity with a unique Code as its domain identity.
Then I have the ExecutionTeam, a subclass of Team with an extra attribute, OriginalTeam:
public abstract class Team
{
    public string Code { get; protected set; }
    public Worker Master { get; protected set; }
    public IList<Worker> Helpers { get; protected set; }
    ...
}

public class DefaultTeam : Team
{
}

public class ExecutionTeam : Team
{
    public virtual string Code { get { return OriginalTeam.Code; } }
    public virtual DefaultTeam OriginalTeam { get; private set; }
    ...
}
The ExecutionTeam is the team that executes a Task.
When a Task needs to be executed, we choose a DefaultTeam to execute it. But we can change the Helpers from the DefaultTeam (the Master never changes). The team that executes the task is a variation of the DefaultTeam (the OriginalTeam), but with the Helpers that were chosen just for that Task.
The ExecutionTeam will have the same Code as the OriginalTeam, so the ExecutionTeam has no unique identity. If there are 10 executions of tasks by the same DefaultTeam, there will be 10 ExecutionTeams with the same Code (and the same OriginalTeam). So ExecutionTeam cannot be an Entity.
But having an Entity and a Value Object share the same superclass (both being Teams) is a bit strange. Maybe this domain model has something wrong with it.
Need opinions.
Thanks
What is it that makes the DefaultTeam a Value Object rather than an Entity? Isn't a DefaultTeam also an entity?
That being said, here are some comments:
Why do you need a special class for DefaultTeam? Can't a DefaultTeam simply be an ExecutionTeam, with certain specified values?
A DefaultTeam should probably be an instance of a Team that is associated with an application domain. For example, you might have a particular team that is generally used to solve problems with Project XYZ.
Instead of listing "DefaultTeam" as a property of the ExecutionTeam, you should probably have a "PreviousTeam" as a property of both the Team and ExecutionTeam classes.
This will be more generalizable, in case the team gets changed yet again.
Since Task is an important part of the domain and is assigned to a Team, it should probably be a property of Team.
"Helpers" doesn't seem an appropriate name for the team members. Why not just name them "Members" or "TeamMembers"?
"Master" is probably un-PC unless you are working in Dilbert land or dealing with a database :) You might want to change this to "Supervisor" or "Manager".
"Code" is probably a bad name in the context of your application, as it may easily be confused with programming code. You might want to use "Id" or "TeamId" instead.
It sounds like ExecutionTeam might be better modeled as an interface, ICanExecuteTasks; see the sketch below. Would that work for you? It would eliminate the issue you are struggling with.
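A rough sketch of that suggestion (the member names are my assumption, and TaskItem stands in for whatever your Task type is): the entity stays a Team, and the ability to execute tasks becomes a role it takes on rather than a subclass.
public interface ICanExecuteTasks
{
    void Execute(TaskItem task);
}

public class DefaultTeam : Team, ICanExecuteTasks
{
    public void Execute(TaskItem task)
    {
        // execution logic using the team's Master and the Helpers
        // chosen for this particular task
    }
}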
As to your short question: if the ExecutionTeam were indeed a derived class of Team (inheriting from Team and representing an "is-a" relationship), then the answer is no, they cannot be of different types, because every ExecutionTeam is a Team; there is only one object, which is both a Team and an ExecutionTeam at the same time. It cannot be both an entity type and a value type at the same time.
But the way you have structured things, the DefaultTeam is not just a base class; it is also a property of the ExecutionTeam. This implies a "has-a" relationship, which implies they are different, co-existing objects, one of which can be an entity and one of which can be a value type. But my gut tells me this is not an accurate mirror of your real domain model...

How do you handle SubSonic 'relationships' with migration?

According to this article:
http://subsonicproject.com/docs/3.0_Migrations
Bottom line: if you're a developer that is concerned about database design, migrations might not be for you.
Ok, that's fine, I can treat the database as simply a persistent repository of data that doesn't contain any business logic. In other words, a glorified text file.
What I don't know how to do is relate two objects together. Take for example these two classes:
public class Disaster
{
    public int DisasterId { get; set; }
    public string Name { get; set; }
    public DateTime? Date { get; set; }
    public IList<Address> Addresses { get; set; }
}

public class Address
{
    public int AddressId { get; set; }
    public string WholeAddressHereForSakeOfBrevity { get; set; }
}
Disaster contains an IList of multiple Addresses that were hit by the disaster. When I use SimpleRepository to add these to the database with SimpleRepositoryOptions.RunMigrations, it generates the tables with all the columns, but no foreign key columns as expected.
How would I relate these two together so that when I call Disaster.Addresses, I get a list of all the affected Addresses? Is this possible or do I have to use ActiveRecord instead and create the database tables first? Or do I have to add in a column for the disaster's ID into Address? If so, how does this method work for many-to-many relationships?
It's possible - you just do it by hand, is all. Add a property to Disaster called "Addresses" and make it an IList<> (or you can make it IQueryable if you want it to lazy load). When you retrieve your Disaster, just be sure to retrieve your Addresses too.
It's sort of "manual" - but that's the idea. I'm working on enhancements to this that I'm hoping to push in a later release.
And before you ask why I didn't do it in the first place :) it's because I don't know whether I should use a many-to-many or a one-to-many based on the parent/child relationship. In your example, I'd guess that it's probably one-to-many, but given what I know about addresses and disasters (especially in Florida) it should probably be many-to-many.
Bottom line - how would SubSonic know this? We could introspect both objects for "bi-directionality", which means if Address has many Disasters then it's many-to-many (which is obvious) - but then that's not happy coding if you like DDD.
I'm leaning towards that rule with some type of override that would force the issue. Your thoughts on this are welcome :)
