In our team, we use the request and response DTOs throughout our hierarchy of business logic assemblies (beyond the isolated DB DTOs).
We have a requirement for no ServiceStack dependencies at the business logic layer,
so we don't use the IReturn or IReturnVoid interfaces; we use only plain C# objects without inheritance.
As for routing, we use the Fluent API in AppHost.Configure, essentially creating a routing table.
ServiceStack behaves exceptionally well in our case.
Our Service.Model can be used from the business logic layer without dependencies.
The service function is actually a thin wrapper that calls the business logic function and returns the response DTO.
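As a minimal sketch of that shape (GetAccount, CreateAccount, AccountResponse and AccountLogic are made-up names for illustration, not our real ones):

// In AppHost.Configure -- the fluent routing table, no attributes on the DTOs:
public override void Configure(Funq.Container container)
{
    Routes
        .Add<GetAccount>("/accounts/{Id}", "GET")
        .Add<CreateAccount>("/accounts", "POST");
}

// The service function is just a thin wrapper over the business logic layer:
public class AccountService : Service
{
    public object Get(GetAccount request)
    {
        return AccountLogic.GetAccount(request.Id); // returns an AccountResponse DTO
    }
}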
But the JsonServiceClient.Get method accepts as parameter only an IReturn object, or the URI directly.
It does not accept a plain object as a parameter, like the Post method does.
Any advice?
Update 1.
mythz,
About IReturn: unfortunately, in our case there is a requirement not to use even the lightest ServiceStack dependency in the business logic modules.
The service functions are thin wrappers calling the business modules.
The only link between the two layers is the Request and Response DTOs. We like this approach very much.
Yes, they are "message operations", but they also serve as messages between the application layers.
Also, my clients are mainly jQuery Ajax, not C#; because of mobile, the great majority lean towards jQuery Ajax.
So, in our case, we can use only plain objects not marked with IReturn. ServiceStack behaves very well.
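Concretely, the C# calls we can make today look like this (type names are illustrative): Post takes any plain DTO, but Get needs the URL spelled out.

var client = new JsonServiceClient("http://example.org/api");

// Post accepts any object together with an explicit response type:
var created = client.Post<AccountResponse>("/accounts", new CreateAccount { Name = "test" });

// Get has no such overload for a plain DTO, so the URI must be passed directly:
var account = client.Get<AccountResponse>("/accounts/1");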
The API only accepts IReturn<TResponse> to make it clear that it only accepts and works with Request DTOs, not just any DTO or object. Request DTOs are "message operations" and shouldn't be re-used for anything else; the DTO types can be, but not the Request DTO, which is your external-facing service contract and shouldn't be coupled to any other concerns.
The DTO attributes like [Route], IReturn<T>, [Api], [Restrict], etc. are just extra metadata that can't be expressed in C#, but just like defining the type of a DTO property, it's still metadata describing the service, and if you attribute them on the DTOs they become sharable and introspectable on the client as well. E.g. the ServiceClients will only be able to use the custom routes defined with [Route] because that's the only information clients have; if there are none, they will fall back to using the pre-defined routes.
ServiceStack encourages defining IReturn<T> markers as it lets you infer more about the service by glancing at the Request DTO, ensures services are restricted to returning the same type (good practice), and centralizes what the Service returns rather than spreading it out over the different (more verbose/non-DRY) call-sites, which also means that if you change the Response a service returns, you'll get compiler feedback on which call-sites need updating. Not everyone is aware of this info/behavior, which is why ServiceStack wants to encourage this "pit of success" development with IReturn<T> markers, so not everyone has to be.
As for the dependencies, the only dependency your DTOs should need to reference is ServiceStack.Interfaces.dll, which is purposely a light-weight, impl-free dll. In v3 this needs to reference the ServiceStack.Common NuGet pkg, but for v4 we'll provide a stand-alone ServiceStack.Interfaces NuGet pkg providing the minimum/lightest dependency your DTOs can reference.
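To illustrate, a small sketch of what an IReturn<T> marker buys you on the client (names are invented for the example):

// Lives in the shared Service.Model assembly; only needs ServiceStack.Interfaces
[Route("/accounts/{Id}")]
public class GetAccount : IReturn<AccountResponse>
{
    public int Id { get; set; }
}

public class AccountResponse
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Client-side: both the route and the response type are inferred from the DTO
var client = new JsonServiceClient("http://example.org/api");
AccountResponse account = client.Get(new GetAccount { Id = 1 });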
Related
I am building a REST service over CQRS, using event sourcing to distribute changes to my domain across services. I have the REST service up and running, with a POST endpoint for creating the initial model and then a series of PATCH endpoints to change the model. Each endpoint has a command associated with it that the client sends as a Content-Type parameter, for example Content-Type=application/json;domain-command=create-project. I have the following endpoints for working with a Project record on my task/project management service (a sketch of such a request follows the list below).
api.foo.com/project
Verb: POST
Command: create-project
What it does: Inserts a new model in the event store with some default values set
api.foo.com/project/{projectId}
Verb: PATCH
Command: rename-project
What it does: Inserts a project-renamed event into the event store with the new project name.
api.foo.com/project/{projectId}
Verb: PATCH
Command: reschedule-project
What it does: Inserts a project-rescheduled event into the event store with the new project due date.
api.foo.com/project/{projectId}
Verb: PATCH
Command: set-project-status
What it does: Inserts a project-status-changed event into the event store with the new project status (Active, Planning, Archived etc).
api.foo.com/project/{projectId}
Verb: DELETE
Command: delete-project
What it does: Inserts a project-deleted event into the event store
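For reference, a sketch of what one of these requests looks like from a C# client (the HttpClient plumbing and the JSON body are illustrative; the endpoint and command come from the list above):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RenameProjectDemo
{
    static async Task Main()
    {
        var http = new HttpClient();

        // rename-project command, sent as a Content-Type parameter
        var content = new StringContent("{\"name\":\"New project name\"}");
        content.Headers.ContentType =
            MediaTypeHeaderValue.Parse("application/json; domain-command=rename-project");

        var request = new HttpRequestMessage(new HttpMethod("PATCH"),
            "https://api.foo.com/project/42") { Content = content };
        var response = await http.SendAsync(request);
        Console.WriteLine(response.StatusCode);
    }
}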
Traditionally in a REST service you would offer a PUT endpoint so the record could be replaced. I'm not sure how that works in the event-sourcing + CQRS pattern. Would I only ever use POST and PATCH verbs?
I was concerned I was being too granular and that not every field needed a command associated with it. A PUT endpoint could be used to replace pieces, but my concern was that the event store would get out of sync, so I just stuck with PATCH endpoints. Is this level of granularity typical? For a model with 6 properties, I have 5 commands to adjust the properties of the model.
This is a common question that we get a lot of the time when helping developers get started with CQRS/ES. We need to acknowledge that applying REST in a pure way is a really bad match for DDD/CQRS, since the intention of the commands is not explicitly expressed in the verbs GET/POST/PUT/PATCH/DELETE (even though you can use the content type like you did). Also, the C- and R-sides of the system are definitely different resources in a CQRS system, which does not match up with REST.
However, using HTTP to provide an API for a CQRS/ES system is very practical.
We usually only use POST for sending commands, either to a /commands endpoint or to endpoints with the name of the command, e.g. /commands/create-project. It's all about how strict you want to be. In this case we embed the command type in the payload or as a content type.
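For example, a sketch of both variants from a C# client (URLs and payloads invented):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CommandDemo
{
    static async Task Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("https://api.foo.com") };

        // Variant 1: the command name is part of the endpoint
        var create = new StringContent("{\"name\":\"My project\"}",
            Encoding.UTF8, "application/json");
        await http.PostAsync("/commands/create-project", create);

        // Variant 2: one /commands endpoint, command type embedded in the payload
        var rename = new StringContent(
            "{\"type\":\"rename-project\",\"projectId\":42,\"name\":\"New name\"}",
            Encoding.UTF8, "application/json");
        await http.PostAsync("/commands", rename);
    }
}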
However, it is all a matter of what matches the tech stack better and what you choose here usually does not make or break the solution. The more important part is usually to create a good domain model and get the whole team onboard with this way of thinking.
Good luck!
One question that comes to mind is, is REST the right paradigm for CQRS at all?
One completely different way to structure this is to not have action-focused endpoints, but instead structure your REST API as a series of events that you append new events to (with POST).
Events should be immutable and append-only, so a DELETE method doesn't make much sense for mutations.
If you're going all in with CQRS (good luck, I've heard the war stories) I would be inclined to build an API that reflects that model well.
Would I only ever use POST and PATCH verbs?
Most of the time, you would use POST.
PUT and PATCH are defined with remote authoring semantics: they are methods used to copy new representations of a resource from the client to the server. For example, the client GETs a representation of /project/12345, makes local edits, and then uses PUT to request that the server accept the client's new representation of the resource as its own.
PATCH, semantically, is a similar exchange of messages, the difference being that instead of sending the full representation of the resource, the client sends a "patch document" that the server can apply to its copy to make the changes.
Now, technically, the PATCH documentation doesn't put any restrictions on what a "patch document" is. In order for PATCH to be more useful than POST, however, we need patch document formats that are general purpose and widely recognized (for instance, application/merge-patch+json or application/json-patch+json).
And that's not really the use case you have here, where you are defining command messages that are specific to your domain.
Furthermore, remote authoring semantics don't align very well with "domain modeling" (which is part of the heritage of CQRS). When we're modeling a domain, we normally give the domain model the authority to decide how to integrate new information with what the server already knows. PUT and PATCH semantics are more like what you would use to write information into an anemic data store.
On the other hand, it is okay to use POST
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” -- Fielding, 2009
It may help to recall that REST is the architectural style of the world wide web, and the only unsafe method supported by HTML is POST.
So replace your PATCH commands with POST, and you're on the right path.
Fielding, 2008
I should also note that the above is not yet fully RESTful, at least how I use the term. All I have done is described the service interfaces, which is no more than any RPC. In order to make it RESTful, I would need to add hypertext to introduce and define the service, describe how to perform the mapping using forms and/or link templates, and provide code to combine the visualizations in useful ways. I could even go further and define these relationships as a standard, much like Atom has standardized a normal set of HTTP relationships with expected semantics
The same holds here - we aren't yet at "REST", but we have improved things by choosing standardized methods that are better aligned with our intended semantics.
One final note -- you should probably replace your use of DELETE with POST as well. DELETE is potentially a problem for two reasons -- the semantics aren't what you want, and the standard DELETE payload has no defined semantics.
Expressed another way: DELETE is from the transferring documents over a network domain, not from your domain. A DELETE message sent to your resources should be understood to mean the same thing as a DELETE message sent to any other resource is understood. That's the uniform interface constraint at work: we all agree that the HTTP method tokens mean the same thing everywhere.
Relatively few resources allow the DELETE method -- its primary use is for remote authoring environments, where the user has some direction regarding its effect -- RFC 7231
As before: remote authoring semantics are not obviously a good fit for sending messages to a domain model.
This Google Cloud article, API design: Understanding gRPC, OpenAPI and REST and when to use them, clarifies the REST vs RPC debate. REST is more relevant for entity-centric APIs, whereas RPC is more relevant for action-centric APIs (and CQRS). The most mature REST level 3, with hypermedia controls, works well only for entities with simple state models.
First understand and evaluate the benefits of REST for your case. Many APIs are REST-ish rather than RESTful. OpenAPI is actually RPC mapped over HTTP endpoints, but that hasn't prevented it from being widely adopted.
I am feeling stuck in my MPA ABP app development between three principles that I believe are DDD ones:
An application service should not be tied to a controller. I mean a controller doesn't always have to use only one application service, because application services are not designed with presentation in mind.
DTOs have to fit the view's needs to minimize HTTP data transfer, so sometimes we have to design one DTO class per entity/concept and per view.
Application services always receive and return DTOs.
Now I have a view which follows the Master-Detail principle: selecting one entity from an entity list in the Master part loads the entity details in the Detail part via an Ajax call. But the entity selection in the Master part is made through an Ajax-synchronized cascade of dropdown lists: ParentEntities > Entities.
Which choice better respects DDD?
Put GetAllParent(), GetAllEntities(parentId) and GetEntity(id) all in MyViewApplicationService; then my application service can return DTOs optimized for my view's needs, but this violates the first DDD principle.
Put each of these three methods in a different application service, more isolated with the "domain" in mind; but then the DTOs are domain-oriented and somewhat generic, so they are not optimized.
Give the controller the responsibility of mapping into a DTO that fits the view's needs, but it should not do that.
An application service should not be tied to a controller. I mean a controller doesn't always have to use only one application service,
because application services are not designed with presentation in mind.
Application services are not tied to the client type, but to the client's needs. They return the data the client needs, so in that sense application services do have the client (presentation) in mind.
Application services always receive and return DTOs.
Not always. There are alternatives to DTOs, as Vaughn Vernon says in his book Implementing DDD (page 512):
Mediator
Domain Payload Object
State Representations
Use Case optimal repository queries (close to CQRS)
Data Transformers
Which choice better respects DDD?
Put GetAllParent(), GetAllEntities(parentId) and GetEntity(id) all in MyViewApplicationService; then my application service can return DTOs optimized for my view's needs, but this violates the first DDD principle.
You shouldn't name the application service after the client technology (MyView), but according to the functionality it offers.
Put each of these three methods in a different application service, more isolated with the "domain" in mind; but then the DTOs are domain-oriented and somewhat generic, so they are not optimized.
It doesn't matter whether you put the 3 methods in just one service or have one service for each method. The controller should call them either way.
Give the controller the responsibility of mapping into a DTO that fits the view's needs, but it should not do that.
If you mean the application service returns domain objects and the controller translates them into DTOs, then no, you shouldn't do that, as you would be exposing the domain to clients.
I think I understand your problem; let's start at the beginning:
...Put GetAllParent(), GetAllEntities(parentId) and GetEntity(id) all in MyViewApplicationService...
Which of those words would a business person understand? Are any of those words part of the ubiquitous language?
They are of course all purely technical, therefore they should be details and should not influence the architecture. Basically, they are in the wrong place; they should not be visible at all.
...but then the DTOs are domain-oriented and somewhat generic, so they are not optimized...
DTOs should not be part of anything remotely object-oriented. However, you did not say you want object-orientation, so let's not dwell on that.
Still, if your object is supposed to be domain-oriented, then how come it's unfit (not optimized) for the application that is written specifically for that domain?
I think the problem is that your "object" is actually modeling something different than the domain. It's likely modeling the database table or its records.
I mean, if you are showing a profile for a product, then your "object" should be ProductProfile, not the generic Product. Or ProductDetails, or ProductHeroImage, and so on. Those things are of the domain and likely mentioned in the requirements document too.
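A small sketch of that idea (the types and properties are invented for illustration):

// View-specific DTOs named after domain concepts from the requirements:
public class ProductProfile
{
    public string Name { get; set; }
    public string Tagline { get; set; }
    public string HeroImageUrl { get; set; }
}

public class ProductDetails
{
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal Price { get; set; }
}

// The application service exposes the concept each view asks for,
// instead of one generic Product DTO serving every screen:
public interface IProductQueries
{
    ProductProfile GetProfile(int productId);
    ProductDetails GetDetails(int productId);
}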
Give the controller the responsibility of mapping into a DTO that fits the view's needs, but it should not do that.
Why should it not do that? If the purpose of your feature is to show some data to the user, then why is that not considered a "business function"? It should be literally the other way around: the "view" is the business function you want, and the database/repository/controller/service or whatever is "just" technology that should be a detail and not visible in the architecture.
Disclaimer: I must admit these views are not what most projects do under DDD, but maybe you'll find some sense in them and question those projects more in the future. :)
I typically use DTOs to send requests from Presentation to Application, and I use DTOs to send responses back from Application to Presentation.
I typically decorate the properties of my request DTOs with attributes such as Required, StringLength and so forth, but no business rules.
However, I have read in multiple sources that I shouldn't add any sort of validation (such as Required, StringLength and similar attributes) to DTOs at all.
I would like to ask what is the recommended way in light of the DDD literature. Thank you.
The concept within DDD that is responsible for input validation is the application service.
So you should remove the attributes and validate the DTOs in the app service. Use the DTOs to define structure only, and perform input validation in the app service.
If you get a lot of duplicated code in different app services (e.g. because they use the same DTO and require the same validation), you can of course extract the validation into a reusable service. Still, the app service controls input validation.
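A minimal sketch of that arrangement (names invented; the checks could equally live in a reusable validator that the app service calls):

using System;

// The DTO defines structure only -- no validation attributes:
public class RegisterUserDto
{
    public string UserName { get; set; }
    public string Email { get; set; }
}

public class UserAppService
{
    public void Register(RegisterUserDto dto)
    {
        // Input validation lives in the application service:
        if (string.IsNullOrWhiteSpace(dto.UserName))
            throw new ArgumentException("UserName is required.");
        if (dto.UserName.Length > 50)
            throw new ArgumentException("UserName must be at most 50 characters.");

        // ...continue with the use case
    }
}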
According to Mythz (Getting ServiceStack to retain type information), he recommends not using inheritance in DTOs. What about the use case where I want every request to optionally provide an API key, or a location parameter? Are interfaces bad to use but abstract classes OK? Does anyone have any recommendations?
Your use case, "every request to optionally provide an API key, or a location parameter", is traditionally handled in ServiceStack through filter attributes.
Here's an example where a required authorization header is managed both server-side and client-side.
Given the right requirements, Mythz has recommended the use of a custom interface in DTOs: "Request DTO that implements a custom ITenant interface that just has a Tenant property. Another solution is to use a IHttpRequest.Tennant() extension method that you can re-use inside all services which inspects the AbsoluteUri or RawUrl properties." See this comment: (Multi-tenant ServiceStack API, same deployment to respond to requests on different hostnames?)
I use interfaces and then check for implementations of the interface in a request filter.
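A sketch of both techniques (the v4-style filter signatures are shown; RequireApiKeyAttribute, IRequireApiKey and the X-ApiKey header are invented for the example):

// A filter attribute you can apply to services or Request DTOs:
public class RequireApiKeyAttribute : RequestFilterAttribute
{
    public override void Execute(IRequest req, IResponse res, object requestDto)
    {
        if (string.IsNullOrEmpty(req.Headers["X-ApiKey"]))
        {
            res.StatusCode = 401;
            res.EndRequest();
        }
    }
}

// Or a marker interface checked in a global request filter (in AppHost.Configure):
public interface IRequireApiKey { string ApiKey { get; set; } }

GlobalRequestFilters.Add((req, res, dto) =>
{
    if (dto is IRequireApiKey withKey && string.IsNullOrEmpty(withKey.ApiKey))
    {
        res.StatusCode = 401;
        res.EndRequest();
    }
});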
I would really like to utilize ServiceStack for a service I need to write, but I'm hitting a mental block wrapping my mind around how it works and, more precisely, how I can make it work for my intents and purposes. I don't have a strong ASP background, mainly backend, so maybe that's why the mental block.
I have a legacy platform to which I connect via a native C++ API. I have wrapped the native API in C++/CLI as a .NET class library; this would be my equivalent of the Todo repo being injected in the samples.
The data moving back and forth is exposed in the class lib as value structs. For example, an account would be defined like this:
struct Account {
    public int id;
    public string name;
    public string password;
    // ...
}
Order would be something like this:
struct Order {
    public int orderId;
    public int account;
    public string comment;
    // ...
}
The lib exposes a lot of functionality and operations for all sorts of different objects defined similarly to the above. What I'm trying to understand is this:
1) How do I register the API with the container? More precisely, I don't understand how the Register method knows what type it's supposed to get. In the Todo sample everything is defined in the same assembly, so it's hard to see how the backend gets injected.
2) Is there a way to manage the lifecycle of the backend in the framework? Can I make it a singleton across all connections?
3) Do I have to wrap my structs in classes that map fields to a request? I'm not quite clear on how the request objects are defined; it seems that the content of the request should be fields that translate to URLs of fieldname/type for operations. If there is a way to avoid wrapping, how do I then limit which fields to expose in the API and which not to?
4) Do I absolutely have to create a service per data type? For the structures above, would I have to implement one service for Orders and one for Accounts, or is there a way to combine them into one? I love that SS can be converted to talk over MQs; would making a combined service make it difficult to operate over MQs in the future, and what are the cons of this approach?
5) Lastly, I would like to expose operations in the API that, AFAIK, would violate the REST contract. Something like: archive accounts older than... This would be an operation that returns a success/fail status, not an update/delete etc. Essentially, driving some functionality via an HTTP request. Is this possible in SS, and if so, does using it in this way have any debilitating consequences for the framework's operation?
1) To register your API you will want to use the built-in IoC container, Funq.
container.Register(c => new LegacyApiService())
.ReusedWithin(ReuseScope.Container);
Funq is able to automatically wire these dependencies into your API Services. Take a look at https://github.com/ServiceStack/ServiceStack/wiki/The-IoC-container.
You can also resolve anywhere the container is available using the TryResolve method.
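For example, once LegacyApiService is registered, it can be auto-wired into a service as a public property (the GetAccount request DTO and the GetAccount method on your wrapper are hypothetical):

public class AccountsService : Service
{
    // Funq auto-wires this public property from the container
    public LegacyApiService LegacyApi { get; set; }

    public object Get(GetAccount request)
    {
        return LegacyApi.GetAccount(request.Id);
    }
}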
2) You can control the object lifetime with Funq by specifying a ReuseScope when you register your dependencies. For a singleton across the application lifetime, use ReuseScope.Container (as in the registration above): one instance is reused for the lifetime of the application.
3) You are going to need to create plain old classes (DTOs) for your structs. This is necessary for ServiceStack. All of your DTOs' public properties will get serialized. You can also optionally use the DataMemberAttribute and IgnoreDataMemberAttribute to control which public properties get serialized.
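For example (a sketch mirroring your structs; which members to exclude is of course up to you):

using System.Runtime.Serialization;

// Opt-in: with [DataContract], only [DataMember] properties are serialized
[DataContract]
public class Account
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    public string Password { get; set; } // no [DataMember], never serialized
}

// Opt-out: all public properties are serialized except [IgnoreDataMember] ones
public class Order
{
    public int OrderId { get; set; }
    public int Account { get; set; }
    [IgnoreDataMember] public string Comment { get; set; } // illustrative exclusion
}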
4) You will need to have a service per request DTO. However, you can keep this code to a minimum and call a centralized business layer. This is necessary since each route + verb needs to have a distinct operation, hence the one service class per DTO.
5) You can easily define more routes, and there is nothing that forces you to adhere to REST rules; you are free to implement the HTTP verbs as you see fit. You can easily create specialized routes on GETs that perform an action such as archiving. There are no debilitating consequences here, just perhaps confusion for your API consumers. Just be sure everyone is clear on how the API works, with documentation.