Why is data access tightly coupled to the Service base in ServiceStack?

I'm curious why the decision was made to couple the Service base class in ServiceStack to data access (via the Db property)? With web services it is very popular to use a Data Repository pattern to fetch the raw data from the database. These data repositories can be used by many services without having to call a service class.
For example, let's say I am supporting a large retail chain that operates across the nation. There are a number of settings, like tax rates, that differ from store to store. Each call to one of the web services needs these settings for domain logic. With a repository pattern I would simply create a data access class whose sole responsibility is to return these settings. In ServiceStack, however, I am exposing these settings as a Service (which they need to be exposed as anyway). So the first thing I end up doing in my service call is newing up the Setting service and using it inside my other service. Is this the intention? And since services return an object, I have to cast the result to the typed service result.

ServiceStack's convenience ADO.NET IDbConnection Db property lets you quickly create database-driven services (i.e. the most popular kind) without the overhead and boilerplate of creating a repository, if you prefer. As ServiceStack Services are already testable and the DTO pattern provides a clean, endpoint-agnostic Web Service interface, there's often not a lot of value in wrapping and proxying "one-off" data access into a separate repository.
But at the same time there's nothing forcing you to use the base.Db property (which has no effect if unused). The Unit Testing Example on the wiki shows using either base.Db or the Repository pattern:
public class SimpleService : Service
{
    public IRockstarRepository RockstarRepository { get; set; }

    public List<Rockstar> Get(FindRockstars request)
    {
        return request.Aged.HasValue
            ? Db.Select<Rockstar>(q => q.Age == request.Aged.Value)
            : Db.Select<Rockstar>();
    }

    public RockstarStatus Get(GetStatus request)
    {
        var rockstar = RockstarRepository.GetByLastName(request.LastName);
        if (rockstar == null)
            throw HttpError.NotFound("'{0}' is no Rockstar".Fmt(request.LastName));

        var status = new RockstarStatus
        {
            Alive = RockstarRepository.IsAlive(request.LastName)
        }.PopulateWith(rockstar); //Populates with matching fields

        return status;
    }
}
Note: Returning an object or a strongly-typed DTO response like RockstarStatus has the same effect in ServiceStack, so if preferred you can return a strongly-typed response and avoid any casting.
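For example (a minimal sketch, not part of the original answer), either of these signatures is handled the same way by ServiceStack; the strongly-typed one simply spares in-process callers a cast:

// Alternative signatures for the same operation - use one or the other:

// Loosely-typed: in-process callers must cast the returned object
public object Get(GetStatus request) { /* ... */ }

// Strongly-typed: identical wire behaviour, but no cast needed in-process
public RockstarStatus Get(GetStatus request) { /* ... */ }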

Related

ServiceStack: business logic that depends on the database itself

I'm exploring ServiceStack and I'm not sure what is the best way to implement some business logic.
Using the "Bookings CRUD" example I would like to enforce the following rule:
a given Booking can only be saved (either created or updated) if the hotel has enough free rooms for the particular dates of that booking
Please note that I'm not asking how to calculate "free rooms".
What I'm asking is, from the architectural point of view, how should this be done.
For example, one way would be:
create a request DTO to query the number of configured rooms (let's call it "QueryRooms")
use the existing "QueryBookings" to query current bookings present in database
create a " : Service" class to customize the Booking Service, in order to intercept the "CreateBooking" and "UpdateBooking" requests
inside the custom methods for "CreateBooking" and "UpdateBooking", somehow get the results of "QueryRooms" and "QueryBookings", check if there are enough free rooms for the current request, and proceed only if so
This doesn't look very clean, because the "CreateBooking" and "UpdateBooking" services would depend on "QueryRooms" and "QueryBookings".
What would be an elegant and efficient solution, using ServiceStack?
You can override AutoQuery CRUD operations with your own Service implementation using the AutoQuery DTO.
There you can use the Service Gateway to call existing Services to perform any additional validation and modify the request DTO before executing the AutoQuery operation that implements the API, e.g.:
public class MyCrudServices : Service
{
    public IAutoQueryDb AutoQuery { get; set; }

    public object Post(CreateBooking request)
    {
        var response = Gateway.Send(new QueryRooms
        {
            From = request.BookingStartDate,
            To = request.BookingEndDate,
        });

        if (response.Results.Count == 0)
            throw new Exception("No rooms available during those dates");

        request.RoomNumber = response.Results[0].Id;
        return AutoQuery.Create(request, base.Request);
    }
}
Note: calling in-process Services with the Service Gateway is efficient as it calls the C# method implementation directly, i.e. without incurring any HTTP overhead.

Query remote REST service from DDD aggregate

I've read about the Double Dispatch pattern, which makes it possible to pass service interfaces into aggregate methods: https://lostechies.com/jimmybogard/2010/03/30/strengthening-your-domain-the-double-dispatch-pattern/, http://blog.jonathanoliver.com/dddd-double-dispatch/.
In my domain I have a BitbucketIntegration aggregate, which is a local copy of a remote Bitbucket account with some additional domain-specific data. Now, I have to synchronize repositories, teams, etc. from the cloud to be able to do business operations on them. In my first implementation I was using a service to access the Bitbucket Cloud, then set the aggregate's repositories, teams and account. This way I had DDD mixed with an Anemic Domain Model, since half of the aggregate's state was set using setter-like methods from the service. With Double Dispatch I can pass e.g. a BitbucketService interface into method arguments. This way, the aggregate can protect its invariants more, since some of the data can only be verified by connecting to the REST service (e.g. whether the aggregate's accessToken, bitbucketAccount and repositories are in sync), which was the service's responsibility. One more thing that smells is that I have an accessToken field in my aggregate, which is only a technical concern.
Are there any recommended patterns for keeping a copy of a remote resource in a DDD aggregate? Also, how do I keep the technical side out of it? Or was the first method with a domain service good enough?
Now the code looks something like:
class BitbucketIntegration extends Aggregate<UUID> {
    accountId: BitbucketId
    repos: List<Repository>
    localData: ...
    // ... and more

    Single integrateWith(accessToken, queryService) {
        var id = queryService.getAccountAsync(accessToken);
        var repos = queryService.getReposAsync(accessToken);
        return Single.zip(id, repos,
                (i, r) -> new BitbucketIntegratedEvent(accessToken, i, r))
            .onSubscribe(event -> apply(event))
    }

    Observable doSomeBusinessLocally(data) { ... return events; }

    // this is triggered by a saga
    Single pollForChanges(queryService) {
        var dataFromRemote = queryService.synchronizeAsync(this.accessToken);
        ....
        return event;
    }
}

class CommandHandler {
    queryService: BitbucketService

    Completable handle(integrateCmd) {
        aggregate = repo.get(integrateCmd.id);
        return aggregate.integrateWith(integrateCmd.accessToken, queryService)
            .flatMap(event -> repo.store(event));
    }
}
As a side note, I only query Bitbucket.
EDIT:
Martin Fowler writes about accessing an external system, including the definition of an Anti-Corruption Layer, which translates the remote resource representation to domain types.
If you inject infrastructure services into your Aggregate (by constructor or by method invocation) then you won't have a pure domain model anymore. This includes even services whose interfaces are defined in the domain layer. It affects testability and introduces a dependency on the infrastructure. It also breaks the Single Responsibility Principle and forces the Aggregate to know things it does not really need to.
The solution is to call the service beforehand, in the Application layer, and pass the result to the Aggregate's method.
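A rough sketch of that shape, reusing the names from the question (written in C# here; the types and signatures are illustrative only, not from the original answer):

// Application-layer command handler: the infrastructure-backed query service
// is called here, and only its results are passed into the Aggregate, so the
// domain model never depends on infrastructure.
public class IntegrateBitbucketHandler
{
    private readonly IBitbucketQueryService queryService;      // interface in the domain, implemented in infrastructure
    private readonly IBitbucketIntegrationRepository repository;

    public IntegrateBitbucketHandler(IBitbucketQueryService queryService,
                                     IBitbucketIntegrationRepository repository)
    {
        this.queryService = queryService;
        this.repository = repository;
    }

    public async Task Handle(IntegrateCommand cmd)
    {
        var aggregate = await repository.Get(cmd.Id);

        // Fetch the remote state up front...
        var account = await queryService.GetAccountAsync(cmd.AccessToken);
        var repos = await queryService.GetReposAsync(cmd.AccessToken);

        // ...then hand plain data to the Aggregate, which stays free of service references
        var integratedEvent = aggregate.IntegrateWith(account, repos);

        await repository.Store(integratedEvent);
    }
}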

Doubts on application structure and communication directions

I'm currently building a CQS-style DDD application. I'm having some doubts about how all the 'components' work with each other.
But first I'll give a brief overview about the application's structure:
ApplicationService
-> Receives command objects
-> doesn't return any results
-> Acts on Domain model
-> Speaks with Aggregate repository for domain modifications
QueryService
-> Bypasses domain model; doesn't speak with Aggregate Repositories
-> Executes queries against database to populate view
-> Returns 'Representation' objects
REST Controller
-> Receives HTTP requests and binds 'body content' & request params to Command objects
-> delegates to ApplicationService for POST, PUT & DELETE requests
-> Always returns at least some HTTP code
-> delegates to QueryService for GET requests
Infrastructure
-> Handles persistence to DB
-> Contains some scheduling operations
-> Handles foreign domain events our domain model is 'interested' in
'Open Host'
-> This is mainly a Facade to be used by other domains
-> Facade delegates methods to ApplicationService for domain modifications and to QueryService for data retrieval (bypassing Repositories)
My Questions:
Is it OK that a DomainEventHandler corresponds with a Repository and invokes some methods on an Aggregate? Or should it always correspond with an ApplicationService?
QueryService returns 'Representation' objects. These are used by UI AND by 'Open Host' Facade as return value. Is it OK these objects are reused as return value by Facade? Or should Facade create their own Objects, even the results are basically the same?
ApplicationService takes 'Commands' as input parameters. Is it OK these Commands are also used by the Open Host Facade? Or should the Facade only accept primitive values and convert them to Commands when delegating to ApplicationService?
DomainEventHandlers seem to reside in the 'Infrastructure' layer. Is it possible that an ApplicationService or Domain Service also subscribes to a Domain Event? Or is this always an Infrastructure responsibility?
All advice is very welcome!
Is it OK that a DomainEventHandler corresponds with a Repository and invokes some methods on an Aggregate? Or should it always correspond with an ApplicationService?
In my experience, any handlers are application services.
QueryService returns 'Representation' objects. These are used by UI AND by 'Open Host' Facade as return value. Is it OK these objects are reused as return value by Facade? Or should Facade create their own Objects, even the results are basically the same?
There is a lot of discussion here about the differences between Open Host service and Application Service. It is not clear to me who would be using Open Host service, or why it exists.
ApplicationService takes 'Commands' as input parameters. Is it OK these Commands are also used by the Open Host Facade? Or should the Facade only accept primitive values and convert them to Commands when delegating to ApplicationService?
I would pass in primitives on the edges of the application and convert them into commands, which are then handled in the Application Services.
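A minimal sketch of that boundary (the type names here are illustrative, not from the question):

// Edge of the application (REST controller / Open Host facade): accepts
// primitives, converts them into a Command, then delegates to the
// Application Service. Commands only exist from this point inwards.
public class BookingsController
{
    private readonly BookingApplicationService applicationService;

    public BookingsController(BookingApplicationService applicationService)
    {
        this.applicationService = applicationService;
    }

    public void Post(Guid customerId, DateTime from, DateTime to)
    {
        var command = new CreateBookingCommand(customerId, from, to);
        applicationService.Handle(command);
    }
}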
DomainEventHandlers seem to reside in the 'Infrastructure' layer. Is it possible that an ApplicationService or Domain Service also subscribes to a Domain Event? Or is this always an Infrastructure responsibility?
I've always considered my handlers to be Application Services - things that are responsible for orchestrating a use case. So the use case might be "when EventX is received, send an email and update the database". In this example, you would probably consider "the code that sends the email" and "the code that saves to the database" to be infrastructure concerns, but the handler itself would not be.
public class ExampleHandler : IHandle<ExampleEvent>
{
    private IRepository _repo;
    private ISendEmails _emailer;

    public ExampleHandler(IRepository repo, ISendEmails emailer)
    {
        _repo = repo;
        _emailer = emailer;
    }

    public void When(ExampleEvent @event)
    {
        _emailer.Send(@event.whatever);
        _repo.Save(something);
    }
}
To be honest, I don't really think in terms of layers - I prefer a hexagonal architecture style of thinking. In the above example, the event handlers would just have dependencies injected into them and then go about their business.

ServiceStack versioning - how to customize the request deserialization based on versioning

I am working on a new API where we have a requirement for many-to-many versioning:
Old Client -> New Server
Old Server -> New Client
and everything in between
I've read some of the other posts about defensive programming and having DTOs that evolve gracefully... and we are definitely going to use that pattern in most cases.
However, when we have a breaking change for CreateCustomer and require a CreateCustomer2 DTO, I would like to be able to customize the way ServiceStack resolves the type used to deserialize the request; otherwise we will be forced to use routes like this:
/api/v1/createcustomer
/api/v2/createcustomer
Where I would much rather keep it /api/createcustomer and let the version number live in the Header / Querystring / whatever (not looking for a debate here :) ).
Are there any examples of how to resolve the deserialization type dynamically in ServiceStack, based on a parameter other than route + verb?
Many thanks in advance
The recommended approach for versioning is to take advantage of the natural forwards compatibility of message-based services and extend existing services defensively so they can support multiple client versions, avoiding the need to create multiple versions of the same service.
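For example, a minimal sketch of extending an existing request DTO defensively instead of forking it (the property names below are made up for illustration):

// Instead of introducing a CreateCustomer2 DTO, later fields are added as
// optional/nullable so requests from older clients still deserialize into
// the same DTO and are handled by the same Service.
[Route("/customers", "POST")]
public class CreateCustomer : IReturn<CreateCustomerResponse>
{
    public string Name { get; set; }

    // Added in a later version; old clients simply omit them
    public string Email { get; set; }
    public int? LoyaltyTier { get; set; }
}

public class CreateCustomerResponse
{
    public int Id { get; set; }
    public ResponseStatus ResponseStatus { get; set; }
}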
If you still want to expose /api/v1 routes then I'd recommend doing it at the infrastructure level by using a reverse proxy to re-route /api/v1 and /api/v2 requests to different running instances of ServiceStack.
If you want to do the proxying in code you can use base.ResolveService<T>() or base.ExecuteRequest() to execute different Services in code, and ServiceStack's built-in AutoMapping to populate different DTOs, which could look something like:
[Route("/customers")]
public class CreateCustomers {
public int Version { get; set; }
}
public class CreateCustomersV1 { ... }
public class CreateCustomersV2 { ... }
public class CustomerServices : Service
{
public object Any(CreateCustomers request)
{
return request.Version == 1
? base.ExecuteRequest(request.ConvertTo<CreateCustomersV1>())
: base.ExecuteRequest(request.ConvertTo<CreateCustomersV2>())
}
}

ServiceStack - Grouping like services together

Was wondering if there's a recommended best-practice way of grouping similar services together in what's becoming a larger and larger project. Say that most of my services can be lumped into dealing with either "Pro" data or "Amateur" data (the data goes way beyond a simple flag in a table; the data itself is totally different, from different tables, on the pro or amateur side).
I know I can add routes to my classes...
/pro/service1
/am/service2
It looks like I can put the DTOs in namespaces....
What about the Service.Interface items (Service and Factory classes)? Would you put those into namespaces also?
Finally, is there a way for the metadata page to reflect these groupings? I started to go down this road, but all the services were listed out in alphabetical order, and you couldn't see the route or namespace differences between service1 and service2.
thank you
If you want, you can split multiple Service implementations across multiple dlls as described on the Modularizing Services wiki.
You can safely group service implementation classes into any nested folder groupings without having any impact on the external services. But changing the namespaces on DTOs can have an effect if your DTOs make use of object, interfaces or abstract classes, which emit type info containing full namespaces.
In ServiceStack v4.09+ (now on MyGet) the MetadataFeature includes the ability to customize the ordering of the metadata page, e.g. you can reverse the order of the metadata pages with:
var metadata = (MetadataFeature)Plugins.First(x => x is MetadataFeature);
metadata.IndexPageFilter = page => {
    page.OperationNames.Sort((x,y) => y.CompareTo(x));
};
Organising your large project:
For a complex service (or services) I set up 4 projects in one solution.
AppHost, This takes care of the configuration of the service. (References Model, Service and Types)
Model, This is the database model (Does not reference other projects)
Service, This is the implementation of the service only, not the interfaces or DTOs (References Model and Types)
Types, This includes my Interfaces, DTOs and routes. (Does not reference other projects)
Having a separate Types library allows the distribution to clients, for example for use with the ServiceStack JsonServiceClient.
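For example, a client that only references the Types assembly can call the services with the shared DTOs (a minimal sketch; the base URL and request values are placeholders):

// The client project references only the Types assembly (DTOs, routes,
// interfaces), not the Service implementation.
var client = new JsonServiceClient("https://api.example.org");

// GetUserRequest comes from the shared Types assembly; the response type is
// given explicitly since this sketch doesn't assume IReturn<T> on the DTO
var user = client.Get<User>(new GetUserRequest { /* ... */ });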
Yes you can namespace the Interfaces, DTOs and factory classes, any way you want. They will work as long as they are referenced in your service correctly.
If you are trying to separate more than one service, you should consider separating your service code into logical folders within the Service project. i.e.
/Service/Pro
/Service/Amateur
Wrap the outer code of your Service methods in a public static partial class MyServiceStackApplication, with an appropriate name. Then reference this as the assembly in the AppHost constructor. So for example:
Pro Service (Service Project/Pro/UserActions.cs)
public static partial class MyServiceStackApplication
{
    public partial class Pro
    {
        public class UserActionsService : Service
        {
            public User Get(GetUserRequest request)
            {
            }
        }
        // ...
    }
}
Pro Service (Service Project/Pro/OtherActions.cs)
public static partial class MyServiceStackApplication
{
    public partial class Pro
    {
        public class OtherActionsService : Service
        {
            public Other Get(GetOtherRequest request)
            {
            }
        }
        // ...
    }
}
Amateur Service (Service Project/Am/UserActions.cs)
public static partial class MyServiceStackApplication
{
    public partial class Amateur
    {
        public class UserActionsService : Service
        {
            public User Get(GetUserRequest request)
            {
            }
        }
        // ...
    }
}
etc.
You can see from the above code we can have multiple files, all separated out and organised, but one assembly for ServiceStack to reference in the AppHost:
public AppHost() : base("Pro & Amateur Services", typeof(MyServiceStackApplication).Assembly) {}
Referencing the MyServiceStackApplication assembly and using the partial keyword allows you to organise the code into manageable groupings.
Metadata:
Unfortunately separating the metadata by namespace isn't supported. You could try to customize the MetadataFeature yourself, but it does seem like a useful feature, being able to separate multiple services hosted in the one ServiceStack application. I would suggest you raise a feature request.
Mythz is bringing out features faster than lightning. :) Seems like he has that covered in the next release and you should be able to apply a custom filter to HostContext.Metadata.OperationNamesMap.
