ServiceStack service for non-standard REST + overall confusion - servicestack

I would really like to utilize ServiceStack for a service I need to write, but I'm hitting a mental block wrapping my mind around how it works, and more precisely how I can make it work for my intents and purposes. I don't have a strong ASP.NET background, mainly backend, so maybe that's why the mental block.
I have a legacy platform to which I connect via a native C++ API. I have wrapped the native API in C++/CLI as a .NET class library; this would be my equivalent of the Todo repo being injected in the samples.
The data moving back and forth is exposed in the class lib as value structs. For example, an account would be defined like this:
struct Account {
    int id;
    string name;
    string password;
    ...
}
Order would be something like this:
struct Order {
    int orderId;
    int account;
    string comment;
    ...
}
The lib exposes a lot of functionality and operations for all sorts of different objects defined similarly to the above. What I'm trying to understand is this:
1) How do I register the API with the container? More precisely, I don't understand how the Register method knows what type it's supposed to get. In the Todo sample everything is defined in the same assembly, so it's hard to see how the backend gets injected.
2) Is there a way to manage the lifecycle of the backend in the framework? Can I make it a singleton across all connections?
3) Do I have to wrap my structs in classes that map fields to a request? I'm not quite clear on how the request objects are defined; it seems that the content of the request should be fields that would translate to URLs of fieldname/type for operations. If there is a way to not have to wrap, how do I then limit which fields to expose in the API and which not to?
4) Do I absolutely have to create a service per data type? So for the structures above, would I have to implement one service for Orders and one for Accounts, or is there a way to combine them into one? I love that ServiceStack can be converted to talk over MQs; would making a combined service make it difficult to operate over MQs in the future, and what are the cons to this approach?
5) Lastly, I would like to expose operations in the API that, AFAIK, would violate the REST contract. Something like: archive accounts older than.... This would be an operation that returns a success/fail status, no update/delete etc. Essentially, drive some functionality via an HTTP request. Is this possible in ServiceStack, and if so, does using it in this way have any debilitating consequences for the framework's operation?

1) To register your API you will want to use the built-in IoC with Funq.
container.Register(c => new LegacyApiService())
.ReusedWithin(ReuseScope.Container);
Funq is able to automatically wire these registered dependencies into your API Services. Take a look at https://github.com/ServiceStack/ServiceStack/wiki/The-IoC-container.
You can also resolve anywhere the container is available using the TryResolve method.
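A minimal sketch of how that wiring can look end-to-end; ILegacyApi, LegacyApiService, and the Account/GetAccount DTOs below are illustrative names (not from the question), and the sketch assumes the "New API" Service base class (namespaces vary between ServiceStack v3 and v4):
using Funq;
using ServiceStack;

// Hypothetical interface over the C++/CLI wrapper
public interface ILegacyApi
{
    Account GetAccount(int id);
}

public class LegacyApiService : ILegacyApi
{
    // Calls into the native API; returns the DTO directly here for brevity
    public Account GetAccount(int id) => new Account { Id = id, Name = "demo" };
}

public class AppHost : AppHostBase
{
    public AppHost() : base("Legacy API", typeof(AppHost).Assembly) { }

    public override void Configure(Container container)
    {
        // Register once; ReuseScope.Container gives one instance per container (singleton-like)
        container.Register<ILegacyApi>(c => new LegacyApiService())
                 .ReusedWithin(ReuseScope.Container);
    }
}

// Request/response DTOs
public class GetAccount { public int Id { get; set; } }
public class Account { public int Id { get; set; } public string Name { get; set; } }

// Funq auto-wires registered dependencies into public properties of your Services
public class AccountsService : Service
{
    public ILegacyApi LegacyApi { get; set; } // injected

    public object Get(GetAccount request) => LegacyApi.GetAccount(request.Id);
}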
2) You can control the object lifetime with Funq by specifying ReuseScopes when you register them. You will want to look at:
ReuseScope.Container: Singleton scope
// an instance is used per application lifetime
3) You are going to need to create plain old classes (DTOs) for your structs. This is necessary for ServiceStack. All of your DTOs' public properties will get serialized. You can also optionally use the DataMemberAttribute and the IgnoreDataMemberAttribute to control which public properties get serialized.
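For example, a sketch of how the Account struct from the question might be surfaced as DTOs while keeping the password off the wire (class and property names are illustrative):
using System.Runtime.Serialization;

// Opt-out: everything public is serialized unless explicitly ignored
public class AccountDto
{
    public int Id { get; set; }
    public string Name { get; set; }

    [IgnoreDataMember]
    public string Password { get; set; } // never sent over the wire
}

// Opt-in: with [DataContract], only [DataMember] properties are serialized
[DataContract]
public class AccountContract
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    public string Password { get; set; } // not a [DataMember], so not serialized
}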
4) You will need to have a Service per request DTO. However, you can keep this code to a minimum and call a centralized business layer. This is necessary since each route + verb needs to have a distinct operation, hence the one service class per DTO.
5) You can easily define more routes, and there is nothing that forces you to adhere to REST rules; you are free to implement the HTTP verbs as you see fit. You can easily create specialized routes on GETs that perform an action such as archiving. There are no debilitating consequences here, just perhaps confusion for your API consumers. Just be sure everyone is clear how the API works with documentation.
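To make point 5 (and the thin-wrapper idea from point 4) concrete, a rough sketch; the route, the DTOs, and the ArchiveAccountsOlderThan method on the wrapper are all assumptions for illustration:
using System;
using ServiceStack;

[Route("/accounts/archive", "POST")]
public class ArchiveOldAccounts : IReturn<ArchiveOldAccountsResponse>
{
    public DateTime OlderThan { get; set; }
}

public class ArchiveOldAccountsResponse
{
    public bool Success { get; set; }
    public int ArchivedCount { get; set; }
}

// Hypothetical method on the legacy API wrapper
public interface ILegacyAccountApi
{
    int ArchiveAccountsOlderThan(DateTime cutoff);
}

public class AccountAdminService : Service
{
    public ILegacyAccountApi LegacyApi { get; set; } // injected by Funq

    // Thin wrapper: the Service just delegates to the legacy/business layer
    public object Post(ArchiveOldAccounts request)
    {
        var count = LegacyApi.ArchiveAccountsOlderThan(request.OlderThan);
        return new ArchiveOldAccountsResponse { Success = true, ArchivedCount = count };
    }
}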

Related

CQRS with REST APIs

I am building a REST service over CQRS using EventSourcing to distribute changes to my domain across services. I have the REST service up and running, with a POST endpoint for creating the initial model and then a series of PATCH endpoints to change the model. Each endpoint has a command associated with it that the client sends as a Content-Type parameter. For example, Content-Type=application/json;domain-command=create-project. I have the following endpoints for creating a Project record on my task/project management service.
api.foo.com/project
Verb: POST
Command: create-project
What it does: Inserts a new model in the event store with some default values set
api.foo.com/project/{projectId}
Verb: PATCH
Command: rename-project
What it does: Inserts a project-renamed event into the event store with the new project name.
api.foo.com/project/{projectId}
Verb: PATCH
Command: reschedule-project
What it does: Inserts a project-rescheduled event into the event store with the new project due date.
api.foo.com/project/{projectId}
Verb: PATCH
Command: set-project-status
What it does: Inserts a project-status-changed event into the event store with the new project status (Active, Planning, Archived etc).
api.foo.com/project/{projectId}
Verb: DELETE
Command: delete-project
What it does: Inserts a project-deleted event into the event store
Traditionally in a REST service you would offer a PUT endpoint so the record could be replaced. I'm not sure how that works in the event-sourcing + CQRS pattern. Would I only ever use POST and PATCH verbs?
I was concerned I was too granular and that every field didn't need a command associated with it. A PUT endpoint could be used to replace pieces. My concern though was that the event store would get out of sync, so I just stuck with PATCH endpoints. Is this level of granularity typical? For a model with 6 properties on it I have 5 commands to adjust the properties of the model.
This is a common question that we get a lot of the time when helping developers get started with CQRS/ES. We need to acknowledge that applying REST in a pure way is a really bad match for DDD/CQRS, since the intention of the commands is not explicitly expressed in the verbs GET/POST/PUT/PATCH/DELETE (even though you can use the content-type like you did). Also, the C- and R-sides of the system are definitely different resources in a CQRS system, which does not match up with REST.
However, to use HTTP to provide an API for a CQRS/ES system is very practical.
We usually only use POST for sending commands, to either a /commands endpoint or to endpoints with the name of the command, e.g. /commands/create-project. It's all about how strict you want to be. In this case we embed the command type in the payload or as a content-type.
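Purely as an illustration of the command-per-endpoint shape (the original answer is stack-agnostic), a minimal ASP.NET Core sketch; the command records and routes are invented for the example:
// Program.cs (ASP.NET Core minimal API, relies on implicit usings)
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// One endpoint per command; the JSON body carries the command payload
app.MapPost("/commands/create-project", (CreateProject cmd) =>
{
    // hand cmd to your command handler / event store here
    return Results.Accepted($"/project/{cmd.ProjectId}");
});

app.MapPost("/commands/rename-project", (RenameProject cmd) =>
{
    return Results.Accepted($"/project/{cmd.ProjectId}");
});

app.Run();

public record CreateProject(Guid ProjectId, string Name);
public record RenameProject(Guid ProjectId, string NewName);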
However, it is all a matter of what matches the tech stack better and what you choose here usually does not make or break the solution. The more important part is usually to create a good domain model and get the whole team onboard with this way of thinking.
Good luck!
One question that comes to mind is, is REST the right paradigm for CQRS at all?
One completely different way to structure this is to not have action-focused endpoints, but instead structure your REST API as a series of events that you add new events to (with POST).
Events should be immutable and append-only, so maybe a DELETE method doesn't make that much sense for mutations.
If you're going all in with CQRS (good luck, I've heard the war stories) I would be inclined to build an API that reflects that model well.
Would I only ever use POST and PATCH verbs?
Most of the time, you would use POST.
PUT and PATCH are defined with remote authoring semantics - they are methods used to copy new representations of a resource from the client to the server. For example, the client GETs a representation of /project/12345, makes local edits, and then uses PUT to request that the server accept the client's new representation of the resource as its own.
PATCH, semantically, is a similar exchange of messages - the difference being that instead of sending the full representation of the resource, the client returns a "patch-document" that the server can apply to its copy to make the changes.
Now, technically, the PATCH documentation doesn't put any restrictions on what a "patch-document" is. In order for PATCH to be more useful than POST, however, we need patch document formats that are general purpose and widely recognized (for instance, application/merge-patch+json or application/json-patch+json).
And that's not really the use case you have here, where you are defining command messages that are specific to your domain.
Furthermore, remote authoring semantics don't align very well with "domain modeling" (which is part of the heritage of CQRS). When we're modeling a domain, we normally give the domain model the authority to decide how to integrate new information with what the server already knows. PUT and PATCH semantics are more like what you would use to write information into an anemic data store.
On the other hand, it is okay to use POST:
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” -- Fielding, 2009
It may help to recall that REST is the architectural style of the World Wide Web, and the only unsafe method supported by HTML is POST.
So replace your PATCH commands with POST, and you're on the right path.
Fielding, 2008
I should also note that the above is not yet fully RESTful, at least how I use the term. All I have done is described the service interfaces, which is no more than any RPC. In order to make it RESTful, I would need to add hypertext to introduce and define the service, describe how to perform the mapping using forms and/or link templates, and provide code to combine the visualizations in useful ways. I could even go further and define these relationships as a standard, much like Atom has standardized a normal set of HTTP relationships with expected semantics
The same holds here - we aren't yet at "REST", but we have improved things by choosing standardized methods that are better aligned with our intended semantics.
One final note -- you should probably replace your use of DELETE with POST as well. DELETE is potentially a problem for two reasons -- the semantics aren't what you want, and the standard delete payload has no defined semantics.
Expressed another way: DELETE is from the transferring documents over a network domain, not from your domain. A DELETE message sent to your resources should be understood to mean the same thing as a DELETE message sent to any other resource is understood. That's the uniform interface constraint at work: we all agree that the HTTP method tokens mean the same thing everywhere.
Relatively few resources allow the DELETE method -- its primary use is for remote authoring environments, where the user has some direction regarding its effect -- RFC 7231
As before: remote authoring semantics are not obviously a good fit for sending messages to a domain model.
This Google Cloud article, "API design: Understanding gRPC, OpenAPI and REST and when to use them", clarifies the REST vs RPC debate. REST is more relevant for entity-centric APIs whereas RPC is more relevant for action-centric APIs (and CQRS). The most mature REST level 3 with hypermedia controls works well only for entities with simple state models.
Understand and evaluate first the benefits of REST for your case. Many APIs are REST-ish and not RESTful. OpenAPI is actually RPC mapped over HTTP endpoints, but that doesn't prevent it from being widely adopted.

DDD Layers and External API

Recently I've been trying to make my web application use separated layers.
If I understand the concept correctly I've managed to extract:
Domain layer
This is where my core domain entities, aggregate roots, and value objects reside. I'm forcing myself to have a pure domain model, meaning I do not have any service definitions here. The only things I define here are the repositories, which are actually hidden because the Axon Framework implements them for me automatically.
Infrastructure layer
This is where Axon implements the repository definitions for my aggregates in the domain layer.
Projection layer
This is where the event handlers are implemented to project the data for the read model, using MongoDB to persist it. It does not know anything other than the event model (plain data classes in Kotlin).
Application layer
This is where the confusion starts.
Controller layer
This is where I'm implementing the GraphQL/REST controllers. This controller layer uses the command and query model, meaning it has knowledge about the Domain Layer commands as well as the Projection Layer query model.
As I've mentioned the confusion starts with the application layer, let me explain it a bit with simplified example.
Consider that I want a domain model to implement Pokemon fighting logic. I need to use PokemonAPI, which would provide me data such as the Pokemon names, stats, etc.; this would be an external API I would use to get some data.
Let's say that I would have the domain implemented like this:
(Keep in mind that I've stretched this implementation so it forces some issues that I have in my own domain.)
Pokemon {
    id: ID
}

PokemonFight {
    id: ID
    pokemon_1: ID
    pokemon_2: ID

    handle(cmd: Create) {
        publish(PokemonFightCreated)
    }

    handle(cmd: ProvidePokemonStats) {
        // providing the stats for the pokemons
        publish(PokemonStatsProvided)
    }

    handle(cmd: Start) {
        // fights only when both pokemon stats were provided
        publish(PokemonsFought)
    }
}
The flow of data between layers would be like this.
User -> [HTTP] -> Controller -> [CommandGateway] -> (Application | Domain) -> [EventGateway] -> (Application | Domain)
Let's assume that two Pokemons are created, and the use case of a Pokemon fight is basically that when it gets created the stats are provided, and then when the stats are provided the fight automatically starts.
This use case logic can be solved by using event processor or even saga.
However, as you can see in the PokemonFight aggregate, there is the [ProvidePokemonStats] command, which basically provides their stats; however, my domain does not know how to get such data, as this data is provided by the PokemonAPI.
This confuses me a bit, because the use case would need to be implemented on both layers: the application (so it provides the stats using the external API) and also the domain? The domain use case would just use purely domain concepts. But shouldn't I have one place for the use cases?
If I think about it, the only purpose of a saga/event processor that lives in the application layer is to provide proper data to my domain, so it can continue with its use cases. So when the external API fails, I send a command to the domain and then it can decide what to do.
For example, I could just put every saga/event processor in the application, so when I decide to change some automation flow I know exactly what module I need to edit and where to find it.
The other confusion is where I have multiple domains and I want to create a use case that uses many of them and connects the data between them; it immediately rings in my brain that this should be the application layer, which would use the domain APIs to control the use case, because I don't think I should add a dependency on a different domain in the core one.
TL;DR
What layer should be responsible for implementing the automated process between aggregates (it can be a single aggregate, but you know what I mean) if the process requires some external API data?
What layer should be responsible for implementing the automated process between aggregates that live in different domains / microservices?
Thank you in advance, and I'm also sorry if what I've written sounds confusing or is too much text; I would highly appreciate any answers about layering DDD applications and the proper locations of the components.
I will try to make it clear. If you use CQRS:
In the Write Side (commands): The application services are the command handlers. A command handler accesses the domain (repositories, aggregates, etc.) in order to implement a use case.
If the use case needs to access data from another bounded context (microservice), it uses an infrastructure service (via dependency injection). You define the infrastructure service interface in the application service layer, and the implementation in the infrastructure layer. The infrastructure then accesses the remote microservice, via HTTP/REST for example. Or integration through events.
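A compact sketch of that shape, shown in C# purely for illustration (the question itself uses Kotlin/Axon); every type and method name below is an assumption:
using System;
using System.Threading.Tasks;

// Application layer: the command and its handler (the "application service")
public record ProvidePokemonStats(Guid FightId, string PokemonName);

// Interface *defined* in the application layer, *implemented* in infrastructure
public interface IPokemonStatsProvider
{
    Task<PokemonStats> GetStatsAsync(string pokemonName);
}

public record PokemonStats(int Attack, int Defense, int Speed);

// Domain-facing repository abstraction (assumed)
public interface IPokemonFightRepository
{
    Task<PokemonFight> LoadAsync(Guid id);
    Task SaveAsync(PokemonFight fight);
}

// Minimal stand-in for the domain aggregate
public class PokemonFight
{
    public void ProvideStats(string pokemonName, PokemonStats stats)
    {
        // pure domain logic lives here
    }
}

public class ProvidePokemonStatsHandler
{
    private readonly IPokemonStatsProvider _stats;
    private readonly IPokemonFightRepository _fights;

    public ProvidePokemonStatsHandler(IPokemonStatsProvider stats, IPokemonFightRepository fights)
    {
        _stats = stats;
        _fights = fights;
    }

    public async Task Handle(ProvidePokemonStats cmd)
    {
        var stats = await _stats.GetStatsAsync(cmd.PokemonName); // external PokemonAPI call
        var fight = await _fights.LoadAsync(cmd.FightId);        // load the aggregate
        fight.ProvideStats(cmd.PokemonName, stats);              // the domain decides what to do
        await _fights.SaveAsync(fight);
    }
}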
In the Read Side (queries): The application service is the query method (I think you call it a projection), which accesses the database directly. There's no domain here.
Hope it helps.
I do agree your wording might be a bit vague, but a couple of things do pop up in my mind which might steer you in the right direction.
Mind you, the wording makes it so that I am not 100% sure whether this is what you're looking for. If it isn't, please comment and correct me on the answer I'll provide, so I can update it accordingly.
Now, before your actual question, I'd firstly like to point out the following.
What I am guessing you're mixing is the notion of the Messages and your Domain Model belonging to the same layer. To me personally, the Messages (aka your Commands, Events and Queries) are your public API. They are the language your application speaks, so should be freely sharable with any component and/or service within your Bounded Context.
As such, any component in your 'application layer' contained in the same Bounded Context should be allowed to be aware of this public API. The one in charge of the API will be your Domain Model, that's true, but these concepts have to be shared to be able to communicate with one another.
That said, the component which will provide the stats to your aggregate can be viewed from two directions, I think.
It's a component that handles a specific 'Start Pokemon Match' Command. This component has the smarts to first retrieve the stats prior to dispatching a Create and ProvidePokemonStats command, thus ensuring it will consistently create a working match with the stats in it by not dispatching either if the external stats-retrieval API fails.
Your angle in the question is to have an Event Handling Component that reacts to the creation of a Match. From here, I'd state a short-lived saga would be in place, as you'd need to deal with the fault scenario of not being able to retrieve the stats. A regular Event Handler is likely too lean to deal with this correctly.
Regardless of the two options you select, this service will deal with messages, a.k.a. your public API. As such it's within your application and not a component others will deal with directly, ever.
When it comes to your second question, I feel the same notion still holds. Two distinct applications/microservices only further suggests you're talking about two different Bounded Contexts. Certainly then a Saga would be in place to coordinate the operations between both contexts. Note that between Bounded Contexts, you want to share consciously when it comes to the public API, as you'd ideally not expose everything to the outside world.
Hope this helps you out and if not, like I said, please comment and provide me guidance how to answer your question properly.

Send non-DataContract class to Azure Service Fabric Actor

Service Fabric requires the [DataContract] and [DataMember] attributes on all the classes that are used as input parameters for Actor services.
Is there a way to override this default?
In our project we heavily use read-only message classes, which have read-only properties and constructors. The serialization is handled by the Newtonsoft JSON serializer, which works just great. Now, I want to send these messages in Service Fabric, so I need a way to override the default WCF-like serialization with something like the JSON serializer.
I was able to make a workaround by implementing the serialization myself. Basically, this is how it works:
Define an actor (+ contract) which accepts and returns byte[] + type name, e.g.
Task<byte[]> DoWork(string typeName, byte[] message);
Deserialize byte[] to an instance of the specified type by using custom serializer (I used Newtonsoft JSON converter).
On the sending side, serialize object to byte[] using the same serializer.
I defined some base classes to wrap that, so I don't have to repeat it for every actor / client.
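A minimal sketch of that workaround; WorkerActor and HandleAsync are placeholder names, and the actor base class is omitted to keep the snippet self-contained:
using System;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class WorkerActor /* : Actor, IWorkerActor */
{
    public async Task<byte[]> DoWork(string typeName, byte[] message)
    {
        // Resolve the CLR type from its name and deserialize with JSON.NET
        var type = Type.GetType(typeName, throwOnError: true);
        var request = JsonConvert.DeserializeObject(Encoding.UTF8.GetString(message), type);

        object result = await HandleAsync(request); // hypothetical dispatch to the real logic

        // Serialize the result back to bytes for the return trip
        return Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(result));
    }

    private Task<object> HandleAsync(object request) =>
        Task.FromResult<object>(request); // placeholder: echo the request back
}
On the sending side you serialize the message with the same serializer and pass typeof(TMessage).AssemblyQualifiedName as typeName, so Type.GetType can resolve it on the actor side.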
This feels a bit hacky though. Would be nice to get some opinion from the Service Fabric team. Would be even nicer to get the native extensibility point from the platform itself. DataContract feels a bit archaic, it's not used anywhere in our project anymore.
This approach is described in more details in my blog post.
I don't think this is possible at the minute, unfortunately. From this article:
"Reliable Collections allow the serializer to be overridden, but Reliable Actors currently do not."
So it might be worth mapping to new classes.

JsonServiceClient methods and IReturn

In our team, we use the request and response DTO's through our hierarchy of business logic assemblies (beyond the isolated DB DTO's).
We have a requirement for no ServiceStack dependencies at the business logic layer.
So we don't use the IReturn or IReturnVoid interfaces. We use only simple C# objects without inheritance.
As for the Routing, we use the Fluent API, in the AppHost.Configure, creating essentially a routing table.
ServiceStack behaves exceptionally well, in our case.
Our Service.Model can be used from the business logic layer, without dependencies.
The service function is a thin wrapper actually, calling the business logic function, to return the response DTO.
But the JsonServiceClient.Get function accepts only an IReturn object as a parameter, or the URI directly.
It does not accept a plain object as a parameter, like the Post function does.
Any advice?
Update 1.
mythz,
About IReturn: unfortunately, in our case there are requirements not to use even the lightest ServiceStack dependency in the business logic modules.
The service functions are a thin wrapper calling the business modules.
The link between the two layers are only the Request and Response DTOs. We like very much this approach.
Yes, they are "message operations", but they also serve as messages between the application layers.
Also, my clients are mainly jQuery Ajax, not C#. Because of mobile, the great majority are inclined to jQuery Ajax.
So, in our case, we can only use objects not marked with IReturn. ServiceStack behaves very well.
The API only accepts IReturn<TResponse> to make it clear that it only accepts and works with Request DTO's and not just any DTO or object. Request DTO's are "message operations" and shouldn't be re-used for anything else, the DTO types can be, but not the Request DTO which is your external facing service contract and shouldn't be coupled to any other concerns.
The DTO attributes like [Route], IReturn<T>, [Api], [Restrict], etc are just extra metadata that can't be expressed in C#, but just like defining the type of a DTO property, it's still metadata describing the service, and if you attribute them on the DTO's then they become sharable and introspectable on the client as well. E.g. the ServiceClients will only be able to use the custom routes defined with [Route] because that's the only information clients have, if there are none it will end up falling back to using the pre-defined routes.
ServiceStack encourages defining IReturn<T> markers as it lets you infer more about the service by glancing at the Request DTO, ensures services are restricted in returning the same type (good practice) and centralizes what the Service returns rather than spreading out over the different (more verbose/non-DRY) call-sites, which also means if you change the Response a service returns you'll get compiler feedback on which call-sites needs updating. Not everyone is aware of this info/behavior which is why ServiceStack wants to encourage this "pit of success" development by encouraging the use of IReturn<T> markers, so not everyone has to be.
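For reference, a small sketch of the difference; the GetOrder/GetOrderResponse DTOs and route are invented for the example:
using ServiceStack;

[Route("/orders/{Id}")]
public class GetOrder : IReturn<GetOrderResponse>
{
    public int Id { get; set; }
}

public class GetOrderResponse
{
    public int OrderId { get; set; }
    public string Comment { get; set; }
}

public static class ClientDemo
{
    public static void Run()
    {
        var client = new JsonServiceClient("http://localhost:1337");

        // Typed: the response type is inferred from IReturn<GetOrderResponse>
        GetOrderResponse viaMarker = client.Get(new GetOrder { Id = 1 });

        // Without the marker you fall back to the explicit type + URL overload
        GetOrderResponse viaUrl = client.Get<GetOrderResponse>("/orders/1");
    }
}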
As for the dependencies, the only dependency your DTO's should need to reference is ServiceStack.Interfaces.dll which is purposely a light-weight, impl-free dll. In v3 this needs to reference the ServiceStack.Common NuGet pkg but for v4 we'll provide a stand-alone ServiceStack.Interfaces NuGet pkg providing the minimum/lightest dependency your DTO's can reference.

Breeze.js - Securing IQueryable calls

I'm rather new at this, but I've come to understand the security risks of using Breeze to expose an IQueryable<>. Would someone please suggest to me some best practices (or merely some recommendations) for securing an IQueryable collection that's exposed in the JavaScript? Thanks.
I would not expose any data via IQueryable that should not be sent to the client via a random query. So a projection could be exposed, or a DTO.
I'm not sure if this answers your question though... What "security risks" are you worried about?
I second this question, too. But to add some specifics along the questions that Ward asked:
In securing queryable services, two traditional issues come to mind:
1) Vertical security: Which items the currently logged-in user (based on user identity or roles) is NOT allowed to see in the UI. Those need to be removed from the queryable list. IMO, this can be done as part of the queryable ActionFilter magic by chaining some exclude logic onto the returned IQueryable (see the sketch at the end of this answer).
2) Horizontal security: Some models contain fields that are not appropriate for the logged-in user to see (and/or edit). This is more difficult to handle, as it's not a matter of just removing instances from the returned IQueryable. The returned class has a different shape, and therefore can be handled either by the JSON formatter omitting the fields based on security (which AFAIK screws up the Breeze metadata), or by returning a DTO, in which case, since the DTO doesn't exist in the metadata, it's not a full life-cycle (updatable) class? (I am asking this, not stating it.)
I would like to see either built-in support or easy-to-implement recipes for number 2). Perhaps some sample code to amend the client-side metadata to make DTOs work perfectly fine commingled with model objects. The newest VS 2012 SPA templates (in the TodoList app) seem to push DTO variants of the model object on both the queryable and insert/update side. This is similar to the traditional MVC model views...
Finally, I'd add a request for auto-handling of the over-posting security issue for inserts and updates. This is the reciprocal aspect of 2). Some users should not be able to edit certain fields.
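For point 1) above, a sketch of what constraining the IQueryable on the server can look like with the Breeze Web API bits (namespaces vary by Breeze version; the Todo model, context, and user lookup are assumptions for illustration):
using System.Linq;
using System.Web.Http;
using Breeze.WebApi; // namespace varies by Breeze version

[BreezeController]
public class TodosController : ApiController
{
    private readonly EFContextProvider<TodosContext> _ctx = new EFContextProvider<TodosContext>();

    [HttpGet]
    public IQueryable<TodoItem> Todos()
    {
        int userId = GetCurrentUserId();

        // Only rows this user may see are ever exposed; whatever OData-style query
        // the client sends composes on top of this pre-filtered set.
        return _ctx.Context.Todos.Where(t => t.OwnerId == userId);
    }

    private int GetCurrentUserId()
    {
        // however you resolve the caller, e.g. from the auth principal (placeholder)
        return 0;
    }
}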
