Service Fabric requires [DataContract] and [DataMember] attributes on all classes that are used as input parameters for Actor services.
Is there a way to override this default?
In our project we make heavy use of read-only message classes, which have read-only properties and constructors. Serialization is handled by the Newtonsoft JSON serializer, which works just great. Now I want to send these messages in Service Fabric, so I need a way to override the default WCF-like serialization with something like the JSON serializer.
I was able to make a workaround by implementing the serialization myself. Basically, this is how it works:
Define an actor (+ contract) which accepts and returns byte[] + type name, e.g.
Task<byte[]> DoWork(string typeName, byte[] message);
Deserialize byte[] to an instance of the specified type using a custom serializer (I used the Newtonsoft JSON converter).
On the sending side, serialize object to byte[] using the same serializer.
I defined some base classes to wrap that, so I don't have to repeat it for every actor / client.
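For reference, a minimal sketch of this workaround, assuming Newtonsoft.Json (the IWorkerActor and MessageSerializer names are illustrative, not the actual code from the post):

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Newtonsoft.Json;

// Actor contract: every payload travels as an opaque byte[] plus the
// assembly-qualified type name needed to deserialize it.
public interface IWorkerActor : IActor
{
    Task<byte[]> DoWork(string typeName, byte[] message);
}

// Helper shared by the client and the actor to run the custom (JSON)
// serialization on both sides of the wire.
public static class MessageSerializer
{
    public static byte[] Serialize(object message) =>
        Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(message));

    public static object Deserialize(string typeName, byte[] payload) =>
        JsonConvert.DeserializeObject(
            Encoding.UTF8.GetString(payload),
            Type.GetType(typeName, throwOnError: true));
}
```

The base classes mentioned above would then wrap Serialize/Deserialize so each actor and client doesn't repeat the conversion.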
This feels a bit hacky though. It would be nice to get some opinion from the Service Fabric team, and even nicer to get a native extensibility point from the platform itself. DataContract feels a bit archaic; it's not used anywhere in our project anymore.
This approach is described in more detail in my blog post.
I don't think this is possible at the moment, unfortunately. From this article:
"Reliable Collections allow the serializer to be overridden, but
Reliable Actors currently do not."
So it might be worth mapping to new classes.
In our team, we use request and response DTOs throughout our hierarchy of business logic assemblies (beyond the isolated DB DTOs).
We have a requirement for no SS dependencies in the business logic layer.
So we don't use the IReturn or IReturnVoid interfaces. We use only plain C# objects without inheritance.
As for routing, we use the Fluent API in AppHost.Configure, essentially creating a routing table.
ServiceStack behaves exceptionally well, in our case.
Our Service.Model can be used from the business logic layer, without dependencies.
The service function is actually a thin wrapper that calls the business logic function to return the response DTO.
But the JsonServiceClient.Get function accepts only an IReturn object as a parameter, or the URI directly.
It does not accept a plain object as a parameter, like the Post function does.
Any advice?
Update 1.
mythz,
About IReturn: unfortunately, in our case there is a requirement not to take even the lightest SS dependency in the business logic modules.
The service functions are thin wrappers calling the business modules.
The link between the two layers is only the Request and Response DTOs. We like this approach very much.
Yes, they are "message operations", but they also serve as messages between the application layers.
Also, my clients are mainly jQuery AJAX, not C#. Because of mobile, the great majority lean toward jQuery AJAX.
So, in our case, we can only use objects not marked with IReturn. ServiceStack behaves very well.
The API only accepts IReturn<TResponse> to make it clear that it only accepts and works with Request DTO's and not just any DTO or object. Request DTO's are "message operations" and shouldn't be re-used for anything else, the DTO types can be, but not the Request DTO which is your external facing service contract and shouldn't be coupled to any other concerns.
The DTO attributes like [Route], IReturn<T>, [Api], [Restrict], etc are just extra metadata that can't be expressed in C#, but just like defining the type of a DTO property, it's still metadata describing the service, and if you attribute them on the DTO's then they become sharable and introspectable on the client as well. E.g. the ServiceClients will only be able to use the custom routes defined with [Route] because that's the only information clients have, if there are none it will end up falling back to using the pre-defined routes.
ServiceStack encourages defining IReturn<T> markers as it lets you infer more about the service by glancing at the Request DTO, ensures services are restricted to returning the same type (good practice) and centralizes what the Service returns rather than spreading it over the different (more verbose/non-DRY) call-sites, which also means that if you change the Response a service returns you'll get compiler feedback on which call-sites need updating. Not everyone is aware of this info/behavior, which is why ServiceStack promotes this "pit of success" development by encouraging the use of IReturn<T> markers, so not everyone has to be.
As for the dependencies, the only dependency your DTO's should need to reference is ServiceStack.Interfaces.dll which is purposely a light-weight, impl-free dll. In v3 this needs to reference the ServiceStack.Common NuGet pkg but for v4 we'll provide a stand-alone ServiceStack.Interfaces NuGet pkg providing the minimum/lightest dependency your DTO's can reference.
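To make the above concrete, here is a minimal sketch of a Request DTO carrying both markers (the GetOrder names and route are hypothetical, not from the thread):

```csharp
using ServiceStack;

// The [Route] and IReturn<TResponse> metadata live on the DTO itself,
// so C# service clients can discover both the custom route and the
// response type from the Request DTO alone.
[Route("/orders/{Id}", "GET")]
public class GetOrder : IReturn<GetOrderResponse>
{
    public int Id { get; set; }
}

public class GetOrderResponse
{
    public int Id { get; set; }
    public string Comment { get; set; }
}
```

With the marker in place the client call needs no explicit URL or response type, e.g. `var response = client.Get(new GetOrder { Id = 1 });`.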
According to Mythz (Getting ServiceStack to retain type information) he recommends not using inheritance in DTOs. What about the use case where I want every request to optionally provide an API key, or a location parameter? Are interfaces bad to use but abstract classes are ok? Anyone have any recommendation?
Your use case, "every request to optionally provide an API key, or a location parameter", is traditionally handled in SS through filter attributes.
Here's an example where a required authorization header is managed both server-side and client-side.
Given the right requirements, Mythz has recommended the use of a custom interface in DTOs: "Request DTO that implements a custom ITenant interface that just has a Tenant property. Another solution is to use a IHttpRequest.Tennant() extension method that you can re-use inside all services which inspects the AbsoluteUri or RawUrl properties." See this comment: (Multi-tenant ServiceStack API, same deployment to respond to requests on different hostnames?)
I use interfaces and then check for implementations of the interface in a request filter.
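A sketch of that interface-plus-request-filter approach (IHasApiKey is an illustrative marker interface, and the exact filter API varies between ServiceStack versions; this is the v4-style shape):

```csharp
using ServiceStack;
using ServiceStack.Web;

// Marker interface implemented by any Request DTO that carries an API key.
public interface IHasApiKey
{
    string ApiKey { get; set; }
}

// In AppHost.Configure: a global request filter inspects every incoming
// DTO and rejects requests whose key is missing.
// this.GlobalRequestFilters.Add((req, res, requestDto) =>
// {
//     if (requestDto is IHasApiKey withKey && string.IsNullOrEmpty(withKey.ApiKey))
//     {
//         res.StatusCode = 401;
//         res.EndRequest();
//     }
// });
```

The filter fires once per request, so the check isn't repeated in each service implementation.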
I'm rather new at this, but I've come to understand the security risks of using Breeze to expose an IQueryable<>. Would someone please suggest to me some best practices (or merely some recommendations) for securing an IQueryable collection that's exposed in the JavaScript? Thanks.
I would not expose via IQueryable any data that should not be sent to the client via an arbitrary query. So a projection could be exposed, or a DTO.
I'm not sure if this answers your question though... What "security risks" are you worried about?
I second this question, too. But to add some specifics along the questions that Ward asked:
In securing queryable services, two traditional issues come to mind:
1) Vertical security: which items is the currently logged-in user (based on user identity or roles) NOT allowed to see in the UI? Those need to be removed from the queryable list. IMO this can be done as part of the queryable ActionFilter magic by chaining some exclude logic onto the returned IQueryable.
2) Horizontal security: some models contain fields that are not appropriate for the logged-in user to see (and/or edit). This is more difficult to handle, as it's not a matter of just removing instances from the returned IQueryable. The returned class has a different shape, and can therefore be handled either by the JSON formatter omitting fields based on security (which, AFAIK, screws up the Breeze metadata), or by returning a DTO, in which case, since the DTO doesn't exist in the metadata, it's not a full life-cycle (updatable) class? (I am asking this, not stating it.)
I would like to see either built-in support or easy-to-implement recipes for number 2). Perhaps some sample code to amend the client-side metadata to make DTOs work perfectly fine commingled with model objects. The newest VS 2012 SPA templates (in the TodoList app) seem to push DTO variants of the model object on both the queryable and the insert/update side. This is similar to the traditional MVC model views...
Finally, I'd add a request for auto-handling of the over-posting security issue for inserts and updates. This is the reciprocal aspect of 2): some users should not be able to edit certain fields.
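As an illustration of the "vertical security" case (1) above, a hedged sketch of a Breeze-style Web API controller that chains exclusion logic onto the IQueryable before the client's query is applied (the Document model, OwnerId field, and GetCurrentUserId helper are all assumptions for the example):

```csharp
using System.Linq;
using System.Web.Http;
using Breeze.WebApi;

[BreezeController]
public class DocumentsController : ApiController
{
    private readonly EFContextProvider<AppDbContext> _contextProvider =
        new EFContextProvider<AppDbContext>();

    [HttpGet]
    public IQueryable<Document> Documents()
    {
        int userId = GetCurrentUserId(); // assumed auth helper

        // Rows the caller may never see are filtered out server-side;
        // the client's OData filters/paging compose on top of this.
        return _contextProvider.Context.Documents
            .Where(d => d.OwnerId == userId || d.IsPublic);
    }
}
```

Whatever the client sends in the query string composes over this pre-filtered queryable, so the excluded rows can never be reached.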
I would really like to utilize ServiceStack for a service I need to write, but I'm hitting a mental block wrapping my mind around how it works, and more precisely how I can make it work for my intents and purposes. I don't have a strong ASP.NET background, mainly backend, so maybe that's why the mental block.
I have a legacy platform to which I connect via a native C++ API. I have wrapped the native API in C++/CLI as a .NET class library; this would be my equivalent of the Todo repo being injected in the samples.
The data moving back and forth is exposed in the class lib as value structs. For example, an account would be defined like this:
struct Account {
    int id;
    string name;
    string password;
    ...
}
Order would be something like this:
struct Order {
    int orderId;
    int account;
    string comment;
    ...
}
The lib exposes a lot of functionality and operations for all sorts of different objects defined similarly to the above. What I'm trying to understand is this:
1) How do I register the API with the container? More precisely, I don't understand how the Register method knows what type it's supposed to get. In the Todo sample everything is defined in the same assembly, so it's hard to see how the backend gets injected.
2) Is there a way to manage the lifecycle of the backend in the framework? Can I make it a singleton across all connections?
3) Do I have to wrap my structs in classes that map fields to a request? I'm not quite clear on how the request objects are defined; it seems that the content of the request should be fields that would translate to URLs of fieldname/type for operations. If there is a way to avoid wrapping, how do I then limit which fields to expose in the API and which not to?
4) Do I absolutely have to create a service per data type? So for the structures above, would I have to implement one service for Orders and one for Accounts, or is there a way to combine them into one? I love that SS can be converted to talk over MQs; would making a combined service make it difficult to operate over MQs in the future? What are the cons of this approach?
5) Lastly, I would like to expose operations in the API that, AFAIK, would violate the REST contract. Something like: archive accounts older than... This would be an operation that returns a success/fail status, not an update/delete etc. Essentially, drive some functionality via an HTTP request. Is this possible in SS, and if so, does using it this way have any debilitating consequences for the framework's operation?
1) To register your API you will want to use the built-in IoC, Funq.
container.Register(c => new LegacyApiService())
.ReusedWithin(ReuseScope.Container);
Funq is able to automatically wire these services into your API services. Take a look at https://github.com/ServiceStack/ServiceStack/wiki/The-IoC-container.
You can also resolve anywhere the container is available using the TryResolve method.
2) You can control the object lifetime with Funq by specifying a ReuseScope when you register them. You will want to look at:
ReuseScope.Container // singleton scope: one instance is used per application lifetime
3) You are going to need to create plain old classes (DTOs) for your structs. This is necessary for ServiceStack. All of your DTOs' public properties will get serialized. You can also optionally use DataMemberAttribute and IgnoreDataMemberAttribute to control which public properties get serialized.
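For instance, a DTO mirroring the native Account struct might look like this sketch, using [IgnoreDataMember] to keep the password out of serialized responses (the mapping itself is an assumption for illustration):

```csharp
using System.Runtime.Serialization;

// DTO counterpart of the native Account struct: public properties are
// serialized by default; [IgnoreDataMember] opts a property out.
public class Account
{
    public int Id { get; set; }
    public string Name { get; set; }

    [IgnoreDataMember]
    public string Password { get; set; } // never leaves the service
}
```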
4) You will need to have a service per request DTO. However, you can keep this code to a minimum and call a centralized business layer. This is necessary since each route + verb needs to have a distinct operation, hence the one service class per DTO.
5) You can easily define more routes and there is nothing that forces you to adhere to REST rules, you are free to implement the HTTP verbs as you see fit. You can easily create specialized routes on GETs that perform an action such as archiving. There are no debilitating consequences here, just perhaps confusion to your API consumers. Just be sure everyone is clear how the API works with documentation.
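A sketch of such a specialized "operation" route for the archiving example from the question (the names and route are hypothetical):

```csharp
using System;
using ServiceStack;

// A non-CRUD operation expressed as its own Request DTO and route:
// the verb and URL are whatever makes the intent clearest to consumers.
[Route("/accounts/archive", "POST")]
public class ArchiveAccounts : IReturn<ArchiveAccountsResponse>
{
    public DateTime OlderThan { get; set; }
}

public class ArchiveAccountsResponse
{
    public bool Success { get; set; }
    public int ArchivedCount { get; set; }
}
```

The service implementation behind it would simply call the legacy API's archive function and report the outcome.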
So, I've looked at some posts about the Specification Pattern here, and haven't found an answer to this one yet.
My question is: in an n-layered architecture, where exactly should my Specifications get "newed" up?
I could put them in my Service Layer (aka the Application layer; basically, something an .aspx code-behind would talk to), but I feel like by doing that, I'm letting business rules leak out of the Domain. If the Domain objects are accessed some other way (besides the Service Layer), the Domain objects cannot enforce their own business rules.
I could inject the Specification into my Model class via constructor injection. But again, this feels "wrong". I feel like the only thing that should be injected into Model classes are "services", like Caching, Logging, dirty-flag tracking, etc... And if you can avoid it, to use Aspects instead of littering the constructors of the Model classes with tons of service interfaces.
I could inject the Specification via method injection (sometimes referred to as "Double Dispatch"???), and explicitly have that method encapsulate the injected Specification to enforce its business rule.
Create a "Domain Services" class, which would take a Specification(s) via constructor injection, and then let the Service Layer use the Domain Service to coordinate the Domain object. This seems OK to me, as the rule enforced by the Specification is still in the "Domain", and the Domain Service class can be named very much like the Domain object it's coordinating. The thing here is I feel like I'm writing a LOT of classes and code, just to "properly" implement the Specification pattern.
Add to this, that the Specification in question requires a Repository in order to determine whether it's "satisfied" or not.
This could potentially cause performance problems, esp. if I use constructor injection b/c consuming code could call a property that perhaps wraps the Specification, and that, in turn is calling the database.
So any ideas/thoughts/links to articles?
Where is the best place to new up and use Specifications?
Short answer:
You use Specifications mainly in your Service Layer, so there.
Long answer:
First of all, there are two questions here:
Where should your specs live, and where should they be new'd up?
Just like your repository interfaces, your specs should live in the domain layer, as they are, after all, domain specific. There's a question on SO that discusses this on repository interfaces.
Where should they be new'd up, though? Well, I use LinqSpecs in my repositories, and mostly only have three methods on my repository:
public interface ILinqSpecsRepository<T>
{
    IEnumerable<T> FindAll(Specification<T> specification);
    IEnumerable<T> FindAll<TRelated>(Specification<T> specification, Expression<Func<T, TRelated>> fetchExpression);
    T FindOne(Specification<T> specification);
}
The rest of my queries are constructed in my service layer. That keeps the repositories from getting bloated with methods like GetUserByEmail, GetUserById, GetUserByStatus etc.
In my service, I new-up my specs and pass them to the FindAll or FindOne methods of my repository. For example:
public User GetUserByEmail(string email)
{
    var withEmail = new UserByEmail(email); // the specification
    return userRepository.FindOne(withEmail);
}
and here is the Specification:
public class UserByEmail : Specification<User>
{
    private readonly string email;

    public UserByEmail(string email)
    {
        this.email = email;
    }

    #region Overrides of Specification<User>
    public override Expression<Func<User, bool>> IsSatisfiedBy()
    {
        return x => x.Email == email;
    }
    #endregion
}
So to answer your question, specs are new'd up in the service layer (in my book).
I feel like the only thing that should be injected into Model classes
are "services"
IMO you should not be injecting anything into domain entities.
Add to this, that the Specification in question requires a Repository
in order to determine whether it's "satisfied" or not.
That's a code smell. I would review your code there. A Specification should definitely not require a repository.
A specification is an implementation check of a business rule. It has to exist in the domain layer full stop.
It's hard to give specifics on how you do this, as every codebase is different, but in my opinion any business logic needs to be in the domain layer and nowhere else. This business logic needs to be completely testable and loosely coupled from the UI, database, external services and other non-domain dependencies. So I would definitely rule out 1, 2 and 3 above.
4 is an option; at least the specification will live in your domain layer. However, the newing up of specifications really depends again on the implementation. We usually use dependency injection, hence the newing up of pretty much all of our objects is performed via an IoC container and corresponding bootstrapping code (i.e. we usually wire the application fluently). However, we would never directly link business logic to e.g. UI model classes and the like. We usually have contours/boundaries between things such as the UI and the domain. We usually define domain service contracts, which can then be used by outside layers such as the UI.
Lastly, my answer assumes that the system you are working on is at least somewhat complex. If it is a very simple system, domain-driven design as a concept is probably over the top. However, some concepts such as testability, readability, SoC etc. should be respected regardless of the codebase, in my opinion.