How to create loosely coupled services in MVC architecture? - node.js

I've tried to Google a lot but could not get an answer.
I'm building an MVC app and I have a question that's been bothering me. I would like to create services that are as loosely coupled as possible.
Let's assume that in a controller method I want to use three different services (services A, B, and C).
Assuming service C needs the values returned by services A and B in order to work, do you actually put all of that call logic in the controller? In a different layer, perhaps? If so, which layer is the ideal one?
Attaching some code to better explain my question.
function evaluateStatus(a, b) {
  try {
    const aResult = this.aService.getSomething();
    const bResult = this.bService.getSomething();
    if (aResult === "SOME VALUE" && bResult <= 50) {
      // Service C depends on the values returned by services A and B.
      return this.cService.doSomething(aResult);
    }
  } catch (e) {
    // handle/log the error
  }
  return null;
}
Thanks a lot.

A controller can invoke multiple services to obtain data and pass it to the view/client. There is no point in having another layer for this. The controller contains the logic of the site and is the glue between models/services and views/clients.
But if you have common business logic that is used in multiple places, you can extract it into a class/function where it can be reused, as sketched below.
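For illustration, here is roughly what that extraction might look like: the cross-service logic from the question pulled out into a reusable function that receives the services as dependencies, so the controller only does the wiring. This is just a sketch; the interfaces and names below are made up for the example, not part of the original post.

interface AService { getSomething(): string; }
interface BService { getSomething(): number; }
interface CService { doSomething(input: string): string; }

// The "call logic" lives in one reusable place, decoupled from any concrete service.
export function evaluateStatus(
  aService: AService,
  bService: BService,
  cService: CService,
): string | null {
  const aResult = aService.getSomething();
  const bResult = bService.getSomething();
  if (aResult === "SOME VALUE" && bResult <= 50) {
    return cService.doSomething(aResult);
  }
  return null;
}

// In the controller, only the wiring remains:
// const status = evaluateStatus(this.aService, this.bService, this.cService);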

What are the risks involved in using custom decorators as validation pipes in Nestjs?

Background
A data structure in my database consists of "sections": lists of custom objects. The number of sections may expand in the future, so to keep my code as DRY as possible, I wanted the section to add/update/delete an item from to be defined dynamically as a parameter.
I quickly realised that doing something like @Body() section: SectionA | SectionB | SectionC... disables validation, so I needed a single DTO Section that could encompass all sections. To do that I need to define dynamically which validators to apply, as I have several @IsNotEmpty constraints.
So I came across this post, whose selected answer recommends using groups.
This posed the following challenges:
I now had to write a custom validation pipe (I relied heavily on this).
I wanted to override the global validation pipe that I already had running and use my custom one for just that method. Outcome: it didn't work; I had to start defining the pipe on every controller method, a tradeoff I am willing to accept. It looks like there is no simple alternative.
However, I was then faced with the final problem: how to use the parameters in the request to define these groups in the validator; another brick wall, with no simple solution.
Solution
This question has been asked here but no satisfactory solution was actually given.
The first option recommended redefining the scope of the pipe to "request" level, but didn't explain how, and the solutions I found online didn't work.
The second solution, using a custom decorator to perform the validation instead, did work, and very well in fact. Here is a simplified version of the code:
import { BadRequestException, createParamDecorator, ExecutionContext } from '@nestjs/common';
import { plainToInstance } from 'class-transformer';
import { validate } from 'class-validator';

export const ProfileSectionData = createParamDecorator(
  async (data: unknown, ctx: ExecutionContext) => {
    const request = ctx.switchToHttp().getRequest();
    // I don't need to access the metatype from the request because I know
    // what type I need, but I'm sure I could if need be.
    const object = plainToInstance(SectionDto, request.body);
    // The validation groups come straight from the route parameter.
    const groups = [request.params.profileSection];
    const validatorOptions = { groups, ...defaultOptions }; // defaultOptions defined elsewhere
    const errors = await validate(object, validatorOptions);
    if (errors.length > 0) {
      throw new BadRequestException();
    }
    return request.body;
  },
);
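For context, the SectionDto referenced above could look something like the sketch below, with each constraint tagged with the validation group that matches the profileSection route parameter. The property and group names here are hypothetical; only the groups option itself comes from class-validator.

import { IsNotEmpty } from 'class-validator';

// Hypothetical DTO covering every section: validate(object, { groups })
// only enforces the constraints whose group matches the requested section.
export class SectionDto {
  @IsNotEmpty({ groups: ['sectionA'] })
  fieldRequiredBySectionA?: string;

  @IsNotEmpty({ groups: ['sectionB'] })
  fieldRequiredBySectionB?: string;
}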
Implications?
Here's my question. When Jay McDoniel recommended using a custom decorator, they warn: "Do note, that this could impact how the ValidationPipe is functioning if that is bound globally, at the class, or method level."
What does this mean?
Are there any vulnerabilities or performance drawbacks associated with this solution?
Obviously, one drawback is that you are performing validation outside a validation pipe, which is not ideal from the point of view of ordering and single responsibility, but I can't think of tangible inconveniences beyond aesthetics and maintainability.
Knowing the background, would you have approached the problem in a completely different way?

DDD: Create one aggregate root within another AR

Suppose that I have 2 aggregate roots (AR) in my domain and invoking some method on the 1st requires access to an instance of the 2nd. In DDD how and where should retrieval and creation of the 2nd AR happen?
Here's a contrived example: a TravelerEntity that needs access to a SuitcaseEntity. I'm looking for an answer that doesn't pollute the domain layer with infrastructure code.
public class TravelerEntity {
    // null if traveler has no suitcase yet.
    private String suitcaseId = ...;
    ...

    // Returns an empty suitcase ready for packing. Caller
    public SuitcaseEntity startTrip(SuitcaseRepository repo) {
        SuitcaseEntity suitcase;
        if (suitcaseId == null) {
            suitcase = new SuitcaseFactory().create();
            suitcase = repo.save(suitcase);
            suitcaseId = suitcase.getId();
        } else {
            suitcase = repo.findOne(suitcaseId);
        }
        suitcase.emptyContents();
        return suitcase;
    }
}
An application layer service handling the start trip request would get the appropriate SuitcaseRepository implementation via DI, get the TravelerEntity via a TravelerRepository implementation and call its startTrip() method.
The only alternative I thought of was to move SuitcaseEntity management to a domain service, but I don't want to create the suitcase before starting the trip, and I don't want to end up with an anemic TravelerEntity.
I'm a little uncertain about one AR creating and saving another AR. Is this OK since the repo and factory encapsulate specifics about the 2nd AR? Is there a danger I'm missing? Is there a better alternative?
I'm new enough to DDD to question my thinking on this. And the other questions I found about ARs seem to focus on identifying them properly, not on managing their lifecycles in relation to one another.
Ideally TravelerEntity wouldn't manipulate a SuitcaseRepository because it shouldn't know about an external thing where suitcases are stored, only about its own internals. Instead, it could new up a SuitCase and add it to its internal [list of] suitcases. If you wanted that to work with ORMs without specifically adding the suitcase to the repository though, you'd have to store the whole suitcase object in TravelerEntity.suitcaseList and not just its ID, which conflicts with the "store references to other AR's as IDs" best practice.
Moreover, TravelerEntity.startTrip() returning a suitcase seems a bit artificial and not very explicit, and you'll be in trouble if you need to return other entities created by startTrip(). So a good solution could be to have TravelerEntity emit a SuitcaseAdded event containing the suitcase data once it has added the suitcase to its list. An application service could subscribe to the event, add the suitcase to the SuitcaseRepository and commit the transaction, effectively saving both the new suitcase and the modified traveler to the database.
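A rough sketch of that event-based shape, written in TypeScript purely for illustration (the names and structure are assumptions, not taken from the question):

import { randomUUID } from 'node:crypto';

interface SuitcaseAdded {
  travelerId: string;
  suitcase: { id: string; contents: string[] };
}

class TravelerEntity {
  private readonly events: SuitcaseAdded[] = [];
  private readonly suitcaseIds: string[] = [];

  constructor(private readonly id: string) {}

  startTrip(): void {
    // The traveler creates its own suitcase and records a domain event;
    // it never touches a repository.
    const suitcase = { id: randomUUID(), contents: [] as string[] };
    this.suitcaseIds.push(suitcase.id);
    this.events.push({ travelerId: this.id, suitcase });
  }

  // The application service drains these after calling startTrip().
  pullDomainEvents(): SuitcaseAdded[] {
    return this.events.splice(0, this.events.length);
  }
}

The application service would then call startTrip(), pull the events, save each new suitcase through the SuitcaseRepository, and commit the transaction.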
Alternatively, you could place startTrip() in a domain service instead of an entity. There it might be more legitimate to use SuitcaseRepository, since a domain service is allowed to know about multiple domain entities and the overall domain process going on.
First of all, persistence is not the domain's job, so I would get rid of all the repositories from the domain models and create a service that uses them.
Second of all, you should rethink your design. Why should the StartTrip method of a Traveller return a SuitCase?
A Traveller either has a suitcase or doesn't. Once you have retrieved the Traveller, you should already have their SuitCases too.
public class StartTripService {
    public void StartTrip(int travellerId) {
        var traveller = travellerRepo.Get(travellerId);
        traveller.StartTrip();
    }
}

Hiding a method for a related model

How can I hide a method for a related model?
Let's say that, in the demo app loopback-example-datagraph, I don't want to expose the DELETE /customers/{id}/orders method.
How should I go about this?
For LoopBack 1.x, the relation is mapped to a prototype method internally. To avoid exposing it as a REST API, try the following:
var customer = app.models.Customer;
customer.prototype.__delete_orders.shared = false;
Disclaimer: I have never used StrongLoop.
This is a wild stab, but it looks like it might work. When you add a relation, it adds a method to the underlying model class. When you add a hasMany relation, it adds this method:
customer.orders.destroyAll(function(err) {
...
});
source: http://docs.strongloop.com/display/DOC/Creating+model+relations#Creatingmodelrelations-Methodsaddedtothemodel.1
You should be able to say something like
var customer = app.models.Customer;
customer.orders.destroyAll.shared = false;

ServiceStack: RESTful Resource Versioning

I've read the Advantages of message-based web services article and am wondering if there is a recommended style/practice for versioning RESTful resources in ServiceStack? The different versions could render different responses or have different input parameters in the Request DTO.
I'm leaning toward a URL type versioning (i.e /v1/movies/{Id}), but I have seen other practices that set the version in the HTTP headers (i.e Content-Type: application/vnd.company.myapp-v2).
I'm hoping for a way that works with the metadata page, but it's not so much a requirement, as I've noticed that simply using folder structure/namespacing works fine when rendering routes.
For example (this doesn't render right in the metadata page but performs properly if you know the direct route/url)
/v1/movies/{id}
/v1.1/movies/{id}
Code
namespace Samples.Movies.Operations.v1_1
{
    [Route("/v1.1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}

namespace Samples.Movies.Operations.v1
{
    [Route("/v1/Movies", "GET")]
    public class Movies
    {
        ...
    }
}
and corresponding services...
public class MovieService: ServiceBase<Samples.Movies.Operations.v1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1.Movies request)
    {
        ...
    }
}

public class MovieService: ServiceBase<Samples.Movies.Operations.v1_1.Movies>
{
    protected override object Run(Samples.Movies.Operations.v1_1.Movies request)
    {
        ...
    }
}
Try to evolve (not re-implement) existing services
For versioning, you are going to be in for a world of hurt if you try to maintain different static types for different version endpoints. We initially started down this route, but as soon as you start to support your first version, the development effort to maintain multiple versions of the same service explodes: you need to maintain manual mappings between the different types, which easily leaks into having to maintain multiple parallel implementations, each coupled to a different version's type - a massive violation of DRY. This is less of an issue for dynamic languages, where the same models can easily be re-used by different versions.
Take advantage of built-in versioning in serializers
My recommendation is not to explicitly version but take advantage of the versioning capabilities inside the serialization formats.
E.g: you generally don't need to worry about versioning with JSON clients as the versioning capabilities of the JSON and JSV Serializers are much more resilient.
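To illustrate the point (this is not ServiceStack-specific, just the general property of JSON deserialization that the approach relies on): extra fields sent by old clients are simply ignored, and fields added in newer DTOs come back as undefined/default for old payloads, so both can hit the same endpoint.

// A v2-shaped DTO receiving a payload from a v1 client.
interface FooV2 {
  name?: string;        // present in v1 and v2 payloads
  displayName?: string; // new in v2; undefined for v1 payloads
}

const v1Payload = '{"name":"Foo Bar","legacyField":123}';
const request = JSON.parse(v1Payload) as FooV2;

console.log(request.name);        // "Foo Bar"
console.log(request.displayName); // undefined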
Enhance your existing services defensively
With XML and DataContracts you can freely add and remove fields without making a breaking change. If you add IExtensibleDataObject to your response DTOs you also have the potential to access data that's not defined on the DTO. My approach to versioning is to program defensively so as not to introduce a breaking change; you can verify this is the case with integration tests using old DTOs. Here are some tips I follow:
Never change the type of an existing property - if you need it to be a different type, add another property and use the old/existing one to determine the version.
Program defensively: realize which properties don't exist for older clients, so don't make them mandatory.
Keep a single global namespace (only relevant for XML/SOAP endpoints)
I do this by using the [assembly] attribute in the AssemblyInfo.cs of each of your DTO projects:
[assembly: ContractNamespace("http://schemas.servicestack.net/types",
ClrNamespace = "MyServiceModel.DtoTypes")]
The assembly attribute saves you from manually specifying explicit namespaces on each DTO, i.e:
namespace MyServiceModel.DtoTypes {
    [DataContract(Namespace = "http://schemas.servicestack.net/types")]
    public class Foo { .. }
}
If you want to use a different XML namespace than the default above you need to register it with:
SetConfig(new EndpointHostConfig {
    WsdlServiceNamespace = "http://schemas.my.org/types"
});
Embedding Versioning in DTOs
Most of the time, if you program defensively and evolve your services gracefully, you won't need to know exactly what version a specific client is using, as you can infer it from the data that is populated. But in the rare cases where your service needs to tweak its behavior based on the specific version of the client, you can embed version information in your DTOs.
With the first release of your DTOs you publish, you can happily create them without any thought of versioning.
class Foo {
    string Name;
}
But maybe, for some reason, the Form/UI was changed and you no longer wanted the client to use the ambiguous Name variable, and you also wanted to track the specific version the client was using:
class Foo {
    Foo() {
        Version = 1;
    }

    int Version;
    string Name;
    string DisplayName;
    int Age;
}
Later it was decided in a team meeting that DisplayName wasn't good enough and you should split it out into separate fields:
class Foo {
    Foo() {
        Version = 2;
    }

    int Version;
    string Name;
    string DisplayName;
    string FirstName;
    string LastName;
    DateTime? DateOfBirth;
}
So the current state is that you have 3 different client versions out, with existing calls that look like:
v1 Release:
client.Post(new Foo { Name = "Foo Bar" });
v2 Release:
client.Post(new Foo { Name="Bar", DisplayName="Foo Bar", Age=18 });
v3 Release:
client.Post(new Foo { FirstName = "Foo", LastName = "Bar",
DateOfBirth = new DateTime(1994, 01, 01) });
You can continue to handle these different versions in the same implementation (which will be using the latest v3 version of the DTOs) e.g:
class FooService : Service {
    public object Post(Foo request) {
        // What you can expect to receive from each client version:
        //
        // v1: request.Version == 0, request.Name == "Foo Bar",
        //     request.DisplayName == null, request.Age == 0, request.DateOfBirth == null
        //
        // v2: request.Version == 2, request.Name == null, request.DisplayName == "Foo Bar",
        //     request.Age == 18, request.DateOfBirth == null
        //
        // v3: request.Version == 3, request.Name == null, request.DisplayName == null,
        //     request.FirstName == "Foo", request.LastName == "Bar",
        //     request.Age == 0, request.DateOfBirth == new DateTime(1994, 01, 01)
        ...
    }
}
Framing the Problem
The API is the part of your system that exposes its expression. It defines the concepts and the semantics of communicating in your domain. The problem comes when you want to change what can be expressed or how it can be expressed.
There can be differences in both the method of expression and what is being expressed. The first problem tends to be differences in tokens (first and last name instead of name). The second problem is expressing different things (the ability to rename oneself).
A long-term versioning solution will need to solve both of these challenges.
Evolving an API
Evolving a service by changing the resource types is a form of implicit versioning. It uses the construction of the object to determine behavior. It works best when there are only minor changes to the method of expression (like the names). It does not work well for more complex changes to the method of expression or for changes to what can be expressed. Code tends to be scattered throughout.
Specific Versioning
When changes become more complex, it is important to keep the logic for each version separate. Even in mythz's example, he segregated the code for each version. However, the code is still mixed together in the same methods. It is very easy for code for the different versions to start collapsing into each other, and it is likely to spread out. Getting rid of support for a previous version can be difficult.
Additionally, you will need to keep your old code in sync to any changes in its dependencies. If a database changes, the code supporting the old model will also need to change.
A Better Way
The best way I've found is to tackle the expression problem directly. Each time a new version of the API is released, the previous version is implemented as a thin layer on top of the new one. This is generally easy because the changes are small.
It really shines in two ways: first, all the code to handle the mapping is in one spot, so it is easy to understand or remove later; and second, it doesn't require maintenance as new APIs are developed (the Russian-doll model).
The problem is when the new API is less expressive than the old API. This is a problem that will need to be solved no matter what the solution is for keeping the old version around. It just becomes clear that there is a problem and what the solution for that problem is.
mythz's example rewritten in this style is:
namespace APIv3 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var data = repository.getData();
            request.FirstName = data.firstName;
            request.LastName = data.lastName;
            request.DateOfBirth = data.dateOfBirth;
            ...
        }
    }
}

namespace APIv2 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            // Delegate to the newer layer, then map back into this version's vocabulary.
            var v3Request = APIv3.FooService.OnPost(request);
            request.DisplayName = v3Request.FirstName + " " + v3Request.LastName;
            request.Age = (new DateTime() - v3Request.DateOfBirth).years;
            ...
        }
    }
}

namespace APIv1 {
    class FooService : RestServiceBase<Foo> {
        public object OnPost(Foo request) {
            var v2Request = APIv2.FooService.OnPost(request);
            request.Name = v2Request.DisplayName;
            ...
        }
    }
}
Each exposed object is clear. The same mapping code still needs to be written in both styles, but in the separated style, only the mapping relevant to a type needs to be written. There is no need to explicitly map code that doesn't apply (which is just another potential source of error). The dependencies of previous APIs stay fixed when you add future APIs or change what the newest API layer depends on. For example, if the data source changes, then only the most recent API (version 3) needs to change in this style. In the combined style, you would need to code the changes for each of the APIs supported.
One concern in the comments was the addition of types to the code base. This is not a problem because these types are exposed externally. Providing the types explicitly in the code base makes them easy to discover and isolate in testing. It is much better for maintainability to be clear. Another benefit is that this method does not produce additional logic, but only adds additional types.
I am also trying to come up with a solution for this and was thinking of doing something like the below. (Based on a lot of Googling and Stack Overflow querying, so this is built on the shoulders of many others.)
First up, I don't want to debate whether the version should be in the URI or in a request header. There are pros/cons for both approaches, so I think each of us needs to use what meets our requirements best.
This is about how to design/architect the Java message objects and the resource implementation classes.
So let’s get to it.
I would approach this in two steps: minor changes (e.g. 1.0 to 1.1) and major changes (e.g. 1.1 to 2.0).
Approach for minor changes
So let's say we go with the same example classes used by @mythz.
Initially we have
class Foo { string Name; }
We provide access to this resource as /V1.0/fooresource/{id}
In my use case, I use JAX-RS,
@Path("/{versionid}/fooresource")
public class FooResource {
    @GET
    @Path("/{id}")
    public Foo getFoo(@PathParam("versionid") String versionid, @PathParam("id") String fooId) {
        Foo foo = new Foo();
        // setters, load data from persistence, handle business logic, etc.
        return foo;
    }
}
Now let’s say we add 2 additional properties to Foo.
class Foo {
    string Name;
    string DisplayName;
    int Age;
}
What I do at this point is annotate the properties with a @Version annotation:
class Foo {
    @Version("V1.0") string Name;
    @Version("V1.1") string DisplayName;
    @Version("V1.1") int Age;
}
Then I have a response filter that will, based on the requested version, return to the user only the properties that match that version. Note that, for convenience, if there are properties that should be returned for all versions, you just don't annotate them and the filter will return them irrespective of the requested version.
This is sort of like a mediation layer. What I have explained is a simplistic version and it can get very complicated, but I hope you get the idea.
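To make the filter idea a little more concrete, here is a conceptual sketch (in TypeScript rather than Java, just to show the shape; in JAX-RS this would live in a ContainerResponseFilter that reads the @Version annotations via reflection, and the field/version names below are only examples):

// Map standing in for the @Version annotations on the Foo fields.
const fieldVersions: Record<string, string> = {
  Name: 'V1.0',
  DisplayName: 'V1.1',
  Age: 'V1.1',
};

// Keep un-annotated fields for every version, and annotated fields only when
// they were introduced in (or before) the requested version. Plain string
// comparison is enough for version labels like these.
function filterForVersion(
  entity: Record<string, unknown>,
  requestedVersion: string,
): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(entity)) {
    const introducedIn = fieldVersions[key];
    if (introducedIn === undefined || introducedIn <= requestedVersion) {
      result[key] = value;
    }
  }
  return result;
}

// filterForVersion({ Name: 'Foo', DisplayName: 'Foo Bar', Age: 18 }, 'V1.0')
// => { Name: 'Foo' }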
Approach for Major Version
Now, this can get quite complicated when a lot of changes have been made from one version to another. That is when we need to move to the second option.
Option 2 is essentially to branch off the codebase, make the changes on that code base, and host both versions in different contexts. At this point we might have to refactor the code base a bit to remove the version-mediation complexity introduced in approach one (i.e. make the code cleaner); this would mainly be in the filters.
Note that this is just what I am thinking; I haven't implemented it yet and wonder if it is a good idea.
Also, I was wondering if there are good mediation engines/ESBs that could do this type of transformation without having to use filters, but I haven't seen any that are as simple as using a filter. Maybe I haven't searched enough.
Interested in knowing thoughts of others and if this solution will address the original question.

Connecting the dots with DDD

I have read Evans, Nilsson and McCarthy, amongst others, and understand the concepts and reasoning behind domain-driven design; however, I'm finding it difficult to put all of these together in a real-world application. The lack of complete examples has left me scratching my head. I've found a lot of frameworks and simple examples but nothing so far that really demonstrates how to build a real business application following DDD.
Using the typical order management system as an example, take the case of order cancellation. In my design I can see an OrderCancellationService with a CancelOrder method which accepts the order # and a reason as parameters. It then has to perform the following 'steps':
Verify that the current user has the necessary permission to cancel an Order
Retrieve the Order entity with the specified order # from the OrderRepository
Verify that the Order may be canceled (should the service interrogate the state of the Order to evaluate the rules or should the Order have a CanCancel property that encapsulates the rules?)
Update the state of the Order entity by calling Order.Cancel(reason)
Persist the updated Order to the data store
Contact the CreditCardService to revert any credit card charges that have already been processed
Add an audit entry for the operation
Of course, all of this should happen in a transaction and none of the operations should be allowed to occur independently. What I mean is, I must revert the credit card transaction if I cancel the order, I cannot cancel and not perform this step. This, imo, suggests better encapsulation but I don't want to have a dependency on the CreditCardService in my domain object (Order), so it seems like this is the responsibility of the domain service.
I am looking for someone to show me code examples of how this could/should be "assembled". The thought process behind the code would be helpful in getting me to connect all of the dots for myself. Thx!
Your domain service may look like this. Note that we want to keep as much logic as possible in the entities, keeping the domain service thin. Also note that there is no direct dependency on a credit card or auditor implementation (DIP); we only depend on interfaces that are defined in our domain code. The implementation can later be injected in the application layer. The application layer would also be responsible for finding the Order by number and, more importantly, for wrapping the 'Cancel' call in a transaction (rolling back on exceptions).
class OrderCancellationService {
    private readonly ICreditCardGateway _creditCardGateway;
    private readonly IAuditor _auditor;

    public OrderCancellationService(
        ICreditCardGateway creditCardGateway,
        IAuditor auditor) {
        if (creditCardGateway == null) {
            throw new ArgumentNullException("creditCardGateway");
        }
        if (auditor == null) {
            throw new ArgumentNullException("auditor");
        }
        _creditCardGateway = creditCardGateway;
        _auditor = auditor;
    }

    public void Cancel(Order order) {
        if (order == null) {
            throw new ArgumentNullException("order");
        }
        // get current user through Ambient Context:
        // http://blogs.msdn.com/b/ploeh/archive/2007/07/23/ambientcontext.aspx
        if (!CurrentUser.CanCancelOrders()) {
            throw new InvalidOperationException(
                "Not enough permissions to cancel order. Use 'CanCancelOrders' to check.");
        }
        // try to keep as much domain logic in entities as possible
        if (!order.CanBeCancelled()) {
            throw new ArgumentException(
                "Order can not be cancelled. Use 'CanBeCancelled' to check.");
        }
        order.Cancel();
        // this can throw GatewayException that would be caught by the
        // 'Cancel' caller and rollback the transaction
        _creditCardGateway.RevertChargesFor(order);
        _auditor.AuditCancellationFor(order);
    }
}
A slightly different take on it:
//UI
public class OrderController
{
    private readonly IApplicationService _applicationService;

    [HttpPost]
    public ActionResult CancelOrder(CancelOrderViewModel viewModel)
    {
        _applicationService.CancelOrder(new CancelOrderCommand
        {
            OrderId = viewModel.OrderId,
            UserChangedTheirMind = viewModel.UserChangedTheirMind,
            UserFoundItemCheaperElsewhere = viewModel.UserFoundItemCheaperElsewhere
        });

        return RedirectToAction("CancelledSucessfully");
    }
}

//App Service
public class ApplicationService : IApplicationService
{
    private readonly IOrderRepository _orderRepository;
    private readonly IPaymentGateway _paymentGateway;

    //provided by DI
    public ApplicationService(IOrderRepository orderRepository, IPaymentGateway paymentGateway)
    {
        _orderRepository = orderRepository;
        _paymentGateway = paymentGateway;
    }

    [RequiredPermission(PermissionNames.CancelOrder)]
    public void CancelOrder(CancelOrderCommand command)
    {
        using (IUnitOfWork unitOfWork = UnitOfWorkFactory.Create())
        {
            Order order = _orderRepository.GetById(command.OrderId);

            if (!order.CanBeCancelled())
                throw new InvalidOperationException("The order cannot be cancelled");

            if (command.UserChangedTheirMind)
                order.Cancel(CancellationReason.UserChangeTheirMind);

            if (command.UserFoundItemCheaperElsewhere)
                order.Cancel(CancellationReason.UserFoundItemCheaperElsewhere);

            _orderRepository.Save(order);

            _paymentGateway.RevertCharges(order.PaymentAuthorisationCode, order.Amount);
        }
    }
}
Notes:
In general I only see the need for a domain service when a command/use case involves the state change of more than one aggregate. For example, if I needed to invoke methods on the Customer aggregate as well as Order, then I'd create the domain service OrderCancellationService that invoked the methods on both aggregates.
The application layer orchestrates between infrastructure (payment gateways) and the domain. Like domain objects, domain services should only be concerned with domain logic, and ignorant of infrastructure such as payment gateways; even if you've abstracted it using your own adapter.
With regards to permissions, I would use aspect-oriented programming to extract this away from the logic itself. As you can see in my example, I've added an attribute to the CancelOrder method. You can use an interceptor on that method to check whether the current user (which I would set on Thread.CurrentPrincipal) has that permission; a sketch follows below.
With regards to auditing, you simply said 'audit for the operation'. If you just mean auditing in general (i.e. for all app service calls), again I would use interceptors on the method, logging the user, which method was called, and with what parameters. If, however, you meant auditing specifically for the cancellation of orders/payments, then do something similar to Dmitry's example.
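To make the interception idea a bit more concrete, here is a rough sketch (in TypeScript, for illustration only; in the C# example above this role would be played by an interceptor registered in the DI container that reads the RequiredPermission attribute, and every name below is made up):

type CurrentUser = { permissions: Set<string> };

// Wraps an application-service method with a permission check, keeping the
// cross-cutting concern out of the business logic itself.
function withRequiredPermission<Args extends unknown[], R>(
  permission: string,
  user: CurrentUser,
  method: (...args: Args) => R,
): (...args: Args) => R {
  return (...args: Args): R => {
    if (!user.permissions.has(permission)) {
      throw new Error(`Missing permission: ${permission}`);
    }
    return method(...args);
  };
}

// Usage: the wrapped method behaves like the original, plus the check.
const user: CurrentUser = { permissions: new Set(['CancelOrder']) };
const cancelOrder = withRequiredPermission('CancelOrder', user, (orderId: string) => {
  console.log(`Cancelling order ${orderId}`);
});
cancelOrder('42');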
