I am trying to understand the purpose of injecting service providers into a NestJS controller. The documentation explains how to use them, so that's not the issue here: https://docs.nestjs.com/providers
What I am trying to understand is this: in most traditional web applications, regardless of platform, a lot of the logic that goes into a NestJS service would normally go right into the controller. Why did NestJS decide to move that logic into its own class/abstraction, the provider? What design advantages does the developer gain here?
Nest draws inspiration from Angular, which in turn drew inspiration from enterprise application frameworks like .NET and Java's Spring Boot. In these frameworks, the biggest concerns are Separation of Concerns (SoC) and the Single Responsibility Principle (SRP): each class deals with a specific function and, for the most part, can do so without knowing much about other parts of the application (which leads to loosely coupled designs).
You could, if you wanted, put all of your business logic in a controller and call it a day. After all, that would be the easy thing to do, right? But what about testing? You'd need to send in a full request object for each piece of functionality you want to test. You could then make a request factory that builds these requests for you so it's easier to test, but now you're also looking at needing to test the factory to make sure it produces requests correctly (so now you're testing your test code). If you split the controller and the service apart, the controller can be tested to verify it returns whatever the service returns, and that's that. The service can then take a specific input (like from the @Body() decorator in NestJS), which is a much easier input to work with and test.
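To make the testing point concrete, here is a minimal, hypothetical sketch (the CatsService name and DTO shape are invented for illustration), showing how a thin controller leaves the service trivially testable:

```typescript
// cats.controller.ts — the controller only forwards the parsed body
import { Body, Controller, Post } from '@nestjs/common';
import { CatsService } from './cats.service';

@Controller('cats')
export class CatsController {
  constructor(private readonly catsService: CatsService) {}

  @Post()
  create(@Body() dto: { name: string }) {
    return this.catsService.create(dto); // no business logic in here
  }
}

// cats.service.ts — the business logic takes plain data, not a request object
import { Injectable } from '@nestjs/common';

@Injectable()
export class CatsService {
  private readonly cats: { name: string }[] = [];

  create(dto: { name: string }) {
    this.cats.push(dto);
    return dto;
  }
}

// cats.service.spec.ts — no HTTP machinery or request factory needed
it('creates a cat from plain input', () => {
  const service = new CatsService();
  expect(service.create({ name: 'Tom' })).toEqual({ name: 'Tom' });
});
```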
By splitting the code up, the developer gains flexibility in maintenance and testing, and some autonomy if you are on a team: with interfaces set up, you know what kind of contract you'll be getting from an injected service without needing to know how the service works in the first place. If you still aren't convinced, you can also read up on modular programming, coupling, and Inversion of Control.
So we have decided to build our web app with NestJS, and we have an ongoing argument about whether we should use a Microservice or a Standalone app to implement our queue-interactions module.
A bit of background: we use Amazon SQS as our queue provider, and we use the bbc/sqs-consumer package for handling the connection.
Now one approach is to use a microservice, in a similar fashion to what is done here: https://github.com/algoan/nestjs-components/tree/master/packages/google-pubsub-microservice
I believe the implications are pretty clear, and it seems as if the NestJS documentation really pushes you towards microservices here, if only because all the built-in implementations are for queue/pub-sub services (RabbitMQ, Kafka, Redis...).
On the other hand, you can choose to use a standalone app, which I feel is basically a microservice but without controllers.
Since we opted to use a 3rd-party package to handle the actual transport and all the technical details, this feels more appropriate in a way. We don't actually need to send messages from the message handler to some controller and then process them there if we can process them directly in the handler, no controllers involved.
Personally, it seems to me that if we don't want to go into the details of the transport implementation (i.e. use the sqs-consumer package for it), then the microservice approach, while it works perfectly, is overkill. A standalone app feels like it would give us the benefit of separating the "main" and the "queues" processes while keeping the implementation as simple as possible.
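For what it's worth, here is roughly what the standalone variant could look like (a sketch only; the queue URL and the MessageProcessor provider are made up):

```typescript
// main.ts — a standalone Nest app: the DI container boots, but no HTTP server, no controllers
import { NestFactory } from '@nestjs/core';
import { Consumer } from 'sqs-consumer';
import { AppModule } from './app.module';
import { MessageProcessor } from './message-processor.service'; // hypothetical provider

async function bootstrap() {
  // createApplicationContext wires up the module graph without any network listener
  const app = await NestFactory.createApplicationContext(AppModule);
  const processor = app.get(MessageProcessor);

  const consumer = Consumer.create({
    queueUrl: 'https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue', // made up
    handleMessage: async (message) => {
      // process directly in the handler — no controller round-trip
      await processor.handle(message.Body ?? '');
    },
  });

  consumer.start();
}
bootstrap();
```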
Conversely, using a Microservice feels more natural to others. The way to think about it is that it doesn't matter whether we implement the transport ourselves or use some package; the semantic meaning is the same in that we have messages coming into our app from outside, so a custom-transport Microservice really is the most appropriate solution.
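For comparison, the microservice route would mean writing (or adopting) a custom transport strategy, roughly along these lines (a sketch under the assumption that all messages map to a single pattern; real code would route on message contents):

```typescript
// sqs.server.ts — a custom transport strategy wrapping sqs-consumer,
// so messages are dispatched to @MessagePattern handlers like any other transport
import { Server, CustomTransportStrategy } from '@nestjs/microservices';
import { Consumer } from 'sqs-consumer';

export class SqsServer extends Server implements CustomTransportStrategy {
  private consumer?: Consumer;

  listen(callback: () => void) {
    this.consumer = Consumer.create({
      queueUrl: process.env.QUEUE_URL ?? '', // hypothetical configuration
      handleMessage: async (message) => {
        const handler = this.getHandlerByPattern('sqs-message'); // single pattern, for simplicity
        if (handler) await handler(message.Body);
      },
    });
    this.consumer.start();
    callback();
  }

  close() {
    this.consumer?.stop();
  }
}

// bootstrapped with:
// const app = await NestFactory.createMicroservice(AppModule, { strategy: new SqsServer() });
```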
What do you guys think about it?
Would you use the Microservice or the standalone approach?
And in general, when would you choose Microservice over a Standalone app and vice-versa?
Let's assume the following situation:
We have several webservices based on Nest.js technology
The services perform CRUD operations in the area of their domain
The services do not have business logic (they can add, change, delete, and return data; they know the relationships between entities, but also between domains, e.g. through Apollo Federation)
Everything works fine so far.
However, we face the problem of business processes, validation, business rules, and everything that goes with them. So we have to code this logic somehow or use some engine (e.g. Camunda).
As far as I understand, Camunda can send requests from Service A to Service B in the BPMN process, e.g. via HTTP.
But what if several activities are performed in the same service?
Isn't it better to make those calls within the same service, at the service-layer level? Is that possible in Camunda?
E.g.
WebService1 has a POST Customer/ endpoint which calls CustomerService.AddCustomer(data) and CustomerRoles.AddRole(data). Can we call CustomerRoles.AddRole in Camunda?
My question is mainly about node.js / nestjs.
Forgive me, but I don't think I can describe it more clearly :(
In general, you can use Camunda not only at the highest orchestration layer, for the end-to-end business process, but also inside a microservice. Benefits include state management, error handling, retries, exception handling, and possible compensation. (What happens if AddCustomer succeeds, but AddRole fails?)
There are orchestration vs. choreography considerations, and latency requirements may also be relevant. I recommend these two reads, which illustrate the benefits/trade-offs and design decisions well:
https://blog.bernd-ruecker.com/the-microservice-workflow-automation-cheat-sheet-fc0a80dc25aa
and
https://blog.bernd-ruecker.com/3-common-pitfalls-in-microservice-integration-and-how-to-avoid-them-3f27a442cd07
Why don't you implement a little proof of concept and see what it could look like? If NestJS is your world, you may like to start with a Camunda 8 SaaS trial and https://github.com/camunda-community-hub/nestjs-zeebe#readme
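As a taste of what that could look like, here is a rough sketch of a Zeebe job worker using the zeebe-node client that nestjs-zeebe wraps (the task type and the customerRoles service are invented for illustration):

```typescript
import { ZBClient } from 'zeebe-node';

// Stand-in for the real CustomerRoles service; in a Nest app this would come from DI
const customerRoles = {
  async addRole(vars: Record<string, unknown>) { /* ...persist the role... */ },
};

const zbc = new ZBClient(); // connection settings come from ZEEBE_* env vars by default

zbc.createWorker({
  taskType: 'add-customer-role', // hypothetical; must match the service task in the BPMN model
  taskHandler: async (job) => {
    // Camunda delivers the process variables; the worker calls the in-process
    // service method directly, so no HTTP hop between activities is needed
    await customerRoles.addRole(job.variables);
    return job.complete();
  },
});
```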
To speed up development for my next Node API, I was looking for a suitable framework. In the past I built my APIs with Express only.
One design pattern I have always found useful is to completely separate the business logic from route handling by putting it in services. Those services only accept the required information (like a user ID or data) and return a promise resolving to the result of the operation.
This way it is easy to reuse these services in other routes, to combine them, test them, or call them based on schedules or other events, totally independent of endpoint calls. Routing and middleware take care of access control, error handling, and responding.
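A minimal, hypothetical sketch of the pattern (names invented), with Express handling the routing and a plain service holding the logic:

```typescript
// users.service.ts — business logic accepts plain data and returns a promise
export async function renameUser(userId: string, name: string): Promise<{ id: string; name: string }> {
  // ...persist the change somewhere; stubbed for the sketch
  return { id: userId, name };
}

// users.routes.ts — the route only parses the request and shapes the response
import express from 'express';

const app = express();
app.use(express.json());

app.put('/users/:id', async (req, res, next) => {
  try {
    res.json(await renameUser(req.params.id, req.body.name));
  } catch (err) {
    next(err); // error-handling middleware takes over
  }
});

// The same service is reusable outside HTTP, e.g. from a scheduled job:
// await renameUser('42', 'nightly-fixup');
```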
Looking at the documentation of those frameworks (Sails.js, KeystoneJS, ...), I mostly see the business logic tightly coupled to individual routes, directly accepting request objects and handling the responses. Only as an afterthought does there sometimes seem to be a way to extract "often used code" into helper functions.
Am I missing something? How come this pattern seems to be the standard of API design? Is this a best practice for a reason?
It might have to do with Node.js services being smaller in size. If you're coming from an enterprise background, you're well aware that mixing business logic with controller code doesn't fly in the long run. Perhaps small projects can get away with defying that, but once the size increases, you can't avoid the laws of physics. It's best to separate concerns and keep the codebase maintainable.
I'd also add that below the services, it's good to have a separate layer that handles talking to anything outside the process boundary. That way, you can test business logic in isolation by providing appropriate test doubles for the integrations. Here's a longer explanation of how it would work in a Node project: Organize Node.js API project using 3-layer architecture.
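A small sketch of that layering (all names hypothetical): the service depends on an interface, and the test swaps in a double instead of a real database.

```typescript
// The layer below the service hides the process boundary behind an interface
interface UserStore {
  findName(id: string): Promise<string | null>;
}

class GreetingService {
  constructor(private readonly store: UserStore) {}

  async greet(id: string): Promise<string> {
    const name = await this.store.findName(id);
    return name ? `Hello, ${name}` : 'Hello, stranger';
  }
}

// In a unit test, a test double replaces the real integration
const fakeStore: UserStore = { findName: async () => 'Ada' };
new GreetingService(fakeStore).greet('1').then(console.log); // "Hello, Ada"
```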
Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate a schizophrenic environment because the 'powers that be' can never stick to their decisions regarding technologies and tools. For example, I am using Linq-to-SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So, I have to make the code as persistence-ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all built-in tooling, etc. so I must support the Linq-to-SQL designer.
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template, call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session then save the session to the database. On the other hand, if the session already existed, then I need to also load the "form" with the data entered by the user and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and delegates all persistence to the UoW, but I find myself playing a shell game with the various pieces of logic.
Should all of the logic described above be in the SessionManager class or perhaps the UoW implementation? I want as little logic as possible in the repository implementations because changing the data access platform could result in unwanted changes to the application logic. Since my repository is working against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to still use POCOs even though I have to support the designer and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
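To make the shell game concrete, here is roughly the shape I have in mind, sketched in TypeScript for brevity rather than my actual C# (all names hypothetical): the branching logic lives in the SessionManager, and the repositories only fetch and persist.

```typescript
// All names hypothetical; the point is where the logic lives
interface Session { userId: string; templateId: string; form: unknown; }

interface ISessionRepository {
  find(userId: string, templateId: string): Promise<Session | null>;
  save(session: Session): Promise<void>;
}
interface ITemplateRepository {
  get(templateId: string): Promise<{ blankForm: unknown }>;
}
interface IUnitOfWork {
  sessions: ISessionRepository;
  templates: ITemplateRepository;
  commit(): Promise<void>;
}

class SessionManager {
  constructor(private readonly uow: IUnitOfWork) {}

  async startSession(userId: string, templateId: string): Promise<Session> {
    let session = await this.uow.sessions.find(userId, templateId);
    if (!session) {
      // New session: build the form from the template definition
      const template = await this.uow.templates.get(templateId);
      session = { userId, templateId, form: template.blankForm };
    }
    // An existing session already carries the user's entered data in `form`
    await this.uow.sessions.save(session);
    await this.uow.commit();
    return session;
  }
}
```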
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. Ideally, someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've defined.
If you don't use POCOs then you're not really going to be data-store agnostic. And using POCOs will allow you to get your system up and running with memory-based repositories, which is what you'll likely want to use for your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, LinqToSql, or NHibernate isn't that time-consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, you might find that AutoMapper won't really fit the bill.
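To illustrate the memory-based repository point, a tiny hypothetical sketch (reusing the ISessionRepository shape from the question's sketch above): a unit test can exercise the SessionManager against this without touching any OR/M.

```typescript
// A memory-based repository: enough for unit tests, no database or OR/M involved
class InMemorySessionRepository implements ISessionRepository {
  private sessions: Session[] = [];

  async find(userId: string, templateId: string): Promise<Session | null> {
    return this.sessions.find(s => s.userId === userId && s.templateId === templateId) ?? null;
  }

  async save(session: Session): Promise<void> {
    this.sessions = this.sessions.filter(
      s => s.userId !== session.userId || s.templateId !== session.templateId,
    );
    this.sessions.push(session);
  }
}
```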
I have a mixed UI (Win App, WPF App, and soon an ASP.NET MVC App) setup, so far I'm using Client Application Services for security. I know how to programmatically get a user authenticated and doing so is working beautifully. However...
I want to implement a cross-cutting concern that basically checks all the time whether the user is authenticated. Since everything will be accessing web services, I want to enable this as a standard step for pretty much everything the UI does. So far I'm thinking the PIAB - Policy Injection Application Block - will serve that function. What I'm wondering is two things:
1. Will the PIAB cover the needed functionality, verifying authentication at every practical step when used from the UI?
...and...
2. Are there alternatives out there besides the PIAB? I'm curious to do a comparison of aspect-oriented policy injection frameworks.
I'm not really familiar with Client Application Services, but in my experience most AOP frameworks wrap interfaces in order to implement the cross-cutting functionality. If CAS uses interfaces, you could probably just wrap them with whatever functionality you require.
Alternative AOP frameworks:
Spring.NET
Castle Dynamic Proxy
Spring.NET and Dynamic Proxy seem to work in much the same way and have much the same performance in my Hello World-type tests (about halfway between direct calls and invoking through reflection). PIAB is significantly slower than both these frameworks, and I found it a bit more verbose. It does have the ability to be configured via XML, and I'm not sure if that's a good thing or not; I'm also not sure if the other frameworks provide that. It does of course have the MS stamp of approval though :P.
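To illustrate the interface-wrapping idea (sketched in TypeScript since that's the least ceremony here; the .NET frameworks above generate comparable proxies for interfaces): every call on the wrapped service runs an authentication check first.

```typescript
// A hypothetical cross-cutting wrapper: a Proxy intercepts every method call
function withAuthCheck<T extends object>(service: T, isAuthenticated: () => boolean): T {
  return new Proxy(service, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value !== 'function') return value;
      return (...args: unknown[]) => {
        if (!isAuthenticated()) throw new Error('Not authenticated');
        return value.apply(target, args); // forward to the real method
      };
    },
  });
}

// Usage: the UI talks to the proxy instead of the raw service
const documents = withAuthCheck({ list: () => ['a.doc'] }, () => true);
console.log(documents.list()); // auth check runs, then the real call
```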