I am writing a lot of util functions.
Do I need to put them inside a Service or can I leave them outside?
Thanks
Utilities are for static functionality that does not depend on state. Once you call a utility function, it does its job and nothing more.
Services, on the other hand, can hold state and are usually more complex. You could have some kind of DownloadService that is initialized with a URL, then fetches it, and finally does some kind of post-processing.
As a rule of thumb, a utility does not have dependencies and does not call other (non-static) methods. Services are more complex: they might call other services and may have internal state.
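To make the distinction concrete, here is a minimal TypeScript sketch; the formatBytes utility is an illustrative name, and the DownloadService follows the example given above:

```typescript
// Utility: stateless, no dependencies; the same input always gives the same output.
export function formatBytes(bytes: number): string {
  return bytes < 1024 ? `${bytes} B` : `${(bytes / 1024).toFixed(1)} KiB`;
}

// Service: holds state and dependencies, and may call other services.
export class DownloadService {
  private lastResult?: string;

  constructor(private readonly url: string) {}

  async fetchAndProcess(): Promise<string> {
    const response = await fetch(this.url); // depends on an external system
    const body = await response.text();
    this.lastResult = body.trim();          // keeps internal state
    return this.lastResult;
  }
}
```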
As it stands, if your utils are not calling other services, then it's fine to keep them outside.
It would be a better idea to leave them outside. Hope this answers your question.
I'm having a discussion with some colleagues of mine about Azure Functions. Let me give you a bit of context.
I have created an Azure Function app responsible for communicating with the accounting system. In this function app I have everything related to accounting, so if you want to use my functions, you know you'll find everything in this one. I think it is also easy to manage because everything is in one solution. Probably, if I have to update a model or a function, other functions or classes are affected.
For this reason, I have different triggers (HTTP, Service Bus, Timer...) in this function app. I think an Azure Function app is a container and each function in it is a "micro" service that implements SOLID principles by nature. So I'd say my implementation is correct.
My colleagues said it is not good practice to mix different types of triggers in the same Azure Function app.
What is the best practice? Is there any (official) recommendation or advice for that?
It is okay to use different types of triggers in the same Azure Function app, but you need to consider that functions within a function app share resources.
Here are the general best practices for your reference:
General best practices
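For illustration only, here is a minimal sketch of what mixed triggers in one function app could look like, using the Node.js v4 programming model of @azure/functions; the function names, schedule, and bindings are hypothetical, and your real app may well be written in C#:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext, Timer } from "@azure/functions";

// HTTP-triggered function: e.g. look up an invoice on demand.
app.http("getInvoice", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: async (req: HttpRequest, ctx: InvocationContext): Promise<HttpResponseInit> => {
    return { jsonBody: { invoiceId: req.query.get("id") } };
  },
});

// Timer-triggered function in the same app: e.g. a nightly accounting sync.
app.timer("nightlySync", {
  schedule: "0 0 2 * * *",
  handler: async (timer: Timer, ctx: InvocationContext): Promise<void> => {
    ctx.log("Running nightly accounting sync");
  },
});

// Service Bus-triggered function in the same app: e.g. process posted transactions.
app.serviceBusQueue("processPosting", {
  connection: "ServiceBusConnection",
  queueName: "postings",
  handler: async (message: unknown, ctx: InvocationContext): Promise<void> => {
    ctx.log("Posting received", message);
  },
});
```

Each function remains independently triggered and deployed together, which is exactly why the shared-resources caveat above matters: a heavy timer or queue workload can starve the HTTP functions running in the same app.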
To speed up development for my next Node API, I was looking for a suitable framework. In the past I built my APIs with Express only.
One design pattern I've always found useful is to completely separate the business logic from route handling by putting it into services. Those services only accept the required information (like a user ID or data) and return a promise resolving to the result of the operation.
This way it is easy to reuse these services in other routes, to combine them, test them, or call them based on schedules or other events - totally independent of endpoint calls. Routing and middleware take care of access control, error handling, and responding.
Looking at the documentation of those frameworks (sailsjs, keystonejs, ...), I mostly see the business logic tightly coupled to individual routes, directly accepting request objects and handling the responses. Only as an afterthought, it seems, is there sometimes a way offered to extract "often used code" into helper functions.
Am I missing something? How come this pattern seems to be the standard for API design? Is this a best practice for a reason?
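For illustration, here is a minimal TypeScript sketch of the pattern the question describes, with a hypothetical userService and a thin Express route:

```typescript
import express, { Request, Response, NextFunction } from "express";

// Hypothetical service: pure business logic, no knowledge of HTTP.
// It accepts plain values and returns a promise resolving to the result.
const userService = {
  async rename(userId: string, name: string): Promise<{ id: string; name: string }> {
    // ...persist the change somewhere...
    return { id: userId, name };
  },
};

const app = express();
app.use(express.json());

// Thin route handler: input extraction and responding only.
app.put("/users/:id", async (req: Request, res: Response, next: NextFunction) => {
  try {
    const result = await userService.rename(req.params.id, req.body.name);
    res.json(result);
  } catch (err) {
    next(err); // error-handling middleware deals with failures
  }
});

// The same service can be reused outside of HTTP, e.g. from a scheduled job:
// await userService.rename("42", "nightly-fix");
```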
It might have to do with Node.js services being smaller in size. If you're coming from an enterprise background, you're well aware that mixing business logic with controller code doesn't fly in the long run. Perhaps small projects can get away with defying that, but once the size increases, you can't avoid the laws of physics. It's best to separate concerns and keep the codebase maintainable.
I'd also add that below the services, it's good to have a separate layer that handles talking across process boundaries. That way, you can test the business logic in isolation by providing appropriate test doubles for the integrations. Here's a longer explanation of how it would work in a Node project: Organize Node.js API project using 3-layer architecture.
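As a rough sketch of that extra layer (all names here are hypothetical), the service depends on an abstraction for anything that crosses a process boundary, and tests swap in an in-memory double:

```typescript
// Integration layer: the only place that talks across process boundaries.
interface OrderRepository {
  findTotal(orderId: string): Promise<number>;
}

// Service layer: business logic only, depending on the abstraction above.
class BillingService {
  constructor(private readonly orders: OrderRepository) {}

  async invoiceAmount(orderId: string, taxRate: number): Promise<number> {
    const total = await this.orders.findTotal(orderId);
    return total * (1 + taxRate);
  }
}

// In tests, the real repository (database, HTTP client, ...) is replaced
// by an in-memory test double, so the logic is exercised in isolation.
const fakeRepo: OrderRepository = {
  async findTotal() {
    return 100;
  },
};

new BillingService(fakeRepo)
  .invoiceAmount("order-1", 0.2)
  .then((amount) => console.log(amount)); // 120
```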
I am trying to understand the purpose of injecting service providers into a NestJS controller. The documentation explains how to use them, so that's not the issue here: https://docs.nestjs.com/providers
What I am trying to understand is this: in most traditional web applications, regardless of platform, a lot of the logic that would go into a NestJS service would otherwise just go right into the controller. Why did NestJS decide to move the provider into its own class/abstraction? What design advantage does the developer gain here?
Nest draws inspiration from Angular, which in turn drew inspiration from enterprise application frameworks like .NET and Java Spring Boot. In those frameworks, the biggest concerns are Separation of Concerns (SoC) and the Single Responsibility Principle (SRP), which mean that each class deals with a specific function and, for the most part, can do so without really knowing much about other parts of the application (which leads to loosely coupled designs).
You could, if you wanted, put all of your business logic in a controller and call it a day. After all, that would be the easy thing to do, right? But what about testing? You'd need to send in a full request object for each piece of functionality you want to test. You could then make a request factory that builds these requests for you so it's easier to test, but now you're also looking at needing to test the factory to make sure it produces them correctly (so now you're testing your test code). If you break the controller and the service apart, the controller can be tested to show that it just returns whatever the service returns, and that's that. The service can then take a specific input (like from the @Body() decorator in NestJS), which is a much easier input to work with and test.
By splitting the code up, the developer gains flexibility in maintenance and testing, plus some autonomy if you are on a team: with interfaces set up, you know what kind of contract you'll get from an injected service without needing to know how the service works in the first place. However, if you still aren't convinced, you can also read up on Modular Programming, Coupling, and Inversion of Control.
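A minimal sketch of that split in NestJS (AccountsService and AccountsController are hypothetical names, not from the documentation):

```typescript
import { Controller, Get, Injectable, Param } from "@nestjs/common";

// Hypothetical service: holds the business logic and any state; knows nothing about HTTP.
@Injectable()
export class AccountsService {
  private readonly balances = new Map<string, number>([["42", 100]]);

  getBalance(id: string): { id: string; balance: number } {
    return { id, balance: this.balances.get(id) ?? 0 };
  }
}

// Thin controller: only maps the HTTP request onto a service call.
@Controller("accounts")
export class AccountsController {
  constructor(private readonly accountsService: AccountsService) {}

  @Get(":id/balance")
  getBalance(@Param("id") id: string) {
    return this.accountsService.getBalance(id);
  }
}

// In a unit test, the service is exercised directly with plain values, and the
// controller can be handed a stub service - no request object needed.
const controller = new AccountsController(new AccountsService());
console.log(controller.getBalance("42")); // { id: '42', balance: 100 }
```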
Can anyone explain why and when I should use PublishOnBackgroundThread instead of PublishOnUIThread?
I cannot find any use cases for PublishOnBackgroundThread, and I am not sure which method I should use.
It really depends on the type of the message you're publishing.
If you're using the EventAggregator to surface a message from a low-lying service back to the UI, then PublishOnUIThread makes the most sense, since you'll be updating the UI when handling the message. The same applies when you're using it to communicate between view models.
Conversely, it sometimes gets used by view models to publish events that an underlying service is listening to (rather than the view model depending on that service).
That service may perform some expensive work, which makes sense to run on a background thread. Personally, I'd have the subscribing service push that work onto a background thread itself, but different people want different options.
Ultimately the method was included for completeness.
What is the proper way of using the domain module in Node.js applications?
Should we wrap code blocks with a domain instance, the way we use try-catch blocks? If so, should we create a new domain instance each time for each separate block?
Or should we wrap the main function with the domain's run method? If so, is that really sufficient for, say, an enterprise application?
P.S. Is there any well-known open-source Node project that makes extensive use of the domain module, so I can study its code?
P.P.S. Looking at the Node documentation and tutorials, I see that almost all of them simply wrap the main function in the domain's run method; however, as far as I can see, they are mostly copying each other. I basically can't see how people use the domain module in different situations (what I see is mostly a copy of the Node documentation with a couple of minor changes).
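For reference, here is a minimal sketch of the two patterns described above (note that the domain module has long been marked deprecated in the Node documentation, so treat this as illustrative only):

```typescript
import * as domain from "node:domain";
import * as http from "node:http";

// Pattern 1: one domain per unit of work (e.g. per incoming request).
const server = http.createServer((req, res) => {
  const d = domain.create();
  d.on("error", (err) => {
    // Errors thrown in any callback started inside this request land here.
    console.error("Request failed:", err);
    res.statusCode = 500;
    res.end("Internal error");
  });
  d.run(() => {
    // All async work started here is bound to `d`.
    process.nextTick(() => {
      res.end("ok");
    });
  });
});

// Pattern 2: one domain wrapping the whole main function.
const main = domain.create();
main.on("error", (err) => {
  console.error("Unhandled error in main:", err);
});
main.run(() => {
  server.listen(3000);
});
```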