I need to call a web service from my MVC5 project to populate a model, but I'm not sure whether the call to the web service should be made from the controller or from the model. Reading answers on Stack Overflow regarding this issue seems to point in both directions. So where is the right place to put the call?
A bit shocked nobody has answered this after 20 hours. Umm, let's see here :) At minimum, the controller would be responsible for this. Don't dirty up your model with responsibilities it shouldn't have.
I would create a service layer to handle this; the service layer would hold the reference instead of the UI / web project, and the controller would then call _myservice.ExecuteSomeWebserviceMethod(). This really is just a light wrapper around the web service call, but it gives you more freedom to do things before returning whatever value(s) back to the controller.
You can inject the service into the controller's constructor, which also makes it testable.
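The shape I have in mind looks roughly like this. It's only a sketch, shown in TypeScript rather than C# since the pattern itself is framework-agnostic, and every name in it (WeatherClient, WeatherService, WeatherController) is made up for illustration:

```typescript
// Hypothetical sketch of the layering described above: the controller depends on a
// small service abstraction, and only the service knows about the web service client.
interface WeatherClient {
  getForecast(city: string): Promise<{ temperature: number }>;
}

// The "light wrapper" layer: a good place to shape, validate, or cache the response
// before it ever reaches the controller.
class WeatherService {
  constructor(private readonly client: WeatherClient) {}

  async getForecastModel(city: string) {
    const forecast = await this.client.getForecast(city);
    return { city, temperature: forecast.temperature }; // map to your view model here
  }
}

// The controller stays thin: it takes the injected service and delegates to it.
class WeatherController {
  constructor(private readonly weatherService: WeatherService) {}

  async index(city: string) {
    return this.weatherService.getForecastModel(city);
  }
}

// In tests, WeatherClient can be replaced with a stub, so neither the controller
// nor the service needs a live web service to be exercised.
```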
Let's assume the following situation:
We have several webservices based on Nest.js technology
The services perform CRUD operations in the area of their domain
The services do not have business logic (they can add, change, delete, and return data; they know the relationships between entities, and also between domains, e.g. through Apollo Federation)
Everything works fine so far.
However, we face the problem of business processes, validation, business rules and everything that goes with it. So we have to code this logic somehow or use some engine (e.g. Camunda).
As far as I understand, Camunda can send requests from Service A to Service B in a BPMN process, e.g. via HTTP.
But what if several activities are performed in the same service?
Isn't it better to make those calls within the same service, at the service layer? Is that possible in Camunda?
E.g.
WebService1 has a POST Customer/ endpoint which calls CustomerService.AddCustomer(data) and CustomerRoles.AddRole(data). Can we call CustomerRoles.AddRole in Camunda?
My question is mainly about Node.js / NestJS.
Forgive me, but I don't think I can describe it more clearly :(
In general you can use Camunda not only at the highest orchestration layer, for the end-to-end business process, but also inside the microservice. Benefits include state management, error handling, retries, exception handling, and possible compensation. (What happens if AddCustomer succeeds, but AddRole fails?)
There are orchestration vs. choreography considerations. Latency requirements may also be relevant. I recommend these two reads, which illustrate the benefits/trade-offs and design decisions well:
https://blog.bernd-ruecker.com/the-microservice-workflow-automation-cheat-sheet-fc0a80dc25aa
and
https://blog.bernd-ruecker.com/3-common-pitfalls-in-microservice-integration-and-how-to-avoid-them-3f27a442cd07
Why don't you implement a little proof of concept and see what it could look like? If NestJS is your world, you may like to start with a Camunda 8 SaaS trial and https://github.com/camunda-community-hub/nestjs-zeebe#readme
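To make the "Camunda inside the microservice" idea a bit more concrete, here is a rough sketch of what a job worker living inside WebService1 could look like, using the zeebe-node client that the integration above builds on. The task type and the CustomerRolesService/addRole names are placeholders based on your example, not working code from either project:

```typescript
// Rough sketch only: assumes a Camunda 8 / Zeebe setup reachable via environment variables.
import { Injectable, OnModuleInit } from '@nestjs/common';
import { ZBClient } from 'zeebe-node';

@Injectable()
export class CustomerRolesService {
  async addRole(customerId: string, role: string): Promise<void> {
    // plain CRUD: persist the role for the customer, no business rules here
  }
}

@Injectable()
export class CustomerWorkers implements OnModuleInit {
  private readonly zbc = new ZBClient(); // connection details come from environment variables

  constructor(private readonly customerRoles: CustomerRolesService) {}

  onModuleInit() {
    // The BPMN process decides *when* the role is added; this worker only executes
    // that one step, calling the local service method instead of another HTTP hop.
    this.zbc.createWorker({
      taskType: 'add-customer-role',
      taskHandler: async (job) => {
        const { customerId, role } = job.variables as { customerId: string; role: string };
        await this.customerRoles.addRole(customerId, role);
        return job.complete();
      },
    });
  }
}
```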
I come from an Express.js background and am pretty new to the LoopBack framework, especially LoopBack 4, which I am using for my current project. I have gone through the LoopBack 4 documentation a few times and made good progress in setting up the project. Although the project runs as expected, I am not convinced by the project structure. Please help me solve the problem below.
As per the docs, database operations should be in repositories and routes should be in controllers. Now suppose my API consists of lots of business logic along with database operations, say thousands of lines. That makes the controller routes difficult to maintain, and it gets even harder if some API demands a version upgrade.
Is there any way to organise the code in controllers in a more scalable and reusable manner? What if I add one more service layer between controllers and repositories and put the business logic there? How would I implement that correctly? Is there an official way to do this that is recommended by the LoopBack community?
Thanks in advance!!
Is there any way to organise the code in controllers in a more scalable and reusable manner?
Yes, services can be used to abstract complex logic into their own separate class(es). Once defined, the service can be injected into the dependent controller(s), which can then call the respective service functions.
How the service is designed depends on the user's requirements, as LoopBack 4 does not necessarily enforce a strict design requirement.
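As a minimal sketch of that arrangement in a LoopBack 4 project (the OrderService/OrderController names are invented purely for illustration):

```typescript
// The business logic lives in a service class; the controller only wires the route to it.
import {injectable, service} from '@loopback/core';
import {post, requestBody} from '@loopback/rest';

@injectable()
export class OrderService {
  async checkout(order: object): Promise<object> {
    // validation, business rules, calls to repositories, etc. go here
    return {status: 'accepted', order};
  }
}

export class OrderController {
  // LoopBack resolves the service through dependency injection
  constructor(@service(OrderService) private orderService: OrderService) {}

  @post('/orders')
  async create(@requestBody() order: object): Promise<object> {
    // the controller stays thin: parse the request, delegate, return the result
    return this.orderService.checkout(order);
  }
}
```

The same service can then be reused from other controllers, or from a newer API version, without duplicating the logic.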
I am trying to understand the purpose of injecting service providers into a NestJS controller. The documentation explains how to use them, so that's not the issue here: https://docs.nestjs.com/providers
What I am trying to understand is this: in most traditional web applications, regardless of platform, a lot of the logic that would go into a NestJS service would otherwise just go right into the controller. Why did NestJS decide to move the provider into its own class/abstraction? What are the design advantages gained here for the developer?
Nest draws inspiration from Angular, which in turn drew inspiration from enterprise application frameworks like .NET and Java Spring Boot. In these frameworks, the biggest concerns are ideas called Separation of Concerns (SoC) and the Single Responsibility Principle (SRP), which mean that each class deals with a specific function, and for the most part it can do so without really knowing much about other parts of the application (which leads to loosely coupled design patterns).
You could, if you wanted, put all of your business logic in a controller and call it a day. After all, that would be the easy thing to do, right? But what about testing? You'll need to send in a full request object for each piece of functionality you want to test. You could then make a request factory that builds these requests for you so it's easier to test, but now you're also looking at needing to test the factory to make sure it is producing correctly (so now you're testing your test code). If you break apart the controller and the service, the controller can be tested to verify that it just returns whatever the service returns, and that's that. Then the service can take a specific input (like from the @Body() decorator in NestJS) and have a much easier input to work with and test.
By splitting the code up, the developer gains flexibility in maintenance and testing, plus some autonomy if you are on a team: with interfaces set up, you know what kind of contract you'll be getting from an injected service without needing to know how the service works in the first place. However, if you still aren't convinced, you can also read up on modular programming, coupling, and Inversion of Control.
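As a small, hypothetical illustration of that split (the names are not from the docs): the controller below only maps the HTTP request onto a method call, while the service can be unit-tested with a plain object instead of a full request:

```typescript
import { Body, Controller, Injectable, Post } from '@nestjs/common';

interface CreateUserDto {
  email: string;
  name: string;
}

@Injectable()
export class UsersService {
  // Testable with a plain object: new UsersService().create({ email, name })
  create(dto: CreateUserDto) {
    // business rules, persistence calls, etc. live here
    return { id: 1, ...dto };
  }
}

@Controller('users')
export class UsersController {
  constructor(private readonly usersService: UsersService) {}

  @Post()
  create(@Body() dto: CreateUserDto) {
    // The controller test only needs to check that it returns what the service returns.
    return this.usersService.create(dto);
  }
}
```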
Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate a schizophrenic environment because the 'powers that be' cannot ever stick to their decisions regarding technologies and tools. For example, I am using Linq-to-SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So, I have to make the code as persistence ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all built-in tooling, etc. so I must support the Linq-to-SQL designer.
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template, call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session then save the session to the database. On the other hand, if the session already existed, then I need to also load the "form" with the data entered by the user and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and will delegate all persistence to the UoW, but I find myself playing a shell game with the various logic.
Should all of the logic described above be in the SessionManager class or perhaps the UoW implementation? I want as little logic as possible in the repository implementations because changing the data access platform could result in unwanted changes to the application logic. Since my repository is working against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to still use POCOs even though I have to support the designer and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. What would be ideal is if someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've defined.
If you don't use POCOs then you're not really going to be data-store agnostic. And using POCOs will allow you to get your system up and running with memory-based repositories, which is what you'll likely want to use for your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, Linq-to-SQL, or NHibernate isn't that time consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, then you might find that AutoMapper won't really fit the bill.
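To illustrate the memory-based repository point, the general shape looks something like this. It is sketched in TypeScript only because the idea is language-agnostic; ISessionRepository comes from your question, everything else is invented:

```typescript
// The repository is just an interface over POCO-style objects, so tests can swap in a
// memory-backed implementation with no Linq-to-SQL/EF/NHibernate dependency at all.
interface Session {
  id: string;
  userId: string;
  templateId: string;
  lastActivity: Date;
}

interface ISessionRepository {
  findByUserAndTemplate(userId: string, templateId: string): Promise<Session | undefined>;
  save(session: Session): Promise<void>;
}

// The test double: a plain in-memory store keyed by user + template.
class InMemorySessionRepository implements ISessionRepository {
  private sessions = new Map<string, Session>();

  async findByUserAndTemplate(userId: string, templateId: string) {
    return this.sessions.get(`${userId}:${templateId}`);
  }

  async save(session: Session) {
    this.sessions.set(`${session.userId}:${session.templateId}`, session);
  }
}

// The SessionManager-style business class can be unit-tested against
// InMemorySessionRepository, while production wires in the ORM-backed implementation.
```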
I want to intercept a method in Service Builder, for example XXXLocalService.update(), but I don't know the correct way to do this. I have done some research but haven't found a clear way to do it.
Any help will be greatly appreciated.
There are basically two ways to achieve this in Liferay, assuming you want to intercept Liferay's services:
Service Wrapper Hooks
What this does is give you a wrapper around the desired service; e.g. UserLocalServiceWrapper would be a wrapper around UserLocalService and would have complete control over the methods defined in that interface. This is a good approach if you know the exact method you want to modify/intercept in that particular service.
Also, with this approach you have full control over whether the original method should run or not.
The link provides a full, detailed tutorial on how to achieve this.
Model Listener Hooks
This hook should be used when you want to track any changes on a particular model (in the above case, User), and it is helpful when you are not sure which method is going to update the model.
What this basically does is give you a set of methods like onBeforeUpdate, onAfterUpdate, onAfterCreate, etc., to have control over the model.
This approach also works well enough for your custom services.