Is it possible to get all instances of a particular kind of NestJS service? Like, for example, all test services?
And then pass them as a constructor parameter of another service?
Well, after some time, I was able to implement it, thanks to the article An extensible NestJS pattern.
An example can be viewed here.
Basically, it finds the service implementations by searching the ModulesContainer and retrieving each instance from the ModuleRef.
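In spirit, the pattern can be sketched in framework-free TypeScript like this (the registry stands in for Nest's ModulesContainer; all names here are illustrative, not the real Nest API):

```typescript
// Framework-free sketch: a registry standing in for Nest's ModulesContainer.
interface TestService {
  run(): string;
}

const registry: TestService[] = [];

function registerTestService(ctor: new () => TestService): void {
  registry.push(new ctor());
}

class AlphaTest implements TestService {
  run() { return "alpha"; }
}
class BetaTest implements TestService {
  run() { return "beta"; }
}

registerTestService(AlphaTest);
registerTestService(BetaTest);

// The consumer receives every discovered instance as a constructor parameter.
class TestRunner {
  constructor(private readonly services: TestService[]) {}
  runAll(): string[] {
    return this.services.map(s => s.run());
  }
}

const runner = new TestRunner(registry);
console.log(runner.runAll()); // ["alpha", "beta"]
```

In real Nest code, the registry step is replaced by iterating the providers of each module in the ModulesContainer and filtering for the type you care about, as the linked article shows.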
I had a discussion with some colleagues of mine about Azure Functions. Let me give you a bit of context.
I have created an Azure Function app responsible for communicating with the accounting system. In this function app I have everything related to accounting, so if you want to use my functions, you know you will find everything in this one place. I think it is also easy to manage because everything is in one solution. If I have to update a model or a function, other functions or classes are probably affected.
For this reason, I have different triggers (HTTP, Service Bus, Timer...) in this function app. I think of a function app as a container in which each function is a "micro" service that implements SOLID principles by nature. So I would say my implementation is correct.
My colleagues said it is not good practice to mix different types of triggers in the same function app.
What is the best practice? Is there any (official) recommendation or advice for that?
It is okay to use different types of triggers in the same Azure Function app, but you need to consider that functions within a function app share resources.
Here are the general best practices for your reference.
General best practices
I am trying to understand the purpose of injecting service providers into a NestJS controller. The documentation explains how to use them, so that's not the issue here: https://docs.nestjs.com/providers
What I am trying to understand is this: in most traditional web applications, regardless of platform, a lot of the logic that would go into a NestJS service would otherwise just go right into a controller. Why did NestJS decide to move the provider into its own class/abstraction? What design advantages does the developer gain here?
Nest draws inspiration from Angular, which in turn drew inspiration from enterprise application frameworks like .NET and Java's Spring. In these frameworks, the biggest concerns are ideas called Separation of Concerns (SoC) and the Single Responsibility Principle (SRP), which mean that each class deals with a specific function and, for the most part, can do so without really knowing much about other parts of the application (which leads to loosely coupled designs).
You could, if you wanted, put all of your business logic in a controller and call it a day. After all, that would be the easy thing to do, right? But what about testing? You'll need to send in a full request object for each piece of functionality you want to test. You could then make a request factory that builds these requests for you so it's easier to test, but now you're also looking at needing to test the factory to make sure it is producing requests correctly (so now you're testing your test code). If you break apart the controller and the service, the controller can be tested to verify that it just returns whatever the service returns, and that's that. The service can then take a specific input (like from the @Body() decorator in NestJS) and have a much simpler input to work with and test.
By splitting the code up, the developer gains flexibility in maintenance and testing, plus some autonomy on a team: with interfaces set up, you know what kind of contract you'll get from an injected service without needing to know how the service works in the first place. If you still aren't convinced, you can also read up on Modular Programming, Coupling, and Inversion of Control.
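As a rough illustration (plain TypeScript with the framework decorators omitted; CatsService, CatsController, and CatsRepository are hypothetical names), here is how the split keeps the business logic testable without any request object:

```typescript
interface CatsRepository {
  findAll(): string[];
}

class CatsService {
  constructor(private readonly repo: CatsRepository) {}
  // Business logic lives here and is testable with a simple stub.
  findAllNames(): string[] {
    return this.repo.findAll().map(n => n.toUpperCase());
  }
}

class CatsController {
  constructor(private readonly service: CatsService) {}
  // The controller just returns whatever the service returns.
  getCats(): string[] {
    return this.service.findAllNames();
  }
}

// Testing the service needs no HTTP request object, only a stubbed repository:
const stub: CatsRepository = { findAll: () => ["tom", "felix"] };
const service = new CatsService(stub);
console.log(service.findAllNames()); // ["TOM", "FELIX"]
console.log(new CatsController(service).getCats()); // ["TOM", "FELIX"]
```

Note how the controller test reduces to "returns whatever the service returns", while the service test needs only a stub of its dependency.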
Since I cannot modify the built-in models (entities, intents...) provided by LUIS.ai, how can I import them into my own model in a way that lets me modify them further for my specific scenario(s)?
Some of the contextual information can be found here: https://github.com/Microsoft/BotBuilder/issues/1694#issuecomment-305531910
I am using Azure Bot Service with Node.js
If you are using the new prebuilt domains, once you add them to your model, you should be able to tweak them.
If you are using the Cortana prebuilt app, I don't think you will be able to update it; however, the documentation contains some information if you want to "mimic" it.
If you explain exactly what your scenarios are, we might be able to come up with other alternatives.
I can't think of a straightforward way to go about doing this, but you could take the .csv logs from LUIS and incorporate them into your model; at least the response column data is in JSON format.
I'm completely new to the Windows Azure and Windows Workflow scope of things.
Basically, what I'm trying to implement is a cloud web app that's going to be responsible for pushing tile updates and badge/toast notifications down to my Windows 8 application.
The code that sends the tile notification etc. is fine, but it needs to be executed every hour or so.
I decided the most straightforward approach was to make an MVC application with a Web API. This Web API will be responsible for receiving the ChannelURI that the modern application sends to it, which will be stored in SQL Azure.
There will then be a class that has a static method which does the logic for gathering the new data and generating a new Tile/Badge/Toast.
I've created a simple Activity workflow that has a Sequence with a DoWhile(true) activity. The body of this DoWhile contains a Sequence with an InvokeMethod and a Delay; the InvokeMethod calls my class containing the static method, and the Delay is set to one hour.
So that seems to be all okay. I then start this Activity via the Application_Start in Global.asax with the following line:
this.ActivityInvoker = new WorkflowInvoker(new NotificationActivity());
this.ActivityInvoker.InvokeAsync();
So I just tested it, and it seems to be running my custom static method at the set interval.
That's all good, but now I have three questions in relation to this way of handling it:
Is this the correct/best approach to do this? If not, what are some other ways I should look into.
If a new instance is spun up on Azure, how do I ensure that the workflows running on the two instances won't step on each other's toes? i.e., how do I make sure that the InvokeMethod won't run twice; I only want it to run once an hour regardless of how many instances there are.
How do I ensure that if an instance crashes or goes down, its state is maintained?
Any help, guidance, etc is much appreciated.
A couple of good questions that I would love to answer; however, doing them justice on a forum like this is difficult. But let's give it a crack, to start with at least.
1) There is nothing wrong with your approach for implementing a scheduled task. I can think of a few other ways of doing it, like running a simple Worker Role with a do { Thread.Sleep(...); ... } loop: simple, but effective. There are more complex/elegant ways too, including using external libraries and frameworks for scheduling tasks in Azure.
2) You would need to implement some sort of singleton-type pattern in your workflow/job-processing engine. You could, for instance, acquire a lease on a 1 KB blob when your job starts and not allow another instance to start while the lease is held.
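A minimal sketch of that singleton guard, with an in-memory Lease class standing in for a real Azure blob lease (the names are illustrative; in production you would use the storage SDK's lease operations, which also expire automatically if the holder crashes):

```typescript
// In-memory stand-in for a blob lease: only one holder at a time.
class Lease {
  private held = false;
  tryAcquire(): boolean {
    if (this.held) return false;
    this.held = true;
    return true;
  }
  release(): void {
    this.held = false;
  }
}

const lease = new Lease();

// Two instances wake up at the top of the hour; only one runs the job.
const first = lease.tryAcquire();  // true: this instance runs the job
const second = lease.tryAcquire(); // false: the other instance backs off

console.log(first, second); // true false

// Release once the hour's work is done (or let the lease expire in Azure).
lease.release();
```

The automatic expiry of a real blob lease also answers the crash question: if the holder goes down, the lease lapses and another instance can take over on the next tick.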
For more detailed answers I suggest we take this offline and have a Skype call to discuss your requirements in detail. You know how to get hold of me via email :) Look forward to it.
I want to intercept a method in Service Builder, for example XXXLocalService.update(), but I don't know the correct way to do this. I have done some research but haven't found a clear way to do it.
Any help will be greatly appreciated.
There are basically two ways to achieve this in Liferay, assuming you want to intercept Liferay's services:
Service Wrapper Hooks
What this does is give you a wrapper around the desired service. For example, UserLocalServiceWrapper is a wrapper around UserLocalService and gives you complete control over the methods defined in that interface. This is a good approach if you know the exact method you want to modify/intercept in that particular service.
Also, with this approach you have full control over whether the original method runs or not.
The link provides a full, detailed tutorial on how to achieve this.
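Conceptually, the wrapper idea looks like this TypeScript sketch (Liferay's actual wrappers are Java classes extending e.g. UserLocalServiceWrapper and registered via a hook; the names below are illustrative only):

```typescript
interface UserLocalService {
  update(name: string): string;
}

class UserLocalServiceImpl implements UserLocalService {
  update(name: string): string {
    return `updated:${name}`;
  }
}

// The wrapper delegates to the original service and can run logic before
// and after the call, or decide to skip the original method entirely.
class AuditingUserServiceWrapper implements UserLocalService {
  public calls: string[] = [];
  constructor(private readonly wrapped: UserLocalService) {}

  update(name: string): string {
    this.calls.push(`before:${name}`);        // custom logic before
    const result = this.wrapped.update(name); // original call (optional)
    this.calls.push(`after:${result}`);       // custom logic after
    return result;
  }
}

const audited = new AuditingUserServiceWrapper(new UserLocalServiceImpl());
console.log(audited.update("alice")); // "updated:alice"
console.log(audited.calls);           // ["before:alice", "after:updated:alice"]
```

Because the wrapper holds the reference to the wrapped service, it alone decides whether the original method ever executes.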
Model Listener Hooks
This hook should be used when you want to track any changes to a particular model (User, in the above case); it is helpful when you are not sure which method is going to update the model.
What this basically does is give you a set of methods such as onBeforeUpdate, onAfterUpdate, onAfterCreate, etc., to gain control over the model's lifecycle.
This approach also works well enough for your custom services.
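The model-listener idea can be sketched like this in TypeScript (Liferay's actual listeners are Java classes registered against a model; the names below are illustrative):

```typescript
interface User { name: string; }

interface ModelListener<T> {
  onBeforeUpdate?(model: T): void;
  onAfterUpdate?(model: T): void;
  onAfterCreate?(model: T): void;
}

class UserPersistence {
  private listeners: ModelListener<User>[] = [];

  register(listener: ModelListener<User>): void {
    this.listeners.push(listener);
  }

  // No matter which service method triggers the update, listeners fire.
  update(user: User): void {
    this.listeners.forEach(l => l.onBeforeUpdate?.(user));
    // ... persist the change ...
    this.listeners.forEach(l => l.onAfterUpdate?.(user));
  }
}

const events: string[] = [];
const persistence = new UserPersistence();
persistence.register({
  onBeforeUpdate: u => events.push(`before:${u.name}`),
  onAfterUpdate: u => events.push(`after:${u.name}`),
});
persistence.update({ name: "bob" });
console.log(events); // ["before:bob", "after:bob"]
```

This is why the listener suits the "I don't know which method updates the model" case: it hooks the persistence event itself rather than any one service method.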