NestJS Request and Application Lifecycle

I am looking for information about the request and application life-cycle for The NestJS framework. Specifically:
What is the order of execution of the following processes in a request, for a route that implements: middleware, pipes, guards, interceptors, and any other potential request process
What is the lifespan of modules and providers in a NestJS application? Do they last for the lifespan of a request, or the application, or something else?
Are there any lifecycle hooks, in addition to OnModuleInit and OnModuleDestroy?
What causes a Module to be destroyed (and trigger the OnModuleDestroy event)?

What is the order of execution of the following processes in a request, for a route that implements: middleware, pipes, guards, interceptors, and any other potential request process
The typical order is:
Middleware
Guards
Interceptors (before the stream is manipulated)
Pipes
Route handler
Interceptors (after the stream is manipulated)
Exception filters (if any exception is thrown)
What is the lifespan of modules and providers in a NestJS application? Do they last for the lifespan of a request, or the application, or something else?
They last for the lifespan of the application. Modules are destroyed when a NestApplication or NestMicroservice is closed (see the close method on INestApplication).
Are there any lifecycle hooks, in addition to OnModuleInit and OnModuleDestroy?
No, there aren't at the moment.
What causes a Module to be destroyed (and trigger the OnModuleDestroy event)?
See my answer to the second point. Since you seem interested in lifecycle hooks, you may want to follow issues #938 and #550.

What is the order of execution of the following processes in a request, for a route that implements: middleware, pipes, guards, interceptors, and any other potential request process
Middleware -> Guards -> Interceptors (code before next.handle()) -> Pipes -> Route Handler -> Interceptors (e.g. next.handle().pipe(tap(() => changeResponse()))) -> Exception Filters (if an exception is thrown)
With guards, interceptors, and pipes alike, you can inject other dependencies (such as services) in their constructors.
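As an illustration, the order above can be simulated in plain TypeScript by composing each stage around the handler (these functions are stand-ins for the example only, not the real Nest APIs):

```typescript
// Simulated pipeline stages (plain TypeScript; NOT the real Nest APIs).
const log: string[] = [];

const middleware = (next: () => string) => { log.push('middleware'); return next(); };
const guard = (next: () => string) => { log.push('guard'); return next(); };
const interceptor = (next: () => string) => {
  log.push('interceptor (before)'); // code before next.handle()
  const result = next();
  log.push('interceptor (after)');  // e.g. next.handle().pipe(tap(...))
  return result;
};
const pipe = (next: () => string) => { log.push('pipe'); return next(); };
const handler = () => { log.push('route handler'); return 'response'; };

// Compose in Nest's order: middleware -> guard -> interceptor -> pipe -> handler.
const response = middleware(() => guard(() => interceptor(() => pipe(handler))));

console.log(log.join(' -> '));
// middleware -> guard -> interceptor (before) -> pipe -> route handler -> interceptor (after)
```

Note how the interceptor is the only stage that runs code both before and after the handler, which is why it appears twice in the order.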
What is the lifespan of modules and providers in a NestJS application? Do they last for the lifespan of a request, or the application, or something else?
A provider can have any of the following scopes:
SINGLETON - A single instance of the provider is shared across the entire application. The instance lifetime is tied directly to the application lifecycle. Once the application has bootstrapped, all singleton providers have been instantiated. Singleton scope is used by default.
REQUEST - A new instance of the provider is created exclusively for each incoming request. The instance is garbage-collected after the request has completed processing.
TRANSIENT - Transient providers are not shared across consumers. Each consumer that injects a transient provider will receive a new, dedicated instance.
Using singleton scope is recommended for most use cases. Sharing providers across consumers and across requests means that an instance can be cached and its initialization occurs only once, during application startup.
Example
import { Injectable, Scope } from '@nestjs/common';
@Injectable({ scope: Scope.REQUEST })
export class CatsService {}
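The practical difference between the two scopes can be sketched with a toy example (plain TypeScript; the class names are made up and this is not the real Nest container):

```typescript
// Toy illustration of SINGLETON vs REQUEST scope (not the real Nest container).
class SingletonProvider {}
class RequestProvider {}

const singleton = new SingletonProvider(); // instantiated once, at "bootstrap"

function handleRequest() {
  const perRequest = new RequestProvider(); // a fresh instance per request
  return { singleton, perRequest };         // the singleton is simply reused
}

const a = handleRequest();
const b = handleRequest();
console.log(a.singleton === b.singleton);   // true  - shared across requests
console.log(a.perRequest === b.perRequest); // false - new instance each time
```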
Are there any lifecycle hooks, in addition to OnModuleInit and OnModuleDestroy?
OnApplicationBootstrap - Called once the application has fully started and is bootstrapped
OnApplicationShutdown - Responds to system signals (when the application is shut down, e.g. by SIGTERM). Use this hook to gracefully shut down a Nest application. This feature is often used with Kubernetes, Heroku, or similar services.
Both OnModuleInit and OnApplicationBootstrap hooks allow you to defer the application initialization process (return a Promise or mark the method as async).
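The hook sequence can be illustrated with hand-rolled stand-ins for the interfaces (the real ones live in @nestjs/common; this toy bootstrap is just for the example):

```typescript
// Hand-rolled stand-ins for Nest's lifecycle interfaces (illustration only).
interface OnModuleInit { onModuleInit(): void; }
interface OnApplicationBootstrap { onApplicationBootstrap(): void; }

const events: string[] = [];

class CatsService implements OnModuleInit, OnApplicationBootstrap {
  onModuleInit() { events.push('onModuleInit'); }                     // host module's deps resolved
  onApplicationBootstrap() { events.push('onApplicationBootstrap'); } // all modules initialized
}

// Toy bootstrap mirroring the hook order; a real app awaits async hooks here,
// which is what lets them defer application startup.
const service = new CatsService();
service.onModuleInit();
service.onApplicationBootstrap();
events.push('listening');

console.log(events.join(' -> ')); // onModuleInit -> onApplicationBootstrap -> listening
```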
What causes a Module to be destroyed (and trigger the OnModuleDestroy event)?
Usually a shutdown signal, e.g. from Kubernetes, Heroku, or similar services. Note that listening to system signals is opt-in: you need to call app.enableShutdownHooks() during bootstrap for the shutdown hooks to fire.

Related

@nestjs/bull -- Add one consumer as provider to more than one module

I added the service and the consumer as providers in more than one module, and I get this error: throw new Error('Cannot define the same handler twice ' + name);
I already tried creating a separate module with them and importing it into the modules that need these classes, but then the jobs are not processed by the consumer in prod, only locally.
Steps to reproduce:
create a consumer and a producer service
add the consumer and the producer service as providers to more than one module
error
Expected behavior:
Jobs must be processed by the consumer
@nestjs/bull - ^0.5.5
bull - ^4.8.2
@nestjs/core - ^8.4.5
node.js - 18.2.0
Instead of registering the consumer in two different modules (which is not allowed, as you mentioned), define a single consumer, inject the two services into it, and route each process to the correct service's method.
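A sketch of that layout in plain TypeScript (class and method names are made up, and the @Processor/@Process decorators from @nestjs/bull are elided):

```typescript
// Hypothetical services whose methods do the actual work.
class EmailService {
  sendWelcome(data: string) { return `email:${data}`; }
}
class ReportService {
  build(data: string) { return `report:${data}`; }
}

// A single consumer, registered in one shared module; it only routes jobs.
class JobsConsumer {
  constructor(
    private readonly emails = new EmailService(),
    private readonly reports = new ReportService(),
  ) {}

  // In @nestjs/bull this would be a @Process()-annotated handler.
  handle(job: { name: string; data: string }): string {
    switch (job.name) {
      case 'welcome-email': return this.emails.sendWelcome(job.data);
      case 'build-report':  return this.reports.build(job.data);
      default: throw new Error(`Unknown job: ${job.name}`);
    }
  }
}

const consumer = new JobsConsumer();
console.log(consumer.handle({ name: 'welcome-email', data: 'bob' })); // email:bob
```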

Spring Boot Rest-Controller restrict multithreading

I want my REST controller's POST endpoint to allow only one thread to execute the method; every other thread shall get 429 until the first thread is finished.
@ResponseStatus(code = HttpStatus.CREATED)
@PostMapping(value = "/myApp", consumes = "application/json", produces = "application/json")
public Execution execute(@RequestBody ParameterDTO StartDateParameter) {
    if (StartDateParameter.getStartDate() == null) {
        throw new ResponseStatusException(HttpStatus.BAD_REQUEST);
    } else {
        if (Executer.isProcessAlive()) {
            throw new ResponseStatusException(HttpStatus.TOO_MANY_REQUESTS);
        } else {
            return Executer.execute(StartDateParameter);
        }
    }
}
When I send multithreaded requests, every request gets 201. So I think the requests get in before the isProcessAlive() check reflects the first one. How can I change it to process only the first request and "block" every other one?
The lifecycle of a controller in Spring is managed by the container and, by default, it is a singleton: one instance of the bean is created at startup and multiple threads can use it. The only way to make it single-threaded is to use a synchronized block or handle the request call through an ExecutorService. But that defeats the entire purpose of using the Spring framework.
Spring provides other means to make your code thread safe. You can use the @Scope annotation to override the default scope. Since you are using a RestController, you could use the "request" scope (@Scope("request")), which creates a new instance to process each HTTP request. Doing it this way ensures that only one thread accesses your controller instance at any given time.

Quarkus Transactions on different thread

I have a Quarkus application with an async endpoint that creates an entity with default properties, starts a long-running job on a new thread within the request method, and then returns the entity as a response for the client to track.
@POST
@Transactional
public Response startJob(@NonNull JsonObject request) {
    // create my entity
    JobsRecord job = new JobsRecord();
    // set default properties
    job.setName(request.getString("name"));
    // make persistent
    jobsRepository.persist(job);
    // start the long running job on a different thread
    Executor.execute(() -> longRunning(job));
    return Response.accepted().entity(job).build();
}
Additionally, the long running job will make updates to the entity as it runs and so it must also be transactional. However, the database entity just doesn't get updated.
These are the issues I am facing:
I get the following warnings:
ARJUNA012094: Commit of action id 0:ffffc0a80065:f2db:5ef4e1c7:0 invoked while multiple threads active within it.
ARJUNA012107: CheckedAction::check - atomic action 0:ffffc0a80065:f2db:5ef4e1c7:0 commiting with 2 threads active!
Seems like something that should be avoided.
I tried using @Transactional(value = TxType.REQUIRES_NEW) to no avail.
I tried using the API approach instead of the @Transactional approach on longRunning, as mentioned in the guide:
@Inject UserTransaction transaction;
.
.
.
try {
    transaction.begin();
    jobsRecord.setStatus("Complete");
    jobsRecord.setCompletedOn(new Timestamp(System.currentTimeMillis()));
    transaction.commit();
} catch (Exception e) {
    e.printStackTrace();
    try {
        transaction.rollback();
    } catch (Exception rollbackEx) {
        rollbackEx.printStackTrace();
    }
}
but then I get the errors: ARJUNA016051: thread is already associated with a transaction! and ARJUNA016079: Transaction rollback status is:ActionStatus.COMMITTED
I tried both the declarative and API based methods again this time with context propagation enabled. But still no luck.
Finally, based on the third approach, I thought keeping the #Transactional on the Http request handler and leaving longRunning as is without declarative or API based transaction approaches would work. However the database still does not get updated.
Clearly I am misunderstanding how JTA and context propagation works (among other things).
Is there a way (or even a design pattern) that allows me to update database entities asynchronously in a quarkus web application? Also why wouldn't any of the approaches I took have any effect?
Using quarkus 1.4.1.Final with ext: [agroal, cdi, flyway, hibernate-orm, hibernate-orm-panache, hibernate-validator, kubernetes-client, mutiny, narayana-jta, rest-client, resteasy, resteasy-jackson, resteasy-mutiny, smallrye-context-propagation, smallrye-health, smallrye-openapi, swagger-ui]
You should return an async type from your JAX-RS resource method; the transaction context will then be available when the async stage executes. There is some relevant documentation in the Quarkus guide on context propagation.
I would start by looking at one of the reactive examples, such as the getting-started quickstart. Try annotating each resource endpoint with @Transactional, and the async code will run with a transaction context.

CDI multithreading

We want to optimize our application. There is some straightforward linear work going on that can be executed in multiple threads with smaller working sets.
Our typical service is accessed using the @Inject annotation from within our CDI-managed beans. Such a service can also have its own dependencies injected, i.e.:
public class MyService {
    @Inject
    private OtherService otherService;

    @Inject
    private DataService1 dataService1;
    ...
    public void doSomething() {
        ...
    }
}
Because I cannot use @Inject inside the class implementing Runnable (it's not container-managed), I tried to pass the required services to the class before starting the thread. So, using something like this makes the service instance (myService) available within the thread:
class Thread1 implements Runnable {
    private MyService myService;

    public Thread1(MyService myService) {
        this.myService = myService;
    }

    public void run() {
        myService.doSomething();
    }
}
Following the call hierarchy, the call to doSomething() is fine, because a reference to myService has been passed. As far as I understand CDI, the injection is done the moment the attribute is accessed for the first time, meaning that when the doSomething() method tries to access either otherService or dataService1, the injection would be performed.
At that point however I receive an exception, that there is no context available.
I also tried to use the JBossThreadExecuter class instead of plain threads - it leads to the very same result.
So the question would be, if there is a nice way to associate a context (or request) with a created Thread?
For EJB beans, I read that marking a method with @Asynchronous will cause the method to run in a managed thread which itself will be wired to the context. That would basically be exactly what I'm searching for.
Is there a way to do this in CDI?
Or is there any way to obtain a context from within a unmanaged thread?
Weld allows programmatic context management (there's an example in the official docs).
But before you go this way, give EJBs a chance.
The @Asynchronous invocation functionality is there exactly for your case. And as a bonus you'll get timeout interception and transaction management.
When you kick off an async process, your @RequestScoped and @SessionScoped objects are no longer in scope. That's why you get resolution errors for the injected @RequestScoped objects. Using @Stateless without a CDI scope is essentially @Dependent. You can use @ApplicationScoped objects or, if you're on CDI 1.1, you can start up @TransactionScoped.
You have to use the Java EE 7 feature, the managed executor, which will provide a context for your runnable. I'm not sure if your JBoss version is Java EE 7 compatible. At least GlassFish 4 is, and that approach works.
See details here
The easiest solution one can think of is EJB @Asynchronous.
It is powerful, does the job, and most importantly the concurrency is handled by the container (which could become an issue at some point if it's not properly managed).
Just a simple use case: say we have written a REST service and each request spawns 10 threads (e.g. using CompletableFuture or anything else) to do some long processing tasks. If 500 requests are made, how will the threads be managed? How does the app behave? Does it wait for a thread from the thread pool? What is the timeout period? And what happens when the threads are daemon threads? We can avoid these overheads to some extent by using EJBs.
It's always good to have a friend on the technical services team to help with all these container-specific implementations.

Multithreaded Singleton in a WCF - What is the appropriate Service Behavior?

I have a class (ZogCheckPublisher) that implements the multithreaded singleton pattern. This class is used within the exposed method (PrintZogChecks) of a WCF service that is hosted by a Windows Service.
public class ProcessKicker : IProcessKicker
{
    public void PrintZogChecks(ZogCheckType checkType)
    {
        ZogCheckPublisher.Instance.ProcessCheckOrCoupon(checkType);
    }
}
ZogCheckPublisher keeps track of which 'checkType' is currently in the process of being printed, and rejects requests that duplicate a currently active print request. I am trying to understand ServiceBehaviors and the appropriate behavior to use. I think that this is appropriate:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
One instance of the service that is multithreaded. If I am understanding things rightly?
Your understanding is correct.
The service behavior will implement a single multithreaded instance of the service.
[ServiceBehaviorAttribute(Name = "Test", InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
In a singleton service the configured concurrency mode alone governs the concurrent execution of pending calls. Therefore, if the service instance is configured with ConcurrencyMode.Multiple, concurrent processing of calls from the same client is allowed. Calls will be executed by the service instance as fast as they come off the channel (up to the throttle limit). Of course, as is always the case with a stateful unsynchronized service instance, you must synchronize access to the service instance or risk state corruption.
The following links provide additional Concurrency Management guidance:
Multithreaded Singleton WCF Service
http://msdn.microsoft.com/en-us/library/orm-9780596521301-02-08.aspx