I have a TypeScript-backed Express.js project that uses a singleton Redis client.
The singleton includes wrapper functions for the Redis commands my application needs (e.g., SADD).
Singleton
Here is a snippet of my singleton Redis client service, which is relevant to my question:
/**
 * Set up a singleton class instance to interface
 * with the Redis database, along with helper async
 * functions that provide functionality.
 */
var redis = require("redis");

// https://github.com/redis/node-redis/issues/1673
type RedisClientType = ReturnType<typeof redis.createClient>;
type RedisClientOptionsType = Parameters<typeof redis.createClient>[0];

export class Redis {
  private static instance: Redis;
  private static client: RedisClientType;

  constructor() {
    if (Redis.instance)
      return Redis.instance;
    Redis.instance = this;
    Redis.client = null;
  }

  /* ... */

  async initializeClient(options: RedisClientOptionsType) {
    Redis.client = redis.createClient(options);
    Redis.client.on('connect', function() {
      Redis.instance.log('Client connected');
    });
    Redis.client.on('error', function(err: Error) {
      Redis.instance.log(`Could not communicate with Redis client [${err}]`);
    });
    await Redis.client.connect();
  }

  async shutdownClient() {
    await Redis.client.quit();
  }

  async multi() {
    await Redis.client.multi();
  }

  async exec() {
    await Redis.client.exec();
  }

  /* ... */

  async sAdd(k: string, v: string) {
    return await Redis.client.sAdd(k, v);
  }
}
Individual calls to sAdd, sMembers, etc. work fine. So the client itself is initialized correctly, and it is able to process basic Redis calls.
What I would like to do is perform some chained transactions. For example, from a rudimentary POST request that uses the singleton Redis client service, I want to process some data from an uploaded file and then add some key-value pairs (to start):
import { Redis } from '@/service/redis';

/* ... */

export const myPost = async (req: Request, res: Response) => {
  const redis = new Redis(); // initialized client, as defined above
  const k = 'my-key';
  const v = 'my-value';
  await redis
    .multi()
    .sAdd(k, v)
    .exec();
}
Problem
The problem is that I get two errors with the chained await ... call.
First error
The first error is related to the await keyword, just before the multi/sAdd/exec call:
'await' expressions are only allowed within async functions and at the top levels of modules.ts(1308)
This await is within the async-ed post function. (I assume that I need this to handle the results of the underlying Promise chain.)
Second error
The second error is related to the sAdd wrapper:
Property 'sAdd' does not exist on type 'Promise<void>'.ts(2339)
I tried to add a return type to the call to multi:
async multi(): Promise<RedisClientType> {
  await Redis.client.multi();
}
But this did not resolve the error with sAdd.
Question
What changes do I make to the service/singleton to allow calls to the wrapper functions to be chained?
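For reference, one possible fix, sketched under the assumption that this is node-redis v4 (where client.multi() synchronously returns a chainable command builder rather than a Promise): return the builder from the wrapper instead of awaiting it.

// Sketch only - assumes node-redis v4's MULTI API.
multi() {
  // client.multi() is synchronous and returns a builder whose
  // command methods (sAdd, sMembers, ...) return the builder itself
  return Redis.client.multi();
}

With that change, the handler's chain compiles as written: sAdd is called on the builder rather than on Promise<void>, and await applies to the Promise returned by exec():

const results = await redis
  .multi()
  .sAdd(k, v)
  .exec(); // exec() queues the commands and resolves with their replies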
Currently, I am working on a NestJS project with the bull queue. In my controller, I have a get function that receives a request from the front end. Based on the request, I send a gRPC call to retrieve data from another microservice. I would like the gRPC call to go through the bull queue, so in the get function I put the gRPC call into a producer, and it is executed in the consumer. However, after the gRPC call runs in the consumer, I cannot find a way to return the retrieved data to the original get function so that I can send the data back to the front end.
Any help would be appreciated.
You won't be able to; the main purpose of using queues is that they don't block incoming requests.
What you can do is return the bull job id, so the front end can track the response with it, or use an event-driven approach or a websocket so you can tell the front end to refresh the response.
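To illustrate that job-id flow, a rough sketch (the controller routes and payload shapes here are hypothetical; getJob, getState, and returnvalue are bull's own Queue/Job API):

import { Controller, Get, Param, Post } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';

@Controller('jobs')
export class JobsController {
  constructor(@InjectQueue('myProcessor') private readonly myQueue: Queue) {}

  // Enqueue and return immediately with the job id
  @Post()
  async enqueue() {
    const job = await this.myQueue.add('myProcess', { myInput: 'foo' });
    return { jobId: String(job.id) };
  }

  // The front end polls this until the job completes
  @Get(':id')
  async status(@Param('id') id: string) {
    const job = await this.myQueue.getJob(id);
    if (!job) return { state: 'not_found' };
    const state = await job.getState(); // e.g. 'waiting', 'active', 'completed', 'failed'
    return { state, result: state === 'completed' ? job.returnvalue : null };
  }
}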
You can, actually!
Here is an example:
import { Process, Processor } from '@nestjs/bull';
import { Job } from 'bull';

@Processor('myProcessor')
export class MyProcessor {
  @Process('myProcess')
  async handleMyProcess(job: Job<{ myInput: string }>) {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    return 'hello world !';
  }
}
Then in your service:

const compressJob = await this.myQueue.add('myProcess', {
  myInput: 'foo',
});
const test = await compressJob.finished();
console.log(compressJob, test);
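One detail this snippet assumes is that this.myQueue has been registered and injected; a minimal sketch of that wiring (the module and service names are made up, while BullModule.registerQueue and @InjectQueue are @nestjs/bull's API):

import { Module, Injectable } from '@nestjs/common';
import { BullModule, InjectQueue } from '@nestjs/bull';
import { Queue } from 'bull';
import { MyProcessor } from './my.processor'; // the processor class above

@Injectable()
export class MyService {
  // 'myProcessor' must match the queue name used by @Processor above
  constructor(@InjectQueue('myProcessor') private readonly myQueue: Queue) {}
}

@Module({
  imports: [
    // BullModule.forRoot({ redis: { host: 'localhost', port: 6379 } }) at the app level
    BullModule.registerQueue({ name: 'myProcessor' }),
  ],
  providers: [MyService, MyProcessor],
})
export class MyModule {}

Keep in mind that awaiting compressJob.finished() holds the HTTP request open for the job's duration, which trades away the non-blocking benefit the previous answer describes.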
Before coming here I read the official RxJS documentation and some other pages, but I am still not clear on it. What I understood is this:
It is used to "join" 2 observables and thus obtain a single observable as a result. I also saw that it is used to "flatten" an observable (which I am also not very clear on).
Now, I have spent days trying to program a user registry using Angular and Node.js with Express. I found a little tutorial which I decided to use, and it has this code:
import { Injectable, Injector } from '@angular/core';
import { HttpClient, HttpInterceptor, HttpRequest, HttpHandler, HttpEvent, HttpErrorResponse } from '@angular/common/http';
import { Observable, throwError } from 'rxjs';
import { catchError, retry, mergeMap } from 'rxjs/operators'
import { AuthenticationService } from './authentication.service';

@Injectable({
  providedIn: 'root'
})
export class AppInterceptor implements HttpInterceptor {
  constructor(private injector: Injector) { }

  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    let accessToken = "", refreshToken = ""
    const tokens = JSON.parse(sessionStorage.getItem("tokens"))
    if (tokens) {
      accessToken = tokens.accessToken
      refreshToken = tokens.refreshToken
    }
    let clonHttp: HttpRequest<any>
    clonHttp = tokens ? req.clone({ headers: req.headers.append("Authorization", `Bearer ${accessToken}`) }) : req
    let auth = this.injector.get(AuthenticationService);
    return next.handle(clonHttp)
      .pipe(
        catchError((error: HttpErrorResponse) => {
          if (error.error instanceof ErrorEvent) {
            console.log("error event")
          } else if (error.status == 401) {
            return auth.getNewAccessToken(refreshToken)
              .pipe(
                retry(3),
                mergeMap(
                  (response: any) => {
                    tokens.accessToken = response.accessToken
                    sessionStorage.setItem("tokens", JSON.stringify(tokens))
                    clonHttp = req.clone({ headers: req.headers.append("Authorization", `Bearer ${response.accessToken}`) })
                    return next.handle(clonHttp)
                  }
                )
              )
          } else if (error.status == 409) {
            return throwError("User not logged")
          } else {
            if (error.error && error.error.message) {
              return throwError(error.error.message)
            } else {
              return throwError("Check your connection")
            }
          }
        })
      )
  }
}
If you look, when they use the mergeMap operator they only pass it the response (a single observable), or at least that's what I can see. What I'm trying to say is that I don't see them using it with 2 observables, or to mix 2 observables, which is what I have read in the official documentation; in fact, in the examples they show, they always use it with 2 observables.
Honestly, it has been too difficult for me to understand this operator. If someone could help me understand it in a simple way, I would be extremely grateful, as well as to understand its use in the code I showed earlier. Thanks in advance!
mergeMap, like many other so-called higher order mapping operators, maintains one or multiple inner observables.
An inner observable is created with the outer value and the provided function. The outer value essentially is just the value received from its source. For example:
of(1, 2, 3).pipe(
  mergeMap((outerValue, index) => /* ... return an observable ... */)
).subscribe(); // `outerValue`: 1, 2, 3 (separately)
When an outer value comes in, a new inner observable will be created. I think the best way to understand this is to have a look at the source code:
// `value` - the `outerValue`
protected _next(value: T): void {
  if (this.active < this.concurrent) {
    this._tryNext(value);
  } else {
    this.buffer.push(value);
  }
}

protected _tryNext(value: T) {
  let result: ObservableInput<R>;
  const index = this.index++;
  try {
    // Create the inner observable based on the `outerValue` and the provided function (`this.project`)
    // `mergeMap(project)`
    result = this.project(value, index);
  } catch (err) {
    this.destination.error(err);
    return;
  }
  this.active++;
  // Subscribe to the inner observable
  this._innerSub(result, value, index);
}
Please disregard concurrent and buffer for now; we'll have a look at them a bit later.
Now, what happens when an inner observable emits? Before going any further, it's worth mentioning that, although it's obvious, an inner observable requires an inner subscriber. We can see this in the _innerSub method referenced above:
private _innerSub(ish: ObservableInput<R>, value: T, index: number): void {
  const innerSubscriber = new InnerSubscriber(this, value, index);
  const destination = this.destination as Subscription;
  destination.add(innerSubscriber);
  // This is where the subscription takes place
  subscribeToResult<T, R>(this, ish, undefined, undefined, innerSubscriber);
}
When an inner observable emits, the notifyNext method will be called:
notifyNext(outerValue: T, innerValue: R,
           outerIndex: number, innerIndex: number,
           innerSub: InnerSubscriber<T, R>): void {
  this.destination.next(innerValue);
}
Where destination points to the next subscriber in the chain. For example, it can be this:
of(1)
  .pipe(
    mergeMap(/* ... */)
  )
  .subscribe({} /* <- this is the `destination` for `mergeMap` */)
This will be explained in more detail in What about the next subscriber in the chain below.
So, what does it mean to mix 2 observables?
Let's see this example:
of(2, 3, 1)
  .pipe(
    mergeMap(outerValue => timer(outerValue * 100).pipe(mapTo(outerValue)))
  )
  .subscribe(console.log)
/* 1 \n 2 \n 3 */
When 2 arrives, mergeMap will subscribe to an inner observable that will emit in 200ms. This is an asynchronous action, but notice that the outer values (2, 3, 1) arrive synchronously. Next, 3 arrives and creates an inner observable that will emit in 300ms. Since the current script has not finished executing yet, the callback queue is not yet considered. Now 1 arrives and creates an inner observable that will emit in 100ms.
mergeMap has now 3 inner observables and will pass along the inner value of whichever inner observable emits.
As expected, we get 1, 2, 3.
So that's what mergeMap does. Mixing observables can be thought of this way: if an outer value comes and an inner observable has already been created, then mergeMap simply says: "no problem, I'll just create a new inner obs. and subscribe to it".
What about concurrent and buffer
mergeMap can be given a second argument, concurrent, which indicates how many inner observables it should handle at the same time. The number of active inner observables is tracked with the active property.
As seen in the _next method, if active >= concurrent, the outer values will be added to a buffer, which is a queue (FIFO).
Then, when one active inner observable completes, mergeMap will take the oldest value from the buffer and create a new inner observable out of it, using the provided function:
// Called when an inner observable completes
notifyComplete(innerSub: Subscription): void {
  const buffer = this.buffer;
  this.remove(innerSub);
  this.active--;
  if (buffer.length > 0) {
    this._next(buffer.shift()!); // Create a new inner obs. with the oldest buffered value
  } else if (this.active === 0 && this.hasCompleted) {
    this.destination.complete();
  }
}
With this in mind, concatMap(project) is just mergeMap(project, 1).
So, if you have:
of(2, 3, 1)
  .pipe(
    mergeMap(outerValue => timer(outerValue * 100).pipe(mapTo(outerValue)), 1)
  )
  .subscribe(console.log)
this will be logged:
2 \n 3 \n 1.
What about the next subscriber in the chain
Operators are functions that return another function which accepts an observable as its only parameter and returns another observable.
When a stream is being subscribed to, each observable returned by an operator will have its own subscriber.
All these subscribers can be seen as a linked list. For example:
// S{n} -> Subscriber `n`, where `n` depends on the order in which the subscribers are created
of(/* ... */)
  .pipe(
    operatorA(), // S{4}
    operatorB(), // S{3}
    operatorC(), // S{2}
  ).subscribe({ /* ... */ }) // S{1}; the observer is converted into a `Subscriber`
S{n} is the parent (destination) of S{n+1}, meaning that S{1} is the destination of S{2}, S{2} is the destination of S{3}, and so forth.
Unexpected results
Compare these:
of(2, 1, 0)
  .pipe(
    mergeMap(v => timer(v * 100).pipe(mapTo(v)))
  ).subscribe(console.log)
// 0 1 2

of(2, 1, 0)
  .pipe(
    mergeMap(v => timer(v).pipe(mapTo(v)))
  ).subscribe(console.log)
// 1 0 2
As per MDN:
The specified amount of time (or the delay) is not the guaranteed time to execution, but rather the minimum time to execution. The callbacks you pass to these functions cannot run until the stack on the main thread is empty.
As a consequence, code like setTimeout(fn, 0) will execute as soon as the stack is empty, not immediately. If you execute code like setTimeout(fn, 0) but then immediately after run a loop that counts from 1 to 10 billion, your callback will be executed after a few seconds.
This section by MDN should clarify things as well.
I'd say this is environment-specific, rather than RxJS-specific.
In the second snippet, the delays are consecutive milliseconds (2, 1, 0), so close together that the order in which the timers fire is not guaranteed; that's why you're getting unexpected results. If you increase the delays just a bit, like timer(v * 2), you should get the expected behavior.
So mergeMap is mainly used to resolve multiple inner observables concurrently; the outer observable resolves once all inner observables have resolved. I hope this helps.
Imagine you have to read a list of ids from some async source, be it a remote service, a DB, or a file on your file system.
Imagine that you have to launch an async query for each id to get the details.
Imagine you have to collect all the details for each id and do something else.
You end up having an initial Observable emitting a list, and then a bunch of Observables generated from that list. This is where you would use mergeMap.
The code would look like this
mySourceObs = getIdListFromSomewhere();
myStream = mySourceObs.pipe(
  // after you get the list of the ids from your service, you generate a new stream
  // which emits all the values of the list via the from operator
  concatMap(listOfIds => from(listOfIds)),
  // for each id you get the details
  mergeMap(id => getDetails(id)),
)
If you subscribe to myStream you get a stream of details data, one for every id of the original list. The code would simply be
myStream.subscribe(
  detail => {
    // do what you have to do with the details of an id
  }
)
MORE ON THE CODE REFERENCED IN THE QUESTION
My understanding of the piece of code using mergeMap is the following:
you fetch a new token with auth.getNewAccessToken
If something goes wrong you retry 3 times
When you receive a fresh token, you do some stuff and then replay the cloned request with next.handle(clonHttp)
Key point is that both auth.getNewAccessToken and next.handle(clonHttp) are async calls returning an Observable.
In this case you want to make sure that FIRST you get the response from auth.getNewAccessToken and ONLY THEN you call next.handle(clonHttp).
In this case the best way to code such logic is using concatMap which ensures that the second Observable is concatenated to the successful completion of the first one.
mergeMap and switchMap can also work in this scenario, since auth.getNewAccessToken emits only ONCE and then completes, but the right semantic is given by concatMap (which, by the way, is the same as mergeMap with concurrency set to 1, but that is another story).
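As a sketch, that swap applied to the question's interceptor would look like this (same logic and names as the question's code, with concatMap imported from 'rxjs/operators' in place of mergeMap):

return auth.getNewAccessToken(refreshToken)
  .pipe(
    retry(3),
    // concatMap guarantees next.handle runs only after a fresh token has arrived
    concatMap((response: any) => {
      tokens.accessToken = response.accessToken
      sessionStorage.setItem("tokens", JSON.stringify(tokens))
      clonHttp = req.clone({ headers: req.headers.append("Authorization", `Bearer ${response.accessToken}`) })
      return next.handle(clonHttp)
    })
  )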
I am looking for an option to use Nest as a back-end gateway service.
The idea is to poll for DB changes (and maybe later to move to an event-driven approach); de facto, no HTTP listener would be required here.
On a change, Nest would update a 3rd party API via calls to it.
What would be best practice here?
Take a look here, I'm doing something similar to what you're after: https://github.com/nerdybeast/sith-api/blob/feature/redis-cache/src/modules/api/sobjects/trace-flag/TraceFlagPoller.ts
I created a class that "polls" a backend and emits an event when it detects a change in that backend. You could have other code that listens for this event which makes the call to your 3rd party api.
UPDATE:
As you stated, Nest does have a basic application context which skips the HTTP service setup; here's how you can do that:
index.ts
import { NestFactory } from '@nestjs/core';
import { ApplicationModule } from './ApplicationModule';
import { DatabaseService } from './DatabaseService';

(async () => {
  const app = await NestFactory.createApplicationContext(ApplicationModule);
  const databaseService = app.get<DatabaseService>(DatabaseService);
  await databaseService.poll();
})();
DatabaseService.ts
import { Injectable } from '@nestjs/common';
import axios from 'axios';

@Injectable()
export class DatabaseService {
  private expectedResult: any;

  public async poll(): Promise<void> {
    // getData() stands in for however you read the watched state from your DB
    const result = await getData();
    if (result !== this.expectedResult) {
      this.expectedResult = result;
      await axios.post('https://some-url.com', result);
    }
    // Poll every 5 seconds or whatever
    setTimeout(() => this.poll(), 5000);
  }
}
This could be the solution if you had to poll the database instead of being able to subscribe to it. With this approach, when you start the app, it will poll forever, constantly updating your 3rd party api.
I would start the index.ts file with pm2 or forever so that you can have a graceful restart if your process crashes for some reason.
I would personally use TypeORM subscribers, as I have done many times for similar requirements. However, I use an eventEmitter so as not to block the saving action. This is a snippet of what I usually do:
import { Injectable } from '@nestjs/common';
import { InjectConnection } from '@nestjs/typeorm';
import { Connection, EntitySubscriberInterface, InsertEvent } from 'typeorm';
// InjectEventManager, AppEvents, and AbstractEntity are this app's own helpers

@Injectable()
export class EntityUpdate implements EntitySubscriberInterface {
  constructor(
    @InjectConnection() readonly connection: Connection,
    @InjectEventManager() private emitter: AppEvents<AbstractEntity>,
  ) {
    // register this subscriber with the active connection
    connection.subscribers.push(this);
  }

  afterInsert(event: InsertEvent<AbstractEntity>): void {
    this.emitter('entity', {
      method: 'update',
      entity: event.entity,
    });
  }
}
Then I could listen to the event anywhere within my application and handle that status change of the entity.
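The AppEvents emitter above is custom, but as an illustration, here is a minimal sketch of such a listener using @nestjs/event-emitter's @OnEvent (an assumption; swap in whatever your emitter provides):

import { Injectable } from '@nestjs/common';
import { OnEvent } from '@nestjs/event-emitter';

@Injectable()
export class EntityUpdateListener {
  // Runs whenever the subscriber above emits an 'entity' event
  @OnEvent('entity')
  async handleEntityUpdate(payload: { method: string; entity: unknown }) {
    // e.g. forward the change to the 3rd party API here
  }
}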
When I create an observable and I am done with it, I unsubscribe from it directly:
const data$ = this.httpClient.get('https://jsonplaceholder.typicode.com/todos/1').subscribe(res => {
  console.log('live', res);
  data$.unsubscribe(); // <---- works fine
});
But say I create an Observable using of and try to do the same:
const obs$ = of(1).subscribe(e => {
  console.log('test', e)
  obs$.unsubscribe(); // <--- Problem while creating Observable by of
});
What's different between these two observables?
Your code should import Subscription and unsubscribe in ngOnDestroy:
import { Observable, Subscription, of } from "rxjs";

private subscription$: Subscription;

this.subscription$ = of(1).subscribe(e => {
  console.log('test', e)
});

ngOnDestroy() {
  this.subscription$.unsubscribe();
}
Update: What I understand is that an HTTP request is an observable that will potentially deliver a value in the future, while of simply creates an observable from a list of values.
And from @cartant's comment:
of completes synchronously, so you are attempting to call
obs$.unsubscribe before the assignment to obs$ has been made.
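In other words (a minimal sketch; the point is only that the callback runs before the assignment finishes):

// of(1) emits and completes synchronously, so the subscribe callback runs
// while `const obs$ = ...` is still being assigned; referencing obs$ there
// throws. Reference the subscription only after subscribe() has returned:
const sub = of(1).subscribe(e => console.log('test', e));
sub.unsubscribe(); // safe here: subscribe() has already returned (and of() has completed)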
If you have only one observable in your component, that is the easiest scenario: unsubscribe in the following way:
ngOnDestroy() {
  this.subscription$.unsubscribe();
}
The more complex scenario is having multiple observables. In that case, you will need to push each subscription into a subscriptions: Subscription[] array while subscribing, and on destroy unsubscribe from every subscription in the array:

ngOnDestroy() {
  this.subscriptions.forEach(sub => sub.unsubscribe());
}
Or you can use an npm package for this.
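An equivalent approach without an array or extra packages (a sketch using RxJS's own Subscription.add, which aggregates child subscriptions into one parent; obsA$ and obsB$ are hypothetical component streams):

import { Subscription } from 'rxjs';

private subscriptions = new Subscription();

ngOnInit() {
  // add() attaches each child subscription to the parent
  this.subscriptions.add(obsA$.subscribe(/* ... */));
  this.subscriptions.add(obsB$.subscribe(/* ... */));
}

ngOnDestroy() {
  // one call tears down every child added above
  this.subscriptions.unsubscribe();
}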
For ensuring unsubscription in a component, I would recommend creating a Subject which emits in ngOnDestroy() like this:
destroy$ = new Subject<boolean>();

...

ngOnDestroy() {
  this.destroy$.next(true);
}
And adding a takeUntil on each Observable in the component:
myObs$.pipe(
  takeUntil(this.destroy$),
  tap(stuff => doStuff())
).subscribe();
This way you avoid polluting things with loads of unnecessary Subscription variables.