How to send a message to all queue subscribers? - nestjs

I have two microservices. The first is a front server, which requests some work from one of the instances of the second microservice. A great feature of the NATS transport is queueing: I can send a message and NATS will deliver it to only one instance. But what if I need to send a message or event to every instance in the queue group?
Front service:
// module
providers: [
  ...,
  {
    provide: 'CLIENT_PROXY_FACTORY',
    useFactory: (configService: ConfigService) => {
      return ClientProxyFactory.create(configService.get('microservice'));
    },
    inject: [ConfigService],
  },
],
// service
constructor(
  @Inject('CLIENT_PROXY_FACTORY')
  private nats: ClientProxy,
) {}

// in some method
method() {
  this.nats.emit('test', 0);
  this.nats.send('test1', true).toPromise();
}
Worker service:
// bootstrap
transport: Transport.NATS,
options: {
  url: 'nats://127.0.0.1:4222',
  queue: 'worker',
},

// controller
@MessagePattern('test')
messagefunc() {
  console.log('Got message!');
}

@EventPattern('test1')
eventfunc() {
  console.log('Got event!');
}
So, in my example I need to run two or more instances of the Worker service, and I need a way to sometimes deliver events or messages to all Worker service instances.

Make the worker services subscribe to two subjects:
Subject A in a queue group. Messages published to A will be sent only to one worker.
Subject B without queue group. Messages published to B will be sent to all workers.
Sorry, can't tell how to implement this in NestJS.
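One way to get the "subject B" behaviour in NestJS is to keep the queued @MessagePattern/@EventPattern handlers as they are and add a provider that subscribes with the raw nats package without a queue group. This is only a sketch under those assumptions: the subject name 'test.broadcast' and the use of the nats v2 client API are mine, not part of the question or the answer above.

import { Injectable, OnModuleInit } from '@nestjs/common';
import { connect, StringCodec } from 'nats';

@Injectable()
export class BroadcastSubscriber implements OnModuleInit {
  async onModuleInit() {
    // A plain subscription without a queue group: every worker instance
    // running this provider receives each message on the subject.
    const nc = await connect({ servers: '127.0.0.1:4222' });
    const sc = StringCodec();
    const sub = nc.subscribe('test.broadcast');
    (async () => {
      for await (const msg of sub) {
        console.log('broadcast received:', sc.decode(msg.data));
      }
    })();
  }
}

The front service can publish broadcast events to 'test.broadcast' with the raw client as well, so the payload format stays under your control, while queued work keeps going through the existing 'test'/'test1' subjects.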

Related

nodejs multiple instances (AWS load balancer) duplicated WS listeners

We have a Node.js app hosted on AWS Elastic Beanstalk behind a load balancer, so we have multiple backend instances.
On each instance we create a WS listener (to listen to smart contract events).
e.g.
ws.on("Event", () => { // handle event })
The problem is that we receive the same event multiple times.
We'd like to handle each event at most once.
We tried setting a "handling" flag in Redis: in the handler, if it's true => do nothing; if false => set "handling" to true and handle the event.
But we faced another issue: sometimes both instances read false and start handling the same event.
ws.on("Event", async () => {
const isHandling = await redis.get("handling");
// isHandling may be null on multiple instances
if(isHandling !== "true") {
await redis.set("handling", "true");
// handle event
}
})
Thank you in advance!
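The race comes from the GET and the SET being two separate round trips, so two instances can both read null. A common fix is to let Redis do the check-and-set atomically with SET ... NX; here is a sketch, assuming the ioredis package and that each event carries an id (both assumptions, not from the question):

import Redis from 'ioredis';

// Same kind of WS event emitter as in the snippet above.
declare const ws: { on(event: string, cb: (e: { id: string }) => void): void };

const redis = new Redis();

ws.on('Event', async (event: { id: string }) => {
  // SET ... NX succeeds for exactly one caller; the others get null back.
  // The 60 s TTL is illustrative and keeps stale keys from piling up.
  const acquired = await redis.set(`handled:${event.id}`, '1', 'EX', 60, 'NX');
  if (acquired === 'OK') {
    // handle event
  }
});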

nestjs Gateways emit an event to all connected sockets

How to issue an event to all connected sockets?
export class EventsGateway {
  @SubscribeMessage('message')
  async onEvent(client, data) {
    // The following is the use of `socket.io` to issue events to all connected sockets.
    // io.emit('message', data);
  }
}
How do I do this in NestJS?
NestJS allows you to create message listeners using decorators. Within this method, you are able to respond to the client by returning a WsResponse object.
However, NestJS also allows you to get the WebSocket instance using the WebSocketServer decorator.
To send an Event to all connected clients you will need to use the WebSocketServer decorator and use the native WebSocket instance to emit a message, like so:
import { SubscribeMessage, WebSocketGateway, WebSocketServer, WsResponse } from '@nestjs/websockets';
import { Observable } from 'rxjs';
import { Server } from 'socket.io';

@WebSocketGateway()
export class EventsGateway {
  @WebSocketServer() server: Server;

  @SubscribeMessage('message')
  onEvent(client: any, payload: any): Observable<WsResponse<any>> | any {
    this.server.emit('message', payload);
  }
}
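To double-check the broadcast from the client side, here is a small sketch; it assumes the socket.io-client package and a gateway listening on the default port 3000 (both assumptions, not stated in the question):

import { io } from 'socket.io-client';

const socket = io('http://localhost:3000');

// Every connected client receives this, because the gateway emits on the
// server instance instead of replying to a single socket.
socket.on('message', (payload) => {
  console.log('broadcast payload:', payload);
});

socket.emit('message', { hello: 'world' });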

What's a valid @MessagePattern for a NestJS MQTT microservice?

I'm trying to set up an MQTT microservice using NestJS according to the docs.
I've started a working Mosquitto broker using Docker and verified its operability using various MQTT clients. Now, when I start the NestJS service it seems to be connecting correctly (MQTT.fx shows a new client), yet I am unable to receive any messages in my controllers.
This is my bootstrapping, just like in the docs:
main.ts
async function bootstrap() {
  const app = await NestFactory.createMicroservice(AppModule, {
    transport: Transport.MQTT,
    options: {
      host: 'localhost',
      port: 1883,
      protocol: 'tcp'
    }
  });
  app.listen(() => console.log('Microservice is listening'));
}
bootstrap();
app.controller.ts
@Controller()
export class AppController {
  @MessagePattern('mytopic') // tried {cmd:'mytopic'} or {topic:'mytopic'}
  root(msg: Buffer) {
    console.log('received: ', msg);
  }
}
Am I using the message-pattern decorator wrongly, or is my idea of what a NestJS MQTT microservice is even supposed to do wrong? I thought it might subscribe to the topic I pass to the decorator. My only other source of information is the corresponding unit tests.
nest.js Pattern Handler
On nest.js side we have the following pattern handler:
@MessagePattern('sum')
sum(data: number[]): number {
  return data.reduce((a, b) => a + b, 0);
}
As @Alexandre explained, this will actually listen to sum_ack.
Non-nest.js Client
A non-nest.js client could look like this (just save as client.js, run npm install mqtt and run the program with node client.js):
var mqtt = require('mqtt');
var client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', function () {
  client.subscribe('sum_res', function (err) {
    if (!err) {
      client.publish('sum_ack', '{"data": [2, 3]}');
    }
  });
});

client.on('message', function (topic, message) {
  console.log(message.toString());
  client.end();
});
It sends a message on the topic sum_ack and listens for messages on sum_res. When it receives a message on sum_res, it logs the message and ends the program. nest.js expects the message format to be {data: myData} and then calls the handler sum(myData).
// Log:
{"err":null,"response":5} // This is the response from sum()
{"isDisposed":true} // Internal "complete event" (according to unit test)
Of course, this is not very convenient...
nest.js Client
That is because this is meant to be used with another nest.js client rather than a normal mqtt client. The nest.js client abstracts all the internal logic away. See this answer, which describes the client for redis (only two lines need to be changed for mqtt).
async onModuleInit() {
  await this.client.connect();
  // no 'sum_ack' or {data: [0, 2, 3]} needed
  this.client.send('sum', [0, 2, 3]).toPromise();
}
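The linked answer shows the redis variant; for MQTT, the client registration could look like the sketch below (the 'MATH_SERVICE' token and the module layout are illustrative, not taken from that answer):

import { Module } from '@nestjs/common';
import { ClientsModule, Transport } from '@nestjs/microservices';

@Module({
  imports: [
    // Registers a ClientProxy for MQTT under the 'MATH_SERVICE' token.
    ClientsModule.register([
      {
        name: 'MATH_SERVICE',
        transport: Transport.MQTT,
        options: { url: 'mqtt://localhost:1883' },
      },
    ]),
  ],
})
export class AppModule {}

The client is then injected with @Inject('MATH_SERVICE') private client: ClientProxy and used exactly as in the onModuleInit snippet above.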
The documentation is not very clear, but it seems that for MQTT, if you have @MessagePattern('mytopic'), you can publish a command on the topic mytopic_ack and you will get the response on mytopic_res. I am still trying to find out how to publish to the MQTT broker from a service.
See https://github.com/nestjs/nest/blob/e019afa472c432ffe9e7330dc786539221652412/packages/microservices/server/server-mqtt.ts#L99
public getAckQueueName(pattern: string): string {
  return `${pattern}_ack`;
}

public getResQueueName(pattern: string): string {
  return `${pattern}_res`;
}
@Tanas is right. NestJS microservices now listen on your $[topic] and answer on $[topic]/reply; the _ack and _res suffixes are deprecated.
For example:
@MessagePattern('helloWorld')
getHello(): string {
  console.log("hello world");
  return this.appService.getHello();
}
It now listens on topic: helloWorld
and replies on topic: helloWorld/reply
Regarding the id
You should also provide an id within the payload (see @Hakier) and NestJS will reply with an answer containing your id.
If you don't provide an id, there won't be any reply, but the corresponding handler logic will still run.
For example (using the snippet from above):
your message:
{"data":"foo","id":"bar"}
NestJS reply:
{"response":"Hello World!","isDisposed":true,"id":"bar"}
Without an id:
your message:
{"data":"foo"} or {}
No reply, but "hello world" is logged in the terminal.
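To watch this from a plain client, @KimKern's script from further up can be adapted to the newer topic scheme; here is a sketch assuming the same mqtt package and a broker on localhost:

import * as mqtt from 'mqtt';

const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
  // Subscribe to the reply topic first, then publish the command with an id.
  client.subscribe('helloWorld/reply', (err) => {
    if (!err) {
      client.publish('helloWorld', JSON.stringify({ data: 'foo', id: 'bar' }));
    }
  });
});

client.on('message', (topic, message) => {
  // Expected: {"response":"Hello World!","isDisposed":true,"id":"bar"}
  console.log(message.toString());
  client.end();
});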
I was fighting with MQTT today and this helped me a little, but I had more problems; you can see my findings below:
Wrong way of configuring the broker URL
In my case, when I used a non-local MQTT server, I started with this:
const app = await NestFactory.createMicroservice(AppModule, {
  transport: Transport.MQTT,
  options: {
    host: 'test.mosquitto.org',
    port: 1883,
    protocol: 'tcp',
  },
});
await app.listenAsync();
but as you can read in the constructor of ServerMqtt, it uses the url option only (when not provided, it falls back to 'mqtt://localhost:1883'). Since I don't have a local MQTT broker, app.listenAsync() never resolves (it resolves only on connect) and no handler ever runs.
It started to work when I adjusted the code to use the url option.
const app = await NestFactory.createMicroservice(AppModule, {
  transport: Transport.MQTT,
  options: {
    url: 'mqtt://test.mosquitto.org:1883',
  },
});
await app.listenAsync();
Messages require an id property
The second, very weird problem was that when I used the non-nest.js client script from @KimKern, I had to register two MessagePatterns: sum and sum_ack:
@MessagePattern('sum')
sum(data: number[]): number {
  return data.reduce((a, b) => a + b, 0);
}

@MessagePattern('sum_ack')
sumAck(data: number[]): number {
  return data.reduce((a, b) => a + b, 0);
}
When I used console.log I discovered that the latter is run, but only when the first one is present. You can push the same message to the broker using the mqtt CLI tool to check it:
mqtt pub -t 'sum_ack' -h 'test.mosquitto.org' -m '{"data":[1,2]}'
But the biggest problem was that it didn't reply (publish to sum_res).
The solution was to also provide an id when sending a message.
mqtt pub -t 'sum_ack' -h 'test.mosquitto.org' -m '{"data":[1,2], "id":"any-id"}'
Then we could remove the 'sum_ack' MessagePattern and leave only this code:
@MessagePattern('sum')
sum(data: number[]): number {
  return data.reduce((a, b) => a + b, 0);
}
The reason for this was hidden inside the handleMessage method of ServerMqtt, which will not publish the response from a handler if the message didn't have an id.
TL;DR
Specify the message broker URL using the url option only, and always provide an id in the message.
I hope that will save others some time.
Happy hacking!

Is there a better way with Node.js to get updates from a Telegram bot?

I'm simply using something like this:
class Bot {
  constructor(token) {
    let _baseApiURL = `https://api.telegram.org`;
    // code here
  }

  getAPI(apiName) {
    return axios.get(`${this.getApiURL()}/${apiName}`);
  }

  getApiURL() {
    return `${this.getBaseApiUrl()}/bot${this.getToken()}`;
  }

  getUpdates(fn) {
    this.getAPI('getUpdates')
      .then(res => {
        this.storeUpdates(res.data);
        fn(res.data);
        setTimeout(() => {
          this.getUpdates(fn);
        }, 1000);
      })
      .catch(err => {
        console.log('::: ERROR :::', err);
      });
  }
}
const bot = new Bot('mytoken');
bot.start();
I'd like to know whether there is a better way to listen for Telegram's updates, instead of using a timeout and redoing an Ajax call to the 'getUpdates' API.
Telegram supports polling or webhooks, so you can use the latter to avoid polling the getUpdates API.
Getting updates
There are two mutually exclusive ways of receiving updates for your bot — the getUpdates method on one hand and Webhooks on the other. Incoming updates are stored on the server until the bot receives them either way, but they will not be kept longer than 24 hours.
Regardless of which option you choose, you will receive JSON-serialized Update objects as a result.
More info on: https://core.telegram.org/bots/api#getting-updates
You can use telegraf to easily set up a webhook, or to handle the polling for you with a great API.
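For example, a minimal sketch with telegraf (the v4 API is assumed; the domain and port in webhook mode are placeholders):

import { Telegraf } from 'telegraf';

const bot = new Telegraf('mytoken');

bot.on('message', (ctx) => {
  // Updates are pushed into this handler; no manual getUpdates loop needed.
  console.log(ctx.message);
});

// Long polling, managed by telegraf:
// bot.launch();

// Or webhook mode, so Telegram pushes updates to your server:
bot.launch({
  webhook: { domain: 'https://example.com', port: 3000 },
});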

Get messages to persist in RabbitMQ when there are no consumers

My scenario: I want my app to publish logs to RabbitMQ and have another process consume those logs and write to a DB. Additionally, logs should persist in RabbitMQ even if there is no consumer at the moment. However, with the code I have now, my logs don't show up in RabbitMQ unless I start a consumer. What am I doing wrong?
My code:
var amqp = require('amqp');
var connection = amqp.createConnection({
  host: "localhost",
  port: 5672
});

connection.on('ready', function() {
  // Immediately publish
  setTimeout(function() {
    connection.publish('logs',
      new Buffer('hello world'), {},
      function(err, res) {
        console.log(err, '|', res);
      });
  }, 0);

  // Wait a second to subscribe
  setTimeout(function() {
    connection.queue('logs', function(q) {
      q.subscribe(function(message) {
        console.log(message.data);
      });
    });
  }, 1000);
});
Many times the general setup with RabbitMQ is for the publisher to declare an exchange and publish to it. Then the consumer declares the same exchange (which will just ensure it exists if it is already there, and create it if the consumer starts first). This is wrong for your usage. You need the queue to exist from the moment you start publishing to it.
The publisher must create the exchange and the queue; the queue needs to be autodelete=false (durable only helps if you plan to restart your RabbitMQ server). It then publishes to the exchange, and the messages will be delivered to the queue, where they will wait for your consumer to connect and read everything it missed. The consumer must use the exact same queue declare parameters as the producer did when it declared the queue. Because it is autodelete=false, the queue will stay alive and retain the messages no matter when the consumer comes and goes.
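A sketch of that publisher-side setup, using the amqplib package instead of the amqp package from the question (exchange, queue, and routing-key names are illustrative):

import * as amqp from 'amqplib';

async function publishLog(): Promise<void> {
  const conn = await amqp.connect('amqp://localhost:5672');
  const ch = await conn.createChannel();

  // The publisher declares both the exchange and the queue up front, so
  // messages are routed and stored even when no consumer is running yet.
  await ch.assertExchange('logs-exchange', 'direct', { durable: true });
  await ch.assertQueue('logs', { durable: true, autoDelete: false });
  await ch.bindQueue('logs', 'logs-exchange', 'log');

  ch.publish('logs-exchange', 'log', Buffer.from('hello world'), { persistent: true });

  await ch.close();
  await conn.close();
}

The consumer later asserts the same queue with the same options before consuming, and everything published in the meantime is still waiting there.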
