grpc nodejs where to put retrypolicy - node.js

Referencing https://github.com/grpc/proposal/blob/master/A6-client-retries.md, it is not clear where the retry policy is actually placed or referenced. Is it part of
protoLoader.loadSync(PROTO_PATH, {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true
})
A supplemental question: once retry is set up, regarding call.on('xxxxxx', ...), where in the API docs are the options listed? Using VS Code I don't get any lint suggestions, but Copilot gave me these; is there a more comprehensive list?
call.on('end', () => {
  console.log("---END---")
})
call.on('error', err => {
  console.log("---ERROR---:" + JSON.stringify(err))
})
call.on('status', status => {
  console.log("---STATUS---")
})
call.on('metadata', metadata => {
  console.log("---METADATA---")
})
call.on('cancelled', () => {
  console.log("---CANCELLED---")
})
call.on('close', () => {
  console.log("---CLOSE---")
})
call.on('finish', () => {
  console.log("---FINISH---")
})
call.on('drain', () => {
  console.log("---DRAIN---")
})

First, the retry functionality is currently not supported in the Node gRPC library (but it is in development).
Second, once the retry functionality is supported, it can be configured in the service config, as specified in the Integration with Service Config section of the proposal you linked. The service config can be provided to the client automatically by the service owner through the name resolution mechanism, or it can be provided when constructing a Client or Channel object by setting the grpc.service_config channel argument with a value that is a string containing a JSON-encoded service config.
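To make the service-config mechanism concrete, here is a minimal sketch of what that channel argument could look like, assuming @grpc/grpc-js and a made-up service name echo.EchoService; the field names follow gRFC A6, and the client construction is commented out because it needs a generated client class:

```javascript
// Sketch of a service config carrying a retry policy, per gRFC A6.
// 'echo.EchoService' is a placeholder for your own service name.
const serviceConfig = {
  methodConfig: [{
    name: [{ service: 'echo.EchoService' }],
    retryPolicy: {
      maxAttempts: 4,
      initialBackoff: '0.1s',
      maxBackoff: '1s',
      backoffMultiplier: 2,
      retryableStatusCodes: ['UNAVAILABLE']
    }
  }]
};

// Passed as a channel argument when constructing the client:
const channelOptions = {
  'grpc.service_config': JSON.stringify(serviceConfig)
};

// const client = new EchoClient('localhost:50051',
//   grpc.credentials.createInsecure(), channelOptions);
```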
Third, the call objects returned when calling methods are Node stream objects with additional events for metadata and status. Depending on how the method is defined in the .proto file, the call can be a Readable stream, a Writable stream, both, or neither, and it will emit the corresponding events. The cancelled event is only emitted by server call objects, which do not emit the metadata or status events.

Related

Can we access request header on Firebase Auth triggers i.e onCreate?

How can one access the IP and country in firebase auth onCreate trigger? Is there any other way to get this info?
For the front end I'm using firebase-ui, and for user registration I'm using the following method.
app.auth().createUserWithEmailAndPassword(email,password)
Cloud Functions code:
exports.processSignUp = functions.auth.user().onCreate(async user => {
  return admin.auth().setCustomUserClaims(user.uid, {
    clientIp: 'xxx.xxx.xx.xx', // required here headers['x-forwarded-for']
    country: 'countryName' // required here headers['x-appengine-country']
  })
    .then(() => {
    })
    .catch(error => {
      console.log(error);
    });
});
The Cloud Function is triggered by the Google/Firebase infrastructure, and thus the headers are the values from that infrastructure. No information about the user/device that called createUserWithEmailAndPassword is available beyond what is passed in the user object.
If you need more information, consider implementing a callable function that you call directly from your code passing both the credentials (that you now pass to createUserWithEmailAndPassword) and the additional information you need, and then creating the user in the Cloud Functions code itself.
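A hedged sketch of that callable-function approach follows. The helper that derives the claims from a headers object is made up for illustration; the Firebase wiring is shown only as comments since it assumes firebase-functions (v1, where CallableContext exposes rawRequest) and firebase-admin, and `registerUser` is a hypothetical name:

```javascript
// Hypothetical helper: derive the custom claims from an HTTP headers object.
function claimsFromHeaders(headers) {
  return {
    clientIp: headers['x-forwarded-for'] || 'unknown',
    country: headers['x-appengine-country'] || 'unknown'
  };
}

// Sketch of the callable function (assumes firebase-functions and
// firebase-admin; `registerUser` is a made-up name):
//
// exports.registerUser = functions.https.onCall(async (data, context) => {
//   const user = await admin.auth().createUser({
//     email: data.email,
//     password: data.password
//   });
//   await admin.auth().setCustomUserClaims(
//     user.uid, claimsFromHeaders(context.rawRequest.headers));
//   return { uid: user.uid };
// });
```

The client would then call `registerUser` via the Firebase callable-functions SDK instead of createUserWithEmailAndPassword.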

How to add header properties to messages using seneca-amqp-transport

I am working on a project that requires the use of a few RabbitMQ queues. One of the queues requires that the messages are delayed for processing at a time in the future. I noticed in the documentation for RabbitMQ there is a new plugin called RabbitMQ Delayed Message Plugin that seems to allow this functionality. In past RabbitMQ-based projects, I used seneca-amqp-transport for message adding and processing. The issue is that I have not seen any documentation for seneca or been able to find any examples outlining how to add header properties.
It seems as if I need to initially make sure the queue is created with x-delayed-type. Additionally, as each message is added to the queue, I need to make sure the x-delay header parameter is added to the message before it is sent to RabbitMQ. Is there a way to pass this parameter, x-delay, with seneca-amqp-transport?
Here is my current code for adding a message to the queue:
return new Promise((resolve, reject) => {
  const client = require('seneca')()
    .use('seneca-amqp-transport')
    .client({
      type: 'amqp',
      pin: 'action:perform_time_consuming_act',
      url: process.env.AMQP_SEND_URL
    }).ready(() => {
      client.act('action:perform_time_consuming_act', {
        message: {data: 'this is a test'}
      }, (err, res) => {
        if (err) {
          reject(err);
        }
        resolve(true);
      });
    });
});
In the code above, where would header-related data go?
I just looked up the code of the library; in lib/client/publisher.js, this should do the trick:
function publish(message, exchange, rk, options) {
  const opts = Object.assign({}, options, {
    replyTo: replyQueue,
    contentType: JSON_CONTENT_TYPE,
    headers: { 'x-delay': 5000 }, // AMQP message headers go under `headers`
    correlationId: correlationId
  });
  return ch.publish(exchange, rk, Buffer.from(message), opts);
}
Give it a try; it should work. Here the delay value is set to 5000 milliseconds. You can also overload the publish method to take the value as a parameter.
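If you drop down to amqplib directly, note that per-message AMQP headers belong under the `headers` key of the publish options, and the delayed-message plugin additionally requires the exchange to be declared with type 'x-delayed-message'. A hedged sketch (the broker connection is commented out since it needs a reachable broker with the plugin enabled; `delayedPublishOptions` is a made-up helper):

```javascript
// Hypothetical helper: publish options carrying the per-message delay header.
function delayedPublishOptions(delayMs) {
  return {
    contentType: 'application/json',
    headers: { 'x-delay': delayMs } // read by the delayed-message plugin
  };
}

// Sketch using amqplib directly (assumes a broker with the
// rabbitmq_delayed_message_exchange plugin enabled):
//
// const conn = await require('amqplib').connect(url);
// const ch = await conn.createChannel();
// await ch.assertExchange('jobs', 'x-delayed-message', {
//   durable: true,
//   arguments: { 'x-delayed-type': 'direct' }
// });
// ch.publish('jobs', 'perform_time_consuming_act',
//   Buffer.from(JSON.stringify({ data: 'this is a test' })),
//   delayedPublishOptions(5000));
```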

What's a valid @MessagePattern for a NestJS MQTT microservice?

I'm trying to set up an MQTT microservice using NestJS, according to the docs.
I've started a working Mosquitto broker using Docker and verified its operability using various MQTT clients. Now, when I start the NestJS service, it seems to connect correctly (MQTT.fx shows a new client), yet I am unable to receive any messages in my controllers.
This is my bootstrapping, just like in the docs:
main.ts
async function bootstrap() {
const app = await NestFactory.createMicroservice(AppModule, {
transport: Transport.MQTT,
options: {
host: 'localhost',
port: 1883,
protocol: 'tcp'
}
});
app.listen(() => console.log('Microservice is listening'));
}
bootstrap();
app.controller.ts
@Controller()
export class AppController {
  @MessagePattern('mytopic') // tried {cmd:'mytopic'} or {topic:'mytopic'}
  root(msg: Buffer) {
    console.log('received: ', msg)
  }
}
Am I using the message-pattern decorator wrongly, or is my concept wrong of what a NestJS MQTT microservice is even supposed to do? I thought it might subscribe to the topic I pass to the decorator. My only other source of information is the corresponding unit tests.
nest.js Pattern Handler
On nest.js side we have the following pattern handler:
@MessagePattern('sum')
sum(data: number[]): number {
  return data.reduce((a, b) => a + b, 0);
}
As @Alexandre explained, this will actually listen to sum_ack.
Non-nest.js Client
A non-nest.js client could look like this (just save as client.js, run npm install mqtt and run the program with node client.js):
var mqtt = require('mqtt')
var client = mqtt.connect('mqtt://localhost:1883')

client.on('connect', function () {
  client.subscribe('sum_res', function (err) {
    if (!err) {
      client.publish('sum_ack', '{"data": [2, 3]}');
    }
  })
})

client.on('message', function (topic, message) {
  console.log(message.toString())
  client.end()
})
It sends a message on the topic sum_ack and listens for messages on sum_res. When it receives a message on sum_res, it logs the message and ends the program. nest.js expects the message format to be {data: myData} and then calls the handler sum(myData).
// Log:
{"err":null,"response":5} // This is the response from sum()
{"isDisposed":true} // Internal "complete event" (according to unit test)
Of course, this is not very convenient...
nest.js Client
That is because this is meant to be used with another nest.js client rather than a normal mqtt client. The nest.js client abstracts all the internal logic away. See this answer, which describes the client for redis (only two lines need to be changed for mqtt).
async onModuleInit() {
  await this.client.connect();
  // no 'sum_ack' or {data: [0, 2, 3]} needed
  this.client.send('sum', [0, 2, 3]).toPromise();
}
The documentation is not very clear, but it seems that for MQTT, if you have @MessagePattern('mytopic'), you can publish a command on the topic mytopic_ack and you will get a response on mytopic_res. I am still trying to find out how to publish to the MQTT broker from a service.
See https://github.com/nestjs/nest/blob/e019afa472c432ffe9e7330dc786539221652412/packages/microservices/server/server-mqtt.ts#L99
public getAckQueueName(pattern: string): string {
  return `${pattern}_ack`;
}

public getResQueueName(pattern: string): string {
  return `${pattern}_res`;
}
@Tanas is right. Nest.js microservices now listen on your ${topic} and reply on ${topic}/reply. The postfixes _ack and _res are deprecated.
For example:
@MessagePattern('helloWorld')
getHello(): string {
  console.log("hello world")
  return this.appService.getHello();
}
It now listens on topic helloWorld and replies on topic helloWorld/reply.
Regarding ID
You should also provide an id within the payload (see @Hakier), and Nest.js will reply with an answer containing your id.
If you don't provide an id, there won't be any reply, but the corresponding handler logic will still be triggered.
For example (Using the snipped from above):
your msg:
{"data":"foo","id":"bar"}
Nestjs reply:
{"response":"Hello World!","isDisposed":true,"id":"bar"}
Without ID:
your message:
{"data":"foo"} or {}
No reply but Hello World in Terminal
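The id-based request/reply cycle above can be sketched from a plain MQTT client. The broker connection is commented out (it assumes the mqtt package and a local broker), and `buildNestPayload` is just a hypothetical helper for the payload shape Nest expects:

```javascript
// Hypothetical helper: build the payload Nest expects, with a correlation id.
function buildNestPayload(data, id) {
  return JSON.stringify({ data, id });
}

// Sketch with the `mqtt` package against a local broker:
//
// const client = require('mqtt').connect('mqtt://localhost:1883');
// client.on('connect', () => {
//   client.subscribe('helloWorld/reply');
//   client.publish('helloWorld', buildNestPayload('foo', 'bar'));
// });
// client.on('message', (topic, message) => {
//   console.log(topic, message.toString());
//   // expected shape: {"response":"Hello World!","isDisposed":true,"id":"bar"}
// });
```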
I was fighting with MQTT today, and while this helped me a little, I had more problems; below are my findings:
Wrong way of configuration broker URL
In my case when I used non-local MQTT server I started with this:
const app = await NestFactory.createMicroservice(AppModule, {
  transport: Transport.MQTT,
  options: {
    host: 'test.mosquitto.org',
    port: 1883,
    protocol: 'tcp',
  },
});
await app.listenAsync();
but as you can read in the constructor of ServerMqtt, they use the url option only (when not provided, it falls back to 'mqtt://localhost:1883'). Since I do not have a local MQTT broker, it would never resolve app.listenAsync(), which resolves only on connect, and would also never run any handler.
It started to work when I adjusted code to use url option.
const app = await NestFactory.createMicroservice(AppModule, {
  transport: Transport.MQTT,
  options: {
    url: 'mqtt://test.mosquitto.org:1883',
  },
});
await app.listenAsync();
Messages require an id property
The second, very weird problem was that when I used the non-nest.js client script from @KimKern, I had to register two MessagePatterns, sum and sum_ack:
@MessagePattern('sum')
sum(data: number[]): number {
  return data.reduce((a, b) => a + b, 0);
}

@MessagePattern('sum_ack')
sumAck(data: number[]): number {
  return data.reduce((a, b) => a + b, 0);
}
Using console.log, I discovered that the latter is run, but only when the first one is present. You can push the same message to the broker with the mqtt CLI tool to check it:
mqtt pub -t 'sum_ack' -h 'test.mosquitto.org' -m '{"data":[1,2]}'
But the biggest problem was that it didn't reply (publish to sum_res).
The solution was to provide also id while sending a message.
mqtt pub -t 'sum_ack' -h 'test.mosquitto.org' -m '{"data":[1,2], "id":"any-id"}'
Then we could remove the 'sum_ack' MessagePattern and leave only this code:
@MessagePattern('sum')
sum(data: number[]): number {
  return data.reduce((a, b) => a + b, 0);
}
The reason for this was hidden inside the handleMessage method of ServerMqtt, which does not publish the response from a handler if the message didn't have an id.
TL;DR
Specify the url to the message broker using the url option only, and always provide an id for a message.
I hope that will save some time for others.
Happy hacking!

Is there a better way with NodeJs to get updates from a Telegram bot?

I'm using simply like below:
class Bot {
  constructor(token) {
    let _baseApiURL = `https://api.telegram.org`;
    //code here
  }

  getAPI(apiName) {
    return axios.get(`${this.getApiURL()}/${apiName}`);
  }

  getApiURL() {
    return `${this.getBaseApiUrl()}/bot${this.getToken()}`;
  }

  getUpdates(fn) {
    this.getAPI('getUpdates')
      .then(res => {
        this.storeUpdates(res.data);
        fn(res.data);
        setTimeout(() => {
          this.getUpdates(fn);
        }, 1000);
      })
      .catch(err => {
        console.log('::: ERROR :::', err);
      });
  }
}
const bot = new Bot('mytoken');
bot.start();
I'd like to know whether there is a better way to listen for Telegram's updates, instead of using a timeout and redoing an Ajax call to the getUpdates API.
Telegram supports polling or webhooks, so you can use the latter to avoid polling the getUpdates API.
Getting updates
There are two mutually exclusive ways of receiving updates for your
bot — the getUpdates method on one hand and Webhooks on the other.
Incoming updates are stored on the server until the bot receives them
either way, but they will not be kept longer than 24 hours.
Regardless of which option you choose, you will receive JSON-serialized Update objects as a result.
More info on: https://core.telegram.org/bots/api#getting-updates
You can use telegraf to easily setup a webhook or to handle the polling for you with a great API
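A hedged sketch of the webhook route without telegraf follows: register the webhook once via Telegram's setWebhook method, then receive each Update as an HTTP POST. The URL-building helper is made up; the server wiring is commented out because it assumes axios and express and a publicly reachable HTTPS endpoint:

```javascript
// Hypothetical helper: the setWebhook API URL for a given bot token.
function setWebhookUrl(token, publicUrl) {
  return `https://api.telegram.org/bot${token}/setWebhook?url=${encodeURIComponent(publicUrl)}`;
}

// Sketch (assumes axios and express, and that /telegram is reachable
// from the internet over HTTPS):
//
// await axios.get(setWebhookUrl(process.env.BOT_TOKEN, 'https://example.com/telegram'));
// const express = require('express');
// const app = express();
// app.use(express.json());
// app.post('/telegram', (req, res) => {
//   console.log('update', req.body); // one JSON-serialized Update per POST
//   res.sendStatus(200);
// });
// app.listen(8443);
```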

Using publisher confirms with RabbitMQ, in which cases publisher will be notified about success/failure?

Quoting the book, RabbitMQ in Depth:
A Basic.Ack request is sent to a publisher when a message that it has
published has been directly consumed by consumer applications on all
queues it was routed to or that the message was enqueued and persisted
if requested.
Confused by "has been directly consumed": does it mean that when the consumer sends an ack to the broker, the publisher will be informed that the consumer processed the message successfully? Or does it mean that the publisher will be notified when the consumer merely receives the message from the queue?
And "or that the message was enqueued and persisted if requested": is this a conjunction, or will the publisher be informed when either of those happens? (In that case the publisher would be notified twice.)
Using node.js and amqplib, I wanted to check what actually happens:
// consumer.js
amqp.connect(...)
  .then(connection => connection.createChannel())
  .then(() => { /* assert exchange here */ })
  .then(() => { /* assert queue here */ })
  .then(() => { /* bind queue and exchange here */ })
  .then(() => {
    channel.consume(QUEUE, (message) => {
      console.log('Raw RabbitMQ message received', message)
      // Simulate some job to do
      setTimeout(() => {
        channel.ack(message, false)
      }, 5000)
    }, { noAck: false })
  })
// publisher.js
amqp.connect(...)
  .then(connection => connection.createConfirmChannel())
  .then(() => { /* assert exchange here */ })
  .then(() => {
    channel.publish(exchange, routingKey, new Buffer(...), {}, (err, ok) => {
      if (err) {
        console.log('Error from handling confirmation on publisher side', err)
      } else {
        console.log('From handling confirmation on publisher side', ok)
      }
    })
  })
Running the example, I can see the following logs:
From handling confirmation on publisher side undefined
Raw RabbitMQ message received
Time to ack the message
As far as I can see, at least from this log, the publisher is notified only when the message is enqueued? (So the consumer acking the message does not influence the publisher in any way.)
Quoting further:
If a message cannot be routed, the broker will send a Basic.Nack RPC
request indicating the failure. It is then up to the publisher to
decide what to do with the message.
Changing the above example, where the only change is the routing key of the message to something that should not be routed anywhere (there are no bindings that would match the routing key), from the logs I can see only the following:
From handling confirmation on publisher side undefined
Now I'm more confused: what exactly is the publisher notified about here? I would understand if it received an error, like "can't route anywhere"; that would be aligned with the quote above. But as you can see, err is undefined, and as a side question, even though amqplib's official docs use (err, ok), in no case do I see either of those defined. The output here is the same as in the above example, so how can one distinguish the above example from an un-routable message?
So what I'm after here: when exactly will the publisher be notified about what is happening with the message? Is there any concrete example in which one would use PublisherConfirms? From the logging above, I would conclude that it is nice to have in cases where you want to be 100% sure that the message was enqueued.
After searching again and again, I found this:
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
The basic rules are as follows:
An un-routable mandatory or immediate message is confirmed right after the basic.return
A transient message is confirmed the moment it is enqueued
A persistent message is confirmed when it is persisted to disk or when it is consumed on every queue
If more than one of these conditions are met, only the first causes a
confirm to be sent. Every published message will be confirmed sooner
or later and no message will be confirmed more than once.
By default, publishers don't know anything about consumers.
PublisherConfirms is used to check whether the message reached the broker, but not whether the message has been enqueued.
You can use the mandatory flag to be sure the message has been routed.
See https://www.rabbitmq.com/reliability.html:
To ensure messages are routed to a single known queue, the producer
can just declare a destination queue and publish directly to it. If
messages may be routed in more complex ways but the producer still
needs to know if they reached at least one queue, it can set the
mandatory flag on a basic.publish, ensuring that a basic.return
(containing a reply code and some textual explanation) will be sent
back to the client if no queues were appropriately bound.
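A sketch of that mandatory-flag approach with amqplib: the runnable part is just the publish options object, while the channel wiring is commented out since it needs a reachable broker. In amqplib, a basic.return for an unroutable mandatory message surfaces as the channel's 'return' event:

```javascript
// Publish options asking the broker to return unroutable messages
// and to persist the message (persistent maps to deliveryMode 2):
const publishOptions = { mandatory: true, persistent: true };

// Sketch (assumes amqplib and a reachable broker):
//
// const conn = await require('amqplib').connect(url);
// const ch = await conn.createConfirmChannel();
// ch.on('return', (msg) => {
//   // fires when the broker could not route the message to any queue
//   console.log('unroutable:', msg.fields.replyText);
// });
// ch.publish('my-exchange', 'no.such.key', Buffer.from('hi'),
//   publishOptions, (err, ok) => {
//     // confirm callback: the broker accepted the message,
//     // which does not by itself mean it reached a queue
//   });
```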
I'm not entirely sure about the notification-on-ack/nack question, but check out the BunnyBus Node library for a simpler API and easier RabbitMQ management :)
https://github.com/xogroup/bunnybus
const BunnyBus = require('bunnybus');
const bunnyBus = new BunnyBus({
  user: 'your-user',
  vhost: 'your-vhost', // cloudamqp defaults vhost to the username
  password: 'your-password',
  server: 'your.server.com'
});

const handler = {
  'test.event': (message, ack) => {
    // Do your work here.
    // Acknowledge the message off of the bus.
    return ack();
  }
};

// Create exchange and queue if they do not already exist and then auto connect.
return bunnyBus.subscribe('test', handler)
  .then(() => {
    return bunnyBus.publish({event: 'test.event', body: 'here\'s the thing.'});
  })
  .catch(console.log);
