How does my function continuously check for an incoming message? The following function exits after receiving a message. Given that long polling has been enabled for the queue, how do I continuously check for a new message?
function checkMessage() {
    var params = {
        QueueUrl: Constant.QUEUE_URL,
        VisibilityTimeout: 0,
        WaitTimeSeconds: 0
    };
    sqs.receiveMessage(params, (err, data) => {
        if (data) {
            console.log("%o", data);
        }
    });
}
Your function would need to continually poll Amazon SQS.
Long Polling will delay a response by up to 20 seconds if there are no messages available. If a message becomes available during that period, it will be immediately returned. If there is no message after 20 seconds, it returns without providing a message.
Therefore, your function would need to poll SQS again (perhaps doing something else in the meantime).
var processMessages = function (err, data) {
    if (data && data.Messages) {
        for (var i = 0; i < data.Messages.length; i++) {
            var message = data.Messages[i];
            var body = JSON.parse(message.Body);
            // process message
            // delete if successful
        }
    }
};
// A bare while (true) around the callback API would flood SQS with requests,
// so await each receive before starting the next one:
async function pollQueue() {
    while (true) {
        const data = await sqs.receiveMessage({
            QueueUrl: sqsQueueUrl,
            MaxNumberOfMessages: 5, // how many messages to retrieve in a batch
            VisibilityTimeout: 60,  // how long until these messages are available to another consumer
            WaitTimeSeconds: 15     // how many seconds to wait for messages before continuing
        }).promise();
        processMessages(null, data);
    }
}
pollQueue();
(function checkMessage() {
    var params = {
        QueueUrl: Constant.QUEUE_URL,
        VisibilityTimeout: 0,
        WaitTimeSeconds: 0
    };
    sqs.receiveMessage(params, (err, data) => {
        if (data) {
            console.log("%o", data);
        }
        checkMessage();
    });
})();
To continuously check for incoming messages in your AWS SQS queue, recursively call receiveMessage from inside its own callback, so a new request is issued each time the previous one returns.
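Since the question says long polling is enabled, the same recursive pattern can take advantage of it by using a non-zero WaitTimeSeconds, so each call waits up to 20 seconds instead of returning immediately. A minimal sketch along those lines (reusing Constant.QUEUE_URL from the question):

(function checkMessage() {
    var params = {
        QueueUrl: Constant.QUEUE_URL,
        WaitTimeSeconds: 20 // long poll: hold the request open for up to 20s
    };
    sqs.receiveMessage(params, (err, data) => {
        if (err) {
            console.error(err);
        } else if (data.Messages) {
            console.log("%o", data.Messages);
            // process each message, then delete it from the queue
        }
        checkMessage(); // issue the next long poll either way
    });
})();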
I have a ScheduledEvent on my Lambda function for every 24 hours, and inside the function I am calling SQS to get my messages.
export class EmailNotificationProcessor {
    public static async run(): Promise<void> {
        console.log('event');
        await this.getNotificationFromSqs();
    }
    private static async getNotificationFromSqs(): Promise<void> {
        const messagesToDelete: DeleteMessageBatchRequestEntryList = [];
        const messageRequest: ReceiveMessageRequest = {
            QueueUrl: process.env.DID_NOTIFICATION_SQS_QUEUE,
            MaxNumberOfMessages: 10,
            WaitTimeSeconds: 20
        };
        const { Messages }: ReceiveMessageResult = await receiveMessage(messageRequest);
        console.log('Messages', Messages);
        if (Messages && Messages.length > 0) {
            console.log('Total Messages', Messages.length); // log the length only after the undefined check
            for (const message of Messages) {
                console.log('body is', message.Body);
                messagesToDelete.push({
                    Id: message.MessageId,
                    ReceiptHandle: message.ReceiptHandle,
                } as DeleteMessageBatchRequestEntry);
            }
        }
        await deleteMessages(messagesToDelete);
    }
}
I am expecting 1 to 30 messages in my queue and want to process all of them before sending an email whose content I will parse from the SQS message bodies.
My function for receiving messages
export const receiveMessage = async (request: SQS.ReceiveMessageRequest): Promise<PromiseResult<SQS.ReceiveMessageResult, AWSError>> => {
    console.log('inside receive');
    return sqs.receiveMessage(request).promise();
};
Now I am not able to receive all messages at once; I only get 3 messages, or sometimes 1 message, at a time.
I know the limit is 10 messages per API call, but is there any way to wait and get all of them?
First of all, there is no configuration that returns more than 10 messages from the queue:
ReceiveMessage: Retrieves one or more messages (up to 10), from the specified queue
For your other problem: I think you are using a short-poll ReceiveMessage call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular ReceiveMessage response.
Try Long Polling:
Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren't included in a response).
Note: to get more messages you need to wrap the SQS call in a loop and keep requesting until the queue is empty. This can also hand you duplicate messages, which is where VisibilityTimeout helps.
Try VisibilityTimeout: The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request.
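For example, a receive request that combines long polling with a visibility timeout long enough to process and delete each batch might look like this (the 60-second value is an assumption; size it to your actual processing time):

const params = {
    QueueUrl: process.env.DID_NOTIFICATION_SQS_QUEUE,
    MaxNumberOfMessages: 10, // the hard API maximum per call
    WaitTimeSeconds: 20,     // long polling: wait up to 20s for messages
    VisibilityTimeout: 60    // hide delivered messages from other consumers while you work
};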
Sample wrapped SQS call code:
function getMessages(params, count = 0, callback, allMessages = []) {
    sqs.receiveMessage(params, function (err, data) {
        if (err || !data || !data.Messages || data.Messages.length <= 0) {
            // empty (or failed) response: retry until the retry limit is reached
            if (++count >= config.SQSRetries) {
                return callback(null, allMessages);
            }
            return setTimeout(() => {
                return getMessages(params, count, callback, allMessages);
            }, 500);
        } else if (++count !== config.SQSRetries) {
            allMessages.push(data);
            return setTimeout(() => {
                return getMessages(params, count, callback, allMessages);
            }, 500);
        } else {
            allMessages.push(data);
            callback(null, allMessages);
        }
    });
}
We set config.SQSRetries according to our requirements, but as your queue holds 1 to 30 messages, '7' should be good for you!
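If you can use Promises, the same keep-requesting-until-empty idea is simpler with async/await. A sketch, under the assumption that one empty long-poll response means the queue is drained (add retries like the code above if you need to be stricter):

async function drainQueue(queueUrl) {
    const all = [];
    while (true) {
        const data = await sqs.receiveMessage({
            QueueUrl: queueUrl,
            MaxNumberOfMessages: 10, // API maximum per call
            WaitTimeSeconds: 20,     // long poll each request
            VisibilityTimeout: 60    // keep fetched messages hidden while draining
        }).promise();
        if (!data.Messages || data.Messages.length === 0) {
            break; // empty long-poll response: treat the queue as drained
        }
        all.push(...data.Messages);
    }
    return all;
}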
Links: ReceiveMessage, User Guide
I am trying to run a for loop inside a Lambda function which parses and sends SQS messages to a certain queue. Currently it runs the for loop and creates the params properly (I checked via logging), and it logs a message just after the for loop saying the Lambda is done.
The issue is that the SQS messages aren't being sent and/or arriving in the SQS queue.
I haven't included the rest of the Lambda function as it is just noise and doesn't relate to the issue, since it already runs correctly; the only issue is with the SQS messages.
for (var i = 0; i < dogs.length; i++) {
    let MessageBody = JSON.stringify(dogs[i]);
    let params = {
        MessageBody,
        QueueUrl: process.env.serviceQueue,
        DelaySeconds: 0
    };
    sqs.sendMessage(params, function (err, data) {
        if (err) {
            logger.error(`sqs.sendMessage: Error message: ${err}`);
        } else {
            let stringData = JSON.stringify(data);
            logger.info(`sqs.sendMessage: Data: ${stringData}`);
        }
    });
}
Iterating over multiple async requests with callbacks is a recipe for disaster, as well as messy code: the Lambda can finish before the sends complete. I'd recommend the below (using async/await), which makes the handler wait for every send:
await Promise.all(dogs.map(async (dog) => {
    let params = {
        MessageBody: JSON.stringify(dog),
        QueueUrl: process.env.serviceQueue,
        DelaySeconds: 0
    };
    let data = await sqs.sendMessage(params).promise().catch(err => {
        logger.error(`sqs.sendMessage: Error message: ${err}`);
    });
    if (data) {
        logger.info(`sqs.sendMessage: Data: ${JSON.stringify(data)}`);
    }
}));
I have a case where I enqueue some messages using 'queue'. Now I want to extract these messages serially. I am using 'worker', but it fires all messages asynchronously, calling my callback function for each message at the same time. Is there a way I can control the worker to extract only a single message, or to call some dequeue method serially?
// Code enqueuing 3 messages...
var queue = new NR.queue({connection: connectionDetails});
queue.enqueue(config.Redis.resquequeuename, config.Redis.resquequeuejob, {
    message_id: 1,
    message_payload: "this is a test body of my message"
});
queue.enqueue(config.Redis.resquequeuename, config.Redis.resquequeuejob, {
    message_id: 2,
    message_payload: "this is a test body of my message"
});
queue.enqueue(config.Redis.resquequeuename, config.Redis.resquequeuejob, {
    message_id: 3,
    message_payload: "this is a test body of my message"
});
// Code using worker to dequeue messages
var GetQueueMessage = function (appcallback) {
    var job = {};
    job[config.Redis.resquequeuejob] = {
        plugins: [],
        pluginOptions: {},
        perform: function (compositemessage, callback) {
            callback(null, appcallback(null, compositemessage));
        }
    };
    var workername = os.hostname() + ":" + process.pid;
    var worker = new NR.worker({connection: connectionDetails, queues: [config.Redis.resquequeuename], 'name': workername}, job);
    worker.connect(function () {
        worker.workerCleanup();
        console.log("Worker starting, Resque worker. Worker name: " + workername);
        worker.start();
    });
    // different event handlers for worker here...
};
GetQueueMessage(function (err, pCompositeMessage) {
    console.log("Composite Message returned from Worker: " + JSON.stringify(pCompositeMessage));
    console.log(new Date());
});
I'm using a HighLevelProducer and HighLevelConsumer to send and receive messages. The HighLevelConsumer is configured with autoCommit=false, as I want to commit messages only after they have been processed successfully. The problem is that the first message never really gets committed.
Example:
Send Messages 1-10.
Receive Message 1
Receive Message 2
Commit Message 2
...
Receive Message 10
Commit Message 10
Commit Message 1
If I restart my consumer, all messages from 1 to 10 are processed again. Only when I send new messages do the old ones get committed. This happens for any number of messages.
My Code reads as follows:
var kafka = require('kafka-node'),
    HighLevelConsumer = kafka.HighLevelConsumer,
    client = new kafka.Client("localhost:2181/");
consumer = new HighLevelConsumer(
    client,
    [
        { topic: 'mytopic' }
    ],
    {
        groupId: 'my-group',
        id: "my-consumer-1",
        autoCommit: false
    }
);
consumer.on('message', function (message) {
    console.log("consume: " + message.offset);
    consumer.commit(function (err, data) {
        console.log("committed: " + message.offset);
    });
    console.log("consumed: " + message.offset);
});
process.on('SIGINT', function () {
    consumer.close(true, function () {
        process.exit();
    });
});
process.on('exit', function () {
    consumer.close(true, function () {
        process.exit();
    });
});
var messages = 10;
var kafka = require('kafka-node'),
    HighLevelProducer = kafka.HighLevelProducer,
    client = new kafka.Client("localhost:2181/");
var producer = new HighLevelProducer(client, { partitionerType: 2, requireAcks: 1 });
producer.on('error', function (err) { console.log(err); });
producer.on('ready', function () {
    for (let i = 0; i < messages; i++) {
        var payloads = [{ topic: 'mytopic', messages: "" }];
        producer.send(payloads, function (err, data) {
            err ? console.log(i + "err", err) : console.log(i + "data", data);
        });
    }
});
Am I doing something wrong or is this a bug in kafka-node?
A commit of message 2 is an implicit commit of message 1.
As your commits are done asynchronously, and the commits of messages 1 and 2 happen in quick succession (i.e., the commit of 2 is issued before the consumer has sent the commit of 1), the first commit never happens explicitly, and only a single commit of message 2 will be sent.
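Given that semantics (committing offset N implicitly commits everything before it), one workaround is to ensure only one commit is in flight at a time, so commits cannot overtake each other. A sketch using the same consumer.commit call from the question; the committing flag is an illustrative addition, not a kafka-node feature:

var committing = false;
consumer.on('message', function (message) {
    // process the message here ...
    if (!committing) {
        committing = true;
        consumer.commit(function (err, data) {
            committing = false; // allow the next commit once this one is acknowledged
            if (err) console.log(err);
        });
    }
    // offsets skipped while a commit was in flight are covered
    // by the next commit, since commits are cumulative
});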
How do I get the number of messages currently en-queued?
My code is basically the following:
function readQueue() {
    var open = require('amqplib').connect(config.rabbitServer);
    open.then(function (conn) {
        var ok = conn.createChannel();
        ok = ok.then(function (ch) {
            ch.prefetch(config.bulkSize);
            setInterval(function () {
                handleMessages();
            }, config.bulkInterval);
            ch.assertQueue(config.inputQueue);
            ch.consume(config.inputQueue, function (msg) {
                if (msg !== null) {
                    pendingMessages.push(msg);
                }
            });
        });
        return ok;
    }).then(null, console.warn);
}
I found nothing in the documentation or while debugging, and I did see a different library that allows this, so I'm wondering if amqplib supports this as well.
You can get the queue length with amqplib.
In my case the queue was declared with 'durable: true'; you have to pass that as an option.
var amqp = require('amqplib/callback_api');
amqp.connect(amqp_url, function (err, conn) {
    conn.createChannel(function (err, ch) {
        var q = 'task2_queue';
        ch.assertQueue(q, {durable: true}, function (err, ok) {
            console.log(ok);
        });
    });
});
It will return an object like this:
{ queue: 'task2_queue', messageCount: 34, consumerCount: 2 }
For more information: https://www.squaremobius.net/amqp.node/channel_api.html#channel_assertQueue
I think the assertQueue method call will return an object that contains the current message count. I don't remember the exact property name off-hand, but it should be in there.
The real trick, though, is that this number will never be updated once you call assertQueue. The only way to get an updated message count is to call assertQueue again. This can have some performance implications if you're checking it too frequently.
You should call channel.checkQueue(queueName); you will get back an object like { queue: 'queueName', messageCount: 1, consumerCount: 0 }, where the property messageCount is exactly the current number of messages in the queue.
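A minimal sketch with amqplib's promise API (the queue name is taken from the earlier example):

const amqp = require('amqplib');

async function getMessageCount(queueName) {
    const conn = await amqp.connect('amqp://localhost');
    const ch = await conn.createChannel();
    // checkQueue asserts nothing; it only inspects an existing queue
    const { messageCount } = await ch.checkQueue(queueName);
    await conn.close();
    return messageCount;
}

getMessageCount('task2_queue').then(count => console.log('messages:', count));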
I couldn't find a direct solution using Node, but by using the RabbitMQ management API I was able to get the message count.
After enabling the RabbitMQ management plugin, the API can be accessed at http://127.0.0.1:15672/api/queues/vhost/name, logging in as user guest with password guest.
var request = require('request');
var count_url = "http://guest:guest@127.0.0.1:15672/api/queues/%2f/" + q;
var mincount = 0;
..........
..........
request({
    url: count_url
}, function (error, response, body) {
    console.log("Called RabbitMQ API");
    if (error) {
        console.error("Unable to fetch Queued Msgs Count" + error);
        return;
    } else {
        var message = JSON.parse(body);
        if (message.hasOwnProperty("messages_ready")) {
            // messages_ready does NOT count unacked messages
            var msg_ready = JSON.stringify(message.messages_ready);
            console.log("message.messages_ready=" + msg_ready);
            if (msg_ready == mincount) {
                console.log("mincount Reached ..Requesting Producer");
                // Code to produce msgs ..
            }
        }
        if (message.hasOwnProperty("messages")) {
            // messages = total messages, i.e. including unacked
            var msg = JSON.stringify(message.messages);
            console.log("message.messages=" + msg);
        }
    }
});