Errors when Sending a Message to a Queue on SQS AWS - node.js

Getting the errors
MissingRequiredParameter: Missing required key
UnexpectedParameter: Unexpected key
when trying to send a message to a queue on AWS SQS, and data returns null.
What am I doing wrong? message contains the correct data.
/**
 *
 * @param message
 */
function sendMessage (message) {
  // Send the message to this other Queue
  sqs.sendMessage(message, function (err, data) {
    if (err) {
      console.log('Error', err)
    } else {
      console.log('Success', data.MessageId)
    }
  })
}

It is not clear what message actually is in your code!
However, message here should be the params object for sendMessage, not the message data itself. It should look like this (minimal options):
message = {
  MessageBody: JSON.stringify(real_message_content),
  QueueUrl: process.env.SQS_MAILER, // <= your queue URL
};
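Putting it together, a minimal sketch of a working call, assuming the AWS SDK for JavaScript v2 and that SQS_MAILER holds your queue URL (the client setup and region are assumptions, not taken from the question):

const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'eu-west-1' }); // region is an assumption

function sendMessage (messageContent) {
  // params, not the raw message content, is what sendMessage expects
  const params = {
    MessageBody: JSON.stringify(messageContent),
    QueueUrl: process.env.SQS_MAILER // your queue URL
  };
  sqs.sendMessage(params, function (err, data) {
    if (err) {
      console.log('Error', err);
    } else {
      console.log('Success', data.MessageId);
    }
  });
}

sendMessage({ hello: 'world' });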

Related

Get all messages from AWS SQS

I have a ScheduledEvent on my Lambda function for every 24 hours, and inside the function I am calling SQS to get my messages.
export class EmailNotificationProcessor {
  public static async run(): Promise<void> {
    console.log('event');
    await this.getNotificationFromSqs();
  }

  private static async getNotificationFromSqs(): Promise<void> {
    const messagesToDelete: DeleteMessageBatchRequestEntryList = [];
    const messageRequest: ReceiveMessageRequest = {
      QueueUrl: process.env.DID_NOTIFICATION_SQS_QUEUE,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20
    };
    const { Messages }: ReceiveMessageResult = await receiveMessage(messageRequest);
    console.log('Messages', Messages);
    if (Messages && Messages.length > 0) {
      console.log('Total Messages ', Messages.length);
      for (const message of Messages) {
        console.log('body is ', message.Body);
        messagesToDelete.push({
          Id: message.MessageId,
          ReceiptHandle: message.ReceiptHandle,
        } as DeleteMessageBatchRequestEntry);
      }
    }
    await deleteMessages(messagesToDelete);
  }
}
I am expecting 1 to 30 messages in my queue and want to process all of them before sending an email built from the content I parse out of the SQS message bodies.
My function for receiving messages:
export const receiveMessage = async (request: SQS.ReceiveMessageRequest): Promise<PromiseResult<SQS.ReceiveMessageResult, AWSError>> => {
  console.log('inside receive');
  return sqs.receiveMessage(request).promise();
};
At the moment I am not able to receive all messages at once; I only get 3 messages, or sometimes just 1, per call.
I know the limit for a single API call is 10 messages, but is there any way to wait and get all of the messages?
First of all, there is no configuration that returns more than 10 messages from the queue in a single call.
ReceiveMessage: Retrieves one or more messages (up to 10), from the specified queue.
For your other problem: I think you are using a short-poll ReceiveMessage call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular ReceiveMessage response.
Try long polling:
Long polling helps reduce the cost of using Amazon SQS by eliminating the number of empty responses (when there are no messages available for a ReceiveMessage request) and false empty responses (when messages are available but aren't included in a response).
Note: to get more messages you need to wrap the SQS call in a loop and keep requesting until the queue is empty. That can also hand you duplicate messages, so set a VisibilityTimeout to handle that.
Try VisibilityTimeout: the duration (in seconds) that received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request.
Sample wrap-up SQS call code:
function getMessages(params, count = 0, callback, allMessages = []) {
  // allMessages is threaded through the recursive calls so results accumulate
  sqs.receiveMessage(params, function (err, data) {
    if (err || (data && !data.Messages || data.Messages.length <= 0)) {
      if (++count >= config.SQSRetries) {
        return callback(null, allMessages);
      }
      return setTimeout(() => {
        return getMessages(params, count, callback, allMessages);
      }, 500);
    } else if (++count !== config.SQSRetries) {
      allMessages.push(data);
      return setTimeout(() => {
        return getMessages(params, count, callback, allMessages);
      }, 500);
    } else {
      allMessages.push(data);
      callback(null, allMessages);
    }
  });
}
We set config.SQSRetries according to our own requirements; since your queue holds 1 to 30 messages, a value of 7 should work well for you.
Links: ReceiveMessage, User Guide
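For reference, a rough async/await sketch of the same loop-until-empty idea using long polling; receiveAllMessages and maxEmptyPolls are made-up names, and it assumes the AWS SDK v2 .promise() client:

async function receiveAllMessages(queueUrl, maxEmptyPolls = 3) {
  const allMessages = [];
  let emptyPolls = 0;
  while (emptyPolls < maxEmptyPolls) {
    const data = await sqs.receiveMessage({
      QueueUrl: queueUrl,
      MaxNumberOfMessages: 10, // hard upper limit per call
      WaitTimeSeconds: 20,     // long polling
      VisibilityTimeout: 60    // hide received messages from the next polls
    }).promise();
    if (data.Messages && data.Messages.length > 0) {
      allMessages.push(...data.Messages);
      emptyPolls = 0; // keep draining while messages keep coming
    } else {
      emptyPolls++;   // stop after a few consecutive empty responses
    }
  }
  return allMessages;
}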

async for loop not putting sqs message in queue

Inside a Lambda function I am trying to run a for loop that parses and sends SQS messages to a certain queue. Currently it runs the for loop and creates the params properly (I checked via logging), and it logs a message just outside/after the for loop saying the Lambda is done.
The issue is that the SQS messages aren't being sent and/or arriving in the SQS queue.
I haven't included the rest of the Lambda function as it is just noise and doesn't relate to the issue, since it already runs correctly; the only problem is with the SQS messages.
for (var i = 0; i < dogs.length; i++) {
  let MessageBody = JSON.stringify(dogs[i]);
  let params = {
    MessageBody,
    QueueUrl: process.env.serviceQueue,
    DelaySeconds: 0
  };
  sqs.sendMessage(params, function(err, data) {
    if (err) {
      logger.error(`sqs.sendMessage: Error message: ${err}`);
    } else {
      let stringData = JSON.stringify(data);
      logger.info(`sqs.sendMessage: Data: ${stringData}`);
    }
  });
}
Iterating over multiple async requests and relying on callbacks is a recipe for disaster, as well as messy code. I'd recommend the following (using async/await):
await Promise.all(dogs.map(async (dog) => {
  let params = {
    MessageBody: JSON.stringify(dog),
    QueueUrl: process.env.serviceQueue,
    DelaySeconds: 0
  };
  let data = await sqs.sendMessage(params).promise().catch(err => {
    logger.error(`sqs.sendMessage: Error message: ${err}`);
  });
  logger.info(`sqs.sendMessage: Data: ${JSON.stringify(data)}`);
}));
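The other thing to make sure of is that the Lambda handler itself awaits this before returning; otherwise the function can finish before the sends complete. A minimal sketch under that assumption (the handler shape and the source of dogs are hypothetical, not from the question):

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

exports.handler = async (event) => {
  const dogs = JSON.parse(event.body || '[]'); // assumption: dogs arrive in the event body

  // Awaited, so the Lambda stays alive until SQS has accepted every message
  await Promise.all(dogs.map(dog =>
    sqs.sendMessage({
      MessageBody: JSON.stringify(dog),
      QueueUrl: process.env.serviceQueue,
      DelaySeconds: 0
    }).promise()
  ));

  console.log('lambda is done');
};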

Datastore Contention Errors

Error: too much contention on these datastore entities. please try again.
at /Users/wgosse/Documents/data-transfer-request/node_modules/grpc/src/node/src/client.js:554:15 code: 409, metadata: Metadata { _internal_repr: {} }
We’re attempting to set up a system where a node event listener will pull messages from a Pubsub queue and use these messages to update datastore entities as they come in. Unfortunately, we’re running into a contention error when too many messages are pulled off at once. Normally, we would batch these requests but having this code in the event listener makes this difficult to pull off. Is there a way besides batching to eliminate these errors?
The entities we’re trying to update do have a shared ancestor if that’s relevant.
listenForMessages establishes the event listener and shows the callback with the update and acknowledgement logic.
// Start listener to wait for return messages
pubsub_model.listenForMessages((message) => {
  filepath_ctrl.updateFromSub(
    message.attributes,
    (err, data) => {
      if (err) {
        console.log('PUBSUB: Unable to update filepath entity. Error message: ', err);
        return false;
      }
      console.log('PUBSUB: Filepath entity updated.');
      // "Ack" (acknowledge receipt of) the message
      message.ack();
      return data;
    }
  );
});
/**
 * Establishes an event listener to receive return messages post processing
 * @param {Function} messageCallback
 */
function listenForMessages(messageCallback) {
  pubsubConnect(
    0,
    return_topic,
    config.get('PUBSUB_RECIEVE_TOPIC'),
    return_sub,
    config.get('PUBSUB_RECIEVE_SUB'),
    (err) => {
      if (err) {
        console.log('PUBSUB: ERROR: Error encountered while attempting to establish listening connection: ', err);
        return false;
      }
      console.log('PUBSUB: Listening for messages...');
      // Function for handling messages
      const msgHandlerConstruct = (message) => {
        messageHandler(messageCallback, message);
      };
      const errHandler = (puberr) => {
        console.log('PUBSUB: ERROR: Error encountered when listening for messages: ', puberr);
      };
      return_sub.on('message', msgHandlerConstruct);
      return_sub.on('error', errHandler);
      return true;
    }
  );
  return true;
}
/**
 * Business logic for processing return messages. Upserts the message into the datastore as a filepath.
 * @param {Function} callback
 * @param {object} message
 */
function messageHandler(callback, message) {
  console.log(`PUBSUB: Received message ${message.id}:`);
  console.log(`\tData: ${message.data}`);
  console.log(`\tAttributes: ${JSON.stringify(message.attributes)}`);
  // Datastore update logic
  // Callback MUST acknowledge after error detection
  callback(message);
}
updateFromSub takes a message and structures the attributes into an entity to be saved to datastore, then calls our update method.
/**
 * Gets the entity to be updated and updates anything that's changed in the message
 * @param {*} msg_attributes
 * @param {*} cb
 */
module.exports.updateFromSub = function (msg_attributes, cb) {
  if (msg_attributes.id && msg_attributes.transfer_id) {
    filepath_model.read(msg_attributes.id, msg_attributes.transfer_id, (err, entity) => {
      if (err) {
        return cb(err);
      }
      writeUpdateToOject(entity, msg_attributes, (obj_err, updated_entity) => {
        if (obj_err) {
          return cb(obj_err);
        }
        filepath_model.update(msg_attributes.id, msg_attributes.transfer_id, updated_entity, cb);
        return true;
      });
      return true;
    });
  } else {
    cb('Message missing id and/or transfer id. Message: ', msg_attributes);
    return false;
  }
  return true;
};
The update method is from the GCP tutorial, but has been modified to accommodate a parent child relation.
const Datastore = require('@google-cloud/datastore');
const ds = Datastore({
  projectId: config.get('GCLOUD_PROJECT')
});

function update (id, parentId, data, cb) {
  let key;
  if (id) {
    key = ds.key([parentKind,
      parseInt(parentId, 10),
      kind,
      parseInt(id, 10)]);
  } else {
    key = ds.key([parentKind,
      parseInt(parentId, 10),
      kind]);
  }
  const entity = {
    key: key,
    data: toDatastore(data, ['description'])
  };
  ds.save(
    entity,
    (err) => {
      data.id = entity.key.id;
      cb(err, err ? null : data);
    }
  );
}
You are reaching the writes-per-second limit on the same entity group. By default it is 1 write per second.
Datastore limits table:
https://cloud.google.com/datastore/docs/concepts/limits
It seems that Pub/Sub is generating messages at too high a rate for Datastore to write them one by one within this limit. What you can try is a Pub/Sub pull (polling) subscription: collect a set of updates and write them with a single batch, as sketched below.
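A rough sketch of that batching idea, reusing the ds client from the question; buildEntity is a hypothetical helper that maps message.attributes to a { key, data } entity, and ds.save accepts an array, so the whole batch goes in one write:

const batch = [];

return_sub.on('message', (message) => {
  // Buffer updates instead of writing each one immediately
  batch.push({ entity: buildEntity(message.attributes), message: message });
});

// Flush roughly once per second to stay near the 1 write/sec
// limit of the shared entity group
setInterval(() => {
  if (batch.length === 0) return;
  const pending = batch.splice(0, batch.length);
  ds.save(pending.map(p => p.entity), (err) => {
    if (err) {
      console.log('PUBSUB: Batch save failed: ', err);
      return; // not acked, so Pub/Sub will redeliver these messages
    }
    pending.forEach(p => p.message.ack());
  });
}, 1000);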
Sounds like a case of hotspotting. When you need to perform a high rate of sustained writes to an entity, you may choose to manually shard your entities into entities of different kinds, but using the same key.
See here: https://cloud.google.com/datastore/docs/best-practices#high_readwrite_rates_to_a_narrow_key_range
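For illustration only, a sketch of what such manual sharding could look like with the key-building code from the question; NUM_SHARDS and the shard-suffixed parent kind are hypothetical choices, not prescribed by the docs:

const NUM_SHARDS = 10; // assumption: tune to your write rate

function shardedKey (parentId, id) {
  // Spread writes across several parent kinds so no single
  // entity group has to absorb every write
  const shard = Math.floor(Math.random() * NUM_SHARDS);
  return ds.key([`${parentKind}_${shard}`, parseInt(parentId, 10),
                 kind, parseInt(id, 10)]);
}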

Check for an incoming message in aws sqs

How does my function continuously check for an incoming message? The following function exits after receiving a message. Considering long polling has been enabled for the queue, how do I continuously check for a new message?
function checkMessage(){
  var params = {
    QueueUrl: Constant.QUEUE_URL,
    VisibilityTimeout: 0,
    WaitTimeSeconds: 0
  };
  sqs.receiveMessage(params, (err, data) => {
    if (data) {
      console.log("%o", data);
    }
  });
}
Your function would need to continually poll Amazon SQS.
Long Polling will delay a response by up to 20 seconds if there are no messages available. If a message becomes available during that period, it will be immediately returned. If there is no message after 20 seconds, it returns without providing a message.
Therefore, your function would need to poll SQS again (perhaps doing something else in the meantime).
var processMessages = function (err, data) {
  if (data && data.Messages) {
    for (var i = 0; i < data.Messages.length; i++) {
      var message = data.Messages[i];
      var body = JSON.parse(message.Body);
      // process message
      // delete if successful
    }
  }
};

// Poll in a loop, waiting for each receive call to finish before issuing
// the next one (a bare while (true) around the callback API would never
// yield to the event loop).
async function poll() {
  while (true) {
    const data = await sqs.receiveMessage({
      QueueUrl: sqsQueueUrl,
      MaxNumberOfMessages: 5, // how many messages to retrieve in a batch
      VisibilityTimeout: 60,  // how long until these messages are available to another consumer
      WaitTimeSeconds: 15     // how many seconds to wait for messages before continuing
    }).promise();
    processMessages(null, data);
  }
}

poll();
(function checkMessage(){
  var params = {
    QueueUrl: Constant.QUEUE_URL,
    VisibilityTimeout: 0,
    WaitTimeSeconds: 0
  };
  sqs.receiveMessage(params, (err, data) => {
    if (data) {
      console.log("%o", data);
    }
    checkMessage();
  });
})()
To continuously check for an incoming message in your AWS SQS queue, you will want to recursively call SQS again whenever a response is returned, as above.

Wrong commit order when using autoCommit=false in HighlevelConsumer

I'm using a HighLevelProducer and HighLevelConsumer to send and receive messages. The HighLevelConsumer is configured with autoCommit=false as I want to commit messages only once they have been processed successfully. The problem is that the first message never really gets committed.
Example:
Send Messages 1-10.
Receive Message 1
Receive Message 2
Commit Message 2
...
Receive Message 10
Commit Message 10
Commit Message 1
If I restart my Consumer, all messages from 1 to 10 are processed again. Only if I send new messages to the consumer, the old messages get committed. This happens for any number of messages.
My Code reads as follows:
var kafka = require('kafka-node'),
    HighLevelConsumer = kafka.HighLevelConsumer,
    client = new kafka.Client("localhost:2181/");

consumer = new HighLevelConsumer(
  client,
  [
    { topic: 'mytopic' }
  ],
  {
    groupId: 'my-group',
    id: "my-consumer-1",
    autoCommit: false
  }
);

consumer.on('message', function (message) {
  console.log("consume: " + message.offset);
  consumer.commit(function (err, data) {
    console.log("commited:" + message.offset);
  });
  console.log("consumed:" + message.offset);
});

process.on('SIGINT', function () {
  consumer.close(true, function () {
    process.exit();
  });
});

process.on('exit', function () {
  consumer.close(true, function () {
    process.exit();
  });
});
var messages = 10;
var kafka = require('kafka-node'),
    HighLevelProducer = kafka.HighLevelProducer,
    client = new kafka.Client("localhost:2181/");

var producer = new HighLevelProducer(client, { partitionerType: 2, requireAcks: 1 });

producer.on('error', function (err) { console.log(err) });

producer.on('ready', function () {
  for (i = 0; i < messages; i++) {
    payloads = [{ topic: 'mytopic', messages: "" }];
    producer.send(payloads, function (err, data) {
      err ? console.log(i + "err", err) : console.log(i + "data", data);
    });
  }
});
Am I doing something wrong or is this a bug in kafka-node?
A commit of message 2 is an implicit commit of message 1.
As your commits are done asynchronously, and the commits of message 1 and message 2 happen quickly after each other (i.e., committing 2 happens before the consumer has sent the commit of 1), the first commit never happens explicitly and only a single commit of message 2 is sent.
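If the goal is to have every offset committed explicitly and in order, one workaround is to pause the consumer while a message is in flight and resume only after its commit callback fires. A rough sketch with kafka-node's pause/resume (processMessage is a hypothetical processing function, not from the question):

consumer.on('message', function (message) {
  consumer.pause(); // stop fetching so only one message is in flight

  processMessage(message, function (processErr) {
    if (processErr) {
      console.log('processing failed, not committing offset ' + message.offset);
      consumer.resume();
      return;
    }
    // force = true sends the commit right away instead of waiting
    // for the next auto-commit tick
    consumer.commit(true, function (commitErr) {
      if (commitErr) {
        console.log('commit failed: ' + commitErr);
      } else {
        console.log('committed: ' + message.offset);
      }
      consumer.resume();
    });
  });
});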

Resources