SQS queue - send message without MessageGroupId? - node.js

I am trying to send a message without a MessageGroupId because I basically don't need it. I have a few microservices running that should be able to read from the queue at any time, and if I use the same group ID, it means only one service can read these messages, one by one.
Now, generating a UUID as the MessageGroupId sounds like bad practice.
Is there a way to disable MessageGroupId, or to send a default value that won't act as a MessageGroupId?
const params = {
  MessageDeduplicationId: `${uuidv1()}`,
  MessageBody: JSON.stringify({
    name: 'Ben',
    lastName: 'Beri',
  }),
  QueueUrl: `https://sqs.us-east-1.amazonaws.com/${accountId}/${queueName}`,
};

sqs.sendMessage(params, (err, data) => {
  if (err) {
    console.log('error! ' + err.message);
    return;
  }
  console.log(data.MessageId);
});
error! The request must contain the parameter MessageGroupId.

You can't insert a message into a FIFO queue without a MessageGroupId. If you want messages to be processed sequentially, use the same MessageGroupId for all of them; otherwise, use a unique value for each.
What implications are you facing with using a UUID as the MessageGroupId?


Messages order of smooch - whatsapp

I have a bot and I use Smooch to run the bot on WhatsApp.
I use the 'smooch-core' npm package for that.
When I send a lot of messages one after the other, the messages are sometimes displayed in reverse order in WhatsApp.
Here is the code for sending messages:
for (const dataMessage of data) {
  await sendMessage(dataMessage);
}

function sendMessage(dataMessage) {
  return new Promise((resolve, reject) => {
    smoochClient.appUsers.sendMessage({
      appId: xxxx,
      userId: userId,
      message: dataMessage
    }).then((response) => {
      console.log('response: ' + JSON.stringify(response), 'green');
      resolve();
    }).catch(err => {
      console.log('error: ' + JSON.stringify(err), 'red');
      reject(err);
    });
  });
}
Each dataMessage looks like this:
{
  role: "appMaker",
  type: "text",
  text: txt
}
I tried to see how I could fix it, and I saw there is an option to get the message status from the webhook and then wait for each message to reach the appropriate status before sending the following one.
But I would like to know: is there something simpler? Is there a parameter that can be added to the message itself to specify its order? Or is there something in the npm package that gives information about the message and its status?
In the doc below, WhatsApp mentions that they do not guarantee message ordering.
https://developers.facebook.com/docs/whatsapp/faq#faq_173242556636267
The same limitation applies to any async messaging platform (and most of them are async), so backend processing times and other random factors can impact individual message processing/delivery times, and thus ordering on the user's device (e.g. backend congestion, attachments, message size, etc.).
You can try adding a small [type-dependent] delay between sending each message to reduce the frequency of mis-ordered messages (a longer delay for messages with attachments, etc.).
The foolproof way (with much more complexity) is to queue messages by appUser on your end, only sending the next message after receiving the message:delivery:user webhook event for the previous message.
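The queue-per-user approach can be sketched as a promise chain per appUser; `sendFn` here is a hypothetical stand-in for whatever actually sends one message (e.g. a smooch-core call followed by waiting for the delivery webhook):

```javascript
// Hypothetical sketch: serializes message sends per user, so the next send
// only starts after the previous one has completed.
const queues = new Map(); // userId -> tail of that user's promise chain

function enqueueMessage(userId, message, sendFn) {
  const tail = queues.get(userId) || Promise.resolve();
  const next = tail.then(() => sendFn(message));
  // Keep the chain alive even if one send fails.
  queues.set(userId, next.catch(() => {}));
  return next;
}
```

Messages for different users still go out concurrently; only sends to the same user are serialized.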

How to add header properties to messages using seneca-amqp-transport

I am working on a project that requires the use of a few RabbitMQ queues. One of the queues requires that messages are delayed for processing at a time in the future. I noticed in the RabbitMQ documentation that there is a plugin called RabbitMQ Delayed Message Plugin that seems to provide this functionality. In past RabbitMQ-based projects, I used seneca-amqp-transport for adding and processing messages. The issue is that I have not seen any documentation for seneca, or been able to find any examples, outlining how to add header properties.
It seems as if I need to make sure the queue is initially created with x-delayed-type. Additionally, as each message is added to the queue, I need to make sure the x-delay header parameter is added to the message before it is sent to RabbitMQ. Is there a way to pass this parameter, x-delay, with seneca-amqp-transport?
Here is my current code for adding a message to the queue:
return new Promise((resolve, reject) => {
  const client = require('seneca')()
    .use('seneca-amqp-transport')
    .client({
      type: 'amqp',
      pin: 'action:perform_time_consuming_act',
      url: process.env.AMQP_SEND_URL
    }).ready(() => {
      client.act('action:perform_time_consuming_act', {
        message: { data: 'this is a test' }
      }, (err, res) => {
        if (err) {
          reject(err);
        }
        resolve(true);
      });
    });
});
In the code above, where would header-related data go?
I just looked up the code of the library; under lib/client/publisher.js, this should do the trick:
function publish(message, exchange, rk, options) {
  const opts = Object.assign({}, options, {
    replyTo: replyQueue,
    contentType: JSON_CONTENT_TYPE,
    headers: { 'x-delay': 5000 },
    correlationId: correlationId
  });
  return ch.publish(exchange, rk, Buffer.from(message), opts);
}
Give it a try; it should work. Here the delay is set to 5000 milliseconds. Note that a bare x-delay key is a syntax error in JavaScript, and custom headers like x-delay belong under the headers option. You can also overload the publish method to take the delay value as a parameter.
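For reference, outside of seneca the same thing is done with plain amqplib by declaring an 'x-delayed-message' exchange and putting 'x-delay' in the message headers. These helpers only build the objects you would pass to amqplib's assertExchange/publish; the exchange and routing-key names below are made up:

```javascript
// Sketch only: option builders for the RabbitMQ delayed-message plugin.
function delayedExchangeArgs(underlyingType) {
  // The plugin routes like the underlying type ('direct', 'topic', ...).
  return { arguments: { 'x-delayed-type': underlyingType } };
}

function delayedPublishOptions(delayMs, extra) {
  // 'x-delay' is a per-message header, in milliseconds.
  return Object.assign({}, extra, { headers: { 'x-delay': delayMs } });
}

// Usage with amqplib (assumes the plugin is enabled on the broker):
//   await ch.assertExchange('work.delayed', 'x-delayed-message', delayedExchangeArgs('direct'));
//   ch.publish('work.delayed', 'task', Buffer.from(msg), delayedPublishOptions(5000));
```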

How to manage parallel HTTP requests that are based on Message Queuing Listeners (NodeJS)

I'm sure this kind of problem has been solved here many times, but I can't find how those questions were formulated.
I have a microservice that handles the communication between my infrastructure and an MQTT broker. Every time an HTTP request is received, I send a "Who is alive in room XXX?" message on the MQTT broker; every client registered on the "XXX/alive" topic has to answer, and I wait Y milliseconds before closing the request by sending the responses received back to the client.
It works well when I'm handling one request, but it breaks when more than one request comes in at a time.
Here is the Express route handling the HTTP requests :
app.get('/espPassports', (req, res) => {
  mqttHelper.getESPPassports(req.query.model_Name).then((passports) => {
    res.send(passports).end();
  }).catch(err => {
    res.send(err).end();
  });
});
Here is how the getESPPassports works :
getESPPassports: async (model_Name) => {
  return new Promise((resolve, reject) => {
    // Say there is a request being performed
    ongoing_request.isOpen = true;
    ongoing_request.model_Name = model_Name;
    // Ask who is alive
    con.publish(topic, "ASK");
    setTimeout(() => {
      // If no answer after the given timeout
      if (ongoing_request.passports.length == 0) {
        reject({ error: "No MQTT passports found" });
      // Else send a deep clone of the answers (else it's empty)
      } else {
        resolve(JSON.parse(JSON.stringify(ongoing_request.passports)));
      }
      // Delete the current request object and 'close' it
      ongoing_request.passports.length = 0;
      ongoing_request.isOpen = false;
      ongoing_request.model_Name = "";
    }, process.env.mqtt_timeout || 2000);
  });
}
};
And here is the MQTT listener:
con.on("message", (topic, message) => {
  // If a passport is received, check the topic and whether a request is open
  if (_checkTopic(topic) && ongoing_request.isOpen) {
    try {
      ongoing_request.passports.push(JSON.parse(message));
    } catch (error) {
      // do stuff if error
    }
  }
});
I know the problem comes from the boolean I'm using to mark whether a request is ongoing. I was thinking of creating an object for each new request and identifying them by a unique id (like a timestamp), but I have no way to make the MQTT listeners aware of this unique id.
I have some other solutions in mind, but I'm not sure they'll work, and I feel like there is a clean way to handle this that I don't know about.
Have a good day.
You need to generate a unique id for each request and include it in the MQTT message; you can then cache the Express response object keyed by that unique id.
The devices need to include the unique id in their responses so the replies can be paired up with the right request.
The other approach is to cache the responses from the devices and give the cache a time-to-live, so you don't need to ask the devices every time.

Using publisher confirms with RabbitMQ, in which cases will the publisher be notified about success/failure?

Quoting the book, RabbitMQ in Depth:
A Basic.Ack request is sent to a publisher when a message that it has
published has been directly consumed by consumer applications on all
queues it was routed to or that the message was enqueued and persisted
if requested.
I'm confused by "has been directly consumed". Does it mean the publisher will be informed once the consumer acks the message (i.e. processed it successfully), or as soon as the consumer receives the message from the queue?
And is "or that the message was enqueued and persisted if requested" a conjunction, or will the publisher be informed when either of those happens? (In the latter case the publisher would be notified twice.)
Using node.js and amqplib, I wanted to check what actually happens:
// consumer.js
amqp.connect(...)
  .then(connection => connection.createChannel())
  .then(() => { /* assert exchange here */ })
  .then(() => { /* assert queue here */ })
  .then(() => { /* bind queue and exchange here */ })
  .then(() => {
    channel.consume(QUEUE, (message) => {
      console.log('Raw RabbitMQ message received', message)
      // Simulate some job to do
      setTimeout(() => {
        channel.ack(message, false)
      }, 5000)
    }, { noAck: false })
  })
// publisher.js
amqp.connect(...)
  .then(connection => connection.createConfirmChannel())
  .then(() => { /* assert exchange here */ })
  .then(() => {
    channel.publish(exchange, routingKey, new Buffer(...), {}, (err, ok) => {
      if (err) {
        console.log('Error from handling confirmation on publisher side', err)
      } else {
        console.log('From handling confirmation on publisher side', ok)
      }
    })
  })
Running the example, I can see the following logs:
From handling confirmation on publisher side undefined
Raw RabbitMQ message received
Time to ack the message
As far as I can see, at least from this log, the publisher is notified only when the message is enqueued? (So the consumer acking the message does not influence the publisher in any way.)
Quoting further:
If a message cannot be routed, the broker will send a Basic.Nack RPC
request indicating the failure. It is then up to the publisher to
decide what to do with the message.
Changing the above example so that only the routing key of the message is different, to something that should not be routed anywhere (there are no bindings matching the routing key), the logs show only the following.
From handling confirmation on publisher side undefined
Now I'm more confused: what exactly is the publisher notified about here? I would understand if it received an error like "Can't route anywhere"; that would be aligned with the quote above. But as you can see, err is undefined, and as a side question: even though amqplib's official docs use (err, ok), in no case do I see either of them defined. The output here is the same as in the example above, so how can one tell a routable message apart from an un-routable one?
So what I'm after here is: when exactly will the publisher be notified about what is happening with the message? Any concrete example where one would use publisher confirms? From the logging above, I would conclude it's nice to have in cases where you want to be 100% sure the message was enqueued.
After searching again and again, I found this:
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
The basic rules are as follows:
An un-routable mandatory or immediate message is confirmed right after the basic.return.
A transient message is confirmed the moment it is enqueued.
A persistent message is confirmed when it is persisted to disk or when it is consumed on every queue.
If more than one of these conditions is met, only the first causes a confirm to be sent. Every published message will be confirmed sooner or later, and no message will be confirmed more than once.
By default, publishers don't know anything about consumers.
Publisher confirms are used to check whether the message reached the broker, but not whether the message has been enqueued.
You can use the mandatory flag to be sure the message has been routed.
See this: https://www.rabbitmq.com/reliability.html
To ensure messages are routed to a single known queue, the producer
can just declare a destination queue and publish directly to it. If
messages may be routed in more complex ways but the producer still
needs to know if they reached at least one queue, it can set the
mandatory flag on a basic.publish, ensuring that a basic.return
(containing a reply code and some textual explanation) will be sent
back to the client if no queues were appropriately bound.
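With amqplib, the mandatory flag plus the channel's 'return' event looks roughly like this. The channel is injected here so the wiring can be shown without a live broker; with a real broker you would pass a ConfirmChannel obtained from connection.createConfirmChannel():

```javascript
// Sketch only: publish with mandatory: true and listen for returned messages.
function publishMandatory(ch, exchange, routingKey, payload, onReturned) {
  // 'return' fires when the broker cannot route the message to any queue.
  ch.on('return', (msg) => onReturned(msg));
  ch.publish(exchange, routingKey, Buffer.from(payload), { mandatory: true }, (err) => {
    // The confirm callback only says the broker took responsibility for the
    // message; it does not mean a consumer processed it.
    if (err) console.log('Broker nacked the message');
  });
}
```

This separates the two signals the question conflates: the confirm callback answers "did the broker accept it?", while the 'return' event answers "was it routable?".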
I'm not entirely sure about the ack/nack notification question, but check out the BunnyBus Node library for a simpler API and RabbitMQ management :)
https://github.com/xogroup/bunnybus
const BunnyBus = require('bunnybus');
const bunnyBus = new BunnyBus({
  user: 'your-user',
  vhost: 'your-vhost', // cloudamqp defaults vhost to the username
  password: 'your-password',
  server: 'your.server.com'
});

const handler = {
  'test.event': (message, ack) => {
    // Do your work here.
    // Acknowledge the message off of the bus.
    return ack();
  }
};

// Create exchange and queue if they do not already exist and then auto connect.
return bunnyBus.subscribe('test', handler)
  .then(() => {
    return bunnyBus.publish({ event: 'test.event', body: 'here\'s the thing.' });
  })
  .catch(console.log);

How to publish sns to a specific endpoint?

I have an issue with publishing SNS to a specific endpoint.
My code:
var AWS = require('aws-sdk');
AWS.config.loadFromPath('/web/config.json');
var sns = new AWS.SNS();

sns.publish({
  // TopicArn: 'arn:aws:sns:us-west-2:302467918846:MyTestTopik',
  TargetArn: 'arn:aws:sns:us-west-2:302467918846:MyTestTopik:613ee49c-d4dc-4354-a7e6-c1d9d8277c56',
  Message: "Success!!! ",
  Subject: "TestSNS"
}, function(err, data) {
  if (err) {
    console.log("Error sending a message " + err);
  } else {
    console.log("Sent message: " + data.MessageId);
  }
});
When I use TopicArn, everything is fine. But when I try to send a notification to a specific endpoint, I get an error:
Error sending a message InvalidParameter: Invalid parameter: Topic Name
And I have no idea what kind of parameter this is or where it comes from.
Something similar is working fine for me.
I'm able to publish to a specific endpoint using the Apple Push Notification Service Sandbox (APNS_SANDBOX).
You might also want to try updating the aws-sdk; the current version is 1.9.0.
Here's my code; the TargetArn was copied directly from the SNS console. I omitted some of the data.
var sns = new AWS.SNS();
var params = {
  TargetArn: 'arn:aws:sns:us-west-2:302467918846:endpoint/APNS_SANDBOX/<APP_NAME>/<USER_TOKEN>',
  Message: 'Success!!! ',
  Subject: 'TestSNS'
};

sns.publish(params, function(err, data) {
  if (err) {
    console.log('Error sending a message', err);
  } else {
    console.log('Sent message:', data.MessageId);
  }
});
You might have an invalid region. Check the region of your topic and set it accordingly. For example, if you are in us-west-2, you could do something like:
var sns = new aws.SNS({region: 'us-west-2'});
None of this will work if you don't massage the payload a bit.
var arn = 'ENDPOINT_ARN';
console.log("endpoint arn: " + arn);

var payload = {
  default: message_object.message,
  GCM: {
    data: {
      message: message_object.message
    }
  }
};

// The key to the whole thing is this
payload.GCM = JSON.stringify(payload.GCM);
payload = JSON.stringify(payload);

// Create the params structure
var params = {
  TargetArn: arn,
  Message: payload,
  MessageStructure: 'json' // Super important too
};

sns.publish(params, function(error, data) {
  if (error) {
    console.log("ERROR: " + error.stack);
  } else {
    console.log("data: " + JSON.stringify(data));
  }
  context.done(null, data);
});
So, it turns out that you have to specify the message structure (being JSON). I tried to publish to the endpoint from the AWS console, and it worked fine as long as I selected JSON; using RAW would do nothing.
My script was doing what the previous posts were doing:
var params = {
  TargetArn: arn,
  Message: 'Success!!! ',
  Subject: 'TestSNS'
};
And even though CloudWatch was logging success, I never once got the message.
As soon as I added the MessageStructure field and properly formatted the payload, it worked like a charm.
The default parameter is not useful, but I left it in to show what the structure could look like.
If you don't stringify the payload.GCM part, SNS will barf and say that your message should include a "GCM" element.
The only annoying thing is that you are required to know what the endpoint type is. I was hoping you wouldn't have to format the message based on the endpoint, which in some ways defeats the purpose of SNS.
Are you trying endpoints other than push notifications, such as SMS? Direct addressing is currently only available for push notification endpoints. That is the error you will get when you try to publish to a specific endpoint type that does not allow direct addressing!
http://aws.amazon.com/sns/faqs/#Does_SNS_support_direct_addressing_for_SMS_or_Email
I was having the exact same issue as you. The problem is the TargetArn you're using; there's no clear documentation about it. The error happens if you put the application ARN in the TargetArn.
That will produce the error: Invalid parameter: TargetArn Reason: arn:aws:sns:us-west-2:561315416312351:app/APNS_SANDBOX/com.APP_NAME_HERE.app is not a valid ARN to publish to.
All you need to do is put the EndpointArn in the TargetArn.
If you need to see the EndpointArn, you can:
Call listPlatformApplications() to get all your application ARNs.
Call listEndpointsByPlatformApplication() with the app ARN to get the list of EndpointArns.
Enjoy!
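The application-ARN vs endpoint-ARN mixup above is easy to guard against, since the two have different shapes (app/… vs endpoint/…). A small hypothetical check before calling sns.publish:

```javascript
// Sketch only: TargetArn must be an endpoint ARN
// (arn:aws:sns:REGION:ACCOUNT:endpoint/PLATFORM/APP/UUID),
// not an application ARN (arn:aws:sns:REGION:ACCOUNT:app/PLATFORM/APP).
function isEndpointArn(arn) {
  return /^arn:aws:sns:[^:]+:\d+:endpoint\//.test(arn);
}

// Usage with the aws-sdk (assumes credentials/region are configured):
//   sns.listPlatformApplications({}, ...)   // find your application ARNs
//   sns.listEndpointsByPlatformApplication({ PlatformApplicationArn: appArn }, ...)
//   if (isEndpointArn(arn)) sns.publish({ TargetArn: arn, ... }, callback);
```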
