I have an application which sends messages to a queue, and another application which subscribes to the queue and processes them. I want OTP messages to be given higher priority than other messages, so I am trying to use ActiveMQ message priority to achieve this.
This is the code for the ActiveMQ connection using the STOMP protocol in Node.js with the stompit library:
const serverPrimary = {
  host: keys.activeMQ.host,
  port: keys.activeMQ.port,
  ssl: ssl,
  connectHeaders: {
    host: '/',
    login: keys.activeMQ.username,
    passcode: keys.activeMQ.password,
    'heart-beat': '5000,5000',
  },
}

connManager = new stompit.ConnectFailover(
  [serverPrimary, serverFailover],
  reconnectOptions,
)

connManager.on('error', function (e) {
  const connectArgs = e.connectArgs
  const address = connectArgs.host + ':' + connectArgs.port
  logger.error({ error: e, customMessage: address })
})

channelPool = new stompit.ChannelPool(connManager)
Code for sending a message:
const pushMessageToAMQ = (queue, message) => {
  const queues = Object.values(activeMQ.queues)
  if (!queues.includes(queue)) {
    _mqLog(mqLogMessages.unknownQueue + queue)
    return
  }
  //Priority header is set
  const header = {
    destination: queue,
    priority: 7
  }
  //If message is not a string
  if (typeof message !== 'string') message = JSON.stringify(message)
  //Logging message before sending
  _mqLog(
    mqLogMessages.sending,
    { service: services.amq },
    { header: header, message: message },
  )
  //Sending message to amq
  _sendMessageToAMQ(header, message, error => {
    if (error) {
      _mqError(error, mqLogMessages.sendingError, { service: services.amq })
    }
  })
}

const _sendMessageToAMQ = (headers, body, callback) => {
  channelPool.channel((error, channel) => {
    if (error) {
      callback(error)
      return
    }
    channel.send(headers, body, callback)
  })
}
Here's the code for subscribing to the queue in the second application:
const amqSubscribe = (queue, callback, ack = 'client-individual') => {
  log({ customMessage: 'Subscribing to ' + queue })
  const queues = Object.values(activeMQ.queues)
  if (!queues.includes(queue)) {
    return
  }
  channelPool.channel((error, channel) => {
    let header = {
      destination: queue,
      ack: ack,
      'activemq.prefetchSize': 1,
    }
    //Check for error
    if (error) {
      _mqError(error, mqLogMessages.baseError, header)
    } else {
      channel.subscribe(
        header,
        _synchronisedHandler((error, message, next) => {
          //Check for error
          if (error) {
            _mqError(error, mqLogMessages.subscriptionError, header)
            next()
          } else {
            //Read message
            message.readString('utf-8', function (error, body) {
              if (error) {
                _mqError(error, mqLogMessages.readError, header)
                next()
              } else {
                //Message read successfully, call callback
                callback(body, () => {
                  //Acknowledgment callback
                  channel.ack(message)
                  next()
                })
              }
            })
          }
        }),
      )
    }
  })
}
activemq.xml
<policyEntries>
  <policyEntry queue=">" prioritizedMessages="true" useCache="false" expireMessagesPeriod="0" queuePrefetch="1" />
  .......
I tried pushing messages with different priorities and started the second application (i.e. the one which subscribes to the messages) only after all the messages had been pushed to the queue. However, the messages were consumed in the same order in which they were sent; the priority didn't change anything. Is there something that I am missing?
Do I have to add something on the consumer end for it to work?
Support for message priority is disabled by default in ActiveMQ "Classic" (the broker used by Amazon MQ). As the documentation states:
...support [for message priority] is disabled by default so it needs to be enabled using per destination policies through xml configuration...
You need to set prioritizedMessages="true" in the policyEntry for your queue, e.g.:
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" prioritizedMessages="true"/>
      ...
To be clear, this is configured on the broker (i.e. not the client) in activemq.xml, and it applies to every kind of client.
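If you want to sanity-check the broker policy, a minimal stompit sketch along these lines should do it (the port 61613 and the /queue/priority-test destination are assumptions for a local broker): run the producer part first, then the consumer part, and 'high' should be printed before 'low'.
const stompit = require('stompit')

const connectOptions = { host: 'localhost', port: 61613 } // assumed local STOMP listener

// producer.js - send a low-priority message first, then a high-priority one
stompit.connect(connectOptions, (error, client) => {
  if (error) return console.error(error)
  for (const [body, priority] of [['low', 1], ['high', 9]]) {
    const frame = client.send({ destination: '/queue/priority-test', priority })
    frame.write(body)
    frame.end()
  }
  client.disconnect()
})

// consumer.js - started after the producer has finished; with
// prioritizedMessages="true" on the broker, 'high' should arrive before 'low'
stompit.connect(connectOptions, (error, client) => {
  if (error) return console.error(error)
  client.subscribe({ destination: '/queue/priority-test', ack: 'client-individual' }, (error, message) => {
    if (error) return console.error(error)
    message.readString('utf-8', (error, body) => {
      if (error) return console.error(error)
      console.log(body)
      client.ack(message)
    })
  })
})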
Related
I am trying to implement push notifications with React and Node.js using service workers.
I am having a problem showing the notification to the user.
Here is my service worker code:
self.addEventListener('push', async (event) => {
  const {
    type,
    title,
    body,
    data: { redirectUrl },
  } = event.data.json()
  if (type === 'NEW_MESSAGE') {
    try {
      // Get all opened windows that the service worker controls.
      event.waitUntil(
        self.clients.matchAll().then((clients) => {
          // Get windows matching the URL the message came from.
          const filteredClients = clients.filter((client) => client.url.includes(redirectUrl))
          // If the user is not on the window the message came from, or that
          // window exists but is hidden, show a notification.
          if (
            filteredClients.length === 0 ||
            (filteredClients.length > 0 &&
              filteredClients.every((client) => client.visibilityState === 'hidden'))
          ) {
            self.registration.showNotification({
              title,
              options: { body },
            })
          }
        }),
      )
    } catch (error) {
      console.error('Error while fetching clients:', error.message)
    }
  }
})
self.addEventListener('notificationclick', (event) => {
  event.notification.close()
  console.log(event)
  if (event.action === 'NEW_MESSAGE') {
    event.waitUntil(
      self.clients.matchAll().then((clients) => {
        if (clients.openWindow) {
          clients
            .openWindow(event.notification.data.redirectUrl)
            .then((client) => (client ? client.focus() : null))
        }
      }),
    )
  }
})
When a new notification with a type of 'NEW_MESSAGE' comes from the backend, I get the right values out of event.data and pass them to the showNotification function, but it seems like something is not working properly, because the notification looks wrong even though event.data equals this => type: 'NEW_MESSAGE', title: 'New Message', body: , data: { redirectUrl: }
Here is how the notification looks:
Thanks for your help in advance.
The problem was that I assigned the parameters in the wrong way.
It should have been like this:
self.registration.showNotification(title, { body })
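For context, here is roughly how the corrected call slots into the push handler above; passing data through as well lets the notificationclick handler read event.notification.data.redirectUrl (a minimal sketch, not the full handler):
self.addEventListener('push', (event) => {
  const { type, title, body, data } = event.data.json()
  if (type === 'NEW_MESSAGE') {
    event.waitUntil(
      // First argument is the title string, second is the options object
      self.registration.showNotification(title, { body, data })
    )
  }
})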
Environment Information
Docker image based on node:12.13.1-alpine
Node Version : 12.13.1
node-rdkafka version : latest
The code snippet below works fine, but sometimes the consumer stops reading messages from a specific Kafka partition (we have about 20 topics, with 5 partitions each, all following the same pattern). We don't get any errors. After a service restart and rebalance, consuming continues as usual. What tuning should be done to handle these stuck partitions?
Throughput is low, about 150 messages per minute across all topics; each message is a small JSON document with some details (~500 KB). We are running 10 pods for this specific service.
import { ConsumerStream, createReadStream } from 'node-rdkafka';

const kafkaConsumer = createConsumerStream(shutdown, config.kafka.topics);

kafkaConsumer.on('data', async (rawMessage) => {
  const {
    topic, partition, offset, value
  } = rawMessage;
  try {
    await processKafkaMessage(rawMessage);
    kafkaConsumer.consumer.commit({
      topic: topic,
      partition: partition,
      offset: offset + 1
    });
  } catch (err) {
    logger.error('Failed to process inbound kafka message');
  }
});
export const createConsumerStream = (shutdown, topics: Array<string>): ConsumerStream => {
  const globalConfig = {
    'metadata.broker.list': ['kafka:9092'],
    'group.id': 'my_group_1',
    'enable.auto.commit': false,
    'partition.assignment.strategy': 'roundrobin',
    'topic.metadata.refresh.interval.ms': 30 * 100,
    'batch.num.messages': 100000,
    'queued.max.messages.kbytes': 10000,
    'fetch.message.max.bytes': 10000,
    'fetch.max.bytes': 524288000,
    'retry.backoff.ms': 200,
    retries: 5
  };
  const topicConfig = { 'auto.offset.reset': 'earliest' };
  const streamOptions = {
    topics: topics,
    waitInterval: batchMaxTime,
    fetchSize: batchMaxSize
  };
  const stream: ConsumerStream = createReadStream(globalConfig, topicConfig, streamOptions);
  stream.on('error', (err) => {
    logger.error('Error in kafka consumer stream', {
      error_msg: err.message,
      error_name: err.name
    });
  });
  stream.consumer.on('event.error', (err) => {
    if (err.stack === 'Error: Local: Broker transport failure') return;
    logger.error('Error in kafka consumer');
    stream.emit('rd-kafka-error', err);
  });
  stream.consumer.on('rebalance', ({ message }, assignment) => {
    logger.info('Rebalance event', { assigned_topics: assignment });
  });
  return stream;
};
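Not a fix for the root cause, but since the stream emits no error when a partition stalls, a small watchdog can at least surface which topic-partitions have gone quiet. A sketch (the 10-minute threshold is an assumption; logger is the one from the question):
// Track when each topic-partition last delivered a message
const lastSeen = new Map();

kafkaConsumer.on('data', (rawMessage) => {
  lastSeen.set(`${rawMessage.topic}:${rawMessage.partition}`, Date.now());
});

// Periodically log partitions that have gone quiet for too long
const STALL_THRESHOLD_MS = 10 * 60 * 1000; // assumed threshold: 10 minutes
setInterval(() => {
  const now = Date.now();
  for (const [topicPartition, ts] of lastSeen.entries()) {
    if (now - ts > STALL_THRESHOLD_MS) {
      logger.warn('Partition silent for too long', { topicPartition, silentMs: now - ts });
    }
  }
}, 60 * 1000);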
So I'm using NestJS (v8) with the RabbitMQ transport (Transport.RMQ) to listen for messages.
My NestJS code looks something like this:
// main.ts
const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
  transport: Transport.RMQ,
  options: {
    urls: ['amqp://localhost:5672'],
    queue: 'my-queue',
    replyQueue: 'my-reply-queue'
  },
});

// my.controller.ts
import { Controller } from '@nestjs/common';
import { MessagePattern } from '@nestjs/microservices';

@Controller()
export class MyController {
  @MessagePattern('something')
  do(data: { source: string }): { source: string } {
    console.log(data);
    data.source += ' | MyController';
    return data;
  }
}
And in the Node.js application, I use amqplib to send messages to the NestJS application and receive the response.
This is the code of the Node.js application:
const queueName = 'my-queue';
const replyQueueName = 'my-reply-queue';

const amqplib = require('amqplib');

async function run() {
  const conn = await amqplib.connect('amqp://localhost:5672');
  const channel = await conn.createChannel();
  await channel.assertQueue(queueName);
  await channel.assertQueue(replyQueueName);

  // Consumer: Listen to messages from the reply queue
  await channel.consume(replyQueueName, (msg) => console.log(msg.content.toString()));

  // Publisher: Send message to the queue
  channel.sendToQueue(
    queueName,
    Buffer.from(
      JSON.stringify({
        pattern: 'something',
        data: { source: 'node-application' },
      })
    ),
    { replyTo: replyQueueName }
  );
}

run()
When I run the Node and NestJS applications, NestJS gets the message from the Node.js publisher, but the Node.js consumer is never called with the reply.
The fix was to add an id key to the data that the Node.js application sends:
// ...
// Publisher: Send message to the queue
channel.sendToQueue(
  queueName,
  Buffer.from(
    JSON.stringify({
      // Add the `id` key here so the Node.js consumer will get the message in the reply queue
      id: '',
      pattern: 'something',
      data: { source: 'node-application' },
    })
  ),
  { replyTo: replyQueueName }
);
// ...
Detailed explanation (in the NestJS source code)
This happens because the handleMessage function in the server-rmq.ts file checks whether the id property of the message is undefined:
// https://github.com/nestjs/nest/blob/026c1bd61c561a3ad24da425d6bca27d47567bfd/packages/microservices/server/server-rmq.ts#L139-L141
public async handleMessage(
  message: Record<string, any>,
  channel: any,
): Promise<void> {
  // ...
  if (isUndefined((packet as IncomingRequest).id)) {
    return this.handleEvent(pattern, packet, rmqContext);
  }
  // ...
}
And there is no logic for sending messages to the reply queue in the handleEvent function; it just handles the event.
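Building on that: the reply packet that NestJS publishes to the reply queue echoes the request's id and carries the handler's return value under response, so the consumer can correlate replies to requests by matching on that id. A minimal sketch, placed inside run() from the question and reusing channel, queueName and replyQueueName; the randomUUID helper and the exact packet shape are assumptions worth verifying against your NestJS version:
const { randomUUID } = require('crypto');

const pending = new Map(); // request id -> resolve callback

// Consumer: match each reply to its request by the echoed id
await channel.consume(replyQueueName, (msg) => {
  const packet = JSON.parse(msg.content.toString());
  const resolve = pending.get(packet.id);
  if (resolve) {
    pending.delete(packet.id);
    resolve(packet.response);
  }
});

// Publisher: send a request and get a promise for its reply
function request(pattern, data) {
  return new Promise((resolve) => {
    const id = randomUUID();
    pending.set(id, resolve);
    channel.sendToQueue(
      queueName,
      Buffer.from(JSON.stringify({ id, pattern, data })),
      { replyTo: replyQueueName }
    );
  });
}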
My backend pushes messages to a RabbitMQ queue and I need to fetch those messages to display them in the frontend. Since the messages have to be in a specific order, I cannot use an asynchronous approach.
I have written this code:
var open = require("amqplib").connect("amqp://guest:guest@localhost:5682");
var queue = "developer";
export default {
  name: 'Subsriber',
  data: function() {
    return {
      selected: "",
      services: [],
    }
  },
  mounted() {
    var url = "http://10.0.9.134:5060/services/scripts";
    this.services = []; // empty the existing list first.
    setTimeout(() => {
      axios.get(url)
        .then(response => {
          this.services = response.data;
        })
    }, 2000)
    open.then(function(conn) {
      return conn.createChannel();
    }).then(function(ch) {
      return ch.assertQueue(queue).then(function(ok) {
        return ch.consume(queue, function(msg) {
          if (msg != null) {
            console.log(msg.content.toString());
          }
        });
      });
    });
  }
}
but I get this error:
"Unhandled rejection TypeError: QS.unescape is not a function
openFrames#webpack-internal:///./node_modules/_amqplib#0.5.2#amqplib/lib/connect.js:50:1
connect#webpack-internal:///./node_modules/_amqplib#0.5.2#amqplib/lib/connect.js:145:14
connect/<#webpack-internal:///./node_modules/_amqplib#0.5.2#amqplib/channel_api.js:7:12
connect#webpack-internal:///./node_modules/_amqplib#0.5.2#amqplib/channel_api.js:6:10
#webpack-internal:///./node_modules/babel-loader/lib/index.js!./node_modules/vue-loader/lib/selector.js?type=script&index=0!./src/components/SelectServices.vue:56:12
["./node_modules/babel-loader/lib/index.js!./node_modules/vue-loader/lib/selector.js?type=script&index=0!./src/components/SelectServices.vue"]#http://10.0.9.134/app.js:1251:1
__webpack_require__#http://10.0.9.134/app.js:679:1
hotCreateRequire/fn#http://10.0.9.134/app.js:89:20
#webpack-internal:///./src/components/SelectServices.vue:1:148
["./src/components/SelectServices.vue"]#http://10.0.9.134/app.js:1804:1
__webpack_require__#http://10.0.9.134/app.js:679:1
hotCreateRequire/fn#http://10.0.9.134/app.js:89:20
#webpack-internal:///./src/router/index.js:4:85
["./src/router/index.js"]#http://10.0.9.134/app.js:1820:1
__webpack_require__#http://10.0.9.134/app.js:679:1
hotCreateRequire/fn#http://10.0.9.134/app.js:89:20
#webpack-internal:///./src/main.js:4:66
["./src/main.js"]#http://10.0.9.134/app.js:1812:1
__webpack_require__#http://10.0.9.134/app.js:679:1
hotCreateRequire/fn#http://10.9.0.134/app.js:89:20
[0]#http://10.9.1.147/app.js:1829:18
__webpack_require__#http://10.0.9.134/app.js:679:1
#http://10.9.0.134/app.js:725:18
#http://10.9.0.134/app.js:1:1"
I'm replicating EasyNetQ functionality in Node.js (so that a Node app can communicate over RabbitMQ with an EasyNetQ-enabled .NET app). I've replicated EasyNetQ's Publish/Subscribe and EasyNetQ's Send/Receive, but I'm having some difficulty with EasyNetQ's Request/Response.
Here is my current Node code:
var rqrxID = uuid.v4(); // a GUID
var responseQueue = 'easynetq.response.' + rqrxID;

Q(Play.AMQ.ConfirmChannel.assertQueue(responseQueue, { durable: false, exclusive: true, autoDelete: true }))
  .then((okQueueReply) =>
    Play.AMQ.ConfirmChannel.consume(responseQueue, (msg) => {
      //do something here...
      Play.AMQ.ConfirmChannel.ack(msg);
    })
  )
  .then((okSubscribeReply) => {
    Q(Play.AMQ.ConfirmChannel.assertExchange('easy_net_q_rpc', 'direct', { durable: true, autoDelete: false }))
      .then((okExchangeReply) =>
        Play.AMQ.ConfirmChannel.publish(
          global.AppConfig.amq.rpc.exchange,
          dto.AsyncProcessorCommand.Type,
          Play.ToBuffer(command),
          { type: command.GetType() },
          (err, ok): void => {
            if (err !== null) {
              console.warn('Message nacked!');
              responseDeferred.reject(err);
            }
          }
        )
      )
  })
  .catch((failReason) => {
    console.error(util.format('Error creating response queue: %s', failReason));
    return null;
  });
Note that the publish works and the message is received by the .NET code. That code then sends a response, and the issue is that the response isn't received. Here's the .NET code:
Bus.Respond<AsyncProcessorCommand, AsyncProcessorCommandResponse>(
    request =>
    {
        Console.WriteLine("Got request: '{0}'", request);
        return new AsyncProcessorCommandResponse()
        {
            ID = Guid.NewGuid(),
            ResponseType = "ENQResp"
        };
    });
I'm sure I'm missing something, but I'm not sure what. Who can help?
UPDATE
I have solved at least part of this. Taking the value of responseQueue and setting it in the publish options as "replyTo" hooks the response up - nice. Now I just have to figure out how to either not create a new queue each time OR make the response queue go away...
UPDATE FINAL
So, using the channel setup I had and saving (actually, specifying) the consumerTag allowed me to cancel the consumer, and the queue auto-deleted.
Taking my comments from above to answer this.
There are two pieces to this. First, from the code above, create your response queue so that it auto-deletes (when the consumer count drops to 0):
channel.assertQueue(responseQueue, { durable: false, exclusive: true, autoDelete: true }))
Then create/publish to the queue the "server" is listening on, making sure to set "replyTo" to the response queue you just created (the type piece is another bit of ENQ-needed code):
{ type: command.GetType(), replyTo: responseQueue }
So an entire method for executing this pattern (currently messy, as it's "play" code) looks like:
private static Request(command: dto.AsyncProcessorCommand): Q.Promise<dto.interfaces.IAsyncProcessorCommandResponse> {
  var responseDeferred = Q.defer<dto.interfaces.IAsyncProcessorCommandResponse>();

  var consumerTag = uuid.v4();
  var rqrxID = uuid.v4();
  var responseQueue = 'easynetq.response.' + rqrxID;

  var handleResponse = (msg: any): void => {
    var respType = null;
    switch (command.Action) {
      default:
        respType = 'testResp';
    }
    //just sending *something* back, should come from 'msg'
    responseDeferred.resolve(new dto.AsyncProcessorCommandResponse(respType, { xxx: 'yyy', abc: '123' }));
  }

  Q(Play.AMQ.ConfirmChannel.assertQueue(responseQueue, { durable: false, exclusive: true, autoDelete: true }))
    .then((okQueueReply) =>
      Play.AMQ.ConfirmChannel.consume(
        responseQueue,
        (msg) => {
          handleResponse(msg);
          Play.AMQ.ConfirmChannel.ack(msg);
          Play.AMQ.ConfirmChannel.cancel(consumerTag);
        },
        { consumerTag: consumerTag })
    )
    .then((okSubscribeReply) => {
      Q(Play.AMQ.ConfirmChannel.assertExchange('easy_net_q_rpc', 'direct', { durable: true, autoDelete: false }))
        .then((okExchangeReply) =>
          Play.AMQ.ConfirmChannel.publish(
            'easy_net_q_rpc',
            dto.AsyncProcessorCommand.Type,
            Play.ToBuffer(command),
            { type: command.GetType(), replyTo: responseQueue },
            (err, ok): void => {
              if (err !== null) {
                console.warn('Message nacked!');
                responseDeferred.reject(err);
              }
            }
          )
        )
    })
    .catch((failReason) => {
      console.error(util.format('Error creating response queue: %s', failReason));
      return null;
    });

  return responseDeferred.promise
}
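As a side note on the "not create a new queue each time" concern from the update: the standard RabbitMQ alternative is one long-lived exclusive reply queue per process plus a correlationId per request, along these lines (a plain-amqplib sketch, not EasyNetQ-specific; whether the .NET responder copies correlationId onto its reply is worth verifying):
// One shared reply queue for all requests from this process
const { queue: replyQueue } = await channel.assertQueue('', { exclusive: true });

const pending = new Map(); // correlationId -> resolve callback

// Match each reply to its request by correlationId
await channel.consume(replyQueue, (msg) => {
  const resolve = pending.get(msg.properties.correlationId);
  if (resolve) {
    pending.delete(msg.properties.correlationId);
    resolve(msg.content);
  }
}, { noAck: true });

function rpc(exchange, routingKey, body) {
  return new Promise((resolve) => {
    const correlationId = uuid.v4();
    pending.set(correlationId, resolve);
    channel.publish(exchange, routingKey, body, { replyTo: replyQueue, correlationId });
  });
}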