Consumer stops consuming messages from a specific topic after some time running - node.js

Environment Information
docker image based on node:12.13.1-alpine
Node Version : 12.13.1
node-rdkafka version : latest
The code snippet below generally works fine, but sometimes it stops reading messages from a specific Kafka partition (we have about 20 topics, 5 partitions each, all following the same pattern). We don't get any errors. After a service restart and rebalance, consumption continues as usual. What tuning should be done to handle these stuck partitions?
Throughput is low, about 150 messages per minute across all topics; each message is a small JSON payload with some details (~500 KB). We are running 10 pods for this specific service.
import { ConsumerStream, createReadStream } from 'node-rdkafka';

const kafkaConsumer = createConsumerStream(shutdown, config.kafka.topics);

kafkaConsumer.on('data', async (rawMessage) => {
  const {
    topic, partition, offset, value
  } = rawMessage;
  try {
    await processKafkaMessage(rawMessage);
    kafkaConsumer.consumer.commit({
      topic: topic,
      partition: partition,
      offset: offset + 1
    });
  } catch (err) {
    logger.error('Failed to process inbound kafka message');
  }
});
export const createConsumerStream = (shutdown, topics: Array<string>): ConsumerStream => {
  const globalConfig = {
    'metadata.broker.list': ['kafka:9092'],
    'group.id': 'my_group_1',
    'enable.auto.commit': false,
    'partition.assignment.strategy': 'roundrobin',
    'topic.metadata.refresh.interval.ms': 30 * 100, // 3000 ms
    'batch.num.messages': 100000,
    'queued.max.messages.kbytes': 10000, // ~10 MB of locally queued, pre-fetched messages
    'fetch.message.max.bytes': 10000, // ~10 KB per topic+partition per fetch request
    'fetch.max.bytes': 524288000, // ~500 MB per fetch request overall
    'retry.backoff.ms': 200,
    retries: 5
  };
  const topicConfig = { 'auto.offset.reset': 'earliest' };
  const streamOptions = {
    topics: topics,
    waitInterval: batchMaxTime,
    fetchSize: batchMaxSize
  };
  const stream: ConsumerStream = createReadStream(globalConfig, topicConfig, streamOptions);
  stream.on('error', (err) => {
    logger.error('Error in kafka consumer stream', {
      error_msg: err.message,
      error_name: err.name
    });
  });
  stream.consumer.on('event.error', (err) => {
    if (err.stack === 'Error: Local: Broker transport failure') return;
    logger.error('Error in kafka consumer');
    stream.emit('rd-kafka-error', err);
  });
  stream.consumer.on('rebalance', ({ message }, assignment) => {
    logger.info('Rebalance event', { assigned_topics: assignment });
  });
  return stream;
};
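One way to make a stuck partition visible is to log per-partition lag from the same consumer; a rough sketch using node-rdkafka's assignments(), committed() and queryWatermarkOffsets() (the interval and timeouts are arbitrary, and kafkaConsumer and logger are the objects from the snippet above):

// Rough diagnostic sketch: log per-partition lag so a stalled partition shows up
// as a growing gap between the committed offset and the high watermark.
setInterval(() => {
  const assignments = kafkaConsumer.consumer.assignments();
  kafkaConsumer.consumer.committed(assignments, 5000, (err, committedOffsets) => {
    if (err) {
      return logger.error('Failed to fetch committed offsets', { error_msg: err.message });
    }
    committedOffsets.forEach(({ topic, partition, offset }) => {
      kafkaConsumer.consumer.queryWatermarkOffsets(topic, partition, 5000, (wmErr, watermarks) => {
        if (wmErr) return;
        logger.info('Partition lag', {
          topic,
          partition,
          committed: offset,
          high_watermark: watermarks.highOffset,
          lag: watermarks.highOffset - offset
        });
      });
    });
  });
}, 60000);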

Related

how to read x number of messages in kafkajs consumer at a time

I have a situation where, to achieve better performance, I have to read multiple Kafka messages at a time. I searched the internet and found that Kafka has batch functionality where messages can be read in batches, but I am not able to configure it to receive only a maximum of x messages at a time.
Code that I found:
await consumer.run({
  eachBatchAutoResolve: true,
  eachBatch: async ({
    batch,
    resolveOffset,
    heartbeat,
    commitOffsetsIfNecessary,
    uncommittedOffsets,
    isRunning,
    isStale,
  }) => {
    for (let message of batch.messages) {
      console.log({
        topic: batch.topic,
        partition: batch.partition,
        highWatermark: batch.highWatermark,
        message: {
          offset: message.offset,
          key: message.key.toString(),
          value: message.value.toString(),
          headers: message.headers,
        }
      })
      resolveOffset(message.offset)
      await heartbeat()
    }
  },
})
environment = Node
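For reference, kafkajs does not appear to expose a direct "max x messages per batch" option; a batch is bounded in bytes through the consumer's fetch settings, so the message count can only be capped indirectly. A minimal sketch assuming placeholder broker/topic names and byte limits (note that in kafkajs 2.x, subscribe takes topics: ['...'] instead of topic):

const { Kafka } = require('kafkajs')

// Placeholder client, broker and topic names for illustration only.
const kafka = new Kafka({ clientId: 'batch-reader', brokers: ['localhost:9092'] })

const consumer = kafka.consumer({
  groupId: 'batch-group',
  // kafkajs bounds a batch by bytes, not by message count:
  maxBytesPerPartition: 1048576, // ~1 MB fetched per partition per request
  maxBytes: 5242880,             // ~5 MB fetched per request overall
  maxWaitTimeInMs: 1000,         // how long the broker may wait to fill a fetch
})

const run = async () => {
  await consumer.connect()
  await consumer.subscribe({ topic: 'my-topic', fromBeginning: true })
  await consumer.run({
    eachBatchAutoResolve: true,
    eachBatch: async ({ batch, resolveOffset, heartbeat }) => {
      // batch.messages.length varies; it is limited only indirectly by the byte caps above
      for (const message of batch.messages) {
        // process message here...
        resolveOffset(message.offset)
        await heartbeat()
      }
    },
  })
}

run().catch(console.error)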

my api needs time to process a request, how can I use React + SWR to continue checking on the status?

I have an endpoint in my API that initiates a process on AWS. This process takes time; it can last several seconds or minutes depending on the size of the request. Right now I'm rebuilding my app to use swr. Before this update to swr, I had a recursive function that would call itself on a timeout and continuously ping the API for the status of my AWS process, only exiting once the response had the appropriate type.
I'd like to drop that recursive function because, well... it was kind of hacky. I'm still getting familiar with swr, and I'm not a Node.js API-building master, so I'm curious what thoughts come to mind for improving the pattern below.
Ideally, the lowest-hanging fruit would be to set up swr to handle the incoming response and keep pinging if the response isn't type: "complete", but I'm not sure how I'd do that. Right now it pretty much pings once and shows whatever status it found at that time.
Any help is appreciated!
tl;dr:
How can I set up swr to continually ping the API until my content is finished loading?
Part of my API that sends out responses based on how far along the AWS process is:
if (serviceResponse !== undefined) {
  // * task is not complete
  const { serviceJobStatus } = serviceResponse.serviceJob;
  if (serviceJobStatus.toLowerCase() === 'in_progress') {
    return res.status(200).send({ type: 'loading', message: serviceJobStatus });
  }
  if (serviceJobStatus.toLowerCase() === 'queued') {
    return res.status(200).send({ type: 'loading', message: serviceJobStatus });
  }
  if (serviceJobStatus.toLowerCase() === 'failed') {
    return res.status(400).send({ type: 'failed', message: serviceJobStatus });
  }
  // * task is complete
  if (serviceJobStatus.toLowerCase() === 'completed') {
    const { serviceFileUri } = serviceResponse.serviceJob?.Data;
    const { data } = await axios.get(serviceFileUri as string);
    const formattedData = serviceDataParser(data.results);
    return res.status(200).send({ type: 'complete', message: formattedData });
  }
} else {
  return res.status(400).send({ type: 'error', message: serviceResponse });
}
my current useSWR hook:
const { data: rawServiceData } = useSwr(
  serviceEndpoint,
  url => axios.get(url).then(r => r.data),
  {
    onSuccess: data => {
      if (data.type === 'complete') {
        dispatch(
          setStatus({
            type: 'success',
            data: data.message,
            message: 'service has been successfully generated.',
            display: 'support-both',
          })
        );
        dispatch(setRawService(data.message));
      }
      if (data.type === 'loading') {
        dispatch(
          setStatus({
            type: 'success',
            data: data.message,
            message: 'service job is in progress.',
            display: 'support-both',
          })
        );
      }
    },
  }
);
After some digging around, I figured I'd use the refreshInterval option that comes with swr and toggle a boolean in my component's state:
while the request is 'loading', the boolean in state is false;
once the job is 'complete', the boolean in state is set to true;
a ternary within my hook sets the refreshInterval to 0 (default: off) or 3000.
const [serviceJobComplete, setServiceJobComplete] = useState(false);

const { data: serviceData } = useSwr(
  serviceEndpoint,
  url => axios.get(url).then(r => r.data),
  {
    revalidateIfStale: false,
    revalidateOnFocus: false,
    revalidateOnReconnect: false,
    refreshInterval: serviceJobComplete ? 0 : 3000,
    ...
    // other options
  }
);
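Putting those pieces together, a self-contained sketch of the same pattern with the boolean flipped inside onSuccess (the hook name useServiceJob and the { type, message } response shape are assumptions based on the snippets above):

import { useState } from 'react';
import useSwr from 'swr';
import axios from 'axios';

// Sketch only: assumes the endpoint responds with { type: 'loading' | 'complete' | 'failed', message: ... }
const useServiceJob = (serviceEndpoint) => {
  const [serviceJobComplete, setServiceJobComplete] = useState(false);

  const { data } = useSwr(
    serviceEndpoint,
    url => axios.get(url).then(r => r.data),
    {
      revalidateIfStale: false,
      revalidateOnFocus: false,
      revalidateOnReconnect: false,
      // poll every 3s until the job reports complete, then stop polling
      refreshInterval: serviceJobComplete ? 0 : 3000,
      onSuccess: data => {
        if (data.type === 'complete') setServiceJobComplete(true);
      },
    }
  );

  return { data, serviceJobComplete };
};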
helpful resources:
https://github.com/vercel/swr/issues/182
https://swr.vercel.app/docs/options

Unable to use ActiveMQ priority messages using STOMP protocol in nodejs

I have an application which sends messages to a queue and another application which subscribes to the queue and processes them. I want OTP messages to be given higher priority than other messages, so I am trying to use ActiveMQ message priority to achieve this.
This is the code for the ActiveMQ connection over the STOMP protocol in Node.js, using the stompit library:
const serverPrimary = {
  host: keys.activeMQ.host,
  port: keys.activeMQ.port,
  ssl: ssl,
  connectHeaders: {
    host: '/',
    login: keys.activeMQ.username,
    passcode: keys.activeMQ.password,
    'heart-beat': '5000,5000',
  },
}

connManager = new stompit.ConnectFailover(
  [serverPrimary, serverFailover],
  reconnectOptions,
)

connManager.on('error', function (e) {
  const connectArgs = e.connectArgs
  const address = connectArgs.host + ':' + connectArgs.port
  logger.error({ error: e, customMessage: address })
})

channelPool = new stompit.ChannelPool(connManager)
Code for sending a message:
const pushMessageToAMQ = (queue, message) => {
  const queues = Object.values(activeMQ.queues)
  if (!queues.includes(queue)) {
    _mqLog(mqLogMessages.unknownQueue + queue)
    return
  }
  //Priority header is set
  const header = {
    destination: queue,
    priority: 7
  }
  //If message is not a string
  if (typeof message !== 'string') message = JSON.stringify(message)
  //Logging message before sending
  _mqLog(
    mqLogMessages.sending,
    { service: services.amq },
    { header: header, message: message },
  )
  //Sending message to amq
  _sendMessageToAMQ(header, message, error => {
    if (error) {
      _mqError(error, mqLogMessages.sendingError, { service: services.amq })
    }
  })
}

const _sendMessageToAMQ = (headers, body, callback) => {
  channelPool.channel((error, channel) => {
    if (error) {
      callback(error)
      return
    }
    channel.send(headers, body, callback)
  })
}
Here's the code for subscribing to the queue in the second application:
const amqSubscribe = (queue, callback, ack = 'client-individual') => {
  log({ customMessage: 'Subscribing to ' + queue })
  const queues = Object.values(activeMQ.queues)
  if (!queues.includes(queue)) {
    return
  }
  channelPool.channel((error, channel) => {
    let header = {
      destination: queue,
      ack: ack,
      'activemq.prefetchSize': 1,
    }
    //Check for error
    if (error) {
      _mqError(error, mqLogMessages.baseError, header)
    } else {
      channel.subscribe(
        header,
        _synchronisedHandler((error, message, next) => {
          //Check for error
          if (error) {
            _mqError(error, mqLogMessages.subscriptionError, header)
            next()
          } else {
            //Read message
            message.readString('utf-8', function (error, body) {
              if (error) {
                _mqError(error, mqLogMessages.readError, header)
                next()
              } else {
                //Message read successfully, call callback
                callback(body, () => {
                  //Acknowledgment callback
                  channel.ack(message)
                  next()
                })
              }
            })
          }
        }),
      )
    }
  })
}
Activemq.xml
<policyEntries>
<policyEntry queue=">" prioritizedMessages="true" useCache="false" expireMessagesPeriod="0" queuePrefetch="1" />
.......
I pushed several messages with different priorities and turned on the second application (i.e. the one which subscribes to the messages) only after all the messages had been pushed to the queue. However, the messages were consumed in the same order they were sent; the priority didn't change anything. Is there something I am missing?
Do I have to add something on the consumer end for it to work?
Support for priority is disabled by default in ActiveMQ "Classic" (used by Amazon MQ). As the documentation states:
...support [for message priority] is disabled by default so it needs to be enabled using per destination policies through xml configuration...
You need to set prioritizedMessages="true" in the policyEntry for your queue, e.g.:
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry queue=">" prioritizedMessages="true"/>
...
To be clear, this is configured on the broker (i.e. not the client) in activemq.xml, and it applies to every kind of client.

Node Process hangs when saving many pubnub instances in memory

When load testing a program that uses PubNub for an integration, I sent around 2000 requests; on each request a PubNub instance was created with different pub/sub keys, a subscription to a channel, and listeners added. After some time, when there is a network issue, PubNub throws a socket hang up error, memory starts spiking, and eventually the process is killed, even though I destroy the PubNub object when a subscription fails.
class pubnub {
  private config;
  private pubnub;

  constructor(options) {
    this.config = options
  }

  register(callback) {
    let timetoken = null;
    this.pubnub = new Pubnub({
      publish_key: this.config.publish_key,
      subscribe_key: this.config.subscribe_key,
      ssl: true,
      keepAlive: true
    });
    this.pubnub.addListener({
      message: (m) => {
        // console.log('----------------- ', m);
        if (timetoken !== m.timetoken) {
          timetoken = m.timetoken;
        }
      },
      // Arrow function keeps `this` bound to the class instance,
      // so destroy() actually runs on this.pubnub when the status reports an error.
      status: (m) => {
        console.log(m);
        if (m && m.error === true) {
          this.pubnub.destroy(true);
          return callback(m.errorData);
        }
        callback(null, true);
      }
    });
    this.pubnub.subscribe({
      channels: this.config.channels
    });
  }
}

Replicate EasyNetQ Request/Response with amqplib in nodeJS

I'm replicating EasyNetQ functionality in NodeJS (so that a Node app can communicate over Rabbit with an EasyNetQ-enabled .NET app). I've replicated EasyNetQ's Publish/Subscribe and its Send/Receive, but I'm having some difficulty with EasyNetQ's Request/Response.
Here is my current Node code:
var rqrxID = uuid.v4(); //a GUID
var responseQueue = 'easynetq.response.' + rqrxID;

Q(Play.AMQ.ConfirmChannel.assertQueue(responseQueue, { durable: false, exclusive: true, autoDelete: true }))
  .then((okQueueReply) =>
    Play.AMQ.ConfirmChannel.consume(responseQueue, (msg) => {
      //do something here...
      Play.AMQ.ConfirmChannel.ack(msg);
    })
  )
  .then((okSubscribeReply) => {
    Q(Play.AMQ.ConfirmChannel.assertExchange('easy_net_q_rpc', 'direct', { durable: true, autoDelete: false }))
      .then((okExchangeReply) =>
        Play.AMQ.ConfirmChannel.publish(
          global.AppConfig.amq.rpc.exchange,
          dto.AsyncProcessorCommand.Type,
          Play.ToBuffer(command),
          { type: command.GetType() },
          (err, ok): void => {
            if (err !== null) {
              console.warn('Message nacked!');
              responseDeferred.reject(err);
            }
          }
        )
      )
  })
  .catch((failReason) => {
    console.error(util.format('Error creating response queue: %s', failReason));
    return null;
  });
Note that the publish works and is received by the .NET code. That code then sends a response and the issue is that the response isn't received. Here's the .NET code:
Bus.Respond<AsyncProcessorCommand, AsyncProcessorCommandResponse>(
    request =>
    {
        Console.WriteLine("Got request: '{0}'", request);
        return new AsyncProcessorCommandResponse()
        {
            ID = Guid.NewGuid(),
            ResponseType = "ENQResp"
        };
    });
I'm sure I'm missing something, but not sure what. Who can help?
UPDATE
I have solved at least part of this. Taking the value of responseQueue and setting that into the options for publish as "replyTo" hooks the response up - nice. Now I just have to figure out how to either not create a new queue each time OR, make the response queue go away...
UPDATE FINAL
So, using the channel setup I had and saving the consumerTag (actually, specifying it) allowed me to cancel the consumer, and the queue auto-deleted.
Taking my comments from above to answer this.
There are two pieces to this. First, from the code above, create your response queue so that it auto-deletes (when the consumer count drops to 0):
channel.assertQueue(responseQueue, { durable: false, exclusive: true, autoDelete: true }))
Then create/publish to the queue the "server" is listening on - making sure to set "replyTo" for the response queue you just created (the type piece is another bit of ENQ-needed code):
{ type: command.GetType(), replyTo: responseQueue }
So an entire (currently messy as it's "play" code) method for executing this pattern looks like:
private static Request(command: dto.AsyncProcessorCommand): Q.Promise<dto.interfaces.IAsyncProcessorCommandResponse> {
  var responseDeferred = Q.defer<dto.interfaces.IAsyncProcessorCommandResponse>();

  var consumerTag = uuid.v4();
  var rqrxID = uuid.v4();
  var responseQueue = 'easynetq.response.' + rqrxID;

  var handleResponse = (msg: any): void => {
    var respType = null;
    switch (command.Action) {
      default:
        respType = 'testResp';
    }
    //just sending *something* back, should come from 'msg'
    responseDeferred.resolve(new dto.AsyncProcessorCommandResponse(respType, { xxx: 'yyy', abc: '123' }));
  }

  Q(Play.AMQ.ConfirmChannel.assertQueue(responseQueue, { durable: false, exclusive: true, autoDelete: true }))
    .then((okQueueReply) =>
      Play.AMQ.ConfirmChannel.consume(responseQueue, (msg) => {
        handleResponse(msg);
        Play.AMQ.ConfirmChannel.ack(msg);
        Play.AMQ.ConfirmChannel.cancel(consumerTag);
      },
      { consumerTag: consumerTag })
    )
    .then((okSubscribeReply) => {
      Q(Play.AMQ.ConfirmChannel.assertExchange('easy_net_q_rpc', 'direct', { durable: true, autoDelete: false }))
        .then((okExchangeReply) =>
          Play.AMQ.ConfirmChannel.publish(
            'easy_net_q_rpc',
            dto.AsyncProcessorCommand.Type,
            Play.ToBuffer(command),
            { type: command.GetType(), replyTo: responseQueue },
            (err, ok): void => {
              if (err !== null) {
                console.warn('Message nacked!');
                responseDeferred.reject(err);
              }
            }
          )
        )
    })
    .catch((failReason) => {
      console.error(util.format('Error creating response queue: %s', failReason));
      return null;
    });

  return responseDeferred.promise
}
