Receive Timeout on SMPP - opensmpp

Does anyone know the reason or logic why the timeout setting on the receive method of OpenSMPP is always rounded up to a multiple of ten seconds? This is based on my experience: when I set it to 5 seconds, the timeout becomes 10 seconds, and when I set it to 11 seconds, the timeout becomes 20 seconds.
I tried to find an answer by digging into the code of open-smpp-3.0.1, but I couldn't find the logic where 1 second becomes 10 seconds. I hope someone here has figured this one out before.
Btw, my bind request is a Receiver, and my sync mode is synchronous.

I think it is the "Queue Wait Timeout". The code says this about the value:
"This timeout specifies for how long will go the receiving into wait if the PDU (expected or any) isn't in the pduQueue yet. After that the queue is probed again (etc.) until receiving timeout expires or the PDU is received".
The default value is 10 seconds, so a receive timeout of 1 to 10 seconds waits on the queue once, for 10 seconds, while a receive timeout of 11 seconds waits on the queue twice, i.e. for 20 seconds. In other words, the effective wait is roughly ceil(receiveTimeout / queueWaitTimeout) × queueWaitTimeout. You can modify this value after binding by calling:
sessionSmpp.getReceiver().setQueueWaitTimeout(milliseconds);

Related

Bull.js jobs stalling despite timeout being set

I have a Bull queue running lengthy video upload jobs which could take any amount of time from < 1 min up to many minutes.
The jobs stall after the default 30 seconds, so I increased the timeout to several minutes, but this is not respected. If I set the timeout to 10ms it immediately stalls, so the timeout is being taken into account.
Job {
  opts: {
    attempts: 1,
    timeout: 600000,
    delay: 0,
    timestamp: 1634753060062,
    backoff: undefined
  },
  ...
}
Despite the timeout, I am receiving a stalled event, and the job starts to process again.
EDIT: I thought "stalling" was the same as timing out, but apparently there is a separate timeout for how often Bull checks for stalled jobs. In other words the real problem is why jobs are considered "stalled" even though they are busy performing an upload.
The problem seems to be that your job stalls because the operation you are running blocks the event loop. You could convert your code into a non-blocking one and solve the problem that way, as sketched below.
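For example, a minimal sketch of that idea, assuming the upload can be broken into chunks (processChunk and job.data.chunks are placeholders, not from the question):

const Bull = require('bull');
const queue = new Bull('video-upload');

queue.process(async (job) => {
  for (const chunk of job.data.chunks) {
    await processChunk(chunk);       // placeholder for the real work
    await new Promise(setImmediate); // yield so Bull can renew the job lock
  }
});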
That being said, the stalled-interval check can be set in the queue settings when initiating the queue (more of a quick fix):
const queue = new Bull('queue', {
  redis: { port: 6379, host: 'localhost', db: 0 }, // connection options belong under `redis`
  settings: {
    stalledInterval: 60 * 60 * 1000, // change default from 30 sec to 1 hour; set 0 to disable the stalled check
  },
})
Based on Bull's docs:
timeout: The number of milliseconds after which the job should fail with a timeout error
stalledInterval: How often to check for stalled jobs (use 0 for never checking)
Increasing stalledInterval (or disabling it by setting it to 0) removes the check that makes sure the event loop is running, effectively forcing the system to ignore the stalled state.
Again from the docs:
When a worker is processing a job it will keep the job "locked" so other workers can't process it.
It's important to understand how locking works to prevent your jobs from losing their lock - becoming _stalled_ -
and being restarted as a result. Locking is implemented internally by creating a lock for `lockDuration` on interval
`lockRenewTime` (which is usually half `lockDuration`). If `lockDuration` elapses before the lock can be renewed,
the job will be considered stalled and is automatically restarted; it will be __double processed__. This can happen when:
1. The Node process running your job processor unexpectedly terminates.
2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see [#488](https://github.com/OptimalBits/bull/issues/488) for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the `lockDuration` setting (with the tradeoff being that it will take longer to recognize a real stalled job).
As such, you should always listen for the `stalled` event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.
As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of `maxStalledCount` times (default: `1`).
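If you prefer to keep the stalled check, Bull's queue settings also expose lockDuration and lockRenewTime, as the quoted docs suggest; a hedged sketch with illustrative values:

const queue = new Bull('queue', {
  settings: {
    lockDuration: 120000, // ms a job may hold its lock without renewal (default 30000)
    lockRenewTime: 60000, // how often the lock is renewed, usually lockDuration / 2
  },
})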

Pulsar: If a message gets nack'd (negativeAcknowledge()) when will it be redelivered?

If we cannot process a message (perhaps due to some timing problem or race condition) and we call
consumer.negativeAcknowledge(messageId);
When will it be redelivered to retry processing?
I am unable to figure out from the documentation what the default setting for redelivery is:
https://pulsar.apache.org/docs/en/concepts-messaging/#negative-acknowledgement
https://pulsar.apache.org/docs/en/concepts-messaging/#acknowledgement-timeout
https://pulsar.apache.org/docs/en/reference-configuration/
The default is 60 seconds.
You can configure it in the consumer:
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-sub")
        .negativeAckRedeliveryDelay(10, TimeUnit.SECONDS)
        .subscribe();

RabbitMQ: how to limit consuming rate

I need to limit the rate of consuming messages from a RabbitMQ queue.
I have found many suggestions, but most of them propose using the prefetch option. That doesn't do what I need: even with prefetch set to 1, the rate is about 6000 messages/sec. This is far too many for the consumer.
I need to limit the rate to roughly 70 to 200 messages per second. This means consuming one message every 5-14 ms. No simultaneous messages.
I'm using Node.JS with amqp.node library.
Implementing a token bucket might help:
https://en.wikipedia.org/wiki/Token_bucket
You can write a producer that publishes to the "token bucket queue" at a fixed rate, either with a TTL on the message (maybe expiring after a second?) or with a maximum queue length equal to your per-second rate. Consumers that receive a "normal queue" message must also receive a "token bucket queue" message in order to process it, effectively rate-limiting the application.
NodeJS + amqplib Example:
var queueName = 'my_token_bucket';
rabbitChannel.assertQueue(queueName, {durable: true, messageTtl: 1000, maxLength: bucket.ratePerSecond});
writeToken();

function writeToken() {
  // Buffer.from replaces the deprecated new Buffer(...)
  rabbitChannel.sendToQueue(queueName, Buffer.from(new Date().toISOString()), {persistent: true});
  setTimeout(writeToken, 1000 / bucket.ratePerSecond);
}
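The answer only shows the token producer; a hedged sketch of the consumer side, assuming the amqplib promise API (handleMessage and 'work_queue' are placeholders):

channel.prefetch(1);
channel.consume('work_queue', async (msg) => {
  // channel.get resolves to false when the token bucket is empty
  let token = await channel.get(queueName, { noAck: true });
  while (token === false) {
    await new Promise((resolve) => setTimeout(resolve, 50)); // wait for the next token
    token = await channel.get(queueName, { noAck: true });
  }
  await handleMessage(msg); // placeholder for the real work
  channel.ack(msg);
});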
I've already found a solution; a sketch of it follows below.
I use the nanotimer module from npm to calculate delays.
I calculate delay = 1 / [messages_per_second] (in nanoseconds).
Then I consume messages with prefetch = 1.
Then I calculate the actual delay as delay - [message_processing_time].
Then I wait that long before sending the ack for the message.
It works perfectly. Thanks to all.
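A minimal sketch of that approach, assuming an amqplib channel and a placeholder processMessage handler (nanotimer accepts interval strings such as '5m' for 5 milliseconds):

const NanoTimer = require('nanotimer');
const timer = new NanoTimer();
const messagesPerSecond = 100; // target rate, 70-200 in the question
const delayMs = 1000 / messagesPerSecond;

channel.prefetch(1); // one unacknowledged message at a time
channel.consume(queueName, (msg) => {
  const started = Date.now();
  processMessage(msg); // placeholder for the real handler
  const remaining = Math.max(0, Math.ceil(delayMs - (Date.now() - started)));
  timer.setTimeout(() => channel.ack(msg), '', remaining + 'm'); // ack after the remaining delay
});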
See 'Fair Dispatch' in RabbitMQ Documentation.
For example in a situation with two workers, when all odd messages are heavy and even messages are light, one worker will be constantly busy and the other one will do hardly any work. Well, RabbitMQ doesn't know anything about that and will still dispatch messages evenly.
This happens because RabbitMQ just dispatches a message when the message enters the queue. It doesn't look at the number of unacknowledged messages for a consumer. It just blindly dispatches every n-th message to the n-th consumer.
In order to defeat that we can use the prefetch method with the value of 1. This tells RabbitMQ not to give more than one message to a worker at a time. Or, in other words, don't dispatch a new message to a worker until it has processed and acknowledged the previous one. Instead, it will dispatch it to the next worker that is not still busy.
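With amqp.node that is a single call on the channel:

channel.prefetch(1); // at most one unacknowledged message per consumer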
I don't think RabbitMQ can provide this feature out of the box.
If you have only one consumer, the whole thing is pretty easy: you just let it sleep between consuming messages.
If you have multiple consumers, I would recommend using some "shared memory" to keep the rate. For example, you might have 10 consumers consuming messages. To keep a rate of 70-200 messages across all of them, each consumer makes a call to Redis to check whether it is eligible to process a message; if yes, it updates Redis to show the other consumers that a message is currently being processed. A sketch of this follows below.
If you have no control over the consumer, implement option 1 or 2 and publish the message back to RabbitMQ; this way the original consumer will consume messages at the desired pace.
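A hedged sketch of the shared-memory option, assuming ioredis and a fixed one-second window (the key name, limit, and handleMessage are illustrative):

const Redis = require('ioredis');
const redis = new Redis();
const LIMIT = 200; // max messages per second across all consumers

async function tryAcquire() {
  const key = 'consume_rate:' + Math.floor(Date.now() / 1000); // one counter per second
  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, 2); // let old windows expire
  return count <= LIMIT;
}

channel.consume(queueName, async (msg) => {
  while (!(await tryAcquire())) {
    await new Promise((resolve) => setTimeout(resolve, 20)); // back off until eligible
  }
  await handleMessage(msg); // placeholder for the real work
  channel.ack(msg);
});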
This is how I fixed mine with just setTimeout.
I set mine to ack every 200 ms, which consumes 5 messages per second; my handler does an update if the record already exists.
channel.prefetch(1); // needed so the delayed ack actually throttles delivery
channel.consume(transactionQueueName, async (data) => {
  let dataNew = JSON.parse(data.content);
  const processedTransaction = await seperateATransaction(dataNew);
  // delay the ack to avoid duplicate entries - important, don't remove the setTimeout
  setTimeout(function () {
    channel.ack(data);
  }, 200);
});
Done

Guidance OnMessageOptions.AutoRenewTimeout

Can someone offer some more guidance on the use of the Azure Service Bus OnMessageOptions.AutoRenewTimeout
http://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.onmessageoptions.autorenewtimeout.aspx
as I haven't found much documentation on this option, and would like to know if this is the correct way to renew a message lock
My use case:
1) Message Processing Queue has a Lock Duration of 5 minutes (The maximum allowed)
2) Message Processor using the OnMessageAsync message pump to read from the queue (with ReceiveMode.PeekLock). The long-running processing may take up to 10 minutes before manually calling msg.CompleteAsync
3) I want the message processor to automatically renew its lock up until the time it's expected to complete processing (~10 minutes). If it hasn't completed by then, the lock should be automatically released.
Thanks
-- UPDATE
I never did end up getting any more guidance on AutoRenewTimeout. I ended up using a custom MessageLock class that auto renews the Message Lock based on a timer.
See the gist -
https://gist.github.com/Soopster/dd0fbd754a65fc5edfa9
To handle long message processing you should set AutoRenewTimeout == 10 min (in your case). That means the lock will be renewed during those 10 minutes, each time the LockDuration expires.
So if, for example, your LockDuration is 3 minutes and AutoRenewTimeout is 10 minutes, the lock will be renewed every 3 minutes (after 3, 6 and 9 minutes) and automatically released 12 minutes after the message was consumed.
To my personal taste, OnMessageOptions.AutoRenewTimeout is a bit too coarse a lease-renewal option. If one sets it to 10 minutes and for whatever reason the Message is completed (.Complete()) only after 10 minutes and 5 seconds, the Message will show up again in the Message Queue, be consumed by the next stand-by Worker, and the entire processing will execute again. That is wasteful and also keeps the Workers from executing other unprocessed Requests.
To work around this:
Change your Worker process to verify whether the item it just received from the Message Queue has already been processed, by checking for a Success/Failure result stored somewhere. If it has, call BrokeredMessage.Complete() and move on to wait for the next item.
Periodically call BrokeredMessage.RenewLock() BEFORE the lock expires (for example every 10 seconds) and set OnMessageOptions.AutoRenewTimeout to TimeSpan.Zero. That way, if the Worker processing an item crashes, the Message will return to the Message Queue sooner and be picked up by the next stand-by Worker.
I have the very same problem with my workers. Even when the message is processed successfully, due to the long processing time Service Bus removes the lock applied to it and the message becomes available for receiving again. Another available worker takes this message and starts processing it again. Please correct me if I'm wrong, but in your case OnMessageAsync will be called many times with the same message and you will end up with several tasks simultaneously processing it. At the end of the process a MessageLockLost exception will be thrown because the message no longer has a lock applied.
I solved this with the following code.
_requestQueueClient.OnMessage(
    requestMessage =>
    {
        RenewMessageLock(requestMessage);
        // System.Timers.Timer takes the interval in milliseconds
        var messageLockTimer = new System.Timers.Timer(TimeSpan.FromSeconds(290).TotalMilliseconds);
        messageLockTimer.Elapsed += (source, e) =>
        {
            RenewMessageLock(requestMessage);
        };
        messageLockTimer.AutoReset = false; // by default it is true
        messageLockTimer.Start();

        /* ----- handle requestMessage ----- */

        requestMessage.Complete();
        messageLockTimer.Stop();
    });

private void RenewMessageLock(BrokeredMessage requestMessage)
{
    try
    {
        requestMessage.RenewLock();
    }
    catch (Exception exception)
    {
        // swallow renewal failures (e.g. the lock was already lost)
    }
}
It has been a few months since your post and maybe you have solved this already; if so, could you share your solution?

Geb Exception in Groovy

I am getting the following exception
geb.waiting.WaitTimeoutException at ApprovalChannelSpec.groovy:40
Caused by: org.codehaus.groovy.runtime.powerassert.PowerAssertionError at ApprovalChannelSpec.groovy:40
More details can be found in this screenshot: http://i.imgur.com/a2mlRil.png
It means that you have a condition that did not happen within the allotted time. In your case it looks like it is waiting for 45 seconds for the invoices link tab to be present, but it never shows up.
The docs for the waitFor method specify this http://www.gebish.org/manual/0.7.0/api/geb-core/geb/waiting/Wait.html#waitFor(groovy.lang.Closure):
Invokes the given block every retryInterval seconds until it returns a
true value according to the Groovy Truth. If block does not return a
truish value within timeout seconds then a WaitTimeoutException will
be thrown. If the given block is executing at the time when the
timeout is reached, it will not be interrupted. This means that this
method may take longer than the specified timeout. For example, if the
block takes 5 seconds to complete but the timeout is 2 seconds, the
wait is always going to take at least 5 seconds.
If block throws any Throwable, it is treated as a failure and the
block will be tried again after the retryInterval has expired. If the
last invocation of block throws an exception it will be the cause of
the WaitTimeoutException that will be thrown.
You need to use waitFor.
See the docs: waiting
P.S. Yes, #jeff-story is right.
