I am getting the following exception
geb.waiting.WaitTimeoutException at ApprovalChannelSpec.groovy:40
Caused by: org.codehaus.groovy.runtime.powerassert.PowerAssertionError at ApprovalChannelSpec.groovy:40
More details can be found in this screenshot: http://i.imgur.com/a2mlRil.png
It means that you have a condition that did not happen within the allotted time. In your case it looks like it is waiting for 45 seconds for the invoices link tab to be present, but it never shows up.
The docs for the waitFor method (http://www.gebish.org/manual/0.7.0/api/geb-core/geb/waiting/Wait.html#waitFor(groovy.lang.Closure)) state:
Invokes the given block every retryInterval seconds until it returns a
true value according to the Groovy Truth. If block does not return a
truish value within timeout seconds then a WaitTimeoutException will
be thrown. If the given block is executing at the time when the
timeout is reached, it will not be interrupted. This means that this
method may take longer than the specified timeout. For example, if the
block takes 5 seconds to complete but the timeout is 2 seconds, the
wait is always going to take at least 5 seconds.
If block throws any Throwable, it is treated as a failure and the
block will be tried again after the retryInterval has expired. If the
last invocation of block throws an exception it will be the cause of
the WaitTimeoutException that will be thrown.
You need to use waitFor. See the documentation on waiting.
p.s. Yes, @jeff-story is right.
I have a Celery (4.4.7) task which makes a blocking call and may block for a long time. I don't want to keep a worker busy for that long, so I set soft_time_limit for the task. My hope was to fail the task (if it cannot complete quickly) and retry it later.
My issue is that the SoftTimeLimitExceeded exception is not being raised (I suppose because the call blocks at the OS level). As a result the task is killed by the hard time_limit and I don't get a chance to retry it.
from celery import shared_task
from celery.exceptions import SoftTimeLimitExceeded

@shared_task(
    acks_late=True,
    ignore_result=True,
    soft_time_limit=5,
    time_limit=15,
    default_retry_delay=1,
    retry_kwargs={"max_retries": 10},
    retry_backoff=True,
    retry_backoff_max=1200,  # 20 min
    retry_jitter=True,
    autoretry_for=(SoftTimeLimitExceeded,),
)
def my_task():
    blocking_call_taking_long_time()
What I tried:
The hard time limit is impossible to intercept.
I expected acks_late to push my timed-out task back onto the queue, but that doesn't happen.
I tried adding reject_on_worker_lost, but neither value changes anything for me.
The SoftTimeLimitExceeded exception is definitely never raised - neither autoretry_for nor a regular try ... except catches it.
For now I have ended up setting an explicit timeout on the blocking operation itself, which requires adding a parameter everywhere along the call chain.
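For illustration, here is a minimal sketch of that workaround, assuming the blocking operation is (or wraps) something like a socket call that accepts its own timeout; the socket-based stand-in and the 4-second value below are hypothetical:

import socket

from celery import shared_task


def blocking_call_taking_long_time(timeout):
    # Hypothetical stand-in for the real OS-level blocking call; the point is
    # that the timeout is threaded all the way down to the blocking primitive.
    with socket.create_connection(("example.com", 80), timeout=timeout) as sock:
        sock.settimeout(timeout)
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        return sock.recv(4096)


@shared_task(bind=True, acks_late=True, soft_time_limit=5, time_limit=15,
             default_retry_delay=1, max_retries=10)
def my_task(self):
    try:
        # Keep the operation's own timeout below soft_time_limit so control
        # returns to Python before Celery's limits have to kick in.
        blocking_call_taking_long_time(timeout=4)
    except socket.timeout as exc:
        raise self.retry(exc=exc)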
Is there some other path I'm missing?
How can I pause execution for a certain amount of time in Godot?
I can't really find a clear answer.
The equivalent of Thread.Sleep(1000); for Godot is OS.DelayMsec(1000). The documentation says:
Delays execution of the current thread by msec milliseconds. msec must be greater than or equal to 0. Otherwise, delay_msec will do nothing and will print an error message.
Note: delay_msec is a blocking way to delay code execution. To delay code execution in a non-blocking way, see SceneTree.create_timer. Yielding with SceneTree.create_timer will delay the execution of code placed below the yield without affecting the rest of the project (or editor, for EditorPlugins and EditorScripts).
Note: When delay_msec is called on the main thread, it will freeze the project and will prevent it from redrawing and registering input until the delay has passed. When using delay_msec as part of an EditorPlugin or EditorScript, it will freeze the editor but won't freeze the project if it is currently running (since the project is an independent child process).
One-liner:
yield(get_tree().create_timer(1), "timeout")
This will delay the execution of the following line for 1 second.
Usually I wrap this in a sleep() function for convenience:
func sleep(sec):
    yield(get_tree().create_timer(sec), "timeout")
Call it with sleep(1) to delay 1 second.
A python script executes an IO-bound function a lot of times (order of magnitude: anything between 5000 and 75000). This is still pretty performant by using
def _iterator(): ... # yields 5000-75000 different names
def _thread_function(name): ...
with concurrent.futures.ThreadPoolExecutor(max_workers=11) as executor:
    executor.map(_thread_function, _iterator(), timeout=44)
If a user presses CTRL-C, it just messes up a single thread. I want it to stop launching new threads and either finish the currently running threads or kill them instantly, whatever.
How can I do that?
Exception handling in concurrent.futures.Executor.map might answer your question.
In essence, from the documentation of concurrent.futures.Executor.map
If a func call raises an exception, then that exception will be raised when its value is retrieved from the iterator.
As you are never retrieving the values from map(), the exception is never raised in your main thread.
Furthermore, from PEP 255
If an unhandled exception-- including, but not limited to, StopIteration --is raised by, or passes through, a generator function, then the exception is passed on to the caller in the usual way, and subsequent attempts to resume the generator function raise StopIteration. In other words, an unhandled exception terminates a generator's useful life.
Hence if you change your code to (notice the for loop):
def _iterator(): ... # yields 5000-75000 different names
def _thread_function(name): ...
with concurrent.futures.ThreadPoolExecutor(max_workers=11) as executor:
    for _ in executor.map(_thread_function, _iterator(), timeout=44):
        pass
The KeyboardInterrupt raised by CTRL-C will then be raised in the main thread, and by passing through the generator (executor.map(_thread_function, _iterator(), timeout=44)) it will terminate it.
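For completeness, here is a minimal sketch (not from the original answer) of handling CTRL-C explicitly so that queued work is cancelled while running threads are left to finish; the bodies of _iterator and _thread_function are placeholders, and cancel_futures requires Python 3.9 or newer:

import concurrent.futures

def _iterator():
    # Stand-in for the 5000-75000 names from the question.
    return (f"name-{i}" for i in range(5000))

def _thread_function(name):
    # Stand-in for the IO-bound work.
    return len(name)

executor = concurrent.futures.ThreadPoolExecutor(max_workers=11)
try:
    for _ in executor.map(_thread_function, _iterator(), timeout=44):
        pass
except KeyboardInterrupt:
    # Stop scheduling queued work; threads already running finish on their own.
    # cancel_futures requires Python 3.9+; on older versions drop the argument.
    executor.shutdown(wait=False, cancel_futures=True)
else:
    executor.shutdown(wait=True)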
Does anyone know the reason or logic why the timeout setting on the receive method of OpenSMPP is always rounded up to a multiple of ten? This is based on my experience: when I set it to 5 seconds, the timeout becomes 10 seconds, and when I set it to 11 seconds, the timeout becomes 20 seconds.
I tried to find an answer by digging into the code of open-smpp-3.0.1, but I couldn't find the logic where 1 second becomes 10 seconds. I hope someone here has figured this out before.
Btw, my bind request is a Receiver, and my sync mode is synchronous.
I think it is the "Queue Wait Timeout". The code says the following about this value:
"This timeout specifies for how long will go the receiving into wait if the PDU (expected or any) isn't in the pduQueue yet. After that the queue is probed again (etc.) until receiving timeout expires or the PDU is received".
The default value is 10 seconds, so a timeout of 1 to 10 seconds only waits on the queue for 10 seconds, but if you define a receiver timeout of 11 seconds it waits on the queue twice, and the receiver therefore waits for 20 seconds. You can change this value after binding by calling:
sessionSmpp.getReceiver().setQueueWaitTimeout(milliseconds);
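To illustrate the arithmetic described above (this is not open-smpp code, just a sketch of the behaviour as this answer explains it):

import math

def effective_wait(receive_timeout_s, queue_wait_timeout_s=10):
    # Per the explanation above, the receive timeout is effectively rounded
    # up to the next multiple of the queue wait timeout.
    return math.ceil(receive_timeout_s / queue_wait_timeout_s) * queue_wait_timeout_s

print(effective_wait(5))   # 10
print(effective_wait(11))  # 20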
Can someone offer more guidance on the use of the Azure Service Bus OnMessageOptions.AutoRenewTimeout (http://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.onmessageoptions.autorenewtimeout.aspx)? I haven't found much documentation on this option and would like to know whether this is the correct way to renew a message lock.
My use case:
1) The message processing queue has a Lock Duration of 5 minutes (the maximum allowed).
2) The message processor uses the OnMessageAsync message pump to read from the queue (with ReceiveMode.PeekLock). The long-running processing may take up to 10 minutes before manually calling msg.CompleteAsync.
3) I want the message processor to automatically renew its lock up until the time it is expected to complete processing (~10 minutes). If processing hasn't completed by then, the lock should be released automatically.
Thanks
-- UPDATE
I never did end up getting any more guidance on AutoRenewTimeout. I ended up using a custom MessageLock class that auto renews the Message Lock based on a timer.
See the gist -
https://gist.github.com/Soopster/dd0fbd754a65fc5edfa9
To handle long message processing you should set AutoRenewTimeout == 10 min (in your case). That means the lock will be renewed throughout those 10 minutes, each time the LockDuration expires.
So if, for example, your LockDuration is 3 minutes and AutoRenewTimeout is 10 minutes, the lock will be renewed automatically every 3 minutes (after 3, 6 and 9 minutes) and released automatically 12 minutes after the message was consumed.
For my taste, OnMessageOptions.AutoRenewTimeout is a bit too coarse a lease-renewal option. If you set it to 10 minutes and, for whatever reason, the message is .Complete()-d only after 10 minutes and 5 seconds, the message will show up again in the queue, be consumed by the next standby worker, and the entire processing will run again. That is wasteful and also keeps the workers from executing other unprocessed requests.
To work around this:
Change your worker process to verify whether the item it just received from the queue has already been processed (look for a Success/Failure result stored somewhere). If it has, call BrokeredMessage.Complete() and move on to wait for the next item.
Periodically call BrokeredMessage.RenewLock() BEFORE the lock expires (say, every 10 seconds) and set OnMessageOptions.AutoRenewTimeout to TimeSpan.Zero. That way, if the worker processing an item crashes, the message returns to the queue sooner and is picked up by the next standby worker.
I have the very same problem with my workers. Even though the message is processed successfully, the long processing time means Service Bus removes the lock applied to it and the message becomes available for receiving again. Another available worker takes this message and starts processing it again. Please correct me if I'm wrong, but in your case OnMessageAsync will be called many times with the same message and you will end up with several tasks processing it simultaneously. At the end of processing a MessageLockLostException will be thrown because the message no longer has a lock applied.
I solved this with the following code.
_requestQueueClient.OnMessage(
    requestMessage =>
    {
        // Renew once up front, then again shortly before the 5-minute lock expires.
        RenewMessageLock(requestMessage);

        // System.Timers.Timer expects an interval in milliseconds.
        var messageLockTimer = new System.Timers.Timer(TimeSpan.FromSeconds(290).TotalMilliseconds);
        messageLockTimer.Elapsed += (source, e) =>
        {
            RenewMessageLock(requestMessage);
        };
        messageLockTimer.AutoReset = false; // by default it is true
        messageLockTimer.Start();

        /* ----- handle requestMessage ----- */

        requestMessage.Complete();
        messageLockTimer.Stop();
    });

private void RenewMessageLock(BrokeredMessage requestMessage)
{
    try
    {
        requestMessage.RenewLock();
    }
    catch (Exception)
    {
        // Ignore renewal failures (e.g. the lock has already been lost).
    }
}
It has been a few months since your post and maybe you have already solved this; if so, could you share your solution?