Retry function is not working for more than 3 attempts - Karate DSL

I am using retry in my code to check the status of a GET request, and I retry the call until I get a 200.
My code is:
Given configure retry = { interval: 5000, attempts: 5 }
And URL
And param query = 'name:' + title
And def auth = callonce read('classpath:examples/Tokens/ViewToken.feature') { 'viewAccessToken': 'viewAccessToken' }
And print 'view token', auth.viewAccessToken
And header Authorization = auth.viewAccessToken
And retry until responseStatus == 200
When method get
But this retries only 3 times even though I have set the retry count to 5.
How can I fix this?

Karate's retry configuration uses the key count, not attempts. Because attempts is not a recognized key, Karate falls back to its default of 3 retries, which is why you only see 3 attempts. Configure it like this instead:

* configure retry = { count: 5, interval: 5000 }

Related

NodeJS - While loop until a certain condition is met or a certain time has passed

I've seen some very similar questions/answers, but none exactly describing what I would like to achieve. Some background: this is a multi-step provisioning flow. In short, this is the goal:
1. POST an action.
2. GET the status based on a variable submitted above. If the response == "done", proceed. Returns an ID.
3. POST an action. Returns an ID.
4. GET the status based on the ID returned above. If the response == "done", proceed. Returns an ID.
5. (..)
I think there are 6/7 steps in total.
The first question is: are there any modules that could help me achieve this? The only requirement is that each status check should be delayed by a certain amount of time, and the flow should expire and be marked as failed after a certain amount of time.
Nevertheless, the best I could come up with is this, using step 2 as an example:
GetNewDeviceId: function (req, res) {
  const delay = ms => new Promise((resolve, reject) => setTimeout(resolve, ms));
  var ip = req;

  async function main() {
    let response;
    while (true) {
      try {
        response = await service.GetNewDeviceId(ip);
        console.log("Running again for: " + ip + " - " + response);
        if (response["value"] != null) {
          break;
        }
      } catch {
        // In case it fails
      }
      console.log("Delaying for: " + ip);
      await delay(30000);
    }
    // Call next step
    console.log("Moving on for: " + ip);
  }
  main();
}
This brings up a couple of questions:
I'm not sure this is indeed the best/clean way.
How can I set a global timeout, let's say 30 minutes, forcing it to step out of the loop and call a "failure" function?
The other thing I'm not sure about (NodeJS newbie here) is that, assuming this gets called, let's say, 4 times with different IPs before any of those 4 are finished, NodeJS will run each call in its own context, right? I quickly tested this and it seems like so.
I'm not sure this is indeed the best/clean way.
I am unsure whether your function GetNewDeviceId involves recursion, that is, whether it invokes itself as service.GetNewDeviceId. That would not make sense; service.GetNewDeviceId should perform a GET request, right? If that is the case, your function seems clean to me.
How can I set a global timeout, let's say 30 minutes, forcing it to step out of the loop and call a "failure" function?
let response;
let failAt = new Date().getTime() + 30 * 60 * 1000; // 30 minutes after now
while (true) {
  if (new Date().getTime() >= failAt)
    return res.status(500).send("Failure");
  try {...}
  ...
  await delay(30000);
}
The other thing I'm not sure about (NodeJS newbie here) is that, assuming this gets called, let's say, 4 times with different IPs before any of those 4 are finished, NodeJS will run each call in its own context, right?
Yes. Each invocation of the function GetNewDeviceId establishes a new execution context (called a "closure"), with its own copies of the parameters req and res and the variables response and failAt.
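Putting the two pieces together, a minimal sketch of the polling loop with both the per-attempt delay and the 30-minute overall timeout could look like this (the function name pollForDeviceId is illustrative, and service.GetNewDeviceId is assumed to return a promise as in the question):

const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

// Poll until the service returns a value, waiting 30 seconds between
// attempts and giving up after 30 minutes overall.
async function pollForDeviceId(ip, res) {
  const failAt = Date.now() + 30 * 60 * 1000; // 30 minutes from now
  while (Date.now() < failAt) {
    try {
      const response = await service.GetNewDeviceId(ip); // assumed to return a promise
      if (response["value"] != null) {
        return response; // condition met, caller can move on to the next step
      }
    } catch {
      // Ignore transient errors and simply try again after the delay.
    }
    await delay(30000); // wait 30 seconds between attempts
  }
  // Overall timeout reached: report failure.
  res.status(500).send("Failure");
  return null;
}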

Retry HTTP request with backoff (NestJS - axios - rxjs) throws socket hang up

[Second update]: I solved the issue by implementing retry with promises and try & catch.
[First update]:
I tried the retry mechanism with an HTTP request with content-type: application/json and it works! But my issue is with content-type form-data.
I guess it's similar to this problem: Axios interceptor to retry sending FormData
Services architecture
I'm trying to make an HTTP request to service-a from a NestJS app,
and I want to implement retry-with-backoff logic.
To reproduce a service-a failure, I restart its Docker container and then make the HTTP request.
The retry logic is implemented as 3 retries.
The first attempt, while service-a is restarting, throws a 405 service not available error and triggers a retry.
All 3 retries then fail with a socket hang up error.
HTTP request code using axios nestjs wrapper lib
retryWithBackOff rxjs operator implementation
The first call throws a 405 Service Unavailable error.
Then the application starts retrying.
The first retry fires after service-a has started, but fails with a socket hang up error.
The first, second, and third retries all fail with socket hang up
(3 socket hang up errors).
My expected behavior is: when service-a has started and the first retry fires, it should succeed with a successful response.
Notice that the 3 retries don't log anything to the Nginx server!
While your solution probably works, it could be improved in terms of single responsibility, which RxJS can help with. I use an adapted version of a code snippet I once found on the web (I can't find the original source any more).
interface GenericRetryStrategy {
  getAttempt?(): number;
  maxRetryAttempts?: number;
  scalingDuration?: number;
  maxDuration?: number;
  retryFormula?: RetryFormula;
  excludedStatusCodes?: number[]; // All errors with these codes will circumvent retry and just return the error
}

const genericRetryStrategy$ =
  ({
    getAttempt,
    maxRetryAttempts = 3,
    scalingDuration = 1000,
    maxDuration = 64000,
    retryFormula = 'constant', // time-to-retry-count interpolation
    excludedStatusCodes = [], // All errors with these codes will circumvent retry and just return the error
  }: GenericRetryStrategy = {}) =>
  (error$: Observable<unknown>): Observable<number> =>
    error$.pipe(
      switchMap((error, i) => {
        const retryAttempt = getAttempt ? getAttempt() : i + 1;
        // If the maximum number of retries has been reached,
        // or the response is an error code we don't wish to retry, rethrow the error.
        if (
          retryAttempt > maxRetryAttempts ||
          excludedStatusCodes.find(e => e === error.code)
        ) {
          return throwError(error);
        }
        const retryDuration = getRetryCurve(retryFormula, retryAttempt);
        const waitDuration = Math.min(
          maxDuration,
          retryDuration * scalingDuration,
        );
        // Retry after 1000ms, 2000ms, etc.
        return timer(waitDuration);
      }),
    );
You would then call it like this:
const retryThreeTimes$ = genericRetryStrategy$({
  maxRetryAttempts: 3,
  excludedStatusCodes: [HttpStatus.PayloadTooLarge, HttpStatus.NotFound] // This will throw the error straight away
});

this.setupUploadAttachements(url, clientApiKey, files, toPoTenantId).pipe(retryWhen(retryThreeTimes$))
This function/operator can now be re-used for all kinds of requests. It is very flexible. It also makes your operator logic more readable, since the complex retry logic sits somewhere else and does not “pollute” your pipe.
You might have to make some adjustments, since axios seems to return a different error payload (at least judging from your code examples). Also, if I understood your code correctly, you actually don't want to throw an error when the above error codes apply. In that case, you could add another catchError after the retryWhen and filter these codes, while returning of([]).
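For instance, a minimal sketch of that extra catchError step, reusing the names from the snippets above and assuming (as the strategy above does) that the status code is available on error.code, could look like:

import { of, throwError } from 'rxjs';
import { catchError, retryWhen } from 'rxjs/operators';

this.setupUploadAttachements(url, clientApiKey, files, toPoTenantId).pipe(
  retryWhen(retryThreeTimes$),
  // Swallow the excluded status codes instead of failing the stream
  // and emit an empty result; rethrow everything else.
  catchError(error =>
    [HttpStatus.PayloadTooLarge, HttpStatus.NotFound].includes(error.code)
      ? of([])
      : throwError(error),
  ),
)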

How to extend the block time after max invalid login attempts (node-rate-limiter-flexible)

Basically, I want to protect my login API endpoint from brute-force attacks. The idea is that when a user has consumed the maximum number of invalid attempts (say 5 retries), I want to lock the user and extend the block time by 30 seconds for each further invalid attempt.
I am protecting that endpoint with the node-rate-limiter-flexible package. (You can suggest a better library for this.)
const opts = {
  points: 5, // 5 points
  duration: 30, // per 30 seconds
};
const rateLimiter = new RateLimiterMemory(opts);

rateLimiter.consume(userid)
  .then((rateLimiterRes) => {
    // Login endpoint code
  })
  .catch((rateLimiterRes) => {
    // Too many invalid attempts
  });
The above code works fine for a maximum of 5 invalid attempts and then blocks the user for 30 seconds. But what I want is that once the user has consumed the maximum invalid attempts, each further invalid attempt extends the block time by another 30 seconds. (Meaning the time gradually increases with each invalid attempt, up to a maximum of 1 day.)
Increase rateLimiterRes.msBeforeNext by 30 seconds every time the userId is blocked, and use the rateLimiter.block method to set the new duration.
rateLimiter.consume(userid)
  .then((rateLimiterRes) => {
    // Login endpoint code
  })
  .catch((rateLimiterRes) => {
    const newBlockLifetimeSecs = Math.round(rateLimiterRes.msBeforeNext / 1000) + 30;
    rateLimiter.block(userid, newBlockLifetimeSecs)
      .then(() => {
        // Too many invalid attempts
      })
      .catch(() => {
        // In case a store-backed limiter is used (not in-memory)
      });
  });
There is also an example of Fibonacci-like increasing of the block duration on the wiki.
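To honor the 1-day maximum mentioned in the question, the catch block can cap the new block lifetime. A small sketch of that adjustment, using the same rateLimiter and userid as above:

.catch((rateLimiterRes) => {
  const ONE_DAY_SECS = 24 * 60 * 60;
  // Extend the current block by 30 seconds, but never beyond 1 day.
  const newBlockLifetimeSecs = Math.min(
    Math.round(rateLimiterRes.msBeforeNext / 1000) + 30,
    ONE_DAY_SECS
  );
  return rateLimiter.block(userid, newBlockLifetimeSecs);
});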

How to retry an HTTP request until a condition is met using RxJS

I want to retry an HTTP request until some data exists, up to 10 times, with a delay of 2 seconds between each retry.
const $metrics = from(axios(this.getMetrics(session._id, sessionRequest._id, side)));

const res = $metrics.pipe(
  map((val: any) => {
    console.log("VALUE:", val.data.metrics.length);
    if (val.data.metrics.length === 0) {
      throw val;
    }
    return val;
  }),
  retryWhen((errors) => errors.pipe(delay(2000), take(10))),
).subscribe();
I am trying to follow the example in the documentation: https://www.learnrxjs.io/operators/error_handling/retry.html
I create the $metrics observable from an axios HTTP promise.
I use the map operator to check whether the response of the HTTP request matches my retry condition, val.data.metrics.length === 0. If it does, it throws an error.
I retry the HTTP request up to 10 times with a 2-second delay.
I expect the metrics array to have data after 3-4 retries, but when I log the response in my console I get the following:
VALUE: 0
I'm not sure this is even making multiple HTTP requests, because the console log only returns one output instead of 10.
UPDATE
I've updated the code to use retryWhen instead of retry; it delays 2 seconds and will only take 10 errors before stopping.
Now I believe the problem is that it only makes 1 HTTP request, because the console log only returns a single output.
Try using defer():
const $metrics = defer(() => from(axios(this.getMetrics(session._id, sessionRequest._id, side))))
One thing to point out: you should inspect your network tab and see whether the request is actually made on each retry. Your console.log is in the map() operator, which is skipped when an error is thrown, which could be why you don't see the console.log output. You can try out the example below.
import { from } from 'rxjs';
import { tap, retryWhen, delay, take } from 'rxjs/operators';

// Make a request that will fail, so the retry logic kicks in.
const source = from(fetch('http://kaksfk')).pipe(
  tap(val => console.log(`fetching, you won't see this`)),
);
const example = source.pipe(
  retryWhen(errors =>
    errors.pipe(
      // log error message
      tap(val => console.log(`retrying`)),
      // retry in 2 seconds
      delay(2000),
      take(5),
    ),
  ),
);
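Putting this together for the original question, a sketch of the full pattern could look like the following; defer re-creates the axios promise on every resubscription, so each retry actually makes a new HTTP request (getMetrics, session, sessionRequest, and side are taken from the question):

import { defer, from } from 'rxjs';
import { map, retryWhen, delay, take } from 'rxjs/operators';

const metrics$ = defer(() =>
  from(axios(this.getMetrics(session._id, sessionRequest._id, side))),
).pipe(
  map((val: any) => {
    // Treat an empty metrics array as an error so retryWhen kicks in.
    if (val.data.metrics.length === 0) {
      throw val;
    }
    return val;
  }),
  // Wait 2 seconds between attempts, at most 10 retries.
  retryWhen(errors => errors.pipe(delay(2000), take(10))),
);

metrics$.subscribe(val => console.log("VALUE:", val.data.metrics.length));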

Azure Search .NET SDK - How to use "FindFailedActionsToRetry"?

Using the Azure Search .NET SDK, when you try to index documents you might get an IndexBatchException.
From the documentation here:
try
{
    var batch = IndexBatch.Upload(documents);
    indexClient.Documents.Index(batch);
}
catch (IndexBatchException e)
{
    // Sometimes when your Search service is under load, indexing will fail for some of the documents in
    // the batch. Depending on your application, you can take compensating actions like delaying and
    // retrying. For this simple demo, we just log the failed document keys and continue.
    Console.WriteLine(
        "Failed to index some of the documents: {0}",
        String.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
}
How should e.FindFailedActionsToRetry be used to create a new batch to retry the indexing for failed actions?
I've created a function like this:
public void UploadDocuments<T>(SearchIndexClient searchIndexClient, IndexBatch<T> batch, int count) where T : class, IMyAppSearchDocument
{
    try
    {
        searchIndexClient.Documents.Index(batch);
    }
    catch (IndexBatchException e)
    {
        if (count == 5) // we will try to index 5 times and give up if it still doesn't work
        {
            throw new Exception("IndexBatchException: Indexing Failed for some documents.");
        }

        Thread.Sleep(5000); // we got an error, wait 5 seconds and try again (in case it's an intermittent or network issue)

        var retryBatch = e.FindFailedActionsToRetry<T>(batch, arg => arg.ToString());
        UploadDocuments(searchIndexClient, retryBatch, count++);
    }
}
But I think this part is wrong:
var retryBatch = e.FindFailedActionsToRetry<T>(batch, arg => arg.ToString());
The second parameter to FindFailedActionsToRetry, named keySelector, is a function that should return whatever property on your model type represents your document key. In your example, your model type is not known at compile time inside UploadDocuments, so you'll need to change UploadDocuments to also take the keySelector parameter and pass it through to FindFailedActionsToRetry. The caller of UploadDocuments would need to specify a lambda specific to type T. For example, if T is the sample Hotel class from the sample code in this article, the lambda must be hotel => hotel.HotelId, since HotelId is the property of Hotel that is used as the document key.
Incidentally, the wait inside your catch block should not wait a constant amount of time. If your search service is under heavy load, waiting for a constant delay won't really help to give it time to recover. Instead, we recommend exponentially backing off (e.g., the first delay is 2 seconds, then 4 seconds, then 8 seconds, then 16 seconds, up to some maximum).
I've taken Bruce's recommendations from his answer and comment and implemented them using Polly.
Exponential backoff up to one minute, after which it retries every other minute.
Retry as long as there is progress. Timeout after 5 requests without any progress.
IndexBatchException is also thrown for unknown documents. I chose to ignore such non-transient failures since they are likely indicative of requests which are no longer relevant (e.g., removed document in separate request).
int curActionCount = work.Actions.Count();
int noProgressCount = 0;

await Polly.Policy
    .Handle<IndexBatchException>() // One or more of the actions has failed.
    .WaitAndRetryForeverAsync(
        // Exponential backoff (2s, 4s, 8s, 16s, ...) and constant delay after 1 minute.
        retryAttempt => TimeSpan.FromSeconds( Math.Min( Math.Pow( 2, retryAttempt ), 60 ) ),
        (ex, _) =>
        {
            var batchEx = ex as IndexBatchException;
            work = batchEx.FindFailedActionsToRetry( work, d => d.Id );

            // Verify whether any progress was made.
            int remainingActionCount = work.Actions.Count();
            if ( remainingActionCount == curActionCount ) ++noProgressCount;
            curActionCount = remainingActionCount;
        } )
    .ExecuteAsync( async () =>
    {
        // Limit retries if no progress is made after multiple requests.
        if ( noProgressCount > 5 )
        {
            throw new TimeoutException( "Updating Azure search index timed out." );
        }

        // Only retry if the error is transient (determined by FindFailedActionsToRetry).
        // IndexBatchException is also thrown for unknown document IDs;
        // consider them outdated requests and ignore.
        if ( curActionCount > 0 )
        {
            await _search.Documents.IndexAsync( work );
        }
    } );
