The polling time is not recorded in the response time. How can I capture it? Below is my scenario.
I send a request every 500 ms to a server to check whether the result for the query is available (indicated by a status of "completed"). When the status is completed, I send another request to fetch the result.
Problem: the polling time is not captured as part of the response time. So if I waited (polled) for 5 minutes to get the result, those 5 minutes should be added to the response time, since that is the wait a user would see when using the system from a UI.
You need to put your request and polling loop in a group and set useGroupDurationMetric to true in gatling.conf.
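For reference, the setting lives in the charting section of gatling.conf. A sketch of the fragment (check the gatling.conf bundled with your Gatling version for the exact path and comments):

```
gatling {
  charting {
    # report the full group duration (including pauses/polling)
    # instead of the cumulated response time of its requests
    useGroupDurationMetric = true
  }
}
```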
It will be, if you start your timer before the looping structure and stop it when you have your final result.
We have been doing this in LoadRunner for the better part of two decades, for cases where a web client polls every n seconds for a report to complete.
Pseudocode:

start_timer(timer_name)
do
{
    sleep(some seconds or milliseconds)
    check report status for done
} while (not done)
stop_timer(timer_name)
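The same pattern in plain code, as a minimal sketch: `checkStatus` is a hypothetical async function standing in for your status request, which eventually returns "completed".

```javascript
// Promise-based sleep so the polling wait stays inside the timed span.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Returns the total elapsed time, polling included, in milliseconds.
async function waitForResult(checkStatus, pollIntervalMs = 500) {
  const start = Date.now();              // start the timer BEFORE the loop
  while ((await checkStatus()) !== "completed") {
    await sleep(pollIntervalMs);         // polling wait counts toward the total
  }
  return Date.now() - start;             // stop only once the result is ready
}
```

Because the timer brackets the whole loop, the reported duration is the user-perceived wait, not just the final fetch.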
Related
For an async web_url, LoadRunner has added web_reg_async_attributes. Within Poll_0_ResponseCB I wait until aHttpStatusCode == 200. This is needed because it is an XHR request where the server transfers all the data asynchronously to the browser. The transaction is considered finished only AFTER all the data has been received in the GET request.
The request looks as follows:
web_reg_async_attributes("ID=Poll_0",
    "Pattern=Poll",
    "URL=https://[URL]/api2/notifications/GetUnreadNotifications",
    "PollIntervalMs=500",
    "RequestCB=Poll_0_RequestCB",
    "ResponseCB=Poll_0_ResponseCB",
    LAST);

web_url("GetUnreadNotifications_2",
    "URL=https://[URL]/api2/notifications/GetUnreadNotifications",
    "Resource=0",
    "RecContentType=application/json",
    "Referer=https://[URL]/",
    "Snapshot=t14.inf",
    "Mode=HTML",
    LAST);

web_sync("ParamCreated=stopAsync", "RetryIntervalMs=500", "RetryTimeoutMs=120000", LAST);
web_stop_async("ID=Poll_0", LAST);
LoadRunner counts the polling mechanism as Wasted Time, but in reality it is polling until all data has been received, and I need to include that in the actual Duration.
How can I include the web_sync polling part in Duration instead of Wasted Time?
The transaction ended with a "Pass" status (Duration: 33,9532; Wasted Time: 33,3178).
Yes, this API is that slow...
I believe wasted time is the time spent executing the LoadRunner API calls themselves. This came about in the late 1990s, when a competing tool laid marketing landmines at customers for the Mercury team for not tracking the time to execute the APIs the way their "superior tool" did at the time.
Well, you place a barrier in front of Mercury sales, engineering gets involved, and "boom!": we get tracked wasted time, along with tracked think time. You could widen your polling interval from 500 ms to 750 ms to put less overhead on your polling execution.
If I have 10 threads running, each executing 5 HTTP requests, and at the end of the execution I want to know the time taken by each thread to execute those 5 HTTP requests, how can I do that? Can someone please help?
Just put these 5 HTTP requests under a Transaction Controller.
You might also want to add the ${__threadNum} function to the Transaction Controller's label (for example, a label like "Transaction ${__threadNum}") in order to distinguish the aggregate response times of individual threads.
I use gunicorn to allow my Flask REST API to process multiple requests at the same time. I have 5 workers in my gunicorn config (2 x $(NUM_CORES) + 1). I measured the response times and here are my results:
The response time for 1 request is 20s.
If I send 5 requests at the same time, the response time for each request is 55s.
If I send 6 requests at the same time, the response time for the first 5 requests is 55s and the response time for the 6th request is 75s ( = 55 + 20 )
I don't understand why 5 requests at the same time take 55s. I expected a response time of 20 seconds, as for 1 request, thinking the 5 requests would be processed in parallel.
55 seconds is almost 3 times more than 20 seconds, my individual processing time.
I don't know much about multithreading. Can someone explain to me why the response time for parallel tasks is so much longer than the individual processing time?
Thanks
The reason I had these results is that 5 workers were too many in my case.
I chose 5 because of the gunicorn recommendation here:
Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with. While not overly scientific, the formula is based on the assumption that for a given core, one worker will be reading or writing from the socket while the other worker is processing a request.
In my case, the requests I was sending at the same time were all there to process parameters (CPU-bound work). Therefore I changed the number of workers to 2, and I now have results that make sense to me:
The response time for 1 request is 20s.
If I send 2 requests at the same time, the response time for each request is 20s.
If I send 3 requests at the same time, the response time for the first 2 requests is 20s and the response time for the 3rd request is 40s.
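These numbers follow an idealized batching model: with one CPU-bound request per core, concurrent requests are served in batches of `workers`, and each extra batch adds one full processing time. (This ignores the contention overhead that made the earlier 5-worker runs take 55 s rather than 20 s.) A sketch of that model:

```javascript
// Rough model: CPU-bound requests complete in batches of `workers`,
// assuming one core per worker and no overlap from I/O waits.
function batchResponseTimes(numRequests, workers, perRequestSecs) {
  const times = [];
  for (let i = 0; i < numRequests; i++) {
    const batch = Math.floor(i / workers) + 1; // 1-based batch index
    times.push(batch * perRequestSecs);
  }
  return times;
}

// batchResponseTimes(3, 2, 20) → [20, 20, 40], matching the results above
```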
Can the execution of an ExpressJS method be delayed for 30 days or more just by using setTimeout?
Let's say I want to create an endpoint /sendMessage that sends a message to my other app after a timeout of 30 days. Will my ExpressJS method execution last long enough to fire this message after that delay?
If your server runs continuously for 30 days or more, a timer can cover that. Note, however, that Node.js caps a single setTimeout() delay at 2147483647 ms (about 24.8 days); delays larger than that are set to 1 ms, so a 30-day wait has to be split into shorter chained timeouts. Either way, it is probably not smart to rely on the fact that your server never, ever has to restart.
There are third-party programs/modules designed explicitly for this. If you don't want to use one of them, then what I have done in the past is write each future firing time into a JSON file and set a timer for it with setTimeout(). If the timer successfully fires, I remove that time from the JSON file.
So, at any point in time, the JSON file always contains a list of times in the future that I want timers to fire for. Any timer that fires is immediately removed from the JSON file.
Anytime my server starts up, I read the times from the JSON file and reconfigure the setTimeout() for each one.
This way, even if my server restarts, I won't lose any of the timers.
In case you were wondering: the way Node.js creates timers, it does not cost you anything to have a bunch of future timers configured. Node.js keeps the timers in a sorted linked list, and the event loop only checks the time of the timer at the front of that list; the rest of the timers are not looked at until they reach the front. So the only cost of having lots of future timers is the insertion of a new timer into the sorted list; there is no ongoing cost in the event loop for having many pending timers present.
I am making a bot on Dialogflow with a webhook. I get an error: DEADLINE_EXCEEDED. My webhook takes a bit over 5 seconds to return a response. Is there a way to allow a longer time than 5 seconds?
This is not possible. One workaround (if you have, for example, a background task that takes some time) is to send back, before the 5-second timeout, an Event. This triggers another call to the webhook, so you get another 5 seconds to finish your background process.
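A minimal sketch of that workaround, assuming a Dialogflow ES (v2) fulfillment response: the followupEventInput field re-triggers the webhook via an event, and "CONTINUE_JOB" is a hypothetical event name you would attach to an intent in the agent.

```javascript
// Build a webhook response that fires a followup event instead of a
// final answer, buying another 5-second window for background work.
function buildFollowupResponse(eventName, parameters = {}) {
  return {
    followupEventInput: {
      name: eventName,
      languageCode: "en-US",
      parameters, // carry state (e.g. a job id) into the next webhook call
    },
  };
}
```

In an Express handler you would kick off (or continue) the background work, then respond with something like res.json(buildFollowupResponse("CONTINUE_JOB", { jobId })) before the deadline, and return the real answer only on the call where the work has finished.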