Behaviour of timers between 2 consecutive requests in a JMeter load test - multithreading

I have a Thread Group with 2 HTTP samplers (Registration and Submit button).
The scenario is: the user goes to the registration page, fills in the form, and clicks the Submit button.
I recorded a script for this using the HTTP Proxy Server, with the waiting time modelled by a Constant Timer (5 seconds).
When I try it with a single user the scenario works fine, i.e. the user registers, waits 5 seconds, and clicks Submit.
But when I run a load test with 10 users the requests do not execute sequentially: according to the View Results Tree listener, some registrations execute at the same time, then some Submit requests execute, and so on.
How can I make this scenario work for multiple users?

This is expected behaviour, as JMeter threads (virtual users) are completely independent of each other.
JMeter acts as follows:
It launches the threads defined in the Thread Group within the bounds of the ramp-up period.
Each thread executes the samplers from top to bottom (or according to the Logic Controllers).
When there are no more samplers to execute and/or loops to iterate, the thread is shut down.
So each thread will execute Registration first, then Submit. Depending on your ramp-up period you might run into a situation where some users are executing "Registration" while others are doing "Submit", making a mess of your results.
You can do one of the following:
Add the __threadNum function to your sampler names; at least you won't be confused about the order.
Tune your ramp-up period so there are at most 2 users online at a time.
Add a Synchronizing Timer to your requests. This way all 10 users will simultaneously execute each request in the timer's scope.
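For example, the sampler names could embed the thread number via the __threadNum function, so each virtual user's requests are distinguishable in the listener (the naming scheme here is just illustrative):

```
Registration - user ${__threadNum}
Submit - user ${__threadNum}
```

With these names, the View Results Tree output makes it obvious which user each sample belongs to, even when samples from different threads interleave.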

Related

Google Cloud Run - One container handling multiple similar requests with queue for each user

I have a SERVICE that receives requests from a webhook, and it is currently deployed across separate Cloud Run containers. These separate containers are built from the exact same image; however, each instance processes data separately for one particular account.
This is because processing a request takes ~3-5 minutes, and if the user sends in more requests, the next one needs to wait for the existing process to complete for that particular user, to avoid race conditions. The container can still receive webhooks, but the actual processing of the data needs to be done one at a time per account.
Is there a way to reduce the container count, for example by using one container to process all the requests, while still ensuring it processes one task per user at a time and waits for that task to complete before processing the next request from the same user?
To put it another way:
Multiple tasks can run across all the users.
However, only 1 task per user is processed at a time; once it completes, the next task for that user can be processed.
I was thinking of tracking the tasks in a Redis cache, but with Cloud Run being stateless, I am not sure that is the right way to go.
Another option is separating the requests from the actual work - master/worker - and having the worker report back to the master once a task is completed for the user, across 2 images (using concurrency to process multiple tasks across the users); however, that might mean that I would have to increase the Cloud Run timeout.
Good to hear any other suggestions.
Apologies if this isn't clear; feel free to ask for more information.
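One common in-process pattern for the per-user serialization described above is to chain each user's tasks on a promise, so tasks for the same user run strictly one after another while different users proceed in parallel. A minimal sketch (the function names are illustrative; on Cloud Run you would still need sticky routing or an external queue such as Cloud Tasks to make this reliable across instances):

```javascript
// Tail promise of the pending work for each user id.
const queues = new Map();

// Enqueue a task (an async function) for a given user.
// Tasks for the same user run one at a time, in order;
// tasks for different users can run concurrently.
function enqueuePerUser(userId, task) {
  const tail = queues.get(userId) || Promise.resolve();
  // Run the task once the previous one settles (even if it failed).
  const run = tail.then(task, task);
  // Keep the chain alive after errors so later tasks still run.
  queues.set(userId, run.catch(() => {}));
  return run;
}
```

Each webhook handler would call something like `enqueuePerUser(accountId, () => process(payload))` and acknowledge the webhook immediately; the promise chain then acts as a per-account FIFO queue inside the container.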

Very high max response time and errors on looped form submission

My requirement is to run 90 concurrent users executing multiple scenarios (15 scenarios) simultaneously for 30 minutes on a virtual machine, so for some of the threads I use the Concurrency Thread Group and for others the normal Thread Group.
Now my issues are:
1) After I execute all 15 scenarios, the max response time for each scenario is very high (>40 sec). Any suggestions on reducing this high max response time?
2) One of the scenarios submits a web form. There is no issue when submitting only one, but during the 90-concurrent-user execution some of the web form submissions get a 500 error code. Is the error because I use looping to achieve the 30-minute duration?
In order to reduce the response time you need to find the reason for it. The causes could include:
Lack of resources like CPU, RAM, etc. - make sure to monitor resource consumption, e.g. with the JMeter PerfMon Plugin.
Incorrect configuration of the middleware (application server, database, etc.) - all these components need to be properly tuned for high load. For example, if you set the maximum number of connections on the application server to 10 and you have 90 threads, then 80 threads will queue up waiting for the next available executor; the same applies to the database connection pool.
Use a profiler tool to inspect what's going on under the hood and why the slowest functions are so slow; it might be that your application's algorithms are not efficient enough.
If your test succeeds with a single thread and fails under load, that definitely indicates a bottleneck. Try increasing the load gradually and see how many users the application can support without performance degradation and/or errors. HTTP 5xx status codes indicate server-side errors, so it is also worth inspecting your application logs for more insight.
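The queuing effect described above (many callers contending for a fixed connection pool) can be illustrated with a minimal counting-semaphore sketch; the class and names here are illustrative, not part of JMeter or any particular server:

```javascript
// Minimal counting semaphore: at most `max` callers hold it at once;
// the rest queue up, like threads waiting for a pooled connection.
class Semaphore {
  constructor(max) {
    this.max = max;       // pool size, e.g. 10 connections
    this.inUse = 0;       // currently held permits
    this.waiting = [];    // queued callers, FIFO
  }
  async acquire() {
    if (this.inUse < this.max) {
      this.inUse++;
      return;
    }
    // Pool exhausted: wait in line until release() wakes us up.
    await new Promise(resolve => this.waiting.push(resolve));
    this.inUse++;
  }
  release() {
    this.inUse--;
    const next = this.waiting.shift();
    if (next) next();
  }
}
```

With 90 callers and `max = 10`, at any moment 10 tasks hold a permit and the remaining callers sit in `waiting` - their wall-clock time (and hence the measured response time) grows even though the server itself is not slow.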

Advice on JMeter test plan approach structure

I'm new to JMeter and am currently trying to get the best use out of it to create an API performance test plan.
Let's take the following scenario.
We have an API which returns data such as part availability and order details for a range of parts.
I want to analyse the response times of the API under different load patterns.
Let's say we have 5 users.
- Each user sends a series of repeated requests to the API.
- The request made by each user is unique to that user,
i.e.
User 1 requests parts a, b, c.
User 2 requests parts d, e, f... and so on.
- All users are sending their requests at the same time.
The way I have approached this is to create 5 separate Thread Groups, one per user.
Within each Thread Group is the specific HTTP request that gets sent by that user.
Each HTTP request is governed by its own Loop Controller where I have set the number of times the request is to be sent.
Since I want all users to send their requests at once, I have unchecked
"Run Thread Groups consecutively" in the main test plan. At a glance the test plan looks something like this:
test plan view
Since I'm new to using JMeter and performance testing, I have a few questions about my approach:
Is the way I have structured the test plan suitable and maintainable in terms of increasing the number of users that I may wish to test with?
Or would it have been better to have a single Thread Group with 5 child Loop Controllers, each containing the user-specific request body data?
With my current setup, each Thread Group uses the default ramp-up time of 1 second. I figured this is okay since each Thread Group represents only one user. However, I think this might delay the start of each test run. Are there any better ways to handle this, such as using the Scheduler, or adjusting the ramp-up time of each Thread Group so that they all start at exactly the same time?
Thanks in advance for any advice.
Your approach is correct.
If you want the requests to run in parallel they will have to be in separate Thread Groups. Each Thread Group should model a use case; in your case, the use case is a particular mix of requests.
By running the test for a sufficiently long time you will not feel the effects of the ramp-up time.
First of all, your test needs to be realistic: it should represent real users (or user groups) as closely as possible. If the test does that, it is a good test, and vice versa. Something like:
If User1 and User2 represent 2 different groups of users (e.g. User1 is authenticated and User2 is not, or User1 is an admin and User2 is a guest), they should go into different Thread Groups.
It is better to use Thread Group iterations instead of Loop Controllers, as some test elements, like the HTTP Cookie Manager, have settings such as "Clear cookies each iteration" which don't respect iterations produced by a Loop or While Controller; they only consider Thread Group-driven iterations.
The only way to guarantee sending requests at the same time is putting them under one Thread Group and using a Synchronizing Timer.
When it comes to a real load test, you should always add the load gradually, so you can correlate metrics like response time, throughput and error rate with the increasing number of virtual users. The same approach applies to ramping down: you should not turn off the load all at once, so you can see how your application recovers after the load. You might want to use some custom Thread Groups available via the JMeter Plugins project, like:
Stepping Thread Group
Ultimate Thread Group
They provide a flexible and convenient way to set the desired load pattern.

JMeter Load Test on online purchasing

I would like to discuss some issues that I am facing. I am currently using JMeter to test train ticket purchases on a website. My question is how I should test the website. For example, a user is required to fill in a quantity and some information in order to purchase a train ticket. After filling it in, there is a "Confirm" button to click which links to the payment gateway. I want to test whether the server is able to handle 100 clicks of the Confirm button in 1 second without the performance slowing down. I would like to view the latency (in ms) of the server when users press the Confirm button. I have tried to use the HTTPS Test Recorder, but it seems like the wrong way to test the server. Can anyone provide a solution or guide me? Thank you very much.
The HTTP(S) Test Script Recorder is good for building the test scenario "skeleton" - it basically speeds up the creation of the initial test plan draft. Otherwise you would have to use a sniffer tool to capture browser requests and manually create the relevant HTTP Request samplers.
The first thing you need to do is verify how the recorded (or developed) script works for a single user. Add a View Results Tree listener and inspect the responses. If everything is fine, you're good to go. If not (i.e. if there are errors, or you are hitting one page instead of performing the transaction), you'll need to look into the requests a little more deeply and may have to do some correlation - the process of extracting a piece of a response with a JMeter PostProcessor, converting it into a JMeter Variable and adding it as a parameter to the next request.
Once you're happy with how your scenario behaves, don't forget to disable the View Results Tree listener. Now it's time to define a load pattern. JMeter works as follows:
it kicks off threads (each thread represents a virtual user) within the time frame defined by the ramp-up time;
each thread executes the samplers from top to bottom (or according to the Logic Controllers);
if there are no more samplers to execute or loops to iterate, the thread is shut down.
So make sure you provide enough loops that your threads have enough work to do; otherwise you may run into a situation where half of the threads have already finished their jobs while half of them haven't even started. The Ultimate Thread Group, available via JMeter Plugins, is extremely helpful here.
If your aim is to see whether your server is capable of serving 100 concurrent "Confirm" requests in a reasonable time frame and without errors, use a Synchronizing Timer.
Use non-GUI mode to run your test. After the test completes you can open the JMeter GUI again and add e.g. an Aggregate Report listener, which will calculate and visualise metrics like response time, connect time, latency, etc.
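A typical non-GUI run looks like this (the .jmx and .jtl file names are placeholders):

```
jmeter -n -t purchase.jmx -l results.jtl
```

The -n flag selects non-GUI mode, -t points at the test plan, and -l writes the sample results to a file you can later load into a listener in the GUI.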

Node/Express: running specific CPU-intensive tasks in the background

I have a site that makes the standard data-bound calls, but it also has a few CPU-intensive tasks which are run a few times per day, mainly by the admin.
These tasks include grabbing data from the db, running a few time-consuming different algorithms, then reuploading the data. What would be the best method for making these calls and having them run without blocking the event loop?
I definitely want to keep the calculations on the server so web workers wouldn't work here. Would a child process be enough here? Or should I have a separate thread running in the background handling all /api/admin calls?
The basic answer to this scenario in Node.js land is to use the core cluster module - https://nodejs.org/docs/latest/api/cluster.html
It is a convenient API for:
easily launching worker Node.js instances on the same machine (each instance has its own event loop)
keeping a live communication channel for short messages between instances
This way, any work done in a child instance will not block your master event loop.
