How to increase number of threads directly through CSV file - multithreading

I have a test plan in JMeter with around 50 thread groups (each one individual), and for each thread group I want to adjust the thread count depending on the load we want to trigger.
Sometimes it's 10, sometimes it's 20, and so on. I already have a CSV Data Set Config through which I pass parameters like username and password.
So every time I run the test I have to update the thread count of each individual thread group manually.
Is there any way I can drive this from a CSV file?

Use "jp#gc - Variables From CSV File" plugin to get the threads and pass it to the thread group.
Hope this helps.
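As a side note (this is a suggestion, not part of the original answer): JMeter can also take the thread count from a property instead of a CSV file. Reference the property in the Thread Group, where 10 is only the default used when the property is not passed:
Thread Group-->Number of Threads = ${__P(threads,10)}
Then set the value on the command line when you start the run (test.jmx is just a placeholder plan name):
jmeter -n -t test.jmx -Jthreads=20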

Related

how to format multiple files via multithreading

I'm using multithreading to change the case of multiple text files and comparing the time required depending on the number of threads I use. I'm not able to understand how to select a set of files for processing.
What I do (I have done this using C/C++, Java, and Python); a minimal Java sketch follows the list:
1. Create a queue with enough space to hold all of the filenames.
2. Put all of the filenames in the queue.
3. Create and start the number of threads you want.
4. Each thread needs to know where the queue is.
5. A thread tries to get a filename from the queue.
6. If the queue is empty, the thread exits.
7. Otherwise the thread processes the file, then goes back to step 5.
8. Wait for the threads to finish.
That's it.
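None of this is from the original answer, but here is a minimal Java sketch of that pattern; the file list is a placeholder, and the "processing" is simply upper-casing the file contents, as in the question:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CaseConverter {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder list of files; in practice this would come from a directory scan.
        List<String> filenames = List.of("a.txt", "b.txt", "c.txt");
        int threadCount = Integer.parseInt(args.length > 0 ? args[0] : "4");

        // Steps 1-2: a queue big enough for every filename, pre-filled.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(filenames);

        // Steps 3-4: start the workers; each one holds a reference to the queue.
        Thread[] workers = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++) {
            workers[i] = new Thread(() -> {
                String name;
                // Steps 5-7: poll() returns null when the queue is empty, so the thread exits.
                while ((name = queue.poll()) != null) {
                    try {
                        Path path = Path.of(name);
                        String text = Files.readString(path);
                        Files.writeString(path, text.toUpperCase());
                    } catch (Exception e) {
                        System.err.println("Failed on " + name + ": " + e);
                    }
                }
            });
            workers[i].start();
        }

        // Step 8: wait for all workers to finish.
        for (Thread w : workers) {
            w.join();
        }
    }
}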

Queuing and priority?

How can I prevent a queue (Beanstalk) from blocking other users if the first job takes a long time?
For example, if my first user uploads a file that takes 10 hours to process, how can I avoid making the other users wait 10 hours before their files even start?
For the details: a user uploads a file with n lines. This file is split into small chunks of 1,000 lines, and each chunk is added to a queue. Four workers process the queue simultaneously (which means 4,000 lines at a time).
If the first user uploads a file containing 100,000 lines and the second user uploads a file of 4,000 lines, the second user will have to wait until the workers have finished processing the first file.
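Not part of the original question, but just to make the described setup concrete, here is a minimal Java sketch of splitting an upload into 1,000-line chunks and handing each one to a queue; the enqueue callback is a hypothetical stand-in for the real Beanstalk client:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Splits an uploaded file into 1,000-line chunks; each chunk becomes one queue job
// that the four workers consume concurrently.
public class ChunkEnqueuer {
    static final int CHUNK_SIZE = 1000;

    // 'enqueue' stands in for whatever queue client is actually used.
    public static void enqueueFile(Path file, Consumer<List<String>> enqueue) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            List<String> chunk = new ArrayList<>(CHUNK_SIZE);
            String line;
            while ((line = reader.readLine()) != null) {
                chunk.add(line);
                if (chunk.size() == CHUNK_SIZE) {
                    enqueue.accept(chunk);          // one job per 1,000 lines
                    chunk = new ArrayList<>(CHUNK_SIZE);
                }
            }
            if (!chunk.isEmpty()) {
                enqueue.accept(chunk);              // trailing partial chunk
            }
        }
    }
}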
Is there a way to avoid forcing the second user to wait?
The only solution that comes to my mind is to limit the number of lines to a certain amount (so the wait is not too long) and, if the file is larger than that, create a dedicated instance for that specific user.
How would you do it?

Loop until data set is not in use with JCL

I am working on a mainframe and I need to wait until a dataset is released in order to execute a job automatically. Do you know any simple way to loop in JCL until a dataset is no longer in use? I looked on the web and found some solutions with REXX, but they seemed too complicated for something as simple as what I need. Also, I have never used REXX.
Regards!
P.S. Also, the data set might not exist yet.
Edit: I need this because I run an XCOM job which transfers a file from another system to a mainframe dataset. The problem is that when this job finishes, the file may still be being transferred, and I would like to wait for the transfer to complete before starting the next job. Perhaps by editing the statement of the next job that is associated with the dataset.
The easy way to do this is to ensure that your file transfer package allocates the dataset with an OLD disposition. That will create a system-level enqueue on the dataset and prevent your job from running until the enqueue is released.
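For example (this DD statement is only an illustration with a placeholder dataset name, not something from the original answer): if the next job references the dataset exclusively, e.g. //INFILE DD DSN=YOUR.XFER.DATASET,DISP=OLD, the initiator will hold that job at dataset allocation until the transfer package's enqueue is released, so no explicit looping is needed.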
Many file transfer packages offer some sort of 'file complete' exit that can also trigger a job once a dataset transmission is fully complete.
But you can't loop in JCL. You can in REXX, but that brings a host of issues you have to deal with; it is not at all simple.

jMeter adding threads/users (read from CSV Data) to a running thread group

My problem is quite complex.
The goal is to test how our web site responds to an increasing number of requests from different users.
I can take users/passwords from a CSV Data Set and launch an HTTP request (with variables read from the file).
But I don't want to run the thread group with all users at the same time; instead I want to loop and, at every iteration, add another user from the file to the running thread groups (after some delay).
It seems very difficult to do this with JMeter. Perhaps I'd need to call a custom Java class?
If I understand you correctly, you should just use Ramp-Up. This parameter controls how fast your test will reach the maximum thread count.
As explained in the JMeter documentation:
"The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds."
This Throughput Shaping Timer may also be helpful for you; it lets you schedule the request rate over the duration of the test.
As Jay stated, you can use ramp-up to try to control this, though I am not sure the result will be quite what you are after... it will at least add the startup delay. If you have a single thread, then each row of the CSV will be processed one at a time, in order.
You can set the thread group to 1 thread and loop forever. In the CSV config you can set a single pass and terminate the thread on EOF.
CSV Data Set Config-->Recycle on EOF = False
CSV Data Set Config-->Stop thread on EOF = True
Thread Group-->Loop Count = Forever
Also keep in mind that by using BSF and Beanshell you can exert a great deal of control over JMeter.
You should check out UltimateThreadGroup from jmeter-plugins.

Worker processes called in order - azure

If multiple worker processes have to be called in order, each starting after every task of the previous worker is done (there is a queue containing pointers to blobs, and every worker has multiple instances; please see my previous questions), how should this be done?
Will the Azure fabric do this automatically, or is there a way to set this in the config file?
You just follow the same process that you've already got, but with more layers. If worker 1 reads something from queue 1 and needs to let worker 2 know that it's time to start processing the same file, worker 1 simply puts a message in queue 2.
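A minimal sketch of that chaining, assuming a hypothetical MessageQueue interface in place of the real Azure queue API (the actual client calls will differ):

// MessageQueue is a hypothetical stand-in for the real queue client.
interface MessageQueue {
    String receive();                     // returns null when no message is available
    void send(String message);
}

// Worker 1 reads blob pointers from queue 1, does its own processing step,
// then drops the same pointer onto queue 2 so worker 2 knows the file is ready.
class WorkerOne implements Runnable {
    private final MessageQueue queue1;
    private final MessageQueue queue2;

    WorkerOne(MessageQueue queue1, MessageQueue queue2) {
        this.queue1 = queue1;
        this.queue2 = queue2;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            String blobPointer = queue1.receive();
            if (blobPointer == null) {
                try {
                    Thread.sleep(1000);   // nothing queued yet, poll again
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                continue;
            }
            process(blobPointer);         // worker 1's own step
            queue2.send(blobPointer);     // hand the same file to worker 2
        }
    }

    private void process(String blobPointer) {
        // step-1 work on the blob goes here
    }
}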
Edit: OK, let me see if I fully understand what you're after here. It sounds like what you have is a batch of files that need to go through several processes, but they can't go on to the next step of the process until they've all finished going through the previous step.
If that is the case then, no, there is nothing in Azure that will do that for you automatically.
Because of this, if possible I'd rework my workers so that each file could just be sent on without worrying about what state the other files were in.
If that is not possible, then you need some way of monitoring which files have been completed and which ones are still pending. One way to do this (and hopefully you can expand on this): the code that creates the batch creates a progress row in a table somewhere (SQL Azure or Azure Tables, it doesn't really matter) for each file, sends a message to worker 1, and starts a background task to monitor this table.
When worker 1 finishes processing a file, it updates the relevant row in the monitoring table to say, "Worker 1 finished".
The background thread that was created above waits until all of the rows have "Worker 1 finished" set to true, then creates the messages for Worker 2 and starts looking at the "Worker 2 finished" flag. Rinse and repeat for as many worker steps as you have.
When all steps are finished, you'll probably want the background task to clean up this table and also have some sort of timeout in case a message gets lost somewhere.
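To illustrate, here is a hedged Java sketch of the background monitor described above; ProgressTable is a hypothetical stand-in for the real table access, and the enqueue callback stands in for the real queue client:

import java.util.List;
import java.util.function.BiConsumer;

// Hypothetical view of the progress table: one row per file per batch,
// with a "Worker N finished" flag per step.
interface ProgressTable {
    List<String> filesInBatch(String batchId);
    boolean isStepDone(String file, int step);    // step 1 == "Worker 1 finished", etc.
}

// Waits until every file in the batch has finished the current step, then
// creates the messages for the next worker, step by step.
class BatchMonitor implements Runnable {
    private final String batchId;
    private final int totalSteps;
    private final ProgressTable table;
    private final BiConsumer<Integer, String> enqueueForStep;  // (step, file) -> message for that worker

    BatchMonitor(String batchId, int totalSteps, ProgressTable table,
                 BiConsumer<Integer, String> enqueueForStep) {
        this.batchId = batchId;
        this.totalSteps = totalSteps;
        this.table = table;
        this.enqueueForStep = enqueueForStep;
    }

    @Override
    public void run() {
        for (int step = 1; step <= totalSteps; step++) {
            while (!allDone(step)) {
                sleepQuietly(5000);                // poll the table periodically
            }
            if (step < totalSteps) {
                for (String file : table.filesInBatch(batchId)) {
                    enqueueForStep.accept(step + 1, file);   // messages for the next worker
                }
            }
        }
        // All steps finished: table clean-up and timeout handling would go here.
    }

    private boolean allDone(int step) {
        for (String file : table.filesInBatch(batchId)) {
            if (!table.isStepDone(file, step)) {
                return false;
            }
        }
        return true;
    }

    private static void sleepQuietly(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}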
Although what #knightpfhor is suggesting would do the trick, I would try to go about this in a simpler way, without referencing the names of workers :-)
Specifically, if you already know how many docs need to be processed, I would first create N rows in a table, each holding some info relevant to the current batch and each having its partition key set to the batch id. I'd then put N messages in my queue and let the worker processes pick them up. When each worker is done, it deletes the corresponding row in the table as well. A monitoring process would simply know that a batch has started and do a count every once in a while (if it is not critical; alternatively, each worker could do the count after it finishes removing its row) and spawn a new message in the relevant queue for the next worker role to process.
If you want even more control, you could have a row in your table storing the state of your process (processing files, post-processing, etc.). In this case, I'd store the state transitions in a queue and make sure you only make them once. But that's a whole new question altogether.
Hope it helps.
