I want to run a set of thread groups:
--Thread Group A
--Thread Group B
--Thread Group C
--Thread Group D
I want to run A and B together (as in A first and then B), and while this is happening I want to run C and D together.
To do that, you can enable the scheduler checkbox in each Thread Group, fill in a duration, and then configure each group's startup delay to get the ordering you want.
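Outside JMeter, the intended schedule can be sketched with two workers that each run their groups back to back (the group names and durations below are placeholders):

```python
import threading
import time

events = []
events_lock = threading.Lock()

def run_group(name, duration):
    # stand-in for one thread group's load; names and durations are placeholders
    with events_lock:
        events.append(("start", name))
    time.sleep(duration)
    with events_lock:
        events.append(("end", name))

def sequence(*groups):
    # run the listed groups one after another inside a single worker
    for name, duration in groups:
        run_group(name, duration)

# worker 1 runs A then B; worker 2 runs C then D at the same time
w1 = threading.Thread(target=sequence, args=(("A", 0.05), ("B", 0.05)))
w2 = threading.Thread(target=sequence, args=(("C", 0.05), ("D", 0.05)))
w1.start(); w2.start()
w1.join(); w2.join()
```

In JMeter itself the equivalent effect comes from giving B a startup delay equal to A's duration (and D a delay equal to C's).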
Related
Let’s say I have 6 thread groups: A, B, C, D, E, F.
I would like to run them in this order
A
And then B and C concurrently
And then D
And then E and F concurrently
I have a mix of regular thread groups and Concurrency Thread Groups. Is it possible to do this? If so how?
Assuming Run Thread Groups consecutively is unchecked in the Test Plan:
If you put A in a setUp Thread Group (these threads execute before the test proceeds to the regular Thread Groups) and D in a tearDown Thread Group (these threads execute after the test has finished executing its regular Thread Groups), you can achieve steps 1-3.
For step 4, I suggest you execute another JMX file when this one finishes.
I have a question regarding the actual synchronization points in the following C-like pseudocode examples. In our slides, the synchronization point is shown to occur at the point indicated below.
Two-process two-way synchronization (x = y = 0 to start)
Process 1
signal(x);
// Marked as sync point
wait(y);
Process 2
signal(y);
// This arrow isn't as exact, but appears to be near the middle again.
wait(x);
Now for just two-process two-way sync, this seems to make sense. However, when expanding this to 3-process 3-way sync, the logic seems to break down. There are no arrows given in the slide deck.
3-Process 3-Way Synchronization (S0, S1, S2 = 0 to start)
Process 0
signal(S0);
signal(S0);
wait(S1);
wait(S2);
Process 1
signal(S1);
signal(S1);
wait(S0);
wait(S2);
Process 2
signal(S2);
signal(S2);
wait(S0);
wait(S1);
Now I find the sync point can't actually be between the signals and the waits. For example:
Let's say Process 0 runs first and signals S0 once, so S0 = 1. Before the second signal(S0) can run, the process is interrupted and Process 1 runs next. Suppose only one signal(S1) runs before Process 1 is interrupted in turn, so S1 = 1. Now Process 2 runs next and, this time, is not interrupted: both of its signal(S2) calls run, so S2 = 2. Process 2 continues: wait(S0) decrements S0 to 0, and because S0 is not negative, Process 2 is not blocked. Then wait(S1) runs and the same thing happens with S1.
At this point Process 2 is done running. However, Process 0 and Process 1 did not finish their signals. If the sync point is truly in between the signals and the waits, then this solution to 3-process 3-way sync is incorrect.
A similar issue can arise in the solution for 3-process 3-way synchronization that allows each process to run more than one instance of itself at a time. That slide is attached, but I will not explain why the "middle" point in the process can't be the sync point, as I already have a huge wall of text.
Please let me know which way is correct; no amount of googling has given me an answer. I will include all relevant slides.
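For what it's worth, the 3-process scheme can be exercised directly. A minimal Python sketch (the arrival counter and the bookkeeping around it are my additions) checks the property this barrier does guarantee: no process gets past its waits until every process has executed at least its first signal, even though, as described above, one process may finish before the others complete their second signal:

```python
import threading

N = 3
# one counting semaphore per process, all starting at 0, like S0, S1, S2
sems = [threading.Semaphore(0) for _ in range(N)]
arrived = 0
lock = threading.Lock()
counts_at_pass = []

def process(i):
    global arrived
    with lock:
        arrived += 1            # this process has reached the sync point
    for _ in range(N - 1):
        sems[i].release()       # signal(Si), once per other process
    for j in range(N):
        if j != i:
            sems[j].acquire()   # wait(Sj) on each other process's semaphore
    with lock:
        # record how many processes had arrived when this one got through
        counts_at_pass.append(arrived)

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counts_at_pass)   # [3, 3, 3]: nobody passes before all have arrived
```

Each permit a process consumes was released after the releasing process incremented the arrival counter, so every recorded value is 3, whatever the interleaving.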
I need to increment a variable for each thread.
Example:
Thread 1: ${Test_Var} should be 1001
Thread 2: ${Test_Var} should be 1002
Thread 3: ${Test_Var} should be 1003
and so on ..
In the Test Plan I defined some User Defined Variables. There I set up one, ${Start_Test_Var}, with the value 1000.
Now when I start my test, it always counts only to 1001, because the start value is set to 1000.
How can I increment the variable for each thread? I never get past the value 1001 and I have no idea what to do.
JMeter always "remembers" the start variable and counts up from 1000, but I want JMeter to count up from the last value of the variable (1000, 1001, 1002, ...).
I tried to set up a setUp Thread Group with all settings and all User Defined Variables. Then I added a BeanShell Assertion in my "real" thread group, but it didn't work either.
Although my calculation itself works:
[screenshot: calculation of the variable]
Is there a way to override the value of the user defined variable?
Thanks!
JMeter Variables are local to the current Thread Group only. If you want to pass values between Thread Groups, you need to use JMeter Properties instead, i.e.:
the __setProperty() function in the setUp Thread Group to set the value
the __P() function in the "real" thread group to read the value
See the Knit One Pearl Two: How to Use Variables in Different Thread Groups article for more details.
Also be aware of the __counter() function, which can produce a "global" number that increments by 1 each time the function is called.
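The global-counter idea can be illustrated outside JMeter; here is a minimal Python analogue (class and variable names are mine) of a shared counter that hands each thread the next value:

```python
import threading

class GlobalCounter:
    """Rough stand-in for a process-wide counter such as JMeter's __counter()."""
    def __init__(self, start):
        self._value = start
        self._lock = threading.Lock()

    def next(self):
        with self._lock:        # make the increment atomic across threads
            self._value += 1
            return self._value

counter = GlobalCounter(1000)
results = []
results_lock = threading.Lock()

def virtual_user():
    test_var = counter.next()   # each thread picks up the next value
    with results_lock:
        results.append(test_var)

threads = [threading.Thread(target=virtual_user) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))          # [1001, 1002, 1003]
```

The key point is that the counter lives outside any one thread's scope, which is exactly what JMeter Properties give you that Variables do not.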
Problem
Summary: apply a function F in parallel to each element of an array, where F is NOT thread-safe.
I have a set of elements E to process, let's say a queue of them.
I want to process all of these elements in parallel using the same function f(E).
Ideally I could use a map-based parallel pattern, but the problem has the following constraints:
Each element is a pair of two objects (E = (A, B)).
Two elements may share an object (E1 = (A1, B1); E2 = (A1, B2)).
The function f cannot process two elements that share an object, so E1 and E2 cannot be processed in parallel.
What is the right way of doing this?
My thoughts are like so:
1. Trivial thought: keep a set of active As and Bs, and start processing an element only when no other thread is already using its A or B. When you hand an element to a thread, add its A and B to the active set. Pick the first element; if its objects are not in the active set, spawn a new thread, otherwise push it to the back of the queue of elements. Do this until the queue is empty. Will this cause a deadlock? Ideally, when some processing finishes, some elements will become available again, right?
2. The other thought is to make a graph of these connected objects. Each node represents an object (A / B), and each element is an edge connecting an A and a B; then somehow process the data such that the elements being worked on never overlap.
Questions
How can we achieve this best?
Is there a standard pattern to do this ?
Is there a problem with these approaches?
Not necessary, but if you could tell me which TBB methods to use, that would be great.
The "best" approach depends on a lot of factors here:
How many elements E do you have, and how much work does f(E) involve? Check whether it's really worth processing the elements in parallel (if you need a lot of locking and don't have much work to do, you'll probably slow the process down by working in parallel).
Is there any possibility to change the design to make f(E) thread-safe?
How many elements A and B are there? Is there any logic to which elements E share specific instances of A and B? If you can sort the elements E into separate lists where each A and B appears in only a single list, then you can process these lists in parallel without any further locking.
If there are many different As and Bs and not too many of them are shared, you may want to take a trivial approach where you simply lock each A and B on entry and wait until you get the lock.
Whenever you "lock and wait" with multiple locks, it's very important that you always take the locks in the same order (e.g. always A first and B second), because otherwise you may run into deadlocks. This locking order needs to be observed everywhere; a single place in the whole application that uses a different order can cause a deadlock.
Edit: Also, if you do "try lock", you need to ensure that the order is always the same. Otherwise you can cause a livelock:
thread 1 locks A
thread 2 locks B
thread 1 tries to lock B and fails
thread 2 tries to lock A and fails
thread 1 releases lock A
thread 2 releases lock B
Goto 1 and repeat...
The chances that this actually repeats endlessly are relatively slim, but it should be avoided anyway.
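A minimal sketch of the fixed-order rule (lock and thread names are placeholders): because every thread takes A before B, the circular wait needed for a deadlock can never form:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []
results_lock = threading.Lock()

def use_both(name):
    # global lock order: always A first, then B, in every code path
    with lock_a:
        with lock_b:
            with results_lock:
                results.append(name)

threads = [threading.Thread(target=use_both, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))   # all four threads completed without deadlock
```

If one of the threads instead took B first, two threads could each hold one lock and wait forever for the other.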
Edit 2: In principle, I guess I'd just split the E(Ax, Bx) into different lists based on Ax (e.g. one list for all Es that share the same A), then process these lists in parallel while locking each B (there you can still "try lock" and move on if the required B is already in use).
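That Edit 2 idea could be sketched roughly like this in Python (f, the element values, and the lock bookkeeping are all illustrative assumptions): bucket the elements by A so that no two workers ever share an A, and guard each B with its own lock:

```python
import threading
from collections import defaultdict

def f(element, log, log_lock):
    # placeholder for the real, non-thread-safe work on one element
    with log_lock:
        log.append(element)

# hypothetical elements E = (A, B); some share an A or a B
elements = [("A1", "B1"), ("A1", "B2"), ("A2", "B1"), ("A2", "B3")]

# bucket by A: every element in a bucket has the same A, and each bucket
# is processed sequentially by one worker, so As are never contended
buckets = defaultdict(list)
for e in elements:
    buckets[e[0]].append(e)

b_locks = defaultdict(threading.Lock)   # one lock per distinct B
b_locks_guard = threading.Lock()        # protects the defaultdict itself
processed, processed_lock = [], threading.Lock()

def worker(bucket):
    for element in bucket:
        with b_locks_guard:
            lock = b_locks[element[1]]
        with lock:                      # no other worker touches this B meanwhile
            f(element, processed, processed_lock)

workers = [threading.Thread(target=worker, args=(b,)) for b in buckets.values()]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(sorted(processed))
```

A "try lock" variant would attempt `lock.acquire(blocking=False)` and defer the element to the end of the bucket on failure, as the edit suggests.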
I have a list like this:
[Header/Element]
[Element]
[Element]
[Header]
[Element]
[Element]
[Element]
[Header]
[Element]
...
[Element/Header]
So this list may or may not have a [Header] in the first position, and may or may not end with a [Header].
I've been assigned to create an algorithm to group these elements under every header: the appearance of a header starts a new group, and all elements below it belong to that group. If the first element of the list is not a header (which is a possibility), then a default group should be used, and all elements until the next header go into this group. The same applies at the end: there might not be a header telling you where a group ends or starts. So far, this is not very difficult to do by iterating linearly through the entire list.
The real question is: does anyone know how this grouping algorithm can be done using multiple threads? The reason I want multiple threads is that this list of headers/elements can be very large, so I thought it would be a good idea to have several threads grouping different parts of the list.
The problem is that I have no idea what the procedure for this could be, or how I could synchronize the threads, especially with the way the list is laid out (headers followed by some quantity of elements).
So, have any of you solved a problem like this before? I'm not really interested in a specific implementation in programming language X, but mostly in the procedure I could use to accomplish this task (and how I should synchronize the threads to prevent overlapping). I'm using C#, just in case some of you really want to share some code.
Assuming there are n items in the list, start each thread i at index i*m, where m = n / threadCount. Or, in simpler terms, split the list into parts and let each thread handle one part.
Now, let each thread read elements and store them in its own list.
As soon as a thread reads a header, it stores the elements it has so far (the previous thread will pick up this list at the end) and starts a new list.
From here it's pretty straight-forward - just read the elements and split whenever you get a header.
When you're done, combine the list you're currently busy with and the first list from the next thread.
If a thread starts on a header, the first list will be empty.
If a thread ends on a header, the current list will be empty, so it will simply take the first list from the next thread.
There are some minor details to look out for, like how you combine the lists at the end, knowing when a list is finalized, and whether it will be combined with other lists, but this should be easy enough.
Example:
Input:
A
B
C
Header
D
E
F
Header
With 4 threads, each thread gets 2 items:
A
B
C
Header
D
E
F
Header
Then:
Thread Processes
1 A
2 C
3 D
4 F
Thread Processes
1 B
2 Header
3 E
4 Header
Here thread 2 will put C into its original list and thread 4 will put F into its original list, and each will start a new list.
Now we're done, so:
Thread 3 will combine its current list ({D,E}) with thread 4's original list ({F}), so thread 3 will end up with {D,E,F}.
Thread 2 will combine its current list ({}) with thread 3's original list (which is also the current list, since we found no header in thread 3 - {D,E,F}), so thread 2 will end up with {D,E,F}.
Thread 1 will combine its current list ({A,B}) with thread 2's original list ({C}), so thread 1 will end up with {A,B,C}.
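The whole scheme above can be sketched in Python (the sentinel string "Header" and the stitching step are assumptions of this sketch):

```python
import threading

HEADER = "Header"   # sentinel marking a header item (assumption for this sketch)

def split_chunk(chunk, out, idx):
    # runs[0] is the "original" list: items seen before the chunk's first header
    runs = [[]]
    for item in chunk:
        if item == HEADER:
            runs.append([])      # a header closes the current run, starts a new one
        else:
            runs[-1].append(item)
    out[idx] = runs

def parallel_group(items, n_threads):
    size = (len(items) + n_threads - 1) // n_threads
    chunks = [items[i * size:(i + 1) * size] for i in range(n_threads)]
    out = [None] * n_threads
    workers = [threading.Thread(target=split_chunk, args=(c, out, i))
               for i, c in enumerate(chunks)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # stitch: each thread's unfinished last run joins the next thread's original run
    groups, carry = [], []
    for runs in out:
        carry = carry + runs[0]
        if len(runs) > 1:
            groups.append(carry)        # run closed by this chunk's first header
            groups.extend(runs[1:-1])   # runs fully contained in this chunk
            carry = runs[-1]            # still open; may continue in the next chunk
    if carry:
        groups.append(carry)            # default/trailing group with no final header
    return groups

items = ["A", "B", "C", HEADER, "D", "E", "F", HEADER]
print(parallel_group(items, 4))   # [['A', 'B', 'C'], ['D', 'E', 'F']]
```

The splitting phase needs no synchronization at all (each worker writes only its own slot of `out`); only the sequential stitch at the end touches neighbouring results, which is what the answer's "combine with the next thread's original list" step describes.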