When adding a Test Action sampler with the Stop/Stop Now action,
you have two Target options: All Threads and Current Thread.
When choosing Current Thread in a multi-threaded environment, the thread stops and does not continue past this sampler.
The problem is that when choosing All Threads, some threads execute other samplers after the Test Action.
It's very confusing, because I expect the All Threads option to be stricter than just Current Thread.
In the code I saw that for Current Thread it also stops the current thread via context.getThread().stop(), while the All Threads option does not.
Is this a bug or a feature (adding a grace period for stopping)?
For example, with this Test Plan:
Thread Group (5 threads)
    Sampler1
    Test Action (Target: All Threads, Action: Stop/Stop Now)
    Sampler2
Sampler2 is executed only when Target is All Threads, not when Target is Current Thread.
Note: choosing the action Go to next loop iteration (the Target field is disabled) also prevents Sampler2 from being executed.
Stop and Stop Now are different:
Stop is a clean shutdown, meaning the currently running samplers will complete. So it is OK if you see other samplers even after the Test Action.
Stop Now is a hard shutdown, meaning currently running samplers will be interrupted. So again, it is OK if you see those other samplers after the Test Action.
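If an analogy helps, the same distinction exists for a plain JDK thread pool (this is not JMeter's implementation, just java.util.concurrent):

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Analogy only (not JMeter's code): shutdown() is the clean stop,
// shutdownNow() is the hard stop that interrupts running work.
fun main() {
    val pool = Executors.newFixedThreadPool(1)
    pool.execute {
        try {
            TimeUnit.SECONDS.sleep(2)
            println("sample completed")        // what you get with shutdown()
        } catch (e: InterruptedException) {
            println("sample interrupted")      // what you get with shutdownNow()
        }
    }
    Thread.sleep(100)                          // let the "sample" start running
    pool.shutdownNow()                         // swap for pool.shutdown() to compare
}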
Current Thread will only stop the current thread, not all threads, so it is expected that:
"When choosing Current Thread in a multi-threaded environment, the thread stops and does not continue past this sampler."
All Threads will act on all the threads of the test; in the code we have:
if (action == STOP_NOW) {
    log.info("Stopping all threads now from element {}", getName());
    context.getEngine().stopTest();
} else {
    log.info("Stopping all threads from element {}", getName());
    context.getEngine().askThreadsToStop();
}
Regarding your particular case, here is what is happening:
When you select "Current Thread", JMeter immediately stops the current thread as this action is taken into action after the Test Action
When you select "All Threads", JMeter triggers asynchronously a call to all threads shutdown/stop, that's why Sampler2 is called
You may consider this a bug, but I think the intended use case is different.
Still, it is now fixed:
https://bz.apache.org/bugzilla/show_bug.cgi?id=61698
Let's consider this simple code with coroutines:
import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun main() {
    runBlocking {
        launch(Executors.newFixedThreadPool(10).asCoroutineDispatcher()) {
            var x = 0
            val threads = mutableSetOf<Thread>()
            for (i in 0 until 100000) {
                x++
                threads.add(Thread.currentThread())
                yield()
            }
            println("Result: $x")
            println("Threads: $threads")
        }
    }
}
As far as I understand, this is quite legit coroutines code, and it actually produces the expected results:
Result: 100000
Threads: [Thread[pool-1-thread-1,5,main], Thread[pool-1-thread-2,5,main], Thread[pool-1-thread-3,5,main], Thread[pool-1-thread-4,5,main], Thread[pool-1-thread-5,5,main], Thread[pool-1-thread-6,5,main], Thread[pool-1-thread-7,5,main], Thread[pool-1-thread-8,5,main], Thread[pool-1-thread-9,5,main], Thread[pool-1-thread-10,5,main]]
The question is: what makes these modifications of local variables thread-safe (or is it thread-safe at all)? I understand that this loop is actually executed sequentially, but it can change the running thread on every iteration. The changes made by the thread in the first iteration should still be visible to the thread that picks up this loop on the second iteration. Which code guarantees this visibility? I tried to decompile this code to Java and dig around the coroutines implementation with a debugger, but did not find a clue.
Your question is completely analogous to the realization that the OS can suspend a thread at any point in its execution and reschedule it to another CPU core. That works not because the code in question is "multicore-safe", but because it is a guarantee of the environment that a single thread behaves according to its program-order semantics.
Kotlin's coroutine execution environment likewise guarantees the safety of your sequential code. You are supposed to program to this guarantee without any worry about how it is maintained.
If you want to descend into the details of "how" out of curiosity, the answer becomes "it depends". Every coroutine dispatcher can choose its own mechanism to achieve it.
As an instructive example, we can focus on the specific dispatcher you use in your posted code: the JDK executor created by Executors.newFixedThreadPool. You can submit arbitrary tasks to this executor, and it will execute each one of them on a single (arbitrary) thread, but many tasks submitted together will execute in parallel on different threads.
Furthermore, the executor service provides the guarantee that the code leading up to executor.execute(task) happens-before the code within the task, and the code within the task happens-before another thread's observing its completion (future.get(), future.isDone(), getting an event from the associated CompletionService).
Kotlin's coroutine dispatcher drives the coroutine through its lifecycle of suspension and resumption by relying on these primitives from the executor service, and thus you get the "sequential execution" guarantee for the entire coroutine. A single task submitted to the executor ends whenever the coroutine suspends, and the dispatcher submits a new task when the coroutine is ready to resume (when the user code calls continuation.resume(result)).
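To make that concrete, here is a minimal sketch of just the executor guarantee (plain executor usage, not the coroutine machinery):

import java.util.concurrent.Callable
import java.util.concurrent.Executors

// The write to `x` before submit() happens-before the task body, and the
// task body happens-before the result observed through future.get().
fun main() {
    val pool = Executors.newFixedThreadPool(1)
    var x = 0                                     // ordinary, non-volatile state
    x = 42                                        // written before the task is submitted
    val future = pool.submit(Callable { x + 1 })  // the task is guaranteed to see 42
    println(future.get())                         // prints 43; get() sees the task's writes
    pool.shutdown()
}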
We have an API endpoint that starts a thread, and another endpoint to check the status of the thread (based on a thread ID returned by the first API call).
We use the threading module.
The function that the thread is executing may or may not sleep for a duration of time.
When we create the thread, we override the default name provided by the module and add the thread ID that was generated by us (so we can keep track).
The status endpoint gets the thread ID from the client request and simply loops over the results from threading.enumerate(). When the thread is running and not sleeping, we see that the thread is returned by the threading.enumerate() function. When it is sleeping, it is not.
The function we use to see if a thread is alive:
def thread_is_running(thread_id):
    all_threads = [t.name for t in threading.enumerate()]
    return any(thread_id in name for name in all_threads)
When we run in debug and print the value of "all_threads", we only see the MainThread thread during our thread's sleep time.
As soon as the sleep is over, we see our thread in the value of "all_threads".
This is how we start the thread:
thread_id = random.randint(10000, 50000)
thread_name = f"{service_name}-{thread_id}"
threading.Thread(target=drain, args=(service_name, params,), name=thread_name).start()
Is there a way to get a list of all threads including idle threads? Is a sleeping thread marked as idle? Is there a better way to pause a thread?
We thought about making the thread update its state in a database, but due to an internal issue we currently have, we cannot fully count on writing to our database, so we prefer checking the system for the thread's status.
It turns out the reason we did not see the thread was our use of gunicorn with multiple workers.
The thread was initiated in one of the 4 configured workers, while the status API call could have been handled by any of the 4 workers. Only when it was handled by the worker that was also running the thread were we able to see it in the enumerate output.
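For what it's worth, thread enumeration is per-process on other runtimes too; a quick JVM equivalent (the thread name here is hypothetical):

import kotlin.concurrent.thread

// Thread enumeration only covers the current process, on the JVM just as in
// Python: a thread started in another worker process will never appear here.
fun main() {
    thread(name = "drain-12345") { Thread.sleep(1000) }
    Thread.getAllStackTraces().keys.forEach { println(it.name) }
}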
Consider the following code:
// process i:                               // process j:
flag[i] = true;                             flag[j] = true;
turn = j;                                   turn = i;
while (flag[j] == true && turn == j);       while (flag[i] == true && turn == i);
<critical section>                          <critical section>
flag[i] = false;                            flag[j] = false;
<remainder section>                         <remainder section>
I am certain that the above code satisfies the mutual exclusion property, but I am uncertain about the following:
What exactly does progress mean, and does the above code satisfy it? The above code requires the critical section to be executed in strict alternation. Is that considered progress?
From what I see, the above code does not maintain any information on the number of times a process has entered the critical section. Would that mean that the above code does not satisfy bounded waiting?
Progress means that the process will eventually do some work. An example of where this may not be the case is when a low-priority thread might be pre-empted and rolled back by high-priority threads. Once your processes reach their critical section, they won't be pre-empted, so they'll make progress.
Bounded waiting means that the process will eventually gain control of the processor. An example of where this may not be the case is when another process has a non-terminating loop in a critical section with no possibility of the thread being interrupted. Your code has bounded waiting IF the critical sections terminate AND the remainder section does not re-invoke the process's critical section (otherwise a process might keep running its critical section without the other process ever gaining control of the processor).
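For reference, the pseudocode in the question is Peterson's algorithm, and you can exercise it directly. Here is a hedged Kotlin sketch (the names are mine; @Volatile stands in for the sequentially consistent memory the pseudocode assumes):

import kotlin.concurrent.thread

// A runnable sketch of the pseudocode above; flag0/flag1 play the role
// of flag[i]/flag[j]. @Volatile gives the sequentially consistent
// reads/writes the algorithm requires on the JVM.
@Volatile var flag0 = false
@Volatile var flag1 = false
@Volatile var turn = 0
var counter = 0                          // shared data guarded by the lock

fun lock(me: Int) {
    if (me == 0) { flag0 = true; turn = 1 } else { flag1 = true; turn = 0 }
    val other = 1 - me
    // Spin while the other process is interested AND it is the other's turn.
    // Note the alternation is not strict: with no contention we fall through.
    while ((if (other == 0) flag0 else flag1) && turn == other) { /* busy wait */ }
}

fun unlock(me: Int) {
    if (me == 0) flag0 = false else flag1 = false
}

fun main() {
    val t0 = thread { repeat(100_000) { lock(0); counter++; unlock(0) } }
    val t1 = thread { repeat(100_000) { lock(1); counter++; unlock(1) } }
    t0.join(); t1.join()
    println(counter)                     // 200000 every run if mutual exclusion holds
}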
Progress of the processes means that the processes do not enter a deadlock situation, so their execution continues independently. At any moment in time, only one of process i or process j will be executing its critical section code, so consistency is maintained. So progress is being met successfully in the given code.
Next, this particular code is for processes that are intended to run only once, so they won't reach their critical section code again. It is for a single execution of each process.
Bounded waiting says that a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
This particular piece of code has nothing to do with bounded waiting; it is for the trivial case where processes execute only once.
I am trying to model a system where there are multiple threads producing data, and a single thread consuming the data. The trick is that I don't want a dedicated thread to consume the data because all of the threads live in a pool. Instead, I want one of the producers to empty the queue when there is work, and yield if another producer is already clearing the queue.
The basic idea is that there is a queue of work, and a lock around the processing. Each producer pushes its payload onto the queue, and then attempts to enter the lock. The attempt is non-blocking and returns either true (the lock was acquired), or false (the lock is held by someone else).
If the lock is acquired, then that thread then processes all of the data in the queue until it is empty (including any new payloads introduced by other producers during processing). Once all of the work has been processed, the thread releases the lock and quits out.
The following is C++ code for the algorithm:
void Process(ITask *task) {
    // queue is a thread-safe implementation of a regular queue
    queue.push(task);

    // crit_sec is some handle to a critical-section-like object.
    // try_scoped_lock uses RAII to attempt to acquire the lock in the
    // constructor; if the lock was acquired, it releases it in the destructor.
    try_scoped_lock lock(crit_sec);

    // See if this thread won the lottery. Prize is doing all of the dishes.
    if (!lock.Acquired())
        return;

    // This thread got the lock, so it needs to do the work.
    ITask *currTask;
    while (queue.try_pop(currTask)) {
        // ... execute task ...
    }
}
In general this code works fine, and I have never actually witnessed the behavior I am about to describe below, but that implementation makes me feel uneasy. It stands to reason that a race condition is introduced between when the thread exits the while loop and when it releases the critical section.
The whole algorithm relies on the assumption that if the lock is being held, then a thread is servicing the queue.
I am essentially looking for enlightenment on two questions:
1. Am I correct that there is a race condition as described (bonus for other races)?
2. Is there a standard pattern for implementing this mechanism that is performant and doesn't introduce race conditions?
Yes, there is a race condition.
Thread A adds a task, gets the lock, processes its own task, then asks for another task from the queue. It is rejected.
Thread B at this point adds a task to the queue. It then attempts to get the lock and fails, because thread A has the lock. Thread B exits.
Thread A then exits, with the queue non-empty and nobody processing the task on it.
This will be difficult to find, because that window is relatively narrow. To make it more likely to occur, after the while loop introduce a "sleep for 10 seconds". In the calling code, insert a task, wait 5 seconds, then insert a second task. After 10 more seconds, check that both inserted tasks have finished and that there is still a task waiting on the queue.
One way to fix this would be to change try_pop to try_pop_or_unlock, and pass your lock into it. try_pop_or_unlock then atomically checks for an empty queue and, if the queue is empty, unlocks the lock and returns false.
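An equivalent fix is to release and then re-check the queue, looping if work slipped into the window. Here is a sketch of that variant (Kotlin for brevity; all names are made up):

import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.atomic.AtomicBoolean

// Sketch of the "release, then re-check" fix. The outer loop closes the
// race window: if a producer enqueued between our last pop and the
// release, some thread re-enters and drains it.
val queue = ConcurrentLinkedQueue<Runnable>()
val draining = AtomicBoolean(false)

fun process(task: Runnable) {
    queue.add(task)
    // Try to become the drainer only while there is visible work.
    while (queue.isNotEmpty() && draining.compareAndSet(false, true)) {
        try {
            while (true) {
                val next = queue.poll() ?: break
                next.run()
            }
        } finally {
            draining.set(false)    // release, then re-check the loop condition
        }
    }
}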
Another approach is to improve the thread pool. Add a counting semaphore based "consume" task launcher to it.
semaphore_bool bTaskActive;
counting_semaphore counter;

when (counter || !bTaskActive) {
    if (bTaskActive)
        return;
    bTaskActive = true;
    --counter;
    launch_task(process_one_off_queue, when_done([&]{ bTaskActive = false; }));
}
When the counting semaphore is nonzero, or when poked by a finishing consume task, this launches a consume task if there is no consume task already active.
But that is just off the top of my head.
Wait(semaphore sem) {
    DISABLE_INTS
    sem.val--
    if (sem.val < 0) {
        add thread to sem.L
        block(thread)
    }
    ENABLE_INTS
}

Signal(semaphore sem) {
    DISABLE_INTS
    sem.val++
    if (sem.val <= 0) {
        th = remove next thread from sem.L
        wakeup(th)
    }
    ENABLE_INTS
}
If block(thread) stops a thread from executing, how, where, and when does it return?
Which thread enables interrupts following the Wait()?
The thread that called block() shouldn't return until another thread has called wakeup(thread)!
But how does that other thread get to run?
Where exactly does the thread switch occur?
block(thread) works this way (a JVM sketch follows the list):
1. Enables interrupts.
2. Uses some kind of waiting mechanism (provided by the operating system, or busy waiting in the simplest case) to wait until wakeup(thread) is called on this thread. This means that at this point the thread yields its time to the scheduler.
3. Disables interrupts and returns.
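A sketch of that shape on the JVM, using LockSupport as the waiting mechanism (my example, not the lecture's implementation):

import java.util.concurrent.locks.LockSupport
import kotlin.concurrent.thread

// park() is where the blocked thread gives up the CPU, and it "returns"
// only after some other thread unparks it. (Real code re-checks its
// condition in a loop, because park() may also return spuriously.)
fun main() {
    val waiter = thread {
        println("about to block")
        LockSupport.park()           // block(thread): yield to the scheduler
        println("resumed after wakeup")
    }
    Thread.sleep(100)                // let the waiter reach park()
    LockSupport.unpark(waiter)       // wakeup(th) runs on *another* thread
    waiter.join()
}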
Yes, UP and DOWN are mostly useful when called from different threads, but it is not impossible to call them from one thread: if you initialize the semaphore with a value > 0, then the same thread can enter the critical section and execute both DOWN (before) and UP (after). The value that initializes the semaphore tells you how many threads can enter the critical section at once, which might be 1 (a mutex) or any other positive number.
How are the threads created? That is not shown on the lecture slide, because the slide only shows the principle of how a semaphore works, in pseudocode. How you use those semaphores in your application is a completely different story.
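To make the single-thread DOWN/UP case concrete, a small sketch with java.util.concurrent.Semaphore (acquire plays the role of DOWN/Wait, release of UP/Signal):

import java.util.concurrent.Semaphore

// An initial value of 1 makes the semaphore behave like a mutex.
fun main() {
    val sem = Semaphore(1)
    sem.acquire()                    // DOWN before the critical section
    try {
        println("inside the critical section")
    } finally {
        sem.release()                // UP after the critical section
    }
}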