I wrote a simple threading example that generates multiplication tables for the numbers 1 to 20. When I run it via the main method, all of the threads execute and all of the messages are printed. When I run the equivalent code from a JUnit test, most of the time not all of the threads run (not all of the messages are printed), although sometimes they do. I would expect the output to be the same in both cases.
Here is the class with main method:
public class Calculator implements Runnable {

    private int number;

    Calculator(final int number) {
        this.number = number;
    }

    @Override
    public void run() {
        for (int i = 1; i <= 10; i++) {
            System.out.printf("%s : %d * %d = %d \n", Thread.currentThread().getName(), number, i, number * i);
        }
    }

    public static void main(String[] args) {
        Calculator calculator = null;
        Thread thread = null;
        for (int i = 1; i < 21; i++) {
            calculator = new Calculator(i);
            thread = new Thread(calculator);
            System.out.println(thread.getName() + " Created");
            thread.start();
            System.out.println(thread.getName() + " Started");
        }
    }
}
When I invoke the main method it prints all the results.
Below is the JUnit test equivalent to the main method:
public class CalculatorTest {

    private Calculator calculator;
    private Thread thread;

    @Test
    public void testCalculator() {
        for (int i = 1; i < 21; i++) {
            calculator = new Calculator(i);
            thread = new Thread(calculator);
            System.out.println(thread.getName() + " Created");
            thread.start();
            System.out.println(thread.getName() + " Started");
        }
    }
}
When I run the above test case, the output is not consistent, in the sense that sometimes it prints all the messages, but most of the time it prints only a few and exits. Here is the output captured from one run of the JUnit test:
Thread-0 Created
Thread-0 Started
Thread-1 Created
Thread-1 Started
Thread-2 Created
Thread-2 Started
Thread-3 Created
Thread-3 Started
Thread-4 Created
Thread-4 Started
Thread-5 Created
Thread-5 Started
Thread-6 Created
Thread-6 Started
Thread-7 Created
Thread-7 Started
Thread-8 Created
Thread-8 Started
Thread-9 Created
Thread-9 Started
Thread-10 Created
Thread-10 Started
Thread-11 Created
Thread-11 Started
Thread-12 Created
Thread-12 Started
Thread-13 Created
Thread-13 Started
Thread-14 Created
Thread-14 Started
Thread-15 Created
Thread-15 Started
Thread-16 Created
Thread-16 Started
Thread-17 Created
Thread-17 Started
Thread-18 Created
Thread-18 Started
Thread-19 Created
Thread-19 Started
Thread-0 : 1 * 1 = 1
Thread-0 : 1 * 2 = 2
Thread-0 : 1 * 3 = 3
Thread-0 : 1 * 4 = 4
Thread-0 : 1 * 5 = 5
Thread-0 : 1 * 6 = 6
Thread-0 : 1 * 7 = 7
Thread-0 : 1 * 8 = 8
Thread-0 : 1 * 9 = 9
Thread-0 : 1 * 10 = 10
Thread-2 : 3 * 1 = 3
Thread-2 : 3 * 2 = 6
Thread-2 : 3 * 3 = 9
Thread-2 : 3 * 4 = 12
Thread-2 : 3 * 5 = 15
Thread-2 : 3 * 6 = 18
Thread-2 : 3 * 7 = 21
The output ends here without printing the remaining messages; the other threads never run to completion.
Can somebody help me understand the reason behind this? Thanks in advance.
JUnit is exiting the test method early. You need to wait for all of the threads to complete before you exit the testCalculator() method.
An easy way to do that is by using a CountDownLatch.
Initialize a CountDownLatch with CountDownLatch latch = new CountDownLatch(20).
Pass each Calculator runnable a reference to the latch. At the end of the run() method, call latch.countDown().
At the end of the testCalculator() method call latch.await(). This will block until latch.countDown() has been called 20 times (i.e. when all threads have completed).
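For example, a minimal sketch of that approach might look like the following (the LatchedCalculator class here is a hypothetical variant of the question's Calculator, extended with a latch parameter; it is not part of the original code):

import java.util.concurrent.CountDownLatch;

import org.junit.Test;

public class CalculatorTest {

    // Hypothetical variant of the Calculator from the question, extended to take the latch.
    static class LatchedCalculator implements Runnable {
        private final int number;
        private final CountDownLatch latch;

        LatchedCalculator(int number, CountDownLatch latch) {
            this.number = number;
            this.latch = latch;
        }

        @Override
        public void run() {
            try {
                for (int i = 1; i <= 10; i++) {
                    System.out.printf("%s : %d * %d = %d%n",
                            Thread.currentThread().getName(), number, i, number * i);
                }
            } finally {
                latch.countDown();   // signal completion even if printing fails
            }
        }
    }

    @Test
    public void testCalculator() throws InterruptedException {
        final int threadCount = 20;
        CountDownLatch latch = new CountDownLatch(threadCount);

        for (int i = 1; i <= threadCount; i++) {
            new Thread(new LatchedCalculator(i, latch)).start();
        }

        // Blocks until countDown() has been called 20 times, i.e. all threads have finished.
        latch.await();
    }
}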
Your test method finishes before all of the spawned threads have finished. When the JUnit executor finishes, all spawned threads are killed.
If you want to run this kind of test, keep a collection of the threads you have created and join() each of them at the end of the test method. The join() calls go in a second loop, following the loop that starts all the threads.
Something like this:
@Test
public void testCalculator() throws InterruptedException {
    List<Thread> threads = new ArrayList<>();
    for (int i = 1; i < 21; i++) {
        calculator = new Calculator(i);
        thread = new Thread(calculator);
        threads.add(thread);
        System.out.println(thread.getName() + " Created");
        thread.start();
        System.out.println(thread.getName() + " Started");
    }
    // Wait for every thread to finish before the test method returns.
    for (Thread thread : threads) {
        thread.join();
    }
}
If you want to have the threads all start around the same time (e.g., if your loop that is creating the threads does some non-trivial work each time through the loop):
@Test
public void testCalculator() throws InterruptedException {
    List<Thread> threads = new ArrayList<>();
    for (int i = 1; i < 21; i++) {
        threads.add(new Thread(new Calculator(i)));
    }
    for (Thread thread : threads) {
        thread.start();
    }
    for (Thread thread : threads) {
        thread.join();
    }
}
Related
I ran the following code, and when I checked the thread information with jstack I found all 100 threads in the RUNNABLE state. I know the number of threads that can actually execute at once is limited by the CPU (roughly cores * 2), so I am confused: even allowing that jstack is not instantaneous, why are all of them RUNNABLE?
Is a thread that is not currently being executed by a CPU still reported as RUNNABLE?
public static void main(String[] args) {
    for (int i = 0; i < 100; i++) {
        new Thread(() -> {
            long last = System.currentTimeMillis();
            try {
                byte[] buf = new byte[1024];
                FileInputStream fileInputStream = new FileInputStream("");
                while (fileInputStream.read(buf) != -1) {
                }
                fileInputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            System.out.println("read over " + (System.currentTimeMillis() - last));
        }, "name" + i).start();
    }
}
I noticed that Hystrix has two thread-isolation strategies: Thread and Semaphore.
By default Hystrix uses the Thread strategy and sizes the pool via hystrix.threadpool.default.coreSize; commands with the same group key share the same thread pool, so the isolation is scoped by group key.
When Hystrix uses the Semaphore strategy, the semaphore is stored in a ConcurrentHashMap whose key is the command name. Does that mean semaphore isolation is scoped by command name?
Here is the code:
/**
 * Get the TryableSemaphore this HystrixCommand should use for execution if not running in a separate thread.
 *
 * @return TryableSemaphore
 */
protected TryableSemaphore getExecutionSemaphore() {
    if (properties.executionIsolationStrategy().get() == ExecutionIsolationStrategy.SEMAPHORE) {
        if (executionSemaphoreOverride == null) {
            TryableSemaphore _s = executionSemaphorePerCircuit.get(commandKey.name());
            if (_s == null) {
                // we didn't find one cache so setup
                executionSemaphorePerCircuit.putIfAbsent(commandKey.name(), new TryableSemaphoreActual(properties.executionIsolationSemaphoreMaxConcurrentRequests()));
                // assign whatever got set (this or another thread)
                return executionSemaphorePerCircuit.get(commandKey.name());
            } else {
                return _s;
            }
        } else {
            return executionSemaphoreOverride;
        }
    } else {
        // return NoOp implementation since we're not using SEMAPHORE isolation
        return TryableSemaphoreNoOp.DEFAULT;
    }
}
Why do they have different scopes? I wrote some test code to verify it:
public static void main(String[] args) throws InterruptedException {
    int i = 0;
    while (i++ < 20) {
        final int index = i;
        new Thread(() -> {
            System.out.println(new ThreadIsolationCommand(index).execute());
            System.out.println(new SemaphoreIsolationCommand(index).execute());
        }).start();
    }
}

static class ThreadIsolationCommand extends HystrixCommand<String> {

    private int index;

    protected ThreadIsolationCommand(int index) {
        super(
            Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("ThreadIsolationCommandGroup"))
                .andCommandKey(HystrixCommandKey.Factory.asKey(String.valueOf(index)))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                    .withExecutionIsolationStrategy(HystrixCommandProperties.ExecutionIsolationStrategy.THREAD)
                )
                .andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
                    .withCoreSize(10)
                )
        );
        this.index = index;
    }

    @Override
    protected String run() throws Exception {
        Thread.sleep(500);
        return "Hello Thread " + index;
    }

    @Override
    protected String getFallback() {
        return "Fallback Thread " + index;
    }
}

static class SemaphoreIsolationCommand extends HystrixCommand<String> {

    private int index;

    protected SemaphoreIsolationCommand(int index) {
        super(
            Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("SemaphoreIsolationCommandGroup"))
                .andCommandKey(HystrixCommandKey.Factory.asKey(String.valueOf(index)))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                    .withExecutionIsolationStrategy(HystrixCommandProperties.ExecutionIsolationStrategy.SEMAPHORE)
                    .withExecutionIsolationSemaphoreMaxConcurrentRequests(10)
                )
        );
        this.index = index;
    }

    @Override
    protected String run() throws Exception {
        Thread.sleep(500);
        return "Hello Semaphore " + index;
    }

    @Override
    protected String getFallback() {
        return "Fallback Semaphore " + index;
    }
}
These commands use the same group key but different command names; the result is:
Fallback Thread 9
Fallback Thread 1
Fallback Thread 8
Fallback Thread 19
Fallback Thread 20
Fallback Thread 14
Fallback Thread 3
Fallback Thread 13
Fallback Thread 17
Fallback Thread 10
Hello Thread 5
Hello Semaphore 17
Hello Semaphore 14
Hello Thread 2
Hello Semaphore 3
Hello Thread 7
Hello Thread 15
Hello Thread 4
Hello Semaphore 13
Hello Semaphore 1
Hello Thread 11
Hello Semaphore 20
Hello Semaphore 19
Hello Thread 18
Hello Thread 12
Hello Semaphore 8
Hello Semaphore 9
Hello Thread 6
Hello Thread 16
Hello Semaphore 10
Hello Semaphore 5
Hello Semaphore 2
Hello Semaphore 7
Hello Semaphore 15
Hello Semaphore 4
Hello Semaphore 11
Hello Semaphore 12
Hello Semaphore 6
Hello Semaphore 18
Hello Semaphore 16
Only the Thread strategy commands fell back. Is that right?
I would like to reorder the handlers processed by a boost io_service:
This is my pseudocode:
start()
{
    io.run();
}

thread1()
{
    io.post(myhandler1);
}

thread2()
{
    io.post(myhandler2);
}
thread1() and thread2() are called independently.
In this case, the io_service processes the handlers in the order they were posted.
Queue example: myhandler1|myhandler1|myhandler2|myhandler1|myhandler2
How can I modify the io_service processing order so that myhandler1 and myhandler2 are executed alternately?
New Queue example: myhandler1|myhandler2|myhandler1|myhandler2|myhandler1
I wrote this code but CPU usage is 100%:
start()
{
    while(1)
    {
        io1.poll_one();
        io2.poll_one();
    }
}

thread1()
{
    io1.post(myhandler1);
}

thread2()
{
    io2.post(myhandler2);
}
Thanks
I'd use two queues. From this ASIO answer I made once (Non blocking boost io_service for deadline_timers) I took the thread_pool class.
I split it into task_queue and thread_pool classes.
I created a worker type that knows how to juggle two queues:
struct worker {
    task_queue q1, q2;

    void wake() {
        q1.wake();
        q2.wake();
    }

    void operator()(boost::atomic_bool& shutdown) {
        std::cout << "Worker start\n";
        while (true) {
            auto job1 = q1.dequeue(shutdown);
            if (job1) (*job1)();
            auto job2 = q2.dequeue(shutdown);
            if (job2) (*job2)();

            if (shutdown && !(job1 || job2))
                break;
        }
        std::cout << "Worker exit\n";
    }
};
You can see how the worker loop is structured so that - if tasks are enqueued - queues will be served in alternation.
Note: the wake() call is there for reliable shutdown; the queues use blocking waits, and hence they will need to be signaled (woken up) when the shutdown flag is toggled.
Full Demo
Live On Coliru
#include <boost/function.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <iostream>
#include <deque>

namespace custom {
    using namespace boost;

    class task_queue {
      private:
        mutex mx;
        condition_variable cv;

        typedef function<void()> job_t;
        std::deque<job_t> _queue;

      public:
        void enqueue(job_t job)
        {
            lock_guard<mutex> lk(mx);
            _queue.push_back(job);
            cv.notify_one();
        }

        template <typename T>
        optional<job_t> dequeue(T& shutdown)
        {
            unique_lock<mutex> lk(mx);
            cv.wait(lk, [&] { return shutdown || !_queue.empty(); });

            if (_queue.empty())
                return none;

            job_t job = _queue.front();
            _queue.pop_front();
            return job;
        }

        void wake() {
            lock_guard<mutex> lk(mx);
            cv.notify_all();
        }
    };

    template <typename Worker> class thread_pool
    {
      private:
        thread_group _pool;
        boost::atomic_bool _shutdown { false };
        Worker _worker;

        void start() {
            for (unsigned i = 0; i < 1 /*boost::thread::hardware_concurrency()*/; ++i) {
                std::cout << "Creating thread " << i << "\n";
                _pool.create_thread([&] { _worker(_shutdown); });
            }
        }

      public:
        thread_pool() { start(); }
        ~thread_pool() {
            std::cout << "Pool going down\n";
            _shutdown = true;
            _worker.wake();
            _pool.join_all();
        }

        Worker& get_worker() { return _worker; }
    };

    struct worker {
        task_queue q1, q2;

        void wake() {
            q1.wake();
            q2.wake();
        }

        void operator()(boost::atomic_bool& shutdown) {
            std::cout << "Worker start\n";
            while (true) {
                auto job1 = q1.dequeue(shutdown);
                if (job1) (*job1)();
                auto job2 = q2.dequeue(shutdown);
                if (job2) (*job2)();

                if (shutdown && !(job1 || job2))
                    break;
            }
            std::cout << "Worker exit\n";
        }
    };
}

void croak(char const* queue, int i) {
    static boost::mutex cout_mx;
    boost::lock_guard<boost::mutex> lk(cout_mx);
    std::cout << "thread " << boost::this_thread::get_id() << " " << queue << " task " << i << "\n";
}

int main() {
    custom::thread_pool<custom::worker> pool;
    auto& queues = pool.get_worker();

    for (int i = 1; i <= 10; ++i) queues.q1.enqueue([i] { croak("q1", i); });
    for (int i = 1; i <= 10; ++i) queues.q2.enqueue([i] { croak("q2", i); });
}
Prints e.g.
Creating thread 0
Pool going down
Worker start
thread 7f7311397700 q1 task 1
thread 7f7311397700 q2 task 1
thread 7f7311397700 q1 task 2
thread 7f7311397700 q2 task 2
thread 7f7311397700 q1 task 3
thread 7f7311397700 q2 task 3
thread 7f7311397700 q1 task 4
thread 7f7311397700 q2 task 4
thread 7f7311397700 q1 task 5
thread 7f7311397700 q2 task 5
thread 7f7311397700 q1 task 6
thread 7f7311397700 q2 task 6
thread 7f7311397700 q1 task 7
thread 7f7311397700 q2 task 7
thread 7f7311397700 q1 task 8
thread 7f7311397700 q2 task 8
thread 7f7311397700 q1 task 9
thread 7f7311397700 q2 task 9
thread 7f7311397700 q1 task 10
thread 7f7311397700 q2 task 10
Worker exit
Generalizing it
Here it is generalized for more queues (e.g. three):
Live On Coliru
Note that the above uses a single worker thread to service the queues; if you create more than one thread, each thread will individually alternate between the queues, but the overall order would be undefined (because thread scheduling is undefined).
The generalized version is somewhat more accurate here since it shares the idx variable between worker threads, but the actual output order still depends on thread scheduling.
Using run_one() instead of poll_one() should work (note that reset() is also required):
start()
{
    while(1)
    {
        io1.run_one();
        io2.run_one();
        io1.reset();
        io2.reset();
    }
}
However, I don't know if this is a good solution to any actual problem you might have. This is one of those cases where the question, "What are you really trying to do?" seems relevant. For example, if it makes sense to run handler2 after every invocation of handler1, then perhaps handler1 should invoke handler2.
I have an application which runs 2 worker threads separate from the main GUI thread.
Thread 1:
needs to send some data to thread 2 every 100 ms.
sleeps for 10ms in each loop of its run.
Header:
class thread1 : public QThread
{
    Q_OBJECT

public:
    thread1();
    ~thread1();

signals:
    void wakeThread2();
    void sendValue(int);
    void sleepThread2();

protected:
    void run();

private:
    volatile bool stop;
    int data;
};
Implementation:
thread1::thread1() : stop(false), data(0)
{
}

void thread1::run()
{
    while(!stop)
    {
        ++data;
        if(data == 1000)
            data = 0;
        cout << "IN THREAD 1 with data = " << data << endl;
        emit sendValue(data);
        emit wakeThread2();
        emit sleepThread2();
        msleep(10);
    }
}
Thread 2
Header:
class thread2 : public QThread
{
    Q_OBJECT

public:
    thread2();
    ~thread2();

private slots:
    void receiveValue(int);
    void Sleep();

protected:
    void run();

private:
    volatile bool stop;
    int data;
};
Implementation:
thread2::thread2() : stop(false), data(0)
{
}

void thread2::run()
{
    if(!stop)
        cout << "IN THREAD..............2 with data = " << data << endl;
}

void thread2::receiveValue(int x)
{
    data = x;
}

void thread2::Sleep()
{
    msleep(100);
}
MainWindow:
MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    t1 = new thread1;
    t2 = new thread2;
    QObject::connect(t1, SIGNAL(wakeThread2()), t2, SLOT(start()));
    QObject::connect(t1, SIGNAL(sendValue(int)), t2, SLOT(receiveValue(int)));
    QObject::connect(t1, SIGNAL(sleepThread2()), t2, SLOT(Sleep()));
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::on_pushButton_startT1_clicked()
{
    t1->start();
}
Output:
IN THREAD 1 with data = 1
IN THREAD..............2 with data = 1
IN THREAD 1 with data = 2
IN THREAD 1 with data = 3
IN THREAD 1 with data = 4
IN THREAD 1 with data = 5
IN THREAD 1 with data = 6
IN THREAD 1 with data = 7
IN THREAD 1 with data = 8
IN THREAD 1 with data = 9
IN THREAD 1 with data = 10
IN THREAD 1 with data = 11
IN THREAD..............2 with data = 2
IN THREAD 1 with data = 12
IN THREAD 1 with data = 13
IN THREAD 1 with data = 14
IN THREAD 1 with data = 15
IN THREAD 1 with data = 16
IN THREAD 1 with data = 17
IN THREAD 1 with data = 18
IN THREAD 1 with data = 19
IN THREAD 1 with data = 20
The data in thread 2 is not being updated with the latest value from thread 1, and the GUI window is completely frozen. Please let me know if there is a better/more efficient way to implement multithreaded applications with Qt and to communicate between threads.
EDIT: Following Luca's suggestion, Thread1 remains almost the same, while Thread2.h now looks like this:
Thread2.h
#include <QThread>
#include <QTimer>
#include "iostream"

using namespace std;

class Thread2 : public QThread
{
    Q_OBJECT

public:
    Thread2();
    ~Thread2();
    void startThread();

public slots:
    void receiveData(int);

protected:
    void run();

private:
    volatile bool stop;
    int data;
    QTimer *timer;
};
and the implementation, Thread2.cpp:
#include "thread2.h"
Thread2::Thread2():stop(false),data(0)
{
timer = new QTimer;
QObject::connect(timer,SIGNAL(timeout()),this,SLOT(start()));
}
Thread2::~Thread2()
{
delete timer;
}
void Thread2::receiveData(int x)
{
this->data = x;
}
void Thread2::run()
{
cout<<"thread 2 .........data = "<<data<<endl;
}
void Thread2::startThread()
{
timer->start(100);
}
and mainwindow.cpp looks like this:
MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    t1 = new Thread1;
    t2 = new Thread2;
    QObject::connect(t1, SIGNAL(sendData(int)), t2, SLOT(receiveData(int)));
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::on_pushButton_start_thread1_clicked()
{
    t1->start();
    t2->startThread();
}
It seems to me the data is actually updated, but thread 1 is 10 times faster than thread 2. When you emit the Sleep signal, thread 2 is put to sleep for 100 ms, which makes it unable to process other signals. Those signals are placed in a queue and processed as soon as control returns to the event loop; only then do you see the message with the updated data.
The specification is also a bit odd to me: you write "thread 1 needs to send data to thread 2 every 100 ms", but you actually send it every 10 ms, and then you say "thread 1 itself sleeps for 10 ms in each loop of its run". What is thread 1 supposed to do for the rest of the time?
EDIT: I don't think this is exactly what you wanted, and I'm still not completely sure I understand what you're looking for. It is not a complete or polished implementation, just enough to give the idea:
#include <QCoreApplication>
#include <QTimer>
#include <QThread>

class Thread1 : public QThread
{
    Q_OBJECT

public:
    explicit Thread1() :
        data(0) {
        // Do nothing.
    }

    void run() {
        while (true) {
            data++;
            qDebug("Done some calculation here. Data is now %d.", data);
            emit dataChanged(data);
            usleep(10000);
        }
    }

signals:
    void dataChanged(int data);

private:
    int data;
};

class Thread2 : public QObject
{
    Q_OBJECT

public:
    explicit Thread2() {
        timer = new QTimer;
        connect(timer, SIGNAL(timeout()), this, SLOT(processData()));
        timer->start(100);
    }

    ~Thread2() {
        delete timer;
    }

public slots:
    void dataChanged(int data) {
        this->data = data;
    }

    void processData() {
        qDebug("Processing data = %d.", data);
    }

private:
    QTimer* timer;
    int data;
};

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    Thread1 t1;
    Thread2 t2;
    qApp->connect(&t1, SIGNAL(dataChanged(int)), &t2, SLOT(dataChanged(int)));
    t1.start();

    return a.exec();
}

#include "main.moc"
The output is:
Done some calculation here. Data is now 1.
Done some calculation here. Data is now 2.
Done some calculation here. Data is now 3.
Done some calculation here. Data is now 4.
Done some calculation here. Data is now 5.
Done some calculation here. Data is now 6.
Done some calculation here. Data is now 7.
Done some calculation here. Data is now 8.
Done some calculation here. Data is now 9.
Done some calculation here. Data is now 10.
Processing data = 10.
Done some calculation here. Data is now 11.
Done some calculation here. Data is now 12.
Done some calculation here. Data is now 13.
Done some calculation here. Data is now 14.
Done some calculation here. Data is now 15.
Done some calculation here. Data is now 16.
Done some calculation here. Data is now 17.
Done some calculation here. Data is now 18.
Done some calculation here. Data is now 19.
Processing data = 19.
Done some calculation here. Data is now 20.
Done some calculation here. Data is now 21.
Done some calculation here. Data is now 22.
Done some calculation here. Data is now 23.
Done some calculation here. Data is now 24.
Done some calculation here. Data is now 25.
Done some calculation here. Data is now 26.
Done some calculation here. Data is now 27.
Done some calculation here. Data is now 28.
Processing data = 28.
...
Beware that Thread2 actually runs in the main thread (i.e. the UI thread) of your application. Move the object to a different thread if you need it to.
Hi,
I know how to pass parameters to a Runnable, but once my thread has run, how do I get the result of the computation?
class Some implements Runnable
{
    int p;
    int endresult = 0;

    public Some(int param) {
        p = param;
    }

    public void run() {
        // do something
        endresult += p;
        // Now, how to let the method that executed this runnable know the result?
    }
}
Some s = new Some(1);
Thread t = new Thread(s);
t.start();
When t is finished, I want to get the 'endresult' variable.
You have to wait for your thread to terminate and then you can get the field value directly:
t.join();
y = s.endresult;
Declare endresult as volatile and call t.join() after the thread has been started; when t is finished, this lets you read the 'endresult' value.
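For illustration, a minimal sketch combining both suggestions (volatile field plus join()); note that join() itself already guarantees that writes made by the finished thread are visible afterwards, so the volatile is belt and braces here, and the Demo class name is just an arbitrary container:

class Some implements Runnable {
    int p;
    volatile int endresult = 0;   // volatile, as suggested above

    Some(int param) {
        p = param;
    }

    @Override
    public void run() {
        endresult += p;           // do the work and store the result in a field
    }
}

public class Demo {
    public static void main(String[] args) throws InterruptedException {
        Some s = new Some(1);
        Thread t = new Thread(s);
        t.start();

        t.join();                 // wait for t to terminate (also establishes happens-before)
        int y = s.endresult;      // now it is safe to read the result
        System.out.println("endresult = " + y);
    }
}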