I'm trying to throw an exception within a thread and allow the calling process to catch it. However, it seems that this causes the entire application to crash; the attached test code never prints either exit statement.
#include <boost/thread.hpp>
#include <iostream>

void wait(int seconds)
{
    boost::this_thread::sleep(boost::posix_time::seconds(seconds));
}

void thread()
{
    for (int i = 0; i < 5; ++i)
    {
        wait(1);
        std::cout << i << std::endl;
    }
    throw;
}

int main()
{
    try
    {
        boost::thread t(thread);
        t.join();
        std::cout << "Exit normally\n";
    }
    catch (...)
    {
        std::cout << "Caught Exception\n";
    }
}
Have a look at boost exception: Transporting of Exceptions Between Threads.
This approach has worked well for me.
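For illustration, here is a minimal sketch of that approach (not the exact code from the Boost documentation): the worker captures the exception with boost::current_exception() and the calling thread rethrows it with boost::rethrow_exception() after join().

#include <boost/exception_ptr.hpp>
#include <boost/exception/enable_current_exception.hpp>
#include <boost/thread.hpp>
#include <iostream>
#include <stdexcept>

// The worker stores whatever it would have thrown into an exception_ptr
// instead of letting the exception escape the thread function.
void worker(boost::exception_ptr& error)
{
    try {
        throw boost::enable_current_exception(std::runtime_error("thread failed"));
    } catch (...) {
        error = boost::current_exception(); // capture for the calling thread
    }
}

int main()
{
    boost::exception_ptr error;
    boost::thread t([&error] { worker(error); });
    t.join(); // only inspect the exception_ptr after the thread has finished

    try {
        if (error)
            boost::rethrow_exception(error); // rethrow in the calling thread
        std::cout << "Exit normally\n";
    } catch (std::exception const& e) {
        std::cout << "Caught Exception: " << e.what() << '\n';
    }
}

The enable_current_exception wrapper is what lets Boost make an exact copy of the exception for transport; without it, current_exception() may only be able to transport a boost::unknown_exception.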
Related
I'm running a C++ program on a Linux system (kernel 4.15, Ubuntu 16.04). When I compile and run my code, I get a signal 11 (segmentation fault) error message related to line 10 (the "for" loop) of the following code:
void ProfilerBlock::ReadStack(const trace::EventValue& event,
                              std::vector<std::string>* stack)
{
    process_t pid = ProcessForEvent(event);
    std::vector<std::string> symbolizedStack;
    const auto* stackField =
        value::ArrayValue::Cast(event.getEventField("stack"));

    for (const auto& addressValue : *stackField)
    {
        uint64_t address = addressValue.AsULong();
        symbols::Symbol symbol;
        uint64_t offset = 0;
        if (!_symbols.LookupSymbol(address, _images[pid], &symbol, &offset))
            symbol.set_name("Unknown Symbol");
        if (boost::starts_with(symbol.name(), "lttng_profile"))
            continue;
        stack->push_back(symbol.name());
        if (_dumpStacks)
            std::cout << symbol.name() << " - " << address << std::endl;
    }
    if (_dumpStacks)
        std::cout << std::endl;
}
Does anybody have any idea what might be causing this?
I have some very simple code that is supposed to test a multi-threaded logger by starting 10 threads at the same time, all of which write to the logger at once.
I expect to see all 10 messages, though not in any particular order; however, I randomly get 5, 6, 7, 8, 9, and sometimes 10 output messages.
Here is the code:
//*.cxx
#include <iostream>
#include <mutex>
#include <shared_mutex> // requires C++14
#include <string>
#include <thread>
#include <vector>

namespace {
    std::mutex g_msgLock;
    std::shared_timed_mutex g_testingLock;
}

void info(const char * msg) {
    std::unique_lock<std::mutex> lock(g_msgLock);
    std::cout << msg << '\n'; // don't flush
}

int main(int argc, char** argv) {
    info("Start message..");

    std::vector<std::thread> threads;
    unsigned int threadCount = 10;
    threads.reserve(threadCount);

    { // Scope for locking all threads
        std::lock_guard<std::shared_timed_mutex> lockAllThreads(g_testingLock); // RAII (scoped) lock
        for (unsigned int i = 0; i < threadCount; i++) {
            // Here we start the threads using lambdas
            threads.push_back(std::thread([&, i]() {
                // Here we block and wait on lockAllThreads
                std::shared_lock<std::shared_timed_mutex> threadLock(g_testingLock);
                std::string msg = std::string("THREADED_TEST_INFO_MESSAGE: ") + std::to_string(i);
                info(msg.c_str());
            }));
        }
    } // End of scope, lock is released, all threads continue now

    for (auto& thread : threads) {
        thread.join();
    }
}
The output is generally something of the form:
Start message..
THREADED_TEST_INFO_MESSAGE: 9
THREADED_TEST_INFO_MESSAGE: 5
THREADED_TEST_INFO_MESSAGE: 3
THREADED_TEST_INFO_MESSAGE: 1
THREADED_TEST_INFO_MESSAGE: 4
THREADED_TEST_INFO_MESSAGE: 0
THREADED_TEST_INFO_MESSAGE: 8
THREADED_TEST_INFO_MESSAGE: 7
Notice that there are only 8 outputs for this run.
Interestingly enough, this problem turned out to be associated with my build system, which was dropping messages. The executable itself always produces the output as expected.
I would like to reorder the handlers processed by a boost io_service:
This is my pseudocode:
start()
{
    io.run();
}

thread1()
{
    io.post(myhandler1);
}

thread2()
{
    io.post(myhandler2);
}
thread1() and thread2() are called independently.
In this case, the io_service processes the handlers in the order they were posted.
Queue example: myhandler1|myhandler1|myhandler2|myhandler1|myhandler2
How can I modify the io_service processing order so that myhandler1 and myhandler2 are executed alternately, one after the other?
New Queue example: myhandler1|myhandler2|myhandler1|myhandler2|myhandler1
I wrote this code, but the CPU usage is 100%:
start()
{
    while(1)
    {
        io1.poll_one();
        io2.poll_one();
    }
}

thread1()
{
    io1.post(myhandler1);
}

thread2()
{
    io2.post(myhandler2);
}
Thanks
I'd use two queues. I took the thread_pool class from this ASIO answer I wrote once (Non blocking boost io_service for deadline_timers).
I split it into task_queue and thread_pool classes.
I created a worker type that knows how to juggle two queues:
struct worker {
    task_queue q1, q2;

    void wake() {
        q1.wake();
        q2.wake();
    }

    void operator()(boost::atomic_bool& shutdown) {
        std::cout << "Worker start\n";
        while (true) {
            auto job1 = q1.dequeue(shutdown);
            if (job1) (*job1)();

            auto job2 = q2.dequeue(shutdown);
            if (job2) (*job2)();

            if (shutdown && !(job1 || job2))
                break;
        }
        std::cout << "Worker exit\n";
    }
};
You can see how the worker loop is structured so that, if tasks are enqueued, the queues are served in alternation.
Note: the wake() call is there for reliable shutdown; the queues use blocking waits, and hence they will need to be signaled (woken up) when the shutdown flag is toggled.
Full Demo
Live On Coliru
#include <boost/function.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <iostream>
#include <deque>

namespace custom {
    using namespace boost;

    class task_queue {
      private:
        mutex mx;
        condition_variable cv;

        typedef function<void()> job_t;
        std::deque<job_t> _queue;

      public:
        void enqueue(job_t job)
        {
            lock_guard<mutex> lk(mx);
            _queue.push_back(job);
            cv.notify_one();
        }

        template <typename T>
        optional<job_t> dequeue(T& shutdown)
        {
            unique_lock<mutex> lk(mx);
            cv.wait(lk, [&] { return shutdown || !_queue.empty(); });

            if (_queue.empty())
                return none;

            job_t job = _queue.front();
            _queue.pop_front();
            return job;
        }

        void wake() {
            lock_guard<mutex> lk(mx);
            cv.notify_all();
        }
    };

    template <typename Worker> class thread_pool
    {
      private:
        thread_group _pool;
        boost::atomic_bool _shutdown { false };
        Worker _worker;

        void start() {
            for (unsigned i = 0; i < 1 /*boost::thread::hardware_concurrency()*/; ++i) {
                std::cout << "Creating thread " << i << "\n";
                _pool.create_thread([&] { _worker(_shutdown); });
            }
        }

      public:
        thread_pool() { start(); }
        ~thread_pool() {
            std::cout << "Pool going down\n";
            _shutdown = true;
            _worker.wake();
            _pool.join_all();
        }

        Worker& get_worker() { return _worker; }
    };

    struct worker {
        task_queue q1, q2;

        void wake() {
            q1.wake();
            q2.wake();
        }

        void operator()(boost::atomic_bool& shutdown) {
            std::cout << "Worker start\n";
            while (true) {
                auto job1 = q1.dequeue(shutdown);
                if (job1) (*job1)();

                auto job2 = q2.dequeue(shutdown);
                if (job2) (*job2)();

                if (shutdown && !(job1 || job2))
                    break;
            }
            std::cout << "Worker exit\n";
        }
    };
}

void croak(char const* queue, int i) {
    static boost::mutex cout_mx;
    boost::lock_guard<boost::mutex> lk(cout_mx);
    std::cout << "thread " << boost::this_thread::get_id() << " " << queue << " task " << i << "\n";
}

int main() {
    custom::thread_pool<custom::worker> pool;
    auto& queues = pool.get_worker();

    for (int i = 1; i <= 10; ++i) queues.q1.enqueue([i] { croak("q1", i); });
    for (int i = 1; i <= 10; ++i) queues.q2.enqueue([i] { croak("q2", i); });
}
void croak(char const* queue, int i) {
static boost::mutex cout_mx;
boost::lock_guard<boost::mutex> lk(cout_mx);
std::cout << "thread " << boost::this_thread::get_id() << " " << queue << " task " << i << "\n";
}
int main() {
custom::thread_pool<custom::worker> pool;
auto& queues = pool.get_worker();
for (int i = 1; i <= 10; ++i) queues.q1.enqueue([i] { croak("q1", i); });
for (int i = 1; i <= 10; ++i) queues.q2.enqueue([i] { croak("q2", i); });
}
Prints e.g.
Creating thread 0
Pool going down
Worker start
thread 7f7311397700 q1 task 1
thread 7f7311397700 q2 task 1
thread 7f7311397700 q1 task 2
thread 7f7311397700 q2 task 2
thread 7f7311397700 q1 task 3
thread 7f7311397700 q2 task 3
thread 7f7311397700 q1 task 4
thread 7f7311397700 q2 task 4
thread 7f7311397700 q1 task 5
thread 7f7311397700 q2 task 5
thread 7f7311397700 q1 task 6
thread 7f7311397700 q2 task 6
thread 7f7311397700 q1 task 7
thread 7f7311397700 q2 task 7
thread 7f7311397700 q1 task 8
thread 7f7311397700 q2 task 8
thread 7f7311397700 q1 task 9
thread 7f7311397700 q2 task 9
thread 7f7311397700 q1 task 10
thread 7f7311397700 q2 task 10
Worker exit
Generalizing it
Here it is generalized for more queues (e.g. three):
Live On Coliru
Note that the above has one worker thread servicing the queues; if you create more than one thread, each thread individually alternates between the queues, but the overall order is undefined (because thread scheduling is undefined).
The generalized version is somewhat more accurate here, since it shares the idx variable between worker threads, but the actual output order still depends on thread scheduling.
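As a rough illustration only (the actual generalized code is behind the Coliru link; the names multi_worker and idx here are mine), such a worker over N queues might look like this, placed alongside the other classes in the custom namespace so it can reuse the task_queue from the demo above:

#include <array>
#include <cstddef>

// Illustrative sketch of a worker that serves N queues in round-robin order.
// The shared atomic index lets several worker threads continue the rotation
// where the previous dequeue left off.
template <std::size_t N>
struct multi_worker {
    std::array<task_queue, N> queues;
    boost::atomic<std::size_t> idx { 0 }; // shared round-robin position

    void wake() {
        for (auto& q : queues)
            q.wake();
    }

    void operator()(boost::atomic_bool& shutdown) {
        while (true) {
            bool did_work = false;
            for (std::size_t n = 0; n < N; ++n) {
                auto job = queues[idx++ % N].dequeue(shutdown);
                if (job) { (*job)(); did_work = true; }
            }
            if (shutdown && !did_work)
                break;
        }
    }
};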
Using run_one() instead of poll_one() should work (note that reset() is also required):
start()
{
    while(1)
    {
        io1.run_one();
        io2.run_one();
        io1.reset();
        io2.reset();
    }
}
However, I don't know if this is a good solution to any actual problem you might have. This is one of those cases where the question, "What are you really trying to do?" seems relevant. For example, if it makes sense to run handler2 after every invocation of handler1, then perhaps handler1 should invoke handler2.
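A trivial sketch of that last suggestion, reusing the handler names from the pseudocode above:

#include <boost/asio.hpp>
#include <iostream>

boost::asio::io_service io;

void myhandler2() { std::cout << "handler2\n"; }

// If myhandler2 must follow every invocation of myhandler1, the simplest
// option is to have myhandler1 invoke it directly.
void myhandler1()
{
    std::cout << "handler1\n";
    myhandler2(); // runs immediately after handler1's work, in the same handler
}

int main()
{
    for (int i = 0; i < 3; ++i)
        io.post(&myhandler1);
    io.run(); // prints handler1, handler2, handler1, handler2, handler1, handler2
}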
Calling QCoreApplication::hasPendingEvents() or QAbstractEventDispatcher::instance()->hasPendingEvents() inside of a thread works just fine. However, from outside of it, the latter (with the appropriate parameter) always returns false (the former cannot be used from outside, because it refers to the thread from which it is called).
Here is a complete code:
#include <QCoreApplication>
#include <QAbstractEventDispatcher>
#include <QThread>
#include <QDebug>
bool hasPendingEvents(QThread *thread = 0) {
    return QAbstractEventDispatcher::instance(thread)->hasPendingEvents();
}
class MyObject: public QObject {
    Q_OBJECT
public slots:
    void Run() {
        qDebug() << __LINE__ << hasPendingEvents() << QCoreApplication::hasPendingEvents();
        QThread::sleep(1);
    }
};
int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    QThread thread;
    MyObject t;
    t.moveToThread(&thread);
    thread.start();
    for (int i = 0; i < 4; ++i) QMetaObject::invokeMethod(&t, "Run", Qt::QueuedConnection);
    for (int i = 0; i < 10; ++i) {
        QThread::msleep(500);
        qDebug() << __LINE__ << hasPendingEvents(&thread) << hasPendingEvents(t.thread());
    }
    return 0;
}
#include "main.moc"
Here is the output:
15 true true
31 false false
31 false false
15 true true
31 false false
31 false false
15 true true
31 false false
31 false false
15 false false
31 false false
31 false false
31 false false
31 false false
Why doesn't QAbstractEventDispatcher::hasPendingEvents() work outside of a thread? Is there an alternative?
What you're showing might be a Qt bug. Alas, you probably don't need to check in this way whether another thread has any pending events.
The only reason I see that you might want to do this is to manage your own thread pool and move objects to threads that are not "busy". You'd keep a list of "busy" and "available" threads. That's what the QAbstractEventDispatcher::aboutToBlock signal is for. Your thread pool should connect to this signal for every thread it creates, and add the thread to the "available" list upon reception.
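A rough sketch of that idea (the ThreadPool class and its bookkeeping are hypothetical; note that a thread's dispatcher only exists once its event loop is running):

#include <QAbstractEventDispatcher>
#include <QObject>
#include <QSet>
#include <QThread>

// Hypothetical pool bookkeeping: a thread moves to the "available" set when
// its event dispatcher reports it is about to block (no more events queued),
// and back to "busy" when the dispatcher wakes up again.
class ThreadPool : public QObject {
public:
    void manage(QThread *thread) {
        // The dispatcher is only available after the thread's event loop has started.
        QAbstractEventDispatcher *dispatcher = QAbstractEventDispatcher::instance(thread);
        m_busy.insert(thread);
        connect(dispatcher, &QAbstractEventDispatcher::aboutToBlock, this, [this, thread] {
            m_busy.remove(thread);
            m_available.insert(thread);
        });
        connect(dispatcher, &QAbstractEventDispatcher::awake, this, [this, thread] {
            m_available.remove(thread);
            m_busy.insert(thread);
        });
    }
private:
    QSet<QThread*> m_busy, m_available;
};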
If, on the other hand, you're trying to use it to implement some event compression, that's really the most awkward way to go about it. In another answer, I show how to implement custom event compression, and also how to compress signal-slot calls.
I am having difficulty understanding IPC in a multiprocess system. I have a system in which three child processes send two types of signals to their process group, and there are four signal-handling processes, each responsible for a particular type of signal.
There is also a monitoring process which waits for both signals and processes them accordingly. When I run this program for a while, the monitoring process does not seem to pick up the signals as well as the signal-handling processes do. I can see in the log that signals are being generated but not handled at all.
My code is given below:
#include <cstdlib>
#include <iostream>
#include <iomanip>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/time.h>
#include <signal.h>
#include <fcntl.h>
#include <cstdio>
#include <stdlib.h>
#include <stdio.h>
#include <pthread.h>

using namespace std;

double timestamp() {
    struct timeval tp;
    gettimeofday(&tp, NULL);
    return (double)tp.tv_sec + tp.tv_usec / 1000000.;
}

double getinterval() {
    srand(time(NULL));
    int r = rand() % 10 + 1;
    double s = (double)r / 100;
    return s;
}

int count;
int count_1;
int count_2;

double time_1[10];
double time_2[10];

pid_t senders[1];
pid_t handlers[4];
pid_t reporter;

void catcher(int sig) {
    printf("Signal catcher called for %d", sig);
}

int main(int argc, char *argv[]) {
    void signal_catcher_int(int);
    pid_t pid, w;
    int status;

    if (signal(SIGUSR1, SIG_IGN) == SIG_ERR) {
        perror("1");
        return 1;
    }
    if (signal(SIGUSR2, SIG_IGN) == SIG_ERR) {
        perror("2");
        return 2;
    }
    if (signal(SIGINT, signal_catcher_int) == SIG_ERR) {
        perror("3");
        return 2;
    }

    // Registering the signal handlers
    for (int i = 0; i < 4; i++) {
        if ((pid = fork()) == 0) {
            cout << i << endl;
            //struct sigaction sigact;
            sigset_t sigset;
            int sig;
            int result = 0;
            sigemptyset(&sigset);

            if (i % 2 == 0) {
                if (signal(SIGUSR2, SIG_IGN) == SIG_ERR) {
                    perror("2");
                    return 2;
                }
                sigaddset(&sigset, SIGUSR1);
                sigprocmask(SIG_BLOCK, &sigset, NULL);
            } else {
                if (signal(SIGUSR1, SIG_IGN) == SIG_ERR) {
                    perror("2");
                    return 2;
                }
                sigaddset(&sigset, SIGUSR2);
                sigprocmask(SIG_BLOCK, &sigset, NULL);
            }

            while (true) {
                int result = sigwait(&sigset, &sig);
                if (result == 0) {
                    cout << "The caught signal is " << sig << endl;
                }
            }
            exit(0);
        } else {
            cout << "Registerd the handler " << pid << endl;
            handlers[i] = pid;
        }
    }

    // Registering the monitoring process
    if ((pid = fork()) == 0) {
        sigset_t sigset;
        int sig;
        int result = 0;
        sigemptyset(&sigset);
        sigaddset(&sigset, SIGUSR1);
        sigaddset(&sigset, SIGUSR2);
        sigprocmask(SIG_BLOCK, &sigset, NULL);

        while (true) {
            int result = sigwait(&sigset, &sig);
            if (result == 0) {
                cout << "The monitored signal is " << sig << endl;
            } else {
                cout << "error" << endl;
            }
        }
    } else {
        reporter = pid;
    }

    sleep(3);

    // Registering the signal generator
    for (int i = 0; i < 1; i++) {
        if ((pid = fork()) == 0) {
            if (signal(SIGUSR1, SIG_IGN) == SIG_ERR) {
                perror("1");
                return 1;
            }
            if (signal(SIGUSR2, SIG_IGN) == SIG_ERR) {
                perror("2");
                return 2;
            }
            srand(time(0));

            while (true) {
                volatile int signal_id = rand() % 2 + 1;
                cout << "Generating the signal " << signal_id << endl;
                if (signal_id == 1) {
                    killpg(getpgid(getpid()), SIGUSR1);
                } else {
                    killpg(getpgid(getpid()), SIGUSR2);
                }
                int r = rand() % 10 + 1;
                double s = (double)r / 100;
                sleep(s);
            }
            exit(0);
        } else {
            cout << "Registered the sender " << pid << endl;
            senders[i] = pid;
        }
    }

    while (w = wait(&status)) {
        cout << "Wait on PID " << w << endl;
    }
}

void signal_catcher_int(int the_sig) {
    //cout << "Handling the Ctrl C signal " << endl;
    for (int i = 0; i < 1; i++) {
        kill(senders[i], SIGKILL);
    }
    for (int i = 0; i < 4; i++) {
        kill(handlers[i], SIGKILL);
    }
    kill(reporter, SIGKILL);
    exit(3);
}
Any suggestions?
Here is a sample of the output as well
In the beginning
Registerd the handler 9544
Registerd the handler 9545
1
Registerd the handler 9546
Registerd the handler 9547
2
3
0
Registered the sender 9550
Generating the signal 1
The caught signal is 10
The monitored signal is 10
The caught signal is 10
Generating the signal 1
The caught signal is 10
The monitored signal is 10
The caught signal is 10
Generating the signal 1
The caught signal is 10
The monitored signal is 10
The caught signal is 10
Generating the signal 1
The caught signal is 10
The monitored signal is 10
The caught signal is 10
Generating the signal 2
The caught signal is 12
The caught signal is 12
The monitored signal is 12
Generating the signal 2
Generating the signal 2
The caught signal is 12
The caught signal is 12
Generating the signal 1
The caught signal is 12
The monitored signal is 10
The monitored signal is 12
Generating the signal 1
Generating the signal 2
The caught signal is 12
Generating the signal 1
Generating the signal 2
10
The monitored signal is 10
The caught signal is 12
Generating the signal 1
The caught signal is 12
The monitored signal is GenThe caught signal is TheThe caught signal is 10
Generating the signal 2
Later on
The monitored signal is GenThe monitored signal is 10
Generating the signal 1
Generating the signal 2
The caught signal is 10
The caught signal is 10
The caught signal is 10
The caught signal is 12
Generating the signal 1
Generating the signal 2
Generating the signal 1
Generating the signal 1
Generating the signal 2
Generating the signal 2
Generating the signal 2
Generating the signal 2
Generating the signal 2
Generating the signal 1
The caught signal is 12
The caught signal is 10
The caught signal is 10
Generating the signal 2
Generating the signal 1
Generating the signal 1
Generating the signal 2
Generating the signal 1
Generating the signal 2
Generating the signal 2
Generating the signal 2
Generating the signal 1
Generating the signal 2
Generating the signal 1
Generating the signal 2
Generating the signal 2
The caught signal is 10
Generating the signal 2
Generating the signal 1
Generating the signal 1
As you can see, initially the signals were generated and handled by both my signal handlers and the monitoring process. But later on, signals were generated frequently yet not processed at anywhere near the same rate as before, and I saw very little signal processing by the monitoring process in particular.
Can anyone please provide some insight into what's going on?
If multiple signals of the same type are pending, Linux by default delivers only one such signal. This is in line with the sigwait documentation:
If prior to the call to sigwait() there are multiple pending instances
of a single signal number, it is implementation-dependent whether upon
successful return there are any remaining pending signals for that
signal number.
So the output of your program depends on the scheduler: if kill is called multiple times and the scheduler does not wake the monitoring process in the meantime, signals of the same type are collapsed into one.
Linux allows you to change this default behavior, for example by using real-time signals (SIGRTMIN through SIGRTMAX), which are queued rather than collapsed.
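As a minimal illustration of that option (a toy example, separate from your program): real-time signals sent with sigqueue() are queued individually, so none of them are lost to coalescing.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);          // use a real-time signal instead of SIGUSR1
    sigprocmask(SIG_BLOCK, &set, NULL); // block it so sigwaitinfo() can collect it

    // Queue three instances of the same signal to ourselves; unlike SIGUSR1,
    // all three remain pending and are delivered one by one.
    for (int i = 0; i < 3; ++i) {
        union sigval value;
        value.sival_int = i;
        sigqueue(getpid(), SIGRTMIN, value);
    }

    for (int i = 0; i < 3; ++i) {
        siginfo_t info;
        sigwaitinfo(&set, &info);
        printf("got signal %d with payload %d\n", info.si_signo, info.si_value.sival_int);
    }
}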