Multiple SFML RenderWindow in separate threads - linux

Having some trouble with SFML (version 2.1).
Trying to create two instances of sf::RenderWindow on two separate threads. The application works for some time (the amount of time is not constant) and then eventually crashes with the assertion:
testapp: ../../src/xcb_conn.c:186: write_vec: Assertion `!c->out.queue_len' failed.
Test piece of code:
#include <iostream>
#include <thread>
#include <SFML/Window.hpp>
#include <SFML/Graphics.hpp>
#include <X11/Xlib.h>

void WindowOne ()
{
    sf::RenderWindow* wnd = new sf::RenderWindow (sf::VideoMode(800,600), "Window 1");
    while (wnd->isOpen())
    {
        sf::Event event;
        while (wnd->pollEvent(event))
        {
            switch (event.type)
            {
            case sf::Event::Closed:
                wnd->close();
                break;
            default:
                break;
            }
        }
        wnd->clear(sf::Color::White);
        wnd->display();
    }
    delete wnd;
}

void WindowTwo ()
{
    sf::RenderWindow* wnd = new sf::RenderWindow (sf::VideoMode(800,600), "Window 2");
    while (wnd->isOpen())
    {
        sf::Event event;
        while (wnd->pollEvent(event))
        {
            switch (event.type)
            {
            case sf::Event::Closed:
                wnd->close();
                break;
            default:
                break;
            }
        }
        wnd->clear(sf::Color::White);
        wnd->display();
    }
    delete wnd;
}

int main(int argc, char** argv) {
    XInitThreads();
    std::thread thread1 (WindowOne);
    std::thread thread2 (WindowTwo);
    thread1.join();
    thread2.join();
    return 0;
}
Please help me figure out what I am doing wrong, or whether I can even do such things.
Edit:
Forgot to mention that it crashes at:
wnd->display();
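One thing worth trying as a diagnostic (this is a guess, not a confirmed fix): serialize the two threads' display() calls with a shared std::mutex. If the assertion goes away, the crash is coming from concurrent X11 calls rather than from the window setup itself. The guardedDisplay helper below is illustrative, not part of the original code:
#include <mutex>
#include <SFML/Graphics.hpp>

std::mutex displayMutex;  // shared by both window threads (diagnostic only)

// Drop-in replacement for the bare wnd->display() call in each loop.
void guardedDisplay(sf::RenderWindow& wnd)
{
    std::lock_guard<std::mutex> lock(displayMutex);
    wnd.display();
}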

Related

Wait for thread queue to be empty

I am new to C++ and multithreading applications. I want to process a long list of data (potentially several thousand entries) by dividing its entries among a few threads. I retrieved a ThreadPool class and a Queue class from the web (this is my first time tackling the subject). I construct the threads and populate the queue as follows (definitions at the end of the post):
ThreadPool *pool = new ThreadPool(8);
std::vector<std::function<void(int)>> *caller =
    new std::vector<std::function<void(int)>>;
for (size_t i = 0; i < Nentries; ++i)
{
    caller->push_back([=](int j) { func(entries[i], j); });
    pool->PushTask((*caller)[i]);
}
delete pool;
The problem is that only as many entries as there are threads are processed, as if the program did not wait for the queue to be empty. Indeed, if I put
while (pool->GetWorkQueueLength()) {}
just before the pool destructor, the whole list is correctly processed. However, I am afraid I am consuming too many resources by using a while loop. Moreover, I have not found anyone doing anything like it, so I think this is the wrong approach and the classes I use have some error. Can anyone find the error (if present) or suggest another solution?
Here are the classes I use. I suppose the problem is in the implementation of the destructor, but I am not sure.
SynchronizedQueue.hh
#ifndef SYNCQUEUE_H
#define SYNCQUEUE_H

#include <list>
#include <mutex>
#include <condition_variable>

template<typename T>
class SynchronizedQueue
{
public:
    SynchronizedQueue();
    void Put(T const & data);
    T Get();
    size_t Size();

private:
    SynchronizedQueue(SynchronizedQueue const &) = delete;
    SynchronizedQueue & operator=(SynchronizedQueue const &) = delete;
    std::list<T> queue;
    std::mutex mut;
    std::condition_variable condvar;
};

template<typename T>
SynchronizedQueue<T>::SynchronizedQueue()
{}

template<typename T>
void SynchronizedQueue<T>::Put(T const & data)
{
    std::unique_lock<std::mutex> lck(mut);
    queue.push_back(data);
    condvar.notify_one();
}

template<typename T>
T SynchronizedQueue<T>::Get()
{
    std::unique_lock<std::mutex> lck(mut);
    while (queue.empty())
    {
        condvar.wait(lck);
    }
    T result = queue.front();
    queue.pop_front();
    return result;
}

template<typename T>
size_t SynchronizedQueue<T>::Size()
{
    std::unique_lock<std::mutex> lck(mut);
    return queue.size();
}

#endif
ThreadPool.hh
#ifndef THREADPOOL_H
#define THREADPOOL_H

#include "SynchronizedQueue.hh"

#include <atomic>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

class ThreadPool
{
public:
    ThreadPool(int nThreads = 0);
    virtual ~ThreadPool();
    void PushTask(std::function<void(int)> func);
    size_t GetWorkQueueLength();

private:
    void WorkerThread(int i);

    std::atomic<bool> done;
    unsigned int threadCount;
    SynchronizedQueue<std::function<void(int)>> workQueue;
    std::vector<std::thread> threads;
};

#endif
ThreadPool.cc
#include "ThreadPool.hh"
#include "SynchronizedQueue.hh"

void doNothing(int i)
{}

ThreadPool::ThreadPool(int nThreads)
    : done(false)
{
    if (nThreads <= 0)
    {
        threadCount = std::thread::hardware_concurrency();
    }
    else
    {
        threadCount = nThreads;
    }
    for (unsigned int i = 0; i < threadCount; ++i)
    {
        threads.push_back(std::thread(&ThreadPool::WorkerThread, this, i));
    }
}

ThreadPool::~ThreadPool()
{
    done = true;
    for (unsigned int i = 0; i < threadCount; ++i)
    {
        PushTask(&doNothing);
    }
    for (auto& th : threads)
    {
        if (th.joinable())
        {
            th.join();
        }
    }
}

void ThreadPool::PushTask(std::function<void(int)> func)
{
    workQueue.Put(func);
}

void ThreadPool::WorkerThread(int i)
{
    while (!done)
    {
        workQueue.Get()(i);
    }
}

size_t ThreadPool::GetWorkQueueLength()
{
    return workQueue.Size();
}
You can push tasks that say "done" instead of setting done via the atomic variable. Each thread will then exit by itself when it sees a "done" task, and no earlier. In the destructor you only need to push these tasks and join the threads. This is called a "poison pill".
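A minimal sketch of that idea, reusing the names from the ThreadPool above. Treating an empty std::function as the stop marker is my choice, not something from the original code; with this scheme the done flag and the doNothing task are no longer needed:
ThreadPool::~ThreadPool()
{
    // One poison pill per worker: each thread consumes exactly one pill and exits,
    // but only after everything queued before the pills has been processed (FIFO).
    for (unsigned int i = 0; i < threadCount; ++i)
    {
        PushTask(std::function<void(int)>());  // empty function = stop marker
    }
    for (auto& th : threads)
    {
        if (th.joinable())
        {
            th.join();
        }
    }
}

void ThreadPool::WorkerThread(int i)
{
    for (;;)
    {
        std::function<void(int)> task = workQueue.Get();
        if (!task)  // poison pill reached: no more work for this thread
        {
            break;
        }
        task(i);
    }
}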
Alternatively, if you insist on your current design with the done variable, you can wait on the same condition you already have:
std::unique_lock<std::mutex> lck(mut);
while (!queue.empty())
{
    condvar.wait(lck);
}
But then you'll need to change your notify_one to notify_all, and this may be sub-optimal.
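For completeness, here is a sketch of how that wait could live inside SynchronizedQueue itself. The WaitUntilEmpty method and the extra notification in Get() are additions of mine, not part of the original classes; the pool's destructor would call workQueue.WaitUntilEmpty() before setting done and pushing the doNothing tasks:
// Add a matching declaration to the class: void WaitUntilEmpty();
template<typename T>
void SynchronizedQueue<T>::WaitUntilEmpty()
{
    std::unique_lock<std::mutex> lck(mut);
    condvar.wait(lck, [this] { return queue.empty(); });
}

template<typename T>
T SynchronizedQueue<T>::Get()
{
    std::unique_lock<std::mutex> lck(mut);
    condvar.wait(lck, [this] { return !queue.empty(); });
    T result = queue.front();
    queue.pop_front();
    if (queue.empty())
    {
        condvar.notify_all();  // wake anyone blocked in WaitUntilEmpty()
    }
    return result;
}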
I want to process a long list of data (potentially several thousand entries) by dividing its entries among a few threads.
You can do that with parallel algorithms, like tbb::parallel_for:
#include <tbb/parallel_for.h>
#include <vector>

void func(int entry);

int main () {
    std::vector<int> entries(1000000);
    tbb::parallel_for(size_t{0}, entries.size(), [&](size_t i) { func(entries[i]); });
}
If you need sequential thread ids, you can do:
void func(int element, int thread_id);

template<class C>
inline auto make_range(C& c) -> decltype(tbb::blocked_range<decltype(c.begin())>(c.begin(), c.end())) {
    return tbb::blocked_range<decltype(c.begin())>(c.begin(), c.end());
}

int main () {
    std::vector<int> entries(1000000);
    std::atomic<int> thread_counter{0};
    tbb::parallel_for(make_range(entries), [&](auto sub_range) {
        static thread_local int const thread_id = thread_counter.fetch_add(1, std::memory_order_relaxed);
        for (auto& element : sub_range)
            func(element, thread_id);
    });
}
Alternatively, there is std::this_thread::get_id.

How to determine if condition_variable::wait_for timed out

I have a class which allows waiting on a condition_variable while taking care of spurious wake-ups. Following is the code:
Code:
// CondVarWrapper.hpp
#pragma once

#include <mutex>
#include <chrono>
#include <condition_variable>

class CondVarWrapper {
public:
    void Signal() {
        std::unique_lock<std::mutex> unique_lock(mutex);
        cond_var_signalled = true;
        unique_lock.unlock();
        cond_var.notify_one();
    }

    // TODO: WaitFor needs to return false if timed out waiting
    bool WaitFor(const std::chrono::seconds timeout) {
        std::unique_lock<std::mutex> unique_lock(mutex);
        bool timed_out = false;
        // How to determine if wait_for timed out ?
        cond_var.wait_for(unique_lock, timeout, [this] {
            return cond_var_signalled;
        });
        cond_var_signalled = false;
        return timed_out;
    }

    void Wait() {
        std::unique_lock<std::mutex> unique_lock(mutex);
        cond_var.wait(unique_lock, [this] {
            return cond_var_signalled;
        });
        cond_var_signalled = false;
    }

private:
    bool cond_var_signalled = false;
    std::mutex mutex;
    std::condition_variable cond_var;
};

// main.cpp
#include "CondVarWrapper.hpp"

#include <iostream>
#include <string>
#include <thread>

int main() {
    CondVarWrapper cond_var_wrapper;
    std::thread my_thread = std::thread([&cond_var_wrapper] {
        std::cout << "Thread started" << std::endl;
        if (cond_var_wrapper.WaitFor(std::chrono::seconds(1))) {
            std::cout << "Wait ended before timeout" << std::endl;
        } else {
            std::cout << "Timed out waiting" << std::endl;
        }
    });
    std::this_thread::sleep_for(std::chrono::seconds(6));
    // Uncomment following line to see the timeout working
    cond_var_wrapper.Signal();
    my_thread.join();
}
Question:
In the WaitFor method, how do I determine whether cond_var timed out waiting? WaitFor should return false when it times out and true otherwise. Is that possible?
I see cv_status explained on cppreference, but I am struggling to find a good example of how to use it.
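For reference, a minimal sketch of one way to fill in the TODO: the predicate overload of std::condition_variable::wait_for returns the predicate's final value, so it returns false exactly when the wait timed out with the predicate still false (the overload without a predicate returns std::cv_status instead). This would be a drop-in replacement for the WaitFor shown above:
bool WaitFor(const std::chrono::seconds timeout) {
    std::unique_lock<std::mutex> unique_lock(mutex);
    // wait_for returns false if the timeout expired with the predicate still false.
    const bool signalled = cond_var.wait_for(unique_lock, timeout, [this] {
        return cond_var_signalled;
    });
    cond_var_signalled = false;
    return signalled;  // false => timed out
}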

thread safe boost intrusive list is slow

I wrapped a boost intrusive list with a mutex to make it thread safe, to be used as a producer/consumer queue.
But on Windows (MSVC 14) it is really slow: after profiling, 95% of the time is spent idle, mainly in the push() and wait_and_pop() methods.
I only have 1 producer and 2 producer/consumer threads.
Any suggestions to make this faster?
#ifndef INTRUSIVE_CONCURRENT_QUEUE_HPP
#define INTRUSIVE_CONCURRENT_QUEUE_HPP

#include <thread>
#include <mutex>
#include <condition_variable>

#include <boost/intrusive/list.hpp>

using namespace boost::intrusive;

template<typename Data>
class intrusive_concurrent_queue
{
protected:
    list<Data, constant_time_size<false> > the_queue;
    mutable std::mutex the_mutex;
    std::condition_variable the_condition_variable;

public:
    void push(Data * data)
    {
        std::unique_lock<std::mutex> lock(the_mutex);
        the_queue.push_back(*data);
        lock.unlock();
        the_condition_variable.notify_one();
    }

    bool empty() const
    {
        std::unique_lock<std::mutex> lock(the_mutex);
        return the_queue.empty();
    }

    size_t unsafe_size() const
    {
        return the_queue.size();
    }

    size_t size() const
    {
        std::unique_lock<std::mutex> lock(the_mutex);
        return the_queue.size();
    }

    Data* try_pop()
    {
        Data* popped_ptr;
        std::unique_lock<std::mutex> lock(the_mutex);
        if (the_queue.empty())
        {
            return nullptr;
        }
        popped_ptr = &the_queue.front();
        the_queue.pop_front();
        return popped_ptr;
    }

    Data* wait_and_pop(const bool & exernal_stop = false)
    {
        Data* popped_ptr;
        std::unique_lock<std::mutex> lock(the_mutex);
        the_condition_variable.wait(lock, [&]{ return !(the_queue.empty() | exernal_stop); });
        if (exernal_stop)
        {
            return nullptr;
        }
        popped_ptr = &the_queue.front();
        the_queue.pop_front();
        return popped_ptr;
    }

    intrusive_concurrent_queue<Data> & operator=(intrusive_concurrent_queue<Data>&& origin)
    {
        this->the_queue = std::move(the_queue);
        return *this;
    }
};

#endif // !INTRUSIVE_CONCURRENT_QUEUE_HPP
Try removing the lock from the methods and lock the whole data structure when you do things with it.
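A minimal sketch of one way to read that advice (the names, and the splice-based drain, are illustrative rather than taken from the original code): let the caller own the lock and take it once per batch of work instead of once per element.
#include <mutex>
#include <condition_variable>
#include <boost/intrusive/list.hpp>

// The queue no longer locks internally; the caller owns the locking.
template<typename Data>
struct externally_locked_queue
{
    boost::intrusive::list<Data, boost::intrusive::constant_time_size<false>> items;
    std::mutex mutex;
    std::condition_variable not_empty;
};

// Consumer side: one lock acquisition steals everything currently queued,
// then the batch is processed without holding the lock.
template<typename Data, typename Fn>
void consume_batch(externally_locked_queue<Data>& q, Fn process)
{
    boost::intrusive::list<Data, boost::intrusive::constant_time_size<false>> batch;
    {
        std::unique_lock<std::mutex> lock(q.mutex);
        q.not_empty.wait(lock, [&] { return !q.items.empty(); });
        batch.splice(batch.end(), q.items);  // O(1): relink all queued nodes at once
    }
    while (!batch.empty())
    {
        Data& d = batch.front();
        batch.pop_front();
        process(d);
    }
}
The producer side locks q.mutex, push_back()s its nodes, unlocks, and calls q.not_empty.notify_one(). Whether this actually helps depends on how much time the consumers spend per element, so it is worth profiling again after the change.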

PCL 1.6: generate pcd files from each frame of a oni file

I need to process each frame of an ONI file. For now I just want to save each frame of a file.oni as a .pcd file. I followed this code, but it only works with PCL 1.7 and I'm using v1.6.
So I changed the code a bit, in this manner:
#include <pcl/io/openni_grabber.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/oni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <vector>

int i = 0;
char buf[4096];

class SimpleOpenNIViewer
{
public:
    SimpleOpenNIViewer () : viewer ("PCL OpenNI Viewer") {}

    void cloud_cb_ (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud)
    {
        //if (!viewer.wasStopped())
        //{
        //    viewer.showCloud (cloud);
        pcl::PCDWriter w;
        sprintf (buf, "frame_%06d.pcd", i);
        w.writeBinaryCompressed (buf, *cloud);
        PCL_INFO ("Wrote a cloud with %zu (%ux%u) points in %s.\n",
                  cloud->size (), cloud->width, cloud->height, buf);
        ++i;
        //}
    }

    void run ()
    {
        pcl::Grabber* interface = new pcl::OpenNIGrabber("file.oni");

        boost::function<void (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr&)> f =
            boost::bind (&SimpleOpenNIViewer::cloud_cb_, this, _1);

        interface->registerCallback (f);
        interface->start ();

        while (!viewer.wasStopped())
        {
            boost::this_thread::sleep (boost::posix_time::seconds (1));
        }

        PCL_INFO ("Successfully processed %d frames.\n", i);
        interface->stop ();
    }

    pcl::visualization::CloudViewer viewer;
};

int main ()
{
    SimpleOpenNIViewer v;
    v.run ();
    return 0;
}
But it crashes when I run it. Why?
I solved my problem of getting each frame from an ONI file: I need to use the ONIGrabber set to trigger mode.
This is the modified code:
#include <pcl/io/openni_grabber.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/oni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <vector>

int i = 0;
char buf[4096];

class SimpleOpenNIViewer
{
public:
    SimpleOpenNIViewer () : viewer ("PCL OpenNI Viewer") {}

    void cloud_cb_ (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud)
    {
        //if (!viewer.wasStopped())
        //{
        //    viewer.showCloud (cloud);
        pcl::PCDWriter w;
        sprintf (buf, "frame_%06d.pcd", i);
        w.writeBinaryCompressed (buf, *cloud);
        PCL_INFO ("Wrote a cloud with %zu (%ux%u) points in %s.\n",
                  cloud->size (), cloud->width, cloud->height, buf);
        ++i;
        //}
    }

    void run ()
    {
        pcl::Grabber* interface = new pcl::ONIGrabber("file.oni", false, false); // set for trigger mode

        boost::function<void (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr&)> f =
            boost::bind (&SimpleOpenNIViewer::cloud_cb_, this, _1);

        interface->registerCallback (f);
        interface->start ();

        while (!viewer.wasStopped())
        {
            interface->start (); // trigger the next frame from the oni file
            boost::this_thread::sleep (boost::posix_time::seconds (1));
        }

        PCL_INFO ("Successfully processed %d frames.\n", i);
        interface->stop ();
    }

    pcl::visualization::CloudViewer viewer;
};

int main ()
{
    SimpleOpenNIViewer v;
    v.run ();
    return 0;
}

Multithread Directory and File Search

I am new to semaphores and the concept of mutual exclusion. I am supposed to recursively search for text in files across directories using multithreading; the number of threads is given by the user.
The issue with this code is that it goes through one directory and then waits. I cannot figure out what is wrong. I am also getting a segmentation fault error and cannot figure out why that is happening.
#include <iostream>
#include <sys/wait.h>
#include <sys/types.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>
#include <dirent.h>
#include <sys/stat.h>
#include <fstream>
#include <limits.h>
#include <stdlib.h>
#include <semaphore.h>
#include <stdio.h>

using namespace std;

int iDirectories = 0;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
sem_t semaphore1;
char searchStringThread[PATH_MAX];
int directories = 0;

class directoryQueue
{
private:
    struct Node
    {
        char directoryPath[PATH_MAX];
        Node *next;
    };
    Node *front;
    Node *rear;
    Node *nodeCount;

public:
    directoryQueue(void)
    {
        front = NULL;
        rear = NULL;
        nodeCount = 0;
    }

    void Enqueue(char array[PATH_MAX])
    {
        Node *newNode;
        newNode = new Node;
        strcpy(newNode->directoryPath, array);
        newNode->next = NULL;
        if (isEmpty())
        {
            front = newNode;
            rear = newNode;
        }
        else
        {
            rear->next = newNode;
            rear = newNode;
        }
        nodeCount++;
    }

    char * Dequeue(void)
    {
        Node *temp;
        if (isEmpty())
            cout << "Error ! Empty Queue " << endl;
        else
        {
            char *deque;
            deque = new char[PATH_MAX];
            strcpy(deque, front->directoryPath);
            temp = front->next;
            front = temp;
            nodeCount--;
            return deque;
        }
    }

    bool isEmpty(void)
    {
        if (nodeCount)
            return false;
        else
            return true;
    }

    void makeNull(void)
    {
        while (!isEmpty())
        {
            Dequeue();
        }
    }

    ~directoryQueue(void)
    {
        makeNull();
    }
};

directoryQueue saveDirectory;

void *threadHandler(void *)
{
    int thpath_length;
    char thPath[PATH_MAX];
    char saveITDirectory[PATH_MAX];
    char itDirectory[PATH_MAX];
    int threadCount;
    struct dirent *iWalker;
    DIR *iDirectory;

    pthread_mutex_lock(&mutex);
    threadCount = iDirectories++;
    pthread_mutex_unlock(&mutex);

    sem_wait(&semaphore1);

    pthread_mutex_lock(&mutex);
    strcpy(itDirectory, saveDirectory.Dequeue());
    pthread_mutex_unlock(&mutex);

    iDirectory = opendir(itDirectory);
    if (iDirectory == NULL)
    {
        cout << "Error" << endl;
        cout << itDirectory << " Cannot be Opened" << endl;
        exit(10000);
    }

    while ((iWalker = readdir(iDirectory)) != NULL)
    {
        if (iWalker->d_type == DT_REG)
        {
            strcpy(saveITDirectory, iWalker->d_name);
            cout << itDirectory << "/" << endl;
            if (strcmp(saveITDirectory, "..") == 0 ||
                strcmp(saveITDirectory, ".") == 0)
            {
                continue;
            }
            else
            {
                thpath_length = snprintf(thPath, PATH_MAX, "%s/%s", itDirectory, saveITDirectory);
                cout << thPath << endl;
                if (thpath_length >= PATH_MAX)
                {
                    cout << "Path is too long" << endl;
                    exit(1000);
                }

                ifstream openFile;
                openFile.open(thPath);
                char line[1500];
                int currentLine = 0;
                if (openFile.is_open()) {
                    while (openFile.good()) {
                        currentLine++;
                        openFile.getline(line, 1500);
                        if (strstr(line, searchStringThread) != NULL) {
                            cout << thPath << ": " << currentLine << ": " << line << endl;
                            cout << "This was performed by Thread no. " << threadCount << endl;
                            cout << "ID :" << pthread_self();
                        }
                    }
                }
                openFile.close();
            }
        }

        if (closedir(iDirectory))
        {
            cout << "Unable to close " << itDirectory << endl;
            exit(1000);
        }
    }
}

void walkThroughDirectory(char directory_name[PATH_MAX], char searchString[PATH_MAX])
{
    DIR * directory;
    struct dirent * walker;
    char d_name[PATH_MAX];
    int path_length;
    char path[PATH_MAX];

    directory = opendir(directory_name);
    if (directory == NULL)
    {
        cout << "Error" << endl;
        cout << directory_name << " Cannot be Opened" << endl;
        exit(10000);
    }

    while ((walker = readdir(directory)) != NULL)
    {
        strcpy(d_name, walker->d_name);
        cout << directory_name << "/" << endl;
        if (strcmp(d_name, "..") == 0 ||
            strcmp(d_name, ".") == 0)
        {
            continue;
        }
        else
        {
            path_length = snprintf(path, PATH_MAX, "%s/%s", directory_name, d_name);
            cout << path << endl;
            if (path_length >= PATH_MAX)
            {
                cout << "Path is too long" << endl;
                exit(1000);
            }
            if (walker->d_type == DT_DIR)
            {
                pthread_mutex_lock(&mutex);
                saveDirectory.Enqueue(path);
                pthread_mutex_lock(&mutex);
                sem_post(&semaphore1);
                directories++;
                walkThroughDirectory(path, searchString);
            }
            else if (walker->d_type == DT_REG)
            {
                ifstream openFile;
                openFile.open(path);
                char line[1500];
                int currentLine = 0;
                if (openFile.is_open()) {
                    while (openFile.good()) {
                        currentLine++;
                        openFile.getline(line, 1500);
                        if (strstr(line, searchString) != NULL)
                            cout << path << ": " << currentLine << ": " << line << endl;
                    }
                }
                openFile.close();
            }
        }
    }

    if (closedir(directory))
    {
        cout << "Unable to close " << directory_name << endl;
        exit(1000);
    }
}

int main(int argc, char *argv[])
{
    char * name;
    cout << "Total Directories " << directories << endl;
    name = get_current_dir_name();
    cout << "Current Directory is: " << name << endl;

    sem_init(&semaphore1, 0, 0);
    strcpy(searchStringThread, argv[1]);
    int number_of_threads = atoi(argv[3]);
    pthread_t threads[number_of_threads];

    walkThroughDirectory(argv[2], argv[1]);

    pthread_mutex_lock(&mutex);
    saveDirectory.Enqueue(argv[2]);
    pthread_mutex_unlock(&mutex);
    sem_post(&semaphore1);

    for (int i = 0; i < number_of_threads; i++)
    {
        pthread_create(&threads[i], NULL, threadHandler, NULL);
    }
    for (int j = 0; j < number_of_threads; j++)
    {
        pthread_join(threads[j], NULL);
    }

    while (saveDirectory.isEmpty())
    {
        cout << "Queue is Empty" << endl;
        cout << "Exiting" << endl;
        exit(10000);
    }

    free(name);
    cout << "Total Directories " << directories << endl;
    return 0;
}
There's a simple bug where you lock a mutex twice instead of unlocking it when you're done:
pthread_mutex_lock(&mutex);
saveDirectory.Enqueue(path);
pthread_mutex_lock(&mutex);
should be:
pthread_mutex_lock(&mutex);
saveDirectory.Enqueue(path);
pthread_mutex_unlock(&mutex);
Note: this isn't to say that there aren't other problems - just that this is probably your immediate problem.
The biggest problem is that it looks like you put directories on the saveDirectory queue (so another thread can pull them off to work on), then go ahead and process that directory recursively in the thread that just put it on the queue. I think you'll need to give some more thought to how the work will be divided among the threads.
A couple of additional, more minor comments:
you might want to consider using std::string if that's permitted. It should make some of your string handling simpler (you leak the memory returned from directoryQueue::Dequeue(), for example); see the sketch after this list
if the primary reason for the existence of the directoryQueue class is to hold work items for multiple threads, then maybe it should manage its own mutex so callers don't need to deal with that complexity
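A minimal sketch of those two suggestions combined (hypothetical code, not a drop-in for the original program): a queue that stores std::string and manages its own mutex, so Dequeue returns by value and nothing has to be new[]'d or freed by the caller.
#include <mutex>
#include <queue>
#include <string>

class directoryQueue
{
public:
    void Enqueue(const std::string& path)
    {
        std::lock_guard<std::mutex> lock(mut);
        paths.push(path);
    }

    // Returns true and fills 'out' if something was queued, false otherwise.
    bool TryDequeue(std::string& out)
    {
        std::lock_guard<std::mutex> lock(mut);
        if (paths.empty())
            return false;
        out = paths.front();
        paths.pop();
        return true;
    }

private:
    std::queue<std::string> paths;
    std::mutex mut;  // the queue manages its own locking
};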
