I have a really simple program that works with clang 3.3 under OS X. However, if I try to run the same program under Linux it fails. Has anyone got std::async to work with clang 3.3 under Linux (CentOS)?
#include <iostream>
#include <future>
#include <thread>
int main() {
    // future from a packaged_task
    std::packaged_task<int()> task([]() {
        return 7;
    }); // wrap the function
    std::future<int> f1 = task.get_future(); // get a future
    std::thread(std::move(task)).detach();   // launch on a thread

    // future from an async()
    std::future<int> f2 = std::async(std::launch::async, []() {
        return 8;
    });

    // future from a promise
    std::promise<int> p;
    std::future<int> f3 = p.get_future();
    std::thread([](std::promise<int>& p) {
            p.set_value(9);
        },
        std::ref(p)).detach();

    std::cout << "Waiting..." << std::flush;
    f1.wait();
    f2.wait();
    f3.wait();
    std::cout << "Done!\nResults are: " << f1.get() << ' ' << f2.get() << ' '
              << f3.get() << '\n';
}
The above example works with trunk r198686 when I compile libc++ against libc++abi. However, I have now encountered another problem:
#include <iostream>
#include <vector>
#include <exception>
int main () {
    std::vector<int> foo;
    try {
        foo.at(1);
    }
    catch (std::exception& e) {
        std::cerr << "exception caught: " << e.what() << '\n';
    }
    std::cout << "Works" << '\n';
    return 0;
}
The example code above generates the following expected output under OS X:
exception caught: vector
Works
Under Linux I get the following output:
exception caught: Segmentation fault
I have debugged the code, and the segmentation fault occurs inside the destructor of logic_error (stdexcept.cpp, line 137). Does anyone have any suggestions?
BTW: it's no longer possible to compile libc++ using the libsupc++ method.
I have actually got everything to work. The above problem occurs with r198686. I checked out the same revision as @BenPope, and then everything works as expected.
Thanks,
Patrik
#include <iostream>
#include <thread>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

void handler(int sig){
    std::cout << "handler" << std::endl;
}

void func() {
    sleep(100);
    perror("sleep err:");
}

int main(void) {
    signal(SIGINT, handler);
    std::thread t(func);
    pthread_kill(t.native_handle(), SIGINT);
    perror("kill err:");
    t.join();
    return 0;
}
If I put sleep() inside the main function and send a signal by pressing Ctrl+C, sleep() is interrupted and returns immediately, with perror() reporting that it was interrupted.
But with the code above, "handler" is printed from the handler function, yet sleep() does not return and the program keeps running. The output of this program is:
kill err:: Success
handler
And if I replace sleep() with recvfrom(), recvfrom() is not interrupted even when it is in the main thread.
#include <iostream>
#include <vector>
#include <string.h>
#include <stdio.h>
#include <signal.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <errno.h>
#include <unistd.h>

void SigHandler(int sig){
    std::cout << "handler" << std::endl;
}

int main(void) {
    signal(SIGINT, SigHandler);

    int bind_fd_;
    if ((bind_fd_ = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        std::cout << "socket creation failed " << strerror(errno) << std::endl;
    }

    struct sockaddr_in servaddr;
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port = htons(12345);
    if (bind(bind_fd_, reinterpret_cast<const struct sockaddr *>(&servaddr),
             sizeof(servaddr)) < 0) {
        std::cout << "socket bind failed " << strerror(errno) << std::endl;
    }

    struct sockaddr_in cliaddr;
    socklen_t cliaddr_len = sizeof(cliaddr);
    std::vector<char> buffer(10*1024*1024, 0);
    std::cout << "Wait for new request" << std::endl;

    int n = 0;
    while (n == 0) {
        std::cout << "before recvfrom" << std::endl;
        n = recvfrom(bind_fd_, buffer.data(), buffer.size(), 0,
                     reinterpret_cast<struct sockaddr *>(&cliaddr), &cliaddr_len);
        // sleep(100);
        perror("recvfrom err: ");
        std::cout << "recv " << n << " bytes from " << cliaddr.sin_port << std::endl;
    }
}
I don't know what is wrong with my code; I'm hoping for your help. Thanks.
At the time you direct the signal to the thread, that thread has not yet proceeded far enough to block in sleep(). Chances are that it has not even been scheduled for the first time. Change the code to something like
std::thread t(func);
sleep(5); // give t enough time to arrive in sleep()
pthread_kill(t.native_handle(), SIGINT);
and you'll see what you expect.
Note that using signals in a multithreaded program is not usually a good idea because certain aspects are undefined/not-so-clearly defined.
Note also that it is not correct to use iostreams inside a signal handler. Signal handlers run in a context where pretty much nothing is safe to do, much like an interrupt service routine on bare metal. See here for a thorough explanation of that matter.
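As an illustration (just a sketch, not part of the original program), a handler that stays within the async-signal-safe set could use write() instead of iostreams, since write() is on POSIX's list of async-signal-safe functions:
#include <unistd.h>

// Sketch of an async-signal-safe handler: write() may be called from a
// signal handler, whereas std::cout may not.
void handler(int /*sig*/)
{
    const char msg[] = "handler\n";
    // Return value ignored for brevity; sizeof msg - 1 drops the trailing '\0'.
    (void)write(STDOUT_FILENO, msg, sizeof msg - 1);
}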
I am trying to teach myself C++11 threading, and I would like to start a background producer thread at the beginning of the application and have it run until application exit. I would also like to have a consumer thread (which also runs for the life of the application).
A real-world example would be a producer thread listening on a COM port for incoming GPS data. Once a full message has been accumulated, it could be parsed to see whether it is a message of interest, converted into a string (say), and 'delivered back' to be consumed (to update the current location, for example).
My issue is that I haven't been able to figure out how to do this without blocking the rest of the application when I join() on the consumer thread.
Here is my very simplified example that hopefully shows my issues:
#include <QCoreApplication>
#include <QDebug>
#include <thread>
#include <atomic>
#include <iostream>
#include <queue>
#include <mutex>
#include <chrono>
#include "threadsafequeuetwo.h"
ThreadSafeQueueTwo<int> goods;
std::mutex mainMutex;
std::atomic<bool> isApplicationRunning{false};

void theProducer ()
{
    std::atomic<int> itr{0};
    while(isApplicationRunning)
    {
        // Simulate this taking some time...
        std::this_thread::sleep_for(std::chrono::milliseconds(60));

        // Push the "produced" value onto the queue...
        goods.push(++itr);

        // Diagnostic printout only...
        if ((itr % 10) == 0)
        {
            std::unique_lock<std::mutex> lock(mainMutex);
            std::cout << "PUSH " << itr << " on thread ID: "
                      << std::this_thread::get_id() << std::endl;
        }

        // Thread ending logic.
        if (itr > 100) isApplicationRunning = false;
    }
}

void theConsumer ()
{
    while(isApplicationRunning || !goods.empty())
    {
        int val;

        // Wait on new values, and 'pop' when available...
        goods.waitAndPop(val);

        // Here, we would 'do something' with the new values...
        // Simulate this taking some time...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));

        // Diagnostic printout only...
        if ((val % 10) == 0)
        {
            std::unique_lock<std::mutex> lock(mainMutex);
            std::cout << "POP " << val << " on thread ID: "
                      << std::this_thread::get_id() << std::endl;
        }
    }
}

int main(int argc, char *argv[])
{
    std::cout << "MAIN running on thread ID: "
              << std::this_thread::get_id() << std::endl;

    // This variable gets set to true at startup, and
    // would only get set to false when the application
    // wants to exit.
    isApplicationRunning = true;

    std::thread producerThread (theProducer);
    std::thread consumerThread (theConsumer);

    producerThread.detach();
    consumerThread.join(); // BLOCKS!!! - how to get around this???

    std::cout << "MAIN ending on thread ID: "
              << std::this_thread::get_id() << std::endl;
}
The ThreadSafeQueueTwo class is the thread-safe queue implementation taken almost exactly as-is from the "C++ Concurrency in Action" book. It seems to work just fine. Here it is, if anybody is interested:
#ifndef THREADSAFEQUEUETWO_H
#define THREADSAFEQUEUETWO_H
#include <queue>
#include <memory>
#include <mutex>
#include <condition_variable>
template<typename T>
class ThreadSafeQueueTwo
{
public:
    ThreadSafeQueueTwo()
    {}

    ThreadSafeQueueTwo(ThreadSafeQueueTwo const& rhs)
    {
        // Lock the source's mutex so the copy is made safely.
        std::lock_guard<std::mutex> lock(rhs.myMutex);
        myQueue = rhs.myQueue;
    }

    void push(T newValue)
    {
        std::lock_guard<std::mutex> lock(myMutex);
        myQueue.push(newValue);
        myCondVar.notify_one();
    }

    void waitAndPop(T& value)
    {
        std::unique_lock<std::mutex> lock(myMutex);
        myCondVar.wait(lock, [this]{ return !myQueue.empty(); });
        value = myQueue.front();
        myQueue.pop();
    }

    std::shared_ptr<T> waitAndPop()
    {
        std::unique_lock<std::mutex> lock(myMutex);
        myCondVar.wait(lock, [this]{ return !myQueue.empty(); });
        std::shared_ptr<T> sharedPtrToT (std::make_shared<T>(myQueue.front()));
        myQueue.pop();
        return sharedPtrToT;
    }

    bool tryPop(T& value)
    {
        std::lock_guard<std::mutex> lock(myMutex);
        if (myQueue.empty())
            return false;
        value = myQueue.front();
        myQueue.pop();
        return true;
    }

    std::shared_ptr<T> tryPop()
    {
        std::lock_guard<std::mutex> lock(myMutex);
        if (myQueue.empty())
            return std::shared_ptr<T>();
        std::shared_ptr<T> sharedPtrToT (std::make_shared<T>(myQueue.front()));
        myQueue.pop();
        return sharedPtrToT;
    }

    bool empty()
    {
        std::lock_guard<std::mutex> lock(myMutex);
        return myQueue.empty();
    }

private:
    mutable std::mutex myMutex;
    std::queue<T> myQueue;
    std::condition_variable myCondVar;
};
#endif // THREADSAFEQUEUETWO_H
I know there are obvious issues with my example, but my main question is: how would I run something like this in the background without blocking the main thread?
Perhaps an even better way of framing this: is there a way that, every time the producer has 'produced' some new data, I could simply call a method on the main thread, passing in the new data? This would be similar to queued signals/slots in Qt, which I am a big fan of.
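One rough idea I have been toying with (only a sketch, reusing the goods queue, the isApplicationRunning flag and theProducer from the example above) is to let main() itself drain the queue with tryPop(), so that nothing has to be join()ed until shutdown. I am not sure whether polling like this is considered good practice, though.
int main()
{
    isApplicationRunning = true;
    std::thread producerThread(theProducer);

    // Main thread acts as the consumer: poll the queue instead of blocking in join().
    while (isApplicationRunning || !goods.empty())
    {
        int val;
        if (goods.tryPop(val))
        {
            // 'Consume' the value on the main thread (update current location, etc.).
            std::cout << "POP " << val << " on main thread" << std::endl;
        }
        else
        {
            // Nothing queued right now; the rest of the application could run here.
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }

    producerThread.join(); // the producer has already set isApplicationRunning = false
}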
I'm trying to draw offscreen with OpenGL. For this I use EGL to initialize a pbuffer surface, and then draw to it, reading the results back with glReadPixels. But the following program gives me garbage on different (Mesa-based Intel on Linux) GPUs. Namely, on Atom N550 I get zeros, while on Xeon E3-1200 v3 I have 70 00 07 44 instead of the expected 40 80 bf ff.
With LIBGL_ALWAYS_SOFTWARE=1 environment variable set, I get the expected results. Also, if I comment out the line with eglBindAPI, I get good result on Xeon, but still zeros on Atom.
Here's my program:
#include <EGL/egl.h>
#include <GL/gl.h>
#include <iostream>
#include <iomanip>
#include <string>
#include <cstring>
int eglPrintError(std::string const& context)
{
    const GLint error=eglGetError();
    std::cerr << context << ": error 0x" << std::hex << int(error) << "\n";
    return 1;
}

bool checkError(std::string const& funcName)
{
    const GLenum error=glGetError();
    if(error!=GL_NO_ERROR)
    {
        std::cerr << funcName << ": error 0x" << std::hex << int(error) << "\n";
        return true;
    }
    return false;
}
constexpr int fbW=1, fbH=1;
bool initGL()
{
    if(!eglBindAPI(EGL_OPENGL_API)) return !eglPrintError("eglBindAPI");

    const EGLDisplay dpy=eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if(!dpy) return !eglPrintError("eglGetDisplay");
    if(!eglInitialize(dpy,nullptr,nullptr)) return !eglPrintError("eglInitialize");

    static const EGLint cfgAttribs[]={EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
                                      EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
                                      EGL_RED_SIZE, 8,
                                      EGL_GREEN_SIZE, 8,
                                      EGL_BLUE_SIZE, 8,
                                      EGL_ALPHA_SIZE, 8,
                                      EGL_NONE};
    EGLConfig cfg;
    EGLint cfgCount;
    if(!eglChooseConfig(dpy,cfgAttribs,&cfg,1,&cfgCount))
        return !eglPrintError("eglChooseConfig");
    if(cfgCount==0)
    {
        std::cerr << "Failed to get any usable EGL configs\n";
        return false;
    }

    const EGLContext context=eglCreateContext(dpy,cfg,EGL_NO_CONTEXT,NULL);
    if(!context) return !eglPrintError("eglCreateContext");

    const EGLint surfaceAttribs[]={EGL_WIDTH, fbW, EGL_HEIGHT, fbH, EGL_NONE};
    const EGLSurface surface=eglCreatePbufferSurface(dpy,cfg,surfaceAttribs);
    if(!surface) return !eglPrintError("eglCreatePbufferSurface");

    if(!eglMakeCurrent(dpy,surface,surface,context))
        return !eglPrintError("eglMakeCurrent");

    return true;
}
int main(int argc, char** argv)
{
    if(!initGL()) return 1;

    glViewport(0,0,fbW,fbH);
    glClearColor(0.25,0.5,0.75,1.);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    unsigned char data[4*fbW*fbH];
    std::memset(data,0xb7,sizeof data); // to see unchanged values
    glReadPixels(0,0,fbW,fbH,GL_RGBA,GL_UNSIGNED_BYTE,data);
    if(checkError("glReadPixels")) return 1;

    std::cout << "Data read: " << std::hex << std::setfill('0');
    for(auto datum : data)
        std::cout << std::setw(2) << +datum << " ";
    std::cout << "\n";

    return 0;
}
My question is, is there anything wrong in the above code which could lead to such behavior, or are my drivers just simply buggy?
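In case it helps with diagnosing this, here is a small helper (a sketch only, reusing eglPrintError() from above) that I can call right after eglChooseConfig() to dump the attributes of the config that was actually chosen:
// Sketch of a diagnostic helper: dump a few attributes of the EGLConfig that
// eglChooseConfig() returned, to rule out an unexpected config being picked.
void printConfigInfo(EGLDisplay dpy, EGLConfig cfg)
{
    const struct { EGLint attrib; const char* name; } attribs[]=
    {
        {EGL_RED_SIZE,        "EGL_RED_SIZE"},
        {EGL_GREEN_SIZE,      "EGL_GREEN_SIZE"},
        {EGL_BLUE_SIZE,       "EGL_BLUE_SIZE"},
        {EGL_ALPHA_SIZE,      "EGL_ALPHA_SIZE"},
        {EGL_RENDERABLE_TYPE, "EGL_RENDERABLE_TYPE"},
        {EGL_SURFACE_TYPE,    "EGL_SURFACE_TYPE"},
    };
    for(const auto& a : attribs)
    {
        EGLint value=0;
        if(eglGetConfigAttrib(dpy,cfg,a.attrib,&value))
            std::cerr << a.name << " = 0x" << std::hex << value << std::dec << "\n";
        else
            eglPrintError(a.name);
    }
}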
My colleagues and I had a strange bug in a C++ Builder program and boiled it down to the following snippet:
#include <vcl.h>
#include <iostream>
void SIDE_EFFECTS() {
    if (StrToFloat("1337")) {
        throw "abc";
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    double innocent = StrToFloat("42");
    std::cout << innocent << std::endl;
    try {
        SIDE_EFFECTS();
    } catch (...) {
    }
    std::cout << innocent << std::endl;
    return 0;
}
Expected Output:
42
42
Actual Output when compiled for 64bit/ReleaseBuild/OptimizationsON:
42
1337
Compiler (latest 10.1 Berlin version of C++ Builder):
Embarcadero C++ 7.20 for Win64 Copyright (c) 2012-2016 Embarcadero Technologies, Inc.
Embarcadero Technologies Inc. bcc64 version 3.3.1 (35759.1709ea1.58602a0) (based on LLVM 3.3.1)
The internet says [citation needed] that the bug is always in the user program and never in the compiler or standard library, so please enlighten us if/where we are doing things that are not to be done in C++ / C++ Builder.
Strictly speaking, there is nothing wrong with this code, so it has to be a compiler bug. File a bug report at Quality Portal.
That being said, you should generally stay away from using catch (...). If you are going to catch an exception at all, at least catch what you are expecting and willing to handle:
catch (const char *)
Let anything unexpected pass through and be handled higher up the caller chain.
I would not recommend throwing a string literal directly. It is better to wrap it in a std::runtime_error- or System::Sysutils::Exception-based object instead. For example:
#include <vcl.h>
#include <iostream>
#include <stdexcept>

void SIDE_EFFECTS() {
    if (StrToFloat("1337")) {
        throw std::runtime_error("abc");
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    double innocent = StrToFloat("42");
    std::cout << innocent << std::endl;
    try {
        SIDE_EFFECTS();
    } catch (const std::runtime_error &) {
    }
    std::cout << innocent << std::endl;
    return 0;
}
Or, using the VCL's own Exception class instead:
#include <vcl.h>
#include <iostream>

void SIDE_EFFECTS() {
    if (StrToFloat("1337")) {
        throw Exception("abc");
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    double innocent = StrToFloat("42");
    std::cout << innocent << std::endl;
    try {
        SIDE_EFFECTS();
    } catch (const Exception &) {
    }
    std::cout << innocent << std::endl;
    return 0;
}
I am following a tutorial, and I think I have done everything right, but it keeps saying "Unsupported image format".
The code:
SDL_Texture *LoadTexture(string filePath, SDL_Renderer *renderTarget)
{
    SDL_Texture *texture = nullptr;
    SDL_Surface *surface = IMG_Load(filePath.c_str());
    if (surface == NULL)
    {
        cout << "Error: " << IMG_GetError() << endl;
    }
    else
    {
        texture = SDL_CreateTextureFromSurface(renderTarget, surface);
        if (texture == NULL)
        {
            cout << "Error: " << SDL_GetError() << endl;
        }
    }
    SDL_FreeSurface(surface);
    return texture;
}
The surface stays NULL after it receives the result of IMG_Load().
Also, my includes:
#include <iostream>
#include <SDL2/SDL.h>
#include <SDL/SDL_image.h>
And, my initialization:
SDL_Init(SDL_INIT_VIDEO);
int image_flags = IMG_INIT_PNG;
if (IMG_Init(image_flags) != image_flags)
{
    cout << "Error: " << IMG_GetError() << endl;
}
Also, in case it matters, I am doing this on Ubuntu and I recently switched from Windows, so I may not be doing something correctly with the libraries.
Edit: In case you were going to ask, I am trying to load a PNG, so I am not using a format that I haven't initialized.
Change this:
#include <SDL/SDL_image.h>
to
#include <SDL2/SDL_image.h>
You are currently using the SDL_image header from SDL 1 with SDL2; I think your problem is there. You may also need to install the SDL2_image development package (libsdl2-image-dev on Ubuntu), if it is not already installed.
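For completeness, here is a minimal, self-contained sanity check (just a sketch: "test.png" is a placeholder file name, and it assumes the program is linked against SDL2 and SDL2_image, e.g. with -lSDL2 -lSDL2_image):
#include <SDL2/SDL.h>
#include <SDL2/SDL_image.h>
#include <iostream>

int main()
{
    SDL_Init(SDL_INIT_VIDEO);

    // IMG_Init returns the subset of the requested flags it could actually enable.
    if ((IMG_Init(IMG_INIT_PNG) & IMG_INIT_PNG) == 0)
    {
        std::cout << "IMG_Init error: " << IMG_GetError() << std::endl;
    }

    SDL_Surface *surface = IMG_Load("test.png"); // placeholder file name
    if (surface == nullptr)
    {
        std::cout << "IMG_Load error: " << IMG_GetError() << std::endl;
    }
    else
    {
        SDL_FreeSurface(surface);
    }

    IMG_Quit();
    SDL_Quit();
    return 0;
}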