As far as I know, with Xlib and multithreading a programmer has two choices:
call XInitThreads() early enough,
or use a new connection (XOpenDisplay()) per thread.
Suppose I don't like the first method with the XInitThreads() call. Why does the second one fail?
#include <X11/Xlib.h>
#include <cstdio>   // fprintf
#include <cstdlib>  // exit
#include <thread>

void startBasicWin() {
    Display *display;
    if ( (display = XOpenDisplay(NULL)) == NULL )
    {
        fprintf(stderr, "cannot connect to X server\n");
        exit(-1);
    }
    XCloseDisplay(display);
}
int main() {
    std::thread t3 = std::thread(startBasicWin);
    std::thread t4 = std::thread(startBasicWin);
    std::thread t5 = std::thread(startBasicWin);
    std::thread t6 = std::thread(startBasicWin);
    std::thread t7 = std::thread(startBasicWin);
    std::thread t8 = std::thread(startBasicWin);
    std::thread t9 = std::thread(startBasicWin);
    t3.join();
    t4.join();
    t5.join();
    t6.join();
    t7.join();
    t8.join();
    t9.join();
}
compiled with
g++ -o xlib_multi xlib_multi.cpp -lX11 -std=c++11 -pthread -g
sometimes produces output:
Segmentation fault
or
No protocol specified
cannot connect to X server :0
Could it be that I can't use XOpenDisplay() without thread synchronization? But once the X11 connections are created, could I use Xlib from multiple threads without any problems? Is that assumption correct?
Or is Xlib just buggy for multi-threading anyway?
Chances are that XOpenDisplay() uses some global variable internally that is not thread-safe, or shares data between the displays. I don't think it's wise to call XOpenDisplay() like that from within a thread; I suggest opening the displays sequentially first, then starting the threads with a Display pointer each. Or protect the code section around XOpenDisplay() (and XCloseDisplay()!) with a mutex.
Either way, the fact that there is a separate XInitThreads() call makes your assumption that everything will be fine "after" XOpenDisplay() very dangerous.
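For illustration, a minimal sketch of the mutex variant (my own sketch, not from the question; the per-thread work on the display is just a placeholder):

#include <X11/Xlib.h>
#include <cstdio>
#include <cstdlib>
#include <mutex>
#include <thread>

std::mutex xlibMutex;  // serializes XOpenDisplay()/XCloseDisplay()

void startBasicWin() {
    Display *display;
    {
        std::lock_guard<std::mutex> lock(xlibMutex);
        display = XOpenDisplay(NULL);
    }
    if (display == NULL) {
        fprintf(stderr, "cannot connect to X server\n");
        exit(-1);
    }
    // ... per-thread work on 'display' would go here ...
    {
        std::lock_guard<std::mutex> lock(xlibMutex);
        XCloseDisplay(display);
    }
}

The main() from the question can stay exactly as it is; only the connection setup and teardown are serialized.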
I read this in Advanced Programming in the UNIX Environment, 3rd edition, section 11.6.2, Deadlock Avoidance:
A thread will deadlock itself if it tries to lock the same mutex twice
To verify this, I wrote a demo:
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t mutex;

int main() {
    pthread_mutex_init(&mutex, NULL);
    pthread_mutex_lock(&mutex);
    printf("lock 1\n");
    pthread_mutex_lock(&mutex);
    printf("lock 2\n");
    pthread_mutex_unlock(&mutex);
    printf("unlock 1\n");
    pthread_mutex_unlock(&mutex);
    printf("unlock 2\n");
    pthread_mutex_destroy(&mutex);
    return 0;
}
The main thread didn't block, and the output is:
lock 1
lock 2
unlock 1
unlock 2
Why is it so?
How are you compiling this? I suspect you did not pass the -pthread option to the compiler, so pthread-related functions like the ones above remain no-ops (i.e. the real implementations are not pulled in).
I just tested your program compiled as
cc -pthread meh.c
and the result nicely hangs after "lock 1".
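A quick way to observe this (my own sketch, not from the original post) is to print the return values of the lock calls. Compiled without -pthread, so that the single-threaded stubs are in effect, both calls should return immediately; compiled with -pthread, the second printf is never reached because the thread deadlocks on itself:

#include <pthread.h>
#include <stdio.h>

int main(void) {
    pthread_mutex_t mutex;
    pthread_mutex_init(&mutex, NULL);
    printf("first lock returned %d\n", pthread_mutex_lock(&mutex));
    printf("second lock returned %d\n", pthread_mutex_lock(&mutex));  // hangs here with -pthread
    return 0;
}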
A simple test program: I expect clone() to fork a child process, and each process to run to its end.
#include <stdio.h>
#include <sched.h>
#include <unistd.h>
#include <sys/types.h>
#include <errno.h>

int f(void *arg)
{
    pid_t pid = getpid();
    printf("child pid=%d\n", pid);
}

char buf[1024];

int main()
{
    printf("before clone\n");
    int pid = clone(f, buf, CLONE_VM|CLONE_VFORK, NULL);
    if (pid == -1) {
        printf("%d\n", errno);
        return 1;
    }
    waitpid(pid, NULL, 0);
    printf("after clone\n");
    printf("father pid=%d\n", getpid());
    return 0;
}
Run it:
$g++ testClone.cpp && ./a.out
before clone
It didn't print what I expected. It seems that after clone() the program is in an unknown state and then quits. I tried gdb, and it prints:
Breakpoint 1, main () at testClone.cpp:15
(gdb) n
before clone
(gdb) n
waiting for new child: No child processes.
(gdb) n
Single stepping until exit from function clone@plt,
which has no line number information.
If I remove the waitpid line, then gdb prints another kind of weird output.
(gdb) n
before clone
(gdb) n
Detaching after fork from child process 26709.
warning: Unexpected waitpid result 000000 when waiting for vfork-done
Cannot remove breakpoints because program is no longer writable.
It might be running in another process.
Further execution is probably impossible.
0x00007fb18a446bf1 in clone () from /lib64/libc.so.6
ptrace: No such process.
Where did I go wrong in my program?
You should never call clone in a user-level program -- there are way too many restrictions on what you are allowed to do in the cloned process.
In particular, calling any libc function (such as printf) is a complete no-no (because libc doesn't know that your clone exists, and has not performed any setup for it).
As K. A. Buhr points out, you also pass too small a stack, and the wrong end of it. Your stack is also not properly aligned.
In short, even though K. A. Buhr's modification appears to work, it doesn't really.
TL;DR: clone, just don't use it.
The second argument to clone is a pointer to the child's stack. As per the manual page for clone(2):
Stacks grow downward on all processors that run Linux (except the HP PA processors), so child_stack usually points to the topmost address of the memory space set up for the child stack.
Also, 1024 bytes is a paltry amount for a stack. The following modified version of your program appears to run correctly:
// #define _GNU_SOURCE // may be needed if compiled as C instead of C++
#include <stdio.h>
#include <sched.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <errno.h>

int f(void *arg)
{
    pid_t pid = getpid();
    printf("child pid=%d\n", pid);
    return 0;
}

char buf[1024*1024]; // *** allocate more stack ***

int main()
{
    printf("before clone\n");
    int pid = clone(f, buf + sizeof(buf), CLONE_VM|CLONE_VFORK, NULL);
    // *** in previous line: pointer is to *end* of stack ***
    if (pid == -1) {
        printf("%d\n", errno);
        return 1;
    }
    waitpid(pid, NULL, 0);
    printf("after clone\n");
    printf("father pid=%d\n", getpid());
    return 0;
}
Also, @Employed Russian is right -- you probably shouldn't use clone except if you're trying to have some fun. Either fork or vfork is a more sensible interface to clone whenever it meets your needs.
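For comparison, a fork()-based version of the same test (my own sketch, not from the answers) needs no manual stack setup, and libc works normally in the child because fork() gives it its own copy-on-write address space:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    printf("before fork\n");
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                 // child
        printf("child pid=%d\n", getpid());
        return 0;
    }
    waitpid(pid, NULL, 0);          // parent waits for the child
    printf("after fork\n");
    printf("father pid=%d\n", getpid());
    return 0;
}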
I am trying to use OpenGL with a shared context (in order to share textures between windows) via the FreeGLUT library. It works fine and I can share textures, but it fails at the end of the program, or when windows are closed with the mouse.
I have created code which emulates the problem: (http://pastie.org/9437038)
// file: main.c
// compile: gcc -o test -lglut main.c
// compile: gcc -o test -lglut -DTIME_LIMIT main.c
#include "GL/freeglut.h"
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int winA, winB, winC;
    int n;

    glutInit(&argc, argv);
    glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_CONTINUE_EXECUTION);
    //glutSetOption(GLUT_RENDERING_CONTEXT, GLUT_USE_CURRENT_CONTEXT);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);

    winA = glutCreateWindow("Test A");
    glutSetOption(GLUT_RENDERING_CONTEXT, GLUT_USE_CURRENT_CONTEXT);
    winB = glutCreateWindow("Test B");
    winC = glutCreateWindow("Test C");

    printf("loop\n");
#ifdef TIME_LIMIT
    for (n = 0; n < 50; n++)
    {
        glutMainLoopEvent();
        usleep(5000);
    }
#else // TIME_LIMIT
    glutMainLoop();
#endif // TIME_LIMIT

    printf("Destroy winC\n");
    glutDestroyWindow(winC);
    printf("Destroy winB\n");
    glutDestroyWindow(winB);
    printf("Destroy winA\n");
    glutDestroyWindow(winA);

    printf("Normal end\n");
    return 0;
}
Output:
loop
X Error of failed request: GLXBadContext
Major opcode of failed request: 153 (GLX)
Minor opcode of failed request: 4 (X_GLXDestroyContext)
Serial number of failed request: 113
Current serial number in output stream: 114
Segmentation fault
output with TIME_LIMIT:
loop
Destroy winC
Destroy winB
Destroy winA
Segmentation fault
Without calling glutSetOption(GLUT_RENDERING_CONTEXT, GLUT_USE_CURRENT_CONTEXT);, it works well.
Does anybody have an idea what I am doing wrong?
The option GLUT_USE_CURRENT_CONTEXT does not create shared contexts. It just means that the same GL context is used for all windows. You only have one GL context, and you destroy it when you destroy the first window which uses it, so the other destruction calls fail. None of the GLUT implementations I'm aware of actually support GL context sharing.
GLUT_USE_CURRENT_CONTEXT is more of a hack (it is not part of the GLUT specification anyway), and not really well implemented. It could use some reference counting so that the context is not destroyed before the last window using it is destroyed, but that is simply not the case.
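For reference, real context sharing has to be requested through the window system rather than GLUT; with raw GLX the existing context is simply passed as the share-list (third) argument of glXCreateContext. A rough sketch (my own, with error handling and window creation omitted; compile with -lGL -lX11):

#include <GL/glx.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    // ctxB shares texture objects, display lists and buffer objects with ctxA,
    // yet each context can be destroyed independently -- which is exactly what
    // GLUT_USE_CURRENT_CONTEXT cannot give you.
    GLXContext ctxA = glXCreateContext(dpy, vi, NULL, True);
    GLXContext ctxB = glXCreateContext(dpy, vi, ctxA, True);

    // ... create windows, glXMakeCurrent(dpy, winA, ctxA), render, etc. ...

    glXDestroyContext(dpy, ctxB);
    glXDestroyContext(dpy, ctxA);
    XCloseDisplay(dpy);
    return 0;
}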
I am running Visual C++ 2013 and I notice that creating a thread with the std::thread class spawns two threads. Is this by design? If so, what is the reason for this?
When I use _beginthreadex() it only spawns one thread as I would expect.
#include <thread>

using namespace std;

unsigned int __stdcall Func(void *)
{
    unsigned int i = 0;
    while (i < 1000000000)
    {
        ++i;
    }
    return i;
}

int wmain()
{
    thread doStuff(Func, nullptr);
    auto id = doStuff.get_id();
    doStuff.join();
}
EDIT 1
When I put a breakpoint on doStuff.join() I see the following output. The id variable matches the 55760 thread. When I use _beginthreadex() I do not get that extra thread "ntdll.dll thread".
EDIT 2
Here is the call stack with symbols loaded.
ThreadTest.exe!wmain() Line 21
ThreadTest.exe!__tmainCRTStartup() Line 623
ThreadTest.exe!wmainCRTStartup() Line 466
kernel32.dll!@BaseThreadInitThunk@12()
ntdll.dll!___RtlUserThreadStart@8()
ntdll.dll!__RtlUserThreadStart@8()
The "Main Thread" is obvious: it's your main thread. When you create a thread, only one thread is created. The msvcr* thread belongs to the Microsoft C Runtime Library; I don't think you can control it, but don't mind it. Your code works as you expect.
I'm trying to load textures in a background thread to help speed up my application.
The stack we are using is C/C++ on Linux, compiling with gcc. We're using OpenGL, GLUT and GLEW. We have been using libSOIL for texture loading.
Ultimately, launching texture loads with libSOIL fails because it encounters a glGetString() call that causes a segfault. Trying to narrow down the problem, I wrote a very simple OpenGL application that reproduces the behavior. The code sample below shouldn't "do anything," but it also shouldn't segfault. If I knew why it does, I could in theory rework libSOIL so that it behaves in a pthreaded environment.
void *glPthreadTest( void *arg ) {
    glGetString( GL_EXTENSIONS ); // SIGSEGV
    return NULL;
}

int main( int argc, char **argv ) {
    glutInit( &argc, argv );
    glutInitDisplayMode( GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH );
    glewInit();
    glGetString( GL_EXTENSIONS ); // Does not cause SIGSEGV

    pthread_t id;
    if (pthread_create( &id, NULL, glPthreadTest, (void*)NULL ) != 0)
        fprintf( stderr, "phtread_create glPthreadTest failed.\n" );

    glutMainLoop();
    return EXIT_SUCCESS;
}
A sample stacktrace for this application from gdb looks like this:
#0 0x00000038492f86e9 in glGetString () from /usr/lib64/nvidia/libGL.so.1
No symbol table info available.
#1 0x0000000000404425 in glPthreadTest (arg=0x0) at sf.cpp:168
No locals.
#2 0x0000003148e07d15 in start_thread (arg=0x7ffff7b36700) at pthread_create.c:308
__res = <optimized out>
pd = 0x7ffff7b36700
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737349117696, -5802871742031723458, 1, 211665686528, 140737349117696, 0, 5802854601940796478,
-5829171783283899330}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
pagesize_m1 = <optimized out>
sp = <optimized out>
freesize = <optimized out>
#3 0x00000031486f246d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:114
No locals.
You'll notice I am using the nvidia libGL implementation, but this also occurs identically with the mesa libgl that Ubuntu uses for Intel HD graphics cards.
Any tips for what might be going wrong, or how to investigate further to see what's happening?
Edit: Here are the #includes and the compile string for my example test:
#include <SOIL.h>
#include <GL/glew.h>
#include <GL/freeglut.h>
#include <GL/freeglut_ext.h>
#include <signal.h>
#include <pthread.h>
#include <cstdio>
g++ -Wall -pedantic -I/usr/include/SOIL -O0 -ggdb -o sf sf.cpp -lSOIL -pthread -lGL -lGLU -lGLEW -lglut -lX11
In order for any OpenGL call to operate properly, it requires an OpenGL context. Contexts are created using a window-system binding call (like wglCreateContext or similar). After creating a context, it needs to be "made current", which means associating the context with the current thread of execution. This is accomplished with another window-system specific call (like wglMakeCurrent for Microsoft Windows, or glXMakeCurrent for X Windows). GLUT abstracts all of that complexity away from you, doing all of those operations when you call glutCreateWindow.
Now, an important rule to know is that only a single OpenGL context can be current to a thread of execution at any one time. So, in the OP's original example, if she/he could make the context current in the Pthread they created, then the context would be lost in the main thread. The way to keep all this consistent is to only use a single context in a single thread. (It's possible to have OpenGL contexts share data, but that's neither exposed by GLUT, nor possible without using the window-system context creation calls).
In your case, it's likely that GLUT doesn't allow access to what you really need (i.e., the OpenGL context), to make it current in the other thread. You'd need to create and manage OpenGL contexts yourself.
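To illustrate that last point, here is a rough sketch (my own code, not the poster's, with error handling omitted) of creating and managing GLX contexts yourself so that the worker thread has its own current context, shared with the main one, before it touches any GL call; compile with -lGL -lX11 -pthread:

#include <GL/glx.h>
#include <X11/Xlib.h>
#include <pthread.h>
#include <stdio.h>

static Display     *dpy;
static XVisualInfo *vis;
static Window       win;
static GLXContext   mainCtx;

static void *loader(void *arg)
{
    (void)arg;
    // a second context, sharing objects with mainCtx, made current in this thread
    GLXContext ctx = glXCreateContext(dpy, vis, mainCtx, True);
    glXMakeCurrent(dpy, win, ctx);
    printf("worker sees: %s\n", (const char *)glGetString(GL_VERSION)); // legal now
    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    return NULL;
}

int main(void)
{
    XInitThreads();                       // both threads talk to the X connection
    dpy = XOpenDisplay(NULL);
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    vis = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vis->screen), vis->visual, AllocNone);
    swa.border_pixel = 0;
    win = XCreateWindow(dpy, RootWindow(dpy, vis->screen), 0, 0, 64, 64, 0,
                        vis->depth, InputOutput, vis->visual,
                        CWColormap | CWBorderPixel, &swa);

    mainCtx = glXCreateContext(dpy, vis, NULL, True);
    glXMakeCurrent(dpy, win, mainCtx);
    printf("main sees:   %s\n", (const char *)glGetString(GL_VERSION));
    glXMakeCurrent(dpy, None, NULL);      // release before the worker binds the drawable

    pthread_t id;
    pthread_create(&id, NULL, loader, NULL);
    pthread_join(id, NULL);

    glXDestroyContext(dpy, mainCtx);
    XCloseDisplay(dpy);
    return 0;
}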