I'm trying to create a GLX context, attach it to an X Window, detach and destroy it again, then create another GLX context with a different Visual and attach it to the same window.
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <stdlib.h>
#include <stdio.h>
// Descriptions for the visuals to try - if both are equal, the example works
static int attr_sets[][3] = {
{ GLX_RGBA, GLX_DOUBLEBUFFER, None },
{ GLX_RGBA, None }
};
Display *dpy;
XVisualInfo *vi;
GLXContext cxt;
Window wnd;
size_t i;
void fail(const char *m) { fprintf(stderr, "fail: %s #%lu\n", m, i+1); abort(); }
int main(void) {
dpy = XOpenDisplay(NULL);
wnd = XCreateSimpleWindow(dpy, RootWindow(dpy, 0), 0, 0, 1, 1, 1, 0, 0);
for (i = 0; i < 2; ++i) {
if (!(vi = glXChooseVisual(dpy, 0, attr_sets[i]))) fail("choose");
if (!(cxt = glXCreateContext(dpy, vi, None, True))) fail("create");
XFree(vi);
if (!glXMakeCurrent(dpy, wnd, cxt)) fail("attach");
if (!glXMakeCurrent(dpy, wnd, 0)) fail("detach");
glXDestroyContext(dpy, cxt);
}
XDestroyWindow(dpy, wnd);
XCloseDisplay(dpy);
return 0;
}
This example works on Mesa 10.5.2 with Intel graphics but fails on AMD fglrx 12.104 when the second context is attached (fail: attach #2).
What is the reason for this error? Is this forbidden by specification or is it a driver error?
If you look at the definition of XCreateSimpleWindow you'll see that it's actually just a wrapper around XCreateWindow. XCreateWindow in turn will use the visual of its parent.
Now X11 visuals are only half the story. When you attach an OpenGL context to a Drawable for the first time, the visual (and for the more advanced features also its FBConfig) of that Drawable may become refined, so that later on only OpenGL contexts compatible with that configuration can be attached.
In short, once a Drawable's Visual/FBConfig has been pinned down, only OpenGL contexts compatible with it can be attached. See the errors defined for glXMakeCurrent, notably:
BadMatch is generated if drawable was not created with the same X
screen and visual as ctx. It is also generated if drawable is None and
ctx is not NULL.
Normally when using GLX you'd use glXCreateWindow to create an OpenGL-exclusive subwindow inside your main window, whose Visual/FBConfig you can set without affecting your main window.
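A minimal sketch of that approach, assuming a GLX 1.3 style setup (the FBConfig attributes and window size are placeholders and error checking is omitted): pick an FBConfig, create a child X window with the matching Visual, wrap it in a GLXWindow and bind the context to that instead of the top-level window.
/* Assumes dpy and the parent window wnd from the question's code. */
int fb_attribs[] = { GLX_RENDER_TYPE, GLX_RGBA_BIT, GLX_DOUBLEBUFFER, True, None };
int n = 0;
GLXFBConfig *fbc = glXChooseFBConfig(dpy, 0, fb_attribs, &n);
XVisualInfo *fvi = glXGetVisualFromFBConfig(dpy, fbc[0]);
XSetWindowAttributes swa;
swa.colormap = XCreateColormap(dpy, wnd, fvi->visual, AllocNone);
swa.border_pixel = 0;
/* Child window created with the Visual that matches the FBConfig. */
Window sub = XCreateWindow(dpy, wnd, 0, 0, 1, 1, 0, fvi->depth, InputOutput,
                           fvi->visual, CWColormap | CWBorderPixel, &swa);
XMapWindow(dpy, sub);
GLXWindow glxwnd = glXCreateWindow(dpy, fbc[0], sub, NULL);
GLXContext ctx2 = glXCreateNewContext(dpy, fbc[0], GLX_RGBA_TYPE, NULL, True);
glXMakeContextCurrent(dpy, glxwnd, glxwnd, ctx2);
XFree(fvi);
XFree(fbc);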
I'm using GTK 4 (version 4.8.2-1) with g++ on mingw-w64, running Windows 11. I'm creating an app with a movable widget and noticed in Task Manager that the app rapidly eats up memory whenever a drag action is performed. Below is a minimal working example that produces this issue on my machine. Obviously this code doesn't actually do anything useful, but dragging the label repeatedly (on my machine, about 20 times) will cause the app to crash. Am I misunderstanding the API in some fundamental way and inadvertently causing this? Or is this an issue with GTK? mingw?
#include <gtk/gtk.h>
static GdkContentProvider* on_drag_prepare(GtkDragSource *source, double x, double y, GtkWidget* label) {
GValue* a = new GValue;
(*a) = G_VALUE_INIT;
g_value_init(a, G_TYPE_INT);
g_value_set_int(a, 1);
// The content is basically a single integer.
return gdk_content_provider_new_for_value(a);
}
static void activate( GtkApplication *app, gpointer user_data) {
// window
GtkWidget* window = gtk_application_window_new (app);
gtk_window_set_title (GTK_WINDOW (window), "Drag Memory Leak");
gtk_window_set_default_size (GTK_WINDOW (window), 400, 400);
// label to drag
GtkWidget* label = gtk_label_new("Drag Me");
gtk_window_set_child(GTK_WINDOW (window), label);
// setting up drag callback
GtkDragSource *drag_source = gtk_drag_source_new();
gtk_drag_source_set_actions(drag_source, GDK_ACTION_MOVE);
g_signal_connect (drag_source, "prepare", G_CALLBACK (on_drag_prepare), label);
gtk_widget_add_controller (GTK_WIDGET (label), GTK_EVENT_CONTROLLER (drag_source));
gtk_window_present (GTK_WINDOW (window));
}
int main( int argc, char **argv) {
GtkApplication *app;
int status;
app = gtk_application_new("org.gtk.example", G_APPLICATION_DEFAULT_FLAGS);
g_signal_connect (app, "activate", G_CALLBACK (activate), NULL);
status = g_application_run (G_APPLICATION (app), argc, argv);
g_object_unref (app);
return status;
}
I expected that when I compiled and ran this program, I'd not have a memory leak when dragging a widget repeatedly.
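For what it's worth, gdk_content_provider_new_for_value() takes its own copy of the GValue, so the prepare handler does not need to new one per drag. A minimal variant using a stack-allocated value (this removes the per-drag allocation; whether it also avoids the crash on mingw is only an assumption):
static GdkContentProvider* on_drag_prepare(GtkDragSource *source, double x, double y, GtkWidget* label) {
    // Stack-allocated value; the provider stores its own copy.
    GValue v = G_VALUE_INIT;
    g_value_init(&v, G_TYPE_INT);
    g_value_set_int(&v, 1);
    GdkContentProvider *provider = gdk_content_provider_new_for_value(&v);
    g_value_unset(&v);
    return provider;
}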
We have multiple printing applications that draw text into a GDI DC (an enhanced metafile). When these painting actions (in short, something like CreateFont(), SelectFont(), DrawText(), GetObject(), ... DeselectFont(), DeleteFont()) are done in threads, the application crashes very soon in DeleteObject() of a font handle. If the threads are synchronized, it does not happen. Under Windows 10 there is no problem at all.
Reproducing this with simple code is not trivial; our code is a little more complex (querying the LOGFONT, querying the current object, ... to lay out the page to paint into, including word break etc.), and a simple multithreaded sample does not show this behaviour. It must be an unfortunate combination of the font APIs (or a combination with other GDI object APIs).
The stack trace of the crash is always the same: a corrupted heap reported from the DeleteObject API:
ntdll.dll!_RtlReportCriticalFailure@12() Unknown
ntdll.dll!_RtlpReportHeapFailure@4() Unknown
ntdll.dll!_RtlpHpHeapHandleError@12() Unknown
ntdll.dll!_RtlpLogHeapFailure@24() Unknown
ntdll.dll!_RtlpFreeHeapInternal@20() Unknown
ntdll.dll!RtlFreeHeap() Unknown
gdi32full.dll!_vFreeCFONTCrit@4() Unknown
gdi32full.dll!_vDeleteLOCALFONT@4() Unknown
gdi32.dll!_InternalDeleteObject@4() Unknown
gdi32.dll!_DeleteObject@4() Unknown
I'm writing it here in the hope of finding someone who has the same problem - or to be found by someone looking for others (like me here) ;)
OK, the culprit for our case of printing into a metafile DC has been found: the APIs GetTextExtentPoint() and its alias GetTextExtentPoint32() are not thread-safe on Windows 11 and corrupt the GDI heap if GDI text operations are used by multiple threads.
More findings:
DC is a metafile DC:
heap becomes corrupted if GetTextExtentPoint() is being used
everything works without this API
DC is a Window DC:
the code always hangs in an endless loop in ExtTextOut() or GetTextExtentPoint() (if opted in) in the application's painting loop. BTW: even in Windows 10...! Not a deadlock, but full processor load (on at least one of the CPUs, so there is some kind of synchronization)
The code is attached; you may play around with the macros SHOW_ERROR and USE_METAFILE:
#include "stdafx.h"
#include <windows.h>
#include <windowsx.h>
#include <process.h>
#include <assert.h>
#define SHOW_ERROR 1
#define USE_METAFILE 0
#define sizeofTSTR(b) (sizeof(b)/sizeof(TCHAR))
const int THREADCOUNT = 20;
struct scThreadData
{
public:
volatile LONG* _pnFinished;
TCHAR _szFilename[MAX_PATH];
DWORD _dwFileSize;
};
void _cdecl g_ThreadFunction(void* pParams)
{
scThreadData* pThreadData = reinterpret_cast<scThreadData*>(pParams);
PRINTDLG pd = {0};
HDC hDC = NULL;
::Sleep(rand() % 1000);
printf("start %d\n", ::GetCurrentThreadId());
#if USE_METAFILE
pd.lStructSize = sizeof(pd);
pd.Flags = PD_RETURNDC | PD_RETURNDEFAULT;
::PrintDlg(&pd);
RECT rcPage = {0,0,10000,10000};
hDC = ::CreateEnhMetaFile(pd.hDC, pThreadData->_szFilename, &rcPage, _T("Hallo"));
#else
hDC = ::GetDC(NULL);
#endif
for (int i = 0; i < 20000; ++i)
{
HFONT newFont = ::CreateFont(-100, 0, 0, 0, 0, 0, 0, 0, DEFAULT_CHARSET, OUT_TT_PRECIS, CLIP_DEFAULT_PRECIS, ANTIALIASED_QUALITY, 0, L"Arial");
HFONT oldFont = SelectFont(hDC, newFont);
::ExtTextOut(hDC, 0, 0, 0, NULL, _T("x"), 1, NULL);
#if SHOW_ERROR
SIZE sz = {};
::GetTextExtentPoint(hDC, L"Hallo", 5, &sz); // <<-- causes GDI heap to be corrupted
#endif
SelectFont(hDC, oldFont);
::DeleteFont(newFont);
}
#if USE_METAFILE
::DeleteEnhMetaFile(::CloseEnhMetaFile(hDC));
::DeleteDC(pd.hDC);
#else
::ReleaseDC(NULL, hDC); // a DC from GetDC() must be released, not deleted
#endif
::DeleteFile(pThreadData->_szFilename);
printf("end %d\n", ::GetCurrentThreadId());
// done
::InterlockedIncrement(pThreadData->_pnFinished);
}
int _tmain(int argc, _TCHAR* argv[])
{
volatile LONG nFinished(0);
scThreadData TD[THREADCOUNT];
TCHAR szUserName[30];
TCHAR szComputerName[30];
DWORD dwLen;
dwLen = sizeofTSTR(szUserName);
::GetUserName(szUserName,&dwLen);
dwLen = sizeofTSTR(szComputerName);
::GetComputerName(szComputerName,&dwLen);
for (int nThread = 0; nThread < THREADCOUNT; ++nThread)
{
TD[nThread]._pnFinished = &nFinished;
_stprintf_s(TD[nThread]._szFilename,MAX_PATH,_T("test-%s-%d.emf"),szUserName,nThread);
_beginthread(g_ThreadFunction,10000,(void*)&TD[nThread]);
::Sleep(200);
}
Sleep(1000);
while (nFinished < THREADCOUNT)
{
::Sleep(100);
}
return 0;
}
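Assuming the corruption really is confined to GetTextExtentPoint(), one possible workaround (just a sketch, not an official fix; the wrapper name and the lock are made up for illustration) is to serialize that one call across threads behind a process-wide lock and route all measuring calls through the wrapper:
// Process-wide lock; call InitializeCriticalSection(&g_textExtentLock)
// once before the worker threads are started.
static CRITICAL_SECTION g_textExtentLock;

BOOL GetTextExtentPointSerialized(HDC hDC, LPCTSTR pszText, int nLen, SIZE* pSize)
{
    ::EnterCriticalSection(&g_textExtentLock);
    BOOL bOk = ::GetTextExtentPoint(hDC, pszText, nLen, pSize);
    ::LeaveCriticalSection(&g_textExtentLock);
    return bOk;
}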
My plan was to create a loading thread inside of which I load resources for a game, such as 3D models, shaders, textures, etc. On the main thread I perform all the game logic and rendering. Then, on my loading thread, I create an sf::Context (SFML shared OpenGL context) which is used only for loading.
This is working for loading shaders. However, the X server sometimes crashes when attempting to load models. I have narrowed the crash down to the glBufferData() call. I have checked that there is nothing wrong with the data that I am sending.
Is it possible to call glBufferData() from a second thread using a second OpenGL context? If not, why is it possible to load shaders in the second context? If it is possible, what could be going wrong?
#include <iostream>
#include <thread>
#include <GL/glew.h>
#include <SFML/OpenGL.hpp>
#include <SFML/Graphics.hpp>
#include <X11/Xlib.h>
class ResourceLoader
{
public:
void Run()
{
sf::Context loadingContext;
loadingContext.setActive(true);
// Some test data.
float* testData = new float[3000];
for (unsigned int i = 0; i < 3000; ++i)
{
testData[i] = 0.0f;
}
// Create lots of VBOs containing our
// test data.
for (unsigned int i = 0; i < 1000; ++i)
{
// Create VBO.
GLuint testVBO = 0;
glGenBuffers(1, &testVBO);
std::cout << "Buffer ID: " << testVBO << std::endl;
// Bind VBO.
glBindBuffer(GL_ARRAY_BUFFER, testVBO);
// Crashes on this call!
glBufferData(
GL_ARRAY_BUFFER,
sizeof(float) * 3000,
&testData[0],
GL_STATIC_DRAW
);
// Unbind VBO.
glBindBuffer(GL_ARRAY_BUFFER, 0);
// Sleep for a bit.
std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
delete[] testData;
}
};
int main()
{
XInitThreads();
// Create the main window.
sf::RenderWindow window(sf::VideoMode(800, 600), "SFML window", sf::Style::Default, sf::ContextSettings(32));
window.setVerticalSyncEnabled(true);
// Make it the active window for OpenGL calls.
window.setActive();
// Configure the OpenGL viewport to be the same size as the window.
glViewport(0, 0, window.getSize().x, window.getSize().y);
// Initialize GLEW.
glewExperimental = GL_TRUE; // OSX fix.
if (glewInit() != GLEW_OK)
{
window.close();
exit(1); // failure
}
// Enable Z-buffer reading and writing.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
// Create the resource loader.
ResourceLoader loader;
// Run the resource loader in a separate thread.
std::thread loaderThread(&ResourceLoader::Run, &loader);
// Detach the loading thread, allowing it to run
// in the background.
loaderThread.detach();
// Game loop.
while (window.isOpen())
{
// Event loop.
sf::Event event;
while (window.pollEvent(event))
{
if (event.type == sf::Event::Closed)
{
window.close();
}
}
// Clear screen.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Switch to SFML's OpenGL state.
window.pushGLStates();
{
// Perform SFML drawing here...
sf::RectangleShape rect(sf::Vector2f(100.0f, 100.0f));
rect.setPosition(100.0f, 100.0f);
rect.setFillColor(sf::Color(255, 255, 255));
window.draw(rect);
}
// Switch back to our game rendering OpenGL state.
window.popGLStates();
// Perform OpenGL drawing here...
// Display the rendered frame.
window.display();
}
return 0;
}
I think the problem is that you call glBufferData before setting up GLEW, so the function pointer to glBufferData is not initialized. Please try this ordering for initializing your program:
Initialize the RenderWindow
Initialize GLEW
Start threads, and create additional contexts as required!
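A sketch of what that can look like inside the loader thread itself; calling glewInit() again after the loading context has been made active is harmless (GLEW just re-resolves its global function pointers) and guarantees the entry points are valid before glBufferData() is used. This is an illustration of the ordering, not SFML-specific API advice:
void Run()
{
    sf::Context loadingContext;
    loadingContext.setActive(true);

    // Resolve the GL entry points with the loading context current.
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK)
    {
        std::cerr << "glewInit failed in loader thread" << std::endl;
        return;
    }

    // ... glGenBuffers / glBufferData exactly as in the question ...
}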
I have an application which accesses an OpenGL context. I run it on two OSs:
1. Kubuntu 13.04
2. Ubuntu 12.04
I am experiencing the following issue: on OS 1 it takes around 60 ms to set up the context, while on OS 2 it takes 10 times more. Both OSs use Nvidia GPUs with driver version 319. It also seems like OpenGL API calls are slower in general on OS 2. The contexts are offscreen. Currently I have no clue what could cause it. My question is: what are possible sources of such an overhead? X11 setup? Or maybe something at the OS level?
Another difference is that OS 1 uses an Nvidia GTX 680 while OS 2 uses an Nvidia GRID K1 card. Also, OS 2 resides on a server and the latency tests are run locally on that machine.
UPDATE:
This is the part which causes most of the overhead:
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);
typedef Bool (*glXMakeContextCurrentARBProc)(Display*, GLXDrawable, GLXDrawable, GLXContext);
static glXCreateContextAttribsARBProc glXCreateContextAttribsARB = 0;
static glXMakeContextCurrentARBProc glXMakeContextCurrentARB = 0;
int main(int argc, const char* argv[]){
static int visual_attribs[] = {
None
};
int context_attribs[] = {
GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
GLX_CONTEXT_MINOR_VERSION_ARB, 0,
None
};
Display* dpy = NULL;
int fbcount = 0;
GLXFBConfig* fbc = NULL;
GLXContext ctx;
GLXPbuffer pbuf;
/* open display */
if ( ! (dpy = XOpenDisplay(0)) ){
fprintf(stderr, "Failed to open display\n");
exit(1);
}
/* get framebuffer configs, any is usable (might want to add proper attribs) */
if ( !(fbc = glXChooseFBConfig(dpy, DefaultScreen(dpy), visual_attribs, &fbcount) ) ){
fprintf(stderr, "Failed to get FBConfig\n");
exit(1);
}
/* get the required extensions */
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc)glXGetProcAddressARB( (const GLubyte *) "glXCreateContextAttribsARB");
glXMakeContextCurrentARB = (glXMakeContextCurrentARBProc)glXGetProcAddressARB( (const GLubyte *) "glXMakeContextCurrent");
if ( !(glXCreateContextAttribsARB && glXMakeContextCurrentARB) ){
fprintf(stderr, "missing support for GLX_ARB_create_context\n");
XFree(fbc);
exit(1);
}
/* create a context using glXCreateContextAttribsARB */
if ( !( ctx = glXCreateContextAttribsARB(dpy, fbc[0], 0, True, context_attribs)) ){
fprintf(stderr, "Failed to create opengl context\n");
XFree(fbc);
exit(1);
}
/* create temporary pbuffer */
int pbuffer_attribs[] = {
GLX_PBUFFER_WIDTH, 800,
GLX_PBUFFER_HEIGHT, 600,
None
};
pbuf = glXCreatePbuffer(dpy, fbc[0], pbuffer_attribs);
XFree(fbc);
XSync(dpy, False);
/* try to make it the current context */
if ( !glXMakeContextCurrent(dpy, pbuf, pbuf, ctx) ){
/* some drivers do not support a context without a default framebuffer,
 * so fall back on using the default root window.
 */
if ( !glXMakeContextCurrent(dpy, DefaultRootWindow(dpy), DefaultRootWindow(dpy), ctx) ){
fprintf(stderr, "failed to make current\n");
exit(1);
}
}
/* try it out */
printf("vendor: %s\n", (const char*)glGetString(GL_VENDOR));
return 0;
}
Specifically, the line:
pbuf = glXCreatePbuffer(dpy, fbc[0], pbuffer_attribs);
where the dummy pbuffer is created is the slowest. While the rest of the function calls take on average 2-4 ms, this call takes 40 ms on OS 1. Now, on OS 2 (which is slow) the pbuffer creation takes 700 ms! I hope my problem looks clearer now.
Are you absolutely sure "OS2" has correctly set up drivers and isn't falling back on SW OpenGL (Mesa) rendering? What framerate does glxgears report on each system?
I note Ubuntu 12.04 was released April 2012, while I believe NVidia's "GRID" tech wasn't even announced until GTC May 2012, and I think cards didn't turn up until 2013 (see the relevant Nvidia press releases). Therefore it seems very unlikely that Nvidia's drivers as supplied with Ubuntu 12.04 support the GRID card (unless you've made some effort to upgrade using more recent driver releases from Nvidia?).
You may be able to check the list of supported hardware in /usr/share/doc/nvidia-glx/README.txt.gz's Appendix A "Supported NVIDIA GPU Products" (at least that's where this useful information lives on my Debian machines).
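As a quick programmatic sanity check (a sketch that could be dropped into the program above right after glXMakeContextCurrent() succeeds), query whether the context is direct and what renderer string the driver reports; an indirect context or a Mesa/llvmpipe renderer string would point at a software fallback:
/* Assumes dpy and ctx from the code above, with the context current. */
printf("direct rendering: %s\n", glXIsDirect(dpy, ctx) ? "yes" : "no");
printf("renderer: %s\n", (const char*)glGetString(GL_RENDERER));
printf("version: %s\n", (const char*)glGetString(GL_VERSION));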
I had perfectly working OpenCV code (using the function cvCaptureFromCAM(0)). But when I modified it to run in a separate thread, I get a "Video Source" selection dialog box asking me to choose the webcam. Even though I select a cam, cvCaptureFromCAM(0) appears to return null. I also tried passing the values 0, -1, 1, and CV_CAP_ANY to this function. I suspect that this dialog box causes the issue. Is there any way to avoid this, or does anyone have another opinion?
I've followed the following posts when debugging:
cvCreateCameraCapture returns null
OpenCV cvCaptureFromCAM returns zero
EDIT
Code structure
//header includes
CvCapture* capture =NULL;
IplImage* frame = NULL;
// forward declarations
DWORD WINAPI startOCV(LPVOID vpParam);
void initGL(int argc, char** argv);
int main(int argc, char** argv){
DWORD qThreadID;
HANDLE ocvThread = CreateThread(0,0,startOCV, NULL,0, &qThreadID);
initGL(argc, argv);
glutMainLoop();
CloseHandle(ocvThread);
return 0;
}
void initGL(int argc, char** argv){
//Initialize GLUT
//Create the window
//etc
}
DWORD WINAPI startOCV(LPVOID vpParam){
//capture = cvCaptureFromCAM(0); //0 // CV_CAP_ANY
if ((capture = cvCaptureFromCAM(1)) == NULL){ // same as simply using assert(capture)
cerr << "!!! ERROR: vCaptureFromCAM No camera found\n";
return -1;
}
frame = cvQueryFrame(capture);
return 0;
}
//other GL functions
Thanks.
Since this is a problem that only happens on Windows, an easy fix is to leave cvCaptureFromCAM(0) on the main() thread and then do the image processing stuff on a separate thread, as you intended originally.
Just declare CvCapture* capture = NULL; as a global variable so all your threads can access it.
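A rough sketch of that arrangement (the header path, the thread function name and the processing loop are placeholders, and initGL()/glutMainLoop() are assumed from your code):
#include <windows.h>
#include <opencv/highgui.h>   // adjust to your OpenCV version

CvCapture* capture = NULL;     // global, so the worker thread can use it

DWORD WINAPI processingThread(LPVOID)
{
    while (capture != NULL)
    {
        IplImage* frame = cvQueryFrame(capture);   // grab a frame
        if (frame == NULL) break;
        // ... image processing / hand the frame over to the GL side ...
    }
    return 0;
}

int main(int argc, char** argv)
{
    capture = cvCaptureFromCAM(0);   // opened on the main thread
    if (capture == NULL) return -1;

    DWORD tid;
    HANDLE worker = CreateThread(0, 0, processingThread, NULL, 0, &tid);

    initGL(argc, argv);              // GLUT setup as in the question
    glutMainLoop();

    CloseHandle(worker);
    cvReleaseCapture(&capture);
    return 0;
}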
Solved. I couldn't get rid of the above-mentioned dialog box, but I avoided the error by simply duplicating the line capture = cvCaptureFromCAM(0);
capture = cvCaptureFromCAM(0);
capture = cvCaptureFromCAM(0);
It was just random. I suspect it had something to do with the threading behaviour. What's your idea?
Thanks all for contributing.