OpenGL EGL eglGetDisplay keeps returning EGL error 0x3008 (EGL_BAD_DISPLAY) - Linux

My Ubuntu version is 16.04. I first installed mesa-common-dev, libgl1-mesa-dev, libglm-dev, and libegl1-mesa-dev, then installed NVIDIA-Linux-x86_64-440.64.run with OpenGL support.
But when I try to run a toy example, I keep getting this error: main: Assertion `display != EGL_NO_DISPLAY' failed
/* Compile with gcc -g3 -o example example.c -lX11 -lEGL */
#include <assert.h>
#include <stdio.h>
#include <X11/Xlib.h> /* XOpenDisplay */
#include <EGL/egl.h>
#include <EGL/eglplatform.h>

void printEGLError(void);

int main(void) {
    Display* x_display = XOpenDisplay(NULL);
    EGLDisplay display = eglGetDisplay(x_display);
    // EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    assert(display != EGL_NO_DISPLAY);

    EGLint major, minor;
    eglInitialize(display, &major, &minor);

    const char *string = eglQueryString(display, EGL_CLIENT_APIS);
    assert(string);
    printf("%s\n", string);
    return 0;
}
/* Use printEGLError to show a description of the last EGL error.
   The descriptions are taken from the eglGetError manual. */
#define ERROR_DESC(...) fprintf(stderr, "%s\n", __VA_ARGS__); break

void printEGLError(void) {
    switch (eglGetError()) {
    case EGL_SUCCESS:
        ERROR_DESC("The last function succeeded without error.");
    case EGL_NOT_INITIALIZED:
        ERROR_DESC("EGL is not initialized, or could not be initialized, for the specified EGL display connection.");
    case EGL_BAD_ACCESS:
        ERROR_DESC("EGL cannot access a requested resource (for example a context is bound in another thread).");
    case EGL_BAD_ALLOC:
        ERROR_DESC("EGL failed to allocate resources for the requested operation.");
    case EGL_BAD_ATTRIBUTE:
        ERROR_DESC("An unrecognized attribute or attribute value was passed in the attribute list.");
    case EGL_BAD_CONTEXT:
        ERROR_DESC("An EGLContext argument does not name a valid EGL rendering context.");
    case EGL_BAD_CONFIG:
        ERROR_DESC("An EGLConfig argument does not name a valid EGL frame buffer configuration.");
    case EGL_BAD_CURRENT_SURFACE:
        ERROR_DESC("The current surface of the calling thread is a window, pixel buffer or pixmap that is no longer valid.");
    case EGL_BAD_DISPLAY:
        ERROR_DESC("An EGLDisplay argument does not name a valid EGL display connection.");
    case EGL_BAD_SURFACE:
        ERROR_DESC("An EGLSurface argument does not name a valid surface (window, pixel buffer or pixmap) configured for GL rendering.");
    case EGL_BAD_MATCH:
        ERROR_DESC("Arguments are inconsistent (for example, a valid context requires buffers not supplied by a valid surface).");
    case EGL_BAD_PARAMETER:
        ERROR_DESC("One or more argument values are invalid.");
    case EGL_BAD_NATIVE_PIXMAP:
        ERROR_DESC("A NativePixmapType argument does not refer to a valid native pixmap.");
    case EGL_BAD_NATIVE_WINDOW:
        ERROR_DESC("A NativeWindowType argument does not refer to a valid native window.");
    case EGL_CONTEXT_LOST:
        ERROR_DESC("A power management event has occurred. The application must destroy all contexts and reinitialise OpenGL ES state and objects to continue rendering.");
    }
}
More information: my graphics card is a Titan Xp. I tried running sudo service lightdm stop and removed all remote-desktop software I could find, but the problem still exists. Could anyone help?

For those who may be confused by this problem: just unset DISPLAY. This may save your day.
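In practice the fix is a one-liner in the shell before relaunching the program. A sketch (the explanation in the comments is an assumption about why the stale variable causes EGL_BAD_DISPLAY, not something stated in the question):

```shell
# A stale DISPLAY (left over from SSH X-forwarding or a remote-desktop
# tool) can point at a display the driver cannot open, which makes
# eglGetDisplay() fail. Clearing the variable lets the program fall
# back to the local device, e.g. via eglGetDisplay(EGL_DEFAULT_DISPLAY).
unset DISPLAY
echo "DISPLAY is now '${DISPLAY:-unset}'"
```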

Related

Dynamic Linker does not resolve symbol although the library is already loaded

I stumbled over the following problem in my large-grown project: I have a set of libraries that depend on each other and on external libraries. One dependency ("libvtkCommonCore-*.so") comes in different variants that need to be used interchangeably. The variants have different suffixes ("libvtkCommonCore-custom1.so", "libvtkCommonCore-custom2.so", and so on), so I cannot link the library that needs its symbols directly against it. Instead, I link the application to the appropriate variant and then load my own library.
This approach generally works but fails under some circumstances, and I'm a bit lost figuring out what goes wrong.
This situation is working:
Sketch of situation 1
("libA" needs symbols from "libvtkCommonCore". It is loaded at run time by the constructor of a static object in "libB" via a "dlopen" call with flags RTLD_LAZY|RTLD_GLOBAL. libvtkCommonCore* and libB were linked at build time into an executable.)
This situation now ceases to work:
Sketch of situation 2
(actually the same as before but complicated by the fact that libvtkCommonCore* and libB are linked to another library libC at build time. This library is loaded from an executable at run time using "dlopen")
I investigated the case by setting LD_DEBUG to "files", "symbols" and/or "binding" and studying the output. It reveals that libvtkCommonCore* is loaded, initialized, and kept in memory the whole time, before libA is loaded. Yet when the linker tries to resolve "SymbolX" in libA, it does not search libvtkCommonCore, although it did for other libraries that needed the same symbol.
Note: I use Linux (Ubuntu 20) with recent GCC and CMake versions. Both the executable in situation 1 and "libC" in situation 2 were built with the flags "-Wl,--add-needed -Wl,--no-as-needed".
Note 2: if I launch the executable in situation 2 with LD_PRELOAD=libvtkCommonCore-custom1.so set, no errors appear.
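For reference, an LD_DEBUG trace like the ones mentioned above can be reproduced with any dynamically linked binary. A sketch (`/bin/true` stands in for the real executable, and the trace format is glibc-specific):

```shell
# Ask the glibc dynamic linker to log which libraries it loads;
# the trace goes to stderr. LD_DEBUG=bindings or =symbols work the
# same way and show the per-symbol lookup scope.
LD_DEBUG=libs /bin/true 2>&1 | head -n 5
```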
I would be grateful for any hint on how to continue debugging this issue.
A minimal example of the problem consists of these files:
libvtkCommonCore-custom1.cpp:
#include <iostream>

void SymbolX()
{
    std::cout << "This just does nothing useful." << std::endl;
}
libA.cpp:
void SymbolX(); // in libvtkCommonCore-custom1.so

struct LibAStaticObject
{
    LibAStaticObject()
    {
        SymbolX();
    }
} libAStaticObject;
libB.cpp:
#include <dlfcn.h>
#include <iostream>

class LibALoader
{
public:
    LibALoader()
    {
        void *handle = dlopen("libA.so", RTLD_LAZY | RTLD_GLOBAL | RTLD_NODELETE);
        if (!handle)
        {
            std::cerr << "Could not load module library libA!\nReason: " << dlerror() << std::endl;
        }
    }
} libAloader;
libC.cpp:
/*empty*/
executable_situation1.cpp:
#include <iostream>

int main(int argc, char *argv[])
{
    std::cout << "starting." << std::endl;
    return 0;
}
executable_situation2.cpp:
#include <iostream>
#include <dlfcn.h>

class LibCLoader
{
public:
    LibCLoader()
    {
        void *handle = dlopen("libC.so", RTLD_LAZY | RTLD_GLOBAL | RTLD_NODELETE);
        if (!handle)
        {
            std::cerr << "Could not load module library libC.so!\nReason: " << dlerror() << std::endl;
        }
    }
} libCloader;

int main(int argc, char *argv[])
{
    std::cout << "starting." << std::endl;
    return 0;
}
CMakeLists.txt:
add_library(vtkCommonCore-custom1 SHARED libvtkCommonCore-custom1.cpp)
add_library(A SHARED libA.cpp)
add_library(B SHARED libB.cpp)
target_link_libraries(B dl)
add_library(C SHARED libC.cpp)
target_link_libraries(C vtkCommonCore-custom1 B)
set_target_properties(C PROPERTIES LINK_FLAGS "-Wl,--add-needed -Wl,--no-as-needed -Wl,--copy-dt-needed-entries")
add_executable(executable_situation1 executable_situation1.cpp)
target_link_libraries(executable_situation1 vtkCommonCore-custom1 B)
set_target_properties(executable_situation1 PROPERTIES LINK_FLAGS "-Wl,--add-needed -Wl,--no-as-needed -Wl,--copy-dt-needed-entries") #"-Wl,--no-as-needed")
add_executable(executable_situation2 executable_situation2.cpp)
target_link_libraries(executable_situation2 dl)
Run it with these commands:
$ mkdir build
$ cd build
$ cmake .. && make
$ LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./executable_situation1
This just does nothing useful.
starting.
$ LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./executable_situation2
./executable_situation2: symbol lookup error: ./libA.so: undefined symbol: _Z7SymbolXv
Indeed, the problem is that in situation 2 libvtkCommonCore is not in the lookup scope of libA, while in situation 1 it is in the global scope.
The only (admittedly ugly) solution I found was to introduce a kind of stub library that loads libvtkCommonCore along with libB using "dlopen" with the RTLD_GLOBAL flag. This places libvtkCommonCore in the global lookup scope. The new library is then linked to libC instead of its direct dependencies.

Qt: Why does my custom QGraphicsView widget cause errors when compiling on Linux?

My Qt project compiles successfully on Windows, but when I try to compile it on Linux it gives me all kinds of errors, including the one I'm asking about here:
I have a custom QGraphicsView class in my project, promoted in Qt Designer. When I compile my code on a Linux machine, I get these errors:
/usr/mvl1/hy2vf/metaData/bin/ui_gtvalidation.h:55: error: ISO C++ forbids declaration of ‘myGraphicsView’ with no type
/usr/mvl1/hy2vf/metaData/bin/ui_gtvalidation.h:55: error: expected ‘;’ before ‘*’ token
/usr/mvl1/hy2vf/metaData/bin/ui_gtvalidation.h: In member function ‘void Ui_GTvalidation::setupUi(QDialog*)’:
/usr/mvl1/hy2vf/metaData/bin/ui_gtvalidation.h:173: error: ‘graphicsView’ was not declared in this scope
/usr/mvl1/hy2vf/metaData/bin/ui_gtvalidation.h:173: error: expected type-specifier before ‘myGraphicsView’
/usr/mvl1/hy2vf/metaData/bin/ui_gtvalidation.h:173: error: expected ‘;’ before ‘myGraphicsView’
Has anyone had the same issue? What's the solution?
Here is the part of ui_gtvalidation.h where it says the problems are. I'm actually not sure which part of my code I should post, so let me know what you want to look at.
55:myGraphicsView *graphicsView;
173:graphicsView = new myGraphicsView(GTvalidation);
graphicsView->setObjectName(QString::fromUtf8("graphicsView"));
myGraphicsView.h
#include <QtGui>

class myGraphicsView : public QGraphicsView {
public:
    myGraphicsView(QWidget* parent = 0);
    ~myGraphicsView(void);
protected:
    // Take over the interaction
    virtual void wheelEvent(QWheelEvent* event);
};
myGraphicsView.cpp:
#include "myGraphicsView.h"

myGraphicsView::myGraphicsView(QWidget *parent) : QGraphicsView(parent) {
}

myGraphicsView::~myGraphicsView(void) {
}

void myGraphicsView::wheelEvent(QWheelEvent* event) {
    setTransformationAnchor(QGraphicsView::AnchorUnderMouse);
    // Scale the view / do the zoom
    double scaleFactor = 1.15;
    if (event->delta() > 0) {
        // Zoom in
        scale(scaleFactor, scaleFactor);
    } else {
        // Zoom out
        scale(1.0 / scaleFactor, 1.0 / scaleFactor);
    }
    // Don't call the superclass handler here,
    // as the wheel is normally used for moving scrollbars.
}
Your problem is actually in your gtvalidation.ui file. When you promote a widget to a custom class, you need to specify the include header correctly. For some reason the compiler cannot find the specified header on Linux. The simplest explanation is a capitalization mismatch (Linux filesystems are case-sensitive; Windows ones are not). Check the header files specified in the promotion settings of your form in Designer.
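Since the usual culprit is capitalization, a quick way to check on the Linux box is a case-insensitive search for the promoted header. A sketch (the file name is taken from the question; adjust the search root to your source tree):

```shell
# List every file whose name matches the promoted header ignoring case.
# If the only hit differs in capitalization from what the .ui file
# records (e.g. "mygraphicsview.h" vs "myGraphicsView.h"), that's
# the mismatch to fix in the promotion settings.
find . -iname "mygraphicsview.h"
```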

Linux Rendering offscreen with OpenGL 3.2+ w/ FBOs

I have an Ubuntu machine and a command-line application, written on OS X, which renders something offscreen using FBOs. This is part of the code:
this->systemProvider->setupContext(); // be careful with this one: add checks to identify whether a context is set up or not
this->systemProvider->useContext();

glewExperimental = GL_TRUE;
glewInit();

GLuint framebuffer, renderbuffer, depthRenderBuffer;
GLuint imageWidth = _viewPortWidth,
       imageHeight = _viewPortHeight;

// Set up an FBO with one renderbuffer attachment
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB, imageWidth, imageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer);

// Now bind a depth buffer to the FBO
glGenRenderbuffers(1, &depthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, _viewPortWidth, _viewPortHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);
The "system provider" is a C++ wrapper around OS X's NSOpenGLContext, used just to create a rendering context and make it current without associating it with a window. All the rendering happens in the FBOs.
I am trying to use the same approach on Linux (Ubuntu) using GLX, but I am having a hard time, since I see that GLX requires a pixel buffer.
I am trying to follow this tutorial:
http://renderingpipeline.com/2012/05/windowless-opengl/
At the end it uses a pixel buffer to make the context current. I hear pixel buffers are deprecated and we should abandon them in favour of framebuffer objects; is that right? (I may be wrong about this.)
Does anyone have a better approach or idea?
I don't know if it's the best solution, but it works for me.
Binding the functions to local variables that we can use:
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);
typedef Bool (*glXMakeContextCurrentARBProc)(Display*, GLXDrawable, GLXDrawable, GLXContext);
static glXCreateContextAttribsARBProc glXCreateContextAttribsARB = NULL;
static glXMakeContextCurrentARBProc glXMakeContextCurrentARB = NULL;
Our objects as class properties:
Display *display;
GLXPbuffer pbuffer;
GLXContext openGLContext;
Setting up the context:
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc) glXGetProcAddressARB((const GLubyte *)"glXCreateContextAttribsARB");
glXMakeContextCurrentARB = (glXMakeContextCurrentARBProc) glXGetProcAddressARB((const GLubyte *)"glXMakeContextCurrent");

display = XOpenDisplay(NULL);
if (display == NULL) {
    std::cout << "error getting the X display";
}

static int visualAttribs[] = { None };
int numberOfFrameBufferConfigurations;
GLXFBConfig *fbConfigs = glXChooseFBConfig(display, DefaultScreen(display), visualAttribs, &numberOfFrameBufferConfigurations);
if (fbConfigs == NULL || numberOfFrameBufferConfigurations == 0) {
    std::cout << "error: no matching framebuffer configurations"; // avoid dereferencing fbConfigs[0] below
}

int context_attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
};

std::cout << "initialising context...";
this->openGLContext = glXCreateContextAttribsARB(display, fbConfigs[0], 0, True, context_attribs);

int pBufferAttribs[] = {
    GLX_PBUFFER_WIDTH,  (int)this->initialWidth,
    GLX_PBUFFER_HEIGHT, (int)this->initialHeight,
    None
};
this->pbuffer = glXCreatePbuffer(display, fbConfigs[0], pBufferAttribs);
XFree(fbConfigs);
XSync(display, False);
Using the context:
if (!glXMakeContextCurrent(display, pbuffer, pbuffer, openGLContext)) {
    std::cout << "error with context creation\n";
} else {
    std::cout << "made a context the current context\n";
}
After that, one can use FBOs normally, as one would on any other occasion. To this day my question is actually unanswered (whether there is a better alternative), so I am just offering a solution that worked for me. It seems to me that GLX does not use the notion of pixel buffers the same way OpenGL does, hence my confusion. The preferred way to render offscreen is FBOs, but for an OpenGL context to be created on Linux, a pixel buffer (the GLX kind) must be created. After that, using FBOs with the code from the question works as expected, the same way it does on OS X.

Where is ConnectEx defined?

I want to use the ConnectEx function on Windows 7, with MSVC 2010.
I am getting: error C3861: 'ConnectEx': identifier not found
MSDN suggests the function should be declared in mswsock.h; however, when checking it, there is no declaration there.
Any tips?
If you read further into the MSDN article for ConnectEx() you mentioned, it says:
Note The function pointer for the ConnectEx function must be obtained
at run time by making a call to the WSAIoctl function with the
SIO_GET_EXTENSION_FUNCTION_POINTER opcode specified. The input buffer
passed to the WSAIoctl function must contain WSAID_CONNECTEX, a
globally unique identifier (GUID) whose value identifies the ConnectEx
extension function. On success, the output returned by the WSAIoctl
function contains a pointer to the ConnectEx function. The
WSAID_CONNECTEX GUID is defined in the Mswsock.h header file.
Unlike other Windows API functions, ConnectEx() must be loaded at run time: the header file doesn't actually contain a function declaration for ConnectEx() (it only has a typedef for the function pointer, called LPFN_CONNECTEX), and the documentation doesn't name a specific import library to link against (as is usually the case for other Windows API functions).
Here's an example of how one could get this to work (error-checking omitted for exposition):
#include <Winsock2.h> // Must be included before Mswsock.h
#include <Mswsock.h>

// Required if you haven't specified this library for the linker yet
#pragma comment(lib, "Ws2_32.lib")

/* ... */
SOCKET s = /* ... */;
DWORD numBytes = 0;
GUID guid = WSAID_CONNECTEX;
LPFN_CONNECTEX ConnectExPtr = NULL;
int success = ::WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER,
                         (void*)&guid, sizeof(guid), (void*)&ConnectExPtr, sizeof(ConnectExPtr),
                         &numBytes, NULL, NULL);
// Check WSAGetLastError()!

/* ... */
// Assuming the pointer isn't NULL, you can call it with the correct parameters.
ConnectExPtr(s, name, namelen, lpSendBuffer,
             dwSendDataLength, lpdwBytesSent, lpOverlapped);

error C2065: 'CComQIPtr' : undeclared identifier

I'm still feeling my way around C++ and am a complete ATL newbie, so I apologize if this is a basic question. I'm starting with an existing VC++ executable project that has functionality I'd like to expose as an ActiveX object (while sharing as much of the source as possible between the two projects).
I've approached this by adding an ATL project to the solution, referencing all the .h and .cpp files from the executable project in it, adding all the appropriate references, and defining all the preprocessor macros. So far so good. But I'm getting a compiler error in one file (HideDesktop.cpp). The relevant parts look like this:
#include "stdafx.h"
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <WinInet.h> // Shell object uses INTERNET_MAX_URL_LENGTH (go figure)
#if _MSC_VER < 1400
#define _WIN32_IE 0x0400
#endif
#include <atlbase.h> // ATL smart pointers
#include <shlguid.h> // shell GUIDs
#include <shlobj.h>  // IActiveDesktop
#include "stdhdrs.h"

struct __declspec(uuid("F490EB00-1240-11D1-9888-006097DEACF9")) IActiveDesktop;

#define PACKVERSION(major,minor) MAKELONG(minor,major)

static HRESULT EnableActiveDesktop(bool enable)
{
    CoInitialize(NULL);

    HRESULT hr;
    CComQIPtr<IActiveDesktop, &IID_IActiveDesktop> pIActiveDesktop; // <- Problematic line (throws errors C2065 and C2275)
    hr = pIActiveDesktop.CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER);
    if (!SUCCEEDED(hr))
    {
        return hr;
    }

    COMPONENTSOPT opt;
    opt.dwSize = sizeof(opt);
    opt.fActiveDesktop = opt.fEnableComponents = enable;
    hr = pIActiveDesktop->SetDesktopItemOptions(&opt, 0);
    if (!SUCCEEDED(hr))
    {
        CoUninitialize();
        // pIActiveDesktop->Release();
        return hr;
    }

    hr = pIActiveDesktop->ApplyChanges(AD_APPLY_REFRESH);
    CoUninitialize();
    // pIActiveDesktop->Release();
    return hr;
}
This code is throwing the following compiler errors:
error C2065: 'CComQIPtr' : undeclared identifier
error C2275: 'IActiveDesktop' : illegal use of this type as an expression
error C2065: 'pIActiveDesktop' : undeclared identifier
The two weird bits: (1) CComQIPtr is defined in atlcomcli.h, which is included by atlbase.h, which is included in HideDesktop.cpp; and (2) this file only throws these errors when it's referenced in my new ATL/ActiveX project: it doesn't throw them in the original executable project, even though the two have basically the same preprocessor definitions. (The ATL ActiveX project naturally defines _ATL_DLL, but I can't see why that would make a difference.)
My current workaround is to use a normal "dumb" pointer, like so:
IActiveDesktop *pIActiveDesktop;
HRESULT hr = ::CoCreateInstance(CLSID_ActiveDesktop,
                                NULL, // no outer unknown
                                CLSCTX_INPROC_SERVER,
                                IID_IActiveDesktop,
                                (void**)&pIActiveDesktop);
And that works, provided I remember to release it. But I'd rather be using the ATL smart stuff.
Any thoughts?
You may have forgotten the ATL namespace; try qualifying the type explicitly:
ATL::CComQIPtr
