I am experimenting with OpenGL 2.x and 3.x tutorials. The programs compile and link but then segfault on seemingly innocent lines such as
glGenBuffers (1, &m_buffer);
My main() starts with glewInit and glutInit. OpenGL 1 programs compile and run fine; the problem seems limited to the newer functions wrapped by GLEW.
One tutorial says I should have this test before trying anything else:
if (false == glewIsSupported ("GL_VERSION_2_0"))
This test always fails, even when I change the version string to GL_VERSION_1_0.
#define GL_VERSION_1_3 1 is the highest such definition in GL/gl.h, and there is no GL/gl3.h or GL/GL3 directory.
apt says I have freeglut3 and freeglut3-dev installed, also mesa-common-dev, libglew-1.6 and libgl1-mesa-dev, but there doesn't seem to be any libgl3* package available.
Here is some driver info (I have no proprietary drivers; integrated Intel Ivy Bridge graphics plus a discrete Nvidia card, both of which I believe are OpenGL 1.4 compatible):
#> glxinfo | grep version
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 3.0 Mesa 9.0
OpenGL shading language version string: 1.30
All this has left me quite confused.
Are there specific OpenGL2/3/4 packages I should be installing, or in theory is it the same development package for all (for Ubuntu)?
Why is GL_VERSION_1_3 the highest defined version whereas glGenBuffers wasn't introduced until version 1.5?
Why does glewIsSupported fail even for version 1.0?
The impression I get is that I don't have libraries and/or drivers that actually implement the API, yet according to glxinfo I do. That makes me think something is wrong with the development libraries, but I don't have a coherent picture of what is going on.
Basically, what do I have to do to get my program to compile/link/run?
I know Ubuntu isn't a great development environment but please don't suggest that I change distro. There must be a way!
My main() starts with glewInit and glutInit
Nope. You don't get a current GL context until glutCreateWindow() returns. You can call glewInit() and glewIsSupported() after that.
Something like this:
#include <GL/glew.h>   // must come before GL/glut.h
#include <GL/glut.h>
...
int main( int argc, char** argv )
{
    glutInit( &argc, argv );
    glutInitDisplayMode( GLUT_RGBA | GLUT_DOUBLE );
    glutInitWindowSize( 300, 300 );
    glutCreateWindow( "OpenGL" );   // creates the window *and* a current GL context
    glewInit();                     // only now is it safe to load the extension entry points
    ...
    return 0;
}
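Once a context is current and glewInit() has run, the glewIsSupported() test from the tutorial should behave as expected. A minimal sketch of the check, placed right after glewInit() in the example above (it assumes <stdio.h> is included):
if ( !glewIsSupported( "GL_VERSION_2_0" ) )
{
    fprintf( stderr, "OpenGL 2.0 is not supported\n" );
    return 1;
}
GLuint buffer = 0;
glGenBuffers( 1, &buffer );   /* no longer segfaults once the entry point has been loaded */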
Related
I used an NVidia card with the proprietary drivers installed on Debian Stretch.
But because I carry my hard drive between different machines (Intel, AMD, but always amd64), I decided to drop the NVidia card and roll OpenGL back to Mesa in order to get 3D acceleration on every machine. After a lot of struggling I identified and recovered some files that the NVidia installer had badly overwritten (libGL.so, libdrm2.so).
I have now recovered the 64-bit libraries, so glxgears, the browser's WebGL support, gnuplot, etc. work well.
But the 32-bit libraries (wine, steam) still don't work; they always fall back to the "Mesa X11" renderer.
I used
$ LIBGL_DEBUG=verbose glxinfo | grep "OpenGL renderer string"
to identify which .so and which DRI driver get selected. It prints the lookup process and the renderer:
libGL: OpenDriver: trying /usr/lib/x86_64-linux-gnu/dri/tls/r600_dri.so
libGL: OpenDriver: trying /usr/lib/x86_64-linux-gnu/dri/r600_dri.so
libGL: Using DRI2 for screen 0
OpenGL renderer string: Gallium 0.4 on AMD SUMO (DRM 2.50.0 / 4.12.0-0.bpo.1-amd64, LLVM 3.9.1)
To investigate the 32-bit libraries (we can't have both the 64-bit and 32-bit versions of mesa-utils installed at the same time), I downloaded the 32-bit package:
$ apt-get download mesa-utils:i386
I unpacked it and tried to figure out why it fails to select the proper DRI driver:
LIBGL_DEBUG=verbose ./glxinfo | grep "OpenGL renderer string"
OpenGL renderer string: Mesa X11
The previous, 64-bit glxinfo prints debugging information to stderr, so we can see how the selection happens.
With the 32-bit version I can't get any useful information, even if I specify the
LIBGL_DRIVERS_PATH=/usr/lib/i386-linux-gnu/dri/
environment variable, where Mesa should be able to find the proper 32-bit .so:
$ file /usr/lib/i386-linux-gnu/dri/r600_dri.so
/usr/lib/i386-linux-gnu/dri/r600_dri.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=d5177f823f11ac8ea7412e517aa6684154de506e, stripped
How can I get more information about the Mesa DRI selection?
What I have:
I am writing a Qt application for Linux (I work in Linux Mint 17.3 64-bit)
I use C++11 features in my Qt project (Qt ver 5.5)
I want to add libslave to my Qt project.
libslave uses deprecated (for C++11) boost::function, boost::shared_ptr, boost::bind and boost::any.
My trouble:
When I compile the whole project, or only the library, with gcc (v4.8.4) and the -std=c++11 flag, the boost code fails with many errors. Qt Creator shows about 4000 errors, but they are pretty similar and look like:
typedef boost::function< void( RecordSet& )> callback;
is not complete type
'BOOST_NOEXCEPT' does not name a type
~any() BOOST_NOEXCEPT
etc...
I tried to rewrite the library using the C++11 standard library, but std does not contain an analog of boost::any, so that was a bad idea.
Question:
How do I compile boost (or at least libslave) with C++11?
Boost Version: 1.54 (from repo)
g++ version: 4.8.4 (from repo)
Qt version: 5.5 (downloaded from Official Site)
Linux Mint: 17.3 Rosa
UPDATE:
Example:
You can download the code I am trying to compile via this link.
Instruction:
Download tarball
Extract
Go to the folder and just type make (everything works fine)
Open the Makefile and change the CXX variable to
CXX = g++ -std=c++11
Try to make again and you'll get errors.
P.S.
To compile the library you'll need libmysqld-dev, libboost-all-dev and libmysqlclient-dev.
You'll probably need something else too, but I don't remember. Sorry.
I found a hack and it works for me.
I replaced the boost::bind usage in nanomysql.h with std::bind like this:
...
typedef std::map<std::string, field> value_t;
typedef std::vector< value_t > result_t;
void store(result_t& out)
{
//push_back is overloaded, so the exact member-function pointer has to be selected explicitly
auto hack = std::bind(static_cast<void (result_t::*)(const value_t&)>(&result_t::push_back), &out, std::placeholders::_1);
use(hack);
}
...
And replace all boost::shared_ptr and boost::function with std::shared_ptr and std::function in all of the library's files.
After this, everything compiles and works fine with the -std=c++11 flag.
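As an illustration of the mechanical part of the change, the callback typedef from the errors above ends up looking something like this (a sketch only, not the exact libslave code; RecordSet is the type from libslave):
#include <functional>
#include <memory>

// boost::function< void( RecordSet& ) > becomes
typedef std::function< void( RecordSet& ) > callback;

// and boost::shared_ptr<RecordSet> becomes
typedef std::shared_ptr<RecordSet> RecordSetPtr;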
The whole code of nanomysql.h can be seen here:
Link to code
Use the current fork of libslave - https://github.com/vozbu/libslave - which supports C++11. Support for MySQL 5.6 and 5.7 will come soon.
For my current project, I need to use CUDA and the Intel C/C++ compilers in the same project. (I rely on the SSYEV implementation of Intel's MKL, which takes roughly 10 times as long when built with GCC+MKL instead of ICC+MKL: ~3 ms with GCC vs. ~300 µs with ICC.)
icc -v
icc version 12.1.5
NVIDIA states that Intel ICC 12.1 is supported (http://docs.nvidia.com/cuda/cuda-samples/index.html#linux-platforms-supported), but even after downgrading to Intel ICC 12.1.5 (installed as part of the Intel Composer XE 2011 SP1 Update 3), I am still running into this issue:
nvcc -ccbin=icc src/test.cu -o test
/usr/local/cuda-5.5/bin//..//include/host_config.h(72): catastrophic error: #error directive: -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
#error -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
Unfortunately, it seems as if Nvidia is merely tolerating the use of ICC, because I would hardly call it "support", given the lack of information provided by Nvidia for using ICC together with CUDA.
I am running Ubuntu 12.10 x86_64 and CUDA 5.5. Telling icc to mimic the behavior of the stock GCC 4.7.2 using the -Xcompiler -gcc-version=470 option did not help either. Searching around, I was only able to find threads on the Nvidia forums dealing with CUDA 3.x and Intel ICC 11.1, and I was unable to transfer that information to current CUDA releases.
I would be very grateful for any suggestion on how to solve this issue :-)
Referring to the file referenced in the error you received, it's specifically looking for an ICC compiler with a particular build date:
#if defined(__ICC)
#if !(__INTEL_COMPILER == 9999 && __INTEL_COMPILER_BUILD_DATE == 20110811) || !defined(__GNUC__) || !defined(__LP64__)
#error -- unsupported ICC configuration! Only ICC 12.1 on Linux x86_64 is supported!
#endif
#endif /* __ICC */
The solution would be to use the Intel compiler that actually matches that specified build date. As indicated, ICC 12.1, i.e. version 12.1.0.233, instead of ICC 12.1.5, should do the trick.
The narrow focus is at least partly due to a test limitation. In this case, a particular ICC variant was tested with the CUDA toolkit before it was released, and so that host config check has this test in it.
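If you want to verify what a given icc installation will report to that check, you can print the macros that host_config.h compares against (a small standalone test, not part of the original answer):
#include <stdio.h>

int main( void )
{
#ifdef __INTEL_COMPILER
    /* host_config.h expects these to be 9999 and 20110811 */
    printf( "__INTEL_COMPILER            = %d\n", __INTEL_COMPILER );
    printf( "__INTEL_COMPILER_BUILD_DATE = %d\n", __INTEL_COMPILER_BUILD_DATE );
#else
    printf( "not compiled with icc\n" );
#endif
    return 0;
}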
I ran into the same problem when compiling madagascar-1.5 with icc 2013 and ifort 2013. I resolved it by downloading ICC 2011 update 7: based on the __INTEL_COMPILER_BUILD_DATE of 20110811 I could pick the correct release. The icc build matching the date 20110811 is the right one.
I am using CentOS 6.4, which has gcc version 4.4.7, but CUDA 5 requires gcc version 4.4.5 as per the following link: CUDA-toolkit-release-notes
How can I downgrade gcc to 4.4.5 or below without causing harm to my system?
Actually I think 4.4.7 will be OK. If you're having trouble using 4.4.7, please post a new question with the details of the problems you are having. Although the link you reference mentions 4.4.5, that is simply the version CUDA was tested with. If you look in /usr/local/cuda/include/host_defines.h you will see that the enforced limit is 4.6.x or below:
#if defined(__GNUC__)
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 6)
#error -- unsupported GNU version! gcc 4.7 and up are not supported!
#endif /* __GNUC__> 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 6) */
#endif /* __GNUC__ */
If you really want to install a different gcc/g++, it is possible; you can search those topics on Stack Overflow or on the web. Here's one example on the web of a how-to site that explains installing an arbitrary version of gcc/g++ alongside the version that ships with your OS. It mentions Fedora 15, but the instructions should work fine for your CentOS 6.4.
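If you do install a second toolchain, you don't have to replace the system compiler; nvcc can be pointed at the alternative one's bin directory with the -ccbin option (the install path below is just an assumed example):
nvcc -ccbin=/opt/gcc-4.4.5/bin src/test.cu -o test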
I am looking for a short OpenGL geometry shader example that will run on Linux, preferably with as few dependencies as possible. Basically I want to use the program as a test to see if geometry shaders are supported at all on the system it is currently running on.
Just use glxinfo (in the package mesa-utils on Ubuntu/Debian) and check the extension list (GL_EXT/ARB_geometry_shader4) or OpenGL version (>= 3.2) for geometry shader support.
Extension example:
user#machine:~$ glxinfo | grep "GL_EXT_framebuffer_object"
GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_object,
Version example:
user#machine:~$ glxinfo | grep "OpenGL version"
OpenGL version string: 2.1 Mesa 7.10.2
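If you would rather do the check from inside a program than with glxinfo, the same test can be done with GLEW once a context exists. A minimal sketch reusing the GLUT setup from the first answer above (compile with something like gcc check.c -lglut -lGLEW -lGL):
#include <stdio.h>
#include <GL/glew.h>
#include <GL/glut.h>

int main( int argc, char** argv )
{
    glutInit( &argc, argv );
    glutCreateWindow( "geometry shader check" );   /* a context must exist before glewInit() */
    glewInit();

    /* the core version or either geometry shader extension is enough */
    if ( GLEW_VERSION_3_2 || GLEW_ARB_geometry_shader4 || GLEW_EXT_geometry_shader4 )
        printf( "geometry shaders supported\n" );
    else
        printf( "geometry shaders NOT supported\n" );

    return 0;
}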