Does using XEmbed put restrictions on OpenGL functionality? - linux

I am building an NPAPI plugin for Linux that uses the XEmbed protocol for the window controlled by the plugin. I am using Gtk+ to attach to the window, wrapping the XEmbed window with a GtkPlug. I want to render an OpenGL surface on the window (using GtkGLExt), but when I make the resulting OpenGL context current I am unable to create GLSL shaders; indeed, querying glGetString(GL_VERSION) shows that the OpenGL version string has changed from "2.1 NVIDIA..." to "1.4 (2.1 NVIDIA...)", suggesting that the GL drivers have downgraded the OpenGL functionality in this situation.
I haven't been able to find any direct references to limitations that using XEmbed places on OpenGL functionality. Does anyone know if XEmbed effectively downgrades OpenGL to a fixed pipeline?

Related

What's the difference between a GLX visual and a FBconfig?

I'm learning OpenGL under X11 with xcb, and I'm having a hard time figuring out the difference between visuals and fbconfigs (the ones you find in glxinfo).
As far as I can see, a visual is a set of properties related to the depth buffer, stencil buffer, framebuffer, etc. What's the difference with fbconfigs, and why would one be preferable to the other?
In the X Window System a Visual encapsulates the color mapping (color type, color depth) for a Display. The same Display can be configured with different Visuals.
When OpenGL was born, about a decade after the X Window System, a structure called XVisualInfo was created on the OpenGL side, not in X itself. This new structure extended the Visual type with more features, such as ancillary buffers, double buffering, and stereo. This XVisualInfo was used to create the GL context.
In 1998 the GLX 1.3 specification (find it at the Khronos page) added more features, notably GLXPbuffer for off-screen rendering, which is easier to use than GLXPixmap. Transparency, multisampling, and sample buffers were also added. The configuration of a GLXDrawable (Window or GLXPixmap, and now also GLXWindow and GLXPbuffer) had diverged too far from what a Visual could describe, and so GLXFBConfig was introduced.
The current GLX 1.4 specification still allows the use of XVisualInfo, for backwards-compatibility reasons and only if you don't use GLX > 1.2 features, but the preferred way of creating a context is via GLXFBConfig.
Notice that rendering to a GLXPbuffer does not use an X Visual. Notice also that framebuffer objects, available since OpenGL 3.0, make GLXPbuffer obsolete.
The visual is a concept of X11 itself. It describes the color encoding properties. A particular X11 server may support a set of different visuals, and an X11 client (graphical application) may choose the one best suited to its use case. Every X11 window is created with respect to one visual. See the documentation about X11 visual types for details.
On an X11 server with the GLX extension, there are a couple of such visuals which provide hardware-accelerated rendering via OpenGL. Before you can create an X11 window which you're going to use for GL rendering, you need to query a suitable visual. In traditional GLX, you would use for example glXChooseVisual to do that.
A GLXFBConfig, on the other hand, is an entity that is only relevant for GLX itself; the classical X server does not know anything about it. GLXFBConfigs can be used to create off-screen rendering buffers called PBuffers (which are kind of obsolete nowadays, though).
One could classify FBConfigs into two groups:
GLXFBConfigs which you can use to create an X11 window. In this case, the FBConfig refers to some X11 visual ID, and you can use glXGetVisualFromFBConfig to query it.
GLXFBConfigs which can solely be used for off-screen rendering. There is no associated visual ID, so you cannot use these to create X11 windows.
FBConfigs provide a newer and more flexible interface via glXChooseFBConfig, so it is always preferable to use the FBConfig API, even if you only want an ordinary on-screen window.
What a typical GL implementation will do is provide an FBConfig for each visual type it supports, so you should find those twice in the glxinfo output: once as the actual visuals, and once as more or less identical FBConfigs. Additionally, it will offer some more FBConfigs with formats that would be atypical for X11 windows (like more than 32-bit color depth).
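To make the two groups above concrete, here is a minimal sketch (the attribute list is just an illustrative choice) that picks an FBConfig with glXChooseFBConfig and checks whether it carries an X visual:

#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    /* Ask for a window-capable, double-buffered RGBA config. */
    static const int attribs[] = {
        GLX_X_RENDERABLE,  True,
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_DOUBLEBUFFER,  True,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        GLX_DEPTH_SIZE, 24,
        None
    };

    int count = 0;
    GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &count);
    if (!configs || count == 0) return 1;

    /* Only window-capable FBConfigs have an associated X visual. */
    XVisualInfo *vi = glXGetVisualFromFBConfig(dpy, configs[0]);
    if (vi) {
        printf("FBConfig 0 maps to visual ID 0x%lx\n", vi->visualid);
        XFree(vi);   /* the visual would normally be used to create the window */
    } else {
        printf("FBConfig 0 is off-screen only (no X visual)\n");
    }

    XFree(configs);
    XCloseDisplay(dpy);
    return 0;
}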

How to create opengl context via drm (Linux)

I want to use OpenGL rendering without X. Searching with Google I found this: http://dvdhrm.wordpress.com/2012/08/11/kmscon-linux-kmsdrm-based-virtual-console/ which says that it is possible. I should use DRM and EGL. EGL can create an OpenGL context but requires a NativeWindow. Will DRM provide me with a NativeWindow? Should I use KMS? I know that I must have an open-source video driver. I want a desktop OpenGL context specifically, not OpenGL ES (on Linux). Does anyone know of a tutorial or example code?
Yes, you need the KMS stack (example). Here is a simple example under Linux; it uses OpenGL ES, but the steps to make it work against the desktop OpenGL API are simple:
In the EGL attributes, set EGL_RENDERABLE_TYPE to EGL_OPENGL_BIT.
And tell EGL which API to bind to:
eglBindAPI(EGL_OPENGL_API);
Be sure to have the latest kernel drivers and mesa-dev, libdrm-dev, and libgbm-dev. This code is portable to Android; it's just not so easy to get the default Android graphics stack silenced.
note: I had trouble with the 32-bit version, but I still don't know why. Those libs are actively developed, so I'm not sure it wasn't a bug.
note2: depending on your GLSL version, the float precision qualifier may or may not be supported:
precision mediump float;
note3: if you get a permission failure on /dev/dri/card0, grant access with:
sudo chmod 666 /dev/dri/card0
or add the current user to the video group with
sudo adduser $user video
You may also setgid your executable with the group set to video (maybe the best option).
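Putting the steps above together, here is a compressed sketch of the GBM/EGL half, loosely following what kmscube does; the device path and surface size are assumptions, error handling is omitted, and the KMS mode-setting/page-flip half is left out:

#include <fcntl.h>
#include <gbm.h>
#include <EGL/egl.h>

int main(void)
{
    /* Needs the /dev/dri/card0 permissions discussed above. */
    int fd = open("/dev/dri/card0", O_RDWR);
    struct gbm_device  *gbm  = gbm_create_device(fd);
    struct gbm_surface *surf = gbm_surface_create(gbm, 1280, 720,
                                   GBM_FORMAT_XRGB8888,
                                   GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

    EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
    eglInitialize(dpy, NULL, NULL);
    eglBindAPI(EGL_OPENGL_API);                 /* desktop GL, not GLES */

    const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,    /* the attribute noted above */
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg; EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    EGLSurface es  = eglCreateWindowSurface(dpy, cfg, (EGLNativeWindowType)surf, NULL);
    eglMakeCurrent(dpy, es, es, ctx);

    /* ... render with GL, eglSwapBuffers(dpy, es), then hand the gbm_surface's
       front buffer to KMS via drmModeSetCrtc/drmModePageFlip ... */
    return 0;
}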

OpenGL without X.org in linux

I'd like to open an OpenGL context without X in Linux. Is there any way at all to do it?
I know it's possible for integrated Intel graphics hardware, though most people have Nvidia cards in their systems. I'd like to get a solution that works with Nvidia cards.
If there's no other way than through integrated Intel hardware, I guess it'd be okay to know how it's done with those.
The X11 protocol itself is too large and complex, and the mouse/keyboard/tablet input multiplexing it provides is too watered-down for modern programs. I think it's the worst roadblock preventing the Linux desktop from improving, which is why I'm looking for alternatives.
Update (Sep. 17, 2017):
NVIDIA recently published an article detailing how to use OpenGL on headless systems, which is a very similar use case as the question describes.
In summary:
Link to libOpenGL.so and libEGL.so instead of libGL.so. (Your linker options should therefore be -lOpenGL -lEGL.)
Call eglGetDisplay, then eglInitialize to initialize EGL.
Call eglChooseConfig with the config attribute EGL_SURFACE_TYPE set to EGL_PBUFFER_BIT.
Call eglCreatePbufferSurface, then eglBindAPI(EGL_OPENGL_API), then eglCreateContext and eglMakeCurrent.
From that point on, do your OpenGL rendering as usual, and you can blit your pixel buffer surface wherever you like. This supplementary article from NVIDIA includes a basic example and an example for multiple GPUs. The PBuffer surface can also be replaced with a window surface or pixmap surface, according to the application needs.
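A minimal sketch of those steps, assuming a 640x480 pbuffer and skipping error checks:

#include <stdio.h>
#include <EGL/egl.h>
#include <GL/gl.h>

/* Build with: cc headless.c -lOpenGL -lEGL */
int main(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,   /* off-screen surface */
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,    /* desktop GL */
        EGL_NONE
    };
    EGLConfig cfg; EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    const EGLint pbuf_attribs[] = { EGL_WIDTH, 640, EGL_HEIGHT, 480, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surf, surf, ctx);

    printf("GL_VERSION: %s\n", glGetString(GL_VERSION));
    /* ... render, then glReadPixels or blit the pbuffer wherever needed ... */

    eglTerminate(dpy);
    return 0;
}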
I regret not doing more research on this on my previous edit, but oh well. Better answers are better answers.
Since my answer in 2010, there have been a number of major shakeups in the Linux graphics space. So, an updated answer:
Today, nouveau and the other DRI drivers have matured to the point where OpenGL software is stable and performs reasonably well in general. With the introduction of the EGL API in Mesa, it's now possible to write OpenGL and OpenGL ES applications even on Linux desktops.
You can write your application to target EGL, and it can be run without the presence of a window manager or even a compositor. To do so, you would call eglGetDisplay, eglInitialize, and ultimately eglCreateContext and eglMakeCurrent, instead of the usual GLX calls to do the same.
I do not know the specific code path for working without a display server, but EGL accepts both X11 displays and Wayland displays, and I do know it is possible for EGL to operate without one. You can create GL ES 1.1, ES 2.0, ES 3.0 (if you have Mesa 9.1 or later), and OpenGL 3.1 (Mesa 9.0 or later) contexts. Mesa has not (as of Sep. 2013) yet implemented OpenGL 3.2 Core.
Notably, on the Raspberry Pi and on Android, EGL and GL ES 2.0 (1.1 on Android < 3.0) are supported by default. On the Raspberry Pi, I don't think Wayland yet works (as of Sep. 2013), but you do get EGL without a display server using the included binary drivers. Your EGL code should also be portable (with minimal modification) to iOS, if that interests you.
Below is the outdated, previously accepted post:
I'd like to open an OpenGL context without X in linux. Is there any way at all to do it?
I believe Mesa provides a framebuffer target. If it provides any hardware acceleration at all, it will only be with hardware for which there are open source drivers that have been adapted to support such a use.
Gallium3D is also immature, and support for this isn't even on the roadmap, as far as I know.
I'd like to get a solution that works with nvidia cards.
There isn't one. Period.
NVIDIA only provides an X driver, and the Nouveau project is still immature; it doesn't support the kind of use you're looking for, as it is currently focused only on the X11 driver.
You might be interested in a project called Wayland
http://en.wikipedia.org/wiki/Wayland_%28display_server%29
Have you looked at this page?
http://virtuousgeek.org/blog/index.php/jbarnes/2011/10/31/writing_stanalone_programs_with_egl_and_
It is likely a bit outdated. I haven't tried it yet, but I would appreciate more documentation of this type.
Probably a good idea, as of today, is to follow Wayland compositor-drm.c implementation:
http://cgit.freedesktop.org/wayland/weston/tree/src/compositor-drm.c
https://gitlab.freedesktop.org/mesa/kmscube/ is a good reference implementation of OpenGL (or OpenGL ES) hardware-accelerated rendering without an X11 or Wayland dependency.
You can look at how Android has solved these issues. See the Android-x86 project.
Android uses Mesa with EGL and OpenGL ES. Android has its own simple Gralloc component for mode setting and graphics allocations. On top of that, they have the SurfaceFlinger component, a composition engine which uses OpenGL ES for acceleration.
I cannot see why you couldn't use these components in a similar way, and even reuse the Android glue code.

windows ce - 2d graphics library

I have a Windows CE 5.0 device and it doesn't support any hardware acceleration.
I am looking for a good 2D graphics library to do the following things. I prefer backend programming in the Compact .NET Framework.
Drawing fonts with antialiasing.
Drawing lines and simple vector objects with antialiasing.
I am not doing animation, so I don't care about frames-per-second performance.
I have looked into the following libraries, but nothing suits me.
OpenGL (Vincent 3D software rendering) - works, but the API is very low-level and complex.
OpenVG - no software implementation for Windows CE.
Cairo - the API is very neat, but there's no WinCE build.
Adobe Flash - installs as a browser plugin; no ActiveX support in WinCE.
Anti-aliased fonts in .NET CF 2.0+ can be done with Microsoft.WindowsCE.Forms.LogFont -- after creating your LogFont, you can use it with any WinForms widget's .Font property by converting it using System.Drawing.Font.FromLogFont().
...you might need to enable anti-aliasing in the registry for these to render properly; see this MSDN article for the right keys: http://msdn.microsoft.com/en-us/library/ms901096.aspx
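For reference, the managed LogFont type mirrors the native Win32 LOGFONT structure, and the anti-aliasing request lives in its quality field. A minimal native sketch of the same idea (the face name and size are arbitrary choices, and this is the plain GDI analogue rather than the .NET CF code itself):

#include <windows.h>
#include <wchar.h>

/* Request an anti-aliased font through LOGFONT's quality field;
   the managed LogFont exposes the same fields. */
HFONT create_antialiased_font(int pixel_height)
{
    LOGFONTW lf = { 0 };
    lf.lfHeight  = -pixel_height;        /* negative = character height in pixels */
    lf.lfWeight  = FW_NORMAL;
    lf.lfQuality = ANTIALIASED_QUALITY;  /* ask GDI to smooth glyph edges */
    wcscpy(lf.lfFaceName, L"Tahoma");    /* assumed to be installed on the device */
    return CreateFontIndirectW(&lf);
}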
There was a decent implementation of GDI+ for .NET CF 1.0 called "XrossOne Mobile GDI+". It's no longer supported, but you can get the source code here: http://www.isquaredsoftware.com/XrossOneGDIPlus.php -- run it through the import wizard in VS2008 to build it for later versions of CF. I liked this library for its alpha transparency support without hardware acceleration, its rounded rectangles, and its gradient support.
Someone was advertising this library on some forum. It's for Windows Mobile, but you can check it out. I have no experience with it.
link
I have Google's Skia library compiling under Windows CE, although I haven't done much with it yet :) It wasn't too hard to get working. It does support an OpenGL/ES backend.
There is also AGG (Anti-Grain Geometry), which is a heavy C++ library based on templates.

If I build and link an OpenGL application using only OpenGL ES 1.x calls, will it still work?

I am writing an OpenGL game which will hopefully run on both Linux and iPhone OS. I basically want to be able to build using the OpenGL ES 1.5 headers and run it on my Linux desktop. Can I do this? I.e., I want to use only the subset of API calls common to OpenGL and OpenGL ES.
Doing the above and linking with the normal libGL.a from my system gets me my screen, but I seem to be able to do nothing except change the scene background colour.
I've done exactly that, and it worked well for me.
There are a bunch of OpenGL|ES extensions that aren't available in standard OpenGL but are very nice to have on a low-spec platform. glDrawTexImage is such an extension. Emulating these extensions with a handful of desktop OpenGL calls is not a big deal though.
Also, OpenGL|ES supports the fixed-point data format for most entry points. Take glClearColorx for example. These aren't available in desktop OpenGL, so you have to write a wrapper if you want to use them. It's a bit more work if you also store your vertex data in this format.
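A minimal sketch of such a wrapper; the underscore-suffixed names are mine, chosen to avoid clashing with real GLES headers (the GLES 'x' entry points take 16.16 fixed-point values, so dividing by 65536 recovers the float):

#include <GL/gl.h>

typedef int GLfixed_;   /* GLES defines GLfixed; plain desktop GL headers may not */

#define FIXED_TO_FLOAT(x) ((x) / 65536.0f)

/* Desktop-GL stand-in for the GLES glClearColorx entry point. */
static void glClearColorx_(GLfixed_ r, GLfixed_ g, GLfixed_ b, GLfixed_ a)
{
    glClearColor(FIXED_TO_FLOAT(r), FIXED_TO_FLOAT(g),
                 FIXED_TO_FLOAT(b), FIXED_TO_FLOAT(a));
}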
Oh, and note that OpenGL|ES does not come with the GLU library. You can use it on the desktop, but if you do, you'll have to reimplement those functions later (see the hundred questions about gluLookAt and gluUnProject).
There is no such thing as OpenGL ES 1.5. Did you mean 1.1?
Also, how do you get a window? That is platform-specific.
In any case, you should still compile against the header that corresponds to the lib you will link against. You don't know for sure what the header sets up (e.g. on Windows, which you don't care about, but still, calling conventions are specified in there).
There are also some calls that don't map well between the two, e.g. APIs that take only doubles in GL are float in GLES (from the ES spec):
The double-precision only commands DepthRange, Frustum, and Ortho are replaced with single-precision or fixed-point variants
So, in short, there is a bit more work than just using the same code, although the work in question is still minimal if you stick to the GL ES subset.
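If you want a single code path across both, a tiny desktop-side shim for those three commands is enough. A sketch; the underscore-suffixed names are mine, since newer desktop GL headers may already declare the f variants:

#include <GL/gl.h>

/* Forward the single-precision GLES 1.x names to the double-precision
   desktop commands the spec excerpt above mentions. */
static void glOrthof_(GLfloat l, GLfloat r, GLfloat b, GLfloat t,
                      GLfloat n, GLfloat f)
{
    glOrtho(l, r, b, t, n, f);
}

static void glFrustumf_(GLfloat l, GLfloat r, GLfloat b, GLfloat t,
                        GLfloat n, GLfloat f)
{
    glFrustum(l, r, b, t, n, f);
}

static void glDepthRangef_(GLfloat zNear, GLfloat zFar)
{
    glDepthRange(zNear, zFar);
}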

Resources