I would like to learn graphics programming with OpenGL, and since I am just starting out I decided to learn the new/OpenGL 3 way of doing things. As far as I can see, one has to create an OpenGL 3 context for this (the Core profile in the new OpenGL 3.2, if I understand correctly). I thought about using Qt for this, currently version 4.5.2, since I already know and like it and it provides an OpenGL widget. The problem is that the OpenGL widget always seems to be created with the old OpenGL 2 context, and I can't find an option to create it with, or switch it to, an OpenGL 3 context. Am I missing something obvious, or does creating an OpenGL 3 context with Qt require something more involved? Is it even supported in the current version of Qt? I'm using Linux, if that makes any difference.
Mesa software rendering is still stuck on OpenGL 2.1. If you're using the binary NVIDIA drivers, they provide OpenGL 3.2 support on sufficiently recent hardware. AMD's latest fglrx supports 3.1. The open-source drivers seem to top out around 1.3-1.4.
If you've gotten this far, you'll probably have to hack the Qt sources to use GLX_ARB_create_context instead of glXCreateContext to get an OpenGL 3.2 Core context.
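For reference, the core of what that hack boils down to is a call to glXCreateContextAttribsARB. A minimal sketch, assuming a Display* and a GLXFBConfig are already set up (error handling omitted):

```cpp
// Minimal sketch: creating an OpenGL 3.2 Core context via GLX_ARB_create_context.
#include <GL/glx.h>

typedef GLXContext (*glXCreateContextAttribsARBProc)(
    Display*, GLXFBConfig, GLXContext, Bool, const int*);

GLXContext createGL32Context(Display* dpy, GLXFBConfig fbconfig)
{
    // The entry point must be fetched at runtime; it is not exported directly.
    glXCreateContextAttribsARBProc glXCreateContextAttribsARB =
        (glXCreateContextAttribsARBProc)glXGetProcAddress(
            (const GLubyte*)"glXCreateContextAttribsARB");

    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 2,
        GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
        None
    };
    // The old path Qt uses would call glXCreateContext/glXCreateNewContext instead.
    return glXCreateContextAttribsARB(dpy, fbconfig, 0, True, attribs);
}
```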
This guy seems to have had partial success, if you haven't already come across the thread via Google.
When I build an OpenGL application on Windows 10, I have to link against opengl32.lib. I use GLEW for loading OpenGL functions; internally, GLEW uses wglGetProcAddress(). opengl32.lib only provides support for OpenGL 1.1, so how does it work when wglGetProcAddress() is asked for some newer OpenGL functionality? Does it act as a proxy and communicate with the graphics driver, for example NVIDIA's OpenGL library?
Does it work the same way on Linux?
On Windows you have a so-called OpenGL ICD (Installable Client Driver). Essentially, for every function that the OpenGL 1.1 specification defines, the ICD provides the actual implementation, and opengl32.dll passes calls through to it based on the currently active OpenGL context (i.e. you can have multiple ICDs installed, serving different OpenGL contexts within the same program).
The wglGetProcAddress function is part of that set, which is why you have to load extension/newer functions separately for each context on Windows. So essentially, when you call wglGetProcAddress, it just calls the actual ...GetProcAddress of the ICD.
On Linux we never had the concept of ICDs. A couple of years ago we finally got GLvnd (GL Vendor-Neutral Dispatch), which essentially gives Linux an ICD-like mechanism. However, the GLX specification clearly states that the addresses obtained through glXGetProcAddress are invariant and identical for all OpenGL contexts. That means it's the OpenGL implementation's task (and not that of the intermediary layer in between) to dispatch functions based on the context. The Mesa developers describe it here: https://docs.mesa3d.org/dispatch.html
In short: It's a mess.
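To make the difference concrete, here is roughly what a loader like GLEW effectively does on each platform (a minimal sketch; error handling omitted):

```cpp
// Minimal sketch of platform-specific OpenGL function loading.
#ifdef _WIN32
#include <windows.h>
// On Windows the result is tied to the *current* context, so this must be
// called with the right context bound, and re-queried per context to be safe.
void* getGLProc(const char* name)
{
    void* p = (void*)wglGetProcAddress(name);
    // OpenGL 1.1 entry points are exported by opengl32.dll itself, so
    // wglGetProcAddress returns NULL for them; fall back to the DLL exports.
    if (p == nullptr)
        p = (void*)GetProcAddress(GetModuleHandleA("opengl32.dll"), name);
    return p;
}
#else
#include <GL/glx.h>
// On Linux the GLX spec guarantees the returned address is the same for all
// contexts, so the implementation must dispatch on the current context itself.
void* getGLProc(const char* name)
{
    return (void*)glXGetProcAddress((const GLubyte*)name);
}
#endif
```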
I'd like to develop an application targeting modern popular Linux distributions that uses GTK for its UI, but also the Vulkan API to render a 3D model. Ideally I'd like to use the gtkmm C++ wrapper for GTK, and the Vulkan C++ API.
What ways do I currently have to do this?
I know that I can get a Vulkan context using SDL2 and other similar low level libraries, and I can get an OpenGL context using GTK. But I haven't found resources for combining these two approaches.
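For reference, the SDL2 route I mean looks roughly like this (a minimal sketch; error handling and the rest of the Vulkan setup omitted):

```cpp
// Minimal sketch: getting a VkSurfaceKHR from an SDL2 window.
#include <SDL2/SDL.h>
#include <SDL2/SDL_vulkan.h>
#include <vulkan/vulkan.h>
#include <vector>

int main()
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* window = SDL_CreateWindow("vulkan", SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED, 800, 600, SDL_WINDOW_VULKAN);

    // Ask SDL which instance extensions the window system needs.
    unsigned int count = 0;
    SDL_Vulkan_GetInstanceExtensions(window, &count, nullptr);
    std::vector<const char*> extensions(count);
    SDL_Vulkan_GetInstanceExtensions(window, &count, extensions.data());

    VkInstanceCreateInfo ci = {};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.enabledExtensionCount = count;
    ci.ppEnabledExtensionNames = extensions.data();
    VkInstance instance;
    vkCreateInstance(&ci, nullptr, &instance);

    VkSurfaceKHR surface;
    SDL_Vulkan_CreateSurface(window, instance, &surface);
    // ... pick a physical device, create a device/swapchain, render ...
    return 0;
}
```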
To start, I'm not limited to developing on or targeting any particular Linux distribution, although any insights into why a particular environment makes this easier or more difficult are appreciated.
Edit:
I'm aware of this question: What is the Vulkan equivalent of the GtkGLArea widget (GTK+)?
However, many months have passed since its most recent update. My Google searching does not indicate that the state of affairs has changed, but I would like to be proven wrong. In addition, I intentionally phrased my question more broadly. I don't necessarily want just a GtkVulkanArea widget. I want to know of any valid way to combine Gtk and Vulkan. For example, is it possible to embed a Gtk event loop and widgets in an SDL2 window? What about the other way around? Again, my Google searching has not been very helpful, and I hope someone knowledgeable on this topic will answer.
I have a small OpenGL application that has been developed using GLUT. What are my best options to render directly to a Linux framebuffer (fbdev) with OpenGL, without an X-Server? I understand that GLUT needs X, so I'm not looking for ways to use GLUT without X.
The framebuffer device I intend to use is confirmed working with fbi and mplayer.
I have done (or I'd like to think I have done) pretty exhaustive research, and found some resources and libraries that might work. But most of the info is a bit outdated, and I'm not sure what to trust.
DirectFB looks good, exactly what I'm looking for, but does not seem to be in active development.
I'm inclined to try this out on my target device: https://github.com/mcdoh/glGears-on-DirectFB-with-OpenGL-ES - but again, this is the only example code I can find, and it's six years old.
Mesa is another interesting candidate, but I can't seem to find any recent information.
This looks interesting: http://www.mesa3d.org/glfbdev-driver.html - but I cannot find any example code to work from.
So, while a lot of SO answers mention DirectFB and Mesa as solutions, I can't bring myself to be confident in those options while so little material can be found.
So, if you can point me in the right direction here, or give me any examples to work from, that would be highly appreciated. What am I missing?
Edit due to question being marked as duplicate:
The answer to the related question recommends using DRM. I intend to run my code on an Allwinner H3-based embedded computer that does not yet support the mainline Linux kernel. Currently, it's running on kernel version 3.14, which I believe does not have DRM support.
So, are there any alternatives?
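For reference, the kind of vendor-EGL path I've seen suggested for boards like this looks roughly like the sketch below. This is only a sketch: the native display and window types are implementation-specific (some fbdev EGL stacks, e.g. the Mali blobs common on Allwinner boards, want a vendor-defined fbdev_window struct rather than a null window).

```cpp
// Sketch: OpenGL ES 2.0 context on fbdev via a vendor EGL implementation.
// EGL_DEFAULT_DISPLAY and a null native window work on some stacks only.
#include <EGL/egl.h>

int main()
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, nullptr, nullptr);

    const EGLint cfgAttribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig config;
    EGLint numConfigs;
    eglChooseConfig(dpy, cfgAttribs, &config, 1, &numConfigs);

    const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, ctxAttribs);

    // Implementation-specific: some fbdev EGL stacks accept a null native window.
    EGLSurface surf = eglCreateWindowSurface(dpy, config, (EGLNativeWindowType)0, nullptr);
    eglMakeCurrent(dpy, surf, surf, ctx);
    // ... render with OpenGL ES, then eglSwapBuffers(dpy, surf) ...
    return 0;
}
```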
I want to write some OpenGL 3.2, and likely also OpenGL 4, code on Linux, and I just saw that libsdl 1.2 (the latest stable release) only supports 2.x. LibSDL 1.3 (which is in development) should support it, but it'll be a while before it gets into mainstream distributions. Is there any library out there right now that allows me to create an OpenGL window with a context of my choice, and preferably also helps me with input?
If not, is there some small library that reduces the pain of working with Xlib? My Windows path for OpenGL is written with the plain old WinAPI, with my own message pump etc., and I wonder if X11 is worse than that. A quick web search indicates that one should use a library on top of Xlib. I'd be happy with something that just wraps Xlib, so that I can create the OpenGL context myself with glX if Xlib really is that horrible.
GLFW (GL Framework) supports creating OpenGL 3.0+ contexts and also handles input; you can read about it at:
http://gpwiki.org/index.php/GLFW
http://www.glfw.org/
Sadly, the main page is down now.
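For reference, with the current GLFW 3 API, requesting a specific context version looks like this (a minimal sketch; the GLFW 2.x series this answer refers to used different, but analogous, window hints):

```cpp
// Minimal sketch: an OpenGL 3.2 core-profile window with GLFW 3.
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return -1;

    // Request a 3.2 core-profile context before creating the window.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "GL 3.2", NULL, NULL);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window))
    {
        // ... render ...
        glfwSwapBuffers(window);
        glfwPollEvents();  // GLFW also delivers keyboard/mouse input here
    }
    glfwTerminate();
    return 0;
}
```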
I've been playing around with Direct3D 11 a little bit lately and have been frustrated by the lack of documentation on the basics of the API (such as simple geometry rendering). One of the points of confusion brought on by the sparse documentation is the (apparent) move away from the use of effects for shaders.
In D3D11 all of the effect (.fx) support has been removed from the D3DX libraries and buried away in a hard-to-find (and sparsely documented, of course) shared-source library. None of the included examples use it, preferring instead to compile HLSL files directly. All of this says to me that Microsoft is trying to get people to stop using the effect file format. Is that true? Is there any documentation of any kind that states that? I'm fine doing it either way, but for years now they've been promoting the .fx format, so it seems odd that they would suddenly decide to drop it.
Many professional game and graphics developers don't use the effects interfaces in Direct3D, and many of the leading game engines do not use them either. Instead, custom material/effects subsystems are built on top of the lower-level shader and graphics state management facilities. This allows developers to do things like target both Direct3D and OpenGL through a common asset management pipeline.
The main issue is that the fx_5_0 profile which is needed to compile Effects 11 shaders with the required metadata is deprecated by the HLSL compiler team. The runtime is shared-source, but the compiler is not. In the latest D3DCompiler (#47) it emits a warning about this. fx_5_0 was never updated for some newer language aspects in DirectX 11.1 and 11.2, but works "as is" for Direct3D 11.
The second issue is that you need D3DCompile APIs at runtime to make use of Effects 11. Since D3DCompile was 'development only' for Windows Store apps for Windows 8.0 and Windows phone 8.0, it wasn't an option there. It is technically possible to use Effects 11 today with Windows Store apps for Windows 8.1 and Windows phone 8.1 since D3DCompile #47 is part of the OS and includes the 'deprecated/as-is' compiler support for fx_5_0, but this use is not encouraged.
The bulk of the DirectX SDK samples and all the Windows Store samples avoid use of Effects 11. I did post a few Win32 desktop samples that use it to GitHub.
UPDATE: With the release of the legacy Microsoft.DXSDK.D3DX NuGet repacking of the original D3DX #43, I was able to update the rest of the legacy DirectX SDK samples so they can build with the modern Windows SDK and not require the legacy DirectX SDK to be installed. Most of the Direct3D 9 and Direct3D 10 samples, and a few Direct3D 11 samples, all use legacy Effects. See GitHub.
So in short: yes, you are discouraged from using it, but you can still use it at the moment if you can live with the disclaimers.
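For reference, compiling HLSL directly, the approach the samples use instead of effects, looks roughly like this (a minimal sketch; "shader.hlsl" and "VSMain" are placeholder names, and error handling is minimal):

```cpp
// Minimal sketch: compiling an HLSL vertex shader directly, without Effects 11.
#include <d3d11.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

HRESULT CreateVS(ID3D11Device* device, ID3D11VertexShader** vs, ID3DBlob** bytecode)
{
    ID3DBlob* errors = nullptr;
    // "shader.hlsl" / "VSMain" are placeholders for your own file and entry point.
    HRESULT hr = D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                                    "VSMain", "vs_5_0", 0, 0, bytecode, &errors);
    if (FAILED(hr))
    {
        if (errors) { OutputDebugStringA((const char*)errors->GetBufferPointer()); errors->Release(); }
        return hr;
    }
    // The compiled bytecode blob is also what you'd reflect or cache to disk.
    return device->CreateVertexShader((*bytecode)->GetBufferPointer(),
                                      (*bytecode)->GetBufferSize(), nullptr, vs);
}
```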
I'm in the exact same position, and after Googling like crazy for even the simplest sample that uses D3DX11CreateEffectFromMemory, I too have come to the conclusion that .fx file support isn't their highest priority. Although it is strange that they've added the EffectGroup concept, which is new to 11, if they don't want us to use it.
I've played a little with the new reflection API, and it looks like it will be pretty easy to hack together your own functions for setting variables etc., in essence creating your own Effect class. The next step is going to be to see what support there is for creating render state blocks via the API. Being able to edit those directly in the .fx file was very nice, so hopefully something like that still exists (or, at worst, I can rip that part from the Effects11 code).
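The reflection side of that looks roughly like this (a minimal sketch; it assumes `bytecode` is an ID3DBlob produced by D3DCompile, and error handling is omitted):

```cpp
// Minimal sketch: enumerating constant buffers via shader reflection,
// the building block for a home-grown Effect-style variable system.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstdio>
#pragma comment(lib, "d3dcompiler.lib")
#pragma comment(lib, "dxguid.lib")

void DumpConstantBuffers(ID3DBlob* bytecode)
{
    ID3D11ShaderReflection* reflector = nullptr;
    D3DReflect(bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
               IID_ID3D11ShaderReflection, (void**)&reflector);

    D3D11_SHADER_DESC shaderDesc;
    reflector->GetDesc(&shaderDesc);

    for (UINT i = 0; i < shaderDesc.ConstantBuffers; ++i)
    {
        ID3D11ShaderReflectionConstantBuffer* cb =
            reflector->GetConstantBufferByIndex(i);
        D3D11_SHADER_BUFFER_DESC cbDesc;
        cb->GetDesc(&cbDesc);
        printf("cbuffer %s: %u bytes, %u variables\n",
               cbDesc.Name, cbDesc.Size, cbDesc.Variables);
        for (UINT v = 0; v < cbDesc.Variables; ++v)
        {
            // Offsets and sizes are what you need to fill the buffer yourself.
            D3D11_SHADER_VARIABLE_DESC varDesc;
            cb->GetVariableByIndex(v)->GetDesc(&varDesc);
            printf("  %s @ offset %u, size %u\n",
                   varDesc.Name, varDesc.StartOffset, varDesc.Size);
        }
    }
    reflector->Release();
}
```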
There is an effects runtime provided as a sample in the DirectX SDK that should help you use .fx files.
Check out the directory: %DXSDK_DIR%\Samples\C++\Effects11
http://msdn.microsoft.com/en-us/library/ff476261(v=VS.85).aspx
This suggests that it can take a shader or an effect.
http://msdn.microsoft.com/en-us/library/ff476190(v=VS.85).aspx
Also, what is the difference between a shader and an effect?