I have finished a project that uses OpenGL functions to display a 3D graph.
Then I plugged an NVIDIA GPU into my PC. Since the project already runs without the NVIDIA GPU, how can I make sure the OpenGL functions run on the NVIDIA GPU rather than on the original integrated graphics?
I'm afraid there are no built-in OpenGL functions for this very purpose. However, some extensions will do: WGL_NV_gpu_affinity should work for NVIDIA cards.
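For reference, a minimal sketch of how that extension is used, assuming the driver actually exposes it (as far as I know, only Quadro-class drivers do); error handling is abbreviated, and the usual pixel-format and context creation on the returned DC is omitted:

```cpp
// Sketch: create an affinity DC bound to the first NVIDIA GPU with
// WGL_NV_gpu_affinity. A dummy context must already be current, because
// wglGetProcAddress only resolves extension entry points with one bound.
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // HGPUNV and the PFNWGL* typedefs

HDC CreateNvidiaAffinityDC()
{
    PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");
    if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
        return NULL;                       // extension not exposed

    HGPUNV gpu;
    if (!wglEnumGpusNV(0, &gpu))           // GPU index 0: first NVIDIA GPU
        return NULL;

    HGPUNV gpuList[2] = { gpu, NULL };     // NULL-terminated GPU list
    return wglCreateAffinityDCNV(gpuList);
    // Choose a pixel format on this DC, then create the GL context on it.
}
```

On Optimus laptops, where the affinity extension isn't available, exporting the NvOptimusEnablement symbol from the executable is the commonly cited way to make the driver prefer the discrete GPU.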
Also see Select a graphic device in windows + opengl and Is there a way to programmatically select the rendering GPU in a multi-GPU environment? (Windows)
Related
I'm trying to draw a triangle by following LINK.
The problem is that my GPU is AMD Radeon (TM) R5 M330 // Discrete/Hybrid.
It supports Vulkan™ API Version 1.2.170.
The Vulkan SDK version is 1.3.216.0.
I get
'FeatureRestrictionNotMet(FeatureRestrictionError { feature: "dynamic_rendering", restriction: NotSupported })'
due to the Vulkan API version supported by the GPU.
Is there any way to draw the triangle without needing dynamic_rendering, or is it possible to draw it some other way?
I am new to graphics programming, and this will be my first experience with the Vulkano/Rust/AMD stack.
I'm only vaguely familiar with 3D graphics so I will explain this to the best of my abilities. I got ToonCar, an old game I used to play, running on my Windows 8.1 PC. On both my integrated Intel graphics card, as well as my Nvidia 840M, the game performs and sounds fine, but the 3D textures glitch all over the screen (see link below). All of the glitching textures seem to be .r3d files.
Compatibility mode for an older Windows OS hasn't helped, but running the game in reduced color mode, at 640x480 resolution, and with display scaling disabled for high DPI has been helpful.
In the game's setup there is a drop-down for "Video System", with the options being RGB emulation, Direct3D HAL, Direct3D T&L HAL, and Intel(R) HD Graphics Family. The game runs very slow on some of these modes but runs fine on the Intel option.
Is there any way to run the game on an older version of DirectX (I'm on 11.0) or OpenGL (I'm on 4.2), or are there options within the Nvidia Control Panel that could help me out? Even identifying the problem itself would be very helpful.
Here is a link to the video of the problem. Couldn't get my screen recording software to grab it, sorry about that. https://i.imgur.com/iqaKHDN.gifv
How to query GPU Usage in DirectX? Specifically DirectX 11.
If someone ever did it, could you provide me the code snippet?
Process Hacker is able to do this. See here: http://processhacker.svn.sourceforge.net/viewvc/processhacker/2.x/trunk/plugins/ExtendedTools/gpumon.c?revision=4927&view=markup
A similar question has been asked here: How do you calculate the load on a nvidia (cuda capable), gpu card?
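The gpumon.c linked above works through the low-level D3DKMTQueryStatistics call, which takes some setup. As a simpler alternative sketch (not a Direct3D 11 API; it assumes Windows 10 or later, where the "GPU Engine" performance-counter set that Task Manager reads is available), you can sample GPU utilization through PDH:

```cpp
// Sketch: total 3D-engine GPU utilization via Windows performance
// counters (PDH). Assumes Windows 10+, where the "GPU Engine" counter
// set exists. Error handling abbreviated.
#include <windows.h>
#include <pdh.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "pdh.lib")

int main()
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PdhOpenQueryW(NULL, 0, &query);
    // One counter instance exists per process and engine; the wildcard
    // matches every 3D-engine instance, and we sum them up below.
    PdhAddEnglishCounterW(query,
        L"\\GPU Engine(*engtype_3D)\\Utilization Percentage", 0, &counter);

    PdhCollectQueryData(query);            // rate counters need two samples
    Sleep(1000);
    PdhCollectQueryData(query);

    DWORD bytes = 0, count = 0;            // first call just sizes the buffer
    PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE, &bytes, &count, NULL);
    std::vector<unsigned char> buf(bytes);
    PPDH_FMT_COUNTERVALUE_ITEM_W items = (PPDH_FMT_COUNTERVALUE_ITEM_W)buf.data();
    PdhGetFormattedCounterArrayW(counter, PDH_FMT_DOUBLE, &bytes, &count, items);

    double total = 0.0;
    for (DWORD i = 0; i < count; ++i)
        total += items[i].FmtValue.doubleValue;
    std::printf("GPU 3D engine utilization: %.1f%%\n", total);

    PdhCloseQuery(query);
    return 0;
}
```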
I was wondering if it would be possible to get graphical hardware acceleration without Xorg and its DDX driver, using only the kernel module and the rest of the userspace driver. I'm asking because I'm starting to develop on an embedded platform (something like a BeagleBoard, or more generally a Texas Instruments ARM chip with an integrated GPU), and I would like to get hardware acceleration without the overhead of a graphical server that isn't needed.
If yes, how? I was thinking about OpenGL or OpenGL ES implementations, or Qt Embedded: http://harmattan-dev.nokia.com/docs/library/html/qt4/qt-embeddedlinux-accel.html
TI provides extensive documentation, but it is still not clear to me:
http://processors.wiki.ti.com/index.php/Sitara_Linux_Software_Developer%E2%80%99s_Guide
Thank you.
The answer will depend on your user application. If everything is bare metal and your application team is writing everything, the DirectFB API can be used, as Fredrik suggests. This might be especially interesting if you use the framebuffer version of GTK.
However, if you are using Qt, then this is not the best way forward. Qt5.0 does away with QWS (Qt embedded acceleration). Qt is migrating to LightHouse, now known as QPA. If you write a QPA plug-in that uses your graphics acceleration by whatever kernel mechanism you expose, then you have accelerated Qt graphics. Also of interest might be the Wayland architecture; there are QPA plug-ins for Wayland. Support exists for QPA in Qt4.8+ and Qt5.0+. Skia is also an interesting graphics API with support for an OpenGL backend; Skia is used by Android devices.
Getting graphics acceleration is easy. Do you want compositing? What is your memory footprint? Who is the developer audience that will program to the API? Do you need object functionality or just drawing primitives? There is a big difference between Skia, PegUI, WindML and full-blown graphics frameworks (Gtk, Qt) with all the widgets and dynamic effects that people expect today. Programming to the OpenGL ES API might seem fine at first glance, but if your application has any complexity you will need a richer graphics framework; this mostly re-iterates Mats Petersson's comment.
Edit: From the Qt embedded acceleration link, these are the types of drawing you may wish to perform:
CPU blitter - slowest.
Hardware blitter - e.g., DirectFB; fast memory movement, usually with bit operations as opposed to machine words, like DMA.
2D vector - OpenVG; stick-figure drawing, with bit manipulation.
3D drawing - OpenGL (ES) has polygon fills, etc.
Frameworks like Qt and Gtk give you an API to put a radio button, checkbox, editbox, etc. on the screen. They also handle styling of the text and interaction with a keyboard, mouse, and/or touch screen, among other elements. A framework uses the drawing engine to put these objects on the screen.
Graphics acceleration is just putting algorithms like Bresenham's line algorithm (a plain-CPU version is sketched below) into a separate processor or dedicated hardware. If the framework you choose doesn't support 3D objects, it is unlikely to need OpenGL support and may not perform any better with it.
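To make that concrete, here is what such a primitive looks like when the CPU has to do it itself; this is the standard integer-only formulation of Bresenham's algorithm, with a caller-supplied hypothetical setPixel callback standing in for the framebuffer write:

```cpp
// Bresenham's line algorithm: the kind of drawing primitive a hardware
// blitter or drawing engine offloads from the CPU.
// setPixel is a caller-supplied callback that writes one pixel.
#include <cstdlib>

void drawLine(int x0, int y0, int x1, int y1,
              void (*setPixel)(int x, int y))
{
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                       // combined error term

    for (;;) {
        setPixel(x0, y0);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   // step along x
        if (e2 <= dx) { err += dx; y0 += sy; }   // step along y
    }
}
```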
The final piece of the puzzle is a window manager. Many embedded devices do not need this. However, many handset are using compositing and alpha values to create transparent windows and allow multiple apps to be seen at the same time. This may also influence your graphics API.
Additionally: DRI without X gives some compelling reasons why this might not be a good thing to do; for the case of a single user task, the DRI is not even needed.
The following is a diagram of the Wayland graphics stack, from a blog on Wayland.
This depends on the SoC GPU driver implementation.
On i.MX6, you can use a Wayland compositor on the framebuffer.
I built a sample project as a reference:
Qt with wayland on imx6D/Q
On OMAP3 there is a project:
omap3 sgx wayland
I'd like to open an OpenGL context without X in Linux. Is there any way at all to do it?
I know it's possible with integrated Intel graphics hardware, though most people have Nvidia cards in their systems. I'd like a solution that works with Nvidia cards.
If there's no other way than through integrated Intel hardware, I guess it'd be okay to know how it's done with those.
The X11 protocol itself is too large and complex. The mouse/keyboard/tablet input multiplexing it provides is too watered-down for modern programs. I think it's the worst roadblock preventing the Linux desktop from improving, which is why I'm looking for alternatives.
Update (Sep. 17, 2017):
NVIDIA recently published an article detailing how to use OpenGL on headless systems, which is a very similar use case as the question describes.
In summary:
Link to libOpenGL.so and libEGL.so instead of libGL.so (your linker options should therefore be -lOpenGL -lEGL).
Call eglGetDisplay, then eglInitialize to initialize EGL.
Call eglChooseConfig with the config attribute EGL_SURFACE_TYPE set to EGL_PBUFFER_BIT.
Call eglCreatePbufferSurface, then eglBindAPI(EGL_OPENGL_API), then eglCreateContext and eglMakeCurrent.
From that point on, do your OpenGL rendering as usual, and you can blit your pixel buffer surface wherever you like. This supplementary article from NVIDIA includes a basic example and an example for multiple GPUs. The PBuffer surface can also be replaced with a window surface or pixmap surface, according to the application needs.
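Putting those steps together, a minimal sketch (error checking omitted; the 640x480 pbuffer size is just an arbitrary example):

```cpp
// Headless OpenGL through EGL on a pbuffer surface.
// Build with: c++ egl_headless.cpp -lOpenGL -lEGL
#include <EGL/egl.h>

int main()
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    static const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,   // as in the step above
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    static const EGLint pbuf_attribs[] = {
        EGL_WIDTH, 640, EGL_HEIGHT, 480, EGL_NONE
    };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surf, surf, ctx);

    // ...ordinary OpenGL rendering goes here...

    eglTerminate(dpy);
    return 0;
}
```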
I regret not doing more research on this on my previous edit, but oh well. Better answers are better answers.
Since my answer in 2010, there have been a number of major shakeups in the Linux graphics space. So, an updated answer:
Today, nouveau and the other DRI drivers have matured to the point where OpenGL software is stable and performs reasonably well in general. With the introduction of the EGL API in Mesa, it's now possible to write OpenGL and OpenGL ES applications even on Linux desktops.
You can write your application to target EGL, and it can be run without the presence of a window manager or even a compositor. To do so, you would call eglGetDisplay, eglInitialize, and ultimately eglCreateContext and eglMakeCurrent, instead of the usual glx calls to do the same.
I do not know the specific code path for working without a display server, but EGL accepts both X11 displays and Wayland displays, and I do know it is possible for EGL to operate without one. You can create GL ES 1.1, ES 2.0, ES 3.0 (if you have Mesa 9.1 or later), and OpenGL 3.1 (Mesa 9.0 or later) contexts. Mesa has not yet (as of Sep. 2013) implemented OpenGL 3.2 Core.
Notably, on the Raspberry Pi and on Android, EGL and GL ES 2.0 (1.1 on Android < 3.0) are supported by default. On the Raspberry Pi, I don't think Wayland yet works (as of Sep. 2013), but you do get EGL without a display server using the included binary drivers. Your EGL code should also be portable (with minimal modification) to iOS, if that interests you.
Below is the outdated, previously accepted post:
I'd like to open an OpenGL context without X in linux. Is there any way at all to do it?
I believe Mesa provides a framebuffer target. If it provides any hardware acceleration at all, it will only be with hardware for which there are open source drivers that have been adapted to support such a use.
Gallium3D is also immature, and support for this isn't even on the roadmap, as far as I know.
I'd like to get a solution that works with nvidia cards.
There isn't one. Period.
NVIDIA only provides an X driver, and the Nouveau project is still immature, and doesn't support the kind of use that you're looking for, as they are currently focused only on the X11 driver.
You might be interested in a project called Wayland
http://en.wikipedia.org/wiki/Wayland_%28display_server%29
Have you looked at this page?
http://virtuousgeek.org/blog/index.php/jbarnes/2011/10/31/writing_stanalone_programs_with_egl_and_
It is likely a bit outdated. I haven't tried yet, but I would appreciate more documentation of this type.
Probably a good idea, as of today, is to follow Weston's compositor-drm.c implementation:
http://cgit.freedesktop.org/wayland/weston/tree/src/compositor-drm.c
https://gitlab.freedesktop.org/mesa/kmscube/ is a good reference implementation of OpenGL (or OpenGL ES) hardware-accelerated rendering without an X11 or Wayland dependency.
You can look at how Android has solved these issues. See the Android-x86 project.
Android uses Mesa with EGL and OpenGL ES. Android has its own simple Gralloc component for mode setting and graphics buffer allocations. On top of that, it has the SurfaceFlinger component, a composition engine that uses OpenGL ES for acceleration.
I can't see why you couldn't use these components in a similar way and even reuse the Android glue code.