Morphological Filtering: how to disable it in D3D11 - direct3d

I want to disable Morphological Filtering in my application.
My computer uses an AMD Radeon R7 M260 graphics card.
My application, developed in C++, uses Skia and ANGLE; ANGLE uses the D3D11 backend.
When Morphological Filtering is enabled, my application looks blurry.
I can disable Morphological Filtering in the AMD settings UI, but I want to disable it from my C++ code.
Is there a good way to do this?
Thank you!

Related

Hardware acceleration without X

I was wondering if it would be possible to get graphical hardware acceleration without Xorg and its DDX driver, using only the kernel module and the rest of the userspace driver. I'm asking because I'm starting to develop on an embedded platform (something like a BeagleBoard, or more roughly a Texas Instruments ARM chip with an integrated GPU), and I would like hardware acceleration without the overhead of a graphical server (which is not needed).
If yes, how? I was thinking about OpenGL or OpenGL ES implementations, or Qt embedded http://harmattan-dev.nokia.com/docs/library/html/qt4/qt-embeddedlinux-accel.html
TI provides extensive documentation, but it is still not clear to me:
http://processors.wiki.ti.com/index.php/Sitara_Linux_Software_Developer%E2%80%99s_Guide
Thank you.
The answer will depend on your user application. If everything is bare metal and your application team is writing everything, the DirectFB API can be used as Fredrik suggests. This might be especially interesting if you use the framebuffer version of GTK.
However, if you are using Qt, then this is not the best way forward. Qt 5.0 does away with QWS (Qt embedded acceleration). Qt is migrating to Lighthouse, now known as QPA. If you write a QPA plug-in that uses your graphics acceleration through whatever kernel mechanism you expose, then you have accelerated Qt graphics. Also of interest might be the Wayland architecture; there are QPA plug-ins for Wayland. Support for QPA exists in Qt 4.8+ and Qt 5.0+. Skia is also an interesting graphics API with support for an OpenGL backend; Skia is used by Android devices.
Getting graphics acceleration is easy. Do you want compositing? What is your memory footprint? Who is the developer audience that will program to the API? Do you need object functionality or just drawing primitives? There is a big difference between Skia, PegUI, WindML and full-blown graphics frameworks (Gtk, Qt) with all the widgets and dynamic effects that people expect today. Programming to the OpenGL ES API might seem fine at first glance, but if your application has any complexity you will need a richer graphics framework; this mostly reiterates Mats Petersson's comment.
Edit: From the Qt embedded acceleration link,
CPU blitter - slowest.
Hardware blitter - e.g., DirectFB. Fast memory movement, usually with bit ops as opposed to machine words, like DMA.
2D vector - OpenVG; stick-figure drawing, with bit manipulation.
3D drawing - OpenGL (ES) has polygon fills, etc.
This is the type of drawing you wish to perform. Frameworks like Qt and Gtk give an API to put a radio button, checkbox, editbox, etc. on the screen. They also provide styling of text and interaction with a keyboard, mouse and/or touch screen and other elements. A framework uses the drawing engine to put the objects on the screen.
Graphics acceleration is just putting algorithms like Bresenham's line algorithm into a separate CPU or dedicated hardware. If the framework you chose doesn't support 3D objects, it is unlikely to need OpenGL support and may not perform any better with it.
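To make the point concrete, here is a minimal sketch of the line-drawing algorithm mentioned above; this is exactly the kind of integer-only pixel loop that a 2D accelerator implements in hardware instead of on the CPU. The function name and return type are illustrative, not from any particular API.

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Integer-only Bresenham line algorithm: produces the pixels between two
// endpoints using only additions, comparisons, and shifts.
std::vector<std::pair<int, int>> bresenham_line(int x0, int y0, int x1, int y1) {
    std::vector<std::pair<int, int>> points;
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;  // error term tracks distance from the ideal line
    while (true) {
        points.emplace_back(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
    }
    return points;
}
```

Calling `bresenham_line(0, 0, 5, 3)` yields the six pixels of a shallow diagonal; offloading this loop is what "accelerated line drawing" means at the lowest level.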
The final piece of the puzzle is a window manager. Many embedded devices do not need this. However, many handset are using compositing and alpha values to create transparent windows and allow multiple apps to be seen at the same time. This may also influence your graphics API.
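The compositing with alpha values mentioned above boils down to a per-pixel "over" blend, which a compositor (or its GPU shader) runs for every pixel of every transparent window. A minimal integer-arithmetic sketch, assuming non-premultiplied 32-bit ARGB pixels (the function name is hypothetical):

```cpp
#include <cstdint>

// Standard "over" compositing of one ARGB pixel onto another: the per-pixel
// operation behind transparent windows. Uses non-premultiplied alpha and
// integer arithmetic with +127 for rounding.
uint32_t blend_over(uint32_t src, uint32_t dst) {
    uint32_t a = (src >> 24) & 0xFF;       // source alpha, 0..255
    uint32_t result = 0xFF000000u;         // keep the destination fully opaque
    for (int shift = 0; shift <= 16; shift += 8) {  // B, G, R channels
        uint32_t s = (src >> shift) & 0xFF;
        uint32_t d = (dst >> shift) & 0xFF;
        uint32_t c = (s * a + d * (255 - a) + 127) / 255;
        result |= c << shift;
    }
    return result;
}
```

For example, a half-transparent red over black gives mid-intensity red; a fully opaque source simply replaces the destination.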
Additionally: DRI without X gives some compelling reasons why this might not be a good thing to do; for the case of a single user task, the DRI is not even needed.
A diagram of the Wayland graphics stack can be found in a blog on Wayland.
This depends on the SoC GPU driver implementation.
On i.MX6, you can use a Wayland compositor on the framebuffer.
I built a sample project as a reference:
Qt with wayland on imx6D/Q
On OMAP3 there is a project:
omap3 sgx wayland

How to utilize 2d/3d Graphics Acceleration on a Single Board Computer

This may be a somewhat silly question, but if you are working with a single board computer that boasts that it has 2d/3d graphics acceleration, what does this actually mean?
If it supports DirectX or OpenGL, obviously I could just use that framework, but I am not familiar with working at this end of things. I do not know whether it means that those libraries can be included in the OS, or just that the hardware does certain kinds of math more quickly (either by default or through some other process).
Any clarification on what this means or locations of resources I could use on such would be greatly appreciated.
On embedded systems, 2D/3D graphics acceleration could mean a lot of things. For instance, that framebuffer operations are accelerated through DirectFB, or that OpenGL ES is supported.
The fact is that the manufacturer of the board usually provides these libraries since the acceleration of the graphics itself is deeply connected to the hardware.
It's best to get in touch with your manufacturer and ask which graphics libraries they support that are hardware accelerated.
There are two very important features of 2D/3D graphics cards:
They take load away from the CPU
They process that load much faster than the CPU can, because they have a special instruction set designed explicitly for calculations that are common in graphics (e.g. transformations)
Sometimes other jobs are passed to the GPU because they require calculations that fit the GPU's instruction set very well. E.g. a physics library requires lots of matrix calculations, so a GPU could be used for that. NVIDIA made PhysX to do exactly that. See this FAQ as well.
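The "calculations that are common in graphics" are largely this pattern: the same small matrix multiply-add applied independently to thousands of vertices, which is why it parallelizes so well on a GPU. A CPU-side sketch with illustrative names (`Mat2`, `transform` are not any real API):

```cpp
#include <array>
#include <vector>

// A 2x2 transform (rotation/scale/shear) applied to every vertex. GPUs run
// this same multiply-add pattern in parallel across all points at once.
struct Mat2 { float m00, m01, m10, m11; };

std::vector<std::array<float, 2>> transform(const Mat2& m,
                                            const std::vector<std::array<float, 2>>& pts) {
    std::vector<std::array<float, 2>> out;
    out.reserve(pts.size());
    for (const auto& p : pts)
        out.push_back({m.m00 * p[0] + m.m01 * p[1],
                       m.m10 * p[0] + m.m11 * p[1]});
    return out;
}
```

With the 90-degree rotation matrix `{0, -1, 1, 0}`, the point (1, 0) maps to (0, 1); a physics engine's matrix work looks much the same, which is why it can be offloaded too.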
The minimum a graphics display requires is the ability to set the state (colour) of individual pixels. This allows you to render any image within the resolution and colour depth of the display, but for complex drawing tasks and very high resolution displays this would be very slow.
Graphics acceleration refers to any graphics processing function off-loaded to hardware. At its simplest this may mean the drawing and filling of graphics primitives such as lines and polygons, and 'blitting' - the moving of blocks of pixels from one location to another. Technically, graphics accelerators have been largely replaced by graphics processors (GPUs), though the effect is the same - faster graphics. GPUs are more flexible, since a hardware accelerator can accelerate only the set of operations it is hard-wired to perform, which may benefit some applications more than others.
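For reference, "blitting" is simple enough to show in full: copy a w×h rectangle of pixels from one row-major buffer to another. A blitter does this with dedicated DMA-style hardware instead of the CPU loop below (function signature is illustrative; strides are in pixels):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// A software "blit": copy a w*h rectangle of 32-bit pixels from (sx, sy) in
// src to (dx, dy) in dst. Strides are the buffer widths in pixels.
void blit(const std::vector<uint32_t>& src, int src_stride, int sx, int sy,
          std::vector<uint32_t>& dst, int dst_stride, int dx, int dy,
          int w, int h) {
    for (int row = 0; row < h; ++row)
        std::memcpy(&dst[(dy + row) * dst_stride + dx],   // destination scanline
                    &src[(sy + row) * src_stride + sx],   // source scanline
                    w * sizeof(uint32_t));                // w pixels per row
}
```

Everything a hardware blitter adds - format conversion, colour-key transparency, stretching - is a variation on this inner loop.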
Modern GPU graphics hardware performs far higher-level graphics processing. It is also possible to use the GPU for more general-purpose matrix computation tasks through interfaces such as Nvidia's CUDA, which can then accelerate computational tasks other than graphics that require the same kind of mathematical operations.
The Wikipedia "Graphics processing unit" article has a history of Graphics Accelerators and GPUs

Simple 2D graphics programming

I used DirectDraw in C and C++ years back to draw some simple 2D graphics. I was used to the steps of creating a surface, writing to it using pointers, flipping the back-buffer, storing sprites on off-screen surfaces, and so on. So today if I want write some 2D graphics programs in C or C++, what is the way to go?
Will this same method of programming still apply or do I have to have a different understanding of the video hardware abstraction?
What libraries and tools are available on Windows and Linux?
What libraries and tools are available on Windows and Linux?
SDL, OpenGL, and Qt 4 (it is a GUI library, but it is fast and flexible enough for 2D rendering)
Will this same method of programming still apply or do I have to have a different understanding of the video hardware abstraction?
Normally you don't write data into a surface "using pointers" every frame; instead you manipulate/draw surfaces using methods provided by the API. This is because the driver works faster with video memory than if you transfer data from system memory into video memory every frame.
You can still write data into a hardware surface/texture (even every frame) if you have to, but those surfaces may need to be treated in a special way to get optimal performance. For example, in DirectX you would need to tell the driver that the surface data is going to change frequently and that you are only going to write data into the surface, never read it back.
Also, in 3D-oriented APIs (OpenGL/DirectX), rendering one surface onto another is a somewhat "special case", and you may need to use "render targets" (DirectX) or "framebuffer objects" (OpenGL). This is different from DirectDraw (where, AFAIK, you could blit anything onto anything). The good thing is that with a 3D API you get an incredibly flexible way of dealing with surfaces/textures - stretching, rotating, tinting them with colour, blending them together, and processing them using shaders can all be done in hardware.
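To make the "writing into a surface using pointers" idea concrete without tying it to any one API, here is an API-free sketch of the lock/write/unlock pattern: the `Surface` type and its methods are hypothetical stand-ins for what DirectDraw's `Lock`, SDL's `SDL_LockSurface`, or D3D's `Map` provide.

```cpp
#include <cstdint>
#include <vector>

// DirectDraw-style surface access in miniature: "lock" to get a raw pixel
// pointer, write through it, then "unlock". In a real API, lock may map
// video memory and unlock flushes/uploads, which is why drivers want usage
// hints (write-only, changes every frame) to avoid stalls.
struct Surface {
    int width, height;
    std::vector<uint32_t> pixels;  // ARGB, row-major
    Surface(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
    uint32_t* lock() { return pixels.data(); }
    void unlock() {}  // no-op here; a real API uploads to VRAM
};

// Software rendering through the raw pointer: a horizontal blue gradient.
void fill_gradient(Surface& s) {
    uint32_t* p = s.lock();
    for (int y = 0; y < s.height; ++y)
        for (int x = 0; x < s.width; ++x)
            p[y * s.width + x] = 0xFF000000u | (x * 255u / (s.width - 1));
    s.unlock();
}
```

This is the "old school" model the answer contrasts with API-side drawing; the pattern is the same whether the buffer lives in system RAM or mapped video memory.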
Another thing is that modern 3D APIs with hardware support frequently don't operate on 8-bit paletted textures, and prefer ARGB images. 8-bit surfaces with a palette may be emulated when needed, and 2D low-level APIs (SDL, DirectDraw) provide them. You can also emulate an 8-bit texture in hardware using fragment/pixel shaders.
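The CPU-side half of that emulation is just a palette lookup per pixel: expand each 8-bit index into its 32-bit colour before uploading. A minimal sketch (the function name is illustrative; a shader-based version would instead sample the palette from a 256x1 texture):

```cpp
#include <cstdint>
#include <vector>

// Expand an 8-bit paletted image into the ARGB format hardware prefers.
// Each byte in `indices` selects an entry from `palette`.
std::vector<uint32_t> expand_palette(const std::vector<uint8_t>& indices,
                                     const std::vector<uint32_t>& palette) {
    std::vector<uint32_t> argb;
    argb.reserve(indices.size());
    for (uint8_t i : indices)
        argb.push_back(palette[i]);  // one lookup per pixel
    return argb;
}
```

Palette animation effects then amount to changing `palette` entries and re-expanding (or, on the GPU, updating the palette texture).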
Anyway, if you want the "old school" cross-platform way of using surfaces (i.e. writing data every frame using pointers, because you need a software renderer or something similar), SDL easily allows that. If you want higher-level, more flexible operations, Qt 4 and OpenGL are for you.
On Linux you could use OpenGL; it is not only used for 3D but also supports 2D rendering.
SDL is also pretty easy to use out of the box. It is also cross-platform and has a lot of plugins available to handle your needs. It interfaces nicely with OpenGL as well, should you need 3D support.
Direct2D on Windows.
EGLOutput/EGLDevice or GEM depending on the GPU driver for Linux.

High & Low Level graphics in J2ME

I want to know: what is the difference between high-level and low-level graphics, and what is the use of each?
Thanks,
Neel
High-level graphics are handled by the device itself (its implementation, to be more precise). Low-level graphics require you to draw everything manually, but you are in full control of what will be displayed to the user.
Talking about Java ME, almost everything in the javax.microedition.lcdui package is related to high-level graphics. Low-level graphics are handled by Canvas and Graphics, and also by the classes inside the javax.microedition.lcdui.game package.
The advantage of high-level graphics is that it's very easy to use, since you are not required to draw anything by hand, so it's perfect for creating a UI fast. The disadvantage is that you are limited to the elements in MIDP 2.0 (the only extension possibility is CustomItem) and you can't control how the elements are drawn. Low-level graphics allow you to draw anything on the screen, but it requires a lot of work to create a good UI manually.
Games typically use mostly low-level graphics; other applications use only high-level graphics, or both. By the way, there are some alternatives like LWUIT, a library that uses low-level graphics to create UIs.

windows ce - 2d graphics library

I have a Windows CE 5.0 device and it doesn't support any hardware acceleration.
I am looking for a good 2D graphics library to do the following things.
I prefer backend programming in the Compact .NET Framework.
Drawing fonts with antialiasing.
Drawing lines and simple vector objects with antialiasing.
I am not doing animation, so I don't care about frames-per-second performance.
I have looked into the following libraries, but nothing suits me:
OpenGL (Vincent 3D software rendering) - works, but the API is very low-level and complex.
OpenVG - no software implementation for Windows CE.
Cairo - the API is very neat, but there is no WinCE build.
Adobe Flash - installs as a browser plugin; no ActiveX support in WinCE.
Anti-aliased fonts in .NET CF 2.0+ can be done with Microsoft.WindowsCE.Forms.LogFont -- after creating your LogFont, you can use it with any WinForms widget's .Font property by converting it using System.Drawing.Font.FromLogFont().
...you might need to enable anti-aliasing in the registry for these to render properly; see this MSDN article for the right keys: http://msdn.microsoft.com/en-us/library/ms901096.aspx
There was a decent implementation of GDI+ for .NET CF 1.0 called "XrossOne Mobile GDI+". It's no longer supported, but you can get the source code here: http://www.isquaredsoftware.com/XrossOneGDIPlus.php -- run it through the import wizard in VS2008 to build it for later versions of CF. I liked this library for its alpha transparency support without hardware acceleration, rounded rectangles and gradient support.
Someone was advertising this library in some forum. It's for Windows Mobile, but you can check it out. I have no experience with it.
link
I have Google's Skia library compiling under Windows CE, although I haven't done much with it yet :) It wasn't too hard to get working. It does support an OpenGL ES backend.
There is also AGG (Anti-Grain Geometry), which is a heavy C++ library based on templates.
