Fast drawing/rendering 2d image texture - linux

I am struggling to fix flickering in my software while it runs and processes images.
First I'll introduce my problem and explain what I did; at the end I'll list my questions/needs.
My software processes images and shows them on screen immediately, acting like a scanner. The image pixels are built from photosensor data, processed using OpenCV, and drawn on screen as soon as possible to provide a smooth sense of movement while the image is generated.
In practice, I start with a solid-white 1000x808 blank image whose pixels are progressively replaced with the photosensor-built image while it is shown on screen, so that the image flows from left to right across the screen.
Generating each frame takes about 10 ~ 12 ms.
However, the frames do not flow as expected on screen; it flickers a lot. As I have no idea how to measure FPS in OpenGL, I suppose my OpenGL code (and I know nothing about OpenGL at all) is causing the flickering.
I am using the following code in my GLWidget. I did my best writing it, but I am quite sure it is not written for best OpenGL performance.
I have almost nothing in my GLWidget constructor, but I have played around with these functions, with no significant effect:
QGLFormat glFormat;
glFormat.setSwapInterval(0);
glFormat.setDoubleBuffer(true);
glFormat.setSampleBuffers(true);
glFormat.setDirectRendering(true);
glFormat.setSamples(4);
setFormat(glFormat);
My initializeGL is shown as follows:
void GLWidget::initializeGL()
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // clear-color components are clamped to [0, 1]
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, this->width(), this->height(), 0.0f, 0.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &this->texture); // one texture; passing 3 here writes past &texture
    glBindTexture(GL_TEXTURE_2D, this->texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // GL_RGBA internal format (the legacy "3" would drop the alpha channel)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, this->width(), this->height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glDisable(GL_TEXTURE_2D);
    this->original = cv::Mat(this->height(), this->width(), CV_8UC4);
    memset(this->original.data, 0xff, this->original.rows * this->original.step);
    this->repaint = true;
}
As you can see, at this point a blank image of size 1000x808 (the screen size) is shown on initialization (because no images have been built yet).
When the image starts being built (every 5 columns) I send it to the screen through setImage, and it is rendered with OpenGL in the overridden paintGL function:
void GLWidget::setImage(cv::Mat imageRGBA)
{
    this->original = imageRGBA;
    this->repaint = true;
    update();
}
void GLWidget::paintGL()
{
    if (this->repaint) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, this->texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, this->original.cols, this->original.rows, 0, GL_RGBA, GL_UNSIGNED_BYTE, this->original.data);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 1); glVertex2f(0, this->height());
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f(this->width(), 0);
        glTexCoord2f(1, 1); glVertex2f(this->width(), this->height());
        glEnd();
        glDisable(GL_TEXTURE_2D);
        this->repaint = false;
    }
}
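For reference, re-uploading the full 1000x808 texture with glTexImage2D on every repaint re-allocates the texture storage each time. A minimal sketch of updating only the region that changed with glTexSubImage2D instead (the firstDirtyCol and dirtyWidth names are placeholders, not variables from my code):

```cpp
// Sketch: allocate the texture once (in initializeGL), then per frame update
// only the columns that changed. Assumes a CV_8UC4 cv::Mat with tightly packed
// rows, and placeholder variables firstDirtyCol / dirtyWidth.
glBindTexture(GL_TEXTURE_2D, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, original.cols);   // stride of the full image, in pixels
glTexSubImage2D(GL_TEXTURE_2D, 0,
                firstDirtyCol, 0,                     // x/y offset into the texture
                dirtyWidth, original.rows,            // size of the dirty region
                GL_RGBA, GL_UNSIGNED_BYTE,
                original.ptr(0, firstDirtyCol));      // pointer to the first dirty pixel
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);               // restore the default
```

Since only ~5 new columns arrive per build step, this uploads a small strip instead of ~3 MB per frame.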
Considerations:
As you can see, I work only with 2D images of a fixed 1000x808 size, which shouldn't require much rendering power. I am currently running this on an Intel Core i3 Haswell machine (with integrated Intel HD Graphics), on Linux Ubuntu 14.04 x64 with the latest Intel drivers installed.
Expectation:
I expect to render/draw the images as soon as possible, without flickering. I have about 150 1000x808 images to render in a row (with ~12 ms between image builds).
I would love to have the images rendered smoothly at ~60 Hz on screen.
Questions:
Based on my code, what do I need to do to improve performance and stop the flickering?
Is it really necessary to move from the current (deprecated?) QGLWidget to QOpenGLWidget?
Since I have written OpenGL "immediate mode" code, should I consider moving to VBOs/PBOs/VAOs? And which one is best suited to my case?
Do I need any Linux tweaks to improve performance? If so, what kind?
Any kind of help is very welcome!

Related

Awesomium offscreen rendered to Leadwerks texture

I'm using the Leadwerks game engine and I'm trying to get Awesomium to render to a Leadwerks Texture, but I'm having no luck. Below is the code where I create a texture, allocate an unsigned char* buffer that I copy the Awesomium surface into via CopyTo(), and then set that buffer on the Leadwerks Texture. But the screen stays black, so clearly I'm misunderstanding something here. Any ideas on what I'm missing?
Texture* uiTex = Texture::Create(window->GetWidth(), window->GetHeight());
unsigned char* pixels = (unsigned char*)malloc(uiTex->GetMipmapSize(0));
// copy surface to LE texture and draw that texture to screen
BitmapSurface* surface = static_cast<BitmapSurface*>(view->surface());
surface->CopyTo(pixels, 1024, 32, true, false);
uiTex->SetPixels((char*)pixels);
context->DrawImage(uiTex, 0, 0);
I know this is old, but in case some unlucky bastard like myself stumbles upon this in the future, there might as well be a little guidance.
Begin by checking that everything has been set up correctly in Awesomium. Otherwise everything will seem fine, but the surface returned will simply be null.
Save the Awesomium surface to a file and inspect it to verify that it gets generated in the first place.
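For that quick check, assuming Awesomium 1.7's BitmapSurface API, something along these lines (the output file name is arbitrary):

```cpp
// Sketch: dump the current Awesomium surface to a PNG to verify it is non-null
// and actually contains the rendered page.
BitmapSurface* surface = static_cast<BitmapSurface*>(view->surface());
if (surface)
    surface->SaveToPNG(WSLit("awesomium_debug.png"), true); // true = preserve transparency
else
    printf("surface is null - check WebCore/WebView setup\n");
```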

Haxeflixel with Neko build, Screen size does not fit

I use HaxeFlixel and chose the Neko build target (Neko 64).
I coded for a 1280 x 720 resolution, but the executed screen does not fit.
Changing the resolution does not help either.
I just reinstalled my OS X Yosemite system; could that be the reason?
I cannot understand this situation.
var gameWidth:Int = 1280; // Width of the game in pixels (might be less / more in actual pixels depending on your zoom).
var gameHeight:Int = 720; // Height of the game in pixels (might be less / more in actual pixels depending on your zoom).
var initialState:Class<FlxState> = PlayState; // The FlxState the game starts with.
var zoom:Float = -1; // If -1, zoom is automatically calculated to fit the window dimensions.
var framerate:Int = 60; // How many frames per second the game should run at.
var skipSplash:Bool = false; // Whether to skip the flixel splash screen that appears in release mode.
var startFullscreen:Bool = false; // Whether to start the game in fullscreen on desktop targets
very default settings...
If I understand your question, the problem is that gameWidth and gameHeight set the game's logical screen size, not the size in pixels of the window that the game runs in.
Try changing the window settings in your project's Project.xml file to set the physical size of the window to match the game coordinates and it might start looking how you expect:
<!--These window settings apply to all targets-->
<window width="1280" height="720" fps="60" background="#000000" hardware="true" vsync="true" />
It's a lime 2.4.4 problem.
Download http://www.openfl.org/builds/lime/lime-2.4.0-6-g837aa96.zip
and run
haxelib local lime-2.4.0-6-g837aa96.zip
in the terminal.
See below:
http://community.openfl.org/t/running-openfl-error-on-mac/1409/14

Windows metro app UI messes in high resolution devices when SwapChainBackgroundPanel is replaced by SwapChainPanel

I am developing a WinRT Metro application. I was using SwapChainBackgroundPanel in the UI for showing video content.
I recently replaced it with SwapChainPanel because Microsoft recommends it.
But now the UI is stretched beyond the screen.
If I don't convert DIPs to pixels while resizing the swap chain buffers, the UI looks fine.
I am seeing this problem on a Surface Pro 3 and other high-resolution devices.
Beginning with SwapChainPanel, you have to take the CompositionScaleX and CompositionScaleY properties into consideration.
Just before m_swapChain->GetBuffer(), insert the following code:
// Set up an inverse scale on the swap chain
DXGI_MATRIX_3X2_F inverseScale = { 0 };
inverseScale._11 = 1.0f / m_swapChainPanel->CompositionScaleX;
inverseScale._22 = 1.0f / m_swapChainPanel->CompositionScaleY;
ComPtr<IDXGISwapChain2> spSwapChain2;
DX::ThrowIfFailed(
    m_swapChain.As<IDXGISwapChain2>(&spSwapChain2)
);
DX::ThrowIfFailed(
    spSwapChain2->SetMatrixTransform(&inverseScale)
);

Image resizing using C#

Images fresh out of digital cameras are often above 2-3 MB in size, making them difficult to transfer over email and by other means.
This requires the image to be resized (in terms of file size, not height or width), quite similar to the image-resizing functionality MS Paint offers.
I am not well educated on image file theory. I would appreciate it if someone could point me toward the following information sources:
Image theory (how the various image formats work: JPEG, PNG, TIFF etc.)?
How does an image lose its sharpness when resized? Is there some
Are there any free .NET (I am using 4.0) libraries available for doing this? If not, can I use a library via COM interoperability?
Many thanks,
Image resizing functionality is built right into the .NET Framework. There are a couple of different approaches:
GDI+
WIC
WPF
Here's a nice blog post covering the differences between them.
Here's an example with GDI+:
public void Resize(string imageFile, string outputFile, double scaleFactor)
{
    using (var srcImage = Image.FromFile(imageFile))
    {
        var newWidth = (int)(srcImage.Width * scaleFactor);
        var newHeight = (int)(srcImage.Height * scaleFactor);
        using (var newImage = new Bitmap(newWidth, newHeight))
        using (var graphics = Graphics.FromImage(newImage))
        {
            graphics.SmoothingMode = SmoothingMode.AntiAlias;
            graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
            graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
            graphics.DrawImage(srcImage, new Rectangle(0, 0, newWidth, newHeight));
            newImage.Save(outputFile);
        }
    }
}
I used the example provided by Darin Dimitrov, but the image was inflated and took up a lot of disk space (from 1.5 MB to 17 MB or so).
This is due to a small mistake in the last line of code, which saves the image as a bitmap (huge file size):
newImage.Save(outputFile)
The correct function should be:
newImage.Save(outputFile, ImageFormat.Jpeg);
ImageResizer works well:
http://imageresizing.net/

Draw 32-bits with alpha channel from resources using Direct2D

I have a legacy MFC application that displays some images (32-bit BMPs with alpha channel information) by pre-multiplying the images and using the CDC::AlphaBlend method.
I would like to introduce some new graphics using Direct2D, but I don't want to migrate all the images to PNG or other formats.
I managed to draw a BMP image from a file, but I'm facing problems getting the image from resources, and the displayed image does not use the alpha channel information either.
Could anybody help me out with this?
This is my code to create the bitmap:
hr = pIWICFactory->CreateDecoderFromFilename(
    L"D:\\image.bmp",
    NULL,
    GENERIC_READ,
    WICDecodeMetadataCacheOnDemand,
    &pDecoder);
if (SUCCEEDED(hr))
{
    // Create the initial frame.
    hr = pDecoder->GetFrame(0, &pSource);
}
if (SUCCEEDED(hr))
{
    // Create a Direct2D bitmap from the WIC bitmap.
    hr = pRenderTarget->CreateBitmapFromWicBitmap(
        pSource,
        NULL,
        ppBitmap);
}
This is the code to draw the bitmap:
m_pRenderTarget->DrawBitmap(
    m_pBitmap,
    D2D1::RectF(0.0f, 0.0f, size.width, size.height));
You'll need to make an IStream from the resource to pass to IWICImagingFactory::CreateDecoderFromStream.
Since resources are available in memory (assuming the module that contains them is loaded), the easiest way to do that is to create an IWICStream object using IWICImagingFactory::CreateStream and initialize it using IWICStream::InitializeFromMemory.
To get the size of the resource and a pointer to the first byte, use the FindResource, LoadResource, LockResource, and SizeofResource functions.
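Putting those steps together, a rough sketch with error handling omitted (IDB_MYBITMAP, hModule, and the RT_RCDATA resource type are assumptions about how the image is embedded; adjust to your .rc file):

```cpp
// Sketch: decode a BMP embedded as an RCDATA resource with WIC.
HRSRC   hrsrc  = FindResource(hModule, MAKEINTRESOURCE(IDB_MYBITMAP), RT_RCDATA);
HGLOBAL hglob  = LoadResource(hModule, hrsrc);
BYTE*   pData  = static_cast<BYTE*>(LockResource(hglob));
DWORD   cbData = SizeofResource(hModule, hrsrc);

// Wrap the in-memory resource in a WIC stream and decode from it.
IWICStream* pStream = NULL;
hr = pIWICFactory->CreateStream(&pStream);
if (SUCCEEDED(hr))
    hr = pStream->InitializeFromMemory(pData, cbData);
if (SUCCEEDED(hr))
    hr = pIWICFactory->CreateDecoderFromStream(pStream, NULL,
                                               WICDecodeMetadataCacheOnDemand,
                                               &pDecoder);
```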
If your bitmap's header uses BI_BITFIELDS to specify a format with alpha data, I believe WIC will respect that. I don't have any experience with Direct2D, so I can't say if you need to do anything further to make it use alpha data.
If you can't use BI_BITFIELDS (or if that doesn't work), you can write your own IWICBitmapSource implementation that wraps the frame's IWICBitmapSource. You should be able to pass most calls directly to the frame source, and supply your own GetPixelFormat method that returns the real format of your image data. Alternatively, you can create an IWICBitmap with the format you want, lock the bitmap, and copy in the pixel data from the frame source.
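One more commonly used option, not from the steps above: Direct2D render targets generally expect premultiplied BGRA, and WIC can convert the decoded frame into that format before the D2D bitmap is created. A sketch:

```cpp
// Sketch: convert the decoded frame to premultiplied 32bpp BGRA, the pixel
// format Direct2D expects, before creating the D2D bitmap from it.
IWICBitmapSource* pConverted = NULL;
hr = WICConvertBitmapSource(GUID_WICPixelFormat32bppPBGRA, pSource, &pConverted);
if (SUCCEEDED(hr))
{
    hr = pRenderTarget->CreateBitmapFromWicBitmap(pConverted, NULL, ppBitmap);
    pConverted->Release();
}
```

Whether this recovers the alpha data depends on the decoder reporting it in the first place, which is exactly the BI_BITFIELDS question discussed above.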