I want to include an existing OpenCV application in a GUI created with Qt. I've found some similar questions on Stack Overflow:
QT How to embed an application into QT widget
Run another executable in my Qt app
The problem is that I don't want to simply launch the OpenCV application the way I could with QProcess.
The OpenCV application has a "MouseListener", so if I click on the window, it should still call the OpenCV app's function. Furthermore, I would like to display the detected coordinates in labels of the Qt GUI, so there has to be some kind of interaction.
I've read about the createWindowContainer function (http://blog.qt.io/blog/2013/02/19/introducing-qwidgetcreatewindowcontainer/), but since I am not very familiar with Qt, I'm not sure whether it is the right choice or how to use it.
I am using Linux Mint 17.2, OpenCV version 3.1.0 and Qt version 4.8.6.
Thank you for your input.
I haven't actually solved the problem the way I originally intended, but it's working now. If someone has the same problem, maybe my solution can provide some ideas. If you want to display a video in Qt, or if you have problems with the OpenCV libraries, maybe I can help.
Following are a few code snippets. They are not heavily commented, but I hope the concept is clear:
First, I have a MainWindow with a label that I promoted to my CustomLabel type. The CustomLabel is my container for displaying the video and reacting to my mouse input.
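For orientation, here is a minimal sketch of what the class declaration could look like; the member names are taken from the snippets below, the rest is assumed:

class CustomLabel : public QLabel
{
    Q_OBJECT
public:
    explicit CustomLabel(QWidget* parent = 0);

protected:
    void paintEvent(QPaintEvent* e);       // draws the current frame
    void mouseMoveEvent(QMouseEvent* ev);  // feeds the tracking code

private slots:
    void onTick();                         // grabs and processes one frame per timer tick

private:
    QImage* currentImage;
    QTimer* myTimer;
    cv::VideoCapture* cap;
    // ... states, tracking points, scale factors, etc.
};

The constructor wires everything up: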
CustomLabel::CustomLabel(QWidget* parent) : QLabel(parent), currentImage(NULL),
    tickrate_ms(33), vid_fps(0), video_width(0), video_height(0), myTimer(NULL), cap(NULL)
{
    // init variables
    showPoints = true;
    calculatedCenter = cv::Point(0,0);
    oldCenter = cv::Point(0,0);
    currentState = STATE_NO_STREAM;
    NOF_corners = 30; // default init value
    termcrit = cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 30, 0.01);
    // enable mouse tracking
    this->setMouseTracking(true);
    // connect signals with slots
    QObject::connect(getMainWindow(), SIGNAL(sendFileOpen()), this, SLOT(onOpenClick()));
    QObject::connect(getMainWindow(), SIGNAL(sendWebcamOpen()), this, SLOT(onWebcamBtnOpen()));
    QObject::connect(getMainWindow(), SIGNAL(closeVideoStreamSignal()), this, SLOT(onCloseVideoStream()));
}
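Mouse input arrives through the standard QWidget event handlers; that is the "MouseListener" part from the question. Mine look roughly like this (a sketch; the signal name is made up for illustration and would be caught by the MainWindow to update its labels):

void CustomLabel::mouseMoveEvent(QMouseEvent* ev)
{
    // remembered here, used by the drawing code in onTick()
    currentMousePos = ev->pos();
}

void CustomLabel::mousePressEvent(QMouseEvent* ev)
{
    // label coordinates; onTick() scales them to video coordinates
    focusPt = cv::Point2f(ev->pos().x(), ev->pos().y());
    emit coordinatesChanged(ev->pos()); // hypothetical signal for the GUI labels
}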
You have to override the paintEvent method:
void CustomLabel::paintEvent(QPaintEvent *e){
    QPainter painter(this);
    // When no image is loaded, paint the window black
    if (!currentImage){
        painter.fillRect(QRectF(QPoint(0, 0), QSize(width(), height())), Qt::black);
        QWidget::paintEvent(e);
        return;
    }
    // Draw a frame from the video
    drawVideoFrame(painter);
    QWidget::paintEvent(e);
}
The method called in paintEvent:
void CustomLabel::drawVideoFrame(QPainter &painter){
    painter.drawImage(QRectF(QPoint(0, 0), QSize(width(), height())), *currentImage,
                      QRectF(QPoint(0, 0), currentImage->size()));
}
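The timer that drives everything is set up along these lines (a sketch; in my code this happens in one of the open-slots once the stream is ready):

myTimer = new QTimer(this);
connect(myTimer, SIGNAL(timeout()), this, SLOT(onTick()));
myTimer->start(tickrate_ms); // 33 ms, i.e. roughly 30 fps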
And on every tick of my timer, I call onTick():
void CustomLabel::onTick() {
    /* This method is called every couple of milliseconds.
     * It reads from OpenCV's capture interface and saves a frame as QImage.
     * The state machine is implemented here; every tick is handled.
     */
    if(cap->isOpened()){
        switch(currentState) {
        case STATE_IDLE:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_IDLE";
            }
            break;
        case STATE_DRAWING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_DRAWING";
            }
            currentFrame.copyTo(currentCopy);
            // draw the selection circle around focusPt, scaled from label to video coordinates
            cv::circle(currentCopy, cv::Point(focusPt.x*xScale, focusPt.y*yScale),
                       sqrt((focusPt.x - currentMousePos.x())*(focusPt.x - currentMousePos.x())*xScale*xScale
                            + (focusPt.y - currentMousePos.y())*(focusPt.y - currentMousePos.y())*yScale*yScale),
                       cv::Scalar(0, 0, 255), 2, 8, 0);
            //qDebug() << "focus pt x " << focusPt.x << "y " << focusPt.y;
            break;
        case STATE_TRACKING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_TRACKING";
            }
            cv::cvtColor(currentFrame, currentFrame, CV_BGR2GRAY, 0);
            if(initGrayFrame){
                currentFrame.copyTo(previousGrayFrame); // seed the previous frame on the first tick
                initGrayFrame = false;
                return;
            }
            cv::calcOpticalFlowPyrLK(previousGrayFrame, currentFrame, previousPts, currentPts, featuresFound, err,
                                     cv::Size(21, 21), 3, termcrit, 0, 1e-4);
            AcquireNewPoints();
            currentCopy = CalculateCenter(currentFrame, currentPts);
            if(showPoints){
                DrawPoints(currentCopy, currentPts);
            }
            break;
        case STATE_LOST_POLE:
            currentState = STATE_IDLE;
            initGrayFrame = true;
            cv::cvtColor(currentFrame, currentFrame, CV_GRAY2BGR);
            break;
        default:
            break;
        }
        // If not tracking, draw currentFrame.
        // OpenCV uses BGR order, convert it to RGB.
        if(currentState == STATE_IDLE) {
            cv::cvtColor(currentFrame, currentFrame, CV_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentFrame.data,
                   currentImage->width() * currentImage->height() * currentFrame.channels());
        } else {
            cv::cvtColor(currentCopy, currentCopy, CV_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentCopy.data,
                   currentImage->width() * currentImage->height() * currentCopy.channels());
            previousGrayFrame = currentFrame;
            previousPts = currentPts;
        }
    }
    // Trigger paint event to redraw the window
    update();
}
Don't mind the xScale and yScale factors; they are only needed for the OpenCV drawing functions, because the CustomLabel's size is not the same as the video resolution.
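They are simply the ratio between the two, computed once the stream is opened; roughly (a sketch):

xScale = (double)video_width  / (double)width();  // label x -> video x
yScale = (double)video_height / (double)height(); // label y -> video y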
OpenCV is used just for image processing. If you know how to convert a cv::Mat to the required format, you can combine OpenCV with any GUI toolkit. For Qt, you can convert the cv::Mat to a QImage and then use it anywhere in the Qt SDK. This example shows OpenCV and Qt integration, including threading and webcam access. The webcam is accessed using OpenCV, and the cv::Mat received is converted to a QImage and rendered onto a QLabel. https://github.com/nickdademo/qt-opencv-multithreaded
The code contains a MatToQImage() function which shows the conversion from cv::Mat to QImage. The integration is pretty simple, as everything is in C++.
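The essence of such a conversion looks roughly like this (a sketch under the assumption of 8-bit BGR or grayscale input, not the repository's exact code):

QImage MatToQImage(const cv::Mat& mat)
{
    if (mat.type() == CV_8UC3) {
        // OpenCV stores pixels as BGR, QImage expects RGB
        QImage img(mat.data, mat.cols, mat.rows, (int)mat.step, QImage::Format_RGB888);
        return img.rgbSwapped(); // swaps the channels and makes a deep copy
    }
    if (mat.type() == CV_8UC1) {
        QImage img(mat.data, mat.cols, mat.rows, (int)mat.step, QImage::Format_Indexed8);
        QVector<QRgb> grayTable; // grayscale needs a color table
        for (int i = 0; i < 256; ++i) grayTable.push_back(qRgb(i, i, i));
        img.setColorTable(grayTable);
        return img.copy(); // detach from the Mat's memory
    }
    return QImage(); // unhandled format
}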
Related
I googled and found out that there is an option to draw on a separate thread using QML:
http://doc.qt.io/qt-5/qtquick-scenegraph-openglunderqml-example.html
But it's not what I need. How can I render on a separate thread using common Qt widgets, without QML?
If your widget inherits from QOpenGLWidget, you can just call
this->makeCurrent();
But I personally prefer a more efficient and robust way: encapsulating the OpenGL context with a QWindow and configuring all the OpenGL-related settings there.
Here is an example:
bool MyOpenGLWindow::Create()
{
    this->requestActivate();
    if(!glContext)
    {
        glContext = new QOpenGLContext(this);
        QSurfaceFormat fmt = requestedFormat();
        int maj = fmt.majorVersion();
        int min = fmt.minorVersion();
        glContext->setFormat(requestedFormat());
        glContext->create();
        if(glContext->isValid() == false)
        {
            QString str;
            str.sprintf("Failed to create GL:%i.%i context", maj, min);
            return false;
        }
    }
    glContext->makeCurrent(this);
    // after this line you can work with OpenGL
    initializeOpenGLFunctions();
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    // etc...
    return true;
}
Now remember: every time you spawn an instance of such a class on a thread (or move an existing one there), you must make the context current on that thread.
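A sketch of what that hand-over can look like, assuming the window and glContext members from above (QOpenGLContext is a QObject, so it can be moved between threads):

// on the GUI thread: release the context and hand it to the worker
glContext->doneCurrent();
glContext->moveToThread(workerThread);

// on the worker thread: take it over before issuing any GL calls
glContext->makeCurrent(window);
// ... render ...
glContext->swapBuffers(window);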
I want to efficiently use live video that I am decoding with Media Foundation.
Originally, I was running the render functions synchronously after decoding each frame. The incoming frame rate is around 25-30 fps, but I would like to render the graphics (game) content at 60 fps.
If I do it asynchronously, I either get corrupted output / black screens / both, or a very low frame rate due to aggressive locking. Since the GPU operations are async, I haven't been able to find a reasonable critical section. How is this normally done? I can use one of my temporary surfaces (source, dest, or g_pDecodedTexture) as a synchronization point and surround writes to it with a CRITICAL_SECTION, but I don't know where the critical section should go on the render (reading) thread. If I surround the whole render function, my frame rate is very low, and if I don't, I get incorrect output. Maybe there is a more appropriate method for synchronization.
At render setup time
hr = g_d3dDevice->CreateShaderResourceView(g_pDecodedTexture, &shaderResourceViewDesc, &g_pTextureRV);
In the decode thread
void Decode()
{
    MFT_OUTPUT_DATA_BUFFER output = { 0 };
    //...
    encoder->ProcessOutput(0, 1, &output, &status);
    //
    CComPtr<IMFMediaBuffer> spMediaBuffer;
    CComPtr<IMFDXGIBuffer> spDXGIBuffer;
    CComPtr<IDXGIResource> spDecodedTexture;
    output.pSample->GetBufferByIndex(0, &spMediaBuffer);
    spMediaBuffer->QueryInterface(IID_PPV_ARGS(&spDXGIBuffer));
    spDXGIBuffer->GetResource(IID_PPV_ARGS(&spDecodedTexture));
    //....
    CComPtr<ID3D11Texture2D> source;
    spDXGIBuffer->QueryInterface<ID3D11Texture2D>(&source);
    //
    CComPtr<ID3D11Resource> dest;
    swapChain->GetBuffer(0, __uuidof(ID3D11Resource), (void**)&dest);
    deviceContext->CopyResource(dest, source);
    deviceContext->CopyResource(g_pDecodedTexture, source);
}
In the render thread
void Render()
{
    //...
    deviceContext->PSSetShaderResources(0, 1, &g_pTextureRV);
    //..
    m_deviceContext->VSSetShaderResources(0, 1, &g_pTextureRV);
    //..
    immediateContext->DrawIndexed(...);
    //..
    immediateContext->DrawIndexed(...);
    //..
    immediateContext->DrawIndexed(...);
    //..
    immediateContext->DrawIndexed(...);
    //
    Present();
}
You can try this: insert the Frame Rate Converter DSP after the decoder. Make sure your input format is compatible with the DSP, and set the frame rate to 60 fps.
Doing this, I think you can keep the synchronous approach.
If you want to display at 60 fps manually, we need more code to see where the problem comes from.
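Setting that DSP up could look roughly like this (a sketch; spDecoderOutputType is assumed to be the media type your decoder outputs, and it must be a format the DSP accepts):

#include <atlbase.h>     // CComPtr (as in the question)
#include <mfapi.h>       // MFCreateMediaType, MFSetAttributeRatio
#include <mftransform.h> // IMFTransform
#include <wmcodecdsp.h>  // CLSID_CFrameRateConvertDmo (link wmcodecdspuuid.lib)

CComPtr<IMFTransform> spFrc;
HRESULT hr = spFrc.CoCreateInstance(CLSID_CFrameRateConvertDmo);

// input: whatever the decoder currently delivers
hr = spFrc->SetInputType(0, spDecoderOutputType, 0);

// output: the same format, but with the frame rate forced to 60/1
CComPtr<IMFMediaType> spOutType;
MFCreateMediaType(&spOutType);
spDecoderOutputType->CopyAllItems(spOutType);
MFSetAttributeRatio(spOutType, MF_MT_FRAME_RATE, 60, 1);
hr = spFrc->SetOutputType(0, spOutType, 0);

// samples then flow decoder -> spFrc (ProcessInput/ProcessOutput) -> your render code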
I tried to use both HaxeUI and HaxeFlixel, but what I obtain is HaxeUI's interface over a white background, covering everything underneath. Moreover, even if it were possible to somehow make HaxeUI and HaxeFlixel work together, it's not clear how to change HaxeUI's interface when the state changes in HaxeFlixel. Here is the code I used:
private function setupGame():Void {
    Toolkit.theme = new GradientTheme();
    Toolkit.init();

    var stageWidth:Int = Lib.current.stage.stageWidth;
    var stageHeight:Int = Lib.current.stage.stageHeight;

    if (zoom == -1) {
        var ratioX:Float = stageWidth / gameWidth;
        var ratioY:Float = stageHeight / gameHeight;
        zoom = Math.min(ratioX, ratioY);
        gameWidth = Math.ceil(stageWidth / zoom);
        gameHeight = Math.ceil(stageHeight / zoom);
    }
    trace('stage: ${stageWidth}x${stageHeight}, game: ${gameWidth}x${gameHeight}, zoom=$zoom');

    addChild(new FlxGame(gameWidth, gameHeight, initialState, zoom, framerate, framerate, skipSplash, startFullscreen));

    Toolkit.openFullscreen(function(root:Root) {
        var view:IDisplayObject = Toolkit.processXmlResource("assets/xml/haxeui-resource.xml");
        root.addChild(view);
    });
}
I can guess that both HaxeUI and HaxeFlixel probably have their own main loop and that their event handling might not be compatible, but just in case: can someone give a more definitive answer?
Edit:
Actually, it's much better when using openPopup:
Toolkit.openPopup({ x:20, y:150, width:100, height:100 }, function(root:Root) {
    var view:IDisplayObject = Toolkit.processXmlResource("assets/xml/haxeui-naming.xml");
    root.addChild(view);
});
It's possible to interact with the rest of the screen (managed by HaxeFlixel), but the mouse pointer in the HaxeFlixel part of the screen stays underneath the HaxeUI user-interface elements.
When using Flixel and HaxeUI together, it's almost like running two applications at once. However, they both rely on OpenFL as a back-end, and each attaches itself to its display tree.
One technique I'm experimenting with right now is to open a Flixel sub-state and, within the sub-state, call Toolkit.openFullscreen(). From inside of this, you can set the alpha of the root's background to 0, which lets you see through it onto the underlying bitmap that Flixel uses to render.
Here is a minimal example of how you might "embed" an editor interface inside a Flixel sub state:
import haxe.ui.toolkit.core.Toolkit;
import haxe.ui.toolkit.core.RootManager;
import haxe.ui.toolkit.themes.DefaultTheme;
import flixel.FlxG;
import flixel.FlxSubState;

// This would typically be a Haxe UI XMLController
import app.MainEditor;

class HaxeUIState extends FlxSubState
{
    override public function create()
    {
        super.create();

        // Flixel uses a sprite-based cursor by default,
        // so you need to enable the system cursor to be
        // able to see what you're clicking.
        FlxG.mouse.useSystemCursor = true;

        Toolkit.theme = new DefaultTheme();
        Toolkit.init();
        Toolkit.openFullscreen(function (root) {
            var editor = new MainEditor();
            // Allows you to see what's going on in the sub state
            root.style.backgroundAlpha = 0;
            root.addChild(editor.view);
        });
    }

    override public function destroy()
    {
        super.destroy();

        // Switch back to Flixel's sprite-based cursor
        FlxG.mouse.useSystemCursor = false;

        // Not sure if this is the "correct" way to close the UI,
        // but it works for my purposes. Alternatively you could
        // try opening the editor in advance, but hiding it
        // until the sub-state opens.
        RootManager.instance.destroyAllRoots();
    }

    // As far as I can tell, the update function continues to get
    // called even while Haxe UI is open.
    override public function update() {
        super.update();

        if (FlxG.keys.justPressed.ESCAPE) {
            // This will implicitly trigger destroy().
            close();
        }
    }
}
In this way, you can associate different Flixel states with different Haxe UI controllers. (NOTE: They don't strictly have to be sub-states, that's just what worked best in my case.)
When you open a fullscreen or popup with HaxeUI, the program flow is blocked (your update() and draw() functions won't be called). You should probably have a look at flixel-ui instead.
In my experience, HaxeFlixel and HaxeUI work well together, but they are totally independent projects; as such, any coordination between Flixel states and the displayed UI must be added by the coder.
I don't recall having the white-background problem you mention. It shouldn't happen unless the HaxeUI root sprite has a solid background; in that case, it should be raised with the HaxeUI project maintainer.
I'm using PaintCode to convert a set of SVGs into Swift code. It looks like it just converts the SVG paths into UIBezierPath()s, which is great.
To display the generated code I'm doing the following:
class FirstImageView: UIView {
    init(name: String) { // Irrelevant custom init
        super.init(frame: CGRectMake(15, 15, 40, 40)) // 40x40 view
        self.opaque = false // Transparent background
    }

    override func drawRect(rect: CGRect) {
        ImagesCollection.firstImage() // Fill the view with this image
    }

    required init(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }
}
Where ImagesCollection.firstImage() is referencing:
public class ImagesCollection: NSObject {
    public class func firstImage() {
        let color4 = UIColor(red: 0.595, green: 0.080, blue: 0.125, alpha: 1.000)
        let fill295Path = UIBezierPath()
        fill295Path.moveToPoint(CGPointMake(27.5, 2.73))
        // Rest of generated graphics code
    }
}
Which works great. I generated the graphics at 40x40 and set the frame to 40x40, and that works fine. What I'm wondering now is how I can display the same graphic at a smaller (or larger) size. Since they're Bézier paths, they should scale fine, right? Setting my view's frame to CGRectMake(15, 15, 20, 20) (for a desired 20x20 image) just seems to clip the graphic.
How can I ensure that whatever graphic is drawn into my view is sized to the view's frame?
Thanks
You can use a CGAffineTransform to scale either your custom view or the bezier path itself.
Scale the view:
myFirstImageView.transform = CGAffineTransformMakeScale(0.5, 0.5)
Scale the bezier path:
// ...
// lots of generated graphics code
fill295Path.applyTransform(CGAffineTransformMakeScale(0.5, 0.5))
fill295Path.stroke() // or fill, etc.
Since you generated the code with PaintCode, you could draw their Frame object around your SVG/Bézier. It would generate the Bézier with a dynamic size. In your example, you would get ImagesCollection.firstImage(#frame: CGRect) instead of ImagesCollection.firstImage(). You just pass the rectangle from drawRect to it, and you are done.
Slightly off-topic, but per Nate's request, here's how I did it with SVGKit:
First, you'll want to incorporate SVGKit into your project by following these instructions.
With the markup in your SVG file, parse the SVG data using the SVGKParser:
let svgData: String = "<svg>...</svg>"
let svgParse: SVGKParser = SVGKParser(source: SVGKSource(inputStream: NSInputStream(data: svgData.dataUsingEncoding(NSUTF8StringEncoding, allowLossyConversion: false)!)))
svgParse.addDefaultSVGParserExtensions()
let svgResult = svgParse.parseSynchronously() // assumed: SVGKParser's synchronous parse, producing the result used below
From there, you'll load it into an image using the SVGKImage type:
let svgImage: SVGKImage = SVGKImage(parsedSVG: svgResult, fromSource: nil)
And lastly, you can use it as the image in an SVGKFastImageView:
SVGKFastImageView(SVGKImage: svgImage)
I have an Ubuntu machine and a command-line application, written on OS X, which renders something offscreen using FBOs. This is part of the code:
this->systemProvider->setupContext(); // be careful with this one; add checks to identify whether a context is set up or not
this->systemProvider->useContext();
glewExperimental = GL_TRUE;
glewInit();

GLuint framebuffer, renderbuffer, depthRenderBuffer;
GLuint imageWidth = _viewPortWidth,
       imageHeight = _viewPortHeight;

//Set up a FBO with one renderbuffer attachment
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, imageWidth, imageHeight); // sized internal format; core profiles reject unsized GL_RGB here
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer);

//Now bind a depth buffer to the FBO
glGenRenderbuffers(1, &depthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, _viewPortWidth, _viewPortHeight); // likewise, use a sized depth format
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);
The "system provider" is a C++ wrapper around OS X's NSOpenGLContext, which is used just to create a rendering context and making it current, without associating it with a window. All the rendering happens in the FBOs.
I am trying to use the same approach for Linux (Ubuntu) using GLX, but I am having a hard time doing it, since I see that GLX requires a pixel buffer.
I am trying to follow this tutorial:
http://renderingpipeline.com/2012/05/windowless-opengl/
At the end it uses a pixel buffer to make the context current, which I've heard is deprecated and should be abandoned in favour of framebuffer objects. Is that right? (I may be wrong about this.)
Does anyone have a better approach, or idea?
I don't know if it's the best solution, but it surely works for me.
Binding the functions to local variables that we can use:
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);
typedef Bool (*glXMakeContextCurrentARBProc)(Display*, GLXDrawable, GLXDrawable, GLXContext);
static glXCreateContextAttribsARBProc glXCreateContextAttribsARB = NULL;
static glXMakeContextCurrentARBProc glXMakeContextCurrentARB = NULL;
Our objects as class properties:
Display *display;
GLXPbuffer pbuffer;
GLXContext openGLContext;
Setting up the context:
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc) glXGetProcAddressARB( (const GLubyte *) "glXCreateContextAttribsARB" );
glXMakeContextCurrentARB = (glXMakeContextCurrentARBProc) glXGetProcAddressARB( (const GLubyte *) "glXMakeContextCurrent");
display = XOpenDisplay(NULL);
if (display == NULL){
    std::cout << "error getting the X display";
}

static int visualAttribs[] = {None};
int numberOfFrameBufferConfigurations;
GLXFBConfig *fbConfigs = glXChooseFBConfig(display, DefaultScreen(display), visualAttribs, &numberOfFrameBufferConfigurations);

int context_attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
};

std::cout << "initialising context...";
this->openGLContext = glXCreateContextAttribsARB(display, fbConfigs[0], 0, True, context_attribs);

int pBufferAttribs[] = {
    GLX_PBUFFER_WIDTH,  (int)this->initialWidth,
    GLX_PBUFFER_HEIGHT, (int)this->initialHeight,
    None
};

this->pbuffer = glXCreatePbuffer(display, fbConfigs[0], pBufferAttribs);
XFree(fbConfigs);
XSync(display, False);
Using the context:
if(!glXMakeContextCurrentARB(display, pbuffer, pbuffer, openGLContext)){
    std::cout << "error making the context current\n";
}else{
    std::cout << "made a context the current context\n";
}
After that, one can use FBOs normally, as one would in any other situation. To this day, my question is actually unanswered (whether there is a better alternative), so I am just offering the solution that worked for me. It seems to me that GLX does not use the notion of pixel buffers the same way OpenGL does, hence my confusion. The preferred way to render offscreen is FBOs, but for an OpenGL context to be created on Linux, a pixel buffer (the GLX kind) must be created. After that, using FBOs with the code I provided in the question works as expected, the same way it does on OS X.
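For completeness, once the pbuffer context is current, the FBO path from the question can be verified before drawing, along these lines (a sketch, assuming the imageWidth/imageHeight variables and the bound FBO from the question; needs <vector>):

// with the FBO from the question still bound:
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE){
    std::cout << "FBO incomplete, status 0x" << std::hex << status << "\n";
}
glViewport(0, 0, imageWidth, imageHeight);
// ... render ...

// read the result back to the CPU
std::vector<unsigned char> pixels(imageWidth * imageHeight * 3);
glReadPixels(0, 0, imageWidth, imageHeight, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());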