How to render OpenGL in a separate thread in Qt, using QOpenGLWidget?

I googled and found that there is an option to render on a separate thread using QML:
http://doc.qt.io/qt-5/qtquick-scenegraph-openglunderqml-example.html
But that is not what I need. How can I render on a separate thread using plain Qt widgets, without QML?

If your widget inherits from QOpenGLWidget you can just call
this->makeCurrent();
But I personally prefer a more efficient and robust way to encapsulate the OpenGL context: use a QWindow and configure all the OpenGL-related settings there.
Here is an example:
bool MyOpenGLWindow::Create()
{
    this->requestActivate();
    if (!glContext)
    {
        glContext = new QOpenGLContext(this);
        QSurfaceFormat fmt = requestedFormat();
        glContext->setFormat(fmt);
        glContext->create();
        if (!glContext->isValid())
        {
            qWarning("Failed to create GL %d.%d context",
                     fmt.majorVersion(), fmt.minorVersion());
            return false;
        }
    }
    glContext->makeCurrent(this);
    // after this line you can work with OpenGL
    initializeOpenGLFunctions();
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    // etc...
    return true;
}
Now remember: every time you spawn an instance of such a class on a thread (or move an existing one to a thread), you must make the context current on that thread.
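For the thread handoff itself, here is a minimal sketch under some assumptions: Renderer is a hypothetical QObject whose render() slot calls glContext->makeCurrent(window) before issuing GL calls. Qt only allows makeCurrent() on the thread the context lives on, so the context has to move along with the worker:
QThread *renderThread = new QThread;
renderer->moveToThread(renderThread);   // renderer is the hypothetical Renderer QObject
glContext->moveToThread(renderThread);  // a context can only be made current on its own thread
QObject::connect(renderThread, &QThread::started,
                 renderer, &Renderer::render);
renderThread->start();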

Related

Should the variable value be checked before assigning?

I know this might sound like a silly question, but I'm curious: should I check a variable's value before assigning to it?
For example, if I'm flipping my skin (a Node2D composed of a sprite and a raycast) based on direction (a Vector2):
func _process(delta):
    ...
    if (direction.x > 0):
        skin.scale.x = 1
    elif (direction.x < 0):
        skin.scale.x = -1

    # OR

    if (direction.x > 0):
        if (skin.scale.x != 1):
            skin.scale.x = 1
    elif (direction.x < 0):
        if (skin.scale.x != -1):
            skin.scale.x = -1
Would the skin scale be altered every _process, consuming more CPU, or will the assignment be ignored when the value is the same?
First of all, given that this is GDScript, the number of lines executed will itself be a performance factor.
We will look at the C++ side…
But before that… Be aware that GDScript does some trickery with properties.
When you say skin.scale, Godot will call get_scale on the skin object, which returns a Vector2. And Vector2 is a value type: that Vector2 is not the scale the object has, but a copy, a snapshot of the value. So in virtually any other language skin.scale.x = 1 would modify the copy and have no effect on the scale of the object, meaning you would have to do this:
skin.scale = Vector2(skin.scale.x + 1, skin.scale.y)
Or this:
var skin_scale = skin.scale
skin_scale.x += 1
skin.scale = skin_scale
Which I bet people using C# would find familiar.
But you don't need to do that in GDScript. Godot will call set_scale, which is what most people expect. It is a feature!
So, you set scale, and Godot will call set_scale:
void Node2D::set_scale(const Size2 &p_scale) {
    if (_xform_dirty) {
        ((Node2D *)this)->_update_xform_values();
    }
    _scale = p_scale;
    // Avoid having 0 scale values, can lead to errors in physics and rendering.
    if (Math::is_zero_approx(_scale.x)) {
        _scale.x = CMP_EPSILON;
    }
    if (Math::is_zero_approx(_scale.y)) {
        _scale.y = CMP_EPSILON;
    }
    _update_transform();
    _change_notify("scale");
}
The method _change_notify only does something in the editor; it is the Godot 3.x instrumentation for undo/redo et al.
And set_scale will call _update_transform:
void Node2D::_update_transform() {
    _mat.set_rotation_and_scale(angle, _scale);
    _mat.elements[2] = pos;
    VisualServer::get_singleton()->canvas_item_set_transform(get_canvas_item(), _mat);
    if (!is_inside_tree()) {
        return;
    }
    _notify_transform();
}
Which, as you can see, will update the Transform2D of the Node2D (_mat). Then it is off to the VisualServer.
And then to _notify_transform, which is what propagates the change through the scene tree. It is also what calls notification(NOTIFICATION_LOCAL_TRANSFORM_CHANGED) if you have enabled it with set_notify_local_transform. It looks like this (this is from "canvas_item.h"):
_FORCE_INLINE_ void _notify_transform() {
    if (!is_inside_tree()) {
        return;
    }
    _notify_transform(this);
    if (!block_transform_notify && notify_local_transform) {
        notification(NOTIFICATION_LOCAL_TRANSFORM_CHANGED);
    }
}
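As an aside, here is a minimal GDScript sketch of opting into that notification from script (Godot 3.x; assumes the script sits on a Node2D):
extends Node2D

func _ready():
    set_notify_local_transform(true)  # opt in to the notification

func _notification(what):
    if what == NOTIFICATION_LOCAL_TRANSFORM_CHANGED:
        print("local transform changed")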
Back in the engine code, you can see that the inline method delegates to another _notify_transform, which looks like this (this is from "canvas_item.cpp"):
void CanvasItem::_notify_transform(CanvasItem *p_node) {
    /* This check exists to avoid re-propagating the transform
     * notification down the tree on dirty nodes. It provides
     * optimization by avoiding redundancy (nodes are dirty, will get the
     * notification anyway).
     */
    if (/*p_node->xform_change.in_list() &&*/ p_node->global_invalid) {
        return; //nothing to do
    }
    p_node->global_invalid = true;
    if (p_node->notify_transform && !p_node->xform_change.in_list()) {
        if (!p_node->block_transform_notify) {
            if (p_node->is_inside_tree()) {
                get_tree()->xform_change_list.add(&p_node->xform_change);
            }
        }
    }
    for (CanvasItem *ci : p_node->children_items) {
        if (ci->top_level) {
            continue;
        }
        _notify_transform(ci);
    }
}
So, no. There is no check to ignore the change if the value is the same.
However, it is worth noting that Godot invalidates the global transform instead of computing it right away (global_invalid). This does not make multiple updates to the transform in the same frame free, but it makes them cheaper than they would otherwise be.
I also remind you that looking at the source code is no replacement for using a profiler.
Should you check? Perhaps. If there are many children that would need to be updated, the extra comparison is likely cheap by comparison. If in doubt: measure with a profiler.
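If you do want numbers, a rough micro-benchmark sketch (Godot 3.x; skin and the iteration count are illustrative, and results will vary by machine):
func _ready():
    var start = OS.get_ticks_usec()
    for i in 100000:
        skin.scale = Vector2(1, 1)  # unguarded: calls set_scale every time
    print("unguarded: ", OS.get_ticks_usec() - start, " usec")

    start = OS.get_ticks_usec()
    for i in 100000:
        if skin.scale.x != 1:  # guarded: skips the setter when unchanged
            skin.scale = Vector2(1, 1)
    print("guarded: ", OS.get_ticks_usec() - start, " usec")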

Swift objc_sync_enter/exit Linux alternatives

For concurrent editing of objects from multiple threads in Swift I use:
import Foundation

func lockForEdit(object: NSObject, closure: () -> Void) {
    objc_sync_enter(object)
    closure()
    objc_sync_exit(object)
}

// In each thread
lockForEdit(object: threadsDictionarie as NSObject) {
    threadsDictionarie.append(dict)
}
But on Linux (Ubuntu 14.04) with Swift 3.0.1 I get:
use of unresolved identifier 'objc_sync_enter'
use of unresolved identifier 'objc_sync_exit'
What should I use in Swift on Linux for concurrent editing of objects?
There is no per-object locking in Swift itself. You could use NSLock or pthread locks as a replacement, but you need to maintain the lock-to-object mapping on your own.
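A rough sketch of what that mapping could look like (illustrative only, not a drop-in replacement; note the table itself also needs guarding, hence tableLock):
import Foundation

let tableLock = NSLock()
var locks = [ObjectIdentifier: NSLock]()

func lockForEdit(object: AnyObject, closure: () -> Void) {
    // Look up (or lazily create) the lock for this object.
    tableLock.lock()
    let key = ObjectIdentifier(object)
    if locks[key] == nil { locks[key] = NSLock() }
    let lock = locks[key]!
    tableLock.unlock()

    lock.lock()
    closure()
    lock.unlock()
}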
Also, you may want to use a serial DispatchQueue instead of a lock in the first place (see: About Dispatch Queues). But this obviously depends on what you are doing.
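For example, a minimal sketch of serializing all mutations through one serial queue (the queue label and element type are illustrative):
import Dispatch

let queue = DispatchQueue(label: "com.example.threadsDictionarie")
var threadsDictionarie = [[String: Any]]()

// From any thread: only this queue ever touches the array.
queue.async {
    threadsDictionarie.append(["key": "value"])
}
// Synchronous read:
let count = queue.sync { threadsDictionarie.count }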
A way to do this is to add something like a ThreadsafeDictionary that wraps the real dictionary inside it. Like so:
class ThreadsafeDictionary<Key: Hashable, Value> {
    private let lock = NSLock()
    private var values = [Key: Value]()

    subscript(key: Key) -> Value? {
        get { lock.lock(); defer { lock.unlock() }; return values[key] }
        set { lock.lock(); defer { lock.unlock() }; values[key] = newValue }
    }
    // ... all the other methods and protocols you need ...
}
Presumably you can find an implementation of this on GitHub.

watchOS and when to put methods on the main thread

It seems that for watchOS extensions, code more often needs to be explicitly placed on the main thread, as opposed to explicitly sent to the background or another queue as in iOS. What activities need to be explicitly put on the main thread in watchOS extensions? What I have seen is that this is done when updating the user interface or when changing the state of a workout with HealthKit.
For example, before I update the label values on my watchOS interface controller I dispatch to the main queue:
func locationUpdate(locationDict: [String: AnyObject]) {
    dispatch_async(dispatch_get_main_queue()) {
        if let first = locationDict["firstValue"] as? String {
            self.firstValue.setText(first)
        }
        if let second = locationDict["second"] as? String {
            self.secondValue.setText(second)
        }
    }
}
Is this required? I would not do this in iOS. Are there other common cases? Is there a good reference on main-queue considerations specific to watchOS?
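(For reference, the same dispatch in Swift 3+ syntax, with the same assumed firstValue/secondValue labels:)
func locationUpdate(locationDict: [String: AnyObject]) {
    DispatchQueue.main.async {
        if let first = locationDict["firstValue"] as? String {
            self.firstValue.setText(first)
        }
        if let second = locationDict["second"] as? String {
            self.secondValue.setText(second)
        }
    }
}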

Can I use HaxeUI with HaxeFlixel?

I tried to use both HaxeUI and HaxeFlixel, but what I get is HaxeUI's interface over a white background, covering everything underneath. Moreover, even if it were possible to make HaxeUI and HaxeFlixel work together, it's not clear how to change HaxeUI's interface when the state changes in HaxeFlixel. Here is the code I used:
private function setupGame():Void {
    Toolkit.theme = new GradientTheme();
    Toolkit.init();
    var stageWidth:Int = Lib.current.stage.stageWidth;
    var stageHeight:Int = Lib.current.stage.stageHeight;
    if (zoom == -1) {
        var ratioX:Float = stageWidth / gameWidth;
        var ratioY:Float = stageHeight / gameHeight;
        zoom = Math.min(ratioX, ratioY);
        gameWidth = Math.ceil(stageWidth / zoom);
        gameHeight = Math.ceil(stageHeight / zoom);
    }
    trace('stage: ${stageWidth}x${stageHeight}, game: ${gameWidth}x${gameHeight}, zoom=$zoom');
    addChild(new FlxGame(gameWidth, gameHeight, initialState, zoom, framerate, framerate, skipSplash, startFullscreen));
    Toolkit.openFullscreen(function(root:Root) {
        var view:IDisplayObject = Toolkit.processXmlResource("assets/xml/haxeui-resource.xml");
        root.addChild(view);
    });
}
I can guess that both HaxeUI and HaxeFlixel probably have their own main loop and that their event handling might not be compatible, but just in case: can someone give a more definitive answer?
Edit:
Actually, it's much better when using openPopup:
Toolkit.openPopup({ x:20, y:150, width:100, height:100 }, function(root:Root) {
    var view:IDisplayObject = Toolkit.processXmlResource("assets/xml/haxeui-naming.xml");
    root.addChild(view);
});
It's possible to interact with the rest of the screen (managed by HaxeFlixel), but the mouse pointer for the part of the screen managed by HaxeFlixel stays underneath HaxeUI's user interface elements.
When using Flixel and HaxeUI together, it's almost like running two applications at once. However, they both rely on OpenFL as a back end, and each attaches itself to its display tree.
One technique I'm experimenting with right now is to open a Flixel sub state, and within the sub state, call Toolkit.openFullscreen(). From inside of this, you can set the alpha of the root's background to 0, which allows you to see through it onto the underlying bitmap that Flixel uses to render.
Here is a minimal example of how you might "embed" an editor interface inside a Flixel sub state:
import haxe.ui.toolkit.core.Toolkit;
import haxe.ui.toolkit.core.RootManager;
import haxe.ui.toolkit.themes.DefaultTheme;
import flixel.FlxG;
import flixel.FlxSubState;

// This would typically be a Haxe UI XMLController
import app.MainEditor;

class HaxeUIState extends FlxSubState
{
    override public function create()
    {
        super.create();
        // Flixel uses a sprite-based cursor by default,
        // so you need to enable the system cursor to be
        // able to see what you're clicking.
        FlxG.mouse.useSystemCursor = true;
        Toolkit.theme = new DefaultTheme();
        Toolkit.init();
        Toolkit.openFullscreen(function (root) {
            var editor = new MainEditor();
            // Allows you to see what's going on in the sub state
            root.style.backgroundAlpha = 0;
            root.addChild(editor.view);
        });
    }

    override public function destroy()
    {
        super.destroy();
        // Switch back to Flixel's cursor
        FlxG.mouse.useSystemCursor = false;
        // Not sure if this is the "correct" way to close the UI,
        // but it works for my purposes. Alternatively you could
        // try opening the editor in advance, but hiding it
        // until the sub-state opens.
        RootManager.instance.destroyAllRoots();
    }

    // As far as I can tell, the update function continues to get
    // called even while Haxe UI is open.
    override public function update() {
        super.update();
        if (FlxG.keys.justPressed.ESCAPE) {
            // This will implicitly trigger destroy().
            close();
        }
    }
}
In this way, you can associate different Flixel states with different Haxe UI controllers. (NOTE: They don't strictly have to be sub-states, that's just what worked best in my case.)
When you open a fullscreen or popup with HaxeUI, the program flow will be blocked (your update() and draw() functions won't be called). You should probably have a look at flixel-ui instead.
From my experience, HaxeFlixel and HaxeUI work well together, but they are totally independent projects, and as such, any coordination between Flixel states and the displayed UI must be added by the coder.
I don't recall having the white background problem you mention; it shouldn't happen unless the HaxeUI root sprite has a solid background, in which case it should be raised with the HaxeUI project maintainer.

Linux Rendering offscreen with OpenGL 3.2+ w/ FBOs

I have an Ubuntu machine and a command-line application, written on OS X, which renders something offscreen using FBOs. This is part of the code:
this->systemProvider->setupContext(); //be careful with this one. to add thingies to identify if a context is set up or not
this->systemProvider->useContext();
glewExperimental = GL_TRUE;
glewInit();
GLuint framebuffer, renderbuffer, depthRenderBuffer;
GLuint imageWidth = _viewPortWidth,
imageHeight = _viewPortHeight;
//Set up a FBO with one renderbuffer attachment
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB, imageWidth, imageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer);
//Now bind a depth buffer to the FBO
glGenRenderbuffers(1, &depthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, _viewPortWidth, _viewPortHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);
The "system provider" is a C++ wrapper around OS X's NSOpenGLContext, which is used just to create a rendering context and making it current, without associating it with a window. All the rendering happens in the FBOs.
I am trying to use the same approach for Linux (Ubuntu) using GLX, but I am having a hard time doing it, since I see that GLX requires a pixel buffer.
I am trying to follow this tutorial:
http://renderingpipeline.com/2012/05/windowless-opengl/
At the end it uses a pixel buffer to make the context current. I hear that pixel buffers are deprecated and we should abandon them in favour of framebuffer objects; is that right? (I may be wrong about this.)
Does anyone have a better approach, or idea?
I don't know if it's the best solution, but it surely works for me.
Binding the functions to local variables that we can use:
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);
typedef Bool (*glXMakeContextCurrentARBProc)(Display*, GLXDrawable, GLXDrawable, GLXContext);
static glXCreateContextAttribsARBProc glXCreateContextAttribsARB = NULL;
static glXMakeContextCurrentARBProc glXMakeContextCurrentARB = NULL;
Our objects as class properties:
Display *display;
GLXPbuffer pbuffer;
GLXContext openGLContext;
Setting up the context:
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc) glXGetProcAddressARB((const GLubyte *) "glXCreateContextAttribsARB");
glXMakeContextCurrentARB = (glXMakeContextCurrentARBProc) glXGetProcAddressARB((const GLubyte *) "glXMakeContextCurrent");
display = XOpenDisplay(NULL);
if (display == NULL) {
    std::cout << "error getting the X display";
}
static int visualAttribs[] = { None };
int numberOfFrameBufferConfigurations;
GLXFBConfig *fbConfigs = glXChooseFBConfig(display, DefaultScreen(display), visualAttribs, &numberOfFrameBufferConfigurations);
int context_attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
};
std::cout << "initialising context...";
this->openGLContext = glXCreateContextAttribsARB(display, fbConfigs[0], 0, True, context_attribs);
int pBufferAttribs[] = {
    GLX_PBUFFER_WIDTH, (int)this->initialWidth,
    GLX_PBUFFER_HEIGHT, (int)this->initialHeight,
    None
};
this->pbuffer = glXCreatePbuffer(display, fbConfigs[0], pBufferAttribs);
XFree(fbConfigs);
XSync(display, False);
Using the context:
if (!glXMakeContextCurrentARB(display, pbuffer, pbuffer, openGLContext)) {
    std::cout << "error making the context current\n";
} else {
    std::cout << "made the context current\n";
}
After that, one can use FBOs normally, as one would on any other platform. To this day my question is actually unanswered (whether there is a better alternative), so I am just offering a solution that worked for me. It seems to me that GLX does not use the notion of pixel buffers the same way OpenGL does, hence my confusion. The preferred way to render offscreen is FBOs, but for an OpenGL context to be created on Linux without a window, a pixel buffer (the GLX kind) must be created. After that, using FBOs with the code I provided in the question works as expected, the same way it does on OS X.
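To complete the sketch, reading the result back from the FBO would look roughly like this (variable names follow the question's code; GL_RGB/GL_UNSIGNED_BYTE are assumed to match the renderbuffer storage above):
std::vector<unsigned char> pixels(imageWidth * imageHeight * 3);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows are tightly packed
glReadPixels(0, 0, imageWidth, imageHeight, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
// pixels now holds the rendered image, bottom row first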
