Linux GUI stack confusion [closed]

I am trying to understand how the Linux GUI stack works. Let me explain:
In Windows, things are relatively simple. You have GDI/GDI+, which handles everything from drawing and positioning windows to drawing buttons and so on, right?
But with Linux I get pretty confused. Maybe you will better understand where my confusion comes from if I explain my thoughts. First, I read about the Linux desktop environments, GNOME and KDE. I picked KDE (for no specific reason) and learned that it is built on the Qt library, so I read a bit more about Qt.
At first I thought that Qt actually renders UI elements like buttons, sliders and so on. But when I looked at a Windows example (since Qt is multiplatform), I concluded that it does not: it uses GDI for rendering. So the Linux version must use some Linux-specific way to render UI elements.
So if I am right, KDE uses Qt just to organize things, in a very simplistic way, like a layout manager, right? I assume that if Qt uses GDI for rendering on Windows, it is widely used just because it is simpler and cleaner than manipulating GDI directly.
From this point of view, the Linux desktop (actually the Windows one too) is "just" a window which is always fullscreen and cannot be minimized, closed and so on, and which uses Qt for the higher-level rendering of basic UI elements. But that means there must be another, deeper layer under the Qt library. I read about the X Window System and its window managers. Are X window managers the layer that renders UI elements (buttons and so on)? Because if I am right, X is "just" a graphical interface between the upper layers and the graphics subsystem of the PC, something like GDI using DirectDraw to access the framebuffer, etc.
In Windows this whole stack seems more compact (I am NOT saying it is better), because GDI seems to play the role of window manager and UI element renderer at once. I believe this is why advanced UI interfaces (Compiz...) are developed for Linux.
So, please, where am I wrong? I have tried to understand it as much as I could, but I still think I am missing something. Thanks.

Related

Standard way of determining placement of window frame controls

The More General Question
I am wondering if there is a standard way that operating systems / desktop managers use to expose the user's preference regarding the placement of the window frame controls (Close, Maximize/Miniaturize, Minimize).
For platforms like Windows and macOS, it's "pretty" safe to assume that the user wants their window controls on the right and on the left respectively, to match the rest of the windows in the GUI. But the key word here is "assume". I hate to assume things when I code.
Furthermore, what about all the different Linux distributions and flavors?
I think this information could be useful to application developers in the same way that it's useful to know the user's preferences regarding dark or light themes.
My More Specific Question
Now, what I'm building currently is an Electron application that could really benefit from a custom title bar (a.k.a. a frameless window). And I do understand that my problem is caused by the fact that I want to bypass the window frame abstraction that is normally offered by the operating systems, but I'd really like to be able to position my custom controls in my title bar without having to guess.
But anyway, since I use Electron, I do have access to native features through Node.js. Still, I'd be curious to know whether browsers have, or are planning to implement, a way for the CSS or JavaScript running in the page to determine the intended placement of the window controls, similar to prefers-color-scheme.
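There is no cross-desktop standard for exposing this, but as a hedged illustration of the kind of probing that is possible today: GNOME-based desktops store the preference in GSettings under org.gnome.desktop.wm.preferences button-layout. The sketch below is plain Python purely for illustration (an Electron app would run the same command through child_process); treat the parsing and the fallback behaviour as assumptions, not as a portable API.

    # Hedged sketch: query GNOME's window-button layout via gsettings.
    # Only covers GNOME-based desktops; other environments (KDE, XFCE, ...)
    # keep this preference elsewhere, if they expose it at all.
    import subprocess

    def gnome_button_layout():
        """Return (left_buttons, right_buttons) as lists, or None if unavailable."""
        try:
            out = subprocess.run(
                ["gsettings", "get", "org.gnome.desktop.wm.preferences", "button-layout"],
                capture_output=True, text=True, check=True,
            ).stdout.strip().strip("'")
        except (OSError, subprocess.CalledProcessError):
            return None  # not a GNOME session, or gsettings is missing
        left, _, right = out.partition(":")  # the colon separates the left side from the right
        return ([b for b in left.split(",") if b], [b for b in right.split(",") if b])

    print(gnome_button_layout())  # e.g. ([], ['minimize', 'maximize', 'close'])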

Deciding between Pixi.js and Phaser [closed]

One of my school projects is to make a realtime multiplayer web game. I am currently having difficulty deciding whether I should go with Pixi.js or Phaser for the game graphics and controls. Could anyone talk a little bit about what each is good at and where one is better than the other?
Phaser uses Pixi for rendering, albeit an older and heavily modified version of it. Current versions of Pixi may give you better performance, but you'll have to implement by hand what's readily available in Phaser.
They differ in that Pixi is a rendering engine and Phaser is a game framework.
I'll quote Rich, the creator of Phaser:
Off the top of my head, here is what Phaser adds onto Pixi:
Choice of physics systems (arcade or full body)
A Game World and a Camera which can pan around it
Tilemap support
A particle system
Sound support (both web audio and legacy audio)
More advanced input handling (input priority, drag and drop, etc)
Keyboard and Gamepad inputs
Scale Manager to handle game / scene resizing + full screen support
Tween Manager for tweening game objects, hooked into the core clock (so it pauses properly when your game does)
Asset loader (supporting all kinds of file types) and Cache
A State Manager to let you swap between game states easily
Game clock + custom timers + timer events
And probably lots more I forgot. As someone has commented though, it depends entirely on what you want to make. Lots of people use Pixi who don't make games at all. However as this is a game dev forum, I'm going to suspect you do :)
I guess just try it. If you don't like it put it down to experience and just use Pixi "raw".
Source: http://www.html5gamedevs.com/topic/12656-phaser-pixi/#comment-72893
Depending on how long you can wait, you may actually want to hold off and try Phaser 3 (Lazer), which is currently in the works and will have its own rendering engine. I think, however, that learning the current version of Phaser is a good starting point, and many things in Lazer will be the same.
Phaser gives you a full game framework. Pixi is a rendering engine as Kamen described above.
My view: if you are a beginner in HTML5 game development, you can take two different approaches.
If you have a product ahead of you to complete, Phaser gives you more tools and therefore speed. It is the biggest sea to swim in for HTML5 game development, but it has its own limitations. Of course you can write your own tools, but in the end it is a framework, and like every framework it forces you to use its own flow and tools to run smoothly. It takes some time for a developer to understand that flow, pinpoint their needs and, if Phaser doesn't meet them, implement their own solutions. But many people use Phaser, and most likely there is already an answer to every problem a beginner will run into. Phaser originally used Pixi.js as its renderer, but now it has its own.
If you want to learn by digging deep into HTML5 renderers and game development, starting with Pixi.js might be the better decision. As mentioned, Pixi.js is only the renderer. It has cool features, but it needs more development on top of it to make games. It also gives you freedom: you mostly won't have to deal with the renderers (WebGL or Canvas), but the rest is fully up to you. Personally, I started with Pixi.js; I knew about Phaser but didn't look deeper into it and wrote my own framework. Once my framework had reached a certain point, I checked Phaser and realized that most of what I had in mind already existed there. Still, it gave me a deeper understanding of HTML5 game development.

Creating 2D graphics with a 3D tool? [closed]

I'm planning to develop an RTS game with 2D graphics. Since it will be a sprite-based game, it will require multiple views of every actor (or at least of most of them).
Now the problem is that I kind of suck as a designer. I can work a bit with Photoshop, but since it is 2D software, it would be really hard to create different views of a single character and make them look consistent.
That's why I was thinking of creating the models with a 3D tool; then I could get all the renders just by rotating them... does this make sense to you guys?
If so, that drives me to a second question: What software could I use?
Again, I'm a programmer, not a designer, so I will need to learn from scratch. 3D Studio and Blender look really complex; Google SketchUp seems easier, but I'm not sure it's worth it.
Well that's it, thanks in advance for any feedback.
Creating your actors with a 3D modelling tool and then creating sprites by rendering multiple viewpoints of them is a sound approach. The main thing is to make sure you have a way of scripting the sprite production so that you don't have to tediously generate 100s of sprite images manually!
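To make the "scripting the sprite production" point concrete, here is a minimal, hypothetical Blender sketch (Blender ships its own Python, so you can run this from the built-in console or with blender --background yourscene.blend --python render_dirs.py). The object name "Actor", the eight facings and the output path are assumptions for illustration, and it presumes a camera and lighting are already set up in the scene.

    # Hypothetical batch-render sketch for Blender: rotate an object named
    # "Actor" in 45-degree steps and save one still image per facing.
    import math
    import bpy

    scene = bpy.context.scene
    actor = bpy.data.objects["Actor"]   # assumes the model object is named "Actor"
    directions = 8                      # 8 facings = 45 degrees per step

    for i in range(directions):
        actor.rotation_euler[2] = math.radians(i * 360 / directions)  # spin around Z
        scene.render.filepath = f"//renders/actor_dir{i:02d}.png"     # // = folder of the .blend
        bpy.ops.render.render(write_still=True)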
These days though, I'd have to query why, if you have actors as 3D models, you wouldn't just render them directly in 3D on whatever platform the game is running on. Even the most humble mobile platform has enough graphics power to 3D render any model you're likely to cook up for sprite-sized objects, and a fully 3D approach gives much more flexibility.
Hybrid approaches are also possible; I seem to remember the Total Annihilation RTS games stored 3D models of the game objects, but created and cached 2D sprites of them on demand (within the otherwise 2D game engine) rather than relying on the flaky 3D hardware of the day or on pregenerating sprite images for loading. It was a good solution for its time, but I'd be surprised if the approach was still needed today.
It's worth persevering with a package like Blender or 3D Studio. The skills you pick up will be useful for other stuff in the future.
If you're dealing with relatively small or low-res graphics like in your example, you don't need to worry about putting too much detail into your model. Just render it out, scale it down and then adjust it in a paint package.

"Seamless" multi user session in linux/X11 [closed]

The goal
I would love to have a multi-user system (based on Linux) using only one X11 session with multiple screens and pairs of mouse and keyboard, so that two (or more) people can work on the same computer sharing not only the same hardware but also the same "screen" (split across two physical screens, of course, but you could move a window over to your partner, for example...). Sharing the windows should not only make it more convenient to "show" your partner what you have done: if user A started to work on something in a complex application (assume it wouldn't be convenient to save the files and open them in the other session), moving the application's window to user B should be as simple as moving a window within your own screen. That's why I call it a "seamless" multi-user session.
Possible solutions
I read about X11 "multi-seat" in this article, but it doesn't have the features that I want: it uses a separate session for each user rather than one single session.
I found XI2, a.k.a. XInput2, which provides multi-pointer support. This allows having two separate mouse pointers controlled by two mice. I read that you can assign two keyboards to the two pointers, giving independent focus and text input. But I wonder whether the clipboards (both the "real" and the "middle mouse button" clipboards) are treated separately too... I found only a little information on the XI2 multi-pointer feature, and no field reports.
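For what it's worth, the multi-pointer (MPX) part can be experimented with using the stock xinput tool: create a second master pointer/keyboard pair, then attach physical devices to it. The sketch below only wraps those commands in Python to keep it self-contained; the device IDs are placeholders you would read from "xinput list", and whether the clipboards end up separated is exactly the open question above.

    # Rough XI2/MPX experiment: create a second "seat" of master devices and
    # attach a spare mouse and keyboard to it. Device IDs 12 and 13 are
    # placeholders - look up the real ones with "xinput list".
    import subprocess

    def xinput(*args):
        print("$ xinput", " ".join(args))
        subprocess.run(["xinput", *args], check=True)

    xinput("--create-master", "seat2")              # creates "seat2 pointer" / "seat2 keyboard"
    xinput("--reattach", "12", "seat2 pointer")     # attach the second mouse
    xinput("--reattach", "13", "seat2 keyboard")    # attach the second keyboard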
Another, completely different idea would be to have two separate X11 sessions on the computer but share the windows between them using X11 forwarding. BUT: as far as I know, you cannot share an X11-forwarded window in such a way that user A runs an application and, while it is running, sends its window over to user B. As far as I know, user B can only run an application on user A's hardware and display the window in their own X11 session. That's again not what I want... Or am I wrong, and it is possible to forward a window via X11 forwarding AFTER the application has been started?
Edit: I just found Xpra, which is similar to X11 forwarding but allows detaching a running application from one X11 session and re-attaching it to another. I'll give it a try now.
Any other ideas to get this done?
I think I found a solution:
Win Switch (uses Xpra, licensed under the GPLv3)

How can I make a single PyQt code to work in Windows and Linux?

PyQt experts: I developed the GUI on Windows and used setGeometry to position the widgets. When I tried to run the same code on Linux, it looked cluttered.
On top of that, on Windows a font size of 8 looks good, but on Linux, especially on Ubuntu, it doesn't look right since the default font size is 10. Among the other differences, the border of the group box doesn't appear on Linux while it is visible on Windows.
Is there a way I can make the same code give the same look and feel on Windows and Linux, irrespective of the font and size changes and other differences?
If I port my application to Mac in the future, will the same code work there too? Or do I have to maintain separate code for each platform by checking whether platform.system() equals "Windows" or "Linux"?
The answer is simple: don't use setGeometry directly (to position your widgets).
Consider the following: what if the user wants to resize your application window?
Compose the user interface (you could do this from Designer or from code) within QSplitters (if you want a resize handle between two components) and/or within QVBoxLayouts / QHBoxLayouts (note that these can be nested).
This will make your UI components behave consistently.
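As a minimal sketch of that advice (PyQt5 is assumed here; the same layout classes exist in PyQt4 under PyQt4.QtGui), let nested layouts do the positioning instead of setGeometry, so the widget positions adapt to each platform's fonts and style:

    # Minimal layout sketch (PyQt5 assumed): no setGeometry, no pixel coordinates.
    import sys
    from PyQt5.QtWidgets import (
        QApplication, QWidget, QVBoxLayout, QHBoxLayout,
        QGroupBox, QLabel, QLineEdit, QPushButton,
    )

    app = QApplication(sys.argv)

    window = QWidget()
    outer = QVBoxLayout(window)          # top-level layout owns the window

    box = QGroupBox("Settings")          # the platform style draws the group-box border
    row = QHBoxLayout(box)
    row.addWidget(QLabel("Name:"))
    row.addWidget(QLineEdit())

    buttons = QHBoxLayout()
    buttons.addStretch()                 # push the button to the right on any platform
    buttons.addWidget(QPushButton("OK"))

    outer.addWidget(box)
    outer.addLayout(buttons)

    window.show()
    sys.exit(app.exec_())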
I agree with @ChristopheD. Using setGeometry is bad. It's like designing a webpage with fixed pixel geometry and then wondering why it looks bad on another device.
Qt has a lot of wonderful layout code. Let it do its job.
Qt by default will paint a widget according to instructions contained in the QStyle. You can test how badly you break your layout in different styles easily enough... run your program with different style options. Like so:
program.py -style motif
Also try -style platinum or -style windows. Even different versions of Windows will probably break your layout.
If you really want to see how bad pixel-based layouts are, try running your program with the -reverse parameter... that's how your program will look to someone running it who speaks a Right-To-Left language, like Hebrew or Farsi.
The problem that you have with widgets not drawing where you want them to can be solved by writing custom painting code for your widget. See the PyQt QPainter docs or, better yet, the original Qt QPainter docs.
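For completeness, a tiny hedged example of such custom painting (again assuming PyQt5; the widget, colours and text are invented for illustration):

    # Custom-painted widget sketch: override paintEvent and draw with QPainter.
    import sys
    from PyQt5.QtCore import Qt
    from PyQt5.QtGui import QColor, QPainter
    from PyQt5.QtWidgets import QApplication, QWidget

    class Badge(QWidget):
        """A widget that paints itself: a rounded rectangle with centred text."""
        def paintEvent(self, event):
            painter = QPainter(self)
            painter.setRenderHint(QPainter.Antialiasing)
            painter.setPen(Qt.NoPen)
            painter.setBrush(QColor("#3465a4"))
            painter.drawRoundedRect(self.rect().adjusted(4, 4, -4, -4), 8, 8)
            painter.setPen(Qt.white)
            painter.drawText(self.rect(), Qt.AlignCenter, "custom paint")

    app = QApplication(sys.argv)
    badge = Badge()
    badge.resize(160, 60)
    badge.show()
    sys.exit(app.exec_())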
While I hope my answer is useful, it probably means your program needs to be partially rewritten. In the long term, however, it means that you'll have code that is portable between styles and operating systems, and will even work translated (assuming you care about that).
