"Seamless" multi user session in linux/X11 [closed] - linux

The goal
I would love to have a multi-user system (based on Linux) using only one X11 session with multiple screens and multiple mouse/keyboard pairs. Two (or more) people could then work on the same computer, sharing not only the hardware but also the same "screen" (split across two physical screens, of course, but you could move a window over to your partner, for example). Sharing the windows should do more than make it convenient to "show" your partner what you have done: if user A has started working on something in a complex application (assume it wouldn't be convenient to save the files and open them in the other session), moving that application's window to user B should be as simple as moving a window within your own screen. That's why I call it a "seamless" multi-user session.
Possible solutions
I read about X11 "multi-seat" in this article, but it doesn't have the features I want: it uses a separate session for each user rather than one single session.
I found XI2, aka XInput2, which provides multi-pointer support. This allows two separate mouse pointers controlled by two mice. I read that you can assign two keyboards to the two pointers, providing independent focus and text input. But I wonder whether the clipboards (both the "real" clipboard and the "middle mouse button" selection) are treated separately too... I found only a little information on XI2's multi-pointer feature and no field reports.
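As far as I understand it, the multi-pointer setup itself would look roughly like this (a minimal sketch driving the xinput command-line tool from Python; the device id 12 is a placeholder you would look up yourself):
# Rough sketch of an XI2 multi-pointer setup via the xinput CLI
# (assumes xinput is installed; the device id 12 is a placeholder).
import subprocess

# Create a second master device pair; it shows up as "second pointer"/"second keyboard".
subprocess.run(["xinput", "create-master", "second"], check=True)

# Print the device tree so you can pick the numeric id of the physical mouse
# that should belong to the second user.
print(subprocess.run(["xinput", "list"], capture_output=True, text=True).stdout)

# Attach that physical (slave) device to the new master pointer.
subprocess.run(["xinput", "reattach", "12", "second pointer"], check=True)
Whether the clipboards end up being shared between the two pointers is exactly the part I could not find documented.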
Another, completely different idea would be to run two separate X11 sessions on the computer but share the windows between them using X11 forwarding. BUT: as far as I know, you cannot share an X11-forwarded window in such a way that user A runs an application and, while it is running, sends its window over to user B. As far as I know, user B can only run an application on user A's hardware and display the window in their own X11 session. That's again not what I want... Or am I wrong, and it is possible to forward a window via X11 forwarding AFTER the application has been started?
Edit: I just found Xpra, which is similar to X11 forwarding but allows detaching a running application from one X11 session and attaching it to another. I'm giving it a try now.
Any other ideas to get this done?

I think I found a solution:
Win Switch (uses Xpra, licensed under GPL3)
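For reference, the underlying Xpra workflow looks roughly like this (a sketch driving the xpra CLI from Python; the display number :100 and xterm are placeholders, and flag names may differ between Xpra versions):
# Rough sketch of the Xpra detach/attach workflow, driven from Python
# (display :100 and xterm are placeholders; flag names can differ between Xpra versions).
import subprocess

# Start an Xpra server on virtual display :100 with an application running inside it.
subprocess.run(["xpra", "start", ":100", "--start-child=xterm"], check=True)

# User A attaches; the client keeps running while the window is shown in A's session,
# so start it without blocking this script.
client = subprocess.Popen(["xpra", "attach", ":100"])

# ... later, detach again; user B can then run "xpra attach :100" in their own session.
subprocess.run(["xpra", "detach", ":100"], check=True)
client.wait()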


Standard way of determining placement of window frame controls

The More General Question
I am wondering if there is a standard way that operating systems / desktop managers use to expose the user's preference regarding the placement of the window frame controls (Close, Maximize/Miniaturize, Minimize).
For platforms like Windows and macOS, it's "pretty" safe to assume that the user wants their window controls on the right and left respectively, to match the rest of the windows in the GUI. But the key word here is "assume". I hate to assume things when I code.
Furthermore, what about all the different Linux distributions and flavors?
I think this information could be useful to application developers in the same way that it's useful to know the user's preferences regarding dark or light themes.
My More Specific Question
Now, what I'm currently building is an Electron application that could really benefit from a custom title bar (a.k.a. a frameless window). I do understand that my problem is caused by the fact that I want to bypass the window frame abstraction normally offered by the operating system, but I'd really like to be able to position my custom controls in my title bar without having to guess.
And since I use Electron, I do have access to native features through NodeJS. Still, I'd also be curious to know whether browsers have, or are planning to implement, a way for CSS or JavaScript running in the page to determine the intended placement of the window controls, similar to prefers-color-scheme.
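One concrete example of such a preference on Linux is GNOME's button-layout gsettings key; here is a minimal sketch of reading it (shown in Python for brevity, though from Electron you would run the same command through child_process; other desktops expose this differently, if at all):
# Minimal sketch: read GNOME's window-button layout preference via gsettings.
# The key org.gnome.desktop.wm.preferences button-layout holds a value such as
# "appmenu:minimize,maximize,close"; the colon separates left-side from right-side controls.
import subprocess

raw = subprocess.run(
    ["gsettings", "get", "org.gnome.desktop.wm.preferences", "button-layout"],
    capture_output=True, text=True, check=True,
).stdout.strip().strip("'")

left, _, right = raw.partition(":")
print("left-side controls: ", [b for b in left.split(",") if b])
print("right-side controls:", [b for b in right.split(",") if b])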

How to make sure not to miss events [closed]

Say I have a program that does the following:
wait for key press
Once key pressed, do complex query (takes 10 seconds) and print the result
repeat
Now, if my key presses are 10 seconds apart, this would not be a problem. But how do I handle key presses that come really close together, or, even worse, keys pressed at the exact same time?
Is this information bound to be lost?
I know threading might be an option, but what is the general way of doing this?
Is it possible to just store every key press even though other code is running, and attend to it later?
Interrupts. Universally, computers provide a mechanism for peripherals to request the attention of a CPU by asserting an Interrupt Request Signal. The CPU, when it is ready to accept interrupts, responds by saving minimal state somewhere, then executing code which can query the peripheral, possibly accepting data (keypress) from it.
If you are using an OS, this is all hidden by the kernel, which typically exposes some mechanisms for you to choose how you want to deal with it:
Queue up the keypresses and process them later (see the sketch below). Thus if I want queries 1, 3, 5 run in that order, I can press those keys in succession and go for a smoke while the long processing occurs.
Discard the lookahead keypresses, thus demanding that the user interact with a lousy UI. Search for "homer simpson work from home" to see how to work around this.
If you are using an OS, you might need to look up various ioctls to enable this behaviour, use a UI package similar to curses, or something similar.
If you aren't using an OS, your job is both tougher and easier: you have to write the code to talk to the keyboard, but implementing the policy is a tenth of the work of figuring out some baroque UI library.
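Here is a minimal sketch of the "queue up the keypresses" option in plain Python, using a background thread to capture input while a slow worker drains the queue (reading lines from stdin stands in for real key events):
# Sketch of "queue up the keypresses": a reader thread captures events as they arrive,
# a worker drains the queue one at a time, so nothing is lost even when processing
# takes much longer than the gap between presses.
import queue
import threading
import time

events = queue.Queue()

def reader():
    while True:
        # Stand-in for "wait for key press": one line of input per event.
        events.put(input("press a key + Enter: "))

threading.Thread(target=reader, daemon=True).start()

while True:
    key = events.get()      # blocks until an event is available
    time.sleep(10)          # stand-in for the complex 10-second query
    print("finished query for %r; %d event(s) still queued" % (key, events.qsize()))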

real time online push button based counting system [closed]

I am doing this project:
I have 4 inputs. These are push buttons, connected to a microcontroller.
Each time a push button is pressed, say for example pushbutton_1 is pressed, the press of a switch button should be recognised as a HIGH.
In its normal unpressed state it should be recognised as a LOW.
Then, using serial communication, I should transfer it to a computer.
Along with this, I need to implement a count for each button.
Each time a push button is pressed, the count assigned to that push button should increment by 1.
The data arriving through serial communication should be transferred to an Excel sheet/database.
The Excel sheet/database should display a count for each pushbutton.
I have 4 important question areas:
Which microcontroller should I use? (I have experience with the Arduino development platform.)
How do I implement the transfer of data from the microcontroller to the computer via serial communication?
Afterwards, how do I transfer the arriving data to MS Excel/a database?
How do I implement the system in real time?
Please suggest the best possible way to implement this system.
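For the serial-to-spreadsheet part (questions 2 and 3), a minimal sketch of the computer side might look like this (assuming the pyserial package and a microcontroller that prints one line such as BTN1 per press; the port name and baud rate are placeholders):
# Sketch of the PC side: read button events over serial and keep per-button counts
# in a CSV file that Excel can open. Assumes pyserial and a microcontroller that
# prints one line per press, e.g. "BTN1"; port name and baud rate are placeholders.
import csv
from collections import Counter

import serial

counts = Counter()
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                      # read timed out with no data, keep polling
        counts[line] += 1                 # e.g. counts["BTN1"] += 1
        with open("button_counts.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["button", "count"])
            writer.writerows(sorted(counts.items()))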
To solve this using an MPU like an RPi via the Internet, it's pretty trivial. To do this:
Wire your switches to the GPIO inputs on the Pi. This is a trivial example: http://razzpisampler.oreilly.com/ch07.html
When the state changes, send a message via a realtime service such as PubNub (free for student and other uses: http://www.pubnub.com/free-evangelism-program/)
On a remote "server-side", take the data received via the subscriber logic write to a CSV.
If you followed these directions, you would use the PubNub Python client to publish the data from the Pi: https://github.com/pubnub/python/tree/master/python#publish
and then you would use Python (PubNub supports over 70 languages, so you could use Python or the language of your choice) to subscribe to the pushbutton data channel(s):
https://github.com/pubnub/python/tree/master/python#subscribe
You could even make a cool realtime-updating web page in HTML/JS using the PubNub JS client to dynamically update a dashboard, with no file writing needed.
Source: https://github.com/pubnub/javascript/tree/master/web
Docs: http://www.pubnub.com/docs/javascript/api/reference.html#subscribe
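Putting the Pi side together, a rough sketch might look like this (assuming the RPi.GPIO library and buttons wired from each pin to 3.3 V so a press reads HIGH, as described in the question; publish() is a stand-in for whatever publish call your PubNub SDK version provides):
# Rough sketch of the Pi side: count presses on four GPIO pins and hand each event
# to a realtime service. Assumes RPi.GPIO and buttons wired from pin to 3.3 V
# (pressed = HIGH); publish() is a placeholder for the PubNub client's publish call.
import time
import RPi.GPIO as GPIO

BUTTON_PINS = [17, 22, 23, 27]           # placeholder BCM pin numbers
counts = {pin: 0 for pin in BUTTON_PINS}

def publish(channel, message):
    # Replace with the publish call of your PubNub SDK (or any other transport).
    print("publish", channel, message)

def on_press(pin):
    counts[pin] += 1
    publish("pushbuttons", {"pin": pin, "count": counts[pin]})

GPIO.setmode(GPIO.BCM)
for pin in BUTTON_PINS:
    # Pull-down keeps the pin LOW until the button connects it to 3.3 V.
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    GPIO.add_event_detect(pin, GPIO.RISING, callback=on_press, bouncetime=200)

try:
    while True:
        time.sleep(1)                     # the callbacks do the work in the background
finally:
    GPIO.cleanup()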

Linux GUI stack confusion [closed]

I am trying to understand how the Linux GUI stack works. Let me explain:
In Windows, things are relatively simple. You have GDI/GDI+, which handles everything from drawing and positioning windows to drawing buttons etc., right?
But with Linux I get pretty confused. Maybe you will better understand where my confusion comes from if I explain my thoughts. First I read about the Linux desktop environments, GNOME and KDE. I picked KDE (for no specific reason) and learned it is based on the Qt library, so I read about Qt a bit more.
I first thought that Qt actually renders UI elements like buttons, sliders and so on. But when I saw an example for Windows (since Qt is multi-platform), I realised it does not: there it uses GDI for rendering. So the Linux version must use some Linux-specific way to render UI elements.
So if I am right, KDE uses Qt just to organize things, put very simplistically as a layout manager, right? I assume that if it uses GDI for rendering on Windows, it is widely used just because it is simpler and cleaner than manipulating GDI directly.
From this point of view, the Linux desktop (actually the Windows one too) is "just" a window which is always fullscreen and cannot be minimised, shut down and so on, and which uses Qt for the higher-level rendering of basic UI elements. But that means there is another, deeper layer under the Qt library. I read about the X system and its window managers. Are X window managers the layer that renders UI elements (buttons and so on)? Because if I am right, the X system is "just" a graphical interface between the upper levels and the graphics subsystem of the PC, something like GDI using DirectDraw to access the framebuffer, etc...
In Windows this whole stack seems more compact (I am NOT saying it is better), because GDI seems to play the role of window manager and UI element renderer together. I believe this is why advanced UI layers (Compiz...) are developed for Linux.
So, please, where am I wrong? I have tried to understand it as much as I could, but I still think I am missing something. Thanks.

Easy-to-use AutoHotkey/AutoIt alternatives for Linux [closed]

I'm looking for recommendations for an easy-to-use GUI automation/macro platform for Linux.
If you're familiar with AutoHotkey or AutoIt on Windows, then you know exactly the kind of features I need and the level of complexity I can live with. If you aren't familiar, then here's a small code snippet showing how easy it is to use AutoHotkey:
InputBox, varInput, Please enter some random text...
Run, notepad.exe
WinWaitActive, Untitled - Notepad
SendInput, %varInput%
SendInput, !f{Up}{Enter}{Enter}
WinWaitActive, Save
SendInput, SomeRandomFile{Enter}
MsgBox, Your text`, %varInput% has been saved using notepad!
#n::Run, notepad.exe
Now the above example, although a bit pointless, is a demo of the sort of functionality and simplicity I'm looking for. Here's an explanation for those who don't speak AutoHotkey:
----Start of Explanation of Code ----
Asks user to input some text and stores it in varInput
Runs notepad.exe
Waits till window exists and is active
Sends the contents of varInput as a series of keystrokes
Sends keystrokes to go to File -> Exit
Waits till the "Save" window is active
Sends some more keystrokes
Shows a Message Box with some text and the contents of a variable
Registers a hotkey, Win+N, which when pressed executes notepad.exe
----End of Explanation----
So as you can understand, the features are quite obvious: Ability to easily simulate keyboard and mouse functions, read input, process and display output, execute programs, manipulate windows, register hotkeys, etc. - all being done without requiring any #includes, unnecessary brackets, class declarations, etc. In short: Simple.
Now, I've played around a bit with Perl and Python, but they're definitely not AutoHotkey. They're great for more advanced stuff, but surely there has to be some tool out there for easy GUI automation, right?
PS: I've already tried running AutoHotkey with Wine, but sending keystrokes and hotkeys doesn't work.
I'd recommend the site alternativeto.net to find alternative programs.
It shows three alternatives for AutoIt: AutoKey, Sikuli, and Silktest. AutoKey seems up to the job.
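To give a feel for it, here is a rough AutoKey translation of the notepad demo from the question (AutoKey scripts are plain Python; the keyboard, dialog, window and system objects come from AutoKey's scripting API, whose exact names may differ between versions, and gedit stands in for notepad):
# Rough AutoKey equivalent of the AutoHotkey demo above. AutoKey injects the keyboard,
# dialog, window and system objects; exact API names may differ between AutoKey versions,
# and gedit stands in for notepad.
retCode, varInput = dialog.input_dialog(title="Input", message="Please enter some random text...")

system.exec_command("gedit", getOutput=False)     # run the editor
window.wait_for_exist(".*gedit.*", timeOut=10)    # wait until its window exists

keyboard.send_keys(varInput)                      # type the text the user entered
keyboard.send_keys("<ctrl>+s")                    # open the save dialog
window.wait_for_exist(".*Save.*", timeOut=10)
keyboard.send_keys("SomeRandomFile<enter>")

dialog.info_dialog(title="Done", message="Your text, %s, has been saved!" % varInput)
Hotkeys (the #n::Run part of the AutoHotkey example) are assigned to a script through AutoKey's GUI rather than in the script itself.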
IronAHK is being developed as a cross-platform flavor of AutoHotkey which can be used on Linux, but it's not a fleshed out product yet.
Sikuli lets you automate your interface using screenshots. It runs on any Java platform, so it is cross-platform.
You should look at Experitest. I'm using the Windows version, but it's Java-based and I think it supports Linux as well.
