I'm trying to send regular keyboard input ('a'-'z'), possibly including the directional arrow keys, to a running game process, but I'm confused by the pywinauto documentation:
I've already connected to the existing process via its PID:
from pywinauto.application import Application
from pywinauto.keyboard import SendKeys
app = Application().connect(process=1234)
#app.SendKeys('a')? Doesn't seem to work
I've read some other answers on this, but it's not clear from the documentation what the next step is; there aren't any real examples.
I've also read in some other answers that SendKeys auto-focuses the window, which isn't what I want. Would it be possible to send keystrokes to the process silently?
There are a few points to consider. If the game process has its own window with a native handle, you may try the following:
app.window(title="Window title").send_keystrokes("something")
app.window(title="Window title").send_chars("something")
This should work even for a minimized window. Differences may appear for special symbols, which may not work with one of these methods or even with both, but the arrow keys should work with send_keystrokes.
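For example, a minimal sketch (the PID 1234 and the window title "Game Window" are placeholders, and the arrow keys use pywinauto's {LEFT}/{UP} key codes):

from pywinauto.application import Application

# Connect to the already running process by PID (1234 is a placeholder).
app = Application().connect(process=1234)

# Resolve the game's top-level window (the title is a placeholder).
window = app.window(title="Game Window")

# send_keystrokes posts key events to the window itself, so it should work
# even when the window is minimized or not in focus.
window.send_keystrokes("abc")       # plain letters
window.send_keystrokes("{LEFT}")    # left arrow key
window.send_keystrokes("{UP}")      # up arrow key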
If it's a DirectX game, sending keys might be a more complicated task. A while ago I found some references about a potential implementation: https://github.com/pywinauto/pywinauto/issues/469. Though I haven't had a chance to try it yet.
Background: for the past five years or so, I have been using Mac hardware (high-end MacBook Pro laptops for the most part) and software after many years of using Gnu/Linux on typical PC hardware with ergonomic keyboards. More importantly, as a heavy Emacs user, the switch to Mac was painful, with the Apple standard short keyboard both maddening and unavoidable. I prevented RSI onset by using the Karabiner tool to make two small but very important changes: 1) changing the Caps Lock key to generate a Menu (F13) key when pressed alone and a Control-modified keycode when pressed with another key; 2) changing the Return key in a similar fashion: Return when pressed alone and a Control-modified keycode when pressed with another key. Disappointed with recent Apple decisions for both hardware and software, I am now moving back to Gnu/Linux (Ubuntu, if it matters) but sticking with Mac laptops.
Question: since Karabiner is an OS X only tool with no readily available Gnu/Linux counterpart, it looks like I will have to write and/or modify some code to achieve the capslock and return key dual function behaviors Karabiner enables. The Karabiner author writes that xbindkeys and rbindkeys do key remapping but at first glance they do not seem to handle the dual function behaviors. Now I am wrestling with porting Karabiner or creating a new tool entirely. And no doubt there may be other approaches as well. So my question is: what programming advice would you suggest for solving this problem? Especially one that can be developed in hours, days or weeks rather than in months.
Notes:
1) There are different approaches involving changes of behavior, such as swapping the Control and Command keys. Many have been tried with varying degrees of satisfaction. Karabiner's dual-function approach is, IMHO, far and away the most effective, in that it provides Control-key symmetry on the keyboard home row, and for all applications!
2) Different hardware is also likely to be suggested. I've tried Dell, HP, Lenovo, Acer systems and looked at a lot more. None are comparable to the combined power, size, feel, and style of the Apple top end products, albeit at a premium price. For example, the Dell Precision 7510 is bulky and has a trackpad that feels like sandpaper; the Lenovo X1 (a very nice system) lacks a Thunderbolt port; etc.
3) External keyboards are also a non-starter because of the laptop requirement; an external keyboard on the plane or train is not happening.
You can achieve this on Wayland, a TTY, or X11 using Interception Tools, which talks directly to libevdev and libudev.
Wayland, TTY or X11
Install Interception Tools and a plugin such as caps2esc or interception-k2k. Then you need to configure Interception to use this plugin. For caps2esc, you can use the following /etc/udevmon.yaml file:
- JOB: "intercept -g $DEVNODE | caps2esc | uinput -d $DEVNODE"
  DEVICE:
    EVENTS:
      EV_KEY: [KEY_CAPSLOCK, KEY_ESC]
Then run it as root:
nice -n -20 /usr/bin/udevmon -c /etc/udevmon.yaml
You should make sure it starts on boot. For systemd, you can use the following service:
[Unit]
Description=udevmon
[Service]
ExecStart=/usr/bin/nice -n -20 /usr/bin/udevmon -c /etc/udevmon.yaml
[Install]
WantedBy=multi-user.target
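Assuming the unit is saved as /etc/systemd/system/udevmon.service (the path is my assumption; adjust it to your setup), you can enable and start it with:
sudo systemctl enable --now udevmon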
X11-only
As an alternative or on older systems without udev, you can use setxkbmap and xcape.
First change Caps Lock to act as a Ctrl modifier:
setxkbmap -option caps:ctrl_modifier
Then set Caps Lock to act as Menu key when pressed for less than the timeout (the default is 500 ms):
xcape -e 'Caps_Lock=Menu'
xcape runs as a daemon, so you need to ensure it starts on login. setxkbmap sets the keyboard layout for the current X session only; you can make it permanent in xinitrc, xprofile, or the X configuration files.
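For example, a minimal ~/.xprofile sketch (just one of the possible locations mentioned above) could look like this:

# Make Caps Lock act as an extra Ctrl modifier
setxkbmap -option caps:ctrl_modifier
# Make a quick tap on Caps Lock produce the Menu key instead
xcape -e 'Caps_Lock=Menu'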
P.S. For those who want to use an external keyboard, the open-source (software and hardware) Ultimate Hacking Keyboard (UHK) supports this functionality.
I am trying to understand how Linux GUI stack works. Let me explain:
In Windows, things are relatively simple. You have GDI/GDI+, which handles everything from drawing and positioning windows to drawing buttons, etc., right?
But in Linux I get pretty confused. Maybe you'll better understand where my confusion comes from if I explain my thoughts. First, I read about the Linux desktop environments, GNOME and KDE. I picked KDE (for no specific reason) and learned that it is built on the Qt library. So I read a bit more about Qt.
I first thought that Qt actually renders UI elements like buttons, sliders, and so on. But when I saw a Windows example (Qt is multiplatform), I realised it does not: it uses GDI for rendering. So the Linux version must use some Linux-specific way to render UI elements.
So if I am right, KDE uses Qt just to organize things, in a very simplistic sense as a layout manager, right? I assume that, if it uses GDI for rendering on Windows, it is widely used simply because it's simpler and cleaner than manipulating GDI directly.
So from this point of view, the Linux desktop (actually the Windows one too) is "just" a window that is always fullscreen and cannot be minimised, shut down, and so on. It uses Qt for the higher-level rendering of basic UI elements. But that means there is another, deeper layer under the Qt library. I read about the X window system and its window managers. Are X window managers the layer that renders UI elements (buttons and so on)? Because, if I am right, the X system is "just" a graphical interface between the upper levels and the graphical subsystem of the PC, something like GDI using DirectDraw to access the framebuffer, etc.
In Windows this whole stack seems more compact (I am NOT saying it is better), because GDI seems to play the roles of window manager and UI element renderer together. I believe this is why advanced UI interfaces (Compiz, ...) are developed for Linux.
So, please, where am I wrong? I tried to understand it as much as I could, but I still think I'm missing something. Thanks.
The goal
I would love to have a multi-user system (based on Linux) using only one X11 session with multiple screens and pairs of mouse and keyboard, so two (or more) people can work with the same computer, sharing not only the hardware but also the same "screen" (which would be split into two physical screens of course, but you could, for example, move a window to your partner). Sharing the windows should not only make it more convenient to "show" your partner what you have done: if user A started to work on something using a complex application (assume that it wouldn't be convenient to save the files and open them in the other session), moving the application's window to user B should be as simple as moving a window within your own screen. That's why I call it a "seamless" multi-user session.
Possible solutions
I read about X11 "multi-seat" in this article, but it doesn't have the features I want: it uses one session per user rather than one single session.
I found XI2, aka XInput2, which provides multi-pointer support. This allows having two separate mouse pointers controlled by two mice. I read that you can assign two keyboards to the two pointers, providing independent focus and text input. But I wonder whether the clipboards (both the "real" and the "middle mouse button" clipboards) are treated separately too... I found only a little information on the XI2 multi-pointer feature, but no "field report".
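For reference, a rough sketch of setting up a second pointer/keyboard pair with the xinput tool (the device IDs below are placeholders to be read from the xinput list output):

# Create a second master pair named "second" ("second pointer" / "second keyboard")
xinput create-master "second"
# Find the IDs of the physical mouse and keyboard to hand over
xinput list
# Attach the second physical mouse and keyboard to the new master pair
xinput reattach <second-mouse-id> "second pointer"
xinput reattach <second-keyboard-id> "second keyboard"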
Another, completely different idea would be to have two separate X11 sessions on the computer but share windows between them using X11 forwarding. BUT: as far as I know, you cannot share an X11-forwarded window so that user A runs an application and, while it is running, sends the window to user B. As far as I know, user B can only run an application on user A's hardware and display the window in their own X11 session. That's again not what I want... Or am I wrong, and it is possible to forward a window via X11 forwarding AFTER the application has been started?
edit: I just found Xpra, which is similar to X11 forwarding but allows detaching a running application from an X11 session and attaching it to another. I'm giving it a try now.
Any other ideas to get this done?
I think I found a solution:
Win Switch (uses Xpra, licensed under GPL3)
Pretty much every other editor that isn't a vi descendant (vim, cream, vi-emu) seems to use the Emacs shortcuts (Ctrl+W to delete back a word, and so on).
Early software was often modal, but usability took a turn at some point, away from this style.
VI-based editors are total enigmas -- they're the only real surviving members of that order of software.
Modes are a no-no in usability and interaction design because we humans are fickle mammals who cannot be trusted to remember what mode the application is in.
If you think you are in one "mode" when you are actually in another, then all sorts of badness can ensue. What you believe to be a series of harmless keystrokes can (in the wrong mode) cause unlimited catastrophe. This is known as a "mode error".
To learn more, search for the term "modeless" (and "usability")
As mentioned in the comments below, a Modal interface in the hands of an experienced and non-fickle person can be extremely efficient.
Um... maybe there isn't much of a need for one, given that Vi/Vim is pretty much available everywhere and got the whole modal thing right? :)
I think that it's because vi (and its ilk) already occupies the ecological niche of modal editors.
The number of people who prefer modal and haven't yet been attracted to vi is probably 0, so the hypothetical vi competitor would have to be so great as to make a significant number of vi users switch. This isn't likely. The cost of switching editors is huge and the vi-s are probably already as good as modal editors go. Well, maybe a significant breakthrough could improve upon them, but I find this unlikely.
#Leon: Great answer.
#dbr: Modal editing is something that takes a while to get used to. If you were to build a new editor that fits this paradigm, how would you improve on VI/VIM/Emacs? I think that is, in part, an answer to the question. Getting it "right" is hard enough; competing against the likes of VI/VIM/Emacs would be extremely tough -- most people who use these editors are "die hard" fans, and you'd have to give them a compelling reason to move to another editor. Those people who don't use them already are most likely going to stay in a non-modal editor. IMHO of course ;)
Modal editors have the huge advantage to touch typists that you can navigate around the screen without taking your hands off the home row. My wrists only hurt when I'm doing stuff that requires me to move my hand off the keyboard and onto the mouse or arrow keys and back constantly.
Remember that Notepad is a modal editor!
To see this, try typing E, D, I, T; now try typing Alt, E, D, I, T. In the second case the Alt key activates the "menu mode" so the results are different. :oP People seem to cope with that.
(Yes, this is a feature of Windows rather than specifically of Notepad. I think it's a bad feature because it is easy to hit Alt by mistake and I don't think you can turn it off.)
VIM and emacs make about as much user interface design sense as qwerty. We now have available modern computer optimized key layouts (see the colemak layout and the carpalx project); it's only a matter of time before someone does the same for text editors.
I believe Eclipse has Vi bindings and there is a Visual Studio plugin/extension, too (which is called Vi-Emu, or something).
It's worth noting that the vi input model's survival is in part due to its adoption in the POSIX standard, so investing time in learning it means you're guaranteed to be able to work on any system complying with that standard. So, like English, there's power in ubiquity.
As far as alternatives go, I doubt an alternative modal editor would survive a 30-day free trial period, so it's the same reason more people drive automatics than fly jets.
Since this is a question already at odds with the "no subjective issues" mantra, allow me to face that head on in kind.
Modal editing seeks to solve the problems caused by non-modal editing in the first place.
Simply put, with modal editing I can do nearly everything without my hands leaving the keyboard, without tormenting my pinky by reaching for the Control key, and without interrupting my finger placement by hunting for the arrow keys.
Reaching for the mouse completely interrupts the train of thought. I have hated the intense reliance upon it in IntelliJ IDEA and NetBeans for many years, even with vim-style addons.
Most of what you do has to do with fine-tuning with very small increments and changes within the same paragraph of code. Move up, move over, change character, etc., etc. These things are interrupted with control keys and arrows and mouse.
Though not really answering your question, there used to be a "modal-like" way to write Japanese on cell phones:
The first key you hit was a consonant, let's say K, and the next key you hit would then play the role of the vowel (having two consonants in a row is impossible in Japanese).
Though it was mainstream a few years ago, today it's only used by people who really want to type fast.
I think the answer to the question is actually that there are quite a few modal text editors that aren't forks of vi/vim. However, they all use the vi key bindings. Vi users get the key bindings into their muscle memory, so relearning a different set of key bindings would be really hard, and so no one would create a different set of key bindings.
But lots of different editors have re-implemented the vi key bindings from scratch. Just look at this question about IDEs with vi key bindings. At least half of the answers are editors built from scratch that implement vi key bindings, not versions of vi embedded.
I recently came across divascheme - an alternative set of key bindings for DrScheme. This is modal, and part of the justification is to do with RSI - specifically avoiding lots of wrist twisting to hit Ctrl-Alt-Shift-something. The coder has done an informal survey of fellow coders and found that emacs users suffered from more wrist pain than vi coders.
You can see him doing a short talk at LugRadio Live USA. (The video is a series of 5 minute talks and I can't remember how far through it is, sorry - if someone watches it and posts that here I'll edit this post to say when in the video it is).
Note I have not used divascheme.
The invention of the mouse took one mode and moved it to an input device, and context menus took another mode and moved it to a button. Ironically, the advent of touch devices has had the reverse effect, producing multi-modal interfaces:
aware multi-modal - touch and speech are aware of each other and intersect
unaware multi-modal - touch and speech are unaware of each other and conflict
The traditional WIMP interfaces have the basic premise that the information can flow in and out of the system through a single channel or an event stream. This event stream can be in the form of input (mouse, keyboard etc) where the user enters data to the system and expects feedback in the form of output (voice, vibration, visual, etc) when the system responds. But the channel maintains its singularity and can process information one source at a time. For example, in today’s interaction, the computer ignores typed information (through a keyboard) when a mouse button is depressed.
This is very much different from a multimodal interaction where the system has multiple event streams and channels and can process information coming through various input modes acting in parallel, such as those described above. For example, in an IVR system a user can either type or speak to navigate through the menu.
References
User Agent Accessibility Guidelines working group (UAWG): Keyboard Interface use cases
W3C Multimodal Standard Brings Web to More People, More Ways
Next steps for W3C work on Multimodal Standards
The Future of Interaction is Multimodal
Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions
Setting the scope for light-weight Web-based applications
Jan. 26, 1983: Spreadsheet as Easy as 1-2-3
Multi-modal design: Gesture, Touch and Mobile devices...next big thing? | Experience Dynamics