I am trying to control the game Hill Climb Racing using Python. I am using the code from this answer: How to simulate keys in game with pywinauto. But my system is not recognizing the scan codes for the left arrow (0x4B) and right arrow (0x4D) keys; instead, it registers them as the numbers 4 and 6. I couldn't find any alternate scan codes. Can someone please help me with this? I am using Windows 10.
I'm creating a small game in Python where two players choose 1 of 3 characters and fight each other in turns. When I launch the game in cmd, there is some info that I want to refresh on every turn, so I use "import os" and "os.system('cls')". This clears the whole window. The thing is, I want some information to stay on screen, like how much damage each character dealt in previous rounds, etc. Is that possible at all?
Or maybe it's possible that when I open my program, two cmd windows open, both storing different information and communicating with each other?
ANSI escape codes are your friend. Among other things, they let you move the cursor around the screen. For example, in Python,
print('\x1b[2A\x1b[3C')
will move the cursor two rows up and three columns to the right. This Wikipedia article gives a good synopsis; see the section on CSI sequences. These codes work on most Unix/Linux systems and on newer versions of Windows 10.
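As a sketch of the refresh-in-place idea from the question (assuming a terminal that honors ANSI codes; the helper names here are made up):

```python
import sys

CSI = "\x1b["  # Control Sequence Introducer


def cursor_up(n):
    """Escape sequence that moves the cursor n rows up."""
    return f"{CSI}{n}A"


def clear_line():
    """Escape sequence that erases the entire current line."""
    return f"{CSI}2K"


def redraw_line_above(lines_up, text):
    """Jump up, erase an old status line, rewrite it, then return below."""
    sys.stdout.write(cursor_up(lines_up) + "\r" + clear_line() + text)
    sys.stdout.write(f"{CSI}{lines_up}B" + "\r")  # back down, column 0
    sys.stdout.flush()
```

This way a damage summary printed at the top of the window can be updated each turn without `os.system('cls')` wiping everything.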
How do you simulate a key press in python on a linux machine?
This is for use with an emulator and making a bot which can play a game.
So primarily the 'wasd' keys, space, and so on. This thread here is more or less what I want; however, I believe that solution is Windows-specific, since it uses
ctypes.windll
I believe the main problem, and why this is hard to do, comes down to 'scan codes' versus 'VKs': games tend to ignore virtual-key events because that is not how a real user's input reaches them.
So is there any Linux workaround like the one above?
Any help is appreciated, thank you.
I had the same problem when using pyautogui; I just didn't seem to have the right focus. Selecting the window in a few different ways did not help. Using xdo, however, I managed to get the desired result.
Example:
from xdo import Xdo
xdo = Xdo()
win_id = xdo.get_active_window()
print(xdo.get_window_name(win_id))
xdo.send_keysequence_window(win_id, "Return")
More information:
https://github.com/rshk/python-libxdo
Consider the following: let's say we have a multiplayer game that can be played with one keyboard by two players. However, the first player's control keys are far easier to use than the second player's (for example, the first player uses the arrow keys, while the second one has to use WSAD). The important thing is that we can't change these settings in the game's options menu.
I figured the simplest way would be to plug in a second keyboard and map its arrow keys to the WSAD keys, so both players could use arrow keys when playing the game. But it turns out there isn't any ready-made solution for that problem. I've searched for programs and system options for key mapping, and after my research I've learned that this kind of software, one that would let you change the key mapping for a particular keyboard device, is nowhere to be found.
Does that mean I'd have to write some kind of driver for the particular keyboard whose keys I want mapped to other keys? I have no experience writing device drivers of that kind, and any other solution (including global hooks for keyboard messages, considering I'm using Windows, or programs such as KeyMapper) would affect every keyboard plugged into the PC, not just the desired one.
So, uhm... is there some simpler way? I do have basic coding skills, but writing a driver for a USB keyboard would be too much for me, I guess (I hear writing device drivers isn't that simple, after all).
Well, after some research, I found perfect solution:
http://www.oblita.com/interception.html
This API can do some really cool things with keyboard devices. Key mapping is just one of them.
Hope this helps anyone who encounters problems similar to mine.
I'm looking into making a project with the Kinect to allow my grandma to control her TV without being daunted by the remote. So I've been looking into basic gesture recognition. The aim is to, say, turn the TV volume up by sending the right IR code to the TV when the program detects that the right hand is being "waved."
The problem is, no matter where I look, I can't seem to find a Linux based tutorial which shows how to do something as a result of a gesture. One other thing to note is that I don't need to have any GUI apart from the debug window as this will slow my program down a fair bit.
Does anybody know of something that would let me, in a loop, constantly check for a hand gesture and, when one is detected, control something, with no GUI at all, on Linux? :/
I'm happy to go for any language but my experience revolves around Python and C.
Any help will be accepted with great appreciation.
Thanks in advance
Matt
In principle, this concept is great, but the amount of features a remote offers is going to be hard to replicate using a number of gestures that an older person can memorize. They will probably be even less incentivized to do this (learning new things sucks) if they already have a solution (remote), even though they really love you. I'm just warning you.
I recommend you use OpenNI and NITE. Note that the current version of OpenNI (2) does not have Kinect support. You need to use OpenNI 1.5.4 and look for the SensorKinect093 driver. There should be some gesture code that works for that (googling OpenNI Gesture yields a ton of results). If you're using something that expects OpenNI 2, be warned that you may have to write some glue code.
The basic control set would be Volume +/-, Channel +/-, Power on/off. But that will be frustrating if she wants to go from Channel 03 to 50.
I don't know how low-level you want to go, but a really, REALLY simple gesture recognizer could look for horizontal and vertical swipes of the right hand whose (averaged) velocity exceeds a threshold. Be warned: detected skeletons can get really wonky when people are sitting (that's actually a bit of what my PhD is on).
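A toy version of that velocity-threshold idea might look like the following. The numbers (positions in meters, 30 fps, 1 m/s threshold) are illustrative assumptions, and it is pure Python so it is independent of any Kinect library:

```python
def detect_swipe(xs, fps=30.0, threshold=1.0):
    """Return 'left' or 'right' if the averaged horizontal hand velocity
    exceeds `threshold` (m/s), else None.

    `xs` is a short window of recent hand x-positions, one per frame.
    Averaging over the window smooths out skeleton jitter.
    """
    if len(xs) < 2:
        return None
    # average velocity over the window: displacement per elapsed time
    v = (xs[-1] - xs[0]) * fps / (len(xs) - 1)
    if v > threshold:
        return "right"
    if v < -threshold:
        return "left"
    return None
```

In the main loop you would feed this the right-hand joint positions reported by NITE's skeleton tracker and fire the IR code when it returns a direction.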
I'm doing research for a school project. The project is to develop a program that can change the colors of the screen (of the OS as well as of all programs running on it). The end product is supposed to be a single program that changes the colors based on input (i.e. increasing the presence of a primary color, for instance adding 10% red), as an experimental approach to addressing color blindness. I've already done the theoretical biological research; now I'm looking into the practical deployment of such an application.
I have not settled on a single programming language, as I do not know which would be best for, say, the Windows 7 environment (which language offers the easiest/fastest function calls, for example).
Some examples of function calls I intend to program:
GetColorValues (read data about the colors the screen's pixels are currently displaying)
ProcessColorValues (apply a simple modification to the respective colors returned by the function above)
SetColorValues (write the modified colors back to their respective places on the screen)
I would prefer being able to intercept the data whilst it is being pipelined to the screen, in order to keep the processing smooth.
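The middle step is the easy part to prototype. A minimal sketch of what a ProcessColorValues-style pass could do, assuming the pixels arrive as (R, G, B) tuples in 0-255 (capturing and writing the frame back are the separate, OS-specific problems):

```python
def boost_channel(pixels, channel=0, factor=1.10):
    """Scale one primary (0=R, 1=G, 2=B) by `factor`, clamping to 0-255.

    `pixels` is a flat list of (R, G, B) tuples; returns a new list.
    """
    out = []
    for px in pixels:
        px = list(px)
        px[channel] = min(255, round(px[channel] * factor))
        out.append(tuple(px))
    return out
```

For a full screen you would do this with a vectorized library rather than a Python loop, but the transformation itself is the same.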
Technically now, I don't really know where to start. I don't even know if I'm supposed to look into the OS, or the drivers of the graphics card.
I was hoping someone could guide me and tell me what I should look for, or where I could find these.
Thanks for reading.
Arnaud
The Windows Monitor Configuration functions could be a starting point, for example the SetMonitorRedGreenOrBlueGain function to boost specific colors. You should be able to call these functions from C# or VB.Net using P/Invoke.
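Since the question leaves the language open, here is a rough ctypes sketch of the same Monitor Configuration calls from Python. This is an assumption-laden starting point: it is Windows-only, requires a DDC/CI-capable monitor, and omits error checking on the API return values.

```python
import ctypes
from ctypes import wintypes

# MC_GAIN_TYPE values from highlevelmonitorconfigurationapi.h
MC_RED_GAIN, MC_GREEN_GAIN, MC_BLUE_GAIN = 0, 1, 2

class PHYSICAL_MONITOR(ctypes.Structure):
    _fields_ = [("hPhysicalMonitor", wintypes.HANDLE),
                ("szPhysicalMonitorDescription", wintypes.WCHAR * 128)]

def set_primary_monitor_gain(gain_type, value):
    """Boost one primary on the primary monitor via DDC/CI (Windows only)."""
    user32 = ctypes.windll.user32
    dxva2 = ctypes.windll.dxva2
    MONITOR_DEFAULTTOPRIMARY = 1
    hmon = user32.MonitorFromWindow(user32.GetDesktopWindow(),
                                    MONITOR_DEFAULTTOPRIMARY)
    count = wintypes.DWORD()
    dxva2.GetNumberOfPhysicalMonitorsFromHMONITOR(hmon, ctypes.byref(count))
    monitors = (PHYSICAL_MONITOR * count.value)()
    dxva2.GetPhysicalMonitorsFromHMONITOR(hmon, count, monitors)
    try:
        dxva2.SetMonitorRedGreenOrBlueGain(monitors[0].hPhysicalMonitor,
                                           gain_type, value)
    finally:
        dxva2.DestroyPhysicalMonitors(count, monitors)
```

Note this adjusts the monitor's own hardware gain, not the pixel data in the graphics pipeline, so it changes everything on screen uniformly; per-pixel processing as described in the question would need a different interception point (e.g. a color profile or a capture/redraw loop).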