I would like to collect multi-touch pointer raw data in my Windows UWP application, so I can do gesture recognition.
I have done this in a Win32 application before by using the GetPointerFrameInfo() function, which retrieves the information for a whole frame of pointer input. However, this function does not seem to be available in UWP.
What is the solution to retrieve the whole frame of pointer input?
For example, when I press the screen with three fingers, drag a short distance, and then release, I receive the following event sequence in my registered pointer handlers (onPointerPressed() / onPointerMoved() / onPointerReleased(), my handler functions):
pointer1 pressed event,
pointer2 pressed event,
pointer3 pressed event,
pointer1 moved event,
pointer2 moved event,
pointer3 moved event,
pointer1 moved event,
pointer2 moved event,
pointer3 moved event,
...
pointer1 released event,
pointer2 released event,
pointer3 released event,
Because these events all arrive sequentially, multi-touch processing is hard: the total number of pointers cannot be known in advance.
I did notice that UWP's PointerPoint class provides a property called FrameId, used to identify the input frame, but I cannot find any method that uses this frame ID to retrieve the whole frame of pointer input.
Yes, you're absolutely right, but unfortunately there is no equivalent to Win32's GetPointerFrameInfo() in UWP; we can only work with the basic pointer events. Given the event sequence you observed in your registered pointer handlers, a common way to solve this is to count the Pressed events that arrive before any Moved events, and reset that count on the Released events; each Pressed event represents one finger pointer. You can refer to my similar case here: How can I get touch input in uwp?
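A minimal sketch of that bookkeeping, here in C++/WinRT (an assumption on my part: handlers registered on a XAML Page; MainPage and m_contacts are placeholder names, and m_contacts would be a page member in a real project):

#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.UI.Input.h>
#include <winrt/Windows.UI.Xaml.Input.h>
#include <unordered_map>

using namespace winrt;
using namespace winrt::Windows::Foundation;
using namespace winrt::Windows::UI::Input;
using namespace winrt::Windows::UI::Xaml::Input;

// PointerId -> latest PointerPoint for each active contact.
std::unordered_map<uint32_t, PointerPoint> m_contacts;

void MainPage::OnPointerPressed(IInspectable const&, PointerRoutedEventArgs const& e)
{
    // One Pressed event arrives per finger; record it by PointerId.
    PointerPoint p = e.GetCurrentPoint(nullptr);
    m_contacts.insert_or_assign(p.PointerId(), p);
}

void MainPage::OnPointerMoved(IInspectable const&, PointerRoutedEventArgs const& e)
{
    PointerPoint p = e.GetCurrentPoint(nullptr);
    auto it = m_contacts.find(p.PointerId());
    if (it != m_contacts.end())
        it->second = p;
    // m_contacts.size() is the current finger count, and Moved events
    // sharing p.FrameId() belong to the same input frame.
}

void MainPage::OnPointerReleased(IInspectable const&, PointerRoutedEventArgs const& e)
{
    // The finger lifted: drop its contact so the count shrinks again.
    m_contacts.erase(e.GetCurrentPoint(nullptr).PointerId());
}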
But if your app won't be published on the Store, there are ways to use Win32 APIs in a UWP app. One is to use VS2015TemplateBrokeredComponents to build a bridge between the UWP app and a traditional desktop app; you can follow the steps here to try your solution out. Alternatively, you can try to use P/Invoke with this Win32 API in your UWP app.
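For reference, the Win32 side you would be bridging to follows a two-call pattern: the first GetPointerFrameInfo call queries the frame size, the second fills the buffer. A sketch of it in a desktop window procedure (requires a Windows 8+ SDK):

#include <windows.h>
#include <vector>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_POINTERDOWN:
    case WM_POINTERUPDATE:
    case WM_POINTERUP: {
        UINT32 pointerId = GET_POINTERID_WPARAM(wParam);
        UINT32 count = 0;
        // First call: ask how many pointers are in this frame.
        if (GetPointerFrameInfo(pointerId, &count, nullptr) && count > 0) {
            std::vector<POINTER_INFO> frame(count);
            // Second call: retrieve the whole frame of pointer input.
            if (GetPointerFrameInfo(pointerId, &count, frame.data())) {
                // frame[i].ptPixelLocation etc. describe every contact.
            }
        }
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}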
Beyond the ideas above, I hope you will submit a request for this feature through the Windows Feedback tool. I personally think this is a good feature request for UWP apps.
Related
I am making an app with three main screens: a reminders list, a home screen, and a chatbot. The chatbot I will integrate using Dialogflow, and I've completed the to-do list. The home screen displays pulse and temperature readings taken from an Arduino over a Bluetooth connection, and if either exceeds or falls below a certain threshold, a call should be initiated. I have implemented the call initiation, and the pulse and temperature values are hardcoded for now. But if I go to another screen, the home screen obviously doesn't do its job. How can I make the home screen's monitoring behave like a main thread that is always running? Basically, even if I am on the chatbot screen, the home screen should keep checking whether temperature and pulse are abnormal and initiate calls if required.
I am not sure what concept this falls under so any help is appreciated.
You can use the Isolate class to achieve this.
An isolate has its own memory and a single thread of execution that runs an event loop, even in the background, until you kill the isolate.
Here is an example of how to use Isolate in Flutter.
There is also a Flutter package available for isolates: flutter_isolate.
Would someone please answer my question?
Does a C++ program (written using Visual Studio) create a separate thread for handling mouse events? Would you please describe this concisely?
Thanks
In Windows, each thread that creates a window (and some that don't create any) receives a message queue; remember that any application has at least one thread, the main one.
This queue is an OS structure that holds every message directed at any window created by that thread: window-management messages, timers, mouse events directed at any of these windows, keyboard events when one of these windows has the keyboard focus, system events, etc.
It is the responsibility of any thread that has a message queue to pump these messages periodically. This is usually done in what is called the main loop of the thread.
This main loop, in its simplest form is:
MSG msg;
while (GetMessage(&msg, 0, 0, 0))  // blocks until a message arrives; returns 0 on WM_QUIT
    DispatchMessage(&msg);         // routes the message to its window's procedure
But it is usually much more convoluted, depending on the complexities of the program.
These two functions work as follows:
GetMessage(&msg, 0, 0, 0) removes one message from the queue and puts it in msg. The zeros mean: do not filter.
DispatchMessage(&msg) handles the message, typically by calling the callback relevant to that particular message. For window messages (mouse and keyboard included), this usually means locating the target window and calling its window procedure.
So, answering your question: mouse messages are handled in the same thread that created the window that receives them. And it processes them one by one.
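To make this concrete, here is a hypothetical window procedure: the mouse messages dispatched by the loop above arrive here, on the very thread that runs that loop.

#include <windows.h>
#include <windowsx.h>   // for GET_X_LPARAM / GET_Y_LPARAM

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_MOUSEMOVE: {
        // Runs on the thread that created hwnd and pumps its queue.
        int x = GET_X_LPARAM(lParam);   // client-area coordinates
        int y = GET_Y_LPARAM(lParam);
        (void)x; (void)y;               // ...react to the mouse here...
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);  // makes GetMessage return 0, ending the main loop
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}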
No; mouse events are delivered to the main UI thread's message loop, along with keyboard input and input from any other peripherals (as well as system events, messages from other processes, etc.).
If you want to create a keyboard and mouse hook in Visual C++ 2005, check this:
http://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/3d9bb875-8e79-4c1e-b2ef-b24503e6abbd/how-to-create-a-keyboard-and-mouse-hook-in-visual-c-2005?forum=windowssdk
How can I generate an event so that the framework invokes its OnSize() message handler in MFC at exactly the moment I need?
Thanks
Use the SendMessage or PostMessage function and send a WM_SIZE message.
I find myself repeating this statement very often: Windows is not an event-driven system; hence, you do not generate events. An event in Windows is an entity used to synchronize threads.
Each window works by processing messages from the system or the application and acting accordingly. These can be predefined messages or messages defined specifically for the application.
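As a small, hypothetical example of an application-defined message (the message name and its meaning are made up):

#include <windows.h>

// Application-defined messages live at WM_APP and above.
constexpr UINT WM_APP_SENSORUPDATE = WM_APP + 1;

// Receiver: handled in the window procedure like any other message.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_APP_SENSORUPDATE) {
        // act on the data carried in wParam/lParam
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

// Sender (possibly another thread): queue it for the window's thread.
// PostMessage(hwnd, WM_APP_SENSORUPDATE, 0, 0);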
I respectfully but strongly disagree with the previous posts. Even though the information was given with good intentions, it reflects bad programming practice.
You should never use SendMessage/PostMessage to change a window's size. Use the Windows API instead:
MoveWindow or SetWindowPos. These send WM_SIZE (and other companion messages) to the window to notify it of the size-change request.
In general:
Never send or post messages that are generated by the system. In most cases this does not work, because the system usually generates additional companion messages that you would not be sending, causing unexpected behavior.
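A minimal MFC sketch of the recommended approach (CMyWnd, ResizeIt, and the 400x300 size are hypothetical; OnSize must be wired up with ON_WM_SIZE() in the message map):

// Somewhere in the application: request the resize through the API.
void ResizeIt(CWnd& myWnd)
{
    myWnd.SetWindowPos(nullptr, 0, 0, 400, 300, SWP_NOMOVE | SWP_NOZORDER);
}

// The framework then routes the resulting WM_SIZE to the handler:
void CMyWnd::OnSize(UINT nType, int cx, int cy)
{
    CWnd::OnSize(nType, cx, cy);
    // cx and cy are the new client width and height
}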
You can use the SetWindowPos function, something like this:
SetWindowPos(NULL, 0, 0, myrect.Width(), myrect.Height(), SWP_FRAMECHANGED | SWP_NOZORDER);
(Note that the size arguments are cx = width first, then cy = height.)
To be more general, the way to synthesize input events from an MFC app is the SendInput function:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms646310(v=vs.85).aspx
Currently, I am able to hook into a Direct3D application and draw custom content onto its surface. However, I would like to suspend this application and then draw something else.
Is this even remotely possible? Something like creating my own Direct3D window on top of that application?
I'm targeting only Windows 7, but the application I want to draw on uses only DirectX 9.
The problem is that I have very little experience with DirectX in general.
Sort of.
You're working with two different elements here. One is quite large but not particularly complex: hooking D3D. The other ("suspending" the app) is simple within that, but you don't quite want what you think you want.
To hook D3D by the simplest method, you need to intercept the call to Direct3DCreate9 and return your own IDirect3D9, which later creates and returns your own IDirect3DDevice9. This gives you full control over the app's render process.
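Sketched out, the proxy entry point might look like this (MyDirect3D9 is a hypothetical wrapper class that would also wrap CreateDevice to return the wrapped device):

#include <windows.h>
#include <d3d9.h>

typedef IDirect3D9* (WINAPI* PFN_Direct3DCreate9)(UINT);

// Exported by the hook DLL in place of the real d3d9.dll entry point.
IDirect3D9* WINAPI Direct3DCreate9(UINT sdkVersion)
{
    // Load the real d3d9 from the system directory, not our proxy.
    HMODULE real = LoadLibraryW(L"C:\\Windows\\System32\\d3d9.dll");
    auto realCreate =
        (PFN_Direct3DCreate9)GetProcAddress(real, "Direct3DCreate9");

    IDirect3D9* realD3D = realCreate(sdkVersion);
    return new MyDirect3D9(realD3D);   // wrapper handed back to the app
}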
In order to "suspend" it, you need to wait for the desired trigger, then in your IDirect3DDevice9::Present, call your own event loop. This will, for all intents and purposes, suspend execution of the original app's code, but not the process itself (allowing your code and event loop to process). There will be some limitations of this, and you may not be able to consume window/Windows events (simply), but it will give you full control and effectively pause the original app.
Note, however, that you must intercept and reroute execution in every thread you want to "suspend": the trick is specific to a single thread, and you don't want physics or AI crunching away while rendering and the UI are paused.
You need to perform your overlay drawing, whatever that may be, during your loop or your IDirect3DDevice9::Present hook, then call the real device's Present method as needed. If you want to run multiple frames of your overlay, call the real Present repeatedly before returning from your Present. Tweak as necessary. Rendering here is done pretty much normally (check out general D3D tutorials for that), but there is one major catch: the device's state is unknown and may be incompatible, yet it must be "untouched" on return.
This is handled simply by caching an IDirect3DStateBlock9 created from the device immediately after creating it. In your Present hook, create another state block capturing the state on entrance, restore the clean state block, run your code, then restore the entrance state block. You can work with any states, off a fresh slate, without damaging the device's state (I use this in practice; it works great).
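A rough sketch of that state-block dance inside the hooked Present, where m_real is the application's real device, m_cleanState is the state block cached at device creation, and DrawOverlay is a placeholder:

HRESULT MyDevice::Present(const RECT* src, const RECT* dst,
                          HWND wnd, const RGNDATA* dirty)
{
    // Snapshot whatever state the app left on the device.
    IDirect3DStateBlock9* appState = nullptr;
    m_real->CreateStateBlock(D3DSBT_ALL, &appState);

    m_cleanState->Apply();     // start from the known-clean slate
    DrawOverlay(m_real);       // render the overlay off a fresh state

    appState->Apply();         // hand the device back "untouched"
    appState->Release();

    return m_real->Present(src, dst, wnd, dirty);
}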
If you want some rather extensive examples of how this works, I'd suggest checking out the Voodoo Shader project, which has full D3D8 and 9 hooks, including everything needed for overlays [/shameless own-project promotion]. Feel free to reuse any of the concepts, or comment with further questions; this certainly isn't all the details that may be useful to you.
This is a very complex thing to accomplish, as it is very much a hack. The only people you see doing such things are Steam, TeamSpeak, Xfire, Fraps, and a few hard-core devs.
There are kits out on the internet that show you how to inject a DLL into the memory space of the target application to achieve such a feat, as well as methods such as proxy DLLs.
Proxy DLL:
http://www.codeguru.com/cpp/g-m/directx/directx8/article.php/c11453
Injection:
http://www.progamercity.net/d3d/372-c-directx9-0-hooking-via-detours.html
Good luck, this will take you a while.
I would like to hook into, intercept, and generate keyboard (make/break) events under Linux before they get delivered to any application. More precisely, I want to detect patterns in the key event stream and be able to discard/insert events into the stream depending on the detected patterns.
I've seen some related questions on SO, but:
either they only deal with how to get at the key events (key loggers, etc.), and not how to manipulate their propagation (they only listen, but don't intercept/generate),
or they use passive/active grabs in X (read more on that below).
A Small DSL
I explain the problem below, but to make it a bit more compact and understandable, first a small DSL definition.
A_: for make (press) key A
A^: for break (release) key A
A^->[C_,C^,U_,U^]: on A^ send a make/break combo for C and then U further down the processing chain (and finally to the application). If there is no -> then there's nothing sent (but internal state might be modified to detect subsequent events).
$X: execute an arbitrary action. This can be sending some configurable key event sequence (maybe something like C-x C-s for emacs), or execute a function. If I can only send key events, that would be enough, as I can then further process these in a window manager depending on which application is active.
Problem Description
Ok, so with this notation, here are the patterns I want to detect and what events I want to pass on down the processing chain.
A_, A^->[A_,A^]: explanation as above; note that the send happens on A^.
A_, B_, A^->[A_,A^], B^->[B_,B^]: basically the same as 1. but overlapping events don't change the processing flow.
A_, B_, B^->[$X], A^: if there was a complete make/break of a key (B) while another key was held (A), X is executed (see above), and the break of A is discarded.
(In principle this is a simple state machine over key events that can generate (multiple) key events as output; see the sketch below.)
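To illustrate, a rough sketch (in C++, with placeholder event and action types) of how pattern 3, with patterns 1 and 2 falling out of the same logic, could be expressed as such a state machine:

#include <functional>
#include <vector>

enum class Ev { APress, ARelease, BPress, BRelease };

struct Machine {
    bool aHeld = false;            // A is currently held
    bool bTappedDuringA = false;   // a full B make/break happened under A
    std::function<void()> actionX; // the "$X" from the DSL

    // Feed one raw event in; the returned events go down the chain.
    std::vector<Ev> feed(Ev e) {
        switch (e) {
        case Ev::APress:  aHeld = true; return {};          // defer A_
        case Ev::BPress:  return {};                        // defer B_
        case Ev::BRelease:
            if (aHeld) { actionX(); bTappedDuringA = true;  // pattern 3: $X
                         return {}; }
            return {Ev::BPress, Ev::BRelease};              // patterns 1/2
        case Ev::ARelease:
            aHeld = false;
            if (bTappedDuringA) { bTappedDuringA = false;   // pattern 3:
                                  return {}; }              // discard A^
            return {Ev::APress, Ev::ARelease};              // patterns 1/2
        }
        return {};
    }
};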
Additional Notes
The solution has to work at typing speed.
Consumers of the modified key event stream run under X on Linux (consoles, browsers, editors, etc.).
Only keyboard events influence the processing (no mouse etc.)
Matching can happen on keysyms (a bit easier), or keycodes (a bit harder). With the latter, I will just have to read in the mapping to translate from code to keysym.
If possible, I'd prefer a solution that works with both USB keyboards as well as inside a virtual machine (could be a problem if working at the driver layer, other layers should be ok).
I'm pretty open about the implementation language.
Possible Solutions and Questions
So the basic question is how to implement this.
I have implemented a solution in a window manager using passive grabs (XGrabKey) and XSendEvent. Unfortunately, passive grabs don't work in this case, as they don't correctly capture B^ in the second pattern above. The reason is that the converted grab ends on A^ and is not continued to B^. A new grab is converted to capture B if it is still held, but only after ~1 sec; otherwise a plain B^ is sent to the application. This can be verified with xev.
I could convert my implementation to use an active grab (XGrabKeyboard), but I'm not sure about the effect on other applications if the window manager holds an active grab on the keyboard all the time. The X documentation refers to active grabs as intrusive and designed for short-term use. If someone has experience with this and there are no major drawbacks to long-term active grabs, then I'd consider this a solution.
I'm willing to look at other layers of key-event processing besides window managers (which operate as X clients). Keyboard drivers or mappings are a possibility, as long as I can solve the above problem with them. This also implies that the solution doesn't have to be a separate application; I'm perfectly fine with having a driver or kernel module do this for me. Be aware, though, that I have never done any kernel or driver programming, so I would appreciate some good resources.
Thanks for any pointers!
Use XInput2 to make the device (the keyboard) floating, then monitor KeyPress and KeyRelease events on that device, and use XTest to regenerate the KeyPress and KeyRelease events.
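A rough sketch of this approach (the device id is an assumption, discoverable with xinput list; link with -lX11 -lXi -lXtst):

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>
#include <X11/extensions/XTest.h>

int main()
{
    Display* dpy = XOpenDisplay(nullptr);
    int deviceId = 10;   // assumption: the physical keyboard's device id

    // Float the keyboard: detach it from its master device so the
    // server stops routing its events to regular clients.
    XIDetachSlaveInfo detach;
    detach.type = XIDetachSlave;
    detach.deviceid = deviceId;
    XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo*)&detach, 1);

    // Listen for key events from the floating device only.
    unsigned char bits[XIMaskLen(XI_LASTEVENT)] = {0};
    XISetMask(bits, XI_KeyPress);
    XISetMask(bits, XI_KeyRelease);
    XIEventMask mask = { deviceId, sizeof(bits), bits };
    XISelectEvents(dpy, DefaultRootWindow(dpy), &mask, 1);
    XSync(dpy, False);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        XGenericEventCookie* c = &ev.xcookie;
        if (XGetEventData(dpy, c) && c->type == GenericEvent) {
            XIDeviceEvent* de = (XIDeviceEvent*)c->data;
            bool press = (c->evtype == XI_KeyPress);
            // Run the pattern-matching state machine here; to pass an
            // event through (or synthesize new ones), re-inject with XTest:
            XTestFakeKeyEvent(dpy, de->detail, press, 0);
            XFlush(dpy);
        }
        XFreeEventData(dpy, c);
    }
}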