I am doing a project where a regular monitor can be converted into a touchscreen.
For this purpose I have designed a grid of IR sensors and installed them into a frame that can be put around the screen. That concludes the hardware.
What I want to do is control the mouse movement using the grid, so that when the user moves his/her finger inside the frame, the mouse moves on the screen, giving the effect of a touchscreen. I hope I have explained my problem clearly. I am using Windows and MS Visual C++.
If there is any suggestion other than Visual C++, please let me know.
Thank you.
You can use the SetCursorPos function to move the Windows mouse pointer programmatically.
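A minimal sketch of how that could look, assuming your frame reports finger positions as grid cell coordinates (GRID_WIDTH, GRID_HEIGHT, and MoveCursorFromGrid are hypothetical names, and the grid resolution is made up):

```cpp
#include <windows.h>

// Hypothetical resolution of the IR sensor grid; replace with the
// actual number of sensor cells in your frame.
const int GRID_WIDTH  = 64;
const int GRID_HEIGHT = 48;

// Map a finger position reported by the grid to screen pixels and
// move the Windows cursor there.
void MoveCursorFromGrid(int gridX, int gridY)
{
    int screenW = GetSystemMetrics(SM_CXSCREEN);
    int screenH = GetSystemMetrics(SM_CYSCREEN);
    SetCursorPos(gridX * screenW / GRID_WIDTH,
                 gridY * screenH / GRID_HEIGHT);
}

int main()
{
    MoveCursorFromGrid(GRID_WIDTH / 2, GRID_HEIGHT / 2); // center of screen
}
```

To turn a tap into a click you would additionally synthesize button events, e.g. with SendInput and the MOUSEEVENTF_LEFTDOWN / MOUSEEVENTF_LEFTUP flags.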
I have a new project but I'm really not sure where to start, other than I think GNAT Ada would be good. I taught myself Ada three years ago and I have managed some static graphics with GNAT but this is different.
Please bear with me, all I need is a pointer or two towards where I might start, I'm not asking for a solution. My history is in back-end languages that are now mostly obsolete, so graphics is still a bit of a challenge.
So, the project:
With a static background image (a photograph) - and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust the length, as well as to move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line. Once in place, I need to report on the position of the cursor relative to the overall length of the line. I can probably handle the reporting with what I already know but I have no clue as to how to create a graphic that I can slide around over another image. In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that - in fact if I could, I would probably manage to control the line but doing it over an existing image is beyond me.
If I am wrong to choose GNAT Ada for this, please suggest an alternative.
I have looked on Stack Overflow for anything similar but have found answers only for Blackberry and Java, neither of which seems relevant.
For background, this will be a useful means of measuring relative lengths of the features of insect bodies from photographs, hopefully to set up some definitive identification guides for closely-related species.
With a static background image (a photograph)
So first you need a window to put your interface in. You can get this from any GUI framework (link as given by trashgod in the comments).
and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust the length, as well as to move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line.
These are affine transformations. They are commonly employed in low-level graphics rendering. You can, as Zerte suggested, use OpenGL; however, modern OpenGL has a steep learning curve for beginners.
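Just to give a feel for the arithmetic, here is a small library-independent sketch (all names are my own) that rotates a line endpoint about the line's center and scales its distance from it, which covers the rotate/stretch/move operations described above:

```cpp
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Combined affine transform: rotate p around center c by `angle`
// radians and scale its distance from c by `factor`.
Point transform(Point p, Point c, double angle, double factor)
{
    double dx = (p.x - c.x) * factor;
    double dy = (p.y - c.y) * factor;
    return { c.x + dx * std::cos(angle) - dy * std::sin(angle),
             c.y + dx * std::sin(angle) + dy * std::cos(angle) };
}

int main()
{
    Point a = {100, 100}, b = {300, 100}, c = {200, 100};
    // Rotate the segment (a, b) by 90 degrees about its midpoint.
    a = transform(a, c, 3.14159265 / 2, 1.0);
    b = transform(b, c, 3.14159265 / 2, 1.0);
    std::printf("(%g,%g)-(%g,%g)\n", a.x, a.y, b.x, b.y);
}
```

The slider position along the line is then just the parameter t in a + t * (b - a), and reporting t directly gives the cursor's position relative to the overall length of the line, which is the measurement you are after.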
GtkAda includes a binding to the Cairo graphics library, which supports such transformations, so you can create a window with GtkAda with a Cairo surface and then render your image and line on it. Cairo does have a learning curve of its own, and I have never used the Ada binding, so I cannot really give an opinion about how complex this will be.
Another library that fully supports what you want to do is SDL, which has Ada bindings here. The difference from GtkAda is that SDL is a pure graphics drawing library, so you need to "draw" any interactive controls yourself. On the other hand, setting up a window via SDL and drawing things will be somewhat simpler than doing it via GtkAda & Cairo.
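To give an impression of the amount of code involved, here is a minimal SDL2 sketch (written in C++ here for brevity; the Ada binding mirrors this structure, and "photo.bmp" plus the window size are placeholders). It also shows mouse-event handling: dragging with the left button moves one endpoint of the line drawn over the photograph. Error checking is omitted.

```cpp
#include <SDL.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   win = SDL_CreateWindow("measure", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, 800, 600, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    // Load the background photograph ("photo.bmp" is a placeholder).
    SDL_Surface* bmp = SDL_LoadBMP("photo.bmp");
    SDL_Texture* tex = SDL_CreateTextureFromSurface(ren, bmp);
    SDL_FreeSurface(bmp);

    int x1 = 100, y1 = 100, x2 = 400, y2 = 300;   // line endpoints
    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_QUIT)
                running = false;
            else if (e.type == SDL_MOUSEMOTION &&
                     (e.motion.state & SDL_BUTTON_LMASK)) {
                x2 = e.motion.x;                  // drag one endpoint
                y2 = e.motion.y;
            }
        }
        SDL_RenderCopy(ren, tex, nullptr, nullptr);   // photo first
        SDL_SetRenderDrawColor(ren, 255, 0, 0, 255);
        SDL_RenderDrawLine(ren, x1, y1, x2, y2);      // line on top
        SDL_RenderPresent(ren);
    }
    SDL_Quit();
    return 0;
}
```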
SFML, which has also been mentioned in the comments, is on the same level as SDL. I do not know it well enough to give a more informed opinion, but what I said about SDL will most probably also apply to SFML.
In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that - in fact if I could, I would probably manage to control the line but doing it over an existing image is beyond me.
HID event processing is handled by whatever GUI library you use. If you use SDL, you'll get mouse events from SDL; if you use GTK, you'll get mouse events from GTK, and so on.
Need to convert mouse coordinates into PS position or row and column on mainframe emulator.
I'm using WHLLAPI to connect to and automate a mainframe emulator. I need to find the underlying field when the user moves the mouse over, or clicks on, a field on the emulator screen. To identify a field on the mainframe emulator I need to know the row and column, or the PS (presentation space) position. I need to convert the mouse position (in pixels) to an emulator row and column, but there is no API in WHLLAPI that provides this functionality.
I used the WHLLAPI functions "QueryWindowCoordinates" and "WindowStatus" to get the emulator window coordinates and window HWND. I used that handle in the Windows API "ScreenToClient" to get the mouse position relative to the emulator window, but I'm unable to translate those coordinates into emulator rows and columns. I tried many algorithms but couldn't get consistent results. I need to translate the mouse position precisely into a PS position.
The WHLLAPI documentation mentions that the "WindowStatus" API returns font sizes for x and y, but I'm unable to retrieve any value from the Rumba emulator. To get the font height and width I also tried the Windows API "GetTextMetrics", but that was not much help either.
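For reference, the conversion I'm attempting looks like the sketch below, assuming the character-cell size and the pixel origin of the presentation space inside the client area were known; those are exactly the values I cannot retrieve reliably from Rumba.

```cpp
// Convert a mouse position in emulator client coordinates into a
// 1-based row/column, given the size of one character cell and the
// pixel offset of the presentation space inside the client area.
// All parameters would have to come from the emulator (or be
// measured empirically for a given font setting).
void ClientToPS(int clientX, int clientY,
                int originX, int originY,
                int cellW, int cellH,
                int& row, int& col)
{
    col = (clientX - originX) / cellW + 1;
    row = (clientY - originY) / cellH + 1;
    // For an 80-column screen the linear PS position would then be
    // (row - 1) * 80 + col.
}
```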
IBM Personal Communications for Windows offers a "Get Mouse Input" DDE function which returns PS position data (row, column) when the user clicks the mouse. There's another DDE function, Set Mouse Intercept Condition, to establish which mouse click(s) (left, right, middle, single, double, etc.) should be intercepted. I don't see a direct way to capture mere mouse movements using the DDE functions, but it might be possible (if you're very careful in your Windows programming) if you generate simulated, rate limited mouse clicks and only when the mouse pointer is moved within the emulator window.
Perhaps Rumba offers similar functions? Rumba evidently has some DDE functions, but I haven't found any DDE function reference for Rumba publicly available online.
One possible caveat is that the DDE functions are 32-bit (and 16-bit functions are also still supported since there's still some 16-bit code running on 32-bit Windows). You can use the 32-bit functions if you're doing 64-bit Windows programming, but of course you'll need to know how to do that if you don't already. Another caveat is that you probably ought to test whatever you're doing for user accessibility, for example with screen reading tools that aid vision impaired users.
Another possible approach is to embed the whole emulator within your own "wrapper" application since that might give you more programming power and control. IBM offers both ActiveX/OLE-style embedding and Java-style embedding (their "Host Access Class Libraries," a.k.a. HACL). Rumba might offer something broadly similar.
And yet another possible approach is to shift the interactions with these applications toward APIs backing brand-new, more portable user interfaces, usually Web and mobile interfaces. There are myriad ways to do that. If you still need terminal (3270)-driven automation -- maybe because the application source code is lost, or it's otherwise really hard to create useful APIs for it -- there are a variety of ways to shift that automation into the back end. For example, CICS Transaction Server for z/OS comes with multiple terminal automation technologies as standard included features. Look for references to "3270 bridge" and "FEPI" in IBM's Knowledge Center for CICS to explore that range of choices.
I've been looking around for a few weeks trying to find a library that fits my needs.
I'm looking for a way to create what would act as a secondary virtual keyboard and mouse cursor. It doesn't necessarily have to be visible per se, but the idea is that I want my normal cursor and keyboard to remain free for use, while the secondary virtual keyboard and mouse are used for automation in another window.
I'm not entirely sure where to begin on this journey... Any ideas, Stack Overflow community?
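One commonly suggested starting point on Windows -- with the big caveat that many applications ignore synthesized messages and read real input state instead -- is to post mouse and keyboard messages directly to the target window, which leaves the physical cursor and keyboard untouched. A hedged sketch (the window title is a placeholder):

```cpp
#include <windows.h>

// Post a synthetic left click at client coordinates (x, y) to a
// window without moving the real cursor. Works for some programs;
// others query the hardware input state and will ignore this.
void ClickInWindow(HWND hwnd, int x, int y)
{
    LPARAM pos = MAKELPARAM(x, y);
    PostMessage(hwnd, WM_LBUTTONDOWN, MK_LBUTTON, pos);
    PostMessage(hwnd, WM_LBUTTONUP, 0, pos);
}

int main()
{
    // Placeholder title: whatever window you want to automate.
    HWND target = FindWindowA(nullptr, "Untitled - Notepad");
    if (target)
        ClickInWindow(target, 50, 80);
    // Keystrokes can be posted similarly, e.g.
    // PostMessage(target, WM_CHAR, 'a', 0);
}
```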
I'm writing an OpenGL application in Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or reading directly from /dev/input/event*
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen? So I could use the same type of code to place the graphics on the screen and have the mouse pointer and the graphic objects positions always perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen?
If everything goes right, no code at all is drawing the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it's what's known as a "sprite engine" in the hardware: it takes a small picture and a pair of values (x, y) telling where it shall appear on the screen. At every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system is constantly updating the position values based on the input device movements.
Of course there is also graphics hardware that does not have this kind of "sprite engine". But the trick there is to update often, to update fast, and to update late.
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or reading directly from /dev/input/event*
Yes, that happens if you read it and integrate it into your image at the wrong time. The key ingredient to minimizing latency is to draw as late as possible and to integrate as much input for as long as possible before you absolutely have to draw things to meet the V-Sync deadline. And the most important trick is not to draw what happened in the past, but to draw what the state of affairs will be right at the moment the picture appears on screen. I.e., you have to predict the input for the next couple of frames drawn and use that.
The Kalman filter has become the de facto standard method for this.
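To make "predict the input" concrete, here is a deliberately simple constant-velocity extrapolation sketch (all names are made up); a real Kalman filter refines this by weighting the extrapolation against measurement noise instead of using a fixed smoothing factor:

```cpp
struct PointerPredictor {
    double x = 0, y = 0;    // last measured position
    double vx = 0, vy = 0;  // estimated velocity (pixels/second)

    // Feed every new pointer sample with the time since the last one.
    void update(double nx, double ny, double dt)
    {
        if (dt > 0) {
            const double a = 0.5;  // exponential smoothing factor
            vx = a * ((nx - x) / dt) + (1 - a) * vx;
            vy = a * ((ny - y) / dt) + (1 - a) * vy;
        }
        x = nx;
        y = ny;
    }

    // Estimate where the pointer will be `ahead` seconds from now,
    // e.g. the time until the next frame actually hits the display.
    void predict(double ahead, double& px, double& py) const
    {
        px = x + vx * ahead;
        py = y + vy * ahead;
    }
};
```

Drawing your objects at the predicted position rather than at the last reported one is what closes most of the visible gap to the hardware cursor.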
In my application I want to draw polygons using the Windows CreateGraphics method, and later let the user edit the polygon by selecting its points and re-positioning them.
I use the mouse move event to get the new coordinates of the point being moved, and the Paint event to re-draw the polygon. The application works, but when a point is moved the movement is not smooth.
I don't know whether the mouse move event or the paint event is the performance bottleneck.
Can anyone suggest how to improve this?
Make sure that you don't repaint for every mouse move. The proper way to do this is to handle all your input events, modifying the polygon data and setting a flag indicating that a repaint needs to occur (on Windows, possibly just calling InvalidateRect() without calling UpdateWindow()).
You might not have a real performance problem - it could be that you just need to draw to an off-screen DC and then copy that to your window, which will reduce flicker and make the movement seem much smoother.
If you're coding using the Win32 API, look at this for reference.
...and of course, make sure you only invalidate the area that needs to be repainted. Since you're keeping track of the polygons, invalidate only the polygon area (the rectangular union of the before and after states).
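Putting the three suggestions together, a minimal Win32 sketch (the single draggable point, its 8-pixel bounds, and all names are placeholders for your own polygon code): mouse moves only invalidate the union of the before/after rectangles, and WM_PAINT renders through an off-screen DC to avoid flicker.

```cpp
#include <windows.h>
#include <windowsx.h>

static POINT g_pt = { 100, 100 };   // stand-in for a polygon vertex
static bool  g_drag = false;

static RECT PointBounds()
{
    RECT r = { g_pt.x - 8, g_pt.y - 8, g_pt.x + 8, g_pt.y + 8 };
    return r;
}

static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
{
    switch (m) {
    case WM_LBUTTONDOWN: g_drag = true;  return 0;
    case WM_LBUTTONUP:   g_drag = false; return 0;
    case WM_MOUSEMOVE:
        if (g_drag) {
            RECT before = PointBounds();        // dirty area before...
            g_pt.x = GET_X_LPARAM(l);
            g_pt.y = GET_Y_LPARAM(l);
            RECT after = PointBounds(), dirty;  // ...and after the move
            UnionRect(&dirty, &before, &after);
            InvalidateRect(h, &dirty, FALSE);   // no UpdateWindow():
        }                                       // repaint when idle
        return 0;
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(h, &ps);
        RECT rc; GetClientRect(h, &rc);
        // Double buffering: draw off screen, then blit once.
        HDC mem = CreateCompatibleDC(hdc);
        HBITMAP bmp = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
        HBITMAP old = (HBITMAP)SelectObject(mem, bmp);
        FillRect(mem, &rc, (HBRUSH)(COLOR_WINDOW + 1));
        Ellipse(mem, g_pt.x - 8, g_pt.y - 8, g_pt.x + 8, g_pt.y + 8);
        BitBlt(hdc, 0, 0, rc.right, rc.bottom, mem, 0, 0, SRCCOPY);
        SelectObject(mem, old);
        DeleteObject(bmp);
        DeleteDC(mem);
        EndPaint(h, &ps);
        return 0;
    }
    case WM_DESTROY: PostQuitMessage(0); return 0;
    }
    return DefWindowProcA(h, m, w, l);
}

int WINAPI WinMain(HINSTANCE inst, HINSTANCE, LPSTR, int show)
{
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = inst;
    wc.hCursor       = LoadCursor(nullptr, IDC_ARROW);
    wc.lpszClassName = "DragDemo";
    RegisterClassA(&wc);
    HWND h = CreateWindowA("DragDemo", "drag demo", WS_OVERLAPPEDWINDOW,
                           CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                           nullptr, nullptr, inst, nullptr);
    ShowWindow(h, show);
    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```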