VS2012 CodedUI Recording: How to handle MouseWheel? - visual-studio-2012

I have recorded a set of actions on a UI with the integrated Coded UI Test Recorder. It works reasonably well; right now I am going through the code, adding some polling for .Enabled, and I recently found .TryGetClickablePoint, which seems really useful. So it looks like recording is a breeze now (with those few polls added to get the timing right).
One typical use case of the GUI is the mouse wheel (for interacting with a zoomable image, like in Google Maps). It seems those events are not captured.
I could generate those events manually, but guessing the extent of the scrolling by hand would be far from reproducing the actual behavior during recording.
Is there some integrated way?
Would I have to capture the events myself with some external or self-made tool to at least get an idea of how many ticks I made?

For stuff like this I'd recommend using the API directly for your test. You can use the Mouse.MoveScrollWheel method for this. You might have to experiment a little to get the number of ticks right.


Gesture Recognition for my Grandma (Kinect) Linux

I'm looking into making a project with the Kinect to allow my Grandma to control her TV without being daunted by the remote. So, I've been looking into basic gesture recognition. The aim is to, say, turn the TV volume up by sending the right IR code to the TV when the program detects that the right hand is being "waved."
The problem is, no matter where I look, I can't seem to find a Linux-based tutorial that shows how to do something as a result of a gesture. One other thing to note is that I don't need any GUI apart from the debug window, as that would slow my program down a fair bit.
Does anybody know of something that will allow me to constantly check for a hand gesture in a loop and, when one is detected, control something, without the need for any GUI at all, and on Linux? :/
I'm happy to go with any language, but my experience revolves around Python and C.
Any help will be greatly appreciated.
Thanks in advance
Matt
In principle, this concept is great, but the number of features a remote offers is going to be hard to replicate with a set of gestures that an older person can memorize. They will probably be even less incentivized to do this (learning new things sucks) if they already have a solution (the remote), even though they really love you. I'm just warning you.
I recommend you use OpenNI and NITE. Note that the current version of OpenNI (2) does not have Kinect support. You need to use OpenNI 1.5.4 and look for the SensorKinect093 driver. There should be some gesture code that works for that (googling OpenNI Gesture yields a ton of results). If you're using something that expects OpenNI 2, be warned that you may have to write some glue code.
The basic control set would be Volume +/-, Channel +/-, Power on/off. But that will be frustrating if she wants to go from Channel 03 to 50.
I don't know how low-level you want to go, but a really, REALLY simple gesture recognizer could look at horizontal and vertical swipes of the right hand that exceed a velocity threshold (averaged over a few frames). Be warned: detected skeletons can get really wonky when people are sitting (that's actually a bit of what my PhD is on).
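To make that concrete, here is a rough sketch of such a velocity-threshold swipe detector. It is plain Java (the same logic ports directly to Python or C), and it assumes you already receive timestamped right-hand coordinates from the skeleton tracker each frame; the HandSample type, the threshold values and the direction names are made up for illustration.
// Rough sketch: swipe detection by averaged hand velocity over a short window.
// Assumes the skeleton tracker (e.g. OpenNI/NITE) hands you one sample per frame;
// HandSample and all constants below are illustrative, not a real library API.
import java.util.ArrayDeque;
import java.util.Deque;

class HandSample {
    final double x, y;   // right-hand position (world coordinates, y grows upward)
    final long timeMs;   // timestamp of the sample
    HandSample(double x, double y, long timeMs) { this.x = x; this.y = y; this.timeMs = timeMs; }
}

class SwipeDetector {
    private static final double THRESHOLD = 1.2; // metres per second; tune per user
    private static final int WINDOW = 8;         // number of samples to average over
    private final Deque<HandSample> history = new ArrayDeque<HandSample>();

    // Feed one sample per frame; returns "LEFT", "RIGHT", "UP", "DOWN" or null.
    String update(HandSample s) {
        history.addLast(s);
        if (history.size() > WINDOW) history.removeFirst();
        if (history.size() < WINDOW) return null;

        HandSample first = history.getFirst();
        double dt = (s.timeMs - first.timeMs) / 1000.0;
        if (dt <= 0) return null;

        double vx = (s.x - first.x) / dt;         // averaged horizontal velocity
        double vy = (s.y - first.y) / dt;         // averaged vertical velocity

        String dir = null;
        if (Math.abs(vx) >= Math.abs(vy)) {
            if (vx > THRESHOLD) dir = "RIGHT";        // e.g. channel +
            else if (vx < -THRESHOLD) dir = "LEFT";   // e.g. channel -
        } else {
            if (vy > THRESHOLD) dir = "UP";           // e.g. volume +
            else if (vy < -THRESHOLD) dir = "DOWN";   // e.g. volume -
        }
        if (dir != null) history.clear();         // avoid firing again on the next frame
        return dir;
    }
}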

Real-time refreshing in Processing

I am new to Processing; I found it by searching for "draw with coding" and tried it. It seems that every time I modify the code, I have to stop and render it again to see the final result.
Is there any way to get an updated graphic without re-rendering? That would be much more convenient for creating simple figures.
If not, is there any alternative to Processing that can draw a figure with code?
I've used TikZ in LaTeX, but that is just for LaTeX. I want something that lets me draw a figure by coding; I've suffered enough using software like CorelDRAW, which lacks the fundamental elegance of coding.
Thanks a lot!
Please have a look at the FluidForms libraries.
easy to set up
documentation and video tutorials
as long as you don't run into exceptions, you can live-code comfortably
if you prefix public variables with param you also get sliders for free :)
Do check out the video tutorials, especially this one:
Also, if using Python isn't a problem I recommend having a look at:
NodeBox
Field
Python is a brilliant scripting language which makes prototyping/'live coding' easy (although it can be compiled, and it also plays nicely with C/C++), and it is easy to pick up and a joy to use.
In Processing, you must re-run your program to see the changes (graphically), unless you write code that receives input from the user and dynamically adjusts what you are drawing. For creating user interfaces there is, for example, the controlP5 library (http://www.sojamo.de/libraries/controlP5/).
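For example, a minimal controlP5 sketch along these lines lets you tweak a drawing at runtime with a slider instead of re-running it (this assumes a recent controlP5 release installed through the library manager; the variable name radius is just an example):
// Minimal controlP5 example (Processing / Java syntax): adjust the circle at runtime.
// controlP5 automatically links a controller to a sketch field with the same name.
import controlP5.*;

ControlP5 cp5;
float radius = 60;   // the slider named "radius" drives this field

void setup() {
  size(400, 400);
  cp5 = new ControlP5(this);
  cp5.addSlider("radius").setRange(10, 180).setPosition(10, 10);
}

void draw() {
  background(255);
  ellipse(width / 2, height / 2, radius * 2, radius * 2);
}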
It doesn't support "live coding" (at least as far as I know).
You must re-run the code to see the new result.
If live coding is what you're looking for, check out Fluxus (http://www.pawfal.org/fluxus/) or Impromptu (http://en.wikipedia.org/wiki/Impromptu_(programming_environment)).

Animated sprites using Awe6 game framework

I am looking for a way to have an animated character in my game. I would like to have a running animation loop when he is running, a jump animation when he jumps etc.
Does the awe6 framework provide anything like this? Maybe using spritesheets, or separate images for each frame.
If I have to use my own system, are there any popular libraries that can help do this, and that work well with awe6? And how would I use it with the framework?
awe6 is not something that works out of the box, but rather an architecture for you to build on. It is not designed to fulfill your requirements without work from you, and it requires at least a good understanding of how to integrate another framework/target runtime for what you want. They do have a wiki, however; you can start with http://code.google.com/p/awe6/wiki/QuickStart
There are many popular frameworks out there with a lower barrier to entry, such as haxepunk.com, haxeflixel.com, and https://github.com/aduros/flambe. They all have their own strengths and weaknesses for you to weigh, and they can all easily handle what you are asking for.

When playing tours authored in KML, is it possible to dynamically control the camera?

Using the Google Earth plugin API, I want to play a tour authored in KML with the touring capability, but let the user modify the camera controls during playback.
Is it possible?
It depends on how much modification you want to allow.
Tour playback is designed to work with the user changing the orientation of the view (via dragging or the camera controls), but not the position. If the user stops changing the view for long enough, the camera will smoothly snap back to the default orientation for that point in the tour. The zoom and panning controls disappear during the tour, but if the user tries to change the camera position via other methods (like the keyboard), the tour will typically be paused.
The Earth API, however, allows you to absorb or change any of those event behaviors, since you can add a listener for mouse and keyboard events and prevent them from processing as usual or act on them in a completely different way.
If you haven't tried it, there's a tour example in the Google Code Playground where you can see what happens with different interactions based on the default event responses.
Finally, if you want really custom tour behavior -- like allowing certain kinds of movement of the camera away from the tour path even as the tour continues -- you will most likely need to write your own camera movement code. Getting the basics of this working isn't too difficult, but getting the right intuitive feel for that kind of interaction is difficult, and probably dataset-dependent. To get started, you can parse the KML directly, find the tour and the tour primitives it contains, and then use the regular camera controls you cited to move between those primitives, adding offsets for any user-supplied movements.
edit: the Earth API tour page cited in the question has an example of getting started with parsing the KML file by getting the plugin to do it for you. You can use this to implement the above suggestion by using the KML DOM walking code to find all the tour primitives (instead of halting as soon as a Tour element is found).
This isn't always the most efficient approach (plugin function calls have overhead, and meanwhile browsers have built-in XML parsing capabilities), but it may be the most straightforward way to start. For many tours, this approach would be perfectly sufficient.
It is possible, but pretty hard to implement and even harder to control well. I have been playing around with this for quite a while now. I have not had much success myself, but here are two examples by others who have made some progress.
Firstly, the underlying principle they are using is based upon the TICK (the frameend event); a simple example of it is here:
http://earth-api-samples.googlecode.com/svn/trunk/examples/event-frameend.html
The two examples are:
http://maps.myosotissp.com/
and
http://racemyrace.com/race.php
Also, here is an example that worked up until recently; I am not sure why it has stopped, but it appears you can still read the JS being used. It was made by the same person who created the racemyrace website:
http://www.thekmz.co.uk/GEPlugin/pathtour/v3/path_tour_v3.htm
If you happen to work something out, I would appreciate you creating a simple example page and sharing the link. It will probably take a while, so if you could look up my email via my profile and notify me, that would be even better.
Good Luck!

J2ME app menu development

I'm developing a small J2ME game and I want to create a menu for this application. I imagine the menu as a vertical list of items with a cursor on the left or right side that I can move from item to item, something like this menu example but as a main menu.
What elements should I use to obtain such an effect? I only need advice or links; I will develop it myself.
Thanks in advance!
import java.util.Vector;
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Font;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;
What you plan looks doable. I can't give many links because I don't recall any that would help with what you're doing. Actually, the most useful link for you will probably be the MIDP (JSR 118) API reference; your part is going to be mostly the lcdui package, and especially the Graphics API.
As for advice, no problem. The first thing to note is that there will be more coding and more (much more) testing/debugging than in your prior experiment with the implicit list. If you can think of any deadline or timing requirements that may become a problem, just keep that prior implicit-list design in mind as a fallback. It won't look as fancy, but it will work safely and correctly.
Another important thing is to decide what kind of devices you are going to target. For a menu like the one you are going to develop, it may be rather difficult to get a consistent look and feel both on a 160x200 basic phone with an ITU-T keypad and on a 400x600 touchscreen smartphone. Below I am going to assume you'll try to target as wide a variety of devices as possible; note that the narrower you can make that range, the easier it will be to code and test.
When targeting lots of different devices, it is helpful to use an emulator that can be configured to simulate various display sizes and resolutions, presence or absence of touchscreen input, etc. Keep in mind, though, that an emulator alone won't fully simulate a real device. To keep your feet on the ground, also consider regular smoke testing of your application on a real device, preferably using over-the-air (OTA) installation.
Here are some particular API tips that I can think of now.
Use Canvas.getGameAction to handle pressed key codes; that is likely the most reliable/portable way to figure out the up/down and select actions for the menu.
Use Canvas.hasPointerEvents to find out whether there is touch screen support. Users with touch screen devices may be disappointed if it turns out that your fancy menu can't react when they tap on the screen.
Use Font.getHeight and Font.stringWidth to figure out how much space is occupied by the menu item text.
Use Image.getGraphics if you want to draw something over the image object.
As I mentioned, you will most likely do a lot of stuff using the lcdui Graphics API. It's mostly rather simple, but you will probably need to understand the somewhat tricky business of clipping. Good luck.
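To tie those tips together, here is a stripped-down sketch of what such a Canvas-based menu might look like (plain MIDP 2.0 lcdui; the item labels, colours and spacing are placeholders, and pointer/touch handling is left out):
// Stripped-down Canvas menu sketch (MIDP 2.0 lcdui). Labels and colours are
// placeholders; add hasPointerEvents/pointerPressed handling for touch devices.
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Font;
import javax.microedition.lcdui.Graphics;

class MenuCanvas extends Canvas {
    private final String[] items = { "New game", "Options", "Exit" };
    private int selected = 0;
    private final Font font = Font.getDefaultFont();

    protected void paint(Graphics g) {
        g.setColor(0xFFFFFF);                     // clear the background
        g.fillRect(0, 0, getWidth(), getHeight());
        int y = 10;
        for (int i = 0; i < items.length; i++) {
            g.setColor(i == selected ? 0xFF0000 : 0x000000);
            if (i == selected) {
                g.drawString(">", 2, y, Graphics.TOP | Graphics.LEFT);  // simple cursor
            }
            g.drawString(items[i], 2 + font.stringWidth("> "), y, Graphics.TOP | Graphics.LEFT);
            y += font.getHeight() + 4;            // stringWidth/getHeight drive the layout
        }
    }

    protected void keyPressed(int keyCode) {
        int action = getGameAction(keyCode);      // portable mapping of raw key codes
        if (action == UP) {
            selected = (selected + items.length - 1) % items.length;
        } else if (action == DOWN) {
            selected = (selected + 1) % items.length;
        } else if (action == FIRE) {
            // react to the selected item here
        }
        repaint();
    }
}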
