I am currently developing a screen-reader-friendly drag and drop, and I was wondering if it is possible to trigger some kind of buzz sound when an action is not possible. E.g.: the user is on the first item and presses "arrow left". Of course, I could use normal text, but I am curious.
Thanks in advance!
If you can detect that an item can't be moved any further to the left, you can play a sound with plain JavaScript using the Audio object and its play() method.
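A minimal sketch, assuming a short sound file named buzz.mp3 (the file name is a placeholder for whatever sound you ship):

// Create the sound once up front so playback starts without delay.
// "buzz.mp3" is a placeholder for whatever short sound file you ship.
var buzz = new Audio('buzz.mp3');

function moveLeft(currentIndex) {
  if (currentIndex === 0) {
    buzz.currentTime = 0; // rewind in case it is still playing
    buzz.play();          // signal that the action is not possible
    return currentIndex;  // stay on the first item
  }
  return currentIndex - 1;
}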
Whether someone is using a screen reader is irrelevant. There are a variety of reasons why someone might be using one, but since you can't detect whether a screen reader is in use, you shouldn't base your logic on it.
An aria-live region is not necessary in this case.
Motivation: I'm trying to write scripts which send keystrokes to the currently focused window. Right now I use xdotool, which lets me send raw keystrokes. However, I want the exact keystrokes to be a function of the current text around the input caret in the focused window.
Problem: Is there a generic way of reading the state of the text input caret -- both its current position and the text around it? Intuitively, I want the content of the current "text box" as well as the location of the cursor within that text box. Perhaps this is not possible in the general case, but is there a way of doing it that would work for Emacs and Firefox? I'm running Ubuntu Linux.
Further motivation: due to a bad case of RSI I control my computer by voice rather than typing. This works by setting up voice-activated scripts that are triggered by saying different phrases. When dictating English prose, it would be helpful to automatically capitalize words at the beginning of sentences. This automatic capitalization can be accomplished by reading the characters immediately before the input caret, checking if they contain a period, and if so, capitalizing the start of the next phrase that I dictate by voice.
Thanks so much! If anybody can help me here, it would greatly increase my day-to-day accessibility.
Since there is no standard widget toolkit for X11, only a bunch of independently developed arbitrary toolkits, there is no generic way to implement this.
As far as X11 and tools operating on its level (like xdotool) are concerned, there are only windows, either of the InputOutput variety (i.e. visible windows that receive events and can be drawn to) or the Input-only variety, which are invisible and only receive events. There are no further refined "widgets", so to speak. You get a pixel grid, which you can draw to.
Accessibility interfaces are the burden of the toolkits to implement (or, if you don't use a toolkit – then you're a badass – of you, the developer): https://www.freedesktop.org/wiki/Accessibility/
The truly generic way would be to take a screenshot of the currently focused window, employ a computer vision / machine learning based solution to identify the caret, then OCR the line of text around it. And to be honest, IMHO doing it that way would probably be a lot more reliable than hoping for the accessibility interfaces to be properly implemented.
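A minimal sketch of the screenshot-plus-OCR half, as a Node.js script shelling out to xdotool, ImageMagick and Tesseract (all assumed to be installed; the caret-detection step is the genuinely hard part and is left out):

// Grab the currently focused window and OCR its contents.
// Assumes xdotool, ImageMagick ("import") and tesseract are installed.
var execSync = require('child_process').execSync;

var winId = execSync('xdotool getactivewindow').toString().trim();
execSync('import -window ' + winId + ' /tmp/focused.png'); // screenshot
var text = execSync('tesseract /tmp/focused.png stdout').toString();
console.log(text); // locating the caret within this text remains unsolved here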
Using the Google Earth Plugin API, I want to play a tour authored in KML with the touring capability, but let the user modify the camera controls during playback.
Is it possible?
It depends on how much modification you want to allow.
Tour playback is designed to work with the user changing the orientation of the view (via dragging or the camera controls), but not the position. If the user stops changing the view for long enough, the camera will smoothly snap back to the default orientation for that point in the tour. The zoom and panning controls disappear during the tour, but if the user tries to change the camera position via other methods (like the keyboard), the tour will typically be paused.
The Earth API, however, allows you to absorb or change any of those event behaviors, since you can add a listener for mouse and keyboard events and prevent them from processing as usual or act on them in a completely different way.
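For example, a sketch (assuming ge is your initialized GEPlugin instance):

// Swallow mouse presses on the globe so they can't disturb tour playback.
// Assumes "ge" is your initialized GEPlugin instance.
google.earth.addEventListener(ge.getWindow(), 'mousedown', function(event) {
  event.preventDefault(); // suppress the default camera interaction
  // ...or inspect event.getClientX()/getClientY() and react your own way
});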
If you haven't tried it, there's a tour example in the Google Code Playground where you can see what happens with different interactions based on the default event responses.
Finally, if you want really custom tour behavior -- like allowing certain kinds of movement of the camera away from the tour path even as the tour continues -- you will most likely need to write your own camera movement code. Getting the basics of this working isn't too difficult, but getting the right intuitive feel for that kind of interaction is difficult, and probably dataset-dependent. To get started, you can parse the KML directly, find the tour and the tour primitives it contains, and then use the regular camera controls you cited to move between those primitives, adding offsets for any user-supplied movements.
edit: the Earth API tour page cited in the question has an example of getting started with parsing the KML file by getting the plugin to do it for you. You can use this to implement the above suggestion by using the KML DOM walking code to find all the tour primitives (instead of halting as soon as a Tour element is found).
This isn't always the most efficient approach (plugin function calls have overhead, and meanwhile browsers have built-in XML parsing capabilities), but it may be the most straightforward way to start. For many tours, this approach would be perfectly sufficient.
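A sketch of that walking code, assuming ge is the plugin instance, kmlString holds the fetched KML, and findTours is just an illustrative name:

// Let the plugin parse the KML, then walk the feature tree and collect
// every tour instead of stopping at the first one.
// Assumes "ge" is the plugin instance and "kmlString" holds the fetched KML.
function findTours(kmlString) {
  var root = ge.parseKml(kmlString);
  var tours = [];
  function walk(feature) {
    var type = feature.getType();
    if (type === 'KmlTour') {
      tours.push(feature);
    } else if (type === 'KmlDocument' || type === 'KmlFolder') {
      var child = feature.getFeatures().getFirstChild();
      while (child) {
        walk(child);
        child = child.getNextSibling();
      }
    }
  }
  walk(root);
  return tours;
}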
It is possible, but pretty hard to implement and even harder to control well. I have been playing around with this for quite a while now. I have not had much success myself, but here are two examples by others who have made some progress.
Firstly, the underlying principle they are using is based upon the TICK (the plugin's frameend event) - a simple example of it is here:
http://earth-api-samples.googlecode.com/svn/trunk/examples/event-frameend.html
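In essence, the technique looks roughly like this (a sketch, assuming ge is the plugin instance):

// The frameend "tick" technique: every time the plugin finishes drawing a
// frame, nudge the camera a little and request the next frame.
// Assumes "ge" is the plugin instance.
function tick() {
  var camera = ge.getView().copyAsCamera(ge.ALTITUDE_ABSOLUTE);
  camera.setLongitude(camera.getLongitude() + 0.0005); // tiny step per frame
  ge.getView().setAbstractView(camera); // the redraw triggers the next frameend
}
google.earth.addEventListener(ge, 'frameend', tick);
tick(); // kick off the loop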
The two examples are:
http://maps.myosotissp.com/
and
http://racemyrace.com/race.php
Also, here is an example that worked up until recently. I am not sure why it has stopped, but it appears you can still read the JS being used. It is made by the same person who created the racemyrace website:
http://www.thekmz.co.uk/GEPlugin/pathtour/v3/path_tour_v3.htm
If you happen to work something out, I would appreciate you creating a simple example page and sharing the link. It will probably take a while, so if you could look up my email via my profile and notify me, that would be even better.
Good Luck!
Is there a way to trigger an audio file to play in the After Effects timeline when a layer has visible content?
It's a small click sound, and when the text layer's IN point is reached, I simply want the click WAV file to play. Any help would be appreciated.
You have to use scripts to change anything other than keyframe-able layer and effect parameters. You might be able to fake a "click" with an expression that momentarily raises the volume of a constant noise audio layer, using markers to decide when it fires.
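For instance, a sketch of such an expression on the noise layer's Audio Levels property (assuming you place a layer marker wherever a click should sound):

// A sketch for the noise layer's Audio Levels property. Assumes a layer
// marker sits at every point where a click should be heard.
clickDur = 0.05;        // seconds the click stays audible
silent = -48; loud = 0; // volume in dB
level = silent;
if (marker.numKeys > 0) {
  m = marker.nearestKey(time);
  // nearestKey can return a future marker; step back to the previous one
  if (m.time > time && m.index > 1) m = marker.key(m.index - 1);
  if (time >= m.time && time < m.time + clickDur) level = loud;
}
[level, level] // left and right channels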
I think using the start times of other layers is problematic because writing an expression that would check any number of layers would involve some kind of for-loop that could get complicated, and you can't easily pass values or variables among different expressions. The question with expressions in AE is always whether the solution saves you time in the long run over just doing it manually, so it depends on your needs.
The quickest way to do it would probably be to just pre-comp your sound effect and whatever layer it needs to match, so that each time the pre-comp plays, you also get the click.
Try pressing period (.). After Effects doesn't let you listen to audio while scrubbing, because you're not looking at the true frame rate. So if you click RAM Preview and play your timeline, you will hear your audio files. But in your case, if you press period (.), it will override that and play your audio file. I use it when placing a small accent or foley sound.
I'm developing a small J2ME game and I want to create a menu for this application. I imagine the menu as a vertical list of items with a cursor on the left or right side that I can move from item to item, something like this menu example but as a main menu.
What elements should I use to obtain such an effect? I only need advice or links; I will develop it myself.
Thanks in advance!
import java.util.Vector;
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Font;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;
What you plan looks doable. I can't give many links because I don't recall any that would help with stuff like you're doing. Actually, the most useful link for you will probably be the MIDP (JSR 118) API reference - your part is going to be mostly the lcdui package, and especially the Graphics API.
As for advice, no problem. The first thing to note is that there will be more coding and more (much more) testing/debugging than in your prior experiment with the implicit list. If you can think of deadline or timing requirements that may become a problem, keep that prior implicit-list design in mind as a fallback. It won't look as fancy, but it will work safely and correctly.
Another important thing is to decide what kind of devices you are going to target. For a menu like the one you are going to develop, it may be rather difficult to get a consistent look and feel both on a 160x200 basic phone with an ITU-T keypad and on a 400x600 touchscreen smartphone. Below I am going to assume you'll try to target as wide a variety of devices as possible - note that the narrower you can make it, the easier it will be to code and test.
When targeting lots of different devices, it is helpful to use an emulator that can be configured to simulate various display sizes and resolutions, presence or absence of touchscreen input, etc. Keep in mind though that an emulator alone won't fully simulate a real device. To keep your feet on the ground, consider also some regular smoke testing of your application with a real device, preferably using over-the-air (OTA) installation.
Here are some particular API tips that I can think of now.
Use Canvas.getGameAction to translate a pressed key code - that is likely the most reliable and portable way to figure out up/down/select actions for the menu.
Use Canvas.hasPointerEvents to figure out whether there's touch screen support. Users with touchscreen devices may get disappointed if it turns out that your fancy menu can't react when they tap the screen.
Use Font.getHeight and Font.stringWidth to figure out how much space is occupied by menu item text.
Use Image.getGraphics if you want to draw something over the image object.
As I mentioned, you will most likely do a lot of stuff with the lcdui.Graphics API. It's mostly rather simple, but you will probably need to understand the somewhat tricky business of clipping. Good luck.
I don't really know if it is actually possible, but I believe it can be made. How feasible would it be to make a program that recognizes the different sounds bouncing off the screen and turns them into a position that will then, obviously, be fed to the mouse?
I know that it sounds kind of dumb, but lately I've been noticing that a very dull, strong sound is made when touching the screen, and that sound varies when doing so at different positions. Probably the microphone "hears" differently because the screen acts as a drum with the casing. Anyway, what do you think - does anyone have any experience programming with sound?
First of all, most domestic touch screens work by detecting pressure, based on a criss-cross mesh layer underneath the display layer.
However, I have seen an example where a touch interface was added to a plain pane of glass. It used 4 microphones, one at each corner: when you tapped a certain part of the screen, it measured the delay in the sound getting to each microphone, therefore allowing one to triangulate the touch.
This is the methodology you would use. You don't even need to set up the hardware to test it: you could throw up an interface in VB where clicking in a box sends out a circular wave, and then calculate where the pointer is from the times the wave takes to reach the 4 corner points.
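A sketch of that calculation (in JavaScript rather than VB; four sensors at the corners of a unit square, a simple grid search over candidate positions, and mics/locate as illustrative names):

// Triangulating a tap from relative arrival times at four corner sensors.
// Units are chosen so the wave travels at unit speed (time == distance).
var mics = [[0, 0], [1, 0], [0, 1], [1, 1]];

function dist(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

// arrivals[i] = time the wave reached mic i; only differences relative to
// mic 0 matter, so the absolute moment of the tap doesn't need to be known.
function locate(arrivals) {
  var best = null, bestErr = Infinity;
  for (var x = 0; x <= 1; x += 0.01) {
    for (var y = 0; y <= 1; y += 0.01) {
      var err = 0;
      for (var i = 1; i < mics.length; i++) {
        var predicted = dist([x, y], mics[i]) - dist([x, y], mics[0]);
        var observed = arrivals[i] - arrivals[0];
        err += Math.pow(predicted - observed, 2);
      }
      if (err < bestErr) { bestErr = err; best = [x, y]; }
    }
  }
  return best;
}

// Example: simulate a tap at (0.3, 0.7) and recover its position.
var tap = [0.3, 0.7];
var arrivals = mics.map(function (m) { return dist(tap, m); });
console.log(locate(arrivals)); // approximately [0.3, 0.7]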
EDIT
As nikie suggested, drag & drop, or any kind of gesture, would be impossible using the microphone method, as the technique needs a wave of sound to detect the input.
http://computer.howstuffworks.com/question7161.htm
I don't know if this will get you far, but you could investigate the techniques used in MIDI drums for capturing the various nuances of play.