Tizen: Where can we find the implementation code for Tizen::Ui::SystemUtil::GenerateKeyEvent?

I would like to know how a key event is read and dispatched from the kernel to the Tizen OS. I haven't found any informative documentation on this. While browsing the Tizen wiki I came across the Tizen::Ui::SystemUtil::GenerateKeyEvent function, which is described as generating a key event. I would like to see the implementation of this function and understand how the key event is generated and injected into the input queue.
Please let me know where we can find the source code of this implementation. Also, please point me to any documentation that describes how the event propagates from the keypress until it reaches the application.
Thanks in advance.

The code for Tizen::Ui::SystemUtil::GenerateKeyEvent is found here. It in turn calls the SystemUtilImpl version of GenerateKeyEvent, which you can find here.

Related

Dialogflow - Repeat last sentence (voice) for Social Robot Elderly

I am using actions-on-google and Dialogflow to build a social robot for the elderly.
I was wondering how I can easily repeat the last sentence when asked by the user ("repeat please"), as the senior often doesn't hear the sentence the first time.
One way would be to have repeat follow-up intents in Dialogflow, but this is quite heavy since:
you need to add one after each intent, and I have many
in a multi-user environment you need to keep track of the last sentence for every user ...
Another way would be to take advantage of Dialogflow Contexts. As you send the message, you can also add that message to a context (for example, you can call it "last_message"). You can then have another Intent that takes the "last_message" context as an input context and, if triggered, uses the value saved in the context to repeat it.
However, I still have the problem that I need to add a context to every intent I have, which are many.
Does anyone know how to accomplish this in a quicker way? I found this package, but it is in JS and I need it in Python: https://github.com/SysCoder/VoiceRepeater/pulls .
How do I implement this VoiceRepeater library? Do I put the code under the fulfillment function 'repeat' that I have made and mapped to an intent called 'repeat', which responds to utterances such as 'Sorry, could you repeat that'? Also, where do I install the VoiceRepeater library (npm install voice-repeater --save)?
Using Followup Intents is probably the wrong way to do this. As you note, it is way too heavy for more than a few Intents. It may be useful in certain circumstances if you want the "repeated" message to clarify the response in a different way, but in general, it isn't very useful. (It should also be noted that Followup Intents use Contexts, but in a different way than discussed below.)
You don't need to add the Context to every Intent in the Dialogflow UI as part of the Outgoing Context - you would set this as part of your Fulfillment. It would include a parameter that either contained exactly what you said, or the information you needed to recreate what you said (possibly in a different form, if appropriate). In your "repeat" Intent, you'd read the value that you had saved in this Context, and send it as the output again. If you're using SSML, you may wish to change the speed or volume, if that is appropriate.
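For illustration, here is a rough sketch of that approach using the v1 actions-on-google DialogflowApp API (the same app.setContext / app.getContextArgument / app.ask calls that VoiceRepeater uses below). The helper name and the "last_message" context and parameter names are just placeholders for this example, not part of any library:

// Wrap every normal reply in a helper that also stores it in an outgoing context.
function askAndRemember(app, textToSpeech) {
    app.setContext("last_message", 100, { "text": textToSpeech });
    app.ask(textToSpeech);
}

// Handler for a "repeat" Intent that has "last_message" as an input context.
function repeatLastMessage(app) {
    var context = app.getContext("last_message");
    if (context) {
        app.ask("I said: " + app.getContextArgument("last_message", "text").value);
    } else {
        app.ask("Sorry, I don't remember what I said.");
    }
}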
Update based on new questions
The readme for VoiceRepeater has the basics of what you need to do to use it, but it does assume a little familiarity with Node. In general, though, yes: you install it the way you describe, set up an Intent that captures requests to repeat, and register a handler function (repeatLastStatement(app) in the readme) that handles that Intent by sending a reply through voiceRepeater.lastPromptWithPrefix().
It also may assume you're using version 1 of the actions-on-google library. I haven't dug too deeply into the code, but it looks like it replaces the library's ask function with its own, and I'm not sure how well that works with version 2 of the actions-on-google library.
Unlike VoiceRepeater, multivocal doesn't require you to register handlers specifically, since it tries to hide as much of the boilerplate as possible under the covers. You just need to define the replies that you might want it to use. It uses the Context scheme I outline above to store responses and make them available when the user asks for them to be repeated.
There aren't any videos on using multivocal, but the simple example does include the configuration illustrating how to configure responses for the "multivocal.repeat" Intent. While VoiceRepeater works with the actions-on-google library, multivocal is a complete replacement, offering a more template-based approach to building fulfillment.
However, neither of these directly helps you if you want to implement it in Python. But if you look at the source for VoiceRepeater, you can get a sense of how to implement it yourself in Python.
The key bit is on line 47 where it saves the reply in a context. (It also saves the reply with a prefix message.) It then calls the original function that would send the reply:
app.setContext("last_prompt", 100, {
    "last_prompt": textToSpeech,
    "prefixed_last_prompt": repeatPrefix + lastStatement,
});
originalAsk(response);
Later, in the call to lastPromptWithPrefix(), it uses the contents of the Context to send a reply.
lastPromptWithPrefix() {
    return this.app.getContext("last_prompt") !== null
        ? this.app.getContextArgument("last_prompt", "prefixed_last_prompt").value
        : "um....I don't remember what I said!";
}

XPages: where to get detailed documentation

This is not a technical question, but rather an inquiry about how to get better information regarding the huge number of parameters and properties of the various controls you can put on an XPage.
A concrete example:
I have a button that had the property save=true in its event handler. I added some code in the postSave event so a LotusScript agent could do some processing, and I started getting save conflicts. It took a while, but I managed to figure out that the save=true in the event handler was causing the issue.
I like to know my options, so I wanted to look at what exactly that property was doing (although the name kind of gives it away), but that's when it hit me: where do I look for that kind of information?
Is there a site somewhere that lists all properties we can add and a description of what they are doing?
Maybe my Google skills are not the best, but I couldn't find anything yet...
The three IBM Press XPages books (Mastering XPages 2nd Edition, XPages Portable Command Guide and XPages Extension Library) are key to understanding the implications of the properties. There is the equivalent of Javadocs for the controls (here's the link for the XPages Extension Library one), but they're not intended to go into the kind of depth needed to identify the problem you hit.
These might be useful:
http://xpageswiki.com/
http://www-01.ibm.com/support/knowledgecenter/SSVRGU_9.0.1/com.ibm.designer.domino.ui.doc/wpd_controls_cref.html
Howard

How to invoke "next track" and "star current track" from my Spotify App?

I just started writing a little Spotify App and can't figure out how to invoke the two functions next/star from JavaScript. I just need this simple functionality: from within my App (JavaScript), call a method that skips the current track and plays the next one (if there is any), OR call a method that "stars" (is this really a verb?) the current song.
Is this API DOC the only resource for building my own App? Thanks in advance for any hints on this!
UPDATE: Just found out how to SKIP: sp.trackPlayer.skipToNextTrack();
Unfortunately, how to "star" a track remains unknown.
UPDATE 2: GOT IT! : models.library.starredPlaylist.add(models.player.track); – yep that makes sense.
The correct way to star a track is indeed the function you wrote:
models.library.starredPlaylist.add(models.player.track);
trackPlayer is not a supported object and shouldn't be accessed by developers, since it isn't versioned properly. This means it may break in the future when we make updates to the platform bridge.
We recommend only using the documented classes on our developer website.
https://developer.spotify.com/technologies/apps/docs/beta/
The correct way to skip to the next track is to use:
models.player.next()
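For reference, both calls go through the models module from the legacy Spotify Apps API, which you pull in with sp.require. Here is a minimal sketch, assuming the 1.x Apps API sandbox (getSpotifyApi and sp.require); the button ids are made up for this example:

// Boilerplate from the legacy Spotify Apps API sandbox.
var sp = getSpotifyApi(1);
var models = sp.require('sp://import/scripts/api/models');

// Skip to the next track (the documented alternative to sp.trackPlayer.skipToNextTrack()).
document.getElementById('next-button').addEventListener('click', function () {
    models.player.next();
});

// Star the currently playing track.
document.getElementById('star-button').addEventListener('click', function () {
    models.library.starredPlaylist.add(models.player.track);
});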

How to write text values in masked field?

I need some help with a masked field in a web form. The syntax of the phone field is (___)___-_____. If I execute this code in the Ruby shell:
browser.text_field(:id => 'txtphone').set '7893457889'
... nothing is added to the phone field.
Then I found a solution on a blog where someone said to first unmask the field using this code:
browser.text_field(:id,'txtphone').fire_event("unmask")
and then run the code above again:
browser.text_field(:id => 'txtphone').set '7893457889'
But still nothing happens. Kindly help me out... am I doing it right, or is there still a mistake?
If you could provide a sample of the page HTML, it would be easier to give you an answer that is more likely to work.
Given what you have provided us to work from, we have to go with the normal way that such masked input fields typically work and go from there. Usually pages with this kind of thing call a JavaScript function that is triggered by a specific event. Most often this is an event such as onchange, but it may be something like keypress or any other event that happens when a normal user types or pastes text into the cell.
You likely need to experiment with using the .fire_event method to fire the proper JavaScript event, or, if that fails entirely, make a direct call to execute the proper script.
When doing this, do not confuse the name of a script such as 'applymask' or some such with the JavaScript event that causes that script to be invoked.
The answers to the question How to find out which JavaScript events fired? include some good information on how to use Firebug or the Chrome developer tools to figure out which events are being fired when you interact with an object on the browser screen.
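For example, with the field selected in Chrome's Elements panel, the DevTools console command-line utilities can show which handlers are attached and which events actually fire (these helpers exist only in the DevTools console, not in page scripts):

// Run in the Chrome DevTools console; $0 is the element currently selected in the Elements panel.
getEventListeners($0);                  // list the handlers attached to the phone field
monitorEvents($0, ["key", "mouse"]);    // log key/mouse events while you type or click
// ...interact with the field and note which event triggers the unmasking script...
unmonitorEvents($0);                    // stop logging when done

Once you know which event the page listens for, that is the name to pass to fire_event from Watir.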
Update: instead of responding here to indicate whether this answer was of any use, the OP reposted their question here: Masked Text Box issue. By digging around on the vendor's demo site (this time he had actually posted some of the HTML when we asked for it), I was able to find a solution using watir-webdriver that worked for him.

YUI Uploader 2.6.0 example

I'm trying to simply use some of the examples and instructions regarding the YUI-Uploader, and I'm being frustrated by a number of issues.
The "YUI Library: Uploader" cheat sheet's simple use case doesn't work for me because all the listed methods except addListener() do not exist on the myUploader object.
The example is for version 2.5.1 and includes a method called browse(), which not only was removed in version 2.6.0 but I cannot find any documentation for how to use the 2.5.1 version if I so choose.
I can't find the source FLA to the uploader.swf file so that I could theoretically diagnose all these issues.
Has anyone successfully used the 2.6.0 YUI Uploader, and if so is there some common interfering JavaScript I should avoid, or a better example to follow? Thank you.
Thanks for the replies.
I might note that I finished my "uploader" project before receiving any responses to this.
Part of my problems was due to some of the examples being for v2.5.1, and another part was due to not using an event listener to see when the component was ready. I got the most help from just dissecting what Flickr did.
You can find the source to uploader.swf here at Uploader.as now that the YUI source is available on GitHub.
You've got the wrong link for the simple example; here's the correct one:
YUI Uploader Simple Example
You could also take a look at my implementation if you'd like, it's pretty barebones and works fine using YUI 2.6.0.
Tivac.com YUI Uploader Implementation
It sounds like, for #1, you're trying to call methods on the uploader immediately. You should instead add listeners for all the events it can fire and only do your configuration once the "contentReady" event fires. All the YUI examples and mine do that, so you can check there for a code sample.
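As a rough illustration of that pattern, based on the 2.6.x Uploader examples (a sketch only; the element id, file filters, and upload URL here are placeholders you would adapt to your own page):

// Defer all configuration until the SWF reports that it is ready.
var uploader = new YAHOO.widget.Uploader("uploaderOverlay");

uploader.addListener("contentReady", function () {
    // Only now is it safe to call methods on the uploader object.
    uploader.setAllowMultipleFiles(false);
    uploader.setFileFilters([{ description: "Images", extensions: "*.jpg;*.png;*.gif" }]);
});

uploader.addListener("fileSelect", function (event) {
    // Start the upload for each selected file; the URL is a placeholder.
    for (var fileId in event.fileList) {
        if (event.fileList.hasOwnProperty(fileId)) {
            uploader.upload(fileId, "http://example.com/upload");
        }
    }
});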
