Dialogflow Agent Query with Input String in Japanese / Arabic - dialogflow-es

I am using the Dialogflow V1 APIs to query the agent. There is a scenario where the input query string is not English: the query I want to send is a Japanese or Arabic string.
Sample Request
endpoint = "https://api.dialogflow.com/v1/query?v=20150910";
JSON Data
{
  "originalRequest": {
    "data": {
      "incomingMessage": "朝食時間は何ですか?"
    }
  },
  "lang": "ja",
  "query": "朝食時間は何ですか?",
  "sessionId": "###########"
}
In the Dialogflow agent, it is received as
æé£æéã¯ä½ã§ããï¼
How do I pass it to the query endpoint so that the Dialogflow agent can read the input query in the language I send?
I am also aware that Dialogflow does not support Arabic. I tried with a Japanese string as well and ended up with the same kind of results. I tried changing the "lang" property to "ja", but it still didn't work. Should I encode the "query" property in a certain format?

Unfortunately, Dialogflow does not currently support Arabic. As for Japanese: if you're only interested in having the agent entirely in that language, then you'll need to set the agent's root language to Japanese:
Go to your agent's Settings ⚙ > Languages tab > Choose the language > Save
And if you need a multi-language agent, see the reference docs.
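As for the garbled characters themselves: that pattern (UTF-8 bytes being read as Latin-1) usually means the request body was not sent, or not declared, as UTF-8. Here is a minimal sketch, assuming Node.js with the built-in https module and a placeholder client access token, of sending the query with an explicit charset:

const https = require('https');

const body = JSON.stringify({
  lang: 'ja',
  query: '朝食時間は何ですか?',
  sessionId: '###########' // placeholder, as in the question
});

const req = https.request('https://api.dialogflow.com/v1/query?v=20150910', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer CLIENT_ACCESS_TOKEN', // placeholder token
    // Declaring the charset is the important part for non-ASCII queries
    'Content-Type': 'application/json; charset=utf-8',
    'Content-Length': Buffer.byteLength(body) // byte length, not character count
  }
}, res => {
  res.setEncoding('utf8');
  let data = '';
  res.on('data', chunk => (data += chunk));
  res.on('end', () => console.log(data));
});

req.write(body, 'utf8');
req.end();

Note that Content-Length is computed with Buffer.byteLength: if a client sets it from the string length instead, multi-byte characters get truncated.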

Related

How to create envelope from template with checkbox tab data?

DocuSign has a mountain of great documentation for Java, Ruby, Node.js, C#, and the like, but its documentation is relatively light on sending raw JSON requests. I have a template that has checkbox tabs, and I need to be able to create a document to sign with prefilled checkbox data. No examples exist on how to do that with a raw JSON request.
How do you create an envelope from template with checkbox tab data?
After reverse engineering the format from the /accounts/$accountId/envelopes/$envelopeId/documents/$documentId/tabs endpoint, I was able to discover that the checkboxTabs node of your request must look like this:
"checkboxTabs": [
{
"tabLabel": "ACCESSORIES",
"name": "LIGHT_USB_C_ADAPTER",
"selected": "true"
}
]
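To put that node in context, here is a sketch of a full create-envelope-from-template request. This is an illustration only: the template id, role name, and recipient values are placeholders, and the tabLabel/name values must match the tabs defined on your template:

POST /accounts/$accountId/envelopes

{
  "templateId": "YOUR-TEMPLATE-ID",
  "templateRoles": [
    {
      "roleName": "Signer",
      "name": "Jane Doe",
      "email": "jane.doe@example.com",
      "tabs": {
        "checkboxTabs": [
          {
            "tabLabel": "ACCESSORIES",
            "name": "LIGHT_USB_C_ADAPTER",
            "selected": "true"
          }
        ]
      }
    }
  ],
  "status": "sent"
}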
Glad you found the answer. Just wanted to point out that we do show how to make direct JSON calls in both our reference material and our code examples. We use bash scripts with curl to make these calls, so you may see "Bash" or "CURL" in the title of the language when you look up our code examples.
For your case you can find it here: https://developers.docusign.com/esign-rest-api/code-examples/set-envelope-tab-values
Just wanted to add: you can always visit the corresponding SDK method documentation on the DocuSign website. For the Envelopes::createEnvelope method, refer to https://developers.docusign.com/esign-rest-api/reference/Envelopes/Envelopes/create
There you will see the definition of the checkbox tab and can use the available options accordingly.

Blocked: In Azure Logic App, how can you add a group of checkboxes for each language for a user to choose a caption translation option?

I have created a Logic App to pull the video transcript (VTT) caption file once the videos have been indexed. I want the user to have the ability to choose which languages they would like the captions to be translated into (e.g. English, Spanish, French, Portuguese, etc.).
Is there a way to add a group of checkboxes for each of the languages for the user to choose from? I was looking at this: https://api-portal.videoindexer.ai/docs/services/Operations/operations/Get-Video-Index but it looks like the API supports only one language. In my case, I want to present the user with checkboxes for at least 10 different languages to choose from.
Question: How can we implement this so a user can choose from a checkbox list of languages? Or can I accept a list of languages in my HTTP request and loop over them in my Logic App?
Here is the current workflow of my logic app, where it allows only one language at the moment:
The easiest way would be to pass the list of languages through the HTTP trigger body (it accepts a JSON array). You can generate the request schema by pasting your sample request data (payload) via the "Use sample payload to generate schema" link.
Then, in the "For each" action, you iterate through the languages and call the other actions.
Afterwards, you can test your Logic App, for example in Postman.
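For illustration, the HTTP trigger could accept a body like the following (the field name "languages" and the language codes are example values only; paste your own sample into "Use sample payload to generate schema"):

{
  "languages": ["es-ES", "fr-FR", "pt-BR"]
}

The "For each" action then loops over @triggerBody()?['languages'], and each iteration passes the current item as the translation language to the Video Indexer call.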

Require format for text input in Slack dialog

I recently got into making dialogs for my Slack application and was wondering how to take text input from a user with a required formatting style. I have seen apps like EventBot do this: when you try to make a new event, it opens a dialog and a text input line asks you for Date & Time in the specific format MM/DD/YY HH:mm am/pm. If you don't follow this formatting, a little red warning appears below the text box when you try to submit the dialog.
I can't seem to find any documentation on how to throw this warning when a user doesn't follow your formatting, and I haven't seen any attribute for getting a date from the user.
Does anyone know what method to call or what attribute I need to include to make this kind of restriction?
Thank you
This works a bit differently: there is no API to call.
Instead, your app needs to evaluate the user input (after the dialog is submitted) and can then respond with an error message to Slack if necessary. That error message is then displayed in the dialog.
Here is the relevant part of the official documentation:
If your app finds any errors with the submission, respond with an application/json payload describing the elements and error messages. The API returns these errors to the user in-app, allowing the user to make corrections and submit again.
And here is the example for a response from the official documentation:
{
  "errors": [
    {
      "name": "email_address",
      "error": "Sorry, this email domain is not authorized!"
    },
    {
      "name": "username",
      "error": "Uh-oh. This username has been taken!"
    }
  ]
}
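As a rough sketch of the webhook side, assuming Node.js with Express (the field name date_time and the regular expression are examples only, matching the MM/DD/YY HH:mm am/pm format from the question):

const express = require('express');
const app = express();

// Slack sends dialog submissions form-encoded, with the JSON in a `payload` field
app.use(express.urlencoded({ extended: true }));

// Example pattern for MM/DD/YY HH:mm am/pm -- adjust to your own format
const DATE_RE = /^\d{2}\/\d{2}\/\d{2} \d{1,2}:\d{2} (am|pm)$/i;

app.post('/slack/dialog-submission', (req, res) => {
  const payload = JSON.parse(req.body.payload);
  const dateTime = payload.submission.date_time;

  if (!DATE_RE.test(dateTime)) {
    // Responding with an errors array makes Slack show the red warning
    return res.json({
      errors: [
        { name: 'date_time', error: 'Please use the format MM/DD/YY HH:mm am/pm' }
      ]
    });
  }

  // An empty 200 response tells Slack the submission is valid
  res.status(200).send('');
});

app.listen(3000);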

Dialogflow / Actions on Google: Provide dynamic response data for link out suggestions

I've tried to implement a Dialogflow app (Actions on Google) and it works quite well so far. However, does anyone know if it is possible to define further action parameters/context via Node.js, so I can use them somehow to create dynamic "link out suggestions" in Dialogflow?
In detail: I try to request some parameters from the users, map them onto a set of URLs (implemented as some kind of database), and then write the resulting URL into the JSON response. Goal: include this response URL as $url, #deeplink.url (or similar) in Dialogflow's "Response > Google Assistant > Enter URL".
Is this possible in any way? Thank you in advance.
UPDATE: I also tested the approach of building a rich response, but it does not seem to work. Example:
const richResponse = app
.buildRichResponse()
.addSimpleResponse('Flight from ' + origin + ' to ' + destination)
.addSuggestions("Find your flight:")
.addSuggestions("Basic Card", "List", "Carousel")
.addSuggestionLink("Search now", url);
(app is an instance of require('actions-on-google').DialogflowApp)
However, it seems to stop after "addSimpleResponse".
Yes. You can create a context in your webhook and include parameters in that context that contain the values you want. To use your example, you could create a context "deeplink" and set a parameter in it named "url" with the URL you're going to link to. You should probably also have a "title" parameter, since the Link Out Suggestion and the Basic Card require a title or website name in addition to the link.
Creating a context is fairly simple, but depends on exactly how you're generating the JSON. If you're using the actions-on-google library for Node.js, you would create it with a command something like:
var contextParameters = {
  title: "Example Website!",
  url: "http://example.com/"
};
app.setContext("deeplink", 1, contextParameters);
If you're creating the response JSON yourself, you will have a contextOut array with the context objects you want to set. This portion of the JSON might look something like this:
"contextOut": [
{
"name": "deeplink",
"lifespan": 1,
"parameters": {
"title": "Example Website!",
"url": "http://example.com/"
}
}
]
Then, in the fields for the Link Out or Basic Card, you would reference them as #deeplink.title and #deeplink.url; for a Link Out, the URL field would contain #deeplink.url and the site name #deeplink.title.
However, once you're doing fulfillment, sometimes it becomes easier to generate the VUI and GUI elements in the webhook instead of setting them as part of the Dialogflow builder. This is particularly true if you have a varying number of cards or carousel items that you want to generate.
The Actions on Google documentation provides the various UI elements that can be returned along with sample JSON and node.js code to generate each. These are the same elements that Dialogflow offers through the Actions on Google response tab - just that you can generate them from your webhook instead.
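As a minimal sketch of that webhook-side approach, using the v1 actions-on-google DialogflowApp the question already imports (the action name search_flight and the URL are placeholders):

const { DialogflowApp } = require('actions-on-google');

exports.webhook = (request, response) => {
  const app = new DialogflowApp({ request, response });

  function searchFlight(app) {
    // origin, destination, and the mapped URL would come from your own lookup
    const url = 'http://example.com/flights'; // placeholder deep link
    app.ask(app.buildRichResponse()
      .addSimpleResponse('Here is your flight search.')
      // Link Out Suggestion: a title plus the dynamic URL, no context required
      .addSuggestionLink('Search now', url));
  }

  const actionMap = new Map();
  actionMap.set('search_flight', searchFlight); // placeholder action name
  app.handleRequest(actionMap);
};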

Webhook-generated list: fetch the option selected by the user

I'm pretty new to API.AI and Google Actions. I have a list of items which is generated by fulfillment, and I want to fetch the option selected by the user. I've tried reading the documentation but I can't seem to understand it.
https://developers.google.com/actions/assistant/responses#handling_a_selected_item
I also tried setting follow-up intents, but that won't work; it always ends up giving fallback responses.
I'm trying to search for a product or something, and the result is displayed using the list selector format. I want to fetch the option I selected. This is a search_product intent and I have a follow-up intent choose_product.
You have two options to get information on an Actions on Google list/carousel selection event in API.AI:
Use API.AI's actions_intent_OPTION event
As Prisoner already mentioned, you can create an intent with the actions_intent_OPTION event. This intent will match queries that include a list/carousel selection, as documented here.
Use a webhook
API.AI will pass the list/carousel selection to your webhook, which can be retrieved by either:
A) using the app.getContextArgument() method of Google's Actions on Google Node.js client library (see the sketch after the JSON below), or
B) using the originalRequest JSON attribute in the body of the request to your webhook to retrieve list/carousel selection events. The structure of a list/carousel selection event webhook request will look something like this:
{
  "originalRequest": {
    "data": {
      "inputs": [
        {
          "rawInputs": [
            {
              "query": "Today's Word",
              "inputType": "VOICE"
            }
          ],
          "arguments": [
            {
              "textValue": "Today's Word",
              "name": "OPTION"
            }
          ],
          "intent": "actions.intent.OPTION"
        }
      ],
      ...
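For option A), a minimal sketch, assuming the v1 client library and that API.AI stores the selection in a context named actions_intent_option (the handler body is illustrative only):

// `app` is an ApiAiApp / DialogflowApp instance, as elsewhere in this thread
const optionArg = app.getContextArgument('actions_intent_option', 'OPTION');
if (optionArg) {
  const selectedKey = optionArg.value; // the key of the selected list/carousel item
  app.tell('You selected: ' + selectedKey);
}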
This is a sideways answer to your question - but if you're new to Actions, then it may be that you're not really understanding the best approaches to designing your own Actions.
Instead of focusing on the more advanced response types (such as lists), focus instead on the conversation you want to have with your user. Don't try to limit their responses - expand on what you think you can accept. Focus on the basic conversational elements and your basic conversational responses.
Once you have implemented a good conversation, then you can go back and add elements which help that conversation. The list should be a suggestion of what the user can do, not a limit of what they must do.
With conversational interfaces, we must think outside the dialog box.
Include 'actions_intent_OPTION' in the events section of the intent that you are trying to trigger when an item is selected from a list/carousel (both work).
Then use this code in the function that you trigger in your webhook, instead of getContextArguments() or getItemSelected():
const param = assistant.getArgument('OPTION');
OR
app.getArgument('OPTION');
depending on what you named your ApiAiApp instance, e.g.:
let Assistant = require('actions-on-google').ApiAiAssistant;
const assistant = new Assistant({request: req, response: response});
Then proceed as in the rest of the example in the documentation for list/carousel helpers. I don't know exactly why this works, but this method apparently retrieves the actions_intent_OPTION parameter from the JSON request.
I think the issue is that responses generated by clicking on a list (as opposed to being spoken) end up with an event of actions_intent_OPTION, so API.AI requires you to do one of two things:
Either create an Intent with this Event (and other Contexts, if you wish, to help determine which list is being handled),
Or create a Fallback Intent with the specific Context you want (ie - not your Default Fallback Intent).
The latter seems like the best approach since it will also cover voice responses.
(Or do both, I guess.)
