Dialogflow: how to automatically translate user requests via the Google Translate API - dialogflow-es

I'm really stuck on the auto-translation process.
Can someone point me to a clear tutorial on how to automatically translate intents via the Google Translate API based on the user's request?
I've searched all over the internet and haven't found anything clear.
I just need this: if a Spanish user (for example) writes to my chatbot, the source language the Dialogflow agent is programmed in should be automatically translated into the detected language (Spanish in this case).
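For reference, a minimal sketch of the flow being asked about, assuming a Python layer sitting between the user and the agent, an agent built in English, and the google-cloud-translate / google-cloud-dialogflow client libraries; the project and session IDs are placeholders. The idea: detect the user's language, translate the message into the agent's language, call detectIntent, then translate the reply back.

```python
# Sketch: detect the user's language, translate to the agent's language,
# run detectIntent, then translate the reply back to the user's language.
# Assumes: agent built in English, google-cloud-translate and
# google-cloud-dialogflow installed, application default credentials set.
from google.cloud import translate_v2 as translate
from google.cloud import dialogflow

AGENT_LANG = "en"  # language the agent is programmed in (assumption)
translate_client = translate.Client()
sessions_client = dialogflow.SessionsClient()

def handle_user_message(project_id: str, session_id: str, user_text: str) -> str:
    # 1. Detect the user's language (e.g. "es" for Spanish).
    user_lang = translate_client.detect_language(user_text)["language"]

    # 2. Translate the user's text into the agent's language.
    to_agent = translate_client.translate(
        user_text, target_language=AGENT_LANG)["translatedText"]

    # 3. Send the translated text to Dialogflow.
    session = sessions_client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=to_agent, language_code=AGENT_LANG))
    response = sessions_client.detect_intent(
        request={"session": session, "query_input": query_input})
    agent_reply = response.query_result.fulfillment_text

    # 4. Translate the agent's reply back into the user's language.
    if user_lang == AGENT_LANG:
        return agent_reply
    return translate_client.translate(
        agent_reply, target_language=user_lang)["translatedText"]
```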

Related

How do you make sure you're getting the user's correct email in Dialogflow when they are speaking it?

Hi, I'm not a coder and need guidance.
I'm creating a simple skill for Google Assistant in Dialogflow where the goal is to get a user's email. However, when I test it out verbally in the Google Actions console it picks up the wrong email address most of the time (I'll say nhs.com and it thinks I'm saying something different), even though I have put example emails in the entities section.
What is the solution to this? Is it possible to ask permission in Dialogflow to get a user's data? I think Google Assistant says no, you can only do that (account linking) if you build in Google Assistant. Could you ask the user to verbally spell out their email address? Although I have no idea how you would go about doing that.
It is not recommended to ask the user for their email. Emails can have a very complicated structure consisting of characters and numbers. Because of this, Google provides you with the option to retrieve the user's details via account linking. I've listed some options for retrieving an email.
1) Google Sign-in (Requires Code)
Since you said you aren't a coder, it will be a bit challenging to get the user's email easily. Your best option would be to use Google Sign-in account linking. This provides your bot with a flow that asks the user for permission to use their email automatically.
To use this option you might still have to write some code, since I do not know whether Dialogflow supports retrieving the user's email from the webpage when using account linking.
The benefit of Google Sign-in is that you will get the active email that is in their Google profile.
2) Regex entity (Requires some technical knowledge about regex)
Dialogflow supports a feature called Regex Entities. With these entities you can provide a regex that will look through the user input for a pattern. If the user input matches the pattern, it will extract that part of the input. In your case you would need a regex that checks for an email pattern.
With a regex entity the user can be prompted to say their email. With this approach you won't be certain it actually is their real email, and you might have to add a flow to double-check that there weren't any typos in the email (a sketch of such a pattern follows the options below).
3) Email entity (Least technical option)
As Rally mentioned in the comments, Dialogflow also supports an email entity. This can be used to automatically detect an email in your user's input. Though it is an easy option to use, I've noticed that it doesn't always detect every email, and since you can't improve its behavior, it might not be the best choice. It is definitely the least technical option, but it might not always work.
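For option 2, here is a sketch of the kind of pattern a regex entity could use. It is a simplified email regex, not RFC-complete, and the surrounding function is only there to show how such a pattern filters user input:

```python
import re

# Simplified email pattern, roughly what a Dialogflow regexp entity could use.
# Not RFC 5322-complete; it only covers common address shapes.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_email(user_input: str):
    """Return the first email-looking substring in the input, or None."""
    match = EMAIL_RE.search(user_input)
    return match.group(0) if match else None

print(extract_email("sure, it's jane.doe@nhs.net"))  # -> jane.doe@nhs.net
```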

Dialogflow - Twitter Knowledge Base Not Matched

While trying to use the knowledge base beta feature for frequently asked questions, I faced the issue below:
From the Dialogflow console, the knowledge base seems to be working fine (it matches the user request for every question found in the knowledge document).
From Twitter Direct Message, the knowledge base is not being matched against the user's request and Dialogflow returns the Fallback Intent. Note that:
Dialogflow - Twitter integration was done based on the latest release (https://github.com/GoogleCloudPlatform/dialogflow-integrations/tree/master/twitter#readme)
The rest of the intents are being matched normally
The knowledge base is enabled (all detect intent requests should find automated responses using this knowledge base, as per the Google documentation)
Any help is much appreciated.
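This isn't a confirmed cause, but one thing worth checking: knowledge connectors are only exposed through the v2beta1 API, so if the integration's detectIntent calls don't go through v2beta1 and name the knowledge base, it won't be consulted. A hedged sketch of passing it explicitly (the project, knowledge base ID and session ID are placeholders):

```python
# Assumption: the knowledge connector only kicks in when detect_intent goes
# through the v2beta1 API and the knowledge base is named in query_params.
from google.cloud import dialogflow_v2beta1 as dialogflow

PROJECT_ID = "my-project"       # placeholder
KNOWLEDGE_BASE_ID = "MTIzNDU2"  # placeholder knowledge base ID

sessions_client = dialogflow.SessionsClient()
session = sessions_client.session_path(PROJECT_ID, "some-session-id")

knowledge_base_path = dialogflow.KnowledgeBasesClient.knowledge_base_path(
    PROJECT_ID, KNOWLEDGE_BASE_ID)

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="How do I reset my password?",
                              language_code="en"))
query_params = dialogflow.QueryParameters(
    knowledge_base_names=[knowledge_base_path])

response = sessions_client.detect_intent(
    request={"session": session,
             "query_input": query_input,
             "query_params": query_params})
print(response.query_result.fulfillment_text)
```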

Dialogflow query working in console but not when using the embedded URL

I have made a query flow in Dialogflow with four intents, and the last intent uses a webhook to get data from the server side and display the result. It is trained automatically and works perfectly in the Dialogflow console, returning the response and the query answer. The issue is that it does not work when I use the embedded URL: it fails to recognize the name intent and asks me to repeat (the fallback intent). I've removed all intents and built the query again; there are no intents with similar names. Yet it works well in the console and not via the embedded URL.
For the webhook part I've used a Node.js service.
Please help with this issue.
By 'embedded URL', if you mean the Dialogflow Web Demo (DWD), you should keep in mind that DWD can only be used for simple text messages. It does not support messages from a webhook or rich responses. If you still need a web widget to embed your bot in your webpage, you can either create your own (a minimal sketch follows) or use a third-party solution (like Kommunicate). Here's the link regarding the limitations of DWD.
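For the "create your own" route, a minimal sketch of the server side such a widget could call, assuming a Python back-end with Flask and the google-cloud-dialogflow client; the project ID and route are placeholders. Your page would POST the user's text here and render whatever comes back, including what the webhook produced rather than only the plain text DWD shows.

```python
# Minimal sketch of a backend a custom chat widget could call instead of DWD.
# Assumes Flask and google-cloud-dialogflow are installed; PROJECT_ID is a placeholder.
from flask import Flask, jsonify, request
from google.cloud import dialogflow

PROJECT_ID = "my-project"  # placeholder
app = Flask(__name__)
sessions_client = dialogflow.SessionsClient()

@app.route("/chat", methods=["POST"])
def chat():
    body = request.get_json()
    session = sessions_client.session_path(PROJECT_ID, body["session_id"])
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=body["text"], language_code="en"))
    response = sessions_client.detect_intent(
        request={"session": session, "query_input": query_input})
    result = response.query_result
    # Unlike DWD, you decide what to surface: the fulfillment text,
    # the matched intent, or any richer payload your webhook returned.
    return jsonify({
        "fulfillment_text": result.fulfillment_text,
        "intent": result.intent.display_name,
    })
```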

Connecting Alexa to my own NodeJS back-end

I'm back again with a question about NLP. I made my own back-end, which on one side can connect to websites, the Google Assistant and Facebook Messenger, and on the other end to Dialogflow. On the side, it logs interactions and does some other database stuff.
Now I'm trying to connect this back-end to Alexa. I made a project which calls my endpoint. This project has one intent with a parameter that should capture the raw user input, send it to my back-end, process it, and send the parsed response back. I feel like there is no real way to collect and send the raw user input so I can process it myself (in Dialogflow) instead of using Amazon's way of mapping intents and such.
I know Dialogflow can export to Alexa, but this is not an option for me. I really hope one of you can point me in the right direction.
I just need a way to collect the raw user input, and respond in an Alexa accepted response format.
For Actions on Google for example, I'm using a Custom Project Action Package.
Thanks a lot in advance!
To accept or get any user input, you can use @sys.any in Google Assistant (Dialogflow) and AMAZON.SearchQuery in Amazon Alexa.
In Alexa, you have to add a carrier phrase to use AMAZON.SearchQuery, and you can't combine any other slot with AMAZON.SearchQuery in the same sample utterance.
So there are some limitations. I hope this answer helps you.
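A hedged sketch of the Alexa side, assuming one intent with a single AMAZON.SearchQuery slot; the intent name "RawInputIntent", the slot name "query", and my_backend are placeholders, and the handler is shown as a plain function rather than a Lambda entry point. It pulls the raw utterance out of the request and answers in the standard Alexa custom-skill response envelope.

```python
# Sketch: read the raw utterance from an Alexa IntentRequest and reply in the
# Alexa custom-skill response format. Intent/slot names are placeholders.
def handle_alexa_request(alexa_request: dict) -> dict:
    req = alexa_request["request"]

    if req["type"] == "IntentRequest" and req["intent"]["name"] == "RawInputIntent":
        raw_text = req["intent"]["slots"]["query"].get("value", "")
        # Hand raw_text to your own back-end / Dialogflow here.
        reply, end_session = my_backend(raw_text)
    else:
        reply, end_session = "What would you like to say?", False

    # Standard Alexa custom-skill response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": reply},
            "shouldEndSession": end_session,
        },
    }

def my_backend(text: str):
    """Placeholder for the existing back-end call."""
    return f"You said: {text}", False
```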

Port existing custom chatbot as Google Assistant action

We have a framework that implements chatbot / voice assistant logic for handling complex conversations in the health domain. Everything is implemented on our server side. This gives us full control of how responses are generated.
The channel (such as Alexa or Facebook Messenger cloud) calls our webhook:
When the user sends a message, the platform sends these to our webhook: the hashed user id and the message text (a chat message or transcribed voice).
Our webhook responds with the appropriately structured response, which includes text to be displayed or spoken, possibly choice buttons, some images, etc. It also includes a flag indicating whether the current session has finished or user input is expected.
Integrating a new channel involves converting the returned response into the form expected by that channel and setting some flags (has voice, has display, etc.).
This simple framework has worked so far for Facebook Messenger, Cortana, Alexa (a little bit of hacking was needed to abandon its intent and slot recognition), and our web chatbot.
We wanted to write a thin layer of support for Google Assistant action.
Is there any way of passing all the input from Assistant user intact into a webhook such as the one described above and taking full control of the way responses are generated and the end of conversation is determined?
I'd rather not delve into those cumbersome ways API.AI has of structuring a conversation, which seem good for trivial scenarios such as ordering an Uber but very bad for longer conversations.
Since you already have a Natural Language Understanding layer for your system, you don't need API.AI/Dialogflow, and you can skip this layer completely. (The NLU is useful, even for large and extensive conversations, but doesn't make sense in your case where you've already defined the conversation through other means.)
You'll need to use the Actions SDK (sometimes known as actions.json after the configuration file it uses) to define triggering phrases, but after that you'll get all the text that the user says as part of your conversation through a webhook that delivers JSON to you. You'll reply with JSON that contains the text/audio response, images on cards, possibly suggestion chips, etc.
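A hedged sketch of that exchange using the Actions SDK conversation webhook format: the incoming JSON carries the user's raw text under inputs[].rawInputs[].query, and the reply says whether the conversation continues. The shape below is trimmed to the fields discussed, and my_backend is a placeholder for the existing server-side logic.

```python
# Sketch of one Actions SDK (conversation webhook) turn: read the raw user
# text from the request and answer with text, keeping or ending the session.
# Simplified to the fields discussed above; not a complete schema.
def handle_assistant_request(body: dict) -> dict:
    # The raw text of what the user said:
    raw_text = body["inputs"][0]["rawInputs"][0]["query"]

    reply, conversation_over = my_backend(raw_text)

    if conversation_over:
        return {
            "expectUserResponse": False,
            "finalResponse": {
                "richResponse": {
                    "items": [{"simpleResponse": {"textToSpeech": reply}}]
                }
            },
        }
    return {
        "expectUserResponse": True,
        "expectedInputs": [{
            "inputPrompt": {
                "richInitialPrompt": {
                    "items": [{"simpleResponse": {"textToSpeech": reply}}]
                }
            },
            "possibleIntents": [{"intent": "actions.intent.TEXT"}],
        }],
    }

def my_backend(text: str):
    """Placeholder for the existing server-side conversation logic."""
    return f"You said: {text}", False
```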
