Create an Instagram chatbot calling an external API

I've already made an online API which takes some text as input and gives an answer as text. It is based on an ML generative algorithm that completes the input sentence.
Now I want to create a chatbot (ideally on Instagram) and make it call this external API, whatever the input text is:
user> "Hello"
bot> "Hello my name is bot"
What is the simplest way to do this? Can I do it for free?
I've tried Dialogflow but I'm a bit lost; I'm new to chatbots...
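For concreteness, here is a rough sketch of what the "forward every message to the external API" part could look like, assuming Meta's Instagram Messaging API (delivered through a Messenger Platform webhook), a hypothetical completion endpoint at https://example.com/complete, and a placeholder page access token; all names and URLs below are illustrative, not a finished implementation:

const express = require('express');
const fetch = require('node-fetch'); // any HTTP client works

const app = express();
app.use(express.json());

const PAGE_ACCESS_TOKEN = process.env.PAGE_ACCESS_TOKEN; // from your Meta app dashboard

// Meta delivers incoming Instagram messages to this webhook
app.post('/webhook', async (req, res) => {
  for (const entry of req.body.entry || []) {
    for (const event of entry.messaging || []) {
      if (event.message && event.message.text) {
        // 1. Forward the user's text to the external generative API (hypothetical URL)
        const apiRes = await fetch('https://example.com/complete', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ text: event.message.text }),
        });
        const answer = await apiRes.text();

        // 2. Send the answer back through the Graph API Send endpoint
        await fetch(`https://graph.facebook.com/v17.0/me/messages?access_token=${PAGE_ACCESS_TOKEN}`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            recipient: { id: event.sender.id },
            message: { text: answer },
          }),
        });
      }
    }
  }
  res.sendStatus(200);
});

app.listen(3000);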

Related

Issues with attempting to enter intents

I'm having an issue when attempting to enter specific intents based on the value of a property.
I currently have a question that gets asked, which then fires off to the Microsoft Translator via an HTTP request, and from there the translated text is sent to the LUIS API.
After that, I would like to enter an intent based on the top intent that the LUIS API call brought back.
I have the Translator and the LUIS API bringing back values, and I can output these using Send Responses:
However, when I attempt to call an intent based on the value of the property, I just get an Object Reference error:
Is what I'm trying to do possible, and if so, am I going about it entirely the wrong way and causing more issues for myself?
Thanks in advance.
I'm trying to understand exactly what you are trying to achieve. Do I summarize it correctly as follows?
You start a main dialog. In that dialog you take some user input.
You translate the input, and manually send the translated text off to LUIS for intent recognition.
Based on the recognized intent, you want to start a specific sub dialog.
I don't believe you can just 'call an intent'. An intent is the result of a LUIS or RegEx recognizer, which Bot Framework processes automatically. The recognizer runs on every user input, so there is no need to call LUIS yourself with an HTTP request. The recognizer (LUIS or RegEx) is configured on the main dialog properties in Bot Framework Composer:
Although in this case it looks like you are doing the LUIS intent recognition manually, because you want to do translation upfront. To achieve that scenario with the built-in recognizer, you would need translation middleware. There is a short discussion going on here on GitHub about translation middleware for Bot Framework Composer, although the sample code is not ready yet.
While there are no code samples for the translation middleware yet, I believe what could already help you today is to start a sub dialog based on the recognized intent, similar to what you already show in your screenshots.
Basically, instead of "Send a response" at the end of your dialog, you would have something like the following:
My sample here uses user input instead of the recognized intent; you would replace the user input with your intent variable. Based on the recognized intent, you would be able to spin up a specific dialog to handle that recognized intent.
The result would look something like:
About triggers, what you currently configured in your screenshot shows "no editor for null". I believe this might cause the "object reference" issue. Normally it should display a trigger phrase. For example, the below means:
If user inputs the text "triggerphrase"
And the dialog variable 'topintent' was previously set to 'test', then run this trigger.
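For illustration, that trigger would use "triggerphrase" as its trigger phrase, and its Condition field would hold an adaptive expression roughly like the line below (a sketch only; the exact property name depends on where you stored the recognized intent):

dialog.topintent == 'test'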

How to handle user names in Dialogflow?

First of all, I'm new to Dialogflow as well as new to coding in general. I'm trying to build a bot that handles subscription pauses.
I have set up some intents and entities for the following steps:
Greet the user and explain what the bot can do
Request a pause for a service subscription (from a pool of ~10 services)
Ask for start time and end time of the pause (two different values)
Sum up the request and repeat the key values
I'm (almost) happy with it but I want to implement a prompt for a username. I don't know if any of the built-in variables can help me here.
That's what I'd like the conversation to look like:
(User): Hi, I would like to pause my subscription for [SUB_NAME] from [START_DATE] to [END_DATE]
(Assistant): What is your user name for the subscription?
(User): [user_name_123 or UserName123 or USER_NAME] (alphanumeric, not following a certain pattern)
(Assistant): Done. You requested a pause for [SUB_NAME] from [START_DATE] to [END_DATE] for [user_name_123]. Please check your e-mails and confirm your request.
What (I think) I need is a very simple custom variable. In Python I would go for something like this:
user_name = input("What's your user name?")
I'd like to store this as a variable that I can reference with '$'.
Is there any way to do this with Dialogflow?
Also, is it possible to pick up the user name as shown above, i.e. without ML-compatible surrounding sentence structures?
I wouldn't want the conversation to be forced and repetitive, like so:
(Assistant): What's your user name?
(User): My user name is [user_name_123]
If you are using Actions on Google, you can use userStorage to save the user's username and then later access it to perform tasks (in your case, pausing subscriptions).
Assuming your intent returns a username, setting it in storage is as simple as:
const { dialogflow } = require('actions-on-google'); // Actions on Google client library
const app = dialogflow();

app.intent('ask_username', (conv, params) => {
  conv.user.storage.username = params.username; // use a key named '$' (conv.user.storage.$) if you prefer
  conv.ask('OK, what can I help you with?');
});
Then you can simply access the username as:
conv.user.storage.username
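For example, a later intent handler (the name 'pause_subscription' below is just illustrative) could read the stored value back:

app.intent('pause_subscription', (conv) => {
  const username = conv.user.storage.username; // value saved earlier
  conv.ask(`OK ${username}, pausing your subscription.`);
});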
Hope that helps!
You can tag specific words in Dialogflow's training phrases with the type #sys.any, which can capture part of the input. You can then read it as a parameter.
#sys.any is really useful for these kinds of free-form inputs, but it will require more training phrases, since matching only the username becomes harder.
Instead of using usernames, which don't seem to be authenticated against your service, you may want to look at Google Sign-In or OAuth. The recommendation above will work, but it isn't the best way to handle usernames.

Dialogflow parameter entity similar to Alexa's AMAZON.SearchQuery

I've developed an Alexa skill and now I am in the process of porting it over to a Google action. At the center of my Alexa skill, I use AMAZON.SearchQuery slot type to capture free-form text messages. Is there an entity/parameter type that is similar for google actions? As an example, see the following interactions from my Alexa skill:
Alexa, tell my test app to say hello everyone my name is Corey
-> slot value = "hello everyone my name is Corey"
Alexa, tell my test app to say goodbye friends I'm logging off
-> slot value = "goodbye friends I'm logging off"
Yes, you have a few options depending on exactly what you want to accomplish as part of your Action.
Using #sys.any
The most equivalent entity type in Dialogflow is the built-in type #sys.any. To use this, you can create an Intent, give it a sample phrase, and select any of the text that would represent what you want included in the parameter. Then select the #sys.any entity type.
Afterwards, it would look something like this.
You may be tempted to select all the text in the sample phrase. Don't do this, since it messes up the training and parsing. Instead use...
Fallback Intents
The Fallback Intent is something that isn't available for Alexa. It is an Intent that gets triggered if no other Intent would match. (It has some additional abilities when you're using Contexts, but that's another topic.)
Fallback Intents will send the entire contents of what the user said to your fulfillment webhook. To create a Fallback Intent, you can either use the default one that is provided, or, from the list of Intents, select the three-dot menu next to the Create button and then select "Create Fallback Intent".
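As a rough sketch (assuming a fulfillment webhook built with the dialogflow-fulfillment library and the default fallback intent name), the fallback handler sees the raw user query:

const { WebhookClient } = require('dialogflow-fulfillment');

// e.g. deployed as an HTTP Cloud Function
exports.fulfillment = (req, res) => {
  const agent = new WebhookClient({ request: req, response: res });

  const handleFallback = (agent) => {
    const userText = agent.query; // the full text of what the user said
    agent.add(`You said: ${userText}`);
  };

  const intentMap = new Map();
  intentMap.set('Default Fallback Intent', handleFallback);
  agent.handleRequest(intentMap);
};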
So you may be tempted to just create a Fallback Intent if all you want is all the text that the user says. If that is the case, there is an easier way...
Use the Action SDK
If you have your own Natural Language Processing / Understanding (NLP/NLU) system, you don't need Dialogflow in the mix. You just want the Assistant to send you the result of the speech-to-text processing.
You can do this with the Action SDK. In many ways, it is similar to how ASK and Dialogflow work, but it has very basic Intents - most of the time it will just send your webhook a TEXT intent with the contents of what the user has said and let you process it.
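A minimal sketch with the actions-on-google library in Action SDK mode (the prompts are just placeholders):

const { actionssdk } = require('actions-on-google');
const app = actionssdk();

// MAIN fires when the user invokes the Action
app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Hi! Say anything and I will echo it back.');
});

// TEXT fires on every subsequent utterance; `input` is the raw transcription
app.intent('actions.intent.TEXT', (conv, input) => {
  conv.ask(`You said: ${input}`);
});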
Most platform-based conversational AI systems are built around three main concepts:
1. Intent - where all the logic is written
2. Entity - what the intent operates on
3. Response - what the user hears after all the processing is done
There is another important component, the webhook, which is used to interact with an external API.
The basic functionality is the same across platforms. I have already used Dialogflow (developed by Google; it supports most platforms, even Alexa), Alexa, and Watson (developed by IBM).
Remember that to get precise results, providing proper training phrases is very important, since the output depends heavily on the sample input.

Sending specific words to webhook

I'm trying to make an agent that can give me details about movies.
For example, the user says "Tell me about (movie-name)", which sends a POST request to my API with the (movie-name), which then returns the response.
However, I don't understand how to grab the movie name from the user's speech without creating a movieName entity with a list of all the movies out there. I just want to grab the next word the user says after "tell me about" and store it as a parameter. How do I go about achieving that?
Yes, you must create a movieName entity, but you do not need to create a list of all movies. Maybe you are experienced with Alexa, which requires a list of suggested values, but in api.ai you don't need to do that.
I find that api.ai is not very good at figuring out which words are part of a free-form entity like movieName, but hopefully adding enough user expressions will help it with that.
Edit: the entity I was thinking of is '#sys.any', but maybe it would be better to use a list of movie names with the 'automated expansion' feature. I haven't tried that, but it sounds like the way Alexa's custom slots work, which is actually a lot more flexible (just using the list as a guideline) than people seem to think.
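As a rough sketch of the fulfillment side (assuming a dialogflow-fulfillment webhook, a movieName parameter tagged with #sys.any, an intent named 'Tell Me About Movie', and a hypothetical movie API at https://example.com/movies; adapt all of these to your setup):

const { WebhookClient } = require('dialogflow-fulfillment');
const fetch = require('node-fetch');

exports.fulfillment = (req, res) => {
  const agent = new WebhookClient({ request: req, response: res });

  const tellMeAbout = async (agent) => {
    const movieName = agent.parameters.movieName; // filled from the #sys.any parameter
    // Hypothetical endpoint; replace with your own movie API
    const apiRes = await fetch('https://example.com/movies', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ title: movieName }),
    });
    agent.add(await apiRes.text());
  };

  const intentMap = new Map();
  intentMap.set('Tell Me About Movie', tellMeAbout);
  agent.handleRequest(intentMap);
};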

How to find out when a user has entered wrong input when chatting with a bot?

I am developing a Bot Framework application that shows nearby places. When the user enters something like "show me nearby places", I pass the key value "places" to the Google API and it produces the exact results. My question is: when the user enters wrong input like "show nearby places" or "show places nearby me", I want to show the message "please enter correct input". How do I show a user-friendly message like that? Please give me a proper suggestion.
Thanks in advance.
You will have to use an NLP tool such as wit.ai, luis.ai, or api.ai. The jury is out on which is the best tool, so my advice is to try them all and see for yourself.
You will essentially define stories and tell the NLP engine what the components of a statement are. Then, if you pass a statement to the NLP engine, it will parse out the intents and entities for you.
For example your statement is "show me places nearby". Set your intent as 'nearby' and your entity as 'wit/location'. The tool should recognize variants of the above statement.
You can check out the recipe wit.ai have created for it here.
Otherwise, if you just want a string-matching mechanism, check whether the user's message has the substring 'location' and then show nearby places. Check out gupshup.io, which has a Bot Builder that allows you to do this easily. (Disclosure: I work there.)
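A very small sketch of that string-matching approach (the keywords and the showNearbyPlaces helper are illustrative placeholders, not part of any particular SDK):

// Naive keyword check instead of full NLP
function handleMessage(text, showNearbyPlaces) {
  const normalized = text.toLowerCase();
  if (normalized.includes('places') || normalized.includes('nearby')) {
    return showNearbyPlaces(normalized); // your existing Google API lookup
  }
  return 'Please enter correct input, e.g. "show me nearby places".';
}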
