Are all 4 services Speech to Text (STT), Natural Language Classifier (NLC), Conversation, and Text to Speech (TTS) required for Project Intu?

I am new to IBM Bluemix and Project Intu.
Are all 4 services, Speech to Text (STT), Natural Language Classifier (NLC), Conversation, and Text to Speech (TTS), required for Project Intu?
I have only created one service, "Conversation", and added its credentials into the Intu Gateway.
There is a client, the Intu Gateway. When I try to establish a connection using it, it does not connect; it only says "connecting to parent" and nothing more. I don't have any idea what is going on.

For the expected usage of Project Intu you will want all of those services (STT/NLC/TTS/Conversation). If you are an advanced user you could leave some of those services out, but in the general case you need them.
As of this last week we have been working to make connections to parents more stable. We should be sending out a stability update later this week. Stay tuned.

Related

Google Hangouts Chatbot Create Room

I am writing a service for messaging between rooms using a Hangouts chatbot. Is it possible to create a room with the Hangouts chatbot?
https://developers.google.com/hangouts/chat/concepts
If it were generally possible to create a new room programmatically with the Hangouts API, you could do it with the chatbot.
For example, if you implement the chatbot with Apps Script, you can create a function onMessage(event) and define what should happen for a certain event.message.text (e.g. create a new chat room if the message text contains the string "create new room"), as in the sketch below.
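A minimal Apps Script sketch of that pattern, assuming the standard Hangouts Chat onMessage(event) entry point; the room-creation branch is only a placeholder, since no such API call exists:

    // Hangouts Chat bots built with Apps Script receive each message here.
    function onMessage(event) {
      var text = event.message.text || '';
      if (text.indexOf('create new room') !== -1) {
        // Hypothetical branch: there is currently no API to create a room,
        // so the bot can only acknowledge the request.
        return { text: 'Creating rooms programmatically is not supported yet.' };
      }
      return { text: 'You said: ' + text };
    }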
However, for the moment it is not possible to create a new chat room programmatically.
There is a feature request asking for this functionality, but given the potential for abuse, it is uncertain whether it will be implemented.
See comment #2:
Hello, thank you for the feature request! At the moment rooms cannot be created via the API to prevent abuse such as a bot or script spamming room creation. However, this kind of feature has been discussed internally and may be coming in the future (with limitations). I will update this issue if more information is released.
And comment #25:
Thanks for the input. It's great to see some real life use cases. We fully acknowledge the importance of a CreateRoom(DM) API and we are actively looking into the right permission model to allow bots to do so. Please continue to follow this bug as we will post updates here when appropriate.

Agent Training in DialogFlow

I have written and submitted an app via Dialogflow for Google Home which is now live. If I make use of the training facility in Dialogflow (https://dialogflow.com/docs/training-analytics/training) and match unmatched user questions to existing intents, do I need to resubmit my app to Google for the training to take effect? Unfortunately the documentation is not clear on this point.
I contacted Dialogflow about this question and they confirmed that the app does need to be resubmitted to Google if you use their training function, as this alters the language model.

Port existing custom chatbot as Google Assistant action

We have a framework that implements chatbot / voice assistant logic for handling complex conversations in the health domain. Everything is implemented on our server side. This gives us full control of how responses are generated.
The channel (such as Alexa or Facebook Messenger cloud) calls our webhook:
When the user sends a message, the platform forwards these to our webhook: a hashed user id and the message text (chat message or transcribed voice).
Our webhook responds with an appropriately structured response, which includes the text to be displayed or spoken, possibly choice buttons, some images, etc. It also includes a flag indicating whether the current session has finished or further user input is expected.
Integrating a new channel involves converting the returned response into the form expected by that channel and setting some flags (has voice, has display, etc.).
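For illustration only (the framework is custom, so every field name below is hypothetical), the exchange might look like this:

    // What the channel sends to the webhook:
    var incoming = {
      userId: 'a1b2c3d4',          // hashed user id
      text: 'I have a headache'    // chat message or transcribed voice
    };

    // What the webhook sends back:
    var reply = {
      text: 'How long have you had it?',   // displayed and/or spoken
      buttons: ['Less than a day', 'Several days'],
      images: [],
      sessionFinished: false               // true would end the session
    };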
This simple framework has worked so far for Facebook Messenger, Cortana, Alexa (a little bit of hacking was needed to bypass its intent and slot recognition), and our web chatbot.
We wanted to write a thin layer of support for a Google Assistant action.
Is there any way of passing all the input from the Assistant user intact into a webhook such as the one described above, and taking full control of the way responses are generated and the end of the conversation is determined?
I'd rather not delve into the cumbersome ways API.AI structures a conversation, which seem fine for trivial scenarios such as ordering an Uber but very bad for longer conversations.
Since you already have a Natural Language Understanding layer for your system, you don't need API.AI/Dialogflow, and you can skip this layer completely. (The NLU is useful, even for large and extensive conversations, but doesn't make sense in your case where you've already defined the conversation through other means.)
You'll need to use the Actions SDK (sometimes known as actions.json after the configuration file it uses) to define triggering phrases, but after that you'll get all the text that the user says as part of your conversation through a webhook that delivers JSON to you. You'll reply with JSON that contains the text/audio response, images on cards, possibly suggestion chips, etc.
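A minimal sketch of that arrangement with the actions-on-google Node.js client library (the v1-style ActionsSdkApp API); myBackend stands in for your existing server-side conversation engine and is hypothetical:

    var express = require('express');
    var bodyParser = require('body-parser');
    var ActionsSdkApp = require('actions-on-google').ActionsSdkApp;

    var server = express();
    server.use(bodyParser.json());

    server.post('/webhook', function (request, response) {
      var app = new ActionsSdkApp({ request: request, response: response });

      function mainIntent(app) {
        // Runs when the user invokes the action by its trigger phrase.
        app.ask('Welcome. How can I help?');
      }

      function textIntent(app) {
        // getRawInput() returns the raw text the user said, which can be
        // forwarded unchanged to the existing conversation engine.
        var reply = myBackend(app.getRawInput());
        if (reply.sessionFinished) {
          app.tell(reply.text);   // closes the conversation
        } else {
          app.ask(reply.text);    // keeps the microphone open
        }
      }

      var actionMap = new Map();
      actionMap.set(app.StandardIntents.MAIN, mainIntent);
      actionMap.set(app.StandardIntents.TEXT, textIntent);
      app.handleRequest(actionMap);
    });

    server.listen(8080);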

How to ensure my Google Home Assistant application is not rejected?

Thank you for submitting your assistant app for review!
During our testing, we were unable to complete at least one of the behaviors or actions advertised by your app. Please make sure that a user can complete all core conversational flows listed in your registration information or recommended by your app.
During testing, your app was unable to complete a function detailed in the app's description. The reviewer interacted with the app by saying: "how many iphones were sold in the UK?" and the app replied "I didn't get that. Can you try with other question?" and left the conversation.
How can I resolve the above point so that my Google Assistant action gets approved?
Without seeing the code in question or the intent you think should be handling this in Dialogflow, it is pretty difficult to say, but we can generalize.
It sounds like you have two issues:
Your fallback intent that generated the "I didn't get that" message is closing the conversation. This means that either the end-of-conversation checkbox is checked for that intent in Dialogflow, you're using the app.tell() method when you should be using app.ask() instead, or the JSON you're sending back has the close-conversation flag set to true. (See the sketch after this list.)
You don't have an intent to handle the question about how many iPhones were sold in the UK. This could be because you just don't list anything like that as a sample phrase, or because the two parameters (the one for object type and the one for location) aren't using entity types that would match.
It means that somewhere, either in your app description or in a Dialogflow intent (the reviewers have full access to see what's in your intents), you hinted that "how many iphones were sold in the UK?" would be a valid question. Try changing the description/intents to properly match the restrictions of your app.
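A minimal sketch of the first fix, assuming a Dialogflow fulfillment written with the actions-on-google v1-style client library; 'input.unknown' is the action name Dialogflow assigns to the default fallback intent:

    var DialogflowApp = require('actions-on-google').DialogflowApp;

    exports.webhook = function (request, response) {
      var app = new DialogflowApp({ request: request, response: response });

      function fallback(app) {
        // app.ask() keeps the conversation open so the user can retry;
        // app.tell() here would close the conversation, which is what
        // the reviewer observed.
        app.ask("I didn't get that. Can you try another question?");
      }

      var actionMap = new Map();
      actionMap.set('input.unknown', fallback);
      app.handleRequest(actionMap);
    };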

How to transfer a conversation from a bot to human agents in IBM Watson using Node.js?

I have created a Watson chatbot which answers users' FAQs, using Node.js as middleware. But how can I transfer the conversation from the bot to a human agent?
In this case, you need to know that the Watson Conversation Service is just an API endpoint: you can call the service and create a condition in your backend that identifies whether the user wants to be handed off to a human agent, or anything else you want your application to do.
For example, you can look at the project by IBM developers in the Watson Developer Cloud called conversation-simple, which uses Node.js.
You can simply create an intent condition in your chatbot, like:
if bot recognizes #wantsHumanAgent
response: Do you want to talk to a professional?
Then create an entity with the values yes and no, and after that create a child node with the condition:
if bot recognizes @yesOrNo:yes
response: Please wait! I'll pass you on to an attendant.
Or you can add a link so the user can talk to the attendant, like:
if bot recognizes @yesOrNo:yes
response: The link to talk to an attendant is <a target="_blank" href="https://yourlink.com">Talk to a professional!</a>
Note: you can add custom code to create your own functions or do something else in your application, using this example as a base: for instance, an option in your chat to talk to human agents.
Note: these are just suggestions based on good practice. You need to guide your user through your virtual assistant toward the right kind of assistance.
See more examples built with the Watson Conversation Service.
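A minimal sketch of the backend condition, assuming the watson-developer-cloud Node.js SDK (ConversationV1, as used in conversation-simple); the intent name wantsHumanAgent follows the dialog above, and the actual hand-off to a live-agent system is left as a hypothetical hook:

    var ConversationV1 = require('watson-developer-cloud/conversation/v1');

    var conversation = new ConversationV1({
      username: 'YOUR_USERNAME',
      password: 'YOUR_PASSWORD',
      version_date: ConversationV1.VERSION_DATE_2017_05_26
    });

    function handleUserMessage(text, context, callback) {
      conversation.message({
        workspace_id: 'YOUR_WORKSPACE_ID',
        input: { text: text },
        context: context
      }, function (err, response) {
        if (err) return callback(err);
        var intents = response.intents || [];
        // Backend condition: the top-ranked intent asks for a human agent.
        if (intents.length && intents[0].intent === 'wantsHumanAgent') {
          // Hypothetical hook: route the session to your live-agent system here.
          return callback(null, { handoff: true, text: "Please wait! I'll pass you on to an attendant." });
        }
        callback(null, { handoff: false, text: response.output.text.join(' ') });
      });
    }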
