I'm building a productivity chatbot and hosting it on Heroku. Since I want minimal dependency on Azure, I want to avoid the Direct Line APIs. Is it possible to use PubNub instead? Any insights/examples would be very helpful.
Building a Chatbot
You can build a chatbot with PubNub instead of Direct Line and BotBuilder. The following walkthrough shows how to build a chatbot using serverless techniques. The example also includes voice recognition using Google's voice API, so it's a step up from what you're asking; you can include or exclude the voice recognition technology as needed. A minimal publish/subscribe sketch follows the links below.
Walkthrough: https://www.pubnub.com/blog/build-an-80s-chatbot-with-an-npm-package/
Try it live: https://stephenlb.github.io/artificial/
GitHub: https://github.com/stephenlb/artificial
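If you just want the core pattern without the voice layer, here is a minimal sketch of a bot on PubNub: the bot subscribes to a channel and publishes a reply for each user message. This assumes PubNub's JavaScript SDK (v7+); the keys, channel name, and the toy reply() matcher are placeholders, not part of the linked example.

```js
// A minimal sketch, assuming the PubNub JS SDK v7+ and made-up keys/channel.
const PubNub = require('pubnub');

const pubnub = new PubNub({
  publishKey: 'pub-c-...',   // placeholder: your PubNub publish key
  subscribeKey: 'sub-c-...', // placeholder: your PubNub subscribe key
  userId: 'productivity-bot',
});

// Toy intent matcher standing in for a real NLP layer.
function reply(text) {
  if (/remind/i.test(text)) return 'Okay, I will set a reminder.';
  if (/todo|task/i.test(text)) return 'Added to your task list.';
  return "Sorry, I didn't catch that.";
}

pubnub.addListener({
  message: (event) => {
    if (event.publisher === 'productivity-bot') return; // skip our own echoes
    pubnub.publish({
      channel: event.channel,
      message: { text: reply(event.message.text) },
    });
  },
});

pubnub.subscribe({ channels: ['chat.productivity'] }); // placeholder channel
```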
Is it possible, as with the Test Simulator, to embed a Google Action or run a live-chat-style bot on a website with Google Actions responses? Are there any codelabs or third-party platforms that do this?
Yes. Some of the third-party chat windows I know of:
- Smooch
- Kommunicate
You will have to use the Payload response to send specific components (such as cards, quick replies, or images).
There's also this GitHub repository, which lets you set Google Actions replies and displays them in chat:
https://github.com/mishushakov/dialogflow-web-v2
Or you can write your own in React or Vue.
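Whichever window you pick, the custom payload is just JSON that your client agrees to render. Here's a hedged sketch of a webhook response carrying one; everything under `payload` is your own convention, so the chips and image fields below are made-up examples, not a fixed Dialogflow schema.

```js
// Dialogflow v2 webhook response carrying a custom payload for a web chat
// window (e.g. dialogflow-web-v2 or your own React/Vue client).
const webhookResponse = {
  fulfillmentMessages: [
    { text: { text: ['Here is what I found:'] } },
    {
      payload: {
        chips: [{ text: 'Show more' }, { text: 'Start over' }], // quick replies
        image: { src: 'https://example.com/result.png' },       // inline image
      },
    },
  ],
};
console.log(JSON.stringify(webhookResponse, null, 2));
```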
Actions on Google is the platform for Google Assistant developers and has its own library and components. You can use these features and components only in Google Assistant projects, since every platform has different features and capabilities. If you need to create a chatbot with this kind of feature on another platform, you should check that platform's docs, e.g. Facebook, Telegram...
If you want to create a chatbot with rich responses, Dialogflow has its own response types such as Card and Suggestion, so you can build your agent and integrate with Dialogflow directly (not Google Actions).
You can check here for each platform's and Dialogflow's response and payload capabilities.
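As a quick illustration, here is a hedged sketch of those built-in rich response types in a webhook response. Field names follow the Dialogflow v2 fulfillmentMessages format; the card contents are made up.

```js
// Dialogflow v2 fulfillmentMessages using the built-in card and quick-reply
// types; Dialogflow renders these on integrations that support them.
const richResponse = {
  fulfillmentMessages: [
    {
      card: {
        title: 'Weekly report',
        subtitle: 'Tap to open',
        imageUri: 'https://example.com/card.png',
        buttons: [{ text: 'Open', postback: 'https://example.com/report' }],
      },
    },
    { quickReplies: { title: 'Anything else?', quickReplies: ['Yes', 'No'] } },
  ],
};
console.log(JSON.stringify(richResponse, null, 2));
```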
I have developed my chatbot with the PyTorch framework for a college project. The model is working fine, with a log-loss value of 0.5, and is able to answer questions appropriately. I have seen a few productionization suggestions like fast.ai, Flask, and Django, but I want the model to be deployed on Google Assistant so that my end users can use the bot without any external installations. How do I integrate my PyTorch model with Google Assistant via Dialogflow?
Google has published a series of Codelabs to help developers start building actions for the Google Assistant. Each module can be taken standalone or in a learning sequence with other modules.
In each module, the codelabs provide you with end-to-end instructions on how to build Actions from given software requirements and how to test your code. They also teach the necessary concepts and best practices for implementing Actions that give users high-quality conversational experiences.
You can start here.
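As for the PyTorch side: the codelabs cover the Assistant/Dialogflow part, but they don't prescribe how to reach your model. One common pattern, sketched below under assumptions, is to serve the model behind your own HTTP endpoint (e.g. a small Flask app exposing POST /predict) and call it from a Dialogflow fulfillment webhook. MODEL_URL and the { answer } response shape are illustrative, not a fixed API.

```js
// A hedged sketch, not an official integration. Uses global fetch (Node 18+);
// this Express app would be registered as the Dialogflow fulfillment URL.
const express = require('express');
const app = express();
app.use(express.json());

const MODEL_URL = 'http://localhost:5000/predict'; // your Flask/PyTorch server

app.post('/fulfillment', async (req, res) => {
  const userText = req.body.queryResult.queryText; // Dialogflow v2 request field
  const modelRes = await fetch(MODEL_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: userText }),
  });
  const { answer } = await modelRes.json(); // assumed model-server response shape
  // Dialogflow speaks fulfillmentText back on the Assistant, so users need
  // no external installation.
  res.json({ fulfillmentText: answer });
});

app.listen(3000);
```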
This is my first experience using IBM Watson, and I am stuck integrating Watson Conversation with the speech-to-text and text-to-speech API services using Node.js.
I'm done with the conversation part, but I can't find a method to chain:
input speech => output of STT => input of Conversation => output of Conversation => input of TTS => output speech
I have tried multiple ways but still can't get even 1% success. I've followed multiple GitHub repos, even the most-forked one, https://github.com/watson-developer-cloud/node-sdk, and multiple TJBot recipes, etc., still no results.
Can anyone here guide me with the right method?
Does this help? I think this demo is similar to what you are doing; it uses STT, TTS, and Conversation, I believe.
https://github.com/watson-developer-cloud/speech-javascript-sdk/tree/master/examples
https://speech-dialog.mybluemix.net/
https://github.com/nfriedly/speech-dialog
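If you'd rather wire the chain up yourself with the ibm-watson Node SDK, here is a hedged sketch of the STT => Assistant => TTS pipeline. API keys, service URLs, the version date, and the assistant ID are placeholders from your own IBM Cloud service credentials, and the code assumes a single-utterance WAV file rather than streaming.

```js
// A hedged sketch, assuming the ibm-watson Node SDK and one-shot WAV files.
const fs = require('fs');
const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
const AssistantV2 = require('ibm-watson/assistant/v2');
const TextToSpeechV1 = require('ibm-watson/text-to-speech/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

const stt = new SpeechToTextV1({
  authenticator: new IamAuthenticator({ apikey: 'STT_APIKEY' }),
  serviceUrl: 'STT_URL',
});
const assistant = new AssistantV2({
  version: '2021-06-14',
  authenticator: new IamAuthenticator({ apikey: 'ASSISTANT_APIKEY' }),
  serviceUrl: 'ASSISTANT_URL',
});
const tts = new TextToSpeechV1({
  authenticator: new IamAuthenticator({ apikey: 'TTS_APIKEY' }),
  serviceUrl: 'TTS_URL',
});

async function speechToSpeech(assistantId, inFile, outFile) {
  // 1. input speech => STT => text
  const sttRes = await stt.recognize({
    audio: fs.createReadStream(inFile),
    contentType: 'audio/wav',
  });
  const text = sttRes.result.results[0].alternatives[0].transcript;

  // 2. text => Conversation/Assistant => reply text
  const { result: session } = await assistant.createSession({ assistantId });
  const msgRes = await assistant.message({
    assistantId,
    sessionId: session.session_id,
    input: { message_type: 'text', text },
  });
  const reply = msgRes.result.output.generic[0].text;

  // 3. reply text => TTS => output speech
  // (for audio/wav the SDK docs suggest repairing the stream's WAV header)
  const ttsRes = await tts.synthesize({
    text: reply,
    accept: 'audio/wav',
    voice: 'en-US_AllisonV3Voice',
  });
  ttsRes.result.pipe(fs.createWriteStream(outFile));
}

speechToSpeech('YOUR_ASSISTANT_ID', 'question.wav', 'answer.wav');
```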
There are some great examples that you can download and play around with on the Watson Starter Kits page.
Create a few of them and download the code, and then plunder what you need for your app or use one of the starter kits as the beginning of your app.
Starter kits on the page linked above that I think can help:
Watson Speech to Text Basic
Watson Assistant Basic
Watson Text to Speech Basic
Each of the starter kits listed above is available in Node and has a README.md file to help you set everything up.
I have built a bot using the Microsoft Bot Framework. Now I want to connect it to channels that are not supported by the Microsoft Bot Connector, so I need to build an interface (or a substitute for the Bot Connector) to connect to those channels. As I am using the Bot Framework SDK (Node.js), what is the best approach to expose my bot engine's endpoint to other connectors/channels?
The Bot Framework has a mechanism specifically for this scenario, called Direct Line. Essentially, you build the UI interface yourself, but use the Direct Line API to forward events to/from the Bot Connector. You can use the Direct Line REST API, or find an npm package where someone has taken care of the underlying plumbing for you, as in the botframework-directline.js package. Microsoft has some node.js BotBuilder-Samples on their GitHub site too.
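To give a feel for the plumbing, here is a hedged sketch of the Direct Line 3.0 REST flow with plain HTTP calls (global fetch, so Node 18+). DIRECT_LINE_SECRET comes from your bot's Direct Line channel configuration, and 'user1' is an arbitrary client-side user id.

```js
// Direct Line 3.0 REST: start a conversation, send an activity, poll replies.
const BASE = 'https://directline.botframework.com/v3/directline';
const SECRET = process.env.DIRECT_LINE_SECRET;

async function run() {
  // Start a conversation; the response includes a conversationId (and a
  // streamUrl you could use with WebSockets instead of polling).
  const conv = await (await fetch(`${BASE}/conversations`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${SECRET}` },
  })).json();

  // Forward a user message to the bot as an activity.
  await fetch(`${BASE}/conversations/${conv.conversationId}/activities`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${SECRET}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ type: 'message', from: { id: 'user1' }, text: 'hello' }),
  });

  // Poll for activities and print the bot's replies.
  const { activities } = await (await fetch(
    `${BASE}/conversations/${conv.conversationId}/activities`,
    { headers: { Authorization: `Bearer ${SECRET}` } },
  )).json();
  activities
    .filter((a) => a.from.id !== 'user1')
    .forEach((a) => console.log('bot:', a.text));
}

run().catch(console.error);
```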
Note: The typical way to get the best help on SO is to post the code of what you've tried, which gives people a better idea of how to help. I understand you don't know where to start, so that doesn't help much, but maybe it explains why you're getting close flags.
Is it possible to use Cortana voice commands in Electron? I'm talking about the actual UWP API, not Cortana Skills. I don't need a bot; I want to use my voice commands offline, and the kinds of actions my app provides don't need any third-party API (something like "Hey Cortana, ask [MY APP] how many movies do I have?").
I have seen the Cortana voice-command sample with WinJS, and it is possible to use WinJS in Electron, but how am I actually going to use a VCD file in Electron with WinJS? The sample code is for Visual Studio and WinJS only.
So I'm hoping for some clarification or a guideline on how to use VCD files in Electron.
Electron enables developers to build desktop apps using JavaScript and Node modules. So the first thing to check is whether the UWP APIs you need are callable from a classic desktop app; see this document: https://msdn.microsoft.com/en-us/library/windows/desktop/mt695951(v=vs.85).aspx
Once you know the specific UWP API is callable from a desktop app, the next step is how to call it in Electron. There's an open-source project named NodeRT.
NodeRT automatically exposes Microsoft’s UWP/WinRT APIs to the Node.js environment by generating Node modules. This enables Node.js developers to write code that consumes native Windows capabilities. The generated modules' APIs are (almost) the same as the UWP/WinRT APIs listed in MSDN.
So you could use it to call the specific UWP APIs in Electron.
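For illustration, here is a hedged sketch of what NodeRT consumption looks like, adapted from the Geolocation example in NodeRT's README. The scoped package name varies by Windows SDK version and is an assumption here; whether the Cortana/VCD-related namespaces are exposed to a classic desktop process is exactly what you'd verify against the MSDN list above.

```js
// NodeRT turns each WinRT namespace into a Node module, and WinRT async
// methods take a Node-style (err, result) callback as the last argument.
// Package name is an assumption; pick the one matching your Windows SDK.
const { Geolocator } = require('@nodert-win10-rs4/windows.devices.geolocation');

const locator = new Geolocator();
locator.getGeopositionAsync((err, pos) => {
  if (err) return console.error(err);
  const { latitude, longitude } = pos.coordinate.point.position;
  console.log(`(${latitude}, ${longitude})`);
});
```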