I am new to Google Assistant and have a question about using it with Google Home.
How can I get Google Home to speak without voice input? Is it possible to provide input in some way other than voice and still get the output from Google Home as speech?
This is equivalent to sending a notification or push event through Google Home, and it is not currently available. Interactions using Google Home and the Actions on Google API require the user to initiate the conversation, and the reply goes through the same channel as the input.
Unfortunately, you cannot do that yet. If you are asking about automatically triggering actions through a REST API, for example, so that the Google Home simply starts answering, there is no REST API for that; this would be the proactive assistant functionality.
At Google I/O 2017, they introduced a new concept coming to the Google Assistant: proactive functionality, which some call notifications. It allows the Google Assistant to start the conversation with the user, for example to give them traffic information when they have to be at a meeting on time.
However, they announced neither a time frame nor any other details about it,
so if this is what you are looking for, you just have to wait.
There's another answer that suggests you can programmatically synthesize speech audio and send it directly to Google Home on the user's behalf. You can use whatever input mechanism you want, as long as under the hood you produce audio that Google Home recognizes and can act on.
Can I initiate an action on Google Home from another application without a voice command?
It might seem strange to have a robot talking to a robot, but it opens up the possibility of users being able to type "commands" using natural language, then assign those commands to whatever trigger they want. Could be great for non-verbal folks or people with privacy concerns related to microphones.
[edit] I've since done more research and it looks like interfacing directly to Assistant (rather than through Google Home) does allow non-verbal integration: https://developers.google.com/assistant/sdk/
Related
I want a chatbot where, as soon as I open the chat window, multiple questions automatically appear in the window. Is this possible with Dialogflow, and if so, how?
A chatbot is meant to be interactive; unless the user has started the chat conversation, you should not do that. It is better to build a conversational tree and have the user start the conversation and ask the questions.
I see a lot of questions related to Dialogflow and the Google Assistant. When we are building for the Assistant, we need to think in a conversational design paradigm instead of the app paradigm we have been used to for a long time.
The Assistant is meant to be conversational in order to deliver the right experience to the user. Because of that, you will find that many things, like sending a notification, explicitly cannot be done with the Google Assistant: they are not conversational design patterns.
So make your assistant more conversational, and you will not run into such a dilemma.
I want to incorporate a few new things in an audio chatbot. Can I please check the best way to do it?
- I want to record actors' voices to replace the chatbot's default computerised voice
- I want to include sound files that play on demand (and with variety, so the file that plays depends on user choices) - is that possible and if so is there much delay before they start playing?
- I would also like to use a motion sensor to start the program, so that the chatbot automatically says hello and starts a conversation when a user enters a room, rather than the user having to say 'hello google, can I talk to...blah blah' to activate the chatbot.
Thus far I've been using Dialogflow to build natural language processing chatbots. Does Dialogflow have the capacity to do all this, or should I link another program to it as well? Or, for this sort of functionality, would it be better to build a chatbot in Python, and does anybody know of any open-source versions?
It is not possible to have the chatbot start a conversation without the user saying "Okay, Google. Talk to..". This has been done so that Google Assistant cannot be triggered without the user activating it themselves.
As for using sound files, you can record parts of your conversation and play those files back in your responses using SSML. SSML lets you control what your assistant says with simple markup; the audio tag is what you need to play sound files.
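As a rough sketch of what that looks like, the snippet below builds a Dialogflow webhook response whose speech is an SSML audio tag (the clip URL and fallback text here are placeholders; in practice the file must be hosted at an HTTPS URL the Assistant can fetch):

```python
import json

def build_ssml_response(audio_url: str, fallback_text: str) -> str:
    """Build a Dialogflow webhook response whose spoken output is an
    SSML <audio> clip; the fallback text is read aloud if the clip
    cannot be played."""
    ssml = (
        "<speak>"
        f'<audio src="{audio_url}">{fallback_text}</audio>'
        "</speak>"
    )
    response = {
        "payload": {
            "google": {
                "richResponse": {
                    "items": [
                        {"simpleResponse": {"ssml": ssml}}
                    ]
                }
            }
        }
    }
    return json.dumps(response)

# Placeholder URL; host your recorded clip somewhere publicly reachable.
print(build_ssml_response(
    "https://example.com/clips/greeting.mp3",
    "Welcome to the show."
))
```

Which file plays can be chosen at runtime from the user's choices, since the URL is just a string your webhook fills in per request.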
I'd like to know if it's possible to record an audio extract on Google Assistant (with Dialogflow and Firebase) and play it later? The idea is to:
Ask the user to tell his name vocally.
Record it.
Play it afterwards.
I read these answers. The answer was no, but maybe there's an update, as we can now listen to what we said to the Google Assistant in "My Activity", as seen here.
The answer is still no for developers. While users have the recordings available, there is no way to access them programmatically and no way to play them back through Dialogflow.
You can read this great article which explains a trick to record audio and play it back on the Google Assistant using a progressive web app.
The design consists of these two parts:
A web client for recording and uploading the audio files to Google Cloud Storage.
An Action that plays the audio file from Cloud Storage.
You'll also find the URL of the open-sourced code on GitHub at the end of the article.
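To make the second part concrete, here is a minimal sketch (not the article's actual code) of an Action response that plays back an uploaded clip from Cloud Storage, assuming the object has been made publicly readable; the bucket and object names are placeholders:

```python
import json

def gcs_public_url(bucket: str, object_name: str) -> str:
    """Public URL of a Cloud Storage object (the object must be
    readable by allUsers for the Assistant to fetch it)."""
    return f"https://storage.googleapis.com/{bucket}/{object_name}"

def play_recording_response(bucket: str, object_name: str) -> str:
    """Dialogflow webhook response that plays the uploaded clip back
    to the user via an SSML <audio> tag."""
    url = gcs_public_url(bucket, object_name)
    ssml = f'<speak><audio src="{url}">Here is your recording.</audio></speak>'
    return json.dumps({
        "payload": {
            "google": {
                "richResponse": {
                    "items": [{"simpleResponse": {"ssml": ssml}}]
                }
            }
        }
    })
```

The web client does the recording and uploading; the Action only ever sees a URL, which is why the trick works despite the Assistant offering no audio-capture API.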
I have a Google Home speaker, and I can issue commands like "what's the time" or "play some music", but I'd like to be able to define my own responses to certain commands, like
how many appointments do I have today
or
are there any cancellations
I would like the above commands to run a script where I can either call a web service or pull information from my SmartThings hub (that bit is optional) and reply with an appropriate response.
I've done a bit of research, and it seems that IFTTT can do something similar, but I don't really want to be dependent on a third-party app if this can be done directly with Google.
I guess I'm looking for something similar to Groovy for SmartThings, where I can write Smart Apps.
The API to develop your own commands is known as Actions on Google. Broadly speaking, Actions will send JSON to a webhook that you control, and you can have it do whatever you wish at that point.
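A minimal sketch of that webhook side, assuming Dialogflow's v2 request format: the intent names and canned replies below are placeholders, and in practice each handler would call your web service or SmartThings hub instead of returning a stub string.

```python
import json

# Hypothetical intent names; define intents with these display names
# in your Dialogflow agent. Each handler is a stub standing in for a
# call to a web service or the SmartThings API.
HANDLERS = {
    "count_appointments": lambda params: "You have three appointments today.",
    "check_cancellations": lambda params: "There are no cancellations.",
}

def handle_webhook(body: str) -> str:
    """Dispatch a Dialogflow v2 webhook request to the handler for
    its matched intent and return the JSON fulfillment response."""
    request = json.loads(body)
    intent = request["queryResult"]["intent"]["displayName"]
    params = request["queryResult"].get("parameters", {})
    handler = HANDLERS.get(intent)
    text = handler(params) if handler else "Sorry, I can't do that yet."
    return json.dumps({"fulfillmentText": text})
```

So "how many appointments do I have today" becomes an intent whose handler queries your calendar and fills in the reply, which is roughly the role Groovy Smart Apps play for SmartThings.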
I am using Dialogflow to create a Google Assistant app. I want to hear what the user actually said, for resolving errors. How can I do that? I know it is possible with Alexa, but I cannot find it for Google.
Developers do not get access to the user's original audio clips, just the transcriptions. If you are seeing a number of errors in your Action, it may be useful to try to get a better understanding of how users are conversing with your Actions in general.
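The transcription does reach your webhook, though, so you can at least log it for error analysis. A small sketch, assuming Dialogflow's v2 request format:

```python
import json
import logging

def log_user_utterance(body: str) -> str:
    """Pull the transcribed user utterance out of a Dialogflow v2
    webhook request and log it. Only the text transcription is
    available; the original audio never reaches the webhook."""
    request = json.loads(body)
    utterance = request["queryResult"]["queryText"]
    logging.info("user said: %s", utterance)
    return utterance
```

Collecting these transcriptions alongside the matched intent is usually enough to spot where recognition or intent matching is going wrong, even without the audio.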