I want to ask about something.
I already have REST API services ready (HTTP GET, POST, DELETE, ...) for the smart charger at my house, and I want to create a Google Assistant action (in Node.js, with voice) to control my charger.
If someone knows the steps to follow, please tell me (I am a beginner); I'm confused by the Google documentation.
Other advice is welcome as well!
Thank you so much, friends <3
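For context, the rough shape I'm imagining is a Dialogflow fulfillment webhook in Node.js that forwards Assistant intents to my charger's existing REST API; the intent name and charger URL below are just placeholders:

    // Rough sketch of the idea: a Dialogflow fulfillment webhook that forwards
    // Assistant intents to the charger's existing REST API. The charger URL
    // and intent name are placeholders, not anything from Google's docs.
    const { dialogflow } = require('actions-on-google');
    const express = require('express');

    const app = dialogflow();

    app.intent('start_charging', async (conv) => {
      // Node 18+ global fetch assumed.
      const res = await fetch('http://192.168.1.50/api/charger/start', { method: 'POST' });
      conv.close(res.ok ? 'Okay, charging started.' : 'Sorry, the charger did not respond.');
    });

    express().use(express.json()).post('/fulfillment', app).listen(3000);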
I am a new user of the Google Home SDK. I am developing a simple app that takes what I say and performs some predefined actions.
What I want to implement is: when I say "play the special song for someone's-name", Google Assistant responds "here you go" and then plays the defined song from Spotify. I can hard-code the artist's name and album into the app, and I have already linked Spotify to my Google Home Assistant.
I have a couple of specific questions after getting lost reading Google's topics on Create conversational experiences from scratch:
(1) Suppose I just need to hard-code the song and album name and let Spotify play it; is there any code snippet for that purpose? I'm new to Node.js, so maybe it's easier than I think.
(2) I am developing the app using my dev account on GCP, say Account-A, which is different from the Google account signed in on my home device, say Account-B. How do I deploy and test the app on the home device?
Your help and advice are much appreciated.
There's no way to start up a standard Spotify session through a conversational action. If you have the media file, you could have your conversational action play a MediaResponse.
Alternatively, you may instead want to create a routine that accepts a given query and completes an action. That will allow you to start a media stream for whatever you want.
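If you do go the MediaResponse route, here is a minimal sketch using the actions-on-google Node.js library; the intent name and the hosted MP3 URL are placeholders, and note the media source must be a file you host yourself, not a Spotify track:

    // Minimal sketch, assuming a Dialogflow fulfillment built with the
    // actions-on-google library. The intent name and media URL are
    // placeholders; the URL must point to an audio file you host yourself.
    const { dialogflow, MediaObject, Suggestions } = require('actions-on-google');

    const app = dialogflow();

    app.intent('play_special_song', (conv) => {
      conv.ask('Here you go.');
      conv.ask(new MediaObject({
        name: 'The Special Song',
        url: 'https://example.com/media/special-song.mp3',
      }));
      // Devices with screens require suggestion chips alongside media responses.
      conv.ask(new Suggestions('Stop'));
    });

    module.exports = app;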
I'm back again with a question about NLP. I made my own back-end, which on one side can connect to websites, the Google Assistant, and Facebook Messenger, and on the other end to Dialogflow. On the side, it logs interactions and does some other database work.
Now, I'm trying to connect this back-end to Alexa. I made a project that calls my endpoint. This project has one intent, with a parameter that should capture the raw user input, send it to my back-end for processing, and then return the parsed response. There doesn't seem to be a real way to collect and send the raw user input so I can process it myself (in Dialogflow) instead of using Amazon's way of mapping intents and such.
I know Dialogflow can export to Alexa, but this is not an option for me. I really hope one of you can point me in the right direction.
I just need a way to collect the raw user input and respond in an Alexa-accepted response format.
For Actions on Google, for example, I'm using a Custom Project Action Package.
Thanks a lot in advance!
To accept or capture arbitrary user input, you can use sys.any in Google Assistant and AMAZON.SearchQuery in Amazon Alexa.
In Alexa, you have to add a carrier phrase to use AMAZON.SearchQuery, and you can't combine any other slot with AMAZON.SearchQuery in the same utterance.
So there are some limitations. I hope this answer helps you.
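As a rough sketch of the Alexa side (the intent name, slot name, and back-end URL are assumptions), an ASK SDK v2 handler in Node.js could forward the raw slot value to your own back-end like this:

    // Minimal sketch, assuming ASK SDK v2 for Node.js and an intent named
    // 'RawInputIntent' with one AMAZON.SearchQuery slot named 'query'
    // (sample utterance with carrier phrase: "tell my backend {query}").
    const Alexa = require('ask-sdk-core');

    const RawInputHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
          && Alexa.getIntentName(handlerInput.requestEnvelope) === 'RawInputIntent';
      },
      async handle(handlerInput) {
        // Raw text captured by the AMAZON.SearchQuery slot.
        const rawText = Alexa.getSlotValue(handlerInput.requestEnvelope, 'query');

        // Forward to your own NLP back-end (placeholder URL; Node 18+ fetch).
        const res = await fetch('https://example.com/nlp', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ text: rawText }),
        });
        const { reply } = await res.json();

        return handlerInput.responseBuilder.speak(reply).getResponse();
      },
    };

    exports.handler = Alexa.SkillBuilders.custom()
      .addRequestHandlers(RawInputHandler)
      .lambda();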
I've been browsing Twilio's docs and API reference, but I was unable to find how long chat history is stored.
I don't have my own DB for storing messages, nor any other chat-related logic on my backend. I'm using Twilio to handle everything for me and only using their client SDKs to interact.
Can anyone help me with that? Thanks in advance.
Twilio developer evangelist here.
My apologies for the delay from support getting back to you, but the good news is I have an answer for you.
At Twilio we store everything forever until you either:
Close your account
Delete the messages or channels yourself
Delete the entire service instance
So not fetching that from a database of your own is the right choice in my opinion, as it would potentially just add extra latency and logic to your code.
Hope this helps you.
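If you ever do want to delete messages or channels yourself, here is a minimal sketch with the Twilio Node.js helper library (Programmable Chat); the SIDs are placeholders for your own:

    // Minimal sketch using the official 'twilio' Node.js helper library.
    // All SIDs below are placeholders for your own service/channel/message SIDs.
    const twilio = require('twilio');

    const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

    async function cleanup() {
      // Delete a single message from a channel.
      await client.chat.v2
        .services('ISXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
        .channels('CHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
        .messages('IMXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
        .remove();

      // Or delete an entire channel along with its message history.
      await client.chat.v2
        .services('ISXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
        .channels('CHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
        .remove();
    }

    cleanup().catch(console.error);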
I'm just getting started with the Assistant features on the RPi. I've been able to implement everything successfully up to this point, and I'm wondering a few things.
Scenario:
user: "Hey Google, please turn on my living room lights"
my code in hotword.py: has a function to perform the same action, based on ON_RECOGNIZING_SPEECH_FINISHED
RPi/Google Home: I am not sure how to respond to that
I was able to capture the query asked by the user using ON_RECOGNIZING_SPEECH_FINISHED = Args.text(str) and use it in my logic to perform the task. However, at the same time, "OK Google" is responding with its own answer.
To mitigate this problem, I created a Google Action; now it understands my query and responds with the intent from API.AI. However, it doesn't act to turn the lights ON. So I'm wondering how I can read the response from Google Home/API.AI as text and change my code to act on it locally.
I appreciate it.
You will not get the response as text.
To get the response to the client app, use a webhook in API.AI and send a message to the client app using FCM.
Read the FCM message in the client app and perform the corresponding actions.
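A minimal sketch of that webhook-plus-FCM flow in Node.js, assuming the API.AI v1 webhook format, a firebase-admin service account, and made-up topic and intent names:

    // Minimal sketch of an API.AI webhook that relays the matched intent to a
    // client device (e.g. the RPi) via FCM. The topic and intent names are
    // placeholders; firebase-admin needs your own service-account credentials.
    const express = require('express');
    const admin = require('firebase-admin');

    admin.initializeApp(); // assumes GOOGLE_APPLICATION_CREDENTIALS is set

    const app = express();
    app.use(express.json());

    app.post('/webhook', async (req, res) => {
      const intent = req.body.result && req.body.result.metadata.intentName;
      const query = req.body.result && req.body.result.resolvedQuery;

      if (intent === 'turn_on_lights') {
        // Push the command to the device, which subscribes to this topic.
        await admin.messaging().send({
          topic: 'living-room-rpi',
          data: { action: 'lights_on', query: query || '' },
        });
      }

      // Speak a confirmation back through the Assistant (API.AI v1 format).
      res.json({
        speech: 'Okay, turning on the lights.',
        displayText: 'Okay, turning on the lights.',
      });
    });

    app.listen(3000);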
I finally was able to figure out multiple ways to do this. I answered this in another Stack Overflow question; find more details in this post.
There are multiple ways to handle this, since Google doesn't give you the voice transcript; letting Google speak our own transcript is kind of a solution for now.
I was given the task of creating a conference in Asterisk using ARI with Node.js. The objective is to create a conference room and send email invitations so people can click and enter the conference room. I also need an admin web interface to show who's talking, mute participants, and some other things.
I don't have any experience with Asterisk, so I need a starting point. Initially I have to create a channel and then add some SIP endpoints to it.
So taking this page as a base: https://wiki.asterisk.org/wiki/display/AST/Asterisk+13+Channels+REST+API
I have a configured test server and a SIP number (852001). So I opened up Insomnia and created a POST request like this:
http://<serverip>:8088/ari/channels/400?endpoint=852001&extension=400
But the allocation failed. So I thought that before I continue with this, I should make some concepts clear:
What do I need to create a conference room? Is it just creating a channel, or do I have to create a bridge first? What are the right values for the endpoint, extension, and app fields?
Are raw ARI URLs the best approach, or is it better to use Node.js's ari-client module? I'm using URLs because I couldn't find any working example of creating a conference with ari-client.
Any code examples on how I could do this would be greatly appreciated. Thanks.
Read O'Reilly's "Asterisk: The Future of Telephony" as a starting point.
PS: Doing it via ARI only seems nearly impossible, even for an expert. In any case, you will need some dialplan.
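That said, the basic conference flow the question asks about (a mixing bridge first, then channels added to it) can be sketched with the ari-client module. A rough outline, assuming an ARI user/password, a Stasis app named confdemo, and the question's SIP endpoint; you still need a Stasis(confdemo) line in your dialplan for callers dialing in:

    // Rough sketch with the 'ari-client' module: create a mixing bridge (the
    // conference room), register a handler that drops every channel entering
    // the Stasis app into that bridge, then originate a call to a participant.
    // Credentials, app name, and endpoint are assumptions.
    const Ari = require('ari-client');

    Ari.connect('http://<serverip>:8088', 'ariuser', 'aripass')
      .then((client) =>
        client.bridges.create({ type: 'mixing' }).then((bridge) => {
          client.on('StasisStart', async (event, channel) => {
            // Every channel that enters the app joins the conference bridge.
            await channel.answer();
            await bridge.addChannel({ channel: channel.id });
          });

          client.start('confdemo');

          // Call out to one participant; when they answer, the channel enters
          // 'confdemo', fires StasisStart, and is added to the bridge above.
          return client.channels.originate({ endpoint: 'SIP/852001', app: 'confdemo' });
        }))
      .catch(console.error);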