Voice options for a Google Action using the Dialogflow SDK - dialogflow-es

I created some actions and intents using the Dialogflow interface, then exported the JSON files that were created. I noticed that there is a parameter, "voiceType": "MALE_1", in the agent.json file.
My question is: what other values can this voiceType key take? Also, is there a place where I can find documentation on the structure of this agent.json file?
Cheers!

Yes, the Action Package reference documentation shows the available types.
You can use MALE_1, MALE_2, FEMALE_1, and FEMALE_2.
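For orientation, a hedged, abridged snippet showing roughly where the field sits in an exported agent.json (other fields omitted; the exact nesting may differ between export versions):

```json
{
  "googleAssistant": {
    "voiceType": "FEMALE_1"
  }
}
```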


Create space/room and send message via google-api in Node.js

Use case: Google Chat support in Node.js using the googleapis library.
I studied the documentation, created a service account, and implemented authentication. As a first step I used the chat.spaces.list() method and it worked (no error returned).
I want to send a message via Chat, so I wanted to create a new space. I found the chat.spaces.create method (https://developers.google.com/chat/api/reference/rest/v1/spaces/create). Unfortunately, this method is not present in "googleapis" for Node.js. In general, I see that the list of methods in "googleapis" is different from the one in the documentation: only spaces in the documentation, but spaces and rooms in the library... I'm lost. How do I do this? Any tips?
I see that the list of methods in "googleapis" is different than the one in the documentation
I think you are looking at the wrong documentation: your reference link points to the REST API documentation, while the Node.js googleapis client documentation is at https://googleapis.dev/nodejs/googleapis/latest/chat/classes/Resource$Spaces.html.
Also, based on the REST API documentation, the method you are looking for (create space) is not generally available:
† Supports user authentication in Developer Preview. App authentication isn't available.
You need to join the Google Workspace Developer Preview Program to access that feature.
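For completeness, a minimal sketch of what does work today with the Node.js client, assuming a service account key file (the path is a placeholder):

```javascript
// Minimal sketch: list Chat spaces with the googleapis Node.js client.
// Assumes service-account credentials; the key file path is a placeholder.
const { google } = require('googleapis');

async function listSpaces() {
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json',
    scopes: ['https://www.googleapis.com/auth/chat.bot'],
  });
  const chat = google.chat({ version: 'v1', auth });

  // spaces.list exists in the client; spaces.create does not (preview-only API).
  const res = await chat.spaces.list();
  console.log(res.data.spaces || []);
}

listSpaces().catch(console.error);
```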

Where does Dialogflow store simple responses?

For example, I add a simple response to an intent with the Dialogflow interface.
This response is associated with my intent and my agent, but where is it stored? Is there some Google DB? Can I access all the agent data stored by Dialogflow?
I can't find any hints in the documentation.
Where is it stored? Is there some Google DB?
There is a data store of some sort behind Dialogflow, just as most systems persist their data somewhere. But that doesn't mean it is a conventional database in the way you may think of it, nor that you would have direct access to it.
Can I access all the agent data stored by Dialogflow?
That depends on what you mean by "all the agent data". Everything you build using the web interface (the intents and entities, for example) is available to you. You can export it as a zip file that contains various JSON files with the configuration, and you can also use the Dialogflow API to get various parts of your configuration.
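For illustration, a minimal sketch of reading intents back through the official Node.js client (the project ID is a placeholder):

```javascript
// Minimal sketch: read agent data back via the Dialogflow ES API
// using the official Node.js client (@google-cloud/dialogflow).
const dialogflow = require('@google-cloud/dialogflow');

async function listIntents(projectId) {
  const client = new dialogflow.IntentsClient();
  const parent = client.projectAgentPath(projectId);
  const [intents] = await client.listIntents({ parent });
  intents.forEach((intent) => console.log(intent.displayName));
}

listIntents('my-agent-project').catch(console.error); // project ID is a placeholder
```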

Google Drive API v3 using Python

Can anybody give me sample code in Python to find a folder ID (especially that of the last folder created) in Google Drive? Your help will be immensely appreciated.
Stack Overflow and the Drive API documentation have enough samples of Python code for Google Drive API requests; you just need to define the basic steps and patch the corresponding code parts together.
Any Google Drive API request needs to be based on the Drive API Quickstart for Python, which implements the OAuth2 authorization flow and the creation of an authenticated service.
Once you have this, you can list your files to retrieve their IDs.
To narrow down the results, you can define the search parameter q, e.g. specifying mimeType = 'application/vnd.google-apps.folder'.
Use the parameter orderBy to request that the newest folders be shown first, e.g. createdTime desc for the most recently created.
Use the parameter pageSize to define how many results you want to obtain (if you want only the newest folder ID, 1 is a valid value).
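Putting these pieces together, a minimal sketch in Python, assuming service is the authenticated Drive v3 service object from the quickstart:

```python
# Minimal sketch: find the most recently created folder's ID.
# Assumes `service` is the authenticated Drive v3 service object
# built as in the official Drive API Quickstart for Python.
results = service.files().list(
    q="mimeType = 'application/vnd.google-apps.folder'",
    orderBy='createdTime desc',  # newest folders first
    pageSize=1,                  # only the newest one
    fields='files(id, name)',
).execute()

folders = results.get('files', [])
if folders:
    print(folders[0]['id'], folders[0]['name'])
```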
Stack Overflow is not meant to help you write code from scratch; I recommend searching for similar questions, e.g. with the specifications above, and trying to patch your code together yourself.
Then, if necessary, post a new question with your code, explaining where you got stuck and asking for specific help.
Hint: before implementing your request in Python, test it with the "Try this API" functionality of Files.list to make sure that you have adapted the parameters correctly to your needs.

Does Azure Text to Speech API support <mark> tags?

Does the Azure Text to Speech API support <mark> tags as part of its support for SSML 1.0? How can I make a call to get the timestamps of marker positions? By default, it just returns the audio file output.
I have searched everywhere, but I was not able to find any endpoint or info on this. Thanks
The <mark> tag is not currently supported.
However, we do support word boundary events now; an example can be found here.
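For illustration, a minimal sketch of subscribing to word boundary events with the Speech SDK for JavaScript (package microsoft-cognitiveservices-speech-sdk; key and region are placeholders):

```javascript
// Minimal sketch: get word timestamps via wordBoundary events.
// Subscription key and region are placeholders.
const sdk = require('microsoft-cognitiveservices-speech-sdk');

const speechConfig = sdk.SpeechConfig.fromSubscription('YOUR_KEY', 'YOUR_REGION');
// Null audio config: keep the audio in the result instead of playing it.
const synthesizer = new sdk.SpeechSynthesizer(speechConfig, null);

// Fires once per word; audioOffset is in 100-nanosecond ticks.
synthesizer.wordBoundary = (sender, e) => {
  console.log(`word "${e.text}" at ${e.audioOffset / 10000} ms`);
};

synthesizer.speakTextAsync(
  'Hello world from Azure Text to Speech.',
  () => synthesizer.close(),
  (err) => { console.error(err); synthesizer.close(); }
);
```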
No. Based on the Azure TTS API reference, it just returns an audio file as the response.
If you have custom requirements, maybe separating your sentences into multiple calls would be a workaround.
Instead of <mark>, you may use <bookmark>, which seems to have the same usage; see the doc: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=python#bookmark-element
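A minimal sketch of <bookmark> with the JavaScript Speech SDK, assuming the same setup as above (voice name and mark names are illustrative):

```javascript
// Minimal sketch: <bookmark> markers with bookmarkReached events.
// Key/region, voice name, and mark names are placeholders.
const sdk = require('microsoft-cognitiveservices-speech-sdk');

const speechConfig = sdk.SpeechConfig.fromSubscription('YOUR_KEY', 'YOUR_REGION');
const synthesizer = new sdk.SpeechSynthesizer(speechConfig, null);

// Fires at each <bookmark/>; e.text is the mark name, offset in 100-ns ticks.
synthesizer.bookmarkReached = (sender, e) => {
  console.log(`bookmark "${e.text}" at ${e.audioOffset / 10000} ms`);
};

const ssml = `
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    We are selling <bookmark mark="flower_1"/>roses and
    <bookmark mark="flower_2"/>daisies.
  </voice>
</speak>`;

synthesizer.speakSsmlAsync(ssml, () => synthesizer.close(),
  (err) => { console.error(err); synthesizer.close(); });
```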

How to structure and deploy JS files for Dialogflow intents for an online store

I want to do the following:
- In Dialogflow, I want to create an AGENT for an online store.
- The AGENT should include 15 INTENTS.
Now I want to write my code in JavaScript, save it locally, and then deploy it to a Dialogflow webhook via URL.
My question now is:
Should I create a separate JavaScript file for each of the intents?
Is there a specific procedure, or are there examples that could help me?
Best regards
It would be better to create separate JS files for the functions called by your intents, to keep your code modular.
You can follow the Actions on Google GDG Node.js sample, in which the Meetup API functions are implemented in a separate file and exported for use in the index.js file.
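For example, a minimal sketch of such a layout (all file and intent names are illustrative), in the style of the Actions on Google Node.js samples:

```javascript
// intents/checkout.js: one handler per file
module.exports = (conv) => {
  conv.ask('Your order has been placed. Anything else?');
};

// index.js: imports the handlers and maps them to intents
const functions = require('firebase-functions');
const { dialogflow } = require('actions-on-google');
const checkout = require('./intents/checkout');

const app = dialogflow();
app.intent('order.checkout', checkout); // intent name is illustrative

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);
```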
Hope this helps.
