Is there a way to connect Custom Translator with Speech Translation? - azure

I want to find out how I can connect Speech Translation with Custom Translator.
The Custom Translator webpage mentions: "Custom Translator can be used for customizing text when using the Microsoft Translator Text API, and speech translation using the Microsoft Speech services."
Unfortunately, I couldn't find any example of that usage.
Can anyone help with that?
Thank you!

If you build your application based on this example, you can set the category ID on the SpeechTranslationConfig as follows:
config->SetServiceProperty("category", "<your-category-id>", ServicePropertyChannel::UriQueryParameter);
This has been available since Microsoft Speech SDK 1.5.
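For completeness, here is a minimal sketch of the same idea using the Python Speech SDK (azure-cognitiveservices-speech); the key, region, target language, and category ID are placeholders you would replace with your own values:

import azure.cognitiveservices.speech as speechsdk

# Placeholder key, region, and Custom Translator category ID.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<speech-key>", region="<region>")
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("de")

# Route translation through your Custom Translator model by passing
# its category ID as a query parameter, as in the snippet above.
translation_config.set_service_property(
    "category", "<your-category-id>",
    speechsdk.ServicePropertyChannel.UriQueryParameter)

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config)
result = recognizer.recognize_once()  # listens on the default microphone
print(result.text)
print(result.translations["de"])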

Related

Hi all, I am looking for implementation examples of the Azure Cognitive Services Translator for websites. Please help us.

We want to provide multilingual support for one of our websites.
I could find only plain-text conversion examples, so it would be really helpful if anyone could point us in the right direction.
Thanks
You can use the Document Translation service for HTML.
Code Samples
See documentation for more info.
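If a sketch helps, the azure-ai-translation-document Python package wraps the Document Translation service; the endpoint, key, and SAS container URLs below are placeholders, and HTML files in the source container come back translated with their markup preserved:

from azure.core.credentials import AzureKeyCredential
from azure.ai.translation.document import DocumentTranslationClient

# Placeholder endpoint and key for your Translator resource.
client = DocumentTranslationClient(
    "https://<resource-name>.cognitiveservices.azure.com/",
    AzureKeyCredential("<translator-key>"))

# Source and target are blob containers addressed by SAS URLs; every
# supported document in the source container (including HTML) is translated.
poller = client.begin_translation(
    "<source-container-sas-url>", "<target-container-sas-url>", "de")

for doc in poller.result():
    print(doc.status, doc.translated_document_url)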

To use Azure LUIS with voice, do I need to get the text first?

I think the title explains my doubt.
I've tried before the Speech to Text feature from Azure.
The question is:
Is there a way to use the sound binary to Azure LUIS instead of the text?
Yes, LUIS can accept speech input instead of text. LUIS provides this tutorial on how to set up Speech services. The tutorial is in C#; however, it appears their GitHub repo has samples in other languages, if of use.
Hope this helps!
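In case a concrete example is useful: the Speech SDK's IntentRecognizer sends audio to LUIS directly, so there is no separate speech-to-text step in your code. A minimal Python sketch, with the LUIS prediction key, region, app ID, and intent names as placeholders:

import azure.cognitiveservices.speech as speechsdk

# Note: the subscription here is the LUIS prediction key, not a Speech key.
intent_config = speechsdk.SpeechConfig(
    subscription="<luis-prediction-key>", region="<luis-region>")
recognizer = speechsdk.intent.IntentRecognizer(speech_config=intent_config)

# Register the LUIS app and the intents you want to match.
model = speechsdk.intent.LanguageUnderstandingModel(app_id="<luis-app-id>")
recognizer.add_intents([(model, "TurnOn"), (model, "TurnOff")])

# One call captures audio from the default microphone and returns both
# the recognized text and the matched intent.
result = recognizer.recognize_once()
print(result.text, result.intent_id)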
If you are creating a speech bot, here is a newer approach:
Direct Line Speech leverages Azure Speech (TTS and speech recognition) and integrates with the Bot Service in a much easier way.
https://learn.microsoft.com/en-us/azure/bot-service/directline-speech-bot?view=azure-bot-service-4.0

Is it possible to use Google's WaveNet Text-to-Speech model for the Actions-on-Google integration of a Dialogflow agent?

Google Cloud's Text-to-Speech API has a WaveNet model whose output, in my opinion, sounds far better than the standard speech. This model can be used in Dialogflow agents (Settings > Speech > Text To Speech), which results in the generated speech being included in the DetectIntentResponse. However, I can find no way to use this speech with the Actions-on-Google integration, i.e. in an actual Google Assistant app. Have I overlooked something, or is this really not possible, and if so, does anyone know when they plan to enable it?
In the Actions console, going to the Invocation page lets you select a TTS voice.
All of the voices can be demoed on the Languages & Locales page of the docs, and the vast majority of them use WaveNet voices.

Launch Tool is missing in IBM Watson Natural Language Understanding when creating a custom model

I am facing a weird issue. I am trying to create a custom model in IBM Watson Natural Language Understanding on the Lite plan. No Launch Tool option is shown for creating a custom model. To be clear, ideally the page should look like this, as described in all the tutorials,
but what I am getting is different.
I tried all possibilities; there is no way to navigate to the annotator tool page. Can somebody please help?
Your first pic looks like Watson Knowledge Studio. Watson Knowledge Studio is a different service, which you can also create from the IBM Cloud Catalog. Please check it:
https://www.ibm.com/watson/services/knowledge-studio/

Does the Google NL API support "word hints"?

This article references a feature called "Word Hints." But I am not able to find more info on how to use that. Anyone run into this?
https://cloudplatform.googleblog.com/2016/07/the-latest-for-Cloud-customers-machine-learning-and-west-coast-expansion.html
Word hints are available for the Speech API (docs here) but this feature is not currently supported for the NL API.
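For reference, hints are passed to the Speech API as speech contexts (phrase hints) that bias recognition toward the given phrases. A minimal sketch with the google-cloud-speech Python client; the bucket URI and phrases are placeholders:

from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    # The "word hints": phrases the recognizer should favor.
    speech_contexts=[speech.SpeechContext(phrases=["BigQuery", "Dataproc"])],
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/audio.raw")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)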
