Is there a way to see the quantity of characters I use in the Azure portal metrics? There is a "SynthesizedCharacters" metric, but I only see data when I use it from Speech Studio. I want to see this metric when I use the Cognitive Services SDK. Is that possible?
Thanks
Unfortunately, AFAIK there is no metric to track that from the Azure Portal. However, you can maintain the count locally on your end, or at a central location that you can query yourself --- add additional logic in your code to maintain the metrics.
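As a sketch of that "maintain the count locally" idea (the counter class below is a hypothetical helper, not part of any SDK; in a real app you would call `record` with the same text you pass to the Speech SDK's synthesizer):

```python
# Minimal sketch of client-side character accounting for TTS requests.
# This is a hypothetical helper, not an Azure SDK class; the idea is to
# record the length of every text you send for synthesis.

class SynthesisCharacterCounter:
    """Accumulates the number of characters sent to the TTS service."""

    def __init__(self):
        self.total = 0

    def record(self, text: str) -> int:
        """Record one synthesis request and return the running total."""
        self.total += len(text)
        return self.total

counter = SynthesisCharacterCounter()
counter.record("Hello world")          # 11 characters
counter.record("Second request here")  # 19 more
print(counter.total)                   # prints 30
```

You could persist `total` to a database or push it to a central store if you need to query usage across machines.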
Characters are counted based on the conditions below (which can be found here):
Text passed to the text-to-speech service in the SSML body of the request
All markup within the text field of the request body in the SSML format, except for the <speak> and <voice> tags
Letters, punctuation, spaces, tabs, markup, and all white-space characters
Every code point defined in Unicode
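Under those rules, everything in the SSML body counts except the `<speak>` and `<voice>` markup itself. A rough illustration of the counting (a simplification for intuition, not Microsoft's exact billing logic):

```python
import re

def estimate_billed_characters(ssml: str) -> int:
    """Rough estimate of billable characters: count every character in the
    SSML body except the <speak> and <voice> tags themselves. All other
    markup, letters, punctuation, and whitespace still count."""
    stripped = re.sub(r"</?(?:speak|voice)\b[^>]*>", "", ssml)
    return len(stripped)

ssml = '<speak version="1.0"><voice name="en-US-AriaNeural">Hi there!</voice></speak>'
print(estimate_billed_characters(ssml))  # only "Hi there!" remains -> 9
```

Note that other SSML elements (e.g. `<break/>` or `<prosody>`) would still be counted by this estimate, matching the "all markup ... except" rule above.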
Related
I've been looking for a model that is capable of detecting what we call "full breaks" or "filler words", such as "eh", "uhmm", and "ahh", but Azure doesn't catch them.
I've been playing with Azure's speech-to-text web UI, but it seems it doesn't catch these types of words/expressions.
I wonder if there is some option in the API configuration to "toggle" the detection of full breaks or filler words.
Thank you in advance
Please tell me how I can change the stress in some words in the Azure text-to-speech voice engine. I use Russian voices, and I am not working through SSML.
When I send text for processing, in some words it puts the stress on the wrong syllable or letter.
I know that some voice engines use special characters, like + or ', in front of a stressed vowel. I have not found such an option here.
To specify the stress for individual words you can use the SpeakSsmlAsync method and pass a lexicon URL, or you can specify it directly in the SSML using the phoneme element. In both cases you can use IPA.
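For example, an SSML body using the phoneme element might look like the string below. The voice name and the IPA transcription are illustrative (замок/замо́к is the classic Russian stress pair), not verified values; the finished string would be sent with the Speech SDK's speak_ssml_async / SpeakSsmlAsync method:

```python
# Illustrative SSML using a phoneme element with an IPA transcription to
# force the stress in a Russian word. The voice name and IPA string are
# examples, not verified values; send the result via the Speech SDK, e.g.
#   synthesizer.speak_ssml_async(ssml)
ssml = (
    '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
    'xml:lang="ru-RU">'
    '<voice name="ru-RU-SvetlanaNeural">'
    '<phoneme alphabet="ipa" ph="zaˈmok">замок</phoneme>'  # stress on 2nd syllable
    '</voice>'
    '</speak>'
)
print(ssml)
```

The alternative is a pronunciation lexicon file referenced from the SSML, which is more convenient when many words need fixed stress.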
I am using Azure Search in my bot application.
If we give input with a spelling mistake in a short word, like trvel => travel, we get a proper response.
But if I enter "travelexpense", I do not get any result.
Currently I am passing the input through a fuzzy search.
I suggested using the Bing Spell Check API, but it was not approved, as they think our input may be stored outside.
Is there any option available in Azure Search to correct words like "travelexpense" in this scenario?
The closest I would say is a phonetic analyzer:
https://learn.microsoft.com/en-us/azure/search/index-add-custom-analyzers
There are a couple of other things you can try:
Enable Auto Complete and Suggestions (https://learn.microsoft.com/en-us/azure/search/search-autocomplete-tutorial)
Create synonyms (https://learn.microsoft.com/en-us/azure/search/search-synonyms)
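For the "travelexpense" case specifically, a synonym map can expand the concatenated form into the separate words. A sketch of the Solr-format rules such a map would contain (the map name is hypothetical; Azure Search synonym maps use this Solr synonym syntax):

```python
# Azure Search synonym maps use the Apache Solr synonym format:
#   "a => b"  rewrites a into b (one-way)
#   "a, b"    treats a and b as equivalent (two-way)
# The map name below is a hypothetical example.
synonym_rules = "\n".join([
    "travelexpense => travel expense",  # explicit one-way mapping
    "travelexpenses => travel expenses",
])
synonym_map_definition = {
    "name": "bot-synonyms",  # hypothetical synonym map name
    "format": "solr",
    "synonyms": synonym_rules,
}
print(synonym_map_definition["synonyms"])
```

The map is created once against the service and then attached to the searchable fields of the index, so queries for "travelexpense" match documents containing "travel expense".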
I am adding Azure Search and trying to add skills for content enrichment.
I can see the Key Phrase Extraction and the Language Detection predefined skills but not the Text Split skill on the screen. Is there a reason why Text Split skill is not visible? Or is it something that can only be added via API?
The capabilities exposed through the portal focus on the core scenarios that customers want to perform, so they do not include text splitting. If you want to split your text, you should do it by creating your own skillset programmatically through the API; that will allow you to define the language and the size of a page.
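For reference, a skillset definition that includes the Text Split skill might look like the JSON body below (built here as a Python dict for the Create Skillset REST call; the skillset name and the input/output field paths are illustrative):

```python
import json

# Illustrative skillset body for the Create Skillset REST call, including
# the Text Split skill with an explicit language and page size. The
# skillset name and field paths are example values.
skillset = {
    "name": "my-skillset",
    "description": "Skillset with a text split skill",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
            "textSplitMode": "pages",      # or "sentences"
            "maximumPageLength": 4000,     # page size in characters
            "defaultLanguageCode": "en",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "textItems", "targetName": "pages"}],
        }
    ],
}
print(json.dumps(skillset, indent=2))
```

This body would be PUT/POST to the skillsets endpoint of the search service; the split pages can then feed downstream skills such as key phrase extraction.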
I am using the Microsoft Azure OCR web service. When I set the flag "detectOrientation" to true, it sometimes gives weird results (it tries to identify vertical text even though I want it to read horizontal text). So I want to set the orientation myself, since I know it is "Up". But even if I set "detectOrientation" to false, it returns the same result.
Surprisingly, if I use the Microsoft demo page, https://azure.microsoft.com/en-in/services/cognitive-services/computer-vision/, it returns the correct result. It might be doing some pre/post-processing or adding some flags, but that information is not revealed. I have reported this issue to Microsoft many times but received no reply.
You can't set the orientation manually, as the parameter detectOrientation is a boolean (true/false), as stated here.
The response from the demo page is not the result of the Computer Vision API's OCR operation; it is the result of using the Computer Vision API's Recognize Text operation, followed by Get Recognize Text Operation Result to retrieve the outcome.
The response of the OCR includes the following:
textAngle
orientation
language
regions
lines
words
boundingBox
text
While the response from the Get Recognize Text Operation Result includes the following:
Status Code
Lines
Words
BoundingBox
Text
If you compare the results of the demo page you'll find that they match the Recognize Text, not the OCR.
Surprisingly, if I use the Microsoft demo page, https://azure.microsoft.com/en-in/services/cognitive-services/computer-vision/, it returns the correct result.
On the demo page, as stated, they don't use the OCR operation of the web service but the newer Recognize Text API operation.
Switch to this one, and your results will be consistent.
And to answer your other question about passing the orientation, no there is no such parameter.
I believe the detectOrientation parameter just detects the orientation of all the text in the image; it is not a setting that lets you choose which text to read based on its orientation, which is how you're trying to use it.
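A sketch of that two-step Recognize Text flow (submit the image, then poll the Operation-Location header) using only the standard library; the v2.0 path reflects the API as it existed at the time, and the endpoint/key placeholders are not real values:

```python
import json
import time
import urllib.request

def recognize_text_url(endpoint: str, mode: str = "Printed") -> str:
    """Build the Recognize Text request URL (v2.0-era API)."""
    return f"{endpoint}/vision/v2.0/recognizeText?mode={mode}"

def read_text(endpoint: str, key: str, image_bytes: bytes) -> dict:
    """Submit an image, then poll Operation-Location until the operation
    reports Succeeded or Failed."""
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/octet-stream",
    }
    req = urllib.request.Request(recognize_text_url(endpoint),
                                 data=image_bytes, headers=headers,
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        operation_url = resp.headers["Operation-Location"]
    while True:
        poll = urllib.request.Request(
            operation_url, headers={"Ocp-Apim-Subscription-Key": key})
        with urllib.request.urlopen(poll) as resp:
            result = json.load(resp)
        if result.get("status") in ("Succeeded", "Failed"):
            return result
        time.sleep(1)

# Placeholder values -- substitute your own resource endpoint and key:
# result = read_text("https://<region>.api.cognitive.microsoft.com",
#                    "<subscription-key>", open("photo.jpg", "rb").read())
```

The lines/words/boundingBox structure described above is found under the recognition result in the final polled response.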