DataRecognitionClient returns duplicated results - speech-to-text

I am using the Microsoft (Oxford) Cognitive Services Speech API client SDK.
The following is the test result after running the sample code with the stock audio samples, without changing the code (i.e. as is):
--- OnDataShortPhraseResponseReceivedHandler ---
********* Final n-BEST Results *********
Confidence=High, Text="What's the weather like?"
Confidence=High, Text="What's the weather like?"
As you can see, I am getting two identical results. I wonder if you can shed some light on why that is (duplicated results)?

This might be because the underlying phoneme structures differ even though the recognized text is the same. This is not common, but it can happen.


I'm trying to fine-tune a bank-bot

Of course, it's not really a bankbot yet.
The source data I've been given is a hundred or so rows of telephone scripts (probably artificial in nature).
I'm trying to use them to fine-tune the davinci model on OpenAI. Simply feeding them in with the
{prompt:question, completion:answer}
format has not yielded great results. I've also tried sending the conversation back to it in the prompts in hopes of improving the outcome, but that's been a mixed bag too, as it's prone to repetition.
On the OpenAI website, I found this:
{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent:", "completion":" <response2>\n"}
{"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent: <response2>\nCustomer: <message3>\nAgent:", "completion":" <response3>\n"}
Can someone point me to some examples of this in action, with some actual data plugged into it, in a Q&A bot context? Thanks.
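Purely as an illustration (not from the question's own telephone scripts; the summary, details and turns below are invented), here is a minimal Python sketch of how those templates could be filled in and written out as JSONL training lines for a banking Q&A bot:

import json

# Invented banking-support exchange, used only to show the prompt/completion
# shape from the OpenAI documentation snippet above.
summary = "Customer wants to know why a card payment was declined."
details = "Debit card ending 4821, declined this morning at a grocery store."
turns = [
    ("Hi, my card was declined this morning, can you check why?",
     "I'm sorry to hear that. I can see a declined payment on the card ending 4821."),
    ("So why was it declined?",
     "The payment exceeded your daily spending limit. Would you like me to raise it?"),
]

examples = []
for i in range(1, len(turns)):
    # The prompt holds the summary, the details and every turn so far, ending with "Agent:".
    history = ""
    for question, answer in turns[:i]:
        history += "Customer: %s\nAgent: %s\n" % (question, answer)
    history += "Customer: %s\nAgent:" % turns[i][0]
    prompt = "Summary: %s\n\nSpecific information: %s\n\n###\n\n%s" % (summary, details, history)
    completion = " %s\n" % turns[i][1]   # leading space and trailing newline, as in the docs
    examples.append({"prompt": prompt, "completion": completion})

with open("bankbot_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")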

Dialogflow responses are not stable; they keep changing. How do I get stable results from Dialogflow?

I am using Google Dialogflow in my application to identify text responses when parsing resumes. The response keeps changing every time.
A week ago I trained a string and got the proper response, but today, checking the same string, the response is no longer correct; it is not picking up a few of the fields.
The problem is very similar for date identification: even after training the string properly, Dialogflow keeps varying its response.
If I try the same string 5 times, the results are not the same each time; they keep changing, like this -
This is the string I trained:
"SSCE(CBSE) from L.B.S. Public School, Pratap Nagar,Jaipur(2013-2014) with aggregate 69.20%."
[Screenshots attached showing the varying responses: the response from the first attempt and the response from the second attempt.]
Dialogflow is not a parser: the training phrases you give it aren't strings that will be matched; they help set the pattern for a Natural Language Understanding (NLU) system, so they're tuned for how people naturally speak or type.
It is also somewhat unusual to have multiple parameters with the same name. I can easily see how the system would ignore a second occurrence when done this way. (Although you may try setting up those parameters as lists.)

Reading a grib2 message into an Iris cube

I am currently exploring the idea of using Iris in a project to read forecast grib2 files with Python.
My aim is to load/convert a grib message into an iris cube based on a grib message key having a specific value.
I have experimented with iris-grib, which uses gribapi. Using iris-grib I have not been able to find the key in the grib2 file, although the key is visible with 'grib_ls -w...' via the CLI.
gribapi does the job, but I am not sure how to interface it with iris (which is what, I assume, iris-grib is for).
I was wondering if anyone knew of a way to get a message into an iris cube based on a grib message key having a specific value. Thank you
You can get at anything that the gribapi understands through the low-level grib interface in iris-grib, which is the iris_grib.GribMessage class.
Typically you would iterate with for msg in GribMessage.messages_from_filename(xxx): and then access keys like msg.sections[4]['productDefinitionTemplateNumber'], msg.sections[4]['parameterNumber'] and so on.
You can use this to identify required messages, and then convert to cubes with iris_grib.load_pairs_from_fields().
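As a rough sketch of that workflow (the filename, the chosen key and the value tested against are placeholders, not taken from the question):

import iris_grib
# Import path may vary by version; in recent releases the class lives in iris_grib.message.
from iris_grib.message import GribMessage

filename = "forecast.grib2"   # placeholder path

# Keep only the messages whose chosen key has the value we are after.
wanted = []
for msg in GribMessage.messages_from_filename(filename):
    # sections[4] is the Product Definition Section; key names follow grib_ls / ecCodes.
    if msg.sections[4]['productDefinitionTemplateNumber'] == 0:   # placeholder condition
        wanted.append(msg)

# Convert just those messages into Iris cubes; this yields (cube, message) pairs.
for cube, msg in iris_grib.load_pairs_from_fields(wanted):
    print(cube)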
However, iris-grib only knows how to translate specific encodings into cubes: it is quite strict about exactly what it recognises, and will fail on anything else. So if your data uses any unrecognised templates or data encodings, it will definitely fail to load.
I'm just anticipating that you may have something unusual here, so that might be an issue?
You can possibly check your expected message contents against the translation code at iris_grib:_load_convert.py, starting at the convert() routine.
To get an Iris cube out of something not yet supported, you would either:
(a) extend the translation rules (i.e. a GitHub PR), or
(b) sometimes you can modify the message so that it looks like something that can be recognised.
Failing that, you can
(c) simply build an Iris cube yourself from the data found in your GribMessage; that can be a little simpler than using 'gribapi' directly (possibly not, depending on detail). See the sketch below.
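For option (c), here is a minimal sketch of hand-building a cube; the data array, coordinate values, name and units are invented placeholders for whatever you would actually pull out of your GribMessage's sections:

import numpy as np
from iris.coords import DimCoord
from iris.cube import Cube

# Placeholder values standing in for what you would extract from the message.
data = np.zeros((3, 4))    # the decoded field values, reshaped to the grid
lats = DimCoord(np.array([50.0, 51.0, 52.0]),
                standard_name='latitude', units='degrees')
lons = DimCoord(np.array([-10.0, -9.0, -8.0, -7.0]),
                standard_name='longitude', units='degrees')

cube = Cube(data,
            long_name='my_grib_parameter',   # invented name
            units='K',
            dim_coords_and_dims=[(lats, 0), (lons, 1)])
print(cube)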
If you have a problem like that, you should definitely raise it as an issue on the GitHub project (iris-grib issues) and we will try to help.
P.S. As you have registered a Python 3 interest, you may want to be aware that the newer "ecCodes" replacement for gribapi should shortly be available, making Python 3 support for grib data possible at last.
However, the Python3 version is still in beta and we are presently experiencing some problems with it, now raised with ECMWF, so it is still almost-but-not-quite achievable.

Azure Custom Speech Service "non" response

I've been using the (preview) CRIS speech-to-text service in Azure. For some short wav files, I get a correct text equivalent, but it is followed by "non". Is this a keyword meaning "non-word", or is this a bug? It happens both when I use the base conversational model and when I use a custom language model based on the base conversational model, but it does not happen with the "search and dictation" model.
For example, I send a noisy wav file of someone saying "yes" and I get back "yes non". If the wav file is not noisy this doesn't happen, and if the spoken text is two or more words it doesn't happen. It just seems to happen for noisy one-word files. What does "non" mean?
After talking with the product group, this is apparently a bug in the current build of CRIS and will be fixed shortly. The "non" doesn't mean anything, it just appears when there are bursts of background noise.

How to embed basic weather report for current time for fixed location in web page?

What I need:
I need to output a basic weather report based on the current time and a fixed location (a county in the Republic of Ireland).
Output requirements:
Ideally plain text accompanied by a single graphical icon (e.g. sun behind a cloud).
Option to style output.
No adverts; no logos.
Free of charge.
Numeric Celsius temperature and short textual description.
I appreciate that my expectations are high, so interpret the list more as a "wish list" than as delusional demands.
What I've tried:
http://www.weather-forecast.com - The parameters for the iframe aren't configurable enough. Output is too bloated.
Google Weather API - I've played with PHP solutions to no avail though in any case, apparently the API is dead: http://thenextweb.com/google/2012/08/28/did-google-just-quietly-kill-private-weather-api/
My question:
Can anyone offer suggestions on how to embed a simple daily weather report based on a fixed location with minimal bloat?
Take a look at http://www.zazar.net/developers/jquery/zweatherfeed/
It's pretty configurable, although I'm not sure if there is still too much info for your needs. I've only tried it with US locations; all you need is a zipcode. The examples show using locations from other countries. I'm assuming it's a similar setup to get locations added for Ireland.
