Setting a fallback tier (enhanced vs base) for different languages using Deepgram speech-to-text

I would like to use Deepgram's enhanced tier in all cases where it is available, but when the enhanced tier is not supported (as is the case for several languages), I would like to fall back to the base tier.
The way I'm currently handling it is:
1. Make the API call specifying tier=enhanced and language={some language code}.
2. If response.status_code != 200, try the call again with tier=base and the same language code.
3. If response.status_code != 200 still, give up.
Is there a cleaner way to do this? For example, is there a supported syntax for setting a fallback model, e.g. tier=[enhanced, base], so that if enhanced fails, it automatically tries the base tier?
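For reference, here is a minimal Python sketch of the retry approach described above (assuming the requests library and Deepgram's prerecorded /v1/listen endpoint; the function name and audio handling are illustrative):

import requests

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"

def transcribe_with_fallback(audio_bytes, api_key, language,
                             tiers=("enhanced", "base")):
    # Try each tier in order; return the first successful response.
    headers = {
        "Authorization": f"Token {api_key}",
        "Content-Type": "audio/wav",
    }
    response = None
    for tier in tiers:
        response = requests.post(
            DEEPGRAM_URL,
            params={"tier": tier, "language": language},
            headers=headers,
            data=audio_bytes,
        )
        if response.status_code == 200:
            return response.json()
    # Every tier failed: surface the last HTTP error to the caller.
    response.raise_for_status()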

Related

Reading a grib2 message into an Iris cube

I am currently exploring the idea of using Iris in a project to read forecast grib2 files with Python.
My aim is to load/convert a grib message into an Iris cube based on a grib message key having a specific value.
I have experimented with iris-grib, which uses gribapi. Using iris-grib I have not been able to find the key in the grib2 file, although the key is visible with 'grib_ls -w...' via the CLI.
gribapi does the job, but I am not sure how to interface it with Iris (which is what, I assume, iris-grib is for).
I was wondering if anyone knew of a way to get a message into an Iris cube based on a grib message key having a specific value. Thank you.
You can get at anything that the gribapi understands through the low-level grib interface in iris-grib, which is the iris_grib.GribMessage class.
Typically you would loop with for msg in GribMessage.messages_from_filename(xxx): and then access keys such as msg.sections[4]['productDefinitionTemplateNumber'] or msg.sections[4]['parameterNumber'], and so on.
You can use this to identify required messages, and then convert to cubes with iris_grib.load_pairs_from_fields().
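As a rough sketch of that flow (the file name and the section-4 key/value used for filtering here are just examples, not from the question):

import iris_grib
from iris_grib import GribMessage

def wanted(msg):
    # Keep only messages whose section-4 key has the required value.
    return msg.sections[4]['parameterNumber'] == 2

messages = (msg
            for msg in GribMessage.messages_from_filename('forecast.grib2')
            if wanted(msg))

# Convert the selected messages into (cube, message) pairs.
for cube, msg in iris_grib.load_pairs_from_fields(messages):
    print(cube.summary(shorten=True))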
However, iris-grib only knows how to translate specific encodings into cubes: it is quite strict about exactly what it recognises, and will fail on anything else. So if your data uses any unrecognised templates or data encodings, it will definitely fail to load.
I'm just anticipating that you may have something unusual here, so that might be an issue.
You can check your expected message contents against the translation code in iris_grib/_load_convert.py, starting at the convert() routine.
To get an Iris cube out of something not yet supported, you would either:
(a) extend the translation rules (i.e. a GitHub PR), or
(b) sometimes you can modify the message so that it looks like something that can be recognised.
Failing that, you can
(c) simply build an Iris cube yourself from the data found in your GribMessage. That can be a little simpler than using gribapi directly (or possibly not, depending on the details).
If you have a problem like that, you should definitely raise it as an issue on the GitHub project (iris-grib issues) and we will try to help.
P.S. As you have registered an interest in Python 3, you may want to be aware that the newer "ecCodes" replacement for gribapi should shortly be available, making Python 3 support for grib data possible at last.
However, the Python 3 version is still in beta and we are presently experiencing some problems with it (now raised with ECMWF), so it is still almost-but-not-quite achievable.

Profile parameter login/password_downwards_compatibility meaning?

I am currently evaluating the use of the login/password_downwards_compatibility parameter, but I cannot fully understand it.
In what cases is it used, and what risks are associated with it?
What exactly don't you understand in the official documentation for this parameter?
Values from 0 to 5 indicate the strictness of SAP password hash generation and evaluation during logon, from the most strict (0) to the most legacy (5).
You do not need to change the default value (which is 1) of the parameter unless:
your system is the central storage system of a CUA whose landscape consists of old systems, or
your system is very old itself, i.e. SAP NetWeaver ≤ 7.00 (SAP_BASIS 700).
You must not touch this parameter at all unless you are a BASIS professional.
Period.

Relationship between Parameter set context and model mode?

Origen has modes for the top level DUT and IP. However, the mode API doesn't allow the flexibility to define attributes at will. There are pre-defined attributes, some of which (e.g. typ_voltage) look specific to a particular company or device.
In contrast, the Parameters module does allow flexible parameter/attribute definitions to be created within a 'context'. What, conceptually, is the difference between a chip 'mode' and a parameter 'context'? They both require the user to set them.
add_mode :mymode do |m|
  m.typ_voltage = 1.0.V
  # I believe I am limited to what I can define here
end

define_params :mycontext do |params|
  params.i.can.put.whatever.i.want = 'bdedkje'
end
They both contain methods with_modes and with_params that look similar in function. Why not make the mode attributes work exactly like the more flexible params API?
Thanks.
Being able to arbitrarily add named attributes to a mode seems like a good idea to me, but you are right that it is not supported today.
No particular reason for that other than nobody has seen a need for it until now, but there would be no problems accepting a PR to add it.
Ideally, when implementing that, it would be good to try to do it via a module which can then be included into other classes to provide the same functionality, e.g. to give pins, bits, etc. the same ability.

Azure Recommendations API's Parameter

I would like to build a recommendation model using the Recommendations API on Azure MS Cognitive Services. I can't understand the three API parameters below for "Create/Trigger a build". What do these parameters mean?
https://westus.dev.cognitive.microsoft.com/docs/services/Recommendations.V4.0/operations/56f30d77eda5650db055a3d0
EnableModelingInsights: allows you to compute metrics on the recommendation model. Valid values: True/False.
AllowColdItemPlacement: indicates if the recommendation should also push cold items via feature similarity. Valid values: True/False.
ReasoningFeatureList: comma-separated list of feature names to be used for reasoning sentences (e.g. recommendation explanations). Valid values: feature names, up to 512 characters.
Thank you!
That page is missing references to content covered in other locations. See this page for a more complete guide:
https://azure.microsoft.com/en-us/documentation/articles/machine-learning-recommendation-api-documentation/
It describes Cold Items in the Rank Build section of the document as follows:
Features can enhance the recommendation model, but to do so requires the use of meaningful features. For this purpose a new build was introduced: a rank build. This build will rank the usefulness of features. A meaningful feature is a feature with a rank score of 2 and up. After understanding which of the features are meaningful, trigger a recommendation build with the list (or sublist) of meaningful features. It is possible to use these features for the enhancement of both warm items and cold items. In order to use them for warm items, the UseFeatureInModel build parameter should be set. In order to use features for cold items, the AllowColdItemPlacement build parameter should be enabled. Note: it is not possible to enable AllowColdItemPlacement without enabling UseFeatureInModel.
It also describes the ReasoningFeatureList in the Recommendation Reasoning section as follows:
Recommendation reasoning is another aspect of feature usage. Indeed, the Azure Machine Learning Recommendations engine can use features to provide recommendation explanations (a.k.a. reasoning), leading to more confidence in the recommended item from the recommendation consumer. To enable reasoning, the AllowFeatureCorrelation and ReasoningFeatureList parameters should be set up prior to requesting a recommendation build.
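Putting those quoted rules together, a build-trigger request enables the parameters jointly. A rough Python sketch (the endpoint region, model id, key and feature names are placeholders, and the exact body schema should be checked against the API reference linked above):

import requests

BASE = "https://westus.api.cognitive.microsoft.com/recommendations/v4.0"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}

body = {
    "description": "recommendation build with features",
    "buildType": "recommendation",
    "buildParameters": {
        "recommendation": {
            "useFeaturesInModel": True,       # prerequisite for cold items
            "allowColdItemPlacement": True,   # push cold items via feature similarity
            "allowFeatureCorrelation": True,  # prerequisite for reasoning
            "reasoningFeatureList": "brand,category",
            "enableModelingInsights": True,   # compute offline metrics
        }
    },
}

response = requests.post(BASE + "/models/<model-id>/builds",
                         headers=HEADERS, json=body)
print(response.status_code, response.json())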

Standardized Error Classification & Handling

I need to standardize how I classify and handle errors/exceptions 'gracefully'.
I currently use a process in which I report errors to a function, passing an error-number, severity-code, location-info and extra-info-string. This function returns boolean true if the error is fatal and the app should die, false otherwise. As part of its process, apart from giving visual feedback to the user, the function also logs to file any errors above some severity level.
Error-number indexes an array of strings describing the type of error, e.g. 'File access', 'User input', 'Thread creation', 'Network access', etc. Severity-code is a bitwise OR of 0, 1, 2 or 4 (0 = informative, 1 = user_retry, 2 = cannot_complete, 4 = cannot_continue). Location-info is module & function, and extra-info is parameter and local-variable values.
I want to make this into a standard way of error handling that I can put in a library and re-use in all my apps. I mainly use C/C++ on Linux, but would want to use the resulting library with other languages/platforms as well.
One idea is to extend the error-type array to indicate some default behavior for a given severity level, but should this then become the action taken, giving the user no options?
Or: should such an extension be a sub-array of options that the user needs to pick from? The problem with this is that the options would of necessity be generalized, programming-related options that may very well completely baffle an end user.
Or: should each app that uses the error-lib routine pass along its own array of either errors or default behaviors? But this would defeat the purpose of the library...
Or: should the severity levels be handled in each app?
Or: what do you suggest? How do you handle errors? How can I improve this?
How you handle the errors really depends on the application. A web application has a different error-catching mechanism than a desktop application, and both of those differ drastically from an asynchronous messaging system.
That being said, a common practice in error handling is to handle errors at the lowest possible level where they can be dealt with. This usually means the application layer or the GUI.
I like the severity levels. Perhaps you could have a pluggable error-collection library with different error output providers and a severity-level provider.
Output providers could include things like a LoggingProvider and an IgnoreErrorsProvider.
Severity providers would probably be something implemented by each project, since severity levels are usually determined by the type of project in which the error occurs. (For example, network connection issues are more severe for a banking application than for a contact management system.)
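To make the pluggable idea concrete, here is a minimal sketch (in Python for brevity; the provider names are illustrative, and a C++ version would use the same shape with an abstract provider interface):

from dataclasses import dataclass
from enum import IntFlag

class Severity(IntFlag):
    # The bit flags from the question.
    INFORMATIVE = 0
    USER_RETRY = 1
    CANNOT_COMPLETE = 2
    CANNOT_CONTINUE = 4

@dataclass
class ErrorReport:
    number: int        # indexes the error-type string table
    severity: Severity
    location: str      # module & function
    extra: str         # parameter and local-variable values

class LoggingProvider:
    # Output provider: appends reports at or above a threshold to a file.
    def __init__(self, path, threshold=Severity.CANNOT_COMPLETE):
        self.path = path
        self.threshold = threshold

    def handle(self, report):
        if report.severity >= self.threshold:
            with open(self.path, "a") as f:
                f.write(f"{report.location}: #{report.number} {report.extra}\n")

class ErrorCollector:
    # Fans each report out to the registered providers, then returns
    # True if the app should die (the boolean from the question).
    def __init__(self, providers):
        self.providers = providers

    def report(self, report):
        for provider in self.providers:
            provider.handle(report)
        return bool(report.severity & Severity.CANNOT_CONTINUE)

Each app then supplies its own providers and its own severity policy, which keeps the library generic while the project-specific decisions live outside it.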
