I have been asked to develop a text-to-speech module for our product that should support a variety of text-to-speech engines.
Is there a standard that describes how to interface with a third-party TTS (text-to-speech) or ASR (automatic speech recognition) service?
Most ASRs use the Media Resource Control Protocol (MRCP) as the standard for their interface. It can also be used for TTS.
It depends on your purpose and the field in which you would use ASR and TTS.
You can use MRCP to control ASR and TTS media resources if you will use them in IVR (Interactive Voice Response) applications such as call centers; in that case you would interface your MRCP server with a voice gateway (e.g. Cisco) and a VXML server.
A well-known and widely used MRCP implementation is UniMRCP. It is a C implementation of the protocol and a good, stable open-source project.
But in the end, it depends on your purpose, as I said. You may never need MRCP; you can run your TTS engine as a standalone server if it works on its own.
Well-known open-source TTS engines are MaryTTS, written in Java, and Festival, written in C++.
Well-known open-source ASR engines are CMU Sphinx-4, written in Java, and Julius, written in C.
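To illustrate the standalone-server route: MaryTTS, for example, ships with an HTTP interface. The following is only a sketch, assuming a MaryTTS instance on its default port 59125 and its /process endpoint with the usual query parameters (verify against the version you actually run); it fetches synthesized audio with libcurl and saves it as a WAV file.

    /*
     * Sketch: calling a standalone TTS server (assumed: MaryTTS on its default
     * HTTP port 59125, /process endpoint). Parameter names may differ between
     * versions. Build: gcc tts_client.c -lcurl
     */
    #include <stdio.h>
    #include <curl/curl.h>

    static size_t write_to_file(void *ptr, size_t size, size_t nmemb, void *userdata)
    {
        return fwrite(ptr, size, nmemb, (FILE *)userdata);
    }

    int main(void)
    {
        FILE *out = fopen("hello.wav", "wb");
        const char *url =
            "http://localhost:59125/process"
            "?INPUT_TYPE=TEXT&OUTPUT_TYPE=AUDIO&AUDIO=WAVE_FILE"
            "&LOCALE=en_US&INPUT_TEXT=Hello%20world";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl && out) {
            curl_easy_setopt(curl, CURLOPT_URL, url);
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_file);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
            if (curl_easy_perform(curl) != CURLE_OK)
                fprintf(stderr, "TTS request failed\n");
            curl_easy_cleanup(curl);
        }
        if (out)
            fclose(out);
        curl_global_cleanup();
        return 0;
    }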
I am working on an IoT project on Google Cloud. I use Pub/Sub to allow devices to contact each other. I developed the backend system using Node.js, and next I will develop a mobile app that uses the Google library to publish/subscribe.
The problem I face now is this: does Google have a C/C++ library for contacting the Pub/Sub / Google Cloud API? If not, is there an alternative way to keep embedded devices (programmed in C/C++) updated with the mobile application's actions?
Note: I need real-time control between the mobile app and the embedded device.
Thanks
Google Cloud Pub/Sub has an HTTP/JSON-based API (see the API Reference section in the sidebar), so you can roll your own library in this case.
The client APIs that Google currently supports are listed here. If you can run Go or Java on your embedded device (both a lot less common than C/C++ on embedded devices, and usually only supported if you stretch the definition of "embedded"), you can have a fully-supported client library.
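In the roll-your-own case, a thin wrapper over the REST API is usually enough. Here is a minimal sketch assuming the v1 topics:publish method with a base64-encoded "data" field; the project ID, topic name, and OAuth2 access token are placeholders you must supply (e.g. from a service account or the metadata server).

    /*
     * Sketch: publishing one message to Cloud Pub/Sub over its REST API.
     * Assumptions: v1 endpoint (topics:publish), base64-encoded message data,
     * and an OAuth2 bearer token obtained out of band.
     * Build: gcc pubsub_publish.c -lcurl
     */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        const char *url =
            "https://pubsub.googleapis.com/v1/projects/my-project/topics/my-topic:publish";
        /* "aGVsbG8=" is base64 for "hello"; Pub/Sub expects message data base64-encoded. */
        const char *body = "{\"messages\":[{\"data\":\"aGVsbG8=\"}]}";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl) {
            struct curl_slist *headers = NULL;
            headers = curl_slist_append(headers, "Content-Type: application/json");
            /* Placeholder token: fetch a real one from a service account or metadata server. */
            headers = curl_slist_append(headers, "Authorization: Bearer YOUR_ACCESS_TOKEN");

            curl_easy_setopt(curl, CURLOPT_URL, url);
            curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

            if (curl_easy_perform(curl) != CURLE_OK)
                fprintf(stderr, "publish request failed\n");

            curl_slist_free_all(headers);
            curl_easy_cleanup(curl);
        }
        curl_global_cleanup();
        return 0;
    }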
I noticed on the Oracle website (oracle.com/technetwork/articles/javase/javacard3-142122.html) that Java Card 3.0 may have the ability to use HTTPS.
Is there any way to create HTTPS connections to a normal Internet website?
Basically with Java Card Classic you are limited to the APDU interface. This interface has been specified in the Java Card API and the ISO/IEC 7816-4 standard.
It is of course possible to channel any kind of protocol through an APDU interface, but you would have to program it yourself. Furthermore, you would have to do so on the terminal side as well, because the Java Card platform doesn't know anything about TCP/IP, name resolution, etc. As Java Card environments are very limited, it would be tricky to create something that resembles an HTTP client.
There have been demonstrations that implemented a tiny web server on a Java Card. Those obviously also require some kind of proxy on the terminal side.
The Connected Edition - if you can find it anywhere - uses the same idea; it implements a web-server for e.g. authentication. It doesn't provide a client to my knowledge.
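To make the terminal-side proxy concrete, here is a minimal PC/SC sketch (an illustration only, not part of the answer above): it sends a single SELECT-by-AID APDU, with a dummy 5-byte AID, to whatever applet you would tunnel your protocol through. Any tunneled traffic would be wrapped in APDUs exactly like this one.

    /*
     * Sketch: the terminal side of the tunnel. Sends one SELECT-by-AID APDU via
     * PC/SC; the 5-byte AID is a dummy placeholder. On Linux, build against
     * pcsclite (pkg-config libpcsclite); on Windows, link winscard.
     */
    #include <stdio.h>
    #include <winscard.h>

    int main(void)
    {
        SCARDCONTEXT ctx;
        SCARDHANDLE card;
        DWORD proto, readersLen, respLen;
        char readers[1024];
        BYTE resp[258];
        /* CLA=00 INS=A4 (SELECT) P1=04 P2=00 Lc=05, followed by a dummy AID */
        BYTE selectApdu[] = { 0x00, 0xA4, 0x04, 0x00, 0x05,
                              0xA0, 0x00, 0x00, 0x00, 0x01 };

        if (SCardEstablishContext(SCARD_SCOPE_SYSTEM, NULL, NULL, &ctx) != SCARD_S_SUCCESS)
            return 1;

        readersLen = sizeof(readers);
        if (SCardListReaders(ctx, NULL, readers, &readersLen) != SCARD_S_SUCCESS) {
            fprintf(stderr, "no PC/SC reader found\n");
            return 1;
        }

        /* "readers" is a multi-string; use the first reader name. */
        if (SCardConnect(ctx, readers, SCARD_SHARE_SHARED,
                         SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1, &card, &proto) != SCARD_S_SUCCESS)
            return 1;

        respLen = sizeof(resp);
        if (SCardTransmit(card,
                          proto == SCARD_PROTOCOL_T1 ? SCARD_PCI_T1 : SCARD_PCI_T0,
                          selectApdu, sizeof(selectApdu), NULL, resp, &respLen) == SCARD_S_SUCCESS)
            printf("status word: %02X %02X\n", resp[respLen - 2], resp[respLen - 1]);

        SCardDisconnect(card, SCARD_LEAVE_CARD);
        SCardReleaseContext(ctx);
        return 0;
    }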
A1: There are no Java Card Connected devices (the edition that describes such an option) publicly available.
A2: Classic Java Card does not specify or allow any kind of network connections.
I am studying the construction of mobile networks and have begun to study MVAS, but I could not find specific information about which protocols are used in VAS or MVAS.
I understand that the main protocol used for SMS is SMPP.
It would be great if someone could provide a list of the protocols used, or links where I could read more about them.
There is such a list; it is published by 3GPP in specification TS 23.039.
3GPP (earlier ETSI) specified the GSM, UMTS and LTE systems, with standard protocols for most of the interfaces. They did not specify any standard protocol between Short Message Service Centres and external messaging servers though.
Instead, this was left open, and each SMSC developer specified their own protocol. An early and successful SMSC developer was an Irish company called Aldiscon, which was later taken over by Logica. They developed the Short Message Peer-to-Peer protocol (SMPP), and published it as an open standard, which is the reason why it's so widely used today.
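To give a feel for what SMPP looks like on the wire, here is a small sketch encoding the fixed 16-byte header that every SMPP PDU begins with (four big-endian 32-bit fields, per SMPP 3.4: command_length, command_id, command_status, sequence_number); the body octets of a real bind_transmitter are omitted.

    /*
     * Sketch: encoding the fixed 16-byte header that every SMPP PDU begins with.
     * Fields are big-endian 32-bit integers (SMPP 3.4). Body octets (system_id,
     * password, ...) would follow the header and are omitted here.
     */
    #include <stdint.h>
    #include <stdio.h>

    static void put_u32_be(uint8_t *buf, uint32_t v)
    {
        buf[0] = (uint8_t)(v >> 24);
        buf[1] = (uint8_t)(v >> 16);
        buf[2] = (uint8_t)(v >> 8);
        buf[3] = (uint8_t)(v);
    }

    int main(void)
    {
        uint8_t header[16];
        uint32_t command_length  = 16;          /* header only; add body length in a real PDU */
        uint32_t command_id      = 0x00000002;  /* bind_transmitter */
        uint32_t command_status  = 0;           /* unused in request PDUs */
        uint32_t sequence_number = 1;

        put_u32_be(header + 0,  command_length);
        put_u32_be(header + 4,  command_id);
        put_u32_be(header + 8,  command_status);
        put_u32_be(header + 12, sequence_number);

        for (size_t i = 0; i < sizeof(header); i++)
            printf("%02X ", header[i]);
        printf("\n");
        return 0;
    }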
I am looking to write a small program that receives input from an external device and then sends MIDI signals to any MIDI compatible software. What is the best way, from the MIDI perspective, to go about this? Are there any specific libraries I should look into?
Thanks.
PortMidi! http://portmedia.sourceforge.net/
It's easy to use, and examples for Windows are provided.
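As a quick illustration only (the default output device is assumed and error handling is minimal), sending a note with PortMidi can look like this:

    /*
     * Sketch: sending a note-on/note-off pair with PortMidi.
     * The default output device is assumed; use Pm_CountDevices() and
     * Pm_GetDeviceInfo() to pick a specific one.
     * Build: gcc midi_out.c -lportmidi
     */
    #include <stdio.h>
    #include <portmidi.h>

    int main(void)
    {
        PmStream *out = NULL;
        int device = Pm_GetDefaultOutputDeviceID();

        Pm_Initialize();
        if (Pm_OpenOutput(&out, device, NULL, 128, NULL, NULL, 0) != pmNoError) {
            fprintf(stderr, "could not open MIDI output device %d\n", device);
            Pm_Terminate();
            return 1;
        }

        /* Note on: status 0x90 (channel 1), middle C (60), velocity 100 */
        Pm_WriteShort(out, 0, Pm_Message(0x90, 60, 100));
        /* ... a real program would wait here before releasing the note ... */
        /* Note off: status 0x80, same key */
        Pm_WriteShort(out, 0, Pm_Message(0x80, 60, 0));

        Pm_Close(out);
        Pm_Terminate();
        return 0;
    }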
The MIDI protocol is quite simple, and most MIDI APIs offer manipulation of MIDI events and their parameters. What differs is the way MIDI devices are enumerated and opened.
The correct answer depends on your requirements.
What input from what external device will you use? Will it be another MIDI device, a mouse, a keyboard, or something else that allows input-event parsing? Or will you need some low-level hardware access? This may influence the choice of programming language (Java, C++, or something else) and therefore the library choice.
What programming language do you prefer? C++, Java... If you plan to develop in Java, you can do that with the API the JDK provides.
Should the program be cross-platform? If so, and you plan to develop in C++, you should use a cross-platform MIDI library, e.g. http://portmedia.sourceforge.net/ mentioned by darasan, or https://github.com/jdkoftinoff/jdksmidi. Otherwise you could just stick with the native platform API (Windows API, ALSA; not sure about the Mac side). A sketch of the ALSA route follows this answer.
Do you plan to use some specific MIDI device? Maybe there is a library that provides easy access to device functions via MIDI that you would otherwise have to handle yourself (e.g. some predefined SysEx data).
With more details in the question, more (or fewer) libraries can be recommended.
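For the native-API route mentioned above, here is a minimal ALSA rawmidi sketch; the device name "hw:1,0" is a placeholder (list yours with amidi -l). It writes the same three raw bytes that a library such as PortMidi would build for you.

    /*
     * Sketch: sending a raw note-on message through ALSA's rawmidi interface.
     * The device name "hw:1,0" is a placeholder (list devices with `amidi -l`).
     * Build: gcc alsa_midi_out.c -lasound
     */
    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_rawmidi_t *out = NULL;
        /* Note on, channel 1: status 0x90, key 60 (middle C), velocity 100 */
        unsigned char note_on[3] = { 0x90, 60, 100 };

        if (snd_rawmidi_open(NULL, &out, "hw:1,0", 0) < 0) {
            fprintf(stderr, "could not open MIDI output hw:1,0\n");
            return 1;
        }

        snd_rawmidi_write(out, note_on, sizeof(note_on));
        snd_rawmidi_drain(out);   /* make sure the bytes are flushed to the device */
        snd_rawmidi_close(out);
        return 0;
    }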
I'm developing a mobile application using J2ME. I need a speech recognition function so that the application can process and act upon commands given by the user. What I want to know is:
Is this technically possible (I'm a novice at J2ME programming)?
If it is possible, where can I find a J2ME library for speech recognition?
Thanks in advance,
Nuwan
This is technically possible, but in reality most devices that run J2ME aren't powerful enough to do it in pure Java code. You need to look for devices which support JSR 113 - Java Speech API 2.0.
Look at JSR 113 - Java Speech API 2.0.
There is a Java Speech API implementation (JSR-113), which is supposed to do speech recognition.
But, unfortunately, I don't know whether any device supports it :)
If you want to implement speech recognition yourself, there are many limitations in J2ME, such as slow performance and the inability to access audio data while recording.
An in-between approach may be to do very simple ASR on the client (e.g. yes, no, digits, etc.) and send anything beyond that to a server. The limits on what the client can do may change in the future if you upgrade your phone.