Detecting known words using the Web Speech API - web

I'm in the planning stages of a web app that is intended to help children learn vocabulary. We would like to make the app as interactive as possible. For example, we would show a picture of an apple and ask the child to identify the object. The child would then say "apple" and we would determine whether they are correct, etc.
The new Web Speech API seems like a promising tool for this project. However, looking through the documentation, I believe it will only produce transcripts from speech (i.e. it cannot match a spoken word to a known word and produce a confidence value – at least not out of the box).
Does anyone have experience with leveraging the Web Speech API in this way (or any other API for that matter)? I'm trying to stick to technologies that can run in the browser, if possible.

Try ispikit.com; it's much better suited to your needs, since it's designed specifically for education tasks and runs in the browser, on the client side. The Web Speech API is not designed for keyword detection, so you will not be able to use it for this out of the box.
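That said, even though the Web Speech API only returns transcripts, a rough known-word check can be sketched on top of it: compare each recognition alternative against the target word and read off the reported confidence. A minimal sketch, assuming Chrome's prefixed `webkitSpeechRecognition` and a hypothetical target word "apple" (the `isMatch` helper is illustrative, not part of the API):

```javascript
// Normalize a transcript and compare it to a known target word.
// Pure helper, so the matching logic is testable outside the browser.
function isMatch(transcript, target) {
  return transcript.trim().toLowerCase() === target.toLowerCase();
}

// Browser-only setup; guarded so the file also loads outside a browser.
const SpeechRecognition =
  globalThis.SpeechRecognition || globalThis.webkitSpeechRecognition;

if (SpeechRecognition) {
  const recognition = new SpeechRecognition();
  recognition.lang = 'en-US';
  recognition.maxAlternatives = 3; // check more than one hypothesis

  recognition.onresult = (event) => {
    const result = event.results[0];
    for (let i = 0; i < result.length; i++) {
      const alt = result[i];
      if (isMatch(alt.transcript, 'apple')) {
        console.log('Correct! confidence:', alt.confidence);
        return;
      }
    }
    console.log('Try again; heard:', result[0].transcript);
  };

  recognition.start();
}
```

The confidence value here is per-alternative and browser-dependent, so treat it as a rough signal rather than a calibrated score.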

Related

The technology behind Google Translate and DeepL websites

I am working on a front-end solution for translating text on Google Cloud platform. I want to know what is the technology behind the front-end of apps like deepl?
Which cloud components do they use to efficiently translate the text as the user types new characters inside the input field?
Translator services like Google Translate, or the one you mentioned, are normally built on top of many different software components, layers and services (quite often also involving ML/NLU, etc.).
If you are a frontend developer looking for an easy way to translate UIs or some user input, have a look at the Google Translate API. Be aware of the free quota limits and pricing.
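As a sketch of how a frontend might call the Cloud Translation v2 REST endpoint as the user types (the API key is a placeholder, and the debounce keeps you from firing one request per keystroke, which also helps with quota):

```javascript
// Build the request for the public Cloud Translation v2 REST endpoint.
// Pure helper, so the request shape is testable without network access.
function buildTranslateRequest(apiKey, text, targetLang) {
  return {
    url: `https://translation.googleapis.com/language/translate/v2?key=${apiKey}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ q: text, target: targetLang }),
    },
  };
}

// Debounce so the API is only hit after the user pauses typing.
function debounce(fn, delayMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Wire it to an input field's handler (placeholder key and language).
const translate = debounce(async (text) => {
  const { url, options } = buildTranslateRequest('YOUR_API_KEY', text, 'de');
  const res = await fetch(url, options);
  const data = await res.json();
  console.log(data.data.translations[0].translatedText);
}, 300);
```

Note that calling the API with a bare key from the browser exposes the key; in practice you would proxy the request through your own backend.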

How should I build this app over communication apps?

These days I find myself having to implement, for one of my college courses, a system that acts as a giant wrapper over many communication apps like Gmail, Facebook Messenger, and maybe even WhatsApp. To put it simply, you would have one big web interface where you can authorize Gmail and Messenger and use them all at once when required. I am thinking of going with a REST API to manage the user's services, authorized via OAuth2, with Node.js and Express.js on the backend and React.js on the frontend. I found some sweet libraries on npm that should take care of interacting with the APIs involved (https://www.npmjs.com/package/node-gmail-api, for instance), but I am also doubtful about this approach; for example, I have no idea how to keep the user notified about incoming mails or messages. I am in dire need of some expertise since, I forgot to mention, I am quite the newbie in this field. To sum it up, my question is: how would you implement such an infrastructure? Is my approach viable, or am I bound to hit some really hard-to-overcome obstacles?
As a college exercise it would be a really fun experiment, so it's definitely worth the time you want to put into it. However, once you want to add more features, the complexity will go up pretty fast.
Here are a couple of ideas you can think of:
It's pretty clear that your system can't do more things than the capabilities exposed by the APIs of communication apps (e.g. you can't have notifications in gmail if the API doesn't have this capability).
For that reason, you should carefully study the APIs and what functionalities they expose. They have public docs that you can check out (Gmail API, Facebook Messenger API).
Some of the apps you want to communicate with may not have an official API (e.g. WhatsApp) - those kinds of details you definitely want to know from the start.
Based on the analysis of those APIs, you should lay out a list of requirements for your system, which can be extracted from all the APIs, for example: message notifications, file transfers, user profiles, etc.
In this way, you know exactly what capabilities your system should have, and you don't end up implementing a feature that is available only in 1 API out of 4.
Also, it would be a bit challenging to design your system from a user perspective, because the apps have different usage patterns - chat apps, where messages are coming in real-time, vs email, which is not real-time communication. That's just a detail anyway, the gist of your project is to play with those APIs.
Also, it may be worth checking out the Gateway Aggregation pattern, which is related to this project: you may want the user to send a message to multiple apps using a single request to your service.
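The gateway-aggregation idea can be sketched as one function that fans a single user message out to several per-service senders; the stub senders below are hypothetical stand-ins for real Gmail/Messenger API clients:

```javascript
// Fan one message out to several services and collect per-service results.
// Promise.allSettled keeps one failing service from sinking the whole request.
async function sendToAll(senders, message) {
  const entries = Object.entries(senders);
  const results = await Promise.allSettled(
    entries.map(([, send]) => send(message))
  );
  return entries.map(([name], i) => ({
    service: name,
    ok: results[i].status === 'fulfilled',
    detail: results[i].status === 'fulfilled'
      ? results[i].value
      : String(results[i].reason),
  }));
}

// Example wiring with stub senders in place of real API clients:
const senders = {
  gmail: async (msg) => `gmail sent: ${msg}`,
  messenger: async () => { throw new Error('not authorized'); },
};

sendToAll(senders, 'hello').then((report) => console.log(report));
```

In a real gateway, each sender would wrap one app's API client, and the per-service `ok`/`detail` report would go back to the user so partial failures are visible.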

Using Cortana for dictation of documents

I'm currently doing research about Cortana as I'm interested in doing some development of custom skills for it. Currently I'm using Cortana to invoke Windows Speech Recognition where I can then use WSR as a means to dictate text into Word. I'm experimenting with this as a possibility to be used for recording and generating a transcript in real time for meetings.
Now this is quite a hassle as I've found and I'm curious to know if there is something that I can do to integrate a bot within Cortana for the same purpose. I've looked up and done some reading about Azure Bot Framework, Cognitive Services, LUIS, etc.
Is it possible to develop such a solution using the above-mentioned services?
Thank you in advance!
Yes, it is possible.
You can feed the streams to the Speech to Text API, then chunk the audio according to the returned Offset and Duration of each phrase, then send those chunks to the Speaker Recognition API to identify each speaker by name. That gives you a name for each chunk to pair with its transcribed phrase, from which you can assemble a dialogue.
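The chunking step can be sketched as follows: given the Offset and Duration the speech service reports for each recognized phrase, slice the matching samples out of the audio buffer so each chunk can be sent on for speaker identification. This assumes the offsets are in 100-nanosecond ticks (as the Azure service reports them) and a 16 kHz mono PCM stream; adjust for your actual format:

```javascript
// Convert a phrase's Offset/Duration (100-ns ticks) into a sample range
// and slice that chunk out of a mono PCM buffer.
const TICKS_PER_SECOND = 10_000_000; // 100-ns ticks, as the service reports

function sliceChunk(samples, sampleRate, offsetTicks, durationTicks) {
  const start = Math.floor((offsetTicks / TICKS_PER_SECOND) * sampleRate);
  const length = Math.floor((durationTicks / TICKS_PER_SECOND) * sampleRate);
  return samples.slice(start, start + length);
}

// Example: a phrase starting at 1 s and lasting 0.5 s in a 16 kHz recording.
const audio = new Float32Array(16000 * 3); // 3 s of silence as a stand-in
const chunk = sliceChunk(audio, 16000, 10_000_000, 5_000_000);
console.log(chunk.length); // 8000 samples = 0.5 s at 16 kHz
```

Each resulting chunk is then what you would submit to the speaker-identification call, keeping the phrase text alongside it.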
Since you're considering it mainly for meetings: the solution you've described was announced a while ago as a feature of Microsoft Teams, and it is going to be publicly available in the near future. You can also watch a demo of it that was presented at Build 2018.

Use Web Speech API implementation in Google Chrome to circumvent rate limit of standard API?

To use Google's Speech API directly, you are now required to obtain an API key. To get that key you must subscribe to the chromium-dev@chromium.org newsgroup and follow a few steps, after which Google will give you a developer key that is "not for distribution." The key is rate-limited to 50 requests/day.
For example, node-google-speech-api outlines the need for having this key for a node application to access Google's Speech API directly (without the use of a browser):
https://github.com/psirenny/node-google-speech-api
There are also PHP libraries and Java libraries for accessing Google's Speech API, also requiring this key.
I would like to write a desktop application that uses Google's speech recognition technology, but the 50 requests/day limit is unacceptable for wide distribution, and even for a single desktop deployment of my envisioned software. I could see an individual desktop user making up to 500 requests/day if the voice recognition is broken up somehow; most of these would probably be long-polling/continuous sessions, so it might only be 2 or 3 requests/day, but lasting hours at a time. Multiply that by a few hundred users and I'd easily exceed 50 requests/day.
I was trying to think of a way to access Google's superior speech recognition technology on the desktop in my own app (language doesn't matter but node.js would likely be part of the mix so a node.js solution would be preferred) without this limit and that brought me to consider the Web Speech API standard which Google Chrome happens to implement.
As far as I know, there is no hard requests/day limit imposed on Google Chrome's implementation of the Web Speech API, and I could happily write websites that use the Web Speech API all day long with no (or minimal) restrictions compared to using the Google Speech API directly. This got me thinking: what if I distributed the Chrome browser itself (not Chromium, so the bona fide Google Chrome browser), but added an "extension" that allowed JavaScript in a custom HTML5 page to interface with other applications on the client's system (i.e. a Node.js app running alongside this special installation of Chrome)? I would write the speech recognition portion in JavaScript, Web Speech API style, and pipe the output into the other application that I design and install on clients' systems.
Would/could that work?
What are the pitfalls of this approach?
Do you have suggestions of another approach or would you perhaps recommend a commercially-licensed solution that is comparable to the ease of use and extreme natural language accuracy of Google's speech technology?
One possible approach to try is a Chrome App.
It will run in a sandboxed instance of Chrome and is implemented with HTML + JavaScript.
To the user it will look just like a desktop application.
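The bridge described in the question — Web Speech API recognition in the browser, with results piped to a companion Node.js process — could be sketched like this on the page side. The localhost port and `/transcript` endpoint are placeholders for whatever the Node app actually listens on:

```javascript
// Serialize one recognition result for the companion Node.js process.
// Pure helper, so the payload shape is testable outside the browser.
function transcriptPayload(transcript, confidence, isFinal) {
  return JSON.stringify({ transcript, confidence, isFinal, ts: Date.now() });
}

// Browser-only setup; guarded so the file also loads outside a browser.
const SpeechRecognition =
  globalThis.SpeechRecognition || globalThis.webkitSpeechRecognition;

if (SpeechRecognition) {
  const recognition = new SpeechRecognition();
  recognition.continuous = true;     // keep listening across phrases
  recognition.interimResults = true; // stream partial hypotheses too

  recognition.onresult = (event) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      const result = event.results[i];
      // POST each result to the local Node.js app (placeholder port/path).
      fetch('http://localhost:3000/transcript', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: transcriptPayload(
          result[0].transcript, result[0].confidence, result.isFinal),
      });
    }
  };

  recognition.start();
}
```

Whether this survives as a distribution strategy depends on Chrome's terms of service and on the (unspecified) quota behind its recognition backend, so treat it as an experiment rather than a guarantee.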

How to implement a web site like YouTube?

I'm building a language web site for my university's language center, where students log in and watch videos to learn English. It has to work like this:
a person logs in to the system, searches using a search box, and finds the details, lessons and videos relevant to that search. This functionality closely matches the YouTube scenario.
For implementing Twitter-like functionality we can use StatusNet; is there a similar well-known library or platform for YouTube, or a framework like CodeIgniter, that we can use to implement a YouTube-like site easily?
Please suggest some options, open source or commercial.
And what is the best video format to use in such a web site? FLV, MP4, or MOV?
regards,
Rangana
Your best option is to use a 'cloud' based video processing service. Most have a sample project / library for many different languages and frameworks. Here is a list of a few I've tried and liked:
http://zencoder.com/
http://transloadit.com/
http://pandastream.com/
The typical steps involve uploading the video files to a large 'cloud' static asset host (such as S3) through the browser. If you are inexperienced, it is best to select a processor that provides an uploader (it will handle putting the files in the right spot). Of the three, Transloadit and Panda both have custom uploaders.
Usually the service will allow you to either pass the encoding settings (what formats and qualities to output) as parameters or configure them in your account. To support all current HTML5 browsers you just need H.264 (.mp4) and OGG (.ogv). However, the new trend in the video world is WebM (.webm), so you might want to include it as well.
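On the playback side, those encodings map onto a `<video>` element with fallback sources, where the browser picks the first one it can play. A small helper that generates that markup (the `name.mp4` / `name.webm` / `name.ogv` file layout is an assumption about how your encoder names its outputs):

```javascript
// Generate <video> markup with fallback sources for the three encodings
// discussed above. The browser plays the first source it supports.
function videoMarkup(baseUrl) {
  return [
    '<video controls>',
    `  <source src="${baseUrl}.mp4" type="video/mp4">`,
    `  <source src="${baseUrl}.webm" type="video/webm">`,
    `  <source src="${baseUrl}.ogv" type="video/ogg">`,
    '  Your browser does not support HTML5 video.',
    '</video>',
  ].join('\n');
}

console.log(videoMarkup('/media/lesson-01'));
```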
Next you will receive a unique code from the web service that you must store in persistent storage (database). The web service can be configured to 'callback' (perform an HTTP POST or GET request to your service) once the video is encoded.
Once you receive a callback you can activate your video and start displaying it on your pages. For displaying, if you are inexperienced I'd highly recommend you use one of the following players:
http://sublimevideo.net/
http://longtailvideo.com/
http://videojs.com/
They all do similar things for different prices. My current personal favourite is Sublime Video (it offers cool light box effects and a gorgeous player).
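The store-the-code-then-activate flow described above can be sketched as a tiny callback handler: persist the encoder's job id when you submit the video, then flip the record to active when the service calls back. The record shape and the `jobId`/`outputUrl` field names are illustrative; each encoding service has its own callback payload:

```javascript
// Minimal in-memory store keyed by the encoder's job id.
// In production this would be your database.
const videos = new Map();

// Called when you submit a video for encoding: persist the job id
// so the later callback can be matched to the right record.
function registerJob(jobId, title) {
  videos.set(jobId, { title, status: 'encoding' });
}

// Called by your HTTP handler when the encoding service POSTs its callback.
function handleEncoderCallback(payload) {
  const record = videos.get(payload.jobId);
  if (!record) return false;        // unknown job: ignore or log it
  record.status = payload.success ? 'active' : 'failed';
  record.url = payload.outputUrl;   // where the encoded file ended up
  return true;
}

registerJob('job-42', 'Lesson 1');
handleEncoderCallback({
  jobId: 'job-42',
  success: true,
  outputUrl: 'https://cdn.example.com/lesson1.mp4',
});
console.log(videos.get('job-42').status); // 'active'
```

Only videos whose status is `active` would then be shown in search results and lesson pages.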
Why do you have to re-implement YouTube when you can just use it to host your videos for free? Many online e-learning portals (e.g. Khan Academy) do exactly that.
As far as the best video format to use: go read about H.264/AVC. It's what YouTube currently uses.
I think you will not find an already-built solution ;)
But it's not really that hard. You can use existing frameworks to make your life easier while you build the account management system; the rest shouldn't be really that hard (assuming you don't actually want to re-build the whole of YouTube ;D ).
For playing videos, you can use JW Player. A great piece of software, you should check it out.
