Amazon Echo: Visualize temperature on external website - sensors

The Amazon Echo has a temperature sensor inside. Now I'd like to visualize the temperature profile on my website. Furthermore, I have two other temperature sensors connected to the Echo, which I would also like to visualize.
Is there any easy way to do this?
I searched for Alexa skills for this but have not found anything yet.
I could try to create my own skill, but a small example would be helpful, and I have not found one yet.
Isn't there a smart home service that provides such functionality?
If not, I could also run my own server that polls the temperature regularly or receives it from the Echo.
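If I end up running my own server, the receiving end could be something like this minimal Node.js/Express sketch (the /temperature route and the payload shape are my own invention, not anything the Echo provides out of the box):

```javascript
// Minimal sketch: collect temperature readings over HTTP and expose
// them for a chart on the website. Assumes Express is installed
// (npm install express); route and payload shape are placeholders.
const express = require('express');
const app = express();
app.use(express.json());

const readings = []; // { sensor, celsius, at }

// Whatever polls the Echo (or the extra sensors) POSTs readings here.
app.post('/temperature', (req, res) => {
  const { sensor, celsius } = req.body;
  readings.push({ sensor, celsius, at: new Date().toISOString() });
  res.sendStatus(204);
});

// The website fetches this to draw the temperature profile.
app.get('/temperature', (req, res) => res.json(readings));

app.listen(3000, () => console.log('listening on :3000'));
```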

Related

Web Bluetooth API not seeing any new characteristic values

I successfully wrote a basic JS application that reads the Stereo/Left/Right value of a Sony Bluetooth speaker. I did this by going to chrome://bluetooth-internals/, finding and connecting to the device, and then clicking Read on each characteristic while changing the stereo setting on the speaker. Eventually I found the characteristic that changes when I change the setting.
From this I was able to get the service and characteristic IDs and write a basic prototype JS application that reads this data live once notifications have started. I built it using code from various samples on the following page: https://googlechrome.github.io/samples/web-bluetooth/index.html
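For reference, the notification pattern from those samples looks roughly like this (a minimal sketch; both UUIDs are placeholders for the ones found via chrome://bluetooth-internals/):

```javascript
// Minimal sketch: subscribe to notifications on one characteristic
// with the Web Bluetooth API. The two UUIDs below are placeholders.
const SERVICE_UUID = '0000ffe0-0000-1000-8000-00805f9b34fb';        // placeholder
const CHARACTERISTIC_UUID = '0000ffe1-0000-1000-8000-00805f9b34fb'; // placeholder

async function listen() {
  // Must be triggered by a user gesture (e.g. a button click).
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [SERVICE_UUID] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(SERVICE_UUID);
  const characteristic = await service.getCharacteristic(CHARACTERISTIC_UUID);

  await characteristic.startNotifications();
  characteristic.addEventListener('characteristicvaluechanged', (event) => {
    const value = event.target.value; // a DataView over the raw bytes
    console.log('new value:', value.getUint8(0));
  });
}
```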
My second experiment is with a Goodmans fitness watch, something very close to this (exact model does not seem to be online).
This provides information like heart rate, the number of steps completed, etc.
I successfully paired it with the recommended app, Orunning, and can confirm that I receive data through it.
My next step was to repeat the same process I used with the speaker. However, the values do not seem to be updating in chrome://bluetooth-internals/.
I successfully connect to the device and check all the characteristics that have read values, then try to read them again, but no updates occur even though the data on the watch has changed.
My questions are:
Is there a better way to figure out which service you are looking for? I have been reading this to get a better understanding: https://learn.adafruit.com/introduction-to-bluetooth-low-energy/further-information
Am I missing something about how it reads data? I even tried the characteristics with notify permissions, but literally no values have changed.
I even disconnected everything, refreshed the browser, and reconnected, and when I click Read on all the characteristics, the values are still the same.
I have searched online to try to understand this process, so if there is documentation that explains this specifically, I would appreciate a pointer to it.

How to tell Alexa to punctuate user responses properly. Please see the use case

I am sorry if this looks like a stupid question!
My skill is recording user responses in the database. That part is working fine, but my concern is that Alexa is not punctuating the responses at all. Here is an example:
User: The loading speed of the website is very slow (a few milliseconds of pause) can't we make it faster (spoken with a vocal tone meant to let Alexa understand that this part is a question)
Recorded: the loading speed of the website is very slow can't we make it faster
Expected: the loading speed of the website is very slow. can't we make it faster?
Is there any way to accomplish this? It is very important that the responses stored in the database are correctly punctuated, as this skill will be used for project management purposes.
I am afraid it's not possible with Alexa. However, you can use the Amazon Transcribe service: build a mobile/web app and send the recorded audio to the service. According to their docs:
Easy-to-Read Transcriptions
Amazon Transcribe automatically adds punctuation and formatting so that the output closely matches the quality of manual transcription at a fraction of the time and expense.
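A minimal Node.js sketch of that approach, assuming the recorded audio has already been uploaded to S3 (the bucket/key, region, and job name are placeholders):

```javascript
// Minimal sketch: start a transcription job and poll for the result.
// Assumes aws-sdk v2 (npm install aws-sdk) with credentials configured;
// the S3 URI below is a placeholder.
const AWS = require('aws-sdk');
const transcribe = new AWS.TranscribeService({ region: 'us-east-1' });

async function transcribeResponse(s3Uri) {
  const jobName = 'user-response-' + Date.now();
  await transcribe.startTranscriptionJob({
    TranscriptionJobName: jobName,
    LanguageCode: 'en-US',
    MediaFormat: 'mp3',
    Media: { MediaFileUri: s3Uri },
  }).promise();

  // Poll until the job finishes; the transcript (with punctuation)
  // is then available at TranscriptFileUri.
  for (;;) {
    const { TranscriptionJob: job } = await transcribe
      .getTranscriptionJob({ TranscriptionJobName: jobName })
      .promise();
    if (job.TranscriptionJobStatus === 'COMPLETED') {
      return job.Transcript.TranscriptFileUri;
    }
    if (job.TranscriptionJobStatus === 'FAILED') {
      throw new Error(job.FailureReason);
    }
    await new Promise((r) => setTimeout(r, 5000));
  }
}

transcribeResponse('s3://my-bucket/recordings/response.mp3')
  .then((uri) => console.log('transcript at', uri))
  .catch(console.error);
```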

I want to use Alexa API to read grocery list (for display in a web app.) Must I write a "skill"?

First my apologies if this is a dumb question, or has been asked before. I've been searching for a couple of days and can't find an answer. This usually means I'm asking a dumb question. ;-)
I have an rPi and a touchscreen in my kitchen; it displays helpful things like appointments for the next 7 days, the weather for the next 7 days, news headlines, etc. It's a web app I wrote in Angular 7, and it queries a NodeJS v8 backend which I also wrote. This was a hobby project to learn Angular and Node.
The Node app performs all interactions with the outside world, using Google's Calendar API to get appointments, the Yahoo Weather API, and newsapi.org. All of these integrations followed the same pattern -- obtain some API key or token, and use it when invoking an API method to get the requested data in a JSON wad.
Now, I would also like to get the grocery list from my Echo. After reviewing Amazon's Alexa API documentation, it doesn't appear that I can do this the same way as the previous three integrations. Must I really write a "skill" for this, though I never intend to invoke it on the Echo?
And if so, could you point to a decent sample? None of the samples provided by Amazon utilize the Lists API.
Yes, you must create a skill in order to receive an access_token. After that, the Out of Session Interaction feature might be useful.
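Once the skill exists and the user has granted list permissions, reading the lists from your Node backend is an ordinary HTTPS call with the token. A minimal sketch (how you obtain and store the token depends on your skill's permissions flow; the host/path shown are the Alexa List Management API as I understand it):

```javascript
// Minimal sketch: fetch the household lists' metadata with an access
// token. The token is assumed to come from the skill's permissions
// flow and is read from an environment variable here.
const https = require('https');

function getHouseholdLists(accessToken) {
  return new Promise((resolve, reject) => {
    https.get(
      {
        host: 'api.amazonalexa.com',
        path: '/v2/householdlists/',
        headers: { Authorization: `Bearer ${accessToken}` },
      },
      (res) => {
        let body = '';
        res.on('data', (chunk) => (body += chunk));
        res.on('end', () => resolve(JSON.parse(body)));
      }
    ).on('error', reject);
  });
}

getHouseholdLists(process.env.ALEXA_ACCESS_TOKEN)
  .then((lists) => console.log(lists))
  .catch(console.error);
```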

Google popular times in nodejs

Google provides "Popular Times" data showing the specific times at which a particular business or place is busy or popular. However, the available implementation is in Python.
Does anyone know a way to use the Google popular times API inside a Node.js project?
https://github.com/m-wrzr/populartimes
This link points to Python code. How can I include it in, or get an equivalent API for, a Node project?
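One workaround is to keep the Python library and call it from Node as a child process. A minimal sketch, assuming Python 3 and populartimes are installed, and that populartimes.get_id(api_key, place_id) is the entry point described in the repo's README (the API key and place ID are placeholders):

```javascript
// Minimal sketch: shell out to the Python populartimes library and
// parse its JSON output in Node.
const { execFile } = require('child_process');

const script = `
import json, sys, populartimes
print(json.dumps(populartimes.get_id(sys.argv[1], sys.argv[2])))
`;

function getPopularTimes(apiKey, placeId) {
  return new Promise((resolve, reject) => {
    // With "python3 -c", sys.argv[1] and [2] are the two trailing args.
    execFile('python3', ['-c', script, apiKey, placeId], (err, stdout) => {
      if (err) return reject(err);
      resolve(JSON.parse(stdout));
    });
  });
}

getPopularTimes('YOUR_GOOGLE_API_KEY', 'YOUR_PLACE_ID')
  .then(console.log)
  .catch(console.error);
```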
You could try the foot traffic API service BestTime.app, which also works with Node.JS. Unfortunately, it's paid, but they offer a free test account.
BestTime.app provides foot traffic data similar to Google Popular Times and Foursquare data, but with more functionality. You can also analyze and filter the foot traffic data of multiple places in an area, so you can, for example, show only bars that are busy on Friday evening, or only supermarkets that are quiet on Sunday morning.
You can view the data through their website tools (e.g. on a heatmap), or get the same data through their software API (Software API tutorial).
Integrating the API is really useful if you want to, for example, build consumer-focused apps/websites that tell people which place they should go to and at what time.
In the picture above, the BestTime.app Radar tool shows foot traffic data for popular attractions in New York City. On the left, a foot traffic forecast is shown for the whole day. The map is overlaid with a heatmap indicating the predicted foot traffic intensity for the current hour at each place. Using the filters in the right panel, you can narrow down your search by selecting, for example, only the quiet NYC attractions on Friday afternoon.
Disclosure: I work for BestTime.app

One 2 many audio streaming, via NodeJS or whatever

For some time now I've been trying to do something that I never thought would be this hard: audio streaming. My objective is simple: a web app through which a certain someone can click a button and live-stream his own voice to other people using this app. It's an online classroom of sorts. Here are the details:
A broadcast/lecture is scheduled for a certain date and time (done)
A user logs-in as a teacher/instructor to a simple interface where he can click "start broadcasting" (done)
When the instructor clicks "broadcast", his voice is streamed to the other users. Other student-type users can also log in and start listening to THE BROADCAST this teacher started. (and here is the trick!)
The broadcast itself should automatically be stored to a local file in the process, so that students can go back to it anytime.
Of course I spent so many hours googling and stackoverflow-ing this problem, and here is what I could understand so far:
If the starting point is the browser, I must use the getUserMedia API; the result is raw PCM data that I can download, send to a server, or stream to others. (simple)
Offering the broadcast to the listeners (students) will be done via HTML5's Audio API. (simple)
WebRTC cannot help me here, because it's a p2p thing; there cannot be a server in the middle of the process, and I NEED TO KEEP A COPY OF THE LECTURE LOCALLY. (Here's a working example)
I can use tools like Binary.js to stream the binary audio data to the students, but this requires a file to already be present on the disk.
I need to convert the PCM data to a format like MP3 or OGG in the process, and not use WAV, because WAV is far more expensive bandwidth-wise.
I feel like it should be straightforward, but I cannot get it to work; I cannot piece all of this together and offer a stable and good experience for the user.
So again, I would love to know how to do the following:
Break the getUserMedia raw data into packets, convert it to MP3, and stream it to the server, where a script (Node.js probably) can store it locally and stream it to whoever tuned in, in real time.
I am open to whatever tool you recommend; I know that NodeJS will be present in the solution, and I am happy to use it. If the streaming can be done via a third-party tool, I have no problem with that.
Thank you in advance.
I see your comment about WebRTC, but I think you should investigate it more.
Like what you see in this (old) post: http://servicelab.org/2013/07/24/streaming-audio-between-browsers-with-webrtc-and-webaudio/
Otherwise, you might have to go for a third party solution, like https://www.crowdcast.io/
(Even if you find a video-only solution, you can use a static picture or so for the video)
Event broadcasting is a good business for many companies. If it were that easy, there wouldn't be only a few well-known competitors in the market.
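That said, if audio-only is acceptable and WebM/Opus can stand in for MP3, the capture-store-relay loop the question describes can be sketched with MediaRecorder in the browser and a small WebSocket relay in Node (the ws package is assumed; this is a sketch, not a production setup):

```javascript
// server.js — relay sketch: receives audio chunks from the broadcaster,
// appends them to a local file (the required lecture copy), and
// forwards them to every listener. Assumes ws (npm install ws).
const fs = require('fs');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
const file = fs.createWriteStream('lecture.webm');

wss.on('connection', (socket, req) => {
  const isBroadcaster = req.url === '/broadcast';
  socket.on('message', (chunk) => {
    if (!isBroadcaster) return;
    file.write(chunk); // keep a local copy of the lecture
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(chunk); // fan out to listeners
      }
    }
  });
});

// broadcaster side (browser) — capture mic audio as WebM/Opus chunks
// and push them to the relay.
async function startBroadcast() {
  const socket = new WebSocket('ws://localhost:8080/broadcast');
  await new Promise((r) => (socket.onopen = r));
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, {
    mimeType: 'audio/webm;codecs=opus',
  });
  recorder.ondataavailable = (e) => socket.send(e.data);
  recorder.start(250); // emit a chunk roughly every 250 ms
}
```

Playing the relayed chunks back in the listeners' browsers (e.g. feeding them into an audio element via MediaSource) and making the saved file cleanly seekable are the remaining fiddly parts.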
