Web Bluetooth API not seeing any new characteristic values

I successfully wrote a basic JS application that reads the Stereo/Left/Right setting of a Sony Bluetooth speaker. I did this by going to chrome://bluetooth-internals/, finding and connecting to the device, and then clicking Read on each characteristic while changing the stereo setting on the speaker. Eventually I found the one whose value changes when I change the setting.
From this I got the service and characteristic UUIDs and wrote a basic prototype JS application that reads the data live once notifications have started. I built it from code in various samples on this page: https://googlechrome.github.io/samples/web-bluetooth/index.html
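Roughly, the prototype looks like this (the UUIDs below are placeholders standing in for the ones I found in bluetooth-internals):

```javascript
// Rough shape of the speaker prototype, based on the Web Bluetooth samples.
// The UUIDs are placeholders for the service/characteristic found in
// chrome://bluetooth-internals/.
const SERVICE_UUID = '0000ffe0-0000-1000-8000-00805f9b34fb';        // placeholder
const CHARACTERISTIC_UUID = '0000ffe1-0000-1000-8000-00805f9b34fb'; // placeholder

async function watchStereoSetting() {
  const device = await navigator.bluetooth.requestDevice({
    acceptAllDevices: true,
    optionalServices: [SERVICE_UUID]
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService(SERVICE_UUID);
  const characteristic = await service.getCharacteristic(CHARACTERISTIC_UUID);

  // Fires every time the speaker pushes a new value.
  characteristic.addEventListener('characteristicvaluechanged', event => {
    const value = event.target.value; // DataView
    console.log('Stereo/Left/Right value:', value.getUint8(0));
  });

  await characteristic.startNotifications();
}
```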
My second experiment is with a Goodmans fitness watch, something very close to this (the exact model does not seem to be online).
It provides information like heart rate, number of steps completed, etc.
I successfully paired it with the recommended app (Orunning) and can confirm that I receive data there.
My next step was to repeat the same process I used with the speaker. However, the values do not seem to be updating in chrome://bluetooth-internals/.
I connect to the device successfully and check all the characteristics that have read values, then try reading them again, but no updates occur even though the data on the watch has changed.
My questions are:
Is there a better way to figure out the service you are looking for? (A discovery sketch follows these questions.) I have been reading this to get a better understanding: https://learn.adafruit.com/introduction-to-bluetooth-low-energy/further-information
Am I missing something about how the data is read? I even tried the characteristics with the notify property, but literally no values have changed.
I even disconnected everything, refreshed the browser and reconnected, and when I click Read on all the characteristics, they still show the same values.
I have searched online to try to understand this process, so if there is documentation you can point me to that covers this specifically, I would appreciate it.
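For reference, the Web Bluetooth API can also enumerate everything a device exposes from script, which may make it easier to spot the characteristic that changes than clicking Read one by one in bluetooth-internals. A minimal sketch (note that a service must appear in optionalServices or filters when calling requestDevice before getPrimaryServices will return it):

```javascript
// Rough discovery sketch: dump every service and characteristic the device
// exposes and read each readable one, so that changing a setting on the
// device and re-running makes the changing characteristic easier to spot.
async function dumpGatt(device) {
  const server = await device.gatt.connect();
  const services = await server.getPrimaryServices();
  for (const service of services) {
    const characteristics = await service.getCharacteristics();
    for (const c of characteristics) {
      let value = null;
      if (c.properties.read) {
        try {
          value = await c.readValue(); // DataView
        } catch (e) {
          // Some characteristics require pairing/encryption and will throw.
        }
      }
      console.log(
        service.uuid, c.uuid,
        'notify:', c.properties.notify,
        'value:', value ? Array.from(new Uint8Array(value.buffer)) : '(not readable)'
      );
    }
  }
}
```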

Related

NFC background task

I'm struggling to find a working solution and I need help.
I am working on a small IoT project where I want to abuse NFC tags.
I've succeeded in reading/writing while the app is open, but I want to read tags while the app is closed.
More or less, I just want to send a small UDP message when the appropriate NFC tag is read, which turns out to be a bit more difficult to do in a background task.
The main headache is that I can't find a task trigger that runs on NFC activity. I've tried SmartCardTrigger and ProximitySensorTrigger from the following sources:
https://msdn.microsoft.com/en-us/windows/uwp/devices-sensors/host-card-emulation
https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/ProximitySensor
The ProximitySensorTrigger seems to fire almost at random, and if anything fires less when I push the NFC tag against the phone. Maybe I'm doing something wrong.
The SmartCardTrigger doesn't fire at all. I guess the EmulatorNearFieldEntry trigger type is what I want, but for some reason it's unsupported (?).
Anyhow, I am using a Lumia 920 running Windows 10 Mobile. To my knowledge it does not support smart cards, but I hoped it could use the same trigger for NFC tags.
Reading the responses on a similar question, Akash Chowdary suggested that it may be possible to write a custom trigger. If you have any tips that might point me in the right direction, please share them. I'm capable of doing the research, but it's a big sea, and it would really help to know where to start ^^
I'm quite the noob when it comes to background tasks, and I am very confused as to why, after registering a SmartCardTrigger task, I have no tasks running.
If I register, for example, a TimeZoneChange trigger or a ProximitySensor trigger, the task shows up as it should. Maybe it's because my Lumia doesn't support the SmartCardTrigger? I would have expected it to throw an error in that case, but what do I know.
Tl;dr: I want to read NFC tags in a background task. How do I do that on a Lumia 920 in a basic UWP project?

One 2 many audio streaming, via NodeJS or whatever

For some time now I've been trying to do something that I never thought would be this hard: audio streaming. My objective is simple: a web app through which a certain someone can click a button and live-stream his own voice to other people using the app. It's an online classroom of sorts. Here are the details:
A broadcast/lecture is scheduled for a certain date and time (done)
A user logs in as a teacher/instructor to a simple interface where he can click "start broadcasting" (done)
When the instructor clicks "broadcast", his voice is streamed to the other users. Student-type users can also log in and start listening to THE BROADCAST this teacher started. (and here is the trick!)
The broadcast itself should be automatically stored to a local file in the process, so that students can go back to it anytime.
Of course I have spent many hours googling and Stack-Overflowing this problem, and here is what I have understood so far:
If the starting point is the browser, I must use the getUserMedia API; the result is raw PCM data that I can download, send to the server, or stream to others. (simple; a capture sketch follows this list)
Offering the broadcast to the listeners (students) will be done via HTML5's Audio API. (simple)
WebRTC cannot help me here, because it's a p2p thing; there cannot be a server sitting in the middle of the process, and I NEED TO KEEP A COPY OF THE LECTURE LOCALLY. (Here's a working example)
I can use tools like Binary.js to stream the audio binary data to the students, but this requires a file to already be present on the disk.
I need to convert the PCM data to a format like MP3 or OGG in the process, and not use WAV, because WAV is much more expensive bandwidth-wise.
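For what it's worth, a rough client-side capture sketch: where MediaRecorder is supported, it can hand over already-compressed chunks (typically Opus in WebM) straight from the getUserMedia stream, which sidesteps a manual PCM-to-MP3 step. The WebSocket URL below is a placeholder:

```javascript
// Rough client-side capture sketch (instructor page).
// Assumes MediaRecorder support with Opus/WebM and a WebSocket server
// (placeholder URL below) that accepts the chunks.
async function startBroadcast() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ws = new WebSocket('wss://example.com/broadcast'); // placeholder URL

  const recorder = new MediaRecorder(stream, {
    mimeType: 'audio/webm;codecs=opus' // already compressed, no WAV/PCM upload
  });

  recorder.ondataavailable = event => {
    if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
      ws.send(event.data); // each chunk is a small Blob
    }
  };

  ws.onopen = () => recorder.start(250); // emit a chunk roughly every 250 ms
  return recorder;                       // call recorder.stop() to end
}
```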
I feel like this should be straightforward, but I cannot get it to work; I cannot piece all of this together and offer a stable, good experience for the user.
So again, I would love to know how to do the following:
Break the getUserMedia raw data into packets, convert it to MP3, and stream it to the server, where a script (NodeJS probably) can store it locally and stream it to whoever has tuned in, in real time (a rough server-side sketch follows below).
I am open to whatever tool you recommend; I know that NodeJS will be part of the solution, and I am happy to use it. If the streaming can be done via a third-party tool, I have no problem with that.
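For concreteness, a rough Node-side sketch of that requirement, assuming the third-party ws WebSocket package: it appends incoming chunks to a local file and fans them out to connected listener sockets.

```javascript
// Rough Node-side relay sketch, using the third-party `ws` package.
// Instructor chunks arrive on /broadcast; listeners connect on /listen.
// Every chunk is appended to a local file and fanned out to listeners.
const fs = require('fs');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
const recording = fs.createWriteStream('lecture.webm', { flags: 'a' });
const listeners = new Set();

wss.on('connection', (socket, request) => {
  if (request.url === '/listen') {
    listeners.add(socket);
    socket.on('close', () => listeners.delete(socket));
    return;
  }

  // Anything else is treated as the instructor's /broadcast connection.
  socket.on('message', chunk => {
    recording.write(chunk);               // keep the local copy
    for (const listener of listeners) {
      if (listener.readyState === WebSocket.OPEN) listener.send(chunk);
    }
  });
});
```

One caveat: a listener who joins mid-stream will miss the WebM header carried in the first chunk, so a real implementation would need to cache and resend that header (or hand off to a proper streaming/transcoding server); this is only meant to show the store-and-fan-out shape.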
Thank you in advance.
I see your comment about WebRTC, but I think you should investigate it more.
Like what you see here in this (old) post: http://servicelab.org/2013/07/24/streaming-audio-between-browsers-with-webrtc-and-webaudio/
Otherwise, you might have to go for a third party solution, like https://www.crowdcast.io/
(Even if you find a video-only solution, you can use a static picture or so for the video)
Event broadcasting is a good business for many companies. If it were that easy, there wouldn't be only a few well-known competitors in the market.

Azure Media Services - Stream resuming?

I have a problem with Azure Media Services. It's configured to take a stream from an RTMP source and then encode it to multiple resolutions (pretty standard, I think). The problem is that when the source stream ends (for example, the power goes down or the internet disconnects) and I resume streaming, it doesn't come back, so to speak.
The only thing anyone using the player can see is the slate that I've set up.
It happens with every piece of software I could use, that is OBS, FLE, and vMix.
The stream is published the whole time, and I'm using the DefaultProgram, but this happens anyway; it doesn't matter whether the program is the default one or created manually.
If anyone has an idea what's going on, it would be greatly appreciated.
Unfortunately, if the streaming ingest disconnects, the current solution is to restart the channel.

Issues Getting Started with Restcomm

I've been trying to get started with programming with Restcomm for a few weeks now, and I'm having trouble figuring out what I need in order to get set up with all of the services.
So far, I have gotten myself situated with the Restcomm software via the AWS Marketplace; I am able to log into the software, but have not managed to register a phone number yet. Whenever I select a number from the page by clicking "Register Number," a message comes up saying that registration "failed," without any additional information.
Additionally, I have downloaded and unzipped the folders for Mobicents (which I have not yet read much about using) and the Telscale USSD Gateway (which I have read most of the background documentation for, but have yet to get going with because of my inability to use Restcomm).
I have really been trying to make sense of all of these pieces on my own, but I'm at a point of frustration. Could I get some guidance walking me through what I need in order to get started with Restcomm and have telephone functions interact with a simple database?
Thanks!
If you have been able to log into the Restcomm AMI, you should be able to use the pre-packaged demo apps. Here is the documentation explaining how to test them: http://docs.telestax.com/restcomm-testing-default-demos/

UIWebView issues

I am using a UIWebView in my iPhone app. It works fine, with no memory leaks when browsing websites such as Google, news sites, etc. But when I start a video on YouTube, it shows many memory leaks (under the AudioToolbox library). How do I fix them? I imported AVFoundation.h and AudioToolbox.h and added these frameworks, but I am still getting the same problem.
One more thing: I know Apple checks for the no-connectivity condition, meaning that when there is no internet available or there is a connection problem, the user must get some message about the connection issue. How do you do that? How do I check whether the internet is connected? What kind of message do you show, an alert or something else?
Do I also need to show an activity indicator? How do you show that? Can you please reply with sample code?
Apple's designated way of checking connectivity is "try it first". If you get connection problems, you can diagnose them with their Reachability suite, available here. But always try first, because Reachability can give false negatives in some situations.
As far as UIWebView's memory problems go, I don't think you're in a position to do anything about them. Hope they get fixed in iOS 5, I guess. If you want to use AVMediaPlayer for rich media, like Safari and UIWebView do, use it directly rather than going through WebKit.
