Libfreenect2 point cloud frame capture - Linux

I've been looking through the libfreenect2 repo to find out whether it is possible to capture just one point cloud frame from my Kinect v2 on Ubuntu 16.04 LTS, but I cannot find anything relevant.
How would that be possible?

libfreenect and libfreenect2 are mostly just drivers for Kinect devices. Post-processing is best applied in a middleware layer such as the Point Cloud Library (pointclouds.org) or AForge.NET; it depends on the goals of your application.
If you really want to get your hands dirty, check out this C++ point cloud example. It's written for the Kinect v1, but it might give you some ideas. If you have trouble getting the hardware to work, please also visit the repositories linked above for documentation and bug reports.
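That said, grabbing a single frame from libfreenect2 directly and converting it to 3D points is fairly compact. Below is a minimal C++ sketch, assuming a working libfreenect2 build; error handling and the actual PLY/PCD export are left out:

    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>
    #include <libfreenect2/registration.h>
    #include <cstdio>

    int main()
    {
      // Open the first Kinect v2 found on the bus.
      libfreenect2::Freenect2 freenect2;
      libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();
      if (!dev) { std::fprintf(stderr, "no device found\n"); return 1; }

      // Listen for one synchronized color+depth frame pair.
      libfreenect2::SyncMultiFrameListener listener(
          libfreenect2::Frame::Color | libfreenect2::Frame::Depth);
      dev->setColorFrameListener(&listener);
      dev->setIrAndDepthFrameListener(&listener);
      dev->start();

      libfreenect2::FrameMap frames;
      listener.waitForNewFrame(frames);  // blocks until one frame arrives
      libfreenect2::Frame *rgb = frames[libfreenect2::Frame::Color];
      libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];

      // Undistort the depth image and register it against the color camera.
      libfreenect2::Registration registration(dev->getIrCameraParams(),
                                              dev->getColorCameraParams());
      libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);
      registration.apply(rgb, depth, &undistorted, &registered);

      // Read out one XYZRGB point per depth pixel; write to PLY/PCD here.
      for (int r = 0; r < 424; ++r) {
        for (int c = 0; c < 512; ++c) {
          float x, y, z, color;
          registration.getPointXYZRGB(&undistorted, &registered,
                                      r, c, x, y, z, color);
        }
      }

      listener.release(frames);
      dev->stop();
      dev->close();
      return 0;
    }

From there, a middleware library such as PCL can save the points (for example with pcl::io::savePCDFile) if you don't want to write the exporter yourself.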

Related

How to get Average Visualization Time of an asset in Azure Media Services v3

I'm currently working on a module that analyzes statistics for videos from Azure Media Services. I want to ask how I can get data such as average visualization time, number of visualizations, and other metrics like that. I'm pretty sure there must be a very easy way to get this data, but I cannot find it. I found that Application Insights could be useful, and that I may have to track this information manually. I'm working on .NET 6. An example of code would be awesome. Thanks in advance!
PS: https://github.com/Azure-Samples/media-services-javascript-azure-media-player-application-insights-plugin/blob/master/options.md
I have found that Application Insights could be useful for my problem. Some classes such as TelemetryClient (from the Microsoft.ApplicationInsights package) seem useful, but I can't find clear information about them.
No, there is no concept of client-side analytics or viewer analytics in Azure Media Services. You have to track and log things on your own on the client side. Application Insights is a good solution for this, and there are some older samples out there showing how to do that with a player application.
Take a look at this sample - https://learn.microsoft.com/en-us/samples/azure-samples/media-services-javascript-azure-media-player-application-insights-plugin/media-services-javascript-azure-media-player-application-insights-plugin/
Just a warning: it is very old and probably very out of date. I would not use much of the code from that sample, as it uses SDKs from four years ago. Use it only as high-level guidance for what the architecture might look like.
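The client-side tracking itself is small, though. Here is a minimal TypeScript sketch using the Application Insights JavaScript SDK (@microsoft/applicationinsights-web) in the player page; the event name, metric name, and asset id are invented for illustration:

    import { ApplicationInsights } from "@microsoft/applicationinsights-web";

    // Wire up App Insights once per page; the connection string comes from
    // your Application Insights resource in the Azure portal.
    const appInsights = new ApplicationInsights({
      config: { connectionString: "<your-connection-string>" },
    });
    appInsights.loadAppInsights();

    const video = document.querySelector("video")!;
    let watchSeconds = 0;
    let lastTick: number | null = null;

    video.addEventListener("play", () => {
      lastTick = Date.now();
      appInsights.trackEvent({ name: "videoPlay" }, { assetId: "asset-123" });
    });

    video.addEventListener("pause", () => {
      if (lastTick !== null) watchSeconds += (Date.now() - lastTick) / 1000;
      lastTick = null;
      // One metric sample per viewing session; averaging these samples
      // across sessions gives the average visualization time per asset.
      appInsights.trackMetric({ name: "watchTimeSeconds", average: watchSeconds },
                              { assetId: "asset-123" });
    });

Aggregating watchTimeSeconds and counting videoPlay events per asset in an App Insights query then yields the average visualization time and view counts you are after.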
Another solution would be to look at a third-party service like Mux.com/Data, which can plug into any player framework for client analytics.

#mrtk Accessing HoloLens Spatial Map in MRTK v2

I have been having trouble understanding how to use the Spatial Awareness user guide for the latest MRTK release to get access to the spatial mapping meshes for use in a multi-user app. I cannot find a way to serialize the meshes so they can be sent to a remote device, as was possible in the older toolkit. I have tried adding the meshes to a list and using the old SimpleMeshSerializer, but that did not seem to work at all. Any help would be greatly appreciated in understanding the current capabilities of MRTK and how the same functionality can be replicated.
I have been facing problems with MRTK v2 spatial awareness too. Have you tried using surface types? I haven't been able to make it work yet.
Do you mean that you want to transfer the mesh between multiple devices, but when you use SimpleMeshSerializer to serialize the mesh in MRTK v2 and transfer it, the remote device becomes unresponsive?
Based on previous cases of transferring meshes between multiple clients, we suggest the following steps to troubleshoot:
Is the data received correctly when transferring it using WebRTC?
Is the data still intact after deserialization? You can try to save it locally and verify it.
After receiving and deserializing the data, how did you handle the deserialized meshes? Can you provide a small sample that reproduces the problem?

About a project using Node.js with OpenCV

I am planning a CCTV system project using Node.js, OpenCV, and WebGL.
Would you please take a look at my plan and point out flaws or give me advice?
My plan is: the entire system consists of three types of host, CCTV-server-watchmen. The number of each host might be (more than 10)-1-3. Each CCTV camera takes video and sends it to the server. The server identifies people in the video and analyzes who each person is and where he or she is (using OpenCV). Finally, watchmen can see the entire status of the field they manage (a map drawn with WebGL helps with this). I will use Node.js as the network layer.
I have a few questions about my plan.
Is it efficient to use Node.js as a video data transmitter?
The basic concept of Node.js is a single thread, so maybe large data like video does not fit it well. On the other hand, the number of CCTV cameras and watchmen is limited and fixed (it is a system for a closed intranet).
Is there any method that could replace Node.js?
I will not replace OpenCV and WebGL, but Node.js could change. At the beginning of planning, I was looking for other means of networking between a C/C++ program and a web browser. Honestly, I failed at a school project last year; one of the problems I couldn't solve was how to send/receive data between a C program installed on a Raspberry Pi and a web browser. I chose Node.js for this project, but I have also heard of other means such as Qt, a database, or CGI. Is there a better way?
Thank you for reading it.
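As a point of reference for the first question: the relay role Node.js would play here is almost entirely I/O-bound, which is what its event loop is good at. A minimal TypeScript sketch of such a fan-out relay using the ws package (the port and URL paths are invented):

    import { WebSocketServer, WebSocket } from "ws";

    const wss = new WebSocketServer({ port: 8080 });
    const watchmen = new Set<WebSocket>();

    wss.on("connection", (ws, req) => {
      if (req.url === "/watch") {
        // A watchman's browser: remember it so frames can be pushed to it.
        watchmen.add(ws);
        ws.on("close", () => watchmen.delete(ws));
      } else {
        // A CCTV host: forward each already-encoded frame to all watchmen.
        // Node never touches the pixels; it only shuffles buffers.
        ws.on("message", (frame: Buffer) => {
          for (const w of watchmen) {
            if (w.readyState === WebSocket.OPEN) w.send(frame);
          }
        });
      }
    });

With a handful of fixed hosts on a closed intranet, this kind of buffer-forwarding relay is typically well within what a single Node.js process can handle.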

How to communicate between Node.js and Kinect?

I want to do some stuff using Kinect, and my research took me to two libraries: libfreenect and OpenNI. The first one apparently just extracts video data, am I right? The second one was acquired by Apple and dissolved, but some of the binaries and documentation were recovered by structure.io, and this library does give the complete Kinect data. My idea is to use a socket.io server to process the Kinect input data and send it to the browser, then use JavaScript to process it on the client. My question is: has anyone here achieved such a thing? And if so, could you give me some guidance on how to achieve this, or where to start?
For Kinect for Windows V2 =>
https://www.npmjs.com/package/kinect2 [I've used it]
For kinect v1 =>
https://github.com/nguyer/node-kinect
http://metaduck.com/09-kinect-browser-node.html
http://blog.whichlight.com/post/53241512333/streaming-kinect-data-into-the-browser-with-nodejs
http://depthjs.media.mit.edu/
This library achieves something similar to what you were looking to do. It uses Kinect2 (mentioned in another response) to get the Kinect data, but also lets you stream it to another browser.
https://github.com/kinectron/kinectron
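For the socket.io route specifically, the server side is only a few lines. A minimal TypeScript sketch, assuming the kinect2 package mentioned above (Windows with the Kinect for Windows SDK 2.0 installed); the port and event name are arbitrary:

    import { Server } from "socket.io";
    // kinect2 ships without TypeScript typings, hence the plain require.
    const Kinect2 = require("kinect2");

    const io = new Server(8000, { cors: { origin: "*" } });
    const kinect = new Kinect2();

    if (kinect.open()) {
      // Forward every skeleton frame to all connected browsers.
      kinect.on("bodyFrame", (bodyFrame: unknown) => {
        io.emit("bodyFrame", bodyFrame);
      });
      kinect.openBodyReader();
    }

On the client, io("http://localhost:8000").on("bodyFrame", draw) then receives the frames for processing or rendering in JavaScript.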

Is it correct to use VoiceXML as a tool in this scenario?

I have a telephony scenario in which the following happens:
The customer calls a Voice Gateway
A TCL script runs and a code is collected from the customer
Authentication is done through a RADIUS server
The customer hears the correct voice menu
The problem is that the RADIUS server must connect to a SQL database and check the credentials. I have currently designed the solution using Cisco Secure ACS and managed stored procedures on MS SQL Server.
My question is: is VoiceXML a better tool for this job, and given that some extensions and wrappers of VoiceXML exist in .NET, does it fit this simple scenario?
Frankly speaking, I am a little confused by the technology and am also looking for a good tutorial on its features.
Thanks
In a strict sense, only step 4 is implemented by VoiceXML. Other aspects are handled by the platform or external code. VoiceXML is the standards mechanism for implementing step 4, but if all you are going to do is limited audio output and simple input, it may be overkill depending on the solutions available to you.
The following is just an example of one way to solve your problem, and it is fairly fictitious given that I don't know anything about your environment or constraints.
On most VoiceXML platforms, your VoiceXML application will be executed upon receipt of a call. If this is a servlet/ASP-based solution, you can perform steps 2 and 3, then generate and return the VoiceXML to play the menu, gather the input, and move to the next step. If this is a static VoiceXML 2.1 solution, you can use a data element to make an HTTP request to a system that can perform these actions. That system will need to return XML that the JavaScript/ECMAScript in the VoiceXML application can parse to provide the correct audio output and input processing.
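Here is a minimal sketch of that static VoiceXML 2.1 variant; the check.jsp endpoint, the XML attribute it returns, and the menu targets are all invented for illustration:

    <?xml version="1.0" encoding="UTF-8"?>
    <vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
      <form id="auth">
        <field name="code" type="digits">
          <prompt>Please enter your access code.</prompt>
          <filled>
            <!-- Hand the code to an HTTP endpoint that wraps the RADIUS/SQL
                 check; the response arrives as a read-only DOM in "result". -->
            <data name="result" src="check.jsp" namelist="code"/>
            <if cond="result.documentElement.getAttribute('ok') == 'true'">
              <goto next="#mainmenu"/>
            <else/>
              <prompt>That code was not recognized.</prompt>
              <clear namelist="code"/>
            </if>
          </filled>
        </field>
      </form>
      <menu id="mainmenu">
        <prompt>Press 1 for billing. Press 2 for support.</prompt>
        <choice dtmf="1" next="billing.vxml"/>
        <choice dtmf="2" next="support.vxml"/>
      </menu>
    </vxml>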
Since you are asking about VoiceXML, I'm assuming your challenge is the telephony aspect of the problem. Unless you have a system already available, choosing and activating an on-premises or hosted solution is far more complicated than the call flow code involved. Depending on your requirements, there are solutions ranging from a single-line analog modem that supports audio output and DTMF input, up to massively scaled on-premises and hosted solutions that handle tens of thousands of concurrent calls and implement VoiceXML as well as a wide range of other call flow technologies.
VoiceXML would work fine in this scenario. There is an open-source project called VoiceModel that uses ASP.NET MVC to generate the VoiceXML and therefore integrates nicely with the .NET stack. There are a lot of examples in the project, with discussions on how to use the examples in this blog. The examples use Voxeo Prophecy as the VoiceXML platform, which has a SIP interface that will connect with a Voice Gateway. You can download two ports for free to try it out.
