I want to do some stuff using the Kinect, and my research took me to two libs, libfreenect and OpenNI. The first one apparently just extracts video data; am I right? The second one was acquired by Apple and dissolved, but some of the binaries and documentation were preserved by structure.io, and that library does give the complete Kinect data. My idea is to use a socket.io server to process the Kinect input data and send it to the browser, then use JavaScript to process it on the client. My question is: has anyone here achieved such a thing? And if so, could you give me some guidance on how to achieve this or where to start, please?
For Kinect for Windows V2 =>
https://www.npmjs.com/package/kinect2 [I've used it]
For Kinect v1 =>
https://github.com/nguyer/node-kinect
http://metaduck.com/09-kinect-browser-node.html
http://blog.whichlight.com/post/53241512333/streaming-kinect-data-into-the-browser-with-nodejs
http://depthjs.media.mit.edu/
This library achieves something similar to what you were looking to do. It uses Kinect2 (mentioned in another response) to get the Kinect data, but also lets you stream it to another browser.
https://github.com/kinectron/kinectron
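To give a concrete idea of the socket.io part of your plan, here is a minimal sketch built around the kinect2 package linked above (it assumes Windows with the Kinect for Windows SDK 2.0 installed; the port, static folder, and event name are just placeholders, so treat it as a starting point rather than a finished example):

```js
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');
const Kinect2 = require('kinect2');

const app = express();
const server = http.createServer(app);
const io = new Server(server);
const kinect = new Kinect2();

// Serve the browser client (an index.html that loads the socket.io client).
app.use(express.static('public'));

if (kinect.open()) {
  // Forward every skeleton/body frame to all connected browsers.
  kinect.on('bodyFrame', (bodyFrame) => {
    io.emit('bodyFrame', bodyFrame);
  });
  kinect.openBodyReader();
}

server.listen(3000, () => console.log('Listening on http://localhost:3000'));
```

On the client you would listen for the same event (socket.on('bodyFrame', ...)) and draw the joints on a canvas. Depth and color frames can be forwarded the same way, but they are much larger, so you will probably want to downsample or compress them before emitting.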
Related
Hello, I am looking for a way to forward my live stream from my server to another server, for example Facebook, via RTMP.
the structure would be something like:
My cam -> my server -> other server rtmp -> viewers
My intention is to capture the transmission and forward it to many RTMP servers, so that the server's resources are consumed rather than the client's. I don't have much knowledge of video transmission; if it is possible to do this via Node.js it would be great, thanks.
I have looked into SFUs and other possible approaches, but I want to have several alternatives so I can find the most suitable one to implement in production.
I never did it myself, so I can't recommend the best way to do it.
After some research, if you want to stay with Node.js, I personally recommend Mediasoup.
It is a powerful SFU developed in C++ which provides really good bindings with Node.js. All the heavy processing is done in C++, and the Node.js API spawns a child process in which the C++ mediasoup worker runs. You only have to care about the Node.js API, nothing else.
With mediasoup it should not be too difficult to get your stream onto the Node.js server.
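To make that more concrete, here is a minimal sketch of the server-side setup based on the mediasoup v3 API (the codec list is only illustrative, and the transport/producer wiring to the browser is left out, so check the official docs for that part):

```js
const mediasoup = require('mediasoup');

async function startMediasoup() {
  // One worker = one C++ subprocess that does all the heavy RTP work.
  const worker = await mediasoup.createWorker({ logLevel: 'warn' });

  worker.on('died', () => {
    console.error('mediasoup worker died, exiting');
    process.exit(1);
  });

  // A router groups the producers/consumers that share these media codecs.
  const router = await worker.createRouter({
    mediaCodecs: [
      { kind: 'audio', mimeType: 'audio/opus', clockRate: 48000, channels: 2 },
      { kind: 'video', mimeType: 'video/VP8', clockRate: 90000 },
    ],
  });

  // Next steps (not shown): router.createWebRtcTransport() to receive the
  // browser's stream, and a plain RTP transport to pipe the media to ffmpeg.
  return { worker, router };
}
```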
After that, for transmitting your stream to an RTMP server, it seems you can call ffmpeg in a child process to transfer it from your Node.js server to the RTMP server.
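As a rough sketch of that child-process approach (the stream.sdp file describing the local RTP stream and the RTMP URL/stream key are placeholders you would replace with your own):

```js
const { spawn } = require('child_process');

// stream.sdp is assumed to describe the RTP stream your SFU is sending to
// localhost; the RTMP URL and stream key below are placeholders.
const ffmpeg = spawn('ffmpeg', [
  '-protocol_whitelist', 'file,udp,rtp',
  '-i', 'stream.sdp',        // local RTP input described by an SDP file
  '-c:v', 'libx264',         // RTMP/FLV generally expects H.264 video...
  '-preset', 'veryfast',
  '-c:a', 'aac',             // ...and AAC audio
  '-f', 'flv',
  'rtmp://live-api.example.com/app/STREAM_KEY',
]);

ffmpeg.stderr.on('data', (chunk) => process.stdout.write(chunk));
ffmpeg.on('close', (code) => console.log(`ffmpeg exited with code ${code}`));
```

To reach several RTMP servers at once you can either spawn one ffmpeg process per destination or look into ffmpeg's tee muxer.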
I found two github projects with this kind of approach.
The first one is a bit outdated, using an old mediasoup version, but maybe you can find something interesting in it. Especially for the client/browser part, it has an HTML file that should be helpful. Be aware that the Mediasoup API may have changed, on both the front end and the back end.
EDIT: The first project does not use the Mediasoup client library; you can look at it here
The second is more recent and really seems to match your need, though maybe you will need some customization. But they don't provide any front-end part.
For mediasoup, you will find a lot of resources across the internet, GitHub, and YouTube for the client/server part.
If you want to look at it, here is the installation guide for the Mediasoup v3 (latest) version. You have to install a specific Python version and set a few environment variables. After that you can install the npm package and happy coding!
It is easier to install on Linux, so if you are on Windows, preferably use WSL2 for testing. I don't know anything about Mac, but I know Docker is possible, so that should be good too.
A much simpler option to stream your webcam to other servers would be to use OBS Studio, but you must have already considered it.
They have a plugin that lets you send your stream to multiple platforms at once, which looks really cool! Here
Hope it can give you some more options !
I am very confused about the calling SDK specs. They are clear about the fact that only one video stream can be rendered at a time, see here...
BUT when I try out the following sample, I get video streams for all members of the group call. When I try the other example (both are from MS), it behaves as written in the specs... So I am totally confused as to why this other example can render more than one video stream in parallel. Can anybody tell me how to understand this? Is it possible or not?
EDIT: I found out that both examples work with multiple video streams. So it is cool that the service provides more than the specs say, but I do not get why the specs describe a limitation that does not seem to exist...
Only one video stream is supported in the ACS Web (JS) calling SDK; multiple video streams can be rendered for incoming calls, but A/V quality is not guaranteed at this stage for more than one video. Support for 4 (2x2) and 9 (3x3) streams is on the roadmap, and we'll publish support as the required network bandwidth is identified and quality assurance testing and verification are completed.
I am very new to WebRTC, and I am slightly confused about it.
I am able to do a one-to-one video/audio call using Node.js, but I am still confused: is it possible to check how long two people have talked?
If yes, please guide me.
If not, what is the best way to monitor call length? (I don't want to record audio or video, just the length.)
Thanks in Advance.
Are you using Node.js as your socket server, or as the actual endpoints? Last I checked, WebRTC didn't have a native Node.js interface, but you could use one of the available npm modules.
It's always possible to track it from the app side: get the time at the start, get the time at the end, and report the difference to your server. The WebRTC API for iOS, Android, and JS also has a getStats API you can call during or after a session to get this information. AppRTC has examples of how to do that.
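For the app-side approach in the browser, a minimal sketch could look like this (pc is assumed to be your existing RTCPeerConnection, and /api/call-duration is a placeholder endpoint on your own backend):

```js
// "pc" is your existing RTCPeerConnection, set up elsewhere by your signaling code.
let callStartedAt = null;

pc.addEventListener('connectionstatechange', async () => {
  if (pc.connectionState === 'connected' && callStartedAt === null) {
    callStartedAt = Date.now();
  }

  if (['disconnected', 'failed', 'closed'].includes(pc.connectionState) && callStartedAt !== null) {
    const durationMs = Date.now() - callStartedAt;
    callStartedAt = null;

    // getStats() also exposes packet/timing counters if you want more detail.
    const stats = await pc.getStats();
    stats.forEach((report) => console.log(report.type, report.timestamp));

    // Report the duration to your own backend (placeholder endpoint).
    fetch('/api/call-duration', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ durationMs }),
    });
  }
});
```

If you can't trust the clients, measure the same thing on your signaling server instead: record when both peers have joined and when either one leaves.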
I am planning a CCTV system project using Node.js, OpenCV, and WebGL.
Would you please take a look at my plan and point out flaws or give me advice?
My plan is: the entire system consists of 3 types of hosts, CCTV-server-watchmen. The number of each host might be (more than 10)-1-3. The CCTV cameras capture video and send it to the server. The server identifies people in the video and analyzes who each person is and where he or she is (using OpenCV). Finally, the watchmen can grasp the entire status of the area they manage (a map drawn with WebGL helps with this). I will use Node.js as the networking layer.
I have a few issues about my plan.
Is it efficient to use Node.js as a video data transmitter?
The basic concept of Node.js is a single thread, so maybe large data like video is not a good fit for it. On the other hand, the number of CCTV cameras and watchmen is limited and fixed (it is a system for a closed intranet).
Is there any method that could replace Node.js?
I will not replace OpenCV and WebGL, but Node.js could be swapped. At the beginning of planning, I was looking for other means of networking between a C/C++ program and a web browser. Honestly, I failed a school project last year; one of the problems I couldn't solve was "How to send/receive data between a C program installed on a Raspberry Pi and a web browser". I chose Node.js for this project, but I have also heard of other options such as Qt, a database, or CGI. Is there a better way?
Thank you for reading it.
I've been looking through the libfreenect2 repo to see if it is possible to capture just one point cloud frame from my Kinect V2 on Ubuntu 16.04 LTS, but I cannot find anything relevant.
How would that be possible?
libfreenect and libfreenect2 are mostly just drivers for Kinect devices. Post-processing is best applied in a middleware layer such as pointclouds.org or AForge.Net; it depends on the goals of your application.
If you really want to get your hands dirty, check out this C++ point cloud example. It's written for the Kinect v1, but it might give you some ideas. If you have trouble getting the hardware to work, please also visit the repositories linked above for documentation and bug reports.