How to divide the camera stream into multiple streams? - node.js

Hello there, I hope you are doing fine. I have recently developed a surveillance system using a WebRTC multi-connection setup. Now I have to apply face detection and face recognition to the stream at one end (where the camera is placed, i.e. the camera stream). For this I have to divide the one stream into multiple streams, so that one stream is shown to the user and the other is sent to OpenCV, where I can apply the face recognition operation. Can anyone guide me on how I can achieve this?
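In the browser you can duplicate the capture with `MediaStream.clone()` (render one copy to the user, draw the other to a canvas and post frames to the recognition backend). On the server side the same idea is a fan-out: every incoming frame is pushed into one queue per consumer, so a slow consumer (the OpenCV worker) never blocks the display. A minimal Python sketch of that pattern, with the `FrameTee` name and queue sizes invented for illustration (the same pattern applies in Node.js):

```python
import queue
import threading

class FrameTee:
    """Duplicates a single incoming frame stream to multiple consumers.

    Each consumer (e.g. the user-facing display and an OpenCV worker)
    gets its own queue, so a slow consumer never blocks the others.
    """

    def __init__(self):
        self._queues = []
        self._lock = threading.Lock()

    def subscribe(self, maxsize=30):
        q = queue.Queue(maxsize=maxsize)
        with self._lock:
            self._queues.append(q)
        return q

    def push(self, frame):
        with self._lock:
            for q in self._queues:
                try:
                    q.put_nowait(frame)
                except queue.Full:
                    q.get_nowait()          # drop the oldest frame
                    q.put_nowait(frame)     # rather than blocking the source

tee = FrameTee()
display_q = tee.subscribe()   # stream shown to the user
opencv_q = tee.subscribe()    # stream fed to face recognition
tee.push(b"frame-0")
```

Dropping the oldest frame on overflow keeps the recognition side near real time instead of letting it fall behind the live view.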

Related

Multi Client video tcp stream intel realsense

The Intel RealSense camera provides two video streams at once: an RGB stream as well as a depth stream. I now need to make those two streams separately accessible locally, over a server, for multiple clients. I found ways to do that for RGB only and a single client, but could not find a guide for a multi-stream, multi-client solution in Python.
I did my research but could not quite find what I needed. If anyone could point me to one or more tutorials, or generally the tools I should use, that would be helpful.
How about the ROS wrapper for the RealSense?
The RealSense camera will be wrapped inside a ROS node, which publishes the color and depth frames into separate message queues (topics). Finally, you can subscribe to those topics to get the frames.
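The ROS flow above boils down to one queue per stream. A minimal stand-in for it, using only the standard library (in a real setup `rospy` and the realsense-ros wrapper handle all of this; the topic names below are invented for illustration):

```python
import queue

# One queue per stream, mirroring ROS topics: the camera node publishes
# color and depth frames separately, and each consumer subscribes only
# to the topic it needs.
topics = {"/camera/color": queue.Queue(), "/camera/depth": queue.Queue()}

def publish(topic, frame):
    topics[topic].put(frame)

def subscribe(topic, timeout=1.0):
    return topics[topic].get(timeout=timeout)

publish("/camera/color", "rgb-frame-0")
publish("/camera/depth", "depth-frame-0")
color = subscribe("/camera/color")   # each client drains its own queue
depth = subscribe("/camera/depth")
```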
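If you want to serve the frames to multiple clients without ROS, a threaded TCP server that sends length-prefixed frames works too. A rough standard-library sketch, where `fake_frame` stands in for a pyrealsense2 frame grab and the 3-frame limit exists only so the example terminates:

```python
import socket
import socketserver
import struct
import threading

def fake_frame():
    # Stand-in for a pyrealsense2 frame grab; returns dummy pixel bytes.
    return b"\x7f" * 16

class FrameHandler(socketserver.BaseRequestHandler):
    """Streams length-prefixed frames to each connected client."""
    def handle(self):
        for _ in range(3):               # send a few frames, then hang up
            frame = fake_frame()
            self.request.sendall(struct.pack("!I", len(frame)) + frame)

def recv_exact(sock, count):
    data = b""
    while len(data) < count:
        chunk = sock.recv(count - len(data))
        if not chunk:
            raise ConnectionError("stream closed early")
        data += chunk
    return data

def read_frame(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), FrameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two clients connect at once; each gets its own handler thread.
c1 = socket.create_connection(server.server_address)
c2 = socket.create_connection(server.server_address)
frame1, frame2 = read_frame(c1), read_frame(c2)
c1.close(); c2.close()
server.shutdown()
server.server_close()
```

Running the RGB and depth servers on two different ports gives you the "separately accessible" part; `ThreadingTCPServer` gives you the multi-client part.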

Live streaming from UWP to Linux/Python Server

I have a UWP app that captures a live video stream (webcam), encodes it in H.264, and sends it through a TCP socket (on a local network; I need high performance) to a Linux device.
Is there a way to do this? I need the video not to play it but to extract single frames. I could do that with OpenCV, but it requires a local video file, whereas I'm using a live stream.
I would send photos instead of a video stream if the time needed to capture one were acceptable, but it takes about 250 ms.
Is RTP required? Does UWP (Windows) provide a way to achieve this?
Thank you
P.S.: The UWP app runs on a HoloLens.
You can use WebRTC to transmit live video from the HoloLens easily to any target. That's probably the easiest way to do it without going really low level.
For an introduction just grab this repo and try the sample app which runs perfectly on the HoloLens https://github.com/webrtc-uwp/PeerCC/tree/e95f231e1dc9c248ca2ffa040276b8a1265da145/Client
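Short of full WebRTC, if you do push raw H.264 over a plain TCP socket, the receiving side first has to split the Annex-B byte stream into NAL units before a decoder (e.g. PyAV, or OpenCV's FFmpeg backend) can turn them into frames. A simplified sketch of that splitting step (assumes standard 3- or 4-byte start codes; the function name is mine):

```python
def split_nal_units(data):
    """Split an H.264 Annex-B byte stream into NAL unit payloads.

    Units are delimited by 00 00 01 start codes (optionally preceded
    by an extra zero byte for 4-byte start codes).
    """
    units = []
    start = None
    i, n = 0, len(data)
    while i + 2 < n:
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            if start is not None:
                end = i
                # A 4-byte start code (00 00 00 01) leaves one stray
                # trailing zero on the previous unit; trim it.
                if end > start and data[end - 1] == 0:
                    end -= 1
                units.append(data[start:end])
            i += 3
            start = i
        else:
            i += 1
    if start is not None:
        units.append(data[start:n])
    return units
```

Buffer the socket data, split it like this, and feed the units to the decoder as they complete; that removes the need for a local video file.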

Get Camera resolution in Flex Builder

I am developing an application using Flex Builder, and I am a newbie to it.
My task is to develop an application in Flex Builder that uses two cameras: one for video chat, and one for capturing images and sending them to the friend you are chatting with.
I have developed the application and written a script for it, but I am facing a problem: if I have an array of cameras, how will I know which one is the HD cam and which one is the normal webcam? Based on that, I have to assign the cameras for chatting and capturing.
So my question is: how can I select the HD cam in my application when two cameras are connected simultaneously? Is there any built-in function to get the resolution, or something like that?

Low audio quality with Microsoft Translator

I'm working on a desktop application built with XNA. It has a text-to-speech feature, and I'm using the Microsoft Translator V2 API to do the job. More specifically, I'm using the Speak method (http://msdn.microsoft.com/en-us/library/ff512420.aspx), and I play the audio with the SoundEffect and SoundEffectInstance classes.
The service works fine, but I'm having some issues with the audio. The quality is not very good and the volume is not loud enough.
I need a way to increase the volume programmatically (I've already tried some basic solutions from CodeProject, but the algorithms are not very good and the resulting audio is very low quality), or maybe use another API.
Are there some good algorithms to boost the audio programmatically? Are there other good text-to-speech APIs out there with better audio quality and WAV support?
Thanks in advance.
If you are doing offline processing of audio, you can try using Audacity; it has very good tools for that. If you are processing real-time streaming audio, you can try using SoliCall Pro, which creates a virtual audio device and filters all audio that it captures.
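Since the Speak method can return WAV, a straightforward offline gain boost is just multiplying each PCM sample and clipping at the sample range. A minimal Python sketch (the function name is mine; it assumes 16-bit PCM, which is what a simple gain loop handles cleanly):

```python
import array
import wave

def boost_wav_volume(in_path, out_path, gain=2.0):
    """Multiply every 16-bit PCM sample by `gain`, clipping at the
    int16 range so loud peaks distort instead of wrapping around."""
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        assert params.sampwidth == 2, "sketch handles 16-bit PCM only"
        samples = array.array("h", src.readframes(params.nframes))
    for i, s in enumerate(samples):
        samples[i] = max(-32768, min(32767, int(s * gain)))
    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(samples.tobytes())
```

Note that a flat gain amplifies the service's noise along with the speech; if the source quality is the real problem, a better API is the cleaner fix.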

Playing multiple audio streams simultaneously from one audio file

I have written an application that receives media files from a central server and plays those files according to a playlist. All works well.
A client has contacted us and wants to use our application to play some audio files as presentations in a kiosk-style application. So far, so good: our application can handle this with no problems.
He has requested as a potential feature that we would have a number of headphone sockets at the front of the kiosk. Each headphone socket would play the same audio presentation in a different language.
I have come up with the idea of encoding a single audio file with the presentation in multiple languages, and each language in a different channel. We would then require a sound card that could decode each channel and output it on a different headphone socket.
Thing is, while I think the theory is sound, I have absolutely no idea whether this is feasible or what would be required to pull it off.
Any ideas?!
As a side-note: the application uses Media Player as the underlying component to handle the playback of audio and video. I'd appreciate any help as to the software we could use to generate the multi-channel audio stream and the hardware (USB sound card would be fine) that we could use to decode the stream.
Thanks!
You need to use multiple files, not channels; it's going to be way easier that way.
Instead of Media Player, use DirectShow (on .NET you have DirectShow.NET). In DirectShow you have the notion of multiple files on the same graph.
You will be able to control which audio device plays which file, and your Play, Pause and Stop commands will be performed on all files without your needing to worry about syncing.
There are many samples on how to build a media-player-like application with DirectShow; extending them to use multiple files should be really easy.
For hardware, take a look at this (a USB card with 8 output channels).
I think with Shay's hardware you've got a complete solution:
Encode a 7.1 file with a different mono voice track on each channel.
Use the 8-channel output device in 7.1 mode, with a different headset in each port, and you've got it. Or, if you only have 6 languages, a 5.1 file would work. Many PCs have 5.1 outputs built in; you'd only need 3 splitters to break out the left and right channels from each jack.
You can do the encoding with Windows Media Encoder or another pro audio tool.
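The encoding step can even be scripted: a multi-channel WAV is just the mono tracks interleaved sample by sample. A rough Python sketch (the function name is mine; pass 6 tracks for a 5.1-style layout or 8 for 7.1, and note that which physical jack a given channel maps to is driver- and hardware-dependent):

```python
import array
import wave

def interleave_mono_tracks(mono_tracks, out_path, framerate=44100):
    """Write one multi-channel WAV where channel i carries mono_tracks[i].

    Each track is a sequence of 16-bit samples (e.g. one language's
    voice recording); tracks are truncated to the shortest one.
    """
    n = min(len(t) for t in mono_tracks)
    interleaved = array.array("h")
    for frame in range(n):
        for track in mono_tracks:       # one sample per channel per frame
            interleaved.append(track[frame])
    with wave.open(out_path, "wb") as out:
        out.setnchannels(len(mono_tracks))
        out.setsampwidth(2)             # 16-bit PCM
        out.setframerate(framerate)
        out.writeframes(interleaved.tobytes())
```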
